# NumPy benchmarks

Benchmarking NumPy with Airspeed Velocity.

## Usage

Airspeed Velocity manages building and Python virtualenvs by itself, unless told otherwise. To run the benchmarks, you do not need to install a development version of NumPy into your current Python environment.

Before beginning, ensure that _airspeed velocity_ is installed. By default, `asv` ships with support for anaconda and virtualenv:

```
pip install asv
pip install virtualenv
```

After contributing new benchmarks, you should test them locally before submitting a pull request.

To run all benchmarks, navigate to the root NumPy directory at the command line and execute:

```
spin bench
```

This builds NumPy and runs all available benchmarks defined in `benchmarks/`. (Note: this could take a while. Each benchmark is run multiple times to measure the distribution in execution times.)

For **testing** benchmarks locally, it may be better to run them without replications:

```
cd benchmarks/
export REGEXP="bench.*Ufunc"
asv run --dry-run --show-stderr --python=same --quick -b $REGEXP
```

Here the regular expression used to match benchmarks is stored in `$REGEXP`, and `--quick` is used to avoid repetitions.

To run benchmarks from a particular benchmark module, such as `bench_core.py`, simply append the filename without the extension:

```
spin bench -t bench_core
```

To run a benchmark defined in a class, such as `MeshGrid` from `bench_creation.py`:

```
spin bench -t bench_creation.MeshGrid
```

To compare changes in benchmark results against another version/commit/branch, use the `--compare` option (or the equivalent `-c`):

```
spin bench --compare v1.6.2 -t bench_core
spin bench --compare 20d03bcfd -t bench_core
spin bench -c main -t bench_core
```

All of the commands above display the results in plain text in the console, and the results are not saved for comparison with future commits.
For greater control, a graphical view, and to have results saved for future comparison, you can run `asv` commands directly (record results and generate HTML):

```
cd benchmarks
asv run -n -e --python=same
asv publish
asv preview
```

More on how to use `asv` can be found in the [ASV documentation](https://asv.readthedocs.io/). Command-line help is available as usual via `asv --help` and `asv run --help`.

## Benchmarking versions

To benchmark or visualize only releases on different machines locally, generate the tags with their commits before running `asv`:

```
cd benchmarks
# Get commits for tags
# delete tag_commits.txt before re-runs
for gtag in $(git tag --list --sort taggerdate | grep "^v"); do
    git log $gtag --oneline -n1 --decorate=no | awk '{print $1;}' >> tag_commits.txt
done
# Use the last 20
tail --lines=20 tag_commits.txt > 20_vers.txt
asv run HASHFILE:20_vers.txt
# Publish and view
asv publish
asv preview
```

For details on contributing these, see the [benchmark results repository](https://github.com/HaoZeke/asv-numpy).

## Writing benchmarks

See the [ASV documentation](https://asv.readthedocs.io/) for basics on how to write benchmarks. Some things to consider:

* The benchmark suite should be importable with any NumPy version.
* The benchmark parameters etc. should not depend on which NumPy version is installed.
* Try to keep the runtime of the benchmark reasonable.
* Prefer ASV's `time_` methods for benchmarking times rather than cooking up time measurements via `time.clock`, even if it requires some juggling when writing the benchmark.
* Preparing arrays etc. should generally be put in the `setup` method rather than the `time_` methods, to avoid counting preparation time together with the time of the benchmarked operation.
* Be mindful that large arrays created with `np.empty` or `np.zeros` might not be allocated in physical memory until the memory is accessed. If this is the desired behaviour, make sure to comment it in your setup function.
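The considerations above can be sketched as a minimal ASV-style benchmark. The class name, parameter values, and benchmarked operation below are illustrative, not part of the real suite:

```python
import numpy as np


class HypotheticalAddition:
    # asv runs each time_* method once per entry in `params`
    params = [1_000, 100_000]
    param_names = ["size"]

    def setup(self, size):
        # Allocate in setup() so preparation is not counted in the timing.
        # np.ones (rather than np.empty/np.zeros) forces the pages to be
        # faulted in here instead of inside the timed section.
        self.a = np.ones(size)
        self.b = np.ones(size)

    def time_add(self, size):
        self.a + self.b
```

ASV discovers classes like this when the suite is imported, and reports the timing distribution of `time_add` at each parameter value.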
If you are benchmarking an algorithm, it is unlikely that a user will be executing said algorithm on a newly created empty/zero array. One can force page faults to occur in the setup phase either by calling `np.ones` or `arr.fill(value)` after creating the array.

# BLAS and LAPACK

## Default behavior for BLAS and LAPACK selection

When a NumPy build is invoked, BLAS and LAPACK library detection happens automatically. The build system will attempt to locate a suitable library, trying a number of known libraries in a certain order, from most to least performant. A typical order is: MKL, Accelerate, OpenBLAS, FlexiBLAS, BLIS, plain `libblas`/`liblapack`. This may vary per platform or over releases. That order, and which libraries are tried, can be changed through the `blas-order` and `lapack-order` build options, for example:

```
$ python -m pip install . -Csetup-args=-Dblas-order=openblas,mkl,blis -Csetup-args=-Dlapack-order=openblas,mkl,lapack
```

The first suitable library that is found will be used. In case no suitable library is found, the NumPy build will print a warning and then use (slow!) NumPy-internal fallback routines. To disallow use of those slow routines, use the `allow-noblas` build option:

```
$ python -m pip install . -Csetup-args=-Dallow-noblas=false
```

By default the LP64 (32-bit integer) interface to BLAS and LAPACK is used. To build against the ILP64 (64-bit integer) interface instead, use the `use-ilp64` build option:

```
$ python -m pip install . -Csetup-args=-Duse-ilp64=true
```

## Selecting specific BLAS and LAPACK libraries

The `blas` and `lapack` build options are set to `auto` by default, which means trying all known libraries. If you want to use a specific library, you can set these build options to the library name (typically the lower-case name that `pkg-config` expects).
For example, to select plain `libblas` and `liblapack` (this is typically Netlib BLAS/LAPACK on Linux distros, and can be dynamically switched between implementations on conda-forge), use:

```
$ # for a development build
$ spin build -C-Dblas=blas -C-Dlapack=lapack

$ # to build and install a wheel
$ python -m build -Csetup-args=-Dblas=blas -Csetup-args=-Dlapack=lapack
$ pip install dist/numpy*.whl

$ # Or, with pip>=23.1, this works too:
$ python -m pip install . -Csetup-args=-Dblas=blas -Csetup-args=-Dlapack=lapack
```

Other options that should work (as long as they're installed with `pkg-config` support; otherwise they may still be detected, but things are inherently more fragile) include `openblas`, `mkl`, `accelerate`, `atlas` and `blis`.

## Using pkg-config to detect libraries in a nonstandard location

The way BLAS and LAPACK detection works under the hood is that Meson tries to discover the specified libraries first with `pkg-config`, and then with CMake. If all you have is a standalone shared library file (e.g., `armpl_lp64.so` in `/a/random/path/lib/` and a corresponding header file in `/a/random/path/include/`), then what you have to do is craft your own pkg-config file. It should have a matching name (so in this example, `armpl_lp64.pc`) and may be located anywhere. The `PKG_CONFIG_PATH` environment variable should be set to point to the location of the `.pc` file.
The contents of that file should be:

```
libdir=/path/to/library-dir       # e.g., /a/random/path/lib
includedir=/path/to/include-dir   # e.g., /a/random/path/include
version=1.2.3                     # set to actual version
extralib=-lm -lpthread -lgfortran # if needed, the flags to link in dependencies

Name: armpl_lp64
Description: ArmPL - Arm Performance Libraries
Version: ${version}
Libs: -L${libdir} -larmpl_lp64    # linker flags
Libs.private: ${extralib}
Cflags: -I${includedir}
```

To check that this works as expected, you should be able to run:

```
$ pkg-config --libs armpl_lp64
-L/path/to/library-dir -larmpl_lp64
$ pkg-config --cflags armpl_lp64
-I/path/to/include-dir
```

## Full list of BLAS and LAPACK related build options

BLAS and LAPACK are complex dependencies. Some libraries have additional options that are exposed via build options (see `meson.options` in the root of the repo for all of NumPy's build options):

* `blas`: name of the BLAS library to use (default: `auto`)
* `lapack`: name of the LAPACK library to use (default: `auto`)
* `allow-noblas`: whether or not to allow building without external BLAS/LAPACK libraries (default: `true`)
* `blas-order`: order of BLAS libraries to try detecting (default may vary per platform)
* `lapack-order`: order of LAPACK libraries to try detecting
* `use-ilp64`: whether to use the ILP64 interface (default: `false`)
* `blas-symbol-suffix`: the symbol suffix to use for the detected libraries (default: `auto`)
* `mkl-threading`: which MKL threading layer to use, one of `seq`, `iomp`, `gomp`, `tbb` (default: `auto`)

# Compiler selection and customizing a build

## Selecting a specific compiler

Meson supports the standard environment variables `CC`, `CXX` and `FC` to select specific C, C++ and/or Fortran compilers. These environment variables are documented in [the reference tables in the Meson docs](https://mesonbuild.com/Reference-tables.html#compiler-and-linker-flag-environment-variables).
Note that environment variables only get applied from a clean build, because they affect the configure stage (i.e., `meson setup`). An incremental rebuild does not react to changes in environment variables - you have to run `git clean -xdf` and do a full rebuild, or run `meson setup --reconfigure`.

## Adding a custom compiler or linker flag

Meson by design prefers builds being configured through command-line options passed to `meson setup`. It provides many built-in options:

* For enabling a debug build and setting the optimization level, see the next section on "build types".
* Enabling `-Werror` in a portable manner is done via `-Dwerror=true`.
* Enabling warning levels is done via `-Dwarning_level=X`, with `X` one of `{0, 1, 2, 3, everything}`.
* There are many other builtin options, from activating Visual Studio (`-Dvsenv=true`) and building with link time optimization (`-Db_lto`) to changing the default C++ language level (`-Dcpp_std='c++17'`) or linker flags (`-Dcpp_link_args='-Wl,-z,defs'`).

For a comprehensive overview of options, see [Meson's builtin options docs page](https://mesonbuild.com/Builtin-options.html).

Meson also supports the standard environment variables `CFLAGS`, `CXXFLAGS`, `FFLAGS` and `LDFLAGS` to inject extra flags, with the same caveat as in the previous section about those environment variables being picked up only for a clean build and not an incremental build.

## Using different build types with Meson

Meson provides different build types while configuring the project. You can see the available options for build types in [the "core options" section of the Meson documentation](https://mesonbuild.com/Builtin-options.html#core-options).
Assuming that you are building from scratch (run `git clean -xdf` if needed), you can configure the build as follows to use the `debug` build type:

```
spin build -- -Dbuildtype=debug
```

Now you can use the `spin` interface for further building, installing and testing NumPy as usual:

```
spin test -s linalg
```

This will work because after the initial configuration, Meson remembers the config options.

## Controlling build parallelism

By default, `ninja` will launch `2*n_cpu + 2` parallel build jobs, with `n_cpu` the number of physical CPU cores. This is fine in the vast majority of cases, and results in close to optimal build times. In some cases, on machines with a small amount of RAM relative to the number of CPU cores, this leads to a job running out of memory. In case that happens, lower the number of jobs `N` such that you have at least 2 GB RAM per job. For example, to launch 6 jobs:

```
python -m pip install . -Ccompile-args="-j6"
```

or:

```
spin build -j6
```

# Cross compilation

Cross compilation is a complex topic; we only add some hopefully helpful hints here (for now). As of May 2023, cross-compilation based on `crossenv` is known to work, as used (for example) in conda-forge.

Cross-compilation without `crossenv` requires some manual overrides. You instruct these overrides by passing options to `meson setup` via [meson-python](https://meson-python.readthedocs.io/en/latest/how-to-guides/meson-args.html).

All distributions that are known to successfully cross compile NumPy are using `python -m build` (`pypa/build`), but using `pip` for that should be possible as well.
Here are links to the NumPy "build recipes" on those distros:

* [Void Linux](https://github.com/void-linux/void-packages/blob/master/srcpkgs/python3-numpy/template)
* [Nix](https://github.com/nixos/nixpkgs/blob/master/pkgs/development/python-modules/numpy/default.nix)
* [Conda-forge](https://github.com/conda-forge/numpy-feedstock/blob/main/recipe/build.sh)

See also [Meson's documentation on cross compilation](https://mesonbuild.com/Cross-compilation.html) to learn what options you may need to pass to Meson to successfully cross compile.

One possible hiccup is that the build requires running a compiled executable in order to determine the `long double` format for the host platform. This may be an obstacle, since it requires `crossenv` or QEMU to run the host (cross) Python. To avoid this problem, specify the `long double` format in your _cross file_:

```
[properties]
longdouble_format = 'IEEE_DOUBLE_LE'
```

For more details and the current status around cross compilation, see:

* The state of cross compilation in Python: [pypackaging-native key issue page](https://pypackaging-native.github.io/key-issues/cross_compilation/)
* Tracking issue for SciPy cross-compilation needs and issues: [scipy#14812](https://github.com/scipy/scipy/issues/14812)

# Meson and distutils ways of doing things

_Old workflows (numpy.distutils based):_

1. `python runtests.py`
2. `python setup.py build_ext -i` + `export PYTHONPATH=/home/username/path/to/numpy/reporoot` (and then edit pure Python code in NumPy and run it with `python some_script.py`).
3. `python setup.py develop` - this is similar to (2), except the in-place build is made permanently visible in the env.
4. `python setup.py bdist_wheel` + `pip install dist/numpy*.whl` - build a wheel in the current env and install it.
5. `pip install .` - build a wheel in an isolated build env against deps in `pyproject.toml` and install it.
_Note: be careful, this is usually not the correct command for development installs - typically you want to use (4) or_ `pip install . -v --no-build-isolation`.

_New workflows (Meson and meson-python based):_

1. `spin test`
2. `pip install -e . --no-build-isolation` (note: only for working on NumPy itself - for more details, see [IDE support & editable installs](index#meson-editable-installs))
3. the same as (2)
4. `python -m build --no-isolation` + `pip install dist/numpy*.whl` - see [pypa/build](https://pypa-build.readthedocs.io/en/latest/).
5. `pip install .`

# Building from source

**Note:** If you are only trying to install NumPy, we recommend using binaries - see [Installation](https://numpy.org/install) for details on that.

Building NumPy from source requires setting up system-level dependencies (compilers, BLAS/LAPACK libraries, etc.) first, and then invoking a build. The build may be done in order to install NumPy for local usage, to develop NumPy itself, or to build redistributable binary packages. It may also be desirable to customize aspects of how the build is done. This guide covers all of these aspects. In addition, it provides background information on how the NumPy build works, and links to up-to-date guides for generic Python build & packaging documentation that is relevant.

## System-level dependencies

NumPy uses compiled code for speed, which means you need compilers and some other system-level (i.e., non-Python / non-PyPI) dependencies to build it on your system.

**Note:** If you are using Conda, you can skip the steps in this section, with the exception of installing compilers for Windows or the Apple Developer Tools for macOS. All other dependencies will be installed automatically by the `mamba env create -f environment.yml` command.

**Linux**

If you want to use the system Python and `pip`, you will need:

* C and C++ compilers (typically GCC).
* Python header files (typically a package named `python3-dev` or `python3-devel`).
* BLAS and LAPACK libraries.
  [OpenBLAS](https://github.com/OpenMathLib/OpenBLAS/) is the NumPy default; other variants include Apple Accelerate, [MKL](https://software.intel.com/en-us/intel-mkl), [ATLAS](http://math-atlas.sourceforge.net/) and [Netlib](https://www.netlib.org/lapack/index.html) (or "Reference") BLAS and LAPACK.
* `pkg-config` for dependency detection.
* A Fortran compiler, needed only for running the `f2py` tests. The instructions below include a Fortran compiler; you can safely leave it out, however.

**Debian/Ubuntu Linux**

To install NumPy build requirements, you can do:

```
sudo apt install -y gcc g++ gfortran libopenblas-dev liblapack-dev pkg-config python3-pip python3-dev
```

Alternatively, you can do:

```
sudo apt build-dep numpy
```

This command installs whatever is needed to build NumPy, with the advantage that new dependencies or updates to required versions are handled by the package manager.

**Fedora**

To install NumPy build requirements, you can do:

```
sudo dnf install gcc-gfortran python3-devel openblas-devel lapack-devel pkgconfig
```

Alternatively, you can do:

```
sudo dnf builddep numpy
```

This command installs whatever is needed to build NumPy, with the advantage that new dependencies or updates to required versions are handled by the package manager.

**CentOS/RHEL**

To install NumPy build requirements, you can do:

```
sudo yum install gcc-gfortran python3-devel openblas-devel lapack-devel pkgconfig
```

Alternatively, you can do:

```
sudo yum-builddep numpy
```

This command installs whatever is needed to build NumPy, with the advantage that new dependencies or updates to required versions are handled by the package manager.

**Arch**

To install NumPy build requirements, you can do:

```
sudo pacman -S gcc-fortran openblas pkgconf
```

**macOS**

Install Apple Developer Tools. An easy way to do this is to [open a terminal window](https://blog.teamtreehouse.com/introduction-to-the-mac-os-x-command-line), enter the command:

```
xcode-select --install
```

and follow the prompts.
Apple Developer Tools includes Git, the Clang C/C++ compilers, and other development utilities that may be required.

Do _not_ use the macOS system Python. Instead, install Python with [the python.org installer](https://www.python.org/downloads/) or with a package manager like Homebrew, MacPorts or Fink.

On macOS >=13.3, the easiest build option is to use Accelerate, which is already installed and will be used automatically by default. On older macOS versions you need a different BLAS library, most likely OpenBLAS, plus `pkg-config` to detect OpenBLAS. These are easiest to install with [Homebrew](https://brew.sh/):

```
brew install openblas pkg-config gfortran
```

**Windows**

On Windows, the use of a Fortran compiler is more tricky than on other platforms, because MSVC does not support Fortran, and gfortran and MSVC can't be used together. If you don't need to run the `f2py` tests, simply using MSVC is easiest. Otherwise, you will need one of these sets of compilers:

1. MSVC + Intel Fortran (`ifort`)
2. Intel compilers (`icc`, `ifort`)
3. MinGW-w64 compilers (`gcc`, `g++`, `gfortran`)

Compared to macOS and Linux, building NumPy on Windows is a little more difficult, due to the need to set up these compilers. It is not possible to just call a one-liner on the command prompt as you would on other platforms.

First, install Microsoft Visual Studio - the 2019 Community Edition or any newer version will work (see the [Visual Studio download site](https://visualstudio.microsoft.com/downloads/)). This is needed even if you use the MinGW-w64 or Intel compilers, in order to ensure you have the Windows Universal C Runtime (the other components of Visual Studio are not needed when using MinGW-w64, and can be deselected if desired, to save disk space). The recommended version of the UCRT is >= 10.0.22621.0.

**MSVC**

The MSVC installer does not put the compilers on the system path, and the install location may change.
To query the install location, MSVC comes with a `vswhere.exe` command-line utility. To make the C/C++ compilers available inside the shell you are using, you need to run a `.bat` file for the correct bitness and architecture (e.g., for 64-bit Intel CPUs, use `vcvars64.bat`).

If using a Conda environment while a version of Visual Studio 2019+ is installed that includes the MSVC v142 package (VS 2019 C++ x86/x64 build tools), activating the conda environment should cause Visual Studio to be found and the appropriate `.bat` file executed to set these variables.

For detailed guidance, see [Use the Microsoft C++ toolset from the command line](https://learn.microsoft.com/en-us/cpp/build/building-on-the-command-line?view=msvc-170).

**Intel**

Similar to MSVC, the Intel compilers are designed to be used with an activation script (`Intel\oneAPI\setvars.bat`) that you run in the shell you are using. This makes the compilers available on the path. For detailed guidance, see [Get Started with the Intel® oneAPI HPC Toolkit for Windows](https://www.intel.com/content/www/us/en/docs/oneapi-hpc-toolkit/get-started-guide-windows/2023-1/overview.html).

**MinGW-w64**

There are several sources of binaries for MinGW-w64. We recommend the RTools versions, which can be installed with Chocolatey (see the Chocolatey install instructions [here](https://chocolatey.org/install)):

```
choco install rtools -y --no-progress --force --version=4.0.0.20220206
```

**Note:** Compilers should be on the system path (i.e., the `PATH` environment variable should contain the directory in which the compiler executables can be found) in order to be found, with the exception of MSVC, which will be found automatically if and only if there are no other compilers on the `PATH`. You can use any shell (e.g., Powershell, `cmd` or Git Bash) to invoke a build. To check that this is the case, try invoking a Fortran compiler in the shell you use (e.g., `gfortran --version` or `ifort --version`).
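To verify the "compilers on the `PATH`" requirement from the note above, a quick cross-platform check can be scripted. This is a sketch; the list of compiler names is illustrative and should be adjusted to your toolchain:

```python
import shutil


def discoverable_compilers(names=("gcc", "g++", "gfortran", "cl", "ifort")):
    """Map each compiler name to its full path, or None if not on PATH."""
    return {name: shutil.which(name) for name in names}


if __name__ == "__main__":
    for name, path in discoverable_compilers().items():
        print(f"{name}: {path or 'not found on PATH'}")
```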
**Warning:** When using a conda environment, it is possible that the environment creation will not work due to an outdated Fortran compiler. If that happens, remove the `compilers` entry from `environment.yml` and try again. The Fortran compiler should be installed as described in this section.

## Building NumPy from source

If you want to only install NumPy from source once and not do any development work, then the recommended way to build and install is to use `pip`. Otherwise, conda is recommended.

**Note:** If you don't have a conda installation yet, we recommend using [Miniforge](https://github.com/conda-forge/miniforge); any conda flavor will work though.

### Building from source to use NumPy

**Conda env**

If you are using a conda environment, `pip` is still the tool you use to invoke a from-source build of NumPy. It is important to always use the `--no-build-isolation` flag to the `pip install` command, to avoid building against a `numpy` wheel from PyPI. In order for that to work you must first install the remaining build dependencies into the conda environment:

```
# Either install all NumPy dev dependencies into a fresh conda environment
mamba env create -f environment.yml

# Or, install only the required build dependencies
mamba install python numpy cython compilers openblas meson-python pkg-config

# To build the latest stable release:
pip install numpy --no-build-isolation --no-binary numpy

# To build a development version, you need a local clone of the NumPy git repository:
git clone https://github.com/numpy/numpy.git
cd numpy
git submodule update --init
pip install . --no-build-isolation
```

**Warning:** On Windows, the `AR`, `LD`, and `LDFLAGS` environment variables may be set, which will cause the `pip install` command to fail. These variables are only needed for flang and can be safely unset prior to running `pip install`.
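One way to act on the warning above without touching your shell configuration is to scrub the offending variables from a copy of the environment before invoking `pip`. A sketch; the variable list simply mirrors the warning:

```python
import os

# Per the warning above, these are only needed for flang and can be
# safely removed before running `pip install`.
PROBLEM_VARS = ("AR", "LD", "LDFLAGS")


def scrubbed_env(env=None):
    """Return a copy of the environment without the problematic variables."""
    env = dict(os.environ if env is None else env)
    for var in PROBLEM_VARS:
        env.pop(var, None)
    return env


# Hypothetical usage with subprocess:
#   import subprocess, sys
#   subprocess.run(
#       [sys.executable, "-m", "pip", "install", ".", "--no-build-isolation"],
#       env=scrubbed_env(), check=True,
#   )
```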
**Virtual env or system Python**

```
# To build the latest stable release:
pip install numpy --no-binary numpy

# To build a development version, you need a local clone of the NumPy git repository:
git clone https://github.com/numpy/numpy.git
cd numpy
git submodule update --init
pip install .
```

### Building from source for NumPy development

If you want to build from source in order to work on NumPy itself, first clone the NumPy repository:

```
git clone https://github.com/numpy/numpy.git
cd numpy
git submodule update --init
```

Then you want to do the following:

1. Create a dedicated development environment (virtual environment or conda environment).
2. Install all needed dependencies (_build_, and also _test_, _doc_ and _optional_ dependencies).
3. Build NumPy with the `spin` developer interface.

Step (3) is always the same; steps (1) and (2) differ between conda and virtual environments:

**Conda env**

To create a `numpy-dev` development environment with every required and optional dependency installed, run:

```
mamba env create -f environment.yml
mamba activate numpy-dev
```

**Virtual env or system Python**

**Note:** There are many tools to manage virtual environments, like `venv`, `virtualenv`/`virtualenvwrapper`, `pyenv`/`pyenv-virtualenv`, Poetry, PDM, Hatch, and more. Here we use the basic `venv` tool that is part of the Python stdlib. You can use any other tool; all we need is an activated Python environment.

Create and activate a virtual environment in a new directory named `venv` (note that the exact activation command may differ based on your OS and shell; see ["How venvs work"](https://docs.python.org/3/library/venv.html#how-venvs-work) in the `venv` docs).
Linux:

```
python -m venv venv
source venv/bin/activate
```

macOS:

```
python -m venv venv
source venv/bin/activate
```

Windows:

```
python -m venv venv
.\venv\Scripts\activate
```

Then install the Python-level dependencies from PyPI with:

```
python -m pip install -r requirements/all_requirements.txt
```

To build NumPy in an activated development environment, run:

```
spin build
```

This will install NumPy inside the repository (by default in a `build-install` directory). You can then run tests (`spin test`), drop into IPython (`spin ipython`), or take other development steps like building the html documentation or running benchmarks. The `spin` interface is self-documenting, so please see `spin --help` and `spin <command> --help` for detailed guidance.

**Warning:** In an activated conda environment on Windows, the `AR`, `LD`, and `LDFLAGS` environment variables may be set, which will cause the build to fail. These variables are only needed for flang and can be safely unset for the build.

### IDE support & editable installs

While the `spin` interface is our recommended way of working on NumPy, it has one limitation: because of the custom install location, NumPy installed using `spin` will not be recognized automatically within an IDE (e.g., for running a script via a "run" button, or setting breakpoints visually). This will work better with an _in-place build_ (or "editable install").

Editable installs are supported. It is important to understand that **you may use either an editable install or `spin` in a given repository clone, but not both**. If you use editable installs, you have to use `pytest` and other development tools directly instead of using `spin`.

To use an editable install, ensure you start from a clean repository (run `git clean -xdf` if you've built with `spin` before) and have all dependencies set up correctly as described higher up on this page. Then do:

```
# Note: the --no-build-isolation is important!
pip install -e . --no-build-isolation

# To run the tests for, e.g., the `numpy.linalg` module:
pytest numpy/linalg
```

When making changes to NumPy code, including to compiled code, there is no need to manually rebuild or reinstall. NumPy is automatically rebuilt each time it is imported by the Python interpreter; see the [meson-python](https://mesonbuild.com/meson-python/) documentation on editable installs for more details on how that works under the hood.

When you run `git clean -xdf`, which removes the built extension modules, remember to also uninstall NumPy with `pip uninstall numpy`.

**Warning:** Note that editable installs are fundamentally incomplete installs. Their only guarantee is that `import numpy` works - so they are suitable for working on NumPy itself, and for working on pure Python packages that depend on NumPy. Headers, entry points, and other such things may not be available from an editable install.

## Customizing builds

* [Compiler selection and customizing a build](compilers_and_options)
* [BLAS and LAPACK](blas_lapack)
* [Cross compilation](cross_compilation)
* [Building redistributable binaries](redistributable_binaries)

## Background information

* [Understanding Meson](understanding_meson)
* [Introspecting build steps](introspecting_a_build)
* [Meson and `distutils` ways of doing things](distutils_equivalents)

# Introspecting build steps

When you have an issue with a particular Python extension module or other build target, there are a number of ways to figure out what the build system is doing exactly. Beyond looking at the `meson.build` content for the target of interest, these include:

1. Reading the generated `build.ninja` file in the build directory.
2. Using `meson introspect` to learn more about build options, dependencies and flags used for the target.
3. Reading `<build-dir>/meson-info/*.json` for details on discovered dependencies, where Meson plans to install files to, etc.
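As a sketch of option (3), the introspection JSON can also be consumed programmatically. The file name follows Meson's `meson-info` layout; the default build directory path is an assumption:

```python
import json
from pathlib import Path


def summarize_targets(build_dir="build"):
    """Map target names to their types, from Meson's introspection data.

    Assumes `meson setup <build_dir>` has already been run, so that
    `<build_dir>/meson-info/intro-targets.json` exists.
    """
    path = Path(build_dir) / "meson-info" / "intro-targets.json"
    targets = json.loads(path.read_text())
    return {t["name"]: t["type"] for t in targets}
```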
These things are all available after the configure stage of the build (i.e., `meson setup`) has run. It is typically more effective to look at this information than to run the build and read the full build log.

For more details on this topic, see the [SciPy doc page on build introspection](http://scipy.github.io/devdocs/building/introspecting_a_build.html).

# Building redistributable binaries

When `python -m build` or `pip wheel` is used to build a NumPy wheel, that wheel will rely on external shared libraries (at least for BLAS/LAPACK and a Fortran compiler runtime library, perhaps other libraries). Such wheels therefore will only run on the system on which they are built. See [the pypackaging-native content under "Building and installing or uploading artifacts"](https://pypackaging-native.github.io/meta-topics/build_steps_conceptual/#building-and-installing-or-uploading-artifacts) for more context on that.

A wheel like that is therefore an intermediate stage on the way to producing a binary that can be distributed. That final binary may be a wheel - in that case, run `auditwheel` (Linux), `delocate` (macOS) or `delvewheel` (Windows) to vendor the required shared libraries into the wheel.

The final binary may also be in another packaging format (e.g., a `.rpm`, `.deb` or `.conda` package). In that case, there are packaging-ecosystem-specific tools to first install the wheel into a staging area, then make the extension modules in that install location relocatable (e.g., by rewriting RPATHs), and then repackage it into the final package format.

# Understanding Meson

Building NumPy relies on the following tools, which can be considered part of the build system:

* `meson`: the Meson build system, installable as a pure Python package from PyPI or conda-forge.
* `ninja`: the build tool invoked by Meson to do the actual building (e.g., invoking compilers). Also installable from PyPI (on all common platforms) or conda-forge.
* `pkg-config`: the tool used for discovering dependencies (in particular BLAS/LAPACK). Available on conda-forge (and Homebrew, Chocolatey, and Linux package managers), but not packaged on PyPI.
* `meson-python`: the Python build backend (i.e., the thing that gets invoked via a hook in `pyproject.toml` by a build frontend like `pip` or `pypa/build`). This is a thin layer on top of Meson, whose main roles are (a) interfacing with build frontends, and (b) producing sdists and wheels with valid file names and metadata.

**Warning:** As of Dec'23, NumPy vendors a custom version of Meson, which is needed for SIMD and BLAS/LAPACK features that are not yet available in upstream Meson. Hence, using the `meson` executable directly is not possible. Instead, wherever instructions say `meson xxx`, use `python vendored-meson/meson/meson.py xxx` instead.

Building with Meson happens in stages:

* A configure stage (`meson setup`) to detect compilers, dependencies and build options, and create the build directory and `build.ninja` file.
* A compile stage (`meson compile` or `ninja`), where the extension modules that are part of a built NumPy package get compiled.
* An install stage (`meson install`) to install the installable files from the source and build directories to the target install directory.

Meson has a good build dependency tracking system, so invoking a build for a second time will rebuild only targets for which any sources or dependencies have changed.

## To learn more about Meson

Meson has [very good documentation](https://mesonbuild.com/); it pays off to read it, and it is often the best source of answers for "how to do X". Furthermore, an extensive pdf book on Meson can be obtained for free. To learn more about the design principles Meson uses, the recent talks linked from [mesonbuild.com/Videos](https://mesonbuild.com/Videos.html) are also a good resource.
## Explanation of build stages

_This is for teaching purposes only; there should be no need to execute these stages separately!_

Assume we're starting from a clean repo and a fully set up conda environment:

    git clone git@github.com:numpy/numpy.git
    git submodule update --init
    mamba env create -f environment.yml
    mamba activate numpy-dev

To now run the configure stage of the build and instruct Meson to put the build artifacts in `build/` and a local install under `build-install/` relative to the root of the repo, do:

    meson setup build --prefix=$PWD/build-install

To then run the compile stage of the build, do:

    ninja -C build

In the command above, `-C` is followed by the name of the build directory. You can have multiple build directories at the same time. Meson is fully out-of-place, so those builds will not interfere with each other. You can for example have a GCC build, a Clang build and a debug build in different directories.

To then install NumPy into the prefix (`build-install/` here, but note that that's just an arbitrary name we picked here):

    meson install -C build

It will then install to `build-install/lib/python3.11/site-packages/numpy`, which is not on your Python path, so to add it, do (_again, this is for learning purposes; using ``PYTHONPATH`` explicitly is typically not the best idea_):

    export PYTHONPATH=$PWD/build-install/lib/python3.11/site-packages/

Now we should be able to import `numpy` and run the tests. Remember that we need to move out of the root of the repo to ensure we pick up the package and not the local `numpy/` source directory:

    cd doc
    python -c "import numpy as np; np.test()"

The above runs the "fast" numpy test suite. Other ways of running the tests should also work, for example:

    pytest --pyargs numpy

The full test suite should pass, without any build warnings on Linux (with the GCC version for which `-Werror` is enforced in CI at least) and with at most a moderate amount of warnings on other platforms.
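A quick Python check (a sketch; the exact paths depend on your Python version and install prefix) confirms which `numpy` is actually being picked up after setting `PYTHONPATH`:

```python
# Verify that the numpy being imported is the installed package, not the
# numpy/ source tree in the repo root.
import numpy as np

print(np.__file__)     # for the walkthrough above, expect a path under build-install/
print(np.__version__)
```

If `np.__file__` points into the repo's `numpy/` source directory instead, you are still in the repo root and the local sources are shadowing the installed build.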
# Memory alignment

## NumPy alignment goals

There are three use-cases related to memory alignment in NumPy (as of 1.14):

1. Creating [structured datatypes](../glossary#term-structured-data-type) with [fields](../glossary#term-field) aligned like in a C-struct.
2. Speeding up copy operations by using [`uint`](../reference/arrays.scalars#numpy.uint "numpy.uint") assignment instead of `memcpy`.
3. Guaranteeing safe aligned access for ufuncs/setitem/casting code.

NumPy uses two different forms of alignment to achieve these goals: "True alignment" and "Uint alignment".

"True" alignment refers to the architecture-dependent alignment of an equivalent C-type in C. For example, on x64 systems [`float64`](../reference/arrays.scalars#numpy.float64 "numpy.float64") is equivalent to `double` in C. On most systems, this has either an alignment of 4 or 8 bytes (and this can be controlled in GCC by the option `-malign-double`). A variable is aligned in memory if its memory offset is a multiple of its alignment. On some systems (e.g. sparc) memory alignment is required; on others, it gives a speedup.

"Uint" alignment depends on the size of a datatype. It is defined to be the "True alignment" of the uint used by NumPy's copy-code to copy the datatype, or undefined/unaligned if there is no equivalent uint. Currently, NumPy uses `uint8`, `uint16`, `uint32`, `uint64`, and `uint64` to copy data of size 1, 2, 4, 8, 16 bytes respectively, and all other sized datatypes cannot be uint-aligned.

For example, on a (typical Linux x64 GCC) system, the NumPy [`complex64`](../reference/arrays.scalars#numpy.complex64 "numpy.complex64") datatype is implemented as `struct { float real, imag; }`. This has "true" alignment of 4 and "uint" alignment of 8 (equal to the true alignment of `uint64`).
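The "true" alignment of a dtype and the resulting array flag can be inspected from Python; a minimal sketch (the printed values are platform and compiler dependent; 4 and True are typical for `complex64` on x86_64 Linux/GCC):

```python
import numpy as np

# dtype.alignment exposes the "true" (architecture-dependent) alignment.
print(np.dtype(np.complex64).alignment)

# The ALIGNED flag reports whether the array's data pointer and strides
# are consistent with dtype.alignment.
a = np.zeros(4, dtype=np.complex64)
print(a.flags.aligned)
```

Note that "uint" alignment is an internal concept and is not exposed as a Python attribute.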
Some cases where uint and true alignment are different (default GCC Linux):

arch | type | true-aln | uint-aln
---|---|---|---
x86_64 | complex64 | 4 | 8
x86_64 | float128 | 16 | 8
x86 | float96 | 4 | -

## Variables in NumPy which control and describe alignment

There are 4 relevant uses of the word `align` in NumPy:

* The [`dtype.alignment`](../reference/generated/numpy.dtype.alignment#numpy.dtype.alignment "numpy.dtype.alignment") attribute (`descr->alignment` in C). This is meant to reflect the "true alignment" of the type. It has arch-dependent default values for all datatypes, except for the structured types created with `align=True` as described below.
* The `ALIGNED` flag of an ndarray, computed in `IsAligned` and checked by [`PyArray_ISALIGNED`](../reference/c-api/array#c.PyArray_ISALIGNED "PyArray_ISALIGNED"). This is computed from [`dtype.alignment`](../reference/generated/numpy.dtype.alignment#numpy.dtype.alignment "numpy.dtype.alignment"). It is set to `True` if every item in the array is at a memory location consistent with [`dtype.alignment`](../reference/generated/numpy.dtype.alignment#numpy.dtype.alignment "numpy.dtype.alignment"), which is the case if the data pointer and all strides of the array are multiples of that alignment.
* The `align` keyword of the dtype constructor, which only affects [structured arrays](../user/basics.rec#structured-arrays). If the structure's field offsets are not manually provided, NumPy determines offsets automatically. In that case, `align=True` pads the structure so that each field is "true" aligned in memory and sets [`dtype.alignment`](../reference/generated/numpy.dtype.alignment#numpy.dtype.alignment "numpy.dtype.alignment") to be the largest of the field "true" alignments. This is like what C-structs usually do. Otherwise, if offsets or itemsize were manually provided, `align=True` simply checks that all the fields are "true" aligned and that the total itemsize is a multiple of the largest field alignment.
In either case [`dtype.isalignedstruct`](../reference/generated/numpy.dtype.isalignedstruct#numpy.dtype.isalignedstruct "numpy.dtype.isalignedstruct") is also set to True.

* `IsUintAligned` is used to determine if an ndarray is "uint aligned" in an analogous way to how `IsAligned` checks for true alignment.

## Consequences of alignment

Here is how the variables above are used:

1. Creating aligned structs: to know how to offset a field when `align=True`, NumPy looks up `field.dtype.alignment`. This includes fields that are nested structured arrays.
2. Ufuncs: if the `ALIGNED` flag of an array is False, ufuncs will buffer/cast the array before evaluation. This is needed since ufunc inner loops access raw elements directly, which might fail on some archs if the elements are not true-aligned.
3. Getitem/setitem/copyswap functions: similar to ufuncs, these functions generally have two code paths. If `ALIGNED` is False they will use a code path that buffers the arguments so they are true-aligned.
4. Strided copy code: here, "uint alignment" is used instead. If the itemsize of an array is equal to 1, 2, 4, 8 or 16 bytes and the array is uint aligned, then NumPy will do `*(uintN*)dst = *(uintN*)src` for the appropriate N. Otherwise, NumPy copies by doing `memcpy(dst, src, N)`.
5. Nditer code: since this often calls the strided copy code, it must check for "uint alignment".
6. Cast code: this checks for "true" alignment, as it does `*dst = CASTFUNC(*src)` if aligned. Otherwise, it does `memmove(srcval, src); dstval = CASTFUNC(srcval); memmove(dst, dstval)` where `dstval`/`srcval` are aligned.

Note that the strided-copy and strided-cast code are deeply intertwined and so any arrays being processed by them must be both uint and true aligned, even though the copy code only needs uint alignment and the cast code only true alignment. If there is ever a big rewrite of this code it would be good to allow them to use different alignments.
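The effect of the `align` keyword described above can be observed directly from Python; a small sketch (the padded itemsize and alignment values depend on the platform; 16 and 8 are typical for this struct on x86_64 Linux/GCC):

```python
import numpy as np

# Without align=True, fields are packed back-to-back and alignment is 1.
packed = np.dtype([('a', np.uint8), ('b', np.float64)])
print(packed.itemsize, packed.alignment, packed.isalignedstruct)

# With align=True, NumPy pads so each field is "true" aligned, C-struct
# style, and dtype.alignment becomes the largest field alignment.
aligned = np.dtype([('a', np.uint8), ('b', np.float64)], align=True)
print(aligned.itemsize, aligned.alignment, aligned.isalignedstruct)
```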
# For downstream package authors

This document aims to explain some best practices for authoring a package that depends on NumPy.

## Understanding NumPy's versioning and API/ABI stability

NumPy uses a standard, [**PEP 440**](https://peps.python.org/pep-0440/) compliant, versioning scheme: `major.minor.bugfix`. A _major_ release is highly unusual; if one happens, it will most likely indicate an ABI break. NumPy 1.xx releases happened from 2006 to 2023; NumPy 2.0 in early 2024 is the first release which changed the ABI (minor ABI breaks for corner cases may have happened a few times in minor releases). _Minor_ versions are released regularly, typically every 6 months. Minor versions contain new features, deprecations, and removals of previously deprecated code. _Bugfix_ releases are made even more frequently; they do not contain any new features or deprecations.

It is important to know that NumPy, like Python itself and most other well-known scientific Python projects, does **not** use semantic versioning. Instead, backwards incompatible API changes require deprecation warnings for at least two releases. For more details, see [NEP 23 — Backwards compatibility and deprecation policy](https://numpy.org/neps/nep-0023-backwards-compatibility.html#nep23 "\(in NumPy Enhancement Proposals\)").

NumPy has both a Python API and a C API. The C API can be used directly or via Cython, f2py, or other such tools. If your package uses the C API, then ABI (application binary interface) stability of NumPy is important. NumPy's ABI is forward but not backward compatible. This means: binaries compiled against a given target version of NumPy's C API will still run correctly with newer NumPy versions, but not with older versions.

## Testing against the NumPy main branch or pre-releases

For large, actively maintained packages that depend on NumPy, we recommend testing against the development version of NumPy in CI. To make this easy, nightly builds are provided as wheels at .
Example install command:

    pip install -U --pre --only-binary :all: -i https://pypi.anaconda.org/scientific-python-nightly-wheels/simple numpy

This helps detect regressions in NumPy that need fixing before the next NumPy release. Furthermore, we recommend raising errors on warnings in CI for this job - either all warnings, or at least `DeprecationWarning` and `FutureWarning`. This gives you early warning about changes in NumPy that your code needs to adapt to.

If you want to test your own wheel builds against the latest NumPy nightly build and you're using `cibuildwheel`, you may need something like this in your CI config file:

    CIBW_ENVIRONMENT: "PIP_PRE=1 PIP_EXTRA_INDEX_URL=https://pypi.anaconda.org/scientific-python-nightly-wheels/simple"

## Adding a dependency on NumPy

### Build-time dependency

Note: before NumPy 1.25, the NumPy C-API was _not_ exposed in a backwards compatible way by default. This means that when compiling with a NumPy version earlier than 1.25, you have to compile with the oldest version you wish to support. This can be done by using [oldest-supported-numpy](https://github.com/scipy/oldest-supported-numpy/). Please see the [NumPy 1.24 documentation](https://numpy.org/doc/1.24/dev/depending_on_numpy.html).

If a package either uses the NumPy C API directly or uses some other tool that depends on it, like Cython or Pythran, NumPy is a _build-time_ dependency of the package.

By default, NumPy will expose an API that is backwards compatible with the oldest NumPy version that supports the currently oldest compatible Python version. NumPy 1.25.0 supports Python 3.9 and higher, and NumPy 1.19 is the first version to support Python 3.9. Thus, we guarantee that, when using defaults, NumPy 1.25 will expose a C-API compatible with NumPy 1.19 (the exact version is set within NumPy-internal header files). NumPy is also forward compatible for all minor releases, but a major release will require recompilation (see NumPy 2.0-specific advice further down).
The default behavior can be customized for example by adding:

    #define NPY_TARGET_VERSION NPY_1_22_API_VERSION

before including any NumPy headers (or the equivalent `-D` compiler flag) in every extension module that requires the NumPy C-API. This is mainly useful if you need to use newly added API at the cost of not being compatible with older versions.

If for some reason you wish to compile for the currently installed NumPy version by default, you can add:

    #ifndef NPY_TARGET_VERSION
    #define NPY_TARGET_VERSION NPY_API_VERSION
    #endif

which allows a user to override the default via `-DNPY_TARGET_VERSION`. This define must be consistent for each extension module (use of `import_array()`) and also applies to the umath module.

When you compile against NumPy, you should add the proper version restrictions to your `pyproject.toml` (see PEP 517), since your extension will not be compatible with a new major release of NumPy and may not be compatible with very old versions.

For conda-forge packages, please see [here](https://conda-forge.org/docs/maintainer/knowledge_base.html#building-against-numpy). As of now, it is usually as easy as including:

    host:
      - numpy
    run:
      - {{ pin_compatible('numpy') }}

### Runtime dependency & version ranges

NumPy itself and many core scientific Python packages have agreed on a schedule for dropping support for old Python and NumPy versions: [NEP 29](https://docs.scipy.org/doc/scipy/dev/toolchain.html#nep29 "\(in SciPy v1.14.1\)"). We recommend all packages depending on NumPy to follow the recommendations in NEP 29.

For _run-time dependencies_, specify version bounds using `install_requires` in `setup.py` (assuming you use `numpy.distutils` or `setuptools` to build). Most libraries that rely on NumPy will not need to set an upper version bound: NumPy is careful to preserve backward-compatibility.
That said, if you (a) maintain a project that is guaranteed to release frequently, (b) use a large part of NumPy's API surface, and (c) are worried that changes in NumPy may break your code, you can set an upper bound on the NumPy version.

When adding support for NumPy 2.0, there are two choices: keep compatibility with both 1.xx and 2.x, or require `numpy>=2.0` only. The latter is simpler, but may be more restrictive for your users. In that case, simply add `numpy>=2.0` (or `numpy>=2.0.0rc1`) to your build and runtime requirements and you're good to go. We'll focus on the "keep compatibility with 1.xx and 2.x" now, which is a little more involved.

_Example for a package using the NumPy C API (via C/Cython/etc.) which wants to support NumPy 1.23.5 and up_:

    [build-system]
    build-backend = ...
    requires = [
        # Note for packagers: this constraint is specific to wheels
        # for PyPI; it is also supported to build against 1.xx still.
        # If you do so, please ensure to include a `numpy<2.0`
        # runtime requirement for those binary packages.
        "numpy>=2.0.0rc1",
        ...
    ]

    [project]
    dependencies = [
        "numpy>=1.23.5",
    ]

We recommend that you have at least one CI job which builds/installs via a wheel, and then runs tests against the oldest numpy version that the package supports. For example:

    - name: Build wheel via wheel, then install it
      run: |
        python -m build  # This will pull in numpy 2.0 in an isolated env
        python -m pip install dist/*.whl

    - name: Test against oldest supported numpy version
      run: |
        python -m pip install numpy==1.23.5
        # now run test suite

The above only works once NumPy 2.0 is available on PyPI. If you want to test against a NumPy 2.0-dev wheel, you have to use a numpy nightly build (see the section on nightly builds higher up) or build numpy from source.

# Advanced debugging tools

If you reached here, you want to dive into, or use, more advanced tooling. This is usually not necessary for first time contributors and most day-to-day development. These tools are used more rarely, for example close to a new NumPy release, or when a large or particularly complex change was made.
Since not all of these tools are used on a regular basis and are only available on some systems, please expect differences, issues, or quirks; we will be happy to help if you get stuck and appreciate any improvements or suggestions to these workflows.

## Finding C errors with additional tooling

Most development will not require more than a typical debugging toolchain as shown in [Debugging](development_environment#debugging). But, for example, memory leaks can be particularly subtle or difficult to narrow down.

We do not expect any of these tools to be run by most contributors. However, you can ensure that we can track down such issues more easily:

* Tests should cover all code paths, including error paths.
* Try to write short and simple tests. If you have a very complicated test, consider creating an additional simpler test as well. This can be helpful, because often it is only easy to find which test triggers an issue and not which line of the test.
* Never use `np.empty` if data is read/used. `valgrind` will notice this and report an error. When you do not care about values, you can generate random values instead.

This will help us catch any oversights before your change is released and means you do not have to worry about making reference counting errors, which can be intimidating.

### Python debug build

Debug builds of Python are easily available, for example via the system package manager on Linux systems, but are also available on other platforms, possibly in a less convenient format. If you cannot easily install a debug build of Python from a system package manager, you can build one yourself using [pyenv](https://github.com/pyenv/pyenv).
For example, to install and globally activate a debug build of Python 3.10.8, one would do:

    pyenv install -g 3.10.8
    pyenv global 3.10.8

Note that `pyenv install` builds Python from source, so you must ensure that Python's dependencies are installed before building; see the pyenv documentation for platform-specific installation instructions. You can use `pip` to install Python dependencies you may need for your debugging session. If there is no debug wheel available on `pypi`, you will need to build the dependencies from source and ensure that your dependencies are also compiled as debug builds.

Often debug builds of Python name the Python executable `pythond` instead of `python`. To check if you have a debug build of Python installed, you can run e.g. `pythond -m sysconfig` to get the build configuration for the Python executable. A debug build will be built with debug compiler options in `CFLAGS` (e.g. `-g -Og`).

Running the NumPy tests or an interactive terminal is usually as easy as:

    python3.8d runtests.py
    # or
    python3.8d runtests.py --ipython

and was already mentioned in [Debugging](development_environment#debugging).

A Python debug build will help:

* Find bugs which may otherwise cause random behaviour. One example is when an object is still used after it has been deleted.
* Check correct reference counting. This works using the additional commands:

        sys.gettotalrefcount()
        sys.getallocatedblocks()

* Allow easier debugging with gdb and other C debuggers.

#### Use together with `pytest`

Running the test suite only with a debug python build will not find many errors on its own. An additional advantage of a debug build of Python is that it allows detecting memory leaks. A tool to make this easier is [pytest-leaks](https://github.com/abalkin/pytest-leaks), which can be installed using `pip`.
Unfortunately, `pytest` itself may leak memory, but good results can usually (currently) be achieved by removing:

    @pytest.fixture(autouse=True)
    def add_np(doctest_namespace):
        doctest_namespace['np'] = numpy

    @pytest.fixture(autouse=True)
    def env_setup(monkeypatch):
        monkeypatch.setenv('PYTHONHASHSEED', '0')

from `numpy/conftest.py` (this may change with new `pytest-leaks` versions or `pytest` updates).

This allows running the test suite, or part of it, conveniently:

    python3.8d runtests.py -t numpy/_core/tests/test_multiarray.py -- -R2:3 -s

where `-R2:3` is the `pytest-leaks` command (see its documentation), and `-s` causes output to print, which may be necessary (in some versions captured output was detected as a leak). Note that some tests are known (or even designed) to leak references; we try to mark them, but expect some false positives.

### `valgrind`

Valgrind is a powerful tool to find certain memory access problems and should be run on complicated C code. Basic use of `valgrind` usually requires no more than:

    PYTHONMALLOC=malloc valgrind python runtests.py

where `PYTHONMALLOC=malloc` is necessary to avoid false positives from python itself. Depending on the system and valgrind version, you may see more false positives. `valgrind` supports "suppressions" to ignore some of these, and Python does have a suppression file (and even a compile time option) which may help if you find it necessary.

Valgrind helps:

* Find use of uninitialized variables/memory.
* Detect memory access violations (reading or writing outside of allocated memory).
* Find _many_ memory leaks. Note that for _most_ leaks the python debug build approach (and `pytest-leaks`) is much more sensitive. The reason is that `valgrind` can only detect if memory is definitely lost. If:

        dtype = np.dtype(np.int64)
        arr.astype(dtype=dtype)

    has incorrect reference counting for `dtype`, this is a bug, but valgrind cannot see it because `np.dtype(np.int64)` always returns the same object.
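The singleton behaviour referred to here is easy to confirm from Python:

```python
import numpy as np

# np.dtype(np.int64) returns the same cached object on every call, so an
# erroneously retained reference to it is never "definitely lost" memory
# from valgrind's point of view.
print(np.dtype(np.int64) is np.dtype(np.int64))  # True
```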
However, not all dtypes are singletons, so this might leak memory for different input. In rare cases NumPy uses `malloc` and not the Python memory allocators; such allocations are invisible to the Python debug build. `malloc` should normally be avoided, but there are some exceptions (e.g. the `PyArray_Dims` structure is public API and cannot use the Python allocators).

Even though using valgrind for memory leak detection is slow and less sensitive, it can be convenient: you can run most programs with valgrind without modification.

Things to be aware of:

* Valgrind does not support the numpy `longdouble`; this means that tests using it will fail, or errors that are completely fine will be flagged.
* Expect some errors before and after running your NumPy code.
* Caches can mean that errors (specifically memory leaks) may not be detected, or are only detected at a later, unrelated time.

A big advantage of valgrind is that it has no requirements aside from valgrind itself (although you probably want to use debug builds for better tracebacks).

#### Use together with `pytest`

You can run the test suite with valgrind, which may be sufficient when you are only interested in a few tests:

    PYTHONMALLOC=malloc valgrind python runtests.py \
        -t numpy/_core/tests/test_multiarray.py -- --continue-on-collection-errors

Note the `--continue-on-collection-errors`, which is currently necessary due to missing `longdouble` support causing failures (this will usually not be necessary if you do not run the full test suite).

If you wish to detect memory leaks you will also require `--show-leak-kinds=definite` and possibly more valgrind options. Just as for `pytest-leaks`, certain tests are known to leak or cause errors in valgrind, and may or may not be marked as such.
We have developed [pytest-valgrind](https://github.com/seberg/pytest-valgrind), which:

* Reports errors for each test individually
* Narrows down memory leaks to individual tests (by default valgrind only checks for memory leaks after a program stops, which is very cumbersome)

Please refer to its `README` for more information (it includes an example command for NumPy).

# Setting up and using your development environment

## Recommended development setup

Since NumPy contains parts written in C and Cython that need to be compiled before use, make sure you have the necessary compilers and Python development headers installed - see [Building from source](../building/index#building-from-source). Building NumPy as of version `2.0` requires C11 and C++17 compliant compilers.

Having compiled code also means that importing NumPy from the development sources needs some additional steps, which are explained below. For the rest of this chapter we assume that you have set up your git repo as described in [Working with scikit-image source code](https://scikit-image.org/docs/stable/gitwash/index.html#using-git "\(in skimage v0.25.0\)").

Note: if you are having trouble building NumPy from source or setting up your local development environment, you can try to build NumPy with GitHub Codespaces. It allows you to create the correct development environment right in your browser, reducing the need to install local development environments and deal with incompatible dependencies.

If you have good internet connectivity and want a temporary set-up, it is often faster to work on NumPy in a Codespaces environment. For documentation on how to get started with Codespaces, see [the Codespaces docs](https://docs.github.com/en/codespaces). When creating a codespace for the `numpy/numpy` repository, the default 2-core machine type works; 4-core will build and work a bit faster (but of course at the cost of halving your number of free usage hours).
Once your codespace has started, you can run `conda activate numpy-dev` and your development environment is completely set up - you can then follow the relevant parts of the NumPy documentation to build, test, develop, write docs, and contribute to NumPy.

## Using virtual environments

A frequently asked question is "How do I set up a development version of NumPy in parallel to a released version that I use to do my job/research?". One simple way to achieve this is to install the released version in site-packages, by using pip or conda for example, and set up the development version in a virtual environment.

If you use conda, we recommend creating a separate virtual environment for numpy development using the `environment.yml` file in the root of the repo (this will create the environment and install all development dependencies at once):

    $ conda env create -f environment.yml  # `mamba` works too for this command
    $ conda activate numpy-dev

If you installed Python some other way than conda, first install [virtualenv](https://virtualenv.pypa.io/) (optionally using [virtualenvwrapper](https://doughellmann.com/projects/virtualenvwrapper/)), then create your virtualenv (named `numpy-dev` here), activate it, and install all project dependencies with:

    $ virtualenv numpy-dev
    $ source numpy-dev/bin/activate  # activate virtual environment
    $ python -m pip install -r requirements/all_requirements.txt

Now, whenever you want to switch to the virtual environment, you can use the command `source numpy-dev/bin/activate`; use `deactivate` to exit from the virtual environment and return to your previous shell.

## Building from source

See [Building from source](../building/index#building-from-source).
## Testing builds

Before running the tests, first install the test dependencies:

    $ python -m pip install -r requirements/test_requirements.txt
    $ python -m pip install asv  # only for running benchmarks

To build the development version of NumPy, run tests, and spawn interactive shells with the Python import paths properly set up, use the [spin](https://github.com/scientific-python/spin) utility. To run tests, do one of:

    $ spin test -v
    $ spin test numpy/random  # to run the tests in a specific module
    $ spin test -v -t numpy/_core/tests/test_nditer.py::test_iter_c_order

This builds NumPy first, so the first time it may take a few minutes.

You can also use `spin bench` for benchmarking. See `spin --help` for more command line options.

Note: if the above commands result in `RuntimeError: Cannot parse version 0+untagged.xxxxx`, run `git pull upstream main --tags`.

Additional arguments may be forwarded to `pytest` by passing the extra arguments after a bare `--`. For example, to run a test method with the `--pdb` flag forwarded to the target, run the following:

    $ spin test -t numpy/tests/test_scripts.py::test_f2py -- --pdb

You can also [match test names using python operators](https://docs.pytest.org/en/latest/usage.html#specifying-tests-selecting-tests) by passing the `-k` argument to pytest:

    $ spin test -v -t numpy/_core/tests/test_multiarray.py -- -k "MatMul and not vector"

To run "doctests" - to check that the code examples in the documentation are correct - use the `check-docs` spin command. It relies on the `scipy-doctest` package, which provides several additional features on top of the standard library `doctest` package. Install `scipy-doctest` and run one of:

    $ spin check-docs -v
    $ spin check-docs numpy/linalg
    $ spin check-docs -v -- -k 'det and not slogdet'

Note: remember that all tests of NumPy should pass before committing your changes.

Note: some of the tests in the test suite require a large amount of memory, and are skipped if your system does not have enough.
## Other build options

For more options, including selecting compilers, setting custom compiler flags and controlling parallelism, see [Compiler selection and customizing a build](https://docs.scipy.org/doc/scipy/building/compilers_and_options.html "\(in SciPy v1.14.1\)") (from the SciPy documentation).

## Running tests

Besides using `spin`, there are various ways to run the tests. Inside the interpreter, tests can be run like this:

    >>> np.test()
    >>> np.test('full')   # Also run tests marked as slow
    >>> np.test('full', verbose=2)   # Additionally print test name/file

An example of a successful test run: ``4686 passed, 362 skipped, 9 xfailed, 5 warnings in 213.99 seconds``

Or in a similar way from the command line:

    $ python -c "import numpy as np; np.test()"

Tests can also be run with `pytest numpy`; however, the NumPy-specific plugin will then not be found, which causes strange side effects.

Running individual test files can be useful; it's much faster than running the whole test suite or that of a whole module (example: `np.random.test()`). This can be done with:

    $ python path_to_testfile/test_file.py

That also takes extra arguments, like `--pdb`, which drops you into the Python debugger when a test fails or an exception is raised.

Running tests with [tox](https://tox.readthedocs.io/) is also supported. For example, to build NumPy and run the test suite with Python 3.9, use:

    $ tox -e py39

For more extensive information, see [Testing guidelines](../reference/testing#testing-guidelines).

Note: do not run the tests from the root directory of your numpy git repo without `spin`; that will result in strange test errors.

## Running linting

Lint checks can be performed on newly added lines of Python code.
Install all dependent packages using pip:

    $ python -m pip install -r requirements/linter_requirements.txt

To run lint checks before committing new code, run:

    $ python tools/linter.py

To check all newly added Python code on the current branch against a target branch, run:

    $ python tools/linter.py --branch main

If there are no errors, the script exits with no message. In case of errors, check the error message for details:

    $ python tools/linter.py --branch main
    ./numpy/_core/tests/test_scalarmath.py:34:5: E303 too many blank lines (3)
    1       E303 too many blank lines (3)

It is advisable to run lint checks before pushing commits to a remote branch since the linter runs as part of the CI pipeline.

For more details on style guidelines:

* [Python Style Guide](https://www.python.org/dev/peps/pep-0008/)
* [NEP 45 — C style guide](https://numpy.org/neps/nep-0045-c_style_guide.html#nep45 "\(in NumPy Enhancement Proposals\)")

## Rebuilding & cleaning the workspace

Rebuilding NumPy after making changes to compiled code can be done with the same build command as you used previously - only the changed files will be rebuilt. Doing a full build, which sometimes is necessary, requires cleaning the workspace first. The standard way of doing this is (_note: deletes any uncommitted files!_):

    $ git clean -xdf

When you want to discard all changes and go back to the last commit in the repo, use one of:

    $ git checkout .
    $ git reset --hard

## Debugging

Another frequently asked question is "How do I debug C code inside NumPy?". First, ensure that you have gdb installed on your system with the Python extensions (often the default on Linux). You can see which version of Python is running inside gdb to verify your setup:

    (gdb) python
    >import sys
    >print(sys.version_info)
    >end
    sys.version_info(major=3, minor=7, micro=0, releaselevel='final', serial=0)

Most Python builds do not include debug symbols and are built with compiler optimizations enabled.
For the best debugging experience, using a debug build of Python is encouraged; see [Advanced debugging tools](development_advanced_debugging#advanced-debugging).

For debugging, NumPy also needs to be built in debug mode. You need to use the `debug` build type and disable optimizations to make sure the `-O0` flag is used during object building. Note that NumPy should NOT be installed in your environment before you build with the `spin build` command. To generate source-level debug information within the build process, run:

    $ spin build --clean -- -Dbuildtype=debug -Ddisable-optimization=true

Note: if you are using a conda environment, be aware that conda sets `CFLAGS` and `CXXFLAGS` automatically, and they will include the `-O2` flag by default. You can safely use `unset CFLAGS && unset CXXFLAGS` to avoid them, or provide them at the beginning of the `spin` command: `CFLAGS="-O0 -g" CXXFLAGS="-O0 -g"`. Alternatively, to take control of these variables more permanently, you can create an `env_vars.sh` file in the `/numpy-dev/etc/conda/activate.d` directory. In this file you can export the `CFLAGS` and `CXXFLAGS` variables. For complete instructions, refer to the conda documentation on managing environment variables.

Next you need to write a Python script that invokes the C code whose execution you want to debug. For instance `mytest.py`:

    import numpy as np
    x = np.arange(5)
    np.empty_like(x)

Note that your test file needs to be outside the NumPy clone you have. Now, you can run:

    $ spin gdb /path/to/mytest.py

In case you are using the clang toolchain:

    $ spin lldb /path/to/mytest.py

And then in the debugger:

    (gdb) break array_empty_like
    (gdb) run

lldb counterpart:

    (lldb) breakpoint set --name array_empty_like
    (lldb) run

The execution will now stop at the corresponding C function and you can step through it as usual. A number of useful Python-specific commands are available. For example, to see where in the Python code you are, use `py-list`; to see the Python traceback, use `py-bt`.
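For the conda setup described above, the activation script can be sketched as follows. This is a minimal illustration, assuming an environment named `numpy-dev`; the path is illustrative and depends on where your environments live:

```shell
# Sketch of etc/conda/activate.d/env_vars.sh inside a hypothetical
# "numpy-dev" environment; conda sources this on every activation,
# so the debug-friendly flags persist across shells.
export CFLAGS="-O0 -g"
export CXXFLAGS="-O0 -g"
```

With this in place, a plain `spin build` inside the activated environment picks up `-O0 -g` without having to prefix the command each time.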
For more details, see [DebuggingWithGdb](https://wiki.python.org/moin/DebuggingWithGdb). Here are some commonly used commands:

* `list`: List the specified function or line.
* `next`: Step the program, proceeding through subroutine calls.
* `step`: Step the program until it reaches a different source line.
* `print`: Print the value of an expression.

Rich support for Python debugging requires that the `python-gdb.py` script distributed with Python is installed in a path where gdb can find it. If you installed your Python build from your system package manager, you likely do not need to manually do anything. However, if you built Python from source, you will likely need to create a `.gdbinit` file in your home directory pointing gdb at the location of your Python installation. For example, a version of Python installed via [pyenv](https://github.com/pyenv/pyenv) needs a `.gdbinit` file with the following contents:

    add-auto-load-safe-path ~/.pyenv

Building NumPy with a Python built with debug support (on Linux distributions typically packaged as `python-dbg`) is highly recommended.

## Understanding the code & getting started

The best strategy to better understand the code base is to pick something you want to change and start reading the code to figure out how it works. When in doubt, you can ask questions on the mailing list. It is perfectly okay if your pull requests aren't perfect; the community is always happy to help. As a volunteer project, things do sometimes get dropped and it's totally fine to ping us if something has sat without a response for about two to four weeks.

So go ahead and pick something that annoys or confuses you about NumPy, experiment with the code, hang around for discussions or go through the reference documents to try to fix it. Things will fall in place and soon you'll have a pretty good understanding of the project as a whole. Good luck!
# Development workflow

You already have your own forked copy of the NumPy repository, have configured Git, and have linked the upstream repository as explained in [Linking your repository to the upstream repo](https://scikit-image.org/docs/stable/gitwash/set_up_fork.html#linking-to-upstream "\(in skimage v0.25.0\)"). What is described below is a recommended workflow with Git.

## Basic workflow

In short:

1. Start a new _feature branch_ for each set of edits that you do. See below.
2. Hack away! See below.
3. When finished:
   * _Contributors_: push your feature branch to your own GitHub repo, and create a pull request.
   * _Core developers_: if you want to push changes without further review, see the notes below.

This way of working helps to keep work well organized and the history as clear as possible.

### Making a new feature branch

First, fetch new commits from the `upstream` repository:

    git fetch upstream

Then, create a new branch based on the main branch of the upstream repository:

    git checkout -b my-new-feature upstream/main

### The editing workflow

#### Overview

    # hack hack
    git status # Optional
    git diff # Optional
    git add modified_file
    git commit
    # push the branch to your own Github repo
    git push origin my-new-feature

#### In more detail

1. Make some changes. When you feel that you've made a complete, working set of related changes, move on to the next steps.

2. Optional: Check which files have changed with `git status`. You'll see a listing like this one:

       # On branch my-new-feature
       # Changed but not updated:
       #   (use "git add ..." to update what will be committed)
       #   (use "git checkout -- ..." to discard changes in working directory)
       #
       #  modified:   README
       #
       # Untracked files:
       #   (use "git add ..." to include in what will be committed)
       #
       #  INSTALL
       no changes added to commit (use "git add" and/or "git commit -a")

3. Optional: Compare the changes with the previous version with `git diff`.
This brings up a simple text browser interface that highlights the difference between your files and the previous version.

4. Add any relevant modified or new files using `git add modified_file`. This puts the files into a staging area, which is a queue of files that will be added to your next commit. Only add files that have related, complete changes. Leave files with unfinished changes for later commits.

5. To commit the staged files into the local copy of your repo, do `git commit`. At this point, a text editor will open up to allow you to write a commit message. Read the commit message section to be sure that you are writing a properly formatted and sufficiently detailed commit message. After saving your message and closing the editor, your commit will be saved. For trivial commits, a short commit message can be passed in through the command line using the `-m` flag. For example, `git commit -am "ENH: Some message"`.

   In some cases, you will see this form of the commit command: `git commit -a`. The extra `-a` flag automatically commits all modified files and removes all deleted files. This can save you some typing of numerous `git add` commands; however, it can add unwanted changes to a commit if you're not careful.

6. Push the changes to your fork on GitHub:

       git push origin my-new-feature

Note: assuming you have followed the instructions in these pages, git will create a default link to your GitHub repo called `origin`. You can ensure that the link to origin is permanently set by using the `--set-upstream` option:

    git push --set-upstream origin my-new-feature

From now on, `git` will know that `my-new-feature` is related to the `my-new-feature` branch in your own GitHub repo. Subsequent push calls are then simplified to the following:

    git push

You have to use `--set-upstream` for each new branch that you create.

It may be the case that while you were working on your edits, new commits have been added to `upstream` that affect your work.
In this case, follow the Rebasing on main section of this document to apply those changes to your branch.

#### Writing the commit message

Commit messages should be clear and follow a few basic rules. Example:

    ENH: add functionality X to numpy.<submodule>.

    The first line of the commit message starts with a capitalized acronym
    (options listed below) indicating what type of commit this is. Then a blank
    line, then more text if needed. Lines shouldn't be longer than 72
    characters. If the commit is related to a ticket, indicate that with
    "See #3456", "See ticket 3456", "Closes #3456" or similar.

Describing the motivation for a change, the nature of a bug for bug fixes or some details on what an enhancement does are also good to include in a commit message. Messages should be understandable without looking at the code changes. A commit message like `MAINT: fixed another one` is an example of what not to do; the reader has to go look for context elsewhere.

Standard acronyms to start the commit message with are:

    API: an (incompatible) API change
    BENCH: changes to the benchmark suite
    BLD: change related to building numpy
    BUG: bug fix
    CI: continuous integration
    DEP: deprecate something, or remove a deprecated object
    DEV: development tool or utility
    DOC: documentation
    ENH: enhancement
    MAINT: maintenance commit (refactoring, typos, etc.)
    MNT: alias for MAINT
    NEP: NumPy enhancement proposals
    REL: related to releasing numpy
    REV: revert an earlier commit
    STY: style fix (whitespace, PEP8)
    TST: addition or modification of tests
    TYP: static typing
    WIP: work in progress, do not merge

##### Commands to skip continuous integration

By default a lot of continuous integration (CI) jobs are run for every PR, from running the test suite on different operating systems and hardware platforms to building the docs.
In some cases you already know that CI isn't needed (or not all of it), for example if you work on CI config files, text in the README, or other files that aren't involved in regular build, test or docs sequences. In such cases you may explicitly skip CI by including one or more of these fragments in each commit message of a PR:

* `[skip ci]`: skip all CI

  Only recommended if you are still not ready for the checks to run on your PR (for example, if this is only a draft).

* `[skip actions]`: skip GitHub Actions jobs

  [GitHub Actions](https://docs.github.com/actions) is where most of the CI checks are run, including the linter, benchmarking, running basic tests for most architectures and OSs, and several compiler and CPU optimization settings. [See the configuration files for these checks.](https://github.com/numpy/numpy/tree/main/.github/workflows)

* `[skip azp]`: skip Azure jobs

  [Azure](https://azure.microsoft.com/en-us/products/devops/pipelines) is where all comprehensive tests are run. This is an expensive run, and one you could typically skip if you make documentation-only changes, for example. [See the main configuration file for these checks.](https://github.com/numpy/numpy/blob/main/azure-pipelines.yml)

* `[skip circle]`: skip CircleCI jobs

  [CircleCI](https://circleci.com/) is where we build the documentation and store the generated artifact for preview in each PR. This check will also run all the docstring examples and verify their results. If you don't make documentation changes, but you make changes to a function's API, for example, you may need to run these tests to verify that the doctests are still valid. [See the configuration file for these checks.](https://github.com/numpy/numpy/blob/main/.circleci/config.yml)

* `[skip cirrus]`: skip Cirrus jobs

  [CirrusCI](https://cirrus-ci.org/) mostly triggers Linux aarch64 and macOS arm64 wheel uploads.
[See the configuration file for these checks.](https://github.com/numpy/numpy/blob/main/.cirrus.star)

##### Test building wheels

NumPy currently uses [cibuildwheel](https://cibuildwheel.readthedocs.io/en/stable/) in order to build wheels through continuous integration services. To save resources, the cibuildwheel wheel builders are not run by default on every single PR or commit to main.

If you would like to test that your pull request does not break the wheel builders, you can do so by appending `[wheel build]` to the first line of the commit message of the newest commit in your PR. Please only do so for build-related PRs, because running all wheel builds is slow and expensive.

The wheels built via GitHub Actions (including 64-bit Linux, x86-64 macOS, and 32/64-bit Windows) will be uploaded as artifacts in zip files. You can access them from the Summary page of the "Wheel builder" action. The aarch64 Linux and arm64 macOS wheels built via Cirrus CI are not available as artifacts.

Additionally, the wheels will be uploaded on the following conditions:

* by a weekly cron job, or
* if the GitHub Actions or Cirrus build has been manually triggered, which requires appropriate permissions.

The wheels will also be uploaded if the build was triggered by a tag to the repo that begins with `v`.

### Get the mailing list's opinion

If you plan a new feature or API change, it's wisest to first email the NumPy [mailing list](https://mail.python.org/mailman/listinfo/numpy-discussion) asking for comment. If you haven't heard back in a week, it's OK to ping the list again.

### Asking for your changes to be merged with the main repo

When you feel your work is finished, you can create a pull request (PR). If your changes involve modifications to the API or addition/modification of a function, add a release note to the `doc/release/upcoming_changes/` directory, following the instructions and format in the `doc/release/upcoming_changes/README.rst` file.
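As a hypothetical illustration of the release-note layout described above: fragments are small files named after the PR. The PR number (12345), fragment type (`improvement`) and function name below are invented for illustration; the authoritative naming scheme is in `doc/release/upcoming_changes/README.rst`.

```shell
# Create a hypothetical release-note fragment for PR #12345.
# (In a real checkout the directory already exists.)
mkdir -p doc/release/upcoming_changes
cat > doc/release/upcoming_changes/12345.improvement.rst <<'EOF'
``np.example_function`` now handles zero-length inputs without raising.
EOF
```

The fragment is picked up automatically when the release notes are assembled, so no other file needs editing.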
### Getting your PR reviewed

We review pull requests as soon as we can, typically within a week. If you get no review comments within two weeks, feel free to ask for feedback by adding a comment on your PR (this will notify maintainers). If your PR is large or complicated, asking for input on the numpy-discussion mailing list may also be useful.

### Rebasing on main

This updates your feature branch with changes from the upstream NumPy GitHub repo. If you do not absolutely need to do this, try to avoid doing it, except perhaps when you are finished. The first step will be to update the remote repository with new commits from upstream:

    git fetch upstream

Next, you need to update the feature branch:

    # go to the feature branch
    git checkout my-new-feature
    # make a backup in case you mess up
    git branch tmp my-new-feature
    # rebase on upstream main branch
    git rebase upstream/main

If you have made changes to files that have also changed upstream, this may generate merge conflicts that you need to resolve. See below for help in this case. Finally, remove the backup branch upon a successful rebase:

    git branch -D tmp

Note: rebasing on main is preferred over merging upstream back to your branch. Using `git merge` and `git pull` is discouraged when working on feature branches.

### Recovering from mess-ups

Sometimes, you mess up merges or rebases. Luckily, in Git it is relatively straightforward to recover from such mistakes. If you mess up during a rebase:

    git rebase --abort

If you notice you messed up after the rebase:

    # reset branch back to the saved point
    git reset --hard tmp

If you forgot to make a backup branch:

    # look at the reflog of the branch
    git reflog show my-feature-branch

    8630830 my-feature-branch@{0}: commit: BUG: io: close file handles immediately
    278dd2a my-feature-branch@{1}: rebase finished: refs/heads/my-feature-branch onto 11ee694744f2552d
    26aa21a my-feature-branch@{2}: commit: BUG: lib: make seek_gzip_factory not leak gzip obj
    ...
    # reset the branch to where it was before the botched rebase
    git reset --hard my-feature-branch@{2}

If you didn't actually mess up but there are merge conflicts, you need to resolve those.

## Additional things you might want to do

### Rewriting commit history

Note: do this only for your own feature branches.

There's an embarrassing typo in a commit you made? Or perhaps you made several false starts you would like posterity not to see. This can be done via _interactive rebasing_. Suppose that the commit history looks like this:

    git log --oneline
    eadc391 Fix some remaining bugs
    a815645 Modify it so that it works
    2dec1ac Fix a few bugs + disable
    13d7934 First implementation
    6ad92e5 * masked is now an instance of a new object, MaskedConstant
    29001ed Add pre-nep for a couple of structured_array_extensions.
    ...

and `6ad92e5` is the last commit in the `main` branch. Suppose we want to make the following changes:

* Rewrite the commit message for `13d7934` to something more sensible.
* Combine the commits `2dec1ac`, `a815645`, `eadc391` into a single one.

We do as follows:

    # make a backup of the current state
    git branch tmp HEAD
    # interactive rebase
    git rebase -i 6ad92e5

This will open an editor with the following text in it:

    pick 13d7934 First implementation
    pick 2dec1ac Fix a few bugs + disable
    pick a815645 Modify it so that it works
    pick eadc391 Fix some remaining bugs

    # Rebase 6ad92e5..eadc391 onto 6ad92e5
    #
    # Commands:
    #  p, pick = use commit
    #  r, reword = use commit, but edit the commit message
    #  e, edit = use commit, but stop for amending
    #  s, squash = use commit, but meld into previous commit
    #  f, fixup = like "squash", but discard this commit's log message
    #
    # If you remove a line here THAT COMMIT WILL BE LOST.
    # However, if you remove everything, the rebase will be aborted.
To achieve what we want, we will make the following changes to it:

    r 13d7934 First implementation
    pick 2dec1ac Fix a few bugs + disable
    f a815645 Modify it so that it works
    f eadc391 Fix some remaining bugs

This means that (i) we want to edit the commit message for `13d7934`, and (ii) collapse the last three commits into one. Now we save and quit the editor. Git will then immediately bring up an editor for editing the commit message. After revising it, we get the output:

    [detached HEAD 721fc64] FOO: First implementation
     2 files changed, 199 insertions(+), 66 deletions(-)
    [detached HEAD 0f22701] Fix a few bugs + disable
     1 files changed, 79 insertions(+), 61 deletions(-)
    Successfully rebased and updated refs/heads/my-feature-branch.

and the history now looks like this:

    0f22701 Fix a few bugs + disable
    721fc64 ENH: Sophisticated feature
    6ad92e5 * masked is now an instance of a new object, MaskedConstant

If it went wrong, recovery is again possible as explained above.

### Deleting a branch on GitHub

    git checkout main
    # delete branch locally
    git branch -D my-unwanted-branch
    # delete branch on github
    git push origin --delete my-unwanted-branch

### Several people sharing a single repository

If you want to work on some stuff with other people, where you are all committing into the same repository, or even the same branch, then just share it via GitHub. First fork NumPy into your account, as from [Making your own copy (fork) of scikit-image](https://scikit-image.org/docs/stable/gitwash/forking_hell.html#forking "\(in skimage v0.25.0\)").

Then, go to your forked repository's GitHub page, say `https://github.com/your-user-name/numpy`. Click on the 'Admin' button, and add anyone else to the repo as a collaborator. Now all those people can do:

    git clone git@github.com:your-user-name/numpy.git

Remember that links starting with `git@` use the ssh protocol and are read-write; links starting with `git://` are read-only.
Your collaborators can then commit directly into that repo with the usual:

    git commit -am 'ENH - much better code'
    git push origin my-feature-branch # pushes directly into your repo

### Checkout changes from an existing pull request

If you want to test the changes in a pull request or continue the work in a new pull request, the commits are to be cloned into a local branch in your forked repository. First ensure your upstream points to the main repo, as from [Linking your repository to the upstream repo](https://scikit-image.org/docs/stable/gitwash/set_up_fork.html#linking-to-upstream "\(in skimage v0.25.0\)").

Then, fetch the changes and create a local branch. Assuming `$ID` is the pull request number and `$BRANCHNAME` is the name of the _new local_ branch you wish to create:

    git fetch upstream pull/$ID/head:$BRANCHNAME

Checkout the newly created branch:

    git checkout $BRANCHNAME

You now have the changes in the pull request.

### Exploring your repository

To see a graphical representation of the repository branches and commits:

    gitk --all

To see a linear list of commits for this branch:

    git log

### Backporting

Backporting is the process of copying new features/fixes committed in NumPy's `main` branch back to stable release branches. To do this you make a branch off the branch you are backporting to, cherry-pick the commits you want from `numpy/main`, and then submit a pull request for the branch containing the backport.

1. First, you need to make the branch you will work on. This needs to be based on the older version of NumPy (not main):

       # Make a new branch based on numpy/maintenance/1.8.x,
       # backport-3324 is our new name for the branch.
       git checkout -b backport-3324 upstream/maintenance/1.8.x

2.
Now you need to apply the changes from main to this branch using `git cherry-pick`:

       # Update remote
       git fetch upstream
       # Check the commit log for commits to cherry-pick
       git log upstream/main
       # This pull request included commits aa7a047 to c098283 (inclusive)
       # so you use the .. syntax (for a range of commits), the ^ makes the
       # range inclusive.
       git cherry-pick aa7a047^..c098283
       ...
       # Fix any conflicts, then if needed:
       git cherry-pick --continue

3. You might run into some conflicts cherry-picking here. These are resolved the same way as merge/rebase conflicts, except here you can use `git blame` to see the difference between main and the backported branch to make sure nothing gets screwed up.

4. Push the new branch to your GitHub repository:

       git push -u origin backport-3324

5. Finally, make a pull request using GitHub. Make sure it is against the maintenance branch and not main; GitHub will usually suggest you make the pull request against main.

### Pushing changes to the main repo

_Requires commit rights to the main NumPy repo._

When you have a set of "ready" changes in a feature branch ready for NumPy's `main` or `maintenance` branches, you can push them to `upstream` as follows:

1. First, merge or rebase on the target branch.

   1. If you have only a few unrelated commits, prefer rebasing:

          git fetch upstream
          git rebase upstream/main

      See Rebasing on main.

   2. If all of the commits are related, create a merge commit:

          git fetch upstream
          git merge --no-ff upstream/main

2. Check that what you are going to push looks sensible:

       git log -p upstream/main..
       git log --oneline --graph

3. Push to upstream:

       git push upstream my-feature-branch:main

Note: it's usually a good idea to use the `-n` flag to `git push` to check first that you're about to push the changes you want to the place you want.
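The dry-run check mentioned in the note above can be sketched end-to-end in a throwaway repository. All repo and branch names here are illustrative; `git push -n` (equivalently `--dry-run`) reports what would be pushed without updating the remote:

```shell
# Set up a throwaway bare "upstream" and a working clone next to it.
cd "$(mktemp -d)"
git init -q --bare remote_repo
git init -q work && cd work
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "init"
git branch my-feature-branch
git remote add upstream ../remote_repo
# -n / --dry-run: show which refs would be updated, but push nothing.
git push -n upstream my-feature-branch:main
```

If the reported ref mapping (`my-feature-branch -> main`) is what you intended, rerun the same command without `-n` to actually push.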
# NumPy project governance and decision-making

The purpose of this document is to formalize the governance process used by the NumPy project in both ordinary and extraordinary situations, and to clarify how decisions are made and how the various elements of our community interact, including the relationship between open source collaborative development and work that may be funded by for-profit or non-profit entities.

## Summary

NumPy is a community-owned and community-run project. To the maximum extent possible, decisions about project direction are made by community consensus (but note that "consensus" here has a somewhat technical meaning that might not match everyone's expectations – see below). Some members of the community additionally contribute by serving on the NumPy steering council, where they are responsible for facilitating the establishment of community consensus, for stewarding project resources, and – in extreme cases – for making project decisions if the normal community-based process breaks down.

## The project

The NumPy Project (The Project) is an open source software project affiliated with the 501(c)3 NumFOCUS Foundation. The goal of The Project is to develop open source software for array-based computing in Python, and in particular the `numpy` package, along with related software such as `f2py` and the NumPy Sphinx extensions. The Software developed by The Project is released under the BSD (or similar) open source license, developed openly and hosted on public GitHub repositories under the `numpy` GitHub organization.

The Project is developed by a team of distributed developers, called Contributors. Contributors are individuals who have contributed code, documentation, designs or other work to the Project. Anyone can be a Contributor. Contributors can be affiliated with any legal entity or none.
Contributors participate in the project by submitting, reviewing and discussing GitHub Pull Requests and Issues and participating in open and public Project discussions on GitHub, mailing lists, and other channels. The foundation of Project participation is openness and transparency.

The Project Community consists of all Contributors and Users of the Project. Contributors work on behalf of and are responsible to the larger Project Community and we strive to keep the barrier between Contributors and Users as low as possible.

The Project is formally affiliated with the 501(c)3 NumFOCUS Foundation, which serves as its fiscal sponsor, may hold project trademarks and other intellectual property, helps manage project donations and acts as a parent legal entity. NumFOCUS is the only legal entity that has a formal relationship with the project (see Institutional Partners section below).

## Governance

This section describes the governance and leadership model of The Project. The foundations of Project governance are:

* Openness & Transparency
* Active Contribution
* Institutional Neutrality

### Consensus-based decision making by the community

Normally, all project decisions will be made by consensus of all interested Contributors. The primary goal of this approach is to ensure that the people who are most affected by and involved in any given change can contribute their knowledge in the confidence that their voices will be heard, because thoughtful review from a broad community is the best mechanism we know of for creating high-quality software.

The mechanism we use to accomplish this goal may be unfamiliar for those who are not experienced with the cultural norms around free/open-source software development.
We provide a summary here, and highly recommend that all Contributors additionally read [Chapter 4: Social and Political Infrastructure](https://producingoss.com/en/producingoss.html#social-infrastructure) of Karl Fogel's classic _Producing Open Source Software_, and in particular the section on [Consensus-based Democracy](https://producingoss.com/en/producingoss.html#consensus-democracy), for a more detailed discussion.

In this context, consensus does _not_ require:

* that we wait to solicit everybody's opinion on every change,
* that we ever hold a vote on anything,
* or that everybody is happy or agrees with every decision.

For us, what consensus means is that we entrust _everyone_ with the right to veto any change if they feel it necessary. While this may sound like a recipe for obstruction and pain, this is not what happens. Instead, we find that most people take this responsibility seriously, and only invoke their veto when they judge that a serious problem is being ignored, and that their veto is necessary to protect the project. And in practice, it turns out that such vetoes are almost never formally invoked, because their mere possibility ensures that Contributors are motivated from the start to find some solution that everyone can live with – thus accomplishing our goal of ensuring that all interested perspectives are taken into account.

How do we know when consensus has been achieved? In principle, this is rather difficult, since consensus is defined by the absence of vetoes, which requires us to somehow prove a negative.
In practice, we use a combination of our best judgement (e.g., a simple and uncontroversial bug fix posted on GitHub and reviewed by a core developer is probably fine) and best efforts (e.g., all substantive API changes must be posted to the mailing list in order to give the broader community a chance to catch any problems and suggest improvements; we assume that anyone who cares enough about NumPy to invoke their veto right should be on the mailing list). If no-one bothers to comment on the mailing list after a few days, then it's probably fine. And worst case, if a change is more controversial than expected, or a crucial critique is delayed because someone was on vacation, then it's no big deal: we apologize for misjudging the situation, [back up, and sort things out](https://producingoss.com/en/producingoss.html#version-control-relaxation).

If one does need to invoke a formal veto, then it should consist of:

* an unambiguous statement that a veto is being invoked,
* an explanation of why it is being invoked, and
* a description of what conditions (if any) would convince the vetoer to withdraw their veto.

If all proposals for resolving some issue are vetoed, then the status quo wins by default. In the worst case, if a Contributor is genuinely misusing their veto in an obstructive fashion to the detriment of the project, then they can be ejected from the project by consensus of the Steering Council – see below.

### Steering council

The Project will have a Steering Council that consists of Project Contributors who have produced contributions that are substantial in quality and quantity, and sustained over at least one year. The overall role of the Council is to ensure, with input from the Community, the long-term well-being of the project, both technically and as a community.

During the everyday project activities, council members participate in all discussions, code review and other project activities as peers with all other Contributors and the Community.
In these everyday activities, Council Members do not have any special power or privilege through their membership on the Council. However, it is expected that because of the quality and quantity of their contributions and their expert knowledge of the Project Software and Services, Council Members will provide useful guidance, both technical and in terms of project direction, to potentially less experienced contributors.

The Steering Council and its Members play a special role in certain situations. In particular, the Council may, if necessary:

* Make decisions about the overall scope, vision and direction of the project.
* Make decisions about strategic collaborations with other organizations or individuals.
* Make decisions about specific technical issues, features, bugs and pull requests. They are the primary mechanism of guiding the code review process and merging pull requests.
* Make decisions about the Services that are run by The Project and manage those Services for the benefit of the Project and Community.
* Update policy documents such as this one.
* Make decisions when regular community discussion doesn't produce consensus on an issue in a reasonable time frame.

However, the Council's primary responsibility is to facilitate the ordinary community-based decision making procedure described above. If we ever have to step in and formally override the community for the health of the Project, then we will do so, but we will consider reaching this point to indicate a failure in our leadership.

#### Council decision making

If it becomes necessary for the Steering Council to produce a formal decision, then they will use a form of the [Apache Foundation voting process](https://www.apache.org/foundation/voting.html). This is a formalized version of consensus, in which +1 votes indicate agreement, -1 votes are vetoes (and must be accompanied with a rationale, as above), and one can also vote fractionally (e.g.
-0.5, +0.5) if one wishes to express an opinion without registering a full veto. These numeric votes are also often used informally as a way of getting a general sense of people's feelings on some issue, and should not normally be taken as formal votes. A formal vote only occurs if explicitly declared, and if this does occur then the vote should be held open for long enough to give all interested Council Members a chance to respond – at least one week.

In practice, we anticipate that for most Steering Council decisions (e.g., voting in new members) a more informal process will suffice.

#### Council membership

A list of current Steering Council Members is maintained at the page [About Us](https://numpy.org/about/).

To become eligible to join the Steering Council, an individual must be a Project Contributor who has produced contributions that are substantial in quality and quantity, and sustained over at least one year. Potential Council Members are nominated by existing Council members, and become members following consensus of the existing Council members, and confirmation that the potential Member is interested and willing to serve in that capacity. The Council will be initially formed from the set of existing Core Developers who, as of late 2015, have been significantly active over the last year.

When considering potential Members, the Council will look at candidates with a comprehensive view of their contributions. This will include but is not limited to code, code review, infrastructure work, mailing list and chat participation, community help/building, education and outreach, design work, etc. We are deliberately not setting arbitrary quantitative metrics (like "100 commits in this repo") to avoid encouraging behavior that plays to the metrics rather than the project's overall well-being.
We want to encourage a diverse array of backgrounds, viewpoints and talents in our team, which is why we explicitly do not define code as the sole metric on which council membership will be evaluated.

If a Council member becomes inactive in the project for a period of one year, they will be considered for removal from the Council. Before removal, the inactive Member will be approached to see if they plan on returning to active participation. If not, they will be removed immediately upon a Council vote. If they plan on returning to active participation soon, they will be given a grace period of one year. If they don’t return to active participation within that time period they will be removed by vote of the Council without further grace period. All former Council members can be considered for membership again at any time in the future, like any other Project Contributor. Retired Council members will be listed on the project website, acknowledging the period during which they were active in the Council.

The Council reserves the right to eject current Members, if they are deemed to be actively harmful to the project’s well-being, and attempts at communication and conflict resolution have failed. This requires the consensus of the remaining Members.

#### Conflict of interest

It is expected that Council Members will be employed at a wide range of companies, universities and non-profit organizations. Because of this, it is possible that Members will have conflicts of interest. Such conflicts of interest include, but are not limited to:

* Financial interests, such as investments, employment or contracting work, outside of The Project that may influence their work on The Project.
* Access to proprietary information of their employer that could potentially leak into their work with the Project.

All members of the Council shall disclose to the rest of the Council any conflict of interest they may have.
Members with a conflict of interest in a particular issue may participate in Council discussions on that issue, but must recuse themselves from voting on the issue.

#### Private communications of the Council

To the maximum extent possible, Council discussions and activities will be public and done in collaboration and discussion with the Project Contributors and Community. The Council will have a private mailing list that will be used sparingly and only when a specific matter requires privacy. When private communications and decisions are needed, the Council will do its best to summarize those to the Community after eliding personal/private/sensitive information that should not be posted to the public internet.

#### Subcommittees

The Council can create subcommittees that provide leadership and guidance for specific aspects of the project. Like the Council as a whole, subcommittees should conduct their business in an open and public manner unless privacy is specifically called for. Private subcommittee communications should happen on the main private mailing list of the Council unless specifically called for.

#### NumFOCUS Subcommittee

The Council will maintain one narrowly focused subcommittee to manage its interactions with NumFOCUS.

* The NumFOCUS Subcommittee is comprised of 5 persons who manage project funding that comes through NumFOCUS. It is expected that these funds will be spent in a manner that is consistent with the non-profit mission of NumFOCUS and the direction of the Project as determined by the full Council.
* This Subcommittee shall NOT make decisions about the direction, scope or technical direction of the Project.
* This Subcommittee will have 5 members, 4 of whom will be current Council Members and 1 of whom will be external to the Steering Council. No more than 2 Subcommittee Members can report to one person through employment or contracting work (including the reportee, i.e. the reportee + 1 is the max).
This avoids effective majorities resting on one person.

The current membership of the NumFOCUS Subcommittee is listed at the page [About Us](https://numpy.org/about/).

## Institutional partners and funding

The Steering Council are the primary leadership for the project. No outside institution, individual or legal entity has the ability to own, control, usurp or influence the project other than by participating in the Project as Contributors and Council Members. However, because institutions can be an important funding mechanism for the project, it is important to formally acknowledge institutional participation in the project. These are Institutional Partners.

An Institutional Contributor is any individual Project Contributor who contributes to the project as part of their official duties at an Institutional Partner. Likewise, an Institutional Council Member is any Project Steering Council Member who contributes to the project as part of their official duties at an Institutional Partner.

With these definitions, an Institutional Partner is any recognized legal entity in the United States or elsewhere that employs at least 1 Institutional Contributor or Institutional Council Member. Institutional Partners can be for-profit or non-profit entities.

Institutions become eligible to become an Institutional Partner by employing individuals who actively contribute to The Project as part of their official duties. To state this another way, the only way for a Partner to influence the project is by actively contributing to the open development of the project, on equal terms with any other member of the community of Contributors and Council Members. Merely using Project Software in an institutional context does not allow an entity to become an Institutional Partner. Financial gifts do not enable an entity to become an Institutional Partner. Once an institution becomes eligible for Institutional Partnership, the Steering Council must nominate and approve the Partnership.
If at some point an existing Institutional Partner stops having any contributing employees, then a one year grace period commences. If at the end of this one year period they continue not to have any contributing employees, then their Institutional Partnership will lapse, and resuming it will require going through the normal process for new Partnerships.

An Institutional Partner is free to pursue funding for their work on The Project through any legal means. This could involve a non-profit organization raising money from private foundations and donors or a for-profit company building proprietary products and services that leverage Project Software and Services. Funding acquired by Institutional Partners to work on The Project is called Institutional Funding. However, no funding obtained by an Institutional Partner can override the Steering Council. If a Partner has funding to do NumPy work and the Council decides to not pursue that work as a project, the Partner is free to pursue it on their own. However, in this situation, that part of the Partner’s work will not be under the NumPy umbrella and cannot use the Project trademarks in a way that suggests a formal relationship.

Institutional Partner benefits are:

* Acknowledgement on the NumPy websites, in talks and T-shirts.
* Ability to acknowledge their own funding sources on the NumPy websites, in talks and T-shirts.
* Ability to influence the project through the participation of their Council Member.
* Council Members invited to NumPy Developer Meetings.

A list of current Institutional Partners is maintained at the page [About Us](https://numpy.org/about/).
## Document history

[numpy/numpy](https://github.com/numpy/numpy/commits/main/doc/source/dev/governance/governance.rst)

## Acknowledgements

Substantial portions of this document were adapted from the [Jupyter/IPython project’s governance document](https://github.com/jupyter/governance).

## License

To the extent possible under law, the authors have waived all copyright and related or neighboring rights to the NumPy project governance and decision-making document, as per the [CC-0 public domain dedication / license](https://creativecommons.org/publicdomain/zero/1.0/).

# How to contribute to the NumPy documentation

This guide will help you decide what to contribute and how to submit it to the official NumPy documentation.

## Documentation team meetings

The NumPy community has set a firm goal of improving its documentation. We hold regular documentation meetings on Zoom (dates are announced on the [numpy-discussion mailing list](https://mail.python.org/mailman/listinfo/numpy-discussion)), and everyone is welcome. Reach out if you have questions or need someone to guide you through your first steps – we’re happy to help. Minutes are taken [on hackmd.io](https://hackmd.io/oB_boakvRqKR-_2jRV-Qjg) and stored in the [NumPy Archive repository](https://github.com/numpy/archive).

## What’s needed

The [NumPy Documentation](../index#numpy-docs-mainpage) has the details covered.
API reference documentation is generated directly from [docstrings](https://www.python.org/dev/peps/pep-0257/) in the code when the documentation is [built](howto_build_docs#howto-build-docs). Although we have mostly complete reference documentation for each function and class exposed to users, there is a lack of usage examples for some of them.

What we lack are docs with broader scope – tutorials, how-tos, and explanations. Reporting defects is another way to contribute. We discuss both.

## Contributing fixes

We’re eager to hear about and fix doc defects. But to attack the biggest problems we end up having to defer or overlook some bug reports. Here are the best defects to go after.

Top priority goes to **technical inaccuracies** – a docstring missing a parameter, a faulty description of a function/parameter/method, and so on. Other “structural” defects like broken links also get priority. All these fixes are easy to confirm and put in place. You can submit a [pull request (PR)](https://numpy.org/devdocs/dev/index.html#devindex) with the fix, if you know how to do that; otherwise please [open an issue](https://github.com/numpy/numpy/issues).

**Typos and misspellings** fall on a lower rung; we welcome hearing about them but may not be able to fix them promptly. These too can be handled as pull requests or issues.

Obvious **wording** mistakes (like leaving out a “not”) fall into the typo category, but other rewordings – even for grammar – require a judgment call, which raises the bar. Test the waters by first presenting the fix as an issue.

Some functions and objects, such as `numpy.ndarray.transpose` and `numpy.array`, are defined in C-extension modules and have their docstrings defined separately in [_add_newdocs.py](https://github.com/numpy/numpy/blob/main/numpy/_core/_add_newdocs.py).

## Contributing new pages

Your frustrations using our documents are our best guide to what needs fixing.
If you write a missing doc you join the front line of open source, but it’s a meaningful contribution just to let us know what’s missing. If you want to compose a doc, run your thoughts by the [mailing list](https://mail.python.org/mailman/listinfo/numpy-discussion) for further ideas and feedback. If you want to alert us to a gap, [open an issue](https://github.com/numpy/numpy/issues). See [this issue](https://github.com/numpy/numpy/issues/15760) for an example.

If you’re looking for subjects, our formal roadmap for documentation is a _NumPy Enhancement Proposal (NEP)_, [NEP 44 — Restructuring the NumPy documentation](https://numpy.org/neps/nep-0044-restructuring-numpy-docs.html#nep44). It identifies areas where our docs need help and lists several additions we’d like to see, including Jupyter notebooks.

### Documentation framework

There are formulas for writing useful documents, and four formulas cover nearly everything. There are four formulas because there are four categories of document – `tutorial`, `how-to guide`, `explanation`, and `reference`. The insight that docs divide up this way belongs to Daniele Procida and his [Diátaxis Framework](https://diataxis.fr/). When you begin a document or propose one, have in mind which of these types it will be.

### NumPy tutorials

In addition to the documentation that is part of the NumPy source tree, you can submit content in Jupyter Notebook format to the [NumPy Tutorials](https://numpy.org/numpy-tutorials) page. This set of tutorials and educational materials is meant to provide high-quality resources by the NumPy project, both for self-learning and for teaching classes with. These resources are developed in a separate GitHub repository, [numpy-tutorials](https://github.com/numpy/numpy-tutorials), where you can check out existing notebooks, open issues to suggest new topics or submit your own tutorials as pull requests.
### More on contributing

Don’t worry if English is not your first language, or if you can only come up with a rough draft. Open source is a community effort. Do your best – we’ll help fix issues.

Images and real-life data make text more engaging and powerful, but be sure what you use is appropriately licensed and available. Here again, even a rough idea for artwork can be polished by others.

For now, the only data formats accepted by NumPy are those also used by other Python scientific libraries like pandas, SciPy, or Matplotlib. We’re developing a package to accept more formats; contact us for details.

NumPy documentation is kept in the source code tree. To get your document into the docbase you must download the tree, [build it](howto_build_docs#howto-build-docs), and submit a pull request. If GitHub and pull requests are new to you, check our [Contributor Guide](index#devindex).

Our markup language is reStructuredText (rST), which is more elaborate than Markdown. Sphinx, the tool many Python projects use to build and link project documentation, converts the rST into HTML and other formats. For more on rST, see the [Quick reStructuredText Guide](https://docutils.sourceforge.io/docs/user/rst/quickref.html) or the [reStructuredText Primer](https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html).

## Contributing indirectly

If you run across outside material that would be a useful addition to the NumPy docs, let us know by [opening an issue](https://github.com/numpy/numpy/issues).

You don’t have to contribute here to contribute to NumPy. You’ve contributed if you write a tutorial on your blog, create a YouTube video, or answer questions on Stack Overflow and other sites.

## Documentation style

### User documentation

* In general, we follow the [Google developer documentation style guide](https://developers.google.com/style) for the User Guide.
* NumPy style governs cases where:
  * Google has no guidance, or
  * We prefer not to use the Google style

Our current rules:

* We pluralize _index_ as _indices_ rather than [indexes](https://developers.google.com/style/word-list#letter-i), following the precedent of [`numpy.indices`](../reference/generated/numpy.indices#numpy.indices "numpy.indices").
* For consistency we also pluralize _matrix_ as _matrices_.
* Grammatical issues inadequately addressed by the NumPy or Google rules are decided by the section on “Grammar and Usage” in the most recent edition of the [Chicago Manual of Style](https://en.wikipedia.org/wiki/The_Chicago_Manual_of_Style).
* We welcome being [alerted](https://github.com/numpy/numpy/issues) to cases we should add to the NumPy style rules.

### Docstrings

When using [Sphinx](https://www.sphinx-doc.org/) in combination with the NumPy conventions, you should use the `numpydoc` extension so that your docstrings will be handled correctly. For example, Sphinx will extract the `Parameters` section from your docstring and convert it into a field list. Using `numpydoc` will also avoid the reStructuredText errors produced by plain Sphinx when it encounters NumPy docstring conventions like section headers (e.g. `-------------`) that Sphinx does not expect to find in docstrings. It is available from:

* [numpydoc on PyPI](https://pypi.python.org/pypi/numpydoc)
* [numpydoc on GitHub](https://github.com/numpy/numpydoc/)

Note that for documentation within NumPy, it is not necessary to do `import numpy as np` at the beginning of an example. Please use the `numpydoc` [formatting standard](https://numpydoc.readthedocs.io/en/latest/format.html#format) as shown in their [example](https://numpydoc.readthedocs.io/en/latest/example.html#example).

### Documenting C/C++ code

NumPy uses [Doxygen](https://www.doxygen.nl/index.html) to parse specially-formatted C/C++ comment blocks.
This generates XML files, which are converted by [Breathe](https://breathe.readthedocs.io/en/latest/) into RST, which is used by Sphinx.

**It takes three steps to complete the documentation process**:

#### 1. Writing the comment blocks

Although no commenting style has been fixed yet, the Javadoc style is preferred over the others because of its similarity to the existing non-indexed comment blocks.

Note: please see [“Documenting the code”](https://www.doxygen.nl/manual/docblocks.html).

**This is what Javadoc style looks like**:

    /**
     * This is a simple brief.
     *
     * And the details go here.
     * Multiple lines are welcome.
     *
     * @param num leave a comment for parameter num.
     * @param str leave a comment for the second parameter.
     * @return leave a comment for the returned value.
     */
    int doxy_javadoc_example(int num, const char *str);

**For line comments, you can use a triple forward slash. For example**:

    /**
     * Template to represent limbo numbers.
     *
     * Specializations for integer types that are part of nowhere.
     * It doesn't support any real types.
     *
     * @param Tp Type of the integer. Required to be an integer type.
     * @param N Number of elements.
     */
    template <typename Tp, size_t N>
    class DoxyLimbo {
     public:
        /// Default constructor. Initialize nothing.
        DoxyLimbo();
        /// Set Default behavior for copy the limbo.
        DoxyLimbo(const DoxyLimbo &l);
        /// Returns the raw data for the limbo.
        const Tp *data();
     protected:
        Tp p_data[N]; ///< Example for inline comment.
    };

##### Common Doxygen Tags

Note: for more tags/commands, please take a look at the [Doxygen commands reference](https://www.doxygen.nl/manual/commands.html).

`@brief` Starts a paragraph that serves as a brief description.
By default the first sentence of the documentation block is automatically treated as a brief description, since the option [JAVADOC_AUTOBRIEF](https://www.doxygen.nl/manual/config.html#cfg_javadoc_autobrief) is enabled in the Doxygen configuration.

`@details` Just like `@brief` starts a brief description, `@details` starts the detailed description. You can also start a new paragraph (blank line), in which case the `@details` command is not needed.

`@param` Starts a parameter description for a function parameter with name `<name>`, followed by a description of the parameter. The existence of the parameter is checked and a warning is given if the documentation of this (or any other) parameter is missing or not present in the function declaration or definition.

`@return` Starts a return value description for a function. Multiple adjacent `@return` commands will be joined into a single paragraph. The `@return` description ends when a blank line or some other sectioning command is encountered.

`@code/@endcode` Starts/ends a block of code. A code block is treated differently from ordinary text. It is interpreted as source code.

`@rst/@endrst` Starts/ends a block of reST markup.

##### Example

**Take a look at the following example**:

    /**
     * A comment block contains reST markup.
     * @rst
     * .. note::
     *
     *   Thanks to Breathe_, we were able to bring it to Doxygen_
     *
     * Some code example::
     *
     *   int example(int x) {
     *       return x * 2;
     *   }
     * @endrst
     */
    void doxy_reST_example(void);

#### 2. Feeding Doxygen

Not all header files are collected automatically. You have to add the desired C/C++ header paths within the sub-config files of Doxygen.

Sub-config files have the unique name `.doxyfile`, which you can usually find near directories that contain documented headers.
You need to create a new config file if there isn’t one located in a path close (2 levels deep) to the headers you want to add.

Sub-config files can accept any of the [Doxygen](https://www.doxygen.nl/index.html) [configuration options](https://www.doxygen.nl/manual/config.html), but do not override or re-initialize any configuration option; rather, only use the concatenation operator “+=”. For example:

    # to specify certain headers
    INPUT += @CUR_DIR/header1.h \
             @CUR_DIR/header2.h
    # to add all headers in certain path
    INPUT += @CUR_DIR/to/headers
    # to define certain macros
    PREDEFINED += C_MACRO(X)=X
    # to enable certain branches
    PREDEFINED += NPY_HAVE_FEATURE \
                  NPY_HAVE_FEATURE2

Note: `@CUR_DIR` is a template constant that returns the current directory path of the sub-config file.

#### 3. Inclusion directives

[Breathe](https://breathe.readthedocs.io/en/latest/) provides a wide range of custom directives to allow converting the documents generated by [Doxygen](https://www.doxygen.nl/index.html) into reST files.

Note: for more information, please check out “[Directives & Config Variables](https://breathe.readthedocs.io/en/latest/directives.html)”.

##### Common directives

`doxygenfunction` This directive generates the appropriate output for a single function. The function name is required to be unique in the project.

    .. doxygenfunction:: <function name>
        :outline:
        :no-link:

Check out the [example](https://breathe.readthedocs.io/en/latest/function.html#function-example) to see it in action.

`doxygenclass` This directive generates the appropriate output for a single class. It takes the standard project, path, outline and no-link options and additionally the members, protected-members, private-members, undoc-members, membergroups and members-only options:

    .. doxygenclass:: <class name>
        :members: [...]
        :protected-members:
        :private-members:
        :undoc-members:
        :membergroups: ...
        :members-only:
        :outline:
        :no-link:

Check out the [doxygenclass documentation](https://breathe.readthedocs.io/en/latest/class.html#class-example) for more details and to see it in action.

`doxygennamespace` This directive generates the appropriate output for the contents of a namespace. It takes the standard project, path, outline and no-link options and additionally the content-only, members, protected-members, private-members and undoc-members options. To reference a nested namespace, the full namespaced path must be provided, e.g. foo::bar for the bar namespace inside the foo namespace.

    .. doxygennamespace:: <namespace>
        :content-only:
        :outline:
        :members:
        :protected-members:
        :private-members:
        :undoc-members:
        :no-link:

Check out the [doxygennamespace documentation](https://breathe.readthedocs.io/en/latest/namespace.html#namespace-example) for more details and to see it in action.

`doxygengroup` This directive generates the appropriate output for the contents of a doxygen group. A doxygen group can be declared with specific doxygen markup in the source comments as covered in the doxygen [grouping documentation](https://www.doxygen.nl/manual/grouping.html). It takes the standard project, path, outline and no-link options and additionally the content-only, members, protected-members, private-members and undoc-members options.

    .. doxygengroup:: <group name>
        :content-only:
        :outline:
        :members:
        :protected-members:
        :private-members:
        :undoc-members:
        :no-link:
        :inner:

Check out the [doxygengroup documentation](https://breathe.readthedocs.io/en/latest/group.html#group-example) for more details and to see it in action.

### Legacy directive

If a function, module or API is in _legacy_ mode, meaning that it is kept around for backwards compatibility reasons, but is not recommended for use in new code, you can use the `.. legacy::` directive.
By default, if used with no arguments, the legacy directive will generate the following output:

Legacy

This submodule is considered legacy and will no longer receive updates. This could also mean it will be removed in future NumPy versions.

We strongly recommend that you also add a custom message, such as a new API to replace the old one:

    .. legacy::

        For more details, see :ref:`distutils-status-migration`.

This message will be appended to the default message and will create the following output:

Legacy

This submodule is considered legacy and will no longer receive updates. This could also mean it will be removed in future NumPy versions. For more details, see [Status of numpy.distutils and migration advice](../reference/distutils_status_migration#distutils-status-migration).

Finally, if you want to mention a function, method (or any custom object) instead of a _submodule_, you can use an optional argument:

    .. legacy:: function

This will create the following output:

Legacy

This function is considered legacy and will no longer receive updates. This could also mean it will be removed in future NumPy versions.

## Documentation reading

* The leading organization of technical writers, [Write the Docs](https://www.writethedocs.org/), holds conferences, hosts learning resources, and runs a Slack channel.
* “Every engineer is also a writer,” says Google’s [collection of technical writing resources](https://developers.google.com/tech-writing), which includes free online courses for developers in planning and writing documents.
* [Software Carpentry’s](https://software-carpentry.org/lessons) mission is teaching software to researchers. In addition to hosting the curriculum, the website explains how to present ideas effectively.

# Building the NumPy API and reference docs

If you only want to get the documentation, note that pre-built versions can be found in several different formats.
## Development environments

Before proceeding further it should be noted that the documentation is built with the `make` tool, which is not natively available on Windows. macOS or Linux users can jump to Prerequisites. It is recommended for Windows users to set up their development environment on GitHub Codespaces (see [Recommended development setup](development_environment#recommended-development-setup)) or [Windows Subsystem for Linux (WSL)](https://learn.microsoft.com/en-us/windows/wsl/install). WSL is a good option for a persistent local set-up.

## Prerequisites

Building the NumPy documentation and API reference requires the following:

### NumPy

Since large parts of the main documentation are obtained from NumPy via `import numpy` and examining the docstrings, you will need to first [build](development_environment#development-environment) and install it so that the correct version is imported. NumPy has to be re-built and re-installed every time you fetch the latest version of the repository, before generating the documentation. This ensures that the NumPy version and the git repository version are in sync.

Note that you can e.g. install NumPy to a temporary location and set the PYTHONPATH environment variable appropriately. Alternatively, if using Python virtual environments (via e.g. `conda`, `virtualenv` or the `venv` module), installing NumPy into a new virtual environment is recommended.
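A quick way to confirm that `import numpy` resolves to your freshly built copy rather than a stale install is to check the module's origin before building the docs. This snippet is illustrative only and not part of the official instructions:

```python
import importlib.util

# Check which NumPy "import numpy" will pick up. If the path points at an
# old site-packages install rather than your fresh build, re-install first.
spec = importlib.util.find_spec("numpy")
if spec is None:
    print("numpy is not importable - build and install it first")
else:
    import numpy
    print(numpy.__version__, "from", numpy.__file__)
```

For a source build, the version string typically carries a development suffix, which makes a stale release install easy to spot.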
### Dependencies

All of the necessary dependencies for building the NumPy docs except for [Doxygen](https://www.doxygen.nl/index.html) can be installed with:

    pip install -r requirements/doc_requirements.txt

Note: it may be necessary to install development versions of the doc dependencies to build the docs locally:

    pip install --pre --force-reinstall --extra-index-url \
        https://pypi.anaconda.org/scientific-python-nightly-wheels/simple \
        -r requirements/doc_requirements.txt

We currently use [Sphinx](https://www.sphinx-doc.org/) along with [Doxygen](https://www.doxygen.nl/index.html) for generating the API and reference documentation for NumPy. In addition, building the documentation requires the Sphinx extension `plot_directive`, which is shipped with [Matplotlib](https://matplotlib.org/stable/index.html). We also use [numpydoc](https://numpydoc.readthedocs.io/en/latest/index.html) to render docstrings in the generated API documentation. [SciPy](https://docs.scipy.org/doc/scipy/index.html) is installed since some parts of the documentation require SciPy functions.

For installing [Doxygen](https://www.doxygen.nl/index.html), please check the official [download](https://www.doxygen.nl/download.html#srcbin) and [installation](https://www.doxygen.nl/manual/install.html) pages, or if you are using Linux you can install it through your distribution package manager.

Note: try to install a version of [Doxygen](https://www.doxygen.nl/index.html) newer than 1.8.10; otherwise you may get some warnings during the build.

### Submodules

If you obtained NumPy via git, also get the git submodules that contain additional parts required for building the documentation:

    git submodule update --init

## Instructions

Now you are ready to generate the docs:

    spin docs

This will build NumPy from source if you haven’t already, and run Sphinx to build the `html` docs.
If all goes well, this will generate a `build/html` subdirectory in the `/doc` directory, containing the built documentation.

The documentation for NumPy distributed in html and pdf format is also built with `make dist`. See [HOWTO RELEASE](https://github.com/numpy/numpy/blob/main/doc/HOWTO_RELEASE.rst) for details on how to update it.

# Contributing to NumPy

Not a coder? Not a problem! NumPy is multi-faceted, and we can use a lot of help. These are all activities we’d like to get help with (they’re all important, so we list them in alphabetical order):

* Code maintenance and development
* Community coordination
* DevOps
* Developing educational content & narrative documentation
* Fundraising
* Marketing
* Project management
* Translating content
* Website design and development
* Writing technical documentation

We understand that everyone has a different level of experience, and NumPy is a pretty well-established project, so it’s hard to make assumptions about an ideal “first-time contributor”. That’s why we don’t mark issues with the “good-first-issue” label. Instead, you’ll find [issues labeled “Sprintable”](https://github.com/numpy/numpy/labels/sprintable). These issues can either be:

* **Easily fixed** when you have guidance from an experienced contributor (perfect for working in a sprint).
* **A learning opportunity** for those ready to dive deeper, even if you’re not in a sprint.

Additionally, depending on your prior experience, some “Sprintable” issues might be easy, while others could be more challenging for you.

The rest of this document discusses working on the NumPy code base and documentation. We’re in the process of updating our descriptions of other activities and roles. If you are interested in these other activities, please contact us! You can do this via the [numpy-discussion mailing list](https://mail.python.org/mailman/listinfo/numpy-discussion), or on [GitHub](https://github.com/numpy/numpy) (open an issue or comment on a relevant issue).
These are our preferred communication channels (open source is open by nature!), however if you prefer to discuss in a more private space first, you can do so on Slack (see [numpy.org/contribute](https://numpy.org/contribute/) for details).

## Development process - summary

Here’s the short summary, complete TOC links are below:

1. If you are a first-time contributor:
   * Go to [numpy/numpy](https://github.com/numpy/numpy) and click the “fork” button to create your own copy of the project.
   * Clone the project to your local computer: `git clone --recurse-submodules https://github.com/your-username/numpy.git`
   * Change the directory: `cd numpy`
   * Add the upstream repository: `git remote add upstream https://github.com/numpy/numpy.git`
   * Now, `git remote -v` will show two remote repositories named:
     * `upstream`, which refers to the `numpy` repository
     * `origin`, which refers to your personal fork
   * Pull the latest changes from upstream, including tags: `git checkout main` then `git pull upstream main --tags`
   * Initialize numpy’s submodules: `git submodule update --init`
2. Develop your contribution:
   * Create a branch for the feature you want to work on. Since the branch name will appear in the merge message, use a sensible name such as ‘linspace-speedups’: `git checkout -b linspace-speedups`
   * Commit locally as you progress (`git add` and `git commit`). Use a [properly formatted](development_workflow#writing-the-commit-message) commit message, write tests that fail before your change and pass afterward, and run all the [tests locally](development_environment#development-environment). Be sure to document any changed behavior in docstrings, keeping to the NumPy docstring [standard](howto-docs#howto-document).
3. To submit your contribution:
   * Push your changes back to your fork on GitHub: `git push origin linspace-speedups`
   * Go to GitHub. The new branch will show up with a green Pull Request button. Make sure the title and message are clear, concise, and self-explanatory.
Then click the button to submit it.

   * If your commit introduces a new feature or changes functionality, post on the [mailing list](https://mail.python.org/mailman/listinfo/numpy-discussion) to explain your changes. For bug fixes, documentation updates, etc., this is generally not necessary, though if you do not get any reaction, do feel free to ask for review.

4. Review process:

   * Reviewers (the other developers and interested community members) will write inline and/or general comments on your Pull Request (PR) to help you improve its implementation, documentation and style. Every single developer working on the project has their code reviewed, and we’ve come to see it as friendly conversation from which we all learn and the overall code quality benefits. Therefore, please don’t let the review discourage you from contributing: its only aim is to improve the quality of the project, not to criticize (we are, after all, very grateful for the time you’re donating!). See our [Reviewer Guidelines](reviewer_guidelines#reviewer-guidelines) for more information.
   * To update your PR, make your changes on your local repository, commit, **run tests, and only if they succeed** push to your fork. As soon as those changes are pushed up (to the same branch as before) the PR will update automatically. If you have no idea how to fix the test failures, you may push your changes anyway and ask for help in a PR comment.
   * Various continuous integration (CI) services are triggered after each PR update to build the code, run unit tests, measure code coverage and check the coding style of your branch. The CI tests must pass before your PR can be merged. If CI fails, you can find out why by clicking on the “failed” icon (red cross) and inspecting the build and test log. To avoid overuse and waste of this resource, [test your work](development_environment#recommended-development-setup) locally before committing.
   * A PR must be **approved** by at least one core team member before merging.
Approval means the core team member has carefully reviewed the changes, and the PR is ready for merging.

5. Document changes

   Beyond changes to a function’s docstring and possible description in the general documentation, if your change introduces any user-facing modifications they may need to be mentioned in the release notes. To add your change to the release notes, you need to create a short file with a summary and place it in `doc/release/upcoming_changes`. The file `doc/release/upcoming_changes/README.rst` details the format and filename conventions.

   If your change introduces a deprecation, make sure to discuss this first on GitHub or the mailing list. If agreement on the deprecation is reached, follow the [NEP 23 deprecation policy](https://numpy.org/neps/nep-0023-backwards-compatibility.html#nep23 "\(in NumPy Enhancement Proposals\)") to add the deprecation.

6. Cross referencing issues

   If the PR relates to any issues, you can add the text `xref gh-xxxx`, where `xxxx` is the number of the issue, to GitHub comments. Likewise, if the PR solves an issue, replace the `xref` with `closes`, `fixes` or any of the other flavors [GitHub accepts](https://help.github.com/en/articles/closing-issues-using-keywords). In the source code, be sure to preface any issue or PR reference with `gh-xxxx`.

For a more detailed discussion, read on and follow the links at the bottom of this page.

### Guidelines

* All code should have tests (see test coverage below for more details).
* All code should be [documented](https://numpydoc.readthedocs.io/en/latest/format.html#docstring-standard).
* No changes are ever committed without review and approval by a core team member. Please ask politely on the PR or on the [mailing list](https://mail.python.org/mailman/listinfo/numpy-discussion) if you get no response to your pull request within a week.
### Stylistic guidelines * Set up your editor to follow [PEP 8](https://www.python.org/dev/peps/pep-0008/) (remove trailing white space, no tabs, etc.). Check code with pyflakes / flake8. * Use NumPy data types instead of strings (`np.uint8` instead of `"uint8"`). * Use the following import conventions: import numpy as np * For C code, see [NEP 45](https://numpy.org/neps/nep-0045-c_style_guide.html#nep45 "\(in NumPy Enhancement Proposals\)"). ### Test coverage Pull requests (PRs) that modify code should either have new tests, or modify existing tests to fail before the PR and pass afterwards. You should [run the tests](development_environment#development-environment) before pushing a PR. Running NumPy’s test suite locally requires some additional packages, such as `pytest` and `hypothesis`. The additional testing dependencies are listed in `requirements/test_requirements.txt` in the top-level directory, and can conveniently be installed with: $ python -m pip install -r requirements/test_requirements.txt Tests for a module should ideally cover all code in that module, i.e., statement coverage should be at 100%. To measure the test coverage, run: $ spin test --coverage This will create a report in `html` format at `build/coverage`, which can be viewed with your browser, e.g.: $ firefox build/coverage/index.html ### Building docs To build the HTML documentation, use: spin docs You can also run `make` from the `doc` directory. `make help` lists all targets. To get the appropriate dependencies and other requirements, see [Building the NumPy API and reference docs](howto_build_docs#howto-build-docs). 
## Development process - details

The rest of the story:

* [Setting up and using your development environment](development_environment)
  * [Recommended development setup](development_environment#recommended-development-setup)
  * [Using virtual environments](development_environment#using-virtual-environments)
  * [Building from source](development_environment#building-from-source)
  * [Testing builds](development_environment#testing-builds)
  * [Other build options](development_environment#other-build-options)
  * [Running tests](development_environment#running-tests)
  * [Running linting](development_environment#running-linting)
  * [Rebuilding & cleaning the workspace](development_environment#rebuilding-cleaning-the-workspace)
  * [Debugging](development_environment#debugging)
  * [Understanding the code & getting started](development_environment#understanding-the-code-getting-started)
* [Building the NumPy API and reference docs](howto_build_docs)
  * [Development environments](howto_build_docs#development-environments)
  * [Prerequisites](howto_build_docs#prerequisites)
  * [Instructions](howto_build_docs#instructions)
* [Development workflow](development_workflow)
  * [Basic workflow](development_workflow#basic-workflow)
  * [Additional things you might want to do](development_workflow#additional-things-you-might-want-to-do)
* [Advanced debugging tools](development_advanced_debugging)
  * [Finding C errors with additional tooling](development_advanced_debugging#finding-c-errors-with-additional-tooling)
* [Reviewer guidelines](reviewer_guidelines)
  * [Who can be a reviewer?](reviewer_guidelines#who-can-be-a-reviewer)
  * [Communication guidelines](reviewer_guidelines#communication-guidelines)
  * [Reviewer checklist](reviewer_guidelines#reviewer-checklist)
  * [Standard replies for reviewing](reviewer_guidelines#standard-replies-for-reviewing)
* [NumPy benchmarks](../benchmarking)
  * [Usage](../benchmarking#usage)
  * [Benchmarking versions](../benchmarking#benchmarking-versions)
  * [Writing benchmarks](../benchmarking#writing-benchmarks)
* [NumPy C style guide](https://numpy.org/neps/nep-0045-c_style_guide.html)
* [For downstream package authors](depending_on_numpy)
  * [Understanding NumPy’s versioning and API/ABI stability](depending_on_numpy#understanding-numpy-s-versioning-and-api-abi-stability)
  * [Testing against the NumPy main branch or pre-releases](depending_on_numpy#testing-against-the-numpy-main-branch-or-pre-releases)
  * [Adding a dependency on NumPy](depending_on_numpy#adding-a-dependency-on-numpy)
* [Releasing a version](releasing)
  * [How to prepare a release](releasing#how-to-prepare-a-release)
  * [Step-by-step directions](releasing#step-by-step-directions)
  * [Branch walkthrough](releasing#branch-walkthrough)
* [NumPy governance](governance/index)
  * [NumPy project governance and decision-making](governance/governance)
* [How to contribute to the NumPy documentation](howto-docs)
  * [Documentation team meetings](howto-docs#documentation-team-meetings)
  * [What’s needed](howto-docs#what-s-needed)
  * [Contributing fixes](howto-docs#contributing-fixes)
  * [Contributing new pages](howto-docs#contributing-new-pages)
  * [Contributing indirectly](howto-docs#contributing-indirectly)
  * [Documentation style](howto-docs#documentation-style)
  * [Documentation reading](howto-docs#documentation-reading)

The NumPy-specific workflow is in [numpy-development-workflow](development_workflow#development-workflow).

# NumPy C code explanations

Fanaticism consists of redoubling your efforts when you have forgotten your aim. — _George Santayana_

An authority is a person who can tell you more about something than you really care to know. — _Unknown_

This page attempts to explain the logic behind some of the new pieces of code. The purpose behind these explanations is to enable somebody to understand the ideas behind the implementation somewhat more easily than just staring at the code.
Perhaps in this way, the algorithms can be improved on, borrowed from, and/or optimized by more people.

## Memory model

One fundamental aspect of the [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") is that an array is seen as a “chunk” of memory starting at some location. The interpretation of this memory depends on the [stride](../glossary#term-stride) information. For each dimension in an \\(N\\)-dimensional array, an integer ([stride](../glossary#term-stride)) dictates how many bytes must be skipped to get to the next element in that dimension. Unless you have a single-segment array, this [stride](../glossary#term-stride) information must be consulted when traversing through an array. It is not difficult to write code that accepts strides; you just have to use `char*` pointers because strides are in units of bytes. Keep in mind also that strides do not have to be unit-multiples of the element size. Also, remember that if the number of dimensions of the array is 0 (sometimes called a `rank-0` array), then the [strides](../glossary#term-stride) and [dimensions](../glossary#term-dimension) variables are `NULL`.

Besides the structural information contained in the strides and dimensions members of the [`PyArrayObject`](../reference/c-api/types-and-structures#c.PyArrayObject "PyArrayObject"), the flags contain important information about how the data may be accessed. In particular, the [`NPY_ARRAY_ALIGNED`](../reference/c-api/array#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") flag is set when the memory is on a suitable boundary according to the datatype. Even if you have a [contiguous](../glossary#term-contiguous) chunk of memory, you cannot just assume it is safe to dereference a datatype-specific pointer to an element. Dereferencing is a safe operation only if the [`NPY_ARRAY_ALIGNED`](../reference/c-api/array#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") flag is set.
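The stride arithmetic and the alignment flag are both observable from Python; a small sketch (the C code does the same byte-offset computation with `char*` pointers, and unaligned access is discussed next):

```python
import numpy as np

# A Python-level illustration of the stride arithmetic described above.
a = np.arange(12, dtype=np.int64).reshape(3, 4)

# Strides are in bytes: stepping one row skips 4 elements * 8 bytes = 32,
# stepping one column skips 8 bytes.
print(a.strides)                       # (32, 8)

# Element a[i, j] lives at byte offset i*strides[0] + j*strides[1]:
i, j = 2, 1
offset = i * a.strides[0] + j * a.strides[1]
assert offset == 72

# For this C-contiguous array, the byte offset maps back to a flat index:
assert a.flat[offset // a.itemsize] == a[i, j]

# The ALIGNED and WRITEABLE flags summarize how the memory may be accessed:
print(a.flags['ALIGNED'], a.flags['WRITEABLE'])   # True True
```
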
On some platforms it will work but on others, like Solaris, it will cause a bus error. The [`NPY_ARRAY_WRITEABLE`](../reference/c-api/array#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE") flag should also be checked if you plan on writing to the memory area of the array. It is also possible to obtain a pointer to an unwritable memory area. Sometimes, writing to the memory area when the [`NPY_ARRAY_WRITEABLE`](../reference/c-api/array#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE") flag is not set will just be rude. Other times it can cause program crashes (_e.g._ a data-area that is a read-only memory-mapped file).

## Data-type encapsulation

See also [Data type objects (dtype)](../reference/arrays.dtypes#arrays-dtypes)

The [datatype](../reference/arrays.dtypes#arrays-dtypes) is an important abstraction of the [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"). Operations will look to the datatype to provide the key functionality that is needed to operate on the array. This functionality is provided in the list of function pointers pointed to by the `f` member of the [`PyArray_Descr`](../reference/c-api/types-and-structures#c.PyArray_Descr "PyArray_Descr") structure. In this way, the number of datatypes can be extended simply by providing a [`PyArray_Descr`](../reference/c-api/types-and-structures#c.PyArray_Descr "PyArray_Descr") structure with suitable function pointers in the `f` member. For built-in types, there are some optimizations that bypass this mechanism, but the point of the datatype abstraction is to allow new datatypes to be added.

One of the built-in datatypes, the [`void`](../reference/arrays.scalars#numpy.void "numpy.void") datatype, allows for arbitrary [structured types](../glossary#term-structured-data-type) containing 1 or more fields as elements of the array. A [field](../glossary#term-field) is simply another datatype object along with an offset into the current structured type.
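This (datatype, offset) layout can be inspected from Python; with the default packed layout, the second field in this sketch even lands at a misaligned offset (the field names here are arbitrary):

```python
import numpy as np

# Each field of a structured dtype is a (datatype, byte-offset) pair,
# exactly as described above.
dt = np.dtype([('x', np.int32), ('y', np.float64)])
print(dt.fields['x'])     # (dtype('int32'), 0)
print(dt.fields['y'])     # (dtype('float64'), 4) -- misaligned for float64

a = np.zeros(3, dtype=dt)
a['y'] = 1.5              # field access uses the stored offset internally
assert dt.itemsize == 12  # packed layout: 4 + 8 bytes, no padding
```
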
In order to support arbitrarily nested fields, several recursive implementations of datatype access are implemented for the void type. A common idiom is to cycle through the elements of the dictionary and perform a specific operation based on the datatype object stored at the given offset. These offsets can be arbitrary numbers. Therefore, the possibility of encountering misaligned data must be recognized and taken into account if necessary. ## N-D iterators See also [Iterating over arrays](../reference/arrays.nditer#arrays-nditer) A very common operation in much of NumPy code is the need to iterate over all the elements of a general, strided, N-dimensional array. This operation of a general-purpose N-dimensional loop is abstracted in the notion of an iterator object. To write an N-dimensional loop, you only have to create an iterator object from an ndarray, work with the [`dataptr`](../reference/c-api/types- and-structures#c.PyArrayIterObject.dataptr "PyArrayIterObject.dataptr") member of the iterator object structure and call the macro [`PyArray_ITER_NEXT`](../reference/c-api/array#c.PyArray_ITER_NEXT "PyArray_ITER_NEXT") on the iterator object to move to the next element. The `next` element is always in C-contiguous order. The macro works by first special-casing the C-contiguous, 1-D, and 2-D cases which work very simply. For the general case, the iteration works by keeping track of a list of coordinate counters in the iterator object. At each iteration, the last coordinate counter is increased (starting from 0). If this counter is smaller than one less than the size of the array in that dimension (a pre-computed and stored value), then the counter is increased and the [`dataptr`](../reference/c-api/types-and- structures#c.PyArrayIterObject.dataptr "PyArrayIterObject.dataptr") member is increased by the strides in that dimension and the macro ends. 
If the end of a dimension is reached, the counter for the last dimension is reset to zero and the [`dataptr`](../reference/c-api/types-and- structures#c.PyArrayIterObject.dataptr "PyArrayIterObject.dataptr") is moved back to the beginning of that dimension by subtracting the strides value times one less than the number of elements in that dimension (this is also pre- computed and stored in the [`backstrides`](../reference/c-api/types-and- structures#c.PyArrayIterObject.backstrides "PyArrayIterObject.backstrides") member of the iterator object). In this case, the macro does not end, but a local dimension counter is decremented so that the next-to-last dimension replaces the role that the last dimension played and the previously-described tests are executed again on the next-to-last dimension. In this way, the [`dataptr`](../reference/c-api/types-and- structures#c.PyArrayIterObject.dataptr "PyArrayIterObject.dataptr") is adjusted appropriately for arbitrary striding. The [`coordinates`](../reference/c-api/types-and- structures#c.PyArrayIterObject.coordinates "PyArrayIterObject.coordinates") member of the [`PyArrayIterObject`](../reference/c-api/types-and- structures#c.PyArrayIterObject "PyArrayIterObject") structure maintains the current N-d counter unless the underlying array is C-contiguous in which case the coordinate counting is bypassed. The [`index`](../reference/c-api/types- and-structures#c.PyArrayIterObject.index "PyArrayIterObject.index") member of the [`PyArrayIterObject`](../reference/c-api/types-and- structures#c.PyArrayIterObject "PyArrayIterObject") keeps track of the current flat index of the iterator. It is updated by the [`PyArray_ITER_NEXT`](../reference/c-api/array#c.PyArray_ITER_NEXT "PyArray_ITER_NEXT") macro. ## Broadcasting See also [Broadcasting](../user/basics.broadcasting#basics-broadcasting) In Numeric, the ancestor of NumPy, broadcasting was implemented in several lines of code buried deep in `ufuncobject.c`. 
In NumPy, the notion of broadcasting has been abstracted so that it can be performed in multiple places. Broadcasting is handled by the function [`PyArray_Broadcast`](../reference/c-api/array#c.PyArray_Broadcast "PyArray_Broadcast"). This function requires a [`PyArrayMultiIterObject`](../reference/c-api/types-and-structures#c.PyArrayMultiIterObject "PyArrayMultiIterObject") (or something that is a binary equivalent) to be passed in. The [`PyArrayMultiIterObject`](../reference/c-api/types-and-structures#c.PyArrayMultiIterObject "PyArrayMultiIterObject") keeps track of the broadcast number of dimensions and size in each dimension along with the total size of the broadcast result. It also keeps track of the number of arrays being broadcast and a pointer to an iterator for each of the arrays being broadcast.

The [`PyArray_Broadcast`](../reference/c-api/array#c.PyArray_Broadcast "PyArray_Broadcast") function takes the iterators that have already been defined and uses them to determine the broadcast shape in each dimension (to create the iterators at the same time that broadcasting occurs, use the [`PyArray_MultiIterNew`](../reference/c-api/array#c.PyArray_MultiIterNew "PyArray_MultiIterNew") function). Then, the iterators are adjusted so that each iterator thinks it is iterating over an array with the broadcast size. This is done by adjusting the iterator’s number of dimensions and the [shape](../glossary#term-shape) in each dimension. This works because the iterator strides are also adjusted. Broadcasting only adjusts (or adds) length-1 dimensions. For these dimensions, the strides variable is simply set to 0 so that the data-pointer for the iterator over that array doesn’t move as the broadcasting operation operates over the extended dimension.

Broadcasting was always implemented in Numeric using 0-valued strides for the extended dimensions. It is done in exactly the same way in NumPy.
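These zero-valued strides can be observed from Python with `np.broadcast_to`, which creates such a view directly (an illustration of the mechanism, not the C internals):

```python
import numpy as np

# Broadcasting a 1-D array to 2-D sets the stride of the extended
# dimension to 0, so the data pointer never moves along that axis
# and no data is copied.
row = np.arange(4, dtype=np.float64)     # shape (4,), strides (8,)
b = np.broadcast_to(row, (3, 4))         # a view over the same 4 elements

print(b.strides)                         # (0, 8)
assert b.base is not None                # no copy was made
assert not b.flags['WRITEABLE']          # writing would alias rows together
```
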
The big difference is that now the array of strides is kept track of in a [`PyArrayIterObject`](../reference/c-api/types-and- structures#c.PyArrayIterObject "PyArrayIterObject"), the iterators involved in a broadcast result are kept track of in a [`PyArrayMultiIterObject`](../reference/c-api/types-and- structures#c.PyArrayMultiIterObject "PyArrayMultiIterObject"), and the [`PyArray_Broadcast`](../reference/c-api/array#c.PyArray_Broadcast "PyArray_Broadcast") call implements the [General broadcasting rules](../user/basics.broadcasting#general-broadcasting-rules). ## Array scalars See also [Scalars](../reference/arrays.scalars#arrays-scalars) The array scalars offer a hierarchy of Python types that allow a one-to-one correspondence between the datatype stored in an array and the Python-type that is returned when an element is extracted from the array. An exception to this rule was made with object arrays. Object arrays are heterogeneous collections of arbitrary Python objects. When you select an item from an object array, you get back the original Python object (and not an object array scalar which does exist but is rarely used for practical purposes). The array scalars also offer the same methods and attributes as arrays with the intent that the same code can be used to support arbitrary dimensions (including 0-dimensions). The array scalars are read-only (immutable) with the exception of the void scalar which can also be written to so that structured array field setting works more naturally (`a[0]['f1'] = value`). ## Indexing See also [Indexing on ndarrays](../user/basics.indexing#basics-indexing), [Indexing routines](../reference/routines.indexing#arrays-indexing) All Python indexing operations `arr[index]` are organized by first preparing the index and finding the index type. 
The supported index types are:

* integer
* [`newaxis`](../reference/constants#numpy.newaxis "numpy.newaxis")
* [slice](https://docs.python.org/3/glossary.html#term-slice "\(in Python v3.13\)")
* [`Ellipsis`](https://docs.python.org/3/library/constants.html#Ellipsis "\(in Python v3.13\)")
* integer arrays/array-likes (advanced)
* boolean (single boolean array); if there is more than one boolean array as the index or the shape does not match exactly, the boolean array will be converted to an integer array instead.
* 0-d boolean (and also integer); 0-d boolean arrays are a special case that has to be handled in the advanced indexing code. They signal that a 0-d boolean array had to be interpreted as an integer array. As well as the scalar array special case signaling that an integer array was interpreted as an integer index, which is important because an integer array index forces a copy but is ignored if a scalar is returned (full integer index).

The prepared index is guaranteed to be valid with the exception of out of bound values and broadcasting errors for advanced indexing. This includes that an [`Ellipsis`](https://docs.python.org/3/library/constants.html#Ellipsis "\(in Python v3.13\)") is added for incomplete indices, for example when a two-dimensional array is indexed with a single integer.

The next step depends on the type of index which was found. If all dimensions are indexed with an integer a scalar is returned or set. A single boolean indexing array will call specialized boolean functions. Indices containing an [`Ellipsis`](https://docs.python.org/3/library/constants.html#Ellipsis "\(in Python v3.13\)") or [slice](https://docs.python.org/3/glossary.html#term-slice "\(in Python v3.13\)") but no advanced indexing will always create a view into the old array by calculating the new strides and memory offset. This view can then either be returned or, for assignments, filled using `PyArray_CopyObject`.
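The view-versus-copy distinction described above is easy to observe from Python (a small sketch):

```python
import numpy as np

a = np.arange(10)

# Slice indexing produces a view: new strides and offset, same memory.
view = a[2:8:2]
view[0] = 99
assert a[2] == 99            # the change is visible in the original array
assert view.base is a        # views keep a reference to the base array

# Advanced (integer-array) indexing must produce a copy.
fancy = a[[1, 3, 5]]
fancy[0] = -1
assert a[1] == 1             # the original array is untouched
assert fancy.base is None    # a copy owns its own data
```
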
Note that `PyArray_CopyObject` may also be called on temporary arrays in other branches to support complicated assignments when the array is of object [`dtype`](../reference/generated/numpy.dtype#numpy.dtype "numpy.dtype").

### Advanced indexing

By far the most complex case is advanced indexing, which may or may not be combined with typical view-based indexing. Here integer indices are interpreted as view-based. Before trying to understand this, you may want to make yourself familiar with its subtleties. The advanced indexing code has three different branches and one special case:

* There is one indexing array and it, as well as the assignment array, can be iterated trivially. For example, they may be contiguous. Also, the indexing array must be of [`intp`](../reference/arrays.scalars#numpy.intp "numpy.intp") type and the value array in assignments should be of the correct type. This is purely a fast path.
* There are only integer array indices so that no subarray exists.
* View-based and advanced indexing is mixed. In this case, the view-based indexing defines a collection of subarrays that are combined by the advanced indexing. For example, `arr[[1, 2, 3], :]` is created by vertically stacking the subarrays `arr[1, :]`, `arr[2, :]`, and `arr[3, :]`.
* There is a subarray but it has exactly one element. This case can be handled as if there is no subarray but needs some care during setup.

Deciding what case applies, checking broadcasting, and determining the kind of transposition needed are all done in `PyArray_MapIterNew`. After setting up, there are two cases. If there is no subarray or it only has one element, no subarray iteration is necessary and an iterator is prepared which iterates all indexing arrays _as well as_ the result or value array. If there is a subarray, there are three iterators prepared. One for the indexing arrays, one for the result or value array (minus its subarray), and one for the subarrays of the original and the result/assignment array.
The first two iterators give (or allow calculation of) the pointers into the start of the subarray, which then allows restarting the subarray iteration.

When advanced indices are next to each other, transposing may be necessary. All necessary transposing is handled by `PyArray_MapIterSwapAxes` and has to be handled by the caller unless `PyArray_MapIterNew` is asked to allocate the result. After preparation, getting and setting are relatively straightforward, although the different modes of iteration need to be considered. Unless there is only a single indexing array during item getting, the validity of the indices is checked beforehand. Otherwise, it is handled in the inner loop itself for optimization.

## Universal functions

See also [Universal functions (ufunc)](../reference/ufuncs#ufuncs), [Universal functions (ufunc) basics](../user/basics.ufuncs#ufuncs-basics)

Universal functions are callable objects that take \\(N\\) inputs and produce \\(M\\) outputs by wrapping basic 1-D loops that work element-by-element into full easy-to-use functions that seamlessly implement [broadcasting](../user/basics.broadcasting#basics-broadcasting), [type-checking](../user/basics.ufuncs#ufuncs-casting), [buffered coercion](../user/basics.ufuncs#use-of-internal-buffers), and [output-argument handling](../user/basics.ufuncs#ufuncs-output-type). New universal functions are normally created in C, although there is a mechanism for creating ufuncs from Python functions ([`frompyfunc`](../reference/generated/numpy.frompyfunc#numpy.frompyfunc "numpy.frompyfunc")). The user must supply a 1-D loop that implements the basic function taking the input scalar values and placing the resulting scalars into the appropriate output slots as explained in implementation.

### Setup

Every [`ufunc`](../reference/generated/numpy.ufunc#numpy.ufunc "numpy.ufunc") calculation involves some overhead related to setting up the calculation.
The practical significance of this overhead is that even though the actual calculation of the ufunc is very fast, you will be able to write array- and type-specific code that will work faster for small arrays than the ufunc. In particular, using ufuncs to perform many calculations on 0-D arrays will be slower than other Python-based solutions (the silently-imported `scalarmath` module exists precisely to give array scalars the look-and-feel of ufunc-based calculations with significantly reduced overhead).

When a [`ufunc`](../reference/generated/numpy.ufunc#numpy.ufunc "numpy.ufunc") is called, many things must be done. The information collected from these setup operations is stored in a loop object. This loop object is a C-structure (that could become a Python object but is not initialized as such because it is only used internally). This loop object has the layout needed to be used with [`PyArray_Broadcast`](../reference/c-api/array#c.PyArray_Broadcast "PyArray_Broadcast") so that the broadcasting can be handled in the same way as it is handled in other sections of code.

The first thing done is to look up the current values for the buffer-size, the error mask, and the associated error object in the thread-specific global dictionary. The state of the error mask controls what happens when an error condition is found. It should be noted that checking of the hardware error flags is only performed after each 1-D loop is executed. This means that if the input and output arrays are contiguous and of the correct type so that a single 1-D loop is performed, then the flags may not be checked until all elements of the array have been calculated. Looking up these values in a thread-specific dictionary takes time, which is easily ignored for all but very small arrays.

After checking the thread-specific global variables, the inputs are evaluated to determine how the ufunc should proceed and the input and output arrays are constructed if necessary.
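The buffer size and error mask consulted during this setup are the same per-thread state exposed to Python through `np.getbufsize`, `np.geterr`, and `np.errstate` (an illustration):

```python
import numpy as np

# The ufunc buffer size (in elements) and the current error mask:
print(np.getbufsize())       # e.g. 8192 by default
print(np.geterr())           # e.g. {'divide': 'warn', 'over': 'warn', ...}

# errstate temporarily changes the error mask that ufunc setup looks up:
with np.errstate(divide='ignore'):
    result = np.array(1.0) / np.array(0.0)   # no divide-by-zero warning
assert np.isinf(result)
```
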
Any inputs which are not arrays are converted to arrays (using context if necessary). Which of the inputs are scalars (and therefore converted to 0-D arrays) is noted. Next, an appropriate 1-D loop is selected from the 1-D loops available to the [`ufunc`](../reference/generated/numpy.ufunc#numpy.ufunc "numpy.ufunc") based on the input array types. This 1-D loop is selected by trying to match the signature of the datatypes of the inputs against the available signatures. The signatures corresponding to built-in types are stored in the [`ufunc.types`](../reference/generated/numpy.ufunc.types#numpy.ufunc.types "numpy.ufunc.types") member of the ufunc structure. The signatures corresponding to user-defined types are stored in a linked list of function information with the head element stored as a `CObject` in the `userloops` dictionary keyed by the datatype number (the first user-defined type in the argument list is used as the key). The signatures are searched until a signature is found to which the input arrays can all be cast safely (ignoring any scalar arguments which are not allowed to determine the type of the result). The implication of this search procedure is that “lesser types” should be placed below “larger types” when the signatures are stored. If no 1-D loop is found, then an error is reported. Otherwise, the `argument_list` is updated with the stored signature — in case casting is necessary and to fix the output types assumed by the 1-D loop. If the ufunc has 2 inputs and 1 output and the second input is an `Object` array then a special-case check is performed so that `NotImplemented` is returned if the second input is not an ndarray, has the [`__array_priority__`](../reference/arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") attribute, and has an `__r{op}__` special method. In this way, Python is signaled to give the other object a chance to complete the operation instead of using generic object-array calculations. 
This allows (for example) sparse matrices to override the multiplication operator 1-D loop.

For input arrays that are smaller than the specified buffer size, copies are made of all non-contiguous, misaligned, or out-of-byteorder arrays to ensure that for small arrays, a single loop is used. Then, array iterators are created for all the input arrays and the resulting collection of iterators is broadcast to a single shape.

The output arguments (if any) are then processed and any missing return arrays are constructed. If any provided output array doesn’t have the correct type (or is misaligned) and is smaller than the buffer size, then a new output array is constructed with the special [`NPY_ARRAY_WRITEBACKIFCOPY`](../reference/c-api/array#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY") flag set. At the end of the function, [`PyArray_ResolveWritebackIfCopy`](../reference/c-api/array#c.PyArray_ResolveWritebackIfCopy "PyArray_ResolveWritebackIfCopy") is called so that its contents will be copied back into the output array. Iterators for the output arguments are then processed.

Finally, the decision is made about how to execute the looping mechanism to ensure that all elements of the input arrays are combined to produce the output arrays of the correct type. The options for loop execution are one-loop (for [contiguous](../glossary#term-contiguous), aligned, and correct data type), strided-loop (for non-contiguous but still aligned and correct data type), and a buffered loop (for misaligned or incorrect data type situations). Depending on which execution method is called for, the loop is then set up and computed.

### Function call

This section describes how the basic universal function computation loop is set up and executed for each of the three different kinds of execution.
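Which of the three execution paths applies depends on contiguity, alignment, and byte order, all of which are visible from Python as array flags (a rough illustration; the actual decision is made in C):

```python
import numpy as np

a = np.arange(16, dtype=np.float64).reshape(4, 4)

# Contiguous, aligned, native byte order: eligible for the one-loop case.
assert a.flags['C_CONTIGUOUS'] and a.flags['ALIGNED']

# A strided view is still aligned but no longer contiguous: strided-loop case.
s = a[::2, ::2]
assert not s.flags['C_CONTIGUOUS'] and s.flags['ALIGNED']

# A non-native byte order forces the buffered-loop case (casting in chunks).
swapped = a.astype(a.dtype.newbyteorder())
assert swapped.dtype.byteorder != a.dtype.byteorder
```
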
If [`NPY_ALLOW_THREADS`](../reference/c-api/array#c.NPY_ALLOW_THREADS "NPY_ALLOW_THREADS") is defined during compilation, then as long as no object arrays are involved, the Python Global Interpreter Lock (GIL) is released prior to calling the loops. It is re-acquired if necessary to handle error conditions. The hardware error flags are checked only after the 1-D loop is completed. #### One loop This is the simplest case of all. The ufunc is executed by calling the underlying 1-D loop exactly once. This is possible only when we have aligned data of the correct type (including byteorder) for both input and output and all arrays have uniform strides (either [contiguous](../glossary#term-contiguous), 0-D, or 1-D). In this case, the 1-D computational loop is called once to compute the calculation for the entire array. Note that the hardware error flags are only checked after the entire calculation is complete. #### Strided loop When the input and output arrays are aligned and of the correct type, but the striding is not uniform (non-contiguous and 2-D or larger), then a second looping structure is employed for the calculation. This approach converts all of the iterators for the input and output arguments to iterate over all but the largest dimension. The inner loop is then handled by the underlying 1-D computational loop. The outer loop is a standard iterator loop on the converted iterators. The hardware error flags are checked after each 1-D loop is completed. #### Buffered loop This is the code that handles the situation whenever the input and/or output arrays are either misaligned or of the wrong datatype (including being byteswapped) from what the underlying 1-D loop expects. The arrays are also assumed to be non-contiguous.
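Which of the three situations applies can be recognized from Python through array flags and dtypes (a sketch only; the choice of loop is an internal detail). `np.getbufsize` exposes the buffer size used for the buffered case:

```python
import numpy as np

a = np.arange(12, dtype=np.float64)

# One-loop case: contiguous, aligned, native dtype -> one 1-D call.
print(a.flags['C_CONTIGUOUS'])            # True

# Strided case: aligned and correctly typed, but not contiguous.
view = a[::2]
print(view.flags['C_CONTIGUOUS'], view.strides)

# Buffered case: a byteswapped view has a non-native byte order,
# so data must be buffered and cast for the native 1-D loop.
swapped = a.view(a.dtype.newbyteorder())
print(swapped.dtype.isnative)             # False

# The chunk size used by the buffered loops is user-settable.
print(np.getbufsize())
```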
The code works very much like the strided-loop except that the inner 1-D loop is modified so that pre-processing is performed on the inputs and post-processing is performed on the outputs in `bufsize` chunks (where `bufsize` is a user-settable parameter). The underlying 1-D computational loop is called on data that is copied over (if it needs to be). The setup code and the loop code are considerably more complicated in this case because they have to handle: * memory allocation of the temporary buffers * deciding whether or not to use buffers on the input and output data (misaligned and/or wrong datatype) * copying and possibly casting data for any inputs or outputs for which buffers are necessary. * special-casing `Object` arrays so that reference counts are properly handled when copies and/or casts are necessary. * breaking up the inner 1-D loop into `bufsize` chunks (with a possible remainder). Again, the hardware error flags are checked at the end of each 1-D loop. ### Final output manipulation Ufuncs allow other array-like classes to be passed seamlessly through the interface in that inputs of a particular class will induce the outputs to be of that same class. The mechanism by which this works is the following. If any of the inputs are not ndarrays and define the [`__array_wrap__`](../reference/arrays.classes#numpy.class.__array_wrap__ "numpy.class.__array_wrap__") method, then the class with the largest [`__array_priority__`](../reference/arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") attribute determines the type of all the outputs (with the exception of any output arrays passed in). The [`__array_wrap__`](../reference/arrays.classes#numpy.class.__array_wrap__ "numpy.class.__array_wrap__") method of the input array will be called with the ndarray being returned from the ufunc as its input.
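A minimal sketch of a class participating in this mechanism (the `Logged` subclass is hypothetical; note that since NumPy 2.0 `__array_wrap__` is also passed a `return_scalar` argument, so defining it with a default keeps the sketch working across versions):

```python
import numpy as np

class Logged(np.ndarray):
    """Hypothetical subclass: re-wraps ufunc outputs as Logged."""
    __array_priority__ = 15.0

    def __array_wrap__(self, arr, context=None, return_scalar=False):
        # context, when supplied, is (ufunc, arguments, output number).
        return np.asarray(arr).view(type(self))

a = np.arange(3.0).view(Logged)
out = np.negative(a)
print(type(out).__name__)   # Logged
```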
There are two supported calling styles of the [`__array_wrap__`](../reference/arrays.classes#numpy.class.__array_wrap__ "numpy.class.__array_wrap__") function. The first takes the ndarray as the first argument and a tuple of “context” as the second argument. The context is (ufunc, arguments, output argument number). This is the first call tried. If a `TypeError` occurs, then the function is called with just the ndarray as the first argument. ### Methods There are three methods of ufuncs that require calculation similar to the general-purpose ufuncs. These are [`ufunc.reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce"), [`ufunc.accumulate`](../reference/generated/numpy.ufunc.accumulate#numpy.ufunc.accumulate "numpy.ufunc.accumulate"), and [`ufunc.reduceat`](../reference/generated/numpy.ufunc.reduceat#numpy.ufunc.reduceat "numpy.ufunc.reduceat"). Each of these methods requires a setup command followed by a loop. There are four loop styles possible for the methods corresponding to no-elements, one-element, strided-loop, and buffered-loop. These are the same basic loop styles as implemented for the general-purpose function call except for the no-element and one-element cases which are special-cases occurring when the input array objects have 0 and 1 elements respectively. #### Setup The setup function for all three methods is `construct_reduce`. This function creates a reducing loop object and fills it with the parameters needed to complete the loop. All of the methods only work on ufuncs that take 2 inputs and return 1 output. Therefore, the underlying 1-D loop is selected assuming a signature of `[otype, otype, otype]` where `otype` is the requested reduction datatype. The buffer size and error handling are then retrieved from (per-thread) global storage. For small arrays that are misaligned or have incorrect datatype, a copy is made so that the un-buffered section of code is used. Then, the looping strategy is selected.
If there is 1 element or 0 elements in the array, then a simple looping method is selected. If the array is not misaligned and has the correct datatype, then strided looping is selected. Otherwise, buffered looping must be performed. Looping parameters are then established, and the return array is constructed. The output array is of a different [shape](../glossary#term-shape) depending on whether the method is [`reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce"), [`accumulate`](../reference/generated/numpy.ufunc.accumulate#numpy.ufunc.accumulate "numpy.ufunc.accumulate"), or [`reduceat`](../reference/generated/numpy.ufunc.reduceat#numpy.ufunc.reduceat "numpy.ufunc.reduceat"). If an output array is already provided, then its shape is checked. If the output array is not C-contiguous, aligned, and of the correct data type, then a temporary copy is made with the [`NPY_ARRAY_WRITEBACKIFCOPY`](../reference/c-api/array#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY") flag set. In this way, the methods will be able to work with a well-behaved output array but the result will be copied back into the true output array when [`PyArray_ResolveWritebackIfCopy`](../reference/c-api/array#c.PyArray_ResolveWritebackIfCopy "PyArray_ResolveWritebackIfCopy") is called at function completion. Finally, iterators are set up to loop over the correct [axis](../glossary#term-axis) (depending on the value of axis provided to the method) and the setup routine returns to the actual computation routine. #### [`Reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") All of the ufunc methods use the same underlying 1-D computational loops with input and output arguments adjusted so that the appropriate reduction takes place. 
For example, the key to the functioning of [`reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") is that the 1-D loop is called with the output and the second input pointing to the same position in memory and both having a step-size of 0. The first input is pointing to the input array with a step-size given by the appropriate stride for the selected axis. In this way, the operation performed is \begin{align*} o &= i[0] \\ o &= i[k]\ \textrm{op}\ o \quad k=1\ldots N \end{align*} where \\(N+1\\) is the number of elements in the input, \\(i\\), \\(o\\) is the output, \\(\textrm{op}\\) is the ufunc’s binary operation, and \\(i[k]\\) is the \\(k^{\textrm{th}}\\) element of \\(i\\) along the selected axis. This basic operation is repeated for arrays with greater than 1 dimension so that the reduction takes place for every 1-D sub-array along the selected axis. An iterator with the selected dimension removed handles this looping. For buffered loops, care must be taken to copy and cast data before the loop function is called because the underlying loop expects aligned data of the correct datatype (including byteorder). The buffered loop must handle this copying and casting prior to calling the loop function on chunks no greater than the user-specified `bufsize`. #### [`Accumulate`](../reference/generated/numpy.ufunc.accumulate#numpy.ufunc.accumulate "numpy.ufunc.accumulate") The [`accumulate`](../reference/generated/numpy.ufunc.accumulate#numpy.ufunc.accumulate "numpy.ufunc.accumulate") method is very similar to the [`reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") method in that the output and the second input both point to the output. The difference is that the second input points to memory one stride behind the current output pointer. Thus, the operation performed is \begin{align*} o[0] &= i[0] \\ o[k] &= i[k]\ \textrm{op}\ o[k-1] \quad k=1\ldots N.
\end{align*} The output has the same shape as the input and each 1-D loop operates over \\(N\\) elements when the shape in the selected axis is \\(N+1\\). Again, buffered loops take care to copy and cast the data before calling the underlying 1-D computational loop. #### [`Reduceat`](../reference/generated/numpy.ufunc.reduceat#numpy.ufunc.reduceat "numpy.ufunc.reduceat") The [`reduceat`](../reference/generated/numpy.ufunc.reduceat#numpy.ufunc.reduceat "numpy.ufunc.reduceat") function is a generalization of both the [`reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") and [`accumulate`](../reference/generated/numpy.ufunc.accumulate#numpy.ufunc.accumulate "numpy.ufunc.accumulate") functions. It implements a [`reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") over ranges of the input array specified by indices. The extra indices argument is checked to be sure that every index is not too large for the input array along the selected dimension before the loop calculations take place. The loop implementation is handled using code that is very similar to the [`reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") code repeated as many times as there are elements in the indices input. In particular: the first input pointer passed to the underlying 1-D computational loop points to the input array at the correct location indicated by the index array. In addition, the output pointer and the second input pointer passed to the underlying 1-D loop point to the same position in memory. The size of the 1-D computational loop is fixed to be the difference between the current index and the next index (when the current index is the last index, then the next index is assumed to be the length of the array along the selected dimension).
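The reduce recurrence, the accumulate recurrence, and reduceat’s index handling can all be checked from Python (using addition as the operator):

```python
import numpy as np

i = np.arange(8)

# reduce: o = i[0]; then o = i[k] + o for k = 1..N.
o = i[0]
for k in range(1, len(i)):
    o = i[k] + o
print(o, np.add.reduce(i))             # 28 28

# The dtype argument selects otype, i.e. the [otype, otype, otype] loop.
r = np.add.reduce(np.arange(6, dtype=np.int8), dtype=np.int32)
print(r, r.dtype)                      # 15 int32

# accumulate: o[k] = i[k] + o[k-1]; output keeps the input's shape.
print(np.add.accumulate(i))            # [ 0  1  3  6 10 15 21 28]

# reduceat: a reduce over i[0:4], i[4:6], and i[6:8]
# (the final range runs to the end of the axis).
print(np.add.reduceat(i, [0, 4, 6]))   # [ 6  9 13]
```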
In this way, the 1-D loop will implement a [`reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") over the specified indices. Misaligned data, or a loop datatype that does not match the input and/or output datatype, is handled using buffered code wherein data is copied to a temporary buffer and cast to the correct datatype if necessary prior to calling the underlying 1-D function. The temporary buffers are created in (element) sizes no bigger than the user-settable buffer-size value. Thus, the loop must be flexible enough to call the underlying 1-D computational loop enough times to complete the total calculation in chunks no bigger than the buffer-size. # Internal organization of NumPy arrays Understanding a bit about how NumPy arrays are handled under the covers helps in understanding NumPy better. This section will not go into great detail. Those wishing to understand the full details are requested to refer to Travis Oliphant’s book [Guide to NumPy](https://web.mit.edu/dvp/Public/numpybook.pdf). NumPy arrays consist of two major components: the raw array data (from now on, referred to as the data buffer), and the information about the raw array data. The data buffer is typically what people think of as arrays in C or Fortran, a [contiguous](../glossary#term-contiguous) (and fixed) block of memory containing fixed-sized data items. NumPy also contains a significant set of data that describes how to interpret the data in the data buffer. This extra information contains (among other things): 1. The basic data element’s size in bytes. 2. The start of the data within the data buffer (an offset relative to the beginning of the data buffer). 3. The number of [dimensions](../glossary#term-dimension) and the size of each dimension. 4. The separation between elements for each dimension (the [stride](../glossary#term-stride)). This does not have to be a multiple of the element size. 5.
The byte order of the data (which may not be the native byte order). 6. Whether the buffer is read-only. 7. Information (via the [`dtype`](../reference/generated/numpy.dtype#numpy.dtype "numpy.dtype") object) about the interpretation of the basic data element. The basic data element may be as simple as an int or a float, or it may be a compound object (e.g., [struct-like](../glossary#term-structured-data-type)), a fixed character field, or Python object pointers. 8. Whether the array is to be interpreted as [C-order](../glossary#term-C-order) or [Fortran-order](../glossary#term-Fortran-order). This arrangement allows for the very flexible use of arrays. One thing that it allows is simple changes to the metadata to change the interpretation of the array buffer. Changing the byteorder of the array is a simple change involving no rearrangement of the data. The [shape](../glossary#term-shape) of the array can be changed very easily without changing anything in the data buffer or any data copying at all. Among other things, this makes it possible to create a new array metadata object that uses the same data buffer, creating a new [view](../glossary#term-view) of that data buffer that has a different interpretation of the buffer (e.g., different shape, offset, byte order, strides, etc.) but shares the same data bytes. Many operations in NumPy do just this, such as [slicing](https://docs.python.org/3/glossary.html#term-slice "\(in Python v3.13\)"). Other operations, such as transpose, don’t move data elements around in the array, but rather change the information about the shape and strides so that the indexing of the array changes, but the data in the array doesn’t move. Typically these new versions, with new array metadata but the same data buffer, are new views into the data buffer. There is a different [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") object, but it uses the same data buffer.
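These metadata-only changes, and the contrast with an explicit copy, are easy to verify:

```python
import numpy as np

a = np.arange(12, dtype=np.int16).reshape(3, 4)

# Transpose is a metadata-only change: shape and strides swap while
# the data buffer is shared.
t = a.T
print(a.strides, t.strides)       # (8, 2) (2, 8)
print(np.shares_memory(a, t))     # True

# Reinterpreting the bytes is likewise only a metadata change.
b = a.view(np.uint8)
print(b.shape)                    # (3, 8): same bytes, new dtype

# Only an explicit copy produces an independent buffer.
c = a.copy()
print(c.base is None, np.shares_memory(a, c))   # True False
```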
This is why it is necessary to force copies through the use of the [`copy`](../reference/generated/numpy.copy#numpy.copy "numpy.copy") method if one really wants to make a new and independent copy of the data buffer. New views into arrays mean the object reference counts for the data buffer increase. Simply doing away with the original array object will not remove the data buffer if other views of it still exist. ## Multidimensional array indexing order issues See also [Indexing on ndarrays](../user/basics.indexing#basics-indexing) What is the right way to index multi-dimensional arrays? Before you jump to conclusions about the one and true way to index multi-dimensional arrays, it pays to understand why this is a confusing issue. This section will try to explain in detail how NumPy indexing works and why we adopt the convention we do for images, and when it may be appropriate to adopt other conventions. The first thing to understand is that there are two conflicting conventions for indexing 2-dimensional arrays. Matrix notation uses the first index to indicate which row is being selected and the second index to indicate which column is selected. This is opposite to the geometrically oriented convention for images, where people generally think the first index represents x position (i.e., column) and the second represents y position (i.e., row). This alone is the source of much confusion; matrix-oriented users and image-oriented users expect two different things with regard to indexing. The second issue to understand is how indices correspond to the order in which the array is stored in memory. In Fortran, the first index is the most rapidly varying index when moving through the elements of a two-dimensional array as it is stored in memory. If you adopt the matrix convention for indexing, then this means the matrix is stored one column at a time (since the first index moves to the next row as it changes). Thus Fortran is considered a column-major language.
C has just the opposite convention. In C, the last index changes most rapidly as one moves through the array as stored in memory. Thus C is a row-major language. The matrix is stored by rows. Note that in both cases it presumes that the matrix convention for indexing is being used, i.e., for both Fortran and C, the first index is the row. Note this convention implies that the indexing convention is invariant and that the data order changes to keep that so. But that’s not the only way to look at it. Suppose one has large two-dimensional arrays (images or matrices) stored in data files. Suppose the data are stored by rows rather than by columns. If we are to preserve our index convention (whether matrix or image) that means that depending on the language we use, we may be forced to reorder the data if it is read into memory to preserve our indexing convention. For example, if we read row-ordered data into memory without reordering, it will match the matrix indexing convention for C, but not for Fortran. Conversely, it will match the image indexing convention for Fortran, but not for C. For C, if one is using data stored in row order, and one wants to preserve the image index convention, the data must be reordered when reading into memory. In the end, what you do for Fortran or C depends on which is more important, not reordering data or preserving the indexing convention. For large images, reordering data is potentially expensive, and often the indexing convention is inverted to avoid that. The situation with NumPy makes this issue yet more complicated. The internal machinery of NumPy arrays is flexible enough to accept any ordering of indices. One can simply reorder indices by manipulating the internal [stride](../glossary#term-stride) information for arrays without reordering the data at all. NumPy will know how to map the new index order to the data without moving the data. So if this is true, why not choose the index order that matches what you most expect?
In particular, why not define row-ordered images to use the image convention? (This is sometimes referred to as the Fortran convention vs the C convention, thus the ‘C’ and ‘FORTRAN’ order options for array ordering in NumPy.) The drawback of doing this is potential performance penalties. It’s common to access the data sequentially, either implicitly in array operations or explicitly by looping over rows of an image. When that is done, then the data will be accessed in non-optimal order. As the first index is incremented, what is actually happening is that elements spaced far apart in memory are being sequentially accessed, with usually poor memory access speeds. For example, consider a two-dimensional image `im` defined so that `im[0, 10]` represents the value at `x = 0`, `y = 10`. To be consistent with usual Python behavior, `im[0]` would then represent a column at `x = 0`. Yet that data would be spread over the whole array since the data are stored in row order. Despite the flexibility of NumPy’s indexing, it can’t really paper over the fact that basic operations are rendered inefficient because of data order or that getting contiguous subarrays is still awkward (e.g., `im[:, 0]` for the first row, vs `im[0]`). Thus one can’t use an idiom such as `for row in im`; `for col in im` does work, but doesn’t yield contiguous column data. As it turns out, NumPy is smart enough when dealing with [ufuncs](internals.code-explanations#ufuncs-internals) to determine which index is the most rapidly varying one in memory and uses that for the innermost loop. Thus for ufuncs, there is no large intrinsic advantage to either approach in most cases. On the other hand, use of [`ndarray.flat`](../reference/generated/numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") with a FORTRAN ordered array will lead to non-optimal memory access as adjacent elements in the flattened array (iterator, actually) are not contiguous in memory.
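The contiguity consequences described here can be checked directly (the array `im` below is just an illustrative zero image):

```python
import numpy as np

im = np.zeros((40, 50))     # C (row-major) order: rows are contiguous

row = im[0]                 # one row: a contiguous block of memory
col = im[:, 0]              # one column: widely spaced elements
print(row.flags['C_CONTIGUOUS'], col.flags['C_CONTIGUOUS'])  # True False

# With Fortran (column-major) layout the situation is reversed.
imf = np.asfortranarray(im)
print(imf[:, 0].flags['C_CONTIGUOUS'])   # True: columns are contiguous
```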
Indeed, the fact is that Python indexing on lists and other sequences naturally leads to an outside-to-inside ordering (the first index gets the largest grouping, the next largest, and the last gets the smallest element). Since image data are normally stored in rows, this corresponds to the position within rows being the last item indexed. If you do want to use Fortran ordering, realize that there are two approaches to consider: 1) accept that the first index is just not the most rapidly changing in memory and have all your I/O routines reorder your data when going from memory to disk or vice versa, or 2) use NumPy’s mechanism for mapping the first index to the most rapidly varying data. We recommend the former if possible. The disadvantage of the latter is that many of NumPy’s functions will yield arrays without Fortran ordering unless you are careful to use the `order` keyword. Doing this would be highly inconvenient. Otherwise, we recommend simply learning to reverse the usual order of indices when accessing elements of an array. Granted, it goes against the grain, but it is more in line with Python semantics and the natural order of the data. # Releasing a version The following guides include detailed information on how to prepare a NumPy release. ## How to prepare a release These instructions give an overview of what is necessary to build binary releases for NumPy.
### Current build and release info Useful info can be found in the following locations: * **Source tree** * [INSTALL.rst](https://github.com/numpy/numpy/blob/main/INSTALL.rst) * [pavement.py](https://github.com/numpy/numpy/blob/main/pavement.py) * **NumPy docs** * [numpy/numpy](https://github.com/numpy/numpy/blob/main/doc/HOWTO_RELEASE.rst) * [numpy/numpy](https://github.com/numpy/numpy/blob/main/doc/RELEASE_WALKTHROUGH.rst) * [numpy/numpy](https://github.com/numpy/numpy/blob/main/doc/BRANCH_WALKTHROUGH.rst) * **Release scripts** * [numpy/numpy-vendor](https://github.com/numpy/numpy-vendor) ### Supported platforms and versions [NEP 29](https://docs.scipy.org/doc/scipy/dev/toolchain.html#nep29 "\(in SciPy v1.14.1\)") outlines which Python versions are supported; For the first half of 2020, this will be Python >= 3.6. We test NumPy against all these versions every time we merge code to main. Binary installers may be available for a subset of these versions (see below). * **OS X** OS X versions >= 10.9 are supported, for Python version support see [NEP 29](https://docs.scipy.org/doc/scipy/dev/toolchain.html#nep29 "\(in SciPy v1.14.1\)"). We build binary wheels for OSX that are compatible with Python.org Python, system Python, homebrew and macports - see this [OSX wheel building summary](https://github.com/MacPython/wiki/wiki/Spinning-wheels) for details. * **Windows** We build 32- and 64-bit wheels on Windows. Windows 7, 8 and 10 are supported. We build NumPy using the [mingw-w64 toolchain](https://mingwpy.github.io), [cibuildwheels](https://cibuildwheel.readthedocs.io/en/stable/) and GitHub actions. * **Linux** We build and ship [manylinux2014](https://www.python.org/dev/peps/pep-0513) wheels for NumPy. Many Linux distributions include their own binary builds of NumPy. * **BSD / Solaris** No binaries are provided, but successful builds on Solaris and BSD have been reported. 
### Tool chain We build all our wheels on cloud infrastructure - so this list of compilers is for information and debugging builds locally. See the `.travis.yml` script in the [numpy wheels](https://github.com/MacPython/numpy-wheels) repo for an outdated source of the build recipes using multibuild. #### Compilers The same gcc version is used as the one with which Python itself is built on each platform. At the moment this means: * OS X builds on travis currently use `clang`. It appears that binary wheels for OSX >= 10.6 can be safely built from the travis-ci OSX 10.9 VMs when building against the Python from the Python.org installers; * Windows builds use the [mingw-w64 toolchain](https://mingwpy.github.io); * Manylinux2014 wheels use the gcc provided on the Manylinux docker images. You will need Cython for building the binaries. Cython compiles the `.pyx` files in the NumPy distribution to `.c` files. #### OpenBLAS All the wheels link to a version of [OpenBLAS](https://github.com/xianyi/OpenBLAS) supplied via the [openblas-libs](https://github.com/MacPython/openblas-libs) repo. The shared object (or DLL) is shipped within the wheel, renamed to prevent name collisions with other OpenBLAS shared objects that may exist in the filesystem. #### Building source archives and wheels The NumPy wheels and sdist are now built using cibuildwheel with GitHub Actions. #### Building docs We are no longer building `PDF` files. All that will be needed is * virtualenv (pip). The other requirements will be filled automatically during the documentation build process. #### Uploading to PyPI The only application needed for uploading is * twine (pip). You will also need a PyPI token, which is best kept on a keyring. See the twine [keyring](https://twine.readthedocs.io/en/stable/#keyring-support) documentation for how to do that. #### Generating author/PR lists You will need a personal access token so that scripts can access the GitHub NumPy repository.
* gitpython (pip) * pygithub (pip) ### What is released * **Wheels** We currently support Python 3.8-3.10 on Windows, OSX, and Linux. * Windows: 32-bit and 64-bit wheels built using GitHub Actions; * OSX: x86_64 and arm64 OSX wheels built using GitHub Actions; * Linux: x86_64 and aarch64 Manylinux2014 wheels built using GitHub Actions. * **Other** Release notes and changelog * **Source distribution** We build source releases in the .tar.gz format. ### Release process #### Agree on a release schedule A typical release schedule is one beta, two release candidates and a final release. It’s best to discuss the timing on the mailing list first, in order for people to get their commits in on time, get doc wiki edits merged, etc. After a date is set, create a new maintenance/x.y.z branch, add new empty release notes for the next version in the main branch and update the Trac Milestones. #### Make sure current branch builds a package correctly The CI builds wheels when a PR header begins with `REL`. Your last PR before releasing should be so marked and all the tests should pass. You can also do: git clean -fxdq python setup.py bdist_wheel python setup.py sdist For details of the build process itself, it is best to read the Step-by-Step Directions below. Note The following steps are repeated for the beta(s), release candidate(s) and the final release. #### Check deprecations Before the release branch is made, it should be checked that all deprecated code that should be removed is actually removed, and all new deprecations state in the docstring or deprecation warning in what version the code will be removed. #### Check the C API version number The C API version needs to be tracked in three places * numpy/_core/meson.build * numpy/_core/code_generators/cversions.txt * numpy/_core/include/numpy/numpyconfig.h There are three steps to the process. 1. If the API has changed, increment the C_API_VERSION in numpy/_core/meson.build.
The API is unchanged only if any code compiled against the current API will be backward compatible with the last released NumPy version. Any changes to C structures or additions to the public interface will make the new API not backward compatible. 2. If the C_API_VERSION in the first step has changed, or if the hash of the API has changed, the cversions.txt file needs to be updated. To check the hash, run the script numpy/_core/cversions.py and note the API hash that is printed. If that hash does not match the last hash in numpy/_core/code_generators/cversions.txt the hash has changed. Using both the appropriate C_API_VERSION and hash, add a new entry to cversions.txt. If the API version was not changed, but the hash differs, you will need to comment out the previous entry for that API version. For instance, in NumPy 1.9 annotations were added, which changed the hash, but the API was the same as in 1.8. The hash serves as a check for API changes, but it is not definitive. If steps 1 and 2 are done correctly, compiling the release should not give a warning “API mismatch detected at the beginning of the build”. 3. The numpy/_core/include/numpy/numpyconfig.h will need a new NPY_X_Y_API_VERSION macro, where X and Y are the major and minor version numbers of the release. The value given to that macro only needs to be increased from the previous version if some of the functions or macros in the include files were deprecated. The C ABI version number in numpy/_core/meson.build should only be updated for a major release. #### Check the release notes Use [towncrier](https://pypi.org/project/towncrier/) to build the release note and commit the changes. This will remove all the fragments from `doc/release/upcoming_changes` and add `doc/release/<version>-notes.rst`.: towncrier build --version "<version>" git commit -m"Create release note" Check that the release notes are up-to-date. Update the release notes with a Highlights section.
Mention some of the following: * major new features * deprecated and removed features * supported Python versions * for SciPy, supported NumPy version(s) * outlook for the near future ## Step-by-step directions This is a walkthrough of the NumPy 2.1.0 release on Linux, modified for building with GitHub Actions and cibuildwheel and uploading to the [anaconda.org staging repository for NumPy](https://anaconda.org/multibuild-wheels-staging/numpy). The commands can be copied into the command line, but be sure to replace 2.1.0 by the correct version. This should be read together with the general release guide. ### Facility preparation Before beginning to make a release, use the `requirements/*_requirements.txt` files to ensure that you have the needed software. Most software can be installed with pip, but some will require apt-get, dnf, or whatever your system uses for software. You will also need a GitHub personal access token (PAT) to push the documentation. There are a few ways to streamline things: * Git can be set up to use a keyring to store your GitHub personal access token. Search online for the details. * You can use the `keyring` app to store the PyPI password for twine. See the online twine documentation for details. ### Prior to release #### Add/drop Python versions When adding or dropping Python versions, three files need to be edited: * .github/workflows/wheels.yml # for github cibuildwheel * tools/ci/cirrus_wheels.yml # for cibuildwheel aarch64/arm64 builds * pyproject.toml # for classifier and minimum version check. Make these changes in an ordinary PR against main and backport if necessary. Add `[wheel build]` at the end of the title line of the commit summary so that wheel builds will be run to test the changes. We currently release wheels for new Python versions after the first Python rc once manylinux and cibuildwheel support it. For Python 3.11 we were able to release within a week of the rc1 announcement.
#### Backport pull requests

Changes that have been marked for this release must be backported to the maintenance/2.1.x branch.

#### Update 2.1.0 milestones

Look at the issues/prs with 2.1.0 milestones and either push them off to a later version, or maybe remove the milestone. You may need to add a milestone.

### Make a release PR

Four documents usually need to be updated or created for the release PR:

* The changelog
* The release notes
* The `.mailmap` file
* The `pyproject.toml` file

These changes should be made in an ordinary PR against the maintenance branch. The commit heading should contain a `[wheel build]` directive to test if the wheels build. Other small, miscellaneous fixes may be part of this PR. The commit message might be something like:

    REL: Prepare for the NumPy 2.1.0 release [wheel build]

    - Create 2.1.0-changelog.rst.
    - Update 2.1.0-notes.rst.
    - Update .mailmap.
    - Update pyproject.toml

#### Set the release version

Check the `pyproject.toml` file and set the release version if needed:

    $ gvim pyproject.toml

#### Check the `pavement.py` and `doc/source/release.rst` files

Check that the `pavement.py` file points to the correct release notes. It should have been updated after the last release, but if not, fix it now. Also make sure that the notes have an entry in the `release.rst` file:

    $ gvim pavement.py doc/source/release.rst

#### Generate the changelog

The changelog is generated using the changelog tool:

    $ spin changelog $GITHUB v2.0.0..maintenance/2.1.x > doc/changelog/2.1.0-changelog.rst

where `GITHUB` contains your GitHub access token. The text will need to be checked for non-standard contributor names and dependabot entries removed. It is also a good idea to remove any links that may be present in the PR titles, as they don't translate well to markdown; replace them with monospaced text. The non-standard contributor names should be fixed by updating the `.mailmap` file, which is a lot of work.
It is best to make several trial runs before reaching this point and ping the malefactors using a GitHub issue to get the needed information.

#### Finish the release notes

If there are any release notes snippets in `doc/release/upcoming_changes/`, run `spin notes`, which will incorporate the snippets into the `doc/source/release/notes-towncrier.rst` file and delete the snippets:

    $ spin notes
    $ gvim doc/source/release/notes-towncrier.rst doc/source/release/2.1.0-notes.rst

Once the `notes-towncrier` contents have been incorporated into the release notes, the `.. include:: notes-towncrier.rst` directive can be removed. The notes will always need some fixups, the introduction will need to be written, and significant changes should be called out. For patch releases the changelog text may also be appended, but not for the initial release, as it is too long. Check previous release notes to see how this is done.

### Release walkthrough

Note that in the code snippets below, `upstream` refers to the root repository on GitHub and `origin` to its fork in your personal GitHub repositories. You may need to make adjustments if you have not forked the repository but simply cloned it locally. You can also edit `.git/config` and add `upstream` if it isn't already present.

#### 1. Prepare the release commit

Checkout the branch for the release, make sure it is up to date, and clean the repository:

    $ git checkout maintenance/2.1.x
    $ git pull upstream maintenance/2.1.x
    $ git submodule update
    $ git clean -xdfq

Sanity check:

    $ python3 -m spin test -m full

Tag the release and push the tag. This requires write permission for the numpy repository:

    $ git tag -a -s v2.1.0 -m"NumPy 2.1.0 release"
    $ git push upstream v2.1.0

If you need to delete the tag due to error:

    $ git tag -d v2.1.0
    $ git push --delete upstream v2.1.0

#### 2. Build wheels

Tagging the build at the beginning of this process will trigger a wheel build via cibuildwheel and upload wheels and an sdist to the staging repo.
The CI run on github actions (for all x86-based and macOS arm64 wheels) takes about 1 1/4 hours. The CI runs on cirrus (for aarch64 and M1) take less time. You can check for uploaded files at the [staging repository](https://anaconda.org/multibuild-wheels-staging/numpy/files), but note that it is not closely synced with what you see of the running jobs.

If you wish to manually trigger a wheel build, you can do so:

* On github actions -> [Wheel builder](https://github.com/numpy/numpy/actions/workflows/wheels.yml) there is a "Run workflow" button; click on it and choose the tag to build.
* On Cirrus we don't currently have an easy way to manually trigger builds and uploads.

If a wheel build fails for unrelated reasons, you can rerun it individually:

* On github actions, select [Wheel builder](https://github.com/numpy/numpy/actions/workflows/wheels.yml) and click on the commit that contains the build you want to rerun. On the left there is a list of wheel builds; select the one you want to rerun and on the resulting page hit the counterclockwise arrows button.
* On cirrus, log into cirrusci, look for the v2.1.0 tag and rerun the failed jobs.

#### 3. Download wheels

When the wheels have all been successfully built and staged, download them from the Anaconda staging directory using the `tools/download-wheels.py` script:

    $ cd ../numpy
    $ mkdir -p release/installers
    $ python3 tools/download-wheels.py 2.1.0

#### 4. Generate the README files

This needs to be done after all installers are downloaded, but before the pavement file is updated for continued development:

    $ paver write_release

#### 5. Upload to PyPI

Upload to PyPI using `twine`. A recent version of `twine` is needed after recent PyPI changes; version `3.4.1` was used here:

    $ cd ../numpy
    $ twine upload release/installers/*.whl
    $ twine upload release/installers/*.gz  # Upload last.
If one of the commands breaks in the middle, you may need to selectively upload the remaining files because PyPI does not allow the same file to be uploaded twice. The source file should be uploaded last to avoid synchronization problems that might occur if pip users access the files while this is in process, causing pip to build from source rather than downloading a binary wheel. PyPI only allows a single source distribution; here we have chosen the zip archive.

#### 6. Upload files to GitHub

Go to [numpy/numpy](https://github.com/numpy/numpy/releases); there should be a `v2.1.0 tag`. Click on it, hit the edit button for that tag, and update the title to 'v2.1.0 (<date>)'. There are two ways to add files: using an editable text window and as binary uploads. Start by editing the `release/README.md` that is translated from the rst version using pandoc. Things that will need fixing: PR lines from the changelog, if included, are wrapped and need unwrapping; links should be changed to monospaced text. Then copy the contents to the clipboard and paste them into the text window. It may take several tries to get it to look right. Then:

* Upload `release/installers/numpy-2.1.0.tar.gz` as a binary file.
* Upload `release/README.rst` as a binary file.
* Upload `doc/changelog/2.1.0-changelog.rst` as a binary file.
* Check the pre-release button if this is a pre-release.
* Hit the `{Publish,Update} release` button at the bottom.

#### 7. Upload documents to numpy.org (skip for prereleases)

Note: You will need a GitHub personal access token to push the update. This step is only needed for final releases and can be skipped for pre-releases and most patch releases.
`make merge-doc` clones the `numpy/doc` repo into `doc/build/merge` and updates it with the new documentation:

    $ git clean -xdfq
    $ git co v2.1.0
    $ rm -rf doc/build  # want version to be current
    $ python -m spin docs merge-doc --build
    $ pushd doc/build/merge

If the release series is a new one, you will need to add a new section to the `doc/build/merge/index.html` front page just after the "insert here" comment:

    $ gvim index.html +/'insert here'

Further, update the version-switcher json file to add the new release and update the version marked `(stable)` and `preferred`:

    $ gvim _static/versions.json

Then run `update.py` to update the version in `_static`:

    $ python3 update.py

You can "test run" the new documentation in a browser to make sure the links work, although the version dropdown will not change; it pulls its information from `numpy.org`:

    $ firefox index.html  # or google-chrome, etc.

Update the `stable` link:

    $ ln -sfn 2.1 stable
    $ ls -l  # check the link

Once everything seems satisfactory, commit and upload the changes:

    $ git commit -a -m"Add documentation for v2.1.0"
    $ git push git@github.com:numpy/doc
    $ popd

#### 8. Reset the maintenance branch into a development state (skip for prereleases)

Create release notes for the next release and edit them to set the version. These notes will be a skeleton and have little content:

    $ git checkout -b begin-2.1.1 maintenance/2.1.x
    $ cp doc/source/release/template.rst doc/source/release/2.1.1-notes.rst
    $ gvim doc/source/release/2.1.1-notes.rst
    $ git add doc/source/release/2.1.1-notes.rst

Add the new release notes to the documentation release list and update the `RELEASE_NOTES` variable in `pavement.py`:

    $ gvim doc/source/release.rst pavement.py

Update the `version` in `pyproject.toml`:

    $ gvim pyproject.toml

Commit the result:

    $ git commit -a -m"MAINT: Prepare 2.1.x for further development"
    $ git push origin HEAD

Go to GitHub and make a PR. It should be merged quickly.

#### 9. Announce the release on numpy.org (skip for prereleases)

This assumes that you have forked [numpy/numpy.org](https://github.com/numpy/numpy.org):

    $ cd ../numpy.org
    $ git checkout main
    $ git pull upstream main
    $ git checkout -b announce-numpy-2.1.0
    $ gvim content/en/news.md

* For all releases, go to the bottom of the page and add a one line link. Look to the previous links for example.
* For the `*.0` release in a cycle, add a new section at the top with a short description of the new features and point the news link to it.

Commit and push:

    $ git commit -a -m"announce the NumPy 2.1.0 release"
    $ git push origin HEAD

Go to GitHub and make a PR.

#### 10. Announce to mailing lists

The release should be announced on the numpy-discussion, scipy-devel, and python-announce-list mailing lists. Look at previous announcements for the basic template. The contributor and PR lists are the same as generated for the release notes above. If you crosspost, make sure that python-announce-list is BCC so that replies will not be sent to that list.

#### 11. Post-release update main (skip for prereleases)

Checkout main and forward port the documentation changes. You may also want to update these notes if procedures have changed or improved:

    $ git checkout -b post-2.1.0-release-update main
    $ git checkout maintenance/2.1.x doc/source/release/2.1.0-notes.rst
    $ git checkout maintenance/2.1.x doc/changelog/2.1.0-changelog.rst
    $ git checkout maintenance/2.1.x .mailmap  # only if updated for release.
    $ gvim doc/source/release.rst  # Add link to new notes
    $ git status  # check status before commit
    $ git commit -a -m"MAINT: Update main after 2.1.0 release."
    $ git push origin HEAD

Go to GitHub and make a PR.

## Branch walkthrough

This guide contains a walkthrough of branching NumPy 1.21.x on Linux. The commands can be copied into the command line, but be sure to replace 1.21 and 1.22 by the correct versions.
It is good practice to make `.mailmap` as current as possible before making the branch; that may take several weeks. This should be read together with the general release guide.

### Branching

#### Make the branch

This is only needed when starting a new maintenance branch. Because NumPy now depends on tags to determine the version, the start of a new development cycle in the main branch needs an annotated tag. That is done as follows:

    $ git checkout main
    $ git pull upstream main
    $ git commit --allow-empty -m'REL: Begin NumPy 1.22.0 development'
    $ git push upstream HEAD

If the push fails because new PRs have been merged, do:

    $ git pull --rebase upstream

and repeat the push. Once the push succeeds, tag it:

    $ git tag -a -s v1.22.0.dev0 -m'Begin NumPy 1.22.0 development'
    $ git push upstream v1.22.0.dev0

then make the new branch and push it:

    $ git branch maintenance/1.21.x HEAD^
    $ git push upstream maintenance/1.21.x

#### Prepare the main branch for further development

Make a PR branch to prepare main for further development:

    $ git checkout -b 'prepare-main-for-1.22.0-development' v1.22.0.dev0

Delete the release note fragments:

    $ git rm doc/release/upcoming_changes/[0-9]*.*.rst

Create the new release notes skeleton and add it to the index:

    $ cp doc/source/release/template.rst doc/source/release/1.22.0-notes.rst
    $ gvim doc/source/release/1.22.0-notes.rst  # put the correct version
    $ git add doc/source/release/1.22.0-notes.rst
    $ gvim doc/source/release.rst  # add new notes to notes index
    $ git add doc/source/release.rst

Update `pavement.py` and update the `RELEASE_NOTES` variable to point to the new notes:

    $ gvim pavement.py
    $ git add pavement.py

Update `cversions.txt` to add the current release.
There should be no new hash to worry about at this early point, just add a comment following previous practice: $ gvim numpy/_core/code_generators/cversions.txt $ git add numpy/_core/code_generators/cversions.txt Check your work, commit it, and push: $ git status # check work $ git commit -m'REL: Prepare main for NumPy 1.22.0 development' $ git push origin HEAD Now make a pull request. # Reviewer guidelines Reviewing open pull requests (PRs) helps move the project forward. We encourage people outside the project to get involved as well; it’s a great way to get familiar with the codebase. ## Who can be a reviewer? Reviews can come from outside the NumPy team – we welcome contributions from domain experts (for instance, `linalg` or `fft`) or maintainers of other projects. You do not need to be a NumPy maintainer (a NumPy team member with permission to merge a PR) to review. If we do not know you yet, consider introducing yourself in [the mailing list or Slack](https://numpy.org/community/) before you start reviewing pull requests. ## Communication guidelines * Every PR, good or bad, is an act of generosity. Opening with a positive comment will help the author feel rewarded, and your subsequent remarks may be heard more clearly. You may feel good also. * Begin if possible with the large issues, so the author knows they’ve been understood. Resist the temptation to immediately go line by line, or to open with small pervasive issues. * You are the face of the project, and NumPy some time ago decided [the kind of project it will be](https://numpy.org/code-of-conduct/): open, empathetic, welcoming, friendly and patient. Be [kind](https://youtu.be/tzFWz5fiVKU?t=49m30s) to contributors. * Do not let perfect be the enemy of the good, particularly for documentation. If you find yourself making many small suggestions, or being too nitpicky on style or grammar, consider merging the current PR when all important concerns are addressed. 
Then, either push a commit directly (if you are a maintainer) or open a follow-up PR yourself.

* If you need help writing replies in reviews, check out some standard replies for reviewing.

## Reviewer checklist

* Is the intended behavior clear under all conditions? Some things to watch:
  * What happens with unexpected inputs like empty arrays or nan/inf values?
  * Are axis or shape arguments tested to be `int` or `tuple`?
  * Are unusual `dtypes` tested if a function supports those?
* Should variable names be improved for clarity or consistency? Should comments be added, or rather removed as unhelpful or extraneous?
* Does the documentation follow the [NumPy guidelines](howto-docs#howto-document)? Are the docstrings properly formatted?
* Does the code follow NumPy's [Stylistic Guidelines](index#stylistic-guidelines)?
* If you are a maintainer, and it is not obvious from the PR description, add a short explanation of what the branch did to the merge message and, if closing an issue, also add "Closes gh-123" where 123 is the issue number.
* For code changes, at least one maintainer (i.e. someone with commit rights) should review and approve a pull request. If you are the first to review a PR and approve of the changes, use the GitHub [approve review](https://help.github.com/articles/reviewing-changes-in-pull-requests/) tool to mark it as such. If a PR is straightforward, for example it's a clearly correct bug fix, it can be merged straight away. If it's more complex or changes public API, please leave it open for at least a couple of days so other maintainers get a chance to review.
* If you are a subsequent reviewer on an already approved PR, please use the same review method as for a new PR (focus on the larger issues, resist the temptation to add only a few nitpicks). If you have commit rights and think no more review is needed, merge the PR.
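The edge-case questions above can often be settled by asking for (or sketching) small, targeted tests. A minimal sketch, using `np.sum` purely as an illustration of the kinds of checks a reviewer might want to see covered:

```python
import numpy as np

# Empty input: reductions should have a well-defined result, not crash.
assert np.sum(np.array([])) == 0.0

# nan handling: by default nan propagates through a sum.
assert np.isnan(np.sum(np.array([1.0, np.nan])))

# axis accepted both as an int and as a tuple, with the expected shapes.
a = np.ones((2, 3))
assert np.sum(a, axis=0).shape == (3,)
assert np.sum(a, axis=(0, 1)) == 6.0
```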
### For maintainers * Make sure all automated CI tests pass before merging a PR, and that the [documentation builds](index#building-docs) without any errors. * In case of merge conflicts, ask the PR submitter to [rebase on main](development_workflow#rebasing-on-main). * For PRs that add new features or are in some way complex, wait at least a day or two before merging it. That way, others get a chance to comment before the code goes in. Consider adding it to the release notes. * When merging contributions, a committer is responsible for ensuring that those meet the requirements outlined in the [Development process guidelines](index#guidelines) for NumPy. Also, check that new features and backwards compatibility breaks were discussed on the [numpy-discussion mailing list](https://mail.python.org/mailman/listinfo/numpy-discussion). * Squashing commits or cleaning up commit messages of a PR that you consider too messy is OK. Remember to retain the original author’s name when doing this. Make sure commit messages follow the [rules for NumPy](development_workflow#writing-the-commit-message). * When you want to reject a PR: if it’s very obvious, you can just close it and explain why. If it’s not, then it’s a good idea to first explain why you think the PR is not suitable for inclusion in NumPy and then let a second committer comment or close. * If the PR submitter doesn’t respond to your comments for 6 months, move the PR in question to the inactive category with the “inactive” tag attached. At this point, the PR can be closed by a maintainer. If there is any interest in finalizing the PR under consideration, this can be indicated at any time, without waiting 6 months, by a comment. * Maintainers are encouraged to finalize PRs when only small changes are necessary before merging (e.g., fixing code style or grammatical errors). If a PR becomes inactive, maintainers may make larger changes. 
Remember, a PR is a collaboration between a contributor and reviewer(s); sometimes a direct push is the best way to finish it.

### API changes

As mentioned, most public API changes should be discussed ahead of time and often with a wider audience (on the mailing list, or even through a NEP). For changes in the public C-API, be aware that the NumPy C-API is backwards compatible, so any addition must be ABI compatible with previous versions. When that is not the case, you must add a guard. For example, the `PyUnicodeScalarObject` struct contains the following:

    #if NPY_FEATURE_VERSION >= NPY_1_20_API_VERSION
        char *buffer_fmt;
    #endif

because the `buffer_fmt` field was added at its end in NumPy 1.20 (all previous fields remained ABI compatible). Similarly, any function added to the API table in `numpy/_core/code_generators/numpy_api.py` must use the `MinVersion` annotation. For example:

    'PyDataMem_SetHandler': (304, MinVersion("1.22")),

Header-only functionality (such as a new macro) typically does not need to be guarded.

### GitHub workflow

When reviewing pull requests, please use workflow tracking features on GitHub as appropriate:

* After you have finished reviewing, if you want to ask for the submitter to make changes, change your review status to "Changes requested." This can be done on GitHub, PR page, Files changed tab, Review changes (button on the top right).
* If you're happy about the current status, mark the pull request as Approved (same way as Changes requested). Alternatively (for maintainers): merge the pull request, if you think it is ready to be merged.

It may be helpful to have a copy of the pull request code checked out on your own machine so that you can play with it locally. You can use the [GitHub CLI](https://docs.github.com/en/github/getting-started-with-github/github-cli) to do this by clicking the `Open with` button in the upper right-hand corner of the PR page.
Assuming you have your [development environment](development_environment#development-environment) set up, you can now build the code and test it. ## Standard replies for reviewing It may be helpful to store some of these in GitHub’s [saved replies](https://github.com/settings/replies/) for reviewing: **Usage question** You are asking a usage question. The issue tracker is for bugs and new features. I'm going to close this issue, feel free to ask for help via our [help channels](https://numpy.org/gethelp/). **You’re welcome to update the docs** Please feel free to offer a pull request updating the documentation if you feel it could be improved. **Self-contained example for bug** Please provide a [self-contained example code](https://stackoverflow.com/help/mcve), including imports and data (if possible), so that other contributors can just run it and reproduce your issue. Ideally your example code should be minimal. **Software versions** To help diagnose your issue, please paste the output of: ``` python -c 'import numpy; print(numpy.version.version)' ``` Thanks. **Code blocks** Readability can be greatly improved if you [format](https://help.github.com/articles/creating-and-highlighting-code-blocks/) your code snippets and complete error messages appropriately. You can edit your issue descriptions and comments at any time to improve readability. This helps maintainers a lot. Thanks! **Linking to code** For clarity's sake, you can link to code like [this](https://help.github.com/articles/creating-a-permanent-link-to-a-code-snippet/). **Better description and title** Please make the title of the PR more descriptive. The title will become the commit message when this is merged. You should state what issue (or PR) it fixes/resolves in the description using the syntax described [here](https://docs.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword). 
**Regression test needed**

Please add a [non-regression test](https://en.wikipedia.org/wiki/Non-regression_testing) that would fail at main but pass in this PR.

**Don't change unrelated**

Please do not change unrelated lines. It makes your contribution harder to review and may introduce merge conflicts to other pull requests.

# Under-the-hood documentation for developers

These documents are intended as a low-level look into NumPy, focused towards developers.

* [Internal organization of NumPy arrays](internals)
* [NumPy C code explanations](internals.code-explanations)
* [Memory alignment](alignment)
* [Byte-swapping](../user/byteswapping)
* [Writing custom array containers](../user/basics.dispatch)
* [Subclassing ndarray](../user/basics.subclassing)

# Boilerplate reduction and templating

## Using FYPP for binding generic interfaces

`f2py` doesn't currently support binding interface blocks. However, there are workarounds in use. Perhaps the best known is the usage of `tempita` for `.pyf.src` files, as is done in the bindings which [are part of scipy](https://github.com/scipy/scipy/blob/c93da6f46dbed8b3cc0ccd2495b5678f7b740a03/scipy/linalg/clapack.pyf.src). `tempita` support has since been removed and is no longer recommended in any case.

Note: The reason interfaces cannot be supported within `f2py` itself is that they don't correspond to exported symbols in compiled libraries:

    ❯ nm gen.o
    0000000000000078 T __add_mod_MOD_add_complex
    0000000000000000 T __add_mod_MOD_add_complex_dp
    0000000000000150 T __add_mod_MOD_add_integer
    0000000000000124 T __add_mod_MOD_add_real
    00000000000000ee T __add_mod_MOD_add_real_dp

Here we will discuss a few techniques to leverage `f2py` in conjunction with [fypp](https://fypp.readthedocs.io/en/stable/fypp.html) to emulate generic interfaces and to ease the binding of multiple (similar) functions.
### Basic example: Addition module

Let us build on the example (from the user guide, [F2PY examples](../f2py-examples#f2py-examples)) of a subroutine which takes in two arrays and returns its sum.

    C
          SUBROUTINE ZADD(A,B,C,N)
    C
          DOUBLE COMPLEX A(*)
          DOUBLE COMPLEX B(*)
          DOUBLE COMPLEX C(*)
          INTEGER N
          DO 20 J = 1, N
             C(J) = A(J)+B(J)
     20   CONTINUE
          END

We will recast this into modern Fortran:

    module adder
        implicit none
    contains

        subroutine zadd(a, b, c, n)
            integer, intent(in) :: n
            double complex, intent(in) :: a(n), b(n)
            double complex, intent(out) :: c(n)
            integer :: j
            do j = 1, n
                c(j) = a(j) + b(j)
            end do
        end subroutine zadd

    end module adder

We could go on as in the original example, adding intents by hand among other things; however, in production there are often other concerns. For one, we can template via FYPP the construction of similar functions:

    module adder
        implicit none
    contains

    #:def add_subroutine(dtype_prefix, dtype)
        subroutine ${dtype_prefix}$add(a, b, c, n)
            integer, intent(in) :: n
            ${dtype}$, intent(in) :: a(n), b(n)
            ${dtype}$ :: c(n)
            integer :: j
            do j = 1, n
                c(j) = a(j) + b(j)
            end do
        end subroutine ${dtype_prefix}$add

    #:enddef

    #:for dtype_prefix, dtype in [('i', 'integer'), ('s', 'real'), ('d', 'real(kind=8)'), ('c', 'complex'), ('z', 'double complex')]
    @:add_subroutine(${dtype_prefix}$, ${dtype}$)
    #:endfor

    end module adder

This can be pre-processed to generate the full Fortran code:

    ❯ fypp gen_adder.f90.fypp > adder.f90

As is to be expected, this can subsequently be wrapped by `f2py`. Now we will consider maintaining the bindings in a separate file. Note the following basic `.pyf`, which can be generated for a single subroutine via `f2py -m adder adder_base.f90 -h adder.pyf`: ! -*- f90 -*- ! Note: the context of this file is case sensitive. python module adder ! in interface ! in :adder module adder ! in :adder:adder_base.f90 subroutine zadd(a,b,c,n) !
in :adder:adder_base.f90:adder double complex dimension(n),intent(in) :: a double complex dimension(n),intent(in),depend(n) :: b double complex dimension(n),intent(out),depend(n) :: c integer, optional,intent(in),check(shape(a, 0) == n),depend(a) :: n=shape(a, 0) end subroutine zadd end module adder end interface end python module adder ! This file was auto-generated with f2py (version:2.0.0.dev0+git20240101.bab7280). ! See: ! https://web.archive.org/web/20140822061353/http://cens.ioc.ee/projects/f2py2e With the docstring: c = zadd(a,b,[n]) Wrapper for ``zadd``. Parameters ---------- a : input rank-1 array('D') with bounds (n) b : input rank-1 array('D') with bounds (n) Other Parameters ---------------- n : input int, optional Default: shape(a, 0) Returns ------- c : rank-1 array('D') with bounds (n) Which is already pretty good. However, `n` should never be passed in the first place so we will make some minor adjustments. ! -*- f90 -*- ! Note: the context of this file is case sensitive. python module adder ! in interface ! in :adder module adder ! in :adder:adder_base.f90 subroutine zadd(a,b,c,n) ! in :adder:adder_base.f90:adder integer intent(hide),depend(a) :: n=len(a) double complex dimension(n),intent(in) :: a double complex dimension(n),intent(in),depend(n) :: b double complex dimension(n),intent(out),depend(n) :: c end subroutine zadd end module adder end interface end python module adder ! This file was auto-generated with f2py (version:2.0.0.dev0+git20240101.bab7280). ! See: ! https://numpy.org/doc/stable/f2py/ Which corresponds to: In [3]: ?adder.adder.zadd Call signature: adder.adder.zadd(*args, **kwargs) Type: fortran String form: Docstring: c = zadd(a,b) Wrapper for ``zadd``. 
Parameters ---------- a : input rank-1 array('D') with bounds (n) b : input rank-1 array('D') with bounds (n) Returns ------- c : rank-1 array('D') with bounds (n) Finally, we can template over this in a similar manner, to attain the original goal of having bindings which make use of `f2py` directives and have minimal spurious repetition. ! -*- f90 -*- ! Note: the context of this file is case sensitive. python module adder ! in interface ! in :adder module adder ! in :adder:adder_base.f90 #:def add_subroutine(dtype_prefix, dtype) subroutine ${dtype_prefix}$add(a,b,c,n) ! in :adder:adder_base.f90:adder integer intent(hide),depend(a) :: n=len(a) ${dtype}$ dimension(n),intent(in) :: a ${dtype}$ dimension(n),intent(in),depend(n) :: b ${dtype}$ dimension(n),intent(out),depend(n) :: c end subroutine ${dtype_prefix}$add #:enddef #:for dtype_prefix, dtype in [('i', 'integer'), ('s', 'real'), ('d', 'real(kind=8)'), ('c', 'complex'), ('z', 'complex(kind=8)')] @:add_subroutine(${dtype_prefix}$, ${dtype}$) #:endfor end module adder end interface end python module adder ! This file was auto-generated with f2py (version:2.0.0.dev0+git20240101.bab7280). ! See: ! https://numpy.org/doc/stable/f2py/ Usage boils down to: fypp gen_adder.f90.fypp > adder.f90 fypp adder.pyf.fypp > adder.pyf f2py -m adder -c adder.pyf adder.f90 --backend meson # Advanced F2PY use cases ## Adding user-defined functions to F2PY generated modules User-defined Python C/API functions can be defined inside signature files using `usercode` and `pymethoddef` statements (they must be used inside the `python module` block). For example, the following signature file `spam.pyf` ! 
    -*- f90 -*-
    python module spam
        usercode '''
        static char doc_spam_system[] = "Execute a shell command.";
        static PyObject *spam_system(PyObject *self, PyObject *args)
        {
            char *command;
            int sts;

            if (!PyArg_ParseTuple(args, "s", &command))
                return NULL;
            sts = system(command);
            return Py_BuildValue("i", sts);
        }
        '''
        pymethoddef '''
        {"system", spam_system, METH_VARARGS, doc_spam_system},
        '''
    end python module spam

wraps the C library function `system()`:

    f2py -c spam.pyf

In Python this can then be used as:

    >>> import spam
    >>> status = spam.system('whoami')
    pearu
    >>> status = spam.system('blah')
    sh: line 1: blah: command not found

## Adding user-defined variables

The following example illustrates how to add user-defined variables to an F2PY generated extension module by modifying the dictionary of an F2PY generated module. Consider the following signature file (compiled with `f2py -c var.pyf`):

    ! -*- f90 -*-
    python module var
      usercode '''
        int BAR = 5;
      '''
      interface
        usercode '''
          PyDict_SetItemString(d,"BAR",PyLong_FromLong(BAR));
        '''
      end interface
    end python module

Notice that the second `usercode` statement must be defined inside an `interface` block, and the module dictionary is available through the variable `d` (see `varmodule.c` generated by `f2py var.pyf` for additional details). Usage in Python:

    >>> import var
    >>> var.BAR
    5

## Dealing with KIND specifiers

Currently, F2PY can handle only `<type spec>(kind=<kindselector>)` declarations where `<kindselector>` is a numeric integer (e.g. 1, 2, 4, …), but not a function call `KIND(..)` or any other expression. F2PY needs to know what the corresponding C type would be, and a general solution for that would be too complicated to implement. However, F2PY provides a hook to overcome this difficulty: users can define their own `<Fortran type>` to `<C type>` maps. For example, if Fortran 90 code contains:

    REAL(kind=KIND(0.0D0)) ...

then create a mapping file containing a Python dictionary:

    {'real': {'KIND(0.0D0)': 'double'}}

for instance.
Use the `--f2cmap` command-line option to pass the file name to F2PY. By default, F2PY assumes the file name is `.f2py_f2cmap` in the current working directory.

More generally, the f2cmap file must contain a dictionary with items:

    <Fortran typespec> : {<selector_expr>: <C type>}

that defines a mapping between the Fortran type:

    <Fortran typespec>([kind=]<selector_expr>)

and the corresponding `<C type>`. The `<C type>` can be one of the following:

    double
    float
    long_double
    char
    signed_char
    unsigned_char
    short
    unsigned_short
    int
    long
    long_long
    unsigned
    complex_float
    complex_double
    complex_long_double
    string

For example, for a Fortran file `func1.f` containing:

          subroutine func1(n, x, res)
            use, intrinsic :: iso_fortran_env, only: int64, real64
            implicit none
            integer(int64), intent(in) :: n
            real(real64), intent(in) :: x(n)
            real(real64), intent(out) :: res
    Cf2py     intent(hide) :: n
            res = sum(x)
          end

In order to convert `int64` and `real64` to valid C data types, a `.f2py_f2cmap` file with the following content can be created in the current directory:

    dict(real=dict(real64='double'), integer=dict(int64='long long'))

and then create the module as usual. F2PY checks if a `.f2py_f2cmap` file is present in the current directory and will use it to map `KIND` specifiers to C data types:

    f2py -c func1.f -m func1

Alternatively, the mapping file can be saved with any other name, for example `mapfile.txt`, and this information can be passed to F2PY by using the `--f2cmap` option:

    f2py -c func1.f -m func1 --f2cmap mapfile.txt

For more information, see the F2PY source code `numpy/f2py/capi_maps.py`.

## Character strings

### Assumed length character strings

In Fortran, assumed length character string arguments are declared as `character*(*)` or `character(len=*)`; that is, the length of such arguments is determined by the actual string arguments at runtime. For `intent(in)` arguments, this lack of length information poses no problems for f2py to construct functional wrapper functions.
However, for `intent(out)` arguments, the lack of length information is problematic for f2py generated wrappers because there is no size information available for creating memory buffers for such arguments, and F2PY assumes the length is 0. Depending on how the length of assumed length character strings is specified, there exist ways to work around this problem, as exemplified below.

If the length of the `character*(*)` output argument is determined by the state of other input arguments, the required connection can be established in a signature file or within an f2py comment by adding an extra declaration for the corresponding argument that specifies the length in the character selector part. For example, consider a Fortran file `asterisk1.f90`:

```fortran
subroutine foo1(s)
  character*(*), intent(out) :: s
  !f2py character(f2py_len=12) s
  s = "123456789A12"
end subroutine foo1
```

Compile it with `f2py -c asterisk1.f90 -m asterisk1` and then in Python:

```python
>>> import asterisk1
>>> asterisk1.foo1()
b'123456789A12'
```

Notice that the extra declaration `character(f2py_len=12) s` is interpreted only by f2py, and that in the `f2py_len=` specification one can use C expressions as a length value. In the following example:

```fortran
subroutine foo2(s, n)
  character(len=*), intent(out) :: s
  integer, intent(in) :: n
  !f2py character(f2py_len=n), depend(n) :: s
  s = "123456789A123456789B"(1:n)
end subroutine foo2
```

the length of the output assumed length string depends on an input argument `n`. After wrapping with F2PY, in Python:

```python
>>> import asterisk
>>> asterisk.foo2(2)
b'12'
>>> asterisk.foo2(12)
b'123456789A12'
```

# Using via cmake

In terms of complexity, `cmake` falls between `make` and `meson`. The learning curve is steeper since CMake syntax is not pythonic and is closer to `make` with environment variables. However, the trade-off is enhanced flexibility and support for most architectures and compilers.
An introduction to the syntax is out of scope for this document, but this [extensive CMake collection](https://cliutils.gitlab.io/modern-cmake/) of resources is great.

Note `cmake` is very popular for mixed-language systems; however, support for `f2py` is not particularly native or pleasant, and a more natural approach is to consider [Using via scikit-build](skbuild#f2py-skbuild).

## Fibonacci walkthrough (F77)

Returning to the `fib` example from the [Three ways to wrap - getting started](../f2py.getting-started#f2py-getting-started) section:

```fortran
C FILE: FIB1.F
      SUBROUTINE FIB(A,N)
C
C     CALCULATE FIRST N FIBONACCI NUMBERS
C
      INTEGER N
      REAL*8 A(N)
      DO I=1,N
         IF (I.EQ.1) THEN
            A(I) = 0.0D0
         ELSEIF (I.EQ.2) THEN
            A(I) = 1.0D0
         ELSE
            A(I) = A(I-1) + A(I-2)
         ENDIF
      ENDDO
      END
C END FILE FIB1.F
```

We do not need to explicitly generate the `python -m numpy.f2py fib1.f` output, which is `fib1module.c`, which is beneficial. With this, we can now initialize a `CMakeLists.txt` file as follows:

```cmake
cmake_minimum_required(VERSION 3.18) # Needed to avoid requiring embedded Python libs too

project(fibby
  VERSION 1.0
  DESCRIPTION "FIB module"
  LANGUAGES C Fortran
)

# Safety net
if(PROJECT_SOURCE_DIR STREQUAL PROJECT_BINARY_DIR)
  message(
    FATAL_ERROR
      "In-source builds not allowed. Please make a new directory (called a build directory) and run CMake from there.\n"
  )
endif()

# Grab Python, 3.8 or newer
find_package(Python 3.8 REQUIRED
  COMPONENTS Interpreter Development.Module NumPy)

# Grab the variables from a local Python installation
# F2PY headers
execute_process(
  COMMAND "${Python_EXECUTABLE}" -c
          "import numpy.f2py; print(numpy.f2py.get_include())"
  OUTPUT_VARIABLE F2PY_INCLUDE_DIR
  OUTPUT_STRIP_TRAILING_WHITESPACE
)

# Print out the discovered paths
include(CMakePrintHelpers)
cmake_print_variables(Python_INCLUDE_DIRS)
cmake_print_variables(F2PY_INCLUDE_DIR)
cmake_print_variables(Python_NumPy_INCLUDE_DIRS)

# Common variables
set(f2py_module_name "fibby")
set(fortran_src_file "${CMAKE_SOURCE_DIR}/fib1.f")
set(f2py_module_c "${f2py_module_name}module.c")

# Generate sources
add_custom_target(
  genpyf
  DEPENDS "${CMAKE_CURRENT_BINARY_DIR}/${f2py_module_c}"
)
add_custom_command(
  OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/${f2py_module_c}"
  COMMAND ${Python_EXECUTABLE} -m "numpy.f2py"
          "${fortran_src_file}" -m "fibby" --lower # Important
  DEPENDS fib1.f # Fortran source
)

# Set up target
Python_add_library(${CMAKE_PROJECT_NAME} MODULE WITH_SOABI
  "${CMAKE_CURRENT_BINARY_DIR}/${f2py_module_c}" # Generated
  "${F2PY_INCLUDE_DIR}/fortranobject.c"          # From NumPy
  "${fortran_src_file}"                          # Fortran source(s)
)

# Depend on sources
target_link_libraries(${CMAKE_PROJECT_NAME} PRIVATE Python::NumPy)
add_dependencies(${CMAKE_PROJECT_NAME} genpyf)
target_include_directories(${CMAKE_PROJECT_NAME} PRIVATE "${F2PY_INCLUDE_DIR}")
```

A key element of the `CMakeLists.txt` file defined above is that `add_custom_command` is used to generate the wrapper `C` files, and the command is then added as a dependency of the actual shared library target via an `add_custom_target` directive, which prevents the command from running every time. Additionally, the method used for obtaining the `fortranobject.c` file can also be used to grab the `numpy` headers on older `cmake` versions.
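The `execute_process` lookups above shell out to Python for the header locations; the same queries can be run directly to check what CMake will discover (assuming a NumPy recent enough to provide `numpy.f2py.get_include`, i.e. >= 1.21.1):

```python
import numpy
import numpy.f2py

# Directory with the NumPy C API headers
print(numpy.get_include())
# Directory shipping fortranobject.c / fortranobject.h
print(numpy.f2py.get_include())
```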
This then works in the same manner as the other modules, although the naming conventions are different and the output library is not automatically prefixed with the `cython` information.

```shell
ls .
# CMakeLists.txt fib1.f
cmake -S . -B build
cmake --build build
cd build
python -c "import numpy as np; import fibby; a = np.zeros(9); fibby.fib(a); print (a)"
# [ 0.  1.  1.  2.  3.  5.  8. 13. 21.]
```

This is particularly useful where a toolchain already exists and `scikit-build` or other additional `python` dependencies are discouraged.

# 1 Migrating to meson

As per the timeline laid out in [Status of numpy.distutils and migration advice](../../reference/distutils_status_migration#distutils-status-migration), `distutils` has ceased to be the default build backend for `f2py`. This page collects common workflows in both formats.

Note This is a **living** document, [pull requests](https://numpy.org/doc/stable/dev/howto-docs.html) are very welcome!

## 1.1 Baseline

We will start out with a slightly modern variation of the classic Fibonacci series generator.

```fortran
! fib.f90
subroutine fib(a, n)
   use iso_c_binding
   integer(c_int), intent(in) :: n
   integer(c_int), intent(out) :: a(n)
   do i = 1, n
      if (i .eq. 1) then
         a(i) = 0.0d0
      elseif (i .eq. 2) then
         a(i) = 1.0d0
      else
         a(i) = a(i - 1) + a(i - 2)
      end if
   end do
end
```

This will not win any awards, but can be a reasonable starting point.

## 1.2 Compilation options

### 1.2.1 Basic Usage

This is unchanged:

```shell
python -m numpy.f2py -c fib.f90 -m fib
❯ python -c "import fib; print(fib.fib(30))"
[     0      1      1      2      3      5      8     13     21     34
     55     89    144    233    377    610    987   1597   2584   4181
   6765  10946  17711  28657  46368  75025 121393 196418 317811 514229]
```

### 1.2.2 Specify the backend

Distutils

```shell
python -m numpy.f2py -c fib.f90 -m fib --backend distutils
```

This is the default for Python versions before 3.12.

Meson

```shell
python -m numpy.f2py -c fib.f90 -m fib --backend meson
```

This is the only option for Python versions 3.12 and later.
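The version cut-off described above can be expressed as a one-liner; a hedged sketch (the helper name is illustrative, the real selection logic lives inside `f2py`):

```python
import sys

def default_f2py_backend(version_info=sys.version_info):
    # distutils was removed from CPython in 3.12, so meson is the
    # only (and default) backend from 3.12 onwards.
    return "meson" if tuple(version_info[:2]) >= (3, 12) else "distutils"

print(default_f2py_backend((3, 11)))  # distutils
print(default_f2py_backend((3, 12)))  # meson
```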
### 1.2.3 Pass a compiler name

Distutils

```shell
python -m numpy.f2py -c fib.f90 -m fib --backend distutils --fcompiler=gfortran
```

Meson

```shell
FC="gfortran" python -m numpy.f2py -c fib.f90 -m fib --backend meson
```

Native files can also be used. Similarly, `CC` can be used in both cases to set the `C` compiler. The environment variables are generally common across both backends, so a small sample is included below.

**Name** | **What**
---|---
FC | Fortran compiler
CC | C compiler
CFLAGS | C compiler options
FFLAGS | Fortran compiler options
LDFLAGS | Linker options
LD_LIBRARY_PATH | Library file locations (Unix)
LIBS | Libraries to link against
PATH | Search path for executables
CXX | C++ compiler
CXXFLAGS | C++ compiler options

Note For Windows, these may not work very reliably, so [native files](https://mesonbuild.com/Native-environments.html) are likely the best bet, or direct use of the approach in 1.3 Customizing builds.

### 1.2.4 Dependencies

Here, `meson` can actually be used to set dependencies more robustly.

Distutils

```shell
python -m numpy.f2py -c fib.f90 -m fib --backend distutils -llapack
```

Note that this approach is error prone in practice.

Meson

```shell
python -m numpy.f2py -c fib.f90 -m fib --backend meson --dep lapack
```

This maps to `dependency("lapack")` and so can be used for a wide variety of dependencies. They can be [customized further](https://mesonbuild.com/Dependencies.html) to use CMake or other systems to resolve dependencies.

### 1.2.5 Libraries

Both `meson` and `distutils` are capable of linking against libraries.

Distutils

```shell
python -m numpy.f2py -c fib.f90 -m fib --backend distutils -lmylib -L/path/to/mylib
```

Meson

```shell
python -m numpy.f2py -c fib.f90 -m fib --backend meson -lmylib -L/path/to/mylib
```

## 1.3 Customizing builds

Distutils

```shell
python -m numpy.f2py -c fib.f90 -m fib --backend distutils --build-dir blah
```

This can be technically integrated with other codes, see [Using via numpy.distutils](distutils#f2py-distutils).
Meson

```shell
python -m numpy.f2py -c fib.f90 -m fib --backend meson --build-dir blah
```

The resulting build can be customized via the [Meson Build How-To Guide](https://mesonbuild.com/howtox.html). In fact, the resulting set of files can even be committed directly and used as a meson subproject in a separate codebase.

# Using via numpy.distutils

Legacy

This submodule is considered legacy and will no longer receive updates. This could also mean it will be removed in future NumPy versions. `distutils` has been removed in favor of `meson`; see [Status of numpy.distutils and migration advice](../../reference/distutils_status_migration#distutils-status-migration).

[`numpy.distutils`](../../reference/distutils#module-numpy.distutils "numpy.distutils") is part of NumPy, and extends the standard Python `distutils` module to deal with Fortran sources and F2PY signature files, e.g. compile Fortran sources, call F2PY to construct extension modules, etc.

Example

Consider the following `setup_file.py` for the `fib` and `scalar` examples from the [Three ways to wrap - getting started](../f2py.getting-started#f2py-getting-started) section:

```python
from numpy.distutils.core import Extension

ext1 = Extension(name = 'scalar',
                 sources = ['scalar.f'])
ext2 = Extension(name = 'fib2',
                 sources = ['fib2.pyf', 'fib1.f'])

if __name__ == "__main__":
    from numpy.distutils.core import setup
    setup(name = 'f2py_example',
          description = "F2PY Users Guide examples",
          author = "Pearu Peterson",
          author_email = "pearu@cens.ioc.ee",
          ext_modules = [ext1, ext2]
          )
# End of setup_file.py
```

Running

```shell
python setup_file.py build
```

will build the two extension modules `scalar` and `fib2` to the build directory.
## Extensions to `distutils`

[`numpy.distutils`](../../reference/distutils#module-numpy.distutils "numpy.distutils") extends `distutils` with the following features:

* The [`Extension`](../../reference/generated/numpy.distutils.core.extension#numpy.distutils.core.Extension "numpy.distutils.core.Extension") class argument `sources` may contain Fortran source files. In addition, the list `sources` may contain at most one F2PY signature file, and in this case, the name of an Extension module must match with the `<modulename>` used in the signature file. It is assumed that an F2PY signature file contains exactly one `python module` block. If `sources` does not contain a signature file, then F2PY is used to scan Fortran source files to construct wrappers to the Fortran codes. Additional options to the F2PY executable can be given using the [`Extension`](../../reference/generated/numpy.distutils.core.extension#numpy.distutils.core.Extension "numpy.distutils.core.Extension") class argument `f2py_options`.

* The following new `distutils` commands are defined: `build_src`, to construct Fortran wrapper extension modules, among many other things, and `config_fc`, to change Fortran compiler options. Additionally, the `build_ext` and `build_clib` commands are also enhanced to support Fortran sources. Run

  ```shell
  python <setup.py file> config_fc build_src build_ext --help
  ```

  to see available options for these commands.

* When building Python packages containing Fortran sources, one can choose different Fortran compilers by using the `build_ext` command option `--fcompiler=<Vendor>`. Here `<Vendor>` can be one of the following names (on `linux` systems):

  ```
  absoft compaq fujitsu g95 gnu gnu95 intel intele intelem lahey nag nagfor nv pathf95 pg vast
  ```

  See `numpy_distutils/fcompiler.py` for an up-to-date list of supported compilers for different platforms, or run

  ```shell
  python -m numpy.f2py -c --backend distutils --help-fcompiler
  ```

# F2PY and build systems

In this section we will cover the various popular build systems and their usage with `f2py`.
Changed in version NumPy 1.26.x: The default build system for `f2py` has traditionally been through the enhanced `numpy.distutils` module. This module is based on `distutils`, which was removed in Python 3.12.0 in **October 2023**. Like the rest of NumPy and SciPy, `f2py` uses `meson` now; see [Status of numpy.distutils and migration advice](../../reference/distutils_status_migration#distutils-status-migration) for some more details.

All changes to `f2py` are tested on SciPy, so their [CI configuration](https://docs.scipy.org/doc/scipy/dev/toolchain.html#official-builds) is always supported.

Note See [1 Migrating to meson](distutils-to-meson#f2py-meson-distutils) for migration information.

## Basic concepts

Building an extension module which includes Python and Fortran consists of:

* Fortran source(s)
* One or more generated files from `f2py`
  * A `C` wrapper file is always created
  * Code with modules requires an additional `.f90` wrapper
  * Code with functions generates an additional `.f` wrapper
* `fortranobject.{c,h}`
  * Distributed with `numpy`
  * Can be queried via `python -c "import numpy.f2py; print(numpy.f2py.get_include())"`
* NumPy headers
  * Can be queried via `python -c "import numpy; print(numpy.get_include())"`
* Python libraries and development headers

Broadly speaking there are three cases which arise when considering the outputs of `f2py`:

Fortran 77 programs

* Input file `blah.f`
* Generates
  * `blahmodule.c`
  * `blah-f2pywrappers.f`

When no `COMMON` blocks are present only a `C` wrapper file is generated. Wrappers are also generated to rewrite assumed shape arrays as automatic arrays.

Fortran 90 programs

* Input file `blah.f90`
* Generates:
  * `blahmodule.c`
  * `blah-f2pywrappers.f`
  * `blah-f2pywrappers2.f90`

The `f90` wrapper is used to handle code which is subdivided into modules. The `f` wrapper makes `subroutines` for `functions`. It rewrites assumed shape arrays as automatic arrays.
Signature files

* Input file `blah.pyf`
* Generates:
  * `blahmodule.c`
  * `blah-f2pywrappers2.f90` (occasionally)
  * `blah-f2pywrappers.f` (occasionally)

Signature files `.pyf` do not signal their language standard via the file extension; they may generate the F90 and F77 specific wrappers depending on their contents, which shifts the burden of checking for generated files onto the build system.

Changed in version NumPy 1.22.4: `f2py` will deterministically generate wrapper files based on the input file Fortran standard (F77 or greater). `--skip-empty-wrappers` can be passed to `f2py` to restore the previous behaviour of only generating wrappers when needed by the input.

In theory, keeping the above requirements in hand, any build system can be adapted to generate `f2py` extension modules. Here we will cover a subset of the more popular systems.

Note `make` has no place in a modern multi-language setup, and so is not discussed further.

## Build systems

* [Using via `numpy.distutils`](distutils)
  * [Extensions to `distutils`](distutils#extensions-to-distutils)
* [Using via `meson`](meson)
  * [Fibonacci walkthrough (F77)](meson#fibonacci-walkthrough-f77)
  * [Salient points](meson#salient-points)
* [Using via `cmake`](cmake)
  * [Fibonacci walkthrough (F77)](cmake#fibonacci-walkthrough-f77)
* [Using via `scikit-build`](skbuild)
  * [Fibonacci walkthrough (F77)](skbuild#fibonacci-walkthrough-f77)
* [1 Migrating to `meson`](distutils-to-meson)
  * [1.1 Baseline](distutils-to-meson#baseline)
  * [1.2 Compilation options](distutils-to-meson#compilation-options)
  * [1.3 Customizing builds](distutils-to-meson#customizing-builds)

# Using via meson

Note Much of this document is now obsoleted; one can run `f2py` with `--build-dir` to get a skeleton `meson` project with basic dependencies set up.
Changed in version 1.26.x: The default build system for `f2py` is now `meson`; see [Status of numpy.distutils and migration advice](../../reference/distutils_status_migration#distutils-status-migration) for some more details.

The key advantage gained by leveraging `meson` over the techniques described in [Using via numpy.distutils](distutils#f2py-distutils) is that this feeds into existing systems and larger projects with ease. `meson` has a rather pythonic syntax which makes it more comfortable and amenable to extension for `python` users.

## Fibonacci walkthrough (F77)

We will need the generated `C` wrapper before we can use a general purpose build system like `meson`. We will acquire this by:

```shell
python -m numpy.f2py fib1.f -m fib2
```

Now, consider the following `meson.build` file for the `fib` and `scalar` examples from the [Three ways to wrap - getting started](../f2py.getting-started#f2py-getting-started) section:

```meson
project('f2py_examples', 'c',
  version : '0.1',
  license: 'BSD-3',
  meson_version: '>=0.64.0',
  default_options : ['warning_level=2'],
)

add_languages('fortran')

py_mod = import('python')
py = py_mod.find_installation(pure: false)
py_dep = py.dependency()

incdir_numpy = run_command(py,
  ['-c', 'import os; os.chdir(".."); import numpy; print(numpy.get_include())'],
  check : true
).stdout().strip()

incdir_f2py = run_command(py,
  ['-c', 'import os; os.chdir(".."); import numpy.f2py; print(numpy.f2py.get_include())'],
  check : true
).stdout().strip()

inc_np = include_directories(incdir_numpy, incdir_f2py)

py.extension_module('fib2',
  [
    'fib1.f',
    'fib2module.c',  # note: this assumes f2py was manually run before!
  ],
  incdir_f2py / 'fortranobject.c',
  include_directories: inc_np,
  dependencies : py_dep,
  install : true
)
```

At this point the build will complete, but the import will fail:

```shell
meson setup builddir
meson compile -C builddir
cd builddir
python -c 'import fib2'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: fib2.cpython-39-x86_64-linux-gnu.so: undefined symbol: FIB_
# Check this isn't a false positive
nm -A fib2.cpython-39-x86_64-linux-gnu.so | grep FIB_
fib2.cpython-39-x86_64-linux-gnu.so: U FIB_
```

Recall that the original example, as reproduced below, was in SCREAMCASE:

```fortran
C FILE: FIB1.F
      SUBROUTINE FIB(A,N)
C
C     CALCULATE FIRST N FIBONACCI NUMBERS
C
      INTEGER N
      REAL*8 A(N)
      DO I=1,N
         IF (I.EQ.1) THEN
            A(I) = 0.0D0
         ELSEIF (I.EQ.2) THEN
            A(I) = 1.0D0
         ELSE
            A(I) = A(I-1) + A(I-2)
         ENDIF
      ENDDO
      END
C END FILE FIB1.F
```

With the standard approach, the subroutine exposed to `python` is `fib` and not `FIB`. This means we have a few options. One approach (where possible) is to lowercase the original Fortran file with say:

```shell
tr "[:upper:]" "[:lower:]" < fib1.f > fib1_lower.f && mv fib1_lower.f fib1.f
python -m numpy.f2py fib1.f -m fib2
meson --wipe builddir
meson compile -C builddir
cd builddir
python -c 'import fib2'
```

However this requires the ability to modify the source, which is not always possible. The easiest way to solve this is to let `f2py` deal with it:

```shell
python -m numpy.f2py fib1.f -m fib2 --lower
meson --wipe builddir
meson compile -C builddir
cd builddir
python -c 'import fib2'
```

### Automating wrapper generation

A major pain point in the workflow defined above is the manual tracking of inputs, though it would require more effort to figure out the actual outputs, for reasons discussed in [F2PY and build systems](index#f2py-bldsys).

Note From NumPy `1.22.4` onwards, `f2py` will deterministically generate wrapper files based on the input file Fortran standard (F77 or greater).
`--skip-empty-wrappers` can be passed to `f2py` to restore the previous behaviour of only generating wrappers when needed by the input.

However, we can augment our workflow in a straightforward way to take into account files for which the outputs are known when the build system is set up.

```meson
project('f2py_examples', 'c',
  version : '0.1',
  license: 'BSD-3',
  meson_version: '>=0.64.0',
  default_options : ['warning_level=2'],
)

add_languages('fortran')

py_mod = import('python')
py = py_mod.find_installation(pure: false)
py_dep = py.dependency()

incdir_numpy = run_command(py,
  ['-c', 'import os; os.chdir(".."); import numpy; print(numpy.get_include())'],
  check : true
).stdout().strip()

incdir_f2py = run_command(py,
  ['-c', 'import os; os.chdir(".."); import numpy.f2py; print(numpy.f2py.get_include())'],
  check : true
).stdout().strip()

fibby_source = custom_target('fibbymodule.c',
  input : ['fib1.f'],  # .f so no F90 wrappers
  output : ['fibbymodule.c', 'fibby-f2pywrappers.f'],
  command : [py, '-m', 'numpy.f2py', '@INPUT@', '-m', 'fibby', '--lower']
)

inc_np = include_directories(incdir_numpy, incdir_f2py)

py.extension_module('fibby',
  ['fib1.f', fibby_source],
  incdir_f2py / 'fortranobject.c',
  include_directories: inc_np,
  dependencies : py_dep,
  install : true
)
```

This can be compiled and run as before.

```shell
rm -rf builddir
meson setup builddir
meson compile -C builddir
cd builddir
python -c "import numpy as np; import fibby; a = np.zeros(9); fibby.fib(a); print (a)"
# [ 0.  1.  1.  2.  3.  5.  8. 13. 21.]
```

## Salient points

It is worth keeping in mind the following:

* It is not possible to use SCREAMCASE in this context, so either the contents of the `.f` file or the generated wrapper `.c` needs to be lowered to regular letters, which can be facilitated by the `--lower` option of `F2PY`

# Using via scikit-build

`scikit-build` provides two separate concepts geared towards the users of Python extension modules.

1. A `setuptools` replacement (legacy behaviour)
2.
A series of `cmake` modules with definitions which help building Python extensions

Note It is possible to use `scikit-build`’s `cmake` modules to [bypass the cmake setup mechanism](https://scikit-build.readthedocs.io/en/latest/cmake-modules/F2PY.html) completely, and to write targets which call `f2py -c`. This usage is **not recommended** since the point of these build system documents is to move away from the internal `numpy.distutils` methods.

For situations where no `setuptools` replacements are required or wanted (i.e. if `wheels` are not needed), it is recommended to instead use the vanilla `cmake` setup described in [Using via cmake](cmake#f2py-cmake).

## Fibonacci walkthrough (F77)

We will consider the `fib` example from the [Three ways to wrap - getting started](../f2py.getting-started#f2py-getting-started) section.

```fortran
C FILE: FIB1.F
      SUBROUTINE FIB(A,N)
C
C     CALCULATE FIRST N FIBONACCI NUMBERS
C
      INTEGER N
      REAL*8 A(N)
      DO I=1,N
         IF (I.EQ.1) THEN
            A(I) = 0.0D0
         ELSEIF (I.EQ.2) THEN
            A(I) = 1.0D0
         ELSE
            A(I) = A(I-1) + A(I-2)
         ENDIF
      ENDDO
      END
C END FILE FIB1.F
```

### `CMake` modules only

Consider using the following `CMakeLists.txt`.

```cmake
### setup project ###
cmake_minimum_required(VERSION 3.9)

project(fibby
  VERSION 1.0
  DESCRIPTION "FIB module"
  LANGUAGES C Fortran
)

# Safety net
if(PROJECT_SOURCE_DIR STREQUAL PROJECT_BINARY_DIR)
  message(
    FATAL_ERROR
      "In-source builds not allowed. Please make a new directory (called a build directory) and run CMake from there.\n"
  )
endif()

# Ensure scikit-build modules
if (NOT SKBUILD)
  find_package(PythonInterp 3.8 REQUIRED)
  # Kanged --> https://github.com/Kitware/torch_liberator/blob/master/CMakeLists.txt
  # If skbuild is not the driver; include its utilities in CMAKE_MODULE_PATH
  execute_process(
    COMMAND "${PYTHON_EXECUTABLE}" -c
            "import os, skbuild; print(os.path.dirname(skbuild.__file__))"
    OUTPUT_VARIABLE SKBLD_DIR
    OUTPUT_STRIP_TRAILING_WHITESPACE
  )
  list(APPEND CMAKE_MODULE_PATH "${SKBLD_DIR}/resources/cmake")
  message(STATUS "Looking in ${SKBLD_DIR}/resources/cmake for CMake modules")
endif()

# scikit-build style includes
find_package(PythonExtensions REQUIRED) # for ${PYTHON_EXTENSION_MODULE_SUFFIX}

# Grab the variables from a local Python installation
# NumPy headers
execute_process(
  COMMAND "${PYTHON_EXECUTABLE}" -c
          "import numpy; print(numpy.get_include())"
  OUTPUT_VARIABLE NumPy_INCLUDE_DIRS
  OUTPUT_STRIP_TRAILING_WHITESPACE
)
# F2PY headers
execute_process(
  COMMAND "${PYTHON_EXECUTABLE}" -c
          "import numpy.f2py; print(numpy.f2py.get_include())"
  OUTPUT_VARIABLE F2PY_INCLUDE_DIR
  OUTPUT_STRIP_TRAILING_WHITESPACE
)

# Prepping the module
set(f2py_module_name "fibby")
set(fortran_src_file "${CMAKE_SOURCE_DIR}/fib1.f")
set(f2py_module_c "${f2py_module_name}module.c")

# Target for enforcing dependencies
add_custom_target(genpyf
  DEPENDS "${fortran_src_file}"
)
add_custom_command(
  OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/${f2py_module_c}"
  COMMAND ${PYTHON_EXECUTABLE} -m "numpy.f2py"
          "${fortran_src_file}" -m "fibby" --lower # Important
  DEPENDS fib1.f # Fortran source
)

add_library(${CMAKE_PROJECT_NAME} MODULE
  "${f2py_module_name}module.c"
  "${F2PY_INCLUDE_DIR}/fortranobject.c"
  "${fortran_src_file}")

target_include_directories(${CMAKE_PROJECT_NAME} PUBLIC
  ${F2PY_INCLUDE_DIR}
  ${NumPy_INCLUDE_DIRS}
  ${PYTHON_INCLUDE_DIRS})

set_target_properties(${CMAKE_PROJECT_NAME} PROPERTIES SUFFIX "${PYTHON_EXTENSION_MODULE_SUFFIX}")
set_target_properties(${CMAKE_PROJECT_NAME} PROPERTIES PREFIX "")

# Linker fixes
if (UNIX)
  if (APPLE)
    set_target_properties(${CMAKE_PROJECT_NAME} PROPERTIES
      LINK_FLAGS "-Wl,-dylib,-undefined,dynamic_lookup")
  else()
    set_target_properties(${CMAKE_PROJECT_NAME} PROPERTIES
      LINK_FLAGS "-Wl,--allow-shlib-undefined")
  endif()
endif()

add_dependencies(${CMAKE_PROJECT_NAME} genpyf)

install(TARGETS ${CMAKE_PROJECT_NAME} DESTINATION fibby)
```

Much of the logic is the same as in [Using via cmake](cmake#f2py-cmake), however notably here the appropriate module suffix is generated via `sysconfig.get_config_var("SO")`. The resulting extension can be built and loaded in the standard workflow.

```shell
ls .
# CMakeLists.txt fib1.f
cmake -S . -B build
cmake --build build
cd build
python -c "import numpy as np; import fibby; a = np.zeros(9); fibby.fib(a); print (a)"
# [ 0.  1.  1.  2.  3.  5.  8. 13. 21.]
```

### `setuptools` replacement

Note **As of November 2021** The behavior described here of driving the `cmake` build of a module is considered to be legacy behaviour and should not be depended on.

The utility of `scikit-build` lies in being able to drive the generation of more than just extension modules; in particular, a common usage pattern is the generation of Python distributables (for example for PyPI). The workflow with `scikit-build` straightforwardly supports such packaging requirements. Consider augmenting the project with a `setup.py` as defined:

```python
from skbuild import setup

setup(
    name="fibby",
    version="0.0.1",
    description="a minimal example package (fortran version)",
    license="MIT",
    packages=['fibby'],
    python_requires=">=3.7",
)
```

Along with a commensurate `pyproject.toml`:

```toml
[build-system]
requires = ["setuptools>=42", "wheel", "scikit-build", "cmake>=3.9", "numpy>=1.21"]
build-backend = "setuptools.build_meta"
```

Together these can build the extension using `cmake` in tandem with other standard `setuptools` outputs.
Running `cmake` through `setup.py` is mostly used when it is necessary to integrate with extension modules not built with `cmake`.

```shell
ls .
# CMakeLists.txt fib1.f pyproject.toml setup.py
python setup.py build_ext --inplace
python -c "import numpy as np; import fibby.fibby; a = np.zeros(9); fibby.fibby.fib(a); print (a)"
# [ 0.  1.  1.  2.  3.  5.  8. 13. 21.]
```

Where we have modified the path to the module as `--inplace` places the extension module in a subfolder.

# F2PY examples

Below are some examples of F2PY usage. This list is not comprehensive, but can be used as a starting point when wrapping your own code.

Note The best place to look for examples is the [NumPy issue tracker](https://github.com/numpy/numpy/issues?q=is%3Aissue+label%3A%22component%3A+numpy.f2py%22+is%3Aclosed), or the test cases for `f2py`. Some more use cases are in [Boilerplate reduction and templating](advanced/boilerplating#f2py-boilerplating).

## F2PY walkthrough: a basic extension module

### Creating source for a basic extension module

Consider the following subroutine, contained in a file named `add.f`:

```fortran
C
      SUBROUTINE ZADD(A,B,C,N)
C
      DOUBLE COMPLEX A(*)
      DOUBLE COMPLEX B(*)
      DOUBLE COMPLEX C(*)
      INTEGER N
      DO 20 J = 1, N
         C(J) = A(J)+B(J)
 20   CONTINUE
      END
```

This routine simply adds the elements in two contiguous arrays and places the result in a third. The memory for all three arrays must be provided by the calling routine. A very basic interface to this routine can be automatically generated by f2py:

```shell
python -m numpy.f2py -m add add.f
```

This command will produce an extension module named `addmodule.c` in the current directory. This extension module can now be compiled and used from Python just like any other extension module.
### Creating a compiled extension module

You can also get f2py to both compile `add.f` along with the produced extension module, leaving only a shared-library extension file that can be imported from Python:

```shell
python -m numpy.f2py -c -m add add.f
```

This command produces a Python extension module compatible with your platform. This module may then be imported from Python. It will contain a method for each subroutine in `add`. The docstring of each method contains information about how the module method may be called:

```python
>>> import add
>>> print(add.zadd.__doc__)
zadd(a,b,c,n)

Wrapper for ``zadd``.

Parameters
----------
a : input rank-1 array('D') with bounds (*)
b : input rank-1 array('D') with bounds (*)
c : input rank-1 array('D') with bounds (*)
n : input int
```

### Improving the basic interface

The default interface is a very literal translation of the Fortran code into Python. The Fortran array arguments are converted to NumPy arrays and the integer argument should be mapped to a `C` integer. The interface will attempt to convert all arguments to their required types (and shapes) and issue an error if unsuccessful. However, because `f2py` knows nothing about the semantics of the arguments (such as the fact that `C` is an output and `n` should really match the array sizes), it is possible to abuse this function in ways that can cause Python to crash. For example:

```python
>>> add.zadd([1, 2, 3], [1, 2], [3, 4], 1000)
```

will cause a program crash on most systems. Under the hood, the lists are being converted to arrays, but then the underlying `add` function is told to cycle way beyond the borders of the allocated memory.

In order to improve the interface, `f2py` supports directives. This is accomplished by constructing a signature file. It is usually best to start from the interfaces that `f2py` produces in that file, which correspond to the default behavior.
To get `f2py` to generate the interface file use the `-h` option:

```shell
python -m numpy.f2py -h add.pyf -m add add.f
```

This command creates the `add.pyf` file in the current directory. The section of this file corresponding to `zadd` is:

```fortran
subroutine zadd(a,b,c,n) ! in :add:add.f
   double complex dimension(*) :: a
   double complex dimension(*) :: b
   double complex dimension(*) :: c
   integer :: n
end subroutine zadd
```

By placing intent directives and checking code, the interface can be cleaned up quite a bit so the Python module method is both easier to use and more robust to malformed inputs.

```fortran
subroutine zadd(a,b,c,n) ! in :add:add.f
   double complex dimension(n) :: a
   double complex dimension(n) :: b
   double complex intent(out),dimension(n) :: c
   integer intent(hide),depend(a) :: n=len(a)
end subroutine zadd
```

The intent directive `intent(out)` is used to tell f2py that `c` is an output variable and should be created by the interface before being passed to the underlying code. The `intent(hide)` directive tells f2py to not allow the user to specify the variable `n`, but instead to get it from the size of `a`. The `depend(a)` directive is necessary to tell f2py that the value of `n` depends on the input `a` (so that it won’t try to create the variable `n` until the variable `a` is created).

After modifying `add.pyf`, the new Python module file can be generated by compiling both `add.f` and `add.pyf`:

```shell
python -m numpy.f2py -c add.pyf add.f
```

The new interface’s docstring is:

```python
>>> import add
>>> print(add.zadd.__doc__)
c = zadd(a,b)

Wrapper for ``zadd``.

Parameters
----------
a : input rank-1 array('D') with bounds (n)
b : input rank-1 array('D') with bounds (n)

Returns
-------
c : rank-1 array('D') with bounds (n)
```

Now, the function can be called in a much more robust way:

```python
>>> add.zadd([1, 2, 3], [4, 5, 6])
array([5.+0.j, 7.+0.j, 9.+0.j])
```

Notice the automatic conversion to the correct format that occurred.
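The improved interface can be cross-checked against a plain NumPy model of what `zadd` computes; a hedged sketch (`zadd_ref` is an illustrative stand-in for the Fortran routine, not the wrapped module):

```python
import numpy as np

def zadd_ref(a, b):
    # Mirror the cleaned-up interface: inputs are cast to
    # complex ('D') arrays and c is returned instead of passed in.
    a = np.asarray(a, dtype=np.complex128)
    b = np.asarray(b, dtype=np.complex128)
    return a + b

print(zadd_ref([1, 2, 3], [4, 5, 6]))
# [5.+0.j 7.+0.j 9.+0.j]
```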
### Inserting directives in Fortran source The robust interface of the previous section can also be generated automatically by placing the variable directives as special comments in the original Fortran code. Note For projects where the Fortran code is being actively developed, this may be preferred. Thus, if the source code is modified to contain: C SUBROUTINE ZADD(A,B,C,N) C CF2PY INTENT(OUT) :: C CF2PY INTENT(HIDE) :: N CF2PY DOUBLE COMPLEX :: A(N) CF2PY DOUBLE COMPLEX :: B(N) CF2PY DOUBLE COMPLEX :: C(N) DOUBLE COMPLEX A(*) DOUBLE COMPLEX B(*) DOUBLE COMPLEX C(*) INTEGER N DO 20 J = 1, N C(J) = A(J) + B(J) 20 CONTINUE END Then, one can compile the extension module using: python -m numpy.f2py -c -m add add.f The resulting signature for the function add.zadd is exactly the same one that was created previously. If the original source code had contained `A(N)` instead of `A(*)` and so forth with `B` and `C`, then nearly the same interface can be obtained by placing the `INTENT(OUT) :: C` comment line in the source code. The only difference is that `N` would be an optional input that would default to the length of `A`. ## A filtering example This example shows a function that filters a two-dimensional array of double precision floating-point numbers using a fixed averaging filter. The advantage of using Fortran to index into multi-dimensional arrays should be clear from this example. 
C SUBROUTINE DFILTER2D(A,B,M,N) C DOUBLE PRECISION A(M,N) DOUBLE PRECISION B(M,N) INTEGER N, M CF2PY INTENT(OUT) :: B CF2PY INTENT(HIDE) :: N CF2PY INTENT(HIDE) :: M DO 20 I = 2,M-1 DO 40 J = 2,N-1 B(I,J) = A(I,J) + & (A(I-1,J)+A(I+1,J) + & A(I,J-1)+A(I,J+1) )*0.5D0 + & (A(I-1,J-1) + A(I-1,J+1) + & A(I+1,J-1) + A(I+1,J+1))*0.25D0 40 CONTINUE 20 CONTINUE END This code can be compiled and linked into an extension module named filter using: python -m numpy.f2py -c -m filter filter.f This will produce an extension module in the current directory with a method named `dfilter2d` that returns a filtered version of the input. ## `depends` keyword example Consider the following code, saved in the file `myroutine.f90`: subroutine s(n, m, c, x) implicit none integer, intent(in) :: n, m real(kind=8), intent(out), dimension(n,m) :: x real(kind=8), intent(in) :: c(:) x = 0.0d0 x(1, 1) = c(1) end subroutine s Wrapping this with `python -m numpy.f2py -c myroutine.f90 -m myroutine`, we can do the following in Python: >>> import numpy as np >>> import myroutine >>> x = myroutine.s(2, 3, np.array([5, 6, 7])) >>> x array([[5., 0., 0.], [0., 0., 0.]]) Now, instead of generating the extension module directly, we will create a signature file for this subroutine first. This is a common pattern for multi- step extension module generation. In this case, after running python -m numpy.f2py myroutine.f90 -m myroutine -h myroutine.pyf the following signature file is generated: ! -*- f90 -*- ! Note: the context of this file is case sensitive. python module myroutine ! in interface ! in :myroutine subroutine s(n,m,c,x) ! in :myroutine:myroutine.f90 integer intent(in) :: n integer intent(in) :: m real(kind=8) dimension(:),intent(in) :: c real(kind=8) dimension(n,m),intent(out),depend(m,n) :: x end subroutine s end interface end python module myroutine ! This file was auto-generated with f2py (version:1.23.0.dev0+120.g4da01f42d). ! See: ! 
https://web.archive.org/web/20140822061353/http://cens.ioc.ee/projects/f2py2e Now, if we run `python -m numpy.f2py -c myroutine.pyf myroutine.f90` we see an error; note that the signature file included a `depend(m,n)` statement for `x` which is not necessary. Indeed, editing the file above to read ! -*- f90 -*- ! Note: the context of this file is case sensitive. python module myroutine ! in interface ! in :myroutine subroutine s(n,m,c,x) ! in :myroutine:myroutine.f90 integer intent(in) :: n integer intent(in) :: m real(kind=8) dimension(:),intent(in) :: c real(kind=8) dimension(n,m),intent(out) :: x end subroutine s end interface end python module myroutine ! This file was auto-generated with f2py (version:1.23.0.dev0+120.g4da01f42d). ! See: ! https://web.archive.org/web/20140822061353/http://cens.ioc.ee/projects/f2py2e and running `f2py -c myroutine.pyf myroutine.f90` yields correct results. ## Read more * [Wrapping C codes using f2py](https://scipy.github.io/old-wiki/pages/Cookbook/f2py_and_NumPy.html) * [F2py section on the SciPy Cookbook](https://scipy-cookbook.readthedocs.io/items/F2Py.html) * [F2py example: Interactive System for Ice sheet Simulation](http://websrv.cs.umt.edu/isis/index.php/F2py_example) * [“Interfacing With Other Languages” section on the SciPy Cookbook.](https://scipy-cookbook.readthedocs.io/items/idx_interfacing_with_other_languages.html) # F2PY reference manual * [Signature file](signature-file) * [Signature files syntax](signature-file#signature-files-syntax) * [Using F2PY bindings in Python](python-usage) * [Fortran type objects](python-usage#fortran-type-objects) * [Scalar arguments](python-usage#scalar-arguments) * [String arguments](python-usage#string-arguments) * [Array arguments](python-usage#array-arguments) * [Call-back arguments](python-usage#call-back-arguments) * [Common blocks](python-usage#common-blocks) * [Fortran 90 module data](python-usage#fortran-90-module-data) * [Allocatable arrays](python-usage#allocatable-arrays) * 
[F2PY and build systems](buildtools/index) * [Basic concepts](buildtools/index#basic-concepts) * [Build systems](buildtools/index#build-systems) * [Advanced F2PY use cases](advanced/use_cases) * [Adding user-defined functions to F2PY generated modules](advanced/use_cases#adding-user-defined-functions-to-f2py-generated-modules) * [Adding user-defined variables](advanced/use_cases#adding-user-defined-variables) * [Dealing with KIND specifiers](advanced/use_cases#dealing-with-kind-specifiers) * [Character strings](advanced/use_cases#character-strings) * [Boilerplate reduction and templating](advanced/boilerplating) * [Using FYPP for binding generic interfaces](advanced/boilerplating#using-fypp-for-binding-generic-interfaces) * [F2PY test suite](f2py-testing) * [Adding a test](f2py-testing#adding-a-test) # F2PY test suite F2PY’s test suite is present in the directory `numpy/f2py/tests`. Its aim is to ensure that Fortran language features are correctly translated to Python. For example, the user can specify starting and ending indices of arrays in Fortran. This behaviour is translated to the generated CPython library where the arrays strictly start from 0 index. The directory of the test suite looks like the following: ./tests/ ├── __init__.py ├── src │ ├── abstract_interface │ ├── array_from_pyobj │ ├── // ... several test folders │ └── string ├── test_abstract_interface.py ├── test_array_from_pyobj.py ├── // ... several test files ├── test_symbolic.py └── util.py Files starting with `test_` contain tests for various aspects of f2py from parsing Fortran files to checking modules’ documentation. `src` directory contains the Fortran source files upon which we do the testing. `util.py` contains utility functions for building and importing Fortran modules during test time using a temporary location. ## Adding a test F2PY’s current test suite predates `pytest` and therefore does not use fixtures. 
Instead, the test files contain test classes that inherit from the `F2PyTest` class present in `util.py`.

    backend = SimplifiedMesonBackend(
        modulename=module_name,
        sources=source_files,
        extra_objects=kwargs.get("extra_objects", []),
        build_dir=build_dir,
        include_dirs=kwargs.get("include_dirs", []),
        library_dirs=kwargs.get("library_dirs", []),
        libraries=kwargs.get("libraries", []),
        define_macros=kwargs.get("define_macros", []),
        undef_macros=kwargs.get("undef_macros", []),
    )

This class has many helper functions for parsing and compiling test source files. Its child classes can override its `sources` data member to provide their own source files. The superclass will then compile the added source files upon object creation, and their functions will be appended to the `self.module` data member. Thus, the child classes are able to access the Fortran functions specified in the source file by calling `self.module.[fortran_function_name]`.

New in version v2.0.0b1.

Each of the `f2py` tests should run without failure if no Fortran compilers are present on the host machine. To facilitate this, the `CompilerChecker` is used, essentially providing a `meson` dependent set of utilities, namely `has_{c,f77,f90,fortran}_compiler()`. For the CLI tests in `test_f2py2e`, flags which are expected to call `meson` or otherwise depend on a compiler need to call `compiler_check_f2pycli()` instead of `f2pycli()`.

### Example

Consider the following subroutines, contained in a file named `add-test.f`:

    subroutine addb(k)
        real(8), intent(inout) :: k(:)
        k = k + 1
    endsubroutine

    subroutine addc(w,k)
        real(8), intent(in) :: w(:)
        real(8), intent(out) :: k(size(w))
        k = w + 1
    endsubroutine

The first subroutine, `addb`, simply takes an array and increases its elements by 1. The second subroutine, `addc`, assigns a new array `k` whose elements are greater than the elements of the input array `w` by 1.
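In NumPy terms, the two subroutines behave roughly as follows. This is a hypothetical reference model of their semantics, not the compiled module:

```python
import numpy as np

def addb_model(k):
    """Model of ADDB: intent(inout) k is incremented in place."""
    k += 1

def addc_model(w):
    """Model of ADDC: intent(out) k is returned as a new array w + 1."""
    return np.asarray(w, dtype=np.float64) + 1
```

Note how the `intent(inout)` routine mutates its argument while the `intent(out)` routine returns a fresh array; the test below checks exactly these two behaviours.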
A test can be implemented as follows:

    class TestAdd(util.F2PyTest):
        sources = [util.getpath("add-test.f")]

        def test_module(self):
            k = np.array([1, 2, 3], dtype=np.float64)
            w = np.array([1, 2, 3], dtype=np.float64)
            self.module.addb(k)
            assert np.allclose(k, w + 1)
            k = self.module.addc(w)
            assert np.allclose(k, w + 1)

We override the `sources` data member to provide the source file. The source files are compiled and the subroutines are attached to the module data member when the class object is created. The `test_module` function calls the subroutines and tests their results.

# F2PY user guide

* [Three ways to wrap - getting started](f2py.getting-started)
  * [The quick way](f2py.getting-started#the-quick-way)
  * [The smart way](f2py.getting-started#the-smart-way)
  * [The quick and smart way](f2py.getting-started#the-quick-and-smart-way)
* [Using F2PY](usage)
  * [Using `f2py` as a command-line tool](usage#using-f2py-as-a-command-line-tool)
  * [Python module `numpy.f2py`](usage#python-module-numpy-f2py)
  * [Automatic extension module generation](usage#automatic-extension-module-generation)
* [F2PY examples](f2py-examples)
  * [F2PY walkthrough: a basic extension module](f2py-examples#f2py-walkthrough-a-basic-extension-module)
  * [A filtering example](f2py-examples#a-filtering-example)
  * [`depends` keyword example](f2py-examples#depends-keyword-example)
  * [Read more](f2py-examples#read-more)

# Three ways to wrap - getting started

Wrapping Fortran or C functions to Python using F2PY consists of the following steps:

* Creating the so-called [signature file](signature-file) that contains descriptions of wrappers to Fortran or C functions, also called the signatures of the functions. For Fortran routines, F2PY can create an initial signature file by scanning Fortran source codes and tracking all relevant information needed to create wrapper functions.
* Optionally, F2PY-created signature files can be edited to optimize wrapper functions, which can make them “smarter” and more “Pythonic”.
* F2PY reads a signature file and writes a Python C/API module containing Fortran/C/Python bindings.
* F2PY compiles all sources and builds an extension module containing the wrappers. In building extension modules, F2PY uses `meson` (it previously used `numpy.distutils`). For different build systems, see [F2PY and build systems](buildtools/index#f2py-bldsys).

Note

See [1 Migrating to meson](buildtools/distutils-to-meson#f2py-meson-distutils) for migration information.

* Depending on your operating system, you may need to install the Python development headers (which provide the file `Python.h`) separately. In Linux, this package is called `python3-dev` in Debian-based distributions and `python3-devel` in Fedora-based distributions. For macOS, depending on how Python was installed, your mileage may vary. On Windows, the headers are typically installed already; see [F2PY and Windows](windows/index#f2py-windows).

Note

F2PY supports all the operating systems SciPy is tested on, so their [system dependencies panel](http://scipy.github.io/devdocs/building/index.html#system-level-dependencies) is a good reference.

Depending on the situation, these steps can be carried out either in a single composite command or step-by-step, in which case some steps can be omitted or combined with others. Below, we describe three typical approaches to using F2PY with Fortran 77. These can be read in order of increasing effort, but they also cater to different access levels depending on whether the Fortran code can be freely modified.

The following example Fortran 77 code will be used for illustration; save it as `fib1.f`:

    C FILE: FIB1.F
          SUBROUTINE FIB(A,N)
    C
    C     CALCULATE FIRST N FIBONACCI NUMBERS
    C
          INTEGER N
          REAL*8 A(N)
          DO I=1,N
             IF (I.EQ.1) THEN
                A(I) = 0.0D0
             ELSEIF (I.EQ.2) THEN
                A(I) = 1.0D0
             ELSE
                A(I) = A(I-1) + A(I-2)
             ENDIF
          ENDDO
          END
    C END FILE FIB1.F

Note

F2PY parses Fortran/C signatures to build wrapper functions to be used with Python.
However, it is not a compiler, and does not check for additional errors in source code, nor does it implement the entire language standards. Some errors may pass silently (or as warnings) and need to be verified by the user.

## The quick way

The quickest way to wrap the Fortran subroutine `FIB` for use in Python is to run

    python -m numpy.f2py -c fib1.f -m fib1

or, alternatively, if the `f2py` command-line tool is available,

    f2py -c fib1.f -m fib1

Note

Because the `f2py` command might not be available on all systems, notably on Windows, we will use the `python -m numpy.f2py` command throughout this guide.

This command compiles and wraps `fib1.f` (`-c`) to create the extension module `fib1.so` (`-m`) in the current directory. A list of command line options can be seen by executing `python -m numpy.f2py`. Now, in Python, the Fortran subroutine `FIB` is accessible via `fib1.fib`:

    >>> import numpy as np
    >>> import fib1
    >>> print(fib1.fib.__doc__)
    fib(a,[n])

    Wrapper for ``fib``.

    Parameters
    ----------
    a : input rank-1 array('d') with bounds (n)

    Other parameters
    ----------------
    n : input int, optional
        Default: len(a)

    >>> a = np.zeros(8, 'd')
    >>> fib1.fib(a)
    >>> print(a)
    [ 0.  1.  1.  2.  3.  5.  8. 13.]

Note

* F2PY recognized that the second argument `n` is the dimension of the first array argument `a`. Since by default all arguments are input-only arguments, F2PY concludes that `n` can be optional with the default value `len(a)`.
* One can use different values for the optional `n`:

      >>> a1 = np.zeros(8, 'd')
      >>> fib1.fib(a1, 6)
      >>> print(a1)
      [0. 1. 1. 2. 3. 5. 0. 0.]

  but an exception is raised when it is incompatible with the input array `a`:

      >>> fib1.fib(a, 10)
      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
      fib.error: (len(a)>=n) failed for 1st keyword n: fib:n=10
      >>>

  F2PY implements basic compatibility checks between related arguments in order to avoid unexpected crashes.
* When a NumPy array that is [Fortran](../glossary#term-Fortran-order) [contiguous](../glossary#term-contiguous) and has a `dtype` corresponding to a presumed Fortran type is used as an input array argument, then its C pointer is directly passed to Fortran. Otherwise, F2PY makes a contiguous copy (with the proper `dtype`) of the input array and passes a C pointer of the copy to the Fortran subroutine. As a result, any possible changes to the (copy of) input array have no effect on the original argument, as demonstrated below: >>> a = np.ones(8, 'i') >>> fib1.fib(a) >>> print(a) [1 1 1 1 1 1 1 1] Clearly, this is unexpected, as Fortran typically passes by reference. That the above example worked with `dtype=float` is considered accidental. F2PY provides an `intent(inplace)` attribute that modifies the attributes of an input array so that any changes made by the Fortran routine will be reflected in the input argument. For example, if one specifies the `intent(inplace) a` directive (see [Attributes](signature-file#f2py- attributes) for details), then the example above would read: >>> a = np.ones(8, 'i') >>> fib1.fib(a) >>> print(a) [ 0. 1. 1. 2. 3. 5. 8. 13.] However, the recommended way to have changes made by Fortran subroutine propagate to Python is to use the `intent(out)` attribute. That approach is more efficient and also cleaner. * The usage of `fib1.fib` in Python is very similar to using `FIB` in Fortran. However, using _in situ_ output arguments in Python is poor style, as there are no safety mechanisms in Python to protect against wrong argument types. When using Fortran or C, compilers discover any type mismatches during the compilation process, but in Python the types must be checked at runtime. Consequently, using _in situ_ output arguments in Python may lead to difficult to find bugs, not to mention the fact that the codes will be less readable when all required type checks are implemented. 
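The copy behaviour described in the first bullet can be demonstrated with plain NumPy. This hypothetical sketch mimics what the wrapper layer does for a routine expecting `float64` arguments:

```python
import numpy as np

def wrapper_model(a):
    """Model of the default argument handling: coerce to the Fortran
    dtype, let 'Fortran' modify the result, and report whether the
    original array was passed through (True) or copied (False)."""
    work = np.asarray(a, dtype=np.float64)  # copies when the dtype differs
    work += 1                               # the routine's in-place update
    return work is a                        # False -> original untouched

a_int = np.ones(8, dtype=np.int32)
shared = wrapper_model(a_int)   # int32 != float64, so a copy is made
# a_int is unchanged, just like the fib1.fib(a) example above
```

Running the same model on a `float64` array passes the original through, so the in-place update is visible to the caller, which is exactly the "accidental" behaviour the text warns about.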
Though the approach to wrapping Fortran routines for Python discussed so far is very straightforward, it has several drawbacks (see the comments above). The drawbacks are due to the fact that there is no way for F2PY to determine the actual intention of the arguments; that is, there is ambiguity in distinguishing between input and output arguments. Consequently, F2PY assumes that all arguments are input arguments by default. There are ways (see below) to remove this ambiguity by “teaching” F2PY about the true intentions of function arguments, and F2PY is then able to generate more explicit, easier to use, and less error-prone wrappers for Fortran functions.

## The smart way

If we want to have more control over how F2PY will treat the interface to our Fortran code, we can apply the wrapping steps one by one.

* First, we create a signature file from `fib1.f` by running:

      python -m numpy.f2py fib1.f -m fib2 -h fib1.pyf

  The signature file is saved to `fib1.pyf` (see the `-h` flag) and its contents are shown below.

      ! -*- f90 -*-
      python module fib2 ! in
          interface  ! in :fib2
              subroutine fib(a,n) ! in :fib2:fib1.f
                  real*8 dimension(n) :: a
                  integer optional,check(len(a)>=n),depend(a) :: n=len(a)
              end subroutine fib
          end interface
      end python module fib2
      ! This file was auto-generated with f2py (version:2.28.198-1366).
      ! See http://cens.ioc.ee/projects/f2py2e/

* Next, we’ll teach F2PY that the argument `n` is an input argument (using the `intent(in)` attribute) and that the result, i.e., the contents of `a` after calling the Fortran function `FIB`, should be returned to Python (using the `intent(out)` attribute). In addition, the array `a` should be created dynamically using the size determined by the input argument `n` (using the `depend(n)` attribute to indicate this dependence relation). The contents of a suitably modified version of `fib1.pyf` (saved as `fib2.pyf`) are as follows:

      ! -*- f90 -*-
      python module fib2
          interface
              subroutine fib(a,n)
                  real*8 dimension(n),intent(out),depend(n) :: a
                  integer intent(in) :: n
              end subroutine fib
          end interface
      end python module fib2

* Finally, we build the extension module by running:

      python -m numpy.f2py -c fib2.pyf fib1.f

In Python:

    >>> import fib2
    >>> print(fib2.fib.__doc__)
    a = fib(n)

    Wrapper for ``fib``.

    Parameters
    ----------
    n : input int

    Returns
    -------
    a : rank-1 array('d') with bounds (n)

    >>> print(fib2.fib(8))
    [ 0.  1.  1.  2.  3.  5.  8. 13.]

Note

* The signature of `fib2.fib` now more closely corresponds to the intention of the Fortran subroutine `FIB`: given the number `n`, `fib2.fib` returns the first `n` Fibonacci numbers as a NumPy array. The new Python signature of `fib2.fib` also rules out the unexpected behaviour in `fib1.fib`.
* Note that by default, using a single `intent(out)` also implies `intent(hide)`. Arguments that have the `intent(hide)` attribute specified will not be listed in the argument list of a wrapper function. For more details, see [Signature file](signature-file).

## The quick and smart way

The “smart way” of wrapping Fortran functions, as explained above, is suitable for wrapping (e.g. third-party) Fortran codes for which modifications to their source codes are neither desirable nor possible. However, if editing the Fortran code is acceptable, generation of an intermediate signature file can be skipped in most cases. F2PY-specific attributes can be inserted directly into Fortran source codes using F2PY directives. An F2PY directive consists of special comment lines (starting with `Cf2py` or `!f2py`, for example) which are ignored by Fortran compilers but interpreted by F2PY as normal lines.
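Since directives are just specially prefixed comments, they are easy to pick out of a source file mechanically. The small helper below is hypothetical (not part of f2py), but it illustrates the convention:

```python
def f2py_directives(source):
    """Return the F2PY directive lines (``Cf2py``/``!f2py`` comments)
    embedded in a Fortran source string."""
    return [
        line.strip()
        for line in source.splitlines()
        if line.strip().lower().startswith(("cf2py", "!f2py"))
    ]

# A fixed-form fragment with three directives, as in the example below.
fortran_src = """\
      SUBROUTINE FIB(A,N)
      INTEGER N
      REAL*8 A(N)
Cf2py intent(in) n
Cf2py intent(out) a
Cf2py depend(n) a
      END
"""
```

Running `f2py_directives(fortran_src)` returns the three `Cf2py` lines; a Fortran compiler would treat them as ordinary comments and ignore them.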
Consider a modified version of the previous Fortran code with F2PY directives, saved as `fib3.f`:

    C FILE: FIB3.F
          SUBROUTINE FIB(A,N)
    C
    C     CALCULATE FIRST N FIBONACCI NUMBERS
    C
          INTEGER N
          REAL*8 A(N)
    Cf2py intent(in) n
    Cf2py intent(out) a
    Cf2py depend(n) a
          DO I=1,N
             IF (I.EQ.1) THEN
                A(I) = 0.0D0
             ELSEIF (I.EQ.2) THEN
                A(I) = 1.0D0
             ELSE
                A(I) = A(I-1) + A(I-2)
             ENDIF
          ENDDO
          END
    C END FILE FIB3.F

Building the extension module can now be carried out in one command:

    python -m numpy.f2py -c -m fib3 fib3.f

Notice that the resulting wrapper to `FIB` is as “smart” (unambiguous) as in the previous case:

    >>> import fib3
    >>> print(fib3.fib.__doc__)
    a = fib(n)

    Wrapper for ``fib``.

    Parameters
    ----------
    n : input int

    Returns
    -------
    a : rank-1 array('d') with bounds (n)

    >>> print(fib3.fib(8))
    [ 0.  1.  1.  2.  3.  5.  8. 13.]

# F2PY user guide and reference manual

The purpose of the `F2PY` – _Fortran to Python interface generator_ – utility is to provide a connection between Python and Fortran. F2PY is distributed as part of [NumPy](https://www.numpy.org/) (`numpy.f2py`) and, once installed, is also available as a standalone command line tool. It was originally created by Pearu Peterson; older changelogs are in the [historical reference](https://web.archive.org/web/20140822061353/http://cens.ioc.ee/projects/f2py2e).

F2PY facilitates creating/building native [Python C/API extension modules](https://docs.python.org/3/extending/extending.html#extending-python-with-c-or-c) that make it possible

* to call Fortran 77/90/95 external subroutines and Fortran 90/95 module subroutines as well as C functions;
* to access Fortran 77 `COMMON` blocks and Fortran 90/95 module data, including allocatable arrays, from Python.

Note

Fortran 77 support is essentially feature complete, and an increasing amount of Modern Fortran is supported within F2PY. Most `iso_c_binding` interfaces can be compiled to native extension modules automatically with `f2py`. Bug reports are welcome!
F2PY can be used either as a command line tool `f2py` or as a Python module `numpy.f2py`. While we try to provide the command line tool as part of the numpy setup, some platforms like Windows make it difficult to reliably put the executables on the `PATH`. If the `f2py` command is not available in your system, you may have to run it as a module: python -m numpy.f2py Using the `python -m` invocation is also good practice if you have multiple Python installs with NumPy in your system (outside of virtual environments) and you want to ensure you pick up a particular version of Python/F2PY. If you run `f2py` with no arguments, and the line `numpy Version` at the end matches the NumPy version printed from `python -m numpy.f2py`, then you can use the shorter version. If not, or if you cannot run `f2py`, you should replace all calls to `f2py` mentioned in this guide with the longer version. * [F2PY user guide](f2py-user) * [Three ways to wrap - getting started](f2py.getting-started) * [The quick way](f2py.getting-started#the-quick-way) * [The smart way](f2py.getting-started#the-smart-way) * [The quick and smart way](f2py.getting-started#the-quick-and-smart-way) * [Using F2PY](usage) * [Using `f2py` as a command-line tool](usage#using-f2py-as-a-command-line-tool) * [Python module `numpy.f2py`](usage#python-module-numpy-f2py) * [Automatic extension module generation](usage#automatic-extension-module-generation) * [F2PY examples](f2py-examples) * [F2PY walkthrough: a basic extension module](f2py-examples#f2py-walkthrough-a-basic-extension-module) * [A filtering example](f2py-examples#a-filtering-example) * [`depends` keyword example](f2py-examples#depends-keyword-example) * [Read more](f2py-examples#read-more) * [F2PY reference manual](f2py-reference) * [Signature file](signature-file) * [Signature files syntax](signature-file#signature-files-syntax) * [Using F2PY bindings in Python](python-usage) * [Fortran type objects](python-usage#fortran-type-objects) * [Scalar 
arguments](python-usage#scalar-arguments) * [String arguments](python-usage#string-arguments) * [Array arguments](python-usage#array-arguments) * [Call-back arguments](python-usage#call-back-arguments) * [Common blocks](python-usage#common-blocks) * [Fortran 90 module data](python-usage#fortran-90-module-data) * [Allocatable arrays](python-usage#allocatable-arrays) * [F2PY and build systems](buildtools/index) * [Basic concepts](buildtools/index#basic-concepts) * [Build systems](buildtools/index#build-systems) * [Advanced F2PY use cases](advanced/use_cases) * [Adding user-defined functions to F2PY generated modules](advanced/use_cases#adding-user-defined-functions-to-f2py-generated-modules) * [Adding user-defined variables](advanced/use_cases#adding-user-defined-variables) * [Dealing with KIND specifiers](advanced/use_cases#dealing-with-kind-specifiers) * [Character strings](advanced/use_cases#character-strings) * [Boilerplate reduction and templating](advanced/boilerplating) * [Using FYPP for binding generic interfaces](advanced/boilerplating#using-fypp-for-binding-generic-interfaces) * [F2PY test suite](f2py-testing) * [Adding a test](f2py-testing#adding-a-test) * [F2PY and Windows](windows/index) * [Overview](windows/index#overview) * [Baseline](windows/index#baseline) * [PowerShell and MSVC](windows/index#powershell-and-msvc) * [Microsoft Store Python paths](windows/index#microsoft-store-python-paths) * [F2PY and Windows Intel Fortran](windows/intel) * [F2PY and Windows with MSYS2](windows/msys2) * [F2PY and Conda on Windows](windows/conda) * [F2PY and PGI Fortran on Windows](windows/pgi) * [1 Migrating to `meson`](buildtools/distutils-to-meson) * [1.1 Baseline](buildtools/distutils-to-meson#baseline) * [1.2 Compilation options](buildtools/distutils-to-meson#compilation-options) * [1.2.1 Basic Usage](buildtools/distutils-to-meson#basic-usage) * [1.2.2 Specify the backend](buildtools/distutils-to-meson#specify-the-backend) * [1.2.3 Pass a compiler 
name](buildtools/distutils-to-meson#pass-a-compiler-name) * [1.2.4 Dependencies](buildtools/distutils-to-meson#dependencies) * [1.2.5 Libraries](buildtools/distutils-to-meson#libraries) * [1.3 Customizing builds](buildtools/distutils-to-meson#customizing-builds)

# Using F2PY bindings in Python

In this page, you can find a full description and a few examples of common usage patterns for F2PY with Python and different argument types. For more examples and use cases, see [F2PY examples](f2py-examples#f2py-examples).

## Fortran type objects

All wrappers for Fortran/C routines, common blocks, or Fortran 90 module data generated by F2PY are exposed to Python as `fortran` type objects. Routine wrappers are callable `fortran` type objects, while wrappers to Fortran data have attributes referring to data objects.

All `fortran` type objects have an attribute `_cpointer` that contains a [`PyCapsule`](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule "\(in Python v3.13\)") referring to the C pointer of the corresponding Fortran/C function or variable at the C level. Such `PyCapsule` objects can be used as callback arguments for F2PY generated functions to bypass the Python C/API layer for calling Python functions from Fortran or C. This can be useful when the computational aspects of such functions are implemented in C or Fortran and wrapped with F2PY (or any other tool capable of providing the `PyCapsule` containing a function).

Consider a Fortran 77 file `ftype.f`:

    C FILE: FTYPE.F
          SUBROUTINE FOO(N)
          INTEGER N
    Cf2py integer optional,intent(in) :: n = 13
          REAL A,X
          COMMON /DATA/ A,X(3)
          PRINT*, "IN FOO: N=",N," A=",A," X=[",X(1),X(2),X(3),"]"
          END
    C END OF FTYPE.F

and a wrapper built using `f2py -c ftype.f -m ftype`. In Python, you can observe the types of `foo` and `data`, and how to access individual objects of the wrapped Fortran code.

    >>> import ftype
    >>> print(ftype.__doc__)
    This module 'ftype' is auto-generated with f2py (version:2).
    Functions:
        foo(n=13)
    COMMON blocks:
        /data/ a,x(3)
    .
    >>> type(ftype.foo), type(ftype.data)
    (<class 'fortran'>, <class 'fortran'>)
    >>> ftype.foo()
     IN FOO: N= 13 A= 0. X=[ 0. 0. 0.]
    >>> ftype.data.a = 3
    >>> ftype.data.x = [1,2,3]
    >>> ftype.foo()
     IN FOO: N= 13 A= 3. X=[ 1. 2. 3.]
    >>> ftype.data.x[1] = 45
    >>> ftype.foo(24)
     IN FOO: N= 24 A= 3. X=[ 1. 45. 3.]
    >>> ftype.data.x
    array([ 1., 45.,  3.], dtype=float32)

## Scalar arguments

In general, a scalar argument for an F2PY generated wrapper function can be an ordinary Python scalar (integer, float, complex number) as well as an arbitrary sequence object (list, tuple, array, string) of scalars. In the latter case, the first element of the sequence object is passed to the Fortran routine as a scalar argument.

Note

* When type-casting is required and there is possible loss of information via narrowing, e.g. when type-casting float to integer or complex to float, F2PY _does not_ raise an exception.
* For complex to real type-casting, only the real part of a complex number is used.
* `intent(inout)` scalar arguments are assumed to be array objects in order to have _in situ_ changes be effective. It is recommended to use arrays with the proper type, but other types also work. [Read more about the intent attribute](signature-file#f2py-attributes).

Consider the following Fortran 77 code:

    C FILE: SCALAR.F
          SUBROUTINE FOO(A,B)
          REAL*8 A, B
    Cf2py intent(in) a
    Cf2py intent(inout) b
          PRINT*, "    A=",A," B=",B
          PRINT*, "INCREMENT A AND B"
          A = A + 1D0
          B = B + 1D0
          PRINT*, "NEW A=",A," B=",B
          END
    C END OF FILE SCALAR.F

and wrap it using `f2py -c -m scalar scalar.f`. In Python:

    >>> import scalar
    >>> print(scalar.foo.__doc__)
    foo(a,b)

    Wrapper for ``foo``.

    Parameters
    ----------
    a : input float
    b : in/output rank-0 array(float,'d')

    >>> scalar.foo(2, 3)
         A= 2. B= 3.
     INCREMENT A AND B
     NEW A= 3. B= 4.
    >>> import numpy
    >>> a = numpy.array(2)  # these are integer rank-0 arrays
    >>> b = numpy.array(3)
    >>> scalar.foo(a, b)
         A= 2. B= 3.
     INCREMENT A AND B
     NEW A= 3. B= 4.
>>> print(a, b) # note that only b is changed in situ 2 4 ## String arguments F2PY generated wrapper functions accept almost any Python object as a string argument, since `str` is applied for non-string objects. Exceptions are NumPy arrays that must have type code `'S1'` or `'b'` (corresponding to the outdated `'c'` or `'1'` typecodes, respectively) when used as string arguments. See [Scalars](../reference/arrays.scalars#arrays-scalars) for more information on these typecodes. A string can have an arbitrary length when used as a string argument for an F2PY generated wrapper function. If the length is greater than expected, the string is truncated silently. If the length is smaller than expected, additional memory is allocated and filled with `\0`. Because Python strings are immutable, an `intent(inout)` argument expects an array version of a string in order to have _in situ_ changes be effective. Consider the following Fortran 77 code: C FILE: STRING.F SUBROUTINE FOO(A,B,C,D) CHARACTER*5 A, B CHARACTER*(*) C,D Cf2py intent(in) a,c Cf2py intent(inout) b,d PRINT*, "A=",A PRINT*, "B=",B PRINT*, "C=",C PRINT*, "D=",D PRINT*, "CHANGE A,B,C,D" A(1:1) = 'A' B(1:1) = 'B' C(1:1) = 'C' D(1:1) = 'D' PRINT*, "A=",A PRINT*, "B=",B PRINT*, "C=",C PRINT*, "D=",D END C END OF FILE STRING.F and wrap it using `f2py -c -m mystring string.f`. Python session: >>> import mystring >>> print(mystring.foo.__doc__) foo(a,b,c,d) Wrapper for ``foo``. 
Parameters ---------- a : input string(len=5) b : in/output rank-0 array(string(len=5),'c') c : input string(len=-1) d : in/output rank-0 array(string(len=-1),'c') >>> from numpy import array >>> a = array(b'123\0\0') >>> b = array(b'123\0\0') >>> c = array(b'123') >>> d = array(b'123') >>> mystring.foo(a, b, c, d) A=123 B=123 C=123 D=123 CHANGE A,B,C,D A=A23 B=B23 C=C23 D=D23 >>> a[()], b[()], c[()], d[()] (b'123', b'B23', b'123', b'D2') ## Array arguments In general, array arguments for F2PY generated wrapper functions accept arbitrary sequences that can be transformed to NumPy array objects. There are two notable exceptions: * `intent(inout)` array arguments must always be [proper-contiguous](../glossary#term-contiguous) and have a compatible `dtype`, otherwise an exception is raised. * `intent(inplace)` array arguments will be changed _in situ_ if the argument has a different type than expected (see the `intent(inplace)` [attribute](signature-file#f2py-attributes) for more information). In general, if a NumPy array is [proper-contiguous](../glossary#term- contiguous) and has a proper type then it is directly passed to the wrapped Fortran/C function. Otherwise, an element-wise copy of the input array is made and the copy, being proper-contiguous and with proper type, is used as the array argument. Usually there is no need to worry about how the arrays are stored in memory and whether the wrapped functions, being either Fortran or C functions, assume one or another storage order. F2PY automatically ensures that wrapped functions get arguments with the proper storage order; the underlying algorithm is designed to make copies of arrays only when absolutely necessary. However, when dealing with very large multidimensional input arrays with sizes close to the size of the physical memory in your computer, then care must be taken to ensure the usage of proper-contiguous and proper type arguments. 
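Whether an array can be passed straight through thus comes down to its flags and `dtype`, both of which can be inspected directly. A short sketch, using only public NumPy APIs:

```python
import numpy as np

a = np.arange(6, dtype=np.float64).reshape(2, 3)  # row-major (C order)
f = np.asfortranarray(a)                          # column-major copy

assert a.flags.c_contiguous and not a.flags.f_contiguous
assert f.flags.f_contiguous and not f.flags.c_contiguous

# Conversion is a no-op (no copy) when layout and dtype already match,
# which is exactly the case in which F2PY passes the pointer directly.
assert np.asfortranarray(f) is f
```

Inspecting `flags` this way before a call is a cheap sanity check when working near the limits of physical memory, where an unintended copy is expensive.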
To transform input arrays to column major storage order before passing them to Fortran routines, use the function [`numpy.asfortranarray`](../reference/generated/numpy.asfortranarray#numpy.asfortranarray "numpy.asfortranarray").

Consider the following Fortran 77 code:

```fortran
C FILE: ARRAY.F
      SUBROUTINE FOO(A,N,M)
C
C     INCREMENT THE FIRST ROW AND DECREMENT THE FIRST COLUMN OF A
C
      INTEGER N,M,I,J
      REAL*8 A(N,M)
Cf2py intent(in,out,copy) a
Cf2py integer intent(hide),depend(a) :: n=shape(a,0), m=shape(a,1)
      DO J=1,M
         A(1,J) = A(1,J) + 1D0
      ENDDO
      DO I=1,N
         A(I,1) = A(I,1) - 1D0
      ENDDO
      END
C END OF FILE ARRAY.F
```

and wrap it using `f2py -c -m arr array.f -DF2PY_REPORT_ON_ARRAY_COPY=1`. In Python:

```python
>>> import arr
>>> from numpy import asfortranarray, ascontiguousarray
>>> print(arr.foo.__doc__)
a = foo(a,[overwrite_a])

Wrapper for ``foo``.

Parameters
----------
a : input rank-2 array('d') with bounds (n,m)

Other Parameters
----------------
overwrite_a : input int, optional
    Default: 0

Returns
-------
a : rank-2 array('d') with bounds (n,m)

>>> a = arr.foo([[1, 2, 3],
...              [4, 5, 6]])
created an array from object
>>> print(a)
[[ 1.  3.  4.]
 [ 3.  5.  6.]]
>>> a.flags.c_contiguous
False
>>> a.flags.f_contiguous
True
>>> # even if a is proper-contiguous and has proper type,
>>> # a copy is made, forced by the intent(copy) attribute,
>>> # to preserve its original contents
>>> b = arr.foo(a)
copied an array: size=6, elsize=8
>>> print(a)
[[ 1.  3.  4.]
 [ 3.  5.  6.]]
>>> print(b)
[[ 1.  4.  5.]
 [ 2.  5.  6.]]
>>> b = arr.foo(a, overwrite_a=1)  # a is passed directly to the Fortran
...                                # routine and its contents is discarded
...
>>> print(a)
[[ 1.  4.  5.]
 [ 2.  5.  6.]]
>>> print(b)
[[ 1.  4.  5.]
 [ 2.  5.  6.]]
>>> a is b  # a and b are actually the same objects
True
>>> print(arr.foo([1, 2, 3]))  # different rank arrays are allowed
created an array from object
[ 1.  1.  2.]
>>> print(arr.foo([[[1], [2], [3]]]))
created an array from object
[[[ 1.]
  [ 1.]
  [ 2.]]]
>>> # Creating arrays with column major data storage order:
>>> s = asfortranarray([[1, 2, 3], [4, 5, 6]])
>>> s.flags.f_contiguous
True
>>> print(s)
[[1 2 3]
 [4 5 6]]
>>> print(arr.foo(s))
>>> s2 = asfortranarray(s)
>>> s2 is s  # an array with column major storage order
...          # is returned immediately
True
>>> # Note that arr.foo returns a column major data storage order array:
>>> s3 = ascontiguousarray(s)
>>> s3.flags.f_contiguous
False
>>> s3.flags.c_contiguous
True
>>> s3 = arr.foo(s3)
copied an array: size=6, elsize=8
>>> s3.flags.f_contiguous
True
>>> s3.flags.c_contiguous
False
```

## Call-back arguments

F2PY supports calling Python functions from Fortran or C code. Consider the following Fortran 77 code:

```fortran
C FILE: CALLBACK.F
      SUBROUTINE FOO(FUN,R)
      EXTERNAL FUN
      INTEGER I
      REAL*8 R, FUN
Cf2py intent(out) r
      R = 0D0
      DO I=-5,5
         R = R + FUN(I)
      ENDDO
      END
C END OF FILE CALLBACK.F
```

and wrap it using `f2py -c -m callback callback.f`. In Python:

```python
>>> import callback
>>> print(callback.foo.__doc__)
r = foo(fun,[fun_extra_args])

Wrapper for ``foo``.

Parameters
----------
fun : call-back function

Other Parameters
----------------
fun_extra_args : input tuple, optional
    Default: ()

Returns
-------
r : float

Notes
-----
Call-back functions::

    def fun(i): return r
    Required arguments:
        i : input int
    Return objects:
        r : float

>>> def f(i): return i*i
...
>>> print(callback.foo(f))
110.0
>>> print(callback.foo(lambda i: 1))
11.0
```

In the above example F2PY was able to guess accurately the signature of the call-back function. However, sometimes F2PY cannot establish the appropriate signature; in these cases the signature of the call-back function must be explicitly defined in the signature file.

To facilitate this, signature files may contain special modules (the names of these modules contain the special `__user__` sub-string) that define the various signatures of call-back functions. Callback arguments in routine signatures have the `external` attribute (see also the `intent(callback)` [attribute](signature-file#f2py-attributes)).
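The expected results can be sanity-checked with a pure-Python model of the Fortran loop above (a sketch; this `foo` is a plain Python stand-in for the compiled `callback.foo`, not the wrapper itself):

```python
def foo(fun):
    # Pure-Python model of CALLBACK.F: accumulate fun(i) for i = -5..5.
    r = 0.0
    for i in range(-5, 6):
        r += fun(i)
    return r

print(foo(lambda i: i * i))  # 110.0, matching the compiled example
print(foo(lambda i: 1))      # 11.0
```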
To relate a callback argument with its signature in a `__user__` module block, a `use` statement can be utilized, as illustrated below. The same signature for a callback argument can be referred to in different routine signatures.

We use the same Fortran 77 code as in the previous example, but now we will pretend that F2PY was not able to guess the signatures of call-back arguments correctly. First, we create an initial signature file `callback2.pyf` using F2PY:

```
f2py -m callback2 -h callback2.pyf callback.f
```

Then modify it as follows:

```
!    -*- f90 -*-
python module __user__routines
    interface
        function fun(i) result (r)
            integer :: i
            real*8 :: r
        end function fun
    end interface
end python module __user__routines

python module callback2
    interface
        subroutine foo(f,r)
            use __user__routines, f=>fun
            external f
            real*8 intent(out) :: r
        end subroutine foo
    end interface
end python module callback2
```

Finally, we build the extension module using `f2py -c callback2.pyf callback.f`. An example Python session for this snippet would be identical to the previous example, except that the argument names would differ.

Sometimes a Fortran package may require that users provide routines that the package will use. F2PY can construct an interface to such routines so that Python functions can be called from Fortran. Consider the following Fortran 77 subroutine that takes an array as its input and applies a function `func` to its elements:

```fortran
      subroutine calculate(x,n)
cf2py intent(callback) func
      external func
c     The following lines define the signature of func for F2PY:
cf2py real*8 y
cf2py y = func(y)
c
cf2py intent(in,out,copy) x
      integer n,i
      real*8 x(n), func
      do i=1,n
         x(i) = func(x(i))
      end do
      end
```

The Fortran code expects that the function `func` has been defined externally. In order to use a Python function for `func`, it must have the `intent(callback)` attribute, and it must be specified before the `external` statement.
Finally, build the extension module using `f2py -c -m foo calculate.f`.

In Python:

```python
>>> import foo
>>> foo.calculate(range(5), lambda x: x*x)
array([  0.,   1.,   4.,   9.,  16.])
>>> import math
>>> foo.calculate(range(5), math.exp)
array([  1.        ,   2.71828183,   7.3890561 ,  20.08553692,  54.59815003])
```

The function is included as an argument to the Python call of the Fortran subroutine, even though it was _not_ in the Fortran subroutine argument list. The "external" keyword refers to the C function generated by F2PY, not the Python function itself; the Python function is essentially supplied to that C function.

The callback function may also be set explicitly in the module; then it is not necessary to pass the function in the argument list to the Fortran function. This may be desired if the Fortran function calling the Python callback function is itself called by another Fortran function. Consider the following Fortran 77 subroutines:

```fortran
      subroutine f1()
      print *, "in f1, calling f2 twice.."
      call f2()
      call f2()
      return
      end

      subroutine f2()
cf2py intent(callback, hide) fpy
      external fpy
      print *, "in f2, calling f2py.."
      call fpy()
      return
      end
```

and wrap it using `f2py -c -m pfromf extcallback.f`. In Python:

```python
>>> import pfromf
>>> pfromf.f2()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
pfromf.error: Callback fpy not defined (as an argument or module pfromf attribute).

>>> def f(): print("python f")
...
>>> pfromf.fpy = f
>>> pfromf.f2()
in f2, calling f2py..
python f
>>> pfromf.f1()
in f1, calling f2 twice..
in f2, calling f2py..
python f
in f2, calling f2py..
python f
>>>
```

Note

When using modified Fortran code via `callstatement` or other directives, the wrapped Python function must be called as a callback; otherwise only the bare Fortran routine will be used.
For more details, see [numpy/numpy#26681](https://github.com/numpy/numpy/issues/26681#issuecomment-2466460943).

### Resolving arguments to call-back functions

F2PY generated interfaces are very flexible with respect to call-back arguments. For each call-back argument, an additional optional argument `<name>_extra_args` is introduced by F2PY. This argument can be used to pass extra arguments to user-provided call-back functions.

If an F2PY generated wrapper function expects the following call-back argument:

```python
def fun(a_1, ..., a_n):
    ...
    return x_1, ..., x_k
```

but the following Python function

```python
def gun(b_1, ..., b_m):
    ...
    return y_1, ..., y_l
```

is provided by a user, and in addition,

```python
fun_extra_args = (e_1, ..., e_p)
```

is used, then the following rules are applied when a Fortran or C function evaluates the call-back argument `gun`:

* If `p == 0` then `gun(a_1, ..., a_q)` is called, where `q = min(m, n)`.
* If `n + p <= m` then `gun(a_1, ..., a_n, e_1, ..., e_p)` is called.
* If `p <= m < n + p` then `gun(a_1, ..., a_q, e_1, ..., e_p)` is called, where `q = m - p`.
* If `p > m` then `gun(e_1, ..., e_m)` is called.
* If `n + p` is less than the number of required arguments of `gun`, then an exception is raised.

The function `gun` may return any number of objects as a tuple. Then the following rules are applied:

* If `k < l`, then `y_{k+1}, ..., y_l` are ignored.
* If `k > l`, then only `x_1, ..., x_l` are set.

## Common blocks

F2PY generates wrappers to `common` blocks defined in a routine signature block. Common blocks are visible to all Fortran codes linked to the current extension module, but not to other extension modules (this restriction is due to the way Python imports shared libraries). In Python, the F2PY wrappers to `common` blocks are `fortran` type objects that have (dynamic) attributes related to the data members of the common blocks.
When accessed, these attributes return NumPy array objects (multidimensional arrays are Fortran-contiguous) that directly link to the data members in the common blocks. Data members can be changed by direct assignment or by in-place changes to the corresponding array objects.

Consider the following Fortran 77 code:

```fortran
C FILE: COMMON.F
      SUBROUTINE FOO
      INTEGER I,X
      REAL A
      COMMON /DATA/ I,X(4),A(2,3)
      PRINT*, "I=",I
      PRINT*, "X=[",X,"]"
      PRINT*, "A=["
      PRINT*, "[",A(1,1),",",A(1,2),",",A(1,3),"]"
      PRINT*, "[",A(2,1),",",A(2,2),",",A(2,3),"]"
      PRINT*, "]"
      END
C END OF COMMON.F
```

and wrap it using `f2py -c -m common common.f`. In Python:

```python
>>> import common
>>> print(common.data.__doc__)
i : 'i'-scalar
x : 'i'-array(4)
a : 'f'-array(2,3)

>>> common.data.i = 5
>>> common.data.x[1] = 2
>>> common.data.a = [[1,2,3],[4,5,6]]
>>> common.foo()
 I= 5
 X=[ 0 2 0 0 ]
 A=[
 [ 1.00000000 , 2.00000000 , 3.00000000 ]
 [ 4.00000000 , 5.00000000 , 6.00000000 ]
 ]
>>> common.data.a[1] = 45
>>> common.foo()
 I= 5
 X=[ 0 2 0 0 ]
 A=[
 [ 1.00000000 , 2.00000000 , 3.00000000 ]
 [ 45.0000000 , 45.0000000 , 45.0000000 ]
 ]
>>> common.data.a  # a is Fortran-contiguous
array([[  1.,   2.,   3.],
       [ 45.,  45.,  45.]], dtype=float32)
>>> common.data.a.flags.f_contiguous
True
```

## Fortran 90 module data

The F2PY interface to Fortran 90 module data is similar to the handling of Fortran 77 common blocks. Consider the following Fortran 90 code:

```fortran
module mod
  integer i
  integer :: x(4)
  real, dimension(2,3) :: a
  real, allocatable, dimension(:,:) :: b
contains
  subroutine foo
    integer k
    print*, "i=",i
    print*, "x=[",x,"]"
    print*, "a=["
    print*, "[",a(1,1),",",a(1,2),",",a(1,3),"]"
    print*, "[",a(2,1),",",a(2,2),",",a(2,3),"]"
    print*, "]"
    print*, "Setting a(1,2)=a(1,2)+3"
    a(1,2) = a(1,2)+3
  end subroutine foo
end module mod
```

and wrap it using `f2py -c -m moddata moddata.f90`.
In Python:

```python
>>> import moddata
>>> print(moddata.mod.__doc__)
i : 'i'-scalar
x : 'i'-array(4)
a : 'f'-array(2,3)
b : 'f'-array(-1,-1), not allocated
foo()

Wrapper for ``foo``.

>>> moddata.mod.i = 5
>>> moddata.mod.x[:2] = [1,2]
>>> moddata.mod.a = [[1,2,3],[4,5,6]]
>>> moddata.mod.foo()
 i= 5
 x=[ 1 2 0 0 ]
 a=[
 [ 1.000000 , 2.000000 , 3.000000 ]
 [ 4.000000 , 5.000000 , 6.000000 ]
 ]
 Setting a(1,2)=a(1,2)+3
>>> moddata.mod.a  # a is Fortran-contiguous
array([[ 1.,  5.,  3.],
       [ 4.,  5.,  6.]], dtype=float32)
>>> moddata.mod.a.flags.f_contiguous
True
```

## Allocatable arrays

F2PY has basic support for Fortran 90 module allocatable arrays. Consider the following Fortran 90 code:

```fortran
module mod
  real, allocatable, dimension(:,:) :: b
contains
  subroutine foo
    integer k
    if (allocated(b)) then
      print*, "b=["
      do k = 1,size(b,1)
        print*, b(k,1:size(b,2))
      enddo
      print*, "]"
    else
      print*, "b is not allocated"
    endif
  end subroutine foo
end module mod
```

and wrap it using `f2py -c -m allocarr allocarr.f90`. In Python:

```python
>>> import allocarr
>>> print(allocarr.mod.__doc__)
b : 'f'-array(-1,-1), not allocated
foo()

Wrapper for ``foo``.

>>> allocarr.mod.foo()
 b is not allocated
>>> allocarr.mod.b = [[1, 2, 3], [4, 5, 6]]  # allocate/initialize b
>>> allocarr.mod.foo()
 b=[
   1.000000       2.000000       3.000000
   4.000000       5.000000       6.000000
 ]
>>> allocarr.mod.b  # b is Fortran-contiguous
array([[ 1.,  2.,  3.],
       [ 4.,  5.,  6.]], dtype=float32)
>>> allocarr.mod.b.flags.f_contiguous
True
>>> allocarr.mod.b = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # reallocate/initialize b
>>> allocarr.mod.foo()
 b=[
   1.000000       2.000000       3.000000
   4.000000       5.000000       6.000000
   7.000000       8.000000       9.000000
 ]
>>> allocarr.mod.b = None  # deallocate array
>>> allocarr.mod.foo()
 b is not allocated
```

# Signature file

The interface definition file (`.pyf`) is how you can fine-tune the interface between Python and Fortran. The syntax specification for signature files (`.pyf` files) is modeled on the Fortran 90/95 language specification.
Almost all Fortran standard constructs are understood, both in free and fixed format (recall that Fortran 77 is a subset of Fortran 90/95). F2PY introduces some extensions to the Fortran 90/95 language specification that help in the design of the Fortran-to-Python interface, making it more "Pythonic".

Signature files may contain arbitrary Fortran code, so that any Fortran 90/95 code can be treated as a signature file. F2PY silently ignores Fortran constructs that are irrelevant for creating the interface. However, this also means that syntax errors are not caught by F2PY and will only be caught when the library is built.

Note

Currently, F2PY may fail with some valid Fortran constructs. If this happens, you can check the [NumPy GitHub issue tracker](https://github.com/numpy/numpy/issues) for possible workarounds or work-in-progress ideas.

In general, the contents of signature files are case-sensitive. When scanning Fortran codes to generate a signature file, F2PY lowercases all cases automatically except in multi-line blocks or when the `--no-lower` option is used.

The syntax of signature files is presented below.

## Signature files syntax

### Python module block

A signature file may contain one (recommended) or more `python module` blocks. The `python module` block describes the contents of a Python/C extension module `<modulename>module.c` that F2PY generates.

Warning

Exception: if `<modulename>` contains a substring `__user__`, then the corresponding `python module` block describes the signatures of call-back functions (see [Call-back arguments](python-usage#call-back-arguments)).

A `python module` block has the following structure:

```
python module <modulename>
  [<usercode statement>]...
  [
  interface
    <usual signature statements>
    <usual user routine-defined signatures>
  end [interface]
  ]...
  [
  interface
    module <F90 modulename>
      [<F90 module data type declarations>]
      [<F90 module routine signatures>]
    end [module [<F90 modulename>]]
  end [interface]
  ]...
end [python module [<modulename>]]
```

Here brackets `[]` indicate an optional section, and dots `...` indicate one or more of the previous section. So, `[]...` is to be read as zero or more of the previous section.
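As a concrete illustration, a minimal `.pyf` file with a single `python module` block might look like the following sketch (the module name `spam` and routine `scale` are hypothetical, used only to show the structure):

```
!    -*- f90 -*-
python module spam
    interface
        subroutine scale(a, n, factor)
            integer intent(hide), depend(a) :: n = len(a)
            real*8 intent(in, out), dimension(n) :: a
            real*8 intent(in) :: factor
        end subroutine scale
    end interface
end python module spam
```

Given a matching Fortran implementation of `scale`, `f2py -c spam.pyf scale.f` would then build an extension module named `spam`.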
### Fortran/C routine signatures

The signature of a Fortran routine has the following structure:

```
[<typespec>] function | subroutine <routine name> \
             [ ( [<arguments>] ) ] [ result ( <entityname> ) ]
  [<argument/variable type declarations>]
  [<argument/variable attribute statements>]
  [<use statements>]
  [<common block statements>]
  [<other statements>]
end [ function | subroutine [<routine name>] ]
```

From a Fortran routine signature F2PY generates a Python/C extension function that has the following signature:

```python
def <routine name>(<required arguments>[, <optional arguments>]):
    ...
    return <return variables>
```

The signature of a Fortran block data has the following structure:

```
block data [ <block data name> ]
  [<variable type declarations>]
  [<variable attribute statements>]
  [<use statements>]
  [<common block statements>]
  [<include statements>]
end [ block data [<block data name>] ]
```

### Type declarations

The definition of the `<argument/variable type declaration>` part is

```
<typespec> [ [<attrspec>] :: ] <entitydecl>
```

where

```
<typespec> := byte | character [<charselector>]
           | complex [<kindselector>] | real [<kindselector>]
           | double complex | double precision
           | integer [<kindselector>] | logical [<kindselector>]

<charselector> := * <charlen>
               | ( [len=] <len> [ , [kind=] <kind> ] )
               | ( kind= <kind> [ , len= <len> ] )
<kindselector> := * <intlen> | ( [kind=] <kind> )

<entitydecl> := <name> [ [ * <charlen> ] [ ( <arrayspec> ) ]
                       | [ ( <arrayspec> ) ] * <charlen> ]
             | [ / <init_expr> / | = <init_expr> ] \
               [ , <entitydecl> ]
```

and

* `<attrspec>` is a comma separated list of attributes;
* `<arrayspec>` is a comma separated list of dimension bounds;
* `<init_expr>` is a C expression;
* `<intlen>` may be a negative integer for `integer` type specifications. In such cases `integer*<intlen>` represents unsigned C integers.

If an argument has no `<argument type declaration>`, its type is determined by applying `implicit` rules to its name.

### Statements

#### Attribute statements

The `<attribute statement>` is the `<argument/variable type declaration>` without the `<typespec>`. An attribute statement cannot contain other attributes, and its `<entitydecl>` can only be a list of names. See Attributes for more details on the attributes that can be used by F2PY.

#### Use statements

* The definition of the `<use statement>` part is

  ```
  use <modulename> [ , <rename_list> | , ONLY : <only_list> ]
  ```

  where

  ```
  <rename_list> := <local_name> => <use_name> [ , <rename_list> ]
  ```

* Currently F2PY uses `use` statements only for linking call-back modules and `external` arguments (call-back functions). See [Call-back arguments](python-usage#call-back-arguments).

#### Common block statements

* The definition of the `<common block statement>` part is

  ```
  common / <common name> / <shortentitydecl>
  ```

  where

  ```
  <shortentitydecl> := <name> [ ( <arrayspec> ) ] [ , <shortentitydecl> ]
  ```

* If a `python module` block contains two or more `common` blocks with the same name, the variables from the additional declarations are appended. The types of variables in `<shortentitydecl>` are defined using `<argument type declarations>`.
Note that the corresponding `<argument type declaration>` may contain array specifications; then these need not be specified in `<shortentitydecl>`.

#### Other statements

* The `<other statement>` part refers to any other Fortran language constructs that are not described above. F2PY ignores most of them except the following:

* `call` statements and function calls of `external` arguments (see more details on external arguments);

* `include` statements

  ```
  include '<filename>'
  include "<filename>"
  ```

  If a file `<filename>` does not exist, the `include` statement is ignored. Otherwise, the file `<filename>` is included into the signature file. `include` statements can be used in any part of a signature file, also outside the Fortran/C routine signature blocks.

* `implicit` statements

  ```
  implicit none
  implicit <list of implicit maps>
  ```

  where

  ```
  <implicit map> := <typespec> ( <list of letters or range of letters> )
  ```

  Implicit rules are used to determine the type specification of a variable (from the first letter of its name) if the variable is not defined using `<variable type declaration>`. The default implicit rules are given by:

  ```
  implicit real (a-h,o-z,$_), integer (i-m)
  ```

* `entry` statements

  ```
  entry <entry name> [([<arguments>])]
  ```

  F2PY generates wrappers for all entry names using the signature of the routine block.

  Note

  The `entry` statement can be used to describe the signature of an arbitrary subroutine or function, allowing F2PY to generate a number of wrappers from only one routine block signature. There are a few restrictions while doing this: `fortranname` cannot be used, and `callstatement` and `callprotoargument` can be used only if they are valid for all entry routines, etc.

#### F2PY statements

In addition, F2PY introduces the following statements:

`threadsafe`

Uses a `Py_BEGIN_ALLOW_THREADS .. Py_END_ALLOW_THREADS` block around the call to the Fortran/C function.

`callstatement <C-expr|multi-line block>`

Replaces the F2PY generated call statement to the Fortran/C function with `<C-expr|multi-line block>`. The wrapped Fortran/C function is available as `(*f2py_func)`. To raise an exception, set `f2py_success = 0` in `<C-expr|multi-line block>`.
`callprotoargument <C-typespecs>`

When the `callstatement` statement is used, F2PY may not generate proper prototypes for Fortran/C functions (because the C expression may contain function calls, and F2PY has no way to determine what the proper prototype should be). With this statement you can explicitly specify the arguments of the corresponding prototype:

```
extern <return type> F_FUNC(<routine name>,<ROUTINE NAME>)(<callprotoargument>);
```

`fortranname [<actual Fortran/C routine name>]`

F2PY allows the use of an arbitrary `<routine name>` for a given Fortran/C function. Then this statement is used for the `<actual Fortran/C routine name>`. If the `fortranname` statement is used without `<actual Fortran/C routine name>`, then a dummy wrapper is generated.

`usercode <multi-line block>`

When this is used inside a `python module` block, the given C code will be inserted into the generated C/API source just before the wrapper function definitions. Here you can define arbitrary C functions to be used for the initialization of optional arguments. For example, if `usercode` is used twice inside a `python module` block, then the second multi-line block is inserted after the definition of the external routines.

When used inside a routine signature, the given C code will be inserted into the corresponding wrapper function just after the declaration of variables but before any C statements. So, the `usercode` follow-up can contain both declarations and C statements.

When used inside the first `interface` block, the given C code will be inserted at the end of the initialization function of the extension module. This is how the extension module's dictionary can be modified, which has many use cases; for example, to define additional variables.

`pymethoddef <multi-line block>`

This is a multi-line block which will be inserted into the definition of the module's `PyMethodDef` array. It must be a comma-separated list of C arrays (see the [Extending and Embedding](https://docs.python.org/extending/index.html) Python documentation for details). The `pymethoddef` statement can be used only inside a `python module` block.

### Attributes

The following attributes can be used by F2PY.
`optional`

The corresponding argument is moved to the end of the `<optional arguments>` list. A default value for an optional argument can be specified via `<init_expr>` (see the `entitydecl` definition).

Note

* The default value must be given as a valid C expression.
* Whenever `<init_expr>` is used, the `optional` attribute is set automatically by F2PY.
* For an optional array argument, all its dimensions must be bounded.

`required`

The corresponding argument with this attribute is considered mandatory. This is the default. `required` should only be specified if there is a need to disable the automatic `optional` setting when `<init_expr>` is used.

If a Python `None` object is used as a required argument, the argument is treated as optional. That is, in the case of array arguments, the memory is allocated. If `<init_expr>` is given, then the corresponding initialization is carried out.

`dimension(<arrayspec>)`

The corresponding variable is considered as an array with dimensions given in `<arrayspec>`.

`intent(<intentspec>)`

This specifies the "intention" of the corresponding argument. `<intentspec>` is a comma separated list of the following keys:

* `in`

  The corresponding argument is considered to be input-only. This means that the value of the argument is passed to the Fortran/C function and that the function is expected not to change the value of this argument.

* `inout`

  The corresponding argument is marked for input/output or as an _in situ_ output argument. `intent(inout)` arguments can only be [contiguous](../glossary#term-contiguous) NumPy arrays (in either the Fortran or C sense) with proper type and size. The latter coincides with the default contiguous concept used in NumPy and is effective only if `intent(c)` is used. F2PY assumes Fortran contiguous arguments by default.

  Note

  Using `intent(inout)` is generally not recommended, as it can cause unexpected results. For example, scalar arguments using `intent(inout)` are assumed to be array objects in order to have _in situ_ changes be effective. Use `intent(in,out)` instead. See also the `intent(inplace)` attribute.
* `inplace`

  The corresponding argument is considered to be an input/output or _in situ_ output argument. `intent(inplace)` arguments must be NumPy arrays of proper size. If the type of an array is not "proper" or the array is non-contiguous, then the array will be modified in place to fix the type and make it contiguous.

  Note

  Using `intent(inplace)` is generally not recommended either. For example, when slices have been taken from an `intent(inplace)` argument, then after the in-place changes the data pointers for the slices may point to an unallocated memory area.

* `out`

  The corresponding argument is considered to be a return variable. It is appended to the `<returned variables>` list. Using `intent(out)` sets `intent(hide)` automatically, unless `intent(in)` or `intent(inout)` is also specified.

  By default, returned multidimensional arrays are Fortran-contiguous. If the `intent(c)` attribute is used, then the returned multidimensional arrays are C-contiguous.

* `hide`

  The corresponding argument is removed from the list of required or optional arguments. Typically `intent(hide)` is used with `intent(out)` or when `<init_expr>` completely determines the value of the argument, as in the following example:

  ```
  integer intent(hide),depend(a) :: n = len(a)
  real intent(in),dimension(n) :: a
  ```

* `c`

  The corresponding argument is treated as a C scalar or C array argument. In the case of a scalar argument, its value is passed to the C function as a C scalar argument (recall that Fortran scalar arguments are actually C pointer arguments). In the case of an array argument, the wrapper function is assumed to treat multidimensional arrays as C-contiguous arrays.

  There is no need to use `intent(c)` for one-dimensional arrays, irrespective of whether the wrapped function is in Fortran or C. This is because the concepts of Fortran and C contiguity overlap in one-dimensional cases.

  If `intent(c)` is used as a statement without an entity declaration list, then F2PY adds the `intent(c)` attribute to all arguments.
  Also, when wrapping C functions, the `intent(c)` attribute must be used for the routine name itself in order to disable the Fortran-specific `F_FUNC(..,..)` macros.

* `cache`

  The corresponding argument is treated as junk memory. No Fortran nor C contiguity checks are carried out. Using `intent(cache)` makes sense only for array arguments, and also in conjunction with the `intent(hide)` or `optional` attributes.

* `copy`

  Ensures that the original contents of an `intent(in)` argument are preserved. Typically used with the `intent(in,out)` attribute. F2PY creates an optional argument `overwrite_<argument name>` with the default value `0`.

* `overwrite`

  Indicates that the original contents of an `intent(in)` argument may be altered by the Fortran/C function. F2PY creates an optional argument `overwrite_<argument name>` with the default value `1`.

* `out=<new name>`

  Replaces the returned name with `<new name>` in the `__doc__` string of the wrapper function.

* `callback`

  Constructs an external function suitable for calling Python functions from Fortran. `intent(callback)` must be specified before the corresponding `external` statement. If the 'argument' is not in the argument list, then it will be added to the Python wrapper, but only by initializing an external function.

  Note

  Use `intent(callback)` in situations where the Fortran/C code assumes that the user implemented a function with a given prototype and linked it into an executable. Don't use `intent(callback)` if the function appears in the argument list of a Fortran routine.

  With the `intent(hide)` or `optional` attributes specified, and when using a wrapper function without specifying the callback argument in the argument list, the call-back function is assumed to be found in the namespace of the F2PY generated extension module, where it can be set as a module attribute by a user.

* `aux`

  Defines an auxiliary C variable in the F2PY generated wrapper function. Useful for saving parameter values so that they can be accessed in initialization expressions for other variables.
  Note

  `intent(aux)` silently implies `intent(c)`.

The following rules apply:

* If none of `intent(in | inout | out | hide)` are specified, `intent(in)` is assumed.
* `intent(in,inout)` is `intent(in)`.
* `intent(in,hide)` or `intent(inout,hide)` is `intent(hide)`.
* `intent(out)` is `intent(out,hide)` unless `intent(in)` or `intent(inout)` is specified.
* If `intent(copy)` or `intent(overwrite)` is used, then an additional optional argument is introduced with the name `overwrite_<argument name>` and a default value of 0 or 1, respectively.
* `intent(inout,inplace)` is `intent(inplace)`.
* `intent(in,inplace)` is `intent(inplace)`.
* `intent(hide)` disables `optional` and `required`.

`check([<C-booleanexpr>])`

Performs a consistency check on the arguments by evaluating `<C-booleanexpr>`; if `<C-booleanexpr>` returns 0, an exception is raised.

Note

If `check(..)` is not used, then F2PY automatically generates a few standard checks (e.g. in the case of an array argument, it checks for proper shape and size). Use `check()` to disable the checks generated by F2PY.

`depend([<names>])`

This declares that the corresponding argument depends on the values of the variables in the `<names>` list. For example, `<init_expr>` may use the values of other arguments. Using information given by `depend(..)` attributes, F2PY ensures that arguments are initialized in a proper order. If the `depend(..)` attribute is not used, then F2PY determines dependence relations automatically. Use `depend()` to disable the dependence relations generated by F2PY.

When you edit dependence relations that were initially generated by F2PY, be careful not to break the dependence relations of other relevant variables. Another thing to watch out for is cyclic dependencies. F2PY is able to detect cyclic dependencies when constructing wrappers, and it complains if any are found.

`allocatable`

The corresponding variable is a Fortran 90 allocatable array defined as Fortran 90 module data.

`external`

The corresponding argument is a function provided by the user.
The signature of this call-back function can be defined

* in a `__user__` module block,
* or by a demonstrative (or real, if the signature file is a real Fortran code) call in the `<other statements>` block.

For example, F2PY generates from:

```fortran
external cb_sub, cb_fun
integer n
real a(n),r
call cb_sub(a,n)
r = cb_fun(4)
```

the following call-back signatures:

```
subroutine cb_sub(a,n)
    real dimension(n) :: a
    integer optional,check(len(a)>=n),depend(a) :: n=len(a)
end subroutine cb_sub
function cb_fun(e_4_e) result (r)
    integer :: e_4_e
    real :: r
end function cb_fun
```

The corresponding user-provided Python functions are then:

```python
def cb_sub(a, [n]):
    ...
    return
def cb_fun(e_4_e):
    ...
    return r
```

See also the `intent(callback)` attribute.

`parameter`

This indicates that the corresponding variable is a parameter and must have a fixed value. F2PY replaces all parameter occurrences by their corresponding values.

### Extensions

#### F2PY directives

F2PY directives allow using F2PY signature file constructs in Fortran 77/90 source codes. With this feature one can (almost) completely skip the intermediate signature file generation and apply F2PY directly to Fortran source codes.

F2PY directives have the following form:

```
<comment char>f2py ...
```

where the allowed comment characters for fixed and free format Fortran codes are `cC*!#` and `!`, respectively. Everything that follows `<comment char>f2py` is ignored by the compiler but read by F2PY as a normal non-comment Fortran line.

Note

When F2PY finds a line with an F2PY directive, the directive is first replaced by 5 spaces and then the line is reread.

For fixed format Fortran codes, `<comment char>` must be at the first column of a file. For free format Fortran codes, the F2PY directives can appear anywhere in a file.

#### C expressions

C expressions are used in the following parts of signature files:

* `<init_expr>` for variable initialization;
* `<C-booleanexpr>` of the `check` attribute;
* `<arrayspec>` of the `dimension` attribute;
* the `callstatement` statement, where also a C multi-line block can be used.
A C expression may contain:

* standard C constructs;
* functions from `math.h` and `Python.h`;
* variables from the argument list, presumably initialized before, according to the given dependence relations;
* the following CPP macros:

  * `f2py_rank(<name>)`: Returns the rank of an array `<name>`.
  * `f2py_shape(<name>, <n>)`: Returns the `<n>`-th dimension of an array `<name>`.
  * `f2py_len(<name>)`: Returns the length of an array `<name>`.
  * `f2py_size(<name>)`: Returns the size of an array `<name>`.
  * `f2py_itemsize(<name>)`: Returns the itemsize of an array `<name>`.
  * `f2py_slen(<name>)`: Returns the length of a string `<name>`.

For initializing an array `<array name>`, F2PY generates a loop over all indices and dimensions that executes the following pseudo-statement:

```
<array name>(_i[0],_i[1],...) = <init_expr>;
```

where `_i[<i>]` refers to the `<i>`-th index value, which runs from `0` to `shape(<array name>,<i>)-1`.

For example, a function `myrange(n)` generated from the following signature

```
subroutine myrange(a,n)
    fortranname           ! myrange is a dummy wrapper
    integer intent(in) :: n
    real*8 intent(c,out),dimension(n),depend(n) :: a = _i[0]
end subroutine myrange
```

is equivalent to `numpy.arange(n, dtype=float)`.

Warning

F2PY may lower cases also in C expressions when scanning Fortran codes (see the `--[no]-lower` option).

#### Multi-line blocks

A multi-line block starts with `'''` (triple single-quotes) and ends with `'''` on some _strictly_ subsequent line. Multi-line blocks can be used only within `.pyf` files. The contents of a multi-line block can be arbitrary (except that it cannot contain `'''`) and no transformations (e.g. lowering cases) are applied to it.

Currently, multi-line blocks can be used in the following constructs:

* as the C expression of the `callstatement` statement;
* as the C type specification of the `callprotoargument` statement;
* as the C code block of the `usercode` statement;
* as the list of C arrays of the `pymethoddef` statement;
* as a documentation string.
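For instance, a `usercode` statement carrying a C multi-line block might look like the following sketch (the module name and the C helper are hypothetical, shown only to illustrate the `'''` delimiters):

```
python module example          ! hypothetical module name
    usercode '''
/* Arbitrary C code: inserted verbatim before the wrapper definitions. */
static double square(double x) { return x * x; }
'''
    interface
        ! ... routine signatures that can refer to square(), e.g. in a
        ! callstatement or in initialization expressions ...
    end interface
end python module example
```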
### Extended char-selector

F2PY extends the char-selector specification, usable within a signature file or an F2PY directive, as follows:

    <extended-charselector> := <charselector>
                             | (f2py_len= <len>)

See [Character strings](advanced/use_cases#character-strings) for usage.

# Using F2PY

This page contains a reference to all command-line options for the `f2py` command, as well as a reference to internal functions of the `numpy.f2py` module.

## Using `f2py` as a command-line tool

When used as a command-line tool, `f2py` has three major modes, distinguished by the usage of the `-c` and `-h` switches.

### 1. Signature file generation

To scan Fortran sources and generate a signature file, use

    f2py -h <filename.pyf> <options> <fortran files>   \
      [[ only: <fortran functions>  : ]                \
       [ skip: <fortran functions>  : ]]...            \
      [<fortran files> ...]

Note
A Fortran source file can contain many routines, and it is often not necessary to allow all routines to be usable from Python. In such cases, either specify which routines should be wrapped (in the `only: .. :` part) or which routines F2PY should ignore (in the `skip: .. :` part). F2PY has no concept of a “per-file” `skip` or `only` list, so if functions are listed in `only`, no other functions will be taken from any other files.

If `<filename.pyf>` is specified as `stdout`, then signatures are written to standard output instead of a file.

Among other options (see below), the following can be used in this mode:

`--overwrite-signature`
Overwrites an existing signature file.

### 2. Extension module construction

To construct an extension module, use

    f2py -m <modulename> <options> <fortran files>   \
      [[ only: <fortran functions>  : ]              \
       [ skip: <fortran functions>  : ]]...          \
      [<fortran files> ...]

The constructed extension module is saved as `<modulename>module.c` to the current directory. Here `<fortran files>` may also contain signature files. Among other options (see below), the following options can be used in this mode:

`--debug-capi`
Adds debugging hooks to the extension module. When using this extension module, various diagnostic information about the wrapper is written to standard output, for example, the values of variables, the steps taken, etc.
`-include'<includefile>'`
Add a CPP `#include` statement to the extension module source. `<includefile>` should be given in one of the following forms:

    "filename.ext"
    <filename.ext>

The include statement is inserted just before the wrapper functions. This feature enables using arbitrary C functions (defined in `<includefile>`) in F2PY generated wrappers.

Note
This option is deprecated. Use the `usercode` statement to specify C code snippets directly in signature files.

`--[no-]wrap-functions`
Create Fortran subroutine wrappers to Fortran functions. `--wrap-functions` is the default because it ensures maximum portability and compiler independence.

`--[no-]freethreading-compatible`
Create a module that declares it does or doesn’t require the GIL. The default is `--no-freethreading-compatible` for backwards compatibility. Inspect the Fortran code you are wrapping for thread safety issues before passing `--freethreading-compatible`, as `f2py` does not analyze Fortran code for thread safety issues.

`--include-paths "<path1>:<path2>..."`
Search include files from the given directories.

Note
The paths are to be separated by the correct operating system separator [`pathsep`](https://docs.python.org/3/library/os.html#os.pathsep "\(in Python v3.13\)"), that is `:` on Linux / MacOS and `;` on Windows. In `CMake` this corresponds to using `$<SEMICOLON>`.

`--help-link [<list of resources names>]`
List system resources found by `numpy_distutils/system_info.py`. For example, try `f2py --help-link lapack_opt`.

### 3. Building a module

To build an extension module, use

    f2py -c <options> <fortran files>       \
      [[ only: <fortran functions>  : ]     \
       [ skip: <fortran functions>  : ]]... \
      [ <fortran/c source files> ] [ <.o, .a, .so files> ]

If `<fortran files>` contains a signature file, then the source for an extension module is constructed, all Fortran and C sources are compiled, and finally all object and library files are linked to the extension module `<modulename>.so`, which is saved into the current directory. If `<fortran files>` does not contain a signature file, then an extension module is constructed by scanning all Fortran source codes for routine signatures, before proceeding to build the extension module.
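Because the separator expected by `--include-paths` is platform dependent, the argument can be assembled portably from Python with `os.pathsep`. A small sketch (the directory names here are invented for illustration):

```python
import os

# Hypothetical include directories, for illustration only.
include_dirs = ["/opt/mylib/include", "/usr/local/include"]

# os.pathsep is ':' on Linux/macOS and ';' on Windows, which is
# exactly the separator f2py expects for --include-paths.
args = ["--include-paths", os.pathsep.join(include_dirs)]
print(args)
```

The resulting `args` list can then be appended to an `f2py` invocation, for example via `subprocess.run`.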
Warning
From Python 3.12 onwards, `distutils` has been removed. Use environment variables or native files to interact with `meson` instead. See its [FAQ](https://mesonbuild.com/howtox.html) for more information.

Among other options (see below) and options described for previous modes, the following can be used.

Note
Changed in version 1.26.0: There are now two separate build backends which can be used, `distutils` and `meson`. Users are **strongly** recommended to switch to `meson` since it is the default on Python `3.12` and above.

Common build flags:

`--backend <backend_type>`
Specify the build backend for the compilation process. The supported backends are `meson` and `distutils`. If not specified, defaults to `distutils`. On Python 3.12 or higher, the default is `meson`.

`--f77flags=<string>`
Specify F77 compiler flags.

`--f90flags=<string>`
Specify F90 compiler flags.

`--debug`
Compile with debugging information.

`-l<libname>`
Use the library `<libname>` when linking.

`-D<macro>[=<define>]`
Define macro `<macro>` as `<define>`.

`-U<macro>`
Undefine macro `<macro>`.

`-I<dir>`
Append directory `<dir>` to the list of directories searched for include files.

`-L<dir>`
Add directory `<dir>` to the list of directories to be searched for `-l`.

The `meson` specific flags are:

`--dep <dependency>` **meson only**
Specify a meson dependency for the module. This may be passed multiple times for multiple dependencies. Dependencies are stored in a list for further processing. Example: `--dep lapack --dep scalapack`. This will identify “lapack” and “scalapack” as dependencies and remove them from argv, leaving a dependencies list containing `["lapack", "scalapack"]`.

The older `distutils` flags are:

`--help-fcompiler` **no meson**
List the available Fortran compilers.

`--fcompiler=<Vendor>` **no meson**
Specify a Fortran compiler type by vendor.
`--f77exec=<path>` **no meson**
Specify the path to an F77 compiler.

`--f90exec=<path>` **no meson**
Specify the path to an F90 compiler.

`--opt=<string>` **no meson**
Specify optimization flags.

`--arch=<string>` **no meson**
Specify architecture specific optimization flags.

`--noopt` **no meson**
Compile without optimization flags.

`--noarch` **no meson**
Compile without arch-dependent optimization flags.

`--link-<resource>` **no meson**
Link the extension module with `<resource>` as defined by `numpy_distutils/system_info.py`. E.g. to link with optimized LAPACK libraries (vecLib on MacOSX, ATLAS elsewhere), use `--link-lapack_opt`. See also the `--help-link` switch.

Note
The `f2py -c` option must be applied either to an existing `.pyf` file (plus the source/object/library files) or one must specify the `-m <modulename>` option (plus the sources/object/library files). Use one of the following options:

    f2py -c -m fib1 fib1.f

or

    f2py -m fib1 fib1.f -h fib1.pyf
    f2py -c fib1.pyf fib1.f

For more information, see the [Building C and C++ Extensions](https://docs.python.org/3/extending/building.html) Python documentation for details.

When building an extension module, a combination of the following macros may be required for non-gcc Fortran compilers:

    -DPREPEND_FORTRAN
    -DNO_APPEND_FORTRAN
    -DUPPERCASE_FORTRAN

To test the performance of F2PY generated interfaces, use `-DF2PY_REPORT_ATEXIT`. Then a report of various timings is printed out at the exit of Python. This feature may not work on all platforms; currently only Linux is supported.

To see whether an F2PY generated interface performs copies of array arguments, use `-DF2PY_REPORT_ON_ARRAY_COPY=<int>`. When the size of an array argument is larger than `<int>`, a message about the copying is sent to `stderr`.

### Other options

`-m <modulename>`
Name of an extension module. Default is `untitled`.

Warning
Don’t use this option if a signature file (`*.pyf`) is used.

Changed in version 1.26.3: Will ignore `-m` if a `pyf` file is provided.

`--[no-]lower`
Do [not] lower the cases in `<fortran files>`.
By default, `--lower` is assumed with the `-h` switch, and `--no-lower` without the `-h` switch.

`-include<header>`
Writes additional headers in the C wrapper; can be passed multiple times and generates `#include <header>` each time. Note that this is meant to be passed in single quotes and without spaces, for example `'-include<stdio.h>'`.

`--build-dir <dirname>`
All F2PY generated files are created in `<dirname>`. Default is `tempfile.mkdtemp()`.

`--f2cmap <filename>`
Load Fortran-to-C `KIND` specifications from the given file.

`--quiet`
Run quietly.

`--verbose`
Run with extra verbosity.

`--skip-empty-wrappers`
Do not generate wrapper files unless required by the inputs. This is a backwards compatibility flag to restore pre-1.22.4 behavior.

`-v`
Print the F2PY version and exit.

Execute `f2py` without any options to get an up-to-date list of available options.

## Python module `numpy.f2py`

Warning
Changed in version 2.0.0: There used to be a `f2py.compile` function, which was removed; users may wrap `python -m numpy.f2py` via `subprocess.run` manually, and set environment variables to interact with `meson` as required.

When using `numpy.f2py` as a module, the following functions can be invoked.

Fortran to Python Interface Generator.

Copyright 1999 – 2011 Pearu Peterson all rights reserved. Copyright 2011 – present NumPy Developers. Permission to use, modify, and distribute this software is given under the terms of the NumPy License. NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK.

numpy.f2py.get_include()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/f2py/__init__.py#L25-L69)

Return the directory that contains the `fortranobject.c` and `.h` files.

Note
This function is not needed when building an extension with [`numpy.distutils`](../reference/distutils#module-numpy.distutils "numpy.distutils") directly from `.f` and/or `.pyf` files in one go.

Python extension modules built with f2py-generated code need to use `fortranobject.c` as a source file, and include the `fortranobject.h` header. This function can be used to obtain the directory containing both of these files.

Returns:
**include_path** str
Absolute path to the directory containing `fortranobject.c` and `fortranobject.h`.
See also
[`numpy.get_include`](../reference/generated/numpy.get_include#numpy.get_include "numpy.get_include")
function that returns the numpy include directory

#### Notes

New in version 1.21.1.

Unless the build system you are using has specific support for f2py, building a Python extension using a `.pyf` signature file is a two-step process. For a module `mymod`:

* Step 1: run `python -m numpy.f2py mymod.pyf --quiet`. This generates `mymodmodule.c` and (if needed) `mymod-f2pywrappers.f` files next to `mymod.pyf`.
* Step 2: build your Python extension module. This requires the following source files:
  * `mymodmodule.c`
  * `mymod-f2pywrappers.f` (if it was generated in Step 1)
  * `fortranobject.c`

numpy.f2py.run_main(_comline_list_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/f2py/f2py2e.py#L426-L511)

Equivalent to running:

    f2py <args>

where `<args> = ' '.join(comline_list)`, but in Python. Unless `-h` is used, this function returns a dictionary containing information on generated modules and their dependencies on source files.

You cannot build extension modules with this function, that is, using `-c` is not allowed. Use the `f2py` command-line tool instead.

#### Examples

The command `f2py -m scalar scalar.f` can be executed from Python as follows.

    >>> import numpy.f2py
    >>> r = numpy.f2py.run_main(['-m','scalar','doc/source/f2py/scalar.f'])
    Reading fortran codes...
            Reading file 'doc/source/f2py/scalar.f' (format:fix,strict)
    Post-processing...
            Block: scalar
                            Block: FOO
    Building modules...
            Building module "scalar"...
            Wrote C/API module "scalar" to file "./scalarmodule.c"
    >>> print(r)
    {'scalar': {'h': ['/home/users/pearu/src_cvs/f2py/src/fortranobject.h'],
                'csrc': ['./scalarmodule.c',
                         '/home/users/pearu/src_cvs/f2py/src/fortranobject.c']}}

## Automatic extension module generation

If you want to distribute your f2py extension module, then you only need to include the .pyf file and the Fortran code.
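Since the `numpy.f2py.compile` helper was removed in NumPy 2.0, builds can be scripted by shelling out to the CLI instead. A minimal sketch, where the helper names are made up and actually running the build requires NumPy plus a working Fortran compiler on the target machine:

```python
import subprocess
import sys

def f2py_command(source, module_name, extra_args=()):
    """Assemble the argv for ``python -m numpy.f2py -c`` (hypothetical helper)."""
    return [sys.executable, "-m", "numpy.f2py",
            "-c", source, "-m", module_name, *extra_args]

def build_module(source, module_name):
    # Runs the actual compilation; needs numpy and a Fortran compiler installed.
    return subprocess.run(f2py_command(source, module_name),
                          capture_output=True, text=True)
```

For example, `build_module("fib1.f", "fib1")` mirrors the command-line invocation `f2py -c fib1.f -m fib1`.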
The distutils extensions in NumPy allow you to define an extension module entirely in terms of this interface file. A valid `setup.py` file allowing distribution of the `add.f` module (as part of the package `f2py_examples`, so that it would be loaded as `f2py_examples.add`) is:

    def configuration(parent_package='', top_path=None):
        from numpy.distutils.misc_util import Configuration
        config = Configuration('f2py_examples',parent_package, top_path)
        config.add_extension('add', sources=['add.pyf','add.f'])
        return config

    if __name__ == '__main__':
        from numpy.distutils.core import setup
        setup(**configuration(top_path='').todict())

Installation of the new package is easy using:

    pip install .

assuming you have the proper permissions to write to the main site-packages directory for the version of Python you are using. For the resulting package to work, you need to create a file named `__init__.py` (in the same directory as `add.pyf`). Notice the extension module is defined entirely in terms of the `add.pyf` and `add.f` files. The conversion of the .pyf file to a .c file is handled by [`numpy.distutils`](../reference/distutils#module-numpy.distutils "numpy.distutils").

# F2PY and Conda on Windows

As a convenience measure, we will additionally assume the existence of `scoop`, which can be used to install tools without administrative access.

    Invoke-Expression (New-Object System.Net.WebClient).DownloadString('https://get.scoop.sh')

Now we will set up a `conda` environment.

    scoop install miniconda3
    # For conda activate / deactivate in powershell
    conda install -n root -c pscondaenvs pscondaenvs
    Powershell -c Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
    conda init powershell
    # Open a new shell for the rest

`conda` pulls packages from `msys2`; however, the UX is different enough to warrant a separate discussion.
Warning
As of 30-01-2022, the [MSYS2 binaries](https://github.com/conda-forge/conda-forge.github.io/issues/1044) shipped with `conda` are **outdated** and this approach is **not preferred**.

# F2PY and Windows

Warning
F2PY support for Windows is not always on par with Linux support.

Note
[SciPy’s documentation](http://scipy.github.io/devdocs/building/index.html#system-level-dependencies) has some information on system-level dependencies which are well tested for Fortran as well.

Broadly speaking, there are two issues working with F2PY on Windows:

* the lack of actively developed FOSS Fortran compilers, and,
* the linking issues related to the C runtime library for building Python-C extensions.

The focus of this section is to establish a guideline for developing and extending Fortran modules for Python natively, via F2PY on Windows. Currently supported toolchains are:

* Mingw-w64 C/C++/Fortran compilers
* Intel compilers
* Clang-cl + Flang
* MSVC + Flang

## Overview

From a user perspective, the most UNIX compatible Windows development environment is through emulation, either via the Windows Subsystem for Linux, or facilitated by Docker. In a similar vein, traditional virtualization methods like VirtualBox are also reasonable methods to develop UNIX tools on Windows.

Native Windows support is typically stunted beyond the usage of commercial compilers. However, as of 2022, most commercial compilers have free plans which are sufficient for general use. Additionally, the Fortran language features supported by `f2py` (partial coverage of Fortran 2003) mean that newer toolchains are often not required.

Briefly, then, for an end user, in order of use:

Classic Intel Compilers (commercial)
These are maintained actively, though licensing restrictions may apply as further detailed in [F2PY and Windows Intel Fortran](intel#f2py-win-intel). Suitable for general use for those building native Windows programs by building off of MSVC.
MSYS2 (FOSS)
In conjunction with the `mingw-w64` project, `gfortran` and `gcc` toolchains can be used to natively build Windows programs.

Windows Subsystem for Linux
Assuming the usage of `gfortran`, this can be used for cross-compiling Windows applications, but is significantly more complicated.

Conda
Windows support for compilers in `conda` is facilitated by pulling MSYS2 binaries; however, these [are outdated](https://github.com/conda-forge/conda-forge.github.io/issues/1044), and therefore not recommended (as of 30-01-2022).

PGI Compilers (commercial)
Unmaintained but sufficient if an existing license is present. Works natively, but has been superseded by the Nvidia HPC SDK, with no [native Windows support](https://developer.nvidia.com/nvidia-hpc-sdk-downloads#collapseFour).

Cygwin (FOSS)
Can also be used for `gfortran`. However, the POSIX API compatibility layer provided by Cygwin is meant to compile UNIX software on Windows, instead of building native Windows programs. This means cross compilation is required.

The compilation suites described so far are compatible with the [now deprecated](https://github.com/numpy/numpy/pull/20875) `np.distutils` build backend which is exposed by the F2PY CLI. Additional build system usage (`meson`, `cmake`) as described in [F2PY and build systems](../buildtools/index#f2py-bldsys) allows for a more flexible set of compiler backends including:

Intel oneAPI
The newer Intel compilers (`ifx`, `icx`) are based on LLVM and can be used for native compilation. Licensing requirements can be onerous.

Classic Flang (FOSS)
The backbone of the PGI compilers was cannibalized to form the “classic” or [legacy version of Flang](https://github.com/flang-compiler/flang). This may be compiled from source and used natively. [LLVM Flang](https://releases.llvm.org/11.0.0/tools/flang/docs/ReleaseNotes.html) does not support Windows yet (30-01-2022).

LFortran (FOSS)
One of two LLVM based compilers.
Not all of F2PY supported Fortran can be compiled yet (30-01-2022), but it uses MSVC for native linking.

## Baseline

For this document we will assume the following basic tools:

* The IDE being considered is the community supported [Microsoft Visual Studio Code](https://code.visualstudio.com/Download)
* The terminal being used is the [Windows Terminal](https://www.microsoft.com/en-us/p/windows-terminal/9n0dx20hk701?activetab=pivot:overviewtab)
* The shell environment is assumed to be [Powershell 7.x](https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.1)
* Python 3.10 from [the Microsoft Store](https://www.microsoft.com/en-us/p/python-310/9pjpw5ldxlz5), which can be tested with `Get-Command python.exe` resolving to `C:\Users\$USERNAME\AppData\Local\Microsoft\WindowsApps\python.exe`
* The Microsoft Visual C++ (MSVC) toolset

With this baseline configuration, we will further consider a configuration matrix as follows:

Support matrix; exe implies a Windows installer

| **Fortran Compiler** | **C/C++ Compiler** | **Source** |
|---|---|---|
| Intel Fortran | MSVC / ICC | exe |
| GFortran | MSVC | MSYS2/exe |
| GFortran | GCC | WSL |
| Classic Flang | MSVC | Source / Conda |
| Anaconda GFortran | Anaconda GCC | exe |

For an understanding of the key issues motivating the need for such a matrix, [Pauli Virtanen’s in-depth post on wheels with Fortran for Windows](https://pav.iki.fi/blog/2017-10-08/pywingfortran.html#building-python-wheels-with-fortran-for-windows) is an excellent resource. An entertaining explanation of an application binary interface (ABI) can be found in this post by [JeanHeyd Meneide](https://thephd.dev/binary-banshees-digital-demons-abi-c-c++-help-me-god-please).
## PowerShell and MSVC

MSVC is installed either via the Visual Studio Bundle or the lighter (preferred) [Build Tools for Visual Studio](https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019) with the `Desktop development with C++` setting.

Note
This can take a significant amount of time as it includes a download of around 2GB and requires a restart.

It is possible to use the resulting environment from a [standard command prompt](https://docs.microsoft.com/en-us/cpp/build/building-on-the-command-line?view=msvc-160#developer_command_file_locations). However, it is more pleasant to use a [developer powershell](https://docs.microsoft.com/en-us/visualstudio/ide/reference/command-prompt-powershell?view=vs-2019), with a [profile in Windows Terminal](https://techcommunity.microsoft.com/t5/microsoft-365-pnp-blog/add-developer-powershell-and-developer-command-prompt-for-visual/ba-p/2243078). This can be achieved by adding the following block to the `profiles->list` section of the JSON file used to configure Windows Terminal (see `Settings->Open JSON file`):

    {
      "name": "Developer PowerShell for VS 2019",
      "commandline": "powershell.exe -noe -c \"$vsPath = (Join-Path ${env:ProgramFiles(x86)} -ChildPath 'Microsoft Visual Studio\\2019\\BuildTools'); Import-Module (Join-Path $vsPath 'Common7\\Tools\\Microsoft.VisualStudio.DevShell.dll'); Enter-VsDevShell -VsInstallPath $vsPath -SkipAutomaticLocation\"",
      "icon": "ms-appx:///ProfileIcons/{61c54bbd-c2c6-5271-96e7-009a87ff44bf}.png"
    }

Now, testing the compiler toolchain could look like:

    # New Windows Developer Powershell instance / tab
    # or
    $vsPath = (Join-Path ${env:ProgramFiles(x86)} -ChildPath 'Microsoft Visual Studio\\2019\\BuildTools');
    Import-Module (Join-Path $vsPath 'Common7\\Tools\\Microsoft.VisualStudio.DevShell.dll');
    Enter-VsDevShell -VsInstallPath $vsPath -SkipAutomaticLocation
    **********************************************************************
    ** Visual Studio 2019 Developer PowerShell
    v16.11.9
    ** Copyright (c) 2021 Microsoft Corporation
    **********************************************************************
    cd $HOME
    echo '#include <cstdio>' > blah.cpp; echo 'int main(){printf("Hi");return 1;}' >> blah.cpp
    cl blah.cpp
    .\blah.exe
    # Hi
    rm blah.cpp

It is also possible to check that the environment has been updated correctly with `$ENV:PATH`.

## Microsoft Store Python paths

The MS Windows version of Python discussed here installs to a non-deterministic path using a hash. This needs to be added to the `PATH` variable.

    $Env:Path += ";$env:LOCALAPPDATA\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python310\scripts"

* [F2PY and Windows Intel Fortran](intel)
* [F2PY and Windows with MSYS2](msys2)
* [F2PY and Conda on Windows](conda)
* [F2PY and PGI Fortran on Windows](pgi)

# F2PY and Windows Intel Fortran

As of NumPy 1.23, only the classic Intel compilers (`ifort`) are supported.

Note
The licensing restrictions for beta software [have been relaxed](https://www.intel.com/content/www/us/en/developer/articles/release-notes/oneapi-fortran-compiler-release-notes.html) during the transition to the LLVM backed `ifx/icc` family of compilers. However, this document does not endorse the usage of Intel in downstream projects due to the issues pertaining to [disassembly of components and liability](https://software.intel.com/content/www/us/en/develop/articles/end-user-license-agreement.html).

Neither the Python Intel installation nor the `Classic Intel C/C++ Compiler` are required.

* The [Intel Fortran Compilers](https://www.intel.com/content/www/us/en/developer/articles/tool/oneapi-standalone-components.html#inpage-nav-6-1) come in a combined installer providing both Classic and Beta versions; these also take around a gigabyte and a half or so.
We will consider the classic example of the generation of Fibonacci numbers, `fib1.f`, given by:

    C FILE: FIB1.F
          SUBROUTINE FIB(A,N)
    C
    C     CALCULATE FIRST N FIBONACCI NUMBERS
    C
          INTEGER N
          REAL*8 A(N)
          DO I=1,N
             IF (I.EQ.1) THEN
                A(I) = 0.0D0
             ELSEIF (I.EQ.2) THEN
                A(I) = 1.0D0
             ELSE
                A(I) = A(I-1) + A(I-2)
             ENDIF
          ENDDO
          END
    C END FILE FIB1.F

For `cmd.exe` fans, using the Intel oneAPI command prompt is the easiest approach, as it loads the required environment for both `ifort` and `msvc`. Helper batch scripts are also provided.

    # cmd.exe
    "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
    python -m numpy.f2py -c fib1.f -m fib1
    python -c "import fib1; import numpy as np; a=np.zeros(8); fib1.fib(a); print(a)"

Powershell usage is a little less pleasant, and this configuration now works with MSVC as:

    # Powershell
    python -m numpy.f2py -c fib1.f -m fib1 --f77exec='C:\Program Files (x86)\Intel\oneAPI\compiler\latest\windows\bin\intel64\ifort.exe' --f90exec='C:\Program Files (x86)\Intel\oneAPI\compiler\latest\windows\bin\intel64\ifort.exe' -L'C:\Program Files (x86)\Intel\oneAPI\compiler\latest\windows\compiler\lib\ia32'
    python -c "import fib1; import numpy as np; a=np.zeros(8); fib1.fib(a); print(a)"
    # Alternatively, set environment and reload Powershell in one line
    cmd.exe /k '"C:\Program Files (x86)\Intel\oneAPI\setvars.bat" && powershell'
    python -m numpy.f2py -c fib1.f -m fib1
    python -c "import fib1; import numpy as np; a=np.zeros(8); fib1.fib(a); print(a)"

Note that the actual path to your local installation of `ifort` may vary, and the command above will need to be updated accordingly.

# F2PY and Windows with MSYS2

Follow the standard [installation instructions](https://www.msys2.org/).
Then, to grab the requisite Fortran compiler with `MSVC`:

    # Assuming a fresh install
    pacman -Syu  # Restart the terminal
    pacman -Su   # Update packages
    # Get the toolchains
    pacman -S --needed base-devel gcc-fortran
    pacman -S mingw-w64-x86_64-toolchain

# F2PY and PGI Fortran on Windows

A variant of these is part of the so-called “classic” Flang; however, classic Flang requires a custom LLVM and compilation from sources.

Warning
Since the proprietary compilers are no longer available for usage, they are not recommended and will not be ported to the new `f2py` CLI.

Note
As of 29-01-2022, [PGI compiler toolchains](https://www.pgroup.com/index.html) have been superseded by the Nvidia HPC SDK, with no [native Windows support](https://developer.nvidia.com/nvidia-hpc-sdk-downloads#collapseFour).

# Glossary

`(n,)`
A parenthesized number followed by a comma denotes a tuple with one element. The trailing comma distinguishes a one-element tuple from a parenthesized `n`.

`-1`
* **In a dimension entry**, instructs NumPy to choose the length that will keep the total number of array elements the same.

      >>> np.arange(12).reshape(4, -1).shape
      (4, 3)

* **In an index**, any negative value [denotes](https://docs.python.org/dev/faq/programming.html#what-s-a-negative-index) indexing from the right.

`...`
An [`Ellipsis`](https://docs.python.org/3/library/constants.html#Ellipsis "\(in Python v3.13\)").

* **When indexing an array**, shorthand that the missing axes, if they exist, are full slices.

      >>> a = np.arange(24).reshape(2,3,4)
      >>> a[...].shape
      (2, 3, 4)
      >>> a[...,0].shape
      (2, 3)
      >>> a[0,...].shape
      (3, 4)
      >>> a[0,...,0].shape
      (3,)

  It can be used at most once; `a[...,0,...]` raises an [`IndexError`](https://docs.python.org/3/library/exceptions.html#IndexError "\(in Python v3.13\)").

* **In printouts**, NumPy substitutes `...` for the middle elements of large arrays.
To see the entire array, use [`numpy.printoptions`](reference/generated/numpy.printoptions#numpy.printoptions "numpy.printoptions").

`:`
The Python [slice](https://docs.python.org/3/glossary.html#term-slice "\(in Python v3.13\)") operator. In ndarrays, slicing can be applied to every axis:

    >>> a = np.arange(24).reshape(2,3,4)
    >>> a
    array([[[ 0,  1,  2,  3],
            [ 4,  5,  6,  7],
            [ 8,  9, 10, 11]],
           [[12, 13, 14, 15],
            [16, 17, 18, 19],
            [20, 21, 22, 23]]])
    >>> a[1:,-2:,:-1]
    array([[[16, 17, 18],
            [20, 21, 22]]])

Trailing slices can be omitted:

    >>> a[1] == a[1,:,:]
    array([[ True,  True,  True,  True],
           [ True,  True,  True,  True],
           [ True,  True,  True,  True]])

In contrast to Python, where slicing creates a copy, in NumPy slicing creates a view. For details, see [Combining advanced and basic indexing](user/basics.indexing#combining-advanced-and-basic-indexing).

`<`
In a dtype declaration, indicates that the data is little-endian (the bracket is big on the right).

    >>> dt = np.dtype('<f')  # little-endian single-precision float

`>`
In a dtype declaration, indicates that the data is big-endian (the bracket is big on the left).

    >>> dt = np.dtype('>H')  # big-endian unsigned short

advanced indexing
Rather than using a [scalar](reference/arrays.scalars) or slice as an index, an axis can be indexed with an array, providing fine-grained selection. This is known as [advanced indexing](user/basics.indexing#advanced-indexing) or “fancy indexing”.

along an axis
An operation `along axis n` of array `a` behaves as if its argument were an array of slices of `a` where each slice has a successive index of axis `n`. For example, if `a` is a 3 x `N` array, an operation along axis 0 behaves as if its argument were an array containing slices of each row:

    >>> np.array((a[0,:], a[1,:], a[2,:]))

To make it concrete, we can pick the operation to be the array-reversal function [`numpy.flip`](reference/generated/numpy.flip#numpy.flip "numpy.flip"), which accepts an `axis` argument.
We construct a 3 x 4 array `a`:

    >>> a = np.arange(12).reshape(3,4)
    >>> a
    array([[ 0,  1,  2,  3],
           [ 4,  5,  6,  7],
           [ 8,  9, 10, 11]])

Reversing along axis 0 (the row axis) yields

    >>> np.flip(a,axis=0)
    array([[ 8,  9, 10, 11],
           [ 4,  5,  6,  7],
           [ 0,  1,  2,  3]])

Recalling the definition of `along an axis`, `flip` along axis 0 is treating its argument as if it were

    >>> np.array((a[0,:], a[1,:], a[2,:]))
    array([[ 0,  1,  2,  3],
           [ 4,  5,  6,  7],
           [ 8,  9, 10, 11]])

and the result of `np.flip(a,axis=0)` is to reverse the slices:

    >>> np.array((a[2,:],a[1,:],a[0,:]))
    array([[ 8,  9, 10, 11],
           [ 4,  5,  6,  7],
           [ 0,  1,  2,  3]])

array
Used synonymously in the NumPy docs with ndarray.

array_like
Any [scalar](reference/arrays.scalars) or [sequence](https://docs.python.org/3/glossary.html#term-sequence "\(in Python v3.13\)") that can be interpreted as an ndarray. In addition to ndarrays and scalars, this category includes lists (possibly nested and with different element types) and tuples. Any argument accepted by [numpy.array](reference/generated/numpy.array) is array_like.

    >>> a = np.array([[1, 2.0], [0, 0], (1+1j, 3.)])
    >>> a
    array([[1.+0.j, 2.+0.j],
           [0.+0.j, 0.+0.j],
           [1.+1.j, 3.+0.j]])

array scalar
An [array scalar](reference/arrays.scalars) is an instance of the types/classes float32, float64, etc. For uniformity in handling operands, NumPy treats a scalar as an array of zero dimension. In contrast, a 0-dimensional array is an [ndarray](reference/arrays.ndarray) instance containing precisely one value.

axis
Another term for an array dimension. Axes are numbered left to right; axis 0 is the first element in the shape tuple.

In a two-dimensional vector, the elements of axis 0 are rows and the elements of axis 1 are columns.

In higher dimensions, the picture changes.
NumPy prints higher-dimensional vectors as replications of row-by-column building blocks, as in this three-dimensional vector:

    >>> a = np.arange(12).reshape(2,2,3)
    >>> a
    array([[[ 0,  1,  2],
            [ 3,  4,  5]],
           [[ 6,  7,  8],
            [ 9, 10, 11]]])

`a` is depicted as a two-element array whose elements are 2x3 vectors. From this point of view, rows and columns are the final two axes, respectively, in any shape.

This rule helps you anticipate how a vector will be printed, and conversely how to find the index of any of the printed elements. For instance, in the example, the last two values of 8’s index must be 0 and 2. Since 8 appears in the second of the two 2x3’s, the first index must be 1:

    >>> a[1,0,2]
    8

A convenient way to count dimensions in a printed vector is to count `[` symbols after the open-parenthesis. This is useful in distinguishing, say, a (1,2,3) shape from a (2,3) shape:

    >>> a = np.arange(6).reshape(2,3)
    >>> a.ndim
    2
    >>> a
    array([[0, 1, 2],
           [3, 4, 5]])
    >>> a = np.arange(6).reshape(1,2,3)
    >>> a.ndim
    3
    >>> a
    array([[[0, 1, 2],
            [3, 4, 5]]])

.base
If an array does not own its memory, then its [base](reference/generated/numpy.ndarray.base) attribute returns the object whose memory the array is referencing. That object may be referencing the memory from still another object, so the owning object may be `a.base.base.base...`. Some writers erroneously claim that testing `base` determines if arrays are views. For the correct way, see [`numpy.shares_memory`](reference/generated/numpy.shares_memory#numpy.shares_memory "numpy.shares_memory").

big-endian
See [Endianness](https://en.wikipedia.org/wiki/Endianness).

BLAS
[Basic Linear Algebra Subprograms](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms)

broadcast
_broadcasting_ is NumPy’s ability to process ndarrays of different sizes as if all were the same size. It permits an elegant do-what-I-mean behavior where, for instance, adding a scalar to a vector adds the scalar value to every element.
>>> a = np.arange(3) >>> a array([0, 1, 2]) >>> a + [3, 3, 3] array([3, 4, 5]) >>> a + 3 array([3, 4, 5]) Ordinarily, vector operands must all be the same size, because NumPy works element by element – for instance, `c = a * b` is c[0,0,0] = a[0,0,0] * b[0,0,0] c[0,0,1] = a[0,0,1] * b[0,0,1] ... But in certain useful cases, NumPy can duplicate data along “missing” axes or “too-short” dimensions so shapes will match. The duplication costs no memory or time. For details, see [Broadcasting.](user/basics.broadcasting) C order Same as row-major. casting The process of converting array data from one dtype to another. There exist several casting modes, defined by the following casting rules: * `no`: The data types should not be cast at all. Any mismatch in data types between the arrays will raise a `TypeError`. * `equiv`: Only byte-order changes are allowed. * `safe`: Only casts that can preserve values are allowed. Upcasting (e.g., from int to float) is allowed, but downcasting is not. * `same_kind`: The ‘same_kind’ casting option allows safe casts and casts within a kind, like float64 to float32. * `unsafe`: any data conversions may be done. column-major See [Row- and column-major order](https://en.wikipedia.org/wiki/Row-_and_column-major_order). contiguous An array is contiguous if: * it occupies an unbroken block of memory, and * array elements with higher indexes occupy higher addresses (that is, no stride is negative). There are two types of proper-contiguous NumPy arrays: * Fortran-contiguous arrays refer to data that is stored column-wise, i.e. the indexing of data as stored in memory starts from the lowest dimension; * C-contiguous, or simply contiguous arrays, refer to data that is stored row-wise, i.e. the indexing of data as stored in memory starts from the highest dimension. For one-dimensional arrays these notions coincide. 
For example, a 2x2 array `A` is Fortran-contiguous if its elements are stored in memory in the following order: A[0,0] A[1,0] A[0,1] A[1,1] and C-contiguous if the order is as follows: A[0,0] A[0,1] A[1,0] A[1,1] To test whether an array is C-contiguous, use the `.flags.c_contiguous` attribute of NumPy arrays. To test for Fortran contiguity, use the `.flags.f_contiguous` attribute. copy See view. dimension See axis. dtype The datatype describing the (identically typed) elements in an ndarray. It can be changed to reinterpret the array contents. For details, see [Data type objects (dtype).](reference/arrays.dtypes) fancy indexing Another term for advanced indexing. field In a structured data type, each subtype is called a `field`. The `field` has a name (a string), a type (any valid dtype), and an optional `title`. See [Data type objects (dtype)](reference/arrays.dtypes#arrays-dtypes). Fortran order Same as column-major. flattened See ravel. homogeneous All elements of a homogeneous array have the same type. ndarrays, in contrast to Python lists, are homogeneous. The type can be complicated, as in a structured array, but all elements have that type. NumPy object arrays, which contain references to Python objects, fill the role of heterogeneous arrays. itemsize The size of the dtype element in bytes. little-endian See [Endianness](https://en.wikipedia.org/wiki/Endianness). mask A boolean array used to select only certain elements for an operation: >>> x = np.arange(5) >>> x array([0, 1, 2, 3, 4]) >>> mask = (x > 2) >>> mask array([False, False, False, True, True]) >>> x[mask] = -1 >>> x array([ 0, 1, 2, -1, -1]) masked array Bad or missing data can be cleanly ignored by putting it in a masked array, which has an internal boolean array indicating invalid entries. Operations with masked arrays ignore these entries. 
>>> a = np.ma.masked_array([np.nan, 2, np.nan], [True, False, True]) >>> a masked_array(data=[--, 2.0, --], mask=[ True, False, True], fill_value=1e+20) >>> a + [1, 2, 3] masked_array(data=[--, 4.0, --], mask=[ True, False, True], fill_value=1e+20) For details, see [Masked arrays.](reference/maskedarray) matrix NumPy’s two-dimensional [matrix class](reference/generated/numpy.matrix) should no longer be used; use regular ndarrays. ndarray [NumPy’s basic structure](reference/arrays). object array An array whose dtype is `object`; that is, it contains references to Python objects. Indexing the array dereferences the Python objects, so unlike other ndarrays, an object array has the ability to hold heterogeneous objects. ravel [numpy.ravel](reference/generated/numpy.ravel) and [numpy.ndarray.flatten](reference/generated/numpy.ndarray.flatten) both flatten an ndarray. `ravel` will return a view if possible; `flatten` always returns a copy. Flattening collapses a multidimensional array to a single dimension; details of how this is done (for instance, whether `a[n+1]` should be the next row or next column) are controlled by the `order` parameter. record array A structured array allowing access in attribute style (`a.field`) in addition to `a['field']`. For details, see [numpy.recarray.](reference/generated/numpy.recarray) row-major See [Row- and column-major order](https://en.wikipedia.org/wiki/Row-_and_column-major_order). NumPy creates arrays in row-major order by default. scalar In NumPy, usually a synonym for array scalar. shape A tuple showing the length of each dimension of an ndarray. The length of the tuple itself is the number of dimensions ([numpy.ndim](reference/generated/numpy.ndarray.ndim)). The product of the tuple elements is the number of elements in the array. For details, see [numpy.ndarray.shape](reference/generated/numpy.ndarray.shape). stride Physical memory is one-dimensional; strides provide a mechanism to map a given index to an address in memory. 
For an N-dimensional array, its `strides` attribute is an N-element tuple; advancing from index `i` to index `i+1` on axis `n` means adding `a.strides[n]` bytes to the address. Strides are computed automatically from an array’s dtype and shape, but can be directly specified using [as_strided.](reference/generated/numpy.lib.stride_tricks.as_strided) For details, see [numpy.ndarray.strides](reference/generated/numpy.ndarray.strides). To see how striding underlies the power of NumPy views, see [The NumPy array: a structure for efficient numerical computation.](https://arxiv.org/pdf/1102.1523.pdf) structured array Array whose dtype is a structured data type. structured data type Users can create arbitrarily complex dtypes that can include other arrays and dtypes. These composite dtypes are called [structured data types.](user/basics.rec) subarray An array nested in a structured data type, as `b` is here: >>> dt = np.dtype([('a', np.int32), ('b', np.float32, (3,))]) >>> np.zeros(3, dtype=dt) array([(0, [0., 0., 0.]), (0, [0., 0., 0.]), (0, [0., 0., 0.])], dtype=[('a', '<i4'), ('b', '<f4', (3,))]) view Without touching underlying data, NumPy can make one array appear to change its datatype and shape. An array created this way is a view, and NumPy often uses views to save memory and improve speed. Because a view shares memory with its base array, a change in one affects the other: >>> x = np.arange(5) >>> x array([0, 1, 2, 3, 4]) >>> y = x[::2] >>> y array([0, 2, 4]) >>> x[0] = 3 # changing x changes y as well, since y is a view on x >>> y array([3, 2, 4]) # NumPy documentation **Version** : 2.2 **Download documentation** : [User guide (PDF)](https://numpy.org/doc/2.2/numpy-user.pdf) | [Reference guide (PDF)](https://numpy.org/doc/2.2/numpy-ref.pdf) | [All (ZIP)](https://numpy.org/doc/2.2/numpy-html.zip) | [Historical versions of documentation](https://numpy.org/doc/) **Useful links** : [Installation](https://numpy.org/install/) | [Source Repository](https://github.com/numpy/numpy) | [Issue Tracker](https://github.com/numpy/numpy/issues) | [Q&A Support](https://numpy.org/gethelp/) | [Mailing List](https://mail.python.org/mailman/listinfo/numpy-discussion) NumPy is the fundamental package for scientific computing in Python. 
It is a Python library that provides a multidimensional array object, various derived objects (such as masked arrays and matrices), and an assortment of routines for fast operations on arrays, including mathematical, logical, shape manipulation, sorting, selecting, I/O, discrete Fourier transforms, basic linear algebra, basic statistical operations, random simulation and much more. Getting started New to NumPy? Check out the Absolute Beginner’s Guide. It contains an introduction to NumPy’s main concepts and links to additional tutorials. [To the absolute beginner’s guide](user/absolute_beginners) User guide The user guide provides in-depth information on the key concepts of NumPy with useful background information and explanation. [To the user guide](user/index#user) API reference The reference guide contains a detailed description of the functions, modules, and objects included in NumPy. The reference describes how the methods work and which parameters can be used. It assumes that you have an understanding of the key concepts. [To the reference guide](reference/index#reference) Contributor’s guide Want to add to the codebase? Can you help add a translation or a flowchart to the documentation? The contributing guidelines will guide you through the process of improving NumPy. [To the contributor’s guide](dev/index#devindex) # NumPy license Copyright (c) 2005-2024, NumPy Developers. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 
* Neither the name of the NumPy Developers nor the names of any contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # NumPy 2.0 migration guide This document contains a set of instructions on how to update your code to work with NumPy 2.0. It covers changes in NumPy’s Python and C APIs. Note Note that NumPy 2.0 also breaks binary compatibility - if you are distributing binaries for a Python package that depends on NumPy’s C API, please see [NumPy 2.0-specific advice](dev/depending_on_numpy#numpy-2-abi-handling). ## Ruff plugin Many of the changes covered in the 2.0 release notes and in this migration guide can be automatically adapted in downstream code with a dedicated [Ruff](https://docs.astral.sh/ruff/) rule, namely rule [NPY201](https://docs.astral.sh/ruff/rules/numpy2-deprecation/). 
You should install `ruff>=0.4.8` and add the `NPY201` rule to your `pyproject.toml`: [tool.ruff.lint] select = ["NPY201"] You can also apply the NumPy 2.0 rule directly from the command line: $ ruff check path/to/code/ --select NPY201 ## Changes to NumPy data type promotion NumPy 2.0 changes promotion (the result of combining dissimilar data types) as per [NEP 50](https://numpy.org/neps/nep-0050-scalar-promotion.html#nep50 "\(in NumPy Enhancement Proposals\)"). Please see the NEP for details on this change. It includes a table of example changes and a backwards compatibility section. The largest backwards compatibility change is that the precision of scalars is now preserved consistently. Two examples are: * `np.float32(3) + 3.` now returns a float32 when it previously returned a float64. * `np.array([3], dtype=np.float32) + np.float64(3)` will now return a float64 array. (The higher precision of the scalar is not ignored.) For floating point values, this can lead to lower precision results when working with scalars. For integers, errors or overflows are possible. To solve this, you may cast explicitly. Very often, it may also be a good solution to ensure you are working with Python scalars via `int()`, `float()`, or `numpy_scalar.item()`. To track down changes, you can enable emitting warnings for changed behavior (use `warnings.simplefilter` to raise it as an error for a traceback): np._set_promotion_state("weak_and_warn") which is useful during testing. Unfortunately, running this may flag many changes that are irrelevant in practice. ## Windows default integer The default integer used by NumPy is now 64bit on all 64bit systems (and 32bit on 32bit systems). For historic reasons related to Python 2, it was previously equivalent to the C `long` type. The default integer is now equivalent to `np.intp`. Most end-users should not be affected by this change. Some operations will use more memory, but some operations may actually become faster. 
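A quick way to see what the default-integer change above means on your platform is to inspect the dtype NumPy infers for a plain integer list. This is a sketch; the exact dtype printed depends on your operating system, bitness, and NumPy version:

```python
import numpy as np

# Under NumPy 2.x the default integer matches np.intp (64-bit on all
# 64-bit systems, including Windows); under NumPy 1.x on 64-bit Windows
# it was the 32-bit C long instead.
arr = np.array([1, 2, 3])
print(arr.dtype)          # the default integer dtype on this platform
print(np.dtype(np.intp))  # the pointer-sized integer it now matches
```

If the two dtypes printed differ, you are most likely running NumPy 1.x on Windows.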
If you experience issues due to calling a library written in a compiled language it may help to explicitly cast to a `long`, for example with: `arr = arr.astype("long", copy=False)`. Libraries interfacing with compiled code that are written in C, Cython, or a similar language may require updating to accommodate user input if they are using the `long` or equivalent type on the C-side. In this case, you may wish to use `intp` and cast user input or support both `long` and `intp` (to better support NumPy 1.x as well). When creating a new integer array in C or Cython, the new `NPY_DEFAULT_INT` macro will evaluate to either `NPY_LONG` or `NPY_INTP` depending on the NumPy version. Note that the NumPy random API is not affected by this change. ## C-API Changes Some definitions were removed or replaced due to being outdated or unmaintainable. Some new API definitions will evaluate differently at runtime between NumPy 2.0 and NumPy 1.x. Some are defined in `numpy/_core/include/numpy/npy_2_compat.h` (for example `NPY_DEFAULT_INT`) which can be vendored in full or part to have the definitions available when compiling against NumPy 1.x. If necessary, `PyArray_RUNTIME_VERSION >= NPY_2_0_API_VERSION` can be used to explicitly implement different behavior on NumPy 1.x and 2.0. (The compat header defines it in a way compatible with such use.) Please let us know if you require additional workarounds here. ### The `PyArray_Descr` struct has been changed One of the most impactful C-API changes is that the `PyArray_Descr` struct is now more opaque to allow us to add additional flags and have itemsizes not limited by the size of `int` as well as allow improving structured dtypes in the future and not burden new dtypes with their fields. Code which only uses the type number and other initial fields is unaffected. Most code will hopefully mainly access the `->elsize` field. When the dtype/descriptor itself is attached to an array (e.g. 
`arr->descr->elsize`), this is best replaced with `PyArray_ITEMSIZE(arr)`. Where not possible, new accessor functions are required: * `PyDataType_ELSIZE` and `PyDataType_SET_ELSIZE` (note that the result is now `npy_intp` and not `int`). * `PyDataType_ALIGNMENT` * `PyDataType_FIELDS`, `PyDataType_NAMES`, `PyDataType_SUBARRAY` * `PyDataType_C_METADATA` Cython code should use Cython 3, in which case the change is transparent. (Struct access is available for elsize and alignment when compiling only for NumPy 2.) For compiling with both 1.x and 2.x, if you use these new accessors, it is unfortunately necessary to either define them locally via a macro like: #if NPY_ABI_VERSION < 0x02000000 #define PyDataType_ELSIZE(descr) ((descr)->elsize) #endif or to add `npy_2_compat.h` into your code base and explicitly include it when compiling with NumPy 1.x (as they are new API). Including the file has no effect on NumPy 2. Please do not hesitate to open a NumPy issue if you require assistance or the provided functions are not sufficient. **Custom User DTypes:** Existing user dtypes must now use [`PyArray_DescrProto`](reference/c-api/types-and-structures#c.PyArray_DescrProto "PyArray_DescrProto") to define their dtype and slightly modify the code. See note in [`PyArray_RegisterDataType`](reference/c-api/array#c.PyArray_RegisterDataType "PyArray_RegisterDataType"). ### Functionality moved to headers requiring `import_array()` If you previously included only `ndarraytypes.h` you may find that some functionality is not available anymore and requires the inclusion of `ndarrayobject.h` or similar. This include is also needed when vendoring `npy_2_compat.h` into your own codebase to allow use of the new definitions when compiling with NumPy 1.x. Functionality which previously did not require import includes: * Functions to access dtype flags: `PyDataType_FLAGCHK`, `PyDataType_REFCHK`, and the related `NPY_BEGIN_THREADS_DESCR`. * `PyArray_GETITEM` and `PyArray_SETITEM`. 
Warning It is important that the `import_array()` mechanism is used to ensure that the full NumPy API is accessible when using the `npy_2_compat.h` header. In most cases your extension module probably already calls it. However, if not, we have added `PyArray_ImportNumPyAPI()` as a preferable way to ensure the NumPy API is imported. This function is light-weight when called multiple times so that you may insert it wherever it may be needed (if you wish to avoid setting it up at module import). ### Increased maximum number of dimensions The maximum number of dimensions (and arguments) was increased to 64. This affects the `NPY_MAXDIMS` and `NPY_MAXARGS` macros. It may be good to review their use, and we generally encourage you to not use these macros (especially `NPY_MAXARGS`), so that a future version of NumPy can remove this limitation on the number of dimensions. `NPY_MAXDIMS` was also used to signal `axis=None` in the C-API, including the `PyArray_AxisConverter`. The latter will return `-2147483648` as an axis (the smallest integer value). Other functions may error with `AxisError: axis 64 is out of bounds for array of dimension` in which case you need to pass `NPY_RAVEL_AXIS` instead of `NPY_MAXDIMS`. `NPY_RAVEL_AXIS` is defined in the `npy_2_compat.h` header and is runtime-dependent (mapping to 32 on NumPy 1.x and `-2147483648` on NumPy 2.x). ### Complex types - Underlying type changes The underlying C types for all of the complex types have been changed to use native C99 types. While the memory layout of those types remains identical to the types used in NumPy 1.x, the API is slightly different, since direct field access (like `c.real` or `c.imag`) is no longer possible. It is recommended to use the functions `npy_creal` and `npy_cimag` (and the corresponding float and long double variants) to retrieve the real or imaginary part of a complex number, as these will work with both NumPy 1.x and with NumPy 2.x. 
New functions `npy_csetreal` and `npy_csetimag`, along with compatibility macros `NPY_CSETREAL` and `NPY_CSETIMAG` (and the corresponding float and long double variants), have been added for setting the real or imaginary part. The underlying type remains a struct under C++ (all of the above still remains valid). This has implications for Cython. It is recommended to always use the native typedefs `cfloat_t`, `cdouble_t`, `clongdouble_t` rather than the NumPy types `npy_cfloat`, etc, unless you have to interface with C code written using the NumPy types. You can still write cython code using the `c.real` and `c.imag` attributes (using the native typedefs), but you can no longer use in-place operators such as `c.imag += 1` in Cython’s c++ mode. Because NumPy 2 now includes `complex.h`, code that uses a variable named `I` may see an error; using the name `I` now requires an `#undef I`. Note NumPy 2.0.1 briefly included the `#undef I` to help users not already including `complex.h`. ## Changes to namespaces In NumPy 2.0 certain functions, modules, and constants were moved or removed to make the NumPy namespace more user-friendly by removing unnecessary or outdated functionality and clarifying which parts of NumPy are considered private. Please see the tables below for guidance on migration. For most changes this means replacing it with a backwards compatible alternative. Please refer to [NEP 52 — Python API cleanup for NumPy 2.0](https://numpy.org/neps/nep-0052-python-api-cleanup.html#nep52 "\(in NumPy Enhancement Proposals\)") for more details. ### Main namespace About 100 members of the main `np` namespace have been deprecated, removed, or moved to a new place. It was done to reduce clutter and establish only one way to access a given attribute. The table below shows members that have been removed: removed member | migration guideline ---|--- add_docstring | It’s still available as `np.lib.add_docstring`. add_newdoc | It’s still available as `np.lib.add_newdoc`. 
add_newdoc_ufunc | It’s an internal function and doesn’t have a replacement. alltrue | Use `np.all` instead. asfarray | Use `np.asarray` with a float dtype instead. byte_bounds | Now it’s available under `np.lib.array_utils.byte_bounds`. cast | Use `np.asarray(arr, dtype=dtype)` instead. cfloat | Use `np.complex128` instead. chararray | It’s still available as `np.char.chararray`. clongfloat | Use `np.clongdouble` instead. compare_chararrays | It’s still available as `np.char.compare_chararrays`. compat | There’s no replacement, as Python 2 is no longer supported. complex_ | Use `np.complex128` instead. cumproduct | Use `np.cumprod` instead. DataSource | It’s still available as `np.lib.npyio.DataSource`. deprecate | Emit `DeprecationWarning` with `warnings.warn` directly, or use `typing.deprecated`. deprecate_with_doc | Emit `DeprecationWarning` with `warnings.warn` directly, or use `typing.deprecated`. disp | Use your own printing function instead. fastCopyAndTranspose | Use `arr.T.copy()` instead. find_common_type | Use `numpy.promote_types` or `numpy.result_type` instead. To achieve semantics for the `scalar_types` argument, use `numpy.result_type` and pass the Python values `0`, `0.0`, or `0j`. format_parser | It’s still available as `np.rec.format_parser`. get_array_wrap | float_ | Use `np.float64` instead. geterrobj | Use the np.errstate context manager instead. Inf | Use `np.inf` instead. Infinity | Use `np.inf` instead. infty | Use `np.inf` instead. issctype | Use `issubclass(rep, np.generic)` instead. issubclass_ | Use `issubclass` builtin instead. issubsctype | Use `np.issubdtype` instead. mat | Use `np.asmatrix` instead. maximum_sctype | Use a specific dtype instead. You should avoid relying on any implicit mechanism and select the largest dtype of a kind explicitly in the code. NaN | Use `np.nan` instead. nbytes | Use `np.dtype(<dtype>).itemsize` instead. NINF | Use `-np.inf` instead. NZERO | Use `-0.0` instead. longcomplex | Use `np.clongdouble` instead. 
longfloat | Use `np.longdouble` instead. lookfor | Search NumPy’s documentation directly. obj2sctype | Use `np.dtype(obj).type` instead. PINF | Use `np.inf` instead. product | Use `np.prod` instead. PZERO | Use `0.0` instead. recfromcsv | Use `np.genfromtxt` with comma delimiter instead. recfromtxt | Use `np.genfromtxt` instead. round_ | Use `np.round` instead. safe_eval | Use `ast.literal_eval` instead. sctype2char | Use `np.dtype(obj).char` instead. sctypes | Access dtypes explicitly instead. seterrobj | Use the np.errstate context manager instead. set_numeric_ops | For the general case, use `PyUFunc_ReplaceLoopBySignature`. For ndarray subclasses, define the `__array_ufunc__` method and override the relevant ufunc. set_string_function | Use `np.set_printoptions` instead with a formatter for custom printing of NumPy objects. singlecomplex | Use `np.complex64` instead. string_ | Use `np.bytes_` instead. sometrue | Use `np.any` instead. source | Use `inspect.getsource` instead. tracemalloc_domain | It’s now available from `np.lib`. unicode_ | Use `np.str_` instead. who | Use an IDE variable explorer or `locals()` instead. If the table doesn’t contain an item that you were using but was removed in `2.0`, then it means it was a private member. You should either use the existing API or, in case it’s infeasible, reach out to us with a request to restore the removed entry. The next table presents deprecated members, which will be removed in a release after `2.0`: deprecated member | migration guideline ---|--- in1d | Use `np.isin` instead. row_stack | Use `np.vstack` instead (`row_stack` was an alias for `vstack`). trapz | Use `np.trapezoid` or a `scipy.integrate` function instead. Finally, a set of internal enums has been removed. 
As they weren’t used in downstream libraries we don’t provide any information on how to replace them: [`FLOATING_POINT_SUPPORT`, `FPE_DIVIDEBYZERO`, `FPE_INVALID`, `FPE_OVERFLOW`, `FPE_UNDERFLOW`, `UFUNC_BUFSIZE_DEFAULT`, `UFUNC_PYVALS_NAME`, `CLIP`, `WRAP`, `RAISE`, `BUFSIZE`, `ALLOW_THREADS`, `MAXDIMS`, `MAY_SHARE_EXACT`, `MAY_SHARE_BOUNDS`] ### numpy.lib namespace Most of the functions available within `np.lib` are also present in the main namespace, which is their primary location. To make it unambiguous how to access each public function, `np.lib` is now empty and contains only a handful of specialized submodules, classes and functions: * `array_utils`, `format`, `introspect`, `mixins`, `npyio` and `stride_tricks` submodules, * `Arrayterator` and `NumpyVersion` classes, * `add_docstring` and `add_newdoc` functions, * `tracemalloc_domain` constant. If you get an `AttributeError` when accessing an attribute from `np.lib`, you should then try accessing it from the main `np` namespace. If an item is also missing from the main namespace, then you’re using a private member. You should either use the existing API or, in case it’s infeasible, reach out to us with a request to restore the removed entry. ### numpy.core namespace The `np.core` namespace is now officially private and has been renamed to `np._core`. The user should never fetch members from `_core` directly - instead the main namespace should be used to access the attribute in question. The layout of the `_core` module might change in the future without notice, unlike public modules, which adhere to the deprecation period policy. If an item is also missing from the main namespace, then you should either use the existing API or, in case it’s infeasible, reach out to us with a request to restore the removed entry. ### ndarray and scalar methods A few methods from `np.ndarray` and `np.generic` scalar classes have been removed. 
The table below provides replacements for the removed members: expired member | migration guideline ---|--- newbyteorder | Use `arr.view(arr.dtype.newbyteorder(order))` instead. ptp | Use `np.ptp(arr, ...)` instead. setitem | Use `arr[index] = value` instead. ### numpy.strings namespace A new [`numpy.strings`](reference/routines.strings#module-numpy.strings "numpy.strings") namespace has been created, where most of the string operations are implemented as ufuncs. The old [`numpy.char`](reference/routines.char#module-numpy.char "numpy.char") namespace is still available, and, wherever possible, uses the new ufuncs for greater performance. We recommend using the [`strings`](reference/routines.strings#module-numpy.strings "numpy.strings") functions going forward. The [`char`](reference/routines.char#module-numpy.char "numpy.char") namespace may be deprecated in the future. ## Other changes ### Note about pickled files NumPy 2.0 is designed to load pickle files created with NumPy 1.26, and vice versa. For versions 1.25 and earlier, loading a NumPy 2.0 pickle file will throw an exception. ### Adapting to changes in the `copy` keyword The [copy keyword behavior changes](https://numpy.org/doc/2.2/release/2.0.0-notes.html#copy-keyword-changes-2-0) in [`asarray`](reference/generated/numpy.asarray#numpy.asarray "numpy.asarray"), [`array`](reference/generated/numpy.array#numpy.array "numpy.array") and [`ndarray.__array__`](reference/generated/numpy.ndarray.__array__#numpy.ndarray.__array__ "numpy.ndarray.__array__") may require these changes: * Code using `np.array(..., copy=False)` can in most cases be changed to `np.asarray(...)`. Older code tended to use `np.array` like this because it had less overhead than the default `np.asarray` copy-if-needed behavior. This is no longer true, and `np.asarray` is the preferred function. 
* For code that explicitly needs to pass `None`/`False` meaning “copy if needed” in a way that’s compatible with NumPy 1.x and 2.x, see [scipy#20172](https://github.com/scipy/scipy/pull/20172) for an example of how to do so. * For any `__array__` method on a non-NumPy array-like object, `dtype=None` and `copy=None` keywords must be added to the signature - this will work with older NumPy versions as well (although older NumPy versions will never pass in the `copy` keyword). If the keywords are added to the `__array__` signature, then for: * `copy=True` and any `dtype` value, always return a new copy, * `copy=None`, create a copy if required (for example by `dtype`), * `copy=False`, a copy must never be made. If a copy is needed to return a numpy array or satisfy `dtype`, then raise an exception (`ValueError`). ### Writing numpy-version-dependent code It should be fairly rare to have to write code that explicitly branches on the `numpy` version - in most cases, code can be rewritten to be compatible with 1.x and 2.0 at the same time. However, if it is necessary, here is a suggested code pattern to use, using [`numpy.lib.NumpyVersion`](reference/generated/numpy.lib.numpyversion#numpy.lib.NumpyVersion "numpy.lib.NumpyVersion"): # example with AxisError, which is no longer available in # the main namespace in 2.0, and not available in the # `exceptions` namespace in <1.25.0 (example uses <2.0.0b1 # for illustrative purposes): if np.lib.NumpyVersion(np.__version__) >= '2.0.0b1': from numpy.exceptions import AxisError else: from numpy import AxisError This pattern will work correctly including with NumPy release candidates, which is important during the 2.0.0 release period. 
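The `__array__` keyword rules described in this guide can be sketched with a minimal array-like wrapper. The class name and its internals are hypothetical; only the handling of `dtype` and `copy` follows the guidance above:

```python
import numpy as np

class MyArrayLike:
    """Hypothetical array-like implementing the NumPy 2.x __array__ protocol."""

    def __init__(self, data):
        self._data = np.asarray(data)

    def __array__(self, dtype=None, copy=None):
        if copy is False:
            # A copy must never be made; raise ValueError if one would be needed.
            if dtype is not None and np.dtype(dtype) != self._data.dtype:
                raise ValueError("cannot satisfy dtype without copying")
            return self._data
        if copy is True:
            # Always return a new copy.
            return np.array(self._data, dtype=dtype, copy=True)
        # copy=None: copy only if required (for example, by a dtype change).
        return np.asarray(self._data, dtype=dtype)

obj = MyArrayLike([1.0, 2.0, 3.0])
arr = np.asarray(obj)  # no copy needed here
```

Because the keywords default to `None`, this signature also works on older NumPy versions, which call `__array__` without arguments.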
# Array API standard compatibility NumPy’s main namespace as well as the [`numpy.fft`](routines.fft#module-numpy.fft "numpy.fft") and [`numpy.linalg`](routines.linalg#module-numpy.linalg "numpy.linalg") namespaces are compatible [1] with the [2022.12 version](https://data-apis.org/array-api/2022.12/index.html) of the Python array API standard. NumPy aims to implement support for the [2023.12 version](https://data-apis.org/array-api/2023.12/index.html) and future versions of the standard - assuming that those future versions can be upgraded to given NumPy’s [backwards compatibility policy](https://numpy.org/neps/nep-0023-backwards-compatibility.html#nep23 "\(in NumPy Enhancement Proposals\)"). For usage guidelines for downstream libraries and end users who want to write code that will work with both NumPy and other array libraries, we refer to the documentation of the array API standard itself and to code and developer-focused documentation in SciPy and scikit-learn. Note that in order to use standard-compliant code with older NumPy versions (< 2.0), the [array-api-compat](https://github.com/data-apis/array-api-compat) package may be useful. For testing whether NumPy-using code is only using standard-compliant features rather than anything NumPy-specific, the [array-api-strict](https://github.com/data-apis/array-api-strict) package can be used. History NumPy 1.22.0 was the first version to include support for the array API standard, via a separate `numpy.array_api` submodule. This module was marked as experimental (it emitted a warning on import) and removed in NumPy 2.0 because full support was included in the main namespace. [NEP 47](https://numpy.org/neps/nep-0047-array-api-standard.html#nep47 "\(in NumPy Enhancement Proposals\)") and [NEP 56](https://numpy.org/neps/nep-0056-array-api-main-namespace.html#nep56 "\(in NumPy Enhancement Proposals\)") describe the motivation and scope for implementing the array API standard in NumPy. 
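A common pattern in array-library-agnostic code is to retrieve the namespace from the array itself and call functions through it. This is a sketch: `get_namespace` is a hypothetical helper, and the standard's `__array_namespace__` method is only available on NumPy >= 2.0, so the sketch falls back to `numpy` for older versions:

```python
import numpy as np

def get_namespace(x):
    # Standard-compliant arrays (NumPy >= 2.0) expose __array_namespace__;
    # fall back to numpy itself for older NumPy versions.
    if hasattr(x, "__array_namespace__"):
        return x.__array_namespace__()
    return np

x = np.asarray([1.0, 2.0, 3.0])
xp = get_namespace(x)  # NumPy's main namespace for an ndarray
total = xp.sum(x)      # the same code works for other compliant libraries
```

For production use, the array-api-compat package mentioned above provides a more complete version of this helper.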
## Entry point

NumPy installs an [entry point](https://packaging.python.org/en/latest/specifications/entry-points/) that can be used for discovery purposes:

    >>> from importlib.metadata import entry_points
    >>> entry_points(group='array_api', name='numpy')
    [EntryPoint(name='numpy', value='numpy', group='array_api')]

Note that leaving out `name='numpy'` will cause a list of entry points to be returned for all array API standard compatible implementations that have installed an entry point.

#### Footnotes

[1] With a few very minor exceptions, as documented in [NEP 56](https://numpy.org/neps/nep-0056-array-api-main-namespace.html#nep56 "\(in NumPy Enhancement Proposals\)"). The `sum`, `prod` and `trace` behavior adheres to the 2023.12 version instead, as do function signatures; the only known incompatibility that may remain is that the standard forbids unsafe casts for in-place operators while NumPy supports those.

## Inspection

NumPy implements the [array API inspection utilities](https://data-apis.org/array-api/latest/API_specification/inspection.html). These functions can be accessed via the `__array_namespace_info__()` function, which returns a namespace containing the inspection utilities.

[`__array_namespace_info__`](generated/numpy.__array_namespace_info__#numpy.__array_namespace_info__ "numpy.__array_namespace_info__")() | Get the array API inspection namespace for NumPy.
---|---

# Standard array subclasses

Note

Subclassing a `numpy.ndarray` is possible, but if your goal is to create an array with _modified_ behavior, as do dask arrays for distributed computation and cupy arrays for GPU-based computation, subclassing is discouraged. Instead, using numpy's [dispatch mechanism](../user/basics.dispatch#basics-dispatch) is recommended.

The [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") can be inherited from (in Python or in C) if desired. Therefore, it can form a foundation for many useful classes.
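A minimal sketch of inheriting from `ndarray` (the subclass name is illustrative): a view cast is the simplest way to obtain a subclass instance, and `asarray` always strips the subclass back to the base class while `asanyarray` preserves it:

```python
import numpy as np

class MyArray(np.ndarray):
    """Illustrative trivial ndarray subclass."""
    pass

m = np.arange(4).view(MyArray)   # view-cast a base ndarray to the subclass
print(type(np.asarray(m)))       # base-class ndarray: subclass stripped
print(type(np.asanyarray(m)))    # subclass passes through unchanged
```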
Often, whether to sub-class the array object or to simply use the core array component as an internal part of a new class is a difficult decision, and can simply be a matter of choice. NumPy has several tools for simplifying how your new object interacts with other array objects, and so the choice may not be significant in the end. One way to simplify the question is by asking yourself whether the object you are interested in can be represented by a single array or whether it really requires two or more arrays at its core.

Note that [`asarray`](generated/numpy.asarray#numpy.asarray "numpy.asarray") always returns the base-class ndarray. If you are confident that your use of the array object can handle any subclass of an ndarray, then [`asanyarray`](generated/numpy.asanyarray#numpy.asanyarray "numpy.asanyarray") can be used to allow subclasses to propagate more cleanly through your subroutine. In principle a subclass could redefine any aspect of the array and therefore, under strict guidelines, [`asanyarray`](generated/numpy.asanyarray#numpy.asanyarray "numpy.asanyarray") would rarely be useful. However, most subclasses of the array object will not redefine certain aspects of the array object such as the buffer interface, or the attributes of the array. One important example of why your subroutine may not be able to handle an arbitrary subclass of an array is that matrices redefine the `*` operator to be matrix-multiplication, rather than element-by-element multiplication.

## Special attributes and methods

See also

[Subclassing ndarray](../user/basics.subclassing#basics-subclassing)

NumPy provides several hooks that classes can customize:

class.__array_ufunc__(_ufunc_, _method_, _*inputs_, _**kwargs_)

Any class, ndarray subclass or not, can define this method or set it to None in order to override the behavior of NumPy's ufuncs. This works quite similarly to Python's `__mul__` and other binary operation routines.

* _ufunc_ is the ufunc object that was called.
* _method_ is a string indicating which ufunc method was called (one of `"__call__"`, `"reduce"`, `"reduceat"`, `"accumulate"`, `"outer"`, `"inner"`).
* _inputs_ is a tuple of the input arguments to the `ufunc`.
* _kwargs_ is a dictionary containing the optional input arguments of the ufunc. If given, any `out` arguments, both positional and keyword, are passed as a [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple "\(in Python v3.13\)") in _kwargs_. See the discussion in [Universal functions (ufunc)](ufuncs#ufuncs) for details.

The method should return either the result of the operation, or [`NotImplemented`](https://docs.python.org/3/library/constants.html#NotImplemented "\(in Python v3.13\)") if the operation requested is not implemented.

If one of the input, output, or `where` arguments has a `__array_ufunc__` method, it is executed _instead_ of the ufunc. If more than one of the arguments implements `__array_ufunc__`, they are tried in the order: subclasses before superclasses, inputs before outputs, outputs before `where`, otherwise left to right. The first routine returning something other than [`NotImplemented`](https://docs.python.org/3/library/constants.html#NotImplemented "\(in Python v3.13\)") determines the result. If all of the `__array_ufunc__` operations return [`NotImplemented`](https://docs.python.org/3/library/constants.html#NotImplemented "\(in Python v3.13\)"), a [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "\(in Python v3.13\)") is raised.

Note

We intend to re-implement numpy functions as (generalized) ufuncs, in which case it will become possible for them to be overridden by the `__array_ufunc__` method. A prime candidate is [`matmul`](generated/numpy.matmul#numpy.matmul "numpy.matmul"), which currently is not a ufunc but could relatively easily be rewritten as a (set of) generalized ufuncs.
The same may happen with functions such as [`median`](generated/numpy.median#numpy.median "numpy.median"), [`amin`](generated/numpy.amin#numpy.amin "numpy.amin"), and [`argsort`](generated/numpy.argsort#numpy.argsort "numpy.argsort").

Like some other special methods in Python, such as `__hash__` and `__iter__`, it is possible to indicate that your class does _not_ support ufuncs by setting `__array_ufunc__ = None`. Ufuncs always raise [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "\(in Python v3.13\)") when called on an object that sets `__array_ufunc__ = None`.

The presence of `__array_ufunc__` also influences how [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") handles binary operations like `arr + obj` and `arr < obj` when `arr` is an [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") and `obj` is an instance of a custom class. There are two possibilities. If `obj.__array_ufunc__` is present and not None, then `ndarray.__add__` and friends will delegate to the ufunc machinery, meaning that `arr + obj` becomes `np.add(arr, obj)`, and then [`add`](generated/numpy.add#numpy.add "numpy.add") invokes `obj.__array_ufunc__`. This is useful if you want to define an object that acts like an array.

Alternatively, if `obj.__array_ufunc__` is set to None, then as a special case, special methods like `ndarray.__add__` will notice this and _unconditionally_ raise [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "\(in Python v3.13\)"). This is useful if you want to create objects that interact with arrays via binary operations, but are not themselves arrays. For example, a units handling system might have an object `m` representing the "meters" unit, and want to support the syntax `arr * m` to represent that the array has units of "meters", but does not want to otherwise interact with arrays via ufuncs.
This can be done by setting `__array_ufunc__ = None` and defining `__mul__` and `__rmul__` methods. (Note that this means that writing an `__array_ufunc__` that always returns [`NotImplemented`](https://docs.python.org/3/library/constants.html#NotImplemented "\(in Python v3.13\)") is not quite the same as setting `__array_ufunc__ = None`: in the former case, `arr + obj` will raise [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "\(in Python v3.13\)"), while in the latter case it is possible to define a `__radd__` method to prevent this.)

The above does not hold for in-place operators, for which [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") never returns [`NotImplemented`](https://docs.python.org/3/library/constants.html#NotImplemented "\(in Python v3.13\)"). Hence, `arr += obj` always leads to a [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "\(in Python v3.13\)"). This is because for arrays in-place operations cannot generically be replaced by a simple reverse operation. (For instance, by default, `arr += obj` would be translated to `arr = arr + obj`, i.e., `arr` would be replaced, contrary to what is expected for in-place array operations.)

Note

If you define `__array_ufunc__`:

* If you are not a subclass of [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"), we recommend your class define special methods like `__add__` and `__lt__` that delegate to ufuncs just like ndarray does. An easy way to do this is to subclass from [`NDArrayOperatorsMixin`](generated/numpy.lib.mixins.ndarrayoperatorsmixin#numpy.lib.mixins.NDArrayOperatorsMixin "numpy.lib.mixins.NDArrayOperatorsMixin").
* If you subclass [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"), we recommend that you put all your override logic in `__array_ufunc__` and not also override special methods.
This ensures the class hierarchy is determined in only one place rather than separately by the ufunc machinery and by the binary operation rules (which give preference to special methods of subclasses; the alternative way to enforce a one-place-only hierarchy, setting `__array_ufunc__` to None, would seem very unexpected and thus confusing, as then the subclass would not work at all with ufuncs).

* [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") defines its own `__array_ufunc__`, which evaluates the ufunc if no arguments have overrides, and returns [`NotImplemented`](https://docs.python.org/3/library/constants.html#NotImplemented "\(in Python v3.13\)") otherwise. This may be useful for subclasses whose `__array_ufunc__` converts any instances of its own class to [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"): it can then pass these on to its superclass using `super().__array_ufunc__(*inputs, **kwargs)`, and finally return the results after possible back-conversion. The advantage of this practice is that it ensures that it is possible to have a hierarchy of subclasses that extend the behaviour. See [Subclassing ndarray](../user/basics.subclassing#basics-subclassing) for details.

class.__array_function__(_func_, _types_, _args_, _kwargs_)

* `func` is an arbitrary callable exposed by NumPy's public API, which was called in the form `func(*args, **kwargs)`.
* `types` is a [`collections.abc.Collection`](https://docs.python.org/3/library/collections.abc.html#collections.abc.Collection "\(in Python v3.13\)") of unique argument types from the original NumPy function call that implement `__array_function__`.
* The tuple `args` and dict `kwargs` are directly passed on from the original call.

As a convenience for `__array_function__` implementers, `types` provides all argument types with an `'__array_function__'` attribute.
This allows implementers to quickly identify cases where they should defer to `__array_function__` implementations on other arguments. Implementations should not rely on the iteration order of `types`.

Most implementations of `__array_function__` will start with two checks:

1. Is the given function something that we know how to overload?
2. Are all arguments of a type that we know how to handle?

If these conditions hold, `__array_function__` should return the result from calling its implementation for `func(*args, **kwargs)`. Otherwise, it should return the sentinel value `NotImplemented`, indicating that the function is not implemented by these types.

There are no general requirements on the return value from `__array_function__`, although most sensible implementations should probably return array(s) with the same type as one of the function's arguments.

It may also be convenient to define a custom decorator (`implements` below) for registering `__array_function__` implementations.

    import numpy as np

    HANDLED_FUNCTIONS = {}

    class MyArray:
        def __array_function__(self, func, types, args, kwargs):
            if func not in HANDLED_FUNCTIONS:
                return NotImplemented
            # Note: this allows subclasses that don't override
            # __array_function__ to handle MyArray objects
            if not all(issubclass(t, MyArray) for t in types):
                return NotImplemented
            return HANDLED_FUNCTIONS[func](*args, **kwargs)

    def implements(numpy_function):
        """Register an __array_function__ implementation for MyArray objects."""
        def decorator(func):
            HANDLED_FUNCTIONS[numpy_function] = func
            return func
        return decorator

    @implements(np.concatenate)
    def concatenate(arrays, axis=0, out=None):
        ...  # implementation of concatenate for MyArray objects

    @implements(np.broadcast_to)
    def broadcast_to(array, shape):
        ...  # implementation of broadcast_to for MyArray objects

Note that it is not required for `__array_function__` implementations to include _all_ of the corresponding NumPy function's optional arguments (e.g., `broadcast_to` above omits the irrelevant `subok` argument). Optional arguments are only passed in to `__array_function__` if they were explicitly used in the NumPy function call.

Just like the case for builtin special methods like `__add__`, properly written `__array_function__` methods should always return `NotImplemented` when an unknown type is encountered. Otherwise, it will be impossible to correctly override NumPy functions from another object if the operation also includes one of your objects.

For the most part, the rules for dispatch with `__array_function__` match those for `__array_ufunc__`. In particular:

* NumPy will gather implementations of `__array_function__` from all specified inputs and call them in order: subclasses before superclasses, and otherwise left to right. Note that in some edge cases involving subclasses, this differs slightly from the [current behavior](https://bugs.python.org/issue30140) of Python.
* Implementations of `__array_function__` indicate that they can handle the operation by returning any value other than `NotImplemented`.
* If all `__array_function__` methods return `NotImplemented`, NumPy will raise `TypeError`.

If no `__array_function__` method exists, NumPy will default to calling its own implementation, intended for use on NumPy arrays. This case arises, for example, when all array-like arguments are Python numbers or lists. (NumPy arrays do have a `__array_function__` method, given below, but it always returns `NotImplemented` if any argument other than a NumPy array subclass implements `__array_function__`.)

One deviation from the current behavior of `__array_ufunc__` is that NumPy will only call `__array_function__` on the _first_ argument of each unique type.
This matches Python's [rule for calling reflected methods](https://docs.python.org/3/reference/datamodel.html#object.__ror__), and this ensures that checking overloads has acceptable performance even when there are a large number of overloaded arguments.

class.__array_finalize__(_obj_)

This method is called whenever the system internally allocates a new array from _obj_, where _obj_ is a subclass (subtype) of the [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"). It can be used to change attributes of _self_ after construction (so as to ensure a 2-d matrix, for example), or to update meta-information from the "parent." Subclasses inherit a default implementation of this method that does nothing.

class.__array_wrap__(_array_, _context=None_, _return_scalar=False_)

At the end of every [ufunc](../user/basics.ufuncs#ufuncs-output-type), this method is called on the input object with the highest array priority, or the output object if one was specified. The ufunc-computed array is passed in and whatever is returned is passed to the user. Subclasses inherit a default implementation of this method, which transforms the array into a new instance of the object's class. Subclasses may opt to use this method to transform the output array into an instance of the subclass and update metadata before returning the array to the user. NumPy may also call this function without a context from non-ufuncs to allow preserving subclass information.

Changed in version 2.0: `return_scalar` is now passed as either `False` (usually) or `True`, indicating that NumPy would return a scalar. Subclasses may ignore the value, or return `array[()]` to behave more like NumPy.

Note

It is hoped to eventually deprecate this method in favour of `__array_ufunc__` for ufuncs (and `__array_function__` for a few other functions like [`numpy.squeeze`](generated/numpy.squeeze#numpy.squeeze "numpy.squeeze")).
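A minimal sketch of `__array_finalize__` propagating metadata through view casting and slicing (the `InfoArray` class and its `info` attribute are illustrative):

```python
import numpy as np

class InfoArray(np.ndarray):
    """Illustrative subclass carrying an `info` attribute."""

    def __new__(cls, input_array, info=None):
        obj = np.asarray(input_array).view(cls)
        obj.info = info
        return obj

    def __array_finalize__(self, obj):
        # Called on explicit construction, view casting and slicing;
        # copy the metadata from the "parent" array when there is one.
        if obj is not None:
            self.info = getattr(obj, 'info', None)

a = InfoArray([1.0, 2.0, 3.0], info='meters')
print(a[:2].info)   # slices keep the metadata
```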
class.__array_priority__

The value of this attribute is used to determine what type of object to return in situations where there is more than one possibility for the Python type of the returned object. Subclasses inherit a default value of 0.0 for this attribute.

Note

For ufuncs, it is hoped to eventually deprecate this method in favour of `__array_ufunc__`.

class.__array__(_dtype=None_, _copy=None_)

If defined on an object, this method should return an `ndarray`. It is called by array-coercion functions like `np.array()` if an object implementing this interface is passed to those functions. Third-party implementations of `__array__` must take `dtype` and `copy` keyword arguments, as ignoring them might break third-party code or NumPy itself.

* `dtype` is the data type of the returned array.
* `copy` is an optional boolean that indicates whether a copy should be returned. For `True`, a copy should always be made; for `None`, only if required (e.g. due to the passed `dtype` value); and for `False`, a copy should never be made (if a copy is still required, an appropriate exception should be raised).

Please refer to [Interoperability with NumPy](../user/basics.interoperability#basics-interoperability) for the protocol hierarchy, of which `__array__` is the oldest and least desirable.

Note

If a class (ndarray subclass or not) having the `__array__` method is used as the output object of an [ufunc](../user/basics.ufuncs#ufuncs-output-type), results will _not_ be written to the object returned by `__array__`. Doing so will raise a `TypeError`.

## Matrix objects

Note

It is strongly advised _not_ to use the matrix subclass. As described below, it makes writing functions that deal consistently with matrices and regular arrays very difficult. Currently, matrices are mainly used for interacting with `scipy.sparse`. We hope to provide an alternative for this use, however, and eventually remove the `matrix` subclass.
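Returning to the earlier units example: an object that opts out of ufuncs via `__array_ufunc__ = None` but still supports `arr * m` by defining a reflected operator can be sketched as follows (the `Unit` class is illustrative):

```python
import numpy as np

class Unit:
    """Illustrative unit marker: interacts with arrays only via `*`."""
    __array_ufunc__ = None   # ufuncs called on this object raise TypeError

    def __init__(self, name):
        self.name = name

    def __rmul__(self, arr):
        # For `arr * m`, ndarray.__mul__ returns NotImplemented (because
        # __array_ufunc__ is None), so Python falls back to this method.
        return (np.asarray(arr), self.name)

m = Unit('meters')
values, unit = np.array([1.0, 2.0]) * m
print(unit)
```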
[`matrix`](generated/numpy.matrix#numpy.matrix "numpy.matrix") objects inherit from the ndarray and therefore have the same attributes and methods as ndarrays. There are six important differences of matrix objects, however, that may lead to unexpected results when you use matrices but expect them to act like arrays:

1. Matrix objects can be created using a string notation to allow Matlab-style syntax where spaces separate columns and semicolons (';') separate rows.
2. Matrix objects are always two-dimensional. This has far-reaching implications, in that `m.ravel()` is still two-dimensional (with a 1 in the first dimension) and item selection returns two-dimensional objects, so that sequence behavior is fundamentally different than arrays.
3. Matrix objects over-ride multiplication to be matrix-multiplication. **Make sure you understand this for functions that you may want to receive matrices. Especially in light of the fact that asanyarray(m) returns a matrix when m is a matrix.**
4. Matrix objects over-ride power to be matrix raised to a power. The same warning about using power inside a function that uses asanyarray(...) to get an array object holds for this fact.
5. The default __array_priority__ of matrix objects is 10.0, and therefore mixed operations with ndarrays always produce matrices.
6. Matrices have special attributes which make calculations easier. These are

[`matrix.T`](generated/numpy.matrix.t#numpy.matrix.T "numpy.matrix.T") | Returns the transpose of the matrix.
---|---
[`matrix.H`](generated/numpy.matrix.h#numpy.matrix.H "numpy.matrix.H") | Returns the (complex) conjugate transpose of `self`.
[`matrix.I`](generated/numpy.matrix.i#numpy.matrix.I "numpy.matrix.I") | Returns the (multiplicative) inverse of invertible `self`.
[`matrix.A`](generated/numpy.matrix.a#numpy.matrix.A "numpy.matrix.A") | Return `self` as an [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") object.
Warning

Matrix objects over-ride multiplication, `*`, and power, `**`, to be matrix-multiplication and matrix power, respectively. If your subroutine can accept sub-classes and you do not convert to base-class arrays, then you must use the ufuncs multiply and power to be sure that you are performing the correct operation for all inputs.

The matrix class is a Python subclass of the ndarray and can be used as a reference for how to construct your own subclass of the ndarray. Matrices can be created from other matrices, strings, and anything else that can be converted to an `ndarray`. The name "mat" is an alias for "matrix" in NumPy.

[`matrix`](generated/numpy.matrix#numpy.matrix "numpy.matrix")(data[, dtype, copy]) | Returns a matrix from an array-like object, or from a string of data.
---|---
[`asmatrix`](generated/numpy.asmatrix#numpy.asmatrix "numpy.asmatrix")(data[, dtype]) | Interpret the input as a matrix.
[`bmat`](generated/numpy.bmat#numpy.bmat "numpy.bmat")(obj[, ldict, gdict]) | Build a matrix object from a string, nested sequence, or array.

Example 1: Matrix creation from a string

    >>> import numpy as np
    >>> a = np.asmatrix('1 2 3; 4 5 3')
    >>> print((a*a.T).I)
    [[ 0.29239766 -0.13450292]
     [-0.13450292  0.08187135]]

Example 2: Matrix creation from a nested sequence

    >>> import numpy as np
    >>> np.asmatrix([[1,5,10],[1.0,3,4j]])
    matrix([[ 1.+0.j,  5.+0.j, 10.+0.j],
            [ 1.+0.j,  3.+0.j,  0.+4.j]])

Example 3: Matrix creation from an array

    >>> import numpy as np
    >>> np.asmatrix(np.random.rand(3,3)).T
    matrix([[4.17022005e-01, 3.02332573e-01, 1.86260211e-01],
            [7.20324493e-01, 1.46755891e-01, 3.45560727e-01],
            [1.14374817e-04, 9.23385948e-02, 3.96767474e-01]])

## Memory-mapped file arrays

Memory-mapped files are useful for reading and/or modifying small segments of a large file with regular layout, without reading the entire file into memory. A simple subclass of the ndarray uses a memory-mapped file for the data buffer of the array.
For small files, the overhead of reading the entire file into memory is typically not significant; however, for large files, using memory mapping can save considerable resources. Memory-mapped-file arrays have one additional method (besides those they inherit from the ndarray): [`.flush()`](generated/numpy.memmap.flush#numpy.memmap.flush "numpy.memmap.flush"), which must be called manually by the user to ensure that any changes to the array actually get written to disk.

[`memmap`](generated/numpy.memmap#numpy.memmap "numpy.memmap")(filename[, dtype, mode, offset, ...]) | Create a memory-map to an array stored in a _binary_ file on disk.
---|---
[`memmap.flush`](generated/numpy.memmap.flush#numpy.memmap.flush "numpy.memmap.flush")() | Write any changes in the array to the file on disk.

Example:

    >>> import numpy as np
    >>> a = np.memmap('newfile.dat', dtype=float, mode='w+', shape=1000)
    >>> a[10] = 10.0
    >>> a[30] = 30.0
    >>> del a
    >>> b = np.fromfile('newfile.dat', dtype=float)
    >>> print(b[10], b[30])
    10.0 30.0
    >>> a = np.memmap('newfile.dat', dtype=float)
    >>> print(a[10], a[30])
    10.0 30.0

## Character arrays (numpy.char)

See also

[Creating character arrays (numpy.char)](routines.array-creation#routines-array-creation-char)

Note

The [`chararray`](generated/numpy.char.chararray#numpy.char.chararray "numpy.char.chararray") class exists for backwards compatibility with Numarray; it is not recommended for new development. Starting from numpy 1.4, if one needs arrays of strings, it is recommended to use arrays of [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") [`object_`](arrays.scalars#numpy.object_ "numpy.object_"), [`bytes_`](arrays.scalars#numpy.bytes_ "numpy.bytes_") or [`str_`](arrays.scalars#numpy.str_ "numpy.str_"), and use the free functions in the [`numpy.char`](routines.char#module-numpy.char "numpy.char") module for fast vectorized string operations.
These are enhanced arrays of either [`str_`](arrays.scalars#numpy.str_ "numpy.str_") type or [`bytes_`](arrays.scalars#numpy.bytes_ "numpy.bytes_") type. These arrays inherit from the [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"), but specially-define the operations `+`, `*`, and `%` on a (broadcasting) element-by-element basis. These operations are not available on the standard [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") of character type. In addition, the [`chararray`](generated/numpy.char.chararray#numpy.char.chararray "numpy.char.chararray") has all of the standard [`str`](https://docs.python.org/3/library/stdtypes.html#str "\(in Python v3.13\)") (and [`bytes`](https://docs.python.org/3/library/stdtypes.html#bytes "\(in Python v3.13\)")) methods, executing them on an element-by-element basis.

Perhaps the easiest way to create a chararray is to use [`self.view(chararray)`](generated/numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") where _self_ is an ndarray of str or unicode data-type. However, a chararray can also be created using the [`chararray`](generated/numpy.char.chararray#numpy.char.chararray "numpy.char.chararray") constructor, or via the [`numpy.char.array`](generated/numpy.char.array#numpy.char.array "numpy.char.array") function:

[`char.chararray`](generated/numpy.char.chararray#numpy.char.chararray "numpy.char.chararray")(shape[, itemsize, unicode, ...]) | Provides a convenient view on arrays of string and unicode values.
---|---
[`char.array`](generated/numpy.char.array#numpy.char.array "numpy.char.array")(obj[, itemsize, copy, unicode, order]) | Create a [`chararray`](generated/numpy.char.chararray#numpy.char.chararray "numpy.char.chararray").

Another difference with the standard ndarray of str data-type is that the chararray inherits the feature introduced by Numarray that white-space at the end of any element in the array will be ignored on item retrieval and comparison operations.
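The recommended alternative to `chararray`, the vectorized free functions in `numpy.char`, can be used on ordinary string arrays like this (the example data is illustrative):

```python
import numpy as np

names = np.array(['alice', 'bob'])
print(np.char.capitalize(names))           # elementwise str.capitalize
print(np.char.add(names, '@example.com'))  # elementwise concatenation
```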
## Record arrays

See also

[Creating record arrays](routines.array-creation#routines-array-creation-rec), [Data type routines](routines.dtype#routines-dtype), [Data type objects (dtype)](arrays.dtypes#arrays-dtypes).

NumPy provides the [`recarray`](generated/numpy.recarray#numpy.recarray "numpy.recarray") class which allows accessing the fields of a structured array as attributes, and a corresponding scalar data type object [`record`](generated/numpy.record#numpy.record "numpy.record").

[`recarray`](generated/numpy.recarray#numpy.recarray "numpy.recarray")(shape[, dtype, buf, offset, ...]) | Construct an ndarray that allows field access using attributes.
---|---
[`record`](generated/numpy.record#numpy.record "numpy.record") | A data-type scalar that allows field access as attribute lookup.

Note

The pandas DataFrame is more powerful than a record array. If possible, please use a pandas DataFrame instead.

## Masked arrays (numpy.ma)

See also

[Masked arrays](maskedarray#maskedarray)

## Standard container class

For backward compatibility and as a standard "container" class, the UserArray from Numeric has been brought over to NumPy and named [`numpy.lib.user_array.container`](generated/numpy.lib.user_array.container#numpy.lib.user_array.container "numpy.lib.user_array.container"). The container class is a Python class whose `self.array` attribute is an ndarray. Multiple inheritance is probably easier with `numpy.lib.user_array.container` than with the ndarray itself, and so it is included by default. It is not documented here beyond mentioning its existence because you are encouraged to use the ndarray class directly if you can.

[`numpy.lib.user_array.container`](generated/numpy.lib.user_array.container#numpy.lib.user_array.container "numpy.lib.user_array.container")(data[, ...]) | Standard container-class for easy multiple-inheritance.
---|---

## Array iterators

Iterators are a powerful concept for array processing. Essentially, iterators implement a generalized for-loop.
If _myiter_ is an iterator object, then the Python code:

    for val in myiter:
        ...
        some code involving val
        ...

calls `val = next(myiter)` repeatedly until [`StopIteration`](https://docs.python.org/3/library/exceptions.html#StopIteration "\(in Python v3.13\)") is raised by the iterator. There are several ways to iterate over an array that may be useful: default iteration, flat iteration, and \\(N\\)-dimensional enumeration.

### Default iteration

The default iterator of an ndarray object is the default Python iterator of a sequence type. Thus, when the array object itself is used as an iterator, the default behavior is equivalent to:

    for i in range(arr.shape[0]):
        val = arr[i]

This default iterator selects a sub-array of dimension \\(N-1\\) from the array. This can be a useful construct for defining recursive algorithms. To loop over the entire array requires \\(N\\) for-loops.

    >>> import numpy as np
    >>> a = np.arange(24).reshape(3,2,4) + 10
    >>> for val in a:
    ...     print('item:', val)
    item: [[10 11 12 13]
     [14 15 16 17]]
    item: [[18 19 20 21]
     [22 23 24 25]]
    item: [[26 27 28 29]
     [30 31 32 33]]

### Flat iteration

[`ndarray.flat`](generated/numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") | A 1-D iterator over the array.
---|---

As mentioned previously, the flat attribute of ndarray objects returns an iterator that will cycle over the entire array in C-style contiguous order.

    >>> import numpy as np
    >>> a = np.arange(24).reshape(3,2,4) + 10
    >>> for i, val in enumerate(a.flat):
    ...     if i%5 == 0: print(i, val)
    0 10
    5 15
    10 20
    15 25
    20 30

Here, I've used the built-in enumerate iterator to return the iterator index as well as the value.

### N-dimensional enumeration

[`ndenumerate`](generated/numpy.ndenumerate#numpy.ndenumerate "numpy.ndenumerate")(arr) | Multidimensional index iterator.
---|---

Sometimes it may be useful to get the N-dimensional index while iterating. The ndenumerate iterator can achieve this.

    >>> import numpy as np
    >>> for i, val in np.ndenumerate(a):
    ...     if sum(i)%5 == 0: print(i, val)
    (0, 0, 0) 10
    (1, 1, 3) 25
    (2, 0, 3) 29
    (2, 1, 2) 32

### Iterator for broadcasting

[`broadcast`](generated/numpy.broadcast#numpy.broadcast "numpy.broadcast") | Produce an object that mimics broadcasting.
---|---

The general concept of broadcasting is also available from Python using the [`broadcast`](generated/numpy.broadcast#numpy.broadcast "numpy.broadcast") iterator. This object takes \\(N\\) objects as inputs and returns an iterator that returns tuples providing each of the input sequence elements in the broadcasted result.

    >>> import numpy as np
    >>> for val in np.broadcast([[1, 0], [2, 3]], [0, 1]):
    ...     print(val)
    (np.int64(1), np.int64(0))
    (np.int64(0), np.int64(1))
    (np.int64(2), np.int64(0))
    (np.int64(3), np.int64(1))

# Datetimes and timedeltas

Starting in NumPy 1.7, there are core array data types which natively support datetime functionality. The data type is called [`datetime64`](arrays.scalars#numpy.datetime64 "numpy.datetime64"), so named because [`datetime`](https://docs.python.org/3/library/datetime.html#datetime.datetime "\(in Python v3.13\)") is already taken by the Python standard library.

## Datetime64 conventions and assumptions

Similar to the Python [`date`](https://docs.python.org/3/library/datetime.html#datetime.date "\(in Python v3.13\)") class, dates are expressed in the current Gregorian Calendar, indefinitely extended both in the future and in the past. [1] Contrary to the Python [`date`](https://docs.python.org/3/library/datetime.html#datetime.date "\(in Python v3.13\)"), which supports only years in the 1 AD to 9999 AD range, [`datetime64`](arrays.scalars#numpy.datetime64 "numpy.datetime64") also allows dates BC; years BC follow the [Astronomical year numbering](https://en.wikipedia.org/wiki/Astronomical_year_numbering) convention, i.e. year 2 BC is numbered −1, year 1 BC is numbered 0, year 1 AD is numbered 1.
Time instants, say 16:23:32.234, are represented counting hours, minutes, seconds and fractions from midnight: i.e. 00:00:00.000 is midnight, 12:00:00.000 is noon, etc. Each calendar day has exactly 86400 seconds. This is a "naive" time, with no explicit notion of timezones or specific time scales (UT1, UTC, TAI, etc.). [2]

[1] The calendar obtained by extending the Gregorian calendar before its official adoption on Oct. 15, 1582 is called the [Proleptic Gregorian Calendar](https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar).

[2] The assumption of 86400 seconds per calendar day is not valid for UTC, the present-day civil time scale: due to the presence of [leap seconds](https://en.wikipedia.org/wiki/Leap_second), on rare occasions a day may be 86401 or 86399 seconds long. By contrast, the 86400 s day assumption holds for the TAI timescale. Explicit support for TAI and for TAI-to-UTC conversion, accounting for leap seconds, has been proposed but is not yet implemented. See also the shortcomings section below.

## Basic datetimes

The most basic way to create datetimes is from strings in ISO 8601 date or datetime format. It is also possible to create datetimes from an integer offset relative to the Unix epoch (00:00:00 UTC on 1 January 1970). The unit for internal storage is automatically selected from the form of the string, and can be either a date unit or a time unit. The date units are years ('Y'), months ('M'), weeks ('W'), and days ('D'), while the time units are hours ('h'), minutes ('m'), seconds ('s'), milliseconds ('ms'), and some additional SI-prefix seconds-based units. The [`datetime64`](arrays.scalars#numpy.datetime64 "numpy.datetime64") data type also accepts the string "NAT", in any combination of lowercase/uppercase letters, for a "Not A Time" value.
#### Example

A simple ISO date:

    >>> import numpy as np
    >>> np.datetime64('2005-02-25')
    np.datetime64('2005-02-25')

From an integer and a date unit, 1 year since the UNIX epoch:

    >>> np.datetime64(1, 'Y')
    np.datetime64('1971')

Using months for the unit:

    >>> np.datetime64('2005-02')
    np.datetime64('2005-02')

Specifying just the month, but forcing a 'days' unit:

    >>> np.datetime64('2005-02', 'D')
    np.datetime64('2005-02-01')

From a date and time:

    >>> np.datetime64('2005-02-25T03:30')
    np.datetime64('2005-02-25T03:30')

NAT (not a time):

    >>> np.datetime64('nat')
    np.datetime64('NaT')

When creating an array of datetimes from a string, it is still possible to automatically select the unit from the inputs, by using the datetime type with generic units.

#### Example

    >>> import numpy as np
    >>> np.array(['2007-07-13', '2006-01-13', '2010-08-13'], dtype='datetime64')
    array(['2007-07-13', '2006-01-13', '2010-08-13'], dtype='datetime64[D]')
    >>> np.array(['2001-01-01T12:00', '2002-02-03T13:56:03.172'], dtype='datetime64')
    array(['2001-01-01T12:00:00.000', '2002-02-03T13:56:03.172'],
          dtype='datetime64[ms]')

An array of datetimes can be constructed from integers representing POSIX timestamps with the given unit.

#### Example

    >>> import numpy as np
    >>> np.array([0, 1577836800], dtype='datetime64[s]')
    array(['1970-01-01T00:00:00', '2020-01-01T00:00:00'],
          dtype='datetime64[s]')
    >>> np.array([0, 1577836800000]).astype('datetime64[ms]')
    array(['1970-01-01T00:00:00.000', '2020-01-01T00:00:00.000'],
          dtype='datetime64[ms]')

The datetime type works with many common NumPy functions, for example [`arange`](generated/numpy.arange#numpy.arange "numpy.arange") can be used to generate ranges of dates.
#### Example

All the dates for one month:

    >>> import numpy as np
    >>> np.arange('2005-02', '2005-03', dtype='datetime64[D]')
    array(['2005-02-01', '2005-02-02', '2005-02-03', '2005-02-04',
           '2005-02-05', '2005-02-06', '2005-02-07', '2005-02-08',
           '2005-02-09', '2005-02-10', '2005-02-11', '2005-02-12',
           '2005-02-13', '2005-02-14', '2005-02-15', '2005-02-16',
           '2005-02-17', '2005-02-18', '2005-02-19', '2005-02-20',
           '2005-02-21', '2005-02-22', '2005-02-23', '2005-02-24',
           '2005-02-25', '2005-02-26', '2005-02-27', '2005-02-28'],
          dtype='datetime64[D]')

The datetime object represents a single moment in time. If two datetimes have different units, they may still represent the same moment of time, and converting from a bigger unit like months to a smaller unit like days is considered a 'safe' cast because the moment of time is still being represented exactly.

#### Example

    >>> import numpy as np
    >>> np.datetime64('2005') == np.datetime64('2005-01-01')
    True
    >>> np.datetime64('2010-03-14T15') == np.datetime64('2010-03-14T15:00:00.00')
    True

Deprecated since version 1.11.0: NumPy does not store timezone information. For backwards compatibility, datetime64 still parses timezone offsets, which it handles by converting to UTC±00:00 (Zulu time). This behaviour is deprecated and will raise an error in the future.

## Datetime and timedelta arithmetic

NumPy allows the subtraction of two datetime values, an operation which produces a number with a time unit. Because NumPy doesn't have a physical quantities system in its core, the [`timedelta64`](arrays.scalars#numpy.timedelta64 "numpy.timedelta64") data type was created to complement [`datetime64`](arrays.scalars#numpy.datetime64 "numpy.datetime64"). The arguments for [`timedelta64`](arrays.scalars#numpy.timedelta64 "numpy.timedelta64") are a number, to represent the number of units, and a date/time unit, such as (D)ay, (M)onth, (Y)ear, (h)ours, (m)inutes, or (s)econds.
The [`timedelta64`](arrays.scalars#numpy.timedelta64 "numpy.timedelta64") data type also accepts the string "NAT" in place of the number for a "Not A Time" value.

#### Example

    >>> import numpy as np
    >>> np.timedelta64(1, 'D')
    np.timedelta64(1,'D')
    >>> np.timedelta64(4, 'h')
    np.timedelta64(4,'h')
    >>> np.timedelta64('nAt')
    np.timedelta64('NaT')

Datetimes and Timedeltas work together to provide ways to perform simple datetime calculations.

#### Example

    >>> import numpy as np
    >>> np.datetime64('2009-01-01') - np.datetime64('2008-01-01')
    np.timedelta64(366,'D')
    >>> np.datetime64('2009') + np.timedelta64(20, 'D')
    np.datetime64('2009-01-21')
    >>> np.datetime64('2011-06-15T00:00') + np.timedelta64(12, 'h')
    np.datetime64('2011-06-15T12:00')
    >>> np.timedelta64(1,'W') / np.timedelta64(1,'D')
    7.0
    >>> np.timedelta64(1,'W') % np.timedelta64(10,'D')
    np.timedelta64(7,'D')
    >>> np.datetime64('nat') - np.datetime64('2009-01-01')
    np.timedelta64('NaT','D')
    >>> np.datetime64('2009-01-01') + np.timedelta64('nat')
    np.datetime64('NaT')

There are two Timedelta units ('Y', years and 'M', months) which are treated specially, because how much time they represent changes depending on when they are used. While a timedelta day unit is equivalent to 24 hours, month and year units cannot be converted directly into days without using 'unsafe' casting. The [`numpy.ndarray.astype`](generated/numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype") method can be used for unsafe conversion of months/years to days. The conversion is based on averaged values computed from the 400-year leap-year cycle.
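As a sketch of the astype-based route just described (this is an illustration, not one of the reference examples): the cast is 'unsafe', so the year is converted using the 400-year leap-year-cycle average rather than any particular calendar year.

```python
import numpy as np

a = np.timedelta64(1, 'Y')
# A direct same_kind cast from years to days is refused (see the
# example below), but astype performs the unsafe cast, converting
# via the 400-year-cycle average year length (365.2425 days).
d = a.astype('timedelta64[D]')
print(d)        # an average-year length, in whole days
print(d.dtype)  # timedelta64[D]
```
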
#### Example

    >>> import numpy as np
    >>> a = np.timedelta64(1, 'Y')
    >>> np.timedelta64(a, 'M')
    numpy.timedelta64(12,'M')
    >>> np.timedelta64(a, 'D')
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: Cannot cast NumPy timedelta64 scalar from metadata [Y] to [D] according to the rule 'same_kind'

## Datetime units

The Datetime and Timedelta data types support a large number of time units, as well as generic units which can be coerced into any of the other units based on input data.

Datetimes are always stored with an epoch of 1970-01-01T00:00. This means the supported dates are always a symmetric interval around the epoch, called "time span" in the table below. The length of the span is the range of a 64-bit integer times the length of the date or time unit. For example, the time span for 'W' (week) is exactly 7 times longer than the time span for 'D' (day), and the time span for 'D' (day) is exactly 24 times longer than the time span for 'h' (hour).

Here are the date units:

Code | Meaning | Time span (relative) | Time span (absolute)
---|---|---|---
Y | year | +/- 9.2e18 years | [9.2e18 BC, 9.2e18 AD]
M | month | +/- 7.6e17 years | [7.6e17 BC, 7.6e17 AD]
W | week | +/- 1.7e17 years | [1.7e17 BC, 1.7e17 AD]
D | day | +/- 2.5e16 years | [2.5e16 BC, 2.5e16 AD]

And here are the time units:

Code | Meaning | Time span (relative) | Time span (absolute)
---|---|---|---
h | hour | +/- 1.0e15 years | [1.0e15 BC, 1.0e15 AD]
m | minute | +/- 1.7e13 years | [1.7e13 BC, 1.7e13 AD]
s | second | +/- 2.9e11 years | [2.9e11 BC, 2.9e11 AD]
ms | millisecond | +/- 2.9e8 years | [2.9e8 BC, 2.9e8 AD]
us / μs | microsecond | +/- 2.9e5 years | [290301 BC, 294241 AD]
ns | nanosecond | +/- 292 years | [1678 AD, 2262 AD]
ps | picosecond | +/- 106 days | [1969 AD, 1970 AD]
fs | femtosecond | +/- 2.6 hours | [1969 AD, 1970 AD]
as | attosecond | +/- 9.2 seconds | [1969 AD, 1970 AD]

## Business day functionality

To allow the datetime to be used in contexts where only certain days of
the week are valid, NumPy includes a set of "busday" (business day) functions. The default for busday functions is that the only valid days are Monday through Friday (the usual business days). The implementation is based on a "weekmask" containing 7 Boolean flags to indicate valid days; custom weekmasks are possible that specify other sets of valid days.

The "busday" functions can additionally check a list of "holiday" dates, specific dates that are not valid days.

The function [`busday_offset`](generated/numpy.busday_offset#numpy.busday_offset "numpy.busday_offset") allows you to apply offsets specified in business days to datetimes with a unit of 'D' (day).

#### Example

    >>> import numpy as np
    >>> np.busday_offset('2011-06-23', 1)
    np.datetime64('2011-06-24')
    >>> np.busday_offset('2011-06-23', 2)
    np.datetime64('2011-06-27')

When an input date falls on the weekend or a holiday, [`busday_offset`](generated/numpy.busday_offset#numpy.busday_offset "numpy.busday_offset") first applies a rule to roll the date to a valid business day, then applies the offset. The default rule is 'raise', which simply raises an exception. The rules most typically used are 'forward' and 'backward'.

#### Example

    >>> import numpy as np
    >>> np.busday_offset('2011-06-25', 2)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ValueError: Non-business day date in busday_offset
    >>> np.busday_offset('2011-06-25', 0, roll='forward')
    np.datetime64('2011-06-27')
    >>> np.busday_offset('2011-06-25', 2, roll='forward')
    np.datetime64('2011-06-29')
    >>> np.busday_offset('2011-06-25', 0, roll='backward')
    np.datetime64('2011-06-24')
    >>> np.busday_offset('2011-06-25', 2, roll='backward')
    np.datetime64('2011-06-28')

In some cases, an appropriate use of the roll and the offset is necessary to get a desired answer.
#### Example

The first business day on or after a date:

    >>> import numpy as np
    >>> np.busday_offset('2011-03-20', 0, roll='forward')
    np.datetime64('2011-03-21')
    >>> np.busday_offset('2011-03-22', 0, roll='forward')
    np.datetime64('2011-03-22')

The first business day strictly after a date:

    >>> np.busday_offset('2011-03-20', 1, roll='backward')
    np.datetime64('2011-03-21')
    >>> np.busday_offset('2011-03-22', 1, roll='backward')
    np.datetime64('2011-03-23')

The function is also useful for computing some kinds of days like holidays. In Canada and the U.S., Mother's day is on the second Sunday in May, which can be computed with a custom weekmask.

#### Example

    >>> import numpy as np
    >>> np.busday_offset('2012-05', 1, roll='forward', weekmask='Sun')
    np.datetime64('2012-05-13')

When performance is important for manipulating many business dates with one particular choice of weekmask and holidays, there is an object [`busdaycalendar`](generated/numpy.busdaycalendar#numpy.busdaycalendar "numpy.busdaycalendar") which stores the data necessary in an optimized form.

### np.is_busday():

To test a [`datetime64`](arrays.scalars#numpy.datetime64 "numpy.datetime64") value to see if it is a valid day, use [`is_busday`](generated/numpy.is_busday#numpy.is_busday "numpy.is_busday").
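`is_busday` (and `busday_offset`) also accept a prebuilt `busdaycalendar` via the `busdaycal` parameter, which is the optimized form mentioned above for reusing one weekmask/holiday combination. A minimal sketch, with an illustrative (hypothetical) holiday list:

```python
import numpy as np

# A reusable calendar: the default Mon-Fri weekmask plus two
# illustrative holiday dates (chosen only for this demo).
cal = np.busdaycalendar(weekmask='1111100',
                        holidays=['2011-07-01', '2011-07-04'])

# 2011-07-04 is a Monday, but it is listed as a holiday
print(np.is_busday(np.datetime64('2011-07-04'), busdaycal=cal))  # False

# Roll off the holiday to a valid day, then offset by one business day
print(np.busday_offset('2011-07-01', 1, roll='forward', busdaycal=cal))
```

When `busdaycal` is given, the `weekmask` and `holidays` arguments must not be passed at the same time; the calendar already carries both.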
#### Example

    >>> import numpy as np
    >>> np.is_busday(np.datetime64('2011-07-15'))  # a Friday
    True
    >>> np.is_busday(np.datetime64('2011-07-16'))  # a Saturday
    False
    >>> np.is_busday(np.datetime64('2011-07-16'), weekmask="Sat Sun")
    True
    >>> a = np.arange(np.datetime64('2011-07-11'), np.datetime64('2011-07-18'))
    >>> np.is_busday(a)
    array([ True,  True,  True,  True,  True, False, False])

### np.busday_count():

To find how many valid days there are in a specified range of datetime64 dates, use [`busday_count`](generated/numpy.busday_count#numpy.busday_count "numpy.busday_count"):

#### Example

    >>> import numpy as np
    >>> np.busday_count(np.datetime64('2011-07-11'), np.datetime64('2011-07-18'))
    5
    >>> np.busday_count(np.datetime64('2011-07-18'), np.datetime64('2011-07-11'))
    -5

If you have an array of datetime64 day values, and you want a count of how many of them are valid dates, you can do this:

#### Example

    >>> import numpy as np
    >>> a = np.arange(np.datetime64('2011-07-11'), np.datetime64('2011-07-18'))
    >>> np.count_nonzero(np.is_busday(a))
    5

### Custom weekmasks

Here are several examples of custom weekmask values. These examples all specify the "busday" default of Monday through Friday being valid days.

    # Positional sequences; positions are Monday through Sunday.
    # Length of the sequence must be exactly 7.
    weekmask = [1, 1, 1, 1, 1, 0, 0]
    # list or other sequence; 0 == invalid day, 1 == valid day
    weekmask = "1111100"
    # string '0' == invalid day, '1' == valid day

    # string abbreviations from this list: Mon Tue Wed Thu Fri Sat Sun
    weekmask = "Mon Tue Wed Thu Fri"
    # any amount of whitespace is allowed; abbreviations are case-sensitive.
    weekmask = "MonTue Wed Thu\tFri"

## Datetime64 shortcomings

The assumption that all days are exactly 86400 seconds long makes [`datetime64`](arrays.scalars#numpy.datetime64 "numpy.datetime64") largely compatible with Python [`datetime`](https://docs.python.org/3/library/datetime.html#module-datetime "(in Python v3.13)") and "POSIX time" semantics; therefore they all share the same well-known shortcomings with respect to the UTC timescale and historical time determination. A brief non-exhaustive summary is given below.

* It is impossible to parse valid UTC timestamps occurring during a positive leap second.

#### Example

"2016-12-31 23:59:60 UTC" was a leap second, therefore "2016-12-31 23:59:60.450 UTC" is a valid timestamp which is not parseable by [`datetime64`](arrays.scalars#numpy.datetime64 "numpy.datetime64"):

    >>> import numpy as np
    >>> np.datetime64("2016-12-31 23:59:60.450")
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ValueError: Seconds out of range in datetime string "2016-12-31 23:59:60.450"

* Timedelta64 computations between two UTC dates can be wrong by an integer number of SI seconds.

#### Example

Compute the number of SI seconds between "2021-01-01 12:56:23.423 UTC" and "2001-01-01 00:00:00.000 UTC":

    >>> import numpy as np
    >>> (
    ...   np.datetime64("2021-01-01 12:56:23.423")
    ...   - np.datetime64("2001-01-01")
    ... ) / np.timedelta64(1, "s")
    631198583.423

However, the correct answer is `631198588.423` SI seconds, because there were 5 leap seconds between 2001 and 2021.

* Timedelta64 computations for dates in the past do not return SI seconds, as one would expect.
#### Example

Compute the number of seconds between "0000-01-01 UT" and "1600-01-01 UT", where UT is [universal time](https://en.wikipedia.org/wiki/Universal_Time):

    >>> import numpy as np
    >>> a = np.datetime64("0000-01-01", "us")
    >>> b = np.datetime64("1600-01-01", "us")
    >>> b - a
    numpy.timedelta64(50491123200000000,'us')

The computed result, `50491123200` seconds, is obtained as the elapsed number of days (`584388`) times `86400` seconds; this is the number of seconds of a clock in sync with the Earth's rotation. The exact value in SI seconds can only be estimated, e.g., using data published in [Measurement of the Earth's rotation: 720 BC to AD 2015, 2016, Royal Society's Proceedings A 472, by Stephenson et al.](https://doi.org/10.1098/rspa.2016.0404). A sensible estimate is `50491112870 ± 90` seconds, with a difference of 10330 seconds.

# Data type objects (dtype)

A data type object (an instance of the [`numpy.dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") class) describes how the bytes in the fixed-size block of memory corresponding to an array item should be interpreted. It describes the following aspects of the data:

1. Type of the data (integer, float, Python object, etc.)
2. Size of the data (how many bytes are in _e.g._ the integer)
3. Byte order of the data ([little-endian](../glossary#term-little-endian) or [big-endian](../glossary#term-big-endian))
4. If the data type is a [structured data type](../glossary#term-structured-data-type), an aggregate of other data types (_e.g._, describing an array item consisting of an integer and a float):
   1. what are the names of the "[fields](../glossary#term-field)" of the structure, by which they can be [accessed](../user/basics.indexing#arrays-indexing-fields),
   2. what is the data-type of each [field](../glossary#term-field), and
   3. which part of the memory block each field takes.
5. If the data type is a sub-array, what is its shape and data type.
To describe the type of scalar data, there are several [built-in scalar types](arrays.scalars#arrays-scalars-built-in) in NumPy for various precisions of integers, floating-point numbers, _etc._ An item extracted from an array, _e.g._, by indexing, will be a Python object whose type is the scalar type associated with the data type of the array.

Note that the scalar types are not [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") objects, even though they can be used in place of one whenever a data type specification is needed in NumPy.

Structured data types are formed by creating a data type whose fields contain other data types. Each field has a name by which it can be [accessed](../user/basics.indexing#arrays-indexing-fields). The parent data type should be of sufficient size to contain all its fields; the parent is nearly always based on the [`void`](arrays.scalars#numpy.void "numpy.void") type which allows an arbitrary item size. Structured data types may also contain nested structured sub-array data types in their fields.

Finally, a data type can describe items that are themselves arrays of items of another data type. These sub-arrays must, however, be of a fixed size. If an array is created using a data-type describing a sub-array, the dimensions of the sub-array are appended to the shape of the array when the array is created. Sub-arrays in a field of a structured type behave differently, see [Field access](../user/basics.indexing#arrays-indexing-fields). Sub-arrays always have a C-contiguous memory layout.

#### Example

A simple data type containing a 32-bit big-endian integer: (see Specifying and constructing data types for details on construction)

    >>> import numpy as np
    >>> dt = np.dtype('>i4')
    >>> dt.byteorder
    '>'
    >>> dt.itemsize
    4
    >>> dt.name
    'int32'
    >>> dt.type is np.int32
    True

The corresponding array scalar type is [`int32`](arrays.scalars#numpy.int32 "numpy.int32").
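The sub-array behaviour described above (the dimensions of a sub-array data type are appended to the array's shape, and the base dtype is absorbed) can be sketched as follows; this is an illustration, not one of the reference examples:

```python
import numpy as np

# A data type describing a 2 x 2 sub-array of 32-bit integers
dt = np.dtype((np.int32, (2, 2)))

# When used to create an array, the sub-array dimensions are
# appended to the requested shape, and the resulting array's
# dtype is the plain base dtype.
a = np.zeros(3, dtype=dt)
print(a.shape)  # (3, 2, 2)
print(a.dtype)  # int32
```
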
#### Example

A structured data type containing a 16-character string (in field 'name') and a sub-array of two 64-bit floating-point numbers (in field 'grades'):

    >>> import numpy as np
    >>> dt = np.dtype([('name', np.str_, 16), ('grades', np.float64, (2,))])
    >>> dt['name']
    dtype('<U16')
    >>> dt['grades']
    dtype(('<f8', (2,)))

Items of an array of this data type are wrapped in an array scalar type that also has two fields:

    >>> import numpy as np
    >>> x = np.array([('Sarah', (8.0, 7.0)), ('John', (6.0, 7.0))], dtype=dt)
    >>> x[1]
    ('John', [6., 7.])
    >>> x[1]['grades']
    array([6., 7.])
    >>> type(x[1])
    <class 'numpy.void'>
    >>> type(x[1]['grades'])
    <class 'numpy.ndarray'>

## Specifying and constructing data types

Whenever a data-type is required in a NumPy function or method, either a [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") object or something that can be converted to one can be supplied. Such conversions are done by the [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") constructor:

[`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype")(dtype[, align, copy]) | Create a data type object.
---|---

What can be converted to a data-type object is described below:

[`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") object

Used as-is.

None

The default data type: [`float64`](arrays.scalars#numpy.float64 "numpy.float64").

Array-scalar types

The 24 built-in [array scalar type objects](arrays.scalars#arrays-scalars-built-in) all convert to an associated data-type object. This is true for their sub-classes as well. Note that not all data-type information can be supplied with a type-object: for example, [`flexible`](arrays.scalars#numpy.flexible "numpy.flexible") data-types have a default _itemsize_ of 0, and require an explicitly given size to be useful.
#### Example

    >>> import numpy as np
    >>> dt = np.dtype(np.int32)       # 32-bit integer
    >>> dt = np.dtype(np.complex128)  # 128-bit complex floating-point number

Generic types

The generic hierarchical type objects convert to corresponding type objects according to the associations:

[`number`](arrays.scalars#numpy.number "numpy.number"), [`inexact`](arrays.scalars#numpy.inexact "numpy.inexact"), [`floating`](arrays.scalars#numpy.floating "numpy.floating") | [`float64`](arrays.scalars#numpy.float64 "numpy.float64")
---|---
[`complexfloating`](arrays.scalars#numpy.complexfloating "numpy.complexfloating") | [`complex128`](arrays.scalars#numpy.complex128 "numpy.complex128")
[`integer`](arrays.scalars#numpy.integer "numpy.integer"), [`signedinteger`](arrays.scalars#numpy.signedinteger "numpy.signedinteger") | [`int_`](arrays.scalars#numpy.int_ "numpy.int_")
[`unsignedinteger`](arrays.scalars#numpy.unsignedinteger "numpy.unsignedinteger") | [`uint`](arrays.scalars#numpy.uint "numpy.uint")
[`generic`](arrays.scalars#numpy.generic "numpy.generic"), [`flexible`](arrays.scalars#numpy.flexible "numpy.flexible") | [`void`](arrays.scalars#numpy.void "numpy.void")

Deprecated since version 1.19: This conversion of generic scalar types is deprecated. This is because it can be unexpected in a context such as `arr.astype(dtype=np.floating)`, which casts an array of `float32` to an array of `float64`, even though `float32` is a subdtype of `np.floating`.
Built-in Python types

Several Python types are equivalent to a corresponding array scalar when used to generate a [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") object:

[`int`](https://docs.python.org/3/library/functions.html#int "(in Python v3.13)") | [`int_`](arrays.scalars#numpy.int_ "numpy.int_")
---|---
[`bool`](arrays.scalars#numpy.bool "numpy.bool") | [`bool_`](arrays.scalars#numpy.bool_ "numpy.bool_")
[`float`](https://docs.python.org/3/library/functions.html#float "(in Python v3.13)") | [`float64`](arrays.scalars#numpy.float64 "numpy.float64")
[`complex`](https://docs.python.org/3/library/functions.html#complex "(in Python v3.13)") | [`complex128`](arrays.scalars#numpy.complex128 "numpy.complex128")
[`bytes`](https://docs.python.org/3/library/stdtypes.html#bytes "(in Python v3.13)") | [`bytes_`](arrays.scalars#numpy.bytes_ "numpy.bytes_")
[`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") | [`str_`](arrays.scalars#numpy.str_ "numpy.str_")
[`memoryview`](https://docs.python.org/3/library/stdtypes.html#memoryview "(in Python v3.13)") | [`void`](arrays.scalars#numpy.void "numpy.void")
(all others) | [`object_`](arrays.scalars#numpy.object_ "numpy.object_")

Note that `str_` corresponds to UCS4 encoded unicode strings.

#### Example

    >>> import numpy as np
    >>> dt = np.dtype(float)   # Python-compatible floating-point number
    >>> dt = np.dtype(int)     # Python-compatible integer
    >>> dt = np.dtype(object)  # Python object

Note

All other types map to `object_` for convenience. Code should expect that such types may map to a specific (new) dtype in the future.

Types with `.dtype`

Any type object with a `dtype` attribute: the attribute will be accessed and used directly. The attribute must return something that is convertible into a dtype object.

Several kinds of strings can be converted.
Recognized strings can be prepended with `'>'` ([big-endian](../glossary#term-big-endian)), `'<'` ([little-endian](../glossary#term-little-endian)), or `'='` (hardware-native, the default), to specify the byte order.

One-character strings

Each built-in data-type has a character code (the updated Numeric typecodes) that uniquely identifies it.

#### Example

    >>> import numpy as np
    >>> dt = np.dtype('b')   # byte, native byte order
    >>> dt = np.dtype('>H')  # big-endian unsigned short
    >>> dt = np.dtype('<f')  # little-endian single-precision float
    >>> dt = np.dtype('d')   # double-precision floating-point number

Array-protocol type strings (see [The array interface protocol](arrays.interface#arrays-interface))

The first character specifies the kind of data and the remaining characters specify the number of bytes per item, except for Unicode, where it is interpreted as the number of characters. The item size must correspond to an existing type, or an error will be raised. The supported kinds are

`'?'` | boolean
---|---
`'b'` | (signed) byte
`'B'` | unsigned byte
`'i'` | (signed) integer
`'u'` | unsigned integer
`'f'` | floating-point
`'c'` | complex-floating point
`'m'` | timedelta
`'M'` | datetime
`'O'` | (Python) objects
`'S'`, `'a'` | zero-terminated bytes (not recommended)
`'U'` | Unicode string
`'V'` | raw data ([`void`](arrays.scalars#numpy.void "numpy.void"))

#### Example

    >>> import numpy as np
    >>> dt = np.dtype('i4')   # 32-bit signed integer
    >>> dt = np.dtype('f8')   # 64-bit floating-point number
    >>> dt = np.dtype('c16')  # 128-bit complex floating-point number
    >>> dt = np.dtype('S25')  # 25-length zero-terminated bytes
    >>> dt = np.dtype('U25')  # 25-character string

Note on string types

For backward compatibility with existing code originally written to support Python 2, `S` and `a` typestrings are zero-terminated bytes. For unicode strings, use `U`, [`numpy.str_`](arrays.scalars#numpy.str_ "numpy.str_"). For signed bytes that do not need zero-termination, `b` or `i1` can be used.
String with comma-separated fields

A short-hand notation for specifying the format of a structured data type is a comma-separated string of basic formats.

A basic format in this context is an optional shape specifier followed by an array-protocol type string. Parentheses are required on the shape if it has more than one dimension. NumPy allows a modification on the format in that any string that can uniquely identify the type can be used to specify the data-type in a field. The generated data-type fields are named `'f0'`, `'f1'`, …, `'f<N-1>'` where N (>1) is the number of comma-separated basic formats in the string. If the optional shape specifier is provided, then the data-type for the corresponding field describes a sub-array.

#### Example

* field named `f0` containing a 32-bit integer
* field named `f1` containing a 2 x 3 sub-array of 64-bit floating-point numbers
* field named `f2` containing a 32-bit floating-point number

    >>> import numpy as np
    >>> dt = np.dtype("i4, (2,3)f8, f4")

* field named `f0` containing a 3-character string
* field named `f1` containing a sub-array of shape (3,) containing 64-bit unsigned integers
* field named `f2` containing a 3 x 4 sub-array containing 10-character strings

    >>> import numpy as np
    >>> dt = np.dtype("S3, 3u8, (3,4)S10")

Type strings

Any string name of a NumPy dtype, e.g.:

#### Example

    >>> import numpy as np
    >>> dt = np.dtype('uint32')   # 32-bit unsigned integer
    >>> dt = np.dtype('float64')  # 64-bit floating-point number

`(flexible_dtype, itemsize)`

The first argument must be an object that is converted to a zero-sized flexible data-type object; the second argument is an integer providing the desired itemsize.

#### Example

    >>> import numpy as np
    >>> dt = np.dtype((np.void, 10))  # 10-byte wide data block
    >>> dt = np.dtype(('U', 10))      # 10-character unicode string

`(fixed_dtype, shape)`

The first argument is any object that can be converted into a fixed-size data-type object.
The second argument is the desired shape of this type. If the shape parameter is 1, the data-type object was formerly equivalent to the fixed dtype; this behaviour is deprecated since NumPy 1.17 and will raise an error in the future. If _shape_ is a tuple, then the new dtype defines a sub-array of the given shape.

#### Example

    >>> import numpy as np
    >>> dt = np.dtype((np.int32, (2,2)))           # 2 x 2 integer sub-array
    >>> dt = np.dtype(('i4, (2,3)f8, f4', (2,3)))  # 2 x 3 structured sub-array

`[(field_name, field_dtype, field_shape), ...]`

_obj_ should be a list of fields where each field is described by a tuple of length 2 or 3. (Equivalent to the `descr` item in the [`__array_interface__`](arrays.interface#object.__array_interface__ "object.__array_interface__") attribute.)

The first element, _field_name_, is the field name (if this is `''` then a standard field name, `'f#'`, is assigned). The field name may also be a 2-tuple of strings where the first string is either a "title" (which may be any string or unicode string) or meta-data for the field which can be any object, and the second string is the "name" which must be a valid Python identifier.

The second element, _field_dtype_, can be anything that can be interpreted as a data-type.

The optional third element _field_shape_ contains the shape if this field represents an array of the data-type in the second element. Note that a 3-tuple with a third argument equal to 1 is equivalent to a 2-tuple.

This style does not accept _align_ in the [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") constructor as it is assumed that all of the memory is accounted for by the array interface description.
#### Example

Data-type with fields `big` (big-endian 32-bit integer) and `little` (little-endian 32-bit integer):

    >>> import numpy as np
    >>> dt = np.dtype([('big', '>i4'), ('little', '<i4')])

Data-type with fields `R`, `G`, `B`, `A`, each being an unsigned 8-bit integer:

    >>> dt = np.dtype([('R','u1'), ('G','u1'), ('B','u1'), ('A','u1')])

`{'names': ..., 'formats': ..., 'offsets': ..., 'titles': ..., 'itemsize': ...}`

This style has two required and three optional keys. The _names_ and _formats_ keys are required. Their respective values are equal-length lists with the field names and the field formats. The field names must be strings and the field formats can be any object accepted by the [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") constructor.

When the optional keys _offsets_ and _titles_ are provided, their values must each be lists of the same length as the _names_ and _formats_ lists. The _offsets_ value is a list of byte offsets (limited to [`ctypes.c_int`](https://docs.python.org/3/library/ctypes.html#ctypes.c_int "(in Python v3.13)")) for each field, while the _titles_ value is a list of titles for each field (`None` can be used if no title is desired for that field). The _titles_ can be any object, but when a title is a [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") object it will also add another entry to the fields dictionary, keyed by the title and referencing the same field tuple, which will contain the title as an additional tuple member.

The _itemsize_ key allows the total size of the dtype to be set, and must be an integer large enough so all the fields are within the dtype. If the dtype being constructed is aligned, the _itemsize_ must also be divisible by the struct alignment. Total dtype _itemsize_ is limited to [`ctypes.c_int`](https://docs.python.org/3/library/ctypes.html#ctypes.c_int "(in Python v3.13)").

#### Example

Data type with fields `r`, `g`, `b`, `a`, each being an 8-bit unsigned integer:

    >>> import numpy as np
    >>> dt = np.dtype({'names': ['r','g','b','a'],
    ...
    ...                'formats': [np.uint8, np.uint8, np.uint8, np.uint8]})

Data type with fields `r` and `b` (with the given titles), both being 8-bit unsigned integers, the first at byte position 0 from the start of the field and the second at position 2:

    >>> dt = np.dtype({'names': ['r','b'], 'formats': ['u1', 'u1'],
    ...                'offsets': [0, 2],
    ...                'titles': ['Red pixel', 'Blue pixel']})

`{'field1': ..., 'field2': ..., ...}`

This usage is discouraged, because it is ambiguous with the other dict-based construction method. If you have a field called 'names' and a field called 'formats' there will be a conflict.

This style allows passing in the [`fields`](generated/numpy.dtype.fields#numpy.dtype.fields "numpy.dtype.fields") attribute of a data-type object. _obj_ should contain string or unicode keys that refer to `(data-type, offset)` or `(data-type, offset, title)` tuples.

#### Example

Data type containing field `col1` (10-character string at byte position 0), `col2` (32-bit float at byte position 10), and `col3` (integers at byte position 14):

    >>> import numpy as np
    >>> dt = np.dtype({'col1': ('U10', 0), 'col2': (np.float32, 10),
    ...                'col3': (int, 14)})

`(base_dtype, new_dtype)`

In NumPy 1.7 and later, this form allows `base_dtype` to be interpreted as a structured dtype. Arrays created with this dtype will have underlying dtype `base_dtype` but will have fields and flags taken from `new_dtype`. This is useful for creating custom structured dtypes, as done in [record arrays](arrays.classes#arrays-classes-rec).

This form also makes it possible to specify struct dtypes with overlapping fields, functioning like the 'union' type in C. This usage is discouraged, however, and the union mechanism is preferred.

Both arguments must be convertible to data-type objects with the same total size.

#### Example

32-bit integer, whose first two bytes are interpreted as an integer via field `real`, and the following two bytes via field `imag`.
    >>> import numpy as np
    >>> dt = np.dtype((np.int32, {'real': (np.int16, 0), 'imag': (np.int16, 2)}))

32-bit integer, which is interpreted as consisting of a sub-array of shape `(4,)` containing 8-bit integers:

    >>> dt = np.dtype((np.int32, (np.int8, 4)))

32-bit integer, containing fields `r`, `g`, `b`, `a` that interpret the 4 bytes in the integer as four unsigned integers:

    >>> dt = np.dtype(('i4', [('r','u1'),('g','u1'),('b','u1'),('a','u1')]))

## Checking the data type

When checking for a specific data type, use `==` comparison.

#### Example

    >>> import numpy as np
    >>> a = np.array([1, 2], dtype=np.float32)
    >>> a.dtype == np.float32
    True

In contrast to Python types, a comparison using `is` should not be used. First, NumPy treats data type specifications (everything that can be passed to the [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") constructor) as equivalent to the data type object itself. This equivalence can only be handled through `==`, not through `is`.

#### Example

A [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") object is equal to all data type specifications that are equivalent to it.

    >>> import numpy as np
    >>> a = np.array([1, 2], dtype=float)
    >>> a.dtype == np.dtype(np.float64)
    True
    >>> a.dtype == np.float64
    True
    >>> a.dtype == float
    True
    >>> a.dtype == "float64"
    True
    >>> a.dtype == "d"
    True

Second, there is no guarantee that data type objects are singletons.

#### Example

Do not use `is`, because data type objects may or may not be singletons.

    >>> import numpy as np
    >>> np.dtype(float) is np.dtype(float)
    True
    >>> np.dtype([('a', float)]) is np.dtype([('a', float)])
    False

## dtype

NumPy data type descriptions are instances of the [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") class.
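As a quick, minimal sketch of working with `dtype` instances, the following inspects a byte-order-explicit dtype:

```python
import numpy as np

# '>i4' is a big-endian 4-byte signed integer.
dt = np.dtype('>i4')

print(dt.kind)       # general kind of data: 'i'
print(dt.itemsize)   # element size in bytes: 4
print(dt.byteorder)  # '>' for big-endian
print(dt.str)        # array-protocol typestring: '>i4'
```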
### Attributes

The type of the data is described by the following [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") attributes:

[`dtype.type`](generated/numpy.dtype.type#numpy.dtype.type "numpy.dtype.type") | The type object used to instantiate a scalar of this data-type.
---|---
[`dtype.kind`](generated/numpy.dtype.kind#numpy.dtype.kind "numpy.dtype.kind") | A character code (one of 'biufcmMOSUV') identifying the general kind of data.
[`dtype.char`](generated/numpy.dtype.char#numpy.dtype.char "numpy.dtype.char") | A unique character code for each of the 21 different built-in types.
[`dtype.num`](generated/numpy.dtype.num#numpy.dtype.num "numpy.dtype.num") | A unique number for each of the 21 different built-in types.
[`dtype.str`](generated/numpy.dtype.str#numpy.dtype.str "numpy.dtype.str") | The array-protocol typestring of this data-type object.

Size of the data is in turn described by:

[`dtype.name`](generated/numpy.dtype.name#numpy.dtype.name "numpy.dtype.name") | A bit-width name for this data-type.
---|---
[`dtype.itemsize`](generated/numpy.dtype.itemsize#numpy.dtype.itemsize "numpy.dtype.itemsize") | The element size of this data-type object.

Endianness of this data:

[`dtype.byteorder`](generated/numpy.dtype.byteorder#numpy.dtype.byteorder "numpy.dtype.byteorder") | A character indicating the byte-order of this data-type object.
---|---

Information about sub-data-types in a [structured data type](../glossary#term-structured-data-type):

[`dtype.fields`](generated/numpy.dtype.fields#numpy.dtype.fields "numpy.dtype.fields") | Dictionary of named fields defined for this data type, or `None`.
---|---
[`dtype.names`](generated/numpy.dtype.names#numpy.dtype.names "numpy.dtype.names") | Ordered list of field names, or `None` if there are no fields.
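For a structured data type, the field-related attributes can be inspected directly; a small sketch:

```python
import numpy as np

dt = np.dtype([('name', 'U10'), ('age', '<i4')])

print(dt.names)          # ('name', 'age')
print(dt.fields['age'])  # field dtype and byte offset
print(dt.itemsize)       # 44: ten 4-byte code points, plus a 4-byte int
```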
For data types that describe sub-arrays:

[`dtype.subdtype`](generated/numpy.dtype.subdtype#numpy.dtype.subdtype "numpy.dtype.subdtype") | Tuple `(item_dtype, shape)` if this [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") describes a sub-array, and `None` otherwise.
---|---
[`dtype.shape`](generated/numpy.dtype.shape#numpy.dtype.shape "numpy.dtype.shape") | Shape tuple of the sub-array if this data type describes a sub-array, and `()` otherwise.

Attributes providing additional information:

[`dtype.hasobject`](generated/numpy.dtype.hasobject#numpy.dtype.hasobject "numpy.dtype.hasobject") | Boolean indicating whether this dtype contains any reference-counted objects in any fields or sub-dtypes.
---|---
[`dtype.flags`](generated/numpy.dtype.flags#numpy.dtype.flags "numpy.dtype.flags") | Bit-flags describing how this data type is to be interpreted.
[`dtype.isbuiltin`](generated/numpy.dtype.isbuiltin#numpy.dtype.isbuiltin "numpy.dtype.isbuiltin") | Integer indicating how this dtype relates to the built-in dtypes.
[`dtype.isnative`](generated/numpy.dtype.isnative#numpy.dtype.isnative "numpy.dtype.isnative") | Boolean indicating whether the byte order of this dtype is native to the platform.
[`dtype.descr`](generated/numpy.dtype.descr#numpy.dtype.descr "numpy.dtype.descr") | `__array_interface__` description of the data-type.
[`dtype.alignment`](generated/numpy.dtype.alignment#numpy.dtype.alignment "numpy.dtype.alignment") | The required alignment (bytes) of this data-type according to the compiler.
[`dtype.base`](generated/numpy.dtype.base#numpy.dtype.base "numpy.dtype.base") | Returns dtype for the base element of the subarrays, regardless of their dimension or shape.

Metadata attached by the user:

[`dtype.metadata`](generated/numpy.dtype.metadata#numpy.dtype.metadata "numpy.dtype.metadata") | Either `None` or a readonly dictionary of metadata (mappingproxy).
---|---

### Methods

Data types have the following method for changing the byte order:

[`dtype.newbyteorder`](generated/numpy.dtype.newbyteorder#numpy.dtype.newbyteorder "numpy.dtype.newbyteorder")([new_order]) | Return a new dtype with a different byte order.
---|---

The following methods implement the pickle protocol:

[`dtype.__reduce__`](generated/numpy.dtype.__reduce__#numpy.dtype.__reduce__ "numpy.dtype.__reduce__") | Helper for pickle.
---|---
[`dtype.__setstate__`](generated/numpy.dtype.__setstate__#numpy.dtype.__setstate__ "numpy.dtype.__setstate__") |

Utility method for typing:

[`dtype.__class_getitem__`](generated/numpy.dtype.__class_getitem__#numpy.dtype.__class_getitem__ "numpy.dtype.__class_getitem__")(item, /) | Return a parametrized wrapper around the [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") type.
---|---

Comparison operations:

[`dtype.__ge__`](generated/numpy.dtype.__ge__#numpy.dtype.__ge__ "numpy.dtype.__ge__")(value, /) | Return self>=value.
---|---
[`dtype.__gt__`](generated/numpy.dtype.__gt__#numpy.dtype.__gt__ "numpy.dtype.__gt__")(value, /) | Return self>value.
[`dtype.__le__`](generated/numpy.dtype.__le__#numpy.dtype.__le__ "numpy.dtype.__le__")(value, /) | Return self<=value.
[`dtype.__lt__`](generated/numpy.dtype.__lt__#numpy.dtype.__lt__ "numpy.dtype.__lt__")(value, /) | Return self<value.

# The array interface protocol

The array interface (sometimes called array protocol) was created in 2005 as a means for array-like Python objects to reuse each other's data buffers intelligently whenever possible. The homogeneous N-dimensional array interface is a default mechanism for objects to share N-dimensional array memory and information. The interface consists of a Python side and a C side, using two attributes. Objects wishing to be considered an N-dimensional array in application code should support at least one of these attributes, and objects wishing to support N-dimensional arrays in application code should look for at least one of these attributes and use the information provided appropriately.

## Python side

This approach to the interface consists of the object having an `__array_interface__` attribute.

object.__array_interface__

A dictionary of items (3 required and 5 optional). The optional keys in the dictionary have implied defaults if they are not provided.

The keys are:

**shape** (required)

Tuple whose elements are the array size in each dimension. Each entry is an integer (a Python [`int`](https://docs.python.org/3/library/functions.html#int "\(in Python v3.13\)")). Note that these integers could be larger than the platform `int` or `long` can hold; it is up to the code using this attribute to handle this appropriately, either by raising an error when overflow is possible, or by using `long long` as the C type for the shapes.

**typestr** (required)

A string providing the basic type of the homogeneous array. The basic string format consists of 3 parts: a character describing the byteorder of the data (`<`: little-endian, `>`: big-endian, `|`: not-relevant), a character code giving the basic type of the array, and an integer providing the number of bytes the type uses.

The basic type character codes are:

`t` | Bit field (following integer gives the number of bits in the bit field).
---|---
`b` | Boolean (integer type where all values are only `True` or `False`)
`i` | Integer
`u` | Unsigned integer
`f` | Floating point
`c` | Complex floating point
`m` | Timedelta
`M` | Datetime
`O` | Object (i.e. the memory contains a pointer to [`PyObject`](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)"))
`S` | String (fixed-length sequence of char)
`U` | Unicode (fixed-length sequence of [`Py_UCS4`](https://docs.python.org/3/c-api/unicode.html#c.Py_UCS4 "\(in Python v3.13\)"))
`V` | Other (void * -- each item is a fixed-size chunk of memory)

**descr** (optional)

A list of tuples providing a more detailed description of the memory layout for each item in the homogeneous array. Each tuple in the list has two or three elements. Normally, this attribute would be used when _typestr_ is `V[0-9]+`, but this is not a requirement. The only requirement is that the number of bytes represented in the _typestr_ key is the same as the total number of bytes represented here. The idea is to support descriptions of C-like structs that make up array elements. The elements of each tuple in the list are:

1. A string providing a name associated with this portion of the datatype. This could also be a tuple of `('full name', 'basic_name')` where basic name would be a valid Python variable name representing the full name of the field.
2. Either a basic-type description string as in _typestr_ or another list (for nested structured types).
3. An optional shape tuple providing how many times this part of the structure should be repeated. No repeats are assumed if this is not given.

Very complicated structures can be described using this generic interface. Notice, however, that each element of the array is still of the same data-type. Some examples of using this interface are given below.

**Default**: `[('', typestr)]`

**data** (optional)

A 2-tuple whose first argument is a [Python integer](https://docs.python.org/3/c-api/long.html "\(in Python v3.13\)") that points to the data-area storing the array contents.

Note

When converting from C/C++ via `PyLong_From*` or high-level bindings such as Cython or pybind11, make sure to use an integer of sufficiently large bitness.
This pointer must point to the first element of data (in other words, any offset is always ignored in this case). The second entry in the tuple is a read-only flag (true means the data area is read-only).

This attribute can also be an object exposing the [buffer interface](https://docs.python.org/3/c-api/buffer.html#bufferobjects "\(in Python v3.13\)") which will be used to share the data. If this key is not present (or returns `None`), then memory sharing will be done through the buffer interface of the object itself. In this case, the offset key can be used to indicate the start of the buffer. A reference to the object exposing the array interface must be stored by the new object if the memory area is to be secured.

**Default**: `None`

**strides** (optional)

Either `None` to indicate a C-style contiguous array, or a tuple of strides which provides the number of bytes needed to jump to the next array element in the corresponding dimension. Each entry must be an integer (a Python [`int`](https://docs.python.org/3/library/functions.html#int "\(in Python v3.13\)")). As with shape, the values may be larger than can be represented by a C `int` or `long`; the calling code should handle this appropriately, either by raising an error, or by using `long long` in C. The default is `None`, which implies a C-style contiguous memory buffer. In this model, the last dimension of the array varies the fastest. For example, the default strides tuple for an object whose array entries are 8 bytes long and whose shape is `(10, 20, 30)` would be `(4800, 240, 8)`.

**Default**: `None` (C-style contiguous)

**mask** (optional)

`None` or an object exposing the array interface. All elements of the mask array should be interpreted only as true or not true, indicating which elements of this array are valid. The shape of this object should be "broadcastable" to the shape of the original array.
**Default**: `None` (all array values are valid)

**offset** (optional)

An integer offset into the array data region. This can only be used when data is `None` or returns a [`memoryview`](https://docs.python.org/3/library/stdtypes.html#memoryview "\(in Python v3.13\)") object.

**Default**: `0`

**version** (required)

An integer showing the version of the interface (i.e. 3 for this version). Be careful not to use this to invalidate objects exposing future versions of the interface.

## C-struct access

This approach to the array interface allows for faster access to an array using only one attribute lookup and a well-defined C-structure.

object.__array_struct__

A [`PyCapsule`](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule "\(in Python v3.13\)") whose `pointer` member contains a pointer to a filled [`PyArrayInterface`](c-api/types-and-structures#c.PyArrayInterface "PyArrayInterface") structure. Memory for the structure is dynamically created and the [`PyCapsule`](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule "\(in Python v3.13\)") is also created with an appropriate destructor, so the retriever of this attribute simply has to apply [`Py_DECREF`](https://docs.python.org/3/c-api/refcounting.html#c.Py_DECREF "\(in Python v3.13\)") to the object returned by this attribute when it is finished. Also, either the data needs to be copied out, or a reference to the object exposing this attribute must be held to ensure the data is not freed. Objects exposing the `__array_struct__` interface must also not reallocate their memory if other objects are referencing them.
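Before turning to the C side, the Python-side keys above can be combined into a minimal exporter. This is a sketch, not a definitive implementation: the `ArrayLike` wrapper class is illustrative, and it borrows the buffer of an existing ndarray rather than managing its own memory.

```python
import numpy as np

class ArrayLike:
    """Minimal producer of the array interface protocol (a sketch)."""
    def __init__(self, arr):
        self._owner = arr  # keep the buffer's owner alive
        self.__array_interface__ = {
            'shape': arr.shape,                # required
            'typestr': arr.dtype.str,          # required, e.g. '<f8'
            'data': (arr.ctypes.data, False),  # (pointer, read-only flag)
            'strides': None,                   # None -> C-style contiguous
            'version': 3,                      # required
        }

base = np.arange(6, dtype=np.float64).reshape(2, 3)
view = np.asarray(ArrayLike(base))  # consumes the interface, shares memory
view[0, 0] = 99.0                   # writes through to the shared buffer
print(base[0, 0])
```

Because `data` is a bare pointer, the consumer must keep the exporting object (and hence `_owner`) alive for as long as the shared memory is used.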
The [`PyArrayInterface`](c-api/types-and-structures#c.PyArrayInterface "PyArrayInterface") structure is defined in `numpy/ndarrayobject.h` as:

    typedef struct {
      int two;              /* contains the integer 2 -- simple sanity check */
      int nd;               /* number of dimensions */
      char typekind;        /* kind in array --- character code of typestr */
      int itemsize;         /* size of each element */
      int flags;            /* flags indicating how the data should be interpreted */
                            /*   must set ARR_HAS_DESCR bit to validate descr */
      Py_ssize_t *shape;    /* A length-nd array of shape information */
      Py_ssize_t *strides;  /* A length-nd array of stride information */
      void *data;           /* A pointer to the first element of the array */
      PyObject *descr;      /* NULL or data-description (same as descr key
                               of __array_interface__) -- must set ARR_HAS_DESCR
                               flag or this will be ignored. */
    } PyArrayInterface;

The flags member may consist of 5 bits showing how the data should be interpreted and one bit showing how the interface should be interpreted. The data-bits are [`NPY_ARRAY_C_CONTIGUOUS`](c-api/array#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS") (0x1), [`NPY_ARRAY_F_CONTIGUOUS`](c-api/array#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS") (0x2), [`NPY_ARRAY_ALIGNED`](c-api/array#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") (0x100), [`NPY_ARRAY_NOTSWAPPED`](c-api/array#c.NPY_ARRAY_NOTSWAPPED "NPY_ARRAY_NOTSWAPPED") (0x200), and [`NPY_ARRAY_WRITEABLE`](c-api/array#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE") (0x400). A final flag, `NPY_ARR_HAS_DESCR` (0x800), indicates whether or not this structure has the arrdescr field. The field should not be accessed unless this flag is present.
NPY_ARR_HAS_DESCR

New since June 16, 2006: In the past, most implementations used the `desc` member of the `PyCObject` (now [`PyCapsule`](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule "\(in Python v3.13\)")) itself (do not confuse this with the "descr" member of the [`PyArrayInterface`](c-api/types-and-structures#c.PyArrayInterface "PyArrayInterface") structure above -- they are two separate things) to hold the pointer to the object exposing the interface. This is now an explicit part of the interface. Be sure to take a reference to the object and call [`PyCapsule_SetContext`](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule_SetContext "\(in Python v3.13\)") before returning the [`PyCapsule`](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule "\(in Python v3.13\)"), and configure a destructor to decref this reference.

Note

`__array_struct__` is considered legacy and should not be used for new code. Use the [buffer protocol](https://docs.python.org/3/c-api/buffer.html "\(in Python v3.13\)") or the DLPack protocol ([`numpy.from_dlpack`](generated/numpy.from_dlpack#numpy.from_dlpack "numpy.from_dlpack")) instead.

## Type description examples

For clarity it is useful to provide some examples of the type description and corresponding `__array_interface__` 'descr' entries. Thanks to Scott Gilbert for these examples. In every case, the 'descr' key is optional, but of course provides more information which may be important for various applications:

* Float data

      typestr == '>f4'
      descr == [('','>f4')]

* Complex double

      typestr == '>c8'
      descr == [('real','>f4'), ('imag','>f4')]

* RGB Pixel data

      typestr == '|V3'
      descr == [('r','|u1'), ('g','|u1'), ('b','|u1')]

* Mixed endian (weird but could happen).
      typestr == '|V8' (or '>u8')
      descr == [('big','>i4'), ('little','<i4')]

* Nested structure

      struct {
          int ival;
          struct {
              unsigned short sval;
              unsigned char bval;
              unsigned char cval;
          } sub;
      }
      typestr == '|V8'
      descr == [('ival','<i4'), ('sub', [('sval','<u2'), ('bval','|u1'), ('cval','|u1')])]

* Nested array

      struct {
          int ival;
          double data[16*4];
      }
      typestr == '|V516'
      descr == [('ival','>i4'), ('data','>f8',(16,4))]

* Padded structure

      struct {
          int ival;
          double dval;
      }
      typestr == '|V16'
      descr == [('ival','>i4'),('','|V4'),('dval','>f8')]

It should be clear that any structured type could be described using this interface.

## Differences with array interface (version 2)

The version 2 interface was very similar. The differences were largely aesthetic. In particular:

1. The PyArrayInterface structure had no descr member at the end (and therefore no flag ARR_HAS_DESCR).
2. The `context` member of the [`PyCapsule`](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule "\(in Python v3.13\)") (formerly the `desc` member of the `PyCObject`) returned from `__array_struct__` was not specified. Usually, it was the object exposing the array (so that a reference to it could be kept and destroyed when the C-object was destroyed). It is now an explicit requirement that this field be used in some way to hold a reference to the owning object.

   Note

   Until August 2020, this said: "Now it must be a tuple whose first element is a string with 'PyArrayInterface Version #' and whose second element is the object exposing the array." This design was retracted almost immediately after it was proposed, in <>. Despite 14 years of documentation to the contrary, at no point was it valid to assume that `__array_interface__` capsules held this tuple content.
3. The tuple returned from `__array_interface__['data']` used to be a hex-string (now it is an integer or a long integer).
4.
There was no `__array_interface__` attribute; instead, all of the keys (except for version) in the `__array_interface__` dictionary were their own attributes. Thus, to obtain the Python-side information you had to access each attribute separately:

   * `__array_data__`
   * `__array_shape__`
   * `__array_strides__`
   * `__array_typestr__`
   * `__array_descr__`
   * `__array_offset__`
   * `__array_mask__`

# The N-dimensional array (ndarray)

An [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") is a (usually fixed-size) multidimensional container of items of the same type and size. The number of dimensions and items in an array is defined by its [`shape`](generated/numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape"), which is a [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple "\(in Python v3.13\)") of _N_ non-negative integers that specify the sizes of each dimension. The type of items in the array is specified by a separate [data-type object (dtype)](arrays.dtypes#arrays-dtypes), one of which is associated with each ndarray.

As with other container objects in Python, the contents of an [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") can be accessed and modified by [indexing or slicing](routines.indexing#arrays-indexing) the array (using, for example, _N_ integers), and via the methods and attributes of the [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray").

Different [`ndarrays`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") can share the same data, so that changes made in one [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") may be visible in another. That is, an ndarray can be a _"view"_ of another ndarray, and the data it refers to is managed by the _"base"_ ndarray.
ndarrays can also be views of memory owned by Python [`strings`](https://docs.python.org/3/library/stdtypes.html#str "\(in Python v3.13\)") or objects implementing the [`memoryview`](https://docs.python.org/3/library/stdtypes.html#memoryview "\(in Python v3.13\)") or [array](arrays.interface#arrays-interface) interfaces.

#### Example

A 2-dimensional array of size 2 x 3, composed of 4-byte integer elements:

    >>> import numpy as np
    >>> x = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
    >>> type(x)
    <class 'numpy.ndarray'>
    >>> x.shape
    (2, 3)
    >>> x.dtype
    dtype('int32')

The array can be indexed using Python container-like syntax:

    >>> # The element of x in the *second* row, *third* column, namely, 6.
    >>> x[1, 2]
    6

For example, [slicing](routines.indexing#arrays-indexing) can produce views of the array:

    >>> y = x[:, 1]
    >>> y
    array([2, 5], dtype=int32)
    >>> y[0] = 9  # this also changes the corresponding element in x
    >>> y
    array([9, 5], dtype=int32)
    >>> x
    array([[1, 9, 3],
           [4, 5, 6]], dtype=int32)

## Constructing arrays

New arrays can be constructed using the routines detailed in [Array creation routines](routines.array-creation#routines-array-creation), and also by using the low-level [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor:

[`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray")(shape[, dtype, buffer, offset, ...]) | An array object represents a multidimensional, homogeneous array of fixed-size items.
---|---

## Indexing arrays

Arrays can be indexed using an extended Python slicing syntax, `array[selection]`. Similar syntax is also used for accessing fields in a [structured data type](../glossary#term-structured-data-type).

See also [Array Indexing](routines.indexing#arrays-indexing).
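As a small sketch of the `array[selection]` syntax on a structured data type, a field name can itself serve as the selection:

```python
import numpy as np

# A structured array of two records; the dtype below is illustrative.
people = np.array([('alice', 30), ('bob', 25)],
                  dtype=[('name', 'U10'), ('age', '<i4')])

print(people['age'])      # the 'age' field of every record
print(people[0]['name'])  # one record's field
```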
## Internal memory layout of an ndarray

An instance of class [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") consists of a contiguous one-dimensional segment of computer memory (owned by the array, or by some other object), combined with an indexing scheme that maps _N_ integers into the location of an item in the block. The ranges in which the indices can vary are specified by the [`shape`](generated/numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape") of the array. How many bytes each item takes and how the bytes are interpreted is defined by the [data-type object](arrays.dtypes#arrays-dtypes) associated with the array.

A segment of memory is inherently 1-dimensional, and there are many different schemes for arranging the items of an _N_-dimensional array in a 1-dimensional block. NumPy is flexible, and [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") objects can accommodate any _strided indexing scheme_. In a strided scheme, the N-dimensional index \\((n_0, n_1, ..., n_{N-1})\\) corresponds to the offset (in bytes):

\\[n_{\mathrm{offset}} = \sum_{k=0}^{N-1} s_k n_k\\]

from the beginning of the memory block associated with the array. Here, \\(s_k\\) are integers which specify the [`strides`](generated/numpy.ndarray.strides#numpy.ndarray.strides "numpy.ndarray.strides") of the array. The [column-major](../glossary#term-column-major) order (used, for example, in the Fortran language and in _Matlab_) and [row-major](../glossary#term-row-major) order (used in C) schemes are just specific kinds of strided scheme, and correspond to memory that can be _addressed_ by the strides:

\\[s_k^{\mathrm{column}} = \mathrm{itemsize} \prod_{j=0}^{k-1} d_j , \quad s_k^{\mathrm{row}} = \mathrm{itemsize} \prod_{j=k+1}^{N-1} d_j .\\]

where \\(d_j\\) `= self.shape[j]`.
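The stride formulas can be checked directly; a small sketch using a `(10, 20, 30)` array of 8-byte floats:

```python
import numpy as np

# Row-major (C) strides: s_k = itemsize * prod(shape[k+1:])
x = np.zeros((10, 20, 30), dtype=np.float64)  # itemsize = 8
print(x.strides)  # (4800, 240, 8)

# Column-major (Fortran) strides: s_k = itemsize * prod(shape[:k])
y = np.zeros((10, 20, 30), dtype=np.float64, order='F')
print(y.strides)  # (8, 80, 1600)
```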
Both the C and Fortran orders are [contiguous](../glossary#term-contiguous), _i.e.,_ single-segment, memory layouts, in which every part of the memory block can be accessed by some combination of the indices.

Note

_Contiguous arrays_ and _single-segment arrays_ are synonymous and are used interchangeably throughout the documentation.

While a C-style or Fortran-style contiguous array, which has the corresponding flags set, can be addressed with the above strides, the actual strides may be different. This can happen in two cases:

1. If `self.shape[k] == 1` then for any legal index `index[k] == 0`. This means that in the formula for the offset \\(n_k = 0\\) and thus \\(s_k n_k = 0\\) and the value of \\(s_k\\) `= self.strides[k]` is arbitrary.
2. If an array has no elements (`self.size == 0`) there is no legal index and the strides are never used. Any array with no elements may be considered C-style and Fortran-style contiguous.

Point 1 means that `self` and `self.squeeze()` always have the same contiguity and `aligned` flag values. It also means that even a high-dimensional array could be C-style and Fortran-style contiguous at the same time.

An array is considered aligned if the memory offsets for all elements and the base offset itself are multiples of [`self.itemsize`](generated/numpy.ndarray.itemsize#numpy.ndarray.itemsize "numpy.ndarray.itemsize"). Understanding _memory-alignment_ leads to better performance on most hardware.

Warning

It does _not_ generally hold that `self.strides[-1] == self.itemsize` for C-style contiguous arrays, or that `self.strides[0] == self.itemsize` for Fortran-style contiguous arrays.

Data in new [`ndarrays`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") is in the [row-major](../glossary#term-row-major) (C) order, unless otherwise specified, but, for example, [basic array slicing](routines.indexing#arrays-indexing) often produces [views](../glossary#term-view) in a different scheme.
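As an illustration of that last point, basic slicing can yield a view whose strides make it neither C- nor Fortran-contiguous; a small sketch:

```python
import numpy as np

a = np.zeros((3, 4), dtype=np.int64)  # C-contiguous, strides (32, 8)
col = a[:, 1]                         # basic slicing: a view of one column

print(col.strides)                # (32,): one whole row between elements
print(col.flags['C_CONTIGUOUS'])  # False: elements are not adjacent in memory
print(col.base is a)              # True: col shares a's memory

col[:] = 7                        # writing the view writes through to a
print(a[:, 1])
```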
Note

Several algorithms in NumPy work on arbitrarily strided arrays. However, some algorithms require single-segment arrays. When an irregularly strided array is passed in to such algorithms, a copy is automatically made.

## Array attributes

Array attributes reflect information that is intrinsic to the array itself. Generally, accessing an array through its attributes allows you to get and sometimes set intrinsic properties of the array without creating a new array. The exposed attributes are the core parts of an array, and only some of them can be reset meaningfully without creating a new array. Information on each attribute is given below.

### Memory layout

The following attributes contain information about the memory layout of the array:

[`ndarray.flags`](generated/numpy.ndarray.flags#numpy.ndarray.flags "numpy.ndarray.flags") | Information about the memory layout of the array.
---|---
[`ndarray.shape`](generated/numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape") | Tuple of array dimensions.
[`ndarray.strides`](generated/numpy.ndarray.strides#numpy.ndarray.strides "numpy.ndarray.strides") | Tuple of bytes to step in each dimension when traversing an array.
[`ndarray.ndim`](generated/numpy.ndarray.ndim#numpy.ndarray.ndim "numpy.ndarray.ndim") | Number of array dimensions.
[`ndarray.data`](generated/numpy.ndarray.data#numpy.ndarray.data "numpy.ndarray.data") | Python buffer object pointing to the start of the array's data.
[`ndarray.size`](generated/numpy.ndarray.size#numpy.ndarray.size "numpy.ndarray.size") | Number of elements in the array.
[`ndarray.itemsize`](generated/numpy.ndarray.itemsize#numpy.ndarray.itemsize "numpy.ndarray.itemsize") | Length of one array element in bytes.
[`ndarray.nbytes`](generated/numpy.ndarray.nbytes#numpy.ndarray.nbytes "numpy.ndarray.nbytes") | Total bytes consumed by the elements of the array.
[`ndarray.base`](generated/numpy.ndarray.base#numpy.ndarray.base "numpy.ndarray.base") | Base object if memory is from some other object.

### Data type

See also [Data type objects](arrays.dtypes#arrays-dtypes)

The data type object associated with the array can be found in the [`dtype`](generated/numpy.ndarray.dtype#numpy.ndarray.dtype "numpy.ndarray.dtype") attribute:

[`ndarray.dtype`](generated/numpy.ndarray.dtype#numpy.ndarray.dtype "numpy.ndarray.dtype") | Data-type of the array's elements.
---|---

### Other attributes

[`ndarray.T`](generated/numpy.ndarray.t#numpy.ndarray.T "numpy.ndarray.T") | View of the transposed array.
---|---
[`ndarray.real`](generated/numpy.ndarray.real#numpy.ndarray.real "numpy.ndarray.real") | The real part of the array.
[`ndarray.imag`](generated/numpy.ndarray.imag#numpy.ndarray.imag "numpy.ndarray.imag") | The imaginary part of the array.
[`ndarray.flat`](generated/numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") | A 1-D iterator over the array.

### Array interface

See also [The array interface protocol](arrays.interface#arrays-interface).

[`__array_interface__`](arrays.interface#object.__array_interface__ "object.__array_interface__") | Python side of the array interface.
---|---
[`__array_struct__`](arrays.interface#object.__array_struct__ "object.__array_struct__") | C side of the array interface.

### [`ctypes`](https://docs.python.org/3/library/ctypes.html#module-ctypes "\(in Python v3.13\)") foreign function interface

[`ndarray.ctypes`](generated/numpy.ndarray.ctypes#numpy.ndarray.ctypes "numpy.ndarray.ctypes") | An object to simplify the interaction of the array with the ctypes module.
---|---

## Array methods

An [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") object has many methods which operate on or with the array in some fashion, typically returning an array result. These methods are briefly explained below. (Each method's docstring has a more complete description.)
For the following methods there are also corresponding functions in [`numpy`](index#module-numpy "numpy"): [`all`](generated/numpy.all#numpy.all "numpy.all"), [`any`](generated/numpy.any#numpy.any "numpy.any"), [`argmax`](generated/numpy.argmax#numpy.argmax "numpy.argmax"), [`argmin`](generated/numpy.argmin#numpy.argmin "numpy.argmin"), [`argpartition`](generated/numpy.argpartition#numpy.argpartition "numpy.argpartition"), [`argsort`](generated/numpy.argsort#numpy.argsort "numpy.argsort"), [`choose`](generated/numpy.choose#numpy.choose "numpy.choose"), [`clip`](generated/numpy.clip#numpy.clip "numpy.clip"), [`compress`](generated/numpy.compress#numpy.compress "numpy.compress"), [`copy`](generated/numpy.copy#numpy.copy "numpy.copy"), [`cumprod`](generated/numpy.cumprod#numpy.cumprod "numpy.cumprod"), [`cumsum`](generated/numpy.cumsum#numpy.cumsum "numpy.cumsum"), [`diagonal`](generated/numpy.diagonal#numpy.diagonal "numpy.diagonal"), [`imag`](generated/numpy.imag#numpy.imag "numpy.imag"), [`max`](generated/numpy.amax#numpy.amax "numpy.amax"), [`mean`](generated/numpy.mean#numpy.mean "numpy.mean"), [`min`](generated/numpy.amin#numpy.amin "numpy.amin"), [`nonzero`](generated/numpy.nonzero#numpy.nonzero "numpy.nonzero"), [`partition`](generated/numpy.partition#numpy.partition "numpy.partition"), [`prod`](generated/numpy.prod#numpy.prod "numpy.prod"), [`put`](generated/numpy.put#numpy.put "numpy.put"), [`ravel`](generated/numpy.ravel#numpy.ravel "numpy.ravel"), [`real`](generated/numpy.real#numpy.real "numpy.real"), [`repeat`](generated/numpy.repeat#numpy.repeat "numpy.repeat"), [`reshape`](generated/numpy.reshape#numpy.reshape "numpy.reshape"), [`round`](generated/numpy.around#numpy.around "numpy.around"), [`searchsorted`](generated/numpy.searchsorted#numpy.searchsorted "numpy.searchsorted"), [`sort`](generated/numpy.sort#numpy.sort "numpy.sort"), [`squeeze`](generated/numpy.squeeze#numpy.squeeze "numpy.squeeze"), [`std`](generated/numpy.std#numpy.std "numpy.std"), 
[`sum`](generated/numpy.sum#numpy.sum "numpy.sum"), [`swapaxes`](generated/numpy.swapaxes#numpy.swapaxes "numpy.swapaxes"), [`take`](generated/numpy.take#numpy.take "numpy.take"), [`trace`](generated/numpy.trace#numpy.trace "numpy.trace"), [`transpose`](generated/numpy.transpose#numpy.transpose "numpy.transpose"), [`var`](generated/numpy.var#numpy.var "numpy.var").

### Array conversion

[`ndarray.item`](generated/numpy.ndarray.item#numpy.ndarray.item "numpy.ndarray.item")(*args) | Copy an element of an array to a standard Python scalar and return it.
---|---
[`ndarray.tolist`](generated/numpy.ndarray.tolist#numpy.ndarray.tolist "numpy.ndarray.tolist")() | Return the array as an `a.ndim`-levels deep nested list of Python scalars.
[`ndarray.tostring`](generated/numpy.ndarray.tostring#numpy.ndarray.tostring "numpy.ndarray.tostring")([order]) | A compatibility alias for [`tobytes`](generated/numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes"), with exactly the same behavior.
[`ndarray.tobytes`](generated/numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes")([order]) | Construct Python bytes containing the raw data bytes in the array.
[`ndarray.tofile`](generated/numpy.ndarray.tofile#numpy.ndarray.tofile "numpy.ndarray.tofile")(fid[, sep, format]) | Write array to a file as text or binary (default).
[`ndarray.dump`](generated/numpy.ndarray.dump#numpy.ndarray.dump "numpy.ndarray.dump")(file) | Dump a pickle of the array to the specified file.
[`ndarray.dumps`](generated/numpy.ndarray.dumps#numpy.ndarray.dumps "numpy.ndarray.dumps")() | Returns the pickle of the array as a string.
[`ndarray.astype`](generated/numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype")(dtype[, order, casting, ...]) | Copy of the array, cast to a specified type.
[`ndarray.byteswap`](generated/numpy.ndarray.byteswap#numpy.ndarray.byteswap "numpy.ndarray.byteswap")([inplace]) | Swap the bytes of the array elements [`ndarray.copy`](generated/numpy.ndarray.copy#numpy.ndarray.copy "numpy.ndarray.copy")([order]) | Return a copy of the array. [`ndarray.view`](generated/numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view")([dtype][, type]) | New view of array with the same data. [`ndarray.getfield`](generated/numpy.ndarray.getfield#numpy.ndarray.getfield "numpy.ndarray.getfield")(dtype[, offset]) | Returns a field of the given array as a certain type. [`ndarray.setflags`](generated/numpy.ndarray.setflags#numpy.ndarray.setflags "numpy.ndarray.setflags")([write, align, uic]) | Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively. [`ndarray.fill`](generated/numpy.ndarray.fill#numpy.ndarray.fill "numpy.ndarray.fill")(value) | Fill the array with a scalar value. ### Shape manipulation For reshape, resize, and transpose, the single tuple argument may be replaced with `n` integers which will be interpreted as an n-tuple. [`ndarray.reshape`](generated/numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape")(shape, /, *[, order, copy]) | Returns an array containing the same data with a new shape. ---|--- [`ndarray.resize`](generated/numpy.ndarray.resize#numpy.ndarray.resize "numpy.ndarray.resize")(new_shape[, refcheck]) | Change shape and size of array in-place. [`ndarray.transpose`](generated/numpy.ndarray.transpose#numpy.ndarray.transpose "numpy.ndarray.transpose")(*axes) | Returns a view of the array with axes transposed. [`ndarray.swapaxes`](generated/numpy.ndarray.swapaxes#numpy.ndarray.swapaxes "numpy.ndarray.swapaxes")(axis1, axis2) | Return a view of the array with `axis1` and `axis2` interchanged. [`ndarray.flatten`](generated/numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten")([order]) | Return a copy of the array collapsed into one dimension. 
[`ndarray.ravel`](generated/numpy.ndarray.ravel#numpy.ndarray.ravel "numpy.ndarray.ravel")([order]) | Return a flattened array. [`ndarray.squeeze`](generated/numpy.ndarray.squeeze#numpy.ndarray.squeeze "numpy.ndarray.squeeze")([axis]) | Remove axes of length one from `a`. ### Item selection and manipulation For array methods that take an _axis_ keyword, it defaults to _None_. If axis is _None_ , then the array is treated as a 1-D array. Any other value for _axis_ represents the dimension along which the operation should proceed. [`ndarray.take`](generated/numpy.ndarray.take#numpy.ndarray.take "numpy.ndarray.take")(indices[, axis, out, mode]) | Return an array formed from the elements of `a` at the given indices. ---|--- [`ndarray.put`](generated/numpy.ndarray.put#numpy.ndarray.put "numpy.ndarray.put")(indices, values[, mode]) | Set `a.flat[n] = values[n]` for all `n` in indices. [`ndarray.repeat`](generated/numpy.ndarray.repeat#numpy.ndarray.repeat "numpy.ndarray.repeat")(repeats[, axis]) | Repeat elements of an array. [`ndarray.choose`](generated/numpy.ndarray.choose#numpy.ndarray.choose "numpy.ndarray.choose")(choices[, out, mode]) | Use an index array to construct a new array from a set of choices. [`ndarray.sort`](generated/numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort")([axis, kind, order]) | Sort an array in-place. [`ndarray.argsort`](generated/numpy.ndarray.argsort#numpy.ndarray.argsort "numpy.ndarray.argsort")([axis, kind, order]) | Returns the indices that would sort this array. [`ndarray.partition`](generated/numpy.ndarray.partition#numpy.ndarray.partition "numpy.ndarray.partition")(kth[, axis, kind, order]) | Partially sorts the elements in the array in such a way that the value of the element in k-th position is in the position it would be in a sorted array. 
[`ndarray.argpartition`](generated/numpy.ndarray.argpartition#numpy.ndarray.argpartition "numpy.ndarray.argpartition")(kth[, axis, kind, order]) | Returns the indices that would partition this array. [`ndarray.searchsorted`](generated/numpy.ndarray.searchsorted#numpy.ndarray.searchsorted "numpy.ndarray.searchsorted")(v[, side, sorter]) | Find indices where elements of v should be inserted in a to maintain order. [`ndarray.nonzero`](generated/numpy.ndarray.nonzero#numpy.ndarray.nonzero "numpy.ndarray.nonzero")() | Return the indices of the elements that are non-zero. [`ndarray.compress`](generated/numpy.ndarray.compress#numpy.ndarray.compress "numpy.ndarray.compress")(condition[, axis, out]) | Return selected slices of this array along given axis. [`ndarray.diagonal`](generated/numpy.ndarray.diagonal#numpy.ndarray.diagonal "numpy.ndarray.diagonal")([offset, axis1, axis2]) | Return specified diagonals. ### Calculation Many of these methods take an argument named _axis_. In such cases, * If _axis_ is _None_ (the default), the array is treated as a 1-D array and the operation is performed over the entire array. This behavior is also the default if self is a 0-dimensional array or array scalar. (An array scalar is an instance of the types/classes float32, float64, etc., whereas a 0-dimensional array is an ndarray instance containing precisely one array scalar.) * If _axis_ is an integer, then the operation is done over the given axis (for each 1-D subarray that can be created along the given axis). 
Example of the _axis_ argument A 3-dimensional array of size 3 x 3 x 3, summed over each of its three axes: >>> import numpy as np >>> x = np.arange(27).reshape((3,3,3)) >>> x array([[[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8]], [[ 9, 10, 11], [12, 13, 14], [15, 16, 17]], [[18, 19, 20], [21, 22, 23], [24, 25, 26]]]) >>> x.sum(axis=0) array([[27, 30, 33], [36, 39, 42], [45, 48, 51]]) >>> # for sum, axis is the first keyword, so we may omit it, >>> # specifying only its value >>> x.sum(0), x.sum(1), x.sum(2) (array([[27, 30, 33], [36, 39, 42], [45, 48, 51]]), array([[ 9, 12, 15], [36, 39, 42], [63, 66, 69]]), array([[ 3, 12, 21], [30, 39, 48], [57, 66, 75]])) The parameter _dtype_ specifies the data type over which a reduction operation (like summing) should take place. The default reduce data type is the same as the data type of _self_. To avoid overflow, it can be useful to perform the reduction using a larger data type. For several methods, an optional _out_ argument can also be provided and the result will be placed into the output array given. The _out_ argument must be an [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") and have the same number of elements. It can have a different data type in which case casting will be performed. [`ndarray.max`](generated/numpy.ndarray.max#numpy.ndarray.max "numpy.ndarray.max")([axis, out, keepdims, initial, ...]) | Return the maximum along a given axis. ---|--- [`ndarray.argmax`](generated/numpy.ndarray.argmax#numpy.ndarray.argmax "numpy.ndarray.argmax")([axis, out, keepdims]) | Return indices of the maximum values along the given axis. [`ndarray.min`](generated/numpy.ndarray.min#numpy.ndarray.min "numpy.ndarray.min")([axis, out, keepdims, initial, ...]) | Return the minimum along a given axis. [`ndarray.argmin`](generated/numpy.ndarray.argmin#numpy.ndarray.argmin "numpy.ndarray.argmin")([axis, out, keepdims]) | Return indices of the minimum values along the given axis. 
[`ndarray.clip`](generated/numpy.ndarray.clip#numpy.ndarray.clip "numpy.ndarray.clip")([min, max, out]) | Return an array whose values are limited to `[min, max]`. [`ndarray.conj`](generated/numpy.ndarray.conj#numpy.ndarray.conj "numpy.ndarray.conj")() | Complex-conjugate all elements. [`ndarray.round`](generated/numpy.ndarray.round#numpy.ndarray.round "numpy.ndarray.round")([decimals, out]) | Return `a` with each element rounded to the given number of decimals. [`ndarray.trace`](generated/numpy.ndarray.trace#numpy.ndarray.trace "numpy.ndarray.trace")([offset, axis1, axis2, dtype, out]) | Return the sum along diagonals of the array. [`ndarray.sum`](generated/numpy.ndarray.sum#numpy.ndarray.sum "numpy.ndarray.sum")([axis, dtype, out, keepdims, ...]) | Return the sum of the array elements over the given axis. [`ndarray.cumsum`](generated/numpy.ndarray.cumsum#numpy.ndarray.cumsum "numpy.ndarray.cumsum")([axis, dtype, out]) | Return the cumulative sum of the elements along the given axis. [`ndarray.mean`](generated/numpy.ndarray.mean#numpy.ndarray.mean "numpy.ndarray.mean")([axis, dtype, out, keepdims, where]) | Returns the average of the array elements along given axis. [`ndarray.var`](generated/numpy.ndarray.var#numpy.ndarray.var "numpy.ndarray.var")([axis, dtype, out, ddof, ...]) | Returns the variance of the array elements, along given axis. [`ndarray.std`](generated/numpy.ndarray.std#numpy.ndarray.std "numpy.ndarray.std")([axis, dtype, out, ddof, ...]) | Returns the standard deviation of the array elements along given axis. [`ndarray.prod`](generated/numpy.ndarray.prod#numpy.ndarray.prod "numpy.ndarray.prod")([axis, dtype, out, keepdims, ...]) | Return the product of the array elements over the given axis [`ndarray.cumprod`](generated/numpy.ndarray.cumprod#numpy.ndarray.cumprod "numpy.ndarray.cumprod")([axis, dtype, out]) | Return the cumulative product of the elements along the given axis. 
[`ndarray.all`](generated/numpy.ndarray.all#numpy.ndarray.all "numpy.ndarray.all")([axis, out, keepdims, where]) | Returns True if all elements evaluate to True. [`ndarray.any`](generated/numpy.ndarray.any#numpy.ndarray.any "numpy.ndarray.any")([axis, out, keepdims, where]) | Returns True if any of the elements of `a` evaluate to True. ## Arithmetic, matrix multiplication, and comparison operations Arithmetic and comparison operations on [`ndarrays`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") are defined as element-wise operations, and generally yield [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") objects as results. Each of the arithmetic operations (`+`, `-`, `*`, `/`, `//`, `%`, `divmod()`, `**` or `pow()`, `<<`, `>>`, `&`, `^`, `|`, `~`) and the comparisons (`==`, `<`, `>`, `<=`, `>=`, `!=`) is equivalent to the corresponding universal function (or [ufunc](../glossary#term-ufunc) for short) in NumPy. For more information, see the section on [Universal Functions](ufuncs#ufuncs). Comparison operators: [`ndarray.__lt__`](generated/numpy.ndarray.__lt__#numpy.ndarray.__lt__ "numpy.ndarray.__lt__")(value, /) | Return self<value. ---|--- [`ndarray.__le__`](generated/numpy.ndarray.__le__#numpy.ndarray.__le__ "numpy.ndarray.__le__")(value, /) | Return self<=value. [`ndarray.__gt__`](generated/numpy.ndarray.__gt__#numpy.ndarray.__gt__ "numpy.ndarray.__gt__")(value, /) | Return self>value. [`ndarray.__ge__`](generated/numpy.ndarray.__ge__#numpy.ndarray.__ge__ "numpy.ndarray.__ge__")(value, /) | Return self>=value. [`ndarray.__eq__`](generated/numpy.ndarray.__eq__#numpy.ndarray.__eq__ "numpy.ndarray.__eq__")(value, /) | Return self==value. [`ndarray.__ne__`](generated/numpy.ndarray.__ne__#numpy.ndarray.__ne__ "numpy.ndarray.__ne__")(value, /) | Return self!=value.
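Since each comparison operator is equivalent to a ufunc, the correspondence can be checked directly. A minimal sketch:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([3, 2, 1])

# The < operator is applied element-wise and is equivalent to the
# np.less ufunc; both produce a boolean ndarray.
assert np.array_equal(a < b, np.less(a, b))
print(a < b)  # -> [ True False False]
```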
Truth value of an array ([`bool()`](arrays.scalars#numpy.bool "numpy.bool")): [`ndarray.__bool__`](generated/numpy.ndarray.__bool__#numpy.ndarray.__bool__ "numpy.ndarray.__bool__")(/) | True if self else False ---|--- Note Truth-value testing of an array invokes [`ndarray.__bool__`](generated/numpy.ndarray.__bool__#numpy.ndarray.__bool__ "numpy.ndarray.__bool__"), which raises an error if the number of elements in the array is not 1, because the truth value of such arrays is ambiguous. Use [`.any()`](generated/numpy.ndarray.any#numpy.ndarray.any "numpy.ndarray.any") and [`.all()`](generated/numpy.ndarray.all#numpy.ndarray.all "numpy.ndarray.all") instead to be clear about what is meant in such cases. (If you wish to check whether an array is empty, use for example `.size > 0`.) Unary operations: [`ndarray.__neg__`](generated/numpy.ndarray.__neg__#numpy.ndarray.__neg__ "numpy.ndarray.__neg__")(/) | -self ---|--- [`ndarray.__pos__`](generated/numpy.ndarray.__pos__#numpy.ndarray.__pos__ "numpy.ndarray.__pos__")(/) | +self [`ndarray.__abs__`](generated/numpy.ndarray.__abs__#numpy.ndarray.__abs__ "numpy.ndarray.__abs__")(self) | [`ndarray.__invert__`](generated/numpy.ndarray.__invert__#numpy.ndarray.__invert__ "numpy.ndarray.__invert__")(/) | ~self Arithmetic: [`ndarray.__add__`](generated/numpy.ndarray.__add__#numpy.ndarray.__add__ "numpy.ndarray.__add__")(value, /) | Return self+value. ---|--- [`ndarray.__sub__`](generated/numpy.ndarray.__sub__#numpy.ndarray.__sub__ "numpy.ndarray.__sub__")(value, /) | Return self-value. [`ndarray.__mul__`](generated/numpy.ndarray.__mul__#numpy.ndarray.__mul__ "numpy.ndarray.__mul__")(value, /) | Return self*value. [`ndarray.__truediv__`](generated/numpy.ndarray.__truediv__#numpy.ndarray.__truediv__ "numpy.ndarray.__truediv__")(value, /) | Return self/value. [`ndarray.__floordiv__`](generated/numpy.ndarray.__floordiv__#numpy.ndarray.__floordiv__ "numpy.ndarray.__floordiv__")(value, /) | Return self//value.
[`ndarray.__mod__`](generated/numpy.ndarray.__mod__#numpy.ndarray.__mod__ "numpy.ndarray.__mod__")(value, /) | Return self%value. [`ndarray.__divmod__`](generated/numpy.ndarray.__divmod__#numpy.ndarray.__divmod__ "numpy.ndarray.__divmod__")(value, /) | Return divmod(self, value). [`ndarray.__pow__`](generated/numpy.ndarray.__pow__#numpy.ndarray.__pow__ "numpy.ndarray.__pow__")(value[, mod]) | Return pow(self, value, mod). [`ndarray.__lshift__`](generated/numpy.ndarray.__lshift__#numpy.ndarray.__lshift__ "numpy.ndarray.__lshift__")(value, /) | Return self<<value. [`ndarray.__rshift__`](generated/numpy.ndarray.__rshift__#numpy.ndarray.__rshift__ "numpy.ndarray.__rshift__")(value, /) | Return self>>value. [`ndarray.__and__`](generated/numpy.ndarray.__and__#numpy.ndarray.__and__ "numpy.ndarray.__and__")(value, /) | Return self&value. [`ndarray.__or__`](generated/numpy.ndarray.__or__#numpy.ndarray.__or__ "numpy.ndarray.__or__")(value, /) | Return self|value. [`ndarray.__xor__`](generated/numpy.ndarray.__xor__#numpy.ndarray.__xor__ "numpy.ndarray.__xor__")(value, /) | Return self^value. Note * Any third argument to [`pow`](generated/numpy.pow#numpy.pow "numpy.pow") is silently ignored, as the underlying [`ufunc`](generated/numpy.power#numpy.power "numpy.power") takes only two arguments. * Because [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") is a built-in type (written in C), the `__r{op}__` special methods are not directly defined. * The functions called to implement many arithmetic special methods for arrays can be modified using [`__array_ufunc__`](arrays.classes#numpy.class.__array_ufunc__ "numpy.class.__array_ufunc__"). Arithmetic, in-place: [`ndarray.__iadd__`](generated/numpy.ndarray.__iadd__#numpy.ndarray.__iadd__ "numpy.ndarray.__iadd__")(value, /) | Return self+=value. ---|--- [`ndarray.__isub__`](generated/numpy.ndarray.__isub__#numpy.ndarray.__isub__ "numpy.ndarray.__isub__")(value, /) | Return self-=value. [`ndarray.__imul__`](generated/numpy.ndarray.__imul__#numpy.ndarray.__imul__ "numpy.ndarray.__imul__")(value, /) | Return self*=value.
[`ndarray.__itruediv__`](generated/numpy.ndarray.__itruediv__#numpy.ndarray.__itruediv__ "numpy.ndarray.__itruediv__")(value, /) | Return self/=value. [`ndarray.__ifloordiv__`](generated/numpy.ndarray.__ifloordiv__#numpy.ndarray.__ifloordiv__ "numpy.ndarray.__ifloordiv__")(value, /) | Return self//=value. [`ndarray.__imod__`](generated/numpy.ndarray.__imod__#numpy.ndarray.__imod__ "numpy.ndarray.__imod__")(value, /) | Return self%=value. [`ndarray.__ipow__`](generated/numpy.ndarray.__ipow__#numpy.ndarray.__ipow__ "numpy.ndarray.__ipow__")(value, /) | Return self**=value. [`ndarray.__ilshift__`](generated/numpy.ndarray.__ilshift__#numpy.ndarray.__ilshift__ "numpy.ndarray.__ilshift__")(value, /) | Return self<<=value. [`ndarray.__irshift__`](generated/numpy.ndarray.__irshift__#numpy.ndarray.__irshift__ "numpy.ndarray.__irshift__")(value, /) | Return self>>=value. [`ndarray.__iand__`](generated/numpy.ndarray.__iand__#numpy.ndarray.__iand__ "numpy.ndarray.__iand__")(value, /) | Return self&=value. [`ndarray.__ior__`](generated/numpy.ndarray.__ior__#numpy.ndarray.__ior__ "numpy.ndarray.__ior__")(value, /) | Return self|=value. [`ndarray.__ixor__`](generated/numpy.ndarray.__ixor__#numpy.ndarray.__ixor__ "numpy.ndarray.__ixor__")(value, /) | Return self^=value. Warning In-place operations will perform the calculation using the precision decided by the data type of the two operands, but will silently downcast the result (if necessary) so it can fit back into the array. Therefore, for mixed precision calculations, `A {op}= B` can differ from `A = A {op} B`. For example, suppose `a = ones((3,3))`. Then, `a += 3j` is different from `a = a + 3j`: while they both perform the same computation, `a += 3j` casts the result to fit back in `a`, whereas `a = a + 3j` re-binds the name `a` to the result.
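The warning above can be seen concretely with mixed `float32`/`float64` operands (a sketch; the dtypes and the tiny increment are chosen purely for demonstration):

```python
import numpy as np

a = np.ones(3, dtype=np.float32)
b = np.full(3, 1e-10, dtype=np.float64)

c = a + b   # re-binds: c is a new float64 array, the increment survives
a += b      # computed in float64, then silently downcast back to float32

print(a.dtype, c.dtype)   # float32 float64
print(a[0] == 1.0)        # True: 1 + 1e-10 rounds back to 1.0 in float32
print(c[0] == 1.0)        # False: float64 keeps the increment
```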
Matrix Multiplication: [`ndarray.__matmul__`](generated/numpy.ndarray.__matmul__#numpy.ndarray.__matmul__ "numpy.ndarray.__matmul__")(value, /) | Return self@value. ---|--- Note Matrix operators `@` and `@=` were introduced in Python 3.5 following [**PEP 465**](https://peps.python.org/pep-0465/), and the `@` operator was introduced in NumPy 1.10.0. Further information can be found in the [`matmul`](generated/numpy.matmul#numpy.matmul "numpy.matmul") documentation. ## Special methods For standard library functions: [`ndarray.__copy__`](generated/numpy.ndarray.__copy__#numpy.ndarray.__copy__ "numpy.ndarray.__copy__")() | Used if [`copy.copy`](https://docs.python.org/3/library/copy.html#copy.copy "\(in Python v3.13\)") is called on an array. ---|--- [`ndarray.__deepcopy__`](generated/numpy.ndarray.__deepcopy__#numpy.ndarray.__deepcopy__ "numpy.ndarray.__deepcopy__")(memo, /) | Used if [`copy.deepcopy`](https://docs.python.org/3/library/copy.html#copy.deepcopy "\(in Python v3.13\)") is called on an array. [`ndarray.__reduce__`](generated/numpy.ndarray.__reduce__#numpy.ndarray.__reduce__ "numpy.ndarray.__reduce__")() | For pickling. [`ndarray.__setstate__`](generated/numpy.ndarray.__setstate__#numpy.ndarray.__setstate__ "numpy.ndarray.__setstate__")(state, /) | For unpickling. Basic customization: [`ndarray.__new__`](generated/numpy.ndarray.__new__#numpy.ndarray.__new__ "numpy.ndarray.__new__")(*args, **kwargs) | ---|--- [`ndarray.__array__`](generated/numpy.ndarray.__array__#numpy.ndarray.__array__ "numpy.ndarray.__array__")([dtype], *[, copy]) | For the `dtype` parameter, it returns a new reference to self if `dtype` is not given or it matches the array's data type.
[`ndarray.__array_wrap__`](generated/numpy.ndarray.__array_wrap__#numpy.ndarray.__array_wrap__ "numpy.ndarray.__array_wrap__")(array[, context], /) | Returns a view of [`array`](generated/numpy.array#numpy.array "numpy.array") with the same type as self. Container customization: (see [Indexing](routines.indexing#arrays-indexing)) [`ndarray.__len__`](generated/numpy.ndarray.__len__#numpy.ndarray.__len__ "numpy.ndarray.__len__")(/) | Return len(self). ---|--- [`ndarray.__getitem__`](generated/numpy.ndarray.__getitem__#numpy.ndarray.__getitem__ "numpy.ndarray.__getitem__")(key, /) | Return self[key]. [`ndarray.__setitem__`](generated/numpy.ndarray.__setitem__#numpy.ndarray.__setitem__ "numpy.ndarray.__setitem__")(key, value, /) | Set self[key] to value. [`ndarray.__contains__`](generated/numpy.ndarray.__contains__#numpy.ndarray.__contains__ "numpy.ndarray.__contains__")(key, /) | Return key in self. Conversion; the operations [`int()`](https://docs.python.org/3/library/functions.html#int "\(in Python v3.13\)"), [`float()`](https://docs.python.org/3/library/functions.html#float "\(in Python v3.13\)") and [`complex()`](https://docs.python.org/3/library/functions.html#complex "\(in Python v3.13\)"). They work only on arrays that have one element in them and return the appropriate scalar. [`ndarray.__int__`](generated/numpy.ndarray.__int__#numpy.ndarray.__int__ "numpy.ndarray.__int__")(self) | ---|--- [`ndarray.__float__`](generated/numpy.ndarray.__float__#numpy.ndarray.__float__ "numpy.ndarray.__float__")(self) | [`ndarray.__complex__`](generated/numpy.ndarray.__complex__#numpy.ndarray.__complex__ "numpy.ndarray.__complex__") | String representations: [`ndarray.__str__`](generated/numpy.ndarray.__str__#numpy.ndarray.__str__ "numpy.ndarray.__str__")(/) | Return str(self). ---|--- [`ndarray.__repr__`](generated/numpy.ndarray.__repr__#numpy.ndarray.__repr__ "numpy.ndarray.__repr__")(/) | Return repr(self). 
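The single-element rule for the `int()`, `float()` and `complex()` conversions described above can be checked directly (a quick sketch using a zero-dimensional array):

```python
import numpy as np

x = np.array(3.7)                       # zero-dimensional: one element
print(int(x), float(x), complex(x))     # 3 3.7 (3.7+0j)

try:
    int(np.array([1, 2]))               # more than one element: TypeError
except TypeError as err:
    print("raised:", err)
```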
Utility method for typing: [`ndarray.__class_getitem__`](generated/numpy.ndarray.__class_getitem__#numpy.ndarray.__class_getitem__ "numpy.ndarray.__class_getitem__")(item, /) | Return a parametrized wrapper around the [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") type. ---|--- # Iterating over arrays Note Arrays support the iterator protocol and can be iterated over like Python lists. See the [Indexing, slicing and iterating](../user/quickstart#quickstart-indexing-slicing-and-iterating) section in the Quickstart guide for basic usage and examples. The remainder of this document presents the [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") object and covers more advanced usage. The iterator object [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer"), introduced in NumPy 1.6, provides many flexible ways to visit all the elements of one or more arrays in a systematic fashion. This page introduces some basic ways to use the object for computations on arrays in Python, then concludes with how one can accelerate the inner loop in Cython. Since the Python exposure of [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") is a relatively straightforward mapping of the C array iterator API, these ideas will also provide help working with array iteration from C or C++. ## Single array iteration The most basic task that can be done with the [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") is to visit every element of an array. Each element is provided one by one using the standard Python iterator interface. #### Example >>> import numpy as np >>> a = np.arange(6).reshape(2,3) >>> for x in np.nditer(a): ... print(x, end=' ') ... 0 1 2 3 4 5 An important thing to be aware of for this iteration is that the order is chosen to match the memory layout of the array instead of using a standard C or Fortran ordering. 
This is done for access efficiency, reflecting the idea that by default one simply wants to visit each element without concern for a particular ordering. We can see this by iterating over the transpose of our previous array, compared to taking a copy of that transpose in C order. #### Example >>> import numpy as np >>> a = np.arange(6).reshape(2,3) >>> for x in np.nditer(a.T): ... print(x, end=' ') ... 0 1 2 3 4 5 >>> for x in np.nditer(a.T.copy(order='C')): ... print(x, end=' ') ... 0 3 1 4 2 5 The elements of both `a` and `a.T` get traversed in the same order, namely the order they are stored in memory, whereas the elements of `a.T.copy(order=’C’)` get visited in a different order because they have been put into a different memory layout. ### Controlling iteration order There are times when it is important to visit the elements of an array in a specific order, irrespective of the layout of the elements in memory. The [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") object provides an `order` parameter to control this aspect of iteration. The default, having the behavior described above, is order=’K’ to keep the existing order. This can be overridden with order=’C’ for C order and order=’F’ for Fortran order. #### Example >>> import numpy as np >>> a = np.arange(6).reshape(2,3) >>> for x in np.nditer(a, order='F'): ... print(x, end=' ') ... 0 3 1 4 2 5 >>> for x in np.nditer(a.T, order='C'): ... print(x, end=' ') ... 0 3 1 4 2 5 ### Modifying array values By default, the [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") treats the input operand as a read-only object. To be able to modify the array elements, you must specify either read-write or write-only mode using the `‘readwrite’` or `‘writeonly’` per-operand flags. The nditer will then yield writeable buffer arrays which you may modify. 
However, because the nditer must copy this buffer data back to the original array once iteration is finished, you must signal when the iteration has ended, by one of two methods. You may either: * use the nditer as a context manager using the [`with`](https://docs.python.org/3/reference/compound_stmts.html#with "\(in Python v3.13\)") statement, and the temporary data will be written back when the context is exited. * call the iterator’s [`close`](generated/numpy.nditer.close#numpy.nditer.close "numpy.nditer.close") method once finished iterating, which will trigger the write-back. The nditer can no longer be iterated once either [`close`](generated/numpy.nditer.close#numpy.nditer.close "numpy.nditer.close") is called or its context is exited. #### Example >>> import numpy as np >>> a = np.arange(6).reshape(2,3) >>> a array([[0, 1, 2], [3, 4, 5]]) >>> with np.nditer(a, op_flags=['readwrite']) as it: ... for x in it: ... x[...] = 2 * x ... >>> a array([[ 0, 2, 4], [ 6, 8, 10]]) If you are writing code that needs to support older versions of numpy, note that prior to 1.15, [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") was not a context manager and did not have a [`close`](generated/numpy.nditer.close#numpy.nditer.close "numpy.nditer.close") method. Instead it relied on the destructor to initiate the writeback of the buffer. ### Using an external loop In all the examples so far, the elements of `a` are provided by the iterator one at a time, because all the looping logic is internal to the iterator. While this is simple and convenient, it is not very efficient. A better approach is to move the one-dimensional innermost loop into your code, external to the iterator. This way, NumPy’s vectorized operations can be used on larger chunks of the elements being visited. The [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") will try to provide chunks that are as large as possible to the inner loop.
By forcing ‘C’ and ‘F’ order, we get different external loop sizes. This mode is enabled by specifying an iterator flag. Observe that with the default of keeping native memory order, the iterator is able to provide a single one-dimensional chunk, whereas when forcing Fortran order, it has to provide three chunks of two elements each. #### Example >>> import numpy as np >>> a = np.arange(6).reshape(2,3) >>> for x in np.nditer(a, flags=['external_loop']): ... print(x, end=' ') ... [0 1 2 3 4 5] >>> for x in np.nditer(a, flags=['external_loop'], order='F'): ... print(x, end=' ') ... [0 3] [1 4] [2 5] ### Tracking an index or multi-index During iteration, you may want to use the index of the current element in a computation. For example, you may want to visit the elements of an array in memory order, but use a C-order, Fortran-order, or multidimensional index to look up values in a different array. The index is tracked by the iterator object itself, and accessible through the `index` or `multi_index` properties, depending on what was requested. The examples below show printouts demonstrating the progression of the index: #### Example >>> import numpy as np >>> a = np.arange(6).reshape(2,3) >>> it = np.nditer(a, flags=['f_index']) >>> for x in it: ... print("%d <%d>" % (x, it.index), end=' ') ... 0 <0> 1 <2> 2 <4> 3 <1> 4 <3> 5 <5> >>> it = np.nditer(a, flags=['multi_index']) >>> for x in it: ... print("%d <%s>" % (x, it.multi_index), end=' ') ... 0 <(0, 0)> 1 <(0, 1)> 2 <(0, 2)> 3 <(1, 0)> 4 <(1, 1)> 5 <(1, 2)> >>> with np.nditer(a, flags=['multi_index'], op_flags=['writeonly']) as it: ... for x in it: ... x[...] = it.multi_index[1] - it.multi_index[0] ... >>> a array([[ 0, 1, 2], [-1, 0, 1]]) Tracking an index or multi-index is incompatible with using an external loop, because it requires a different index value per element. If you try to combine these flags, the [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") object will raise an exception. 
#### Example >>> import numpy as np >>> a = np.zeros((2,3)) >>> it = np.nditer(a, flags=['c_index', 'external_loop']) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: Iterator flag EXTERNAL_LOOP cannot be used if an index or multi-index is being tracked ### Alternative looping and element access To make its properties more readily accessible during iteration, [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") has an alternative syntax for iterating, which works explicitly with the iterator object itself. With this looping construct, the current value is accessible by indexing into the iterator. Other properties, such as tracked indices, remain as before. The examples below produce identical results to the ones in the previous section. #### Example >>> import numpy as np >>> a = np.arange(6).reshape(2,3) >>> it = np.nditer(a, flags=['f_index']) >>> while not it.finished: ... print("%d <%d>" % (it[0], it.index), end=' ') ... is_not_finished = it.iternext() ... 0 <0> 1 <2> 2 <4> 3 <1> 4 <3> 5 <5> >>> it = np.nditer(a, flags=['multi_index']) >>> while not it.finished: ... print("%d <%s>" % (it[0], it.multi_index), end=' ') ... is_not_finished = it.iternext() ... 0 <(0, 0)> 1 <(0, 1)> 2 <(0, 2)> 3 <(1, 0)> 4 <(1, 1)> 5 <(1, 2)> >>> with np.nditer(a, flags=['multi_index'], op_flags=['writeonly']) as it: ... while not it.finished: ... it[0] = it.multi_index[1] - it.multi_index[0] ... is_not_finished = it.iternext() ... >>> a array([[ 0, 1, 2], [-1, 0, 1]]) ### Buffering the array elements When forcing an iteration order, we observed that the external loop option may provide the elements in smaller chunks because the elements can’t be visited in the appropriate order with a constant stride. When writing C code, this is generally fine, however in pure Python code this can cause a significant reduction in performance.
By enabling buffering mode, the chunks provided by the iterator to the inner loop can be made larger, significantly reducing the overhead of the Python interpreter. In the example forcing Fortran iteration order, the inner loop gets to see all the elements in one go when buffering is enabled. #### Example >>> import numpy as np >>> a = np.arange(6).reshape(2,3) >>> for x in np.nditer(a, flags=['external_loop'], order='F'): ... print(x, end=' ') ... [0 3] [1 4] [2 5] >>> for x in np.nditer(a, flags=['external_loop','buffered'], order='F'): ... print(x, end=' ') ... [0 3 1 4 2 5] ### Iterating as a specific data type There are times when it is necessary to treat an array as a different data type than it is stored as. For instance, one may want to do all computations on 64-bit floats, even if the arrays being manipulated are 32-bit floats. Except when writing low-level C code, it’s generally better to let the iterator handle the copying or buffering instead of casting the data type yourself in the inner loop. There are two mechanisms which allow this to be done, temporary copies and buffering mode. With temporary copies, a copy of the entire array is made with the new data type, then iteration is done in the copy. Write access is permitted through a mode which updates the original array after all the iteration is complete. The major drawback of temporary copies is that the temporary copy may consume a large amount of memory, particularly if the iteration data type has a larger itemsize than the original one. Buffering mode mitigates the memory usage issue and is more cache-friendly than making temporary copies. Except for special cases, where the whole array is needed at once outside the iterator, buffering is recommended over temporary copying. Within NumPy, buffering is used by the ufuncs and other functions to support flexible inputs with minimal memory overhead. 
In our examples, we will treat the input array with a complex data type, so that we can take square roots of negative numbers. Without enabling copies or buffering mode, the iterator will raise an exception if the data type doesn’t match precisely. #### Example >>> import numpy as np >>> a = np.arange(6).reshape(2,3) - 3 >>> for x in np.nditer(a, op_dtypes=['complex128']): ... print(np.sqrt(x), end=' ') ... Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: Iterator operand required copying or buffering, but neither copying nor buffering was enabled In copying mode, ‘copy’ is specified as a per-operand flag. This is done to provide control in a per-operand fashion. Buffering mode is specified as an iterator flag. #### Example >>> import numpy as np >>> a = np.arange(6).reshape(2,3) - 3 >>> for x in np.nditer(a, op_flags=['readonly','copy'], ... op_dtypes=['complex128']): ... print(np.sqrt(x), end=' ') ... 1.7320508075688772j 1.4142135623730951j 1j 0j (1+0j) (1.4142135623730951+0j) >>> for x in np.nditer(a, flags=['buffered'], op_dtypes=['complex128']): ... print(np.sqrt(x), end=' ') ... 1.7320508075688772j 1.4142135623730951j 1j 0j (1+0j) (1.4142135623730951+0j) The iterator uses NumPy’s casting rules to determine whether a specific conversion is permitted. By default, it enforces ‘safe’ casting. This means, for example, that it will raise an exception if you try to treat a 64-bit float array as a 32-bit float array. In many cases, the rule ‘same_kind’ is the most reasonable rule to use, since it will allow conversion from 64 to 32-bit float, but not from float to int or from complex to float. #### Example >>> import numpy as np >>> a = np.arange(6.) >>> for x in np.nditer(a, flags=['buffered'], op_dtypes=['float32']): ... print(x, end=' ') ...
Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: Iterator operand 0 dtype could not be cast from dtype('float64') to dtype('float32') according to the rule 'safe' >>> for x in np.nditer(a, flags=['buffered'], op_dtypes=['float32'], ... casting='same_kind'): ... print(x, end=' ') ... 0.0 1.0 2.0 3.0 4.0 5.0 >>> for x in np.nditer(a, flags=['buffered'], op_dtypes=['int32'], casting='same_kind'): ... print(x, end=' ') ... Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: Iterator operand 0 dtype could not be cast from dtype('float64') to dtype('int32') according to the rule 'same_kind' One thing to watch out for is conversions back to the original data type when using a read-write or write-only operand. A common case is to implement the inner loop in terms of 64-bit floats, and use ‘same_kind’ casting to allow the other floating-point types to be processed as well. While an integer array could be provided in read-only mode, read-write mode will raise an exception because conversion back to the array would violate the casting rule. #### Example >>> import numpy as np >>> a = np.arange(6) >>> for x in np.nditer(a, flags=['buffered'], op_flags=['readwrite'], ... op_dtypes=['float64'], casting='same_kind'): ... x[...] = x / 2.0 ... Traceback (most recent call last): File "<stdin>", line 2, in <module> TypeError: Iterator requested dtype could not be cast from dtype('float64') to dtype('int64'), the operand 0 dtype, according to the rule 'same_kind' ## Broadcasting array iteration NumPy has a set of rules for dealing with arrays that have differing shapes which are applied whenever functions take multiple operands which combine element-wise. This is called [broadcasting](../user/basics.ufuncs#ufuncs-broadcasting). The [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") object can apply these rules for you when you need to write such a function. As an example, we print out the result of broadcasting a one and a two dimensional array together.
#### Example >>> import numpy as np >>> a = np.arange(3) >>> b = np.arange(6).reshape(2,3) >>> for x, y in np.nditer([a,b]): ... print("%d:%d" % (x,y), end=' ') ... 0:0 1:1 2:2 0:3 1:4 2:5 When a broadcasting error occurs, the iterator raises an exception which includes the input shapes to help diagnose the problem. #### Example >>> import numpy as np >>> a = np.arange(2) >>> b = np.arange(6).reshape(2,3) >>> for x, y in np.nditer([a,b]): ... print("%d:%d" % (x,y), end=' ') ... Traceback (most recent call last): ... ValueError: operands could not be broadcast together with shapes (2,) (2,3) ### Iterator-allocated output arrays A common case in NumPy functions is to have outputs allocated based on the broadcasting of the input, and additionally have an optional parameter called ‘out’ where the result will be placed when it is provided. The [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") object provides a convenient idiom that makes it very easy to support this mechanism. We’ll show how this works by creating a function [`square`](generated/numpy.square#numpy.square "numpy.square") which squares its input. Let’s start with a minimal function definition excluding ‘out’ parameter support. #### Example >>> import numpy as np >>> def square(a): ... with np.nditer([a, None]) as it: ... for x, y in it: ... y[...] = x*x ... return it.operands[1] ... >>> square([1,2,3]) array([1, 4, 9]) By default, the [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") uses the flags ‘allocate’ and ‘writeonly’ for operands that are passed in as None. This means we were able to provide just the two operands to the iterator, and it handled the rest. When adding the ‘out’ parameter, we have to explicitly provide those flags, because if someone passes in an array as ‘out’, the iterator will default to ‘readonly’, and our inner loop would fail. 
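That failure mode can be seen directly: with no `op_flags` given, a provided output operand defaults to ‘readonly’, so the inner loop cannot assign to it. A quick sketch:

```python
import numpy as np

a = np.array([1, 2, 3])
out = np.empty_like(a)

# With no op_flags, both operands default to 'readonly'.
it = np.nditer([a, out])
x, y = next(it)
try:
    y[...] = x * x          # y is a read-only view of `out`
    wrote = True
except ValueError as exc:
    wrote = False
    print("write failed:", exc)
```

Passing `op_flags=[['readonly'], ['writeonly']]` (as in the full example below with ‘allocate’) is what makes the assignment legal.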
The reason ‘readonly’ is the default for input arrays is to prevent confusion about unintentionally triggering a reduction operation. If the default were ‘readwrite’, any broadcasting operation would also trigger a reduction, a topic which is covered later in this document. While we’re at it, let’s also introduce the ‘no_broadcast’ flag, which will prevent the output from being broadcast. This is important, because we only want one input value for each output. Aggregating more than one input value is a reduction operation which requires special handling. It would already raise an error because reductions must be explicitly enabled in an iterator flag, but the error message that results from disabling broadcasting is much more understandable for end-users. To see how to generalize the square function to a reduction, look at the sum of squares function in the section about Cython. For completeness, we’ll also add the ‘external_loop’ and ‘buffered’ flags, as these are what you will typically want for performance reasons. #### Example >>> import numpy as np >>> def square(a, out=None): ... it = np.nditer([a, out], ... flags = ['external_loop', 'buffered'], ... op_flags = [['readonly'], ... ['writeonly', 'allocate', 'no_broadcast']]) ... with it: ... for x, y in it: ... y[...] = x*x ... return it.operands[1] ... >>> square([1,2,3]) array([1, 4, 9]) >>> b = np.zeros((3,)) >>> square([1,2,3], out=b) array([1., 4., 9.]) >>> b array([1., 4., 9.]) >>> square(np.arange(6).reshape(2,3), out=b) Traceback (most recent call last): ... ValueError: non-broadcastable output operand with shape (3,) doesn't match the broadcast shape (2,3) ### Outer product iteration Any binary operation can be extended to an array operation in an outer product fashion like in [`outer`](generated/numpy.outer#numpy.outer "numpy.outer"), and the [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") object provides a way to accomplish this by explicitly mapping the axes of the operands. 
It is also possible to do this with [`newaxis`](constants#numpy.newaxis "numpy.newaxis") indexing, but we will show you how to directly use the nditer `op_axes` parameter to accomplish this with no intermediate views. We’ll do a simple outer product, placing the dimensions of the first operand before the dimensions of the second operand. The `op_axes` parameter needs one list of axes for each operand, and provides a mapping from the iterator’s axes to the axes of the operand. Suppose the first operand is one dimensional and the second operand is two dimensional. The iterator will have three dimensions, so `op_axes` will have two 3-element lists. The first list picks out the one axis of the first operand, and is -1 for the rest of the iterator axes, with a final result of [0, -1, -1]. The second list picks out the two axes of the second operand, but shouldn’t overlap with the axes picked out in the first operand. Its list is [-1, 0, 1]. The output operand maps onto the iterator axes in the standard manner, so we can provide None instead of constructing another list. The operation in the inner loop is a straightforward multiplication. Everything to do with the outer product is handled by the iterator setup. #### Example >>> import numpy as np >>> a = np.arange(3) >>> b = np.arange(8).reshape(2,4) >>> it = np.nditer([a, b, None], flags=['external_loop'], ... op_axes=[[0, -1, -1], [-1, 0, 1], None]) >>> with it: ... for x, y, z in it: ... z[...] = x*y ... result = it.operands[2] # same as z ... >>> result array([[[ 0, 0, 0, 0], [ 0, 0, 0, 0]], [[ 0, 1, 2, 3], [ 4, 5, 6, 7]], [[ 0, 2, 4, 6], [ 8, 10, 12, 14]]]) Note that once the iterator is closed we can not access [`operands`](generated/numpy.nditer.operands#numpy.nditer.operands "numpy.nditer.operands") and must use a reference created inside the context manager. ### Reduction iteration Whenever a writeable operand has fewer elements than the full iteration space, that operand is undergoing a reduction. 
The [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") object requires that any reduction operand be flagged as read-write, and only allows reductions when ‘reduce_ok’ is provided as an iterator flag. For a simple example, consider taking the sum of all elements in an array. #### Example >>> import numpy as np >>> a = np.arange(24).reshape(2,3,4) >>> b = np.array(0) >>> with np.nditer([a, b], flags=['reduce_ok'], ... op_flags=[['readonly'], ['readwrite']]) as it: ... for x,y in it: ... y[...] += x ... >>> b array(276) >>> np.sum(a) 276 Things are a little bit more tricky when combining reduction and allocated operands. Before iteration is started, any reduction operand must be initialized to its starting values. Here’s how we can do this, taking sums along the last axis of `a`. #### Example >>> import numpy as np >>> a = np.arange(24).reshape(2,3,4) >>> it = np.nditer([a, None], flags=['reduce_ok'], ... op_flags=[['readonly'], ['readwrite', 'allocate']], ... op_axes=[None, [0,1,-1]]) >>> with it: ... it.operands[1][...] = 0 ... for x, y in it: ... y[...] += x ... result = it.operands[1] ... >>> result array([[ 6, 22, 38], [54, 70, 86]]) >>> np.sum(a, axis=2) array([[ 6, 22, 38], [54, 70, 86]]) To do buffered reduction requires yet another adjustment during the setup. Normally the iterator construction involves copying the first buffer of data from the readable arrays into the buffer. Any reduction operand is readable, so it may be read into a buffer. Unfortunately, initialization of the operand after this buffering operation is complete will not be reflected in the buffer that the iteration starts with, and garbage results will be produced. The iterator flag “delay_bufalloc” is there to allow iterator-allocated reduction operands to exist together with buffering. When this flag is set, the iterator will leave its buffers uninitialized until it receives a reset, after which it will be ready for regular iteration. 
Here’s how the previous example looks if we also enable buffering. #### Example >>> import numpy as np >>> a = np.arange(24).reshape(2,3,4) >>> it = np.nditer([a, None], flags=['reduce_ok', ... 'buffered', 'delay_bufalloc'], ... op_flags=[['readonly'], ['readwrite', 'allocate']], ... op_axes=[None, [0,1,-1]]) >>> with it: ... it.operands[1][...] = 0 ... it.reset() ... for x, y in it: ... y[...] += x ... result = it.operands[1] ... >>> result array([[ 6, 22, 38], [54, 70, 86]]) ## Putting the inner loop in Cython Those who want really good performance out of their low level operations should strongly consider directly using the iteration API provided in C, but for those who are not comfortable with C or C++, Cython is a good middle ground with reasonable performance tradeoffs. For the [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") object, this means letting the iterator take care of broadcasting, dtype conversion, and buffering, while giving the inner loop to Cython. For our example, we’ll create a sum of squares function. To start, let’s implement this function in straightforward Python. We want to support an ‘axis’ parameter similar to the numpy [`sum`](generated/numpy.sum#numpy.sum "numpy.sum") function, so we will need to construct a list for the `op_axes` parameter. Here’s how this looks. #### Example >>> def axis_to_axeslist(axis, ndim): ... if axis is None: ... return [-1] * ndim ... else: ... if type(axis) is not tuple: ... axis = (axis,) ... axeslist = [1] * ndim ... for i in axis: ... axeslist[i] = -1 ... ax = 0 ... for i in range(ndim): ... if axeslist[i] != -1: ... axeslist[i] = ax ... ax += 1 ... return axeslist ... >>> def sum_squares_py(arr, axis=None, out=None): ... axeslist = axis_to_axeslist(axis, arr.ndim) ... it = np.nditer([arr, out], flags=['reduce_ok', ... 'buffered', 'delay_bufalloc'], ... op_flags=[['readonly'], ['readwrite', 'allocate']], ... op_axes=[None, axeslist], ... op_dtypes=['float64', 'float64']) ... with it: ... 
it.operands[1][...] = 0 ... it.reset() ... for x, y in it: ... y[...] += x*x ... return it.operands[1] ... >>> a = np.arange(6).reshape(2,3) >>> sum_squares_py(a) array(55.) >>> sum_squares_py(a, axis=-1) array([ 5., 50.]) To Cython-ize this function, we replace the inner loop (y[…] += x*x) with Cython code that’s specialized for the float64 dtype. With the ‘external_loop’ flag enabled, the arrays provided to the inner loop will always be one-dimensional, so very little checking needs to be done. Here’s the listing of sum_squares.pyx: import numpy as np cimport numpy as np cimport cython def axis_to_axeslist(axis, ndim): if axis is None: return [-1] * ndim else: if type(axis) is not tuple: axis = (axis,) axeslist = [1] * ndim for i in axis: axeslist[i] = -1 ax = 0 for i in range(ndim): if axeslist[i] != -1: axeslist[i] = ax ax += 1 return axeslist @cython.boundscheck(False) def sum_squares_cy(arr, axis=None, out=None): cdef np.ndarray[double] x cdef np.ndarray[double] y cdef int size cdef double value axeslist = axis_to_axeslist(axis, arr.ndim) it = np.nditer([arr, out], flags=['reduce_ok', 'external_loop', 'buffered', 'delay_bufalloc'], op_flags=[['readonly'], ['readwrite', 'allocate']], op_axes=[None, axeslist], op_dtypes=['float64', 'float64']) with it: it.operands[1][...] = 0 it.reset() for xarr, yarr in it: x = xarr y = yarr size = x.shape[0] for i in range(size): value = x[i] y[i] = y[i] + value * value return it.operands[1] On this machine, building the .pyx file into a module looked like the following, but you may have to find some Cython tutorials to tell you the specifics for your system configuration: $ cython sum_squares.pyx $ gcc -shared -pthread -fPIC -fwrapv -O2 -Wall -I/usr/include/python2.7 -fno-strict-aliasing -o sum_squares.so sum_squares.c Running this from the Python interpreter produces the same answers as our native Python/NumPy code did.
#### Example >>> from sum_squares import sum_squares_cy >>> a = np.arange(6).reshape(2,3) >>> sum_squares_cy(a) array(55.) >>> sum_squares_cy(a, axis=-1) array([ 5., 50.]) Doing a little timing in IPython shows that the reduced overhead and memory allocation of the Cython inner loop is providing a very nice speedup over both the straightforward Python code and an expression using NumPy’s built-in sum function: >>> a = np.random.rand(1000,1000) >>> timeit sum_squares_py(a, axis=-1) 10 loops, best of 3: 37.1 ms per loop >>> timeit np.sum(a*a, axis=-1) 10 loops, best of 3: 20.9 ms per loop >>> timeit sum_squares_cy(a, axis=-1) 100 loops, best of 3: 11.8 ms per loop >>> np.all(sum_squares_cy(a, axis=-1) == np.sum(a*a, axis=-1)) True >>> np.all(sum_squares_py(a, axis=-1) == np.sum(a*a, axis=-1)) True # Data type promotion in NumPy When mixing two different data types, NumPy has to determine the appropriate dtype for the result of the operation. This step is referred to as _promotion_ or _finding the common dtype_. In typical cases, the user does not need to worry about the details of promotion, since the promotion step usually ensures that the result will either match or exceed the precision of the input. For example, when the inputs are of the same dtype, the dtype of the result matches the dtype of the inputs: >>> np.int8(1) + np.int8(1) np.int8(2) Mixing two different dtypes normally produces a result with the dtype of the higher precision input: >>> np.int8(4) + np.int64(8) # 64 > 8 np.int64(12) >>> np.float32(3) + np.float16(3) # 32 > 16 np.float32(6.0) In typical cases, this does not lead to surprises. However, if you work with non-default dtypes like unsigned integers and low-precision floats, or if you mix NumPy integers, NumPy floats, and Python scalars, some details of NumPy promotion rules may be relevant. Note that these detailed rules do not always match those of other languages [1]. Numerical dtypes come in four “kinds” with a natural hierarchy. 1.
unsigned integers (`uint`) 2. signed integers (`int`) 3. float (`float`) 4. complex (`complex`) In addition to kind, NumPy numerical dtypes also have an associated precision, specified in bits. Together, the kind and precision specify the dtype. For example, a `uint8` is an unsigned integer stored using 8 bits. The result of an operation will always be of an equal or higher kind than any of the inputs. Furthermore, the result will always have a precision greater than or equal to those of the inputs. Already, this can lead to some examples which may be unexpected: 1. When mixing floating point numbers and integers, the precision of the integer may force the result to a higher precision floating point. For example, the result of an operation involving `int64` and `float16` is `float64`. 2. When mixing unsigned and signed integers with the same precision, the result will have _higher_ precision than either input. Additionally, if one of them already has 64-bit precision, no higher-precision integer is available; for example, an operation involving `int64` and `uint64` gives `float64`. Please see the `Numerical promotion` section and image below for details on both. ## Detailed behavior of Python scalars Since NumPy 2.0 [2], an important point in our promotion rules is that although operations involving two NumPy dtypes never lose precision, operations involving a NumPy dtype and a Python scalar (`int`, `float`, or `complex`) _can_ lose precision. For instance, it is probably intuitive that the result of an operation between a Python integer and a NumPy integer should be a NumPy integer. However, Python integers have arbitrary precision whereas all NumPy dtypes have fixed precision, so the arbitrary precision of Python integers cannot be preserved. More generally, NumPy considers the “kind” of Python scalars, but ignores their precision when determining the result dtype. This is often convenient.
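These rules can be checked without performing any arithmetic by asking `np.result_type` for the promoted dtype (the results shown assume NumPy 2.x promotion):

```python
import numpy as np

# Two NumPy dtypes: kind and precision both promote, independent of values.
assert np.result_type(np.int64, np.float16) == np.float64   # int64 forces a 64-bit float
assert np.result_type(np.uint64, np.int64) == np.float64    # no integer holds both ranges
assert np.result_type(np.uint8, np.int8) == np.int16        # needs a wider signed integer

# A Python scalar contributes only its kind, not a precision.
assert np.result_type(np.float32, 3.0) == np.float32
assert np.result_type(np.int16, 10) == np.int16
```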
For instance, when working with arrays of a low precision dtype, it is usually desirable for simple operations with Python scalars to preserve the dtype. >>> arr_float32 = np.array([1, 2.5, 2.1], dtype="float32") >>> arr_float32 + 10.0 # undesirable to promote to float64 array([11. , 12.5, 12.1], dtype=float32) >>> arr_int16 = np.array([3, 5, 7], dtype="int16") >>> arr_int16 + 10 # undesirable to promote to int64 array([13, 15, 17], dtype=int16) In both cases, the result precision is dictated by the NumPy dtype. Because of this, `arr_float32 + 3.0` behaves the same as `arr_float32 + np.float32(3.0)`, and `arr_int16 + 10` behaves as `arr_int16 + np.int16(10)`. As another example, when mixing NumPy integers with a Python `float` or `complex`, the result always has type `float64` or `complex128`: >>> np.int16(1) + 1.0 np.float64(2.0) However, these rules can also lead to surprising behavior when working with low precision dtypes. First, since the Python value is converted to a NumPy one before the operation can be performed, operations can fail with an error when the result seems obvious. For instance, `np.int8(1) + 1000` cannot continue because `1000` exceeds the maximum value of an `int8`. When the Python scalar cannot be coerced to the NumPy dtype, an error is raised: >>> np.int8(1) + 1000 Traceback (most recent call last): ... OverflowError: Python integer 1000 out of bounds for int8 >>> np.int64(1) * 10**100 Traceback (most recent call last): ... OverflowError: Python int too large to convert to C long >>> np.float32(1) + 1e300 np.float32(inf) ... RuntimeWarning: overflow encountered in cast Second, since the Python float or integer precision is always ignored, a low precision NumPy scalar will keep using its lower precision unless explicitly converted to a higher precision NumPy dtype or Python scalar (e.g. via `int()`, `float()`, or `scalar.item()`).
This lower precision may be detrimental to some calculations or lead to incorrect results, especially in the case of integer overflows: >>> np.int8(100) + 100 # the result exceeds the capacity of int8 np.int8(-56) ... RuntimeWarning: overflow encountered in scalar add Note that NumPy warns when overflows occur for scalars, but not for arrays; e.g., `np.array(100, dtype="uint8") + 100` will _not_ warn. ## Numerical promotion The following image shows the numerical promotion rules with the kinds on the vertical axis and the precision on the horizontal axis. The input dtype with the higher kind determines the kind of the result dtype. The result dtype has a precision as low as possible without appearing to the left of either input dtype in the diagram. Note the following specific rules and observations: 1. When a Python `float` or `complex` interacts with a NumPy integer the result will be `float64` or `complex128` (yellow border). NumPy booleans will also be cast to the default integer [3]. This is not relevant when additionally NumPy floating point values are involved. 2. The precision is drawn such that `float16 < int16 < uint16` because large `uint16` do not fit `int16` and large `int16` will lose precision when stored in a `float16`. This pattern, however, is broken since NumPy always considers `float64` and `complex128` to be acceptable promotion results for any integer value. 3. A special case is that NumPy promotes many combinations of signed and unsigned integers to `float64`. A higher kind is used here because no signed integer dtype is sufficiently precise to hold a `uint64`. ## Exceptions to the general promotion rules In NumPy, promotion refers to what specific functions do with the result and in some cases, this means that NumPy may deviate from what `np.result_type` would give. ### Behavior of `sum` and `prod` `np.sum` and `np.prod` will always return the default integer type when summing over integer values (or booleans). This is usually an `int64`.
The reason for this is that integer summations are otherwise very likely to overflow and give confusing results. This rule also applies to the underlying `np.add.reduce` and `np.multiply.reduce`. ### Notable behavior with NumPy or Python integer scalars NumPy promotion refers to the result dtype and operation precision, but the operation will sometimes dictate that result. Division always returns floating point values, and comparisons always return booleans. This leads to what may appear as “exceptions” to the rules: * NumPy comparisons with Python integers or mixed precision integers always return the correct result. The inputs will never be cast in a way which loses precision. * Equality comparisons between types which cannot be promoted will be considered all `False` (equality) or all `True` (not-equal). * Unary math functions like `np.sin`, which always return floating point values, accept any Python integer input by converting it to `float64`. * Division always returns floating point values and thus also allows divisions between any NumPy integer with any Python integer value by casting both to `float64`. In principle, some of these exceptions may make sense for other functions. Please raise an issue if you feel this is the case. ## Promotion of non-numerical datatypes NumPy extends the promotion to non-numerical types, although in many cases promotion is not well defined and is simply rejected. The following rules apply: * NumPy byte strings (`np.bytes_`) can be promoted to unicode strings (`np.str_`). However, casting the bytes to unicode will fail for non-ascii characters. * For some purposes NumPy will promote almost any other datatype to strings. This applies to array creation or concatenation. * The array constructors like `np.array()` will use `object` dtype when there is no viable promotion. * Structured dtypes can promote when their field names and order match. In that case all fields are promoted individually.
* NumPy `timedelta` can in some cases promote with integers. Note Some of these rules are somewhat surprising, and are being considered for change in the future. However, any backward-incompatible changes have to be weighed against the risks of breaking existing code. Please raise an issue if you have particular ideas about how promotion should work. ## Details of promoted `dtype` instances The above discussion has mainly dealt with the behavior when mixing different DType classes. A `dtype` instance attached to an array can carry additional information such as byte-order, metadata, string length, or exact structured dtype layout. While the string length or field names of a structured dtype are important, NumPy considers byte-order, metadata, and the exact layout of a structured dtype as storage details. During promotion NumPy does _not_ take these storage details into account: * Byte-order is converted to native byte-order. * Metadata attached to the dtype may or may not be preserved. * Resulting structured dtypes will be packed (but aligned if inputs were). This behavior is best for most programs, where storage details are not relevant to the final results and where the use of incorrect byte-order could drastically slow down evaluation. [1] To a large degree, this may just be due to choices made early on in NumPy’s predecessors. For more details, see [NEP 50](https://numpy.org/neps/nep-0050-scalar-promotion.html#nep50 "\(in NumPy Enhancement Proposals\)"). [2] See also [NEP 50](https://numpy.org/neps/nep-0050-scalar-promotion.html#nep50 "\(in NumPy Enhancement Proposals\)") which changed the rules for NumPy 2.0. Previous versions of NumPy would sometimes return higher precision results based on the input value of Python scalars. Further, previous versions of NumPy would typically ignore the higher precision of NumPy scalars or 0-D arrays for promotion purposes.
[3] The default integer is marked as `int64` in the schema but is `int32` on 32bit platforms. However, normal PCs are 64bit. # Scalars Python defines only one type of a particular data class (there is only one integer type, one floating-point type, etc.). This can be convenient in applications that don’t need to be concerned with all the ways data can be represented in a computer. For scientific computing, however, more control is often needed. In NumPy, there are 24 new fundamental Python types to describe different types of scalars. These type descriptors are mostly based on the types available in the C language that CPython is written in, with several additional types compatible with Python’s types. Array scalars have the same attributes and methods as [`ndarrays`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"). [1] This allows one to treat items of an array partly on the same footing as arrays, smoothing out rough edges that result when mixing scalar and array operations. Array scalars live in a hierarchy (see the Figure below) of data types. They can be detected using the hierarchy: For example, `isinstance(val, np.generic)` will return [`True`](https://docs.python.org/3/library/constants.html#True "\(in Python v3.13\)") if _val_ is an array scalar object. Alternatively, what kind of array scalar is present can be determined using other members of the data type hierarchy. Thus, for example `isinstance(val, np.complexfloating)` will return [`True`](https://docs.python.org/3/library/constants.html#True "\(in Python v3.13\)") if _val_ is a complex valued type, while `isinstance(val, np.flexible)` will return true if _val_ is one of the flexible itemsize array types (`str_`, `bytes_`, `void`). **Figure:** Hierarchy of type objects representing the array data types. Not shown are the two integer types `intp` and `uintp` which are used for indexing (the same as the default integer since NumPy 2). 
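The hierarchy checks described above can be exercised directly with `isinstance`:

```python
import numpy as np

val = np.complex128(1 + 2j)
assert isinstance(val, np.generic)          # any array scalar
assert isinstance(val, np.complexfloating)  # complex-valued kind
assert not isinstance(val, np.flexible)     # not a flexible-itemsize type

# str_, bytes_ and void are the flexible-itemsize types:
assert isinstance(np.str_("abc"), np.flexible)
```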
[1] However, array scalars are immutable, so none of the array scalar attributes are settable. ## Built-in scalar types The built-in scalar types are shown below. The C-like names are associated with character codes, which are shown in their descriptions. Use of the character codes, however, is discouraged. Some of the scalar types are essentially equivalent to fundamental Python types and therefore inherit from them as well as from the generic array scalar type: Array scalar type | Related Python type | Inherits? ---|---|--- `int_` | [`int`](https://docs.python.org/3/library/functions.html#int "\(in Python v3.13\)") | Python 2 only `double` | [`float`](https://docs.python.org/3/library/functions.html#float "\(in Python v3.13\)") | yes `cdouble` | [`complex`](https://docs.python.org/3/library/functions.html#complex "\(in Python v3.13\)") | yes `bytes_` | [`bytes`](https://docs.python.org/3/library/stdtypes.html#bytes "\(in Python v3.13\)") | yes `str_` | [`str`](https://docs.python.org/3/library/stdtypes.html#str "\(in Python v3.13\)") | yes `bool_` | `bool` | no `datetime64` | [`datetime.datetime`](https://docs.python.org/3/library/datetime.html#datetime.datetime "\(in Python v3.13\)") | no `timedelta64` | [`datetime.timedelta`](https://docs.python.org/3/library/datetime.html#datetime.timedelta "\(in Python v3.13\)") | no The `bool_` data type is very similar to the Python `bool` but does not inherit from it because Python’s `bool` does not allow itself to be inherited from, and on the C-level the size of the actual bool data is not the same as a Python Boolean scalar. Warning The `int_` type does **not** inherit from the [`int`](https://docs.python.org/3/library/functions.html#int "\(in Python v3.13\)") built-in under Python 3, because type [`int`](https://docs.python.org/3/library/functions.html#int "\(in Python v3.13\)") is no longer a fixed-width integer type. Tip The default data type in NumPy is `double`. 
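The inheritance relationships in the table above can be verified with `isinstance` and `issubclass`; a short sketch:

```python
import numpy as np

# Types marked "yes" in the table are real subclasses of the Python type:
assert isinstance(np.double(1.0), float)
assert isinstance(np.cdouble(1j), complex)
assert isinstance(np.str_("a"), str)
assert isinstance(np.bytes_(b"a"), bytes)

# bool_ and int_ do not inherit from their Python counterparts:
assert not issubclass(np.bool_, bool)
assert not issubclass(np.int_, int)
```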
_class_ numpy.generic[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Base class for numpy scalar types. Class from which most (all?) numpy scalar types are derived. For consistency, exposes the same API as [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"), despite many consequent attributes being either “get-only,” or completely irrelevant. This is the class from which it is strongly suggested users should derive custom scalar types. _class_ numpy.number[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Abstract base class of all numeric scalar types. ### Integer types _class_ numpy.integer[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Abstract base class of all integer scalar types. Note The numpy integer types mirror the behavior of C integers, and can therefore be subject to [Overflow errors](../user/basics.types#overflow-errors). #### Signed integer types _class_ numpy.signedinteger[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Abstract base class of all signed integer scalar types. _class_ numpy.byte[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Signed integer type, compatible with C `char`. Character code: `'b'` Canonical name: `numpy.byte` Alias on this platform (Linux x86_64): `numpy.int8`: 8-bit signed integer (`-128` to `127`). _class_ numpy.short[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Signed integer type, compatible with C `short`. Character code: `'h'` Canonical name: `numpy.short` Alias on this platform (Linux x86_64): `numpy.int16`: 16-bit signed integer (`-32_768` to `32_767`). 
_class_ numpy.intc[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Signed integer type, compatible with C `int`. Character code: `'i'` Canonical name: `numpy.intc` Alias on this platform (Linux x86_64): `numpy.int32`: 32-bit signed integer (`-2_147_483_648` to `2_147_483_647`). _class_ numpy.int_[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Default signed integer type, 64bit on 64bit systems and 32bit on 32bit systems. Character code: `'l'` Canonical name: `numpy.int_` Alias on this platform (Linux x86_64): `numpy.int64`: 64-bit signed integer (`-9_223_372_036_854_775_808` to `9_223_372_036_854_775_807`). Alias on this platform (Linux x86_64): `numpy.intp`: Signed integer large enough to fit pointer, compatible with C `intptr_t`. numpy.long[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) alias of `int_` _class_ numpy.longlong[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Signed integer type, compatible with C `long long`. Character code: `'q'` #### Unsigned integer types _class_ numpy.unsignedinteger[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Abstract base class of all unsigned integer scalar types. _class_ numpy.ubyte[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Unsigned integer type, compatible with C `unsigned char`. Character code: `'B'` Canonical name: `numpy.ubyte` Alias on this platform (Linux x86_64): `numpy.uint8`: 8-bit unsigned integer (`0` to `255`). _class_ numpy.ushort[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Unsigned integer type, compatible with C `unsigned short`. 
Character code: `'H'` Canonical name: `numpy.ushort` Alias on this platform (Linux x86_64): `numpy.uint16`: 16-bit unsigned integer (`0` to `65_535`). _class_ numpy.uintc[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Unsigned integer type, compatible with C `unsigned int`. Character code: `'I'` Canonical name: `numpy.uintc` Alias on this platform (Linux x86_64): `numpy.uint32`: 32-bit unsigned integer (`0` to `4_294_967_295`). _class_ numpy.uint[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Unsigned integer type, 64bit on 64bit systems and 32bit on 32bit systems. Character code: `'L'` Canonical name: `numpy.uint` Alias on this platform (Linux x86_64): `numpy.uint64`: 64-bit unsigned integer (`0` to `18_446_744_073_709_551_615`). Alias on this platform (Linux x86_64): `numpy.uintp`: Unsigned integer large enough to fit pointer, compatible with C `uintptr_t`. numpy.ulong[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) alias of `uint` _class_ numpy.ulonglong[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Unsigned integer type, compatible with C `unsigned long long`. Character code: `'Q'` ### Inexact types _class_ numpy.inexact[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Abstract base class of all numeric scalar types with a (potentially) inexact representation of the values in its range, such as floating-point numbers. Note Inexact scalars are printed using the fewest decimal digits needed to distinguish their value from other values of the same datatype, by judicious rounding. 
See the `unique` parameter of [`format_float_positional`](generated/numpy.format_float_positional#numpy.format_float_positional "numpy.format_float_positional") and [`format_float_scientific`](generated/numpy.format_float_scientific#numpy.format_float_scientific "numpy.format_float_scientific"). This means that variables with equal binary values but whose datatypes are of different precisions may display differently: >>> import numpy as np >>> f16 = np.float16("0.1") >>> f32 = np.float32(f16) >>> f64 = np.float64(f32) >>> f16 == f32 == f64 True >>> f16, f32, f64 (0.1, 0.099975586, 0.0999755859375) Note that none of these floats hold the exact value \\(\frac{1}{10}\\); `f16` prints as `0.1` because it is as close to that value as possible, whereas the other types do not as they have more precision and therefore have closer values. Conversely, floating-point scalars of different precisions which approximate the same decimal value may compare unequal despite printing identically: >>> f16 = np.float16("0.1") >>> f32 = np.float32("0.1") >>> f64 = np.float64("0.1") >>> f16 == f32 == f64 False >>> f16, f32, f64 (0.1, 0.1, 0.1) #### Floating-point types _class_ numpy.floating[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Abstract base class of all floating-point scalar types. _class_ numpy.half[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Half-precision floating-point number type. Character code: `'e'` Canonical name: `numpy.half` Alias on this platform (Linux x86_64): `numpy.float16`: 16-bit-precision floating-point number type: sign bit, 5 bits exponent, 10 bits mantissa. _class_ numpy.single[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Single-precision floating-point number type, compatible with C `float`. 
Character code: `'f'` Canonical name: `numpy.single` Alias on this platform (Linux x86_64): `numpy.float32`: 32-bit-precision floating-point number type: sign bit, 8 bits exponent, 23 bits mantissa. _class_ numpy.double(_x =0_, _/_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Double-precision floating-point number type, compatible with Python [`float`](https://docs.python.org/3/library/functions.html#float "\(in Python v3.13\)") and C `double`. Character code: `'d'` Canonical name: `numpy.double` Alias on this platform (Linux x86_64): `numpy.float64`: 64-bit precision floating-point number type: sign bit, 11 bits exponent, 52 bits mantissa. _class_ numpy.longdouble[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Extended-precision floating-point number type, compatible with C `long double` but not necessarily with IEEE 754 quadruple-precision. Character code: `'g'` Alias on this platform (Linux x86_64): `numpy.float128`: 128-bit extended-precision floating-point number type. #### Complex floating-point types _class_ numpy.complexfloating[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Abstract base class of all complex number scalar types that are made up of floating-point numbers. _class_ numpy.csingle[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Complex number type composed of two single-precision floating-point numbers. Character code: `'F'` Canonical name: `numpy.csingle` Alias on this platform (Linux x86_64): `numpy.complex64`: Complex number type composed of 2 32-bit-precision floating-point numbers. 
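As a quick sketch of the relationship described above, the real and imaginary parts of a complex scalar come back as floating-point scalars of the matching precision:

```python
import numpy as np

z64 = np.complex64(1.5 + 2.5j)    # built from two 32-bit floats
z128 = np.complex128(1.5 + 2.5j)  # built from two 64-bit floats

# The component type matches the precision of the complex type.
print(type(z64.real))   # <class 'numpy.float32'>
print(type(z128.imag))  # <class 'numpy.float64'>
```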
_class_ numpy.cdouble(_real =0_, _imag =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Complex number type composed of two double-precision floating-point numbers, compatible with Python [`complex`](https://docs.python.org/3/library/functions.html#complex "\(in Python v3.13\)"). Character code: `'D'` Canonical name: `numpy.cdouble` Alias on this platform (Linux x86_64): `numpy.complex128`: Complex number type composed of 2 64-bit-precision floating-point numbers. _class_ numpy.clongdouble[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Complex number type composed of two extended-precision floating-point numbers. Character code: `'G'` Alias on this platform (Linux x86_64): `numpy.complex256`: Complex number type composed of 2 128-bit extended-precision floating-point numbers. ### Other types numpy.bool_[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) alias of `bool` _class_ numpy.bool[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Boolean type (True or False), stored as a byte. Warning The `bool` type is not a subclass of the `int_` type (`bool` is not even a number type). This is different from Python’s default implementation of `bool` as a subclass of [`int`](https://docs.python.org/3/library/functions.html#int "\(in Python v3.13\)"). Character code: `'?'` _class_ numpy.datetime64[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) If created from a 64-bit integer, it represents an offset from `1970-01-01T00:00:00`. If created from a string, the string can be in ISO 8601 date or datetime format. When parsing a string to create a datetime object, if the string contains a trailing timezone (a ‘Z’ or a timezone offset), the timezone will be dropped and a `UserWarning` is issued. 
Datetime64 objects should be considered to be UTC and therefore have an offset of +0000. >>> np.datetime64(10, 'Y') np.datetime64('1980') >>> np.datetime64('1980', 'Y') np.datetime64('1980') >>> np.datetime64(10, 'D') np.datetime64('1970-01-11') See [Datetimes and timedeltas](arrays.datetime#arrays-datetime) for more information. Character code: `'M'` _class_ numpy.timedelta64[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) A timedelta stored as a 64-bit integer. See [Datetimes and timedeltas](arrays.datetime#arrays-datetime) for more information. Character code: `'m'` _class_ numpy.object_[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Any Python object. Character code: `'O'` Note The data actually stored in object arrays (_i.e._ , arrays having dtype `object_`) are references to Python objects, not the objects themselves. Hence, object arrays behave more like usual Python [`lists`](https://docs.python.org/3/library/stdtypes.html#list "\(in Python v3.13\)"), in the sense that their contents need not be of the same Python type. The object type is also special because an array containing `object_` items does not return an `object_` object on item access, but instead returns the actual object that the array item refers to. The following data types are **flexible** : they have no predefined size and the data they describe can be of different length in different arrays. (In the character codes `#` is an integer denoting how many elements the data type consists of.) _class_ numpy.flexible[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Abstract base class of all scalar types without predefined length. The actual size of these types depends on the specific [`numpy.dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") instantiation. 
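The object-array behaviour described above can be sketched as follows: the array holds references, and item access hands back the original Python objects rather than `object_` wrappers.

```python
import numpy as np

payload = [3, 4]  # an arbitrary Python object to store

# Build an object array; assignment stores references, not copies.
a = np.empty(3, dtype=object)
a[:] = [1, "two", payload]

print(type(a[1]))       # <class 'str'> -- the actual object, not object_
print(a[2] is payload)  # True: the very same list object comes back
```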
_class_ numpy.character[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Abstract base class of all character string scalar types. _class_ numpy.bytes_[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) A byte string. When used in arrays, this type strips trailing null bytes. Character code: `'S'` _class_ numpy.str_[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) A unicode string. This type strips trailing null codepoints. >>> s = np.str_("abc\x00") >>> s 'abc' Unlike the builtin [`str`](https://docs.python.org/3/library/stdtypes.html#str "\(in Python v3.13\)"), this supports the [Buffer Protocol](https://docs.python.org/3/c-api/buffer.html#bufferobjects "\(in Python v3.13\)"), exposing its contents as UCS4: >>> m = memoryview(np.str_("abc")) >>> m.format '3w' >>> m.tobytes() b'a\x00\x00\x00b\x00\x00\x00c\x00\x00\x00' Character code: `'U'` _class_ numpy.void(_length_or_data_ , _/_ , _dtype =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Create a new structured or unstructured void scalar. Parameters: **length_or_data** int, array-like, bytes-like, object One of multiple meanings (see notes). The length or bytes data of an unstructured void. Or alternatively, the data to be stored in the new scalar when [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") is provided. This can be an array-like, in which case an array may be returned. **dtype** dtype, optional If provided the dtype of the new scalar. This dtype must be “void” dtype (i.e. a structured or unstructured void, see also [Structured datatypes](../user/basics.rec#defining-structured-types)). New in version 1.24. #### Notes For historical reasons and because void scalars can represent both arbitrary byte data and structured dtypes, the void constructor has three calling conventions: 1. 
`np.void(5)` creates a `dtype="V5"` scalar filled with five `\0` bytes. The 5 can be a Python or NumPy integer. 2. `np.void(b"bytes-like")` creates a void scalar from the byte string. The dtype itemsize will match the byte string length, here `"V10"`. 3. When a `dtype=` is passed the call is roughly the same as an array creation. However, a void scalar rather than an array is returned. Please see the examples, which show all three different conventions. #### Examples >>> np.void(5) np.void(b'\x00\x00\x00\x00\x00') >>> np.void(b'abcd') np.void(b'\x61\x62\x63\x64') >>> np.void((3.2, b'eggs'), dtype="d,S5") np.void((3.2, b'eggs'), dtype=[('f0', '<f8'), ('f1', 'S5')]) >>> np.void(3, dtype=[('x', np.int8), ('y', np.int8)]) np.void((3, 3), dtype=[('x', 'i1'), ('y', 'i1')]) Character code: `'V'` Warning See [Note on string types](arrays.dtypes#string-dtype-note). Numeric Compatibility: If you used old typecode characters in your Numeric code (which was never recommended), you will need to change some of them to the new characters. In particular, the needed changes are `c -> S1`, `b -> B`, `1 -> b`, `s -> h`, `w -> H`, and `u -> I`. These changes make the type character convention more consistent with other Python modules such as the [`struct`](https://docs.python.org/3/library/struct.html#module-struct "\(in Python v3.13\)") module. ### Sized aliases Along with their (mostly) C-derived names, the integer, float, and complex data-types are also available using a bit-width convention so that an array of the right size can always be ensured. Two aliases (`numpy.intp` and `numpy.uintp`) pointing to the integer type that is sufficiently large to hold a C pointer are also provided. numpy.int8[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) numpy.int16 numpy.int32 numpy.int64 Aliases for the signed integer types (one of `numpy.byte`, `numpy.short`, `numpy.intc`, `numpy.int_`, `numpy.long` and `numpy.longlong`) with the specified number of bits. 
Compatible with the C99 `int8_t`, `int16_t`, `int32_t`, and `int64_t`, respectively. numpy.uint8[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) numpy.uint16 numpy.uint32 numpy.uint64 Aliases for the unsigned integer types (one of `numpy.ubyte`, `numpy.ushort`, `numpy.uintc`, `numpy.uint`, `numpy.ulong` and `numpy.ulonglong`) with the specified number of bits. Compatible with the C99 `uint8_t`, `uint16_t`, `uint32_t`, and `uint64_t`, respectively. numpy.intp[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Alias for the signed integer type (one of `numpy.byte`, `numpy.short`, `numpy.intc`, `numpy.int_`, `numpy.long` and `numpy.longlong`) that is used as a default integer and for indexing. Compatible with the C `Py_ssize_t`. Character code: `'n'` Changed in version 2.0: Before NumPy 2, this had the same size as a pointer. In practice this is almost always identical, but the character code `'p'` maps to the C `intptr_t`. The character code `'n'` was added in NumPy 2.0. numpy.uintp[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Alias for the unsigned integer type that is the same size as `intp`. Compatible with the C `size_t`. Character code: `'N'` Changed in version 2.0: Before NumPy 2, this had the same size as a pointer. In practice this is almost always identical, but the character code `'P'` maps to the C `uintptr_t`. The character code `'N'` was added in NumPy 2.0. 
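A small sketch of `intp` in practice: index-producing functions such as `nonzero` return arrays of this type, which is why it is described above as the default integer for indexing.

```python
import numpy as np

# nonzero hands back index arrays; their dtype is intp, the default
# integer type used for indexing.
idx = np.nonzero(np.array([0, 7, 0, 7]))[0]

print(idx.dtype == np.intp)  # True
print(idx)                   # [1 3]
```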
numpy.float16[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) alias of `half` numpy.float32[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) alias of `single` numpy.float64[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) alias of `double` numpy.float96 numpy.float128[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Alias for `numpy.longdouble`, named after its size in bits. The existence of these aliases depends on the platform. numpy.complex64[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) alias of `csingle` numpy.complex128[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) alias of `cdouble` numpy.complex192 numpy.complex256[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) Alias for `numpy.clongdouble`, named after its size in bits. The existence of these aliases depends on the platform. ## Attributes The array scalar objects have an [`array priority`](arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") of [`NPY_SCALAR_PRIORITY`](c-api/array#c.NPY_SCALAR_PRIORITY "NPY_SCALAR_PRIORITY") (-1,000,000.0). They also do not (yet) have a [`ctypes`](generated/numpy.ndarray.ctypes#numpy.ndarray.ctypes "numpy.ndarray.ctypes") attribute. Otherwise, they share the same attributes as arrays: [`generic.flags`](generated/numpy.generic.flags#numpy.generic.flags "numpy.generic.flags") | The integer value of flags. ---|--- [`generic.shape`](generated/numpy.generic.shape#numpy.generic.shape "numpy.generic.shape") | Tuple of array dimensions. [`generic.strides`](generated/numpy.generic.strides#numpy.generic.strides "numpy.generic.strides") | Tuple of bytes steps in each dimension. 
[`generic.ndim`](generated/numpy.generic.ndim#numpy.generic.ndim "numpy.generic.ndim") | The number of array dimensions. [`generic.data`](generated/numpy.generic.data#numpy.generic.data "numpy.generic.data") | Pointer to start of data. [`generic.size`](generated/numpy.generic.size#numpy.generic.size "numpy.generic.size") | The number of elements in the gentype. [`generic.itemsize`](generated/numpy.generic.itemsize#numpy.generic.itemsize "numpy.generic.itemsize") | The length of one element in bytes. [`generic.base`](generated/numpy.generic.base#numpy.generic.base "numpy.generic.base") | Scalar attribute identical to the corresponding array attribute. [`generic.dtype`](generated/numpy.generic.dtype#numpy.generic.dtype "numpy.generic.dtype") | Get array data-descriptor. [`generic.real`](generated/numpy.generic.real#numpy.generic.real "numpy.generic.real") | The real part of the scalar. [`generic.imag`](generated/numpy.generic.imag#numpy.generic.imag "numpy.generic.imag") | The imaginary part of the scalar. [`generic.flat`](generated/numpy.generic.flat#numpy.generic.flat "numpy.generic.flat") | A 1-D view of the scalar. [`generic.T`](generated/numpy.generic.t#numpy.generic.T "numpy.generic.T") | Scalar attribute identical to the corresponding array attribute. [`generic.__array_interface__`](generated/numpy.generic.__array_interface__#numpy.generic.__array_interface__ "numpy.generic.__array_interface__") | Array protocol: Python side [`generic.__array_struct__`](generated/numpy.generic.__array_struct__#numpy.generic.__array_struct__ "numpy.generic.__array_struct__") | Array protocol: struct [`generic.__array_priority__`](generated/numpy.generic.__array_priority__#numpy.generic.__array_priority__ "numpy.generic.__array_priority__") | Array priority. 
[`generic.__array_wrap__`](generated/numpy.generic.__array_wrap__#numpy.generic.__array_wrap__ "numpy.generic.__array_wrap__") | __array_wrap__ implementation for scalar types ## Indexing See also [Indexing routines](routines.indexing#arrays-indexing), [Data type objects (dtype)](arrays.dtypes#arrays-dtypes) Array scalars can be indexed like 0-dimensional arrays: if _x_ is an array scalar, * `x[()]` returns a copy of array scalar * `x[...]` returns a 0-dimensional [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") * `x['field-name']` returns the array scalar in the field _field-name_. (_x_ can have fields, for example, when it corresponds to a structured data type.) ## Methods Array scalars have exactly the same methods as arrays. The default behavior of these methods is to internally convert the scalar to an equivalent 0-dimensional array and to call the corresponding array method. In addition, math operations on array scalars are defined so that the same hardware flags are set and used to interpret the results as for [ufunc](ufuncs#ufuncs), so that the error state used for ufuncs also carries over to the math on array scalars. The exceptions to the above rules are given below: [`generic.__array__`](generated/numpy.generic.__array__#numpy.generic.__array__ "numpy.generic.__array__") | sc.__array__(dtype) return 0-dim array from scalar with specified dtype ---|--- [`generic.__array_wrap__`](generated/numpy.generic.__array_wrap__#numpy.generic.__array_wrap__ "numpy.generic.__array_wrap__") | __array_wrap__ implementation for scalar types [`generic.squeeze`](generated/numpy.generic.squeeze#numpy.generic.squeeze "numpy.generic.squeeze") | Scalar method identical to the corresponding array attribute. [`generic.byteswap`](generated/numpy.generic.byteswap#numpy.generic.byteswap "numpy.generic.byteswap") | Scalar method identical to the corresponding array attribute. 
[`generic.__reduce__`](generated/numpy.generic.__reduce__#numpy.generic.__reduce__ "numpy.generic.__reduce__") | Helper for pickle. [`generic.__setstate__`](generated/numpy.generic.__setstate__#numpy.generic.__setstate__ "numpy.generic.__setstate__") | [`generic.setflags`](generated/numpy.generic.setflags#numpy.generic.setflags "numpy.generic.setflags") | Scalar method identical to the corresponding array attribute. Utility method for typing: [`number.__class_getitem__`](generated/numpy.number.__class_getitem__#numpy.number.__class_getitem__ "numpy.number.__class_getitem__")(item, /) | Return a parametrized wrapper around the `number` type. ---|--- ## Defining new types There are two ways to effectively define a new array scalar type (apart from composing structured types [dtypes](arrays.dtypes#arrays-dtypes) from the built-in scalar types): One way is to simply subclass the [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") and overwrite the methods of interest. This will work to a degree, but internally certain behaviors are fixed by the data type of the array. To fully customize the data type of an array you need to define a new data-type, and register it with NumPy. Such new types can only be defined in C, using the [NumPy C-API](c-api/index#c-api). # Array API ## Array structure and data access These macros access the [`PyArrayObject`](types-and-structures#c.PyArrayObject "PyArrayObject") structure members and are defined in `ndarraytypes.h`. The input argument, _arr_, can be any [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")* that is directly interpretable as a [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")* (any instance of the [`PyArray_Type`](types-and-structures#c.PyArray_Type "PyArray_Type") and its sub-types). int PyArray_NDIM([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr) The number of dimensions in the array. 
int PyArray_FLAGS([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr) Returns an integer representing the array-flags. int PyArray_TYPE([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr) Return the (builtin) typenumber for the elements of this array. int PyArray_Pack(const [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *descr, void *item, const [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *value) New in version 2.0. Sets the memory location `item` of dtype `descr` to `value`. The function is equivalent to setting a single array element with a Python assignment. Returns 0 on success and -1 with an error set on failure. Note If the `descr` has the [`NPY_NEEDS_INIT`](types-and-structures#c.NPY_NEEDS_INIT "NPY_NEEDS_INIT") flag set, the data must be valid or the memory zeroed. int PyArray_SETITEM([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr, void *itemptr, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj) Convert _obj_ and place it in the ndarray, _arr_, at the place pointed to by _itemptr_. Return -1 if an error occurs or 0 on success. Note In general, prefer the use of `PyArray_Pack` when handling arbitrary Python objects. `PyArray_SETITEM` is, for example, not able to handle arbitrary casts between different dtypes. void PyArray_ENABLEFLAGS([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr, int flags) Enables the specified array flags. This function does no validation, and assumes that you know what you’re doing. void PyArray_CLEARFLAGS([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr, int flags) Clears the specified array flags. This function does no validation, and assumes that you know what you’re doing. 
void *PyArray_DATA([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr) char *PyArray_BYTES([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr) These two macros are similar and obtain the pointer to the data-buffer for the array. The first macro can be (and should be) assigned to a particular pointer, where the second is for generic processing. If you have not guaranteed a contiguous and/or aligned array then be sure you understand how to access the data in the array to avoid memory and/or alignment problems. [npy_intp](dtype#c.npy_intp "npy_intp") *PyArray_DIMS([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr) Returns a pointer to the dimensions/shape of the array. The number of elements matches the number of dimensions of the array. Can return `NULL` for 0-dimensional arrays. [npy_intp](dtype#c.npy_intp "npy_intp") *PyArray_SHAPE([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr) A synonym for `PyArray_DIMS`, named to be consistent with the [`shape`](../generated/numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape") usage within Python. [npy_intp](dtype#c.npy_intp "npy_intp") *PyArray_STRIDES([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr) Returns a pointer to the strides of the array. The number of elements matches the number of dimensions of the array. [npy_intp](dtype#c.npy_intp "npy_intp") PyArray_DIM([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr, int n) Return the shape in the _n_ \\(^{\textrm{th}}\\) dimension. [npy_intp](dtype#c.npy_intp "npy_intp") PyArray_STRIDE([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr, int n) Return the stride in the _n_ \\(^{\textrm{th}}\\) dimension. [npy_intp](dtype#c.npy_intp "npy_intp") PyArray_ITEMSIZE([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr) Return the itemsize for the elements of this array. 
Note that, in the old API that was deprecated in version 1.7, this function had the return type `int`. [npy_intp](dtype#c.npy_intp "npy_intp") PyArray_SIZE([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr) Returns the total size (in number of elements) of the array. [npy_intp](dtype#c.npy_intp "npy_intp") PyArray_Size([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *obj) Returns 0 if _obj_ is not a sub-class of ndarray. Otherwise, returns the total number of elements in the array. Safer version of `PyArray_SIZE`(_obj_). [npy_intp](dtype#c.npy_intp "npy_intp") PyArray_NBYTES([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr) Returns the total number of bytes consumed by the array. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_BASE([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr) This returns the base object of the array. In most cases, this means the object which owns the memory the array is pointing at. If you are constructing an array using the C API, and specifying your own memory, you should use the function `PyArray_SetBaseObject` to set the base to an object which owns the memory. If the `NPY_ARRAY_WRITEBACKIFCOPY` flag is set, it has a different meaning, namely base is the array into which the current array will be copied upon copy resolution. This overloading of the base property for two functions is likely to change in a future version of NumPy. [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *PyArray_DESCR([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr) Returns a borrowed reference to the dtype property of the array. 
[PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *PyArray_DTYPE([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr) A synonym for `PyArray_DESCR`, named to be consistent with the ‘dtype’ usage within Python. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_GETITEM([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr, void *itemptr) Get a Python object of a builtin type from the ndarray, _arr_, at the location pointed to by _itemptr_. Return `NULL` on failure. [`numpy.ndarray.item`](../generated/numpy.ndarray.item#numpy.ndarray.item "numpy.ndarray.item") is identical to `PyArray_GETITEM`. int PyArray_FinalizeFunc([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj) The function pointed to by the [`PyCapsule`](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule "\(in Python v3.13\)") [`__array_finalize__`](../arrays.classes#numpy.class.__array_finalize__ "numpy.class.__array_finalize__"). The first argument is the newly created sub-type. The second argument (if not NULL) is the “parent” array (if the array was created using slicing or some other operation where a clearly distinguishable parent is present). This routine can do anything it wants to. It should return a -1 on error and 0 otherwise. ### Data access These functions and macros provide easy access to elements of the ndarray from C. These work for all arrays. You may need to take care when accessing the data in the array, however, if it is not in machine byte-order, misaligned, or not writeable. In other words, be sure to respect the state of the flags unless you know what you are doing, or have previously guaranteed an array that is writeable, aligned, and in machine byte-order using `PyArray_FromAny`. 
If you wish to handle all types of arrays, the copyswap function for each type is useful for handling misbehaved arrays. Some platforms (e.g. Solaris) do not like misaligned data and will crash if you de-reference a misaligned pointer. Other platforms (e.g. x86 Linux) will just work more slowly with misaligned data. void *PyArray_GetPtr([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *aobj, [npy_intp](dtype#c.npy_intp "npy_intp") *ind) Return a pointer to the data of the ndarray, _aobj_, at the N-dimensional index given by the C array, _ind_ (which must be at least _aobj_->nd in size). You may want to typecast the returned pointer to the data type of the ndarray. void *PyArray_GETPTR1([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *obj, [npy_intp](dtype#c.npy_intp "npy_intp") i) void *PyArray_GETPTR2([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *obj, [npy_intp](dtype#c.npy_intp "npy_intp") i, [npy_intp](dtype#c.npy_intp "npy_intp") j) void *PyArray_GETPTR3([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *obj, [npy_intp](dtype#c.npy_intp "npy_intp") i, [npy_intp](dtype#c.npy_intp "npy_intp") j, [npy_intp](dtype#c.npy_intp "npy_intp") k) void *PyArray_GETPTR4([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *obj, [npy_intp](dtype#c.npy_intp "npy_intp") i, [npy_intp](dtype#c.npy_intp "npy_intp") j, [npy_intp](dtype#c.npy_intp "npy_intp") k, [npy_intp](dtype#c.npy_intp "npy_intp") l) Quick, inline access to the element at the given coordinates in the ndarray, _obj_, which must have respectively 1, 2, 3, or 4 dimensions (this is not checked). The corresponding _i_, _j_, _k_, and _l_ coordinates can be any integer but will be interpreted as `npy_intp`. You may want to typecast the returned pointer to the data type of the ndarray. 
## Creating arrays ### From scratch [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_NewFromDescr([PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "\(in Python v3.13\)") *subtype, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *descr, int nd, [npy_intp](dtype#c.npy_intp "npy_intp") const *dims, [npy_intp](dtype#c.npy_intp "npy_intp") const *strides, void *data, int flags, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj) This function steals a reference to _descr_. The easiest way to get one is using `PyArray_DescrFromType`. This is the main array creation function. Most new arrays are created with this flexible function. The returned object is an object of Python-type _subtype_, which must be a subtype of [`PyArray_Type`](types-and-structures#c.PyArray_Type "PyArray_Type"). The array has _nd_ dimensions, described by _dims_. The data-type descriptor of the new array is _descr_. If _subtype_ is of an array subclass instead of the base [`&PyArray_Type`](types-and-structures#c.PyArray_Type "PyArray_Type"), then _obj_ is the object to pass to the [`__array_finalize__`](../arrays.classes#numpy.class.__array_finalize__ "numpy.class.__array_finalize__") method of the subclass. If _data_ is `NULL`, then new uninitialized memory will be allocated and _flags_ can be non-zero to indicate a Fortran-style contiguous array. Use `PyArray_FILLWBYTE` to initialize the memory. If _data_ is not `NULL`, then it is assumed to point to the memory to be used for the array and the _flags_ argument is used as the new flags for the array (except that the `NPY_ARRAY_OWNDATA` and `NPY_ARRAY_WRITEBACKIFCOPY` flags of the new array will be reset). In addition, if _data_ is non-NULL, then _strides_ can also be provided. 
If _strides_ is `NULL`, then the array strides are computed as C-style contiguous (default) or Fortran-style contiguous (_flags_ is nonzero in the _data_ = `NULL` case, or _flags_ & `NPY_ARRAY_F_CONTIGUOUS` is nonzero in the non-NULL _data_ case). Any provided _dims_ and _strides_ are copied into newly allocated dimension and strides arrays for the new array object. `PyArray_CheckStrides` can help verify non-`NULL` stride information.

If `data` is provided, it must stay alive for the life of the array. One way to manage this is through `PyArray_SetBaseObject`.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_NewLikeArray([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*prototype, NPY_ORDER order, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr, int subok)

This function steals a reference to _descr_ if it is not NULL.

This array creation routine allows for the convenient creation of a new array matching an existing array's shape and memory layout, possibly changing the layout and/or data type.

When _order_ is `NPY_ANYORDER`, the result order is `NPY_FORTRANORDER` if _prototype_ is a Fortran array, `NPY_CORDER` otherwise. When _order_ is `NPY_KEEPORDER`, the result order matches that of _prototype_, even when the axes of _prototype_ aren't in C or Fortran order.

If _descr_ is NULL, the data type of _prototype_ is used. If _subok_ is 1, the newly created array will use the sub-type of _prototype_ to create the new array, otherwise it will create a base-class array.
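To tie the pieces above together, here is a minimal sketch of wrapping caller-owned memory with `PyArray_NewFromDescr` and tying its lifetime to an owner object via `PyArray_SetBaseObject`. It assumes an extension-module context where `import_array()` has run; `wrap_buffer` and its parameters are illustrative names.

```c
#include <Python.h>
#include <numpy/arrayobject.h>

/* Wrap a caller-owned double buffer of length n as a 1-D ndarray.
   owner is the Python object that keeps buf alive. */
static PyObject *
wrap_buffer(double *buf, npy_intp n, PyObject *owner)
{
    PyArray_Descr *descr = PyArray_DescrFromType(NPY_DOUBLE);
    /* NewFromDescr steals descr; strides=NULL gives C-contiguous strides */
    PyObject *arr = PyArray_NewFromDescr(&PyArray_Type, descr, 1, &n,
                                         NULL, buf, 0, NULL);
    if (arr == NULL) {
        return NULL;
    }
    /* SetBaseObject steals a reference, so take one first */
    Py_INCREF(owner);
    if (PyArray_SetBaseObject((PyArrayObject *)arr, owner) < 0) {
        Py_DECREF(arr);
        return NULL;
    }
    return arr;
}
```

With the base set, the buffer cannot be freed while any view of the returned array is alive, which is exactly the lifetime guarantee the text above calls for.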
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_New([PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "\(in Python v3.13\)")*subtype, int nd, [npy_intp](dtype#c.npy_intp "npy_intp") const *dims, int type_num, [npy_intp](dtype#c.npy_intp "npy_intp") const *strides, void *data, int itemsize, int flags, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*obj)

This is similar to `PyArray_NewFromDescr` (…) except you specify the data-type descriptor with _type_num_ and _itemsize_, where _type_num_ corresponds to a builtin (or user-defined) type. If the type always has the same number of bytes, then _itemsize_ is ignored. Otherwise, _itemsize_ specifies the particular size of this array.

Warning

If data is passed to `PyArray_NewFromDescr` or `PyArray_New`, this memory must not be deallocated until the new array is deleted. If this data came from another Python object, this can be accomplished using [`Py_INCREF`](https://docs.python.org/3/c-api/refcounting.html#c.Py_INCREF "\(in Python v3.13\)") on that object and setting the base member of the new array to point to that object. If strides are passed in they must be consistent with the dimensions, the itemsize, and the data of the array.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_SimpleNew(int nd, [npy_intp](dtype#c.npy_intp "npy_intp") const *dims, int typenum)

Create a new uninitialized array of type _typenum_, whose size in each of _nd_ dimensions is given by the integer array _dims_. The memory for the array is uninitialized (unless typenum is [`NPY_OBJECT`](dtype#c.NPY_TYPES.NPY_OBJECT "NPY_OBJECT"), in which case each element in the array is set to NULL). The _typenum_ argument allows specification of any of the builtin data-types such as [`NPY_FLOAT`](dtype#c.NPY_TYPES.NPY_FLOAT "NPY_FLOAT") or [`NPY_LONG`](dtype#c.NPY_TYPES.NPY_LONG "NPY_LONG").
The memory for the array can be set to zero if desired using `PyArray_FILLWBYTE` (return_object, 0). This function cannot be used to create a flexible-type array (no itemsize given).

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_SimpleNewFromData(int nd, [npy_intp](dtype#c.npy_intp "npy_intp") const *dims, int typenum, void *data)

Create an array wrapper around _data_ pointed to by the given pointer. The array flags will have a default that the data area is well-behaved and C-style contiguous. The shape of the array is given by the _dims_ c-array of length _nd_. The data-type of the array is indicated by _typenum_. If data comes from another reference-counted Python object, the reference count on this object should be increased after the pointer is passed in, and the base member of the returned ndarray should point to the Python object that owns the data. This will ensure that the provided memory is not freed while the returned array is in existence.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_SimpleNewFromDescr(int nd, [npy_intp](dtype#c.npy_intp "npy_intp") const *dims, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr)

This function steals a reference to _descr_.

Create a new array with the provided data-type descriptor, _descr_, of the shape determined by _nd_ and _dims_.

void PyArray_FILLWBYTE([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*obj, int val)

Fill the array pointed to by _obj_, which must be an ndarray or a subclass, with the contents of _val_ (evaluated as a byte). This macro calls memset, so obj must be contiguous.
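The `PyArray_SimpleNew` plus `PyArray_FILLWBYTE` pattern described above can be sketched as follows (an illustrative fragment for an extension-module context where `import_array()` has run; `make_zeroed` is a hypothetical name):

```c
#include <Python.h>
#include <numpy/arrayobject.h>

/* Allocate a 3x4 float64 array and zero its memory. */
static PyObject *
make_zeroed(void)
{
    npy_intp dims[2] = {3, 4};
    PyObject *arr = PyArray_SimpleNew(2, dims, NPY_DOUBLE);
    if (arr == NULL) {
        return NULL;                 /* allocation failed, error set */
    }
    /* memset every byte of the freshly allocated buffer to 0 */
    PyArray_FILLWBYTE(arr, 0);
    return arr;
}
```

For zero-filled arrays this is equivalent to calling `PyArray_Zeros` with the same shape and dtype; `FILLWBYTE` is the choice when you want some other byte pattern or already have an array in hand.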
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_Zeros(intnd, [npy_intp](dtype#c.npy_intp "npy_intp")const*dims, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype, intfortran) Construct a new _nd_ -dimensional array with shape given by _dims_ and data type given by _dtype_. If _fortran_ is non-zero, then a Fortran-order array is created, otherwise a C-order array is created. Fill the memory with zeros (or the 0 object if _dtype_ corresponds to [`NPY_OBJECT`](dtype#c.NPY_TYPES.NPY_OBJECT "NPY_OBJECT") ). [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_ZEROS(intnd, [npy_intp](dtype#c.npy_intp "npy_intp")const*dims, inttype_num, intfortran) Macro form of `PyArray_Zeros` which takes a type-number instead of a data-type object. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_Empty(intnd, [npy_intp](dtype#c.npy_intp "npy_intp")const*dims, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype, intfortran) Construct a new _nd_ -dimensional array with shape given by _dims_ and data type given by _dtype_. If _fortran_ is non-zero, then a Fortran-order array is created, otherwise a C-order array is created. The array is uninitialized unless the data type corresponds to [`NPY_OBJECT`](dtype#c.NPY_TYPES.NPY_OBJECT "NPY_OBJECT") in which case the array is filled with [`Py_None`](https://docs.python.org/3/c-api/none.html#c.Py_None "\(in Python v3.13\)"). [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_EMPTY(intnd, [npy_intp](dtype#c.npy_intp "npy_intp")const*dims, inttypenum, intfortran) Macro form of `PyArray_Empty` which takes a type-number, _typenum_ , instead of a data-type object. 
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_Arange(double start, double stop, double step, int typenum)

Construct a new 1-dimensional array of data-type _typenum_ that ranges from _start_ to _stop_ (exclusive) in increments of _step_. Equivalent to **arange** (_start_, _stop_, _step_, dtype).

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_ArangeObj([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*start, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*stop, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*step, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr)

Construct a new 1-dimensional array of the data-type determined by `descr` that ranges from `start` to `stop` (exclusive) in increments of `step`. Equivalent to arange(`start`, `stop`, `step`, dtype), with the dtype given by `descr`.

int PyArray_SetBaseObject([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*obj)

This function **steals a reference** to `obj` and sets it as the base property of `arr`.

If you construct an array by passing in your own memory buffer as a parameter, you need to set the array's `base` property to ensure the lifetime of the memory buffer is appropriate.

The return value is 0 on success, -1 on failure.

If the object provided is an array, this function traverses the chain of `base` pointers so that each array points to the owner of the memory directly. Once the base is set, it may not be changed to another value.
### From other objects [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_FromAny([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op, [PyArray_Descr](types-and- structures#c.PyArray_Descr "PyArray_Descr")*dtype, intmin_depth, intmax_depth, intrequirements, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*context) This is the main function used to obtain an array from any nested sequence, or object that exposes the array interface, _op_. The parameters allow specification of the required _dtype_ , the minimum (_min_depth_) and maximum (_max_depth_) number of dimensions acceptable, and other _requirements_ for the array. This function **steals a reference** to the dtype argument, which needs to be a [`PyArray_Descr`](types-and-structures#c.PyArray_Descr "PyArray_Descr") structure indicating the desired data-type (including required byteorder). The _dtype_ argument may be `NULL`, indicating that any data-type (and byteorder) is acceptable. Unless `NPY_ARRAY_FORCECAST` is present in `flags`, this call will generate an error if the data type cannot be safely obtained from the object. If you want to use `NULL` for the _dtype_ and ensure the array is not swapped then use `PyArray_CheckFromAny`. A value of 0 for either of the depth parameters causes the parameter to be ignored. Any of the following array flags can be added (_e.g._ using |) to get the _requirements_ argument. If your code can handle general (_e.g._ strided, byte-swapped, or unaligned arrays) then _requirements_ may be 0. Also, if _op_ is not already an array (or does not expose the array interface), then a new array will be created (and filled from _op_ using the sequence protocol). The new array will have `NPY_ARRAY_DEFAULT` as its flags member. The _context_ argument is unused. 
* `NPY_ARRAY_C_CONTIGUOUS`: Make sure the returned array is C-style contiguous.
* `NPY_ARRAY_F_CONTIGUOUS`: Make sure the returned array is Fortran-style contiguous.
* `NPY_ARRAY_ALIGNED`: Make sure the returned array is aligned on proper boundaries for its data type. An aligned array has its data pointer and every stride as a multiple of the alignment factor for the data-type descriptor.
* `NPY_ARRAY_WRITEABLE`: Make sure the returned array can be written to.
* `NPY_ARRAY_ENSURECOPY`: Make sure a copy is made of _op_. If this flag is not present, data is not copied if it can be avoided.
* `NPY_ARRAY_ENSUREARRAY`: Make sure the result is a base-class ndarray. By default, if _op_ is an instance of a subclass of ndarray, an instance of that same subclass is returned. If this flag is set, an ndarray object will be returned instead.
* `NPY_ARRAY_FORCECAST`: Force a cast to the output type even if it cannot be done safely. Without this flag, a data cast will occur only if it can be done safely; otherwise an error is raised.
* `NPY_ARRAY_WRITEBACKIFCOPY`: If _op_ is already an array, but does not satisfy the requirements, then a copy is made (which will satisfy the requirements). If this flag is present and a copy (of an object that is already an array) must be made, then the `NPY_ARRAY_WRITEBACKIFCOPY` flag is set in the returned copy and _op_ is made read-only. You must call `PyArray_ResolveWritebackIfCopy` to copy the contents back into _op_, at which point _op_ is made writeable again. If _op_ is not writeable to begin with, or if it is not already an array, an error is raised.

Combinations of array flags can also be added.
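A common use of `PyArray_FromAny` with the flags above is coercing an arbitrary Python object to an aligned, C-contiguous array of a known dtype. The following is a minimal sketch under that assumption (extension-module context, `import_array()` already called; `as_c_double_array` is an illustrative name):

```c
#include <Python.h>
#include <numpy/arrayobject.h>

/* Coerce any array-like Python object to an aligned, C-contiguous
   float64 array. Returns a new reference, or NULL with an error set. */
static PyArrayObject *
as_c_double_array(PyObject *op)
{
    /* FromAny steals this descriptor reference */
    PyArray_Descr *descr = PyArray_DescrFromType(NPY_DOUBLE);
    return (PyArrayObject *)PyArray_FromAny(
        op, descr,
        0, 0,                                   /* any number of dims */
        NPY_ARRAY_C_CONTIGUOUS | NPY_ARRAY_ALIGNED,
        NULL);                                  /* context is unused */
}
```

If `op` already satisfies the requirements no copy is made and a new reference to the existing array is returned, so the caller must `Py_DECREF` the result in either case.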
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_CheckFromAny([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op, [PyArray_Descr](types-and- structures#c.PyArray_Descr "PyArray_Descr")*dtype, intmin_depth, intmax_depth, intrequirements, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*context) Nearly identical to `PyArray_FromAny` (…) except _requirements_ can contain `NPY_ARRAY_NOTSWAPPED` (over-riding the specification in _dtype_) and `NPY_ARRAY_ELEMENTSTRIDES` which indicates that the array should be aligned in the sense that the strides are multiples of the element size. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_FromArray([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*op, [PyArray_Descr](types-and- structures#c.PyArray_Descr "PyArray_Descr")*newtype, intrequirements) Special case of `PyArray_FromAny` for when _op_ is already an array but it needs to be of a specific _newtype_ (including byte-order) or has certain _requirements_. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_FromStructInterface([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op) Returns an ndarray object from a Python object that exposes the [`__array_struct__`](../arrays.interface#object.__array_struct__ "object.__array_struct__") attribute and follows the array interface protocol. If the object does not contain this attribute then a borrowed reference to [`Py_NotImplemented`](https://docs.python.org/3/c-api/object.html#c.Py_NotImplemented "\(in Python v3.13\)") is returned. 
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_FromInterface([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op)

Returns an ndarray object from a Python object that exposes the [`__array_interface__`](../arrays.interface#object.__array_interface__ "object.__array_interface__") attribute, following the array interface protocol. If the object does not contain this attribute then a borrowed reference to [`Py_NotImplemented`](https://docs.python.org/3/c-api/object.html#c.Py_NotImplemented "\(in Python v3.13\)") is returned.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_FromArrayAttr([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*context)

Return an ndarray object from a Python object that exposes the [`__array__`](../arrays.classes#numpy.class.__array__ "numpy.class.__array__") method. Third-party implementations of [`__array__`](../arrays.classes#numpy.class.__array__ "numpy.class.__array__") must take `dtype` and `copy` keyword arguments. `context` is unused.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_ContiguousFromAny([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op, int typenum, int min_depth, int max_depth)

This function returns a (C-style) contiguous and behaved array from any nested sequence or array-interface exporting object, _op_, of (non-flexible) type given by the enumerated _typenum_, of minimum depth _min_depth_, and of maximum depth _max_depth_.
Equivalent to a call to `PyArray_FromAny` with requirements set to `NPY_ARRAY_DEFAULT` and the type_num member of the type argument set to _typenum_.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_ContiguousFromObject([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op, int typenum, int min_depth, int max_depth)

This function returns a well-behaved C-style contiguous array from any nested sequence or array-interface exporting object. The minimum number of dimensions the array can have is given by `min_depth` while the maximum is `max_depth`. This is equivalent to a call to `PyArray_FromAny` with requirements `NPY_ARRAY_DEFAULT` and `NPY_ARRAY_ENSUREARRAY`.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_FromObject([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op, int typenum, int min_depth, int max_depth)

Return an aligned array in native byte order from any nested sequence or array-interface exporting object, op, of a type given by the enumerated typenum. The minimum number of dimensions the array can have is given by min_depth while the maximum is max_depth. This is equivalent to a call to `PyArray_FromAny` with requirements set to `NPY_ARRAY_BEHAVED`.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_EnsureArray([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op)

This function **steals a reference** to `op` and makes sure that `op` is a base-class ndarray. It special cases array scalars, but otherwise calls `PyArray_FromAny` (`op`, NULL, 0, 0, `NPY_ARRAY_ENSUREARRAY`, NULL).
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_FromString(char*string, [npy_intp](dtype#c.npy_intp "npy_intp")slen, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype, [npy_intp](dtype#c.npy_intp "npy_intp")num, char*sep) Construct a one-dimensional ndarray of a single type from a binary or (ASCII) text `string` of length `slen`. The data-type of the array to-be-created is given by `dtype`. If num is -1, then **copy** the entire string and return an appropriately sized array, otherwise, `num` is the number of items to **copy** from the string. If `sep` is NULL (or “”), then interpret the string as bytes of binary data, otherwise convert the sub-strings separated by `sep` to items of data-type `dtype`. Some data-types may not be readable in text mode and an error will be raised if that occurs. All errors return NULL. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_FromFile(FILE*fp, [PyArray_Descr](types-and- structures#c.PyArray_Descr "PyArray_Descr")*dtype, [npy_intp](dtype#c.npy_intp "npy_intp")num, char*sep) Construct a one-dimensional ndarray of a single type from a binary or text file. The open file pointer is `fp`, the data-type of the array to be created is given by `dtype`. This must match the data in the file. If `num` is -1, then read until the end of the file and return an appropriately sized array, otherwise, `num` is the number of items to read. If `sep` is NULL (or “”), then read from the file in binary mode, otherwise read from the file in text mode with `sep` providing the item separator. Some array types cannot be read in text mode in which case an error is raised. 
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_FromBuffer([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*buf, [PyArray_Descr](types-and- structures#c.PyArray_Descr "PyArray_Descr")*dtype, [npy_intp](dtype#c.npy_intp "npy_intp")count, [npy_intp](dtype#c.npy_intp "npy_intp")offset) Construct a one-dimensional ndarray of a single type from an object, `buf`, that exports the (single-segment) buffer protocol (or has an attribute __buffer__ that returns an object that exports the buffer protocol). A writeable buffer will be tried first followed by a read- only buffer. The `NPY_ARRAY_WRITEABLE` flag of the returned array will reflect which one was successful. The data is assumed to start at `offset` bytes from the start of the memory location for the object. The type of the data in the buffer will be interpreted depending on the data- type descriptor, `dtype.` If `count` is negative then it will be determined from the size of the buffer and the requested itemsize, otherwise, `count` represents how many elements should be converted from the buffer. intPyArray_CopyInto([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*dest, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*src) Copy from the source array, `src`, into the destination array, `dest`, performing a data-type conversion if necessary. If an error occurs return -1 (otherwise 0). The shape of `src` must be broadcastable to the shape of `dest`. NumPy checks for overlapping memory when copying two arrays. intPyArray_CopyObject([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*dest, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*src) Assign an object `src` to a NumPy array `dest` according to array-coercion rules. This is basically identical to `PyArray_FromAny`, but assigns directly to the output array. 
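As a sketch of `PyArray_FromBuffer`, the fragment below views a Python bytes-like object as an `int32` array without copying. This is illustrative only, assuming an extension-module context where `import_array()` has run; `view_as_int32` is a hypothetical name.

```c
#include <Python.h>
#include <numpy/arrayobject.h>

/* View a buffer-exporting object (e.g. bytes, bytearray, array.array)
   as a 1-D int32 ndarray. The returned array's base keeps buf alive. */
static PyObject *
view_as_int32(PyObject *buf)
{
    /* FromBuffer steals this descriptor reference */
    PyArray_Descr *descr = PyArray_DescrFromType(NPY_INT32);
    /* count = -1: infer the length from the buffer size and itemsize;
       offset = 0: start at the beginning of the buffer */
    return PyArray_FromBuffer(buf, descr, -1, 0);
}
```

Whether the result is writeable depends on the buffer: a writeable exporter (such as a `bytearray`) yields a writeable array, while `bytes` yields a read-only one, as reflected in the `NPY_ARRAY_WRITEABLE` flag.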
Returns 0 on success and -1 on failure.

[PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*PyArray_GETCONTIGUOUS([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op)

If `op` is already (C-style) contiguous and well-behaved then just return a reference, otherwise return a (contiguous and well-behaved) copy of the array. The parameter op must be a (sub-class of an) ndarray and no checking for that is done.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_FROM_O([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*obj)

Convert `obj` to an ndarray. The argument can be any nested sequence or object that exports the array interface. This is a macro form of `PyArray_FromAny` using `NULL`, 0, 0, 0 for the other arguments. Your code must be able to handle any data-type descriptor and any combination of data-flags to use this macro.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_FROM_OF([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*obj, int requirements)

Similar to `PyArray_FROM_O` except it can take an argument of _requirements_ indicating properties the resulting array must have. Available requirements that can be enforced are `NPY_ARRAY_C_CONTIGUOUS`, `NPY_ARRAY_F_CONTIGUOUS`, `NPY_ARRAY_ALIGNED`, `NPY_ARRAY_WRITEABLE`, `NPY_ARRAY_NOTSWAPPED`, `NPY_ARRAY_ENSURECOPY`, `NPY_ARRAY_WRITEBACKIFCOPY`, `NPY_ARRAY_FORCECAST`, and `NPY_ARRAY_ENSUREARRAY`.
Standard combinations of flags can also be used.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_FROM_OT([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*obj, int typenum)

Similar to `PyArray_FROM_O` except it can take an argument of _typenum_ specifying the type-number of the returned array.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_FROM_OTF([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*obj, int typenum, int requirements)

Combination of `PyArray_FROM_OF` and `PyArray_FROM_OT` allowing both a _typenum_ and a _flags_ argument to be provided.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_FROMANY([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*obj, int typenum, int min, int max, int requirements)

Similar to `PyArray_FromAny` except the data-type is specified using a typenumber. `PyArray_DescrFromType` (_typenum_) is passed directly to `PyArray_FromAny`. This macro also adds `NPY_ARRAY_DEFAULT` to requirements if `NPY_ARRAY_ENSURECOPY` is passed in as requirements.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_CheckAxis([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*obj, int *axis, int requirements)

Encapsulate the functionality of functions and methods that take the axis= keyword and work properly with None as the axis argument. The input array is `obj`, while `*axis` is a converted integer (so that `*axis == NPY_RAVEL_AXIS` is the None value), and `requirements` gives the needed properties of `obj`. The output is a converted version of the input so that requirements are met and if needed a flattening has occurred.
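A typical use of `PyArray_FROM_OTF` is accepting any array-like argument in a module-level function. The following is a minimal sketch of that pattern, assuming an extension-module context; `my_func` is an illustrative name.

```c
#include <Python.h>
#include <numpy/arrayobject.h>

/* Accept any array-like argument and obtain it as an aligned,
   C-contiguous float64 array before operating on it. */
static PyObject *
my_func(PyObject *self, PyObject *arg)
{
    /* NPY_ARRAY_IN_ARRAY = NPY_ARRAY_C_CONTIGUOUS | NPY_ARRAY_ALIGNED,
       a standard combination for read-only input arrays */
    PyObject *arr = PyArray_FROM_OTF(arg, NPY_DOUBLE, NPY_ARRAY_IN_ARRAY);
    if (arr == NULL) {
        return NULL;          /* conversion failed, Python error set */
    }

    /* ... operate on (PyArrayObject *)arr here ... */

    Py_DECREF(arr);
    Py_RETURN_NONE;
}
```

Because the macro may return either the original array or a converted copy, the function must treat `arr` as a new reference and release it when done.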
On output negative values of `*axis` are converted and the new value is checked to ensure consistency with the shape of `obj`. ## Dealing with types ### General check of Python Type intPyArray_Check([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op) Evaluates true if _op_ is a Python object whose type is a sub-type of [`PyArray_Type`](types-and-structures#c.PyArray_Type "PyArray_Type"). intPyArray_CheckExact([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op) Evaluates true if _op_ is a Python object with type [`PyArray_Type`](types- and-structures#c.PyArray_Type "PyArray_Type"). intPyArray_HasArrayInterface([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*out) If `op` implements any part of the array interface, then `out` will contain a new reference to the newly created ndarray using the interface or `out` will contain `NULL` if an error during conversion occurs. Otherwise, out will contain a borrowed reference to [`Py_NotImplemented`](https://docs.python.org/3/c-api/object.html#c.Py_NotImplemented "\(in Python v3.13\)") and no error condition is set. intPyArray_HasArrayInterfaceType([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op, [PyArray_Descr](types-and- structures#c.PyArray_Descr "PyArray_Descr")*dtype, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*context, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*out) If `op` implements any part of the array interface, then `out` will contain a new reference to the newly created ndarray using the interface or `out` will contain `NULL` if an error during conversion occurs. 
Otherwise, out will contain a borrowed reference to Py_NotImplemented and no error condition is set. This version allows setting of the dtype in the part of the array interface that looks for the [`__array__`](../arrays.classes#numpy.class.__array__ "numpy.class.__array__") attribute. `context` is unused. intPyArray_IsZeroDim([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op) Evaluates true if _op_ is an instance of (a subclass of) [`PyArray_Type`](types-and-structures#c.PyArray_Type "PyArray_Type") and has 0 dimensions. PyArray_IsScalar(op, cls) Evaluates true if _op_ is an instance of `Py{cls}ArrType_Type`. intPyArray_CheckScalar([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op) Evaluates true if _op_ is either an array scalar (an instance of a sub-type of [`PyGenericArrType_Type`](types-and-structures#c.PyGenericArrType_Type "PyGenericArrType_Type") ), or an instance of (a sub-class of) [`PyArray_Type`](types-and-structures#c.PyArray_Type "PyArray_Type") whose dimensionality is 0. intPyArray_IsPythonNumber([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op) Evaluates true if _op_ is an instance of a builtin numeric type (int, float, complex, long, bool) intPyArray_IsPythonScalar([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op) Evaluates true if _op_ is a builtin Python scalar object (int, float, complex, bytes, str, long, bool). intPyArray_IsAnyScalar([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op) Evaluates true if _op_ is either a Python scalar object (see `PyArray_IsPythonScalar`) or an array scalar (an instance of a sub- type of [`PyGenericArrType_Type`](types-and-structures#c.PyGenericArrType_Type "PyGenericArrType_Type") ). 
int PyArray_CheckAnyScalar([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op)

Evaluates true if _op_ is a Python scalar object (see `PyArray_IsPythonScalar`), an array scalar (an instance of a sub-type of [`PyGenericArrType_Type`](types-and-structures#c.PyGenericArrType_Type "PyGenericArrType_Type")) or an instance of a sub-type of [`PyArray_Type`](types-and-structures#c.PyArray_Type "PyArray_Type") whose dimensionality is 0.

### Data-type accessors

Some of the descriptor attributes are not always defined, or should not (or cannot) be accessed directly.

Changed in version 2.0: Prior to NumPy 2.0 the ABI was different but unnecessarily large for user DTypes. These accessors were all added in 2.0 and can be backported (see [The PyArray_Descr struct has been changed](../../numpy_2_0_migration_guide#migration-c-descr)).

[npy_intp](dtype#c.npy_intp "npy_intp")PyDataType_ELSIZE([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr)

The element size of the datatype (`itemsize` in Python).

Note: If the `descr` is attached to an array, `PyArray_ITEMSIZE(arr)` can be used and is available on all NumPy versions.

void PyDataType_SET_ELSIZE([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr, [npy_intp](dtype#c.npy_intp "npy_intp")size)

Allows setting of the itemsize. This is _only_ relevant for string/bytes datatypes, as it is the current pattern to define one with a new size.

[npy_intp](dtype#c.npy_intp "npy_intp")PyDataType_ALIGNMENT([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr)

The alignment of the datatype.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyDataType_METADATA([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr)

The metadata attached to a dtype, either `NULL` or a dictionary.
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyDataType_NAMES([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr)

`NULL` or a tuple of structured field names attached to a dtype.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyDataType_FIELDS([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr)

`NULL`, `None`, or a dict of structured dtype fields. This dict must not be mutated; NumPy may change the way fields are stored in the future. This is the same dict as returned by `np.dtype.fields`.

NpyAuxData *PyDataType_C_METADATA([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr)

C-metadata object attached to a descriptor. This accessor should not usually be needed. The C-metadata field does provide access to the datetime/timedelta time unit information.

PyArray_ArrayDescr *PyDataType_SUBARRAY([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr)

Information about a subarray dtype, equivalent to the Python `np.dtype.base` and `np.dtype.shape`.

If this is non-`NULL`, then this data-type descriptor is a C-style contiguous array of another data-type descriptor. In other words, each element that this descriptor describes is actually an array of some other base descriptor. This is most useful as the data-type descriptor for a field in another data-type descriptor. The fields member should be `NULL` if this is non-`NULL` (the fields member of the base descriptor can be non-`NULL` however).

type PyArray_ArrayDescr

    typedef struct {
        PyArray_Descr *base;
        PyObject *shape;
    } PyArray_ArrayDescr;

[PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*base

The data-type-descriptor object of the base-type.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*shape

The shape (always C-style contiguous) of the sub-array as a Python tuple.
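The accessors above can be combined to inspect a descriptor at the C level. The fragment below is a minimal sketch for NumPy 2.x (extension-module context, `import_array()` already called; `describe_dtype` is an illustrative name):

```c
#include <stdio.h>
#include <Python.h>
#include <numpy/arrayobject.h>

/* Print the itemsize, alignment, and field count of a descriptor
   using the NumPy 2.0 PyDataType_* accessors. */
static void
describe_dtype(PyArray_Descr *descr)
{
    printf("itemsize:  %" NPY_INTP_FMT "\n", PyDataType_ELSIZE(descr));
    printf("alignment: %" NPY_INTP_FMT "\n", PyDataType_ALIGNMENT(descr));

    /* NAMES is NULL for non-structured dtypes */
    PyObject *names = PyDataType_NAMES(descr);
    if (names != NULL) {
        printf("structured dtype with %zd field(s)\n",
               PyTuple_GET_SIZE(names));
    }
}
```

Note that `PyDataType_NAMES` returns a borrowed reference into the descriptor, so no `Py_DECREF` is needed here.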
### Data-type checking For the typenum macros, the argument is an integer representing an enumerated array data type. For the array type checking macros the argument must be a [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")* that can be directly interpreted as a [PyArrayObject](types- and-structures#c.PyArrayObject "PyArrayObject")*. intPyTypeNum_ISUNSIGNED(intnum) intPyDataType_ISUNSIGNED([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_ISUNSIGNED([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type represents an unsigned integer. intPyTypeNum_ISSIGNED(intnum) intPyDataType_ISSIGNED([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_ISSIGNED([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type represents a signed integer. intPyTypeNum_ISINTEGER(intnum) intPyDataType_ISINTEGER([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_ISINTEGER([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type represents any integer. intPyTypeNum_ISFLOAT(intnum) intPyDataType_ISFLOAT([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_ISFLOAT([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type represents any floating point number. intPyTypeNum_ISCOMPLEX(intnum) intPyDataType_ISCOMPLEX([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_ISCOMPLEX([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type represents any complex floating point number. 
int PyTypeNum_ISNUMBER(int num) int PyDataType_ISNUMBER([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *descr) int PyArray_ISNUMBER([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *obj) Type represents any integer, floating point, or complex floating point number. int PyTypeNum_ISSTRING(int num) int PyDataType_ISSTRING([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *descr) int PyArray_ISSTRING([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *obj) Type represents a string data type. int PyTypeNum_ISFLEXIBLE(int num) int PyDataType_ISFLEXIBLE([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *descr) int PyArray_ISFLEXIBLE([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *obj) Type represents one of the flexible array types ([`NPY_STRING`](dtype#c.NPY_TYPES.NPY_STRING "NPY_STRING"), [`NPY_UNICODE`](dtype#c.NPY_TYPES.NPY_UNICODE "NPY_UNICODE"), or [`NPY_VOID`](dtype#c.NPY_TYPES.NPY_VOID "NPY_VOID")). int PyDataType_ISUNSIZED([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *descr) Type has no size information attached, and can be resized. Should only be called on flexible dtypes. Types that are attached to an array will always be sized, hence there is no array form of this macro. For structured datatypes with no fields, this function now returns False. int PyTypeNum_ISUSERDEF(int num) int PyDataType_ISUSERDEF([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *descr) int PyArray_ISUSERDEF([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *obj) Type represents a user-defined type. int PyTypeNum_ISEXTENDED(int num) int PyDataType_ISEXTENDED([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *descr) int PyArray_ISEXTENDED([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *obj) Type is either flexible or user-defined.
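At the Python level, the distinction checked by `PyDataType_ISUNSIZED` shows up as a zero `itemsize` on flexible dtypes created without a length; a short sketch:

```python
import numpy as np

# A flexible (string) dtype without a length is "unsized": itemsize 0.
unsized = np.dtype("U")
assert unsized.itemsize == 0

# Attaching a size makes it an ordinary sized flexible dtype.
sized = np.dtype("U5")
assert sized.itemsize == 20  # 5 UCS-4 code points, 4 bytes each
```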
intPyTypeNum_ISOBJECT(intnum) intPyDataType_ISOBJECT([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_ISOBJECT([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type represents object data type. intPyTypeNum_ISBOOL(intnum) intPyDataType_ISBOOL([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_ISBOOL([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type represents Boolean data type. intPyDataType_HASFIELDS([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_HASFIELDS([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type has fields associated with it. intPyArray_ISNOTSWAPPED([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*m) Evaluates true if the data area of the ndarray _m_ is in machine byte-order according to the array’s data-type descriptor. intPyArray_ISBYTESWAPPED([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*m) Evaluates true if the data area of the ndarray _m_ is **not** in machine byte- order according to the array’s data-type descriptor. [npy_bool](dtype#c.npy_bool "npy_bool")PyArray_EquivTypes([PyArray_Descr](types-and- structures#c.PyArray_Descr "PyArray_Descr")*type1, [PyArray_Descr](types-and- structures#c.PyArray_Descr "PyArray_Descr")*type2) Return `NPY_TRUE` if _type1_ and _type2_ actually represent equivalent types for this platform (the fortran member of each type is ignored). For example, on 32-bit platforms, [`NPY_LONG`](dtype#c.NPY_TYPES.NPY_LONG "NPY_LONG") and [`NPY_INT`](dtype#c.NPY_TYPES.NPY_INT "NPY_INT") are equivalent. Otherwise return `NPY_FALSE`. 
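A Python-level glimpse of byte-order handling (cf. `PyArray_ISNOTSWAPPED` and the byte-order equivalence discussed next): the explicit byte-order character matching the machine compares equal to `'='` (native), while the swapped order yields a distinct dtype. A small sketch:

```python
import sys
import numpy as np

# Determine this machine's native byte-order character.
native = "<" if sys.byteorder == "little" else ">"
swapped = ">" if native == "<" else "<"

# Explicit native order and '=' are equivalent dtypes...
assert np.dtype(native + "i4") == np.dtype("=i4")
# ...while the byte-swapped order is not.
assert np.dtype(swapped + "i4") != np.dtype("=i4")
```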
[npy_bool](dtype#c.npy_bool "npy_bool")PyArray_EquivArrTypes([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*a1, [PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*a2) Return `NPY_TRUE` if _a1_ and _a2_ are arrays with equivalent types for this platform. [npy_bool](dtype#c.npy_bool "npy_bool")PyArray_EquivTypenums(inttypenum1, inttypenum2) Special case of `PyArray_EquivTypes` (…) that does not accept flexible data types but may be easier to call. intPyArray_EquivByteorders(intb1, intb2) True if byteorder characters _b1_ and _b2_ ( `NPY_LITTLE`, `NPY_BIG`, `NPY_NATIVE`, `NPY_IGNORE` ) are either equal or equivalent as to their specification of a native byte order. Thus, on a little-endian machine `NPY_LITTLE` and `NPY_NATIVE` are equivalent where they are not equivalent on a big-endian machine. ### Converting data types [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_Cast([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*arr, inttypenum) Mainly for backwards compatibility to the Numeric C-API and for simple casts to non-flexible types. Return a new array object with the elements of _arr_ cast to the data-type _typenum_ which must be one of the enumerated types and not a flexible type. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_CastToType([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*arr, [PyArray_Descr](types-and- structures#c.PyArray_Descr "PyArray_Descr")*type, intfortran) Return a new array of the _type_ specified, casting the elements of _arr_ as appropriate. The fortran argument specifies the ordering of the output array. 
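The Python-level counterpart of `PyArray_CastToType` is `ndarray.astype`; a short sketch of the same semantics:

```python
import numpy as np

# astype returns a new array with elements cast to the requested dtype,
# like PyArray_CastToType; order='F' plays the role of the fortran argument.
arr = np.array([1.7, 2.2, 3.9])
out = arr.astype(np.int32)
assert out.dtype == np.int32
assert out.tolist() == [1, 2, 3]  # float -> int casts truncate toward zero

m = np.ones((2, 3)).astype(np.float32, order="F")
assert m.flags["F_CONTIGUOUS"]
```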
intPyArray_CastTo([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*in) As of 1.6, this function simply calls `PyArray_CopyInto`, which handles the casting. Cast the elements of the array _in_ into the array _out_. The output array should be writeable, have an integer-multiple of the number of elements in the input array (more than one copy can be placed in out), and have a data type that is one of the builtin types. Returns 0 on success and -1 if an error occurs. intPyArray_CanCastSafely(intfromtype, inttotype) Returns non-zero if an array of data type _fromtype_ can be cast to an array of data type _totype_ without losing information. An exception is that 64-bit integers are allowed to be cast to 64-bit floating point values even though this can lose precision on large integers so as not to proliferate the use of long doubles without explicit requests. Flexible array types are not checked according to their lengths with this function. intPyArray_CanCastTo([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*fromtype, [PyArray_Descr](types-and- structures#c.PyArray_Descr "PyArray_Descr")*totype) `PyArray_CanCastTypeTo` supersedes this function in NumPy 1.6 and later. Equivalent to PyArray_CanCastTypeTo(fromtype, totype, NPY_SAFE_CASTING). intPyArray_CanCastTypeTo([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*fromtype, [PyArray_Descr](types-and- structures#c.PyArray_Descr "PyArray_Descr")*totype, NPY_CASTINGcasting) Returns non-zero if an array of data type _fromtype_ (which can include flexible types) can be cast safely to an array of data type _totype_ (which can include flexible types) according to the casting rule _casting_. For simple types with `NPY_SAFE_CASTING`, this is basically a wrapper around `PyArray_CanCastSafely`, but for flexible types such as strings or unicode, it produces results taking into account their sizes. 
Integer and float types can only be cast to a string or unicode type using `NPY_SAFE_CASTING` if the string or unicode type is big enough to hold the max value of the integer/float type being cast from. intPyArray_CanCastArrayTo([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*totype, NPY_CASTINGcasting) Returns non-zero if _arr_ can be cast to _totype_ according to the casting rule given in _casting_. If _arr_ is an array scalar, its value is taken into account, and non-zero is also returned when the value will not overflow or be truncated to an integer when converting to a smaller type. [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*PyArray_MinScalarType([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*arr) Note With the adoption of NEP 50 in NumPy 2, this function is not used internally. It is currently provided for backwards compatibility, but expected to be eventually deprecated. If _arr_ is an array, returns its data type descriptor, but if _arr_ is an array scalar (has 0 dimensions), it finds the data type of smallest size to which the value may be converted without overflow or truncation to an integer. This function will not demote complex to float or anything to boolean, but will demote a signed integer to an unsigned integer when the scalar value is positive. [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*PyArray_PromoteTypes([PyArray_Descr](types-and- structures#c.PyArray_Descr "PyArray_Descr")*type1, [PyArray_Descr](types-and- structures#c.PyArray_Descr "PyArray_Descr")*type2) Finds the data type of smallest size and kind to which _type1_ and _type2_ may be safely converted. This function is symmetric and associative. A string or unicode result will be the proper size for storing the max value of the input types converted to a string or unicode. 
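The casting and promotion rules above have direct Python-level counterparts (`np.can_cast`, `np.promote_types`, `np.min_scalar_type`) that are convenient for checking expectations before writing C code; a short sketch:

```python
import numpy as np

# np.can_cast mirrors PyArray_CanCastTypeTo for dtype arguments.
assert np.can_cast(np.int32, np.float64)        # safe
assert np.can_cast(np.int64, np.float64)        # allowed despite possible precision loss
assert not np.can_cast(np.float64, np.int64)    # not safe
assert np.can_cast(np.float64, np.int64, casting="unsafe")

# np.promote_types mirrors PyArray_PromoteTypes, including string sizing.
assert np.promote_types(np.int32, np.float32) == np.float64
assert np.promote_types("S5", "U3") == np.dtype("U5")

# np.min_scalar_type mirrors PyArray_MinScalarType for scalar values:
# a positive value may demote to an unsigned type.
assert np.min_scalar_type(255) == np.uint8
assert np.min_scalar_type(-1) == np.int8
```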
[PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*PyArray_ResultType([npy_intp](dtype#c.npy_intp "npy_intp")narrs, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")**arrs, [npy_intp](dtype#c.npy_intp "npy_intp")ndtypes, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")**dtypes) This applies type promotion to all the input arrays and dtype objects, using the NumPy rules for combining scalars and arrays, to determine the output type for an operation with the given set of operands. This is the same result type that ufuncs produce. See the documentation of [`numpy.result_type`](../generated/numpy.result_type#numpy.result_type "numpy.result_type") for more detail about the type promotion algorithm. intPyArray_ObjectType([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op, intmintype) This function is superseded by `PyArray_ResultType`. This function is useful for determining a common type that two or more arrays can be converted to. It only works for non-flexible array types as no itemsize information is passed. The _mintype_ argument represents the minimum type acceptable, and _op_ represents the object that will be converted to an array. The return value is the enumerated typenumber that represents the data-type that _op_ should have. [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")**PyArray_ConvertToCommonType([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op, int*n) The functionality this provides is largely superseded by iterator [`NpyIter`](iterator#c.NpyIter "NpyIter") introduced in 1.6, with flag [`NPY_ITER_COMMON_DTYPE`](iterator#c.NPY_ITER_COMMON_DTYPE "NPY_ITER_COMMON_DTYPE") or with the same dtype parameter for all operands. Convert a sequence of Python objects contained in _op_ to an array of ndarrays each having the same data type. 
The type is selected in the same way as `PyArray_ResultType`. The length of the sequence is returned in _n_, and an _n_-length array of [`PyArrayObject`](types-and-structures#c.PyArrayObject "PyArrayObject") pointers is the return value (or `NULL` if an error occurs). The returned array must be freed by the caller of this routine (using `PyDataMem_FREE`) and all the array objects in it `DECREF`'d or a memory leak will occur. The example template-code below shows a typical usage:

    mps = PyArray_ConvertToCommonType(obj, &n);
    if (mps == NULL) {
        return NULL;
    }
    {code}
    for (i = 0; i < n; i++) {
        Py_DECREF(mps[i]);
    }
    PyDataMem_FREE(mps);
    {return}

char *PyArray_Zero([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr) A pointer to newly created memory of size _arr_->itemsize that holds the representation of 0 for that type. The returned pointer, _ret_, **must be freed** using `PyDataMem_FREE`(ret) when it is not needed anymore. char *PyArray_One([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr) A pointer to newly created memory of size _arr_->itemsize that holds the representation of 1 for that type. The returned pointer, _ret_, **must be freed** using `PyDataMem_FREE`(ret) when it is not needed anymore. int PyArray_ValidType(int typenum) Returns `NPY_TRUE` if _typenum_ represents a valid type-number (builtin or user-defined or character code). Otherwise, this function returns `NPY_FALSE`. ### User-defined data types void PyArray_InitArrFuncs([PyArray_ArrFuncs](types-and-structures#c.PyArray_ArrFuncs "PyArray_ArrFuncs") *f) Initialize all function pointers and members to `NULL`. int PyArray_RegisterDataType([PyArray_DescrProto](types-and-structures#c.PyArray_DescrProto "PyArray_DescrProto") *dtype) Note As of NumPy 2.0 this API is considered legacy; the new DType API is more powerful and provides additional flexibility. The API may eventually be deprecated but support is continued for the time being. **Compiling for NumPy 1.x and 2.x** NumPy 2.x requires passing in a `PyArray_DescrProto` typed struct rather than a `PyArray_Descr`. This is necessary to allow changes.
To allow code to run and compile on both 1.x and 2.x you need to change the type of your struct to `PyArray_DescrProto` and add:

    /* Allow compiling on NumPy 1.x */
    #if NPY_ABI_VERSION < 0x02000000
    #define PyArray_DescrProto PyArray_Descr
    #endif

for 1.x compatibility. Further, the struct will _not_ be the actual descriptor anymore; only its type number will be updated. After successful registration, you must thus fetch the actual dtype with:

    int type_num = PyArray_RegisterDataType(&my_descr_proto);
    if (type_num < 0) {
        /* error */
    }
    PyArray_Descr *my_descr = PyArray_DescrFromType(type_num);

With these two changes, the code should compile and work on both 1.x and 2.x or later. In the unlikely case that you are heap allocating the dtype struct you should free it again on NumPy 2, since a copy is made. The struct is not a valid Python object, so do not use `Py_DECREF` on it. Register a data-type as a new user-defined data type for arrays. The type must have most of its entries filled in. This is not always checked and errors can produce segfaults. In particular, the typeobj member of the `dtype` structure must be filled with a Python type that has a fixed-size element-size that corresponds to the elsize member of _dtype_. Also the `f` member must have the required functions: nonzero, copyswap, copyswapn, getitem, setitem, and cast (some of the cast functions may be `NULL` if no support is desired). To avoid confusion, you should choose a unique character typecode, but this is not enforced and not relied on internally. A user-defined type number is returned that uniquely identifies the type. A pointer to the new structure can then be obtained from `PyArray_DescrFromType` using the returned type number. A -1 is returned if an error occurs. If this _dtype_ has already been registered (checked only by the address of the pointer), then return the previously-assigned type-number.
The number of user DTypes known to NumPy is stored in `NPY_NUMUSERTYPES`, a static global variable that is public in the C API. Accessing this symbol is inherently _not_ thread-safe. If for some reason you need to use this API in a multithreaded context, you will need to add your own locking; NumPy does not ensure that new data types can be added in a thread-safe manner. int PyArray_RegisterCastFunc([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *descr, int totype, PyArray_VectorUnaryFunc *castfunc) Register a low-level casting function, _castfunc_, to convert from the data-type, _descr_, to the given data-type number, _totype_. Any old casting function is overwritten. A `0` is returned on success or a `-1` on failure. type PyArray_VectorUnaryFunc The function pointer type for low-level casting functions. int PyArray_RegisterCanCast([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *descr, int totype, NPY_SCALARKIND scalar) Register the data-type number, _totype_, as castable from data-type object, _descr_, of the given _scalar_ kind. Use _scalar_ = `NPY_NOSCALAR` to register that an array of data-type _descr_ can be cast safely to a data-type whose type number is _totype_. The return value is 0 on success or -1 on failure. ### Special functions for NPY_OBJECT Warning When working with arrays or buffers filled with objects, NumPy tries to ensure such buffers are filled with `None` before any data may be read. However, code paths may exist where an array is only initialized to `NULL`. NumPy itself accepts `NULL` as an alias for `None`, but may `assert` non-`NULL` when compiled in debug mode. Because NumPy is not yet consistent about initialization with `None`, users **must** expect a value of `NULL` when working with buffers created by NumPy. Users **should** also ensure to pass fully initialized buffers to NumPy, since NumPy may make this a strong requirement in the future.
There is currently an intention to ensure that NumPy always initializes object arrays before they may be read. Any failure to do so will be regarded as a bug. In the future, users may be able to rely on non-NULL values when reading from any array, although exceptions for writing to freshly created arrays may remain (e.g. for output arrays in ufunc code). As of NumPy 1.23, known code paths exist where proper filling is not done. int PyArray_INCREF([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *op) Used for an array, _op_, that contains any Python objects. It increments the reference count of every object in the array according to the data-type of _op_. A -1 is returned if an error occurs, otherwise 0 is returned. void PyArray_Item_INCREF(char *ptr, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *dtype) A function to INCREF all the objects at the location _ptr_ according to the data-type _dtype_. If _ptr_ is the start of a structured type with an object at any offset, then this will (recursively) increment the reference count of all object-like items in the structured type. int PyArray_XDECREF([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *op) Used for an array, _op_, that contains any Python objects. It decrements the reference count of every object in the array according to the data-type of _op_. Normal return value is 0. A -1 is returned if an error occurs. void PyArray_Item_XDECREF(char *ptr, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *dtype) A function to XDECREF all the object-like items at the location _ptr_ as recorded in the data-type, _dtype_. This works recursively, so that if `dtype` itself has fields with data-types that contain object-like items, all the object-like fields will be XDECREF'd.
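At the Python level, the `NULL`-as-`None` aliasing is invisible: elements of a freshly created object array read back as `None`. A short sketch:

```python
import numpy as np

# Freshly created object arrays read back as None at the Python level,
# consistent with NULL being treated as an alias for None.
arr = np.empty(3, dtype=object)
assert all(item is None for item in arr)

# Explicitly filling the buffer avoids relying on that initialization.
arr[:] = None
assert arr[0] is None
```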
int PyArray_SetWritebackIfCopyBase([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *base) Precondition: `arr` is a copy of `base` (though possibly with different strides, ordering, etc.). Sets the `NPY_ARRAY_WRITEBACKIFCOPY` flag and `arr->base`, and sets `base` to READONLY. Call `PyArray_ResolveWritebackIfCopy` before calling [`Py_DECREF`](https://docs.python.org/3/c-api/refcounting.html#c.Py_DECREF "\(in Python v3.13\)") in order to copy any changes back to `base` and reset the READONLY flag. Returns 0 for success, -1 for failure. ## Array flags The `flags` attribute of the `PyArrayObject` structure contains important information about the memory used by the array (pointed to by the data member). This flag information must be kept accurate, or strange results and even segfaults may result. There are 6 (binary) flags that describe the memory area used by the data buffer. These constants are defined in `arrayobject.h` and determine the bit-position of the flag. Python exposes a nice attribute-based interface as well as a dictionary-like interface for getting (and, if appropriate, setting) these flags. Memory areas of all kinds can be pointed to by an ndarray, necessitating these flags. If you get an arbitrary `PyArrayObject` in C-code, you need to be aware of the flags that are set. If you need to guarantee a certain kind of array (like `NPY_ARRAY_C_CONTIGUOUS` and `NPY_ARRAY_BEHAVED`), then pass these requirements into the `PyArray_FromAny` function. In versions 1.6 and earlier of NumPy, the following flags did not have the _ARRAY_ macro namespace in them. That form of the constant names is deprecated in 1.7. ### Basic Array Flags An ndarray can have a data segment that is not a simple contiguous chunk of well-behaved memory you can manipulate. It may not be aligned with word boundaries (very important on some platforms).
It might have its data in a different byte-order than the machine recognizes. It might not be writeable. It might be in Fortran-contiguous order. The array flags are used to indicate what can be said about data associated with an array. NPY_ARRAY_C_CONTIGUOUS The data area is in C-style contiguous order (last index varies the fastest). NPY_ARRAY_F_CONTIGUOUS The data area is in Fortran-style contiguous order (first index varies the fastest). Note Arrays can be both C-style and Fortran-style contiguous simultaneously. This is clear for 1-dimensional arrays, but can also be true for higher dimensional arrays. Even for contiguous arrays a stride for a given dimension `arr.strides[dim]` may be _arbitrary_ if `arr.shape[dim] == 1` or the array has no elements. It does _not_ generally hold that `self.strides[-1] == self.itemsize` for C-style contiguous arrays or `self.strides[0] == self.itemsize` for Fortran-style contiguous arrays is true. The correct way to access the `itemsize` of an array from the C API is `PyArray_ITEMSIZE(arr)`. See also [Internal memory layout of an ndarray](../arrays.ndarray#arrays-ndarray) NPY_ARRAY_OWNDATA The data area is owned by this array. Should never be set manually, instead create a `PyObject` wrapping the data and set the array’s base to that object. For an example, see the test in `test_mem_policy`. NPY_ARRAY_ALIGNED The data area and all array elements are aligned appropriately. NPY_ARRAY_WRITEABLE The data area can be written to. Notice that the above 3 flags are defined so that a new, well- behaved array has these flags defined as true. NPY_ARRAY_WRITEBACKIFCOPY The data area represents a (well-behaved) copy whose information should be transferred back to the original when `PyArray_ResolveWritebackIfCopy` is called. 
This is a special flag that is set if this array represents a copy made because a user required certain flags in `PyArray_FromAny` and a copy had to be made of some other array (and the user asked for this flag to be set in such a situation). The base attribute then points to the “misbehaved” array (which is set read-only). `PyArray_ResolveWritebackIfCopy` will copy its contents back to the “misbehaved” array (casting if necessary) and will reset the “misbehaved” array to `NPY_ARRAY_WRITEABLE`. If the “misbehaved” array was not `NPY_ARRAY_WRITEABLE` to begin with, then `PyArray_FromAny` would have returned an error because `NPY_ARRAY_WRITEBACKIFCOPY` would not have been possible. `PyArray_UpdateFlags`(obj, flags) will update `obj->flags` for `flags`, which can be any of `NPY_ARRAY_C_CONTIGUOUS`, `NPY_ARRAY_F_CONTIGUOUS`, `NPY_ARRAY_ALIGNED`, or `NPY_ARRAY_WRITEABLE`. ### Combinations of array flags

- NPY_ARRAY_BEHAVED = `NPY_ARRAY_ALIGNED` | `NPY_ARRAY_WRITEABLE`
- NPY_ARRAY_CARRAY = `NPY_ARRAY_C_CONTIGUOUS` | `NPY_ARRAY_BEHAVED`
- NPY_ARRAY_CARRAY_RO = `NPY_ARRAY_C_CONTIGUOUS` | `NPY_ARRAY_ALIGNED`
- NPY_ARRAY_FARRAY = `NPY_ARRAY_F_CONTIGUOUS` | `NPY_ARRAY_BEHAVED`
- NPY_ARRAY_FARRAY_RO = `NPY_ARRAY_F_CONTIGUOUS` | `NPY_ARRAY_ALIGNED`
- NPY_ARRAY_DEFAULT = `NPY_ARRAY_CARRAY`
- NPY_ARRAY_IN_ARRAY = `NPY_ARRAY_C_CONTIGUOUS` | `NPY_ARRAY_ALIGNED`
- NPY_ARRAY_IN_FARRAY = `NPY_ARRAY_F_CONTIGUOUS` | `NPY_ARRAY_ALIGNED`
- NPY_ARRAY_OUT_ARRAY = `NPY_ARRAY_C_CONTIGUOUS` | `NPY_ARRAY_WRITEABLE` | `NPY_ARRAY_ALIGNED`
- NPY_ARRAY_OUT_FARRAY = `NPY_ARRAY_F_CONTIGUOUS` | `NPY_ARRAY_WRITEABLE` | `NPY_ARRAY_ALIGNED`
- NPY_ARRAY_INOUT_ARRAY = `NPY_ARRAY_C_CONTIGUOUS` | `NPY_ARRAY_WRITEABLE` | `NPY_ARRAY_ALIGNED` | `NPY_ARRAY_WRITEBACKIFCOPY`
- NPY_ARRAY_INOUT_FARRAY = `NPY_ARRAY_F_CONTIGUOUS` | `NPY_ARRAY_WRITEABLE` | `NPY_ARRAY_ALIGNED` | `NPY_ARRAY_WRITEBACKIFCOPY`
- NPY_ARRAY_UPDATE_ALL = `NPY_ARRAY_C_CONTIGUOUS` | `NPY_ARRAY_F_CONTIGUOUS` | `NPY_ARRAY_ALIGNED`

### Flag-like constants These constants are used in
`PyArray_FromAny` (and its macro forms) to specify desired properties of the new array. NPY_ARRAY_FORCECAST Cast to the desired type, even if it can’t be done without losing information. NPY_ARRAY_ENSURECOPY Make sure the resulting array is a copy of the original. NPY_ARRAY_ENSUREARRAY Make sure the resulting object is an actual ndarray, and not a sub-class. These constants are used in `PyArray_CheckFromAny` (and its macro forms) to specify desired properties of the new array. NPY_ARRAY_NOTSWAPPED Make sure the returned array has a data-type descriptor that is in machine byte-order, over-riding any specification in the _dtype_ argument. Normally, the byte-order requirement is determined by the _dtype_ argument. If this flag is set and the dtype argument does not indicate a machine byte-order descriptor (or is NULL and the object is already an array with a data-type descriptor that is not in machine byte- order), then a new data-type descriptor is created and used with its byte-order field set to native. NPY_ARRAY_BEHAVED_NS `NPY_ARRAY_ALIGNED` | `NPY_ARRAY_WRITEABLE` | `NPY_ARRAY_NOTSWAPPED` NPY_ARRAY_ELEMENTSTRIDES Make sure the returned array has strides that are multiples of the element size. ### Flag checking For all of these macros _arr_ must be an instance of a (subclass of) [`PyArray_Type`](types-and-structures#c.PyArray_Type "PyArray_Type"). intPyArray_CHKFLAGS([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*arr, intflags) The first parameter, arr, must be an ndarray or subclass. The parameter, _flags_ , should be an integer consisting of bitwise combinations of the possible flags an array can have: `NPY_ARRAY_C_CONTIGUOUS`, `NPY_ARRAY_F_CONTIGUOUS`, `NPY_ARRAY_OWNDATA`, `NPY_ARRAY_ALIGNED`, `NPY_ARRAY_WRITEABLE`, `NPY_ARRAY_WRITEBACKIFCOPY`. intPyArray_IS_C_CONTIGUOUS([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*arr) Evaluates true if _arr_ is C-style contiguous. 
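From Python, `np.require` is a front end to several of these requirement flags (`'C'` ~ `NPY_ARRAY_C_CONTIGUOUS`, `'A'` ~ `NPY_ARRAY_ALIGNED`, `'W'` ~ `NPY_ARRAY_WRITEABLE`, `'E'` ~ `NPY_ARRAY_ENSUREARRAY`); a short sketch:

```python
import numpy as np

# Start from a Fortran-ordered array, which is not C-contiguous.
f_ordered = np.asfortranarray(np.ones((3, 4)))
assert not f_ordered.flags["C_CONTIGUOUS"]

# np.require returns an array satisfying the requested flags,
# copying only if necessary (like PyArray_FromAny with requirements).
c_ordered = np.require(f_ordered, requirements=["C", "A", "W"])
assert c_ordered.flags["C_CONTIGUOUS"]
assert c_ordered.flags["ALIGNED"] and c_ordered.flags["WRITEABLE"]
```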
int PyArray_IS_F_CONTIGUOUS([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *arr) Evaluates true if _arr_ is Fortran-style contiguous. int PyArray_ISFORTRAN([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *arr) Evaluates true if _arr_ is Fortran-style contiguous and _not_ C-style contiguous. `PyArray_IS_F_CONTIGUOUS` is the correct way to test for Fortran-style contiguity. int PyArray_ISWRITEABLE([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *arr) Evaluates true if the data area of _arr_ can be written to. int PyArray_ISALIGNED([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *arr) Evaluates true if the data area of _arr_ is properly aligned on the machine. int PyArray_ISBEHAVED([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *arr) Evaluates true if the data area of _arr_ is aligned and writeable and in machine byte-order according to its descriptor. int PyArray_ISBEHAVED_RO([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *arr) Evaluates true if the data area of _arr_ is aligned and in machine byte-order. int PyArray_ISCARRAY([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *arr) Evaluates true if the data area of _arr_ is C-style contiguous, and `PyArray_ISBEHAVED`(_arr_) is true. int PyArray_ISFARRAY([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *arr) Evaluates true if the data area of _arr_ is Fortran-style contiguous and `PyArray_ISBEHAVED`(_arr_) is true. int PyArray_ISCARRAY_RO([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *arr) Evaluates true if the data area of _arr_ is C-style contiguous, aligned, and in machine byte-order.
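The `ndarray.flags` attribute mirrors these flag-checking macros from Python, which is handy for sanity-checking assumptions; a short sketch:

```python
import numpy as np

# C vs. Fortran contiguity, as checked by PyArray_IS_C_CONTIGUOUS /
# PyArray_IS_F_CONTIGUOUS.
a = np.ones((3, 4))
assert a.flags["C_CONTIGUOUS"] and not a.flags["F_CONTIGUOUS"]
assert a.T.flags["F_CONTIGUOUS"] and not a.T.flags["C_CONTIGUOUS"]

# 1-D contiguous arrays are both C- and Fortran-contiguous at once.
b = np.arange(5)
assert b.flags["C_CONTIGUOUS"] and b.flags["F_CONTIGUOUS"]

# Writing to a non-writeable array raises ValueError, the Python-level
# behaviour behind PyArray_ISWRITEABLE.
b.flags.writeable = False
try:
    b[0] = 99
    raised = False
except ValueError:
    raised = True
assert raised
```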
int PyArray_ISFARRAY_RO([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *arr) Evaluates true if the data area of _arr_ is Fortran-style contiguous, aligned, and in machine byte-order. int PyArray_ISONESEGMENT([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *arr) Evaluates true if the data area of _arr_ consists of a single (C-style or Fortran-style) contiguous segment. void PyArray_UpdateFlags([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr, int flagmask) The `NPY_ARRAY_C_CONTIGUOUS`, `NPY_ARRAY_ALIGNED`, and `NPY_ARRAY_F_CONTIGUOUS` array flags can be “calculated” from the array object itself. This routine updates one or more of these flags of _arr_ as specified in _flagmask_ by performing the required calculation. Warning It is important to keep the flags updated (using `PyArray_UpdateFlags` can help) whenever a manipulation with an array is performed that might cause them to change. Later calculations in NumPy that rely on the state of these flags do not repeat the calculation to update them. int PyArray_FailUnlessWriteable([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *obj, const char *name) This function does nothing and returns 0 if _obj_ is writeable. It raises an exception and returns -1 if _obj_ is not writeable. It may also do other house-keeping, such as issuing warnings on arrays which are transitioning to become views. Always call this function at some point before writing to an array. _name_ is a name for the array, used to give better error messages. It can be something like “assignment destination”, “output array”, or even just “array”. ## ArrayMethod API ArrayMethod loops are intended as a generic mechanism for writing loops over arrays, including ufunc loops and casts. The public API is defined in the `numpy/dtype_api.h` header.
See [PyArrayMethod_Context and PyArrayMethod_Spec](types-and-structures#arraymethod-structs) for documentation on the C structs exposed in the ArrayMethod API.

### Slots and Typedefs

These are used to identify which kind of function an ArrayMethod slot implements. See Slots and Typedefs below for documentation on the functions that must be implemented for each slot.

NPY_METH_resolve_descriptors

typedef NPY_CASTING (PyArrayMethod_ResolveDescriptors)(struct [PyArrayMethodObject_tag](types-and-structures#c.PyArrayMethodObject_tag "PyArrayMethodObject_tag") *method, [PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *const *dtypes, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *const *given_descrs, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") **loop_descrs, [npy_intp](dtype#c.npy_intp "npy_intp") *view_offset)

The function used to set the descriptors for an operation based on the descriptors of the operands. For example, for a ufunc operation with two input operands and one output operand that is called without `out` being set in the Python API, `resolve_descriptors` will be passed the descriptors for the two operands and must determine the correct descriptor to use for the output based on the output DType set for the ArrayMethod. If `out` is set, then the output descriptor is passed in as well and should not be overridden.

The _method_ is a pointer to the underlying cast or ufunc loop. In the future we may expose this struct publicly, but for now this is an opaque pointer and the method cannot be inspected. The _dtypes_ is an `nargs` length array of `PyArray_DTypeMeta` pointers, _given_descrs_ is an `nargs` length array of input descriptor instances (output descriptors may be NULL if no output was provided by the user), and _loop_descrs_ is an `nargs` length array of descriptors that must be filled in by the resolve descriptors implementation.
_view_offset_ is currently only interesting for casts and can normally be ignored. When a cast does not require any operation, this can be signalled by setting _view_offset_ to 0.

On error, you must return `(NPY_CASTING)-1` with an error set.

NPY_METH_strided_loop

NPY_METH_contiguous_loop

NPY_METH_unaligned_strided_loop

NPY_METH_unaligned_contiguous_loop

One-dimensional strided loops implementing the behavior (either a ufunc or cast). In most cases, `NPY_METH_strided_loop` is the generic and only version that needs to be implemented. `NPY_METH_contiguous_loop` can additionally be implemented as a more lightweight/faster version; it is used when all inputs and outputs are contiguous.

To deal with possibly unaligned data, NumPy needs to be able to copy unaligned to aligned data. When implementing a new DType, the “cast” or copy for it needs to implement `NPY_METH_unaligned_strided_loop`. Unlike the normal versions, this loop must not assume that the data can be accessed in an aligned fashion. These loops must copy each value before accessing or storing:

    type_in in_value;
    type_out out_value;
    memcpy(&in_value, in_data, sizeof(type_in));
    out_value = in_value;
    memcpy(out_data, &out_value, sizeof(type_out));

while a normal loop can just use:

    *(type_out *)out_data = *(type_in *)in_data;

The unaligned loops are currently only used in casts and will never be picked in ufuncs (ufuncs create a temporary copy to ensure aligned inputs). These slot IDs are ignored when `NPY_METH_get_loop` is defined; in that case the loop returned by the `get_loop` function is used instead.

NPY_METH_contiguous_indexed_loop

A specialized inner-loop option to speed up common `ufunc.at` computations.
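The alignment issue motivating the unaligned loops can be observed from Python: a buffer viewed at an odd byte offset typically yields an array whose `ALIGNED` flag is false, while a copy is always aligned. A small sketch (whether the offset view reports unaligned depends on the platform's alignment requirement for `float64`):

```python
import numpy as np

buf = bytearray(17)
# View 2 float64 values starting at byte offset 1 -- on most
# platforms this pointer is not 8-byte aligned.
unaligned = np.frombuffer(buf, dtype=np.float64, count=2, offset=1)
print(unaligned.flags.aligned)  # typically False

# A copy owns freshly allocated memory, which is always aligned.
aligned = unaligned.copy()
print(aligned.flags.aligned)  # True
```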
typedef int (PyArrayMethod_StridedLoop)([PyArrayMethod_Context](types-and-structures#c.PyArrayMethod_Context "PyArrayMethod_Context") *context, char *const *data, const [npy_intp](dtype#c.npy_intp "npy_intp") *dimensions, const [npy_intp](dtype#c.npy_intp "npy_intp") *strides, NpyAuxData *auxdata)

An implementation of an ArrayMethod loop. All of the loop slot IDs listed above must provide a `PyArrayMethod_StridedLoop` implementation. The _context_ is a struct containing context for the loop operation - in particular the input descriptors. The _data_ are an array of pointers to the beginning of the input and output array buffers. The _dimensions_ are the loop dimensions for the operation. The _strides_ are an `nargs` length array of strides for each input. The _auxdata_ is an optional set of auxiliary data that can be passed in to the loop - helpful to turn optional behavior on and off, to reduce boilerplate by allowing similar ufuncs to share loop implementations, or to allocate space that is persistent over multiple strided loop calls.

NPY_METH_get_loop

Allows more fine-grained control over loop selection. Accepts an implementation of `PyArrayMethod_GetLoop`, which in turn returns a strided loop implementation. If `NPY_METH_get_loop` is defined, the other loop slot IDs are ignored, even if specified.

typedef int (PyArrayMethod_GetLoop)([PyArrayMethod_Context](types-and-structures#c.PyArrayMethod_Context "PyArrayMethod_Context") *context, int aligned, int move_references, const [npy_intp](dtype#c.npy_intp "npy_intp") *strides, PyArrayMethod_StridedLoop **out_loop, NpyAuxData **out_transferdata, NPY_ARRAYMETHOD_FLAGS *flags);

Sets the loop to use for an operation at runtime. The _context_ is the runtime context for the operation. _aligned_ indicates whether the data access for the loop is aligned (1) or unaligned (0). _move_references_ indicates whether embedded references in the data should be copied.
_strides_ are the strides for the input array, and _out_loop_ is a pointer that must be filled in with a pointer to the loop implementation. _out_transferdata_ can optionally be filled in to allow passing extra user-defined context to an operation. _flags_ must be filled in with ArrayMethod flags relevant for the operation. This is for example necessary to indicate if the inner loop requires the Python GIL to be held.

NPY_METH_get_reduction_initial

typedef int (PyArrayMethod_GetReductionInitial)([PyArrayMethod_Context](types-and-structures#c.PyArrayMethod_Context "PyArrayMethod_Context") *context, [npy_bool](dtype#c.npy_bool "npy_bool") reduction_is_empty, char *initial)

Query an ArrayMethod for the initial value to use in a reduction.

The _context_ is the ArrayMethod context, mainly to access the input descriptors. _reduction_is_empty_ indicates whether the reduction is empty. When it is, the value returned may differ: in this case it is a “default” value that may differ from the “identity” value normally used. For example:

* `0.0` is the default for `sum([])`. But `-0.0` is the correct identity otherwise, as it preserves the sign for `sum([-0.0])`.

* We use no identity for object, but return the defaults of `0` and `1` for the empty `sum([], dtype=object)` and `prod([], dtype=object)`. This allows `np.sum(np.array(["a", "b"], dtype=object))` to work.

* `-inf` or `INT_MIN` for `max` is an identity, but at least `INT_MIN` is not a good _default_ when there are no items.

_initial_ is a pointer to the data for the initial value, which should be filled in. Returns -1, 0, or 1 indicating error, no initial value, and the initial value being successfully filled, respectively. Errors must not be given when no initial value is correct, since NumPy may call this even when it is not strictly necessary to do so.

### Flags

enum NPY_ARRAYMETHOD_FLAGS

These flags allow switching on and off custom runtime behavior for ArrayMethod loops.
For example, if a ufunc cannot possibly trigger floating point errors, then the `NPY_METH_NO_FLOATINGPOINT_ERRORS` flag should be set on the ufunc when it is registered.

enumerator NPY_METH_REQUIRES_PYAPI

Indicates the method must hold the GIL. If this flag is not set, the GIL is released before the loop is called.

enumerator NPY_METH_NO_FLOATINGPOINT_ERRORS

Indicates the method cannot generate floating point errors, so checking for floating point errors after the loop completes can be skipped.

enumerator NPY_METH_SUPPORTS_UNALIGNED

Indicates the method supports unaligned access.

enumerator NPY_METH_IS_REORDERABLE

Indicates that the result of applying the loop repeatedly (for example, in a reduction operation) does not depend on the order of application.

enumerator NPY_METH_RUNTIME_FLAGS

The flags that can be changed at runtime.

### Typedefs

Typedefs for functions that users of the ArrayMethod API can implement are described below.

typedef int (PyArrayMethod_TraverseLoop)(void *traverse_context, const [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *descr, char *data, [npy_intp](dtype#c.npy_intp "npy_intp") size, [npy_intp](dtype#c.npy_intp "npy_intp") stride, NpyAuxData *auxdata)

A traverse loop working on a single array. This is similar to the general strided-loop function, and is designed for loops that need to visit every element of a single array. Currently this is used for array clearing, via the `NPY_DT_get_clear_loop` DType API hook, and zero-filling, via the `NPY_DT_get_fill_zero_loop` DType API hook. These are most useful for handling arrays storing embedded references to Python objects or heap-allocated data.

The _descr_ is the descriptor for the array, _data_ is a pointer to the array buffer, _size_ is the 1-D size of the array buffer, _stride_ is the stride, and _auxdata_ is optional extra data for the loop.
The _traverse_context_ is passed in because we may need to pass in interpreter state or similar in the future, but we don’t want to pass in a full context (with pointers to dtypes, method, and caller, which all make no sense for a traverse function). We assume for now that this context can be just passed through in the future (for structured dtypes).

typedef int (PyArrayMethod_GetTraverseLoop)(void *traverse_context, const [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *descr, int aligned, [npy_intp](dtype#c.npy_intp "npy_intp") fixed_stride, PyArrayMethod_TraverseLoop **out_loop, NpyAuxData **out_auxdata, NPY_ARRAYMETHOD_FLAGS *flags)

Simplified `get_loop` function specific to dtype traversal. It should set the flags needed for the traversal loop and set _out_loop_ to the loop function, which must be a valid `PyArrayMethod_TraverseLoop` pointer. Currently this is used for zero-filling and for clearing arrays storing embedded references.

### API Functions and Typedefs

These functions are part of the main numpy array API and were added along with the rest of the ArrayMethod API.

int PyUFunc_AddLoopFromSpec([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *ufunc, [PyArrayMethod_Spec](types-and-structures#c.PyArrayMethod_Spec "PyArrayMethod_Spec") *spec)

Add a loop directly to a ufunc from a given ArrayMethod spec. This is the main ufunc registration function; it adds a new implementation/loop to a ufunc. It replaces `PyUFunc_RegisterLoopForType`.

int PyUFunc_AddPromoter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *ufunc, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *DType_tuple, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *promoter)

Register a new promoter for a ufunc. The first argument is the ufunc to register the promoter with. The second argument is a Python tuple containing DTypes or None, matching the number of inputs and outputs of the ufunc. The last argument is the promoter, a function stored in a PyCapsule. It is passed the operation and requested DType signatures and can mutate them to attempt a new search for a matching loop/promoter.

Note that currently the output dtypes are always `NULL` unless they are also part of the signature. This is an implementation detail and could change in the future. However, in general promoters should not have a need for output dtypes.

typedef int (PyArrayMethod_PromoterFunction)([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *ufunc, [PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *const op_dtypes[], [PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *const signature[], [PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *new_op_dtypes[])

Type of the promoter function, which must be wrapped into a `PyCapsule` with name `"numpy._ufunc_promoter"`. It is passed the operation and requested DType signatures and can mutate the signatures to attempt a search for a new loop or promoter that can accomplish the operation by casting the inputs to the “promoted” DTypes.

int PyUFunc_GiveFloatingpointErrors(const char *name, int fpe_errors)

Checks for a floating point error after performing a floating point operation, in a manner that takes into account the error signaling configured via [`numpy.errstate`](../generated/numpy.errstate#numpy.errstate "numpy.errstate"). Takes the name of the operation to use in the error message and an integer flag that is one of `NPY_FPE_DIVIDEBYZERO`, `NPY_FPE_OVERFLOW`, `NPY_FPE_UNDERFLOW`, `NPY_FPE_INVALID` to indicate which error to check for. Returns -1 on failure (an error was raised) and 0 on success.
int PyUFunc_AddWrappingLoop([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *ufunc_obj, [PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *new_dtypes[], [PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *wrapped_dtypes[], PyArrayMethod_TranslateGivenDescriptors *translate_given_descrs, PyArrayMethod_TranslateLoopDescriptors *translate_loop_descrs)

Allows creating a fairly lightweight wrapper around an existing ufunc loop. The idea is mainly for units; this is currently slightly limited in that it enforces that you cannot use a loop from another ufunc.

typedef int (PyArrayMethod_TranslateGivenDescriptors)(int nin, int nout, [PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *wrapped_dtypes[], [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *given_descrs[], [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *new_descrs[]);

The function that converts the given descriptors (passed in to `resolve_descriptors`) and translates them for the wrapped loop. The new descriptors MUST be viewable with the old ones; `NULL` must be supported (for output arguments) and should normally be forwarded.

The output of this function will be used to construct views of the arguments as if they were of the translated dtypes, and does not use a cast. This means this mechanism is mostly useful for DTypes that “wrap” another DType implementation. For example, a unit DType could use this to wrap an existing floating point DType without needing to re-implement low-level ufunc logic. In the unit example, `resolve_descriptors` would handle computing the output unit from the input unit.
typedef int (PyArrayMethod_TranslateLoopDescriptors)(int nin, int nout, [PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *new_dtypes[], [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *given_descrs[], [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *original_descrs[], [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *loop_descrs[]);

The function to convert the actual loop descriptors (as returned by the original `resolve_descriptors` function) to the ones the output array should use. This function must return “viewable” types; it must not mutate them in any form that would break the inner-loop logic. It does not need to support NULL.

#### Wrapping Loop Example

Suppose you want to wrap the `float64` multiply implementation for a `WrappedDoubleDType`. You would add a wrapping loop like so:

    PyArray_DTypeMeta *orig_dtypes[3] = {
        &WrappedDoubleDType, &WrappedDoubleDType, &WrappedDoubleDType};
    PyArray_DTypeMeta *wrapped_dtypes[3] = {
        &PyArray_Float64DType, &PyArray_Float64DType, &PyArray_Float64DType};

    PyObject *mod = PyImport_ImportModule("numpy");
    if (mod == NULL) {
        return -1;
    }
    PyObject *multiply = PyObject_GetAttrString(mod, "multiply");
    Py_DECREF(mod);
    if (multiply == NULL) {
        return -1;
    }

    int res = PyUFunc_AddWrappingLoop(
        multiply, orig_dtypes, wrapped_dtypes,
        &translate_given_descrs, &translate_loop_descrs);
    Py_DECREF(multiply);

Note that this also requires two functions to be defined above this code:

    static int
    translate_given_descrs(int nin, int nout,
                           PyArray_DTypeMeta *NPY_UNUSED(wrapped_dtypes[]),
                           PyArray_Descr *given_descrs[],
                           PyArray_Descr *new_descrs[])
    {
        for (int i = 0; i < nin + nout; i++) {
            if (given_descrs[i] == NULL) {
                new_descrs[i] = NULL;
            }
            else {
                new_descrs[i] = PyArray_DescrFromType(NPY_DOUBLE);
            }
        }
        return 0;
    }

    static int
    translate_loop_descrs(int nin, int NPY_UNUSED(nout),
                          PyArray_DTypeMeta *NPY_UNUSED(new_dtypes[]),
                          PyArray_Descr *given_descrs[],
                          PyArray_Descr *original_descrs[],
                          PyArray_Descr *loop_descrs[])
    {
        // more complicated parametric DTypes may need to do
        // additional checking, but we know the wrapped DTypes
        // *have* to be float64 for this example.
        loop_descrs[0] = PyArray_DescrFromType(NPY_FLOAT64);
        Py_INCREF(loop_descrs[0]);
        loop_descrs[1] = PyArray_DescrFromType(NPY_FLOAT64);
        Py_INCREF(loop_descrs[1]);
        loop_descrs[2] = PyArray_DescrFromType(NPY_FLOAT64);
        Py_INCREF(loop_descrs[2]);
        return 0;
    }

## API for calling array methods

### Conversion

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_GetField([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *dtype, int offset)

Equivalent to [`ndarray.getfield`](../generated/numpy.ndarray.getfield#numpy.ndarray.getfield "numpy.ndarray.getfield") (_self_, _dtype_, _offset_). This function [steals a reference](https://docs.python.org/3/c-api/intro.html?reference-count-details) to [`PyArray_Descr`](types-and-structures#c.PyArray_Descr "PyArray_Descr") and returns a new array of the given _dtype_ using the data in the current array at a specified _offset_ in bytes. The _offset_ plus the itemsize of the new array type must be less than _self_ ->descr->elsize or an error is raised. The same shape and strides as the original array are used. Therefore, this function has the effect of returning a field from a structured array. But it can also be used to select specific bytes or groups of bytes from any array type.
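From Python, the equivalent `ndarray.getfield` makes the effect easy to see: it reinterprets the bytes at an offset within each element, keeping shape and strides. A sketch using a structured array (the field names are purely illustrative):

```python
import numpy as np

a = np.zeros(3, dtype=[('x', np.int32), ('y', np.float32)])
a['x'] = [1, 2, 3]

# Select the 4 bytes at offset 0 of each 8-byte element as int32:
# this recovers the 'x' field without copying.
x = a.getfield(np.int32, 0)
print(list(x))  # [1, 2, 3]
```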
int PyArray_SetField([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *dtype, int offset, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *val)

Equivalent to [`ndarray.setfield`](../generated/numpy.ndarray.setfield#numpy.ndarray.setfield "numpy.ndarray.setfield") (_self_, _val_, _dtype_, _offset_). Set the field starting at _offset_ in bytes and of the given _dtype_ to _val_. The _offset_ plus _dtype_ ->elsize must be less than _self_ ->descr->elsize or an error is raised. Otherwise, the _val_ argument is converted to an array and copied into the field pointed to. If necessary, the elements of _val_ are repeated to fill the destination array, but the number of elements in the destination must be an integer multiple of the number of elements in _val_.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_Byteswap([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self, [npy_bool](dtype#c.npy_bool "npy_bool") inplace)

Equivalent to [`ndarray.byteswap`](../generated/numpy.ndarray.byteswap#numpy.ndarray.byteswap "numpy.ndarray.byteswap") (_self_, _inplace_). Return an array whose data area is byteswapped. If _inplace_ is non-zero, then do the byteswap inplace and return a reference to self. Otherwise, create a byteswapped copy and leave self unchanged.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_NewCopy([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *old, NPY_ORDER order)

Equivalent to [`ndarray.copy`](../generated/numpy.ndarray.copy#numpy.ndarray.copy "numpy.ndarray.copy") (_self_, _fortran_). Make a copy of the _old_ array. The returned array is always aligned and writeable with data interpreted the same as the old array.
If _order_ is `NPY_CORDER`, then a C-style contiguous array is returned. If _order_ is `NPY_FORTRANORDER`, then a Fortran-style contiguous array is returned. If _order_ is `NPY_ANYORDER`, then the array returned is Fortran-style contiguous only if the old one is; otherwise, it is C-style contiguous.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_ToList([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self)

Equivalent to [`ndarray.tolist`](../generated/numpy.ndarray.tolist#numpy.ndarray.tolist "numpy.ndarray.tolist") (_self_). Return a nested Python list from _self_.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_ToString([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self, NPY_ORDER order)

Equivalent to [`ndarray.tobytes`](../generated/numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes") (_self_, _order_). Return the bytes of this array in a Python string.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_ToFile([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self, FILE *fp, char *sep, char *format)

Write the contents of _self_ to the file pointer _fp_ in C-style contiguous fashion. Write the data as binary bytes if _sep_ is the string “” or `NULL`. Otherwise, write the contents of _self_ as text using the _sep_ string as the item separator. Each item will be printed to the file. If the _format_ string is not `NULL` or “”, then it is a Python print statement format string showing how the items are to be written.

int PyArray_Dump([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *self, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *file, int protocol)

Pickle the object in _self_ to the given _file_ (either a string or a Python file object).
If _file_ is a Python string it is considered to be the name of a file which is then opened in binary mode. If _protocol_ is negative, the highest available protocol is used; otherwise the given _protocol_ is used. This is a simple wrapper around cPickle.dump(_self_, _file_, _protocol_).

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_Dumps([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *self, int protocol)

Pickle the object in _self_ to a Python string and return it. Use the pickle _protocol_ provided (or the highest available if _protocol_ is negative).

int PyArray_FillWithScalar([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj)

Fill the array, _arr_, with the given scalar object, _obj_. The object is first converted to the data type of _arr_, and then copied into every location. A -1 is returned if an error occurs, otherwise 0 is returned.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_View([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *dtype, [PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "\(in Python v3.13\)") *ptype)

Equivalent to [`ndarray.view`](../generated/numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") (_self_, _dtype_). Return a new view of the array _self_ as possibly a different data-type, _dtype_, and different array subclass _ptype_. If _dtype_ is `NULL`, then the returned array will have the same data type as _self_. The new data-type must be consistent with the size of _self_. Either the itemsizes must be identical, or _self_ must be single-segment and the total number of bytes must be the same.
In the latter case the dimensions of the returned array will be altered in the last (or first for Fortran-style contiguous arrays) dimension. The data area of the returned array and self is exactly the same.

### Shape Manipulation

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_Newshape([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self, [PyArray_Dims](types-and-structures#c.PyArray_Dims "PyArray_Dims") *newshape, NPY_ORDER order)

Result will be a new array (pointing to the same memory location as _self_ if possible), but having a shape given by _newshape_. If the new shape is not compatible with the strides of _self_, then a copy of the array with the new specified shape will be returned.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_Reshape([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *shape)

Equivalent to [`ndarray.reshape`](../generated/numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") (_self_, _shape_) where _shape_ is a sequence. Converts _shape_ to a [`PyArray_Dims`](types-and-structures#c.PyArray_Dims "PyArray_Dims") structure and calls `PyArray_Newshape` internally. Provided for backward compatibility – not recommended.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_Squeeze([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self)

Equivalent to [`ndarray.squeeze`](../generated/numpy.ndarray.squeeze#numpy.ndarray.squeeze "numpy.ndarray.squeeze") (_self_). Return a new view of _self_ with all of the dimensions of length 1 removed from the shape.

Warning

matrix objects are always 2-dimensional. Therefore, `PyArray_Squeeze` has no effect on arrays of the matrix sub-class.
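The view rules above (matching total bytes, with the last dimension absorbing an itemsize change) and the effect of squeeze can be checked from Python; a small sketch:

```python
import numpy as np

a = np.zeros((2, 2), dtype=np.int32)
# Viewing with a smaller itemsize grows the last dimension:
# each row of 2 int32 values becomes 8 int8 values.
b = a.view(np.int8)
print(b.shape)  # (2, 8)

# The data area is shared, not copied.
b[0, 0] = 1
print(a[0, 0] != 0)  # True

# squeeze removes all length-1 dimensions.
print(np.zeros((1, 3, 1)).squeeze().shape)  # (3,)
```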
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_SwapAxes([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self, int a1, int a2)

Equivalent to [`ndarray.swapaxes`](../generated/numpy.ndarray.swapaxes#numpy.ndarray.swapaxes "numpy.ndarray.swapaxes") (_self_, _a1_, _a2_). The returned array is a new view of the data in _self_ with the given axes, _a1_ and _a2_, swapped.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_Resize([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self, [PyArray_Dims](types-and-structures#c.PyArray_Dims "PyArray_Dims") *newshape, int refcheck, NPY_ORDER fortran)

Equivalent to [`ndarray.resize`](../generated/numpy.ndarray.resize#numpy.ndarray.resize "numpy.ndarray.resize") (_self_, _newshape_, refcheck= _refcheck_, order= fortran). This function only works on single-segment arrays. It changes the shape of _self_ in place and will reallocate the memory for _self_ if _newshape_ has a different total number of elements than the old shape. If reallocation is necessary, then _self_ must own its data, have _self_ ->base==NULL, have _self_ ->weakrefs==NULL, and (unless refcheck is 0) not be referenced by any other array. The fortran argument can be `NPY_ANYORDER`, `NPY_CORDER`, or `NPY_FORTRANORDER`. It currently has no effect. Eventually it could be used to determine how the resize operation should view the data when constructing a differently-dimensioned array. Returns None on success and NULL on error.
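The Python-level `ndarray.resize` shows the in-place reallocation behavior, including zero-filling newly added elements (`refcheck=False` bypasses the reference-count check described above):

```python
import numpy as np

a = np.array([1, 2, 3])
a.resize(5, refcheck=False)  # reallocates and zero-fills in place
print(list(a))  # [1, 2, 3, 0, 0]
```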
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_Transpose([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self, [PyArray_Dims](types-and-structures#c.PyArray_Dims "PyArray_Dims") *permute)

Equivalent to [`ndarray.transpose`](../generated/numpy.ndarray.transpose#numpy.ndarray.transpose "numpy.ndarray.transpose") (_self_, _permute_). Permute the axes of the ndarray object _self_ according to the data structure _permute_ and return the result. If _permute_ is `NULL`, then the resulting array has its axes reversed. For example, if _self_ has shape \\(10\times20\times30\\) and _permute_ `.ptr` is (0,2,1), the shape of the result is \\(10\times30\times20.\\) If _permute_ is `NULL`, the shape of the result is \\(30\times20\times10.\\)

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_Flatten([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self, NPY_ORDER order)

Equivalent to [`ndarray.flatten`](../generated/numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten") (_self_, _order_). Return a 1-d copy of the array. If _order_ is `NPY_FORTRANORDER` the elements are scanned out in Fortran order (first dimension varies the fastest). If _order_ is `NPY_CORDER`, the elements of `self` are scanned in C-order (last dimension varies the fastest). If _order_ is `NPY_ANYORDER`, then the result of `PyArray_ISFORTRAN` (_self_) is used to determine which order to flatten.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_Ravel([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self, NPY_ORDER order)

Equivalent to _self_.ravel(_order_). Same basic functionality as `PyArray_Flatten` (_self_, _order_) except that if _order_ is 0 and _self_ is C-style contiguous, the shape is altered but no copy is performed.
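The permute and flatten-order semantics are easiest to verify from the Python side; a small sketch:

```python
import numpy as np

a = np.zeros((10, 20, 30))
# An explicit permutation, as with permute.ptr == (0, 2, 1):
print(np.transpose(a, (0, 2, 1)).shape)  # (10, 30, 20)
# axes=None reverses the axes, like a NULL permute.
print(np.transpose(a).shape)             # (30, 20, 10)

# Flatten order: 'F' scans the first dimension fastest.
m = np.array([[1, 2], [3, 4]])
print(m.flatten('C'))  # [1 2 3 4]
print(m.flatten('F'))  # [1 3 2 4]
```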
### Item selection and manipulation [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_TakeFrom([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*indices, intaxis, [PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*ret, NPY_CLIPMODEclipmode) Equivalent to [`ndarray.take`](../generated/numpy.ndarray.take#numpy.ndarray.take "numpy.ndarray.take") (_self_ , _indices_ , _axis_ , _ret_ , _clipmode_) except _axis_ =None in Python is obtained by setting _axis_ = `NPY_MAXDIMS` in C. Extract the items from self indicated by the integer-valued _indices_ along the given _axis._ The clipmode argument can be `NPY_RAISE`, `NPY_WRAP`, or `NPY_CLIP` to indicate what to do with out-of-bound indices. The _ret_ argument can specify an output array rather than having one created internally. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_PutTo([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*values, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*indices, NPY_CLIPMODEclipmode) Equivalent to _self_.put(_values_ , _indices_ , _clipmode_ ). Put _values_ into _self_ at the corresponding (flattened) _indices_. If _values_ is too small it will be repeated as necessary. 
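The clip modes of `PyArray_TakeFrom` and the repeat-to-fill behavior of `PyArray_PutTo` have direct Python counterparts in `ndarray.take` and `ndarray.put`; a small sketch:

```python
import numpy as np

a = np.array([10, 20, 30])
# Out-of-bounds index 4: wrap subtracts len until in range (4 - 3 == 1),
# clip pins it to the last valid index.
print(a.take([4], mode='wrap'))  # [20]
print(a.take([4], mode='clip'))  # [30]
# mode='raise' (the default) raises IndexError instead.

# put uses flattened indices; values repeat as necessary.
b = np.zeros(5, dtype=int)
b.put([0, 2, 4], [7, 8])  # values cycle: 7, 8, 7
print(list(b))  # [7, 0, 8, 0, 7]
```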
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_PutMask([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*values, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*mask) Place the _values_ in _self_ wherever corresponding positions (using a flattened context) in _mask_ are true. The _mask_ and _self_ arrays must have the same total number of elements. If _values_ is too small, it will be repeated as necessary. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_Repeat([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op, intaxis) Equivalent to [`ndarray.repeat`](../generated/numpy.ndarray.repeat#numpy.ndarray.repeat "numpy.ndarray.repeat") (_self_ , _op_ , _axis_). Copy the elements of _self_ , _op_ times along the given _axis_. Either _op_ is a scalar integer or a sequence of length _self_ ->dimensions[ _axis_ ] indicating how many times to repeat each item along the axis. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_Choose([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*ret, NPY_CLIPMODEclipmode) Equivalent to [`ndarray.choose`](../generated/numpy.ndarray.choose#numpy.ndarray.choose "numpy.ndarray.choose") (_self_ , _op_ , _ret_ , _clipmode_). Create a new array by selecting elements from the sequence of arrays in _op_ based on the integer values in _self_. 
The arrays must all be broadcastable to the same shape and the entries in _self_ should be between 0 and len(_op_). The output is placed in _ret_ unless it is `NULL`, in which case a new output is created. The _clipmode_ argument determines the behavior for entries in _self_ that are not between 0 and len(_op_):

* `NPY_RAISE`: raise a ValueError;
* `NPY_WRAP`: wrap values < 0 by adding len(_op_) and values >= len(_op_) by subtracting len(_op_) until they are in range;
* `NPY_CLIP`: clip all values to the region [0, len(_op_)).

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_Sort([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self, int axis, NPY_SORTKIND kind) Equivalent to [`ndarray.sort`](../generated/numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort") (_self_, _axis_, _kind_). Return an array with the items of _self_ sorted along _axis_. The array is sorted using the algorithm denoted by _kind_, an integer/enum selecting which sorting algorithm to use. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_ArgSort([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self, int axis) Equivalent to [`ndarray.argsort`](../generated/numpy.ndarray.argsort#numpy.ndarray.argsort "numpy.ndarray.argsort") (_self_, _axis_). Return an array of indices such that selection of these indices along the given `axis` would return a sorted version of _self_. If _self_->descr is a data-type with fields defined, then self->descr->names is used to determine the sort order. A comparison where the first field is equal will use the second field and so on. To alter the sort order of a structured array, create a new data-type with a different order of names and construct a view of the array with that new data-type.
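The _clipmode_ handling of `PyArray_Choose` matches the Python-level `np.choose`; a quick sketch of the wrap and clip modes:

```python
import numpy as np

index = np.array([0, 3, 1])          # 3 is out of range for 3 choices
choices = [np.array([10, 10, 10]),
           np.array([20, 20, 20]),
           np.array([30, 30, 30])]

# 'clip' pins 3 to the last choice (2); 'wrap' reduces it modulo 3 to 0.
print(np.choose(index, choices, mode='clip'))  # [10 30 20]
print(np.choose(index, choices, mode='wrap'))  # [10 10 20]
```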
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_LexSort([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*sort_keys, intaxis) Given a sequence of arrays (_sort_keys_) of the same shape, return an array of indices (similar to `PyArray_ArgSort` (…)) that would sort the arrays lexicographically. A lexicographic sort specifies that when two keys are found to be equal, the order is based on comparison of subsequent keys. A merge sort (which leaves equal entries unmoved) is required to be defined for the types. The sort is accomplished by sorting the indices first using the first _sort_key_ and then using the second _sort_key_ and so forth. This is equivalent to the lexsort(_sort_keys_ , _axis_) Python command. Because of the way the merge-sort works, be sure to understand the order the _sort_keys_ must be in (reversed from the order you would use when comparing two elements). If these arrays are all collected in a structured array, then `PyArray_Sort` (…) can also be used to sort the array directly. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_SearchSorted([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*values, NPY_SEARCHSIDEside, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*perm) Equivalent to [`ndarray.searchsorted`](../generated/numpy.ndarray.searchsorted#numpy.ndarray.searchsorted "numpy.ndarray.searchsorted") (_self_ , _values_ , _side_ , _perm_). Assuming _self_ is a 1-d array in ascending order, then the output is an array of indices the same shape as _values_ such that, if the elements in _values_ were inserted before the indices, the order of _self_ would be preserved. No checking is done on whether or not self is in ascending order. 
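At the Python level this behavior is exposed as `np.searchsorted`; a brief illustration, including searching an unsorted array through a sorting permutation (the role played by the _perm_ argument in C):

```python
import numpy as np

a = np.array([1, 2, 3, 5])
# 'left' gives the first suitable insertion point, 'right' the last.
print(np.searchsorted(a, 3, side='left'))    # 2
print(np.searchsorted(a, 3, side='right'))   # 3

# An unsorted array can be searched via a sorter permutation.
u = np.array([3, 1, 2])
order = np.argsort(u)                        # indices that sort u
print(np.searchsorted(u, 2, sorter=order))   # 1
```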
The _side_ argument indicates whether the index returned should be that of the first suitable location (if `NPY_SEARCHLEFT`) or of the last (if `NPY_SEARCHRIGHT`). The _perm_ argument, if not `NULL`, must be a 1-d array of integer indices the same length as _self_ that sorts it into ascending order. This is typically the result of a call to `PyArray_ArgSort` (…). Binary search is used to find the required insertion points. int PyArray_Partition([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *ktharray, int axis, NPY_SELECTKIND which) Equivalent to [`ndarray.partition`](../generated/numpy.ndarray.partition#numpy.ndarray.partition "numpy.ndarray.partition") (_self_, _ktharray_, _axis_, _which_). Partitions the array so that the values of the element indexed by _ktharray_ are in the positions they would occupy if the array were fully sorted, placing all elements smaller than the kth element before it and all elements equal to or greater than it after it. The ordering of all elements within the partitions is undefined. If _self_->descr is a data-type with fields defined, then self->descr->names is used to determine the sort order. A comparison where the first field is equal will use the second field and so on. To alter the sort order of a structured array, create a new data-type with a different order of names and construct a view of the array with that new data-type. Returns zero on success and -1 on failure. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_ArgPartition([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *op, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *ktharray, int axis, NPY_SELECTKIND which) Equivalent to [`ndarray.argpartition`](../generated/numpy.ndarray.argpartition#numpy.ndarray.argpartition "numpy.ndarray.argpartition") (_self_, _ktharray_, _axis_, _which_).
Return an array of indices such that selection of these indices along the given `axis` would return a partitioned version of _self_. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_Diagonal([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, intoffset, intaxis1, intaxis2) Equivalent to [`ndarray.diagonal`](../generated/numpy.ndarray.diagonal#numpy.ndarray.diagonal "numpy.ndarray.diagonal") (_self_ , _offset_ , _axis1_ , _axis2_ ). Return the _offset_ diagonals of the 2-d arrays defined by _axis1_ and _axis2_. [npy_intp](dtype#c.npy_intp "npy_intp")PyArray_CountNonzero([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self) Counts the number of non-zero elements in the array object _self_. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_Nonzero([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self) Equivalent to [`ndarray.nonzero`](../generated/numpy.ndarray.nonzero#numpy.ndarray.nonzero "numpy.ndarray.nonzero") (_self_). Returns a tuple of index arrays that select elements of _self_ that are nonzero. If (nd= `PyArray_NDIM` ( `self` ))==1, then a single index array is returned. The index arrays have data type [`NPY_INTP`](dtype#c.NPY_TYPES.NPY_INTP "NPY_INTP"). If a tuple is returned (nd \\(\neq\\) 1), then its length is nd. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_Compress([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*condition, intaxis, [PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.compress`](../generated/numpy.ndarray.compress#numpy.ndarray.compress "numpy.ndarray.compress") (_self_ , _condition_ , _axis_ ). 
Return the elements along _axis_ corresponding to elements of _condition_ that are true. ### Calculation Tip Pass in `NPY_RAVEL_AXIS` for axis in order to achieve the same effect that is obtained by passing in `axis=None` in Python (treating the array as a 1-d array). Note The out argument specifies where to place the result. If out is NULL, then the output array is created, otherwise the output is placed in out which must be the correct size and type. A new reference to the output array is always returned even when out is not NULL. The caller of the routine has the responsibility to `Py_DECREF` out if not NULL or a memory-leak will occur. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_ArgMax([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, intaxis, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.argmax`](../generated/numpy.ndarray.argmax#numpy.ndarray.argmax "numpy.ndarray.argmax") (_self_ , _axis_). Return the index of the largest element of _self_ along _axis_. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_ArgMin([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, intaxis, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.argmin`](../generated/numpy.ndarray.argmin#numpy.ndarray.argmin "numpy.ndarray.argmin") (_self_ , _axis_). Return the index of the smallest element of _self_ along _axis_. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_Max([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, intaxis, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.max`](../generated/numpy.ndarray.max#numpy.ndarray.max "numpy.ndarray.max") (_self_ , _axis_). 
Returns the largest element of _self_ along the given _axis_. When the result is a single element, returns a numpy scalar instead of an ndarray. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_Min([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, intaxis, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.min`](../generated/numpy.ndarray.min#numpy.ndarray.min "numpy.ndarray.min") (_self_ , _axis_). Return the smallest element of _self_ along the given _axis_. When the result is a single element, returns a numpy scalar instead of an ndarray. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_Ptp([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, intaxis, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Return the difference between the largest element of _self_ along _axis_ and the smallest element of _self_ along _axis_. When the result is a single element, returns a numpy scalar instead of an ndarray. Note The rtype argument specifies the data-type the reduction should take place over. This is important if the data-type of the array is not “large” enough to handle the output. By default, all integer data-types are made at least as large as [`NPY_LONG`](dtype#c.NPY_TYPES.NPY_LONG "NPY_LONG") for the “add” and “multiply” ufuncs (which form the basis for mean, sum, cumsum, prod, and cumprod functions). [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_Mean([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, intaxis, intrtype, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.mean`](../generated/numpy.ndarray.mean#numpy.ndarray.mean "numpy.ndarray.mean") (_self_ , _axis_ , _rtype_). 
Returns the mean of the elements along the given _axis_ , using the enumerated type _rtype_ as the data type to sum in. Default sum behavior is obtained using [`NPY_NOTYPE`](dtype#c.NPY_NOTYPE "NPY_NOTYPE") for _rtype_. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_Trace([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, intoffset, intaxis1, intaxis2, intrtype, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.trace`](../generated/numpy.ndarray.trace#numpy.ndarray.trace "numpy.ndarray.trace") (_self_ , _offset_ , _axis1_ , _axis2_ , _rtype_). Return the sum (using _rtype_ as the data type of summation) over the _offset_ diagonal elements of the 2-d arrays defined by _axis1_ and _axis2_ variables. A positive offset chooses diagonals above the main diagonal. A negative offset selects diagonals below the main diagonal. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_Clip([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*min, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*max) Equivalent to [`ndarray.clip`](../generated/numpy.ndarray.clip#numpy.ndarray.clip "numpy.ndarray.clip") (_self_ , _min_ , _max_). Clip an array, _self_ , so that values larger than _max_ are fixed to _max_ and values less than _min_ are fixed to _min_. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_Conjugate([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, [PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.conjugate`](../generated/numpy.ndarray.conjugate#numpy.ndarray.conjugate "numpy.ndarray.conjugate") (_self_). 
Return the complex conjugate of _self_. If _self_ is not of complex data type, a new reference to _self_ is returned. Parameters: * **self** – Input array. * **out** – Output array. If provided, the result is placed into this array. Returns: The complex conjugate of _self_. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_Round([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self, int decimals, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *out) Equivalent to [`ndarray.round`](../generated/numpy.ndarray.round#numpy.ndarray.round "numpy.ndarray.round") (_self_, _decimals_, _out_). Returns the array with elements rounded to the nearest decimal place. The decimal place is defined as the \\(10^{-\textrm{decimals}}\\) digit so that negative _decimals_ cause rounding to the nearest 10’s, 100’s, etc. If out is `NULL`, then the output array is created, otherwise the output is placed in _out_ which must be the correct size and type. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_Std([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self, int axis, int rtype, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *out) Equivalent to [`ndarray.std`](../generated/numpy.ndarray.std#numpy.ndarray.std "numpy.ndarray.std") (_self_, _axis_, _rtype_). Return the standard deviation using data along _axis_ converted to data type _rtype_. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_Sum([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self, int axis, int rtype, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *out) Equivalent to [`ndarray.sum`](../generated/numpy.ndarray.sum#numpy.ndarray.sum "numpy.ndarray.sum") (_self_, _axis_, _rtype_). Return 1-d vector sums of elements in _self_ along _axis_.
Perform the sum after converting data to data type _rtype_. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_CumSum([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, intaxis, intrtype, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.cumsum`](../generated/numpy.ndarray.cumsum#numpy.ndarray.cumsum "numpy.ndarray.cumsum") (_self_ , _axis_ , _rtype_). Return cumulative 1-d sums of elements in _self_ along _axis_. Perform the sum after converting data to data type _rtype_. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_Prod([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, intaxis, intrtype, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.prod`](../generated/numpy.ndarray.prod#numpy.ndarray.prod "numpy.ndarray.prod") (_self_ , _axis_ , _rtype_). Return 1-d products of elements in _self_ along _axis_. Perform the product after converting data to data type _rtype_. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_CumProd([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, intaxis, intrtype, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.cumprod`](../generated/numpy.ndarray.cumprod#numpy.ndarray.cumprod "numpy.ndarray.cumprod") (_self_ , _axis_ , _rtype_). Return 1-d cumulative products of elements in `self` along `axis`. Perform the product after converting data to data type `rtype`. 
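The effect of _rtype_ corresponds to the `dtype` argument of the Python-level reductions: forcing a small accumulator type makes the reduction wrap, while the default accumulator is widened. A short illustration:

```python
import numpy as np

a = np.full(300, 200, dtype=np.uint8)

# By default the integer accumulator is widened (at least NPY_LONG-sized).
print(a.sum())                 # 60000

# Forcing the reduction dtype (the rtype analogue) keeps uint8 arithmetic,
# so the result wraps modulo 256: 60000 % 256 == 96.
print(a.sum(dtype=np.uint8))   # 96
```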
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_All([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, intaxis, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.all`](../generated/numpy.ndarray.all#numpy.ndarray.all "numpy.ndarray.all") (_self_ , _axis_). Return an array with True elements for every 1-d sub-array of `self` defined by `axis` in which all the elements are True. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_Any([PyArrayObject](types-and- structures#c.PyArrayObject "PyArrayObject")*self, intaxis, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.any`](../generated/numpy.ndarray.any#numpy.ndarray.any "numpy.ndarray.any") (_self_ , _axis_). Return an array with True elements for every 1-d sub-array of _self_ defined by _axis_ in which any of the elements are True. ## Functions ### Array Functions intPyArray_AsCArray([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")**op, void*ptr, [npy_intp](dtype#c.npy_intp "npy_intp")*dims, intnd, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*typedescr) Sometimes it is useful to access a multidimensional array as a C-style multi- dimensional array so that algorithms can be implemented using C’s a[i][j][k] syntax. This routine returns a pointer, _ptr_ , that simulates this kind of C-style array, for 1-, 2-, and 3-d ndarrays. Parameters: * **op** – The address to any Python object. This Python object will be replaced with an equivalent well-behaved, C-style contiguous, ndarray of the given data type specified by the last two arguments. Be sure that stealing a reference in this way to the input object is justified. 
* **ptr** – The address to a (ctype* for 1-d, ctype** for 2-d or ctype*** for 3-d) variable where ctype is the equivalent C-type for the data type. On return, _ptr_ will be addressable as a 1-d, 2-d, or 3-d array. * **dims** – An output array that contains the shape of the array object. This array gives boundaries on any looping that will take place. * **nd** – The dimensionality of the array (1, 2, or 3). * **typedescr** – A [`PyArray_Descr`](types-and-structures#c.PyArray_Descr "PyArray_Descr") structure indicating the desired data-type (including required byteorder). The call will steal a reference to the parameter. Note The simulation of a C-style array is not complete for 2-d and 3-d arrays. For example, the simulated arrays of pointers cannot be passed to subroutines expecting specific, statically-defined 2-d and 3-d arrays. To pass to functions requiring those kinds of inputs, you must statically define the required array and copy data. int PyArray_Free([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *op, void *ptr) Must be called with the same objects and memory locations returned from `PyArray_AsCArray` (…). This function cleans up memory that otherwise would be leaked. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_Concatenate([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj, int axis) Join the sequence of objects in _obj_ together along _axis_ into a single array. If the dimensions or types are not compatible, an error is raised. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_InnerProduct([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj1, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj2) Compute a product-sum over the last dimensions of _obj1_ and _obj2_.
Neither array is conjugated. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_MatrixProduct([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj1, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj2) Compute a product-sum over the last dimension of _obj1_ and the second-to-last dimension of _obj2_. For 2-d arrays this is a matrix product. Neither array is conjugated. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_MatrixProduct2([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj1, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj2, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *out) Same as PyArray_MatrixProduct, but store the result in _out_. The output array must have the correct shape, type, and be C-contiguous, or an exception is raised. [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *PyArray_EinsteinSum(char *subscripts, [npy_intp](dtype#c.npy_intp "npy_intp") nop, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") **op_in, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *dtype, NPY_ORDER order, NPY_CASTING casting, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *out) Applies the Einstein summation convention to the array operands provided, returning a new array or placing the result in _out_. The string in _subscripts_ is a comma-separated list of index letters. The number of operands is in _nop_, and _op_in_ is an array containing those operands. The data type of the output can be forced with _dtype_, the output order can be forced with _order_ (`NPY_KEEPORDER` is recommended), and when _dtype_ is specified, _casting_ indicates how permissive the data conversion should be.
See the [`einsum`](../generated/numpy.einsum#numpy.einsum "numpy.einsum") function for more details. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_Correlate([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *op1, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *op2, int mode) Compute the 1-d correlation of the 1-d arrays _op1_ and _op2_. The correlation is computed at each output point by multiplying _op1_ by a shifted version of _op2_ and summing the result. As a result of the shift, needed values outside of the defined range of _op1_ and _op2_ are interpreted as zero. The mode determines how many shifts to return: 0 - return only shifts that did not need to assume zero values; 1 - return an object that is the same size as _op1_; 2 - return all possible shifts (any overlap at all is accepted).

#### Notes

This does not compute the usual correlation: if op2 is larger than op1, the arguments are swapped, and the conjugate is never taken for complex arrays. See PyArray_Correlate2 for the usual signal-processing correlation. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_Correlate2([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *op1, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *op2, int mode) Updated version of PyArray_Correlate, which uses the usual definition of correlation for 1-d arrays. The correlation is computed at each output point by multiplying _op1_ by a shifted version of _op2_ and summing the result. As a result of the shift, needed values outside of the defined range of _op1_ and _op2_ are interpreted as zero.
The mode determines how many shifts to return: 0 - return only shifts that did not need to assume zero values; 1 - return an object that is the same size as _op1_; 2 - return all possible shifts (any overlap at all is accepted).

#### Notes

Compute z as follows:

```
z[k] = sum_n op1[n] * conj(op2[n+k])
```

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_Where([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *condition, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *x, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *y) If both `x` and `y` are `NULL`, then return `PyArray_Nonzero` (_condition_). Otherwise, both _x_ and _y_ must be given and the object returned is shaped like _condition_ and has elements of _x_ and _y_ where _condition_ is respectively True or False.

### Other functions

[npy_bool](dtype#c.npy_bool "npy_bool") PyArray_CheckStrides(int elsize, int nd, [npy_intp](dtype#c.npy_intp "npy_intp") numbytes, [npy_intp](dtype#c.npy_intp "npy_intp") const *dims, [npy_intp](dtype#c.npy_intp "npy_intp") const *newstrides) Determine if _newstrides_ is a strides array consistent with the memory of an _nd_-dimensional array with shape `dims` and element size _elsize_. The _newstrides_ array is checked to see if jumping by the provided number of bytes in each direction will ever mean jumping more than _numbytes_, which is the assumed size of the available memory segment. If _numbytes_ is 0, then an equivalent _numbytes_ is computed assuming _nd_, _dims_, and _elsize_ refer to a single-segment array. Return `NPY_TRUE` if _newstrides_ is acceptable, otherwise return `NPY_FALSE`.
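The consistency condition behind `PyArray_CheckStrides` can be modeled in a few lines: starting at offset zero and taking the largest possible jump in every dimension, plus one element, must not exceed the buffer size. A simplified Python sketch (not the NumPy source, and ignoring the numbytes-is-zero case):

```python
def strides_ok(elsize, dims, strides, numbytes):
    """Return True if jumping (dims[i]-1) strides in every dimension,
    plus one element, stays within a numbytes-sized buffer."""
    reach = elsize + sum((d - 1) * abs(s) for d, s in zip(dims, strides))
    return reach <= numbytes

# A C-contiguous 3x4 int32 array: strides (16, 4), buffer of 48 bytes.
print(strides_ok(4, (3, 4), (16, 4), 48))   # True:  4 + 32 + 12 == 48
print(strides_ok(4, (3, 4), (16, 5), 48))   # False: reaches byte 51
```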
[npy_intp](dtype#c.npy_intp "npy_intp")PyArray_MultiplyList([npy_intp](dtype#c.npy_intp "npy_intp")const*seq, intn) intPyArray_MultiplyIntList(intconst*seq, intn) Both of these routines multiply an _n_ -length array, _seq_ , of integers and return the result. No overflow checking is performed. intPyArray_CompareLists([npy_intp](dtype#c.npy_intp "npy_intp")const*l1, [npy_intp](dtype#c.npy_intp "npy_intp")const*l2, intn) Given two _n_ -length arrays of integers, _l1_ , and _l2_ , return 1 if the lists are identical; otherwise, return 0. ## Auxiliary data with object semantics typeNpyAuxData When working with more complex dtypes which are composed of other dtypes, such as the struct dtype, creating inner loops that manipulate the dtypes requires carrying along additional data. NumPy supports this idea through a struct `NpyAuxData`, mandating a few conventions so that it is possible to do this. Defining an `NpyAuxData` is similar to defining a class in C++, but the object semantics have to be tracked manually since the API is in C. Here’s an example for a function which doubles up an element using an element copier function as a primitive. 
```c
typedef struct {
    NpyAuxData base;
    ElementCopier_Func *func;
    NpyAuxData *funcdata;
} eldoubler_aux_data;

void free_element_doubler_aux_data(NpyAuxData *data)
{
    eldoubler_aux_data *d = (eldoubler_aux_data *)data;
    /* Free the memory owned by this auxdata */
    NPY_AUXDATA_FREE(d->funcdata);
    PyArray_free(d);
}

NpyAuxData *clone_element_doubler_aux_data(NpyAuxData *data)
{
    eldoubler_aux_data *ret = PyArray_malloc(sizeof(eldoubler_aux_data));
    if (ret == NULL) {
        return NULL;
    }

    /* Raw copy of all data */
    memcpy(ret, data, sizeof(eldoubler_aux_data));

    /* Fix up the owned auxdata so we have our own copy */
    ret->funcdata = NPY_AUXDATA_CLONE(ret->funcdata);
    if (ret->funcdata == NULL) {
        PyArray_free(ret);
        return NULL;
    }

    return (NpyAuxData *)ret;
}

NpyAuxData *create_element_doubler_aux_data(
                            ElementCopier_Func *func,
                            NpyAuxData *funcdata)
{
    eldoubler_aux_data *ret = PyArray_malloc(sizeof(eldoubler_aux_data));
    if (ret == NULL) {
        PyErr_NoMemory();
        return NULL;
    }
    /* Zero the allocation itself, not the address of the local pointer */
    memset(ret, 0, sizeof(eldoubler_aux_data));
    /* base is embedded by value, so access its members with '.' */
    ret->base.free = &free_element_doubler_aux_data;
    ret->base.clone = &clone_element_doubler_aux_data;
    ret->func = func;
    ret->funcdata = funcdata;

    return (NpyAuxData *)ret;
}
```

type NpyAuxData_FreeFunc The function pointer type for NpyAuxData free functions. type NpyAuxData_CloneFunc The function pointer type for NpyAuxData clone functions. These functions should never set the Python exception on error, because they may be called from a multi-threaded context. void NPY_AUXDATA_FREE(NpyAuxData *auxdata) A macro which calls the auxdata’s free function appropriately; does nothing if auxdata is NULL. NpyAuxData *NPY_AUXDATA_CLONE(NpyAuxData *auxdata) A macro which calls the auxdata’s clone function appropriately, returning a deep copy of the auxiliary data.

## Array iterators

As of NumPy 1.6.0, these array iterators are superseded by the new array iterator, [`NpyIter`](iterator#c.NpyIter "NpyIter").
An array iterator is a simple way to access the elements of an N-dimensional array quickly and efficiently, as seen in [the example](iterator#iteration- example) which provides more description of this useful approach to looping over an array from C. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_IterNew([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*arr) Return an array iterator object from the array, _arr_. This is equivalent to _arr_. **flat**. The array iterator object makes it easy to loop over an N-dimensional non-contiguous array in C-style contiguous fashion. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_IterAllButAxis([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*arr, int*axis) Return an array iterator that will iterate over all axes but the one provided in _*axis_. The returned iterator cannot be used with `PyArray_ITER_GOTO1D`. This iterator could be used to write something similar to what ufuncs do wherein the loop over the largest axis is done by a separate sub-routine. If _*axis_ is negative then _*axis_ will be set to the axis having the smallest stride and that axis will be used. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_BroadcastToShape([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*arr, [npy_intp](dtype#c.npy_intp "npy_intp")const*dimensions, intnd) Return an array iterator that is broadcast to iterate as an array of the shape provided by _dimensions_ and _nd_. intPyArrayIter_Check([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*op) Evaluates true if _op_ is an array iterator (or instance of a subclass of the array iterator type). 
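At the Python level the iterator returned by `PyArray_IterNew` is exposed as `arr.flat`: a C-order traversal even when the underlying memory is not contiguous, e.g.:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
# a.T is a non-contiguous view; .flat still visits it in C order.
print(list(a.T.flat))   # [0, 3, 1, 4, 2, 5]
```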
voidPyArray_ITER_RESET([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*iterator) Reset an _iterator_ to the beginning of the array. voidPyArray_ITER_NEXT([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*iterator) Increment the index and the dataptr members of the _iterator_ to point to the next element of the array. If the array is not (C-style) contiguous, also increment the N-dimensional coordinates array. void*PyArray_ITER_DATA([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*iterator) A pointer to the current element of the array. voidPyArray_ITER_GOTO([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*iterator, [npy_intp](dtype#c.npy_intp "npy_intp")*destination) Set the _iterator_ index, dataptr, and coordinates members to the location in the array indicated by the N-dimensional c-array, _destination_ , which must have size at least _iterator_ ->nd_m1+1. voidPyArray_ITER_GOTO1D([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*iterator, [npy_intp](dtype#c.npy_intp "npy_intp")index) Set the _iterator_ index and dataptr to the location in the array indicated by the integer _index_ which points to an element in the C-styled flattened array. intPyArray_ITER_NOTDONE([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*iterator) Evaluates TRUE as long as the iterator has not looped through all of the elements, otherwise it evaluates FALSE. ## Broadcasting (multi-iterators) [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*PyArray_MultiIterNew(intnum, ...) A simplified interface to broadcasting. 
This function takes the number of arrays to broadcast and then _num_ extra ([`PyObject *`](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")) arguments. These arguments are converted to arrays and iterators are created. `PyArray_Broadcast` is then called on the resulting multi-iterator object. The resulting, broadcasted multi-iterator object is then returned. A broadcasted operation can then be performed using a single loop and using `PyArray_MultiIter_NEXT`.

void PyArray_MultiIter_RESET([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *multi)

Reset all the iterators to the beginning in a multi-iterator object, _multi_.

void PyArray_MultiIter_NEXT([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *multi)

Advance each iterator in a multi-iterator object, _multi_, to its next (broadcasted) element.

void *PyArray_MultiIter_DATA([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *multi, int i)

Return the data-pointer of the _i_ \\(^{\textrm{th}}\\) iterator in a multi-iterator object.

void PyArray_MultiIter_NEXTi([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *multi, int i)

Advance the pointer of only the _i_ \\(^{\textrm{th}}\\) iterator.

void PyArray_MultiIter_GOTO([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *multi, [npy_intp](dtype#c.npy_intp "npy_intp") *destination)

Advance each iterator in a multi-iterator object, _multi_, to the given \\(N\\)-dimensional _destination_, where \\(N\\) is the number of dimensions in the broadcasted array.
void PyArray_MultiIter_GOTO1D([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *multi, [npy_intp](dtype#c.npy_intp "npy_intp") index)

Advance each iterator in a multi-iterator object, _multi_, to the corresponding location of the _index_ into the flattened broadcasted array.

int PyArray_MultiIter_NOTDONE([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *multi)

Evaluates TRUE as long as the multi-iterator has not looped through all of the elements (of the broadcasted result), otherwise it evaluates FALSE.

[npy_intp](dtype#c.npy_intp "npy_intp") PyArray_MultiIter_SIZE([PyArrayMultiIterObject](types-and-structures#c.PyArrayMultiIterObject "PyArrayMultiIterObject") *multi)

New in version 1.26.0.

Returns the total broadcasted size of a multi-iterator object.

int PyArray_MultiIter_NDIM([PyArrayMultiIterObject](types-and-structures#c.PyArrayMultiIterObject "PyArrayMultiIterObject") *multi)

New in version 1.26.0.

Returns the number of dimensions in the broadcasted result of a multi-iterator object.

[npy_intp](dtype#c.npy_intp "npy_intp") PyArray_MultiIter_INDEX([PyArrayMultiIterObject](types-and-structures#c.PyArrayMultiIterObject "PyArrayMultiIterObject") *multi)

New in version 1.26.0.

Returns the current (1-d) index into the broadcasted result of a multi-iterator object.

int PyArray_MultiIter_NUMITER([PyArrayMultiIterObject](types-and-structures#c.PyArrayMultiIterObject "PyArrayMultiIterObject") *multi)

New in version 1.26.0.

Returns the number of iterators that are represented by a multi-iterator object.

void **PyArray_MultiIter_ITERS([PyArrayMultiIterObject](types-and-structures#c.PyArrayMultiIterObject "PyArrayMultiIterObject") *multi)

New in version 1.26.0.

Returns an array of iterator objects that holds the iterators for the arrays to be broadcast together. On return, the iterators are adjusted for broadcasting.
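At the Python level, `numpy.broadcast` mirrors the multi-iterator: its attributes correspond to `PyArray_MultiIter_SIZE`, `_NDIM`, and `_NUMITER`, and iterating it yields one element per operand as `PyArray_MultiIter_NEXT` / `PyArray_MultiIter_DATA` would in C. A small sketch:

```python
import numpy as np

x = np.array([[1], [2], [3]])   # shape (3, 1)
y = np.array([4, 5])            # shape (2,)
b = np.broadcast(x, y)

print(b.shape)    # (3, 2) -- broadcast result shape (cf. MultiIter_DIMS)
print(b.size)     # 6      -- total broadcast size (cf. MultiIter_SIZE)
print(b.ndim)     # 2      -- broadcast dimensions (cf. MultiIter_NDIM)
print(b.numiter)  # 2      -- number of iterators (cf. MultiIter_NUMITER)

# Each step of the multi-iterator yields one (broadcasted) element
# from each operand.
print(list(b))    # [(1, 4), (1, 5), (2, 4), (2, 5), (3, 4), (3, 5)]
```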
[npy_intp](dtype#c.npy_intp "npy_intp") *PyArray_MultiIter_DIMS([PyArrayMultiIterObject](types-and-structures#c.PyArrayMultiIterObject "PyArrayMultiIterObject") *multi)

New in version 1.26.0.

Returns a pointer to the dimensions/shape of the broadcasted result of a multi-iterator object.

int PyArray_Broadcast([PyArrayMultiIterObject](types-and-structures#c.PyArrayMultiIterObject "PyArrayMultiIterObject") *mit)

This function encapsulates the broadcasting rules. The _mit_ container should already contain iterators for all the arrays that need to be broadcast. On return, these iterators will be adjusted so that iteration over each simultaneously will accomplish the broadcasting. A negative number is returned if an error occurs.

int PyArray_RemoveSmallest([PyArrayMultiIterObject](types-and-structures#c.PyArrayMultiIterObject "PyArrayMultiIterObject") *mit)

This function takes a multi-iterator object that has been previously “broadcasted,” finds the dimension with the smallest “sum of strides” in the broadcasted result, and adapts all the iterators so as not to iterate over that dimension (by effectively making them of length-1 in that dimension). The corresponding dimension is returned unless _mit_->nd is 0, in which case -1 is returned. This function is useful for constructing ufunc-like routines that broadcast their inputs correctly and then call a strided 1-d version of the routine as the inner loop. This 1-d version is usually optimized for speed, and for this reason the loop should be performed over the axis that won’t require large stride jumps.

## Neighborhood iterator

Neighborhood iterators are subclasses of the iterator object, and can be used to iterate over a neighborhood of a point. For example, you may want to iterate over every voxel of a 3d image, and for every such voxel, iterate over a hypercube. Neighborhood iterators automatically handle boundaries, thus making this kind of code much easier to write than manual boundary handling, at the cost of a slight overhead.
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_NeighborhoodIterNew([PyArrayIterObject](types-and-structures#c.PyArrayIterObject "PyArrayIterObject") *iter, [npy_intp](dtype#c.npy_intp "npy_intp") *bounds, int mode, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *fill_value)

This function creates a new neighborhood iterator from an existing iterator. The neighborhood will be computed relative to the position currently pointed to by _iter_, the bounds define the shape of the neighborhood iterator, and the mode argument defines the boundary handling mode.

The _bounds_ argument is expected to be an array of size 2 * iter->ao->nd, such that the range bounds[2*i] -> bounds[2*i+1] defines the range where to walk for dimension i (both bounds are included in the walked coordinates). The bounds should be ordered for each dimension (bounds[2*i] <= bounds[2*i+1]).

The mode should be one of:

NPY_NEIGHBORHOOD_ITER_ZERO_PADDING

Zero padding. Outside-bounds values will be 0.

NPY_NEIGHBORHOOD_ITER_ONE_PADDING

One padding. Outside-bounds values will be 1.

NPY_NEIGHBORHOOD_ITER_CONSTANT_PADDING

Constant padding. Outside-bounds values will be the same as the first item in fill_value.

NPY_NEIGHBORHOOD_ITER_MIRROR_PADDING

Mirror padding. Outside-bounds values will be as if the array items were mirrored. For example, for the array [1, 2, 3, 4], x[-2] will be 2, x[-1] will be 1, x[4] will be 4, x[5] will be 3, etc…

NPY_NEIGHBORHOOD_ITER_CIRCULAR_PADDING

Circular padding. Outside-bounds values will be as if the array was repeated. For example, for the array [1, 2, 3, 4], x[-2] will be 3, x[-1] will be 4, x[4] will be 1, x[5] will be 2, etc…

If the mode is constant filling (`NPY_NEIGHBORHOOD_ITER_CONSTANT_PADDING`), fill_value should point to an array object which holds the filling value (the first item will be the filling value if the array contains more than one item). For other cases, fill_value may be NULL.
* The iterator holds a reference to iter
* Return NULL on failure (in which case the reference count of iter is not changed)
* iter itself can be a neighborhood iterator: this can be useful for, e.g., automatic boundary handling
* the object returned by this function should be safe to use as a normal iterator
* If the position of iter is changed, any subsequent call to PyArrayNeighborhoodIter_Next is undefined behavior, and PyArrayNeighborhoodIter_Reset must be called.
* If the position of iter is not the beginning of the data and the underlying data for iter is contiguous, the iterator will point to the start of the data instead of the position pointed to by iter. To avoid this situation, iter should be moved to the required position only after the creation of the iterator, and PyArrayNeighborhoodIter_Reset must be called.

    PyArrayIterObject *iter;
    PyArrayNeighborhoodIterObject *neigh_iter;
    npy_intp bounds[] = {-1, 1, -1, 1};  /* for a 3x3 kernel */
    npy_intp i, j;

    iter = (PyArrayIterObject*)PyArray_IterNew(x);
    neigh_iter = (PyArrayNeighborhoodIterObject*)PyArray_NeighborhoodIterNew(
            iter, bounds, NPY_NEIGHBORHOOD_ITER_ZERO_PADDING, NULL);

    for (i = 0; i < iter->size; ++i) {
        for (j = 0; j < neigh_iter->size; ++j) {
            /* Walk around the item currently pointed to by iter->dataptr */
            PyArrayNeighborhoodIter_Next(neigh_iter);
        }
        /* Move to the next point of iter */
        PyArrayIter_Next(iter);
        PyArrayNeighborhoodIter_Reset(neigh_iter);
    }

int PyArrayNeighborhoodIter_Reset([PyArrayNeighborhoodIterObject](types-and-structures#c.PyArrayNeighborhoodIterObject "PyArrayNeighborhoodIterObject") *iter)

Reset the iterator position to the first point of the neighborhood. This should be called whenever the iter argument given to PyArray_NeighborhoodIterNew is changed (see example).

int PyArrayNeighborhoodIter_Next([PyArrayNeighborhoodIterObject](types-and-structures#c.PyArrayNeighborhoodIterObject "PyArrayNeighborhoodIterObject") *iter)

After this call, iter->dataptr points to the next point of the neighborhood.
Calling this function after every point of the neighborhood has been visited is undefined.

## Array scalars

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_Return([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr)

This function steals a reference to _arr_.

This function checks to see if _arr_ is a 0-dimensional array and, if so, returns the appropriate array scalar. It should be used whenever 0-dimensional arrays could be returned to Python.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_Scalar(void *data, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *dtype, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *base)

Return an array scalar object of the given _dtype_ by **copying** from memory pointed to by _data_. _base_ is expected to be the array object that is the owner of the data. _base_ is required if `dtype` is a `void` scalar, or if the `NPY_USE_GETITEM` flag is set and it is known that the `getitem` method uses the `arr` argument without checking if it is `NULL`. Otherwise `base` may be `NULL`.

If the data is not in native byte order (as indicated by `dtype->byteorder`), then this function will byteswap the data, because array scalars are always in correct machine byte order.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_ToScalar(void *data, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *arr)

Return an array scalar object of the type and itemsize indicated by the array object _arr_, copied from the memory pointed to by _data_ and swapping if the data in _arr_ is not in machine byte order.
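The distinction that `PyArray_Return` manages is visible at the Python level in the difference between a 0-dimensional array and the array scalar extracted from it:

```python
import numpy as np

zero_d = np.array(3.5)          # a 0-dimensional array, not a scalar
print(type(zero_d))             # <class 'numpy.ndarray'>
print(zero_d.ndim)              # 0

# Indexing with an empty tuple extracts the array scalar, which is the
# kind of object PyArray_Return produces from a 0-d array at the C level.
scalar = zero_d[()]
print(type(scalar))             # <class 'numpy.float64'>
print(scalar == zero_d)         # True -- same value, different object kind
```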
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_FromScalar([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *scalar, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *outcode)

Return a 0-dimensional array of type determined by _outcode_ from _scalar_, which should be an array-scalar object. If _outcode_ is NULL, then the type is determined from _scalar_.

void PyArray_ScalarAsCtype([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *scalar, void *ctypeptr)

Return in _ctypeptr_ a pointer to the actual value in an array scalar. There is no error checking, so _scalar_ must be an array-scalar object, and ctypeptr must have enough space to hold the correct type. For flexible-sized types, a pointer to the data is copied into the memory of _ctypeptr_; for all other types, the actual data is copied into the address pointed to by _ctypeptr_.

int PyArray_CastScalarToCtype([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *scalar, void *ctypeptr, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *outcode)

Return the data (cast to the data type indicated by _outcode_) from the array scalar, _scalar_, into the memory pointed to by _ctypeptr_ (which must be large enough to handle the incoming memory). Returns -1 on failure, and 0 on success.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyArray_TypeObjectFromType(int type)

Returns a scalar type-object from a type-number, _type_. Equivalent to `PyArray_DescrFromType`(_type_)->typeobj except for reference counting and error-checking. Returns a new reference to the typeobject on success or `NULL` on failure.
NPY_SCALARKIND PyArray_ScalarKind(int typenum, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") **arr)

Legacy way to query special promotion for scalar values. This is not used in NumPy itself anymore and is expected to be deprecated eventually. New DTypes can define promotion rules specific to Python scalars.

int PyArray_CanCoerceScalar(char thistype, char neededtype, NPY_SCALARKIND scalar)

Legacy way to query special promotion for scalar values. This is not used in NumPy itself anymore and is expected to be deprecated eventually. Use `PyArray_ResultType` for similar purposes.

## Data-type descriptors

Warning

Data-type objects must be reference counted, so be aware of the action on the data-type reference of different C-API calls. The standard rule is that when a data-type object is returned, it is a new reference. Functions that take [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")* objects and return arrays steal references to the data-types of their inputs unless otherwise noted. Therefore, you must own a reference to any data-type object used as input to such a function.

int PyArray_DescrCheck([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj)

Evaluates as true if _obj_ is a data-type object ([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*).

[PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *PyArray_DescrNew([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *obj)

Return a new data-type object copied from _obj_ (the fields reference is just updated so that the new object points to the same fields dictionary, if any).

[PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *PyArray_DescrNewFromType(int typenum)

Create a new data-type object from the built-in (or user-registered) data-type indicated by _typenum_. The fields of builtin types should not be changed.
This creates a new copy of the [`PyArray_Descr`](types-and-structures#c.PyArray_Descr "PyArray_Descr") structure so that you can fill it in as appropriate. This function is especially needed for flexible data-types, which need to have a new elsize member in order to be meaningful in array construction.

[PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *PyArray_DescrNewByteorder([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *obj, char newendian)

Create a new data-type object with the byteorder set according to _newendian_. All referenced data-type objects (in the subdescr and fields members of the data-type object) are also changed (recursively).

The value of _newendian_ is one of these macros:

NPY_IGNORE NPY_SWAP NPY_NATIVE NPY_LITTLE NPY_BIG

If a byteorder of `NPY_IGNORE` is encountered, it is left alone. If newendian is `NPY_SWAP`, then all byte-orders are swapped. The other valid newendian values are `NPY_NATIVE`, `NPY_LITTLE`, and `NPY_BIG`, which all cause the returned data-type descriptor (and all its referenced data-type descriptors) to have the corresponding byte-order.

[PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *PyArray_DescrFromObject([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *op, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *mintype)

Determine an appropriate data-type object from the object _op_ (which should be a “nested” sequence object) and the minimum data-type descriptor mintype (which can be `NULL`). Similar in behavior to array(_op_).dtype. Don’t confuse this function with `PyArray_DescrConverter`. This function essentially looks at all the objects in the (nested) sequence and determines the data-type from the elements it finds.
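The byte-order change that `PyArray_DescrNewByteorder` performs is available at the Python level as `dtype.newbyteorder`, which likewise returns a fresh descriptor and leaves the original untouched:

```python
import numpy as np

native = np.dtype('i4')
swapped = native.newbyteorder('S')   # 'S' swaps, like NPY_SWAP
big     = native.newbyteorder('>')   # explicit big-endian, like NPY_BIG

print(native.byteorder)              # '=' (native)
# swapped is '>' on little-endian machines and '<' on big-endian ones
print(swapped.byteorder)
print(big.byteorder)
```

Note that the original `native` descriptor is unchanged by both calls, mirroring the C-API rule that a new data-type object is returned as a new reference.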
[PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *PyArray_DescrFromScalar([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *scalar)

Return a data-type object from an array-scalar object. No checking is done to be sure that _scalar_ is an array scalar. If no suitable data-type can be determined, then a data-type of [`NPY_OBJECT`](dtype#c.NPY_TYPES.NPY_OBJECT "NPY_OBJECT") is returned by default.

[PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *PyArray_DescrFromType(int typenum)

Returns a data-type object corresponding to _typenum_. The _typenum_ can be one of the enumerated types, a character code for one of the enumerated types, or a user-defined type. If you want to use a flexible size array, then you need to use a flexible typenum and set the result's `elsize` member to the desired size. The typenum is one of the [`NPY_TYPES`](dtype#c.NPY_TYPES "NPY_TYPES").

int PyArray_DescrConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") **dtype)

Convert any compatible Python object, _obj_, to a data-type object in _dtype_. A large number of Python objects can be converted to data-type objects. See [Data type objects (dtype)](../arrays.dtypes#arrays-dtypes) for a complete description. This version of the converter converts None objects to a [`NPY_DEFAULT_TYPE`](dtype#c.NPY_TYPES.NPY_DEFAULT_TYPE "NPY_DEFAULT_TYPE") data-type object. This function can be used with the “O&” character code in [`PyArg_ParseTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTuple "\(in Python v3.13\)") processing.
int PyArray_DescrConverter2([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") **dtype)

Convert any compatible Python object, _obj_, to a data-type object in _dtype_. This version of the converter converts None objects so that the returned data-type is `NULL`. This function can also be used with the “O&” character code in PyArg_ParseTuple processing.

int PyArray_DescrAlignConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") **dtype)

Like `PyArray_DescrConverter` except it aligns C-struct-like objects on word-boundaries as the compiler would.

int PyArray_DescrAlignConverter2([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") **dtype)

Like `PyArray_DescrConverter2` except it aligns C-struct-like objects on word-boundaries as the compiler would.

## Data Type Promotion and Inspection

[PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *PyArray_CommonDType(const [PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *dtype1, const [PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *dtype2)

This function defines the common DType operator. Note that the common DType will not be `object` (unless one of the DTypes is `object`). Similar to [`numpy.result_type`](../generated/numpy.result_type#numpy.result_type "numpy.result_type"), but works on the classes and not instances.
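The instance-level counterparts of the common-DType machinery are `numpy.promote_types` (two descriptors) and `numpy.result_type` (an arbitrary sequence promoted in one step). A small sketch of both:

```python
import numpy as np

# Pairwise promotion of two descriptors.
print(np.promote_types(np.float16, np.int64))   # float64
print(np.promote_types(np.int8, np.uint8))      # int16 -- neither fits the other

# Promoting a whole sequence at once.
print(np.result_type(np.int8, np.uint8, np.float32))  # float32
```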
[PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *PyArray_PromoteDTypeSequence([npy_intp](dtype#c.npy_intp "npy_intp") length, [PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") **dtypes_in)

Promotes a list of DTypes with each other in a way that should guarantee stable results even when changing the order. This function is smarter and can often return successful and unambiguous results when `common_dtype(common_dtype(dt1, dt2), dt3)` would depend on the operation order or fail. Nevertheless, DTypes should aim to ensure that their common-DType implementation is associative and commutative! (Mainly, unsigned and signed integers are not.)

For guaranteed consistent results, DTypes must implement common-DType “transitively”. If A promotes B and B promotes C, then A must generally also promote C; where “promotes” means implements the promotion. (There are some exceptions for abstract DTypes.)

In general this approach always works as long as the most generic dtype is either strictly larger than, or compatible with, all other dtypes. For example, promoting `float16` with any other float, integer, or unsigned integer again gives a floating point number.

[PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *PyArray_GetDefaultDescr(const [PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *DType)

Given a DType class, returns the default instance (descriptor). This checks for a `singleton` first and only calls the `default_descr` function if necessary.

## Custom Data Types

New in version 2.0.

These functions allow defining custom flexible data types outside of NumPy. See [NEP 42](https://numpy.org/neps/nep-0042-new-dtypes.html#nep42 "\(in NumPy Enhancement Proposals\)") for more details about the rationale and design of the new DType system. See the [numpy-user-dtypes repository](https://github.com/numpy/numpy-user-dtypes) for a number of example DTypes.
Also see [PyArray_DTypeMeta and PyArrayDTypeMeta_Spec](types-and-structures#dtypemeta) for documentation on `PyArray_DTypeMeta` and `PyArrayDTypeMeta_Spec`.

int PyArrayInitDTypeMeta_FromSpec([PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *DType, [PyArrayDTypeMeta_Spec](types-and-structures#c.PyArrayDTypeMeta_Spec "PyArrayDTypeMeta_Spec") *spec)

Initialize a new DType. It must currently be a static Python C type that is declared as [`PyArray_DTypeMeta`](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") and not [`PyTypeObject`](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "\(in Python v3.13\)"). Further, it must subclass `np.dtype` and set its type to [`PyArrayDTypeMeta_Type`](types-and-structures#c.PyArrayDTypeMeta_Type "PyArrayDTypeMeta_Type") (before calling [`PyType_Ready`](https://docs.python.org/3/c-api/type.html#c.PyType_Ready "\(in Python v3.13\)")), which has additional fields compared to a normal [`PyTypeObject`](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "\(in Python v3.13\)"). See the examples in the `numpy-user-dtypes` repository for usage with both parametric and non-parametric data types.

### Flags

Flags that can be set on the `PyArrayDTypeMeta_Spec` to initialize the DType.

NPY_DT_ABSTRACT

Indicates the DType is an abstract “base” DType in a DType hierarchy and should not be directly instantiated.

NPY_DT_PARAMETRIC

Indicates the DType is parametric and does not have a unique singleton instance.

NPY_DT_NUMERIC

Indicates the DType represents a numerical value.

### Slot IDs and API Function Typedefs

These IDs correspond to slots in the DType API and are used to identify implementations of each slot from the items of the `slots` array member of the `PyArrayDTypeMeta_Spec` struct.
NPY_DT_discover_descr_from_pyobject

typedef [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *(PyArrayDTypeMeta_DiscoverDescrFromPyobject)([PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *cls, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj)

Used during DType inference to find the correct DType for a given PyObject. Must return a descriptor instance appropriate to store the data in the Python object that is passed in. _obj_ is the Python object to inspect and _cls_ is the DType class to create a descriptor for.

NPY_DT_default_descr

typedef [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *(PyArrayDTypeMeta_DefaultDescriptor)([PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *cls)

Returns the default descriptor instance for the DType. Must be defined for parametric data types. Non-parametric data types return the singleton by default.

NPY_DT_common_dtype

typedef [PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *(PyArrayDTypeMeta_CommonDType)([PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *dtype1, [PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *dtype2)

Given two input DTypes, determines the appropriate “common” DType that can store values for both types. Returns `Py_NotImplemented` if no such type exists.

NPY_DT_common_instance

typedef [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *(PyArrayDTypeMeta_CommonInstance)([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *dtype1, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *dtype2)

Given two input descriptors, determines the appropriate “common” descriptor that can store values for both instances. Returns `NULL` on error.
NPY_DT_ensure_canonical

typedef [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *(PyArrayDTypeMeta_EnsureCanonical)([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *dtype)

Returns the “canonical” representation for a descriptor instance. The notion of a canonical descriptor generalizes the concept of byte order, in that a canonical descriptor always has native byte order. If the descriptor is already canonical, this function returns a new reference to the input descriptor.

NPY_DT_setitem

typedef int (PyArrayDTypeMeta_SetItem)([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *, char *)

Implements scalar setitem for an array element given a PyObject.

NPY_DT_getitem

typedef [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *(PyArrayDTypeMeta_GetItem)([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *, char *)

Implements scalar getitem for an array element. Must return a Python scalar.

NPY_DT_get_clear_loop

If defined, sets a traversal loop that clears data in the array. This is most useful for arrays of references that must clean up array entries before the array is garbage collected. Implements `PyArrayMethod_GetTraverseLoop`.

NPY_DT_get_fill_zero_loop

If defined, sets a traversal loop that fills an array with “zero” values, which may have a DType-specific meaning. This is called inside [`numpy.zeros`](../generated/numpy.zeros#numpy.zeros "numpy.zeros") for arrays that need to write a custom sentinel value that represents zero, if for some reason a zero-filled array is not sufficient. Implements `PyArrayMethod_GetTraverseLoop`.
NPY_DT_finalize_descr

typedef [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *(PyArrayDTypeMeta_FinalizeDescriptor)([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *dtype)

If defined, a function that is called to “finalize” a descriptor instance after an array is created. One use of this function is to force newly created arrays to have a newly created descriptor instance, no matter what input descriptor is provided by a user.

#### PyArray_ArrFuncs slots

In addition to the above slots, the following slots are exposed to allow filling the [PyArray_ArrFuncs](types-and-structures#arrfuncs-type) struct attached to descriptor instances. Note that in the future these will be replaced by proper DType API slots, but for now we have exposed the legacy `PyArray_ArrFuncs` slots.

NPY_DT_PyArray_ArrFuncs_getitem

Allows setting a per-dtype getitem. Note that this is not necessary to define unless the default version calling the function defined with the `NPY_DT_getitem` ID is unsuitable. This version will be slightly faster than using `NPY_DT_getitem` at the cost of sometimes needing to deal with a NULL input array.

NPY_DT_PyArray_ArrFuncs_setitem

Allows setting a per-dtype setitem. Note that this is not necessary to define unless the default version calling the function defined with the `NPY_DT_setitem` ID is unsuitable for some reason.

NPY_DT_PyArray_ArrFuncs_compare

Computes a comparison for [`numpy.sort`](../generated/numpy.sort#numpy.sort "numpy.sort"); implements `PyArray_CompareFunc`.

NPY_DT_PyArray_ArrFuncs_argmax

Computes the argmax for [`numpy.argmax`](../generated/numpy.argmax#numpy.argmax "numpy.argmax"); implements `PyArray_ArgFunc`.

NPY_DT_PyArray_ArrFuncs_argmin

Computes the argmin for [`numpy.argmin`](../generated/numpy.argmin#numpy.argmin "numpy.argmin"); implements `PyArray_ArgFunc`.
NPY_DT_PyArray_ArrFuncs_dotfunc

Computes the dot product for [`numpy.dot`](../generated/numpy.dot#numpy.dot "numpy.dot"); implements `PyArray_DotFunc`.

NPY_DT_PyArray_ArrFuncs_scanfunc

A formatted input function for [`numpy.fromfile`](../generated/numpy.fromfile#numpy.fromfile "numpy.fromfile"); implements `PyArray_ScanFunc`.

NPY_DT_PyArray_ArrFuncs_fromstr

A string parsing function for [`numpy.fromstring`](../generated/numpy.fromstring#numpy.fromstring "numpy.fromstring"); implements `PyArray_FromStrFunc`.

NPY_DT_PyArray_ArrFuncs_nonzero

Computes the nonzero function for [`numpy.nonzero`](../generated/numpy.nonzero#numpy.nonzero "numpy.nonzero"); implements `PyArray_NonzeroFunc`.

NPY_DT_PyArray_ArrFuncs_fill

An array filling function for [`numpy.ndarray.fill`](../generated/numpy.ndarray.fill#numpy.ndarray.fill "numpy.ndarray.fill"); implements `PyArray_FillFunc`.

NPY_DT_PyArray_ArrFuncs_fillwithscalar

A function to fill an array with a scalar value for [`numpy.ndarray.fill`](../generated/numpy.ndarray.fill#numpy.ndarray.fill "numpy.ndarray.fill"); implements `PyArray_FillWithScalarFunc`.

NPY_DT_PyArray_ArrFuncs_sort

An array of PyArray_SortFunc of length `NPY_NSORTS`. If set, allows defining custom sorting implementations for each of the sorting algorithms NumPy implements.

NPY_DT_PyArray_ArrFuncs_argsort

An array of PyArray_ArgSortFunc of length `NPY_NSORTS`. If set, allows defining custom argsorting implementations for each of the sorting algorithms NumPy implements.

### Macros and Static Inline Functions

These macros and static inline functions are provided to allow more understandable and idiomatic code when working with `PyArray_DTypeMeta` instances.

NPY_DTYPE(descr)

Returns a `PyArray_DTypeMeta *` pointer to the DType of a given descriptor instance.
static inline [PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *NPY_DT_NewRef([PyArray_DTypeMeta](types-and-structures#c.PyArray_DTypeMeta "PyArray_DTypeMeta") *o)

Returns a `PyArray_DTypeMeta *` pointer to a new reference to a DType.

## Conversion utilities

### For use with [`PyArg_ParseTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTuple "\(in Python v3.13\)")

All of these functions can be used in [`PyArg_ParseTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTuple "\(in Python v3.13\)") (…) with the “O&” format specifier to automatically convert any Python object to the required C-object. All of these functions return `NPY_SUCCEED` if successful and `NPY_FAIL` if not. The first argument to all of these functions is a Python object. The second argument is the **address** of the C-type to convert the Python object to.

Warning

Be sure to understand what steps you should take to manage the memory when using these conversion functions. These functions can require freeing memory, and/or altering the reference counts of specific objects based on your use.

int PyArray_Converter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") **address)

Convert any Python object to a [`PyArrayObject`](types-and-structures#c.PyArrayObject "PyArrayObject"). If `PyArray_Check`(_obj_) is TRUE, then its reference count is incremented and a reference placed in _address_. If _obj_ is not an array, then convert it to an array using `PyArray_FromAny`. No matter what is returned, you must DECREF the object returned by this routine in _address_ when you are done with it.
int PyArray_OutputConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") **address) This is a default converter for output arrays given to functions. If _obj_ is [`Py_None`](https://docs.python.org/3/c-api/none.html#c.Py_None "\(in Python v3.13\)") or `NULL`, then _*address_ will be `NULL` but the call will succeed. If `PyArray_Check`(_obj_) is TRUE then it is returned in _*address_ without incrementing its reference count.

int PyArray_IntpConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj, [PyArray_Dims](types-and-structures#c.PyArray_Dims "PyArray_Dims") *seq) Convert any Python sequence, _obj_ , smaller than `NPY_MAXDIMS` to a C-array of [`npy_intp`](dtype#c.npy_intp "npy_intp"). The Python object could also be a single number. The _seq_ variable is a pointer to a structure with members ptr and len. On successful return, _seq_->ptr contains a pointer to memory that must be freed, by calling `PyDimMem_FREE`, to avoid a memory leak. The restriction on memory size allows this converter to be conveniently used for sequences intended to be interpreted as array shapes.

int PyArray_BufferConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj, [PyArray_Chunk](types-and-structures#c.PyArray_Chunk "PyArray_Chunk") *buf) Convert any Python object, _obj_ , with a (single-segment) buffer interface to a variable with members that detail the object's use of its chunk of memory. The _buf_ variable is a pointer to a structure with base, ptr, len, and flags members. The [`PyArray_Chunk`](types-and-structures#c.PyArray_Chunk "PyArray_Chunk") structure is binary compatible with Python's buffer object (through its len member on 32-bit platforms and its ptr member on 64-bit platforms).
On return, the base member is set to _obj_ (or its base if _obj_ is already a buffer object pointing to another object). If you need to hold on to the memory, be sure to INCREF the base member. The chunk of memory is pointed to by _buf_->ptr and has length _buf_->len. The flags member of _buf_ is `NPY_ARRAY_ALIGNED` with the `NPY_ARRAY_WRITEABLE` flag set if _obj_ has a writeable buffer interface.

int PyArray_AxisConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj, int *axis) Convert a Python object, _obj_ , representing an axis argument to the proper value for passing to the functions that take an integer axis. Specifically, if _obj_ is None, _axis_ is set to `NPY_RAVEL_AXIS`, which is interpreted correctly by the C-API functions that take axis arguments.

int PyArray_BoolConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj, [npy_bool](dtype#c.npy_bool "npy_bool") *value) Convert any Python object, _obj_ , to `NPY_TRUE` or `NPY_FALSE`, and place the result in _value_.

int PyArray_ByteorderConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj, char *endian) Convert Python strings into the corresponding byte-order character: '>', '<', 's', '=', or '|'.

int PyArray_SortkindConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj, NPY_SORTKIND *sort) Convert Python strings into one of `NPY_QUICKSORT` (starts with 'q' or 'Q'), `NPY_HEAPSORT` (starts with 'h' or 'H'), `NPY_MERGESORT` (starts with 'm' or 'M') or `NPY_STABLESORT` (starts with 't' or 'T'). `NPY_MERGESORT` and `NPY_STABLESORT` are aliased to each other for backwards compatibility and may refer to one of several stable sorting algorithms depending on the data type.
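Several converters can be combined in one parse; a sketch (the helper name and signature are illustrative, and the fragment assumes the NumPy API has been imported):

```c
#include <Python.h>
#include "numpy/ndarrayobject.h"

/* Hypothetical parse of an optional axis and sort kind, roughly the C
 * side of a Python signature like  f(axis=None, kind='quicksort'). */
static int
parse_axis_and_kind(PyObject *args, int *axis, NPY_SORTKIND *kind)
{
    *axis = NPY_RAVEL_AXIS;   /* default when axis is omitted or None */
    *kind = NPY_QUICKSORT;

    /* Each converter returns NPY_SUCCEED/NPY_FAIL, so parsing fails
     * cleanly on bad input and leaves an exception set. */
    return PyArg_ParseTuple(args, "|O&O&",
                            PyArray_AxisConverter, axis,
                            PyArray_SortkindConverter, kind);
}
```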
int PyArray_SearchsideConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj, NPY_SEARCHSIDE *side) Convert Python strings into one of `NPY_SEARCHLEFT` (starts with 'l' or 'L'), or `NPY_SEARCHRIGHT` (starts with 'r' or 'R').

int PyArray_OrderConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj, NPY_ORDER *order) Convert the Python strings 'C', 'F', 'A', and 'K' into the `NPY_ORDER` enumeration `NPY_CORDER`, `NPY_FORTRANORDER`, `NPY_ANYORDER`, and `NPY_KEEPORDER`.

int PyArray_CastingConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj, NPY_CASTING *casting) Convert the Python strings 'no', 'equiv', 'safe', 'same_kind', and 'unsafe' into the `NPY_CASTING` enumeration `NPY_NO_CASTING`, `NPY_EQUIV_CASTING`, `NPY_SAFE_CASTING`, `NPY_SAME_KIND_CASTING`, and `NPY_UNSAFE_CASTING`.

int PyArray_ClipmodeConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *object, NPY_CLIPMODE *val) Convert the Python strings 'clip', 'wrap', and 'raise' into the `NPY_CLIPMODE` enumeration `NPY_CLIP`, `NPY_WRAP`, and `NPY_RAISE`.

int PyArray_ConvertClipmodeSequence([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *object, NPY_CLIPMODE *modes, int n) Converts either a sequence of clipmodes or a single clipmode into a C array of `NPY_CLIPMODE` values. The number of clipmodes _n_ must be known before calling this function. This function is provided to help functions allow a different clipmode for each dimension.

### Other conversions

int PyArray_PyIntAsInt([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *op) Convert all kinds of Python objects (including arrays and array scalars) to a standard integer. On error, -1 is returned and an exception set.
You may find the following macro useful:

    #define error_converting(x) (((x) == -1) && PyErr_Occurred())

[npy_intp](dtype#c.npy_intp "npy_intp") PyArray_PyIntAsIntp([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *op) Convert all kinds of Python objects (including arrays and array scalars) to a (platform-pointer-sized) integer. On error, -1 is returned and an exception set.

int PyArray_IntpFromSequence([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *seq, [npy_intp](dtype#c.npy_intp "npy_intp") *vals, int maxvals) Convert any Python sequence (or single Python number) passed in as _seq_ to (up to) _maxvals_ pointer-sized integers and place them in the _vals_ array. The sequence can be smaller than _maxvals_ , as the number of converted objects is returned.

## Including and importing the C API

To use the NumPy C-API you typically need to include the `numpy/ndarrayobject.h` header and `numpy/ufuncobject.h` for some ufunc related functionality (`arrayobject.h` is an alias for `ndarrayobject.h`). These two headers export most relevant functionality. In general any project which uses the NumPy API must import NumPy using one of the functions `PyArray_ImportNumPyAPI()` or `import_array()`. In some places, functionality which requires `import_array()` is not needed, because you only need type definitions. In this case, it is sufficient to include `numpy/ndarraytypes.h`.

For the typical Python project, multiple C or C++ files will be compiled into a single shared object (the Python C-module) and `PyArray_ImportNumPyAPI()` should be called inside its module initialization. When you have a single C-file, this will consist of:

    #include "numpy/ndarrayobject.h"

    PyMODINIT_FUNC
    PyInit_my_module(void)
    {
        if (PyArray_ImportNumPyAPI() < 0) {
            return NULL;
        }
        /* Other initialization code. */
    }

However, most projects will have additional C files which are all linked together into a single Python module.
In this case, the helper C files typically do not have a canonical place where `PyArray_ImportNumPyAPI` should be called (although it is OK and fast to call it often). To solve this, NumPy provides the following pattern, in which the main file is modified to define `PY_ARRAY_UNIQUE_SYMBOL` before the include:

    /* Main module file */
    #define PY_ARRAY_UNIQUE_SYMBOL MyModule
    #include "numpy/ndarrayobject.h"

    PyMODINIT_FUNC
    PyInit_my_module(void)
    {
        if (PyArray_ImportNumPyAPI() < 0) {
            return NULL;
        }
        /* Other initialization code. */
    }

while the other files use:

    /* Second file without any import */
    #define NO_IMPORT_ARRAY
    #define PY_ARRAY_UNIQUE_SYMBOL MyModule
    #include "numpy/ndarrayobject.h"

You can of course add the defines to a local header used throughout. You just have to make sure that the main file does _not_ define `NO_IMPORT_ARRAY`. For `numpy/ufuncobject.h` the same logic applies, but the unique symbol mechanism is `#define PY_UFUNC_UNIQUE_SYMBOL` (both can match). Additionally, you will probably wish to add a `#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION` to avoid warnings about possible use of old API.

Note If you are experiencing access violations, make sure that the NumPy API was properly imported and the symbol `PyArray_API` is not `NULL`. When in a debugger, this symbol's actual name will be `PY_ARRAY_UNIQUE_SYMBOL``+``PyArray_API`, so for example `MyModulePyArray_API` in the above. (E.g. even a `printf("%p\n", PyArray_API);` just before the crash can confirm this.)

### Mechanism details and dynamic linking

The main part of the mechanism is that NumPy needs to define a `void **PyArray_API` table for you to look up all functions. Depending on whether `NO_IMPORT_ARRAY` and `PY_ARRAY_UNIQUE_SYMBOL` are defined, this takes different routes:

* If neither is defined, the C-API is declared to `static void **PyArray_API`, so it is only visible within the compilation unit/file using `#include "numpy/ndarrayobject.h"`.
* If only `PY_ARRAY_UNIQUE_SYMBOL` is defined (it could be empty) then it is declared to a non-static `void **`, allowing it to be used by other files which are linked.
* If `NO_IMPORT_ARRAY` is defined, the table is declared as `extern void **`, meaning that it must be linked to a file which does not use `NO_IMPORT_ARRAY`.

The `PY_ARRAY_UNIQUE_SYMBOL` mechanism additionally mangles the names to avoid conflicts.

Changed in NumPy 2.1: the headers were changed to avoid sharing the table outside of a single shared object/dll (this was always the case on Windows). Please see `NPY_API_SYMBOL_ATTRIBUTE` for details.

In order to make use of the C-API from another extension module, the `import_array` function must be called. If the extension module is self-contained in a single .c file, then that is all that needs to be done. If, however, the extension module involves multiple files where the C-API is needed, then some additional steps must be taken.

int PyArray_ImportNumPyAPI(void) Ensures that the NumPy C-API is imported and usable. It returns `0` on success and `-1` with an error set if NumPy couldn't be imported. While it is preferable to call it once at module initialization, this function is very light-weight if called multiple times.

New in version 2.0: This function is backported in the `npy_2_compat.h` header.

import_array(void) This function must be called in the initialization section of a module that will make use of the C-API. It imports the module where the function-pointer table is stored and points the correct variable to it. This macro includes a `return NULL;` on error, so `PyArray_ImportNumPyAPI()` is preferable for custom error checking. You may also see use of `_import_array()` (a function, not a macro, but you may want to raise a better error if it fails) and the variation `import_array1(ret)` which customizes the return value.

PY_ARRAY_UNIQUE_SYMBOL

NPY_API_SYMBOL_ATTRIBUTE

New in version 2.1. An additional symbol which can be used to share e.g.
visibility beyond shared object boundaries. By default, NumPy adds the C visibility hidden attribute (if available): `void __attribute__((visibility("hidden"))) **PyArray_API;`. You can change this by defining `NPY_API_SYMBOL_ATTRIBUTE`, which will make it: `void NPY_API_SYMBOL_ATTRIBUTE **PyArray_API;` (with additional name mangling via the unique symbol). Adding an empty `#define NPY_API_SYMBOL_ATTRIBUTE` will have the same behavior as NumPy 1.x.

Note Windows never had shared visibility, although you can use this macro to achieve it. We generally discourage sharing beyond shared-object boundaries since importing the array API includes NumPy version checks.

NO_IMPORT_ARRAY

Defining `NO_IMPORT_ARRAY` before the `ndarrayobject.h` include indicates that the NumPy C API import is handled in a different file and the import mechanism will not be added here. You must have one file without `NO_IMPORT_ARRAY` defined. That main file would contain:

    #define PY_ARRAY_UNIQUE_SYMBOL cool_ARRAY_API
    #include "numpy/ndarrayobject.h"

On the other hand, a helper file such as `coolhelper.c` would contain at the top:

    #define NO_IMPORT_ARRAY
    #define PY_ARRAY_UNIQUE_SYMBOL cool_ARRAY_API
    #include "numpy/ndarrayobject.h"

You can also put the last two common lines into an extension-local header file, as long as you make sure that `NO_IMPORT_ARRAY` is #defined before #including that file in the helper files. Internally, these #defines work as follows:

* If neither is defined, the C-API is declared to be `static void**`, so it is only visible within the compilation unit that #includes numpy/arrayobject.h.
* If `PY_ARRAY_UNIQUE_SYMBOL` is #defined, but `NO_IMPORT_ARRAY` is not, the C-API is declared to be `void**`, so that it will also be visible to other compilation units.
* If `NO_IMPORT_ARRAY` is #defined, regardless of whether `PY_ARRAY_UNIQUE_SYMBOL` is, the C-API is declared to be `extern void**`, so it is expected to be defined in another compilation unit.
* Whenever `PY_ARRAY_UNIQUE_SYMBOL` is #defined, it also changes the name of the variable holding the C-API, which defaults to `PyArray_API`, to whatever the macro is #defined to.

### Checking the API Version

Because Python extensions are not used in the same way as usual libraries on most platforms, some errors cannot be automatically detected at build time or even runtime. For example, if you build an extension using a function available only for numpy >= 1.3.0, and you import the extension later with numpy 1.2, you will not get an import error (but almost certainly a segmentation fault when calling the function). That's why several functions are provided to check for numpy versions. The macros `NPY_VERSION` and `NPY_FEATURE_VERSION` correspond to the numpy version used to build the extension, whereas the versions returned by the functions `PyArray_GetNDArrayCVersion` and `PyArray_GetNDArrayCFeatureVersion` correspond to the runtime numpy's version.

The rules for ABI and API compatibilities can be summarized as follows:

* Whenever `NPY_VERSION` != `PyArray_GetNDArrayCVersion()`, the extension has to be recompiled (ABI incompatibility).
* `NPY_VERSION` == `PyArray_GetNDArrayCVersion()` and `NPY_FEATURE_VERSION` <= `PyArray_GetNDArrayCFeatureVersion()` means backward compatible changes.

ABI incompatibility is automatically detected in every NumPy version. API incompatibility detection was added in numpy 1.4.0. If you want to support many different numpy versions with one extension binary, you have to build your extension with the lowest possible `NPY_FEATURE_VERSION`.

NPY_VERSION The current version of the ndarray object (check to see if this variable is defined to guarantee the `numpy/arrayobject.h` header is being used).

NPY_FEATURE_VERSION The current version of the C-API.

unsigned int PyArray_GetNDArrayCVersion(void) This just returns the value `NPY_VERSION`. `NPY_VERSION` changes whenever a backward incompatible change occurs at the ABI level.
Because it is in the C-API, however, comparing the output of this function with the value defined in the current header gives a way to test if the C-API has changed, thus requiring a re-compilation of extension modules that use the C-API. This is automatically checked in the function `import_array`.

unsigned int PyArray_GetNDArrayCFeatureVersion(void) This just returns the value `NPY_FEATURE_VERSION`. `NPY_FEATURE_VERSION` changes whenever the API changes (e.g. a function is added). A changed value does not always require a recompile.

### Memory management

char *PyDataMem_NEW(size_t nbytes)

void PyDataMem_FREE(char *ptr)

char *PyDataMem_RENEW(void *ptr, size_t newbytes)

Functions to allocate, free, and reallocate memory. These are used internally to manage array data memory unless overridden.

[npy_intp](dtype#c.npy_intp "npy_intp") *PyDimMem_NEW(int nd)

void PyDimMem_FREE(char *ptr)

[npy_intp](dtype#c.npy_intp "npy_intp") *PyDimMem_RENEW(void *ptr, size_t newnd)

Macros to allocate, free, and reallocate dimension and strides memory.

void *PyArray_malloc(size_t nbytes)

void PyArray_free(void *ptr)

void *PyArray_realloc([npy_intp](dtype#c.npy_intp "npy_intp") *ptr, size_t nbytes)

These macros use different memory allocators, depending on the constant `NPY_USE_PYMEM`. The system malloc is used when `NPY_USE_PYMEM` is 0; if `NPY_USE_PYMEM` is 1, then the Python memory allocator is used.

NPY_USE_PYMEM

int PyArray_ResolveWritebackIfCopy([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *obj) If `obj->flags` has `NPY_ARRAY_WRITEBACKIFCOPY`, this function clears the flags, `DECREF` s `obj->base` and makes it writeable, and sets `obj->base` to NULL. It then copies `obj->data` to `obj->base->data`, and returns the error state of the copy operation. This is the opposite of `PyArray_SetWritebackIfCopyBase`. Usually this is called once you are finished with `obj`, just before `Py_DECREF(obj)`. It may be called multiple times, or with `NULL` input.
See also `PyArray_DiscardWritebackIfCopy`. Returns 0 if nothing was done, -1 on error, and 1 if action was taken.

### Threading support

These macros are only meaningful if `NPY_ALLOW_THREADS` evaluates to true during compilation of the extension module. Otherwise, these macros are equivalent to whitespace. Python uses a single Global Interpreter Lock (GIL) for each Python process so that only a single thread may execute at a time (even on multi-cpu machines). When calling out to a compiled function that may take time to compute (and does not have side-effects for other threads like updated global variables), the GIL should be released so that other Python threads can run while the time-consuming calculations are performed. This can be accomplished using two groups of macros. Typically, if one macro in a group is used in a code block, all of them must be used in the same code block. `NPY_ALLOW_THREADS` is true (defined as `1`) unless the build option `-Ddisable-threading` is set to `true`, in which case `NPY_ALLOW_THREADS` is false (`0`).

NPY_ALLOW_THREADS

#### Group 1

This group is used to call code that may take some time but does not use any Python C-API calls. Thus, the GIL should be released during its calculation.

NPY_BEGIN_ALLOW_THREADS Equivalent to [`Py_BEGIN_ALLOW_THREADS`](https://docs.python.org/3/c-api/init.html#c.Py_BEGIN_ALLOW_THREADS "\(in Python v3.13\)") except it uses `NPY_ALLOW_THREADS` to determine whether the macro is replaced with whitespace or not.

NPY_END_ALLOW_THREADS Equivalent to [`Py_END_ALLOW_THREADS`](https://docs.python.org/3/c-api/init.html#c.Py_END_ALLOW_THREADS "\(in Python v3.13\)") except it uses `NPY_ALLOW_THREADS` to determine whether the macro is replaced with whitespace or not.

NPY_BEGIN_THREADS_DEF Place in the variable declaration area. This macro sets up the variable needed for storing the Python state.

NPY_BEGIN_THREADS Place right before code that does not need the Python interpreter (no Python C-API calls).
This macro saves the Python state and releases the GIL.

NPY_END_THREADS Place right after code that does not need the Python interpreter. This macro acquires the GIL and restores the Python state from the saved variable.

void NPY_BEGIN_THREADS_DESCR([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *dtype) Useful to release the GIL only if _dtype_ does not contain arbitrary Python objects which may need the Python interpreter during execution of the loop.

void NPY_END_THREADS_DESCR([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *dtype) Useful to regain the GIL in situations where it was released using the BEGIN form of this macro.

void NPY_BEGIN_THREADS_THRESHOLDED(int loop_size) Useful to release the GIL only if _loop_size_ exceeds a minimum threshold, currently set to 500. Should be matched with a `NPY_END_THREADS` to regain the GIL.

#### Group 2

This group is used to re-acquire the Python GIL after it has been released. For example, suppose the GIL has been released (using the previous calls), and then some path in the code (perhaps in a different subroutine) requires use of the Python C-API. These macros then re-acquire the GIL (saving the current thread state) and later release it again, restoring the saved state.

NPY_ALLOW_C_API_DEF Place in the variable declaration area to set up the necessary variable.

NPY_ALLOW_C_API Place before code that needs to call the Python C-API (when it is known that the GIL has already been released).

NPY_DISABLE_C_API Place after code that needs to call the Python C-API (to re-release the GIL).

Tip Never use semicolons after the threading support macros.

### Priority

NPY_PRIORITY Default priority for arrays.

NPY_SUBTYPE_PRIORITY Default subtype priority.
NPY_SCALAR_PRIORITY Default scalar priority (very small).

double PyArray_GetPriority([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj, double def) Return the [`__array_priority__`](../arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") attribute (converted to a double) of _obj_ or _def_ if no attribute of that name exists. Fast returns that avoid the attribute lookup are provided for objects of type [`PyArray_Type`](types-and-structures#c.PyArray_Type "PyArray_Type").

### Default buffers

NPY_BUFSIZE Default size of the user-settable internal buffers.

NPY_MIN_BUFSIZE Smallest size of user-settable internal buffers.

NPY_MAX_BUFSIZE Largest size allowed for the user-settable buffers.

### Other constants

NPY_NUM_FLOATTYPE The number of floating-point types.

NPY_MAXDIMS The maximum number of dimensions that may be used by NumPy. This is set to 64 and was 32 before NumPy 2.

Note We encourage you to avoid `NPY_MAXDIMS`. A future version of NumPy may wish to remove any dimension limitation (and thus the constant). The limitation was created so that NumPy can use stack allocations internally for scratch space. If your algorithm has a reasonable maximum number of dimensions, you could check and use that locally.

NPY_MAXARGS The maximum number of array arguments that can be used in some functions. This used to be 32 before NumPy 2 and is now 64. To continue to allow using it as a check whether a number of arguments is compatible with ufuncs, this macro is now runtime dependent.

Note We discourage any use of `NPY_MAXARGS` that isn't explicitly tied to checking for known NumPy limitations.

NPY_FALSE Defined as 0 for use with Bool.

NPY_TRUE Defined as 1 for use with Bool.

NPY_FAIL The return value of failed converter functions which are called using the "O&" syntax in [`PyArg_ParseTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTuple "\(in Python v3.13\)")-like functions.
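Converters you write yourself for "O&" follow the same `NPY_SUCCEED`/`NPY_FAIL` contract; a minimal hypothetical sketch (the function name is ours, and the `error_converting` macro is the one suggested earlier in this document):

```c
#include <Python.h>
#include "numpy/ndarrayobject.h"

#define error_converting(x) (((x) == -1) && PyErr_Occurred())

/* Hypothetical converter: requires a non-negative Python integer and
 * stores it into the C int at `address`. */
static int
nonnegative_int_converter(PyObject *obj, void *address)
{
    int value = PyArray_PyIntAsInt(obj);
    if (error_converting(value) || value < 0) {
        if (!PyErr_Occurred()) {
            PyErr_SetString(PyExc_ValueError, "expected a non-negative int");
        }
        return NPY_FAIL;   /* signals conversion failure to PyArg_ParseTuple */
    }
    *(int *)address = value;
    return NPY_SUCCEED;
}
```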
NPY_SUCCEED The return value of successful converter functions which are called using the "O&" syntax in [`PyArg_ParseTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTuple "\(in Python v3.13\)")-like functions.

NPY_RAVEL_AXIS Some NumPy functions (mainly the C-entrypoints for Python functions) have an `axis` argument. This macro may be passed for `axis=None`.

Note This macro is NumPy version dependent at runtime. The value is now the minimum integer. However, on NumPy 1.x `NPY_MAXDIMS` was used (at the time set to 32).

### Miscellaneous Macros

int PyArray_SAMESHAPE([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *a1, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *a2) Evaluates as True if arrays _a1_ and _a2_ have the same shape.

PyArray_MAX(a, b) Returns the maximum of _a_ and _b_. If (_a_) or (_b_) are expressions they are evaluated twice.

PyArray_MIN(a, b) Returns the minimum of _a_ and _b_. If (_a_) or (_b_) are expressions they are evaluated twice.

void PyArray_DiscardWritebackIfCopy([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *obj) If `obj->flags` has `NPY_ARRAY_WRITEBACKIFCOPY`, this function clears the flags, `DECREF` s `obj->base` and makes it writeable, and sets `obj->base` to NULL. In contrast to `PyArray_ResolveWritebackIfCopy` it makes no attempt to copy the data from `obj->base`. This undoes `PyArray_SetWritebackIfCopyBase`. Usually this is called after an error when you are finished with `obj`, just before `Py_DECREF(obj)`. It may be called multiple times, or with `NULL` input.

### Enumerated Types

enum NPY_SORTKIND A special variable-type which can take on different values to indicate the sorting algorithm being used.

enumerator NPY_QUICKSORT

enumerator NPY_HEAPSORT

enumerator NPY_MERGESORT

enumerator NPY_STABLESORT Used as an alias of `NPY_MERGESORT` and vice versa.

enumerator NPY_NSORTS Defined to be the number of sorts.
It is fixed at three by the need for backwards compatibility, and consequently `NPY_MERGESORT` and `NPY_STABLESORT` are aliased to each other and may refer to one of several stable sorting algorithms depending on the data type.

enum NPY_SCALARKIND A special variable type indicating the number of "kinds" of scalars distinguished in determining scalar-coercion rules. This variable can take on the values:

enumerator NPY_NOSCALAR

enumerator NPY_BOOL_SCALAR

enumerator NPY_INTPOS_SCALAR

enumerator NPY_INTNEG_SCALAR

enumerator NPY_FLOAT_SCALAR

enumerator NPY_COMPLEX_SCALAR

enumerator NPY_OBJECT_SCALAR

enumerator NPY_NSCALARKINDS Defined to be the number of scalar kinds (not including `NPY_NOSCALAR`).

enum NPY_ORDER An enumeration type indicating the element order that an array should be interpreted in. When a brand new array is created, generally only **NPY_CORDER** and **NPY_FORTRANORDER** are used, whereas when one or more inputs are provided, the order can be based on them.

enumerator NPY_ANYORDER Fortran order if all the inputs are Fortran, C otherwise.

enumerator NPY_CORDER C order.

enumerator NPY_FORTRANORDER Fortran order.

enumerator NPY_KEEPORDER An order as close to the order of the inputs as possible, even if the input is in neither C nor Fortran order.

enum NPY_CLIPMODE A variable type indicating the kind of clipping that should be applied in certain functions.

enumerator NPY_RAISE The default for most operations, raises an exception if an index is out of bounds.

enumerator NPY_CLIP Clips an index to the valid range if it is out of bounds.

enumerator NPY_WRAP Wraps an index to the valid range if it is out of bounds.

enum NPY_SEARCHSIDE A variable type indicating whether the index returned should be that of the first suitable location (if `NPY_SEARCHLEFT`) or of the last (if `NPY_SEARCHRIGHT`).

enumerator NPY_SEARCHLEFT

enumerator NPY_SEARCHRIGHT

enum NPY_SELECTKIND A variable type indicating the selection algorithm being used.
enumerator NPY_INTROSELECT

enum NPY_CASTING An enumeration type indicating how permissive data conversions should be. This is used by the iterator added in NumPy 1.6, and is intended to be used more broadly in a future version.

enumerator NPY_NO_CASTING Only allow identical types.

enumerator NPY_EQUIV_CASTING Allow identical types and casts involving byte swapping.

enumerator NPY_SAFE_CASTING Only allow casts which will not cause values to be rounded, truncated, or otherwise changed.

enumerator NPY_SAME_KIND_CASTING Allow any safe casts, and casts between types of the same kind. For example, float64 -> float32 is permitted with this rule.

enumerator NPY_UNSAFE_CASTING Allow any cast, no matter what kind of data loss may occur.

# System configuration

When NumPy is built, information about system configuration is recorded, and is made available for extension modules using NumPy's C API. These are mostly defined in `numpyconfig.h` (included in `ndarrayobject.h`). The public symbols are prefixed by `NPY_*`. NumPy also offers some functions for querying information about the platform in use. For private use, NumPy also constructs a `config.h` in the NumPy include directory, which is not exported by NumPy (that is, a Python extension which uses the NumPy C API will not see those symbols), to avoid namespace pollution.

## Data type sizes

The `NPY_SIZEOF_{CTYPE}` constants are defined so that sizeof information is available to the pre-processor.

NPY_SIZEOF_SHORT sizeof(short)

NPY_SIZEOF_INT sizeof(int)

NPY_SIZEOF_LONG sizeof(long)

NPY_SIZEOF_LONGLONG sizeof(longlong) where longlong is defined appropriately on the platform.

NPY_SIZEOF_PY_LONG_LONG

NPY_SIZEOF_FLOAT sizeof(float)

NPY_SIZEOF_DOUBLE sizeof(double)

NPY_SIZEOF_LONG_DOUBLE

NPY_SIZEOF_LONGDOUBLE sizeof(longdouble)

NPY_SIZEOF_PY_INTPTR_T Size of a pointer `void *` and `intptr_t`/`Py_intptr_t`.
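The point of having sizeof information in the pre-processor is to allow compile-time checks and dispatch. This standalone analogue uses `sizeof` directly rather than the `NPY_SIZEOF_*` constants, so it needs no NumPy headers (the negative-array-size trick is a pre-C11 static assertion):

```c
#include <stddef.h>

/* Compile-time assertion that long long is at least 64 bits (which C99
 * guarantees) -- the kind of size check the NPY_SIZEOF_* constants make
 * possible inside #if directives. Compilation fails if the condition is
 * false, because the array would have a negative size. */
typedef char assert_longlong_at_least_64bit[(sizeof(long long) >= 8) ? 1 : -1];

/* Runtime accessor for the same quantity. */
static int longlong_size_bytes(void)
{
    return (int)sizeof(long long);
}
```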
NPY_SIZEOF_INTP Size of a `size_t` on this platform (`sizeof(size_t)`)

## Platform information

NPY_CPU_X86 NPY_CPU_AMD64 NPY_CPU_IA64 NPY_CPU_PPC NPY_CPU_PPC64 NPY_CPU_SPARC NPY_CPU_SPARC64 NPY_CPU_S390 NPY_CPU_PARISC

CPU architecture of the platform; only one of the above is defined. Defined in `numpy/npy_cpu.h`.

NPY_LITTLE_ENDIAN NPY_BIG_ENDIAN NPY_BYTE_ORDER

Portable alternatives to the `endian.h` macros of GNU Libc. If big endian, `NPY_BYTE_ORDER` == `NPY_BIG_ENDIAN`, and similarly for little endian architectures. Defined in `numpy/npy_endian.h`.

int PyArray_GetEndianness() Returns the endianness of the current platform. One of `NPY_CPU_BIG`, `NPY_CPU_LITTLE`, or `NPY_CPU_UNKNOWN_ENDIAN`.

NPY_CPU_BIG NPY_CPU_LITTLE NPY_CPU_UNKNOWN_ENDIAN

## Compiler directives

NPY_LIKELY NPY_UNLIKELY NPY_UNUSED

# NumPy core math library

The NumPy core math library (`npymath`) contains most math-related C99 functionality, which can be used on platforms where C99 is not well supported. The core math functions have the same API as the C99 ones, except for the `npy_*` prefix. The available functions are defined in `numpy/npy_math.h` - please refer to this header when in doubt.

Note An effort is underway to make `npymath` smaller (since C99 compatibility of compilers has improved over time) and more easily vendorable or usable as a header-only dependency. That will avoid problems with shipping a static library built with a compiler which may not match the compiler used by a downstream package or end user. See [gh-20880](https://github.com/numpy/numpy/issues/20880) for details.

## Floating point classification

NPY_NAN This macro is defined to a NaN (Not a Number), and is guaranteed to have the sign bit unset ('positive' NaN). The corresponding single and extended precision macros are available with the suffixes F and L.

NPY_INFINITY This macro is defined to a positive inf.
The corresponding single and extended precision macros are available with the suffixes F and L.

NPY_PZERO This macro is defined to positive zero. The corresponding single and extended precision macros are available with the suffixes F and L.

NPY_NZERO This macro is defined to negative zero (that is, with the sign bit set). The corresponding single and extended precision macros are available with the suffixes F and L.

npy_isnan(x) This is an alias for C99 isnan: works for single, double and extended precision, and returns a nonzero value if x is a NaN.

npy_isfinite(x) This is an alias for C99 isfinite: works for single, double and extended precision, and returns a nonzero value if x is neither a NaN nor an infinity.

npy_isinf(x) This is an alias for C99 isinf: works for single, double and extended precision, and returns a nonzero value if x is infinite (positive or negative).

npy_signbit(x) This is an alias for C99 signbit: works for single, double and extended precision, and returns a nonzero value if x has the sign bit set (that is, the number is negative).

npy_copysign(x, y) This is an alias for C99 copysign: returns x with the same sign as y. Works for any value, including inf and nan. Single and extended precision are available with the suffixes f and l.

## Useful math constants

The following math constants are available in `npy_math.h`. Single and extended precision are also available by adding the `f` and `l` suffixes respectively.
NPY_E Base of natural logarithm (\\(e\\)) NPY_LOG2E Logarithm to base 2 of \\(e\\) (\\(\frac{\ln(e)}{\ln(2)}\\)) NPY_LOG10E Logarithm to base 10 of \\(e\\) (\\(\frac{\ln(e)}{\ln(10)}\\)) NPY_LOGE2 Natural logarithm of 2 (\\(\ln(2)\\)) NPY_LOGE10 Natural logarithm of 10 (\\(\ln(10)\\)) NPY_PI Pi (\\(\pi\\)) NPY_PI_2 Pi divided by 2 (\\(\frac{\pi}{2}\\)) NPY_PI_4 Pi divided by 4 (\\(\frac{\pi}{4}\\)) NPY_1_PI Reciprocal of pi (\\(\frac{1}{\pi}\\)) NPY_2_PI Two times the reciprocal of pi (\\(\frac{2}{\pi}\\)) NPY_EULER The Euler-Mascheroni constant \\(\lim_{n\rightarrow\infty}({\sum_{k=1}^n{\frac{1}{k}}-\ln n})\\) ## Low-level floating point manipulation These can be useful for precise floating point comparison. double npy_nextafter(double x, double y) This is an alias for C99 nextafter: returns the next representable floating point value after x in the direction of y. Single- and extended-precision versions are available with the suffixes f and l. double npy_spacing(double x) This is a function equivalent to the Fortran `spacing` intrinsic. Returns the distance between x and the next representable floating point value after x, e.g. spacing(1) == eps. The spacing of nan and +/- inf returns nan. Single- and extended-precision versions are available with the suffixes f and l. void npy_set_floatstatus_divbyzero() Set the divide-by-zero floating point exception. void npy_set_floatstatus_overflow() Set the overflow floating point exception. void npy_set_floatstatus_underflow() Set the underflow floating point exception. void npy_set_floatstatus_invalid() Set the invalid floating point exception. int npy_get_floatstatus() Get the floating point status. Returns a bitmask with the following possible flags: * NPY_FPE_DIVIDEBYZERO * NPY_FPE_OVERFLOW * NPY_FPE_UNDERFLOW * NPY_FPE_INVALID Note that `npy_get_floatstatus_barrier` is preferable, as it prevents aggressive compiler optimizations from reordering the call relative to the code setting the status, which could lead to incorrect results. int npy_get_floatstatus_barrier(char *) Get the floating point status.
A pointer to a local variable is passed in to prevent aggressive compiler optimizations from reordering this function call relative to the code setting the status, which could lead to incorrect results. Returns a bitmask with the following possible flags: * NPY_FPE_DIVIDEBYZERO * NPY_FPE_OVERFLOW * NPY_FPE_UNDERFLOW * NPY_FPE_INVALID int npy_clear_floatstatus() Clears the floating point status. Returns the previous status mask. Note that `npy_clear_floatstatus_barrier` is preferable, as it prevents aggressive compiler optimizations from reordering the call relative to the code setting the status, which could lead to incorrect results. int npy_clear_floatstatus_barrier(char *) Clears the floating point status. A pointer to a local variable is passed in to prevent aggressive compiler optimizations from reordering this function call. Returns the previous status mask. ## Support for complex numbers C99-like complex functions have been added. These can be used if you wish to implement portable C extensions. Since NumPy 2.0 we use C99 complex types as the underlying type: typedef double _Complex npy_cdouble; typedef float _Complex npy_cfloat; typedef long double _Complex npy_clongdouble; MSVC does not support the `_Complex` type itself, but has added support for the C99 `complex.h` header by providing its own implementation.
Thus, under MSVC, the equivalent MSVC types will be used: typedef _Dcomplex npy_cdouble; typedef _Fcomplex npy_cfloat; typedef _Lcomplex npy_clongdouble; Because MSVC still does not support C99 syntax for initializing a complex number, you need to restrict yourself to C90-compatible syntax, e.g.: /* a = 1 + 2i */ npy_cdouble a = npy_cpack(1, 2); npy_cdouble b; b = npy_clog(a); A few utilities have also been added in `numpy/npy_math.h`, in order to retrieve or set the real or the imaginary part of a complex number: npy_cdouble c; npy_csetreal(&c, 1.0); npy_csetimag(&c, 0.0); printf("%g + %gi\n", npy_creal(c), npy_cimag(c)); Changed in version 2.0.0: The underlying C types for all of numpy's complex types have been changed to use C99 complex types. Before then, the following was used to represent complex types: typedef struct { double real, imag; } npy_cdouble; typedef struct { float real, imag; } npy_cfloat; typedef struct { npy_longdouble real, imag; } npy_clongdouble; Using the `struct` representation ensured that complex numbers could be used on all platforms, even the ones without support for built-in complex types. It also meant that a static library had to be shipped together with NumPy to provide a C99 compatibility layer for downstream packages to use. In recent years however, support for native complex types has improved immensely, with MSVC adding built-in support for the `complex.h` header in 2019. To ease cross-version compatibility, macros that use the new set APIs have been added. #define NPY_CSETREAL(z, r) npy_csetreal(z, r) #define NPY_CSETIMAG(z, i) npy_csetimag(z, i) A compatibility layer is also provided in `numpy/npy_2_complexcompat.h`. It checks whether the macros exist, and falls back to the 1.x syntax in case they don't.
#include <numpy/npy_math.h> #ifndef NPY_CSETREALF #define NPY_CSETREALF(c, r) (c)->real = (r) #endif #ifndef NPY_CSETIMAGF #define NPY_CSETIMAGF(c, i) (c)->imag = (i) #endif We suggest that all downstream packages needing this functionality copy this compatibility-layer code into their own sources and use that, so that they can continue to support both NumPy 1.x and 2.x without issues. Note also that the `complex.h` header is included in `numpy/npy_common.h`, which makes `complex` a reserved keyword. ## Linking against the core math library in an extension To use the core math library that NumPy ships as a static library in your own Python extension, you need to add the `npymath` compile and link options to your extension. The exact steps to take will depend on the build system you are using. The generic steps to take are: 1. Add the numpy include directory (= the value of `np.get_include()`) to your include directories, 2. The `npymath` static library resides in the `lib` directory right next to numpy's include directory (i.e., `pathlib.Path(np.get_include()) / '..' / 'lib'`). Add that to your library search directories, 3. Link with `libnpymath` and `libm`. Note Keep in mind that when you are cross compiling, you must use the `numpy` for the platform you are building for, not the native one for the build machine. Otherwise you pick up a static library built for the wrong architecture. When you build with `numpy.distutils` (deprecated), then use this in your `setup.py`: >>> from numpy.distutils.misc_util import get_info >>> info = get_info('npymath') >>> _ = config.add_extension('foo', sources=['foo.c'], extra_info=info) In other words, the usage of `info` is exactly the same as when using `blas_info` and co. When you are building with [Meson](https://mesonbuild.com), use: # Note that this will get easier in the future, when Meson has # support for numpy built in; most of this can then be replaced # by `dependency('numpy')`.
incdir_numpy = run_command(py3, [ '-c', 'import os; os.chdir(".."); import numpy; print(numpy.get_include())' ], check: true ).stdout().strip() inc_np = include_directories(incdir_numpy) cc = meson.get_compiler('c') npymath_path = incdir_numpy / '..' / 'lib' npymath_lib = cc.find_library('npymath', dirs: npymath_path) py3.extension_module('module_name', ... include_directories: inc_np, dependencies: [npymath_lib], ) ## Half-precision functions The header file `<numpy/halffloat.h>` provides functions to work with IEEE 754-2008 16-bit floating point values. While this format is not typically used for numerical computations, it is useful for storing values which require floating point but do not need much precision. It can also be used as an educational tool to understand the nature of floating point round-off error. As with other types, NumPy includes a typedef npy_half for the 16-bit float. Unlike most of the other types, you cannot use this as a normal type in C, since it is a typedef for npy_uint16. For example, 1.0 looks like 0x3c00 to C, and if you do an equality comparison between the different signed zeros, you will get -0.0 != 0.0 (0x8000 != 0x0000), which is incorrect. For these reasons, NumPy provides an API to work with npy_half values, accessible by including `<numpy/halffloat.h>` and linking to `npymath`. For functions that are not provided directly, such as the arithmetic operations, the preferred method is to convert to float or double and back again, as in the following example. npy_half sum(int n, npy_half *array) { float ret = 0; while(n--) { ret += npy_half_to_float(*array++); } return npy_float_to_half(ret); } External Links: * [754-2008 IEEE Standard for Floating-Point Arithmetic](https://ieeexplore.ieee.org/document/4610935/) * [Half-precision Float Wikipedia Article](https://en.wikipedia.org/wiki/Half-precision_floating-point_format).
* [OpenGL Half Float Pixel Support](https://www.khronos.org/registry/OpenGL/extensions/ARB/ARB_half_float_pixel.txt) * [The OpenEXR image format](https://www.openexr.com/about.html). NPY_HALF_ZERO This macro is defined to positive zero. NPY_HALF_PZERO This macro is defined to positive zero. NPY_HALF_NZERO This macro is defined to negative zero. NPY_HALF_ONE This macro is defined to 1.0. NPY_HALF_NEGONE This macro is defined to -1.0. NPY_HALF_PINF This macro is defined to +inf. NPY_HALF_NINF This macro is defined to -inf. NPY_HALF_NAN This macro is defined to a NaN value, guaranteed to have its sign bit unset. float npy_half_to_float(npy_half h) Converts a half-precision float to a single-precision float. double npy_half_to_double(npy_half h) Converts a half-precision float to a double-precision float. npy_half npy_float_to_half(float f) Converts a single-precision float to a half-precision float. The value is rounded to the nearest representable half, with ties going to the nearest even. If the value is too small or too big, the system's floating point underflow or overflow bit will be set. npy_half npy_double_to_half(double d) Converts a double-precision float to a half-precision float. The value is rounded to the nearest representable half, with ties going to the nearest even. If the value is too small or too big, the system's floating point underflow or overflow bit will be set. int npy_half_eq(npy_half h1, npy_half h2) Compares two half-precision floats (h1 == h2). int npy_half_ne(npy_half h1, npy_half h2) Compares two half-precision floats (h1 != h2). int npy_half_le(npy_half h1, npy_half h2) Compares two half-precision floats (h1 <= h2).
int npy_half_lt(npy_half h1, npy_half h2) Compares two half-precision floats (h1 < h2). int npy_half_ge(npy_half h1, npy_half h2) Compares two half-precision floats (h1 >= h2). int npy_half_gt(npy_half h1, npy_half h2) Compares two half-precision floats (h1 > h2). int npy_half_eq_nonan(npy_half h1, npy_half h2) Compares two half-precision floats that are known to not be NaN (h1 == h2). If a value is NaN, the result is undefined. int npy_half_lt_nonan(npy_half h1, npy_half h2) Compares two half-precision floats that are known to not be NaN (h1 < h2). If a value is NaN, the result is undefined. int npy_half_le_nonan(npy_half h1, npy_half h2) Compares two half-precision floats that are known to not be NaN (h1 <= h2). If a value is NaN, the result is undefined. int npy_half_iszero(npy_half h) Tests whether the half-precision float has a value equal to zero. This may be slightly faster than calling npy_half_eq(h, NPY_HALF_ZERO). int npy_half_isnan(npy_half h) Tests whether the half-precision float is a NaN. int npy_half_isinf(npy_half h) Tests whether the half-precision float is plus or minus Inf. int npy_half_isfinite(npy_half h) Tests whether the half-precision float is finite (not NaN or Inf). int npy_half_signbit(npy_half h) Returns 1 if h is negative, 0 otherwise. npy_half npy_half_copysign(npy_half x, npy_half y) Returns the value of x with the sign bit copied from y.
Works for any value, including Inf and NaN. npy_half npy_half_spacing(npy_half h) This is the same for half-precision float as npy_spacing and npy_spacingf described in the low-level floating point section. npy_half npy_half_nextafter(npy_half x, npy_half y) This is the same for half-precision float as npy_nextafter and npy_nextafterf described in the low-level floating point section. npy_uint16 npy_floatbits_to_halfbits(npy_uint32 f) Low-level function which converts a 32-bit single-precision float, stored as a uint32, into a 16-bit half-precision float. npy_uint16 npy_doublebits_to_halfbits(npy_uint64 d) Low-level function which converts a 64-bit double-precision float, stored as a uint64, into a 16-bit half-precision float. npy_uint32 npy_halfbits_to_floatbits(npy_uint16 h) Low-level function which converts a 16-bit half-precision float into a 32-bit single-precision float, stored as a uint32. npy_uint64 npy_halfbits_to_doublebits(npy_uint16 h) Low-level function which converts a 16-bit half-precision float into a 64-bit double-precision float, stored as a uint64. # Memory management in NumPy The [`numpy.ndarray`](../generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") is a Python class.
It requires additional memory allocations to hold [`numpy.ndarray.strides`](../generated/numpy.ndarray.strides#numpy.ndarray.strides "numpy.ndarray.strides"), [`numpy.ndarray.shape`](../generated/numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape") and [`numpy.ndarray.data`](../generated/numpy.ndarray.data#numpy.ndarray.data "numpy.ndarray.data") attributes. These attributes are specially allocated after creating the python object in [`__new__`](https://docs.python.org/3/reference/datamodel.html#object.__new__ "\(in Python v3.13\)"). The `strides` and `shape` are stored in a piece of memory allocated internally. The `data` allocation used to store the actual array values (which could be pointers in the case of `object` arrays) can be very large, so NumPy has provided interfaces to manage its allocation and release. This document details how those interfaces work. ## Historical overview Since version 1.7.0, NumPy has exposed a set of `PyDataMem_*` functions ([`PyDataMem_NEW`](array#c.PyDataMem_NEW "PyDataMem_NEW"), [`PyDataMem_FREE`](array#c.PyDataMem_FREE "PyDataMem_FREE"), [`PyDataMem_RENEW`](array#c.PyDataMem_RENEW "PyDataMem_RENEW")) which are backed by `malloc`, `free` and `realloc` respectively. Since those early days, Python has also improved its memory management capabilities, and began providing various [management policies](https://docs.python.org/3/c-api/memory.html#memoryoverview "\(in Python v3.13\)") beginning in version 3.4. These routines are divided into a set of domains; each domain has a [`PyMemAllocatorEx`](https://docs.python.org/3/c-api/memory.html#c.PyMemAllocatorEx "\(in Python v3.13\)") structure of routines for memory management. Python also added a [`tracemalloc`](https://docs.python.org/3/library/tracemalloc.html#module-tracemalloc "\(in Python v3.13\)") module to trace calls to the various routines. These tracking hooks were added to the NumPy `PyDataMem_*` routines.
NumPy added a small cache of allocated memory in its internal `npy_alloc_cache`, `npy_alloc_cache_zero`, and `npy_free_cache` functions. These wrap `malloc`, `malloc`-and-`memset(0)` and `free` respectively, but when `npy_free_cache` is called, it adds the pointer to a short list of available blocks marked by size. These blocks can be re-used by subsequent calls to `npy_alloc*`, avoiding memory thrashing. ## Configurable memory routines in NumPy (NEP 49) Users may wish to override the internal data memory routines with ones of their own. Since NumPy does not use the Python domain strategy to manage data memory, it provides an alternative set of C-APIs to change the memory routines. There are no Python domain-wide strategies for large chunks of object data, so those are less suited to NumPy's needs. Users who wish to change the NumPy data memory management routines can use `PyDataMem_SetHandler`, which uses a `PyDataMem_Handler` structure to hold pointers to functions used to manage the data memory. The calls are still wrapped by internal routines to call [`PyTraceMalloc_Track`](https://docs.python.org/3/c-api/memory.html#c.PyTraceMalloc_Track "\(in Python v3.13\)") and [`PyTraceMalloc_Untrack`](https://docs.python.org/3/c-api/memory.html#c.PyTraceMalloc_Untrack "\(in Python v3.13\)"). Since the functions may change during the lifetime of the process, each `ndarray` carries with it the functions used at the time of its instantiation, and these will be used to reallocate or free the data memory of the instance.
type PyDataMem_Handler A struct to hold function pointers used to manipulate memory. typedef struct { char name[127]; /* multiple of 64 to keep the struct aligned */ uint8_t version; /* currently 1 */ PyDataMemAllocator allocator; } PyDataMem_Handler; where the allocator structure is /* The declaration of free differs from PyMemAllocatorEx */ typedef struct { void *ctx; void* (*malloc) (void *ctx, size_t size); void* (*calloc) (void *ctx, size_t nelem, size_t elsize); void* (*realloc) (void *ctx, void *ptr, size_t new_size); void (*free) (void *ctx, void *ptr, size_t size); } PyDataMemAllocator; [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyDataMem_SetHandler([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *handler) Set a new allocation policy. If the input value is `NULL`, will reset the policy to the default. Returns the previous policy, or `NULL` if an error has occurred. We wrap the user-provided functions so they will still call the Python and NumPy memory management callback hooks. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyDataMem_GetHandler() Return the current policy that will be used to allocate data for the next `PyArrayObject`. On failure, return `NULL`. For an example of setting up and using the PyDataMem_Handler, see the test in `numpy/_core/tests/test_mem_policy.py`. ## What happens when deallocating if there is no policy set A rare but useful technique is to allocate a buffer outside NumPy, use [`PyArray_NewFromDescr`](array#c.PyArray_NewFromDescr "PyArray_NewFromDescr") to wrap the buffer in an `ndarray`, then switch the `OWNDATA` flag to true. When the `ndarray` is released, the appropriate function from the `ndarray`'s `PyDataMem_Handler` should be called to free the buffer. But since the `PyDataMem_Handler` field was never set, it will be `NULL`.
For backward compatibility, NumPy will call `free()` to release the buffer. If `NUMPY_WARN_IF_NO_MEM_POLICY` is set to `1`, a warning will be emitted. The current default is not to emit a warning; this may change in a future version of NumPy. A better technique would be to use a `PyCapsule` as a base object: /* define a PyCapsule_Destructor, using the correct deallocator for buff */ void free_wrap(void *capsule){ void * obj = PyCapsule_GetPointer(capsule, PyCapsule_GetName(capsule)); free(obj); } /* then inside the function that creates arr from buff */ ... arr = PyArray_NewFromDescr(... buf, ...); if (arr == NULL) { return NULL; } capsule = PyCapsule_New(buf, "my_wrapped_buffer", (PyCapsule_Destructor)&free_wrap); if (PyArray_SetBaseObject(arr, capsule) == -1) { Py_DECREF(arr); return NULL; } ... ## Example of memory tracing with `np.lib.tracemalloc_domain` Since Python 3.6, the builtin `tracemalloc` module can be used to track allocations inside NumPy. NumPy places its CPU memory allocations into the `np.lib.tracemalloc_domain` domain. Here is an example of how to use `np.lib.tracemalloc_domain`: """ The goal of this example is to show how to trace memory from an application that has NumPy and non-NumPy sections. We only select the sections using NumPy related calls.
""" import tracemalloc import numpy as np # Flag to determine if we select NumPy domain use_np_domain = True nx = 300 ny = 500 # Start to trace memory tracemalloc.start() # Section 1 # --------- # NumPy related call a = np.zeros((nx,ny)) # non-NumPy related call b = [i**2 for i in range(nx*ny)] snapshot1 = tracemalloc.take_snapshot() # We filter the snapshot to only select NumPy related calls np_domain = np.lib.tracemalloc_domain dom_filter = tracemalloc.DomainFilter(inclusive=use_np_domain, domain=np_domain) snapshot1 = snapshot1.filter_traces([dom_filter]) top_stats1 = snapshot1.statistics('traceback') print("================ SNAPSHOT 1 =================") for stat in top_stats1: print(f"{stat.count} memory blocks: {stat.size / 1024:.1f} KiB") print(stat.traceback.format()[-1]) # Clear traces of memory blocks allocated by Python # before moving to the next section. tracemalloc.clear_traces() # Section 2 #---------- # We are only using NumPy c = np.sum(a*a) snapshot2 = tracemalloc.take_snapshot() top_stats2 = snapshot2.statistics('traceback') print() print("================ SNAPSHOT 2 =================") for stat in top_stats2: print(f"{stat.count} memory blocks: {stat.size / 1024:.1f} KiB") print(stat.traceback.format()[-1]) tracemalloc.stop() print() print("============================================") print("\nTracing Status : ", tracemalloc.is_tracing()) try: print("\nTrying to Take Snapshot After Tracing is Stopped.") snap = tracemalloc.take_snapshot() except Exception as e: print("Exception : ", e) # Datetime API NumPy represents dates internally using an int64 counter and a unit metadata struct. Time differences are represented similarly using an int64 and a unit metadata struct. The functions described below are available to to facilitate converting between ISO 8601 date strings, NumPy datetimes, and Python datetime objects in C. 
## Data types In addition to the [`npy_datetime`](dtype#c.npy_datetime "npy_datetime") and [`npy_timedelta`](dtype#c.npy_timedelta "npy_timedelta") typedefs for [`npy_int64`](dtype#c.npy_int64 "npy_int64"), NumPy defines two additional structs that represent time unit metadata and an "exploded" view of a datetime. type PyArray_DatetimeMetaData Represents datetime unit metadata. typedef struct { NPY_DATETIMEUNIT base; int num; } PyArray_DatetimeMetaData; NPY_DATETIMEUNIT base The unit of the datetime. int num A multiplier for the unit. type npy_datetimestruct An "exploded" view of a datetime value typedef struct { npy_int64 year; npy_int32 month, day, hour, min, sec, us, ps, as; } npy_datetimestruct; enum NPY_DATETIMEUNIT Time units supported by NumPy. The "FR" in the names of the enum variants is short for frequency. enumerator NPY_FR_ERROR Error or undetermined units. enumerator NPY_FR_Y Years enumerator NPY_FR_M Months enumerator NPY_FR_W Weeks enumerator NPY_FR_D Days enumerator NPY_FR_h Hours enumerator NPY_FR_m Minutes enumerator NPY_FR_s Seconds enumerator NPY_FR_ms Milliseconds enumerator NPY_FR_us Microseconds enumerator NPY_FR_ns Nanoseconds enumerator NPY_FR_ps Picoseconds enumerator NPY_FR_fs Femtoseconds enumerator NPY_FR_as Attoseconds enumerator NPY_FR_GENERIC Unbound units, can convert to anything ## Conversion functions int NpyDatetime_ConvertDatetimeStructToDatetime64(PyArray_DatetimeMetaData *meta, const npy_datetimestruct *dts, npy_datetime *out) Converts a datetime from a datetimestruct to a datetime in the units specified by the unit metadata. The date is assumed to be valid. If the `num` member of the metadata struct is large, there may be integer overflow in this function. Returns 0 on success and -1 on failure.
int NpyDatetime_ConvertDatetime64ToDatetimeStruct(PyArray_DatetimeMetaData *meta, npy_datetime dt, npy_datetimestruct *out) Converts a datetime with units specified by the unit metadata to an exploded datetime struct. Returns 0 on success and -1 on failure. int NpyDatetime_ConvertPyDateTimeToDatetimeStruct([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj, npy_datetimestruct *out, NPY_DATETIMEUNIT *out_bestunit, int apply_tzinfo) Tests for and converts a Python `datetime.datetime` or `datetime.date` object into a NumPy `npy_datetimestruct`. `out_bestunit` gives a suggested unit based on whether the object was a `datetime.date` or `datetime.datetime` object. If `apply_tzinfo` is 1, this function uses the tzinfo to convert to UTC time, otherwise it returns the struct with the local time. Returns -1 on error, 0 on success, and 1 (with no error set) if obj doesn't have the needed date or datetime attributes. int NpyDatetime_ParseISO8601Datetime(char const *str, [Py_ssize_t](https://docs.python.org/3/c-api/intro.html#c.Py_ssize_t "\(in Python v3.13\)") len, NPY_DATETIMEUNIT unit, NPY_CASTING casting, npy_datetimestruct *out, NPY_DATETIMEUNIT *out_bestunit, npy_bool *out_special) Parses (almost) standard ISO 8601 date strings. The differences are: * The date "20100312" is parsed as the year 20100312, not as equivalent to "2010-03-12". The '-' in the dates are not optional. * Only seconds may have a decimal point, with up to 18 digits after it (maximum attoseconds precision). * Either a 'T' as in ISO 8601 or a space may be used to separate the date and the time. Both are treated equivalently. * Doesn't (yet) handle the "YYYY-DDD" or "YYYY-Www" formats. * Doesn't handle leap seconds (seconds value has 60 in these cases).
* Doesn't handle 24:00:00 as a synonym for midnight (00:00:00) tomorrow. * Accepts the special values "NaT" (not a time), "Today" (the current day according to local time) and "Now" (the current time in UTC). `str` must be a NULL-terminated string, and `len` must be its length. `unit` should contain -1 if the unit is unknown, or the unit which will be used if it is known. `casting` controls how the unit detected from the string is allowed to be cast to the `unit` parameter. `out` gets filled with the parsed date-time. `out_bestunit` gives a suggested unit based on the amount of resolution provided in the string, or -1 for NaT. `out_special` gets set to 1 if the parsed time was 'today', 'now', the empty string, or 'NaT'. For 'today' the unit recommended is 'D', for 'now' the unit recommended is 's', and for 'NaT' the unit recommended is 'Y'. Returns 0 on success, -1 on failure. int NpyDatetime_GetDatetimeISO8601StrLen(int local, NPY_DATETIMEUNIT base) Returns the string length to use for converting datetime objects with the given local time and unit settings to strings. Use this when constructing strings to supply to `NpyDatetime_MakeISO8601Datetime`. int NpyDatetime_MakeISO8601Datetime(npy_datetimestruct *dts, char *outstr, npy_intp outlen, int local, int utc, NPY_DATETIMEUNIT base, int tzoffset, NPY_CASTING casting) Converts an `npy_datetimestruct` to an (almost) ISO 8601 NULL-terminated string. If the string fits in the space exactly, it leaves out the NULL terminator and returns success. The differences from ISO 8601 are the 'NaT' string, and the number of year digits being >= 4 instead of strictly 4. If `local` is non-zero, it produces a string in local time with a +-#### timezone offset. If `local` is zero and `utc` is non-zero, it produces a string ending with 'Z' to denote UTC. By default, no time zone information is attached. `base` restricts the output to that unit.
Set `base` to -1 to auto-detect a base after which all the values are zero. `tzoffset` is used if `local` is enabled and `tzoffset` is set to a value other than -1. It is a manual override for the local time zone to use, as an offset in minutes. `casting` controls whether data loss is allowed by truncating the data to a coarser unit. This interacts slightly with `local`: in order to form a date unit string as a local time, the casting must be unsafe. Returns 0 on success, -1 on failure (for example, if the output string was too short). # C API deprecations ## Background The API exposed by NumPy for third-party extensions has grown over years of releases, and has allowed programmers to directly access NumPy functionality from C. This API can be best described as "organic". It has emerged from multiple competing desires and from multiple points of view over the years, strongly influenced by the desire to make it easy for users to move to NumPy from Numeric and Numarray. The core API originated with Numeric in 1995 and there are patterns such as the heavy use of macros written to mimic Python's C-API as well as account for compiler technology of the late 90's. There is also only a small group of volunteers who have had very little time to spend on improving this API. There is an ongoing effort to improve the API. It is important in this effort to ensure that code that compiles for NumPy 1.X continues to compile for NumPy 1.X. At the same time, certain APIs will be marked as deprecated so that future-looking code can avoid these APIs and follow better practices. Another important role played by deprecation markings in the C API is to move towards hiding internal details of the NumPy implementation. For those needing direct, easy access to the data of ndarrays, this ability will not be removed.
Rather, there are many potential performance optimizations which require changing the implementation details, and NumPy developers have been unable to try them because of the high value of preserving ABI compatibility. By deprecating this direct access, we will in the future be able to improve NumPy's performance in ways we cannot presently. ## Deprecation mechanism NPY_NO_DEPRECATED_API In C, there is no equivalent to the deprecation warnings that Python supports. One way to do deprecations is to flag them in the documentation and release notes, then remove or change the deprecated features in a future major version (NumPy 2.0 and beyond). Minor versions of NumPy should not, however, have major C-API changes that prevent code that worked on a previous minor release from working. For example, we will do our best to ensure that code that compiled and worked on NumPy 1.4 should continue to work on NumPy 1.7 (but perhaps with compiler warnings). To use the NPY_NO_DEPRECATED_API mechanism, you need to #define it to the target API version of NumPy before #including any NumPy headers. If you want to confirm that your code is clean against 1.7, use: #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION On compilers which support a #warning mechanism, NumPy issues a compiler warning if you do not define the symbol NPY_NO_DEPRECATED_API. This way, the fact that there are deprecations will be flagged for third-party developers who may not have read the release notes closely. Note that defining NPY_NO_DEPRECATED_API is not sufficient to make your extension ABI compatible with a given NumPy version. See [For downstream package authors](../../dev/depending_on_numpy#for-downstream-package-authors). # Data type API The standard array can have 25 different data types (and has some support for adding your own types). These data types all have an enumerated type, an enumerated type-character, and a corresponding array scalar Python type object (placed in a hierarchy).
There are also standard C typedefs to make it easier to manipulate elements of the given data type. For the numeric types, there are also bit-width equivalent C typedefs and named typenumbers that make it easier to select the precision desired.

Warning: The names for the types in C code follow C naming conventions more closely. The Python names for these types follow Python conventions. Thus, `NPY_FLOAT` picks up a 32-bit float in C, but [`numpy.float64`](../arrays.scalars#numpy.float64 "numpy.float64") in Python corresponds to a 64-bit double. The bit-width names can be used in both Python and C for clarity.

## Enumerated types

enum `NPY_TYPES`

There is a list of enumerated types defined providing the basic 25 data types plus some useful generic names. Whenever the code requires a type number, one of these enumerated types is requested. The types are all called `NPY_{NAME}`:

* `NPY_BOOL`: The enumeration value for the boolean type, stored as one byte. It may only be set to the values 0 and 1.
* `NPY_BYTE`, `NPY_INT8`: The enumeration value for an 8-bit/1-byte signed integer.
* `NPY_SHORT`, `NPY_INT16`: The enumeration value for a 16-bit/2-byte signed integer.
* `NPY_INT`, `NPY_INT32`: The enumeration value for a 32-bit/4-byte signed integer.
* `NPY_LONG`: Equivalent to either NPY_INT or NPY_LONGLONG, depending on the platform.
* `NPY_LONGLONG`, `NPY_INT64`: The enumeration value for a 64-bit/8-byte signed integer.
* `NPY_UBYTE`, `NPY_UINT8`: The enumeration value for an 8-bit/1-byte unsigned integer.
* `NPY_USHORT`, `NPY_UINT16`: The enumeration value for a 16-bit/2-byte unsigned integer.
* `NPY_UINT`, `NPY_UINT32`: The enumeration value for a 32-bit/4-byte unsigned integer.
* `NPY_ULONG`: Equivalent to either NPY_UINT or NPY_ULONGLONG, depending on the platform.
* `NPY_ULONGLONG`, `NPY_UINT64`: The enumeration value for a 64-bit/8-byte unsigned integer.
* `NPY_HALF`, `NPY_FLOAT16`: The enumeration value for a 16-bit/2-byte IEEE 754-2008 compatible floating point type.
* `NPY_FLOAT`, `NPY_FLOAT32`: The enumeration value for a 32-bit/4-byte IEEE 754 compatible floating point type.
* `NPY_DOUBLE`, `NPY_FLOAT64`: The enumeration value for a 64-bit/8-byte IEEE 754 compatible floating point type.
* `NPY_LONGDOUBLE`: The enumeration value for a platform-specific floating point type which is at least as large as NPY_DOUBLE, but larger on many platforms.
* `NPY_CFLOAT`, `NPY_COMPLEX64`: The enumeration value for a 64-bit/8-byte complex type made up of two NPY_FLOAT values.
* `NPY_CDOUBLE`, `NPY_COMPLEX128`: The enumeration value for a 128-bit/16-byte complex type made up of two NPY_DOUBLE values.
* `NPY_CLONGDOUBLE`: The enumeration value for a platform-specific complex floating point type which is made up of two NPY_LONGDOUBLE values.
* `NPY_DATETIME`: The enumeration value for a data type which holds dates or datetimes with a precision based on selectable date or time units.
* `NPY_TIMEDELTA`: The enumeration value for a data type which holds lengths of times in integers of selectable date or time units.
* `NPY_STRING`: The enumeration value for null-padded byte strings of a selectable size. The strings have a fixed maximum size within a given array.
* `NPY_UNICODE`: The enumeration value for UCS4 strings of a selectable size. The strings have a fixed maximum size within a given array.
* `NPY_VSTRING`: The enumeration value for UTF-8 variable-width strings. Note that this dtype holds an array of references, with string data stored outside of the array buffer. Use the NpyString C API to access the string data in each array entry. Note: this DType is new-style and is not included in `NPY_NTYPES_LEGACY`.
* `NPY_OBJECT`: The enumeration value for references to arbitrary Python objects.
* `NPY_VOID`: Primarily used to hold struct dtypes, but can contain arbitrary binary data.

Some useful aliases of the above types are:

* `NPY_INTP`: The enumeration value for a signed integer of type `Py_ssize_t` (same as `ssize_t` if defined). This is the type used by all arrays of indices. Changed in version 2.0: previously, this was the same as `intptr_t` (same size as a pointer). In practice, this is identical except on very niche platforms. You can use the `'p'` character code for the pointer meaning.
* `NPY_UINTP`: The enumeration value for an unsigned integer type that is identical to a `size_t`. Changed in version 2.0: previously, this was the same as `uintptr_t` (same size as a pointer). In practice, this is identical except on very niche platforms. You can use the `'P'` character code for the pointer meaning.
* `NPY_MASK`: The enumeration value of the type used for masks, such as with the [`NPY_ITER_ARRAYMASK`](iterator#c.NPY_ITER_ARRAYMASK "NPY_ITER_ARRAYMASK") iterator flag. This is equivalent to `NPY_UINT8`.
* `NPY_DEFAULT_TYPE`: The default type to use when no dtype is explicitly specified, for example when calling `np.zeros(shape)`. This is equivalent to `NPY_DOUBLE`.

Other useful related constants are:

* `NPY_NTYPES_LEGACY`: The number of built-in NumPy types written using the legacy DType system. New NumPy dtypes will be written using the new DType API and may not function in the same manner as legacy DTypes. Use this macro if you want to handle legacy DTypes using different code paths, or if you do not want to update code that uses `NPY_NTYPES_LEGACY` and does not work correctly with new DTypes. Note: newly added DTypes such as `NPY_VSTRING` will not be counted in `NPY_NTYPES_LEGACY`.
* `NPY_NOTYPE`: A signal value guaranteed not to be a valid type enumeration number.
* `NPY_USERDEF`: The start of type numbers used for legacy custom data types. New-style user DTypes are currently _not_ assigned a type-number.
Note: the total number of user dtypes is limited to below `NPY_VSTRING`. Higher numbers are reserved for future new-style DType use.

The various character codes indicating certain types are also part of an enumerated list. References to type characters (should they be needed at all) should always use these enumerations. The form of them is `NPY_{NAME}LTR` where `{NAME}` can be:

**BOOL**, **BYTE**, **UBYTE**, **SHORT**, **USHORT**, **INT**, **UINT**, **LONG**, **ULONG**, **LONGLONG**, **ULONGLONG**, **HALF**, **FLOAT**, **DOUBLE**, **LONGDOUBLE**, **CFLOAT**, **CDOUBLE**, **CLONGDOUBLE**, **DATETIME**, **TIMEDELTA**, **OBJECT**, **STRING**, **UNICODE**, **VSTRING**, **VOID**

**INTP**, **UINTP**

**GENBOOL**, **SIGNED**, **UNSIGNED**, **FLOATING**, **COMPLEX**

The latter group of `{NAME}s` corresponds to letters used in the array interface typestring specification.

## Defines

### Max and min values for integers

`NPY_MAX_INT{bits}`, `NPY_MAX_UINT{bits}`, `NPY_MIN_INT{bits}`: These are defined for `{bits}` = 8, 16, 32, 64, 128, and 256 and provide the maximum (minimum) value of the corresponding (unsigned) integer type. Note: the actual integer type may not be available on all platforms (i.e. 128-bit and 256-bit integers are rare).

`NPY_MIN_{type}`: This is defined for `{type}` = **BYTE**, **SHORT**, **INT**, **LONG**, **LONGLONG**, **INTP**.

`NPY_MAX_{type}`: This is defined for `{type}` = **BYTE**, **UBYTE**, **SHORT**, **USHORT**, **INT**, **UINT**, **LONG**, **ULONG**, **LONGLONG**, **ULONGLONG**, **INTP**, **UINTP**.

### Number of bits in data types

All `NPY_SIZEOF_{CTYPE}` constants have corresponding `NPY_BITSOF_{CTYPE}` constants defined. The `NPY_BITSOF_{CTYPE}` constants provide the number of bits in the data type.
Specifically, the available `{CTYPE}s` are **BOOL**, **CHAR**, **SHORT**, **INT**, **LONG**, **LONGLONG**, **FLOAT**, **DOUBLE**, **LONGDOUBLE**.

### Bit-width references to enumerated typenums

All of the numeric data types (integer, floating point, and complex) have constants that are defined to be a specific enumerated type number. Exactly which enumerated type a bit-width type refers to is platform dependent. In particular, the constants available are `PyArray_{NAME}{BITS}` where `{NAME}` is **INT**, **UINT**, **FLOAT**, **COMPLEX** and `{BITS}` can be 8, 16, 32, 64, 80, 96, 128, 160, 192, 256, and 512. Obviously not all bit-widths are available on all platforms for all the kinds of numeric types. Commonly 8-, 16-, 32-, and 64-bit integers; 32- and 64-bit floats; and 64- and 128-bit complex types are available.

### Further integer aliases

The constants **NPY_INTP** and **NPY_UINTP** refer to a `Py_ssize_t` and a `size_t`. Although in practice normally true, these types are strictly speaking not pointer sized, and the character codes `'p'` and `'P'` can be used for pointer-sized integers. (Before NumPy 2, `intp` was pointer sized, but this almost never matched the actual use, which is the reason for the name.) Since NumPy 2, **NPY_DEFAULT_INT** is additionally defined. The value of the macro is runtime dependent: when running on NumPy 2 it maps to `NPY_INTP`, while on earlier versions it maps to `NPY_LONG`.

## C-type names

There are standard variable types for each of the numeric data types and the bool data type. Some of these are already available in the C-specification. You can create variables in extension code with these types.

### Boolean

`npy_bool`: unsigned char; the constants [`NPY_FALSE`](array#c.NPY_FALSE "NPY_FALSE") and [`NPY_TRUE`](array#c.NPY_TRUE "NPY_TRUE") are also defined.

### (Un)Signed Integer

Unsigned versions of the integers can be defined by prepending a ‘u’ to the front of the integer name.
* `npy_byte`: char
* `npy_ubyte`: unsigned char
* `npy_short`: short
* `npy_ushort`: unsigned short
* `npy_int`: int
* `npy_uint`: unsigned int
* `npy_int16`: 16-bit integer
* `npy_uint16`: 16-bit unsigned integer
* `npy_int32`: 32-bit integer
* `npy_uint32`: 32-bit unsigned integer
* `npy_int64`: 64-bit integer
* `npy_uint64`: 64-bit unsigned integer
* `npy_long`: long int
* `npy_ulong`: unsigned long int
* `npy_longlong`: long long int
* `npy_ulonglong`: unsigned long long int
* `npy_intp`: `Py_ssize_t` (a signed integer with the same size as the C `size_t`). This is the correct integer for lengths or indexing. In practice this is normally the size of a pointer, but this is not guaranteed. Note: before NumPy 2.0, this was the same as `Py_intptr_t`. While a better match, this did not match actual usage in practice. On the Python side, we still support `np.dtype('p')` to fetch a dtype compatible with storing pointers, while `n` is the correct character for the `ssize_t`.
* `npy_uintp`: The C `size_t`/`Py_size_t`.

### (Complex) Floating point

* `npy_half`: 16-bit float
* `npy_float`: 32-bit float
* `npy_cfloat`: 32-bit complex float
* `npy_double`: 64-bit double
* `npy_cdouble`: 64-bit complex double
* `npy_longdouble`: long double
* `npy_clongdouble`: long double complex

Complex types are structures with **.real** and **.imag** members (in that order).

### Bit-width names

There are also typedefs for signed integers, unsigned integers, floating point, and complex floating point types of specific bit-widths. The available type names are `npy_int{bits}`, `npy_uint{bits}`, `npy_float{bits}`, and `npy_complex{bits}` where `{bits}` is the number of bits in the type and can be **8**, **16**, **32**, **64**, 128, and 256 for integer types; 16, **32**, **64**, 80, 96, 128, and 256 for floating-point types; and 32, **64**, **128**, 160, 192, and 512 for complex-valued types. Which bit-widths are available is platform dependent. The bolded bit-widths are usually available on all platforms.
### Time and timedelta

* `npy_datetime`: date or datetime (alias of `npy_int64`)
* `npy_timedelta`: length of time (alias of `npy_int64`)

## Printf formatting

For help in printing, the following strings are defined as the correct format specifier in printf and related commands:

`NPY_LONGLONG_FMT`, `NPY_ULONGLONG_FMT`, `NPY_INTP_FMT`, `NPY_UINTP_FMT`, `NPY_LONGDOUBLE_FMT`

# Generalized universal function API

There is a general need for looping over not only functions on scalars but also over functions on vectors (or arrays). This concept is realized in NumPy by generalizing the universal functions (ufuncs). In regular ufuncs, the elementary function is limited to element-by-element operations, whereas the generalized version (gufuncs) supports “sub-array” by “sub-array” operations. The Perl vector library PDL provides similar functionality and its terms are re-used in the following.

Each generalized ufunc has information associated with it that states what the “core” dimensionality of the inputs is, as well as the corresponding dimensionality of the outputs (the element-wise ufuncs have zero core dimensions). The list of the core dimensions for all arguments is called the “signature” of a ufunc. For example, the ufunc `numpy.add` has signature `(),()->()` defining two scalar inputs and one scalar output.

Another example is the function `inner1d(a, b)` with a signature of `(i),(i)->()`. This applies the inner product along the last axis of each input, but keeps the remaining indices intact. For example, where `a` is of shape `(3, 5, N)` and `b` is of shape `(5, N)`, this will return an output of shape `(3, 5)`. The underlying elementary function is called `3 * 5` times. In the signature, we specify one core dimension `(i)` for each input and zero core dimensions `()` for the output, since it takes two 1-d arrays and returns a scalar. By using the same name `i`, we specify that the two corresponding dimensions should be of the same size.
The dimensions beyond the core dimensions are called “loop” dimensions. In the above example, this corresponds to `(3, 5)`. The signature determines how the dimensions of each input/output array are split into core and loop dimensions:

1. Each dimension in the signature is matched to a dimension of the corresponding passed-in array, starting from the end of the shape tuple. These are the core dimensions, and they must be present in the arrays, or an error will be raised.
2. Core dimensions assigned to the same label in the signature (e.g. the `i` in `inner1d`’s `(i),(i)->()`) must have exactly matching sizes; no broadcasting is performed.
3. The core dimensions are removed from all inputs and the remaining dimensions are broadcast together, defining the loop dimensions.
4. The shape of each output is determined from the loop dimensions plus the output’s core dimensions.

Typically, the size of all core dimensions in an output will be determined by the size of a core dimension with the same label in an input array. This is not a requirement, and it is possible to define a signature where a label comes up for the first time in an output, although some precautions must be taken when calling such a function. An example would be the function `euclidean_pdist(a)`, with signature `(n,d)->(p)`, that given an array of `n` `d`-dimensional vectors, computes all unique pairwise Euclidean distances among them. The output dimension `p` must therefore be equal to `n * (n - 1) / 2`, but by default, it is the caller’s responsibility to pass in an output array of the right size. If the size of a core dimension of an output cannot be determined from a passed-in input or output array, an error will be raised. This can be changed by defining a `PyUFunc_ProcessCoreDimsFunc` function and assigning it to the `process_core_dims_func` field of the `PyUFuncObject` structure. See below for more details.
Note: prior to NumPy 1.10.0, less strict checks were in place: missing core dimensions were created by prepending 1’s to the shape as necessary, core dimensions with the same label were broadcast together, and undetermined dimensions were created with size 1.

## Definitions

**Elementary function**: Each ufunc consists of an elementary function that performs the most basic operation on the smallest portion of array arguments (e.g. adding two numbers is the most basic operation in adding two arrays). The ufunc applies the elementary function multiple times on different parts of the arrays. The input/output of elementary functions can be vectors; e.g., the elementary function of `inner1d` takes two vectors as input.

**Signature**: A signature is a string describing the input/output dimensions of the elementary function of a ufunc. See the section below for more details.

**Core dimension**: The dimensionality of each input/output of an elementary function is defined by its core dimensions (zero core dimensions correspond to a scalar input/output). The core dimensions are mapped to the last dimensions of the input/output arrays.

**Dimension name**: A dimension name represents a core dimension in the signature. Different dimensions may share a name, indicating that they are of the same size.

**Dimension index**: A dimension index is an integer representing a dimension name. It enumerates the dimension names according to the order of the first occurrence of each name in the signature.

## Details of signature

The signature defines the “core” dimensionality of input and output variables, and thereby also defines the contraction of the dimensions. The signature is represented by a string of the following format:

* Core dimensions of each input or output array are represented by a list of dimension names in parentheses, `(i_1,...,i_N)`; a scalar input/output is denoted by `()`. Instead of `i_1`, `i_2`, etc, one can use any valid Python variable name.
* Dimension lists for different arguments are separated by `","`. Input/output arguments are separated by `"->"`.
* If one uses the same dimension name in multiple locations, this enforces the same size of the corresponding dimensions.

The formal syntax of signatures is as follows:

    <Signature>           ::= <Input arguments> "->" <Output arguments>
    <Input arguments>     ::= <Argument list>
    <Output arguments>    ::= <Argument list>
    <Argument list>       ::= nil | <Argument> | <Argument> "," <Argument list>
    <Argument>            ::= "(" <Core dimension list> ")"
    <Core dimension list> ::= nil | <Core dimension> |
                              <Core dimension> "," <Core dimension list>
    <Core dimension>      ::= <Dimension name> <Dimension modifier>
    <Dimension name>      ::= valid Python variable name | valid integer
    <Dimension modifier>  ::= nil | "?"

Notes:

1. All quotes are for clarity.
2. Unmodified core dimensions that share the same name must have the same size. Each dimension name typically corresponds to one level of looping in the elementary function’s implementation.
3. White spaces are ignored.
4. An integer as a dimension name freezes that dimension to the value.
5. If the name is suffixed with the “?” modifier, the dimension is a core dimension only if it exists on all inputs and outputs that share it; otherwise it is ignored (and replaced by a dimension of size 1 for the elementary function).

Here are some examples of signatures:

name | signature | common usage
---|---|---
add | `(),()->()` | binary ufunc
sum1d | `(i)->()` | reduction
inner1d | `(i),(i)->()` | vector-vector multiplication
matmat | `(m,n),(n,p)->(m,p)` | matrix multiplication
vecmat | `(n),(n,p)->(p)` | vector-matrix multiplication
matvec | `(m,n),(n)->(m)` | matrix-vector multiplication
matmul | `(m?,n),(n,p?)->(m?,p?)` | combination of the four above
outer_inner | `(i,t),(j,t)->(i,j)` | inner over the last dimension, outer over the second to last, and loop/broadcast over the rest
cross1d | `(3),(3)->(3)` | cross product where the last dimension is frozen and must be 3

The last is an instance of freezing a core dimension and can be used to improve ufunc performance.

## C-API for implementing elementary functions

The current interface remains unchanged, and `PyUFunc_FromFuncAndData` can still be used to implement (specialized) ufuncs, consisting of scalar elementary functions.
One can use `PyUFunc_FromFuncAndDataAndSignature` to declare a more general ufunc. The argument list is the same as `PyUFunc_FromFuncAndData`, with an additional argument specifying the signature as a C string. Furthermore, the callback function is of the same type as before, `void (*foo)(char **args, intp *dimensions, intp *steps, void *func)`. When invoked, `args` is a list of length `nargs` containing the data of all input/output arguments. For a scalar elementary function, `steps` is also of length `nargs`, denoting the strides used for the arguments. `dimensions` is a pointer to a single integer defining the size of the axis to be looped over.

For a non-trivial signature, `dimensions` will also contain the sizes of the core dimensions, starting at the second entry. Only one size is provided for each unique dimension name, and the sizes are given according to the first occurrence of a dimension name in the signature. The first `nargs` elements of `steps` remain the same as for scalar ufuncs. The following elements contain the strides of all core dimensions for all arguments in order.

For example, consider a ufunc with signature `(i,j),(i)->()`. In this case, `args` will contain three pointers to the data of the input/output arrays `a`, `b`, `c`. Furthermore, `dimensions` will be `[N, I, J]` to define the size `N` of the loop and the sizes `I` and `J` for the core dimensions `i` and `j`. Finally, `steps` will be `[a_N, b_N, c_N, a_i, a_j, b_i]`, containing all necessary strides.

## Customizing core dimension size processing

The optional function of type `PyUFunc_ProcessCoreDimsFunc`, stored on the `process_core_dims_func` attribute of the ufunc, provides the author of the ufunc a “hook” into the processing of the core dimensions of the arrays that were passed to the ufunc. The two primary uses of this “hook” are:

* Check that constraints on the core dimensions required by the ufunc are satisfied (and set an exception if they are not).
* Compute output shapes for any output core dimensions that were not determined by the input arrays.

As an example of the first use, consider the generalized ufunc `minmax` with signature `(n)->(2)` that simultaneously computes the minimum and maximum of a sequence. It should require that `n > 0`, because the minimum and maximum of a sequence with length 0 is not meaningful. In this case, the ufunc author might define the function like this:

    int minmax_process_core_dims(PyUFuncObject *ufunc,
                                 npy_intp *core_dim_sizes)
    {
        npy_intp n = core_dim_sizes[0];
        if (n == 0) {
            PyErr_SetString(PyExc_ValueError,
                            "minmax requires the core dimension "
                            "to be at least 1.");
            return -1;
        }
        return 0;
    }

In this case, the length of the array `core_dim_sizes` will be 2. The second value in the array will always be 2, so there is no need for the function to inspect it. The core dimension `n` is stored in the first element. The function sets an exception and returns -1 if it finds that `n` is 0.

The second use for the “hook” is to compute the size of output arrays when the output arrays are not provided by the caller and one or more core dimension of the output is not also an input core dimension. If the ufunc does not have a function defined on the `process_core_dims_func` attribute, an unspecified output core dimension size will result in an exception being raised. With the “hook” provided by `process_core_dims_func`, the author of the ufunc can set the output size to whatever is appropriate for the ufunc.

In the array passed to the “hook” function, core dimensions that were not determined by the input are indicated by having the value -1 in the `core_dim_sizes` array. The function can replace the -1 with whatever value is appropriate for the ufunc, based on the core dimensions that occurred in the input arrays.

Warning: the function must never change a value in `core_dim_sizes` that is not -1 on input.
Changing a value that was not -1 will generally result in incorrect output from the ufunc, and could result in the Python interpreter crashing.

For example, consider the generalized ufunc `conv1d` for which the elementary function computes the “full” convolution of two one-dimensional arrays `x` and `y` with lengths `m` and `n`, respectively. The output of this convolution has length `m + n - 1`. To implement this as a generalized ufunc, the signature is set to `(m),(n)->(p)`, and in the “hook” function, if the core dimension `p` is found to be -1, it is replaced with `m + n - 1`. If `p` is _not_ -1, it must be verified that the given value equals `m + n - 1`. If it does not, the function must set an exception and return -1. For a meaningful result, the operation also requires that `m + n` is at least 1, i.e. both inputs can’t have length 0. Here’s how that might look in code:

    int conv1d_process_core_dims(PyUFuncObject *ufunc,
                                 npy_intp *core_dim_sizes)
    {
        // core_dim_sizes will hold the core dimensions [m, n, p].
        // p will be -1 if the caller did not provide the out argument.
        npy_intp m = core_dim_sizes[0];
        npy_intp n = core_dim_sizes[1];
        npy_intp p = core_dim_sizes[2];
        npy_intp required_p = m + n - 1;

        if (m == 0 && n == 0) {
            // Disallow both inputs having length 0.
            PyErr_SetString(PyExc_ValueError,
                "conv1d: both inputs have core dimension 0; the function "
                "requires that at least one input has size greater than 0.");
            return -1;
        }
        if (p == -1) {
            // Output array was not given in the call of the ufunc.
            // Set the correct output size here.
            core_dim_sizes[2] = required_p;
            return 0;
        }
        // An output array *was* given.  Validate its core dimension.
        if (p != required_p) {
            PyErr_Format(PyExc_ValueError,
                "conv1d: the core dimension p of the out parameter "
                "does not equal m + n - 1, where m and n are the "
                "core dimensions of the inputs x and y; got m=%zd "
                "and n=%zd so p must be %zd, but got p=%zd.",
                m, n, required_p, p);
            return -1;
        }
        return 0;
    }

# NumPy C-API

NumPy provides a C-API to enable users to extend the system and get access to the array object for use in other routines. The best way to truly understand the C-API is to read the source code. If you are unfamiliar with (C) source code, however, this can be a daunting experience at first. Be assured that the task becomes easier with practice, and you may be surprised at how simple the C-code can be to understand. Even if you don’t think you can write C-code from scratch, it is much easier to understand and modify already-written source code than create it _de novo_. Python extensions are especially straightforward to understand because they all have a very similar structure. Admittedly, NumPy is not a trivial extension to Python, and may take a little more snooping to grasp. This is especially true because of the code-generation techniques, which simplify maintenance of very similar code, but can make the code a little less readable to beginners. Still, with a little persistence, the code can be opened to your understanding. It is my hope that this guide to the C-API can assist in the process of becoming familiar with the compiled-level work that can be done with NumPy in order to squeeze that last bit of necessary speed out of your code.
* [Python types and C-structures](types-and-structures) * [New Python types defined](types-and-structures#new-python-types-defined) * [Other C-structures](types-and-structures#other-c-structures) * [NumPy C-API and C complex](types-and-structures#numpy-c-api-and-c-complex) * [System configuration](config) * [Data type sizes](config#data-type-sizes) * [Platform information](config#platform-information) * [Compiler directives](config#compiler-directives) * [Data type API](dtype) * [Enumerated types](dtype#enumerated-types) * [Defines](dtype#defines) * [C-type names](dtype#c-type-names) * [Printf formatting](dtype#printf-formatting) * [Array API](array) * [Array structure and data access](array#array-structure-and-data-access) * [Creating arrays](array#creating-arrays) * [Dealing with types](array#dealing-with-types) * [Array flags](array#array-flags) * [ArrayMethod API](array#arraymethod-api) * [API for calling array methods](array#api-for-calling-array-methods) * [Functions](array#functions) * [Auxiliary data with object semantics](array#auxiliary-data-with-object-semantics) * [Array iterators](array#array-iterators) * [Broadcasting (multi-iterators)](array#broadcasting-multi-iterators) * [Neighborhood iterator](array#neighborhood-iterator) * [Array scalars](array#array-scalars) * [Data-type descriptors](array#data-type-descriptors) * [Data Type Promotion and Inspection](array#data-type-promotion-and-inspection) * [Custom Data Types](array#custom-data-types) * [Conversion utilities](array#conversion-utilities) * [Including and importing the C API](array#including-and-importing-the-c-api) * [Array iterator API](iterator) * [Array iterator](iterator#array-iterator) * [Iteration example](iterator#iteration-example) * [Multi-iteration example](iterator#multi-iteration-example) * [Multi index tracking example](iterator#multi-index-tracking-example) * [Iterator data types](iterator#iterator-data-types) * [Construction and 
destruction](iterator#construction-and-destruction) * [Functions for iteration](iterator#functions-for-iteration) * [Converting from previous NumPy iterators](iterator#converting-from-previous-numpy-iterators) * [ufunc API](ufunc) * [Constants](ufunc#constants) * [Macros](ufunc#macros) * [Types](ufunc#types) * [Functions](ufunc#functions) * [Generic functions](ufunc#generic-functions) * [Importing the API](ufunc#importing-the-api) * [Generalized universal function API](generalized-ufuncs) * [Definitions](generalized-ufuncs#definitions) * [Details of signature](generalized-ufuncs#details-of-signature) * [C-API for implementing elementary functions](generalized-ufuncs#c-api-for-implementing-elementary-functions) * [Customizing core dimension size processing](generalized-ufuncs#customizing-core-dimension-size-processing) * [NpyString API](strings) * [Examples](strings#examples) * [Types](strings#types) * [Functions](strings#functions) * [NumPy core math library](coremath) * [Floating point classification](coremath#floating-point-classification) * [Useful math constants](coremath#useful-math-constants) * [Low-level floating point manipulation](coremath#low-level-floating-point-manipulation) * [Support for complex numbers](coremath#support-for-complex-numbers) * [Linking against the core math library in an extension](coremath#linking-against-the-core-math-library-in-an-extension) * [Half-precision functions](coremath#half-precision-functions) * [Datetime API](datetimes) * [Data types](datetimes#data-types) * [Conversion functions](datetimes#conversion-functions) * [C API deprecations](deprecations) * [Background](deprecations#background) * [Deprecation mechanism NPY_NO_DEPRECATED_API](deprecations#deprecation-mechanism-npy-no-deprecated-api) * [Memory management in NumPy](data_memory) * [Historical overview](data_memory#historical-overview) * [Configurable memory routines in NumPy (NEP 49)](data_memory#configurable-memory-routines-in-numpy-nep-49) * [What happens when 
deallocating if there is no policy set](data_memory#what-happens-when-deallocating-if-there-is-no-policy-set) * [Example of memory tracing with `np.lib.tracemalloc_domain`](data_memory#example-of-memory-tracing-with-np-lib-tracemalloc-domain)

# Array iterator API

## Array iterator

The array iterator encapsulates many of the key features in ufuncs, allowing user code to support features like output parameters, preservation of memory layouts, and buffering of data with the wrong alignment or type, without requiring difficult coding. This page documents the API for the iterator. The iterator is named `NpyIter` and functions are named `NpyIter_*`. There is an [introductory guide to array iteration](../arrays.nditer#arrays-nditer) which may be of interest for those using this C API. In many instances, testing out ideas by creating the iterator in Python is a good idea before writing the C iteration code.

## Iteration example

The best way to become familiar with the iterator is to look at its usage within the NumPy codebase itself. For example, here is a slightly tweaked version of the code for [`PyArray_CountNonzero`](array#c.PyArray_CountNonzero "PyArray_CountNonzero"), which counts the number of non-zero elements in an array. npy_intp PyArray_CountNonzero(PyArrayObject* self) { /* Nonzero boolean function */ PyArray_NonzeroFunc* nonzero = PyArray_DESCR(self)->f->nonzero; NpyIter* iter; NpyIter_IterNextFunc *iternext; char** dataptr; npy_intp nonzero_count; npy_intp* strideptr,* innersizeptr; /* Handle zero-sized arrays specially */ if (PyArray_SIZE(self) == 0) { return 0; } /* * Create and use an iterator to count the nonzeros. * flag NPY_ITER_READONLY * - The array is never written to. * flag NPY_ITER_EXTERNAL_LOOP * - Inner loop is done outside the iterator for efficiency. * flag NPY_ITER_REFS_OK * - Reference types are acceptable. * order NPY_KEEPORDER * - Visit elements in memory order, regardless of strides. 
* This is good for performance when the specific order * elements are visited is unimportant. * casting NPY_NO_CASTING * - No casting is required for this operation. */ iter = NpyIter_New(self, NPY_ITER_READONLY| NPY_ITER_EXTERNAL_LOOP| NPY_ITER_REFS_OK, NPY_KEEPORDER, NPY_NO_CASTING, NULL); if (iter == NULL) { return -1; } /* * The iternext function gets stored in a local variable * so it can be called repeatedly in an efficient manner. */ iternext = NpyIter_GetIterNext(iter, NULL); if (iternext == NULL) { NpyIter_Deallocate(iter); return -1; } /* The location of the data pointer which the iterator may update */ dataptr = NpyIter_GetDataPtrArray(iter); /* The location of the stride which the iterator may update */ strideptr = NpyIter_GetInnerStrideArray(iter); /* The location of the inner loop size which the iterator may update */ innersizeptr = NpyIter_GetInnerLoopSizePtr(iter); nonzero_count = 0; do { /* Get the inner loop data/stride/count values */ char* data = *dataptr; npy_intp stride = *strideptr; npy_intp count = *innersizeptr; /* This is a typical inner loop for NPY_ITER_EXTERNAL_LOOP */ while (count--) { if (nonzero(data, self)) { ++nonzero_count; } data += stride; } /* Increment the iterator to the next inner loop */ } while(iternext(iter)); NpyIter_Deallocate(iter); return nonzero_count; } ## Multi-iteration example Here is a copy function using the iterator. The `order` parameter is used to control the memory layout of the allocated result, typically [`NPY_KEEPORDER`](array#c.NPY_ORDER.NPY_KEEPORDER "NPY_KEEPORDER") is desired. PyObject *CopyArray(PyObject *arr, NPY_ORDER order) { NpyIter *iter; NpyIter_IterNextFunc *iternext; PyObject *op[2], *ret; npy_uint32 flags; npy_uint32 op_flags[2]; npy_intp itemsize, *innersizeptr, innerstride; char **dataptrarray; /* * No inner iteration - inner loop is handled by CopyArray code */ flags = NPY_ITER_EXTERNAL_LOOP; /* * Tell the constructor to automatically allocate the output. 
* The data type of the output will match that of the input. */ op[0] = arr; op[1] = NULL; op_flags[0] = NPY_ITER_READONLY; op_flags[1] = NPY_ITER_WRITEONLY | NPY_ITER_ALLOCATE; /* Construct the iterator */ iter = NpyIter_MultiNew(2, op, flags, order, NPY_NO_CASTING, op_flags, NULL); if (iter == NULL) { return NULL; } /* * Make a copy of the iternext function pointer and * a few other variables the inner loop needs. */ iternext = NpyIter_GetIterNext(iter, NULL); innerstride = NpyIter_GetInnerStrideArray(iter)[0]; itemsize = NpyIter_GetDescrArray(iter)[0]->elsize; /* * The inner loop size and data pointers may change during the * loop, so just cache the addresses. */ innersizeptr = NpyIter_GetInnerLoopSizePtr(iter); dataptrarray = NpyIter_GetDataPtrArray(iter); /* * Note that because the iterator allocated the output, * it matches the iteration order and is packed tightly, * so we don't need to check it like the input. */ if (innerstride == itemsize) { do { memcpy(dataptrarray[1], dataptrarray[0], itemsize * (*innersizeptr)); } while (iternext(iter)); } else { /* For efficiency, should specialize this based on item size... */ npy_intp i; do { npy_intp size = *innersizeptr; char *src = dataptrarray[0], *dst = dataptrarray[1]; for(i = 0; i < size; i++, src += innerstride, dst += itemsize) { memcpy(dst, src, itemsize); } } while (iternext(iter)); } /* Get the result from the iterator object array */ ret = NpyIter_GetOperandArray(iter)[1]; Py_INCREF(ret); if (NpyIter_Deallocate(iter) != NPY_SUCCEED) { Py_DECREF(ret); return NULL; } return ret; } ## Multi index tracking example This example shows you how to work with the `NPY_ITER_MULTI_INDEX` flag. For simplicity, we assume the argument is a two-dimensional array. 
int PrintMultiIndex(PyArrayObject *arr) { NpyIter *iter; NpyIter_IterNextFunc *iternext; npy_intp multi_index[2]; iter = NpyIter_New( arr, NPY_ITER_READONLY | NPY_ITER_MULTI_INDEX | NPY_ITER_REFS_OK, NPY_KEEPORDER, NPY_NO_CASTING, NULL); if (iter == NULL) { return -1; } if (NpyIter_GetNDim(iter) != 2) { NpyIter_Deallocate(iter); PyErr_SetString(PyExc_ValueError, "Array must be 2-D"); return -1; } if (NpyIter_GetIterSize(iter) != 0) { iternext = NpyIter_GetIterNext(iter, NULL); if (iternext == NULL) { NpyIter_Deallocate(iter); return -1; } NpyIter_GetMultiIndexFunc *get_multi_index = NpyIter_GetGetMultiIndex(iter, NULL); if (get_multi_index == NULL) { NpyIter_Deallocate(iter); return -1; } do { get_multi_index(iter, multi_index); printf("multi_index is [%" NPY_INTP_FMT ", %" NPY_INTP_FMT "]\n", multi_index[0], multi_index[1]); } while (iternext(iter)); } if (!NpyIter_Deallocate(iter)) { return -1; } return 0; } When called with a 2x3 array, the above example prints: multi_index is [0, 0] multi_index is [0, 1] multi_index is [0, 2] multi_index is [1, 0] multi_index is [1, 1] multi_index is [1, 2] ## Iterator data types The iterator layout is an internal detail, and user code only sees an incomplete struct. typeNpyIter This is an opaque pointer type for the iterator. Access to its contents can only be done through the iterator API. typeNpyIter_Type This is the type which exposes the iterator to Python. Currently, no API is exposed which provides access to the values of a Python-created iterator. If an iterator is created in Python, it must be used in Python and vice versa. Such an API will likely be created in a future version. typeNpyIter_IterNextFunc This is a function pointer for the iteration loop, returned by `NpyIter_GetIterNext`. typeNpyIter_GetMultiIndexFunc This is a function pointer for getting the current iterator multi-index, returned by `NpyIter_GetGetMultiIndex`. 
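As noted in the introduction, iterator behavior is often easiest to prototype from Python first: the Python-level `np.nditer` exposes the same machinery. A minimal sketch mirroring the examples above (counting nonzeros with an external loop, and multi-index tracking):

```python
import numpy as np

a = np.array([[0, 1, 2], [0, 3, 0]])

# External-loop iteration: the iterator hands back 1-D chunks,
# the Python analogue of NPY_ITER_EXTERNAL_LOOP with NPY_KEEPORDER.
nonzero_count = 0
for chunk in np.nditer(a, flags=['external_loop'], order='K'):
    nonzero_count += int(np.count_nonzero(chunk))

# Multi-index tracking, the analogue of NPY_ITER_MULTI_INDEX.
indices = []
it = np.nditer(a, flags=['multi_index'])
while not it.finished:
    indices.append(it.multi_index)
    it.iternext()
```

For this C-contiguous 2x3 input, `nonzero_count` is 3 and `indices` visits the same `[0, 0]` through `[1, 2]` sequence printed by the C example.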
## Construction and destruction NpyIter*NpyIter_New([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*op, [npy_uint32](dtype#c.npy_uint32 "npy_uint32")flags, [NPY_ORDER](array#c.NPY_ORDER "NPY_ORDER")order, [NPY_CASTING](array#c.NPY_CASTING "NPY_CASTING")casting, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype) Creates an iterator for the given numpy array object `op`. Flags that may be passed in `flags` are any combination of the global and per- operand flags documented in `NpyIter_MultiNew`, except for `NPY_ITER_ALLOCATE`. Any of the [`NPY_ORDER`](array#c.NPY_ORDER "NPY_ORDER") enum values may be passed to `order`. For efficient iteration, [`NPY_KEEPORDER`](array#c.NPY_ORDER.NPY_KEEPORDER "NPY_KEEPORDER") is the best option, and the other orders enforce the particular iteration pattern. Any of the [`NPY_CASTING`](array#c.NPY_CASTING "NPY_CASTING") enum values may be passed to `casting`. The values include [`NPY_NO_CASTING`](array#c.NPY_CASTING.NPY_NO_CASTING "NPY_NO_CASTING"), [`NPY_EQUIV_CASTING`](array#c.NPY_CASTING.NPY_EQUIV_CASTING "NPY_EQUIV_CASTING"), [`NPY_SAFE_CASTING`](array#c.NPY_CASTING.NPY_SAFE_CASTING "NPY_SAFE_CASTING"), [`NPY_SAME_KIND_CASTING`](array#c.NPY_CASTING.NPY_SAME_KIND_CASTING "NPY_SAME_KIND_CASTING"), and [`NPY_UNSAFE_CASTING`](array#c.NPY_CASTING.NPY_UNSAFE_CASTING "NPY_UNSAFE_CASTING"). To allow the casts to occur, copying or buffering must also be enabled. If `dtype` isn’t `NULL`, then it requires that data type. If copying is allowed, it will make a temporary copy if the data is castable. If `NPY_ITER_UPDATEIFCOPY` is enabled, it will also copy the data back with another cast upon iterator destruction. Returns NULL if there is an error, otherwise returns the allocated iterator. To make an iterator similar to the old iterator, this should work. 
    iter = NpyIter_New(op, NPY_ITER_READWRITE,
                       NPY_CORDER, NPY_NO_CASTING, NULL);

If you want to edit an array with aligned `double` code, but the order doesn’t matter, you would use this.

    dtype = PyArray_DescrFromType(NPY_DOUBLE);
    iter = NpyIter_New(op, NPY_ITER_READWRITE|
                           NPY_ITER_BUFFERED|
                           NPY_ITER_NBO|
                           NPY_ITER_ALIGNED,
                       NPY_KEEPORDER,
                       NPY_SAME_KIND_CASTING,
                       dtype);
    Py_DECREF(dtype);

NpyIter*NpyIter_MultiNew([npy_intp](dtype#c.npy_intp "npy_intp")nop, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")**op, [npy_uint32](dtype#c.npy_uint32 "npy_uint32")flags, [NPY_ORDER](array#c.NPY_ORDER "NPY_ORDER")order, [NPY_CASTING](array#c.NPY_CASTING "NPY_CASTING")casting, [npy_uint32](dtype#c.npy_uint32 "npy_uint32")*op_flags, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")**op_dtypes) Creates an iterator for broadcasting the `nop` array objects provided in `op`, using regular NumPy broadcasting rules. Any of the [`NPY_ORDER`](array#c.NPY_ORDER "NPY_ORDER") enum values may be passed to `order`. For efficient iteration, [`NPY_KEEPORDER`](array#c.NPY_ORDER.NPY_KEEPORDER "NPY_KEEPORDER") is the best option, and the other orders enforce the particular iteration pattern. When using [`NPY_KEEPORDER`](array#c.NPY_ORDER.NPY_KEEPORDER "NPY_KEEPORDER"), if you also want to ensure that the iteration is not reversed along an axis, you should pass the flag `NPY_ITER_DONT_NEGATE_STRIDES`. Any of the [`NPY_CASTING`](array#c.NPY_CASTING "NPY_CASTING") enum values may be passed to `casting`. The values include [`NPY_NO_CASTING`](array#c.NPY_CASTING.NPY_NO_CASTING "NPY_NO_CASTING"), [`NPY_EQUIV_CASTING`](array#c.NPY_CASTING.NPY_EQUIV_CASTING "NPY_EQUIV_CASTING"), [`NPY_SAFE_CASTING`](array#c.NPY_CASTING.NPY_SAFE_CASTING "NPY_SAFE_CASTING"), [`NPY_SAME_KIND_CASTING`](array#c.NPY_CASTING.NPY_SAME_KIND_CASTING "NPY_SAME_KIND_CASTING"), and [`NPY_UNSAFE_CASTING`](array#c.NPY_CASTING.NPY_UNSAFE_CASTING "NPY_UNSAFE_CASTING").
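The aligned-double construction above can be sketched at the Python level with `np.nditer`'s `buffered` flag, `op_dtypes`, and `casting` arguments; a prototype of the C call, not a substitute for it:

```python
import numpy as np

a = np.arange(6, dtype=np.float32).reshape(2, 3)

# Read-write, buffered iteration that presents the float32 data as
# native-byte-order, aligned float64 (same-kind casting).  The context
# manager ensures results are written back to `a` when iteration ends.
with np.nditer(a, flags=['buffered'],
               op_flags=[['readwrite']],
               op_dtypes=[np.float64],
               casting='same_kind') as it:
    for x in it:
        x[...] = 2 * x
```

After the loop, `a` holds the doubled values, still as float32.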
To allow the casts to occur, copying or buffering must also be enabled. If `op_dtypes` isn’t `NULL`, it specifies a data type or `NULL` for each `op[i]`. Returns NULL if there is an error, otherwise returns the allocated iterator. Flags that may be passed in `flags`, applying to the whole iterator, are: NPY_ITER_C_INDEX Causes the iterator to track a raveled flat index matching C order. This option cannot be used with `NPY_ITER_F_INDEX`. NPY_ITER_F_INDEX Causes the iterator to track a raveled flat index matching Fortran order. This option cannot be used with `NPY_ITER_C_INDEX`. NPY_ITER_MULTI_INDEX Causes the iterator to track a multi-index. This prevents the iterator from coalescing axes to produce bigger inner loops. If the loop is also not buffered and no index is being tracked (`NpyIter_RemoveAxis` can be called), then the iterator size can be `-1` to indicate that the iterator is too large. This can happen due to complex broadcasting and will result in errors being created when setting the iterator range, removing the multi index, or getting the next function. However, it is possible to remove axes again and use the iterator normally if the size is small enough after removal. NPY_ITER_EXTERNAL_LOOP Causes the iterator to skip iteration of the innermost loop, requiring the user of the iterator to handle it. This flag is incompatible with `NPY_ITER_C_INDEX`, `NPY_ITER_F_INDEX`, and `NPY_ITER_MULTI_INDEX`. NPY_ITER_DONT_NEGATE_STRIDES This only affects the iterator when [`NPY_KEEPORDER`](array#c.NPY_ORDER.NPY_KEEPORDER "NPY_KEEPORDER") is specified for the order parameter. By default with [`NPY_KEEPORDER`](array#c.NPY_ORDER.NPY_KEEPORDER "NPY_KEEPORDER"), the iterator reverses axes which have negative strides, so that memory is traversed in a forward direction. This flag disables that step. Use this flag if you want to use the underlying memory-ordering of the axes, but don’t want an axis reversed. This is the behavior of `numpy.ravel(a, order='K')`, for instance.
NPY_ITER_COMMON_DTYPE Causes the iterator to convert all the operands to a common data type, calculated based on the ufunc type promotion rules. Copying or buffering must be enabled. If the common data type is known ahead of time, don’t use this flag. Instead, set the requested dtype for all the operands. NPY_ITER_REFS_OK Indicates that arrays with reference types (object arrays or structured arrays containing an object type) may be accepted and used in the iterator. If this flag is enabled, the caller must be sure to check whether `NpyIter_IterationNeedsAPI(iter)` is true, in which case it may not release the GIL during iteration. NPY_ITER_ZEROSIZE_OK Indicates that arrays with a size of zero should be permitted. Since the typical iteration loop does not naturally work with zero-sized arrays, you must check that the IterSize is larger than zero before entering the iteration loop. Currently only the operands are checked, not a forced shape. NPY_ITER_REDUCE_OK Permits writeable operands with a dimension with zero stride and size greater than one. Note that such operands must be read/write. When buffering is enabled, this also switches to a special buffering mode which reduces the loop length as necessary to not trample on values being reduced. Note that if you want to do a reduction on an automatically allocated output, you must use `NpyIter_GetOperandArray` to get its reference, then set every value to the reduction unit before doing the iteration loop. In the case of a buffered reduction, this means you must also specify the flag `NPY_ITER_DELAY_BUFALLOC`, then reset the iterator after initializing the allocated operand to prepare the buffers. NPY_ITER_RANGED Enables support for iteration of sub-ranges of the full `iterindex` range `[0, NpyIter_IterSize(iter))`. Use the function `NpyIter_ResetToIterIndexRange` to specify a range for iteration. This flag can only be used with `NPY_ITER_EXTERNAL_LOOP` when `NPY_ITER_BUFFERED` is enabled. 
This is because without buffering, the inner loop is always the size of the innermost iteration dimension, and allowing it to get cut up would require special handling, effectively making it more like the buffered version. NPY_ITER_BUFFERED Causes the iterator to store buffering data, and use buffering to satisfy data type, alignment, and byte-order requirements. To buffer an operand, do not specify the `NPY_ITER_COPY` or `NPY_ITER_UPDATEIFCOPY` flags, because they will override buffering. Buffering is especially useful for Python code using the iterator, allowing for larger chunks of data at once to amortize the Python interpreter overhead. If used with `NPY_ITER_EXTERNAL_LOOP`, the inner loop for the caller may get larger chunks than would be possible without buffering, because of how the strides are laid out. Note that if an operand is given the flag `NPY_ITER_COPY` or `NPY_ITER_UPDATEIFCOPY`, a copy will be made in preference to buffering. Buffering will still occur when the array was broadcast so elements need to be duplicated to get a constant stride. In normal buffering, the size of each inner loop is equal to the buffer size, or possibly larger if `NPY_ITER_GROWINNER` is specified. If `NPY_ITER_REDUCE_OK` is enabled and a reduction occurs, the inner loops may become smaller depending on the structure of the reduction. NPY_ITER_GROWINNER When buffering is enabled, this allows the size of the inner loop to grow when buffering isn’t necessary. This option is best used if you’re doing a straight pass through all the data, rather than anything with small cache-friendly arrays of temporary values for each inner loop. NPY_ITER_DELAY_BUFALLOC When buffering is enabled, this delays allocation of the buffers until `NpyIter_Reset` or another reset function is called. This flag exists to avoid wasteful copying of buffer data when making multiple copies of a buffered iterator for multi-threaded iteration. Another use of this flag is for setting up reduction operations. 
After the iterator is created, and a reduction output is allocated automatically by the iterator (be sure to use READWRITE access), its value may be initialized to the reduction unit. Use `NpyIter_GetOperandArray` to get the object. Then, call `NpyIter_Reset` to allocate and fill the buffers with their initial values. NPY_ITER_COPY_IF_OVERLAP If any write operand has overlap with any read operand, eliminate all overlap by making temporary copies (enabling UPDATEIFCOPY for write operands, if necessary). A pair of operands has overlap if there is a memory address that contains data common to both arrays. Because exact overlap detection has exponential runtime in the number of dimensions, the decision is made based on heuristics, which has false positives (needless copies in unusual cases) but has no false negatives. If any read/write overlap exists, this flag ensures the result of the operation is the same as if all operands were copied. In cases where copies would need to be made, **the result of the computation may be undefined without this flag!** Flags that may be passed in `op_flags[i]`, where `0 <= i < nop`: NPY_ITER_READWRITE NPY_ITER_READONLY NPY_ITER_WRITEONLY Indicate how the user of the iterator will read or write to `op[i]`. Exactly one of these flags must be specified per operand. Using `NPY_ITER_READWRITE` or `NPY_ITER_WRITEONLY` for a user-provided operand may trigger `WRITEBACKIFCOPY` semantics. The data will be written back to the original array when `NpyIter_Deallocate` is called. NPY_ITER_COPY Allow a copy of `op[i]` to be made if it does not meet the data type or alignment requirements as specified by the constructor flags and parameters. NPY_ITER_UPDATEIFCOPY Triggers `NPY_ITER_COPY`, and when an array operand is flagged for writing and is copied, causes the data in a copy to be copied back to `op[i]` when `NpyIter_Deallocate` is called. 
If the operand is flagged as write-only and a copy is needed, an uninitialized temporary array will be created and then copied back to `op[i]` on calling `NpyIter_Deallocate`, instead of doing the unnecessary copy operation. NPY_ITER_NBO NPY_ITER_ALIGNED NPY_ITER_CONTIG Causes the iterator to provide data for `op[i]` that is in native byte order, aligned according to the dtype requirements, contiguous, or any combination. By default, the iterator produces pointers into the arrays provided, which may be aligned or unaligned, and with any byte order. If copying or buffering is not enabled and the operand data doesn’t satisfy the constraints, an error will be raised. The contiguous constraint applies only to the inner loop, successive inner loops may have arbitrary pointer changes. If the requested data type is in non-native byte order, the NBO flag overrides it and the requested data type is converted to be in native byte order. NPY_ITER_ALLOCATE This is for output arrays, and requires that the flag `NPY_ITER_WRITEONLY` or `NPY_ITER_READWRITE` be set. If `op[i]` is NULL, creates a new array with the final broadcast dimensions, and a layout matching the iteration order of the iterator. When `op[i]` is NULL, the requested data type `op_dtypes[i]` may be NULL as well, in which case it is automatically generated from the dtypes of the arrays which are flagged as readable. The rules for generating the dtype are the same as for UFuncs. Of special note is handling of byte order in the selected dtype. If there is exactly one input, the input’s dtype is used as is. Otherwise, if more than one input dtype is combined, the output will be in native byte order. After being allocated with this flag, the caller may retrieve the new array by calling `NpyIter_GetOperandArray` and getting the i-th object in the returned C array. The caller must call Py_INCREF on it to claim a reference to the array.
NPY_ITER_NO_SUBTYPE For use with `NPY_ITER_ALLOCATE`, this flag disables allocating an array subtype for the output, forcing it to be a straight ndarray. TODO: Maybe it would be better to introduce a function `NpyIter_GetWrappedOutput` and remove this flag? NPY_ITER_NO_BROADCAST Ensures that the input or output matches the iteration dimensions exactly. NPY_ITER_ARRAYMASK Indicates that this operand is the mask to use for selecting elements when writing to operands which have the `NPY_ITER_WRITEMASKED` flag applied to them. Only one operand may have `NPY_ITER_ARRAYMASK` flag applied to it. The data type of an operand with this flag should be either [`NPY_BOOL`](dtype#c.NPY_TYPES.NPY_BOOL "NPY_BOOL"), [`NPY_MASK`](dtype#c.NPY_TYPES.NPY_MASK "NPY_MASK"), or a struct dtype whose fields are all valid mask dtypes. In the latter case, it must match up with a struct operand being WRITEMASKED, as it is specifying a mask for each field of that array. This flag only affects writing from the buffer back to the array. This means that if the operand is also `NPY_ITER_READWRITE` or `NPY_ITER_WRITEONLY`, code doing iteration can write to this operand to control which elements will be untouched and which ones will be modified. This is useful when the mask should be a combination of input masks. NPY_ITER_WRITEMASKED This array is the mask for all [`writemasked`](../generated/numpy.nditer#numpy.nditer "numpy.nditer") operands. Code uses the `writemasked` flag which indicates that only elements where the chosen ARRAYMASK operand is True will be written to. In general, the iterator does not enforce this, it is up to the code doing the iteration to follow that promise. When `writemasked` flag is used, and this operand is buffered, this changes how data is copied from the buffer into the array. A masked copying routine is used, which only copies the elements in the buffer for which `writemasked` returns true from the corresponding element in the ARRAYMASK operand. 
NPY_ITER_OVERLAP_ASSUME_ELEMENTWISE In memory overlap checks, assume that operands with `NPY_ITER_OVERLAP_ASSUME_ELEMENTWISE` enabled are accessed only in the iterator order. This enables the iterator to reason about data dependency, possibly avoiding unnecessary copies. This flag has effect only if `NPY_ITER_COPY_IF_OVERLAP` is enabled on the iterator. NpyIter*NpyIter_AdvancedNew([npy_intp](dtype#c.npy_intp "npy_intp")nop, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")**op, [npy_uint32](dtype#c.npy_uint32 "npy_uint32")flags, [NPY_ORDER](array#c.NPY_ORDER "NPY_ORDER")order, [NPY_CASTING](array#c.NPY_CASTING "NPY_CASTING")casting, [npy_uint32](dtype#c.npy_uint32 "npy_uint32")*op_flags, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")**op_dtypes, int oa_ndim, int**op_axes, [npy_intp](dtype#c.npy_intp "npy_intp")const*itershape, [npy_intp](dtype#c.npy_intp "npy_intp")buffersize) Extends `NpyIter_MultiNew` with several advanced options providing more control over broadcasting and buffering. If -1/NULL values are passed to `oa_ndim`, `op_axes`, `itershape`, and `buffersize`, it is equivalent to `NpyIter_MultiNew`. The parameter `oa_ndim`, when not zero or -1, specifies the number of dimensions that will be iterated with customized broadcasting. If it is provided, `op_axes` must also be provided, and `itershape` may be. The `op_axes` parameter lets you control in detail how the axes of the operand arrays get matched together and iterated. In `op_axes`, you must provide an array of `nop` pointers to `oa_ndim`-sized arrays of type `npy_intp`. If an entry in `op_axes` is NULL, normal broadcasting rules will apply. `op_axes[j][i]` stores either a valid axis of `op[j]`, or -1, which means `newaxis`. Within each `op_axes[j]` array, axes may not be repeated. The following example is how normal broadcasting applies to a 3-D array, a 2-D array, a 1-D array and a scalar.

**Note** : Before NumPy 1.8 `oa_ndim == 0` was used for signalling that `op_axes` and `itershape` are unused. This is deprecated and should be replaced with -1. Better backward compatibility may be achieved by using `NpyIter_MultiNew` for this case.

    int oa_ndim = 3;               /* # iteration axes */
    int op0_axes[] = {0, 1, 2};    /* 3-D operand */
    int op1_axes[] = {-1, 0, 1};   /* 2-D operand */
    int op2_axes[] = {-1, -1, 0};  /* 1-D operand */
    int op3_axes[] = {-1, -1, -1}; /* 0-D (scalar) operand */
    int* op_axes[] = {op0_axes, op1_axes, op2_axes, op3_axes};

The `itershape` parameter allows you to force the iterator to have a specific iteration shape. It is an array of length `oa_ndim`. When an entry is negative, its value is determined from the operands. This parameter allows automatically allocated outputs to get additional dimensions which don’t match up with any dimension of an input. If `buffersize` is zero, a default buffer size is used, otherwise it specifies how big of a buffer to use. Buffer sizes which are powers of 2, such as 4096 or 8192, are recommended. Returns NULL if there is an error, otherwise returns the allocated iterator. NpyIter*NpyIter_Copy(NpyIter*iter) Makes a copy of the given iterator. This function is provided primarily to enable multi-threaded iteration of the data. _TODO_ : Move this to a section about multithreaded iteration. The recommended approach to multithreaded iteration is to first create an iterator with the flags `NPY_ITER_EXTERNAL_LOOP`, `NPY_ITER_RANGED`, `NPY_ITER_BUFFERED`, `NPY_ITER_DELAY_BUFALLOC`, and possibly `NPY_ITER_GROWINNER`. Create a copy of this iterator for each thread (minus one for the first iterator). Then, take the iteration index range `[0, NpyIter_GetIterSize(iter))` and split it up into tasks, for example using a TBB parallel_for loop. When a thread gets a task to execute, it then uses its copy of the iterator by calling `NpyIter_ResetToIterIndexRange` and iterating over the full range.
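The `op_axes` mechanism described above is also exposed through `np.nditer`'s `op_axes` (and `itershape`, `buffersize`) arguments, which makes it a convenient way to check customized broadcasting from Python before writing the C call. A sketch computing an outer product by mapping each input to a different iteration axis:

```python
import numpy as np

a = np.arange(3)
b = np.arange(4)

# Iterate over a 2-D (3, 4) space: a varies along axis 0 only,
# b along axis 1 only (-1 plays the role of newaxis), and the
# automatically allocated output sees both axes.
it = np.nditer([a, b, None],
               op_flags=[['readonly'], ['readonly'],
                         ['writeonly', 'allocate']],
               op_axes=[[0, -1], [-1, 0], None])
with it:
    for x, y, z in it:
        z[...] = x * y
    result = it.operands[2]
```

`result` matches `np.outer(a, b)`; this is the Python counterpart of combining `NPY_ITER_ALLOCATE` with per-operand `op_axes`.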
When using the iterator in multi-threaded code or in code not holding the Python GIL, care must be taken to only call functions which are safe in that context. `NpyIter_Copy` cannot be safely called without the Python GIL, because it increments Python references. The `Reset*` and some other functions may be safely called by passing in the `errmsg` parameter as non-NULL, so that the functions will pass back errors through it instead of setting a Python exception. `NpyIter_Deallocate` must be called for each copy. intNpyIter_RemoveAxis(NpyIter*iter, intaxis) Removes an axis from iteration. This requires that `NPY_ITER_MULTI_INDEX` was set for iterator creation, and does not work if buffering is enabled or an index is being tracked. This function also resets the iterator to its initial state. This is useful for setting up an accumulation loop, for example. The iterator can first be created with all the dimensions, including the accumulation axis, so that the output gets created correctly. Then, the accumulation axis can be removed, and the calculation done in a nested fashion. **WARNING** : This function may change the internal memory layout of the iterator. Any cached functions or pointers from the iterator must be retrieved again! The iterator range will be reset as well. Returns `NPY_SUCCEED` or `NPY_FAIL`. intNpyIter_RemoveMultiIndex(NpyIter*iter) If the iterator is tracking a multi-index, this strips support for them, and does further iterator optimizations that are possible if multi-indices are not needed. This function also resets the iterator to its initial state. **WARNING** : This function may change the internal memory layout of the iterator. Any cached functions or pointers from the iterator must be retrieved again! After calling this function, NpyIter_HasMultiIndex(iter) will return false. Returns `NPY_SUCCEED` or `NPY_FAIL`. 
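`np.nditer` exposes the same operations as `remove_multi_index()` and `enable_external_loop()` methods, which is a quick way to observe the optimizations they unlock from Python:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)

# Tracking a multi-index keeps both axes separate.
it = np.nditer(a, flags=['multi_index'])
assert it.ndim == 2

# Dropping the multi-index permits further optimization; enabling the
# external loop afterwards hands back 1-D chunks (typically one large
# coalesced chunk for contiguous data).
it.remove_multi_index()
it.enable_external_loop()
chunks = [chunk.copy() for chunk in it]
```

Concatenating the chunks recovers the full data in iteration order.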
intNpyIter_EnableExternalLoop(NpyIter*iter) If `NpyIter_RemoveMultiIndex` was called, you may want to enable the flag `NPY_ITER_EXTERNAL_LOOP`. This flag is not permitted together with `NPY_ITER_MULTI_INDEX`, so this function is provided to enable the feature after `NpyIter_RemoveMultiIndex` is called. This function also resets the iterator to its initial state. **WARNING** : This function changes the internal logic of the iterator. Any cached functions or pointers from the iterator must be retrieved again! Returns `NPY_SUCCEED` or `NPY_FAIL`. intNpyIter_Deallocate(NpyIter*iter) Deallocates the iterator object and resolves any needed writebacks. Returns `NPY_SUCCEED` or `NPY_FAIL`. intNpyIter_Reset(NpyIter*iter, char**errmsg) Resets the iterator back to its initial state, at the beginning of the iteration range. Returns `NPY_SUCCEED` or `NPY_FAIL`. If errmsg is non-NULL, no Python exception is set when `NPY_FAIL` is returned. Instead, *errmsg is set to an error message. When errmsg is non-NULL, the function may be safely called without holding the Python GIL. intNpyIter_ResetToIterIndexRange(NpyIter*iter, [npy_intp](dtype#c.npy_intp "npy_intp")istart, [npy_intp](dtype#c.npy_intp "npy_intp")iend, char**errmsg) Resets the iterator and restricts it to the `iterindex` range `[istart, iend)`. See `NpyIter_Copy` for an explanation of how to use this for multi-threaded iteration. This requires that the flag `NPY_ITER_RANGED` was passed to the iterator constructor. If you want to reset both the `iterindex` range and the base pointers at the same time, you can do the following to avoid extra buffer copying (be sure to add the return code error checks when you copy this code).

    /* Set to a trivial empty range */
    NpyIter_ResetToIterIndexRange(iter, 0, 0, NULL);
    /* Set the base pointers */
    NpyIter_ResetBasePointers(iter, baseptrs, NULL);
    /* Set to the desired range */
    NpyIter_ResetToIterIndexRange(iter, istart, iend, NULL);

Returns `NPY_SUCCEED` or `NPY_FAIL`.
If errmsg is non-NULL, no Python exception is set when `NPY_FAIL` is returned. Instead, *errmsg is set to an error message. When errmsg is non-NULL, the function may be safely called without holding the Python GIL. intNpyIter_ResetBasePointers(NpyIter*iter, char**baseptrs, char**errmsg) Resets the iterator back to its initial state, but using the values in `baseptrs` for the data instead of the pointers from the arrays being iterated. This function is intended to be used, together with the `op_axes` parameter, by nested iteration code with two or more iterators. Returns `NPY_SUCCEED` or `NPY_FAIL`. If errmsg is non-NULL, no Python exception is set when `NPY_FAIL` is returned. Instead, *errmsg is set to an error message. When errmsg is non-NULL, the function may be safely called without holding the Python GIL. _TODO_ : Move the following into a special section on nested iterators. Creating iterators for nested iteration requires some care. All the iterator operands must match exactly, or the calls to `NpyIter_ResetBasePointers` will be invalid. This means that automatic copies and output allocation should not be used haphazardly. It is possible to still use the automatic data conversion and casting features of the iterator by creating one of the iterators with all the conversion parameters enabled, then grabbing the allocated operands with the `NpyIter_GetOperandArray` function and passing them into the constructors for the rest of the iterators. **WARNING** : When creating iterators for nested iteration, the code must not use a dimension more than once in the different iterators. If this is done, nested iteration will produce out-of-bounds pointers during iteration. **WARNING** : When creating iterators for nested iteration, buffering can only be applied to the innermost iterator. If a buffered iterator is used as the source for `baseptrs`, it will point into a small buffer instead of the array and the inner iteration will be invalid.
The pattern for using nested iterators is as follows.

    NpyIter *iter1, *iter2;
    NpyIter_IterNextFunc *iternext1, *iternext2;
    char **dataptrs1;

    /*
     * With the exact same operands, no copies allowed, and
     * no axis in op_axes used both in iter1 and iter2.
     * Buffering may be enabled for iter2, but not for iter1.
     */
    iter1 = ...; iter2 = ...;

    iternext1 = NpyIter_GetIterNext(iter1, NULL);
    iternext2 = NpyIter_GetIterNext(iter2, NULL);
    dataptrs1 = NpyIter_GetDataPtrArray(iter1);

    do {
        NpyIter_ResetBasePointers(iter2, dataptrs1, NULL);
        do {
            /* Use the iter2 values */
        } while (iternext2(iter2));
    } while (iternext1(iter1));

intNpyIter_GotoMultiIndex(NpyIter*iter, [npy_intp](dtype#c.npy_intp "npy_intp")const*multi_index) Adjusts the iterator to point to the `ndim` indices pointed to by `multi_index`. Returns an error if a multi-index is not being tracked, the indices are out of bounds, or inner loop iteration is disabled. Returns `NPY_SUCCEED` or `NPY_FAIL`. intNpyIter_GotoIndex(NpyIter*iter, [npy_intp](dtype#c.npy_intp "npy_intp")index) Adjusts the iterator to point to the `index` specified. If the iterator was constructed with the flag `NPY_ITER_C_INDEX`, `index` is the C-order index, and if the iterator was constructed with the flag `NPY_ITER_F_INDEX`, `index` is the Fortran-order index. Returns an error if there is no index being tracked, the index is out of bounds, or inner loop iteration is disabled. Returns `NPY_SUCCEED` or `NPY_FAIL`. [npy_intp](dtype#c.npy_intp "npy_intp")NpyIter_GetIterSize(NpyIter*iter) Returns the number of elements being iterated. This is the product of all the dimensions in the shape. When a multi index is being tracked (and `NpyIter_RemoveAxis` may be called) the size may be `-1` to indicate an iterator is too large. Such an iterator is invalid, but may become valid after `NpyIter_RemoveAxis` is called. It is not necessary to check for this case.
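The index-tracking behavior described above can be checked from Python, where `np.nditer` exposes the tracked index as the `index` attribute. A sketch showing that a tracked C-order index is independent of the memory-order traversal:

```python
import numpy as np

# Fortran-ordered storage, so memory order differs from C order.
a = np.asfortranarray(np.arange(6).reshape(2, 3))

visited = []
it = np.nditer(a, flags=['c_index'], order='K')
while not it.finished:
    # it.index is the C-order flat index of the current element, even
    # though order='K' walks the array in memory (Fortran) order.
    visited.append((it.index, int(it[0])))
    it.iternext()
```

Here each element's value equals its C-order flat index, so the pairs confirm the index is tracked correctly while traversal follows memory order (0, 3, 1, 4, 2, 5).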
[npy_intp](dtype#c.npy_intp "npy_intp")NpyIter_GetIterIndex(NpyIter*iter) Gets the `iterindex` of the iterator, which is an index matching the iteration order of the iterator. voidNpyIter_GetIterIndexRange(NpyIter*iter, [npy_intp](dtype#c.npy_intp "npy_intp")*istart, [npy_intp](dtype#c.npy_intp "npy_intp")*iend) Gets the `iterindex` sub-range that is being iterated. If `NPY_ITER_RANGED` was not specified, this always returns the range `[0, NpyIter_IterSize(iter))`. intNpyIter_GotoIterIndex(NpyIter*iter, [npy_intp](dtype#c.npy_intp "npy_intp")iterindex) Adjusts the iterator to point to the `iterindex` specified. The IterIndex is an index matching the iteration order of the iterator. Returns an error if the `iterindex` is out of bounds, buffering is enabled, or inner loop iteration is disabled. Returns `NPY_SUCCEED` or `NPY_FAIL`. [npy_bool](dtype#c.npy_bool "npy_bool")NpyIter_HasDelayedBufAlloc(NpyIter*iter) Returns 1 if the flag `NPY_ITER_DELAY_BUFALLOC` was passed to the iterator constructor, and no call to one of the Reset functions has been done yet, 0 otherwise. [npy_bool](dtype#c.npy_bool "npy_bool")NpyIter_HasExternalLoop(NpyIter*iter) Returns 1 if the caller needs to handle the inner-most 1-dimensional loop, or 0 if the iterator handles all looping. This is controlled by the constructor flag `NPY_ITER_EXTERNAL_LOOP` or `NpyIter_EnableExternalLoop`. [npy_bool](dtype#c.npy_bool "npy_bool")NpyIter_HasMultiIndex(NpyIter*iter) Returns 1 if the iterator was created with the `NPY_ITER_MULTI_INDEX` flag, 0 otherwise. [npy_bool](dtype#c.npy_bool "npy_bool")NpyIter_HasIndex(NpyIter*iter) Returns 1 if the iterator was created with the `NPY_ITER_C_INDEX` or `NPY_ITER_F_INDEX` flag, 0 otherwise. [npy_bool](dtype#c.npy_bool "npy_bool")NpyIter_RequiresBuffering(NpyIter*iter) Returns 1 if the iterator requires buffering, which occurs when an operand needs conversion or alignment and so cannot be used directly. 
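Several of the functions in this group map directly onto attributes of the Python-level `np.nditer` object (`itersize`, `iterindex`), which makes their behavior easy to sketch:

```python
import numpy as np

it = np.nditer(np.arange(6).reshape(2, 3), flags=['multi_index'])

# itersize is the product of the shape, as with NpyIter_GetIterSize.
print(it.itersize)  # 6

# Assigning to iterindex jumps in iteration order, the Python
# counterpart of NpyIter_GotoIterIndex.
it.iterindex = 4
print(int(it[0]), it.multi_index)  # 4 (1, 1)
```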
[npy_bool](dtype#c.npy_bool "npy_bool")NpyIter_IsBuffered(NpyIter*iter) Returns 1 if the iterator was created with the `NPY_ITER_BUFFERED` flag, 0 otherwise. [npy_bool](dtype#c.npy_bool "npy_bool")NpyIter_IsGrowInner(NpyIter*iter) Returns 1 if the iterator was created with the `NPY_ITER_GROWINNER` flag, 0 otherwise. [npy_intp](dtype#c.npy_intp "npy_intp")NpyIter_GetBufferSize(NpyIter*iter) If the iterator is buffered, returns the size of the buffer being used, otherwise returns 0. intNpyIter_GetNDim(NpyIter*iter) Returns the number of dimensions being iterated. If a multi-index was not requested in the iterator constructor, this value may be smaller than the number of dimensions in the original objects. intNpyIter_GetNOp(NpyIter*iter) Returns the number of operands in the iterator. [npy_intp](dtype#c.npy_intp "npy_intp")*NpyIter_GetAxisStrideArray(NpyIter*iter, intaxis) Gets the array of strides for the specified axis. Requires that the iterator be tracking a multi-index, and that buffering not be enabled. This may be used when you want to match up operand axes in some fashion, then remove them with `NpyIter_RemoveAxis` to handle their processing manually. By calling this function before removing the axes, you can get the strides for the manual processing. Returns `NULL` on error. intNpyIter_GetShape(NpyIter*iter, [npy_intp](dtype#c.npy_intp "npy_intp")*outshape) Returns the broadcast shape of the iterator in `outshape`. This can only be called on an iterator which is tracking a multi-index. Returns `NPY_SUCCEED` or `NPY_FAIL`. [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")**NpyIter_GetDescrArray(NpyIter*iter) This gives back a pointer to the `nop` data type Descrs for the objects being iterated. The result points into `iter`, so the caller does not gain any references to the Descrs. This pointer may be cached before the iteration loop, calling `iternext` will not change it. 
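The query functions above are mirrored by attributes of `np.nditer`; a quick Python-level sketch of `GetNDim`, `GetNOp`, `GetShape`, the operand array, and iteration-order views:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(3)                 # broadcasts against a

it = np.nditer([a, b], flags=['multi_index'])
print(it.ndim)   # 2      -- NpyIter_GetNDim
print(it.nop)    # 2      -- NpyIter_GetNOp
print(it.shape)  # (2, 3) -- broadcast shape, as from NpyIter_GetShape

# operands holds the objects being iterated (NpyIter_GetOperandArray);
# itviews holds views matching the internal iteration order
# (NpyIter_GetIterView).
print((it.operands[0] == a).all())             # True
print(sorted(it.itviews[0].ravel().tolist()))  # [0, 1, 2, 3, 4, 5]
```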
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")**NpyIter_GetOperandArray(NpyIter*iter) This gives back a pointer to the `nop` operand PyObjects that are being iterated. The result points into `iter`, so the caller does not gain any references to the PyObjects. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*NpyIter_GetIterView(NpyIter*iter, [npy_intp](dtype#c.npy_intp "npy_intp")i) This gives back a reference to a new ndarray view, which is a view into the i-th object in the array `NpyIter_GetOperandArray`, whose dimensions and strides match the internal optimized iteration pattern. A C-order iteration of this view is equivalent to the iterator’s iteration order. For example, if an iterator was created with a single array as its input, and it was possible to rearrange all its axes and then collapse it into a single strided iteration, this would return a view that is a one-dimensional array. voidNpyIter_GetReadFlags(NpyIter*iter, char*outreadflags) Fills `nop` flags. Sets `outreadflags[i]` to 1 if `op[i]` can be read from, and to 0 if not. voidNpyIter_GetWriteFlags(NpyIter*iter, char*outwriteflags) Fills `nop` flags. Sets `outwriteflags[i]` to 1 if `op[i]` can be written to, and to 0 if not. intNpyIter_CreateCompatibleStrides(NpyIter*iter, [npy_intp](dtype#c.npy_intp "npy_intp")itemsize, [npy_intp](dtype#c.npy_intp "npy_intp")*outstrides) Builds a set of strides which are the same as the strides of an output array created using the `NPY_ITER_ALLOCATE` flag, where NULL was passed for op_axes. This is for data packed contiguously, but not necessarily in C or Fortran order. This should be used together with `NpyIter_GetShape` and `NpyIter_GetNDim` with the flag `NPY_ITER_MULTI_INDEX` passed into the constructor. A use case for this function is to match the shape and layout of the iterator and tack on one or more dimensions. 
For example, in order to generate a vector per input value for a numerical gradient, you pass in ndim*itemsize for itemsize, then add another dimension to the end with size ndim and stride itemsize. To do the Hessian matrix, you do the same thing but add two dimensions, or take advantage of the symmetry and pack it into 1 dimension with a particular encoding.

This function may only be called if the iterator is tracking a multi-index and if `NPY_ITER_DONT_NEGATE_STRIDES` was used to prevent an axis from being iterated in reverse order. If an array is created with this method, simply adding ‘itemsize’ for each iteration will traverse the new array matching the iterator. Returns `NPY_SUCCEED` or `NPY_FAIL`.

[npy_bool](dtype#c.npy_bool "npy_bool") NpyIter_IsFirstVisit(NpyIter *iter, int iop)

Checks whether the elements of the specified reduction operand which the iterator currently points at are being seen for the first time. The function returns a reasonable answer for reduction operands and when buffering is disabled. The answer may be incorrect for buffered non-reduction operands.

This function is intended to be used in EXTERNAL_LOOP mode only, and will produce some wrong answers when that mode is not enabled. If this function returns true, the caller should also check the inner loop stride of the operand, because if that stride is 0, then only the first element of the innermost external loop is being visited for the first time.

_WARNING_ : For performance reasons, ‘iop’ is not bounds-checked, it is not confirmed that ‘iop’ is actually a reduction operand, and it is not confirmed that EXTERNAL_LOOP mode is enabled. These checks are the responsibility of the caller, and should be done outside of any inner loops.

## Functions for iteration

NpyIter_IterNextFunc *NpyIter_GetIterNext(NpyIter *iter, char **errmsg)

Returns a function pointer for iteration.
A specialized version of the function pointer may be calculated by this function instead of being stored in the iterator structure. Thus, to get good performance, it is required that the function pointer be saved in a variable rather than retrieved for each loop iteration.

Returns NULL if there is an error. If errmsg is non-NULL, no Python exception is set when `NPY_FAIL` is returned. Instead, *errmsg is set to an error message. When errmsg is non-NULL, the function may be safely called without holding the Python GIL.

The typical looping construct is as follows.

    NpyIter_IterNextFunc *iternext = NpyIter_GetIterNext(iter, NULL);
    char **dataptr = NpyIter_GetDataPtrArray(iter);

    do {
        /* use the addresses dataptr[0], ... dataptr[nop-1] */
    } while (iternext(iter));

When `NPY_ITER_EXTERNAL_LOOP` is specified, the typical inner loop construct is as follows.

    NpyIter_IterNextFunc *iternext = NpyIter_GetIterNext(iter, NULL);
    char **dataptr = NpyIter_GetDataPtrArray(iter);
    npy_intp *stride = NpyIter_GetInnerStrideArray(iter);
    npy_intp *size_ptr = NpyIter_GetInnerLoopSizePtr(iter), size;
    npy_intp iop, nop = NpyIter_GetNOp(iter);

    do {
        size = *size_ptr;
        while (size--) {
            /* use the addresses dataptr[0], ... dataptr[nop-1] */
            for (iop = 0; iop < nop; ++iop) {
                dataptr[iop] += stride[iop];
            }
        }
    } while (iternext(iter));

Observe that we are using the dataptr array inside the iterator, not copying the values to a local temporary. This is possible because when `iternext()` is called, these pointers will be overwritten with fresh values, not incrementally updated.

If a compile-time fixed buffer is being used (both flags `NPY_ITER_BUFFERED` and `NPY_ITER_EXTERNAL_LOOP`), the inner size may be used as a signal as well. The size is guaranteed to become zero when `iternext()` returns false, enabling the following loop construct. Note that if you use this construct, you should not pass `NPY_ITER_GROWINNER` as a flag, because it will cause larger sizes under some circumstances.
    /* The constructor should have buffersize passed as this value */
    #define FIXED_BUFFER_SIZE 1024

    NpyIter_IterNextFunc *iternext = NpyIter_GetIterNext(iter, NULL);
    char **dataptr = NpyIter_GetDataPtrArray(iter);
    npy_intp *stride = NpyIter_GetInnerStrideArray(iter);
    npy_intp *size_ptr = NpyIter_GetInnerLoopSizePtr(iter), size;
    npy_intp i, iop, nop = NpyIter_GetNOp(iter);

    /* One loop with a fixed inner size */
    size = *size_ptr;
    while (size == FIXED_BUFFER_SIZE) {
        /*
         * This loop could be manually unrolled by a factor
         * which divides into FIXED_BUFFER_SIZE
         */
        for (i = 0; i < FIXED_BUFFER_SIZE; ++i) {
            /* use the addresses dataptr[0], ... dataptr[nop-1] */
            for (iop = 0; iop < nop; ++iop) {
                dataptr[iop] += stride[iop];
            }
        }
        iternext(iter);
        size = *size_ptr;
    }

    /* Finish-up loop with variable inner size */
    if (size > 0) do {
        size = *size_ptr;
        while (size--) {
            /* use the addresses dataptr[0], ... dataptr[nop-1] */
            for (iop = 0; iop < nop; ++iop) {
                dataptr[iop] += stride[iop];
            }
        }
    } while (iternext(iter));

NpyIter_GetMultiIndexFunc *NpyIter_GetGetMultiIndex(NpyIter *iter, char **errmsg)

Returns a function pointer for getting the current multi-index of the iterator. Returns NULL if the iterator is not tracking a multi-index. It is recommended that this function pointer be cached in a local variable before the iteration loop.

Returns NULL if there is an error. If errmsg is non-NULL, no Python exception is set when `NPY_FAIL` is returned. Instead, *errmsg is set to an error message. When errmsg is non-NULL, the function may be safely called without holding the Python GIL.

char **NpyIter_GetDataPtrArray(NpyIter *iter)

This gives back a pointer to the `nop` data pointers. If `NPY_ITER_EXTERNAL_LOOP` was not specified, each data pointer points to the current data item of the iterator. If no inner iteration was specified, it points to the first data item of the inner loop. This pointer may be cached before the iteration loop, calling `iternext` will not change it.
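Both inner-loop patterns above have visible Python-level counterparts in `np.nditer`: with the `external_loop` flag the iterator hands back whole 1-d chunks, and with `buffered` plus a fixed `buffersize` the inner size stays at the buffer size until a final partial chunk (a sketch; here buffering is forced by a dtype conversion):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# external_loop: one 1-d chunk per inner loop; this C-contiguous array
# coalesces into a single chunk of all 6 elements.
chunks = [c.copy() for c in np.nditer(a, flags=['external_loop'])]
print(len(chunks), chunks[0].tolist())  # 1 [0, 1, 2, 3, 4, 5]

# buffered with a fixed buffersize: inner sizes stay at the buffer size,
# then a final variable-sized chunk finishes up, as in the C pattern.
b = np.arange(10)  # int -> float conversion forces buffering
sizes = [len(c) for c in np.nditer(b, flags=['external_loop', 'buffered'],
                                   op_dtypes=['float64'], buffersize=4)]
print(sizes)  # [4, 4, 2]
```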
This function may be safely called without holding the Python GIL. char**NpyIter_GetInitialDataPtrArray(NpyIter*iter) Gets the array of data pointers directly into the arrays (never into the buffers), corresponding to iteration index 0. These pointers are different from the pointers accepted by `NpyIter_ResetBasePointers`, because the direction along some axes may have been reversed. This function may be safely called without holding the Python GIL. [npy_intp](dtype#c.npy_intp "npy_intp")*NpyIter_GetIndexPtr(NpyIter*iter) This gives back a pointer to the index being tracked, or NULL if no index is being tracked. It is only usable if one of the flags `NPY_ITER_C_INDEX` or `NPY_ITER_F_INDEX` were specified during construction. When the flag `NPY_ITER_EXTERNAL_LOOP` is used, the code needs to know the parameters for doing the inner loop. These functions provide that information. [npy_intp](dtype#c.npy_intp "npy_intp")*NpyIter_GetInnerStrideArray(NpyIter*iter) Returns a pointer to an array of the `nop` strides, one for each iterated object, to be used by the inner loop. This pointer may be cached before the iteration loop, calling `iternext` will not change it. This function may be safely called without holding the Python GIL. **WARNING** : While the pointer may be cached, its values may change if the iterator is buffered. [npy_intp](dtype#c.npy_intp "npy_intp")*NpyIter_GetInnerLoopSizePtr(NpyIter*iter) Returns a pointer to the number of iterations the inner loop should execute. This address may be cached before the iteration loop, calling `iternext` will not change it. The value itself may change during iteration, in particular if buffering is enabled. This function may be safely called without holding the Python GIL. voidNpyIter_GetInnerFixedStrideArray(NpyIter*iter, [npy_intp](dtype#c.npy_intp "npy_intp")*out_strides) Gets an array of strides which are fixed, or will not change during the entire iteration. 
For strides that may change, the value NPY_MAX_INTP is placed in the stride.

Once the iterator is prepared for iteration (after a reset if `NPY_ITER_DELAY_BUFALLOC` was used), call this to get the strides which may be used to select a fast inner loop function. For example, if the stride is 0, the inner loop can load its value into a variable once and use the variable throughout the loop; if the stride equals the itemsize, a contiguous version for that operand may be used.

This function may be safely called without holding the Python GIL.

## Converting from previous NumPy iterators

The old iterator API includes functions like PyArrayIter_Check, PyArray_Iter* and PyArray_ITER_*. The multi-iterator array includes PyArray_MultiIter*, PyArray_Broadcast, and PyArray_RemoveSmallest. The new iterator design replaces all of this functionality with a single object and associated API. One goal of the new API is that all uses of the existing iterator should be replaceable with the new iterator without significant effort. In 1.6, the major exception to this is the neighborhood iterator, which does not have corresponding features in this iterator.

Here is a conversion table for which functions to use with the new iterator:

| _Iterator Functions_ |  |
| --- | --- |
| [`PyArray_IterNew`](array#c.PyArray_IterNew "PyArray_IterNew") | `NpyIter_New` |
| [`PyArray_IterAllButAxis`](array#c.PyArray_IterAllButAxis "PyArray_IterAllButAxis") | `NpyIter_New` + `axes` parameter **or** iterator flag `NPY_ITER_EXTERNAL_LOOP` |
| [`PyArray_BroadcastToShape`](array#c.PyArray_BroadcastToShape "PyArray_BroadcastToShape") | **NOT SUPPORTED** (use the support for multiple operands instead) |
| [`PyArrayIter_Check`](array#c.PyArrayIter_Check "PyArrayIter_Check") | Will need to add this in Python exposure |
| [`PyArray_ITER_RESET`](array#c.PyArray_ITER_RESET "PyArray_ITER_RESET") | `NpyIter_Reset` |
| [`PyArray_ITER_NEXT`](array#c.PyArray_ITER_NEXT "PyArray_ITER_NEXT") | Function pointer from `NpyIter_GetIterNext` |
| [`PyArray_ITER_DATA`](array#c.PyArray_ITER_DATA "PyArray_ITER_DATA") | `NpyIter_GetDataPtrArray` |
| [`PyArray_ITER_GOTO`](array#c.PyArray_ITER_GOTO "PyArray_ITER_GOTO") | `NpyIter_GotoMultiIndex` |
| [`PyArray_ITER_GOTO1D`](array#c.PyArray_ITER_GOTO1D "PyArray_ITER_GOTO1D") | `NpyIter_GotoIndex` or `NpyIter_GotoIterIndex` |
| [`PyArray_ITER_NOTDONE`](array#c.PyArray_ITER_NOTDONE "PyArray_ITER_NOTDONE") | Return value of `iternext` function pointer |

| _Multi-iterator Functions_ |  |
| --- | --- |
| [`PyArray_MultiIterNew`](array#c.PyArray_MultiIterNew "PyArray_MultiIterNew") | `NpyIter_MultiNew` |
| [`PyArray_MultiIter_RESET`](array#c.PyArray_MultiIter_RESET "PyArray_MultiIter_RESET") | `NpyIter_Reset` |
| [`PyArray_MultiIter_NEXT`](array#c.PyArray_MultiIter_NEXT "PyArray_MultiIter_NEXT") | Function pointer from `NpyIter_GetIterNext` |
| [`PyArray_MultiIter_DATA`](array#c.PyArray_MultiIter_DATA "PyArray_MultiIter_DATA") | `NpyIter_GetDataPtrArray` |
| [`PyArray_MultiIter_NEXTi`](array#c.PyArray_MultiIter_NEXTi "PyArray_MultiIter_NEXTi") | **NOT SUPPORTED** (always lock-step iteration) |
| [`PyArray_MultiIter_GOTO`](array#c.PyArray_MultiIter_GOTO "PyArray_MultiIter_GOTO") | `NpyIter_GotoMultiIndex` |
| [`PyArray_MultiIter_GOTO1D`](array#c.PyArray_MultiIter_GOTO1D "PyArray_MultiIter_GOTO1D") | `NpyIter_GotoIndex` or `NpyIter_GotoIterIndex` |
| [`PyArray_MultiIter_NOTDONE`](array#c.PyArray_MultiIter_NOTDONE "PyArray_MultiIter_NOTDONE") | Return value of `iternext` function pointer |
| [`PyArray_Broadcast`](array#c.PyArray_Broadcast "PyArray_Broadcast") | Handled by `NpyIter_MultiNew` |
| [`PyArray_RemoveSmallest`](array#c.PyArray_RemoveSmallest "PyArray_RemoveSmallest") | Iterator flag `NPY_ITER_EXTERNAL_LOOP` |

| _Other Functions_ |  |
| --- | --- |
| [`PyArray_ConvertToCommonType`](array#c.PyArray_ConvertToCommonType "PyArray_ConvertToCommonType") | Iterator flag `NPY_ITER_COMMON_DTYPE` |

# NpyString API

New in version 2.0.

This API allows access to the UTF-8 string data stored in NumPy StringDType arrays. See [NEP-55](https://numpy.org/neps/nep-0055-string_dtype.html#nep55 "\(in NumPy Enhancement Proposals\)") for more in-depth details into the design of StringDType.

## Examples

### Loading a String

Say we are writing a ufunc implementation for `StringDType`. If we are given a `const char *buf` pointer to the beginning of a `StringDType` array entry, and a `PyArray_Descr *` pointer to the array descriptor, we can access the underlying string data like so:

    npy_string_allocator *allocator = NpyString_acquire_allocator(
            (PyArray_StringDTypeObject *)descr);

    npy_static_string sdata = {0, NULL};
    npy_packed_static_string *packed_string = (npy_packed_static_string *)buf;
    int is_null = 0;

    is_null = NpyString_load(allocator, packed_string, &sdata);

    if (is_null == -1) {
        // failed to load string, set error
        return -1;
    }
    else if (is_null) {
        // handle missing string
        // sdata.buf is NULL
        // sdata.size is 0
    }
    else {
        // sdata.buf is a pointer to the beginning of a string
        // sdata.size is the size of the string
    }

    NpyString_release_allocator(allocator);

### Packing a String

This example shows how to pack a new string entry into an array:

    const char *str = "Hello world";
    size_t size = 11;
    npy_packed_static_string *packed_string = (npy_packed_static_string *)buf;

    npy_string_allocator *allocator = NpyString_acquire_allocator(
            (PyArray_StringDTypeObject *)descr);

    // copy contents of str into packed_string
    if (NpyString_pack(allocator, packed_string, str, size) == -1) {
        // string packing failed, set error
        return -1;
    }

    // packed_string contains a copy of "Hello world"

    NpyString_release_allocator(allocator);

## Types

type npy_packed_static_string

An opaque struct that represents “packed” encoded strings.
Individual entries in array buffers are instances of this struct. Direct access to the data in the struct is undefined, and future versions of the library may change the packed representation of strings.

type npy_static_string

An unpacked string allowing access to the UTF-8 string data.

    typedef struct npy_unpacked_static_string {
        size_t size;
        const char *buf;
    } npy_static_string;

size_t size

The size of the string, in bytes.

const char *buf

The string buffer. Holds UTF-8-encoded bytes. The buffer is not currently null-terminated, but null termination may be added in the future, so do not rely on the presence or absence of a terminating null. Note that this is a `const` buffer. If you want to alter an entry in an array, you should create a new string and pack it into the array entry.

type npy_string_allocator

An opaque pointer to an object that handles string allocation. Before using the allocator, you must acquire the allocator lock, and release the lock after you are done interacting with strings managed by the allocator.

type PyArray_StringDTypeObject

The C struct backing instances of StringDType in Python. Attributes store the settings the object was created with, an instance of `npy_string_allocator` that manages string allocations for arrays associated with the DType instance, and several attributes caching information about the missing string object that is commonly needed in cast and ufunc loop implementations.

    typedef struct {
        PyArray_Descr base;
        PyObject *na_object;
        char coerce;
        char has_nan_na;
        char has_string_na;
        char array_owned;
        npy_static_string default_string;
        npy_static_string na_name;
        npy_string_allocator *allocator;
    } PyArray_StringDTypeObject;

[PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") base

The base object. Use this member to access fields common to all descriptor objects.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *na_object

A reference to the object representing the null value.
If there is no null value (the default) this will be NULL. charcoerce 1 if string coercion is enabled, 0 otherwise. charhas_nan_na 1 if the missing string object (if any) is NaN-like, 0 otherwise. charhas_string_na 1 if the missing string object (if any) is a string, 0 otherwise. chararray_owned 1 if an array owns the StringDType instance, 0 otherwise. npy_static_stringdefault_string The default string to use in operations. If the missing string object is a string, this will contain the string data for the missing string. npy_static_stringna_name The name of the missing string object, if any. An empty string otherwise. npy_string_allocatorallocator The allocator instance associated with the array that owns this descriptor instance. The allocator should only be directly accessed after acquiring the allocator_lock and the lock should be released immediately after the allocator is no longer needed ## Functions npy_string_allocator*NpyString_acquire_allocator(constPyArray_StringDTypeObject*descr) Acquire the mutex locking the allocator attached to `descr`. `NpyString_release_allocator` must be called on the allocator returned by this function exactly once. Note that functions requiring the GIL should not be called while the allocator mutex is held, as doing so may cause deadlocks. voidNpyString_acquire_allocators(size_tn_descriptors, [PyArray_Descr](types- and-structures#c.PyArray_Descr "PyArray_Descr")*constdescrs[], npy_string_allocator*allocators[]) Simultaneously acquire the mutexes locking the allocators attached to multiple descriptors. Writes a pointer to the associated allocator in the allocators array for each StringDType descriptor in the array. If any of the descriptors are not StringDType instances, write NULL to the allocators array for that entry. `n_descriptors` is the number of descriptors in the descrs array that should be examined. Any descriptor after `n_descriptors` elements is ignored. 
A buffer overflow will happen if the `descrs` array does not contain n_descriptors elements. If pointers to the same descriptor are passed multiple times, only acquires the allocator mutex once but sets identical allocator pointers appropriately. The allocator mutexes must be released after this function returns, see `NpyString_release_allocators`. Note that functions requiring the GIL should not be called while the allocator mutex is held, as doing so may cause deadlocks. voidNpyString_release_allocator(npy_string_allocator*allocator) Release the mutex locking an allocator. This must be called exactly once after acquiring the allocator mutex and all operations requiring the allocator are done. If you need to release multiple allocators, see NpyString_release_allocators, which can correctly handle releasing the allocator once when given several references to the same allocator. voidNpyString_release_allocators(size_tlength, npy_string_allocator*allocators[]) Release the mutexes locking N allocators. `length` is the length of the allocators array. NULL entries are ignored. If pointers to the same allocator are passed multiple times, only releases the allocator mutex once. intNpyString_load(npy_string_allocator*allocator, constnpy_packed_static_string*packed_string, npy_static_string*unpacked_string) Extract the packed contents of `packed_string` into `unpacked_string`. The `unpacked_string` is a read-only view onto the `packed_string` data and should not be used to modify the string data. If `packed_string` is the null string, sets `unpacked_string.buf` to the NULL pointer. Returns -1 if unpacking the string fails, returns 1 if `packed_string` is the null string, and returns 0 otherwise. A useful pattern is to define a stack-allocated npy_static_string instance initialized to `{0, NULL}` and pass a pointer to the stack-allocated unpacked string to this function. This function can be used to simultaneously unpack a string and determine if it is a null string. 
intNpyString_pack_null(npy_string_allocator*allocator, npy_packed_static_string*packed_string) Pack the null string into `packed_string`. Returns 0 on success and -1 on failure. intNpyString_pack(npy_string_allocator*allocator, npy_packed_static_string*packed_string, constchar*buf, size_tsize) Copy and pack the first `size` entries of the buffer pointed to by `buf` into the `packed_string`. Returns 0 on success and -1 on failure. # Python types and C-structures Several new types are defined in the C-code. Most of these are accessible from Python, but a few are not exposed due to their limited use. Every new Python type has an associated [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")* with an internal structure that includes a pointer to a “method table” that defines how the new object behaves in Python. When you receive a Python object into C code, you always get a pointer to a [`PyObject`](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") structure. Because a [`PyObject`](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") structure is very generic and defines only [`PyObject_HEAD`](https://docs.python.org/3/c-api/structures.html#c.PyObject_HEAD "\(in Python v3.13\)"), by itself it is not very interesting. However, different objects contain more details after the [`PyObject_HEAD`](https://docs.python.org/3/c-api/structures.html#c.PyObject_HEAD "\(in Python v3.13\)") (but you have to cast to the correct type to access them — or use accessor functions or macros). ## New Python types defined Python types are the functional equivalent in C of classes in Python. By constructing a new Python type you make available a new object for Python. The ndarray object is an example of a new type defined in C. New types are defined in C by two basic steps: 1. 
creating a C-structure (usually named `Py{Name}Object`) that is binary-compatible with the [`PyObject`](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") structure itself but holds the additional information needed for that particular object;

2. populating the [`PyTypeObject`](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "\(in Python v3.13\)") table (pointed to by the ob_type member of the [`PyObject`](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") structure) with pointers to functions that implement the desired behavior for the type.

Instead of special method names which define behavior for Python classes, there are “function tables” which point to functions that implement the desired results. Since Python 2.2, the PyTypeObject itself has become dynamic, which allows C types to be “sub-typed” from other C types in C, and subclassed in Python. The child types inherit the attributes and methods from their parent(s).

There are two major new types: the ndarray (`PyArray_Type`) and the ufunc (`PyUFunc_Type`). Additional types play a supportive role: the `PyArrayIter_Type`, the `PyArrayMultiIter_Type`, and the `PyArrayDescr_Type`. The `PyArrayIter_Type` is the type for a flat iterator for an ndarray (the object that is returned when getting the flat attribute). The `PyArrayMultiIter_Type` is the type of the object returned when calling `broadcast`. It handles iteration and broadcasting over a collection of nested sequences. Also, the `PyArrayDescr_Type` is the data-type-descriptor type whose instances describe the data, and `PyArray_DTypeMeta` is the metaclass for data-type descriptors. There are also new scalar-array types, which are new Python scalars corresponding to each of the fundamental data types available for arrays. Additional types are placeholders that allow the array scalars to fit into a hierarchy of actual Python types.
Finally, the `PyArray_DTypeMeta` instances corresponding to the NumPy built-in data types are also publicly visible.

### PyArray_Type and PyArrayObject

[PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "\(in Python v3.13\)") PyArray_Type

The Python type of the ndarray is `PyArray_Type`. In C, every ndarray is a pointer to a `PyArrayObject` structure. The ob_type member of this structure contains a pointer to the `PyArray_Type` typeobject.

type PyArrayObject

type NPY_AO

The `PyArrayObject` C-structure contains all of the required information for an array. All instances of an ndarray (and its subclasses) will have this structure. For future compatibility, these structure members should normally be accessed using the provided macros. If you need a shorter name, then you can make use of `NPY_AO` (deprecated), which is defined to be equivalent to `PyArrayObject`. Direct access to the struct fields is deprecated; use the `PyArray_*(arr)` form instead. As of NumPy 1.20, the size of this struct is not considered part of the NumPy ABI (see note at the end of the member list).

    typedef struct PyArrayObject {
        PyObject_HEAD
        char *data;
        int nd;
        npy_intp *dimensions;
        npy_intp *strides;
        PyObject *base;
        PyArray_Descr *descr;
        int flags;
        PyObject *weakreflist;
        /* version dependent private members */
    } PyArrayObject;

[`PyObject_HEAD`](https://docs.python.org/3/c-api/structures.html#c.PyObject_HEAD "\(in Python v3.13\)")

This is needed by all Python objects. It consists of (at least) a reference count member (`ob_refcnt`) and a pointer to the typeobject (`ob_type`). (Other elements may also be present if Python was compiled with special options; see Include/object.h in the Python source tree for more information). The ob_type member points to a Python type object.

char *data

Accessible via [`PyArray_DATA`](array#c.PyArray_DATA "PyArray_DATA"), this data member is a pointer to the first element of the array.
This pointer can (and normally should) be recast to the data type of the array.

int nd

An integer providing the number of dimensions for this array. When nd is 0, the array is sometimes called a rank-0 array. Such arrays have undefined dimensions and strides and cannot be accessed. The macro [`PyArray_NDIM`](array#c.PyArray_NDIM "PyArray_NDIM") defined in `ndarraytypes.h` points to this data member. `NPY_MAXDIMS` is a compile-time constant limiting the number of dimensions. This number is 64 since NumPy 2 and was 32 before. However, this limitation may be removed in the future, so it is best to explicitly check dimensionality in code that relies on such an upper bound.

[npy_intp](dtype#c.npy_intp "npy_intp") *dimensions

An array of integers providing the shape in each dimension as long as nd ≥ 1. The integer is always large enough to hold a pointer on the platform, so the dimension size is only limited by memory. [`PyArray_DIMS`](array#c.PyArray_DIMS "PyArray_DIMS") is the macro associated with this data member.

[npy_intp](dtype#c.npy_intp "npy_intp") *strides

An array of integers providing for each dimension the number of bytes that must be skipped to get to the next element in that dimension. Associated with macro [`PyArray_STRIDES`](array#c.PyArray_STRIDES "PyArray_STRIDES").

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *base

Pointed to by [`PyArray_BASE`](array#c.PyArray_BASE "PyArray_BASE"), this member is used to hold a pointer to another Python object that is related to this array. There are two use cases:

* If this array does not own its own memory, then base points to the Python object that owns it (perhaps another array object)
* If this array has the [`NPY_ARRAY_WRITEBACKIFCOPY`](array#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY") flag set, then this array is a working copy of a “misbehaved” array.
When `PyArray_ResolveWritebackIfCopy` is called, the array pointed to by base will be updated with the contents of this array.

PyArray_Descr *descr

A pointer to a data-type descriptor object (see below). The data-type descriptor object is an instance of a new built-in type which allows a generic description of memory. There is a descriptor structure for each data type supported. This descriptor structure contains useful information about the type as well as a pointer to a table of function pointers to implement specific functionality. As the name suggests, it is associated with the macro [`PyArray_DESCR`](array#c.PyArray_DESCR "PyArray_DESCR").

int flags

Pointed to by the macro [`PyArray_FLAGS`](array#c.PyArray_FLAGS "PyArray_FLAGS"), this data member represents the flags indicating how the memory pointed to by data is to be interpreted. Possible flags are [`NPY_ARRAY_C_CONTIGUOUS`](array#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS"), [`NPY_ARRAY_F_CONTIGUOUS`](array#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS"), [`NPY_ARRAY_OWNDATA`](array#c.NPY_ARRAY_OWNDATA "NPY_ARRAY_OWNDATA"), [`NPY_ARRAY_ALIGNED`](array#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED"), [`NPY_ARRAY_WRITEABLE`](array#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE"), and [`NPY_ARRAY_WRITEBACKIFCOPY`](array#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY").

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *weakreflist

This member allows array objects to have weak references (using the weakref module).

Note

Further members are considered private and version dependent. If the size of the struct is important for your code, special care must be taken. A possible use case where this is relevant is subclassing in C.
If your code relies on `sizeof(PyArrayObject)` to be constant, you must add the following check at import time:

```c
if (sizeof(PyArrayObject) < PyArray_Type.tp_basicsize) {
    PyErr_SetString(PyExc_ImportError,
                    "Binary incompatibility with NumPy, must recompile/update X.");
    return NULL;
}
```

To ensure that your code does not have to be compiled for a specific NumPy version, you may add a constant, leaving room for changes in NumPy. A solution guaranteed to be compatible with any future NumPy version requires the use of a runtime-calculated offset and allocation size.

The `PyArray_Type` typeobject implements many of the features of [`Python objects`](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "\(in Python v3.13\)") including the [`tp_as_number`](https://docs.python.org/3/c-api/typeobj.html#c.PyTypeObject.tp_as_number "\(in Python v3.13\)"), [`tp_as_sequence`](https://docs.python.org/3/c-api/typeobj.html#c.PyTypeObject.tp_as_sequence "\(in Python v3.13\)"), [`tp_as_mapping`](https://docs.python.org/3/c-api/typeobj.html#c.PyTypeObject.tp_as_mapping "\(in Python v3.13\)"), and [`tp_as_buffer`](https://docs.python.org/3/c-api/typeobj.html#c.PyTypeObject.tp_as_buffer "\(in Python v3.13\)") interfaces. [`Rich comparison`](https://docs.python.org/3/c-api/typeobj.html#c.richcmpfunc "\(in Python v3.13\)") is also supported, along with new-style attribute lookup for members ([`tp_members`](https://docs.python.org/3/c-api/typeobj.html#c.PyTypeObject.tp_members "\(in Python v3.13\)")) and properties ([`tp_getset`](https://docs.python.org/3/c-api/typeobj.html#c.PyTypeObject.tp_getset "\(in Python v3.13\)")). `PyArray_Type` can also be sub-typed.

Tip

The [`tp_as_number`](https://docs.python.org/3/c-api/typeobj.html#c.PyTypeObject.tp_as_number "\(in Python v3.13\)") methods use a generic approach to call whatever function has been registered for handling the operation.
When the `_multiarray_umath` module is imported, it sets the numeric operations for all arrays to the corresponding ufuncs. This choice can be changed with [`PyUFunc_ReplaceLoopBySignature`](ufunc#c.PyUFunc_ReplaceLoopBySignature "PyUFunc_ReplaceLoopBySignature").

### PyGenericArrType_Type

[PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "\(in Python v3.13\)") PyGenericArrType_Type

`PyGenericArrType_Type` is the PyTypeObject definition which creates the [`numpy.generic`](../arrays.scalars#numpy.generic "numpy.generic") Python type.

### PyArrayDescr_Type and PyArray_Descr

[PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "\(in Python v3.13\)") PyArrayDescr_Type

`PyArrayDescr_Type` is the built-in type of the data-type-descriptor objects used to describe how the bytes comprising the array are to be interpreted. There are 21 statically-defined `PyArray_Descr` objects for the built-in data-types. While these participate in reference counting, their reference count should never reach zero. There is also a dynamic table of user-defined `PyArray_Descr` objects that is maintained. Once a data-type-descriptor object is "registered" it should never be deallocated either. The function [`PyArray_DescrFromType`](array#c.PyArray_DescrFromType "PyArray_DescrFromType") (…) can be used to retrieve a `PyArray_Descr` object from an enumerated type-number (either built-in or user-defined).

type PyArray_DescrProto

Identical in structure to `PyArray_Descr`. This struct is used for static definition of a prototype for registering a new legacy DType with [`PyArray_RegisterDataType`](array#c.PyArray_RegisterDataType "PyArray_RegisterDataType"). See the note in [`PyArray_RegisterDataType`](array#c.PyArray_RegisterDataType "PyArray_RegisterDataType") for details.

type PyArray_Descr

The `PyArray_Descr` structure lies at the heart of the `PyArrayDescr_Type`.
While it is described here for completeness, it should be considered internal to NumPy and manipulated via `PyArrayDescr_*` or `PyDataType*` functions and macros. The size of this structure is subject to change across versions of NumPy. To ensure compatibility:

* Never declare a non-pointer instance of the struct
* Never perform pointer arithmetic
* Never use `sizeof(PyArray_Descr)`

It has the following structure:

```c
typedef struct {
    PyObject_HEAD
    PyTypeObject *typeobj;
    char kind;
    char type;
    char byteorder;
    char _former_flags;  // unused field
    int type_num;
    /*
     * Definitions after this one must be accessed through accessor
     * functions (see below) when compiling with NumPy 1.x support.
     */
    npy_uint64 flags;
    npy_intp elsize;
    npy_intp alignment;
    NpyAuxData *c_metadata;
    npy_hash_t hash;
    void *reserved_null[2];  // unused field, must be NULLed.
} PyArray_Descr;
```

Some dtypes have additional members which are accessible through `PyDataType_NAMES`, `PyDataType_FIELDS`, `PyDataType_SUBARRAY`, and in some cases (times) `PyDataType_C_METADATA`.

[PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "\(in Python v3.13\)") *typeobj

Pointer to a typeobject that is the corresponding Python type for the elements of this array. For the builtin types, this points to the corresponding array scalar. For user-defined types, this should point to a user-defined typeobject. This typeobject can either inherit from array scalars or not. If it does not inherit from array scalars, then the `NPY_USE_GETITEM` and `NPY_USE_SETITEM` flags should be set in the `flags` member.

char kind

A character code indicating the kind of array (using the array interface typestring notation). A 'b' represents Boolean, an 'i' represents signed integer, a 'u' represents unsigned integer, 'f' represents floating point, 'c' represents complex floating point, 'S' represents 8-bit zero-terminated bytes, 'U' represents 32-bit/character unicode string, and 'V' represents arbitrary.
char type

A traditional character code indicating the data type.

char byteorder

A character indicating the byte-order: '>' (big-endian), '<' (little-endian), '=' (native), '|' (irrelevant, ignore). All builtin data-types have byteorder '='.

[npy_uint64](dtype#c.npy_uint64 "npy_uint64") flags

A data-type bit-flag that determines if the data-type exhibits object-array-like behavior. Each bit in this member is a flag, named as follows:

* `NPY_ITEM_REFCOUNT`
* `NPY_ITEM_HASOBJECT`
* `NPY_LIST_PICKLE`
* `NPY_ITEM_IS_POINTER`
* `NPY_NEEDS_INIT`
* `NPY_NEEDS_PYAPI`
* `NPY_USE_GETITEM`
* `NPY_USE_SETITEM`
* `NPY_FROM_FIELDS`
* `NPY_OBJECT_DTYPE_FLAGS`

int type_num

A number that uniquely identifies the data type. For new data-types, this number is assigned when the data-type is registered.

[npy_intp](dtype#c.npy_intp "npy_intp") elsize

For data types that are always the same size (such as long), this holds the size of the data type. For flexible data types, where different arrays can have a different element size, this should be 0. See `PyDataType_ELSIZE` and `PyDataType_SET_ELSIZE` for a way to access this field in a NumPy 1.x compatible way.

[npy_intp](dtype#c.npy_intp "npy_intp") alignment

A number providing alignment information for this data type. Specifically, it shows how far from the start of a 2-element structure (whose first element is a `char`) the compiler places an item of this type: `offsetof(struct {char c; type v;}, v)`. See `PyDataType_ALIGNMENT` for a way to access this field in a NumPy 1.x compatible way.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *metadata

Metadata about this dtype.

[NpyAuxData](array#c.NpyAuxData "NpyAuxData") *c_metadata

Metadata specific to the C implementation of the particular dtype. Added in NumPy 1.7.0.

type npy_hash_t

npy_hash_t *hash

Used for caching hash values.
NPY_ITEM_REFCOUNT

Indicates that items of this data-type must be reference counted (using [`Py_INCREF`](https://docs.python.org/3/c-api/refcounting.html#c.Py_INCREF "\(in Python v3.13\)") and [`Py_DECREF`](https://docs.python.org/3/c-api/refcounting.html#c.Py_DECREF "\(in Python v3.13\)")).

NPY_ITEM_HASOBJECT

Same as `NPY_ITEM_REFCOUNT`.

NPY_LIST_PICKLE

Indicates arrays of this data-type must be converted to a list before pickling.

NPY_ITEM_IS_POINTER

Indicates the item is a pointer to some other data-type.

NPY_NEEDS_INIT

Indicates memory for this data-type must be initialized (set to 0) on creation.

NPY_NEEDS_PYAPI

Indicates this data-type requires the Python C-API during access (so don't give up the GIL if array access is going to be needed).

NPY_USE_GETITEM

On array access use the `f->getitem` function pointer instead of the standard conversion to an array scalar. Must be used if you don't define an array scalar to go along with the data-type.

NPY_USE_SETITEM

When creating a 0-d array from an array scalar use `f->setitem` instead of the standard copy from an array scalar. Must be used if you don't define an array scalar to go along with the data-type.

NPY_FROM_FIELDS

The bits that are inherited for the parent data-type if these bits are set in any field of the data-type. Currently (`NPY_NEEDS_INIT` | `NPY_LIST_PICKLE` | `NPY_ITEM_REFCOUNT` | `NPY_NEEDS_PYAPI`).

NPY_OBJECT_DTYPE_FLAGS

Bits set for the object data-type: (`NPY_LIST_PICKLE` | `NPY_USE_GETITEM` | `NPY_ITEM_IS_POINTER` | `NPY_ITEM_REFCOUNT` | `NPY_NEEDS_INIT` | `NPY_NEEDS_PYAPI`).

int PyDataType_FLAGCHK(PyArray_Descr *dtype, int flags)

Return true if all the given flags are set for the data-type object.

int PyDataType_REFCHK(PyArray_Descr *dtype)

Equivalent to `PyDataType_FLAGCHK` (_dtype_, `NPY_ITEM_REFCOUNT`).

### PyArray_ArrFuncs

PyArray_ArrFuncs *PyDataType_GetArrFuncs(PyArray_Descr *dtype)

Fetch the legacy `PyArray_ArrFuncs` of the datatype (cannot fail).
New in version 2.0.

This function was added in a backwards compatible and backportable way in NumPy 2.0 (see `npy_2_compat.h`). Any code that previously accessed the `->f` slot of the `PyArray_Descr` must now use this function and backport it to compile with 1.x. (The `npy_2_compat.h` header can be vendored for this purpose.)

type PyArray_ArrFuncs

Functions implementing internal features. Not all of these function pointers must be defined for a given type. The required members are `nonzero`, `copyswap`, `copyswapn`, `setitem`, `getitem`, and `cast`. These are assumed to be non-`NULL`, and `NULL` entries will cause a program crash. The other functions may be `NULL`, which will just mean reduced functionality for that data-type. (Also, the nonzero function will be filled in with a default function if it is `NULL` when you register a user-defined data-type.)

```c
typedef struct {
    PyArray_VectorUnaryFunc *cast[NPY_NTYPES_LEGACY];
    PyArray_GetItemFunc *getitem;
    PyArray_SetItemFunc *setitem;
    PyArray_CopySwapNFunc *copyswapn;
    PyArray_CopySwapFunc *copyswap;
    PyArray_CompareFunc *compare;
    PyArray_ArgFunc *argmax;
    PyArray_DotFunc *dotfunc;
    PyArray_ScanFunc *scanfunc;
    PyArray_FromStrFunc *fromstr;
    PyArray_NonzeroFunc *nonzero;
    PyArray_FillFunc *fill;
    PyArray_FillWithScalarFunc *fillwithscalar;
    PyArray_SortFunc *sort[NPY_NSORTS];
    PyArray_ArgSortFunc *argsort[NPY_NSORTS];
    PyObject *castdict;
    PyArray_ScalarKindFunc *scalarkind;
    int **cancastscalarkindto;
    int *cancastto;
    void *_unused1;
    void *_unused2;
    void *_unused3;
    PyArray_ArgFunc *argmin;
} PyArray_ArrFuncs;
```

The concept of a behaved segment is used in the description of the function pointers. A behaved segment is one that is aligned and in native machine byte-order for the data-type. The `nonzero`, `copyswap`, `copyswapn`, `getitem`, and `setitem` functions can (and must) deal with mis-behaved arrays. The other functions require behaved memory segments.

Note

The functions are largely legacy API; however, some are still used.
As of NumPy 2.x they are only available via `PyDataType_GetArrFuncs` (see the function for more details). Before using any function defined in the struct you should check whether it is `NULL`. In general, the functions `getitem`, `setitem`, `copyswap`, and `copyswapn` can be expected to be defined, but all functions are expected to be replaced with newer API. For example, `PyArray_Pack` is a more powerful version of `setitem` that, for example, correctly deals with casts.

void cast(void *from, void *to, [npy_intp](dtype#c.npy_intp "npy_intp") n, void *fromarr, void *toarr)

An array of function pointers to cast from the current type to all of the other builtin types. Each function casts a contiguous, aligned, and notswapped buffer pointed at by _from_ to a contiguous, aligned, and notswapped buffer pointed at by _to_. The number of items to cast is given by _n_, and the arguments _fromarr_ and _toarr_ are interpreted as PyArrayObjects for flexible arrays in order to get itemsize information.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *getitem(void *data, void *arr)

A pointer to a function that returns a standard Python object from a single element of the array object _arr_ pointed to by _data_. This function must be able to deal with "misbehaved" (misaligned and/or swapped) arrays correctly.

int setitem([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *item, void *data, void *arr)

A pointer to a function that sets the Python object _item_ into the array, _arr_, at the position pointed to by _data_. This function deals with "misbehaved" arrays. If successful, zero is returned; otherwise, negative one is returned (and a Python error set).
void copyswapn(void *dest, [npy_intp](dtype#c.npy_intp "npy_intp") dstride, void *src, [npy_intp](dtype#c.npy_intp "npy_intp") sstride, [npy_intp](dtype#c.npy_intp "npy_intp") n, int swap, void *arr)

void copyswap(void *dest, void *src, int swap, void *arr)

These members are both pointers to functions that copy data from _src_ to _dest_ and _swap_ if indicated. The value of arr is only used for flexible ([`NPY_STRING`](dtype#c.NPY_TYPES.NPY_STRING "NPY_TYPES.NPY_STRING"), [`NPY_UNICODE`](dtype#c.NPY_TYPES.NPY_UNICODE "NPY_TYPES.NPY_UNICODE"), and [`NPY_VOID`](dtype#c.NPY_TYPES.NPY_VOID "NPY_TYPES.NPY_VOID")) arrays (and is obtained from `arr->descr->elsize`). The second function copies a single value, while the first loops over n values with the provided strides. These functions can deal with misbehaved _src_ data. If _src_ is NULL then no copy is performed. If _swap_ is 0, then no byteswapping occurs. It is assumed that _dest_ and _src_ do not overlap. If they overlap, then use `memmove` (…) first, followed by `copyswap(n)` with a NULL-valued `src`.

int compare(const void *d1, const void *d2, void *arr)

A pointer to a function that compares two elements of the array, `arr`, pointed to by `d1` and `d2`. This function requires behaved (aligned and not swapped) arrays. The return value is 1 if `*d1` > `*d2`, 0 if `*d1` == `*d2`, and -1 if `*d1` < `*d2`. The array object `arr` is used to retrieve itemsize and field information for flexible arrays.

int argmax(void *data, [npy_intp](dtype#c.npy_intp "npy_intp") n, [npy_intp](dtype#c.npy_intp "npy_intp") *max_ind, void *arr)

A pointer to a function that retrieves the index of the largest of `n` elements in `arr` beginning at the element pointed to by `data`. This function requires that the memory segment be contiguous and behaved. The return value is always 0. The index of the largest element is returned in `max_ind`.
void dotfunc(void *ip1, [npy_intp](dtype#c.npy_intp "npy_intp") is1, void *ip2, [npy_intp](dtype#c.npy_intp "npy_intp") is2, void *op, [npy_intp](dtype#c.npy_intp "npy_intp") n, void *arr)

A pointer to a function that multiplies two `n`-length sequences together element-wise, adds the products, and places the result in the element pointed to by `op` of `arr`. The starts of the two sequences are pointed to by `ip1` and `ip2`. To get to the next element in each sequence requires a jump of `is1` and `is2` _bytes_, respectively. This function requires behaved (though not necessarily contiguous) memory.

int scanfunc(FILE *fd, void *ip, void *arr)

A pointer to a function that scans (scanf style) one element of the corresponding type from the file descriptor `fd` into the array memory pointed to by `ip`. The array is assumed to be behaved. The last argument `arr` is the array to be scanned into. Returns the number of receiving arguments successfully assigned (which may be zero in case a matching failure occurred before the first receiving argument was assigned), or EOF if input failure occurs before the first receiving argument was assigned. This function should be called without holding the Python GIL, and has to grab it for error reporting.

int fromstr(char *str, void *ip, char **endptr, void *arr)

A pointer to a function that converts the string pointed to by `str` to one element of the corresponding type and places it in the memory location pointed to by `ip`. After the conversion is completed, `*endptr` points to the rest of the string. The last argument `arr` is the array into which ip points (needed for variable-size data-types). Returns 0 on success or -1 on failure. Requires a behaved array. This function should be called without holding the Python GIL, and has to grab it for error reporting.

[npy_bool](dtype#c.npy_bool "npy_bool") nonzero(void *data, void *arr)

A pointer to a function that returns TRUE if the item of `arr` pointed to by `data` is nonzero. This function can deal with misbehaved arrays.
void fill(void *data, [npy_intp](dtype#c.npy_intp "npy_intp") length, void *arr)

A pointer to a function that fills a contiguous array of given length with data. The first two elements of the array must already be filled in. From these two values, a delta will be computed, and the values from item 3 to the end will be computed by repeatedly adding this computed delta. The data buffer must be well-behaved.

void fillwithscalar(void *buffer, [npy_intp](dtype#c.npy_intp "npy_intp") length, void *value, void *arr)

A pointer to a function that fills a contiguous `buffer` of the given `length` with a single scalar `value` whose address is given. The final argument is the array, which is needed to get the itemsize for variable-length arrays.

int sort(void *start, [npy_intp](dtype#c.npy_intp "npy_intp") length, void *arr)

An array of function pointers to particular sorting algorithms. A particular sorting algorithm is obtained using a key (so far [`NPY_QUICKSORT`](array#c.NPY_SORTKIND.NPY_QUICKSORT "NPY_SORTKIND.NPY_QUICKSORT"), [`NPY_HEAPSORT`](array#c.NPY_SORTKIND.NPY_HEAPSORT "NPY_SORTKIND.NPY_HEAPSORT"), and [`NPY_MERGESORT`](array#c.NPY_SORTKIND.NPY_MERGESORT "NPY_SORTKIND.NPY_MERGESORT") are defined). These sorts are done in-place assuming contiguous and aligned data.

int argsort(void *start, [npy_intp](dtype#c.npy_intp "npy_intp") *result, [npy_intp](dtype#c.npy_intp "npy_intp") length, void *arr)

An array of function pointers to sorting algorithms for this data type. The same sorting algorithms as for sort are available. The indices producing the sort are returned in `result` (which must be initialized with indices 0 to `length-1` inclusive).

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *castdict

Either `NULL` or a dictionary containing low-level casting functions for user-defined data-types.
Each function is wrapped in a [PyCapsule](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule "\(in Python v3.13\)")* and keyed by the data-type number.

[NPY_SCALARKIND](array#c.NPY_SCALARKIND "NPY_SCALARKIND") scalarkind(PyArrayObject *arr)

A function to determine how scalars of this type should be interpreted. The argument is `NULL` or a 0-dimensional array containing the data (if that is needed to determine the kind of scalar). The return value must be of type [`NPY_SCALARKIND`](array#c.NPY_SCALARKIND "NPY_SCALARKIND").

int **cancastscalarkindto

Either `NULL` or an array of [`NPY_NSCALARKINDS`](array#c.NPY_SCALARKIND.NPY_NSCALARKINDS "NPY_SCALARKIND.NPY_NSCALARKINDS") pointers. These pointers should each be either `NULL` or a pointer to an array of integers (terminated by [`NPY_NOTYPE`](dtype#c.NPY_NOTYPE "NPY_NOTYPE")) indicating data-types that a scalar of this data-type of the specified kind can be cast to safely (this usually means without losing precision).

int *cancastto

Either `NULL` or an array of integers (terminated by [`NPY_NOTYPE`](dtype#c.NPY_NOTYPE "NPY_NOTYPE")) indicating data-types that this data-type can be cast to safely (this usually means without losing precision).

int argmin(void *data, [npy_intp](dtype#c.npy_intp "npy_intp") n, [npy_intp](dtype#c.npy_intp "npy_intp") *min_ind, void *arr)

A pointer to a function that retrieves the index of the smallest of `n` elements in `arr` beginning at the element pointed to by `data`. This function requires that the memory segment be contiguous and behaved. The return value is always 0. The index of the smallest element is returned in `min_ind`.

### PyArrayMethod_Context and PyArrayMethod_Spec

type PyArrayMethodObject_tag

An opaque struct used to represent the method "self" in ArrayMethod loops.

type PyArrayMethod_Context

A struct that is passed in to ArrayMethod loops to provide context for the runtime usage of the loop.
```c
typedef struct {
    PyObject *caller;
    struct PyArrayMethodObject_tag *method;
    PyArray_Descr *const *descriptors;
} PyArrayMethod_Context;
```

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *caller

The caller, which is typically the ufunc that called the loop. May be `NULL` when a call is not from a ufunc (e.g. casts).

struct PyArrayMethodObject_tag *method

The method "self". Currently this object is an opaque pointer.

PyArray_Descr **descriptors

An array of descriptors for the ufunc loop, filled in by `resolve_descriptors`. The length of the array is `nin` + `nout`.

type PyArrayMethod_Spec

A struct used to register an ArrayMethod with NumPy. We use the slots mechanism used by the Python limited API. See below for the slot definitions.

```c
typedef struct {
    const char *name;
    int nin, nout;
    NPY_CASTING casting;
    NPY_ARRAYMETHOD_FLAGS flags;
    PyArray_DTypeMeta **dtypes;
    PyType_Slot *slots;
} PyArrayMethod_Spec;
```

const char *name

The name of the loop.

int nin

The number of input operands.

int nout

The number of output operands.

[NPY_CASTING](array#c.NPY_CASTING "NPY_CASTING") casting

Used to indicate how minimally permissive a casting operation should be. For example, if a cast operation might in some circumstances be safe, but in others unsafe, then `NPY_UNSAFE_CASTING` should be set. Not used for ufunc loops, but must still be set.

[NPY_ARRAYMETHOD_FLAGS](array#c.NPY_ARRAYMETHOD_FLAGS "NPY_ARRAYMETHOD_FLAGS") flags

The flags set for the method.

PyArray_DTypeMeta **dtypes

The DTypes for the loop. Must be `nin` + `nout` in length.

[PyType_Slot](https://docs.python.org/3/c-api/type.html#c.PyType_Slot "\(in Python v3.13\)") *slots

An array of slots for the method. Slot IDs must be one of the values below.

### PyArray_DTypeMeta and PyArrayDTypeMeta_Spec

[PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "\(in Python v3.13\)") PyArrayDTypeMeta_Type

The Python type object corresponding to `PyArray_DTypeMeta`.
type PyArray_DTypeMeta

A largely opaque struct representing DType classes. Each instance defines a metaclass for a single NumPy data type. Data types can either be non-parametric or parametric. For non-parametric types, the DType class has a one-to-one correspondence with the descriptor instance created from the DType class. Parametric types can correspond to many different dtype instances depending on the chosen parameters. This type is available in the public `numpy/dtype_api.h` header. Currently use of this struct is not supported in the limited CPython API, so if `Py_LIMITED_API` is set, this type is a typedef for `PyTypeObject`.

```c
typedef struct {
    PyHeapTypeObject super;
    PyArray_Descr *singleton;
    int type_num;
    PyTypeObject *scalar_type;
    npy_uint64 flags;
    void *dt_slots;
    void *reserved[3];
} PyArray_DTypeMeta;
```

PyHeapTypeObject super

The superclass, providing hooks into the Python object API. Set members of this struct to fill in the functions implementing the `PyTypeObject` API (e.g. `tp_new`).

PyArray_Descr *singleton

A descriptor instance suitable for use as a singleton descriptor for the data type. This is useful for non-parametric types representing simple plain-old-data, where there is only one logical descriptor instance for all data of the type. Can be NULL if a singleton instance is not appropriate.

int type_num

Corresponds to the type number for legacy data types. Data types defined outside of NumPy, and possibly future data types shipped with NumPy, will have `type_num` set to -1, so this should not be relied on to discriminate between data types.

[PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "\(in Python v3.13\)") *scalar_type

The type of scalar instances for this data type.

[npy_uint64](dtype#c.npy_uint64 "npy_uint64") flags

Flags can be set to indicate to NumPy that this data type has optional behavior. See [Flags](array#dtype-flags) for a listing of allowed flag values.
void *dt_slots

An opaque pointer to a private struct containing implementations of functions in the DType API. This is filled in from the `slots` member of the `PyArrayDTypeMeta_Spec` instance used to initialize the DType.

type PyArrayDTypeMeta_Spec

A struct used to initialize a new DType with the `PyArrayInitDTypeMeta_FromSpec` function.

```c
typedef struct {
    PyTypeObject *typeobj;
    int flags;
    PyArrayMethod_Spec **casts;
    PyType_Slot *slots;
    PyTypeObject *baseclass;
} PyArrayDTypeMeta_Spec;
```

[PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "\(in Python v3.13\)") *typeobj

Either `NULL` or the type of the Python scalar associated with the DType. Scalar indexing into an array returns an item with this type.

int flags

Static flags for the DType class, indicating whether the DType is parametric, abstract, or represents numeric data. The latter is optional, but it is useful to set it to indicate to downstream code whether the DType represents data that are numbers (ints, floats, or another numeric data type) or something else (e.g. a string, unit, or date).

PyArrayMethod_Spec **casts

A `NULL`-terminated array of ArrayMethod specifications for casts defined by the DType.

[PyType_Slot](https://docs.python.org/3/c-api/type.html#c.PyType_Slot "\(in Python v3.13\)") *slots

A `NULL`-terminated array of slot specifications for implementations of functions in the DType API. Slot IDs must be one of the DType slot IDs enumerated in [Slot IDs and API Function Typedefs](array#dtype-slots).

### Exposed DTypes classes (`PyArray_DTypeMeta` objects)

For use with promoters, NumPy exposes a number of DTypes following the pattern `PyArray_<Name>DType` corresponding to those found in `np.dtypes`. Additionally, the three DTypes `PyArray_PyLongDType`, `PyArray_PyFloatDType`, and `PyArray_PyComplexDType` correspond to the Python scalar values. These cannot be used in all places, but they do allow, for example, the common-dtype operation, and implementing promotion with them may be necessary.
Further, the following abstract DTypes are defined, which cover both the builtin NumPy ones and the Python ones, and users can in principle subclass from them (this does not inherit any DType-specific functionality):

* `PyArray_IntAbstractDType`
* `PyArray_FloatAbstractDType`
* `PyArray_ComplexAbstractDType`

Warning

As of NumPy 2.0, the _only_ valid use for these DTypes is conveniently registering a promoter to match e.g. "any integers" (and subclass checks). Because of this, they are not exposed to Python.

### PyUFunc_Type and PyUFuncObject

[PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "\(in Python v3.13\)") PyUFunc_Type

The ufunc object is implemented by creation of the `PyUFunc_Type`. It is a very simple type that implements only basic getattribute behavior and printing behavior, and has call behavior which allows these objects to act like functions. The basic idea behind the ufunc is to hold a reference to fast 1-dimensional (vector) loops for each data type that supports the operation. These one-dimensional loops all have the same signature and are the key to creating a new ufunc. They are called by the generic looping code as appropriate to implement the N-dimensional function. There are also some generic 1-d loops defined for floating and complexfloating arrays that allow you to define a ufunc using a single scalar function (_e.g._ atanh).

type PyUFuncObject

The core of the ufunc is the `PyUFuncObject`, which contains all the information needed to call the underlying C-code loops that perform the actual work. While it is described here for completeness, it should be considered internal to NumPy and manipulated via `PyUFunc_*` functions. The size of this structure is subject to change across versions of NumPy.
To ensure compatibility:

* Never declare a non-pointer instance of the struct
* Never perform pointer arithmetic
* Never use `sizeof(PyUFuncObject)`

It has the following structure:

```c
typedef struct {
    PyObject_HEAD
    int nin;
    int nout;
    int nargs;
    int identity;
    PyUFuncGenericFunction *functions;
    void **data;
    int ntypes;
    int reserved1;
    const char *name;
    char *types;
    const char *doc;
    void *ptr;
    PyObject *obj;
    PyObject *userloops;
    int core_enabled;
    int core_num_dim_ix;
    int *core_num_dims;
    int *core_dim_ixs;
    int *core_offsets;
    char *core_signature;
    PyUFunc_TypeResolutionFunc *type_resolver;
    void *reserved2;
    void *reserved3;
    npy_uint32 *op_flags;
    npy_uint32 *iter_flags;
    /* new in API version 0x0000000D */
    npy_intp *core_dim_sizes;
    npy_uint32 *core_dim_flags;
    PyObject *identity_value;
    /* Further private slots (size depends on the NumPy version) */
} PyUFuncObject;
```

int nin

The number of input arguments.

int nout

The number of output arguments.

int nargs

The total number of arguments (_nin_ + _nout_). This must be less than [`NPY_MAXARGS`](array#c.NPY_MAXARGS "NPY_MAXARGS").

int identity

Either [`PyUFunc_One`](ufunc#c.PyUFunc_One "PyUFunc_One"), [`PyUFunc_Zero`](ufunc#c.PyUFunc_Zero "PyUFunc_Zero"), [`PyUFunc_MinusOne`](ufunc#c.PyUFunc_MinusOne "PyUFunc_MinusOne"), [`PyUFunc_None`](ufunc#c.PyUFunc_None "PyUFunc_None"), [`PyUFunc_ReorderableNone`](ufunc#c.PyUFunc_ReorderableNone "PyUFunc_ReorderableNone"), or [`PyUFunc_IdentityValue`](ufunc#c.PyUFunc_IdentityValue "PyUFunc_IdentityValue") to indicate the identity for this operation. It is only used for a reduce-like call on an empty array.

void functions(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") *dims, [npy_intp](dtype#c.npy_intp "npy_intp") *steps, void *extradata)

An array of function pointers, one for each data type supported by the ufunc. This is the vector loop that is called to implement the underlying function _dims_[0] times. The first argument, _args_, is an array of _nargs_ pointers to behaved memory.
Pointers to the data for the input arguments are first, followed by the pointers to the data for the output arguments. How many bytes must be skipped to get to the next element in the sequence is specified by the corresponding entry in the _steps_ array. The last argument allows the loop to receive extra information. This is commonly used so that a single, generic vector loop can be used for multiple functions. In this case, the actual scalar function to call is passed in as _extradata_. The size of this function pointer array is ntypes.

void **data

Extra data to be passed to the 1-d vector loops, or `NULL` if no extra data is needed. This C-array must be the same size (_i.e._ ntypes) as the functions array. Several C-API calls for UFuncs are just 1-d vector loops that make use of this extra data to receive a pointer to the actual function to call.

int ntypes

The number of supported data types for the ufunc. This number specifies how many different 1-d loops (of the builtin data types) are available.

char *name

A string name for the ufunc. This is used dynamically to build the __doc__ attribute of ufuncs.

char *types

An array of \\(nargs \times ntypes\\) 8-bit type numbers which contains the type signature for the function for each of the supported (builtin) data types. For each of the _ntypes_ functions, the corresponding set of type numbers in this array shows how the _args_ argument should be interpreted in the 1-d vector loop. These type numbers do not have to be the same type and mixed-type ufuncs are supported.

char *doc

Documentation for the ufunc. Should not contain the function signature as this is generated dynamically when __doc__ is retrieved.

void *ptr

Any dynamically allocated memory. Currently, this is used for dynamic ufuncs created from a Python function to store room for the types, data, and name members.
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *obj

For ufuncs dynamically created from Python functions, this member holds a reference to the underlying Python function.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *userloops

A dictionary of user-defined 1-d vector loops (stored as CObject ptrs) for user-defined types. A loop may be registered by the user for any user-defined type. It is retrieved by type number. User-defined type numbers are always larger than [`NPY_USERDEF`](dtype#c.NPY_USERDEF "NPY_USERDEF").

int core_enabled

0 for scalar ufuncs; 1 for generalized ufuncs.

int core_num_dim_ix

Number of distinct core dimension names in the signature.

int *core_num_dims

Number of core dimensions of each argument.

int *core_dim_ixs

Dimension indices in a flattened form; indices of argument `k` are stored in `core_dim_ixs[core_offsets[k] : core_offsets[k] + core_num_dims[k]]`.

int *core_offsets

Position of the first core dimension of each argument in `core_dim_ixs`, equivalent to cumsum(`core_num_dims`).

char *core_signature

Core signature string.

PyUFunc_TypeResolutionFunc *type_resolver

A function which resolves the types and fills an array with the dtypes for the inputs and outputs.

type PyUFunc_TypeResolutionFunc

The function pointer type for `type_resolver`.

[npy_uint32](dtype#c.npy_uint32 "npy_uint32") op_flags

Override the default operand flags for each ufunc operand.

[npy_uint32](dtype#c.npy_uint32 "npy_uint32") iter_flags

Override the default nditer flags for the ufunc.
Added in API version 0x0000000D

[npy_intp](dtype#c.npy_intp "npy_intp") *core_dim_sizes

For each distinct core dimension, the possible [frozen](generalized-ufuncs#frozen) size if `UFUNC_CORE_DIM_SIZE_INFERRED` is `0`.

[npy_uint32](dtype#c.npy_uint32 "npy_uint32") *core_dim_flags

For each distinct core dimension, a set of flags (`UFUNC_CORE_DIM_CAN_IGNORE` and `UFUNC_CORE_DIM_SIZE_INFERRED`).

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *identity_value

Identity for reduction, when `PyUFuncObject.identity` is equal to [`PyUFunc_IdentityValue`](ufunc#c.PyUFunc_IdentityValue "PyUFunc_IdentityValue").

UFUNC_CORE_DIM_CAN_IGNORE

Set if the dimension name ends in `?`.

UFUNC_CORE_DIM_SIZE_INFERRED

Set if the dimension size will be determined from the operands and not from a [frozen](generalized-ufuncs#frozen) signature.

### PyArrayIter_Type and PyArrayIterObject

[PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "\(in Python v3.13\)") PyArrayIter_Type

This is an iterator object that makes it easy to loop over an N-dimensional array. It is the object returned from the flat attribute of an ndarray. It is also used extensively throughout the implementation internals to loop over an N-dimensional array. The tp_as_mapping interface is implemented so that the iterator object can be indexed (using 1-d indexing), and a few methods are implemented through the tp_methods table. This object implements the next method and can be used anywhere an iterator can be used in Python.

type PyArrayIterObject

The C-structure corresponding to an object of `PyArrayIter_Type` is the `PyArrayIterObject`. The `PyArrayIterObject` is used to keep track of a pointer into an N-dimensional array. It contains associated information used to quickly march through the array.
The pointer can be adjusted in three basic ways: 1) advance to the “next” position in the array in a C-style contiguous fashion, 2) advance to an arbitrary N-dimensional coordinate in the array, and 3) advance to an arbitrary one-dimensional index into the array. The members of the `PyArrayIterObject` structure are used in these calculations. Iterator objects keep their own dimension and strides information about an array. This can be adjusted as needed for “broadcasting,” or to loop over only specific dimensions.

    typedef struct {
        PyObject_HEAD
        int nd_m1;
        npy_intp index;
        npy_intp size;
        npy_intp coordinates[NPY_MAXDIMS_LEGACY_ITERS];
        npy_intp dims_m1[NPY_MAXDIMS_LEGACY_ITERS];
        npy_intp strides[NPY_MAXDIMS_LEGACY_ITERS];
        npy_intp backstrides[NPY_MAXDIMS_LEGACY_ITERS];
        npy_intp factors[NPY_MAXDIMS_LEGACY_ITERS];
        PyArrayObject *ao;
        char *dataptr;
        npy_bool contiguous;
    } PyArrayIterObject;

int nd_m1

\\(N-1\\) where \\(N\\) is the number of dimensions in the underlying array.

[npy_intp](dtype#c.npy_intp "npy_intp") index

The current 1-d index into the array.

[npy_intp](dtype#c.npy_intp "npy_intp") size

The total size of the underlying array.

[npy_intp](dtype#c.npy_intp "npy_intp") *coordinates

An \\(N\\)-dimensional index into the array.

[npy_intp](dtype#c.npy_intp "npy_intp") *dims_m1

The size of the array minus 1 in each dimension.

[npy_intp](dtype#c.npy_intp "npy_intp") *strides

The strides of the array: how many bytes are needed to jump to the next element in each dimension.

[npy_intp](dtype#c.npy_intp "npy_intp") *backstrides

How many bytes are needed to jump from the end of a dimension back to its beginning. Note that `backstrides[k] == strides[k] * dims_m1[k]`, but it is stored here as an optimization.

[npy_intp](dtype#c.npy_intp "npy_intp") *factors

This array is used in computing an N-d index from a 1-d index. It contains the needed products of the dimensions.

PyArrayObject *ao

A pointer to the underlying ndarray this iterator was created to represent.
char *dataptr

This member points to an element in the ndarray indicated by the index.

[npy_bool](dtype#c.npy_bool "npy_bool") contiguous

This flag is true if the underlying array is [`NPY_ARRAY_C_CONTIGUOUS`](array#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS"). It is used to simplify calculations when possible.

How to use an array iterator at the C level is explained more fully in later sections. Typically, you do not need to concern yourself with the internal structure of the iterator object, and merely interact with it through the use of the macros [`PyArray_ITER_NEXT`](array#c.PyArray_ITER_NEXT "PyArray_ITER_NEXT") (it), [`PyArray_ITER_GOTO`](array#c.PyArray_ITER_GOTO "PyArray_ITER_GOTO") (it, dest), or [`PyArray_ITER_GOTO1D`](array#c.PyArray_ITER_GOTO1D "PyArray_ITER_GOTO1D") (it, index). All of these macros require the argument _it_ to be a PyArrayIterObject *.

### PyArrayMultiIter_Type and PyArrayMultiIterObject

[PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "\(in Python v3.13\)") PyArrayMultiIter_Type

This type provides an iterator that encapsulates the concept of broadcasting. It allows \\(N\\) arrays to be broadcast together so that the loop progresses in C-style contiguous fashion over the broadcasted array. The corresponding C-structure is the `PyArrayMultiIterObject` whose memory layout must begin any object, _obj_, passed in to the [`PyArray_Broadcast`](array#c.PyArray_Broadcast "PyArray_Broadcast") (obj) function. Broadcasting is performed by adjusting array iterators so that each iterator represents the broadcasted shape and size, but has its strides adjusted so that the correct element from the array is used at each iteration.

type PyArrayMultiIterObject

    typedef struct {
        PyObject_HEAD
        int numiter;
        npy_intp size;
        npy_intp index;
        int nd;
        npy_intp dimensions[NPY_MAXDIMS_LEGACY_ITERS];
        PyArrayIterObject *iters[];
    } PyArrayMultiIterObject;

int numiter

The number of arrays that need to be broadcast to the same shape.
[npy_intp](dtype#c.npy_intp "npy_intp") size

The total broadcasted size.

[npy_intp](dtype#c.npy_intp "npy_intp") index

The current (1-d) index into the broadcasted result.

int nd

The number of dimensions in the broadcasted result.

[npy_intp](dtype#c.npy_intp "npy_intp") *dimensions

The shape of the broadcasted result (only `nd` slots are used).

PyArrayIterObject **iters

An array of iterator objects that holds the iterators for the arrays to be broadcast together. On return, the iterators are adjusted for broadcasting.

### PyArrayNeighborhoodIter_Type and PyArrayNeighborhoodIterObject

[PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "\(in Python v3.13\)") PyArrayNeighborhoodIter_Type

This is an iterator object that makes it easy to loop over an N-dimensional neighborhood.

type PyArrayNeighborhoodIterObject

The C-structure corresponding to an object of `PyArrayNeighborhoodIter_Type` is the `PyArrayNeighborhoodIterObject`.

    typedef struct {
        PyObject_HEAD
        int nd_m1;
        npy_intp index, size;
        npy_intp coordinates[NPY_MAXDIMS_LEGACY_ITERS];
        npy_intp dims_m1[NPY_MAXDIMS_LEGACY_ITERS];
        npy_intp strides[NPY_MAXDIMS_LEGACY_ITERS];
        npy_intp backstrides[NPY_MAXDIMS_LEGACY_ITERS];
        npy_intp factors[NPY_MAXDIMS_LEGACY_ITERS];
        PyArrayObject *ao;
        char *dataptr;
        npy_bool contiguous;
        npy_intp bounds[NPY_MAXDIMS_LEGACY_ITERS][2];
        npy_intp limits[NPY_MAXDIMS_LEGACY_ITERS][2];
        npy_intp limits_sizes[NPY_MAXDIMS_LEGACY_ITERS];
        npy_iter_get_dataptr_t translate;
        npy_intp nd;
        npy_intp dimensions[NPY_MAXDIMS_LEGACY_ITERS];
        PyArrayIterObject *_internal_iter;
        char *constant;
        int mode;
    } PyArrayNeighborhoodIterObject;

### ScalarArrayTypes

There is a Python type for each of the different built-in data types that can be present in the array. Most of these are simple wrappers around the corresponding data type in C.
The C-names for these types are `Py{TYPE}ArrType_Type` where `{TYPE}` can be **Bool**, **Byte**, **Short**, **Int**, **Long**, **LongLong**, **UByte**, **UShort**, **UInt**, **ULong**, **ULongLong**, **Half**, **Float**, **Double**, **LongDouble**, **CFloat**, **CDouble**, **CLongDouble**, **String**, **Unicode**, **Void**, **Datetime**, **Timedelta**, and **Object**. These type names are part of the C-API and can therefore be created in extension C-code. There is also a `PyIntpArrType_Type` and a `PyUIntpArrType_Type` that are simple substitutes for one of the integer types that can hold a pointer on the platform. The structure of these scalar objects is not exposed to C-code. The function [`PyArray_ScalarAsCtype`](array#c.PyArray_ScalarAsCtype "PyArray_ScalarAsCtype") (…) can be used to extract the C-type value from the array scalar and the function [`PyArray_Scalar`](array#c.PyArray_Scalar "PyArray_Scalar") (…) can be used to construct an array scalar from a C-value.

## Other C-structures

A few new C-structures were found to be useful in the development of NumPy. These C-structures are used in at least one C-API call and are therefore documented here. The main reason these structures were defined is to make it easy to use the Python ParseTuple C-API to convert from Python objects to a useful C-object.

### PyArray_Dims

type PyArray_Dims

This structure is very useful when shape and/or strides information is supposed to be interpreted. The structure is:

    typedef struct {
        npy_intp *ptr;
        int len;
    } PyArray_Dims;

The members of this structure are

[npy_intp](dtype#c.npy_intp "npy_intp") *ptr

A pointer to a list of ([`npy_intp`](dtype#c.npy_intp "npy_intp")) integers which usually represent array shape or array strides.

int len

The length of the list of integers. It is assumed safe to access _ptr_[0] to _ptr_[len-1].

### PyArray_Chunk

type PyArray_Chunk

This is equivalent to the buffer object structure in Python up to the ptr member.
On 32-bit platforms (_i.e._ if [`NPY_SIZEOF_INT`](config#c.NPY_SIZEOF_INT "NPY_SIZEOF_INT") == [`NPY_SIZEOF_INTP`](config#c.NPY_SIZEOF_INTP "NPY_SIZEOF_INTP")), the len member also matches an equivalent member of the buffer object. It is useful to represent a generic single-segment chunk of memory.

    typedef struct {
        PyObject_HEAD
        PyObject *base;
        void *ptr;
        npy_intp len;
        int flags;
    } PyArray_Chunk;

The members are

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *base

The Python object this chunk of memory comes from. Needed so that memory can be accounted for properly.

void *ptr

A pointer to the start of the single-segment chunk of memory.

[npy_intp](dtype#c.npy_intp "npy_intp") len

The length of the segment in bytes.

int flags

Any data flags (_e.g._ [`NPY_ARRAY_WRITEABLE`](array#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE")) that should be used to interpret the memory.

### PyArrayInterface

See also [The array interface protocol](../arrays.interface#arrays-interface)

type PyArrayInterface

The `PyArrayInterface` structure is defined so that NumPy and other extension modules can use the rapid array interface protocol. The [`__array_struct__`](../arrays.interface#object.__array_struct__ "object.__array_struct__") method of an object that supports the rapid array interface protocol should return a [`PyCapsule`](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule "\(in Python v3.13\)") that contains a pointer to a `PyArrayInterface` structure with the relevant details of the array. After the new array is created, the attribute should be `DECREF`’d, which will free the `PyArrayInterface` structure. Remember to `INCREF` the object (whose [`__array_struct__`](../arrays.interface#object.__array_struct__ "object.__array_struct__") attribute was retrieved) and point the base member of the new `PyArrayObject` to this same object. In this way the memory for the array will be managed correctly.
    typedef struct {
        int two;
        int nd;
        char typekind;
        int itemsize;
        int flags;
        npy_intp *shape;
        npy_intp *strides;
        void *data;
        PyObject *descr;
    } PyArrayInterface;

int two

The integer 2 as a sanity check.

int nd

The number of dimensions in the array.

char typekind

A character indicating what kind of array is present according to the typestring convention: ‘t’ -> bitfield, ‘b’ -> Boolean, ‘i’ -> signed integer, ‘u’ -> unsigned integer, ‘f’ -> floating point, ‘c’ -> complex floating point, ‘O’ -> object, ‘S’ -> (byte-)string, ‘U’ -> unicode, ‘V’ -> void.

int itemsize

The number of bytes each item in the array requires.

int flags

Any of the bits [`NPY_ARRAY_C_CONTIGUOUS`](array#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS") (1), [`NPY_ARRAY_F_CONTIGUOUS`](array#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS") (2), [`NPY_ARRAY_ALIGNED`](array#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") (0x100), [`NPY_ARRAY_NOTSWAPPED`](array#c.NPY_ARRAY_NOTSWAPPED "NPY_ARRAY_NOTSWAPPED") (0x200), or [`NPY_ARRAY_WRITEABLE`](array#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE") (0x400) to indicate something about the data. The [`NPY_ARRAY_ALIGNED`](array#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED"), [`NPY_ARRAY_C_CONTIGUOUS`](array#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS"), and [`NPY_ARRAY_F_CONTIGUOUS`](array#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS") flags can actually be determined from the other parameters. The flag [`NPY_ARR_HAS_DESCR`](../arrays.interface#c.NPY_ARR_HAS_DESCR "NPY_ARR_HAS_DESCR") (0x800) can also be set to indicate to objects consuming the version 3 array interface that the descr member of the structure is present (it will be ignored by objects consuming version 2 of the array interface).

[npy_intp](dtype#c.npy_intp "npy_intp") *shape

An array containing the size of the array in each dimension.

[npy_intp](dtype#c.npy_intp "npy_intp") *strides

An array containing the number of bytes to jump to get to the next element in each dimension.
void *data

A pointer to the first element of the array.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *descr

A Python object describing the data-type in more detail (same as the _descr_ key in [`__array_interface__`](../arrays.interface#object.__array_interface__ "object.__array_interface__")). This can be `NULL` if _typekind_ and _itemsize_ provide enough information. This field is also ignored unless the [`NPY_ARR_HAS_DESCR`](../arrays.interface#c.NPY_ARR_HAS_DESCR "NPY_ARR_HAS_DESCR") flag is on in _flags_.

### Internally used structures

Internally, the code uses some additional Python objects primarily for memory management. These types are not accessible directly from Python, and are not exposed to the C-API. They are included here only for completeness and assistance in understanding the code.

type PyUFunc_Loop1d

A simple linked-list of C-structures containing the information needed to define a 1-d loop for a ufunc for every defined signature of a user-defined data-type.

[PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "\(in Python v3.13\)") PyArrayMapIter_Type

Advanced indexing is handled with this Python type. It is simply a loose wrapper around the C-structure containing the variables needed for advanced array indexing.

type PyArrayMapIterObject

The C-structure associated with `PyArrayMapIter_Type`. This structure is useful if you are trying to understand the advanced-index mapping code. It is defined in the `arrayobject.h` header. This type is not exposed to Python and could be replaced with a C-structure. As a Python type it takes advantage of reference-counted memory management.

## NumPy C-API and C complex

When you use the NumPy C-API, you will have access to complex real declarations `npy_cdouble` and `npy_cfloat`, which are declared in terms of the C standard types from `complex.h`.
Unfortunately, `complex.h` contains `#define I ...` (where the actual definition depends on the compiler), which means that any downstream user that includes these NumPy complex declarations could get `I` defined in their code, so that something like declaring `double I;` will result in an obscure compiler error. This error can be avoided by adding:

    #undef I

to your code.

Changed in version 2.0: The inclusion of `complex.h` was new in NumPy 2, so code defining a different `I` may not have required the `#undef I` on older versions. Note that NumPy 2.0.1 briefly included the `#undef I` itself.

# ufunc API

## Constants

`UFUNC_{THING}_{ERR}`

* UFUNC_FPE_DIVIDEBYZERO
* UFUNC_FPE_OVERFLOW
* UFUNC_FPE_UNDERFLOW
* UFUNC_FPE_INVALID

`PyUFunc_{VALUE}`

* PyUFunc_One
* PyUFunc_Zero
* PyUFunc_MinusOne
* PyUFunc_ReorderableNone
* PyUFunc_None
* PyUFunc_IdentityValue

## Macros

NPY_LOOP_BEGIN_THREADS

Used in universal function code to only release the Python GIL if loop->obj is not true (_i.e._ this is not an OBJECT array loop). Requires use of [`NPY_BEGIN_THREADS_DEF`](array#c.NPY_BEGIN_THREADS_DEF "NPY_BEGIN_THREADS_DEF") in variable declaration area.

NPY_LOOP_END_THREADS

Used in universal function code to re-acquire the Python GIL if it was released (because loop->obj was not true).

## Types

type PyUFuncGenericFunction

Pointers to functions that actually implement the underlying (element-by-element) function \\(N\\) times with the following signature:

void loopfunc(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *data)

Parameters:

* **args** – An array of pointers to the actual data for the input and output arrays. The input arguments are given first followed by the output arguments.
* **dimensions** – A pointer to the size of the dimension over which this function is looping.
* **steps** – A pointer to the number of bytes to jump to get to the next element in this dimension for each of the input and output arguments.
* **data** – Arbitrary data (extra arguments, function names, _etc._) that can be stored with the ufunc and will be passed in when it is called. May be `NULL`.

Changed in version 1.23.0: Accepts `NULL` `data` in addition to array of `NULL` values.

This is an example of a func specialized for addition of doubles returning doubles.

    static void
    double_add(char **args, npy_intp const *dimensions,
               npy_intp const *steps, void *extra)
    {
        npy_intp i;
        npy_intp is1 = steps[0], is2 = steps[1];
        npy_intp os = steps[2], n = dimensions[0];
        char *i1 = args[0], *i2 = args[1], *op = args[2];
        for (i = 0; i < n; i++) {
            *((double *)op) = *((double *)i1) + *((double *)i2);
            i1 += is1;
            i2 += is2;
            op += os;
        }
    }

## Functions

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyUFunc_FromFuncAndData(PyUFuncGenericFunction *func, void *const *data, const char *types, int ntypes, int nin, int nout, int identity, const char *name, const char *doc, int unused)

Create a new broadcasting universal function from required variables. Each ufunc builds around the notion of an element-by-element operation. Each ufunc object contains pointers to 1-d loops implementing the basic functionality for each supported type.

Note

The _func_, _data_, _types_, _name_, and _doc_ arguments are not copied by `PyUFunc_FromFuncAndData`. The caller must ensure that the memory used by these arrays is not freed as long as the ufunc object is alive.

Parameters:

* **func** – Must point to an array containing _ntypes_ `PyUFuncGenericFunction` elements.
* **data** – Should be `NULL` or a pointer to an array of size _ntypes_. This array may contain arbitrary extra-data to be passed to the corresponding loop function in the func array, including `NULL`.
* **types** – Length `(nin + nout) * ntypes` array of `char` encoding the [`numpy.dtype.num`](../generated/numpy.dtype.num#numpy.dtype.num "numpy.dtype.num") (built-in only) that the corresponding function in the `func` array accepts.
For instance, for a comparison ufunc with two `ntypes`, two `nin` and one `nout`, where the first function accepts [`numpy.int32`](../arrays.scalars#numpy.int32 "numpy.int32") and the second [`numpy.int64`](../arrays.scalars#numpy.int64 "numpy.int64"), with both returning [`numpy.bool_`](../arrays.scalars#numpy.bool_ "numpy.bool_"), `types` would be `(char[]) {5, 5, 0, 7, 7, 0}` since `NPY_INT32` is 5, `NPY_INT64` is 7, and `NPY_BOOL` is 0. The bit-width names can also be used (e.g. [`NPY_INT32`](dtype#c.NPY_TYPES.NPY_INT32 "NPY_INT32"), [`NPY_COMPLEX128`](dtype#c.NPY_TYPES.NPY_COMPLEX128 "NPY_COMPLEX128")) if desired. [Type casting rules](../../user/basics.ufuncs#ufuncs-casting) will be used at runtime to find the first `func` callable by the input/output provided.

* **ntypes** – How many different data-type-specific functions the ufunc has implemented.
* **nin** – The number of inputs to this operation.
* **nout** – The number of outputs.
* **identity** – Either `PyUFunc_One`, `PyUFunc_Zero`, `PyUFunc_MinusOne`, or `PyUFunc_None`. This specifies what should be returned when an empty array is passed to the reduce method of the ufunc. The special value `PyUFunc_IdentityValue` may only be used with the `PyUFunc_FromFuncAndDataAndSignatureAndIdentity` method, to allow an arbitrary Python object to be used as the identity.
* **name** – The name for the ufunc as a `NULL` terminated string. Specifying a name of ‘add’ or ‘multiply’ enables a special behavior for integer-typed reductions when no dtype is given. If the input type is an integer (or boolean) data type smaller than the size of the [`numpy.int_`](../arrays.scalars#numpy.int_ "numpy.int_") data type, it will be internally upcast to the [`numpy.int_`](../arrays.scalars#numpy.int_ "numpy.int_") (or [`numpy.uint`](../arrays.scalars#numpy.uint "numpy.uint")) data type.
* **doc** – Allows passing in a documentation string to be stored with the ufunc.
The documentation string should not contain the name of the function or the calling signature as that will be dynamically determined from the object and available when accessing the **__doc__** attribute of the ufunc.

* **unused** – Unused; present for backwards compatibility of the C-API.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyUFunc_FromFuncAndDataAndSignature(PyUFuncGenericFunction *func, void *const *data, const char *types, int ntypes, int nin, int nout, int identity, const char *name, const char *doc, int unused, const char *signature)

This function is very similar to PyUFunc_FromFuncAndData above, but has an extra _signature_ argument, to define a [generalized universal function](generalized-ufuncs#c-api-generalized-ufuncs). Similarly to how ufuncs are built around an element-by-element operation, gufuncs are built around subarray-by-subarray operations, the [signature](generalized-ufuncs#details-of-signature) defining the subarrays to operate on.

Parameters:

* **signature** – The signature for the new gufunc. Setting it to NULL is equivalent to calling PyUFunc_FromFuncAndData. A copy of the string is made, so the passed-in buffer can be freed.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *PyUFunc_FromFuncAndDataAndSignatureAndIdentity(PyUFuncGenericFunction *func, void **data, char *types, int ntypes, int nin, int nout, int identity, char *name, char *doc, int unused, char *signature, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)") *identity_value)

This function is very similar to `PyUFunc_FromFuncAndDataAndSignature` above, but has an extra _identity_value_ argument, to define an arbitrary identity for the ufunc when `identity` is passed as `PyUFunc_IdentityValue`.

Parameters:

* **identity_value** – The identity for the new gufunc. Must be passed as `NULL` unless the `identity` argument is `PyUFunc_IdentityValue`.
Setting it to NULL is equivalent to calling PyUFunc_FromFuncAndDataAndSignature.

int PyUFunc_RegisterLoopForType([PyUFuncObject](types-and-structures#c.PyUFuncObject "PyUFuncObject") *ufunc, int usertype, PyUFuncGenericFunction function, int *arg_types, void *data)

This function allows the user to register a 1-d loop with an already-created ufunc to be used whenever the ufunc is called with any of its input arguments as the user-defined data-type. This is needed in order to make ufuncs work with built-in data-types. The data-type must have been previously registered with the numpy system. The loop is passed in as _function_. This loop can take arbitrary data which should be passed in as _data_. The data-types the loop requires are passed in as _arg_types_ which must be a pointer to memory at least as large as ufunc->nargs.

int PyUFunc_RegisterLoopForDescr([PyUFuncObject](types-and-structures#c.PyUFuncObject "PyUFuncObject") *ufunc, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") *userdtype, PyUFuncGenericFunction function, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr") **arg_dtypes, void *data)

This function behaves like PyUFunc_RegisterLoopForType above, except that it allows the user to register a 1-d loop using PyArray_Descr objects instead of dtype type num values. This allows a 1-d loop to be registered for structured array data-dtypes and custom data-types instead of scalar data-types.

int PyUFunc_ReplaceLoopBySignature([PyUFuncObject](types-and-structures#c.PyUFuncObject "PyUFuncObject") *ufunc, PyUFuncGenericFunction newfunc, int *signature, PyUFuncGenericFunction *oldfunc)

Replace a 1-d loop matching the given _signature_ in the already-created _ufunc_ with the new 1-d loop newfunc. Return the old 1-d loop function in _oldfunc_. Return 0 on success and -1 on failure. This function works only with built-in types (use `PyUFunc_RegisterLoopForType` for user-defined types).
A signature is an array of data-type numbers indicating the inputs followed by the outputs assumed by the 1-d loop.

void PyUFunc_clearfperr()

Clear the IEEE error flags.

## Generic functions

At the core of every ufunc is a collection of type-specific functions that defines the basic functionality for each of the supported types. These functions must evaluate the underlying function \\(N\geq1\\) times. Extra-data may be passed in that may be used during the calculation. This feature allows some general functions to be used as these basic looping functions. The general function has all the code needed to point variables to the right place and set up a function call. The general function assumes that the actual function to call is passed in as the extra data and calls it with the correct values. All of these functions are suitable for placing directly in the array of functions stored in the functions member of the PyUFuncObject structure.

void PyUFunc_f_f_As_d_d(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_d_d(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_f_f(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_g_g(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_F_F_As_D_D(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_F_F(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_D_D(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)
void PyUFunc_G_G(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_e_e(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_e_e_As_f_f(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_e_e_As_d_d(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

Type specific, core 1-d functions for ufuncs where each calculation is obtained by calling a function taking one input argument and returning one output. This function is passed in `func`. The letters correspond to dtypechar’s of the supported data types (`e` - half, `f` - float, `d` - double, `g` - long double, `F` - cfloat, `D` - cdouble, `G` - clongdouble). The argument _func_ must support the same signature. The _As_X_X variants assume ndarrays of one data type but cast the values to use an underlying function that takes a different data type. Thus, `PyUFunc_f_f_As_d_d` uses ndarrays of data type [`NPY_FLOAT`](dtype#c.NPY_TYPES.NPY_FLOAT "NPY_FLOAT") but calls out to a C-function that takes double and returns double.
void PyUFunc_ff_f_As_dd_d(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_ff_f(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_dd_d(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_gg_g(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_FF_F_As_DD_D(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_DD_D(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_FF_F(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_GG_G(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_ee_e(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_ee_e_As_ff_f(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_ee_e_As_dd_d(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

Type-specific, core 1-d functions for ufuncs where each calculation is obtained by calling a function taking two input arguments and returning one output. The underlying function to call is passed in as _func_. The letters correspond to the dtype characters of the specific data type supported by the general-purpose function.
The argument `func` must support the corresponding signature. The `_As_XX_X` variants assume ndarrays of one data type but cast the values at each iteration of the loop to use an underlying function that takes a different data type.

void PyUFunc_O_O(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_OO_O(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

One-input, one-output, and two-input, one-output core 1-d functions for the [`NPY_OBJECT`](dtype#c.NPY_TYPES.NPY_OBJECT "NPY_OBJECT") data type. These functions handle reference-count issues and return early on error. The actual function to call is _func_ and it must accept calls with the signature `(PyObject *)(PyObject *)` for `PyUFunc_O_O` or `(PyObject *)(PyObject *, PyObject *)` for `PyUFunc_OO_O`.

void PyUFunc_O_O_method(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

This general-purpose 1-d core function assumes that _func_ is a string representing a method of the input object. For each iteration of the loop, the Python object is extracted from the array and its _func_ method is called, with the result placed in the output array.

void PyUFunc_OO_O_method(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

This general-purpose 1-d core function assumes that _func_ is a string representing a method of the input object that takes one argument. The first argument in _args_ is the object whose method is called, the second argument in _args_ is the argument passed to the method. The output of the method is stored in the third entry of _args_.
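From Python, the effect of these object loops can be observed through `np.frompyfunc`, which builds a ufunc whose inner loop calls a Python object for every element, much as the `PyUFunc_O_O*` functions do in C. A short sketch (the pairing with specific C loops is illustrative):

```python
import operator
import numpy as np

# One-in, one-out: call a named method on each element of an object array
# (the Python-level effect of a PyUFunc_O_O_method-style loop).
upper = np.frompyfunc(operator.methodcaller("upper"), 1, 1)
print(upper(np.array(["spam", "eggs"], dtype=object)))  # ['SPAM' 'EGGS']

# nin and nout other than 1 are also supported; the C loop behind
# frompyfunc (PyUFunc_On_Om) records nin, nout, and the callable.
py_divmod = np.frompyfunc(divmod, 2, 2)
q, r = py_divmod(np.array([7, 9], dtype=object), 2)
print(q, r)  # [3 4] [1 1]
```

Note that `frompyfunc` always produces object arrays, so the results usually need an explicit `.astype(...)` to get back to a numeric dtype.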
void PyUFunc_On_Om(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

This is the 1-d core function used by the dynamic ufuncs created by `umath.frompyfunc(function, nin, nout)`. In this case _func_ is a pointer to a `PyUFunc_PyFuncData` structure, which has the definition:

type PyUFunc_PyFuncData

    typedef struct {
        int nin;
        int nout;
        PyObject *callable;
    } PyUFunc_PyFuncData;

At each iteration of the loop, the _nin_ input objects are extracted from their object arrays and placed into an argument tuple, the Python _callable_ is called with the input arguments, and the _nout_ outputs are placed into their object arrays.

## Importing the API

PY_UFUNC_UNIQUE_SYMBOL

NO_IMPORT_UFUNC

int PyUFunc_ImportUFuncAPI(void)

Ensures that the UFunc C-API is imported and usable. It returns `0` on success and `-1` with an error set if NumPy couldn't be imported. It is preferable to call it once at module initialization, but the function is very lightweight if called multiple times.

New in version 2.0: This function mainly checks for `PyUFunc_API == NULL`, so it can be manually backported if desired.

import_ufunc(void)

These are the constants and functions for accessing the ufunc C-API from extension modules in precisely the same way as the array C-API can be accessed. The `import_ufunc()` function must always be called (in the initialization subroutine of the extension module). If your extension module is in one file then that is all that is required. The other two constants are useful if your extension module makes use of multiple files. In that case, define `PY_UFUNC_UNIQUE_SYMBOL` to something unique to your code and then in source files that do not contain the module initialization function but still need access to the UFUNC API, define `PY_UFUNC_UNIQUE_SYMBOL` to the same name used previously and also define `NO_IMPORT_UFUNC`. The C-API is actually an array of function pointers.
This array is created (and pointed to by a global variable) by `import_ufunc`. The global variable is either statically defined or allowed to be seen by other files, depending on the state of `PY_UFUNC_UNIQUE_SYMBOL` and `NO_IMPORT_UFUNC`.

# Constants

NumPy includes several constants:

numpy.e

Euler's number, the base of natural logarithms (Napier's constant).

`e = 2.71828182845904523536028747135266249775724709369995...`

#### See Also

exp : Exponential function
log : Natural logarithm

numpy.euler_gamma

The Euler-Mascheroni constant.

`γ = 0.5772156649015328606065120900824024310421...`

numpy.inf

IEEE 754 floating point representation of (positive) infinity.

#### Returns

y : float
    A floating point representation of positive infinity.

#### See Also

isinf : Shows which elements are positive or negative infinity
isposinf : Shows which elements are positive infinity
isneginf : Shows which elements are negative infinity
isnan : Shows which elements are Not a Number
isfinite : Shows which elements are finite (not one of Not a Number, positive infinity or negative infinity)

#### Notes

NumPy uses the IEEE Standard for Binary Floating-Point Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity, and positive infinity is not equivalent to negative infinity. But infinity is equivalent to positive infinity.

#### Examples

>>> import numpy as np
>>> np.inf
inf
>>> np.array([1]) / 0.
array([inf])

numpy.nan

IEEE 754 floating point representation of Not a Number (NaN).

#### Returns

y : float
    A floating point representation of Not a Number.

#### See Also

isnan : Shows which elements are Not a Number.
isfinite : Shows which elements are finite (not one of Not a Number, positive infinity or negative infinity)

#### Notes

NumPy uses the IEEE Standard for Binary Floating-Point Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity.
#### Examples

>>> import numpy as np
>>> np.nan
nan
>>> np.log(-1)
np.float64(nan)
>>> np.log([-1, 1, 2])
array([ nan, 0. , 0.69314718])

numpy.newaxis

A convenient alias for None, useful for indexing arrays.

#### Examples

>>> import numpy as np
>>> np.newaxis is None
True
>>> x = np.arange(3)
>>> x
array([0, 1, 2])
>>> x[:, np.newaxis]
array([[0],
       [1],
       [2]])
>>> x[:, np.newaxis, np.newaxis]
array([[[0]],
       [[1]],
       [[2]]])
>>> x[:, np.newaxis] * x
array([[0, 0, 0],
       [0, 1, 2],
       [0, 2, 4]])

Outer product, same as `outer(x, y)`:

>>> y = np.arange(3, 6)
>>> x[:, np.newaxis] * y
array([[ 0,  0,  0],
       [ 3,  4,  5],
       [ 6,  8, 10]])

`x[np.newaxis, :]` is equivalent to `x[np.newaxis]` and `x[None]`:

>>> x[np.newaxis, :].shape
(1, 3)
>>> x[np.newaxis].shape
(1, 3)
>>> x[None].shape
(1, 3)
>>> x[:, np.newaxis].shape
(3, 1)

numpy.pi

`pi = 3.1415926535897932384626433...`

# Packaging (numpy.distutils)

Warning: `numpy.distutils` is deprecated, and will be removed for Python >= 3.12. For more details, see [Status of numpy.distutils and migration advice](distutils_status_migration#distutils-status-migration)

Warning: Note that `setuptools` does major releases often and those may contain changes that break `numpy.distutils`, which will _not_ be updated anymore for new `setuptools` versions. It is therefore recommended to set an upper version bound in your build configuration for the last known version of `setuptools` that works with your build.

NumPy provides enhanced distutils functionality to make it easier to build and install sub-packages, auto-generate code, and create extension modules that use Fortran-compiled libraries. A useful `Configuration` class is also provided in [`numpy.distutils.misc_util`](distutils/misc_util#module-numpy.distutils.misc_util "numpy.distutils.misc_util") that can make it easier to construct keyword arguments to pass to the setup function (by passing the dictionary obtained from the todict() method of the class).
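The todict() pattern mentioned above is ordinary keyword-argument unpacking: the configuration object serializes itself to a dict that is splatted into the setup call. A minimal stdlib-only sketch (the `Config` and `fake_setup` names are illustrative stand-ins, not numpy.distutils itself):

```python
# Sketch of the todict() pattern used with Configuration
# (illustrative stand-in; not the numpy.distutils implementation).
class Config:
    def __init__(self, name):
        self.name = name
        self.packages = [name]
        self.ext_modules = []

    def todict(self):
        # Keyword arguments that a setup() call would accept.
        return {"name": self.name,
                "packages": self.packages,
                "ext_modules": self.ext_modules}

def fake_setup(**kwargs):
    # Stands in for distutils' setup(); just reports the keywords it got.
    return sorted(kwargs)

print(fake_setup(**Config("mypkg").todict()))
# ['ext_modules', 'name', 'packages']
```

In a real setup.py the last line would be `setup(**config.todict())`, with `config` a fully populated `Configuration` instance.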
More information is available in the [numpy.distutils user guide](distutils_guide#distutils-user-guide).

The choice and location of linked libraries such as BLAS and LAPACK, as well as include paths and other such build options, can be specified in a `site.cfg` file located in the NumPy root repository or a `.numpy-site.cfg` file in your home directory. See the `site.cfg.example` example file included in the NumPy repository or sdist for documentation.

## Modules in numpy.distutils

* [distutils.misc_util](distutils/misc_util)
* [`all_strings`](distutils/misc_util#numpy.distutils.misc_util.all_strings)
* [`allpath`](distutils/misc_util#numpy.distutils.misc_util.allpath)
* [`appendpath`](distutils/misc_util#numpy.distutils.misc_util.appendpath)
* [`as_list`](distutils/misc_util#numpy.distutils.misc_util.as_list)
* [`blue_text`](distutils/misc_util#numpy.distutils.misc_util.blue_text)
* [`cyan_text`](distutils/misc_util#numpy.distutils.misc_util.cyan_text)
* [`cyg2win32`](distutils/misc_util#numpy.distutils.misc_util.cyg2win32)
* [`default_config_dict`](distutils/misc_util#numpy.distutils.misc_util.default_config_dict)
* [`dict_append`](distutils/misc_util#numpy.distutils.misc_util.dict_append)
* [`dot_join`](distutils/misc_util#numpy.distutils.misc_util.dot_join)
* [`exec_mod_from_location`](distutils/misc_util#numpy.distutils.misc_util.exec_mod_from_location)
* [`filter_sources`](distutils/misc_util#numpy.distutils.misc_util.filter_sources)
* [`generate_config_py`](distutils/misc_util#numpy.distutils.misc_util.generate_config_py)
* [`get_build_architecture`](distutils/misc_util#numpy.distutils.misc_util.get_build_architecture)
* [`get_cmd`](distutils/misc_util#numpy.distutils.misc_util.get_cmd)
* [`get_data_files`](distutils/misc_util#numpy.distutils.misc_util.get_data_files)
* [`get_dependencies`](distutils/misc_util#numpy.distutils.misc_util.get_dependencies)
* [`get_ext_source_files`](distutils/misc_util#numpy.distutils.misc_util.get_ext_source_files)
* [`get_frame`](distutils/misc_util#numpy.distutils.misc_util.get_frame)
* [`get_info`](distutils/misc_util#numpy.distutils.misc_util.get_info)
* [`get_language`](distutils/misc_util#numpy.distutils.misc_util.get_language)
* [`get_lib_source_files`](distutils/misc_util#numpy.distutils.misc_util.get_lib_source_files)
* [`get_mathlibs`](distutils/misc_util#numpy.distutils.misc_util.get_mathlibs)
* [`get_num_build_jobs`](distutils/misc_util#numpy.distutils.misc_util.get_num_build_jobs)
* [`get_numpy_include_dirs`](distutils/misc_util#numpy.distutils.misc_util.get_numpy_include_dirs)
* [`get_pkg_info`](distutils/misc_util#numpy.distutils.misc_util.get_pkg_info)
* [`get_script_files`](distutils/misc_util#numpy.distutils.misc_util.get_script_files)
* [`gpaths`](distutils/misc_util#numpy.distutils.misc_util.gpaths)
* [`green_text`](distutils/misc_util#numpy.distutils.misc_util.green_text)
* [`has_cxx_sources`](distutils/misc_util#numpy.distutils.misc_util.has_cxx_sources)
* [`has_f_sources`](distutils/misc_util#numpy.distutils.misc_util.has_f_sources)
* [`is_local_src_dir`](distutils/misc_util#numpy.distutils.misc_util.is_local_src_dir)
* [`is_sequence`](distutils/misc_util#numpy.distutils.misc_util.is_sequence)
* [`is_string`](distutils/misc_util#numpy.distutils.misc_util.is_string)
* [`mingw32`](distutils/misc_util#numpy.distutils.misc_util.mingw32)
* [`minrelpath`](distutils/misc_util#numpy.distutils.misc_util.minrelpath)
* [`njoin`](distutils/misc_util#numpy.distutils.misc_util.njoin)
* [`red_text`](distutils/misc_util#numpy.distutils.misc_util.red_text)
* [`sanitize_cxx_flags`](distutils/misc_util#numpy.distutils.misc_util.sanitize_cxx_flags)
* [`terminal_has_colors`](distutils/misc_util#numpy.distutils.misc_util.terminal_has_colors)
* [`yellow_text`](distutils/misc_util#numpy.distutils.misc_util.yellow_text)

[`ccompiler`](generated/numpy.distutils.ccompiler#module-numpy.distutils.ccompiler "numpy.distutils.ccompiler") |
---|---
[`ccompiler_opt`](generated/numpy.distutils.ccompiler_opt#module-numpy.distutils.ccompiler_opt "numpy.distutils.ccompiler_opt") | Provides the `CCompilerOpt` class, used for handling CPU/hardware optimization: parsing the command arguments, managing the relation between the CPU baseline and dispatchable features, generating the required C headers, and compiling the sources with the proper compiler flags.
[`cpuinfo.cpu`](generated/numpy.distutils.cpuinfo.cpu#numpy.distutils.cpuinfo.cpu "numpy.distutils.cpuinfo.cpu") |
[`core.Extension`](generated/numpy.distutils.core.extension#numpy.distutils.core.Extension "numpy.distutils.core.Extension")(name, sources[, ...]) |
[`exec_command`](generated/numpy.distutils.exec_command#module-numpy.distutils.exec_command "numpy.distutils.exec_command") | exec_command
[`log.set_verbosity`](generated/numpy.distutils.log.set_verbosity#numpy.distutils.log.set_verbosity "numpy.distutils.log.set_verbosity")(v[, force]) |
[`system_info.get_info`](generated/numpy.distutils.system_info.get_info#numpy.distutils.system_info.get_info "numpy.distutils.system_info.get_info")(name[, notfound_action]) |
[`system_info.get_standard_file`](generated/numpy.distutils.system_info.get_standard_file#numpy.distutils.system_info.get_standard_file "numpy.distutils.system_info.get_standard_file")(fname) | Returns a list of files named 'fname' from: 1) the system-wide directory (the directory location of this module); 2) the user's HOME directory (os.environ['HOME']); 3) the local directory.

## Configuration class

_class_ numpy.distutils.misc_util.Configuration(_package_name=None_, _parent_name=None_, _top_path=None_, _package_path=None_, _**attrs_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L750-L2119)

Construct a configuration instance for the given package name. If _parent_name_ is not None, then construct the package as a sub-package of the _parent_name_ package.
If _top_path_ and _package_path_ are None then they are assumed equal to the path of the file this instance was created in. The setup.py files in the numpy distribution are good examples of how to use the `Configuration` instance.

todict()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L857-L874)

Return a dictionary compatible with the keyword arguments of the distutils setup function.

#### Examples

>>> setup(**config.todict())

get_distribution()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L900-L903)

Return the distutils distribution object for self.

get_subpackage(_subpackage_name_, _subpackage_path=None_, _parent_name=None_, _caller_level=1_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L957-L1016)

Return a list of subpackage configurations.

Parameters:

**subpackage_name** : str or None
    Name of the subpackage to get the configuration for. A '*' in subpackage_name is handled as a wildcard.
**subpackage_path** : str
    If None, then the path is assumed to be the local path plus the subpackage_name. If a setup.py file is not found in the subpackage_path, then a default configuration is used.
**parent_name** : str
    Parent name.

add_subpackage(_subpackage_name_, _subpackage_path=None_, _standalone=False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L1018-L1059)

Add a sub-package to the current Configuration instance. This is useful in a setup.py script for adding sub-packages to a package.

Parameters:

**subpackage_name** : str
    Name of the subpackage.
**subpackage_path** : str
    If given, the path such that the subpackage is located at subpackage_path / subpackage_name. If None, the subpackage is assumed to be located at local path / subpackage_name.
**standalone** : bool

add_data_files(_*files_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L1191-L1340)

Add data files to configuration data_files.
Parameters:

**files** : sequence
    Argument(s) can be either:
    * a 2-sequence (<datadir prefix>, <path to data file(s)>)
    * paths to data files, where the python datadir prefix defaults to the package dir.

#### Notes

The form of each element of the files sequence is very flexible, allowing many combinations of where to get the files from the package and where they should ultimately be installed on the system. The most basic usage is for an element of the files argument sequence to be a simple filename. This will cause that file from the local path to be installed to the installation path of the self.name package (package path). The file argument can also be a relative path, in which case the entire relative path will be installed into the package directory. Finally, the file can be an absolute path name, in which case the file will be found at the absolute path name but installed to the package path.

This basic behavior can be augmented by passing a 2-tuple in as the file argument. The first element of the tuple should specify the relative path (under the package install directory) where the remaining sequence of files should be installed to (it has nothing to do with the file-names in the source distribution). The second element of the tuple is the sequence of files that should be installed. The files in this sequence can be filenames, relative paths, or absolute paths. For absolute paths the file will be installed in the top-level package installation directory (regardless of the first argument). Filenames and relative path names will be installed in the package install directory under the path name given as the first element of the tuple.

Rules for installation paths:

1. file.txt -> (., file.txt) -> parent/file.txt
2. foo/file.txt -> (foo, foo/file.txt) -> parent/foo/file.txt
3. /foo/bar/file.txt -> (., /foo/bar/file.txt) -> parent/file.txt
4. `*`.txt -> parent/a.txt, parent/b.txt
5. foo/`*`.txt -> parent/foo/a.txt, parent/foo/b.txt
6. `*/*.txt` -> (`*`, `*`/`*`.txt) -> parent/c/a.txt, parent/d/b.txt
7. (sun, file.txt) -> parent/sun/file.txt
8. (sun, bar/file.txt) -> parent/sun/file.txt
9. (sun, /foo/bar/file.txt) -> parent/sun/file.txt
10. (sun, `*`.txt) -> parent/sun/a.txt, parent/sun/b.txt
11. (sun, bar/`*`.txt) -> parent/sun/a.txt, parent/sun/b.txt
12. (sun/`*`, `*`/`*`.txt) -> parent/sun/c/a.txt, parent/d/b.txt

An additional feature is that the path to a data-file can actually be a function that takes no arguments and returns the actual path(s) to the data-files. This is useful when the data files are generated while building the package.

#### Examples

Add files to the list of data_files to be included with the package.

>>> self.add_data_files('foo.dat',
...     ('fun', ['gun.dat', 'nun/pun.dat', '/tmp/sun.dat']),
...     'bar/cat.dat',
...     '/full/path/to/can.dat')

will install these data files to:

    <package install directory>/
        foo.dat
        fun/
            gun.dat
            nun/
                pun.dat
            sun.dat
        bar/
            cat.dat
        can.dat

where <package install directory> is the package (or sub-package) directory such as '/usr/lib/python2.4/site-packages/mypackage' ('C:\Python2.4\Lib\site-packages\mypackage') or '/usr/lib/python2.4/site-packages/mypackage/mysubpackage' ('C:\Python2.4\Lib\site-packages\mypackage\mysubpackage').

add_data_dir(_data_path_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L1061-L1180)

Recursively add files under data_path to the data_files list.

Recursively add files under data_path to the list of data_files to be installed (and distributed). The data_path can be either a relative path-name, or an absolute path-name, or a 2-tuple where the first argument shows where in the install directory the data directory should be installed to.

Parameters:

**data_path** : seq or str
    Argument can be either:
    * a 2-sequence (<datadir suffix>, <path to data directory>)
    * a path to a data directory, where the python datadir suffix defaults to the package dir.
#### Notes

Rules for installation paths:

    foo/bar -> (foo/bar, foo/bar) -> parent/foo/bar
    (gun, foo/bar) -> parent/gun
    foo/* -> (foo/a, foo/a), (foo/b, foo/b) -> parent/foo/a, parent/foo/b
    (gun, foo/*) -> (gun, foo/a), (gun, foo/b) -> gun
    (gun/*, foo/*) -> parent/gun/a, parent/gun/b
    /foo/bar -> (bar, /foo/bar) -> parent/bar
    (gun, /foo/bar) -> parent/gun
    (fun/*/gun/*, sun/foo/bar) -> parent/fun/foo/gun/bar

#### Examples

For example, suppose the source directory contains fun/foo.dat and fun/bar/car.dat:

>>> self.add_data_dir('fun')
>>> self.add_data_dir(('sun', 'fun'))
>>> self.add_data_dir(('gun', '/full/path/to/fun'))

will install the data files to the locations:

    <package install directory>/
        fun/
            foo.dat
            bar/
                car.dat
        sun/
            foo.dat
            bar/
                car.dat
        gun/
            foo.dat
            car.dat

add_include_dirs(_*paths_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L1360-L1374)

Add paths to configuration include directories.

Add the given sequence of paths to the beginning of the include_dirs list. This list will be visible to all extension modules of the current package.

add_headers(_*files_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L1376-L1408)

Add installable headers to configuration.

Add the given sequence of files to the beginning of the headers list. By default, headers will be installed under the <python include>/<self.name.replace('.','/')>/ directory. If an item of files is a tuple, then its first argument specifies the actual installation location relative to the <python include> path.

Parameters:

**files** : str or seq
    Argument(s) can be either:
    * a 2-sequence (<includedir suffix>, <path to header file(s)>)
    * path(s) to header file(s), where the python includedir suffix will default to the package name.

add_extension(_name_, _sources_, _**kw_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L1433-L1534)

Add an extension to the configuration.

Create and add an Extension instance to the ext_modules list. This method also takes the following optional keyword arguments that are passed on to the Extension constructor.
Parameters:

**name** : str
    Name of the extension.
**sources** : seq
    List of the sources. The list of sources may contain functions (called source generators) which must take an extension instance and a build directory as inputs and return a source file or list of source files or None. If None is returned then no sources are generated. If the Extension instance has no sources after processing all source generators, then no extension module is built.
**include_dirs**
**define_macros**
**undef_macros**
**library_dirs**
**libraries**
**runtime_library_dirs**
**extra_objects**
**extra_compile_args**
**extra_link_args**
**extra_f77_compile_args**
**extra_f90_compile_args**
**export_symbols**
**swig_opts**
**depends**
    The depends list contains paths to files or directories that the sources of the extension module depend on. If any path in the depends list is newer than the extension module, then the module will be rebuilt.
**language**
**f2py_options**
**module_dirs**
**extra_info** : dict or list
    dict or list of dicts of keywords to be appended to keywords.

#### Notes

The self.paths(…) method is applied to all lists that may contain paths.

add_library(_name_, _sources_, _**build_info_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L1536-L1570)

Add a library to the configuration.

Parameters:

**name** : str
    Name of the library.
**sources** : sequence
    List of the sources. The list of sources may contain functions (called source generators) which must take an extension instance and a build directory as inputs and return a source file or list of source files or None. If None is returned then no sources are generated. If the Extension instance has no sources after processing all source generators, then no extension module is built.
**build_info** : dict, optional
    The following keys are allowed:
    * depends
    * macros
    * include_dirs
    * extra_compiler_args
    * extra_f77_compile_args
    * extra_f90_compile_args
    * f2py_options
    * language

add_scripts(_*files_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L1746-L1760)

Add scripts to the configuration.

Add the sequence of files to the beginning of the scripts list. Scripts will be installed under the <prefix>/bin/ directory.

add_installed_library(_name_, _sources_, _install_dir_, _build_info=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L1588-L1637)

Similar to add_library, but the specified library is installed.

Most C libraries used with `distutils` are only used to build python extensions, but libraries built through this method will be installed so that they can be reused by third-party packages.

Parameters:

**name** : str
    Name of the installed library.
**sources** : sequence
    List of the library's source files. See `add_library` for details.
**install_dir** : str
    Path to install the library, relative to the current sub-package.
**build_info** : dict, optional
    The following keys are allowed:
    * depends
    * macros
    * include_dirs
    * extra_compiler_args
    * extra_f77_compile_args
    * extra_f90_compile_args
    * f2py_options
    * language

Returns:

None

See also

`add_library`, `add_npy_pkg_config`, [`get_info`](distutils/misc_util#numpy.distutils.misc_util.get_info "numpy.distutils.misc_util.get_info")

#### Notes

The best way to encode the options required to link against the specified C libraries is to use a "libname.ini" file and use [`get_info`](distutils/misc_util#numpy.distutils.misc_util.get_info "numpy.distutils.misc_util.get_info") to retrieve the required options (see `add_npy_pkg_config` for more information).
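The variable substitution these ini templates rely on, where `@key@` markers are replaced from a dictionary when the file is installed, can be sketched in a few lines; `subst_vars` below is an illustrative stand-in, not the numpy.distutils implementation:

```python
import re

def subst_vars(template, subst_dict):
    # Replace every @key@ with subst_dict["key"], as the npy-pkg-config
    # template mechanism does (illustrative stand-in).
    return re.sub(r"@(\w+)@", lambda m: subst_dict[m.group(1)], template)

template = "Name=@foo@\nCflags=-I@prefix@/include"
print(subst_vars(template, {"foo": "bar", "prefix": "/usr/local"}))
# Name=bar
# Cflags=-I/usr/local/include
```

A missing key raises `KeyError`, which is usually the desired behavior for a build-time template: an unfilled variable should fail loudly rather than install a broken config file.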
add_npy_pkg_config(_template_, _install_dir_, _subst_dict=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L1639-L1743)

Generate and install an npy-pkg config file from a template.

The config file generated from `template` is installed in the given install directory, using `subst_dict` for variable substitution.

Parameters:

**template** : str
    The path of the template, relative to the current package path.
**install_dir** : str
    Where to install the npy-pkg config file, relative to the current package path.
**subst_dict** : dict, optional
    If given, any string of the form `@key@` will be replaced by `subst_dict[key]` in the template file when installed. The install prefix is always available through the variable `@prefix@`, since the install prefix is not easy to get reliably from setup.py.

See also

`add_installed_library`, [`get_info`](distutils/misc_util#numpy.distutils.misc_util.get_info "numpy.distutils.misc_util.get_info")

#### Notes

This works for both standard installs and in-place builds, i.e. `@prefix@` refers to the source directory for in-place builds.

#### Examples

    config.add_npy_pkg_config('foo.ini.in', 'lib', {'foo': bar})

Assuming the foo.ini.in file has the following content:

    [meta]
    Name=@foo@
    Version=1.0
    Description=dummy description

    [default]
    Cflags=-I@prefix@/include
    Libs=

The generated file will have the following content:

    [meta]
    Name=bar
    Version=1.0
    Description=dummy description

    [default]
    Cflags=-Iprefix_dir/include
    Libs=

and will be installed as foo.ini in the 'lib' subpath.

When cross-compiling with numpy distutils, it might be necessary to use modified npy-pkg-config files. Using the default/generated files will link with the host libraries (i.e. libnpymath.a). For cross-compilation you of course need to link with target libraries, while using the host Python installation.
You can copy out the numpy/_core/lib/npy-pkg-config directory, add a pkgdir value to the .ini files, and set the NPY_PKG_CONFIG_PATH environment variable to point to the directory with the modified npy-pkg-config files.

Example npymath.ini modified for cross-compilation:

    [meta]
    Name=npymath
    Description=Portable, core math library implementing C99 standard
    Version=0.1

    [variables]
    pkgname=numpy._core
    pkgdir=/build/arm-linux-gnueabi/sysroot/usr/lib/python3.7/site-packages/numpy/_core
    prefix=${pkgdir}
    libdir=${prefix}/lib
    includedir=${prefix}/include

    [default]
    Libs=-L${libdir} -lnpymath
    Cflags=-I${includedir}
    Requires=mlib

    [msvc]
    Libs=/LIBPATH:${libdir} npymath.lib
    Cflags=/INCLUDE:${includedir}
    Requires=mlib

paths(_*paths_, _**kws_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L1410-L1423)

Apply glob to paths and prepend local_path if needed.

Applies glob.glob(…) to each path in the sequence (if needed) and prepends local_path if needed. Because this is called on all source lists, it allows wildcard characters to be specified in lists of sources for extension modules, libraries, and scripts, and allows path-names to be relative to the source directory.

get_config_cmd()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L1800-L1812)

Returns the numpy.distutils config command instance.

get_build_temp_dir()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L1814-L1821)

Return a path to a temporary directory where temporary files should be placed.

have_f77c()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L1823-L1840)

Check for availability of a Fortran 77 compiler.

Use it inside a source-generating function to ensure that the setup distribution instance has been initialized.

#### Notes

True if a Fortran 77 compiler is available (because a simple Fortran 77 code snippet was able to be compiled successfully).
have_f90c()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L1842-L1859)

Check for availability of a Fortran 90 compiler.

Use it inside a source-generating function to ensure that the setup distribution instance has been initialized.

#### Notes

True if a Fortran 90 compiler is available (because a simple Fortran 90 code snippet was able to be compiled successfully).

get_version(_version_file=None_, _version_variable=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L1942-L2016)

Try to get the version string of a package.

Return a version string of the current package or None if the version information could not be detected.

#### Notes

This method scans files named __version__.py, <packagename>_version.py, version.py, and __svn_version__.py for string variables version, __version__, and <packagename>_version, until a version number is found.

make_svn_version_py(_delete=True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L2018-L2057)

Appends a data function to the data_files list that will generate an __svn_version__.py file in the current package directory.

Generate the package __svn_version__.py file from the SVN revision number; it will be removed after python exits but will be available when sdist, etc. commands are executed.

#### Notes

If __svn_version__.py existed before, nothing is done. This is intended for working with source directories that are in an SVN repository.

make_config_py(_name='__config__'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L2099-L2107)

Generate the package __config__.py file, containing the system_info information used during building the package. This file is installed to the package installation directory.

get_info(_*names_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L2109-L2119)

Get resources information.
Return information (from system_info.get_info) for all of the names in the argument list in a single dictionary.

## Building installable C libraries

Conventional C libraries (installed through `add_library`) are not installed, and are just used during the build (they are statically linked). An installable C library is a pure C library, which does not depend on the Python C runtime, and is installed such that it may be used by third-party packages. To build and install the C library, you just use the method `add_installed_library` instead of `add_library`, which takes the same arguments except for an additional `install_dir` argument:

    >>> import numpy.distutils.misc_util
    >>> config = numpy.distutils.misc_util.Configuration(None, '', '.')
    >>> with open('foo.c', 'w') as f: pass
    >>> config.add_installed_library('foo', sources=['foo.c'], install_dir='lib')

### npy-pkg-config files

To make the necessary build options available to third parties, you could use the `npy-pkg-config` mechanism implemented in `numpy.distutils`. This mechanism is based on a .ini file which contains all the options. A .ini file is very similar to the .pc files used by the pkg-config unix utility:

    [meta]
    Name: foo
    Version: 1.0
    Description: foo library

    [variables]
    prefix = /home/user/local
    libdir = ${prefix}/lib
    includedir = ${prefix}/include

    [default]
    cflags = -I${includedir}
    libs = -L${libdir} -lfoo

Generally, the file needs to be generated during the build, since it needs some information known only at build time (e.g. the prefix). This is mostly automatic if one uses the `Configuration` method `add_npy_pkg_config`.
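The `${var}` substitution used in these files can be illustrated with a small stand-alone sketch. This is not numpy.distutils' own parser (that lives in `numpy.distutils.npy_pkg_config`); it is a simplified illustration, using a hypothetical `foo` library file, of how values in the `[default]` section resolve against the `[variables]` section:

```python
import configparser

# Hypothetical foo.ini contents, matching the format shown above.
INI_TEXT = """
[meta]
Name: foo
Version: 1.0
Description: foo library

[variables]
prefix = /home/user/local
libdir = ${prefix}/lib
includedir = ${prefix}/include

[default]
cflags = -I${includedir}
libs = -L${libdir} -lfoo
"""

def load_npy_pkg_config(text):
    """Parse the .ini text and expand ${var} references using the
    [variables] section (simplified sketch, not the real parser)."""
    cp = configparser.ConfigParser()
    cp.read_string(text)
    variables = dict(cp.items('variables', raw=True))

    def expand(value):
        # Substitute ${name} repeatedly until the value stops changing,
        # so chained references like libdir -> prefix resolve fully.
        prev = None
        while prev != value:
            prev = value
            for name, repl in variables.items():
                value = value.replace('${%s}' % name, repl)
        return value

    variables = {k: expand(v) for k, v in variables.items()}
    return {k: expand(v) for k, v in cp.items('default', raw=True)}

print(load_npy_pkg_config(INI_TEXT))
```

Running this shows the fully expanded `cflags` and `libs` values that a third-party build would consume.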
Assuming we have a template file foo.ini.in as follows:

    [meta]
    Name: foo
    Version: @version@
    Description: foo library

    [variables]
    prefix = @prefix@
    libdir = ${prefix}/lib
    includedir = ${prefix}/include

    [default]
    cflags = -I${includedir}
    libs = -L${libdir} -lfoo

and the following code in setup.py:

    >>> config.add_installed_library('foo', sources=['foo.c'], install_dir='lib')
    >>> subst = {'version': '1.0'}
    >>> config.add_npy_pkg_config('foo.ini.in', 'lib', subst_dict=subst)

This will install the file foo.ini into the directory package_dir/lib, and the foo.ini file will be generated from foo.ini.in, where each `@version@` will be replaced by `subst_dict['version']`. The dictionary has an additional prefix substitution rule automatically added, which contains the install prefix (since this is not easy to get from setup.py).

### Reusing a C library from another package

Build information is easily retrieved from the [`get_info`](distutils/misc_util#numpy.distutils.misc_util.get_info "numpy.distutils.misc_util.get_info") function in [`numpy.distutils.misc_util`](distutils/misc_util#module-numpy.distutils.misc_util "numpy.distutils.misc_util"):

    >>> from numpy.distutils.misc_util import get_info
    >>> info = get_info('npymath')
    >>> config.add_extension('foo', sources=['foo.c'], extra_info=info)

An additional list of paths in which to look for .ini files can be given to [`get_info`](distutils/misc_util#numpy.distutils.misc_util.get_info "numpy.distutils.misc_util.get_info").

## Conversion of `.src` files

NumPy distutils supports automatic conversion of source files named `<somefile>.src`. This facility can be used to maintain very similar code blocks requiring only simple changes between blocks. During the build phase of setup, if a template file named `<somefile>.src` is encountered, a new file named `<somefile>` is constructed from the template and placed in the build directory to be used instead. Two forms of template conversion are supported.
The first form occurs for files named `<file>.ext.src`, where ext is a recognized Fortran extension (f, f90, f95, f77, for, ftn, pyf). The second form is used for all other cases. See [Conversion of .src files using templates](distutils_guide#templating).

# distutils.misc_util

numpy.distutils.misc_util.all_strings(_lst_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L490-L492)

Return True if all items in lst are string objects.

numpy.distutils.misc_util.allpath(_name_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L129-L132)

Convert a /-separated pathname to one using the OS’s path separator.

numpy.distutils.misc_util.appendpath(_prefix_, _path_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L2298-L2317)

numpy.distutils.misc_util.as_list(_seq_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L506-L510)

numpy.distutils.misc_util.blue_text(_s_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L381-L382)

numpy.distutils.misc_util.cyan_text(_s_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L379-L380)

numpy.distutils.misc_util.cyg2win32(_path: [str](https://docs.python.org/3/library/stdtypes.html#str "\(in Python v3.13\)")_) → [str](https://docs.python.org/3/library/stdtypes.html#str "\(in Python v3.13\)")[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L386-L420)

Convert a path from Cygwin-native to Windows-native.

Uses the cygpath utility (part of the Base install) to do the actual conversion. Falls back to returning the original path if this fails.
Handles the default `/cygdrive` mount prefix as well as the `/proc/cygdrive` portable prefix, custom cygdrive prefixes such as `/` or `/mnt`, and absolute paths such as `/usr/src/` or `/home/username`.

Parameters:

**path** : str — The path to convert

Returns:

**converted_path** : str — The converted path

#### Notes

Documentation for cygpath utility:
Documentation for the C function it wraps:

numpy.distutils.misc_util.default_config_dict(_name=None_, _parent_name=None_, _local_path=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L2273-L2284)

Return a configuration dictionary for usage in a configuration() function defined in a file setup_<name>.py.

numpy.distutils.misc_util.dict_append(_d_, _**kws_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L2287-L2296)

numpy.distutils.misc_util.dot_join(_*args_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L733-L734)

numpy.distutils.misc_util.exec_mod_from_location(_modname_, _modfile_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L2475-L2484)

Use importlib machinery to import a module `modname` from the file `modfile`. Depending on the `spec.loader`, the module may not be registered in sys.modules.

numpy.distutils.misc_util.filter_sources(_sources_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L533-L553)

Return four lists of filenames containing C, C++, Fortran, and Fortran 90 module sources, respectively.

numpy.distutils.misc_util.generate_config_py(_target_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L2319-L2448)

Generate a config.py file containing the system_info information used during building the package.
Usage:

    config['py_modules'].append((packagename, '__config__', generate_config_py))

numpy.distutils.misc_util.get_build_architecture()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L2458-L2462)

numpy.distutils.misc_util.get_cmd(_cmdname_, __cache={}_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L2122-L2132)

numpy.distutils.misc_util.get_data_files(_data_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L714-L731)

numpy.distutils.misc_util.get_dependencies(_sources_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L585-L587)

numpy.distutils.misc_util.get_ext_source_files(_ext_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L640-L651)

numpy.distutils.misc_util.get_frame(_level=0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L736-L745)

Return a frame object from the call stack with the given level.

numpy.distutils.misc_util.get_info(_pkgname_, _dirs=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L2201-L2259)

Return an info dict for a given C library.

The info dict contains the necessary options to use the C library.

Parameters:

**pkgname** : str — Name of the package (should match the name of the .ini file, without the extension, e.g. foo for the file foo.ini).

**dirs** : sequence, optional — If given, should be a sequence of additional directories in which to look for npy-pkg-config files. Those directories are searched prior to the NumPy directory.

Returns:

**info** : dict — The dictionary with build information.

Raises:

PkgNotFound — If the package is not found.
See also

[`Configuration.add_npy_pkg_config`](../distutils#numpy.distutils.misc_util.Configuration.add_npy_pkg_config "numpy.distutils.misc_util.Configuration.add_npy_pkg_config"), [`Configuration.add_installed_library`](../distutils#numpy.distutils.misc_util.Configuration.add_installed_library "numpy.distutils.misc_util.Configuration.add_installed_library"), `get_pkg_info`

#### Examples

To get the necessary information for the npymath library from NumPy:

    >>> npymath_info = np.distutils.misc_util.get_info('npymath')
    >>> npymath_info
    {'define_macros': [], 'libraries': ['npymath'], 'library_dirs': ['.../numpy/_core/lib'], 'include_dirs': ['.../numpy/_core/include']}

This info dict can then be used as input to a [`Configuration`](../distutils#numpy.distutils.misc_util.Configuration "numpy.distutils.misc_util.Configuration") instance:

    config.add_extension('foo', sources=['foo.c'], extra_info=npymath_info)

numpy.distutils.misc_util.get_language(_sources_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L512-L523)

Determine the language value (c, f77, f90) from sources.

numpy.distutils.misc_util.get_lib_source_files(_lib_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L657-L669)

numpy.distutils.misc_util.get_mathlibs(_path=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L205-L230)

Return the MATHLIB line from numpyconfig.h.

numpy.distutils.misc_util.get_num_build_jobs()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L76-L110)

Get the number of parallel build jobs set by the --parallel command line argument of setup.py. If the command did not receive a setting, the environment variable NPY_NUM_BUILD_JOBS is checked. If that is unset, return the number of processors on the system, with a maximum of 8 (to prevent overloading the system if there are a lot of CPUs).
Returns: **out** int number of parallel jobs that can be run numpy.distutils.misc_util.get_numpy_include_dirs()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L2134-L2141) numpy.distutils.misc_util.get_pkg_info(_pkgname_ , _dirs =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L2163-L2199) Return library info for the given package. Parameters: **pkgname** str Name of the package (should match the name of the .ini file, without the extension, e.g. foo for the file foo.ini). **dirs** sequence, optional If given, should be a sequence of additional directories where to look for npy-pkg-config files. Those directories are searched prior to the NumPy directory. Returns: **pkginfo** class instance The `LibraryInfo` instance containing the build information. Raises: PkgNotFound If the package is not found. See also [`Configuration.add_npy_pkg_config`](../distutils#numpy.distutils.misc_util.Configuration.add_npy_pkg_config "numpy.distutils.misc_util.Configuration.add_npy_pkg_config"), [`Configuration.add_installed_library`](../distutils#numpy.distutils.misc_util.Configuration.add_installed_library "numpy.distutils.misc_util.Configuration.add_installed_library") `get_info` numpy.distutils.misc_util.get_script_files(_scripts_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L653-L655) numpy.distutils.misc_util.gpaths(_paths_ , _local_path =''_, _include_non_existing =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L303-L308) Apply glob to paths and prepend local_path if needed. 
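The lookup order documented for `get_num_build_jobs` above can be sketched in a few lines. This is a hypothetical re-implementation for illustration, not the actual numpy.distutils code:

```python
import os

def num_build_jobs_sketch(cmd_setting=None, environ=None):
    """Mimic the documented precedence: the --parallel command-line
    setting first, then NPY_NUM_BUILD_JOBS, then the CPU count capped
    at 8 to avoid overloading machines with many cores."""
    environ = os.environ if environ is None else environ
    if cmd_setting is not None:
        return int(cmd_setting)
    env_jobs = environ.get('NPY_NUM_BUILD_JOBS')
    if env_jobs is not None:
        return int(env_jobs)
    return min(os.cpu_count() or 1, 8)

print(num_build_jobs_sketch(cmd_setting=4))                        # -> 4
print(num_build_jobs_sketch(environ={'NPY_NUM_BUILD_JOBS': '2'}))  # -> 2
```

The cap of 8 only applies to the automatic fallback; an explicit setting or environment variable is honoured as given.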
numpy.distutils.misc_util.green_text(_s_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L375-L376)

numpy.distutils.misc_util.has_cxx_sources(_sources_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L529-L531)

Return True if sources contains C++ files.

numpy.distutils.misc_util.has_f_sources(_sources_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L525-L527)

Return True if sources contains Fortran files.

numpy.distutils.misc_util.is_local_src_dir(_directory_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L589-L602)

Return true if directory is a local directory.

numpy.distutils.misc_util.is_sequence(_seq_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L494-L501)

numpy.distutils.misc_util.is_string(_s_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L487-L488)

numpy.distutils.misc_util.mingw32()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L423-L431)

Return true when using the mingw32 environment.

numpy.distutils.misc_util.minrelpath(_path_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L232-L259)

Resolve `..` and `.` from path.

numpy.distutils.misc_util.njoin(_*path_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L178-L203)

Join two or more pathname components:

- convert a /-separated pathname to one using the OS’s path separator;
- resolve `..` and `.` from the path.

Either passing n arguments as in njoin('a', 'b'), a sequence of n names as in njoin(['a', 'b']), or a mixture of such arguments is handled.
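The behaviour described for `njoin` and `minrelpath` can be approximated with the standard library. The helper below is a rough sketch for illustration, not the real implementation:

```python
import os

def njoin_sketch(*path):
    """Rough approximation of njoin: flatten nested sequences of path
    components, join them with the OS separator, and normalize away
    '.' and '..' components (the part minrelpath handles)."""
    parts = []
    for p in path:
        if isinstance(p, (list, tuple)):
            parts.extend(p)
        else:
            parts.append(p)
    return os.path.normpath(os.path.join(*parts))

print(njoin_sketch('a', ['b', './c'], '..', 'g'))  # a/b/g on POSIX
```

This mirrors the documented example `njoin('a', ['b', './c'], '..', 'g') -> os.path.join('a', 'b', 'g')`.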
numpy.distutils.misc_util.red_text(_s_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L373-L374) numpy.distutils.misc_util.sanitize_cxx_flags(_cxxflags_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L2468-L2472) Some flags are valid for C but not C++. Prune them. numpy.distutils.misc_util.terminal_has_colors()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L323-L348) numpy.distutils.misc_util.yellow_text(_s_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/misc_util.py#L377-L378) # numpy.distutils user guide Warning `numpy.distutils` is deprecated, and will be removed for Python >= 3.12. For more details, see [Status of numpy.distutils and migration advice](distutils_status_migration#distutils-status-migration) ## SciPy structure Currently SciPy project consists of two packages: * NumPy — it provides packages like: * numpy.distutils - extension to Python distutils * numpy.f2py - a tool to bind Fortran/C codes to Python * numpy._core - future replacement of Numeric and numarray packages * numpy.lib - extra utility functions * numpy.testing - numpy-style tools for unit testing * etc * SciPy — a collection of scientific tools for Python. The aim of this document is to describe how to add new tools to SciPy. ## Requirements for SciPy packages SciPy consists of Python packages, called SciPy packages, that are available to Python users via the `scipy` namespace. Each SciPy package may contain other SciPy packages. And so on. Therefore, the SciPy directory tree is a tree of packages with arbitrary depth and width. Any SciPy package may depend on NumPy packages but the dependence on other SciPy packages should be kept minimal or zero. 
A SciPy package contains, in addition to its sources, the following files and directories:

* `setup.py` — building script
* `__init__.py` — package initializer
* `tests/` — directory of unittests

Their contents are described below.

## The `setup.py` file

In order to add a Python package to SciPy, its build script (`setup.py`) must meet certain requirements. The most important requirement is that the package define a `configuration(parent_package='', top_path=None)` function which returns a dictionary suitable for passing to `numpy.distutils.core.setup(..)`. To simplify the construction of this dictionary, `numpy.distutils.misc_util` provides the `Configuration` class, described below.

### SciPy pure Python package example

Below is an example of a minimal `setup.py` file for a pure SciPy package:

    #!/usr/bin/env python3
    def configuration(parent_package='', top_path=None):
        from numpy.distutils.misc_util import Configuration
        config = Configuration('mypackage', parent_package, top_path)
        return config

    if __name__ == "__main__":
        from numpy.distutils.core import setup
        #setup(**configuration(top_path='').todict())
        setup(configuration=configuration)

The arguments of the `configuration` function specify the name of the parent SciPy package (`parent_package`) and the directory location of the main `setup.py` script (`top_path`). These arguments, along with the name of the current package, should be passed to the `Configuration` constructor. The `Configuration` constructor has a fourth optional argument, `package_path`, that can be used when package files are located somewhere other than the directory of the `setup.py` file.

The remaining `Configuration` arguments are all keyword arguments that will be used to initialize attributes of the `Configuration` instance. Usually, these keywords are the same as the ones that the `setup(..)` function would expect, for example, `packages`, `ext_modules`, `data_files`, `include_dirs`, `libraries`, `headers`, `scripts`, `package_dir`, etc.
However, the direct specification of these keywords is not recommended, as the content of these keyword arguments will not be processed or checked for consistency with the SciPy build system.

Finally, `Configuration` has a `.todict()` method that returns all the configuration data as a dictionary suitable for passing on to the `setup(..)` function.

### `Configuration` instance attributes

In addition to the attributes that can be specified via keyword arguments to the `Configuration` constructor, a `Configuration` instance (let us denote it as `config`) has the following attributes that can be useful in writing setup scripts:

* `config.name` — full name of the current package. The names of parent packages can be extracted as `config.name.split('.')`.
* `config.local_path` — path to the location of the current `setup.py` file.
* `config.top_path` — path to the location of the main `setup.py` file.

### `Configuration` instance methods

* `config.todict()` — returns a configuration dictionary suitable for passing to the `numpy.distutils.core.setup(..)` function.
* `config.paths(*paths)` — applies `glob.glob(..)` to items of `paths` if necessary. Fixes `paths` items that are relative to `config.local_path`.
* `config.get_subpackage(subpackage_name, subpackage_path=None)` — returns a list of subpackage configurations. The subpackage is looked up in the current directory under the name `subpackage_name`, but the path can also be specified via the optional `subpackage_path` argument. If `subpackage_name` is specified as `None` then the subpackage name will be taken as the basename of `subpackage_path`. Any `*` used in subpackage names is expanded as a wildcard.
* `config.add_subpackage(subpackage_name, subpackage_path=None)` — adds a SciPy subpackage configuration to the current one. The meaning and usage of the arguments is explained above; see the `config.get_subpackage()` method.
* `config.add_data_files(*files)` — prepends `files` to the `data_files` list.
  If a `files` item is a tuple, then its first element defines the suffix of where data files are copied relative to the package installation directory, and the second element specifies the paths to the data files. By default, data files are copied under the package installation directory. For example,

      config.add_data_files('foo.dat',
                            ('fun', ['gun.dat', 'nun/pun.dat', '/tmp/sun.dat']),
                            'bar/car.dat',
                            '/full/path/to/can.dat',
                            )

  will install data files to the following locations:

      <package install directory>/
        foo.dat
        fun/
          gun.dat
          pun.dat
          sun.dat
        bar/
          car.dat
        can.dat

  The path to data files can also be a function taking no arguments and returning path(s) to data files; this is useful when data files are generated while building the package. (XXX: explain exactly at which step such functions are called.)

* `config.add_data_dir(data_path)` — adds the directory `data_path` recursively to `data_files`. The whole directory tree starting at `data_path` will be copied under the package installation directory. If `data_path` is a tuple, then its first element defines the suffix of where data files are copied relative to the package installation directory, and the second element specifies the path to the data directory. By default, the data directory is copied under the package installation directory under the basename of `data_path`. For example,

      config.add_data_dir('fun')  # fun/ contains foo.dat bar/car.dat
      config.add_data_dir(('sun', 'fun'))
      config.add_data_dir(('gun', '/full/path/to/fun'))

  will install data files to the following locations:

      <package install directory>/
        fun/
          foo.dat
          bar/
            car.dat
        sun/
          foo.dat
          bar/
            car.dat
        gun/
          foo.dat
          bar/
            car.dat

* `config.add_include_dirs(*paths)` — prepends `paths` to the `include_dirs` list. This list will be visible to all extension modules of the current package.
* `config.add_headers(*files)` — prepends `files` to the `headers` list. By default, headers will be installed under the `<prefix>/include/pythonX.X/<package name>/` directory. If a `files` item is a tuple, then its first element specifies the installation suffix relative to the `<prefix>/include/pythonX.X/` path.
  This is a Python distutils method; its use is discouraged for NumPy and SciPy in favour of `config.add_data_files(*files)`.

* `config.add_scripts(*files)` — prepends `files` to the `scripts` list. Scripts will be installed under the `<prefix>/bin/` directory.
* `config.add_extension(name, sources, **kw)` — creates and adds an `Extension` instance to the `ext_modules` list. The first argument `name` defines the name of the extension module that will be installed under the `config.name` package. The second argument is a list of sources. The `add_extension` method also takes keyword arguments that are passed on to the `Extension` constructor. The list of allowed keywords is the following: `include_dirs`, `define_macros`, `undef_macros`, `library_dirs`, `libraries`, `runtime_library_dirs`, `extra_objects`, `extra_compile_args`, `extra_link_args`, `export_symbols`, `swig_opts`, `depends`, `language`, `f2py_options`, `module_dirs`, `extra_info`, `extra_f77_compile_args`, `extra_f90_compile_args`.

  Note that the `config.paths` method is applied to all lists that may contain paths. `extra_info` is a dictionary or a list of dictionaries whose content will be appended to the keyword arguments. The `depends` list contains paths to files or directories that the sources of the extension module depend on. If any path in the `depends` list is newer than the extension module, then the module will be rebuilt.

  The list of sources may contain functions ("source generators") following the pattern `def <funcname>(ext, build_dir): return <source(s) or None>`. If `<funcname>` returns `None`, no sources are generated. And if the `Extension` instance has no sources after processing all source generators, no extension module will be built. This is the recommended way to conditionally define extension modules. Source generator functions are called by the `build_src` sub-command of `numpy.distutils`.
  For example, here is a typical source generator function:

      def generate_source(ext, build_dir):
          import os
          from distutils.dep_util import newer
          target = os.path.join(build_dir, 'somesource.c')
          if newer(target, __file__):
              # create the target file here
              ...
          return target

  The first argument contains the Extension instance, which can be useful for accessing its attributes (such as the `depends` and `sources` lists) and modifying them during the build process. The second argument gives a path to the build directory that must be used when creating files on disk.

* `config.add_library(name, sources, **build_info)` — adds a library to the `libraries` list. Allowed keyword arguments are `depends`, `macros`, `include_dirs`, `extra_compiler_args`, `f2py_options`, `extra_f77_compile_args`, `extra_f90_compile_args`. See the `.add_extension()` method for more information on the arguments.
* `config.have_f77c()` — returns True if a Fortran 77 compiler is available (read: a simple Fortran 77 code compiled successfully).
* `config.have_f90c()` — returns True if a Fortran 90 compiler is available (read: a simple Fortran 90 code compiled successfully).
* `config.get_version()` — returns the version string of the current package, or `None` if version information could not be detected. This method scans the files `__version__.py`, `_version.py`, `version.py`, and `__svn_version__.py` for string variables `version`, `__version__`, and `_version`.
* `config.make_svn_version_py()` — appends a data function to the `data_files` list that will generate a `__svn_version__.py` file in the current package directory. The file will be removed from the source directory when Python exits.
* `config.get_build_temp_dir()` — returns a path to a temporary directory. This is the place where one should build temporary files.
* `config.get_distribution()` — returns the distutils `Distribution` instance.
* `config.get_config_cmd()` — returns the `numpy.distutils` config command instance.
* `config.get_info(*names)` — returns information (from `system_info.get_info`) for all of the given names in a single dictionary.

### Conversion of `.src` files using templates

NumPy distutils supports automatic conversion of source files named `<somefile>.src`. This facility can be used to maintain very similar code blocks requiring only simple changes between blocks. During the build phase of setup, if a template file named `<somefile>.src` is encountered, a new file named `<somefile>` is constructed from the template and placed in the build directory to be used instead. Two forms of template conversion are supported. The first form occurs for files named `<file>.ext.src`, where ext is a recognized Fortran extension (f, f90, f95, f77, for, ftn, pyf). The second form is used for all other cases.

### Fortran files

This template converter will replicate all **function** and **subroutine** blocks in the file with names that contain `<...>`, according to the rules in `<...>`. The number of comma-separated words in `<...>` determines the number of times the block is repeated. What these words are indicates what that repeat rule, `<...>`, should be replaced with in each block. All of the repeat rules in a block must contain the same number of comma-separated words, indicating the number of times that block should be repeated. If a word in the repeat rule needs a comma, leftarrow, or rightarrow, then prepend it with a backslash (`\,`, `\<`, `\>`). If a word in the repeat rule matches `\\<index>` then it will be replaced with the `<index>`-th word in the same repeat specification. There are two forms for the repeat rule: named and short.

#### Named repeat rule

A named repeat rule is useful when the same set of repeats must be used several times in a block. It is specified using `<rule1=item1, item2, item3, ..., itemN>`, where N is the number of times the block should be repeated. On each repeat of the block, the entire expression `<...>` will be replaced first with item1, then with item2, and so forth until N repeats are accomplished. Once a named repeat specification has been introduced, the same repeat rule may be used **in the current block** by referring only to the name (i.e. `<rule1>`).
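The named repeat rule can be sketched in a few lines of Python. This is a hypothetical, simplified expander for illustration only: it ignores the backslash escapes and `\\<index>` back-references described above, and it is not NumPy's actual `from_template` implementation:

```python
import re

# Matches a defining occurrence <name=item1,item2,...> and a bare
# back-reference <name>.
RULE_DEF = re.compile(r'<(\w+)=([^>]*)>')
RULE_REF = re.compile(r'<(\w+)>')

def expand_named_rules(block):
    """Repeat `block` once per item; on repeat i, every <name=...>
    and <name> is replaced by the i-th item of that rule."""
    rules = {name: [w.strip() for w in items.split(',')]
             for name, items in RULE_DEF.findall(block)}
    counts = {len(items) for items in rules.values()}
    assert len(counts) == 1, "all rules in a block must have the same length"
    out = []
    for i in range(counts.pop()):
        text = RULE_DEF.sub(lambda m: rules[m.group(1)][i], block)
        text = RULE_REF.sub(lambda m: rules[m.group(1)][i], text)
        out.append(text)
    return ''.join(out)

src = ("subroutine add_<p=s,d>(a, b, c)\n"
       "  <t=real*4,real*8> a, b, c\n"
       "  c = a + b\n"
       "end subroutine add_<p>\n")
print(expand_named_rules(src))
```

With the hypothetical `src` above, the block is emitted twice: once as `add_s` with `real*4` declarations and once as `add_d` with `real*8`.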
#### Short repeat rule

A short repeat rule looks like `<item1, item2, item3, ..., itemN>`. The rule specifies that the entire expression `<...>` should be replaced first with item1, then with item2, and so forth until N repeats are accomplished.

#### Pre-defined names

The following predefined named repeat rules are available:

* `<prefix=s,d,c,z>`
* `<_c=s,d,c,z>`
* `<_t=real, double precision, complex, double complex>`
* `<ftype=real, double precision, complex, double complex>`
* `<ctype=float, double, complex_float, complex_double>`
* `<ftypereal=real, double precision>`
* `<ctypereal=float, double>`

### Other files

Non-Fortran files use a separate syntax for defining template blocks that should be repeated, using a variable expansion similar to the named repeat rules of the Fortran-specific repeats. NumPy Distutils preprocesses C source files (extension: `.c.src`) written in a custom templating language to generate C code. The `@` symbol is used to wrap macro-style variables to empower a string substitution mechanism that might describe (for instance) a set of data types. The template language blocks are delimited by `/**begin repeat` and `/**end repeat**/` lines, which may also be nested using consecutively numbered delimiting lines such as `/**begin repeat1` and `/**end repeat1**/`:

1. `/**begin repeat` on a line by itself marks the beginning of a segment that should be repeated.
2. Named variable expansions are defined using `#name=item1, item2, item3, ..., itemN#` and placed on successive lines. These variables are replaced in each repeat block with the corresponding word. All named variables in the same repeat block must define the same number of words.
3. In specifying the repeat rule for a named variable, `item*N` is short-hand for `item, item, ..., item` repeated N times. In addition, parentheses in combination with `*N` can be used for grouping several items that should be repeated. Thus, `#name=(item1, item2)*4#` is equivalent to `#name=item1, item2, item1, item2, item1, item2, item1, item2#`.
4. `*/` on a line by itself marks the end of the variable expansion naming. The next line is the first line that will be repeated using the named rules.
5.
Inside the block to be repeated, the variables that should be expanded are specified as `@name@`.
6. `/**end repeat**/` on a line by itself marks the previous line as the last line of the block to be repeated.
7. A loop in the NumPy C source code may have a `@TYPE@` variable, targeted for string substitution, which is preprocessed to a number of otherwise identical loops with several strings such as `INT`, `LONG`, `UINT`, `ULONG`. The `@TYPE@` style syntax thus reduces code duplication and maintenance burden by mimicking languages that have generic type support.

The above rules may be clearer in the following template source example:

    /* TIMEDELTA to non-float types */

    /**begin repeat
     *
     * #TOTYPE = BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG,
     *           LONGLONG, ULONGLONG, DATETIME,
     *           TIMEDELTA#
     * #totype = npy_byte, npy_ubyte, npy_short, npy_ushort, npy_int, npy_uint,
     *           npy_long, npy_ulong, npy_longlong, npy_ulonglong,
     *           npy_datetime, npy_timedelta#
     */

    /**begin repeat1
     *
     * #FROMTYPE = TIMEDELTA#
     * #fromtype = npy_timedelta#
     */
    static void
    @FROMTYPE@_to_@TOTYPE@(void *input, void *output, npy_intp n,
            void *NPY_UNUSED(aip), void *NPY_UNUSED(aop))
    {
        const @fromtype@ *ip = input;
        @totype@ *op = output;

        while (n--) {
            *op++ = (@totype@)*ip++;
        }
    }
    /**end repeat1**/

    /**end repeat**/

The preprocessing of generically-typed C source files (whether in NumPy proper or in any third-party package using NumPy Distutils) is performed by [conv_template.py](https://github.com/numpy/numpy/blob/main/numpy/distutils/conv_template.py). The type-specific C files generated (extension: `.c`) by these modules during the build process are ready to be compiled. This form of generic typing is also supported for C header files (preprocessed to produce `.h` files).

### Useful functions in `numpy.distutils.misc_util`

* `get_numpy_include_dirs()` — return a list of NumPy base include directories.
  NumPy base include directories contain header files such as `numpy/arrayobject.h`, `numpy/funcobject.h` etc. For installed NumPy the returned list has length 1, but when building NumPy the list may contain more directories, for example, a path to the `config.h` file that the `numpy/base/setup.py` file generates and that is used by `numpy` header files.

* `append_path(prefix, path)` — smart append `path` to `prefix`.
* `gpaths(paths, local_path='')` — apply glob to paths and prepend `local_path` if needed.
* `njoin(*path)` — join pathname components + convert `/`-separated path to `os.sep`-separated path and resolve `..`, `.` from paths. Ex. `njoin('a',['b','./c'],'..','g') -> os.path.join('a','b','g')`.
* `minrelpath(path)` — resolves dots in `path`.
* `rel_path(path, parent_path)` — return `path` relative to `parent_path`.
* `get_cmd(cmdname, _cache={})` — returns a `numpy.distutils` command instance.
* `all_strings(lst)`
* `has_f_sources(sources)`
* `has_cxx_sources(sources)`
* `filter_sources(sources)` — return `c_sources, cxx_sources, f_sources, fmodule_sources`
* `get_dependencies(sources)`
* `is_local_src_dir(directory)`
* `get_ext_source_files(ext)`
* `get_script_files(scripts)`
* `get_lib_source_files(lib)`
* `get_data_files(data)`
* `dot_join(*args)` — join non-zero arguments with a dot.
* `get_frame(level=0)` — return frame object from call stack with given level.
* `cyg2win32(path)`
* `mingw32()` — return `True` when using the mingw32 environment.
* `terminal_has_colors()`, `red_text(s)`, `green_text(s)`, `yellow_text(s)`, `blue_text(s)`, `cyan_text(s)`
* `get_path(mod_name, parent_path=None)` — return path of a module relative to `parent_path` when given. Also handles the `__main__` and `__builtin__` modules.
* `allpath(name)` — replaces `/` with `os.sep` in `name`.
* `cxx_ext_match`, `fortran_ext_match`, `f90_ext_match`, `f90_module_name_match`

### `numpy.distutils.system_info` module

* `get_info(name, notfound_action=0)`
* `combine_paths(*args, **kws)`
* `show_all()`

### `numpy.distutils.cpuinfo` module

* `cpuinfo`

### `numpy.distutils.log` module

* `set_verbosity(v)`

### `numpy.distutils.exec_command` module

* `get_pythonexe()`
* `find_executable(exe, path=None)`
* `exec_command(command, execute_in='', use_shell=None, use_tee=None, **env)`

## The `__init__.py` file

The header of a typical SciPy `__init__.py` is:

```python
"""
Package docstring, typically with a brief description and function listing.
"""

# import functions into the module namespace
from .subpackage import *
...

__all__ = [s for s in dir() if not s.startswith('_')]

from numpy.testing import Tester
test = Tester().test
bench = Tester().bench
```

## Extra features in NumPy Distutils

### Specifying config_fc options for libraries in setup.py script

It is possible to specify `config_fc` options in setup.py scripts. For example,

```python
config.add_library('library',
                   sources=[...],
                   config_fc={'noopt': (__file__, 1)})
```

will compile the `library` sources without optimization flags. It is recommended to specify in this way only those `config_fc` options that are compiler independent.

### Getting extra Fortran 77 compiler options from source

Some old Fortran codes need special compiler options in order to work correctly. To specify compiler options per source file, the `numpy.distutils` Fortran compiler looks for the following pattern:

```
CF77FLAGS(<fcompiler type>) = <f77 flags>
```

in the first 20 lines of the source, and uses the given `f77flags` for the specified fcompiler type (the leading `C` character is optional).

TODO: This feature can be easily extended for Fortran 90 codes as well. Let us know if you would need such a feature.

# Status of numpy.distutils and migration advice

[`numpy.distutils`](distutils#module-numpy.distutils "numpy.distutils") has been deprecated in NumPy `1.23.0`.
It will be removed for Python 3.12; for Python <= 3.11 it will not be removed until 2 years after the Python 3.12 release (Oct 2025).

**Warning:** `numpy.distutils` is only tested with `setuptools < 60.0`; newer versions may break. See Interaction of `numpy.distutils` with `setuptools` for details.

## Migration advice

There are several build systems which are good options to migrate to. Assuming you have compiled code in your package (if not, you have several good options, e.g. the build backends offered by Poetry, Hatch or PDM) and you want to be using a well-designed, modern and reliable build system, we recommend:

1. [Meson](https://mesonbuild.com/), and the [meson-python](https://meson-python.readthedocs.io) build backend
2. [CMake](https://cmake.org/), and the [scikit-build-core](https://scikit-build-core.readthedocs.io/en/latest/) build backend

If you have modest needs (only simple Cython/C extensions; no need for Fortran, BLAS/LAPACK, nested `setup.py` files, or other features of `numpy.distutils`) and have been happy with `numpy.distutils` so far, you can also consider switching to `setuptools`. Note that most functionality of `numpy.distutils` is unlikely to be ported to `setuptools`.

### Moving to Meson

SciPy moved to Meson and meson-python for its 1.9.0 release. During this process, the remaining issues with Meson's Python support and feature parity with `numpy.distutils` were resolved. _Note: parity means a large superset (because Meson is a good general-purpose build system); only a few BLAS/LAPACK library selection niceties are missing._ SciPy uses almost all functionality that `numpy.distutils` offers, so if SciPy has successfully made a release with Meson as the build system, there should be no blockers left to migrate, and SciPy will be a good reference for other packages that are migrating.
For more details about the SciPy migration, see:

* [RFC: switch to Meson as a build system](https://github.com/scipy/scipy/issues/13615)
* [Tracking issue for Meson support](https://github.com/rgommers/scipy/issues/22)

NumPy will migrate to Meson for the 1.26 release.

### Moving to CMake / scikit-build

The next generation of scikit-build is called [scikit-build-core](https://scikit-build-core.readthedocs.io/en/latest/). Where the older `scikit-build` used `setuptools` underneath, the rewrite does not. Like Meson, CMake is a good general-purpose build system.

### Moving to `setuptools`

For projects that only use `numpy.distutils` for historical reasons, and do not actually use features beyond those that `setuptools` also supports, moving to `setuptools` is likely the solution which costs the least effort. To assess that, these are the `numpy.distutils` features that are _not_ present in `setuptools`:

* Nested `setup.py` files
* Fortran build support
* BLAS/LAPACK library support (OpenBLAS, MKL, ATLAS, Netlib LAPACK/BLAS, BLIS, 64-bit ILP interface, etc.)
* Support for a few other scientific libraries, like FFTW and UMFPACK
* Better MinGW support
* Per-compiler build flag customization (e.g. `-O3` and `SSE2` flags are default)
* A simple user build config system, see [site.cfg.example](https://github.com/numpy/numpy/blob/master/site.cfg.example)
* SIMD intrinsics support
* Support for the NumPy-specific `.src` templating format for `.c`/`.h` files

The most widely used feature is nested `setup.py` files. This feature may still be ported to `setuptools` in the future (it needs a volunteer though, see [gh-18588](https://github.com/numpy/numpy/issues/18588) for status). Projects only using that feature could move to `setuptools` after that is done. In case a project uses only a couple of `setup.py` files, it also could make sense to simply aggregate all the content of those files into a single `setup.py` file and then move to `setuptools`.
This involves dropping all `Configuration` instances, and using `Extension` instead (note that since this section is about moving to `setuptools`, the imports below come from `setuptools` rather than the deprecated `distutils`). E.g.:

```python
from setuptools import setup
from setuptools.extension import Extension

setup(name='foobar',
      version='1.0',
      ext_modules=[
          Extension('foopkg.foo', ['foo.c']),
          Extension('barpkg.bar', ['bar.c']),
      ],
)
```

For more details, see the [setuptools documentation](https://setuptools.pypa.io/en/latest/setuptools.html).

## Interaction of `numpy.distutils` with `setuptools`

It is recommended to use `setuptools < 60.0`. Newer versions may work, but are not guaranteed to. The reason for this is that `setuptools` 60.0 enabled a vendored copy of `distutils`, including backwards incompatible changes that affect some functionality in `numpy.distutils`.

If you are using only simple Cython or C extensions with minimal use of `numpy.distutils` functionality beyond nested `setup.py` files (its most popular feature, see [`Configuration`](distutils#numpy.distutils.misc_util.Configuration "numpy.distutils.misc_util.Configuration")), then the latest `setuptools` is likely to continue working. In case of problems, you can also try `SETUPTOOLS_USE_DISTUTILS=stdlib` to avoid the backwards incompatible changes in `setuptools`.

Whatever you do, it is recommended to put an upper bound on your `setuptools` build requirement in `pyproject.toml` to avoid future breakage - see [For downstream package authors](../dev/depending_on_numpy#for-downstream-package-authors).

# numpy.__array_namespace_info__.capabilities

method

`__array_namespace_info__.capabilities()` [[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_array_api_info.py#L63-L105)

Return a dictionary of array API library capabilities.

The resulting dictionary has the following keys:

* **"boolean indexing"**: boolean indicating whether an array library supports boolean indexing. Always `True` for NumPy.
* **"data-dependent shapes"**: boolean indicating whether an array library supports data-dependent output shapes. Always `True` for NumPy.
See the array API standard for more details.

Returns:

**capabilities** dict

A dictionary of array API library capabilities.

See also

[`__array_namespace_info__.default_device`](numpy.__array_namespace_info__.default_device#numpy.__array_namespace_info__.default_device "numpy.__array_namespace_info__.default_device")

[`__array_namespace_info__.default_dtypes`](numpy.__array_namespace_info__.default_dtypes#numpy.__array_namespace_info__.default_dtypes "numpy.__array_namespace_info__.default_dtypes")

[`__array_namespace_info__.dtypes`](numpy.__array_namespace_info__.dtypes#numpy.__array_namespace_info__.dtypes "numpy.__array_namespace_info__.dtypes")

[`__array_namespace_info__.devices`](numpy.__array_namespace_info__.devices#numpy.__array_namespace_info__.devices "numpy.__array_namespace_info__.devices")

#### Examples

```python
>>> info = np.__array_namespace_info__()
>>> info.capabilities()
{'boolean indexing': True, 'data-dependent shapes': True}
```

# numpy.__array_namespace_info__.default_device

method

`__array_namespace_info__.default_device()` [[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_array_api_info.py#L107-L132)

The default device used for new NumPy arrays.

For NumPy, this always returns `'cpu'`.

Returns:

**device** str

The default device used for new NumPy arrays.
See also

[`__array_namespace_info__.capabilities`](numpy.__array_namespace_info__.capabilities#numpy.__array_namespace_info__.capabilities "numpy.__array_namespace_info__.capabilities")

[`__array_namespace_info__.default_dtypes`](numpy.__array_namespace_info__.default_dtypes#numpy.__array_namespace_info__.default_dtypes "numpy.__array_namespace_info__.default_dtypes")

[`__array_namespace_info__.dtypes`](numpy.__array_namespace_info__.dtypes#numpy.__array_namespace_info__.dtypes "numpy.__array_namespace_info__.dtypes")

[`__array_namespace_info__.devices`](numpy.__array_namespace_info__.devices#numpy.__array_namespace_info__.devices "numpy.__array_namespace_info__.devices")

#### Examples

```python
>>> info = np.__array_namespace_info__()
>>> info.default_device()
'cpu'
```

# numpy.__array_namespace_info__.default_dtypes

method

`__array_namespace_info__.default_dtypes(*, device=None)` [[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_array_api_info.py#L134-L184)

The default data types used for new NumPy arrays.

For NumPy, this always returns the following dictionary:

* **"real floating"**: `numpy.float64`
* **"complex floating"**: `numpy.complex128`
* **"integral"**: `numpy.intp`
* **"indexing"**: `numpy.intp`

Parameters:

**device** str, optional

The device to get the default data types for. For NumPy, only `'cpu'` is allowed.

Returns:

**dtypes** dict

A dictionary describing the default data types used for new NumPy arrays.
See also

[`__array_namespace_info__.capabilities`](numpy.__array_namespace_info__.capabilities#numpy.__array_namespace_info__.capabilities "numpy.__array_namespace_info__.capabilities")

[`__array_namespace_info__.default_device`](numpy.__array_namespace_info__.default_device#numpy.__array_namespace_info__.default_device "numpy.__array_namespace_info__.default_device")

[`__array_namespace_info__.dtypes`](numpy.__array_namespace_info__.dtypes#numpy.__array_namespace_info__.dtypes "numpy.__array_namespace_info__.dtypes")

[`__array_namespace_info__.devices`](numpy.__array_namespace_info__.devices#numpy.__array_namespace_info__.devices "numpy.__array_namespace_info__.devices")

#### Examples

```python
>>> info = np.__array_namespace_info__()
>>> info.default_dtypes()
{'real floating': numpy.float64,
 'complex floating': numpy.complex128,
 'integral': numpy.int64,
 'indexing': numpy.int64}
```

# numpy.__array_namespace_info__.devices

method

`__array_namespace_info__.devices()` [[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_array_api_info.py#L321-L346)

The devices supported by NumPy.

For NumPy, this always returns `['cpu']`.

Returns:

**devices** list of str

The devices supported by NumPy.
See also

[`__array_namespace_info__.capabilities`](numpy.__array_namespace_info__.capabilities#numpy.__array_namespace_info__.capabilities "numpy.__array_namespace_info__.capabilities")

[`__array_namespace_info__.default_device`](numpy.__array_namespace_info__.default_device#numpy.__array_namespace_info__.default_device "numpy.__array_namespace_info__.default_device")

[`__array_namespace_info__.default_dtypes`](numpy.__array_namespace_info__.default_dtypes#numpy.__array_namespace_info__.default_dtypes "numpy.__array_namespace_info__.default_dtypes")

[`__array_namespace_info__.dtypes`](numpy.__array_namespace_info__.dtypes#numpy.__array_namespace_info__.dtypes "numpy.__array_namespace_info__.dtypes")

#### Examples

```python
>>> info = np.__array_namespace_info__()
>>> info.devices()
['cpu']
```

# numpy.__array_namespace_info__.dtypes

method

`__array_namespace_info__.dtypes(*, device=None, kind=None)` [[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_array_api_info.py#L186-L319)

The array API data types supported by NumPy.

Note that this function only returns data types that are defined by the array API.

Parameters:

**device** str, optional

The device to get the data types for. For NumPy, only `'cpu'` is allowed.

**kind** str or tuple of str, optional

The kind of data types to return. If `None`, all data types are returned. If a string, only data types of that kind are returned. If a tuple, a dictionary containing the union of the given kinds is returned. The following kinds are supported:

* `'bool'`: boolean data types (i.e., `bool`).
* `'signed integer'`: signed integer data types (i.e., `int8`, `int16`, `int32`, `int64`).
* `'unsigned integer'`: unsigned integer data types (i.e., `uint8`, `uint16`, `uint32`, `uint64`).
* `'integral'`: integer data types. Shorthand for `('signed integer', 'unsigned integer')`.
* `'real floating'`: real-valued floating-point data types (i.e., `float32`, `float64`).
* `'complex floating'`: complex floating-point data types (i.e., `complex64`, `complex128`).
* `'numeric'`: numeric data types. Shorthand for `('integral', 'real floating', 'complex floating')`.

Returns:

**dtypes** dict

A dictionary mapping the names of data types to the corresponding NumPy data types.

See also

[`__array_namespace_info__.capabilities`](numpy.__array_namespace_info__.capabilities#numpy.__array_namespace_info__.capabilities "numpy.__array_namespace_info__.capabilities")

[`__array_namespace_info__.default_device`](numpy.__array_namespace_info__.default_device#numpy.__array_namespace_info__.default_device "numpy.__array_namespace_info__.default_device")

[`__array_namespace_info__.default_dtypes`](numpy.__array_namespace_info__.default_dtypes#numpy.__array_namespace_info__.default_dtypes "numpy.__array_namespace_info__.default_dtypes")

[`__array_namespace_info__.devices`](numpy.__array_namespace_info__.devices#numpy.__array_namespace_info__.devices "numpy.__array_namespace_info__.devices")

#### Examples

```python
>>> info = np.__array_namespace_info__()
>>> info.dtypes(kind='signed integer')
{'int8': numpy.int8,
 'int16': numpy.int16,
 'int32': numpy.int32,
 'int64': numpy.int64}
```

# numpy.__array_namespace_info__

class numpy.__array_namespace_info__ [[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/__init__.py)

Get the array API inspection namespace for NumPy.

The array API inspection namespace defines the following functions:

* capabilities()
* default_device()
* default_dtypes()
* dtypes()
* devices()

See the array API standard for more details.

Returns:

**info** ModuleType

The array API inspection namespace for NumPy.
#### Examples

```python
>>> info = np.__array_namespace_info__()
>>> info.default_dtypes()
{'real floating': numpy.float64,
 'complex floating': numpy.complex128,
 'integral': numpy.int64,
 'indexing': numpy.int64}
```

#### Methods

[`capabilities`](numpy.__array_namespace_info__.capabilities#numpy.__array_namespace_info__.capabilities "numpy.__array_namespace_info__.capabilities")() | Return a dictionary of array API library capabilities.
---|---
[`default_device`](numpy.__array_namespace_info__.default_device#numpy.__array_namespace_info__.default_device "numpy.__array_namespace_info__.default_device")() | The default device used for new NumPy arrays.
[`default_dtypes`](numpy.__array_namespace_info__.default_dtypes#numpy.__array_namespace_info__.default_dtypes "numpy.__array_namespace_info__.default_dtypes")(*[, device]) | The default data types used for new NumPy arrays.
[`devices`](numpy.__array_namespace_info__.devices#numpy.__array_namespace_info__.devices "numpy.__array_namespace_info__.devices")() | The devices supported by NumPy.
[`dtypes`](numpy.__array_namespace_info__.dtypes#numpy.__array_namespace_info__.dtypes "numpy.__array_namespace_info__.dtypes")(*[, device, kind]) | The array API data types supported by NumPy.

# numpy.absolute

`numpy.absolute(x, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature]) = <ufunc 'absolute'>`

Calculate the absolute value element-wise.

`np.abs` is a shorthand for this function.

Parameters:

**x** array_like

Input array.

**out** ndarray, None, or tuple of ndarray and None, optional

A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** array_like, optional

This condition is broadcast over the input.
At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs**

For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns:

**absolute** ndarray

An ndarray containing the absolute value of each element in `x`. For complex input, `a + ib`, the absolute value is \\(\sqrt{a^2 + b^2}\\). This is a scalar if `x` is a scalar.

#### Examples

```python
>>> import numpy as np
>>> x = np.array([-1.2, 1.2])
>>> np.absolute(x)
array([ 1.2,  1.2])
>>> np.absolute(1.2 + 1j)
1.5620499351813308
```

Plot the function over `[-10, 10]`:

```python
>>> import matplotlib.pyplot as plt
>>> x = np.linspace(start=-10, stop=10, num=101)
>>> plt.plot(x, np.absolute(x))
>>> plt.show()
```

Plot the function over the complex plane:

```python
>>> xx = x + 1j * x[:, np.newaxis]
>>> plt.imshow(np.abs(xx), extent=[-10, 10, -10, 10], cmap='gray')
>>> plt.show()
```

The [`abs`](https://docs.python.org/3/library/functions.html#abs "\(in Python v3.13\)") function can be used as a shorthand for `np.absolute` on ndarrays.

```python
>>> x = np.array([-1.2, 1.2])
>>> abs(x)
array([1.2, 1.2])
```

# numpy.acos

`numpy.acos(x, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature]) = <ufunc 'acos'>`

Trigonometric inverse cosine, element-wise.

The inverse of [`cos`](numpy.cos#numpy.cos "numpy.cos") so that, if `y = cos(x)`, then `x = arccos(y)`.

Parameters:

**x** array_like

`x`-coordinate on the unit circle. For real arguments, the domain is [-1, 1].

**out** ndarray, None, or tuple of ndarray and None, optional

A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned.
A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** array_like, optional

This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs**

For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns:

**angle** ndarray

The angle of the ray intersecting the unit circle at the given `x`-coordinate, in radians [0, pi]. This is a scalar if `x` is a scalar.

See also

[`cos`](numpy.cos#numpy.cos "numpy.cos"), [`arctan`](numpy.arctan#numpy.arctan "numpy.arctan"), [`arcsin`](numpy.arcsin#numpy.arcsin "numpy.arcsin"), [`emath.arccos`](numpy.emath.arccos#numpy.emath.arccos "numpy.emath.arccos")

#### Notes

[`arccos`](numpy.arccos#numpy.arccos "numpy.arccos") is a multivalued function: for each `x` there are infinitely many numbers `z` such that `cos(z) = x`. The convention is to return the angle `z` whose real part lies in `[0, pi]`.

For real-valued input data types, [`arccos`](numpy.arccos#numpy.arccos "numpy.arccos") always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag.

For complex-valued input, [`arccos`](numpy.arccos#numpy.arccos "numpy.arccos") is a complex analytic function that has branch cuts `[-inf, -1]` and `[1, inf]` and is continuous from above on the former and from below on the latter.

The inverse [`cos`](numpy.cos#numpy.cos "numpy.cos") is also known as `acos` or \\(\cos^{-1}\\).

#### References

M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions", 10th printing, 1964, pp. 79.
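The out-of-domain behavior described in the notes can be checked directly. The following snippet is an illustrative sketch, not part of the original reference page:

```python
import numpy as np

# Real input outside [-1, 1] has no real inverse cosine: the result
# is nan, and the `invalid` floating point error flag is set.
with np.errstate(invalid="ignore"):
    assert np.isnan(np.arccos(2.0))

# The same value as a complex number gets a well-defined complex
# result, which cos() inverts.
z = np.arccos(2.0 + 0j)
assert np.allclose(np.cos(z), 2.0 + 0j)
```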
#### Examples

```python
>>> import numpy as np
```

We expect the arccos of 1 to be 0, and of -1 to be pi:

```python
>>> np.arccos([1, -1])
array([ 0.        ,  3.14159265])
```

Plot arccos:

```python
>>> import matplotlib.pyplot as plt
>>> x = np.linspace(-1, 1, num=100)
>>> plt.plot(x, np.arccos(x))
>>> plt.axis('tight')
>>> plt.show()
```

# numpy.acosh

`numpy.acosh(x, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature]) = <ufunc 'acosh'>`

Inverse hyperbolic cosine, element-wise.

Parameters:

**x** array_like

Input array.

**out** ndarray, None, or tuple of ndarray and None, optional

A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** array_like, optional

This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs**

For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns:

**arccosh** ndarray

Array of the same shape as `x`. This is a scalar if `x` is a scalar.

See also

[`cosh`](numpy.cosh#numpy.cosh "numpy.cosh"), [`arcsinh`](numpy.arcsinh#numpy.arcsinh "numpy.arcsinh"), [`sinh`](numpy.sinh#numpy.sinh "numpy.sinh"), [`arctanh`](numpy.arctanh#numpy.arctanh "numpy.arctanh"), [`tanh`](numpy.tanh#numpy.tanh "numpy.tanh")

#### Notes

[`arccosh`](numpy.arccosh#numpy.arccosh "numpy.arccosh") is a multivalued function: for each `x` there are infinitely many numbers `z` such that `cosh(z) = x`. The convention is to return the `z` whose imaginary part lies in `[-pi, pi]` and the real part in `[0, inf]`.
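This branch convention can be verified numerically; the snippet below is an illustrative sketch, not part of the original reference page:

```python
import numpy as np

# arccosh returns the branch whose real part lies in [0, inf], so for
# real x >= 1 the result is the non-negative value with cosh(z) == x.
x = np.array([1.0, 2.0, 10.0])
z = np.arccosh(x)
assert np.all(z >= 0)            # real part in [0, inf]
assert np.allclose(np.cosh(z), x)  # cosh inverts arccosh
```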
For real-valued input data types, [`arccosh`](numpy.arccosh#numpy.arccosh "numpy.arccosh") always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag.

For complex-valued input, [`arccosh`](numpy.arccosh#numpy.arccosh "numpy.arccosh") is a complex analytical function that has a branch cut `[-inf, 1]` and is continuous from above on it.

#### References

[1] M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions", 10th printing, 1964, pp. 86.

[2] Wikipedia, "Inverse hyperbolic function"

#### Examples

```python
>>> import numpy as np
>>> np.arccosh([np.e, 10.0])
array([ 1.65745445,  2.99322285])
>>> np.arccosh(1)
0.0
```

# numpy.add

`numpy.add(x1, x2, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature]) = <ufunc 'add'>`

Add arguments element-wise.

Parameters:

**x1, x2** array_like

The arrays to be added. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output).

**out** ndarray, None, or tuple of ndarray and None, optional

A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** array_like, optional

This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs**

For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns:

**add** ndarray or scalar

The sum of `x1` and `x2`, element-wise.
This is a scalar if both `x1` and `x2` are scalars.

#### Notes

Equivalent to `x1 + x2` in terms of array broadcasting.

#### Examples

```python
>>> import numpy as np
>>> np.add(1.0, 4.0)
5.0
>>> x1 = np.arange(9.0).reshape((3, 3))
>>> x2 = np.arange(3.0)
>>> np.add(x1, x2)
array([[  0.,   2.,   4.],
       [  3.,   5.,   7.],
       [  6.,   8.,  10.]])
```

The `+` operator can be used as a shorthand for `np.add` on ndarrays.

```python
>>> x1 = np.arange(9.0).reshape((3, 3))
>>> x2 = np.arange(3.0)
>>> x1 + x2
array([[  0.,   2.,   4.],
       [  3.,   5.,   7.],
       [  6.,   8.,  10.]])
```

# numpy.all

`numpy.all(a, axis=None, out=None, keepdims=<no value>, *, where=<no value>)` [[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L2589-L2676)

Test whether all array elements along a given axis evaluate to True.

Parameters:

**a** array_like

Input array or object that can be converted to an array.

**axis** None or int or tuple of ints, optional

Axis or axes along which a logical AND reduction is performed. The default (`axis=None`) is to perform a logical AND over all the dimensions of the input array. `axis` may be negative, in which case it counts from the last to the first axis. If this is a tuple of ints, a reduction is performed on multiple axes, instead of a single axis or all the axes as before.

**out** ndarray, optional

Alternate output array in which to place the result. It must have the same shape as the expected output and its type is preserved (e.g., if `dtype(out)` is float, the result will consist of 0.0's and 1.0's). See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details.

**keepdims** bool, optional

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
If the default value is passed, then `keepdims` will not be passed through to the `all` method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"); however, any non-default value will be. If the sub-class' method does not implement `keepdims`, any exceptions will be raised.

**where** array_like of bool, optional

Elements to include in checking for all `True` values. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details.

New in version 1.20.0.

Returns:

**all** ndarray, bool

A new boolean or array is returned unless `out` is specified, in which case a reference to `out` is returned.

See also

[`ndarray.all`](numpy.ndarray.all#numpy.ndarray.all "numpy.ndarray.all")
equivalent method

[`any`](numpy.any#numpy.any "numpy.any")
Test whether any element along a given axis evaluates to True.

#### Notes

Not a Number (NaN), positive infinity and negative infinity evaluate to `True` because these are not equal to zero.

Changed in version 2.0: Before NumPy 2.0, `all` did not return booleans for object dtype input arrays. This behavior is still available via `np.logical_and.reduce`.

#### Examples

```python
>>> import numpy as np
>>> np.all([[True,False],[True,True]])
False
>>> np.all([[True,False],[True,True]], axis=0)
array([ True, False])
>>> np.all([-1, 4, 5])
True
>>> np.all([1.0, np.nan])
True
>>> np.all([[True, True], [False, True]], where=[[True], [False]])
True
>>> o=np.array(False)
>>> z=np.all([-1, 4, 5], out=o)
>>> id(z), id(o), z
(28293632, 28293632, array(True)) # may vary
```

# numpy.allclose

`numpy.allclose(a, b, rtol=1e-05, atol=1e-08, equal_nan=False)` [[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L2243-L2330)

Returns True if two arrays are element-wise equal within a tolerance.

The tolerance values are positive, typically very small numbers.
The relative difference (`rtol` * abs(`b`)) and the absolute difference `atol` are added together to compare against the absolute difference between `a` and `b`.

**Warning:** The default `atol` is not appropriate for comparing numbers with magnitudes much smaller than one (see Notes).

NaNs are treated as equal if they are in the same place and if `equal_nan=True`. Infs are treated as equal if they are in the same place and of the same sign in both arrays.

Parameters:

**a, b** array_like

Input arrays to compare.

**rtol** array_like

The relative tolerance parameter (see Notes).

**atol** array_like

The absolute tolerance parameter (see Notes).

**equal_nan** bool

Whether to compare NaNs as equal. If True, NaNs in `a` will be considered equal to NaNs in `b` in the output array.

Returns:

**allclose** bool

Returns True if the two arrays are equal within the given tolerance; False otherwise.

See also

[`isclose`](numpy.isclose#numpy.isclose "numpy.isclose"), [`all`](numpy.all#numpy.all "numpy.all"), [`any`](numpy.any#numpy.any "numpy.any"), [`equal`](numpy.equal#numpy.equal "numpy.equal")

#### Notes

If the following equation is element-wise True, then allclose returns True:

```
absolute(a - b) <= (atol + rtol * absolute(b))
```

The above equation is not symmetric in `a` and `b`, so that `allclose(a, b)` might be different from `allclose(b, a)` in some rare cases.

The default value of `atol` is not appropriate when the reference value `b` has magnitude smaller than one. For example, it is unlikely that `a = 1e-9` and `b = 2e-9` should be considered "close", yet `allclose(1e-9, 2e-9)` is `True` with default settings. Be sure to select `atol` for the use case at hand, especially for defining the threshold below which a non-zero value in `a` will be considered "close" to a very small or zero value in `b`.

The comparison of `a` and `b` uses standard broadcasting, which means that `a` and `b` need not have the same shape in order for `allclose(a, b)` to evaluate to True.
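The broadcasting behavior and the `atol` caveat can be sketched as follows (an illustrative snippet, not part of the original reference page):

```python
import numpy as np

# Broadcasting: a scalar `a` is compared against every element of `b`.
assert np.allclose(1e10, [1.00001e10, 1.00001e10])

# atol pitfall: with the default atol=1e-8, values that differ by a
# factor of two are still reported as "close" when both are tiny.
assert np.allclose(1e-9, 2e-9)
```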
The same is true for [`equal`](numpy.equal#numpy.equal "numpy.equal") but not [`array_equal`](numpy.array_equal#numpy.array_equal "numpy.array_equal").

`allclose` is not defined for non-numeric data types. [`bool`](../arrays.scalars#numpy.bool "numpy.bool") is considered a numeric data type for this purpose.

#### Examples

```python
>>> import numpy as np
>>> np.allclose([1e10,1e-7], [1.00001e10,1e-8])
False
>>> np.allclose([1e10,1e-8], [1.00001e10,1e-9])
True
>>> np.allclose([1e10,1e-8], [1.0001e10,1e-9])
False
>>> np.allclose([1.0, np.nan], [1.0, np.nan])
False
>>> np.allclose([1.0, np.nan], [1.0, np.nan], equal_nan=True)
True
```

# numpy.amax

`numpy.amax(a, axis=None, out=None, keepdims=<no value>, initial=<no value>, where=<no value>)` [[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L3168-L3182)

Return the maximum of an array or the maximum along an axis.

`amax` is an alias of [`max`](numpy.max#numpy.max "numpy.max").

See also

[`max`](numpy.max#numpy.max "numpy.max")
alias of this function

[`ndarray.max`](numpy.ndarray.max#numpy.ndarray.max "numpy.ndarray.max")
equivalent method

# numpy.amin

`numpy.amin(a, axis=None, out=None, keepdims=<no value>, initial=<no value>, where=<no value>)` [[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L3306-L3320)

Return the minimum of an array or the minimum along an axis.

`amin` is an alias of [`min`](numpy.min#numpy.min "numpy.min").

See also

[`min`](numpy.min#numpy.min "numpy.min")
alias of this function

[`ndarray.min`](numpy.ndarray.min#numpy.ndarray.min "numpy.ndarray.min")
equivalent method

# numpy.angle

`numpy.angle(z, deg=False)` [[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L1649-L1700)

Return the angle of the complex argument.

Parameters:

**z** array_like

A complex number or sequence of complex numbers.

**deg** bool, optional

Return the angle in degrees if True, radians if False (default).
Returns: **angle** ndarray or scalar The counterclockwise angle from the positive real axis on the complex plane in the range `(-pi, pi]`, with dtype as numpy.float64. See also [`arctan2`](numpy.arctan2#numpy.arctan2 "numpy.arctan2") [`absolute`](numpy.absolute#numpy.absolute "numpy.absolute") #### Notes This function passes the imaginary and real parts of the argument to [`arctan2`](numpy.arctan2#numpy.arctan2 "numpy.arctan2") to compute the result; consequently, it follows the convention of [`arctan2`](numpy.arctan2#numpy.arctan2 "numpy.arctan2") when the magnitude of the argument is zero. See example. #### Examples >>> import numpy as np >>> np.angle([1.0, 1.0j, 1+1j]) # in radians array([ 0. , 1.57079633, 0.78539816]) # may vary >>> np.angle(1+1j, deg=True) # in degrees 45.0 >>> np.angle([0., -0., complex(0., -0.), complex(-0., -0.)]) # convention array([ 0. , 3.14159265, -0. , -3.14159265]) # numpy.any numpy.any(_a_ , _axis=None_ , _out=None_ , _keepdims= _, _*_ , _where= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L2477-L2581) Test whether any array element along a given axis evaluates to True. Returns single boolean if `axis` is `None` Parameters: **a** array_like Input array or object that can be converted to an array. **axis** None or int or tuple of ints, optional Axis or axes along which a logical OR reduction is performed. The default (`axis=None`) is to perform a logical OR over all the dimensions of the input array. `axis` may be negative, in which case it counts from the last to the first axis. If this is a tuple of ints, a reduction is performed on multiple axes, instead of a single axis or all the axes as before. **out** ndarray, optional Alternate output array in which to place the result. It must have the same shape as the expected output and its type is preserved (e.g., if it is of type float, then it will remain so, returning 1.0 for True and 0.0 for False, regardless of the type of `a`). 
See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the `any` method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non-default value will be. If the sub-class’ method does not implement `keepdims` any exceptions will be raised. **where** array_like of bool, optional Elements to include in checking for any `True` values. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.20.0. Returns: **any** bool or ndarray A new boolean or [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") is returned unless `out` is specified, in which case a reference to `out` is returned. See also [`ndarray.any`](numpy.ndarray.any#numpy.ndarray.any "numpy.ndarray.any") equivalent method [`all`](numpy.all#numpy.all "numpy.all") Test whether all elements along a given axis evaluate to True. #### Notes Not a Number (NaN), positive infinity and negative infinity evaluate to `True` because these are not equal to zero. Changed in version 2.0: Before NumPy 2.0, `any` did not return booleans for object dtype input arrays. This behavior is still available via `np.logical_or.reduce`. #### Examples >>> import numpy as np >>> np.any([[True, False], [True, True]]) True >>> np.any([[True, False, True ], ... [False, False, False]], axis=0) array([ True, False, True]) >>> np.any([-1, 0, 5]) True >>> np.any([[np.nan], [np.inf]], axis=1, keepdims=True) array([[ True], [ True]]) >>> np.any([[True, False], [False, False]], where=[[False], [True]]) False >>> a = np.array([[1, 0, 0], ... [0, 0, 1], ... 
[0, 0, 0]]) >>> np.any(a, axis=0) array([ True, False, True]) >>> np.any(a, axis=1) array([ True, True, False]) >>> o=np.array(False) >>> z=np.any([-1, 4, 5], out=o) >>> z, o (array(True), array(True)) >>> # Check now that z is a reference to o >>> z is o True >>> id(z), id(o) # identity of z and o (191614240, 191614240) # numpy.append numpy.append(_arr_ , _values_ , _axis =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L5644-L5711) Append values to the end of an array. Parameters: **arr** array_like Values are appended to a copy of this array. **values** array_like These values are appended to a copy of `arr`. It must be of the correct shape (the same shape as `arr`, excluding `axis`). If `axis` is not specified, `values` can be any shape and will be flattened before use. **axis** int, optional The axis along which `values` are appended. If `axis` is not given, both `arr` and `values` are flattened before use. Returns: **append** ndarray A copy of `arr` with `values` appended to `axis`. Note that `append` does not occur in-place: a new array is allocated and filled. If `axis` is None, `out` is a flattened array. See also [`insert`](numpy.insert#numpy.insert "numpy.insert") Insert elements into an array. [`delete`](numpy.delete#numpy.delete "numpy.delete") Delete elements from an array. #### Examples >>> import numpy as np >>> np.append([1, 2, 3], [[4, 5, 6], [7, 8, 9]]) array([1, 2, 3, ..., 7, 8, 9]) When `axis` is specified, `values` must have the correct shape. >>> np.append([[1, 2, 3], [4, 5, 6]], [[7, 8, 9]], axis=0) array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> np.append([[1, 2, 3], [4, 5, 6]], [7, 8, 9], axis=0) Traceback (most recent call last): ... 
ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 2 dimension(s) and the array at index 1 has 1 dimension(s) >>> a = np.array([1, 2], dtype=int) >>> c = np.append(a, []) >>> c array([1., 2.]) >>> c.dtype float64 Default dtype for empty ndarrays is [`float64`](../arrays.scalars#numpy.float64 "numpy.float64") thus making the output of dtype [`float64`](../arrays.scalars#numpy.float64 "numpy.float64") when appended with dtype [`int64`](../arrays.scalars#numpy.int64 "numpy.int64") # numpy.apply_along_axis numpy.apply_along_axis(_func1d_ , _axis_ , _arr_ , _* args_, _** kwargs_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_shape_base_impl.py#L274-L412) Apply a function to 1-D slices along the given axis. Execute `func1d(a, *args, **kwargs)` where `func1d` operates on 1-D arrays and `a` is a 1-D slice of `arr` along `axis`. This is equivalent to (but faster than) the following use of [`ndindex`](numpy.ndindex#numpy.ndindex "numpy.ndindex") and [`s_`](numpy.s_#numpy.s_ "numpy.s_"), which sets each of `ii`, `jj`, and `kk` to a tuple of indices: Ni, Nk = a.shape[:axis], a.shape[axis+1:] for ii in ndindex(Ni): for kk in ndindex(Nk): f = func1d(arr[ii + s_[:,] + kk]) Nj = f.shape for jj in ndindex(Nj): out[ii + jj + kk] = f[jj] Equivalently, eliminating the inner loop, this can be expressed as: Ni, Nk = a.shape[:axis], a.shape[axis+1:] for ii in ndindex(Ni): for kk in ndindex(Nk): out[ii + s_[...,] + kk] = func1d(arr[ii + s_[:,] + kk]) Parameters: **func1d** function (M,) -> (Nj…) This function should accept 1-D arrays. It is applied to 1-D slices of `arr` along the specified axis. **axis** integer Axis along which `arr` is sliced. **arr** ndarray (Ni…, M, Nk…) Input array. **args** any Additional arguments to `func1d`. **kwargs** any Additional named arguments to `func1d`. Returns: **out** ndarray (Ni…, Nj…, Nk…) The output array. 
The shape of `out` is identical to the shape of `arr`, except along the `axis` dimension. This axis is removed, and replaced with new dimensions equal to the shape of the return value of `func1d`. So if `func1d` returns a scalar `out` will have one fewer dimensions than `arr`. See also [`apply_over_axes`](numpy.apply_over_axes#numpy.apply_over_axes "numpy.apply_over_axes") Apply a function repeatedly over multiple axes. #### Examples >>> import numpy as np >>> def my_func(a): ... """Average first and last element of a 1-D array""" ... return (a[0] + a[-1]) * 0.5 >>> b = np.array([[1,2,3], [4,5,6], [7,8,9]]) >>> np.apply_along_axis(my_func, 0, b) array([4., 5., 6.]) >>> np.apply_along_axis(my_func, 1, b) array([2., 5., 8.]) For a function that returns a 1D array, the number of dimensions in `outarr` is the same as `arr`. >>> b = np.array([[8,1,7], [4,3,9], [5,2,6]]) >>> np.apply_along_axis(sorted, 1, b) array([[1, 7, 8], [3, 4, 9], [2, 5, 6]]) For a function that returns a higher dimensional array, those dimensions are inserted in place of the `axis` dimension. >>> b = np.array([[1,2,3], [4,5,6], [7,8,9]]) >>> np.apply_along_axis(np.diag, -1, b) array([[[1, 0, 0], [0, 2, 0], [0, 0, 3]], [[4, 0, 0], [0, 5, 0], [0, 0, 6]], [[7, 0, 0], [0, 8, 0], [0, 0, 9]]]) # numpy.apply_over_axes numpy.apply_over_axes(_func_ , _a_ , _axes_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_shape_base_impl.py#L419-L504) Apply a function repeatedly over multiple axes. `func` is called as `res = func(a, axis)`, where `axis` is the first element of `axes`. The result `res` of the function call must have either the same dimensions as `a` or one less dimension. If `res` has one less dimension than `a`, a dimension is inserted before `axis`. The call to `func` is then repeated for each axis in `axes`, with `res` as the first argument. Parameters: **func** function This function must take two arguments, `func(a, axis)`. **a** array_like Input array. 
**axes** array_like Axes over which `func` is applied; the elements must be integers. Returns: **apply_over_axis** ndarray The output array. The number of dimensions is the same as `a`, but the shape can be different. This depends on whether `func` changes the shape of its output with respect to its input. See also [`apply_along_axis`](numpy.apply_along_axis#numpy.apply_along_axis "numpy.apply_along_axis") Apply a function to 1-D slices of an array along the given axis. #### Notes This function is equivalent to tuple axis arguments to reorderable ufuncs with keepdims=True. Tuple axis arguments to ufuncs have been available since version 1.7.0. #### Examples >>> import numpy as np >>> a = np.arange(24).reshape(2,3,4) >>> a array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]) Sum over axes 0 and 2. The result has same number of dimensions as the original array: >>> np.apply_over_axes(np.sum, a, [0,2]) array([[[ 60], [ 92], [124]]]) Tuple axis arguments to ufuncs are equivalent: >>> np.sum(a, axis=(0,2), keepdims=True) array([[[ 60], [ 92], [124]]]) # numpy.arange numpy.arange([_start_ , ]_stop_ , [_step_ , ]_dtype=None_ , _*_ , _device=None_ , _like=None_) Return evenly spaced values within a given interval. `arange` can be called with a varying number of positional arguments: * `arange(stop)`: Values are generated within the half-open interval `[0, stop)` (in other words, the interval including `start` but excluding `stop`). * `arange(start, stop)`: Values are generated within the half-open interval `[start, stop)`. * `arange(start, stop, step)` Values are generated within the half-open interval `[start, stop)`, with spacing between values given by `step`. For integer arguments the function is roughly equivalent to the Python built- in [`range`](https://docs.python.org/3/library/stdtypes.html#range "\(in Python v3.13\)"), but returns an ndarray rather than a `range` instance. 
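The three calling conventions above, and the contrast with the Python built-in `range`, can be sketched as:

```python
import numpy as np

print(np.arange(5))         # [0 1 2 3 4]  -- arange(stop)
print(np.arange(2, 5))      # [2 3 4]      -- arange(start, stop)
print(np.arange(2, 10, 3))  # [2 5 8]      -- arange(start, stop, step)

# As with range, the stop value itself is excluded, but an ndarray is
# returned rather than a range instance.
assert list(range(2, 10, 3)) == list(np.arange(2, 10, 3))
```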
When using a non-integer step, such as 0.1, it is often better to use [`numpy.linspace`](numpy.linspace#numpy.linspace "numpy.linspace"). See the Warning sections below for more information. Parameters: **start** integer or real, optional Start of interval. The interval includes this value. The default start value is 0. **stop** integer or real End of interval. The interval does not include this value, except in some cases where `step` is not an integer and floating point round-off affects the length of `out`. **step** integer or real, optional Spacing between values. For any output `out`, this is the distance between two adjacent values, `out[i+1] - out[i]`. The default step size is 1. If `step` is specified as a positional argument, `start` must also be given. **dtype** dtype, optional The type of the output array. If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is not given, infer the data type from the other input arguments. **device** str, optional The device on which to place the created array. Default: `None`. For Array-API interoperability only, so must be `"cpu"` if passed. New in version 2.0.0. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: **arange** ndarray Array of evenly spaced values. For floating point arguments, the length of the result is `ceil((stop - start)/step)`. Because of floating point overflow, this rule may result in the last element of `out` being greater than `stop`. Warning The length of the output might not be numerically stable. Another stability issue is due to the internal implementation of `numpy.arange`. The actual step value used to populate the array is `dtype(start + step) - dtype(start)` and not `step`.
Precision loss can occur here, due to casting or due to using floating points when `start` is much larger than `step`. This can lead to unexpected behaviour. For example: >>> np.arange(0, 5, 0.5, dtype=int) array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) >>> np.arange(-3, 3, 0.5, dtype=int) array([-3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8]) In such cases, the use of [`numpy.linspace`](numpy.linspace#numpy.linspace "numpy.linspace") should be preferred. The built-in [`range`](https://docs.python.org/3/library/stdtypes.html#range "\(in Python v3.13\)") generates [Python built-in integers that have arbitrary size](https://docs.python.org/3/c-api/long.html "\(in Python v3.13\)"), while `numpy.arange` produces [`numpy.int32`](../arrays.scalars#numpy.int32 "numpy.int32") or [`numpy.int64`](../arrays.scalars#numpy.int64 "numpy.int64") numbers. This may result in incorrect results for large integer values: >>> power = 40 >>> modulo = 10000 >>> x1 = [(n ** power) % modulo for n in range(8)] >>> x2 = [(n ** power) % modulo for n in np.arange(8)] >>> print(x1) [0, 1, 7776, 8801, 6176, 625, 6576, 4001] # correct >>> print(x2) [0, 1, 7776, 7185, 0, 5969, 4816, 3361] # incorrect See also [`numpy.linspace`](numpy.linspace#numpy.linspace "numpy.linspace") Evenly spaced numbers with careful handling of endpoints. [`numpy.ogrid`](numpy.ogrid#numpy.ogrid "numpy.ogrid") Arrays of evenly spaced numbers in N-dimensions. [`numpy.mgrid`](numpy.mgrid#numpy.mgrid "numpy.mgrid") Grid-shaped arrays of evenly spaced numbers in N-dimensions. 
[How to create arrays with regularly-spaced values](../../user/how-to- partition#how-to-partition) #### Examples >>> import numpy as np >>> np.arange(3) array([0, 1, 2]) >>> np.arange(3.0) array([ 0., 1., 2.]) >>> np.arange(3,7) array([3, 4, 5, 6]) >>> np.arange(3,7,2) array([3, 5]) # numpy.arccos numpy.arccos(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Trigonometric inverse cosine, element-wise. The inverse of [`cos`](numpy.cos#numpy.cos "numpy.cos") so that, if `y = cos(x)`, then `x = arccos(y)`. Parameters: **x** array_like `x`-coordinate on the unit circle. For real arguments, the domain is [-1, 1]. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **angle** ndarray The angle of the ray intersecting the unit circle at the given `x`-coordinate in radians [0, pi]. This is a scalar if `x` is a scalar. 
See also [`cos`](numpy.cos#numpy.cos "numpy.cos"), [`arctan`](numpy.arctan#numpy.arctan "numpy.arctan"), [`arcsin`](numpy.arcsin#numpy.arcsin "numpy.arcsin"), [`emath.arccos`](numpy.emath.arccos#numpy.emath.arccos "numpy.emath.arccos") #### Notes `arccos` is a multivalued function: for each `x` there are infinitely many numbers `z` such that `cos(z) = x`. The convention is to return the angle `z` whose real part lies in `[0, pi]`. For real-valued input data types, `arccos` always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag. For complex-valued input, `arccos` is a complex analytic function that has branch cuts `[-inf, -1]` and `[1, inf]` and is continuous from above on the former and from below on the latter. The inverse [`cos`](numpy.cos#numpy.cos "numpy.cos") is also known as [`acos`](numpy.acos#numpy.acos "numpy.acos") or cos^-1. #### References M. Abramowitz and I.A. Stegun, “Handbook of Mathematical Functions”, 10th printing, 1964, pp. 79. #### Examples >>> import numpy as np We expect the arccos of 1 to be 0, and of -1 to be pi: >>> np.arccos([1, -1]) array([ 0. , 3.14159265]) Plot arccos: >>> import matplotlib.pyplot as plt >>> x = np.linspace(-1, 1, num=100) >>> plt.plot(x, np.arccos(x)) >>> plt.axis('tight') >>> plt.show() # numpy.arccosh numpy.arccosh(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Inverse hyperbolic cosine, element-wise. Parameters: **x** array_like Input array. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. 
**where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **arccosh** ndarray Array of the same shape as `x`. This is a scalar if `x` is a scalar. See also [`cosh`](numpy.cosh#numpy.cosh "numpy.cosh"), [`arcsinh`](numpy.arcsinh#numpy.arcsinh "numpy.arcsinh"), [`sinh`](numpy.sinh#numpy.sinh "numpy.sinh"), [`arctanh`](numpy.arctanh#numpy.arctanh "numpy.arctanh"), [`tanh`](numpy.tanh#numpy.tanh "numpy.tanh") #### Notes `arccosh` is a multivalued function: for each `x` there are infinitely many numbers `z` such that `cosh(z) = x`. The convention is to return the `z` whose imaginary part lies in `[-pi, pi]` and the real part in `[0, inf]`. For real-valued input data types, `arccosh` always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag. For complex-valued input, `arccosh` is a complex analytical function that has a branch cut `[-inf, 1]` and is continuous from above on it. #### References [1] M. Abramowitz and I.A. Stegun, “Handbook of Mathematical Functions”, 10th printing, 1964, pp. 86. [2] Wikipedia, “Inverse hyperbolic function”, #### Examples >>> import numpy as np >>> np.arccosh([np.e, 10.0]) array([ 1.65745445, 2.99322285]) >>> np.arccosh(1) 0.0 # numpy.arcsin numpy.arcsin(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Inverse sine, element-wise. Parameters: **x** array_like `y`-coordinate on the unit circle. 
**out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **angle** ndarray The inverse sine of each element in `x`, in radians and in the closed interval `[-pi/2, pi/2]`. This is a scalar if `x` is a scalar. See also [`sin`](numpy.sin#numpy.sin "numpy.sin"), [`cos`](numpy.cos#numpy.cos "numpy.cos"), [`arccos`](numpy.arccos#numpy.arccos "numpy.arccos"), [`tan`](numpy.tan#numpy.tan "numpy.tan"), [`arctan`](numpy.arctan#numpy.arctan "numpy.arctan"), [`arctan2`](numpy.arctan2#numpy.arctan2 "numpy.arctan2"), [`emath.arcsin`](numpy.emath.arcsin#numpy.emath.arcsin "numpy.emath.arcsin") #### Notes `arcsin` is a multivalued function: for each `x` there are infinitely many numbers `z` such that \\(sin(z) = x\\). The convention is to return the angle `z` whose real part lies in [-pi/2, pi/2]. For real-valued input data types, _arcsin_ always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag. For complex-valued input, `arcsin` is a complex analytic function that has, by convention, the branch cuts [-inf, -1] and [1, inf] and is continuous from above on the former and from below on the latter. 
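The real/complex behaviour described above can be sketched as follows: real input outside `[-1, 1]` yields `nan`, while the same value given as a complex number selects the analytic continuation instead.

```python
import numpy as np

# Real input outside [-1, 1] cannot be expressed as a real angle, so
# arcsin yields nan (and sets the `invalid` floating point error flag).
with np.errstate(invalid="ignore"):
    print(np.arcsin(2.0))  # nan

# The same value as a complex number lies on the branch cut [1, inf]
# and produces a finite complex angle instead.
z = np.arcsin(2.0 + 0.0j)
print(z)  # complex result, real part pi/2
assert np.isclose(z.real, np.pi / 2)
assert np.isclose(np.sin(z), 2.0)  # round-trips through sin
```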
The inverse sine is also known as [`asin`](numpy.asin#numpy.asin "numpy.asin") or sin^{-1}. #### References Abramowitz, M. and Stegun, I. A., _Handbook of Mathematical Functions_ , 10th printing, New York: Dover, 1964, pp. 79ff. #### Examples >>> import numpy as np >>> np.arcsin(1) # pi/2 1.5707963267948966 >>> np.arcsin(-1) # -pi/2 -1.5707963267948966 >>> np.arcsin(0) 0.0 # numpy.arcsinh numpy.arcsinh(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Inverse hyperbolic sine element-wise. Parameters: **x** array_like Input array. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** ndarray or scalar Array of the same shape as `x`. This is a scalar if `x` is a scalar. #### Notes `arcsinh` is a multivalued function: for each `x` there are infinitely many numbers `z` such that `sinh(z) = x`. The convention is to return the `z` whose imaginary part lies in `[-pi/2, pi/2]`. For real-valued input data types, `arcsinh` always returns real output. For each value that cannot be expressed as a real number or infinity, it returns `nan` and sets the `invalid` floating point error flag. 
For complex-valued input, `arcsinh` is a complex analytical function that has branch cuts `[1j, infj]` and `[-1j, -infj]` and is continuous from the right on the former and from the left on the latter. The inverse hyperbolic sine is also known as [`asinh`](numpy.asinh#numpy.asinh "numpy.asinh") or `sinh^-1`. #### References [1] M. Abramowitz and I.A. Stegun, “Handbook of Mathematical Functions”, 10th printing, 1964, pp. 86. [2] Wikipedia, “Inverse hyperbolic function”, #### Examples >>> import numpy as np >>> np.arcsinh(np.array([np.e, 10.0])) array([ 1.72538256, 2.99822295]) # numpy.arctan numpy.arctan(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Trigonometric inverse tangent, element-wise. The inverse of tan, so that if `y = tan(x)` then `x = arctan(y)`. Parameters: **x** array_like **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** ndarray or scalar Out has the same shape as `x`. Its real part is in `[-pi/2, pi/2]` (`arctan(+/-inf)` returns `+/-pi/2`). This is a scalar if `x` is a scalar. 
See also [`arctan2`](numpy.arctan2#numpy.arctan2 "numpy.arctan2") The “four quadrant” arctan of the angle formed by (`x`, `y`) and the positive `x`-axis. [`angle`](numpy.angle#numpy.angle "numpy.angle") Argument of complex values. #### Notes `arctan` is a multi-valued function: for each `x` there are infinitely many numbers `z` such that tan(`z`) = `x`. The convention is to return the angle `z` whose real part lies in [-pi/2, pi/2]. For real-valued input data types, `arctan` always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag. For complex-valued input, `arctan` is a complex analytic function that has [`1j, infj`] and [`-1j, -infj`] as branch cuts, and is continuous from the left on the former and from the right on the latter. The inverse tangent is also known as [`atan`](numpy.atan#numpy.atan "numpy.atan") or tan^{-1}. #### References Abramowitz, M. and Stegun, I. A., _Handbook of Mathematical Functions_ , 10th printing, New York: Dover, 1964, pp. 79. #### Examples We expect the arctan of 0 to be 0, and of 1 to be pi/4: >>> import numpy as np >>> np.arctan([0, 1]) array([ 0. , 0.78539816]) >>> np.pi/4 0.78539816339744828 Plot arctan: >>> import matplotlib.pyplot as plt >>> x = np.linspace(-10, 10) >>> plt.plot(x, np.arctan(x)) >>> plt.axis('tight') >>> plt.show() # numpy.arctan2 numpy.arctan2(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Element-wise arc tangent of `x1/x2` choosing the quadrant correctly. The quadrant (i.e., branch) is chosen so that `arctan2(x1, x2)` is the signed angle in radians between the ray ending at the origin and passing through the point (1,0), and the ray ending at the origin and passing through the point (`x2`, `x1`). (Note the role reversal: the “`y`-coordinate” is the first function parameter, the “`x`-coordinate” is the second.) 
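The quadrant handling and the parameter order described above can be sketched by comparing `arctan2` with a plain `arctan` of the ratio:

```python
import numpy as np

# Point (x, y) = (-1, 1): second quadrant, true angle 3*pi/4.
y, x = 1.0, -1.0

# arctan2 takes the y-coordinate FIRST and picks the correct quadrant.
print(np.arctan2(y, x))  # 2.356... == 3*pi/4
assert np.isclose(np.arctan2(y, x), 3 * np.pi / 4)

# arctan(y/x) cannot distinguish (-1, 1) from (1, -1): the ratio is -1
# in both cases, so it returns -pi/4.
print(np.arctan(y / x))  # -0.785... == -pi/4
assert np.isclose(np.arctan(y / x), -np.pi / 4)
```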
By IEEE convention, this function is defined for `x2` = +/-0 and for either or both of `x1` and `x2` = +/-inf (see Notes for specific values). This function is not defined for complex-valued arguments; for the so-called argument of complex values, use [`angle`](numpy.angle#numpy.angle "numpy.angle"). Parameters: **x1** array_like, real-valued `y`-coordinates. **x2** array_like, real-valued `x`-coordinates. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **angle** ndarray Array of angles in radians, in the range `[-pi, pi]`. This is a scalar if both `x1` and `x2` are scalars. See also [`arctan`](numpy.arctan#numpy.arctan "numpy.arctan"), [`tan`](numpy.tan#numpy.tan "numpy.tan"), [`angle`](numpy.angle#numpy.angle "numpy.angle") #### Notes _arctan2_ is identical to the [`atan2`](numpy.atan2#numpy.atan2 "numpy.atan2") function of the underlying C library. 
The following special values are defined in the C standard: [1]

| `x1` | `x2` | `arctan2(x1,x2)` |
|---|---|---|
| +/- 0 | +0 | +/- 0 |
| +/- 0 | -0 | +/- pi |
| > 0 | +/-inf | +0 / +pi |
| < 0 | +/-inf | -0 / -pi |
| +/-inf | +inf | +/- (pi/4) |
| +/-inf | -inf | +/- (3*pi/4) |

Note that +0 and -0 are distinct floating point numbers, as are +inf and -inf. #### References [1] ISO/IEC standard 9899:1999, “Programming language C.” #### Examples Consider four points in different quadrants: >>> import numpy as np >>> x = np.array([-1, +1, +1, -1]) >>> y = np.array([-1, -1, +1, +1]) >>> np.arctan2(y, x) * 180 / np.pi array([-135., -45., 45., 135.]) Note the order of the parameters. `arctan2` is defined also when `x2` = 0 and at several other special points, obtaining values in the range `[-pi, pi]`: >>> np.arctan2([1., -1.], [0., 0.]) array([ 1.57079633, -1.57079633]) >>> np.arctan2([0., 0., np.inf], [+0., -0., np.inf]) array([0. , 3.14159265, 0.78539816]) # numpy.arctanh numpy.arctanh(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Inverse hyperbolic tangent element-wise. Parameters: **x** array_like Input array. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs).
Returns:

**out** ndarray or scalar Array of the same shape as `x`. This is a scalar if `x` is a scalar.

See also

[`emath.arctanh`](numpy.emath.arctanh#numpy.emath.arctanh "numpy.emath.arctanh")

#### Notes

`arctanh` is a multivalued function: for each `x` there are infinitely many numbers `z` such that `tanh(z) = x`. The convention is to return the `z` whose imaginary part lies in `[-pi/2, pi/2]`.

For real-valued input data types, `arctanh` always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag.

For complex-valued input, `arctanh` is a complex analytical function that has branch cuts `[-1, -inf]` and `[1, inf]` and is continuous from above on the former and from below on the latter.

The inverse hyperbolic tangent is also known as [`atanh`](numpy.atanh#numpy.atanh "numpy.atanh") or `tanh^-1`.

#### References

[1] M. Abramowitz and I.A. Stegun, “Handbook of Mathematical Functions”, 10th printing, 1964, pp. 86.

[2] Wikipedia, “Inverse hyperbolic function”,

#### Examples

    >>> import numpy as np
    >>> np.arctanh([0, -0.5])
    array([ 0. , -0.54930614])

# numpy.argmax

numpy.argmax(_a_, _axis=None_, _out=None_, _*_, _keepdims=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L1251-L1342)

Returns the indices of the maximum values along an axis.

Parameters:

**a** array_like Input array.

**axis** int, optional By default, the index is into the flattened array, otherwise along the specified axis.

**out** array, optional If provided, the result will be inserted into this array. It should be of the appropriate shape and dtype.

**keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. New in version 1.22.0.

Returns:

**index_array** ndarray of ints Array of indices into the array.
It has the same shape as `a.shape` with the dimension along `axis` removed. If `keepdims` is set to True, then the size of `axis` will be 1, with the resulting array having the same shape as `a.shape`.

See also

[`ndarray.argmax`](numpy.ndarray.argmax#numpy.ndarray.argmax "numpy.ndarray.argmax"), [`argmin`](numpy.argmin#numpy.argmin "numpy.argmin")

[`amax`](numpy.amax#numpy.amax "numpy.amax") The maximum value along a given axis.

[`unravel_index`](numpy.unravel_index#numpy.unravel_index "numpy.unravel_index") Convert a flat index into an index tuple.

[`take_along_axis`](numpy.take_along_axis#numpy.take_along_axis "numpy.take_along_axis") Apply `np.expand_dims(index_array, axis)` from argmax to an array as if by calling max.

#### Notes

In case of multiple occurrences of the maximum values, the indices corresponding to the first occurrence are returned.

#### Examples

    >>> import numpy as np
    >>> a = np.arange(6).reshape(2,3) + 10
    >>> a
    array([[10, 11, 12],
           [13, 14, 15]])
    >>> np.argmax(a)
    5
    >>> np.argmax(a, axis=0)
    array([1, 1, 1])
    >>> np.argmax(a, axis=1)
    array([2, 2])

Indices of the maximal elements of an N-dimensional array:

    >>> ind = np.unravel_index(np.argmax(a, axis=None), a.shape)
    >>> ind
    (1, 2)
    >>> a[ind]
    15

    >>> b = np.arange(6)
    >>> b[1] = 5
    >>> b
    array([0, 5, 2, 3, 4, 5])
    >>> np.argmax(b)  # Only the first occurrence is returned.
    1

    >>> x = np.array([[4,2,3], [1,0,3]])
    >>> index_array = np.argmax(x, axis=-1)
    >>> # Same as np.amax(x, axis=-1, keepdims=True)
    >>> np.take_along_axis(x, np.expand_dims(index_array, axis=-1), axis=-1)
    array([[4],
           [3]])
    >>> # Same as np.amax(x, axis=-1)
    >>> np.take_along_axis(x, np.expand_dims(index_array, axis=-1),
    ...                    axis=-1).squeeze(axis=-1)
    array([4, 3])

Setting `keepdims` to `True`,

    >>> x = np.arange(24).reshape((2, 3, 4))
    >>> res = np.argmax(x, axis=1, keepdims=True)
    >>> res.shape
    (2, 1, 4)

# numpy.argmin

numpy.argmin(_a_, _axis=None_, _out=None_, _*_, _keepdims=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L1349-L1440)

Returns the indices of the minimum values along an axis.

Parameters:

**a** array_like Input array.

**axis** int, optional By default, the index is into the flattened array, otherwise along the specified axis.

**out** array, optional If provided, the result will be inserted into this array. It should be of the appropriate shape and dtype.

**keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. New in version 1.22.0.

Returns:

**index_array** ndarray of ints Array of indices into the array. It has the same shape as `a.shape` with the dimension along `axis` removed. If `keepdims` is set to True, then the size of `axis` will be 1, with the resulting array having the same shape as `a.shape`.

See also

[`ndarray.argmin`](numpy.ndarray.argmin#numpy.ndarray.argmin "numpy.ndarray.argmin"), [`argmax`](numpy.argmax#numpy.argmax "numpy.argmax")

[`amin`](numpy.amin#numpy.amin "numpy.amin") The minimum value along a given axis.

[`unravel_index`](numpy.unravel_index#numpy.unravel_index "numpy.unravel_index") Convert a flat index into an index tuple.

[`take_along_axis`](numpy.take_along_axis#numpy.take_along_axis "numpy.take_along_axis") Apply `np.expand_dims(index_array, axis)` from argmin to an array as if by calling min.

#### Notes

In case of multiple occurrences of the minimum values, the indices corresponding to the first occurrence are returned.
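Since only the first occurrence is reported, locating every position of a repeated minimum needs a separate comparison against the minimum value; a minimal sketch:

```python
import numpy as np

b = np.array([10, 11, 12, 13, 10, 15])
# np.argmin reports only the first position of the minimum...
first = np.argmin(b)                    # 0
# ...but comparing against the minimum value finds them all
every = np.flatnonzero(b == b.min())
print(first, every)                     # 0 [0 4]
```

The same pattern works for `argmax` with `b.max()`.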
#### Examples

    >>> import numpy as np
    >>> a = np.arange(6).reshape(2,3) + 10
    >>> a
    array([[10, 11, 12],
           [13, 14, 15]])
    >>> np.argmin(a)
    0
    >>> np.argmin(a, axis=0)
    array([0, 0, 0])
    >>> np.argmin(a, axis=1)
    array([0, 0])

Indices of the minimum elements of an N-dimensional array:

    >>> ind = np.unravel_index(np.argmin(a, axis=None), a.shape)
    >>> ind
    (0, 0)
    >>> a[ind]
    10

    >>> b = np.arange(6) + 10
    >>> b[4] = 10
    >>> b
    array([10, 11, 12, 13, 10, 15])
    >>> np.argmin(b)  # Only the first occurrence is returned.
    0

    >>> x = np.array([[4,2,3], [1,0,3]])
    >>> index_array = np.argmin(x, axis=-1)
    >>> # Same as np.amin(x, axis=-1, keepdims=True)
    >>> np.take_along_axis(x, np.expand_dims(index_array, axis=-1), axis=-1)
    array([[2],
           [0]])
    >>> # Same as np.amin(x, axis=-1)
    >>> np.take_along_axis(x, np.expand_dims(index_array, axis=-1),
    ...                    axis=-1).squeeze(axis=-1)
    array([2, 0])

Setting `keepdims` to `True`,

    >>> x = np.arange(24).reshape((2, 3, 4))
    >>> res = np.argmin(x, axis=1, keepdims=True)
    >>> res.shape
    (2, 1, 4)

# numpy.argpartition

numpy.argpartition(_a_, _kth_, _axis=-1_, _kind='introselect'_, _order=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L876-L962)

Perform an indirect partition along the given axis using the algorithm specified by the `kind` keyword. It returns an array of indices of the same shape as `a` that index data along the given axis in partitioned order.

Parameters:

**a** array_like Array to sort.

**kth** int or sequence of ints Element index to partition by. The k-th element will be in its final sorted position and all smaller elements will be moved before it and all larger elements behind it. The order of all elements in the partitions is undefined. If provided with a sequence of k-th it will partition all of them into their sorted position at once. Deprecated since version 1.22.0: Passing booleans as index is deprecated.

**axis** int or None, optional Axis along which to sort. The default is -1 (the last axis).
If None, the flattened array is used. **kind**{‘introselect’}, optional Selection algorithm. Default is ‘introselect’ **order** str or list of str, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties. Returns: **index_array** ndarray, int Array of indices that partition `a` along the specified axis. If `a` is one- dimensional, `a[index_array]` yields a partitioned `a`. More generally, `np.take_along_axis(a, index_array, axis=axis)` always yields the partitioned `a`, irrespective of dimensionality. See also [`partition`](numpy.partition#numpy.partition "numpy.partition") Describes partition algorithms used. [`ndarray.partition`](numpy.ndarray.partition#numpy.ndarray.partition "numpy.ndarray.partition") Inplace partition. [`argsort`](numpy.argsort#numpy.argsort "numpy.argsort") Full indirect sort. [`take_along_axis`](numpy.take_along_axis#numpy.take_along_axis "numpy.take_along_axis") Apply `index_array` from argpartition to an array as if by calling partition. #### Notes The returned indices are not guaranteed to be sorted according to the values. Furthermore, the default selection algorithm `introselect` is unstable, and hence the returned indices are not guaranteed to be the earliest/latest occurrence of the element. `argpartition` works for real/complex inputs with nan values, see [`partition`](numpy.partition#numpy.partition "numpy.partition") for notes on the enhanced sort order and different selection algorithms. 
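A common use of `argpartition` is cheap top-k selection: partition once in O(n), then sort only the k selected indices rather than the whole array. A minimal sketch:

```python
import numpy as np

a = np.array([7, 2, 9, 4, 1, 8])
k = 3
# indices of the k smallest values, in no guaranteed order
idx = np.argpartition(a, k)[:k]
# sort just those k indices by value to get an ordered result
smallest = idx[np.argsort(a[idx])]
print(a[smallest])                      # [1 2 4]
```

For the k largest values, partition on `len(a) - k` and take the trailing slice instead.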
#### Examples One dimensional array: >>> import numpy as np >>> x = np.array([3, 4, 2, 1]) >>> x[np.argpartition(x, 3)] array([2, 1, 3, 4]) # may vary >>> x[np.argpartition(x, (1, 3))] array([1, 2, 3, 4]) # may vary >>> x = [3, 4, 2, 1] >>> np.array(x)[np.argpartition(x, 3)] array([2, 1, 3, 4]) # may vary Multi-dimensional array: >>> x = np.array([[3, 4, 2], [1, 3, 1]]) >>> index_array = np.argpartition(x, kth=1, axis=-1) >>> # below is the same as np.partition(x, kth=1) >>> np.take_along_axis(x, index_array, axis=-1) array([[2, 3, 4], [1, 1, 3]]) # numpy.argsort numpy.argsort(_a_ , _axis =-1_, _kind =None_, _order =None_, _*_ , _stable =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L1130-L1245) Returns the indices that would sort an array. Perform an indirect sort along the given axis using the algorithm specified by the `kind` keyword. It returns an array of indices of the same shape as `a` that index data along the given axis in sorted order. Parameters: **a** array_like Array to sort. **axis** int or None, optional Axis along which to sort. The default is -1 (the last axis). If None, the flattened array is used. **kind**{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional Sorting algorithm. The default is ‘quicksort’. Note that both ‘stable’ and ‘mergesort’ use timsort under the covers and, in general, the actual implementation will vary with data type. The ‘mergesort’ option is retained for backwards compatibility. **order** str or list of str, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties. **stable** bool, optional Sort stability. If `True`, the returned array will maintain the relative order of `a` values which compare as equal. 
If `False` or `None`, this is not guaranteed. Internally, this option selects `kind='stable'`. Default: `None`. New in version 2.0.0. Returns: **index_array** ndarray, int Array of indices that sort `a` along the specified `axis`. If `a` is one- dimensional, `a[index_array]` yields a sorted `a`. More generally, `np.take_along_axis(a, index_array, axis=axis)` always yields the sorted `a`, irrespective of dimensionality. See also [`sort`](numpy.sort#numpy.sort "numpy.sort") Describes sorting algorithms used. [`lexsort`](numpy.lexsort#numpy.lexsort "numpy.lexsort") Indirect stable sort with multiple keys. [`ndarray.sort`](numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort") Inplace sort. [`argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition") Indirect partial sort. [`take_along_axis`](numpy.take_along_axis#numpy.take_along_axis "numpy.take_along_axis") Apply `index_array` from argsort to an array as if by calling sort. #### Notes See [`sort`](numpy.sort#numpy.sort "numpy.sort") for notes on the different sorting algorithms. As of NumPy 1.4.0 `argsort` works with real/complex arrays containing nan values. The enhanced sort order is documented in [`sort`](numpy.sort#numpy.sort "numpy.sort"). 
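Because `argsort` returns indices rather than values, the same index array can reorder *other* arrays by one key, keeping parallel arrays aligned; a minimal sketch:

```python
import numpy as np

ages  = np.array([35, 30, 25])
names = np.array(["carol", "alice", "bob"])
order = np.argsort(ages)        # indices that sort ages ascending
print(names[order])             # ['bob' 'alice' 'carol']
```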
#### Examples

One dimensional array:

    >>> import numpy as np
    >>> x = np.array([3, 1, 2])
    >>> np.argsort(x)
    array([1, 2, 0])

Two-dimensional array:

    >>> x = np.array([[0, 3], [2, 2]])
    >>> x
    array([[0, 3],
           [2, 2]])
    >>> ind = np.argsort(x, axis=0)  # sorts along first axis (down)
    >>> ind
    array([[0, 1],
           [1, 0]])
    >>> np.take_along_axis(x, ind, axis=0)  # same as np.sort(x, axis=0)
    array([[0, 2],
           [2, 3]])
    >>> ind = np.argsort(x, axis=1)  # sorts along last axis (across)
    >>> ind
    array([[0, 1],
           [0, 1]])
    >>> np.take_along_axis(x, ind, axis=1)  # same as np.sort(x, axis=1)
    array([[0, 3],
           [2, 2]])

Indices of the sorted elements of an N-dimensional array:

    >>> ind = np.unravel_index(np.argsort(x, axis=None), x.shape)
    >>> ind
    (array([0, 1, 1, 0]), array([0, 0, 1, 1]))
    >>> x[ind]  # same as np.sort(x, axis=None)
    array([0, 2, 2, 3])

Sorting with keys:

    >>> x = np.array([(1, 0), (0, 1)], dtype=[('x', '<i4'), ('y', '<i4')])
    >>> x
    array([(1, 0), (0, 1)], dtype=[('x', '<i4'), ('y', '<i4')])
    >>> np.argsort(x, order=('x','y'))
    array([1, 0])
    >>> np.argsort(x, order=('y','x'))
    array([0, 1])

# numpy.argwhere

numpy.argwhere(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L591-L639)

Find the indices of array elements that are non-zero, grouped by element.

Parameters:

**a** array_like Input data.

Returns:

**index_array** (N, a.ndim) ndarray Indices of elements that are non-zero. Indices are grouped by element. This array will have shape `(N, a.ndim)` where `N` is the number of non-zero items.

See also

[`where`](numpy.where#numpy.where "numpy.where"), [`nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero")

#### Notes

`np.argwhere(a)` is almost the same as `np.transpose(np.nonzero(a))`, but produces a result of the correct shape for a 0D array.

The output of `argwhere` is not suitable for indexing arrays. For this purpose use `nonzero(a)` instead.
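The distinction matters in practice: `argwhere` returns coordinate rows, convenient for iteration or display, while `nonzero` returns a tuple of index arrays usable directly as a fancy index. A minimal sketch:

```python
import numpy as np

x = np.arange(6).reshape(2, 3)
coords = np.argwhere(x > 1)   # (N, ndim) array of coordinate rows
idx = np.nonzero(x > 1)       # tuple of index arrays
print(coords.shape)           # (4, 2)
print(x[idx])                 # [2 3 4 5]
```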
#### Examples >>> import numpy as np >>> x = np.arange(6).reshape(2,3) >>> x array([[0, 1, 2], [3, 4, 5]]) >>> np.argwhere(x>1) array([[0, 2], [1, 0], [1, 1], [1, 2]]) # numpy.around numpy.around(_a_ , _decimals =0_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L3713-L3727) Round an array to the given number of decimals. `around` is an alias of [`round`](numpy.round#numpy.round "numpy.round"). See also [`ndarray.round`](numpy.ndarray.round#numpy.ndarray.round "numpy.ndarray.round") equivalent method [`round`](numpy.round#numpy.round "numpy.round") alias for this function [`ceil`](numpy.ceil#numpy.ceil "numpy.ceil"), [`fix`](numpy.fix#numpy.fix "numpy.fix"), [`floor`](numpy.floor#numpy.floor "numpy.floor"), [`rint`](numpy.rint#numpy.rint "numpy.rint"), [`trunc`](numpy.trunc#numpy.trunc "numpy.trunc") # numpy.array numpy.array(_object_ , _dtype =None_, _*_ , _copy =True_, _order ='K'_, _subok =False_, _ndmin =0_, _like =None_) Create an array. Parameters: **object** array_like An array, any object exposing the array interface, an object whose `__array__` method returns an array, or any (nested) sequence. If object is a scalar, a 0-dimensional array containing object is returned. **dtype** data-type, optional The desired data-type for the array. If not given, NumPy will try to use a default `dtype` that can represent the values (by applying promotion rules when necessary.) **copy** bool, optional If `True` (default), then the array data is copied. If `None`, a copy will only be made if `__array__` returns a copy, if obj is a nested sequence, or if a copy is needed to satisfy any of the other requirements (`dtype`, `order`, etc.). Note that any copy of the data is shallow, i.e., for arrays with object dtype, the new array will point to the same objects. See Examples for [`ndarray.copy`](numpy.ndarray.copy#numpy.ndarray.copy "numpy.ndarray.copy"). For `False` it raises a `ValueError` if a copy cannot be avoided. 
Default: `True`.

**order** {‘K’, ‘A’, ‘C’, ‘F’}, optional Specify the memory layout of the array. If object is not an array, the newly created array will be in C order (row major) unless ‘F’ is specified, in which case it will be in Fortran order (column major). If object is an array the following holds.

| order | no copy | copy=True |
|---|---|---|
| ‘K’ | unchanged | F & C order preserved, otherwise most similar order |
| ‘A’ | unchanged | F order if input is F and not C, otherwise C order |
| ‘C’ | C order | C order |
| ‘F’ | F order | F order |

When `copy=None` and a copy is made for other reasons, the result is the same as if `copy=True`, with some exceptions for ‘A’, see the Notes section. The default order is ‘K’.

**subok** bool, optional If True, then sub-classes will be passed-through, otherwise the returned array will be forced to be a base-class array (default).

**ndmin** int, optional Specifies the minimum number of dimensions that the resulting array should have. Ones will be prepended to the shape as needed to meet this requirement.

**like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0.

Returns:

**out** ndarray An array object satisfying the specified requirements.

See also

[`empty_like`](numpy.empty_like#numpy.empty_like "numpy.empty_like") Return an empty array with shape and type of input.

[`ones_like`](numpy.ones_like#numpy.ones_like "numpy.ones_like") Return an array of ones with shape and type of input.

[`zeros_like`](numpy.zeros_like#numpy.zeros_like "numpy.zeros_like") Return an array of zeros with shape and type of input.

[`full_like`](numpy.full_like#numpy.full_like "numpy.full_like") Return a new array with shape of input filled with value.
[`empty`](numpy.empty#numpy.empty "numpy.empty") Return a new uninitialized array.

[`ones`](numpy.ones#numpy.ones "numpy.ones") Return a new array setting values to one.

[`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Return a new array setting values to zero.

[`full`](numpy.full#numpy.full "numpy.full") Return a new array of given shape filled with value.

[`copy`](numpy.copy#numpy.copy "numpy.copy") Return an array copy of the given object.

#### Notes

When order is ‘A’ and `object` is an array in neither ‘C’ nor ‘F’ order, and a copy is forced by a change in dtype, then the order of the result is not necessarily ‘C’ as expected. This is likely a bug.

#### Examples

    >>> import numpy as np
    >>> np.array([1, 2, 3])
    array([1, 2, 3])

Upcasting:

    >>> np.array([1, 2, 3.0])
    array([ 1., 2., 3.])

More than one dimension:

    >>> np.array([[1, 2], [3, 4]])
    array([[1, 2],
           [3, 4]])

Minimum dimensions 2:

    >>> np.array([1, 2, 3], ndmin=2)
    array([[1, 2, 3]])

Type provided:

    >>> np.array([1, 2, 3], dtype=complex)
    array([ 1.+0.j, 2.+0.j, 3.+0.j])

Data-type consisting of more than one element:

    >>> x = np.array([(1,2),(3,4)], dtype=[('a','<i4'),('b','<i4')])
    >>> x['a']
    array([1, 3], dtype=int32)

Creating an array from sub-classes:

    >>> np.array(np.asmatrix('1 2; 3 4'))
    array([[1, 2],
           [3, 4]])

    >>> np.array(np.asmatrix('1 2; 3 4'), subok=True)
    matrix([[1, 2],
            [3, 4]])

# numpy.array2string

numpy.array2string(_a_, _max_line_width=None_, _precision=None_, _suppress_small=None_, _separator=' '_, _prefix=''_, _style=<no value>_, _formatter=None_, _threshold=None_, _edgeitems=None_, _sign=None_, _floatmode=None_, _suffix=''_, _*_, _legacy=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/arrayprint.py#L605-L784)

Return a string representation of an array.

Parameters:

**a** ndarray Input array.

**max_line_width** int, optional Inserts newlines if text is longer than `max_line_width`. Defaults to `numpy.get_printoptions()['linewidth']`.

**precision** int or None, optional Floating point precision.
Defaults to `numpy.get_printoptions()['precision']`. **suppress_small** bool, optional Represent numbers “very close” to zero as zero; default is False. Very close is defined by precision: if the precision is 8, e.g., numbers smaller (in absolute value) than 5e-9 are represented as zero. Defaults to `numpy.get_printoptions()['suppress']`. **separator** str, optional Inserted between elements. **prefix** str, optional **suffix** str, optional The length of the prefix and suffix strings are used to respectively align and wrap the output. An array is typically printed as: prefix + array2string(a) + suffix The output is left-padded by the length of the prefix string, and wrapping is forced at the column `max_line_width - len(suffix)`. It should be noted that the content of prefix and suffix strings are not included in the output. **style** _NoValue, optional Has no effect, do not use. Deprecated since version 1.14.0. **formatter** dict of callables, optional If not None, the keys should indicate the type(s) that the respective formatting function applies to. Callables should return a string. Types that are not specified (by their corresponding keys) are handled by the default formatters. 
Individual types for which a formatter can be set are:

* ‘bool’
* ‘int’
* ‘timedelta’ : a [`numpy.timedelta64`](../arrays.scalars#numpy.timedelta64 "numpy.timedelta64")
* ‘datetime’ : a [`numpy.datetime64`](../arrays.scalars#numpy.datetime64 "numpy.datetime64")
* ‘float’
* ‘longfloat’ : 128-bit floats
* ‘complexfloat’
* ‘longcomplexfloat’ : composed of two 128-bit floats
* ‘void’ : type [`numpy.void`](../arrays.scalars#numpy.void "numpy.void")
* ‘numpystr’ : types [`numpy.bytes_`](../arrays.scalars#numpy.bytes_ "numpy.bytes_") and [`numpy.str_`](../arrays.scalars#numpy.str_ "numpy.str_")

Other keys that can be used to set a group of types at once are:

* ‘all’ : sets all types
* ‘int_kind’ : sets ‘int’
* ‘float_kind’ : sets ‘float’ and ‘longfloat’
* ‘complex_kind’ : sets ‘complexfloat’ and ‘longcomplexfloat’
* ‘str_kind’ : sets ‘numpystr’

**threshold** int, optional Total number of array elements which trigger summarization rather than full repr. Defaults to `numpy.get_printoptions()['threshold']`.

**edgeitems** int, optional Number of array items in summary at beginning and end of each dimension. Defaults to `numpy.get_printoptions()['edgeitems']`.

**sign** string, either ‘-’, ‘+’, or ‘ ‘, optional Controls printing of the sign of floating-point types. If ‘+’, always print the sign of positive values. If ‘ ‘, always prints a space (whitespace character) in the sign position of positive values. If ‘-’, omit the sign character of positive values. Defaults to `numpy.get_printoptions()['sign']`. Changed in version 2.0: The sign parameter can now be an integer type, previously types were floating-point types.

**floatmode** str, optional Controls the interpretation of the `precision` option for floating-point types. Defaults to `numpy.get_printoptions()['floatmode']`. Can take the following values:

* ‘fixed’: Always print exactly `precision` fractional digits, even if this would print more or fewer digits than necessary to specify the value uniquely.
* ‘unique’: Print the minimum number of fractional digits necessary to represent each value uniquely. Different elements may have a different number of digits. The value of the `precision` option is ignored.
* ‘maxprec’: Print at most `precision` fractional digits, but if an element can be uniquely represented with fewer digits only print it with that many.
* ‘maxprec_equal’: Print at most `precision` fractional digits, but if every element in the array can be uniquely represented with an equal number of fewer digits, use that many digits for all elements.

**legacy** string or `False`, optional If set to the string `'1.13'` enables 1.13 legacy printing mode. This approximates numpy 1.13 print output by including a space in the sign position of floats and different behavior for 0d arrays. If set to `False`, disables legacy mode. Unrecognized strings will be ignored with a warning for forward compatibility.

Returns:

**array_str** str String representation of the array.

Raises:

TypeError if a callable in `formatter` does not return a string.

See also

[`array_str`](numpy.array_str#numpy.array_str "numpy.array_str"), [`array_repr`](numpy.array_repr#numpy.array_repr "numpy.array_repr"), [`set_printoptions`](numpy.set_printoptions#numpy.set_printoptions "numpy.set_printoptions"), [`get_printoptions`](numpy.get_printoptions#numpy.get_printoptions "numpy.get_printoptions")

#### Notes

If a formatter is specified for a certain type, the `precision` keyword is ignored for that type.

This is a very flexible function; [`array_repr`](numpy.array_repr#numpy.array_repr "numpy.array_repr") and [`array_str`](numpy.array_str#numpy.array_str "numpy.array_str") are using `array2string` internally so keywords with the same name should work identically in all three functions.

#### Examples

    >>> import numpy as np
    >>> x = np.array([1e-16,1,2,3])
    >>> np.array2string(x, precision=2, separator=',',
    ...                 suppress_small=True)
    '[0.,1.,2.,3.]'

    >>> x = np.arange(3.)
>>> np.array2string(x, formatter={'float_kind':lambda x: "%.2f" % x}) '[0.00 1.00 2.00]' >>> x = np.arange(3) >>> np.array2string(x, formatter={'int':lambda x: hex(x)}) '[0x0 0x1 0x2]' # numpy.array_equal numpy.array_equal(_a1_ , _a2_ , _equal_nan =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L2475-L2558) True if two arrays have the same shape and elements, False otherwise. Parameters: **a1, a2** array_like Input arrays. **equal_nan** bool Whether to compare NaN’s as equal. If the dtype of a1 and a2 is complex, values will be considered equal if either the real or the imaginary component of a given value is `nan`. Returns: **b** bool Returns True if the arrays are equal. See also [`allclose`](numpy.allclose#numpy.allclose "numpy.allclose") Returns True if two arrays are element-wise equal within a tolerance. [`array_equiv`](numpy.array_equiv#numpy.array_equiv "numpy.array_equiv") Returns True if input arrays are shape consistent and all elements equal. #### Examples >>> import numpy as np >>> np.array_equal([1, 2], [1, 2]) True >>> np.array_equal(np.array([1, 2]), np.array([1, 2])) True >>> np.array_equal([1, 2], [1, 2, 3]) False >>> np.array_equal([1, 2], [1, 4]) False >>> a = np.array([1, np.nan]) >>> np.array_equal(a, a) False >>> np.array_equal(a, a, equal_nan=True) True When `equal_nan` is True, complex values with nan components are considered equal if either the real _or_ the imaginary components are nan. >>> a = np.array([1 + 1j]) >>> b = a.copy() >>> a.real = np.nan >>> b.imag = np.nan >>> np.array_equal(a, b, equal_nan=True) True # numpy.array_equiv numpy.array_equiv(_a1_ , _a2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L2565-L2611) Returns True if input arrays are shape consistent and all elements equal. Shape consistent means they are either the same shape, or one input array can be broadcasted to create the same shape as the other one. 
Parameters: **a1, a2** array_like Input arrays. Returns: **out** bool True if equivalent, False otherwise. #### Examples >>> import numpy as np >>> np.array_equiv([1, 2], [1, 2]) True >>> np.array_equiv([1, 2], [1, 3]) False Showing the shape equivalence: >>> np.array_equiv([1, 2], [[1, 2], [1, 2]]) True >>> np.array_equiv([1, 2], [[1, 2, 1, 2], [1, 2, 1, 2]]) False >>> np.array_equiv([1, 2], [[1, 2], [1, 3]]) False # numpy.array_repr numpy.array_repr(_arr_ , _max_line_width =None_, _precision =None_, _suppress_small =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/arrayprint.py#L1628-L1675) Return the string representation of an array. Parameters: **arr** ndarray Input array. **max_line_width** int, optional Inserts newlines if text is longer than `max_line_width`. Defaults to `numpy.get_printoptions()['linewidth']`. **precision** int, optional Floating point precision. Defaults to `numpy.get_printoptions()['precision']`. **suppress_small** bool, optional Represent numbers “very close” to zero as zero; default is False. Very close is defined by precision: if the precision is 8, e.g., numbers smaller (in absolute value) than 5e-9 are represented as zero. Defaults to `numpy.get_printoptions()['suppress']`. Returns: **string** str The string representation of an array. See also [`array_str`](numpy.array_str#numpy.array_str "numpy.array_str"), [`array2string`](numpy.array2string#numpy.array2string "numpy.array2string"), [`set_printoptions`](numpy.set_printoptions#numpy.set_printoptions "numpy.set_printoptions") #### Examples >>> import numpy as np >>> np.array_repr(np.array([1,2])) 'array([1, 2])' >>> np.array_repr(np.ma.array([0.])) 'MaskedArray([0.])' >>> np.array_repr(np.array([], np.int32)) 'array([], dtype=int32)' >>> x = np.array([1e-6, 4e-7, 2, 3]) >>> np.array_repr(x, precision=6, suppress_small=True) 'array([0.000001, 0. , 2. , 3. 
])' # numpy.array_split numpy.array_split(_ary_ , _indices_or_sections_ , _axis =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_shape_base_impl.py#L742-L796) Split an array into multiple sub-arrays. Please refer to the `split` documentation. The only difference between these functions is that `array_split` allows `indices_or_sections` to be an integer that does _not_ equally divide the axis. For an array of length l that should be split into n sections, it returns l % n sub-arrays of size l//n + 1 and the rest of size l//n. See also [`split`](numpy.split#numpy.split "numpy.split") Split array into multiple sub-arrays of equal size. #### Examples >>> import numpy as np >>> x = np.arange(8.0) >>> np.array_split(x, 3) [array([0., 1., 2.]), array([3., 4., 5.]), array([6., 7.])] >>> x = np.arange(9) >>> np.array_split(x, 4) [array([0, 1, 2]), array([3, 4]), array([5, 6]), array([7, 8])] # numpy.array_str numpy.array_str(_a_ , _max_line_width =None_, _precision =None_, _suppress_small =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/arrayprint.py#L1710-L1748) Return a string representation of the data in an array. The data in the array is returned as a single string. This function is similar to [`array_repr`](numpy.array_repr#numpy.array_repr "numpy.array_repr"), the difference being that [`array_repr`](numpy.array_repr#numpy.array_repr "numpy.array_repr") also returns information on the kind of array and its data type. Parameters: **a** ndarray Input array. **max_line_width** int, optional Inserts newlines if text is longer than `max_line_width`. Defaults to `numpy.get_printoptions()['linewidth']`. **precision** int, optional Floating point precision. Defaults to `numpy.get_printoptions()['precision']`. **suppress_small** bool, optional Represent numbers “very close” to zero as zero; default is False. 
Very close is defined by precision: if the precision is 8, e.g., numbers smaller (in absolute value) than 5e-9 are represented as zero. Defaults to `numpy.get_printoptions()['suppress']`. See also [`array2string`](numpy.array2string#numpy.array2string "numpy.array2string"), [`array_repr`](numpy.array_repr#numpy.array_repr "numpy.array_repr"), [`set_printoptions`](numpy.set_printoptions#numpy.set_printoptions "numpy.set_printoptions") #### Examples >>> import numpy as np >>> np.array_str(np.arange(3)) '[0 1 2]' # numpy.asanyarray numpy.asanyarray(_a_ , _dtype =None_, _order =None_, _*_ , _device =None_, _copy =None_, _like =None_) Convert the input to an ndarray, but pass ndarray subclasses through. Parameters: **a** array_like Input data, in any form that can be converted to an array. This includes scalars, lists, lists of tuples, tuples, tuples of tuples, tuples of lists, and ndarrays. **dtype** data-type, optional By default, the data-type is inferred from the input data. **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Memory layout. ‘A’ and ‘K’ depend on the order of input array a. ‘C’ row-major (C-style), ‘F’ column-major (Fortran-style) memory representation. ‘A’ (any) means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise ‘K’ (keep) preserve input order Defaults to ‘C’. **device** str, optional The device on which to place the created array. Default: `None`. For Array-API interoperability only, so must be `"cpu"` if passed. New in version 2.1.0. **copy** bool, optional If `True`, then the object is copied. If `None` then the object is copied only if needed, i.e. if `__array__` returns a copy, if obj is a nested sequence, or if a copy is needed to satisfy any of the other requirements (`dtype`, `order`, etc.). For `False` it raises a `ValueError` if a copy cannot be avoided. Default: `None`. New in version 2.1.0. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. 
If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: **out** ndarray or an ndarray subclass Array interpretation of `a`. If `a` is an ndarray or a subclass of ndarray, it is returned as-is and no copy is performed. See also [`asarray`](numpy.asarray#numpy.asarray "numpy.asarray") Similar function which always returns ndarrays. [`ascontiguousarray`](numpy.ascontiguousarray#numpy.ascontiguousarray "numpy.ascontiguousarray") Convert input to a contiguous array. [`asfortranarray`](numpy.asfortranarray#numpy.asfortranarray "numpy.asfortranarray") Convert input to an ndarray with column-major memory order. [`asarray_chkfinite`](numpy.asarray_chkfinite#numpy.asarray_chkfinite "numpy.asarray_chkfinite") Similar function which checks input for NaNs and Infs. [`fromiter`](numpy.fromiter#numpy.fromiter "numpy.fromiter") Create an array from an iterator. [`fromfunction`](numpy.fromfunction#numpy.fromfunction "numpy.fromfunction") Construct an array by executing a function on grid positions. #### Examples Convert a list into an array: >>> a = [1, 2] >>> import numpy as np >>> np.asanyarray(a) array([1, 2]) Instances of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") subclasses are passed through as-is: >>> a = np.array([(1., 2), (3., 4)], dtype='f4,i4').view(np.recarray) >>> np.asanyarray(a) is a True # numpy.asarray numpy.asarray(_a_ , _dtype =None_, _order =None_, _*_ , _device =None_, _copy =None_, _like =None_) Convert the input to an array. Parameters: **a** array_like Input data, in any form that can be converted to an array. This includes lists, lists of tuples, tuples, tuples of tuples, tuples of lists and ndarrays. **dtype** data-type, optional By default, the data-type is inferred from the input data. **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Memory layout. 
‘A’ and ‘K’ depend on the order of input array a. ‘C’ row-major (C-style), ‘F’ column-major (Fortran-style) memory representation. ‘A’ (any) means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise ‘K’ (keep) preserve input order Defaults to ‘K’. **device** str, optional The device on which to place the created array. Default: `None`. For Array-API interoperability only, so must be `"cpu"` if passed. New in version 2.0.0. **copy** bool, optional If `True`, then the object is copied. If `None` then the object is copied only if needed, i.e. if `__array__` returns a copy, if obj is a nested sequence, or if a copy is needed to satisfy any of the other requirements (`dtype`, `order`, etc.). For `False` it raises a `ValueError` if a copy cannot be avoided. Default: `None`. New in version 2.0.0. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: **out** ndarray Array interpretation of `a`. No copy is performed if the input is already an ndarray with matching dtype and order. If `a` is a subclass of ndarray, a base class ndarray is returned. See also [`asanyarray`](numpy.asanyarray#numpy.asanyarray "numpy.asanyarray") Similar function which passes through subclasses. [`ascontiguousarray`](numpy.ascontiguousarray#numpy.ascontiguousarray "numpy.ascontiguousarray") Convert input to a contiguous array. [`asfortranarray`](numpy.asfortranarray#numpy.asfortranarray "numpy.asfortranarray") Convert input to an ndarray with column-major memory order. [`asarray_chkfinite`](numpy.asarray_chkfinite#numpy.asarray_chkfinite "numpy.asarray_chkfinite") Similar function which checks input for NaNs and Infs. 
[`fromiter`](numpy.fromiter#numpy.fromiter "numpy.fromiter") Create an array from an iterator. [`fromfunction`](numpy.fromfunction#numpy.fromfunction "numpy.fromfunction") Construct an array by executing a function on grid positions. #### Examples Convert a list into an array: >>> a = [1, 2] >>> import numpy as np >>> np.asarray(a) array([1, 2]) Existing arrays are not copied: >>> a = np.array([1, 2]) >>> np.asarray(a) is a True If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is set, array is copied only if dtype does not match: >>> a = np.array([1, 2], dtype=np.float32) >>> np.shares_memory(np.asarray(a, dtype=np.float32), a) True >>> np.shares_memory(np.asarray(a, dtype=np.float64), a) False Contrary to [`asanyarray`](numpy.asanyarray#numpy.asanyarray "numpy.asanyarray"), ndarray subclasses are not passed through: >>> issubclass(np.recarray, np.ndarray) True >>> a = np.array([(1., 2), (3., 4)], dtype='f4,i4').view(np.recarray) >>> np.asarray(a) is a False >>> np.asanyarray(a) is a True # numpy.asarray_chkfinite numpy.asarray_chkfinite(_a_ , _dtype =None_, _order =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L579-L648) Convert the input to an array, checking for NaNs or Infs. Parameters: **a** array_like Input data, in any form that can be converted to an array. This includes lists, lists of tuples, tuples, tuples of tuples, tuples of lists and ndarrays. Success requires no NaNs or Infs. **dtype** data-type, optional By default, the data-type is inferred from the input data. **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Memory layout. ‘A’ and ‘K’ depend on the order of input array a. ‘C’ row-major (C-style), ‘F’ column-major (Fortran-style) memory representation. ‘A’ (any) means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise ‘K’ (keep) preserve input order Defaults to ‘C’. Returns: **out** ndarray Array interpretation of `a`. No copy is performed if the input is already an ndarray. 
If `a` is a subclass of ndarray, a base class ndarray is returned. Raises: ValueError Raises ValueError if `a` contains NaN (Not a Number) or Inf (Infinity). See also [`asarray`](numpy.asarray#numpy.asarray "numpy.asarray") Create an array. [`asanyarray`](numpy.asanyarray#numpy.asanyarray "numpy.asanyarray") Similar function which passes through subclasses. [`ascontiguousarray`](numpy.ascontiguousarray#numpy.ascontiguousarray "numpy.ascontiguousarray") Convert input to a contiguous array. [`asfortranarray`](numpy.asfortranarray#numpy.asfortranarray "numpy.asfortranarray") Convert input to an ndarray with column-major memory order. [`fromiter`](numpy.fromiter#numpy.fromiter "numpy.fromiter") Create an array from an iterator. [`fromfunction`](numpy.fromfunction#numpy.fromfunction "numpy.fromfunction") Construct an array by executing a function on grid positions. #### Examples >>> import numpy as np Convert a list into an array. If all elements are finite, then `asarray_chkfinite` is identical to `asarray`. >>> a = [1, 2] >>> np.asarray_chkfinite(a, dtype=float) array([1., 2.]) Raises ValueError if array_like contains NaNs or Infs. >>> a = [1, 2, np.inf] >>> try: ... np.asarray_chkfinite(a) ... except ValueError: ... print('ValueError') ... ValueError # numpy.ascontiguousarray numpy.ascontiguousarray(_a_ , _dtype =None_, _*_ , _like =None_) Return a contiguous array (ndim >= 1) in memory (C order). Parameters: **a** array_like Input array. **dtype** str or dtype object, optional Data-type of returned array. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0.
Returns: **out** ndarray Contiguous array of same shape and content as `a`, with type [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") if specified. See also [`asfortranarray`](numpy.asfortranarray#numpy.asfortranarray "numpy.asfortranarray") Convert input to an ndarray with column-major memory order. [`require`](numpy.require#numpy.require "numpy.require") Return an ndarray that satisfies requirements. [`ndarray.flags`](numpy.ndarray.flags#numpy.ndarray.flags "numpy.ndarray.flags") Information about the memory layout of the array. #### Examples Starting with a Fortran-contiguous array: >>> import numpy as np >>> x = np.ones((2, 3), order='F') >>> x.flags['F_CONTIGUOUS'] True Calling `ascontiguousarray` makes a C-contiguous copy: >>> y = np.ascontiguousarray(x) >>> y.flags['C_CONTIGUOUS'] True >>> np.may_share_memory(x, y) False Now, starting with a C-contiguous array: >>> x = np.ones((2, 3), order='C') >>> x.flags['C_CONTIGUOUS'] True Then, calling `ascontiguousarray` returns the same object: >>> y = np.ascontiguousarray(x) >>> x is y True Note: This function returns an array with at least one-dimension (1-d) so it will not preserve 0-d arrays. # numpy.asfortranarray numpy.asfortranarray(_a_ , _dtype =None_, _*_ , _like =None_) Return an array (ndim >= 1) laid out in Fortran order in memory. Parameters: **a** array_like Input array. **dtype** str or dtype object, optional By default, the data-type is inferred from the input data. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: **out** ndarray The input `a` in Fortran, or column-major, order. 
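Both `ascontiguousarray` and `asfortranarray` return arrays with at least one dimension, so a 0-d input is promoted to shape `(1,)`. A small check of this behavior (a sketch, not part of the official doctests):

```python
import numpy as np

# Both ascontiguousarray and asfortranarray guarantee ndim >= 1,
# so a 0-d array comes back as a 1-d array of shape (1,).
a0 = np.array(5.0)
print(a0.ndim)                           # 0
print(np.ascontiguousarray(a0).shape)    # (1,)
print(np.asfortranarray(a0).shape)       # (1,)
```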
See also [`ascontiguousarray`](numpy.ascontiguousarray#numpy.ascontiguousarray "numpy.ascontiguousarray") Convert input to a contiguous (C order) array. [`asanyarray`](numpy.asanyarray#numpy.asanyarray "numpy.asanyarray") Convert input to an ndarray with either row or column-major memory order. [`require`](numpy.require#numpy.require "numpy.require") Return an ndarray that satisfies requirements. [`ndarray.flags`](numpy.ndarray.flags#numpy.ndarray.flags "numpy.ndarray.flags") Information about the memory layout of the array. #### Examples Starting with a C-contiguous array: >>> import numpy as np >>> x = np.ones((2, 3), order='C') >>> x.flags['C_CONTIGUOUS'] True Calling `asfortranarray` makes a Fortran-contiguous copy: >>> y = np.asfortranarray(x) >>> y.flags['F_CONTIGUOUS'] True >>> np.may_share_memory(x, y) False Now, starting with a Fortran-contiguous array: >>> x = np.ones((2, 3), order='F') >>> x.flags['F_CONTIGUOUS'] True Then, calling `asfortranarray` returns the same object: >>> y = np.asfortranarray(x) >>> x is y True Note: This function returns an array with at least one-dimension (1-d) so it will not preserve 0-d arrays. # numpy.asin numpy.asin(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'asin'>_ Inverse sine, element-wise. Parameters: **x** array_like `y`-coordinate on the unit circle. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value.
Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **angle** ndarray The inverse sine of each element in `x`, in radians and in the closed interval `[-pi/2, pi/2]`. This is a scalar if `x` is a scalar. See also [`sin`](numpy.sin#numpy.sin "numpy.sin"), [`cos`](numpy.cos#numpy.cos "numpy.cos"), [`arccos`](numpy.arccos#numpy.arccos "numpy.arccos"), [`tan`](numpy.tan#numpy.tan "numpy.tan"), [`arctan`](numpy.arctan#numpy.arctan "numpy.arctan"), [`arctan2`](numpy.arctan2#numpy.arctan2 "numpy.arctan2"), [`emath.arcsin`](numpy.emath.arcsin#numpy.emath.arcsin "numpy.emath.arcsin") #### Notes [`arcsin`](numpy.arcsin#numpy.arcsin "numpy.arcsin") is a multivalued function: for each `x` there are infinitely many numbers `z` such that \\(sin(z) = x\\). The convention is to return the angle `z` whose real part lies in [-pi/2, pi/2]. For real-valued input data types, _arcsin_ always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag. For complex-valued input, [`arcsin`](numpy.arcsin#numpy.arcsin "numpy.arcsin") is a complex analytic function that has, by convention, the branch cuts [-inf, -1] and [1, inf] and is continuous from above on the former and from below on the latter. The inverse sine is also known as `asin` or sin^{-1}. #### References Abramowitz, M. and Stegun, I. A., _Handbook of Mathematical Functions_ , 10th printing, New York: Dover, 1964, pp. 79ff.
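The real-domain behavior described in the Notes can be checked directly: inputs outside `[-1, 1]` yield `nan` and raise the `invalid` floating point flag (a small sketch using `arcsin`, of which `asin` is the NumPy >= 2.0 alias):

```python
import numpy as np

# arcsin is real-valued only on [-1, 1]; outside that interval the
# result is nan and the 'invalid' floating point flag is set.
with np.errstate(invalid="ignore"):   # silence the RuntimeWarning
    out = np.arcsin(np.array([0.5, 2.0]))
print(np.isnan(out))                  # [False  True]
```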
#### Examples >>> import numpy as np >>> np.arcsin(1) # pi/2 1.5707963267948966 >>> np.arcsin(-1) # -pi/2 -1.5707963267948966 >>> np.arcsin(0) 0.0 # numpy.asinh numpy.asinh(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'asinh'>_ Inverse hyperbolic sine element-wise. Parameters: **x** array_like Input array. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **out** ndarray or scalar Array of the same shape as `x`. This is a scalar if `x` is a scalar. #### Notes [`arcsinh`](numpy.arcsinh#numpy.arcsinh "numpy.arcsinh") is a multivalued function: for each `x` there are infinitely many numbers `z` such that `sinh(z) = x`. The convention is to return the `z` whose imaginary part lies in `[-pi/2, pi/2]`. For real-valued input data types, [`arcsinh`](numpy.arcsinh#numpy.arcsinh "numpy.arcsinh") always returns real output. For each value that cannot be expressed as a real number or infinity, it returns `nan` and sets the `invalid` floating point error flag.
For complex-valued input, [`arcsinh`](numpy.arcsinh#numpy.arcsinh "numpy.arcsinh") is a complex analytical function that has branch cuts `[1j, infj]` and `[-1j, -infj]` and is continuous from the right on the former and from the left on the latter. The inverse hyperbolic sine is also known as `asinh` or `sinh^-1`. #### References [1] M. Abramowitz and I.A. Stegun, “Handbook of Mathematical Functions”, 10th printing, 1964, pp. 86. [2] Wikipedia, “Inverse hyperbolic function”, #### Examples >>> import numpy as np >>> np.arcsinh(np.array([np.e, 10.0])) array([ 1.72538256, 2.99822295]) # numpy.asmatrix numpy.asmatrix(_data_ , _dtype =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L35-L69) Interpret the input as a matrix. Unlike [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix"), `asmatrix` does not make a copy if the input is already a matrix or an ndarray. Equivalent to `matrix(data, copy=False)`. Parameters: **data** array_like Input data. **dtype** data-type Data-type of the output matrix. Returns: **mat** matrix `data` interpreted as a matrix. #### Examples >>> import numpy as np >>> x = np.array([[1, 2], [3, 4]]) >>> m = np.asmatrix(x) >>> x[0,0] = 5 >>> m matrix([[5, 2], [3, 4]]) # numpy.astype numpy.astype(_x_ , _dtype_ , _/_ , _*_ , _copy =True_, _device =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L2618-L2681) Copies an array to a specified data type. This function is an Array API compatible alternative to [`numpy.ndarray.astype`](numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype"). Parameters: **x** ndarray Input NumPy array to cast. `array_likes` are explicitly not supported here. **dtype** dtype Data type of the result. **copy** bool, optional Specifies whether to copy an array when the specified dtype matches the data type of the input array `x`. If `True`, a newly allocated array must always be returned. 
If `False` and the specified dtype matches the data type of the input array, the input array must be returned; otherwise, a newly allocated array must be returned. Defaults to `True`. **device** str, optional The device on which to place the returned array. Default: None. For Array-API interoperability only, so must be `"cpu"` if passed. New in version 2.1.0. Returns: **out** ndarray An array having the specified data type. See also [`ndarray.astype`](numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype") #### Examples >>> import numpy as np >>> arr = np.array([1, 2, 3]); arr array([1, 2, 3]) >>> np.astype(arr, np.float64) array([1., 2., 3.]) Non-copy case: >>> arr = np.array([1, 2, 3]) >>> arr_noncpy = np.astype(arr, arr.dtype, copy=False) >>> np.shares_memory(arr, arr_noncpy) True # numpy.atan numpy.atan(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'atan'>_ Trigonometric inverse tangent, element-wise. The inverse of tan, so that if `y = tan(x)` then `x = arctan(y)`. Parameters: **x** array_like **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **out** ndarray or scalar Out has the same shape as `x`.
Its real part is in `[-pi/2, pi/2]` (`arctan(+/-inf)` returns `+/-pi/2`). This is a scalar if `x` is a scalar. See also [`arctan2`](numpy.arctan2#numpy.arctan2 "numpy.arctan2") The “four quadrant” arctan of the angle formed by (`x`, `y`) and the positive `x`-axis. [`angle`](numpy.angle#numpy.angle "numpy.angle") Argument of complex values. #### Notes [`arctan`](numpy.arctan#numpy.arctan "numpy.arctan") is a multi-valued function: for each `x` there are infinitely many numbers `z` such that tan(`z`) = `x`. The convention is to return the angle `z` whose real part lies in [-pi/2, pi/2]. For real-valued input data types, [`arctan`](numpy.arctan#numpy.arctan "numpy.arctan") always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag. For complex-valued input, [`arctan`](numpy.arctan#numpy.arctan "numpy.arctan") is a complex analytic function that has [`1j, infj`] and [`-1j, -infj`] as branch cuts, and is continuous from the left on the former and from the right on the latter. The inverse tangent is also known as `atan` or tan^{-1}. #### References Abramowitz, M. and Stegun, I. A., _Handbook of Mathematical Functions_ , 10th printing, New York: Dover, 1964, pp. 79. #### Examples We expect the arctan of 0 to be 0, and of 1 to be pi/4: >>> import numpy as np >>> np.arctan([0, 1]) array([ 0. , 0.78539816]) >>> np.pi/4 0.78539816339744828 Plot arctan: >>> import matplotlib.pyplot as plt >>> x = np.linspace(-10, 10) >>> plt.plot(x, np.arctan(x)) >>> plt.axis('tight') >>> plt.show() # numpy.atan2 numpy.atan2(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'atan2'>_ Element-wise arc tangent of `x1/x2` choosing the quadrant correctly.
The quadrant (i.e., branch) is chosen so that `arctan2(x1, x2)` is the signed angle in radians between the ray ending at the origin and passing through the point (1,0), and the ray ending at the origin and passing through the point (`x2`, `x1`). (Note the role reversal: the “`y`-coordinate” is the first function parameter, the “`x`-coordinate” is the second.) By IEEE convention, this function is defined for `x2` = +/-0 and for either or both of `x1` and `x2` = +/-inf (see Notes for specific values). This function is not defined for complex-valued arguments; for the so-called argument of complex values, use [`angle`](numpy.angle#numpy.angle "numpy.angle"). Parameters: **x1** array_like, real-valued `y`-coordinates. **x2** array_like, real-valued `x`-coordinates. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **angle** ndarray Array of angles in radians, in the range `[-pi, pi]`. This is a scalar if both `x1` and `x2` are scalars.
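The role reversal and quadrant handling described above are what distinguish `atan2(y, x)` from taking `arctan` of the ratio `y/x`, which collapses opposite quadrants (a quick illustrative sketch):

```python
import numpy as np

y, x = 1.0, -1.0             # a point in the second quadrant
# arctan of the ratio loses the quadrant information:
print(np.arctan(y / x))      # -pi/4: indistinguishable from the fourth quadrant
# arctan2 keeps the signs of both coordinates and recovers it:
print(np.arctan2(y, x))      # 3*pi/4: the correct angle
```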
See also [`arctan`](numpy.arctan#numpy.arctan "numpy.arctan"), [`tan`](numpy.tan#numpy.tan "numpy.tan"), [`angle`](numpy.angle#numpy.angle "numpy.angle") #### Notes _arctan2_ is identical to the `atan2` function of the underlying C library. The following special values are defined in the C standard: [1]

`x1` | `x2` | `arctan2(x1,x2)`
---|---|---
+/- 0 | +0 | +/- 0
+/- 0 | -0 | +/- pi
> 0 | +/-inf | +0 / +pi
< 0 | +/-inf | -0 / -pi
+/-inf | +inf | +/- (pi/4)
+/-inf | -inf | +/- (3*pi/4)

Note that +0 and -0 are distinct floating point numbers, as are +inf and -inf. #### References [1] ISO/IEC standard 9899:1999, “Programming language C.” #### Examples Consider four points in different quadrants: >>> import numpy as np >>> x = np.array([-1, +1, +1, -1]) >>> y = np.array([-1, -1, +1, +1]) >>> np.arctan2(y, x) * 180 / np.pi array([-135., -45., 45., 135.]) Note the order of the parameters. [`arctan2`](numpy.arctan2#numpy.arctan2 "numpy.arctan2") is defined also when `x2` = 0 and at several other special points, obtaining values in the range `[-pi, pi]`: >>> np.arctan2([1., -1.], [0., 0.]) array([ 1.57079633, -1.57079633]) >>> np.arctan2([0., 0., np.inf], [+0., -0., np.inf]) array([0. , 3.14159265, 0.78539816]) # numpy.atanh numpy.atanh(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'atanh'>_ Inverse hyperbolic tangent element-wise. Parameters: **x** array_like Input array. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result.
Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **out** ndarray or scalar Array of the same shape as `x`. This is a scalar if `x` is a scalar. See also [`emath.arctanh`](numpy.emath.arctanh#numpy.emath.arctanh "numpy.emath.arctanh") #### Notes [`arctanh`](numpy.arctanh#numpy.arctanh "numpy.arctanh") is a multivalued function: for each `x` there are infinitely many numbers `z` such that `tanh(z) = x`. The convention is to return the `z` whose imaginary part lies in `[-pi/2, pi/2]`. For real-valued input data types, [`arctanh`](numpy.arctanh#numpy.arctanh "numpy.arctanh") always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag. For complex-valued input, [`arctanh`](numpy.arctanh#numpy.arctanh "numpy.arctanh") is a complex analytical function that has branch cuts `[-1, -inf]` and `[1, inf]` and is continuous from above on the former and from below on the latter. The inverse hyperbolic tangent is also known as `atanh` or `tanh^-1`. #### References [1] M. Abramowitz and I.A. Stegun, “Handbook of Mathematical Functions”, 10th printing, 1964, pp. 86. [2] Wikipedia, “Inverse hyperbolic function”, #### Examples >>> import numpy as np >>> np.arctanh([0, -0.5]) array([ 0. , -0.54930614]) # numpy.atleast_1d numpy.atleast_1d(_* arys_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/shape_base.py#L21-L73) Convert inputs to arrays with at least one dimension. Scalar inputs are converted to 1-dimensional arrays, whilst higher-dimensional inputs are preserved. Parameters: **arys1, arys2, …** array_like One or more input arrays.
Returns: **ret** ndarray An array, or tuple of arrays, each with `a.ndim >= 1`. Copies are made only if necessary. See also [`atleast_2d`](numpy.atleast_2d#numpy.atleast_2d "numpy.atleast_2d"), [`atleast_3d`](numpy.atleast_3d#numpy.atleast_3d "numpy.atleast_3d") #### Examples >>> import numpy as np >>> np.atleast_1d(1.0) array([1.]) >>> x = np.arange(9.0).reshape(3,3) >>> np.atleast_1d(x) array([[0., 1., 2.], [3., 4., 5.], [6., 7., 8.]]) >>> np.atleast_1d(x) is x True >>> np.atleast_1d(1, [3, 4]) (array([1]), array([3, 4])) # numpy.atleast_2d numpy.atleast_2d(_* arys_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/shape_base.py#L80-L132) View inputs as arrays with at least two dimensions. Parameters: **arys1, arys2, …** array_like One or more array-like sequences. Non-array inputs are converted to arrays. Arrays that already have two or more dimensions are preserved. Returns: **res, res2, …** ndarray An array, or tuple of arrays, each with `a.ndim >= 2`. Copies are avoided where possible, and views with two or more dimensions are returned. See also [`atleast_1d`](numpy.atleast_1d#numpy.atleast_1d "numpy.atleast_1d"), [`atleast_3d`](numpy.atleast_3d#numpy.atleast_3d "numpy.atleast_3d") #### Examples >>> import numpy as np >>> np.atleast_2d(3.0) array([[3.]]) >>> x = np.arange(3.0) >>> np.atleast_2d(x) array([[0., 1., 2.]]) >>> np.atleast_2d(x).base is x True >>> np.atleast_2d(1, [1, 2], [[1, 2]]) (array([[1]]), array([[1, 2]]), array([[1, 2]])) # numpy.atleast_3d numpy.atleast_3d(_* arys_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/shape_base.py#L139-L205) View inputs as arrays with at least three dimensions. Parameters: **arys1, arys2, …** array_like One or more array-like sequences. Non-array inputs are converted to arrays. Arrays that already have three or more dimensions are preserved. Returns: **res1, res2, …** ndarray An array, or tuple of arrays, each with `a.ndim >= 3`. 
Copies are avoided where possible, and views with three or more dimensions are returned. For example, a 1-D array of shape `(N,)` becomes a view of shape `(1, N, 1)`, and a 2-D array of shape `(M, N)` becomes a view of shape `(M, N, 1)`. See also [`atleast_1d`](numpy.atleast_1d#numpy.atleast_1d "numpy.atleast_1d"), [`atleast_2d`](numpy.atleast_2d#numpy.atleast_2d "numpy.atleast_2d") #### Examples >>> import numpy as np >>> np.atleast_3d(3.0) array([[[3.]]]) >>> x = np.arange(3.0) >>> np.atleast_3d(x).shape (1, 3, 1) >>> x = np.arange(12.0).reshape(4,3) >>> np.atleast_3d(x).shape (4, 3, 1) >>> np.atleast_3d(x).base is x.base # x is a reshape, so not base itself True >>> for arr in np.atleast_3d([1, 2], [[1, 2]], [[[1, 2]]]): ... print(arr, arr.shape) ... [[[1] [2]]] (1, 2, 1) [[[1] [2]]] (1, 2, 1) [[[1 2]]] (1, 1, 2) # numpy.average numpy.average(_a_ , _axis=None_ , _weights=None_ , _returned=False_ , _*_ , _keepdims=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L415-L576) Compute the weighted average along the specified axis. Parameters: **a** array_like Array containing data to be averaged. If `a` is not an array, a conversion is attempted. **axis** None or int or tuple of ints, optional Axis or axes along which to average `a`. The default, `axis=None`, will average over all of the elements of the input array. If axis is negative it counts from the last to the first axis. If axis is a tuple of ints, averaging is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before. **weights** array_like, optional An array of weights associated with the values in `a`. Each value in `a` contributes to the average according to its associated weight. The array of weights must be the same shape as `a` if no axis is specified, otherwise the weights must have dimensions and shape consistent with `a` along the specified axis.
If `weights=None`, then all data in `a` are assumed to have a weight equal to one. The calculation is: avg = sum(a * weights) / sum(weights) where the sum is over all included elements. The only constraint on the values of `weights` is that `sum(weights)` must not be 0. **returned** bool, optional Default is `False`. If `True`, the tuple (`average`, `sum_of_weights`) is returned, otherwise only the average is returned. If `weights=None`, `sum_of_weights` is equivalent to the number of elements over which the average is taken. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `a`. _Note:_ `keepdims` will not work with instances of [`numpy.matrix`](numpy.matrix#numpy.matrix "numpy.matrix") or other classes whose methods do not support `keepdims`. New in version 1.23.0. Returns: **retval, [sum_of_weights]** array_type or double Return the average along the specified axis. When `returned` is `True`, return a tuple with the average as the first element and the sum of the weights as the second element. `sum_of_weights` is of the same type as `retval`. The result dtype follows a general pattern. If `weights` is None, the result dtype will be that of `a`, or `float64` if `a` is integral. Otherwise, if `weights` is not None and `a` is non-integral, the result type will be the type of lowest precision capable of representing values of both `a` and `weights`. If `a` happens to be integral, the previous rules still apply but the result dtype will at least be `float64`. Raises: ZeroDivisionError When all weights along axis are zero. See [`numpy.ma.average`](numpy.ma.average#numpy.ma.average "numpy.ma.average") for a version robust to this type of error. TypeError When `weights` does not have the same shape as `a`, and `axis=None`.
ValueError When `weights` does not have dimensions and shape consistent with `a` along specified `axis`. See also [`mean`](numpy.mean#numpy.mean "numpy.mean") [`ma.average`](numpy.ma.average#numpy.ma.average "numpy.ma.average") average for masked arrays – useful if your data contains “missing” values [`numpy.result_type`](numpy.result_type#numpy.result_type "numpy.result_type") Returns the type that results from applying the numpy type promotion rules to the arguments. #### Examples >>> import numpy as np >>> data = np.arange(1, 5) >>> data array([1, 2, 3, 4]) >>> np.average(data) 2.5 >>> np.average(np.arange(1, 11), weights=np.arange(10, 0, -1)) 4.0 >>> data = np.arange(6).reshape((3, 2)) >>> data array([[0, 1], [2, 3], [4, 5]]) >>> np.average(data, axis=1, weights=[1./4, 3./4]) array([0.75, 2.75, 4.75]) >>> np.average(data, weights=[1./4, 3./4]) Traceback (most recent call last): ... TypeError: Axis must be specified when shapes of a and weights differ. With `keepdims=True`, the following result has shape (3, 1). >>> np.average(data, axis=1, keepdims=True) array([[0.5], [2.5], [4.5]]) >>> data = np.arange(8).reshape((2, 2, 2)) >>> data array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) >>> np.average(data, axis=(0, 1), weights=[[1./4, 3./4], [1., 1./2]]) array([3.4, 4.4]) >>> np.average(data, axis=0, weights=[[1./4, 3./4], [1., 1./2]]) Traceback (most recent call last): ... ValueError: Shape of weights must be consistent with shape of a along specified axis. # numpy.bartlett numpy.bartlett(_M_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L3158-L3262) Return the Bartlett window. The Bartlett window is very similar to a triangular window, except that the end points are at zero. It is often used in signal processing for tapering a signal, without generating too much ripple in the frequency domain. Parameters: **M** int Number of points in the output window. If zero or less, an empty array is returned. 
Returns: **out** array The triangular window, with the maximum value normalized to one (the value one appears only if the number of samples is odd), with the first and last samples equal to zero. See also [`blackman`](numpy.blackman#numpy.blackman "numpy.blackman"), [`hamming`](numpy.hamming#numpy.hamming "numpy.hamming"), [`hanning`](numpy.hanning#numpy.hanning "numpy.hanning"), [`kaiser`](numpy.kaiser#numpy.kaiser "numpy.kaiser") #### Notes The Bartlett window is defined as \\[w(n) = \frac{2}{M-1} \left( \frac{M-1}{2} - \left|n - \frac{M-1}{2}\right| \right)\\] Most references to the Bartlett window come from the signal processing literature, where it is used as one of many windowing functions for smoothing values. Note that convolution with this window produces linear interpolation. It is also known as an apodization (which means “removing the foot”, i.e. smoothing discontinuities at the beginning and end of the sampled signal) or tapering function. The Fourier transform of the Bartlett window is the product of two sinc functions. Note the excellent discussion in Kanasewich [2]. #### References [1] M.S. Bartlett, “Periodogram Analysis and Continuous Spectra”, Biometrika 37, 1-16, 1950. [2] E.R. Kanasewich, “Time Sequence Analysis in Geophysics”, The University of Alberta Press, 1975, pp. 109-110. [3] A.V. Oppenheim and R.W. Schafer, “Discrete-Time Signal Processing”, Prentice-Hall, 1999, pp. 468-471. [4] Wikipedia, “Window function”. [5] W.H. Press, B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling, “Numerical Recipes”, Cambridge University Press, 1986, page 429. #### Examples >>> import numpy as np >>> import matplotlib.pyplot as plt >>> np.bartlett(12) array([ 0. , 0.18181818, 0.36363636, 0.54545455, 0.72727273, # may vary 0.90909091, 0.90909091, 0.72727273, 0.54545455, 0.36363636, 0.18181818, 0. ]) Plot the window and its frequency response (requires SciPy and matplotlib). 
```python
import numpy as np
import matplotlib.pyplot as plt
from numpy.fft import fft, fftshift

window = np.bartlett(51)
plt.plot(window)
plt.title("Bartlett window")
plt.ylabel("Amplitude")
plt.xlabel("Sample")
plt.show()

plt.figure()
A = fft(window, 2048) / 25.5
mag = np.abs(fftshift(A))
freq = np.linspace(-0.5, 0.5, len(A))
with np.errstate(divide='ignore', invalid='ignore'):
    response = 20 * np.log10(mag)
response = np.clip(response, -100, 100)
plt.plot(freq, response)
plt.title("Frequency response of Bartlett window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
plt.axis('tight')
plt.show()
```

# numpy.base_repr numpy.base_repr(_number_ , _base=2_, _padding=0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L2122-L2177) Return a string representation of a number in the given base system. Parameters: **number** int The value to convert. Positive and negative values are handled. **base** int, optional Convert [`number`](../arrays.scalars#numpy.number "numpy.number") to the `base` number system. The valid range is 2-36, the default value is 2. **padding** int, optional Number of zeros padded on the left. Default is 0 (no padding). Returns: **out** str String representation of [`number`](../arrays.scalars#numpy.number "numpy.number") in `base` system. See also [`binary_repr`](numpy.binary_repr#numpy.binary_repr "numpy.binary_repr") Faster version of `base_repr` for base 2. #### Examples >>> import numpy as np >>> np.base_repr(5) '101' >>> np.base_repr(6, 5) '11' >>> np.base_repr(7, base=5, padding=3) '00012' >>> np.base_repr(10, base=16) 'A' >>> np.base_repr(32, base=16) '20' # numpy.binary_repr numpy.binary_repr(_num_ , _width=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L2014-L2119) Return the binary representation of the input number as a string. For negative numbers, if width is not given, a minus sign is added to the front. 
If width is given, the two’s complement of the number is returned, with respect to that width. In a two’s-complement system negative numbers are represented by the two’s complement of the absolute value. This is the most common method of representing signed integers on computers [1]. An N-bit two’s-complement system can represent every integer in the range \\(-2^{N-1}\\) to \\(+2^{N-1}-1\\). Parameters: **num** int Only an integer decimal number can be used. **width** int, optional The length of the returned string if `num` is positive, or the length of the two’s complement if `num` is negative, provided that `width` is at least a sufficient number of bits for `num` to be represented in the designated form. If the `width` value is insufficient, an error is raised. Returns: **bin** str Binary representation of `num` or two’s complement of `num`. See also [`base_repr`](numpy.base_repr#numpy.base_repr "numpy.base_repr") Return a string representation of a number in the given base system. [`bin`](https://docs.python.org/3/library/functions.html#bin "\(in Python v3.13\)") Python’s built-in binary representation generator of an integer. #### Notes `binary_repr` is equivalent to using [`base_repr`](numpy.base_repr#numpy.base_repr "numpy.base_repr") with base 2, but about 25x faster. #### References [1] Wikipedia, “Two’s complement”, [https://en.wikipedia.org/wiki/Two’s_complement](https://en.wikipedia.org/wiki/Two's_complement) #### Examples >>> import numpy as np >>> np.binary_repr(3) '11' >>> np.binary_repr(-3) '-11' >>> np.binary_repr(3, width=4) '0011' The two’s complement is returned when the input number is negative and width is specified: >>> np.binary_repr(-3, width=3) '101' >>> np.binary_repr(-3, width=5) '11101' # numpy.bincount numpy.bincount(_x_ , _/_ , _weights=None_, _minlength=0_) Count number of occurrences of each value in array of non-negative ints. The number of bins (of size 1) is one larger than the largest value in `x`. 
If `minlength` is specified, there will be at least this number of bins in the output array (though it will be longer if necessary, depending on the contents of `x`). Each bin gives the number of occurrences of its index value in `x`. If `weights` is specified the input array is weighted by it, i.e. if a value `n` is found at position `i`, `out[n] += weight[i]` instead of `out[n] += 1`. Parameters: **x** array_like, 1 dimension, nonnegative ints Input array. **weights** array_like, optional Weights, array of the same shape as `x`. **minlength** int, optional A minimum number of bins for the output array. Returns: **out** ndarray of ints The result of binning the input array. The length of `out` is equal to `np.amax(x)+1`. Raises: ValueError If the input is not 1-dimensional, or contains elements with negative values, or if `minlength` is negative. TypeError If the type of the input is float or complex. See also [`histogram`](numpy.histogram#numpy.histogram "numpy.histogram"), [`digitize`](numpy.digitize#numpy.digitize "numpy.digitize"), [`unique`](numpy.unique#numpy.unique "numpy.unique") #### Examples >>> import numpy as np >>> np.bincount(np.arange(5)) array([1, 1, 1, 1, 1]) >>> np.bincount(np.array([0, 1, 1, 3, 2, 1, 7])) array([1, 3, 1, 1, 0, 0, 0, 1]) >>> x = np.array([0, 1, 1, 3, 2, 1, 7, 23]) >>> np.bincount(x).size == np.amax(x)+1 True The input array needs to be of integer dtype, otherwise a TypeError is raised: >>> np.bincount(np.arange(5, dtype=float)) Traceback (most recent call last): ... TypeError: Cannot cast array data from dtype('float64') to dtype('int64') according to the rule 'safe' A possible use of `bincount` is to perform sums over variable-size chunks of an array, using the `weights` keyword. 
>>> w = np.array([0.3, 0.5, 0.2, 0.7, 1., -0.6]) # weights >>> x = np.array([0, 1, 1, 2, 2, 2]) >>> np.bincount(x, weights=w) array([ 0.3, 0.7, 1.1]) # numpy.bitwise_and numpy.bitwise_and(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'bitwise_and'>_ Compute the bit-wise AND of two arrays element-wise. Computes the bit-wise AND of the underlying binary representation of the integers in the input arrays. This ufunc implements the C/Python operator `&`. Parameters: **x1, x2** array_like Only integer and boolean types are handled. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **out** ndarray or scalar Result. This is a scalar if both `x1` and `x2` are scalars. See also [`logical_and`](numpy.logical_and#numpy.logical_and "numpy.logical_and") [`bitwise_or`](numpy.bitwise_or#numpy.bitwise_or "numpy.bitwise_or") [`bitwise_xor`](numpy.bitwise_xor#numpy.bitwise_xor "numpy.bitwise_xor") [`binary_repr`](numpy.binary_repr#numpy.binary_repr "numpy.binary_repr") Return the binary representation of the input number as a string. 
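The `out` and `where` behaviour described in the parameters above can be sketched; the values here are arbitrary, and `out` is pre-filled so that the masked slots hold a known value rather than uninitialized memory:

```python
import numpy as np

x1 = np.array([13, 13, 13])
x2 = np.array([17, 16, 1])
# Pre-fill out: slots where the condition is False keep this value,
# instead of being left uninitialized as with out=None.
out = np.full(3, -1)
np.bitwise_and(x1, x2, out=out, where=[True, False, True])
# out is now [1, -1, 1]: position 1 was masked and kept -1.
```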
#### Examples >>> import numpy as np The number 13 is represented by `00001101`. Likewise, 17 is represented by `00010001`. The bit-wise AND of 13 and 17 is therefore `00000001`, or 1: >>> np.bitwise_and(13, 17) 1 >>> np.bitwise_and(14, 13) 12 >>> np.binary_repr(12) '1100' >>> np.bitwise_and([14,3], 13) array([12, 1]) >>> np.bitwise_and([11,7], [4,25]) array([0, 1]) >>> np.bitwise_and(np.array([2,5,255]), np.array([3,14,16])) array([ 2, 4, 16]) >>> np.bitwise_and([True, True], [False, True]) array([False, True]) The `&` operator can be used as a shorthand for `np.bitwise_and` on ndarrays. >>> x1 = np.array([2, 5, 255]) >>> x2 = np.array([3, 14, 16]) >>> x1 & x2 array([ 2, 4, 16]) # numpy.bitwise_count numpy.bitwise_count(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'bitwise_count'>_ Computes the number of 1-bits in the absolute value of `x`. Analogous to the builtin `int.bit_count` or `popcount` in C++. Parameters: **x** array_like, unsigned int Input array. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray The corresponding number of 1-bits in the input. Returns uint8 for all integer types. This is a scalar if `x` is a scalar. 
#### Examples >>> import numpy as np >>> np.bitwise_count(1023) np.uint8(10) >>> a = np.array([2**i - 1 for i in range(16)]) >>> np.bitwise_count(a) array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], dtype=uint8) # numpy.bitwise_invert numpy.bitwise_invert(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'invert'>_ Compute bit-wise inversion, or bit-wise NOT, element-wise. Computes the bit-wise NOT of the underlying binary representation of the integers in the input arrays. This ufunc implements the C/Python operator `~`. For signed integer inputs, the bit-wise NOT of the absolute value is returned. In a two’s-complement system, this operation effectively flips all the bits, so for an input `x` the result corresponds to `-(x + 1)`. This is the most common method of representing signed integers on computers [1]. An N-bit two’s-complement system can represent every integer in the range \\(-2^{N-1}\\) to \\(+2^{N-1}-1\\). Parameters: **x** array_like Only integer and boolean types are handled. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). 
Returns: **out** ndarray or scalar Result. This is a scalar if `x` is a scalar. See also [`bitwise_and`](numpy.bitwise_and#numpy.bitwise_and "numpy.bitwise_and"), [`bitwise_or`](numpy.bitwise_or#numpy.bitwise_or "numpy.bitwise_or"), [`bitwise_xor`](numpy.bitwise_xor#numpy.bitwise_xor "numpy.bitwise_xor") [`logical_not`](numpy.logical_not#numpy.logical_not "numpy.logical_not") [`binary_repr`](numpy.binary_repr#numpy.binary_repr "numpy.binary_repr") Return the binary representation of the input number as a string. #### Notes `numpy.bitwise_not` is an alias for [`invert`](numpy.invert#numpy.invert "numpy.invert"): >>> np.bitwise_not is np.invert True #### References [1] Wikipedia, “Two’s complement”, [https://en.wikipedia.org/wiki/Two’s_complement](https://en.wikipedia.org/wiki/Two's_complement) #### Examples >>> import numpy as np We’ve seen that 13 is represented by `00001101`. The invert or bit-wise NOT of 13 is then: >>> x = np.invert(np.array(13, dtype=np.uint8)) >>> x np.uint8(242) >>> np.binary_repr(x, width=8) '11110010' The result depends on the bit-width: >>> x = np.invert(np.array(13, dtype=np.uint16)) >>> x np.uint16(65522) >>> np.binary_repr(x, width=16) '1111111111110010' When using signed integer types, the result is the bit-wise NOT of the unsigned type, interpreted as a signed integer: >>> np.invert(np.array([13], dtype=np.int8)) array([-14], dtype=int8) >>> np.binary_repr(-14, width=8) '11110010' Booleans are accepted as well: >>> np.invert(np.array([True, False])) array([False, True]) The `~` operator can be used as a shorthand for `np.invert` on ndarrays. >>> x1 = np.array([True, False]) >>> ~x1 array([False, True]) # numpy.bitwise_left_shift numpy.bitwise_left_shift(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'left_shift'>_ Shift the bits of an integer to the left. Bits are shifted to the left by appending `x2` 0s at the right of `x1`. 
Since the internal representation of numbers is in binary format, this operation is equivalent to multiplying `x1` by `2**x2`. Parameters: **x1** array_like of integer type Input values. **x2** array_like of integer type Number of zeros to append to `x1`. Has to be non-negative. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** array of integer type Return `x1` with bits shifted `x2` times to the left. This is a scalar if both `x1` and `x2` are scalars. See also [`right_shift`](numpy.right_shift#numpy.right_shift "numpy.right_shift") Shift the bits of an integer to the right. [`binary_repr`](numpy.binary_repr#numpy.binary_repr "numpy.binary_repr") Return the binary representation of the input number as a string. 
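The multiplication equivalence noted above can be checked directly; a minimal sketch:

```python
import numpy as np

x = np.array([1, 2, 3], dtype=np.int64)
# Shifting left by k is the same as multiplying by 2**k,
# as long as no bits overflow the dtype.
shifted = np.left_shift(x, 4)
multiplied = x * 2**4
# Both give [16, 32, 48].
```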
#### Examples >>> import numpy as np >>> np.binary_repr(5) '101' >>> np.left_shift(5, 2) 20 >>> np.binary_repr(20) '10100' >>> np.left_shift(5, [1,2,3]) array([10, 20, 40]) Note that the dtype of the second argument may change the dtype of the result and can lead to unexpected results in some cases (see [Casting Rules](../../user/basics.ufuncs#ufuncs-casting)): >>> a = np.left_shift(np.uint8(255), np.int64(1)) # Expect 254 >>> print(a, type(a)) # Unexpected result due to upcasting 510 <class 'numpy.int64'> >>> b = np.left_shift(np.uint8(255), np.uint8(1)) >>> print(b, type(b)) 254 <class 'numpy.uint8'> The `<<` operator can be used as a shorthand for `np.left_shift` on ndarrays. >>> x1 = 5 >>> x2 = np.array([1, 2, 3]) >>> x1 << x2 array([10, 20, 40]) # numpy.bitwise_or numpy.bitwise_or(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'bitwise_or'>_ Compute the bit-wise OR of two arrays element-wise. Computes the bit-wise OR of the underlying binary representation of the integers in the input arrays. This ufunc implements the C/Python operator `|`. Parameters: **x1, x2** array_like Only integer and boolean types are handled. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. 
****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **out** ndarray or scalar Result. This is a scalar if both `x1` and `x2` are scalars. See also [`logical_or`](numpy.logical_or#numpy.logical_or "numpy.logical_or") [`bitwise_and`](numpy.bitwise_and#numpy.bitwise_and "numpy.bitwise_and") [`bitwise_xor`](numpy.bitwise_xor#numpy.bitwise_xor "numpy.bitwise_xor") [`binary_repr`](numpy.binary_repr#numpy.binary_repr "numpy.binary_repr") Return the binary representation of the input number as a string. #### Examples >>> import numpy as np The number 13 has the binary representation `00001101`. Likewise, 16 is represented by `00010000`. The bit-wise OR of 13 and 16 is then `00011101`, or 29: >>> np.bitwise_or(13, 16) 29 >>> np.binary_repr(29) '11101' >>> np.bitwise_or(32, 2) 34 >>> np.bitwise_or([33, 4], 1) array([33, 5]) >>> np.bitwise_or([33, 4], [1, 2]) array([33, 6]) >>> np.bitwise_or(np.array([2, 5, 255]), np.array([4, 4, 4])) array([ 6, 5, 255]) >>> np.array([2, 5, 255]) | np.array([4, 4, 4]) array([ 6, 5, 255]) >>> np.bitwise_or(np.array([2, 5, 255, 2147483647], dtype=np.int32), ... np.array([4, 4, 4, 2147483647], dtype=np.int32)) array([ 6, 5, 255, 2147483647], dtype=int32) >>> np.bitwise_or([True, True], [False, True]) array([ True, True]) The `|` operator can be used as a shorthand for `np.bitwise_or` on ndarrays. >>> x1 = np.array([2, 5, 255]) >>> x2 = np.array([4, 4, 4]) >>> x1 | x2 array([ 6, 5, 255]) # numpy.bitwise_right_shift numpy.bitwise_right_shift(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'right_shift'>_ Shift the bits of an integer to the right. Bits are shifted to the right by `x2` places. Because the internal representation of numbers is in binary format, this operation is equivalent to dividing `x1` by `2**x2`. Parameters: **x1** array_like, int Input values. **x2** array_like, int Number of bits to remove at the right of `x1`. 
If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **out** ndarray, int Return `x1` with bits shifted `x2` times to the right. This is a scalar if both `x1` and `x2` are scalars. See also [`left_shift`](numpy.left_shift#numpy.left_shift "numpy.left_shift") Shift the bits of an integer to the left. [`binary_repr`](numpy.binary_repr#numpy.binary_repr "numpy.binary_repr") Return the binary representation of the input number as a string. #### Examples >>> import numpy as np >>> np.binary_repr(10) '1010' >>> np.right_shift(10, 1) 5 >>> np.binary_repr(5) '101' >>> np.right_shift(10, [1,2,3]) array([5, 2, 1]) The `>>` operator can be used as a shorthand for `np.right_shift` on ndarrays. >>> x1 = 10 >>> x2 = np.array([1,2,3]) >>> x1 >> x2 array([5, 2, 1]) # numpy.bitwise_xor numpy.bitwise_xor(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'bitwise_xor'>_ Compute the bit-wise XOR of two arrays element-wise. Computes the bit-wise XOR of the underlying binary representation of the integers in the input arrays. 
This ufunc implements the C/Python operator `^`. Parameters: **x1, x2** array_like Only integer and boolean types are handled. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** ndarray or scalar Result. This is a scalar if both `x1` and `x2` are scalars. See also [`logical_xor`](numpy.logical_xor#numpy.logical_xor "numpy.logical_xor") [`bitwise_and`](numpy.bitwise_and#numpy.bitwise_and "numpy.bitwise_and") [`bitwise_or`](numpy.bitwise_or#numpy.bitwise_or "numpy.bitwise_or") [`binary_repr`](numpy.binary_repr#numpy.binary_repr "numpy.binary_repr") Return the binary representation of the input number as a string. #### Examples >>> import numpy as np The number 13 is represented by `00001101`. Likewise, 17 is represented by `00010001`. 
The bit-wise XOR of 13 and 17 is therefore `00011100`, or 28: >>> np.bitwise_xor(13, 17) 28 >>> np.binary_repr(28) '11100' >>> np.bitwise_xor(31, 5) 26 >>> np.bitwise_xor([31,3], 5) array([26, 6]) >>> np.bitwise_xor([31,3], [5,6]) array([26, 5]) >>> np.bitwise_xor([True, True], [False, True]) array([ True, False]) The `^` operator can be used as a shorthand for `np.bitwise_xor` on ndarrays. >>> x1 = np.array([True, True]) >>> x2 = np.array([False, True]) >>> x1 ^ x2 array([ True, False]) # numpy.blackman numpy.blackman(_M_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L3058-L3155) Return the Blackman window. The Blackman window is a taper formed by using the first three terms of a summation of cosines. It was designed to have close to the minimal leakage possible. It is close to optimal, only slightly worse than a Kaiser window. Parameters: **M** int Number of points in the output window. If zero or less, an empty array is returned. Returns: **out** ndarray The window, with the maximum value normalized to one (the value one appears only if the number of samples is odd). See also [`bartlett`](numpy.bartlett#numpy.bartlett "numpy.bartlett"), [`hamming`](numpy.hamming#numpy.hamming "numpy.hamming"), [`hanning`](numpy.hanning#numpy.hanning "numpy.hanning"), [`kaiser`](numpy.kaiser#numpy.kaiser "numpy.kaiser") #### Notes The Blackman window is defined as \\[w(n) = 0.42 - 0.5 \cos(2\pi n/M) + 0.08 \cos(4\pi n/M)\\] Most references to the Blackman window come from the signal processing literature, where it is used as one of many windowing functions for smoothing values. It is also known as an apodization (which means “removing the foot”, i.e. smoothing discontinuities at the beginning and end of the sampled signal) or tapering function. It is known as a “near optimal” tapering function, almost as good (by some measures) as the kaiser window. #### References Blackman, R.B. 
and Tukey, J.W., (1958) The measurement of power spectra, Dover Publications, New York. Oppenheim, A.V., and R.W. Schafer. Discrete-Time Signal Processing. Upper Saddle River, NJ: Prentice-Hall, 1999, pp. 468-471. #### Examples >>> import numpy as np >>> import matplotlib.pyplot as plt >>> np.blackman(12) array([-1.38777878e-17, 3.26064346e-02, 1.59903635e-01, # may vary 4.14397981e-01, 7.36045180e-01, 9.67046769e-01, 9.67046769e-01, 7.36045180e-01, 4.14397981e-01, 1.59903635e-01, 3.26064346e-02, -1.38777878e-17]) Plot the window and the frequency response.

```python
import numpy as np
import matplotlib.pyplot as plt
from numpy.fft import fft, fftshift

window = np.blackman(51)
plt.plot(window)
plt.title("Blackman window")
plt.ylabel("Amplitude")
plt.xlabel("Sample")
plt.show()

plt.figure()
A = fft(window, 2048) / 25.5
mag = np.abs(fftshift(A))
freq = np.linspace(-0.5, 0.5, len(A))
with np.errstate(divide='ignore', invalid='ignore'):
    response = 20 * np.log10(mag)
response = np.clip(response, -100, 100)
plt.plot(freq, response)
plt.title("Frequency response of Blackman window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
plt.axis('tight')
plt.show()
```

# numpy.block numpy.block(_arrays_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/shape_base.py#L784-L953) Assemble an nd-array from nested lists of blocks. Blocks in the innermost lists are concatenated (see [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate")) along the last dimension (-1), then these are concatenated along the second-last dimension (-2), and so on until the outermost list is reached. Blocks can be of any dimension, but will not be broadcasted using the normal rules. Instead, leading axes of size 1 are inserted, to make `block.ndim` the same for all blocks. This is primarily useful for working with scalars, and means that code like `np.block([v, 1])` is valid, where `v.ndim == 1`. 
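The scalar handling described above (leading size-1 axes are inserted so that `np.block([v, 1])` is valid when `v.ndim == 1`) can be sketched:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
# The scalar 1 is treated as a length-1 block and concatenated
# along the last axis, giving [1., 2., 3., 1.].
result = np.block([v, 1])
```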
When the nested list is two levels deep, this allows block matrices to be constructed from their components. Parameters: **arrays** nested list of array_like or scalars (but not tuples) If passed a single ndarray or scalar (a nested list of depth 0), this is returned unmodified (and not copied). Element shapes must match along the appropriate axes (without broadcasting), but leading 1s will be prepended to the shape as necessary to make the dimensions match. Returns: **block_array** ndarray The array assembled from the given blocks. The dimensionality of the output is equal to the greatest of: * the dimensionality of all the inputs * the depth to which the input list is nested Raises: ValueError * If list depths are mismatched - for instance, `[[a, b], c]` is illegal, and should be spelt `[[a, b], [c]]` * If lists are empty - for instance, `[[a, b], []]` See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`stack`](numpy.stack#numpy.stack "numpy.stack") Join a sequence of arrays along a new axis. [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack") Stack arrays in sequence vertically (row wise). [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack") Stack arrays in sequence horizontally (column wise). [`dstack`](numpy.dstack#numpy.dstack "numpy.dstack") Stack arrays in sequence depth wise (along third axis). [`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. [`vsplit`](numpy.vsplit#numpy.vsplit "numpy.vsplit") Split an array into multiple sub-arrays vertically (row-wise). [`unstack`](numpy.unstack#numpy.unstack "numpy.unstack") Split an array into a tuple of sub-arrays along an axis. #### Notes When called with only scalars, `np.block` is equivalent to an ndarray call. So `np.block([[1, 2], [3, 4]])` is equivalent to `np.array([[1, 2], [3, 4]])`. This function does not enforce that the blocks lie on a fixed grid. 
`np.block([[a, b], [c, d]])` is not restricted to arrays of the form: AAAbb AAAbb cccDD But is also allowed to produce, for some `a, b, c, d`: AAAbb AAAbb cDDDD Since concatenation happens along the last axis first, `block` is _not_ capable of producing the following directly: AAAbb cccbb cccDD Matlab’s “square bracket stacking”, `[A, B, ...; p, q, ...]`, is equivalent to `np.block([[A, B, ...], [p, q, ...]])`. #### Examples The most common use of this function is to build a block matrix: >>> import numpy as np >>> A = np.eye(2) * 2 >>> B = np.eye(3) * 3 >>> np.block([ ... [A, np.zeros((2, 3))], ... [np.ones((3, 2)), B ] ... ]) array([[2., 0., 0., 0., 0.], [0., 2., 0., 0., 0.], [1., 1., 3., 0., 0.], [1., 1., 0., 3., 0.], [1., 1., 0., 0., 3.]]) With a list of depth 1, `block` can be used as [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack"): >>> np.block([1, 2, 3]) # hstack([1, 2, 3]) array([1, 2, 3]) >>> a = np.array([1, 2, 3]) >>> b = np.array([4, 5, 6]) >>> np.block([a, b, 10]) # hstack([a, b, 10]) array([ 1, 2, 3, 4, 5, 6, 10]) >>> A = np.ones((2, 2), int) >>> B = 2 * A >>> np.block([A, B]) # hstack([A, B]) array([[1, 1, 2, 2], [1, 1, 2, 2]]) With a list of depth 2, `block` can be used in place of [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack"): >>> a = np.array([1, 2, 3]) >>> b = np.array([4, 5, 6]) >>> np.block([[a], [b]]) # vstack([a, b]) array([[1, 2, 3], [4, 5, 6]]) >>> A = np.ones((2, 2), int) >>> B = 2 * A >>> np.block([[A], [B]]) # vstack([A, B]) array([[1, 1], [1, 1], [2, 2], [2, 2]]) It can also be used in place of [`atleast_1d`](numpy.atleast_1d#numpy.atleast_1d "numpy.atleast_1d") and [`atleast_2d`](numpy.atleast_2d#numpy.atleast_2d "numpy.atleast_2d"): >>> a = np.array(0) >>> b = np.array([1]) >>> np.block([a]) # atleast_1d(a) array([0]) >>> np.block([b]) # atleast_1d(b) array([1]) >>> np.block([[a]]) # atleast_2d(a) array([[0]]) >>> np.block([[b]]) # atleast_2d(b) array([[1]]) # numpy.bmat numpy.bmat(_obj_ , _ldict =None_, _gdict 
=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L1041-L1118) Build a matrix object from a string, nested sequence, or array. Parameters: **obj** str or array_like Input data. If a string, variables in the current scope may be referenced by name. **ldict** dict, optional A dictionary that replaces local operands in current frame. Ignored if `obj` is not a string or `gdict` is None. **gdict** dict, optional A dictionary that replaces global operands in current frame. Ignored if `obj` is not a string. Returns: **out** matrix Returns a matrix object, which is a specialized 2-D array. See also [`block`](numpy.block#numpy.block "numpy.block") A generalization of this function for N-d arrays, that returns normal ndarrays. #### Examples >>> import numpy as np >>> A = np.asmatrix('1 1; 1 1') >>> B = np.asmatrix('2 2; 2 2') >>> C = np.asmatrix('3 4; 5 6') >>> D = np.asmatrix('7 8; 9 0') All the following expressions construct the same block matrix: >>> np.bmat([[A, B], [C, D]]) matrix([[1, 1, 2, 2], [1, 1, 2, 2], [3, 4, 7, 8], [5, 6, 9, 0]]) >>> np.bmat(np.r_[np.c_[A, B], np.c_[C, D]]) matrix([[1, 1, 2, 2], [1, 1, 2, 2], [3, 4, 7, 8], [5, 6, 9, 0]]) >>> np.bmat('A,B; C,D') matrix([[1, 1, 2, 2], [1, 1, 2, 2], [3, 4, 7, 8], [5, 6, 9, 0]]) # numpy.broadcast _class_ numpy.broadcast[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/__init__.py) Produce an object that mimics broadcasting. Parameters: **in1, in2, …** array_like Input parameters. Returns: **b** broadcast object Broadcast the input parameters against one another, and return an object that encapsulates the result. Amongst others, it has `shape` and `nd` properties, and may be used as an iterator. 
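The iterator behaviour just described can be sketched as follows (a small illustration; variable names are not from the reference text):

```python
import numpy as np

x = np.array([[1], [2]])    # shape (2, 1)
y = np.array([10, 20, 30])  # shape (3,)

b = np.broadcast(x, y)
print(b.shape)    # (2, 3) -- the broadcast result shape
print(b.numiter)  # 2      -- one flat iterator per input

# Iterating yields one tuple of paired input elements per output position;
# the first row pairs 1 with 10, 20, 30 in turn.
pairs = list(b)
print(pairs[:3])
```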
See also [`broadcast_arrays`](numpy.broadcast_arrays#numpy.broadcast_arrays "numpy.broadcast_arrays") [`broadcast_to`](numpy.broadcast_to#numpy.broadcast_to "numpy.broadcast_to") [`broadcast_shapes`](numpy.broadcast_shapes#numpy.broadcast_shapes "numpy.broadcast_shapes") #### Examples Manually adding two vectors, using broadcasting: >>> import numpy as np >>> x = np.array([[1], [2], [3]]) >>> y = np.array([4, 5, 6]) >>> b = np.broadcast(x, y) >>> out = np.empty(b.shape) >>> out.flat = [u+v for (u,v) in b] >>> out array([[5., 6., 7.], [6., 7., 8.], [7., 8., 9.]]) Compare against built-in broadcasting: >>> x + y array([[5, 6, 7], [6, 7, 8], [7, 8, 9]]) Attributes: [`index`](numpy.broadcast.index#numpy.broadcast.index "numpy.broadcast.index") current index in broadcasted result [`iters`](numpy.broadcast.iters#numpy.broadcast.iters "numpy.broadcast.iters") tuple of iterators along `self`’s “components.” [`nd`](numpy.broadcast.nd#numpy.broadcast.nd "numpy.broadcast.nd") Number of dimensions of broadcasted result. [`ndim`](numpy.ndim#numpy.ndim "numpy.ndim") Number of dimensions of broadcasted result. [`numiter`](numpy.broadcast.numiter#numpy.broadcast.numiter "numpy.broadcast.numiter") Number of iterators possessed by the broadcasted result. [`shape`](numpy.shape#numpy.shape "numpy.shape") Shape of broadcasted result. [`size`](numpy.size#numpy.size "numpy.size") Total size of broadcasted result. #### Methods [`reset`](numpy.broadcast.reset#numpy.broadcast.reset "numpy.broadcast.reset")() | Reset the broadcasted result's iterator(s). 
---|--- # numpy.broadcast.index attribute broadcast.index current index in broadcasted result #### Examples >>> import numpy as np >>> x = np.array([[1], [2], [3]]) >>> y = np.array([4, 5, 6]) >>> b = np.broadcast(x, y) >>> b.index 0 >>> next(b), next(b), next(b) ((1, 4), (1, 5), (1, 6)) >>> b.index 3 # numpy.broadcast.iters attribute broadcast.iters tuple of iterators along `self`’s “components.” Returns a tuple of [`numpy.flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") objects, one for each “component” of `self`. See also [`numpy.flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") #### Examples >>> import numpy as np >>> x = np.array([1, 2, 3]) >>> y = np.array([[4], [5], [6]]) >>> b = np.broadcast(x, y) >>> row, col = b.iters >>> next(row), next(col) (1, 4) # numpy.broadcast.nd attribute broadcast.nd Number of dimensions of broadcasted result. For code intended for NumPy 1.12.0 and later the more consistent [`ndim`](numpy.ndim#numpy.ndim "numpy.ndim") is preferred. #### Examples >>> import numpy as np >>> x = np.array([1, 2, 3]) >>> y = np.array([[4], [5], [6]]) >>> b = np.broadcast(x, y) >>> b.nd 2 # numpy.broadcast.numiter attribute broadcast.numiter Number of iterators possessed by the broadcasted result. #### Examples >>> import numpy as np >>> x = np.array([1, 2, 3]) >>> y = np.array([[4], [5], [6]]) >>> b = np.broadcast(x, y) >>> b.numiter 2 # numpy.broadcast.reset method broadcast.reset() Reset the broadcasted result’s iterator(s). Parameters: **None** Returns: None #### Examples >>> import numpy as np >>> x = np.array([1, 2, 3]) >>> y = np.array([[4], [5], [6]]) >>> b = np.broadcast(x, y) >>> b.index 0 >>> next(b), next(b), next(b) ((1, 4), (2, 4), (3, 4)) >>> b.index 3 >>> b.reset() >>> b.index 0 # numpy.broadcast_arrays numpy.broadcast_arrays(_* args_, _subok =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_stride_tricks_impl.py#L481-L549) Broadcast any number of arrays against each other. 
Parameters: ***args** array_likes The arrays to broadcast. **subok** bool, optional If True, then sub-classes will be passed-through, otherwise the returned arrays will be forced to be a base-class array (default). Returns: **broadcasted** tuple of arrays These arrays are views on the original arrays. They are typically not contiguous. Furthermore, more than one element of a broadcasted array may refer to a single memory location. If you need to write to the arrays, make copies first. While you can set the `writable` flag True, writing to a single output value may end up changing more than one location in the output array. Deprecated since version 1.17: The output is currently marked so that if written to, a deprecation warning will be emitted. A future version will set the `writable` flag False so writing to it will raise an error. See also [`broadcast`](numpy.broadcast#numpy.broadcast "numpy.broadcast") [`broadcast_to`](numpy.broadcast_to#numpy.broadcast_to "numpy.broadcast_to") [`broadcast_shapes`](numpy.broadcast_shapes#numpy.broadcast_shapes "numpy.broadcast_shapes") #### Examples >>> import numpy as np >>> x = np.array([[1,2,3]]) >>> y = np.array([[4],[5]]) >>> np.broadcast_arrays(x, y) (array([[1, 2, 3], [1, 2, 3]]), array([[4, 4, 4], [5, 5, 5]])) Here is a useful idiom for getting contiguous copies instead of non-contiguous views. >>> [np.array(a) for a in np.broadcast_arrays(x, y)] [array([[1, 2, 3], [1, 2, 3]]), array([[4, 4, 4], [5, 5, 5]])] # numpy.broadcast_shapes numpy.broadcast_shapes(_* args_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_stride_tricks_impl.py#L433-L474) Broadcast the input shapes into a single shape. [Learn more about broadcasting here](../../user/basics.broadcasting#basics-broadcasting). New in version 1.20.0. Parameters: ***args** tuples of ints, or ints The shapes to be broadcast against each other. Returns: tuple Broadcasted shape. 
Raises: ValueError If the shapes are not compatible and cannot be broadcast according to NumPy’s broadcasting rules. See also [`broadcast`](numpy.broadcast#numpy.broadcast "numpy.broadcast") [`broadcast_arrays`](numpy.broadcast_arrays#numpy.broadcast_arrays "numpy.broadcast_arrays") [`broadcast_to`](numpy.broadcast_to#numpy.broadcast_to "numpy.broadcast_to") #### Examples >>> import numpy as np >>> np.broadcast_shapes((1, 2), (3, 1), (3, 2)) (3, 2) >>> np.broadcast_shapes((6, 7), (5, 6, 1), (7,), (5, 1, 7)) (5, 6, 7) # numpy.broadcast_to numpy.broadcast_to(_array_ , _shape_ , _subok =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_stride_tricks_impl.py#L367-L410) Broadcast an array to a new shape. Parameters: **array** array_like The array to broadcast. **shape** tuple or int The shape of the desired array. A single integer `i` is interpreted as `(i,)`. **subok** bool, optional If True, then sub-classes will be passed-through, otherwise the returned array will be forced to be a base-class array (default). Returns: **broadcast** array A readonly view on the original array with the given shape. It is typically not contiguous. Furthermore, more than one element of a broadcasted array may refer to a single memory location. Raises: ValueError If the array is not compatible with the new shape according to NumPy’s broadcasting rules. 
See also [`broadcast`](numpy.broadcast#numpy.broadcast "numpy.broadcast") [`broadcast_arrays`](numpy.broadcast_arrays#numpy.broadcast_arrays "numpy.broadcast_arrays") [`broadcast_shapes`](numpy.broadcast_shapes#numpy.broadcast_shapes "numpy.broadcast_shapes") #### Examples >>> import numpy as np >>> x = np.array([1, 2, 3]) >>> np.broadcast_to(x, (3, 3)) array([[1, 2, 3], [1, 2, 3], [1, 2, 3]]) # numpy.busday_count numpy.busday_count(_begindates_ , _enddates_ , _weekmask ='1111100'_, _holidays =[]_, _busdaycal =None_, _out =None_) Counts the number of valid days between `begindates` and `enddates`, not including the day of `enddates`. If `enddates` specifies a date value that is earlier than the corresponding `begindates` date value, the count will be negative. Parameters: **begindates** array_like of datetime64[D] The array of the first dates for counting. **enddates** array_like of datetime64[D] The array of the end dates for counting, which are excluded from the count themselves. **weekmask** str or array_like of bool, optional A seven-element array indicating which of Monday through Sunday are valid days. May be specified as a length-seven list or array, like [1,1,1,1,1,0,0]; a length-seven string, like ‘1111100’; or a string like “Mon Tue Wed Thu Fri”, made up of 3-character abbreviations for weekdays, optionally separated by white space. Valid abbreviations are: Mon Tue Wed Thu Fri Sat Sun **holidays** array_like of datetime64[D], optional An array of dates to consider as invalid dates. They may be specified in any order, and NaT (not-a-time) dates are ignored. This list is saved in a normalized form that is suited for fast calculations of valid days. **busdaycal** busdaycalendar, optional A [`busdaycalendar`](numpy.busdaycalendar#numpy.busdaycalendar "numpy.busdaycalendar") object which specifies the valid days. If this parameter is provided, neither weekmask nor holidays may be provided. 
**out** array of int, optional If provided, this array is filled with the result. Returns: **out** array of int An array with a shape from broadcasting `begindates` and `enddates` together, containing the number of valid days between the begin and end dates. See also [`busdaycalendar`](numpy.busdaycalendar#numpy.busdaycalendar "numpy.busdaycalendar") An object that specifies a custom set of valid days. [`is_busday`](numpy.is_busday#numpy.is_busday "numpy.is_busday") Returns a boolean array indicating valid days. [`busday_offset`](numpy.busday_offset#numpy.busday_offset "numpy.busday_offset") Applies an offset counted in valid days. #### Examples >>> import numpy as np >>> # Number of weekdays in January 2011 ... np.busday_count('2011-01', '2011-02') 21 >>> # Number of weekdays in 2011 >>> np.busday_count('2011', '2012') 260 >>> # Number of Saturdays in 2011 ... np.busday_count('2011', '2012', weekmask='Sat') 53 # numpy.busday_offset numpy.busday_offset(_dates_ , _offsets_ , _roll ='raise'_, _weekmask ='1111100'_, _holidays =None_, _busdaycal =None_, _out =None_) First adjusts the date to fall on a valid day according to the `roll` rule, then applies offsets to the given dates counted in valid days. Parameters: **dates** array_like of datetime64[D] The array of dates to process. **offsets** array_like of int The array of offsets, which is broadcast with `dates`. **roll**{‘raise’, ‘nat’, ‘forward’, ‘following’, ‘backward’, ‘preceding’, ‘modifiedfollowing’, ‘modifiedpreceding’}, optional How to treat dates that do not fall on a valid day. The default is ‘raise’. * ‘raise’ means to raise an exception for an invalid day. * ‘nat’ means to return a NaT (not-a-time) for an invalid day. * ‘forward’ and ‘following’ mean to take the first valid day later in time. * ‘backward’ and ‘preceding’ mean to take the first valid day earlier in time. 
* ‘modifiedfollowing’ means to take the first valid day later in time unless it is across a Month boundary, in which case to take the first valid day earlier in time. * ‘modifiedpreceding’ means to take the first valid day earlier in time unless it is across a Month boundary, in which case to take the first valid day later in time. **weekmask** str or array_like of bool, optional A seven-element array indicating which of Monday through Sunday are valid days. May be specified as a length-seven list or array, like [1,1,1,1,1,0,0]; a length-seven string, like ‘1111100’; or a string like “Mon Tue Wed Thu Fri”, made up of 3-character abbreviations for weekdays, optionally separated by white space. Valid abbreviations are: Mon Tue Wed Thu Fri Sat Sun **holidays** array_like of datetime64[D], optional An array of dates to consider as invalid dates. They may be specified in any order, and NaT (not-a-time) dates are ignored. This list is saved in a normalized form that is suited for fast calculations of valid days. **busdaycal** busdaycalendar, optional A [`busdaycalendar`](numpy.busdaycalendar#numpy.busdaycalendar "numpy.busdaycalendar") object which specifies the valid days. If this parameter is provided, neither weekmask nor holidays may be provided. **out** array of datetime64[D], optional If provided, this array is filled with the result. Returns: **out** array of datetime64[D] An array with a shape from broadcasting `dates` and `offsets` together, containing the dates with offsets applied. See also [`busdaycalendar`](numpy.busdaycalendar#numpy.busdaycalendar "numpy.busdaycalendar") An object that specifies a custom set of valid days. [`is_busday`](numpy.is_busday#numpy.is_busday "numpy.is_busday") Returns a boolean array indicating valid days. [`busday_count`](numpy.busday_count#numpy.busday_count "numpy.busday_count") Counts how many valid days are in a half-open date range. 
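A short sketch of the month-boundary behaviour of the roll rules above (not from the reference text): 2011-04-30 fell on a Saturday, so rolling forward crosses into May, while `'modifiedfollowing'` stays within April.

```python
import numpy as np

# 2011-04-30 is a Saturday. Rolling forward lands on Monday 2011-05-02,
# which crosses the month boundary...
print(np.busday_offset('2011-04-30', 0, roll='forward'))  # 2011-05-02

# ...so 'modifiedfollowing' takes the first valid day earlier in time
# instead: Friday 2011-04-29.
print(np.busday_offset('2011-04-30', 0, roll='modifiedfollowing'))  # 2011-04-29
```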
#### Examples >>> import numpy as np >>> # First business day in October 2011 (not accounting for holidays) ... np.busday_offset('2011-10', 0, roll='forward') np.datetime64('2011-10-03') >>> # Last business day in February 2012 (not accounting for holidays) ... np.busday_offset('2012-03', -1, roll='forward') np.datetime64('2012-02-29') >>> # Third Wednesday in January 2011 ... np.busday_offset('2011-01', 2, roll='forward', weekmask='Wed') np.datetime64('2011-01-19') >>> # 2012 Mother's Day in Canada and the U.S. ... np.busday_offset('2012-05', 1, roll='forward', weekmask='Sun') np.datetime64('2012-05-13') >>> # First business day on or after a date ... np.busday_offset('2011-03-20', 0, roll='forward') np.datetime64('2011-03-21') >>> np.busday_offset('2011-03-22', 0, roll='forward') np.datetime64('2011-03-22') >>> # First business day after a date ... np.busday_offset('2011-03-20', 1, roll='backward') np.datetime64('2011-03-21') >>> np.busday_offset('2011-03-22', 1, roll='backward') np.datetime64('2011-03-23') # numpy.busdaycalendar.holidays attribute busdaycalendar.holidays A copy of the holiday array indicating additional invalid days. # numpy.busdaycalendar _class_ numpy.busdaycalendar(_weekmask ='1111100'_, _holidays =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/__init__.py) A business day calendar object that efficiently stores information defining valid days for the busday family of functions. The default valid days are Monday through Friday (“business days”). A busdaycalendar object can be specified with any set of weekly valid days, plus an optional set of “holiday” dates that are always invalid. Once a busdaycalendar object is created, the weekmask and holidays cannot be modified. Parameters: **weekmask** str or array_like of bool, optional A seven-element array indicating which of Monday through Sunday are valid days. 
May be specified as a length-seven list or array, like [1,1,1,1,1,0,0]; a length-seven string, like ‘1111100’; or a string like “Mon Tue Wed Thu Fri”, made up of 3-character abbreviations for weekdays, optionally separated by white space. Valid abbreviations are: Mon Tue Wed Thu Fri Sat Sun **holidays** array_like of datetime64[D], optional An array of dates to consider as invalid dates, no matter which weekday they fall upon. Holiday dates may be specified in any order, and NaT (not-a-time) dates are ignored. This list is saved in a normalized form that is suited for fast calculations of valid days. Returns: **out** busdaycalendar A business day calendar object containing the specified weekmask and holidays values. See also [`is_busday`](numpy.is_busday#numpy.is_busday "numpy.is_busday") Returns a boolean array indicating valid days. [`busday_offset`](numpy.busday_offset#numpy.busday_offset "numpy.busday_offset") Applies an offset counted in valid days. [`busday_count`](numpy.busday_count#numpy.busday_count "numpy.busday_count") Counts how many valid days are in a half-open date range. #### Notes Once a busdaycalendar object is created, you cannot modify the weekmask or holidays. The attributes return copies of internal data. #### Examples >>> import numpy as np >>> # Some important days in July ... bdd = np.busdaycalendar( ... holidays=['2011-07-01', '2011-07-04', '2011-07-17']) >>> # Default is Monday to Friday weekdays ... bdd.weekmask array([ True, True, True, True, True, False, False]) >>> # Any holidays already on the weekend are removed ... bdd.holidays array(['2011-07-01', '2011-07-04'], dtype='datetime64[D]') Attributes: [`weekmask`](numpy.busdaycalendar.weekmask#numpy.busdaycalendar.weekmask "numpy.busdaycalendar.weekmask")(copy) seven-element array of bool A copy of the seven-element boolean mask indicating valid days. 
[`holidays`](numpy.busdaycalendar.holidays#numpy.busdaycalendar.holidays "numpy.busdaycalendar.holidays")(copy) sorted array of datetime64[D] A copy of the holiday array indicating additional invalid days. # numpy.busdaycalendar.weekmask attribute busdaycalendar.weekmask A copy of the seven-element boolean mask indicating valid days. # numpy.c_ numpy.c_ = <numpy.lib._index_tricks_impl.CClass object> Translates slice objects to concatenation along the second axis. This is short-hand for `np.r_['-1,2,0', index expression]`, which is useful because of its common occurrence. In particular, arrays will be stacked along their last axis after being upgraded to at least 2-D with 1’s post-pended to the shape (column vectors made out of 1-D arrays). See also [`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. [`r_`](numpy.r_#numpy.r_ "numpy.r_") For more detailed documentation. #### Examples >>> import numpy as np >>> np.c_[np.array([1,2,3]), np.array([4,5,6])] array([[1, 4], [2, 5], [3, 6]]) >>> np.c_[np.array([[1,2,3]]), 0, 0, np.array([[4,5,6]])] array([[1, 2, 3, ..., 4, 5, 6]]) # numpy.can_cast numpy.can_cast(_from__ , _to_ , _casting ='safe'_) Returns True if cast between data types can occur according to the casting rule. Parameters: **from_** dtype, dtype specifier, NumPy scalar, or array Data type, NumPy scalar, or array to cast from. **to** dtype or dtype specifier Data type to cast to. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur. * ‘no’ means the data types should not be cast at all. * ‘equiv’ means only byte-order changes are allowed. * ‘safe’ means only casts which can preserve values are allowed. * ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed. * ‘unsafe’ means any data conversions may be done. Returns: **out** bool True if cast can occur according to the casting rule. 
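The distinction between the casting levels above can be probed directly (a quick sketch, not from the reference text):

```python
import numpy as np

# 'safe' requires value preservation: float64 -> float32 can lose precision.
print(np.can_cast(np.float64, np.float32, casting='safe'))       # False

# 'same_kind' additionally allows casts within a kind, e.g. float -> float.
print(np.can_cast(np.float64, np.float32, casting='same_kind'))  # True

# 'equiv' permits byte-order changes only: little- vs big-endian int32 is
# fine, but widening int32 -> int64 is not.
print(np.can_cast('<i4', '>i4', casting='equiv'))                # True
print(np.can_cast(np.int32, np.int64, casting='equiv'))          # False
```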
See also [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), [`result_type`](numpy.result_type#numpy.result_type "numpy.result_type") #### Notes Changed in version 2.0: This function does not support Python scalars anymore and does not apply any value-based logic for 0-D arrays and NumPy scalars. #### Examples Basic examples >>> import numpy as np >>> np.can_cast(np.int32, np.int64) True >>> np.can_cast(np.float64, complex) True >>> np.can_cast(complex, float) False >>> np.can_cast('i8', 'f8') True >>> np.can_cast('i8', 'f4') False >>> np.can_cast('i4', 'S4') False # numpy.cbrt numpy.cbrt(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = <ufunc 'cbrt'> Return the cube-root of an array, element-wise. Parameters: **x** array_like The values whose cube-roots are required. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray An array of the same shape as `x`, containing the cube root of each element in `x`. If `out` was provided, `y` is a reference to it. This is a scalar if `x` is a scalar. 
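The `where`/`out` interaction described above can be sketched as follows (a minimal illustration; pre-initializing `out` with zeros sidesteps the uninitialized-memory caveat):

```python
import numpy as np

out = np.zeros(3)
np.cbrt([1.0, 8.0, 27.0], out=out, where=[True, False, True])

# Positions where the condition is False keep their original value (0.0).
print(out)  # approximately [1. 0. 3.]
```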
#### Examples >>> import numpy as np >>> np.cbrt([1,8,27]) array([ 1., 2., 3.]) # numpy.ceil numpy.ceil(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = <ufunc 'ceil'> Return the ceiling of the input, element-wise. The ceil of the scalar `x` is the smallest integer `i`, such that `i >= x`. It is often denoted as \\(\lceil x \rceil\\). Parameters: **x** array_like Input data. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray or scalar The ceiling of each element in `x`. This is a scalar if `x` is a scalar. See also [`floor`](numpy.floor#numpy.floor "numpy.floor"), [`trunc`](numpy.trunc#numpy.trunc "numpy.trunc"), [`rint`](numpy.rint#numpy.rint "numpy.rint"), [`fix`](numpy.fix#numpy.fix "numpy.fix") #### Examples >>> import numpy as np >>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) >>> np.ceil(a) array([-1., -1., -0., 1., 2., 2., 2.]) # numpy.char.add char.add(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = <ufunc 'add'> Add arguments element-wise. Parameters: **x1, x2** array_like The arrays to be added. 
If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **add** ndarray or scalar The sum of `x1` and `x2`, element-wise. This is a scalar if both `x1` and `x2` are scalars. #### Notes Equivalent to `x1` + `x2` in terms of array broadcasting. #### Examples >>> import numpy as np >>> np.add(1.0, 4.0) 5.0 >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = np.arange(3.0) >>> np.add(x1, x2) array([[ 0., 2., 4.], [ 3., 5., 7.], [ 6., 8., 10.]]) The `+` operator can be used as a shorthand for `np.add` on ndarrays. >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = np.arange(3.0) >>> x1 + x2 array([[ 0., 2., 4.], [ 3., 5., 7.], [ 6., 8., 10.]]) # numpy.char.array char.array(_obj_ , _itemsize =None_, _copy =True_, _unicode =None_, _order =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L1213-L1353) Create a [`chararray`](numpy.char.chararray#numpy.char.chararray "numpy.char.chararray"). Note This class is provided for numarray backward-compatibility. 
New code (not concerned with numarray compatibility) should use arrays of type [`bytes_`](../arrays.scalars#numpy.bytes_ "numpy.bytes_") or [`str_`](../arrays.scalars#numpy.str_ "numpy.str_") and use the free functions in [`numpy.char`](../routines.char#module-numpy.char "numpy.char") for fast vectorized string operations instead. Versus a NumPy array of dtype [`bytes_`](../arrays.scalars#numpy.bytes_ "numpy.bytes_") or [`str_`](../arrays.scalars#numpy.str_ "numpy.str_"), this class adds the following functionality: 1. values automatically have whitespace removed from the end when indexed 2. comparison operators automatically remove whitespace from the end when comparing values 3. vectorized string operations are provided as methods (e.g. [`chararray.endswith`](numpy.char.chararray.endswith#numpy.char.chararray.endswith "numpy.char.chararray.endswith")) and infix operators (e.g. `+, *, %`) Parameters: **obj** array of str or unicode-like **itemsize** int, optional `itemsize` is the number of characters per scalar in the resulting array. If `itemsize` is None, and `obj` is an object array or a Python list, the `itemsize` will be automatically determined. If `itemsize` is provided and `obj` is of type str or unicode, then the `obj` string will be chunked into `itemsize` pieces. **copy** bool, optional If true (default), then the object is copied. Otherwise, a copy will only be made if `__array__` returns a copy, if obj is a nested sequence, or if a copy is needed to satisfy any of the other requirements (`itemsize`, unicode, `order`, etc.). **unicode** bool, optional When true, the resulting [`chararray`](numpy.char.chararray#numpy.char.chararray "numpy.char.chararray") can contain Unicode characters, when false only 8-bit characters. 
If unicode is None and `obj` is one of the following: * a [`chararray`](numpy.char.chararray#numpy.char.chararray "numpy.char.chararray"), * an ndarray of type [`str_`](../arrays.scalars#numpy.str_ "numpy.str_") or [`bytes_`](../arrays.scalars#numpy.bytes_ "numpy.bytes_") * a Python [`str`](https://docs.python.org/3/library/stdtypes.html#str "\(in Python v3.13\)") or [`bytes`](https://docs.python.org/3/library/stdtypes.html#bytes "\(in Python v3.13\)") object, then the unicode setting of the output array will be automatically determined. **order**{‘C’, ‘F’, ‘A’}, optional Specify the order of the array. If order is ‘C’ (default), then the array will be in C-contiguous order (last-index varies the fastest). If order is ‘F’, then the returned array will be in Fortran-contiguous order (first-index varies the fastest). If order is ‘A’, then the returned array may be in any order (either C-, Fortran-contiguous, or even discontiguous). #### Examples >>> import numpy as np >>> char_array = np.char.array(['hello', 'world', 'numpy','array']) >>> char_array chararray(['hello', 'world', 'numpy', 'array'], dtype='<U5') # numpy.char.asarray char.asarray(_obj_ , _itemsize =None_, _unicode =None_, _order =None_) Convert the input to a [`chararray`](numpy.char.chararray#numpy.char.chararray "numpy.char.chararray"), copying the data only if necessary. #### Examples >>> import numpy as np >>> np.char.asarray(['hello', 'world']) chararray(['hello', 'world'], dtype='<U5') # numpy.char.capitalize char.capitalize(_a_) Return a copy of `a` with only the first character of each element capitalized. #### Examples >>> import numpy as np >>> c = np.array(['a1b2','1b2a','b2a1','2a1b'],'S4'); c array(['a1b2', '1b2a', 'b2a1', '2a1b'], dtype='|S4') >>> np.strings.capitalize(c) array(['A1b2', '1b2a', 'B2a1', '2a1b'], dtype='|S4') # numpy.char.center char.center(_a_ , _width_ , _fillchar =' '_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L654-L719) Return a copy of `a` with its elements centered in a string of length `width`. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **width** array_like, with any integer dtype The length of the resulting strings, unless `width < str_len(a)`. **fillchar** array-like, with `StringDType`, `bytes_`, or `str_` dtype Optional padding character to use (default is space). 
Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types See also [`str.center`](https://docs.python.org/3/library/stdtypes.html#str.center "\(in Python v3.13\)") #### Notes While it is possible for `a` and `fillchar` to have different dtypes, passing a non-ASCII character in `fillchar` when `a` is of dtype “S” is not allowed, and a `ValueError` is raised. #### Examples >>> import numpy as np >>> c = np.array(['a1b2','1b2a','b2a1','2a1b']); c array(['a1b2', '1b2a', 'b2a1', '2a1b'], dtype='<U4') >>> np.strings.center(c, width=9) array(['   a1b2  ', '   1b2a  ', '   b2a1  ', '   2a1b  '], dtype='<U9') >>> np.strings.center(c, width=9, fillchar='*') array(['***a1b2**', '***1b2a**', '***b2a1**', '***2a1b**'], dtype='<U9') >>> np.strings.center(c, width=1) array(['a1b2', '1b2a', 'b2a1', '2a1b'], dtype='<U4') # numpy.char.chararray.astype method char.chararray.astype(_dtype_ , _order ='K'_, _casting ='unsafe'_, _subok =True_, _copy =True_) Copy of the array, cast to a specified type. #### Examples >>> import numpy as np >>> x = np.array([1, 2, 2.5]) >>> x array([1. , 2. , 2.5]) >>> x.astype(int) array([1, 2, 2]) # numpy.char.chararray.base attribute char.chararray.base Base object if memory is from some other object. #### Examples The base of an array that owns its memory is None: >>> import numpy as np >>> x = np.array([1,2,3,4]) >>> x.base is None True Slicing creates a view, whose memory is shared with x: >>> y = x[2:] >>> y.base is x True # numpy.char.chararray.copy method char.chararray.copy(_order ='C'_) Return a copy of the array. Parameters: **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout of the copy. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. (Note that this function and [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") are very similar but have different default values for their order= arguments, and this function always passes sub-classes through.) 
See also [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") Similar function with different default behavior [`numpy.copyto`](numpy.copyto#numpy.copyto "numpy.copyto") #### Notes This function is the preferred method for creating an array copy. The function [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") is similar, but it defaults to using order ‘K’, and will not pass sub-classes through by default. #### Examples >>> import numpy as np >>> x = np.array([[1,2,3],[4,5,6]], order='F') >>> y = x.copy() >>> x.fill(0) >>> x array([[0, 0, 0], [0, 0, 0]]) >>> y array([[1, 2, 3], [4, 5, 6]]) >>> y.flags['C_CONTIGUOUS'] True For arrays containing Python objects (e.g. dtype=object), the copy is a shallow one. The new array will contain the same object which may lead to surprises if that object can be modified (is mutable): >>> a = np.array([1, 'm', [2, 3, 4]], dtype=object) >>> b = a.copy() >>> b[2][0] = 10 >>> a array([1, 'm', list([10, 3, 4])], dtype=object) To ensure all elements within an `object` array are copied, use [`copy.deepcopy`](https://docs.python.org/3/library/copy.html#copy.deepcopy "\(in Python v3.13\)"): >>> import copy >>> a = np.array([1, 'm', [2, 3, 4]], dtype=object) >>> c = copy.deepcopy(a) >>> c[2][0] = 10 >>> c array([1, 'm', list([10, 3, 4])], dtype=object) >>> a array([1, 'm', list([2, 3, 4])], dtype=object) # numpy.char.chararray.count method char.chararray.count(_sub_ , _start =0_, _end =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L765-L775) Returns an array with the number of non-overlapping occurrences of substring `sub` in the range [`start`, `end`]. See also [`char.count`](numpy.char.count#numpy.char.count "numpy.char.count") # numpy.char.chararray.ctypes attribute char.chararray.ctypes An object to simplify the interaction of the array with the ctypes module. This attribute creates an object that makes it easier to use arrays when calling shared libraries with the ctypes module. 
The returned object has, among others, data, shape, and strides attributes (see Notes below) which themselves return ctypes objects that can be used as arguments to a shared library. Parameters: **None** Returns: **c** Python object Possessing attributes data, shape, strides, etc. See also [`numpy.ctypeslib`](../routines.ctypeslib#module-numpy.ctypeslib "numpy.ctypeslib") #### Notes Below are the public attributes of this object which were documented in “Guide to NumPy” (we have omitted undocumented public attributes, as well as documented private attributes): _ctypes.data A pointer to the memory area of the array as a Python integer. This memory area may contain data that is not aligned, or not in correct byte-order. The memory area may not even be writeable. The array flags and data-type of this array should be respected when passing this attribute to arbitrary C-code to avoid trouble that can include Python crashing. User Beware! The value of this attribute is exactly the same as: `self.__array_interface__['data'][0]`. Note that unlike `data_as`, a reference won’t be kept to the array: code like `ctypes.c_void_p((a + b).ctypes.data)` will result in a pointer to a deallocated array, and should be spelt `(a + b).ctypes.data_as(ctypes.c_void_p)` _ctypes.shape (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the C-integer corresponding to `dtype('p')` on this platform (see [`c_intp`](../routines.ctypeslib#numpy.ctypeslib.c_intp "numpy.ctypeslib.c_intp")). This base-type could be [`ctypes.c_int`](https://docs.python.org/3/library/ctypes.html#ctypes.c_int "\(in Python v3.13\)"), [`ctypes.c_long`](https://docs.python.org/3/library/ctypes.html#ctypes.c_long "\(in Python v3.13\)"), or [`ctypes.c_longlong`](https://docs.python.org/3/library/ctypes.html#ctypes.c_longlong "\(in Python v3.13\)") depending on the platform. The ctypes array contains the shape of the underlying array. 
_ctypes.strides (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the same as for the shape attribute. This ctypes array contains the strides information from the underlying array. This strides information is important for showing how many bytes must be jumped to get to the next element in the array. _ctypes.data_as(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_internal.py#L279-L296) Return the data pointer cast to a particular c-types object. For example, calling `self._as_parameter_` is equivalent to `self.data_as(ctypes.c_void_p)`. Perhaps you want to use the data as a pointer to a ctypes array of floating-point data: `self.data_as(ctypes.POINTER(ctypes.c_double))`. The returned pointer will keep a reference to the array. _ctypes.shape_as(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_internal.py#L298-L305) Return the shape tuple as an array of some other c-types type. For example: `self.shape_as(ctypes.c_short)`. _ctypes.strides_as(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_internal.py#L307-L314) Return the strides tuple as an array of some other c-types type. For example: `self.strides_as(ctypes.c_longlong)`. If the ctypes module is not available, then the ctypes attribute of array objects still returns something useful, but ctypes objects are not returned and errors may be raised instead. In particular, the object will still have the `_as_parameter_` attribute which will return an integer equal to the data attribute. 
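The `data_as`, `shape_as`, and `strides_as` helpers described above can be exercised together. A small sketch (not part of the original reference); the concrete stride values assume a C-contiguous 2x3 int32 array:

```python
import ctypes
import numpy as np

# A 2x3 C-contiguous int32 array: strides are (3*4, 4) bytes.
x = np.zeros((2, 3), dtype=np.int32)

# data_as: pointer to the buffer, cast to the requested pointer type.
ptr = x.ctypes.data_as(ctypes.POINTER(ctypes.c_int32))
print(ptr.contents.value)                        # 0, the first element

# shape_as / strides_as: ctypes arrays with the chosen basetype.
print(list(x.ctypes.shape_as(ctypes.c_long)))    # [2, 3]
print(list(x.ctypes.strides_as(ctypes.c_long)))  # [12, 4]
```

Unlike the raw `.ctypes.data` integer, the pointer returned by `data_as` keeps a reference to the array, so the buffer stays alive while the pointer is in use.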
#### Examples >>> import numpy as np >>> import ctypes >>> x = np.array([[0, 1], [2, 3]], dtype=np.int32) >>> x array([[0, 1], [2, 3]], dtype=int32) >>> x.ctypes.data 31962608 # may vary >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)) <__main__.LP_c_uint object at 0x7ff2fc1fc200> # may vary >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)).contents c_uint(0) >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint64)).contents c_ulong(4294967296) >>> x.ctypes.shape <numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1fce60> # may vary >>> x.ctypes.strides <numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1ff320> # may vary # numpy.char.chararray.data attribute char.chararray.data Python buffer object pointing to the start of the array’s data. # numpy.char.chararray.decode method char.chararray.decode(_encoding =None_, _errors =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L777-L786) Calls `bytes.decode` element-wise. See also [`char.decode`](numpy.char.decode#numpy.char.decode "numpy.char.decode") # numpy.char.chararray.dtype attribute char.chararray.dtype Data-type of the array’s elements. Warning Setting `arr.dtype` is discouraged and may be deprecated in the future. Setting will replace the `dtype` without modifying the memory (see also [`ndarray.view`](numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") and [`ndarray.astype`](numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype")). Parameters: **None** Returns: **d** numpy dtype object See also [`ndarray.astype`](numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype") Cast the values contained in the array to a new data-type. [`ndarray.view`](numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") Create a view of the same data but a different data-type. [`numpy.dtype`](numpy.dtype#numpy.dtype "numpy.dtype") #### Examples >>> x array([[0, 1], [2, 3]]) >>> x.dtype dtype('int32') >>> type(x.dtype) <class 'numpy.dtypes.Int32DType'> # numpy.char.chararray.dump method char.chararray.dump(_file_) Dump a pickle of the array to the specified file. 
The array can be read back with pickle.load or numpy.load. Parameters: **file** str or Path A string naming the dump file. # numpy.char.chararray.dumps method char.chararray.dumps() Returns the pickle of the array as a string. pickle.loads will convert the string back to an array. Parameters: **None** # numpy.char.chararray.encode method char.chararray.encode(_encoding =None_, _errors =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L788-L797) Calls [`str.encode`](https://docs.python.org/3/library/stdtypes.html#str.encode "\(in Python v3.13\)") element-wise. See also [`char.encode`](numpy.char.encode#numpy.char.encode "numpy.char.encode") # numpy.char.chararray.endswith method char.chararray.endswith(_suffix_ , _start =0_, _end =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L799-L809) Returns a boolean array which is `True` where the string element in `self` ends with `suffix`, otherwise `False`. See also [`char.endswith`](numpy.char.endswith#numpy.char.endswith "numpy.char.endswith") # numpy.char.chararray.expandtabs method char.chararray.expandtabs(_tabsize =8_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L811-L821) Return a copy of each string element where all tab characters are replaced by one or more spaces. See also [`char.expandtabs`](numpy.char.expandtabs#numpy.char.expandtabs "numpy.char.expandtabs") # numpy.char.chararray.fill method char.chararray.fill(_value_) Fill the array with a scalar value. Parameters: **value** scalar All elements of `a` will be assigned this value. #### Examples >>> import numpy as np >>> a = np.array([1, 2]) >>> a.fill(0) >>> a array([0, 0]) >>> a = np.empty(2) >>> a.fill(1) >>> a array([1., 1.]) Fill expects a scalar value and always behaves the same as assigning to a single array element. 
The following is a rare example where this distinction is important: >>> a = np.array([None, None], dtype=object) >>> a[0] = np.array(3) >>> a array([array(3), None], dtype=object) >>> a.fill(np.array(3)) >>> a array([array(3), array(3)], dtype=object) Where other forms of assignments will unpack the array being assigned: >>> a[...] = np.array(3) >>> a array([3, 3], dtype=object) # numpy.char.chararray.find method char.chararray.find(_sub_ , _start =0_, _end =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L823-L833) For each element, return the lowest index in the string where substring `sub` is found. See also [`char.find`](numpy.char.find#numpy.char.find "numpy.char.find") # numpy.char.chararray.flags attribute char.chararray.flags Information about the memory layout of the array. #### Notes The `flags` object can be accessed dictionary-like (as in `a.flags['WRITEABLE']`), or by using lowercased attribute names (as in `a.flags.writeable`). Short flag names are only supported in dictionary access. Only the WRITEBACKIFCOPY, WRITEABLE, and ALIGNED flags can be changed by the user, via direct assignment to the attribute or dictionary entry, or by calling [`ndarray.setflags`](numpy.ndarray.setflags#numpy.ndarray.setflags "numpy.ndarray.setflags"). The array flags cannot be set arbitrarily: * WRITEBACKIFCOPY can only be set `False`. * ALIGNED can only be set `True` if the data is truly aligned. * WRITEABLE can only be set `True` if the array owns its own memory or the ultimate owner of the memory exposes a writeable buffer interface or is a string. Arrays can be both C-style and Fortran-style contiguous simultaneously. This is clear for 1-dimensional arrays, but can also be true for higher dimensional arrays. Even for contiguous arrays a stride for a given dimension `arr.strides[dim]` may be _arbitrary_ if `arr.shape[dim] == 1` or the array has no elements. 
It does _not_ generally hold that `self.strides[-1] == self.itemsize` for C-style contiguous arrays, or that `self.strides[0] == self.itemsize` for Fortran-style contiguous arrays. Attributes: **C_CONTIGUOUS (C)** The data is in a single, C-style contiguous segment. **F_CONTIGUOUS (F)** The data is in a single, Fortran-style contiguous segment. **OWNDATA (O)** The array owns the memory it uses or borrows it from another object. **WRITEABLE (W)** The data area can be written to. Setting this to False locks the data, making it read-only. A view (slice, etc.) inherits WRITEABLE from its base array at creation time, but a view of a writeable array may be subsequently locked while the base array remains writeable. (The opposite is not true, in that a view of a locked array may not be made writeable. However, currently, locking a base object does not lock any views that already reference it, so under that circumstance it is possible to alter the contents of a locked array via a previously created writeable view onto it.) Attempting to change a non-writeable array raises a RuntimeError exception. **ALIGNED (A)** The data and all elements are aligned appropriately for the hardware. **WRITEBACKIFCOPY (X)** This array is a copy of some other array. The C-API function PyArray_ResolveWritebackIfCopy must be called before deallocating this array; at that point the base array will be updated with the contents of this array. **FNC** F_CONTIGUOUS and not C_CONTIGUOUS. **FORC** F_CONTIGUOUS or C_CONTIGUOUS (one-segment test). **BEHAVED (B)** ALIGNED and WRITEABLE. **CARRAY (CA)** BEHAVED and C_CONTIGUOUS. **FARRAY (FA)** BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS. # numpy.char.chararray.flat attribute char.chararray.flat A 1-D iterator over the array. This is a [`numpy.flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") instance, which acts similarly to, but is not a subclass of, Python’s built-in iterator object. 
See also [`flatten`](numpy.char.chararray.flatten#numpy.char.chararray.flatten "numpy.char.chararray.flatten") Return a copy of the array collapsed into one dimension. [`flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") #### Examples >>> import numpy as np >>> x = np.arange(1, 7).reshape(2, 3) >>> x array([[1, 2, 3], [4, 5, 6]]) >>> x.flat[3] 4 >>> x.T array([[1, 4], [2, 5], [3, 6]]) >>> x.T.flat[3] 5 >>> type(x.flat) <class 'numpy.flatiter'> An assignment example: >>> x.flat = 3; x array([[3, 3, 3], [3, 3, 3]]) >>> x.flat[[1,4]] = 1; x array([[3, 1, 3], [3, 1, 3]]) # numpy.char.chararray.flatten method char.chararray.flatten(_order ='C'_) Return a copy of the array collapsed into one dimension. Parameters: **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional ‘C’ means to flatten in row-major (C-style) order. ‘F’ means to flatten in column-major (Fortran- style) order. ‘A’ means to flatten in column-major order if `a` is Fortran _contiguous_ in memory, row-major order otherwise. ‘K’ means to flatten `a` in the order the elements occur in memory. The default is ‘C’. Returns: **y** ndarray A copy of the input array, flattened to one dimension. See also [`ravel`](numpy.ravel#numpy.ravel "numpy.ravel") Return a flattened array. [`flat`](numpy.char.chararray.flat#numpy.char.chararray.flat "numpy.char.chararray.flat") A 1-D flat iterator over the array. #### Examples >>> import numpy as np >>> a = np.array([[1,2], [3,4]]) >>> a.flatten() array([1, 2, 3, 4]) >>> a.flatten('F') array([1, 3, 2, 4]) # numpy.char.chararray.getfield method char.chararray.getfield(_dtype_ , _offset =0_) Returns a field of the given array as a certain type. A field is a view of the array data with a given data-type. The values in the view are determined by the given type and the offset into the current array in bytes. The offset needs to be such that the view dtype fits in the array dtype; for example an array of dtype complex128 has 16-byte elements. 
If taking a view with a 32-bit integer (4 bytes), the offset needs to be between 0 and 12 bytes. Parameters: **dtype** str or dtype The data type of the view. The dtype size of the view can not be larger than that of the array itself. **offset** int Number of bytes to skip before beginning the element view. #### Examples >>> import numpy as np >>> x = np.diag([1.+1.j]*2) >>> x[1, 1] = 2 + 4.j >>> x array([[1.+1.j, 0.+0.j], [0.+0.j, 2.+4.j]]) >>> x.getfield(np.float64) array([[1., 0.], [0., 2.]]) By choosing an offset of 8 bytes we can select the complex part of the array for our view: >>> x.getfield(np.float64, offset=8) array([[1., 0.], [0., 4.]]) # numpy.char.chararray _class_ numpy.char.chararray(_shape_ , _itemsize =1_, _unicode =False_, _buffer =None_, _offset =0_, _strides =None_, _order =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/char/__init__.py) Provides a convenient view on arrays of string and unicode values. Note The `chararray` class exists for backwards compatibility with Numarray; it is not recommended for new development. Starting from numpy 1.4, if one needs arrays of strings, it is recommended to use arrays of [`dtype`](numpy.char.chararray.dtype#numpy.char.chararray.dtype "numpy.char.chararray.dtype") [`object_`](../arrays.scalars#numpy.object_ "numpy.object_"), [`bytes_`](../arrays.scalars#numpy.bytes_ "numpy.bytes_") or [`str_`](../arrays.scalars#numpy.str_ "numpy.str_"), and use the free functions in the [`numpy.char`](../routines.char#module-numpy.char "numpy.char") module for fast vectorized string operations. Versus a NumPy array of dtype [`bytes_`](../arrays.scalars#numpy.bytes_ "numpy.bytes_") or [`str_`](../arrays.scalars#numpy.str_ "numpy.str_"), this class adds the following functionality: 1. values automatically have whitespace removed from the end when indexed 2. comparison operators automatically remove whitespace from the end when comparing values 3. vectorized string operations are provided as methods (e.g. 
[`endswith`](numpy.char.chararray.endswith#numpy.char.chararray.endswith "numpy.char.chararray.endswith")) and infix operators (e.g. `"+", "*", "%"`). chararrays should be created using [`numpy.char.array`](numpy.char.array#numpy.char.array "numpy.char.array") or [`numpy.char.asarray`](numpy.char.asarray#numpy.char.asarray "numpy.char.asarray"), rather than this constructor directly. This constructor creates the array, using `buffer` (with `offset` and [`strides`](numpy.char.chararray.strides#numpy.char.chararray.strides "numpy.char.chararray.strides")) if it is not `None`. If `buffer` is `None`, then constructs a new array with [`strides`](numpy.char.chararray.strides#numpy.char.chararray.strides "numpy.char.chararray.strides") in “C order”, unless both `len(shape) >= 2` and `order='F'`, in which case [`strides`](numpy.char.chararray.strides#numpy.char.chararray.strides "numpy.char.chararray.strides") is in “Fortran order”. Parameters: **shape** tuple Shape of the array. **itemsize** int, optional Length of each array element, in number of characters. Default is 1. **unicode** bool, optional Whether the array elements are of type unicode (True) or string (False). Default is False. **buffer** object exposing the buffer interface or str, optional Memory address of the start of the array data. Default is None, in which case a new array is created. **offset** int, optional Offset, in bytes, from the beginning of the buffer to the start of the array data. Default is 0. Needs to be >= 0. **strides** array_like of ints, optional Strides for the array (see [`strides`](numpy.ndarray.strides#numpy.ndarray.strides "numpy.ndarray.strides") for full description). Default is None. **order**{‘C’, ‘F’}, optional The order in which the array data is stored in memory: ‘C’ -> “row major” order (the default), ‘F’ -> “column major” (Fortran) order. 
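The trailing-whitespace behaviour listed in points 1 and 2 of the class description can be demonstrated with a small sketch (not from the original docs):

```python
import numpy as np

# chararray strips trailing whitespace on indexing and comparison;
# a plain bytes ndarray does not.
plain = np.array([b'ab  ', b'cd  '])      # dtype '|S4'
chars = plain.view(np.char.chararray)     # same buffer, chararray view
print(plain[0])                           # b'ab  '  (padding kept)
print(chars[0])                           # b'ab'    (padding stripped)
print(chars == [b'ab', b'cd'])            # [ True  True]
print(plain == np.array([b'ab', b'cd']))  # [False False]
```

The view shares memory with the original array; only the indexing and comparison semantics change.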
#### Examples >>> import numpy as np >>> charar = np.char.chararray((3, 3)) >>> charar[:] = 'a' >>> charar chararray([[b'a', b'a', b'a'], [b'a', b'a', b'a'], [b'a', b'a', b'a']], dtype='|S1') >>> charar = np.char.chararray(charar.shape, itemsize=5) >>> charar[:] = 'abc' >>> charar chararray([[b'abc', b'abc', b'abc'], [b'abc', b'abc', b'abc'], [b'abc', b'abc', b'abc']], dtype='|S5') Attributes: [`T`](numpy.char.chararray.t#numpy.char.chararray.T "numpy.char.chararray.T") View of the transposed array. [`base`](numpy.char.chararray.base#numpy.char.chararray.base "numpy.char.chararray.base") Base object if memory is from some other object. [`ctypes`](numpy.char.chararray.ctypes#numpy.char.chararray.ctypes "numpy.char.chararray.ctypes") An object to simplify the interaction of the array with the ctypes module. [`data`](numpy.char.chararray.data#numpy.char.chararray.data "numpy.char.chararray.data") Python buffer object pointing to the start of the array’s data. **device** [`dtype`](numpy.char.chararray.dtype#numpy.char.chararray.dtype "numpy.char.chararray.dtype") Data-type of the array’s elements. [`flags`](numpy.char.chararray.flags#numpy.char.chararray.flags "numpy.char.chararray.flags") Information about the memory layout of the array. [`flat`](numpy.char.chararray.flat#numpy.char.chararray.flat "numpy.char.chararray.flat") A 1-D iterator over the array. [`imag`](numpy.char.chararray.imag#numpy.char.chararray.imag "numpy.char.chararray.imag") The imaginary part of the array. **itemset** [`itemsize`](numpy.char.chararray.itemsize#numpy.char.chararray.itemsize "numpy.char.chararray.itemsize") Length of one array element in bytes. [`mT`](numpy.char.chararray.mt#numpy.char.chararray.mT "numpy.char.chararray.mT") View of the matrix transposed array. [`nbytes`](numpy.char.chararray.nbytes#numpy.char.chararray.nbytes "numpy.char.chararray.nbytes") Total bytes consumed by the elements of the array. 
[`ndim`](numpy.char.chararray.ndim#numpy.char.chararray.ndim "numpy.char.chararray.ndim") Number of array dimensions. **newbyteorder** **ptp** [`real`](numpy.char.chararray.real#numpy.char.chararray.real "numpy.char.chararray.real") The real part of the array. [`shape`](numpy.char.chararray.shape#numpy.char.chararray.shape "numpy.char.chararray.shape") Tuple of array dimensions. [`size`](numpy.char.chararray.size#numpy.char.chararray.size "numpy.char.chararray.size") Number of elements in the array. [`strides`](numpy.char.chararray.strides#numpy.char.chararray.strides "numpy.char.chararray.strides") Tuple of bytes to step in each dimension when traversing an array. #### Methods [`astype`](numpy.char.chararray.astype#numpy.char.chararray.astype "numpy.char.chararray.astype")(dtype[, order, casting, subok, copy]) | Copy of the array, cast to a specified type. ---|--- [`argsort`](numpy.char.chararray.argsort#numpy.char.chararray.argsort "numpy.char.chararray.argsort")([axis, kind, order]) | Returns the indices that would sort this array. [`copy`](numpy.char.chararray.copy#numpy.char.chararray.copy "numpy.char.chararray.copy")([order]) | Return a copy of the array. [`count`](numpy.char.chararray.count#numpy.char.chararray.count "numpy.char.chararray.count")(sub[, start, end]) | Returns an array with the number of non-overlapping occurrences of substring `sub` in the range [`start`, `end`]. [`decode`](numpy.char.chararray.decode#numpy.char.chararray.decode "numpy.char.chararray.decode")([encoding, errors]) | Calls `bytes.decode` element-wise. [`dump`](numpy.char.chararray.dump#numpy.char.chararray.dump "numpy.char.chararray.dump")(file) | Dump a pickle of the array to the specified file. [`dumps`](numpy.char.chararray.dumps#numpy.char.chararray.dumps "numpy.char.chararray.dumps")() | Returns the pickle of the array as a string. 
[`encode`](numpy.char.chararray.encode#numpy.char.chararray.encode "numpy.char.chararray.encode")([encoding, errors]) | Calls [`str.encode`](https://docs.python.org/3/library/stdtypes.html#str.encode "\(in Python v3.13\)") element-wise. [`endswith`](numpy.char.chararray.endswith#numpy.char.chararray.endswith "numpy.char.chararray.endswith")(suffix[, start, end]) | Returns a boolean array which is `True` where the string element in `self` ends with `suffix`, otherwise `False`. [`expandtabs`](numpy.char.chararray.expandtabs#numpy.char.chararray.expandtabs "numpy.char.chararray.expandtabs")([tabsize]) | Return a copy of each string element where all tab characters are replaced by one or more spaces. [`fill`](numpy.char.chararray.fill#numpy.char.chararray.fill "numpy.char.chararray.fill")(value) | Fill the array with a scalar value. [`find`](numpy.char.chararray.find#numpy.char.chararray.find "numpy.char.chararray.find")(sub[, start, end]) | For each element, return the lowest index in the string where substring `sub` is found. [`flatten`](numpy.char.chararray.flatten#numpy.char.chararray.flatten "numpy.char.chararray.flatten")([order]) | Return a copy of the array collapsed into one dimension. [`getfield`](numpy.char.chararray.getfield#numpy.char.chararray.getfield "numpy.char.chararray.getfield")(dtype[, offset]) | Returns a field of the given array as a certain type. [`index`](numpy.char.chararray.index#numpy.char.chararray.index "numpy.char.chararray.index")(sub[, start, end]) | Like [`find`](numpy.char.find#numpy.char.find "numpy.char.find"), but raises [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError "\(in Python v3.13\)") when the substring is not found. [`isalnum`](numpy.char.chararray.isalnum#numpy.char.chararray.isalnum "numpy.char.chararray.isalnum")() | Returns true for each element if all characters in the string are alphanumeric and there is at least one character, false otherwise. 
[`isalpha`](numpy.char.chararray.isalpha#numpy.char.chararray.isalpha "numpy.char.chararray.isalpha")() | Returns true for each element if all characters in the string are alphabetic and there is at least one character, false otherwise. [`isdecimal`](numpy.char.chararray.isdecimal#numpy.char.chararray.isdecimal "numpy.char.chararray.isdecimal")() | For each element in `self`, return True if there are only decimal characters in the element. [`isdigit`](numpy.char.chararray.isdigit#numpy.char.chararray.isdigit "numpy.char.chararray.isdigit")() | Returns true for each element if all characters in the string are digits and there is at least one character, false otherwise. [`islower`](numpy.char.chararray.islower#numpy.char.chararray.islower "numpy.char.chararray.islower")() | Returns true for each element if all cased characters in the string are lowercase and there is at least one cased character, false otherwise. [`isnumeric`](numpy.char.chararray.isnumeric#numpy.char.chararray.isnumeric "numpy.char.chararray.isnumeric")() | For each element in `self`, return True if there are only numeric characters in the element. [`isspace`](numpy.char.chararray.isspace#numpy.char.chararray.isspace "numpy.char.chararray.isspace")() | Returns true for each element if there are only whitespace characters in the string and there is at least one character, false otherwise. [`istitle`](numpy.char.chararray.istitle#numpy.char.chararray.istitle "numpy.char.chararray.istitle")() | Returns true for each element if the element is a titlecased string and there is at least one character, false otherwise. [`isupper`](numpy.char.chararray.isupper#numpy.char.chararray.isupper "numpy.char.chararray.isupper")() | Returns true for each element if all cased characters in the string are uppercase and there is at least one character, false otherwise. 
[`item`](numpy.char.chararray.item#numpy.char.chararray.item "numpy.char.chararray.item")(*args) | Copy an element of an array to a standard Python scalar and return it. [`join`](numpy.char.chararray.join#numpy.char.chararray.join "numpy.char.chararray.join")(seq) | Return a string which is the concatenation of the strings in the sequence `seq`. [`ljust`](numpy.char.chararray.ljust#numpy.char.chararray.ljust "numpy.char.chararray.ljust")(width[, fillchar]) | Return an array with the elements of `self` left-justified in a string of length `width`. [`lower`](numpy.char.chararray.lower#numpy.char.chararray.lower "numpy.char.chararray.lower")() | Return an array with the elements of `self` converted to lowercase. [`lstrip`](numpy.char.chararray.lstrip#numpy.char.chararray.lstrip "numpy.char.chararray.lstrip")([chars]) | For each element in `self`, return a copy with the leading characters removed. [`nonzero`](numpy.char.chararray.nonzero#numpy.char.chararray.nonzero "numpy.char.chararray.nonzero")() | Return the indices of the elements that are non-zero. [`put`](numpy.char.chararray.put#numpy.char.chararray.put "numpy.char.chararray.put")(indices, values[, mode]) | Set `a.flat[n] = values[n]` for all `n` in indices. [`ravel`](numpy.char.chararray.ravel#numpy.char.chararray.ravel "numpy.char.chararray.ravel")([order]) | Return a flattened array. [`repeat`](numpy.char.chararray.repeat#numpy.char.chararray.repeat "numpy.char.chararray.repeat")(repeats[, axis]) | Repeat elements of an array. [`replace`](numpy.char.chararray.replace#numpy.char.chararray.replace "numpy.char.chararray.replace")(old, new[, count]) | For each element in `self`, return a copy of the string with all occurrences of substring `old` replaced by `new`. [`reshape`](numpy.char.chararray.reshape#numpy.char.chararray.reshape "numpy.char.chararray.reshape")(shape, /, *[, order, copy]) | Returns an array containing the same data with a new shape. 
[`resize`](numpy.char.chararray.resize#numpy.char.chararray.resize "numpy.char.chararray.resize")(new_shape[, refcheck]) | Change shape and size of array in-place. [`rfind`](numpy.char.chararray.rfind#numpy.char.chararray.rfind "numpy.char.chararray.rfind")(sub[, start, end]) | For each element in `self`, return the highest index in the string where substring `sub` is found, such that `sub` is contained within [`start`, `end`]. [`rindex`](numpy.char.chararray.rindex#numpy.char.chararray.rindex "numpy.char.chararray.rindex")(sub[, start, end]) | Like [`rfind`](numpy.char.rfind#numpy.char.rfind "numpy.char.rfind"), but raises [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError "\(in Python v3.13\)") when the substring `sub` is not found. [`rjust`](numpy.char.chararray.rjust#numpy.char.chararray.rjust "numpy.char.chararray.rjust")(width[, fillchar]) | Return an array with the elements of `self` right-justified in a string of length `width`. [`rsplit`](numpy.char.chararray.rsplit#numpy.char.chararray.rsplit "numpy.char.chararray.rsplit")([sep, maxsplit]) | For each element in `self`, return a list of the words in the string, using `sep` as the delimiter string. [`rstrip`](numpy.char.chararray.rstrip#numpy.char.chararray.rstrip "numpy.char.chararray.rstrip")([chars]) | For each element in `self`, return a copy with the trailing characters removed. [`searchsorted`](numpy.char.chararray.searchsorted#numpy.char.chararray.searchsorted "numpy.char.chararray.searchsorted")(v[, side, sorter]) | Find indices where elements of v should be inserted in a to maintain order. [`setfield`](numpy.char.chararray.setfield#numpy.char.chararray.setfield "numpy.char.chararray.setfield")(val, dtype[, offset]) | Put a value into a specified place in a field defined by a data-type. 
[`setflags`](numpy.char.chararray.setflags#numpy.char.chararray.setflags "numpy.char.chararray.setflags")([write, align, uic]) | Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively. [`sort`](numpy.char.chararray.sort#numpy.char.chararray.sort "numpy.char.chararray.sort")([axis, kind, order]) | Sort an array in-place. [`split`](numpy.char.chararray.split#numpy.char.chararray.split "numpy.char.chararray.split")([sep, maxsplit]) | For each element in `self`, return a list of the words in the string, using `sep` as the delimiter string. [`splitlines`](numpy.char.chararray.splitlines#numpy.char.chararray.splitlines "numpy.char.chararray.splitlines")([keepends]) | For each element in `self`, return a list of the lines in the element, breaking at line boundaries. [`squeeze`](numpy.char.chararray.squeeze#numpy.char.chararray.squeeze "numpy.char.chararray.squeeze")([axis]) | Remove axes of length one from `a`. [`startswith`](numpy.char.chararray.startswith#numpy.char.chararray.startswith "numpy.char.chararray.startswith")(prefix[, start, end]) | Returns a boolean array which is `True` where the string element in `self` starts with `prefix`, otherwise `False`. [`strip`](numpy.char.chararray.strip#numpy.char.chararray.strip "numpy.char.chararray.strip")([chars]) | For each element in `self`, return a copy with the leading and trailing characters removed. [`swapaxes`](numpy.char.chararray.swapaxes#numpy.char.chararray.swapaxes "numpy.char.chararray.swapaxes")(axis1, axis2) | Return a view of the array with `axis1` and `axis2` interchanged. [`swapcase`](numpy.char.chararray.swapcase#numpy.char.chararray.swapcase "numpy.char.chararray.swapcase")() | For each element in `self`, return a copy of the string with uppercase characters converted to lowercase and vice versa. [`take`](numpy.char.chararray.take#numpy.char.chararray.take "numpy.char.chararray.take")(indices[, axis, out, mode]) | Return an array formed from the elements of `a` at the given indices. 
[`title`](numpy.char.chararray.title#numpy.char.chararray.title "numpy.char.chararray.title")() | For each element in `self`, return a titlecased version of the string: words start with uppercase characters, all remaining cased characters are lowercase. [`tofile`](numpy.char.chararray.tofile#numpy.char.chararray.tofile "numpy.char.chararray.tofile")(fid[, sep, format]) | Write array to a file as text or binary (default). [`tolist`](numpy.char.chararray.tolist#numpy.char.chararray.tolist "numpy.char.chararray.tolist")() | Return the array as an `a.ndim`-levels deep nested list of Python scalars. [`tostring`](numpy.char.chararray.tostring#numpy.char.chararray.tostring "numpy.char.chararray.tostring")([order]) | A compatibility alias for `tobytes`, with exactly the same behavior. [`translate`](numpy.char.chararray.translate#numpy.char.chararray.translate "numpy.char.chararray.translate")(table[, deletechars]) | For each element in `self`, return a copy of the string where all characters occurring in the optional argument `deletechars` are removed, and the remaining characters have been mapped through the given translation table. [`transpose`](numpy.char.chararray.transpose#numpy.char.chararray.transpose "numpy.char.chararray.transpose")(*axes) | Returns a view of the array with axes transposed. [`upper`](numpy.char.chararray.upper#numpy.char.chararray.upper "numpy.char.chararray.upper")() | Return an array with the elements of `self` converted to uppercase. [`view`](numpy.char.chararray.view#numpy.char.chararray.view "numpy.char.chararray.view")([dtype][, type]) | New view of array with the same data. [`zfill`](numpy.char.chararray.zfill#numpy.char.chararray.zfill "numpy.char.chararray.zfill")(width) | Return the numeric string left-filled with zeros in a string of length `width`. # numpy.char.chararray.imag attribute char.chararray.imag The imaginary part of the array. #### Examples >>> import numpy as np >>> x = np.sqrt([1+0j, 0+1j]) >>> x.imag array([ 0. 
, 0.70710678]) >>> x.imag.dtype dtype('float64') # numpy.char.chararray.index method char.chararray.index(_sub_ , _start =0_, _end =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L835-L845) Like [`find`](numpy.char.chararray.find#numpy.char.chararray.find "numpy.char.chararray.find"), but raises [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError "\(in Python v3.13\)") when the substring is not found. See also [`char.index`](numpy.char.index#numpy.char.index "numpy.char.index") # numpy.char.chararray.isalnum method char.chararray.isalnum()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L847-L858) Returns true for each element if all characters in the string are alphanumeric and there is at least one character, false otherwise. See also [`char.isalnum`](numpy.char.isalnum#numpy.char.isalnum "numpy.char.isalnum") # numpy.char.chararray.isalpha method char.chararray.isalpha()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L860-L871) Returns true for each element if all characters in the string are alphabetic and there is at least one character, false otherwise. See also [`char.isalpha`](numpy.char.isalpha#numpy.char.isalpha "numpy.char.isalpha") # numpy.char.chararray.isdecimal method char.chararray.isdecimal()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L1200-L1210) For each element in `self`, return True if there are only decimal characters in the element. See also [`char.isdecimal`](numpy.char.isdecimal#numpy.char.isdecimal "numpy.char.isdecimal") # numpy.char.chararray.isdigit method char.chararray.isdigit()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L873-L883) Returns true for each element if all characters in the string are digits and there is at least one character, false otherwise. 
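Each of the `is*` methods above applies the corresponding Python `str` predicate element-wise and returns a boolean array. A minimal sketch with made-up sample data (`np.char.array` constructs a `chararray` from a list of strings):

```python
import numpy as np

# np.char.array builds a chararray from a list of Python strings
a = np.char.array(["abc123", "abc", "123", "?!"])

# Element-wise equivalents of str.isalnum / str.isalpha / str.isdigit
print(a.isalnum().tolist())   # [True, True, True, False]
print(a.isalpha().tolist())   # [False, True, False, False]
print(a.isdigit().tolist())   # [False, False, True, False]
```

Each call returns a plain boolean ndarray, one entry per string element.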
See also [`char.isdigit`](numpy.char.isdigit#numpy.char.isdigit "numpy.char.isdigit") # numpy.char.chararray.islower method char.chararray.islower()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L885-L896) Returns true for each element if all cased characters in the string are lowercase and there is at least one cased character, false otherwise. See also [`char.islower`](numpy.char.islower#numpy.char.islower "numpy.char.islower") # numpy.char.chararray.isnumeric method char.chararray.isnumeric()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L1188-L1198) For each element in `self`, return True if there are only numeric characters in the element. See also [`char.isnumeric`](numpy.char.isnumeric#numpy.char.isnumeric "numpy.char.isnumeric") # numpy.char.chararray.isspace method char.chararray.isspace()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L898-L909) Returns true for each element if there are only whitespace characters in the string and there is at least one character, false otherwise. See also [`char.isspace`](numpy.char.isspace#numpy.char.isspace "numpy.char.isspace") # numpy.char.chararray.istitle method char.chararray.istitle()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L911-L921) Returns true for each element if the element is a titlecased string and there is at least one character, false otherwise. See also [`char.istitle`](numpy.char.istitle#numpy.char.istitle "numpy.char.istitle") # numpy.char.chararray.isupper method char.chararray.isupper()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L923-L934) Returns true for each element if all cased characters in the string are uppercase and there is at least one character, false otherwise. 
See also [`char.isupper`](numpy.char.isupper#numpy.char.isupper "numpy.char.isupper") # numpy.char.chararray.item method char.chararray.item(_* args_) Copy an element of an array to a standard Python scalar and return it. Parameters: ***args** Arguments (variable number and type) * none: in this case, the method only works for arrays with one element (`a.size == 1`), which element is copied into a standard Python scalar object and returned. * int_type: this argument is interpreted as a flat index into the array, specifying which element to copy and return. * tuple of int_types: functions as does a single int_type argument, except that the argument is interpreted as an nd-index into the array. Returns: **z** Standard Python scalar object A copy of the specified element of the array as a suitable Python scalar #### Notes When the data type of `a` is longdouble or clongdouble, item() returns a scalar array object because there is no available Python scalar that would not lose information. Void arrays return a buffer object for item(), unless fields are defined, in which case a tuple is returned. `item` is very similar to a[args], except, instead of an array scalar, a standard Python scalar is returned. This can be useful for speeding up access to elements of the array and doing arithmetic on elements of the array using Python’s optimized math. #### Examples >>> import numpy as np >>> np.random.seed(123) >>> x = np.random.randint(9, size=(3, 3)) >>> x array([[2, 2, 6], [1, 3, 6], [1, 0, 1]]) >>> x.item(3) 1 >>> x.item(7) 0 >>> x.item((0, 1)) 2 >>> x.item((2, 2)) 1 For an array with object dtype, elements are returned as-is. >>> a = np.array([np.int64(1)], dtype=object) >>> a.item() #return np.int64 np.int64(1) # numpy.char.chararray.itemsize attribute char.chararray.itemsize Length of one array element in bytes. 
#### Examples >>> import numpy as np >>> x = np.array([1,2,3], dtype=np.float64) >>> x.itemsize 8 >>> x = np.array([1,2,3], dtype=np.complex128) >>> x.itemsize 16 # numpy.char.chararray.join method char.chararray.join(_seq_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L936-L946) Return a string which is the concatenation of the strings in the sequence `seq`. See also [`char.join`](numpy.char.join#numpy.char.join "numpy.char.join") # numpy.char.chararray.ljust method char.chararray.ljust(_width_ , _fillchar =' '_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L948-L958) Return an array with the elements of `self` left-justified in a string of length `width`. See also [`char.ljust`](numpy.char.ljust#numpy.char.ljust "numpy.char.ljust") # numpy.char.chararray.lower method char.chararray.lower()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L960-L970) Return an array with the elements of `self` converted to lowercase. See also [`char.lower`](numpy.char.lower#numpy.char.lower "numpy.char.lower") # numpy.char.chararray.lstrip method char.chararray.lstrip(_chars =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L972-L982) For each element in `self`, return a copy with the leading characters removed. See also [`char.lstrip`](numpy.char.lstrip#numpy.char.lstrip "numpy.char.lstrip") # numpy.char.chararray.mT attribute char.chararray.mT View of the matrix transposed array. The matrix transpose is the transpose of the last two dimensions, even if the array is of higher dimension. New in version 2.0. Raises: ValueError If the array is of dimension less than 2. 
#### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> a.mT array([[1, 3], [2, 4]]) >>> a = np.arange(8).reshape((2, 2, 2)) >>> a array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) >>> a.mT array([[[0, 2], [1, 3]], [[4, 6], [5, 7]]]) # numpy.char.chararray.nbytes attribute char.chararray.nbytes Total bytes consumed by the elements of the array. See also [`sys.getsizeof`](https://docs.python.org/3/library/sys.html#sys.getsizeof "\(in Python v3.13\)") Memory consumed by the object itself without parents in case view. This does include memory consumed by non-element attributes. #### Notes Does not include memory consumed by non-element attributes of the array object. #### Examples >>> import numpy as np >>> x = np.zeros((3,5,2), dtype=np.complex128) >>> x.nbytes 480 >>> np.prod(x.shape) * x.itemsize 480 # numpy.char.chararray.ndim attribute char.chararray.ndim Number of array dimensions. #### Examples >>> import numpy as np >>> x = np.array([1, 2, 3]) >>> x.ndim 1 >>> y = np.zeros((2, 3, 4)) >>> y.ndim 3 # numpy.char.chararray.nonzero method char.chararray.nonzero() Return the indices of the elements that are non-zero. Refer to [`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") for full documentation. See also [`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") equivalent function # numpy.char.chararray.put method char.chararray.put(_indices_ , _values_ , _mode ='raise'_) Set `a.flat[n] = values[n]` for all `n` in indices. Refer to [`numpy.put`](numpy.put#numpy.put "numpy.put") for full documentation. See also [`numpy.put`](numpy.put#numpy.put "numpy.put") equivalent function # numpy.char.chararray.ravel method char.chararray.ravel([_order_]) Return a flattened array. Refer to [`numpy.ravel`](numpy.ravel#numpy.ravel "numpy.ravel") for full documentation. 
See also [`numpy.ravel`](numpy.ravel#numpy.ravel "numpy.ravel") equivalent function [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") a flat iterator on the array. # numpy.char.chararray.real attribute char.chararray.real The real part of the array. See also [`numpy.real`](numpy.real#numpy.real "numpy.real") equivalent function #### Examples >>> import numpy as np >>> x = np.sqrt([1+0j, 0+1j]) >>> x.real array([ 1. , 0.70710678]) >>> x.real.dtype dtype('float64') # numpy.char.chararray.repeat method char.chararray.repeat(_repeats_ , _axis =None_) Repeat elements of an array. Refer to [`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat") for full documentation. See also [`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat") equivalent function # numpy.char.chararray.replace method char.chararray.replace(_old_ , _new_ , _count =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L994-L1004) For each element in `self`, return a copy of the string with all occurrences of substring `old` replaced by `new`. See also [`char.replace`](numpy.char.replace#numpy.char.replace "numpy.char.replace") # numpy.char.chararray.reshape method char.chararray.reshape(_shape_ , _/_ , _*_ , _order ='C'_, _copy =None_) Returns an array containing the same data with a new shape. Refer to [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") for full documentation. See also [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") equivalent function #### Notes Unlike the free function [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape"), this method on [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") allows the elements of the shape parameter to be passed in as separate arguments. For example, `a.reshape(10, 11)` is equivalent to `a.reshape((10, 11))`. # numpy.char.chararray.resize method char.chararray.resize(_new_shape_ , _refcheck =True_) Change shape and size of array in-place. 
Parameters: **new_shape** tuple of ints, or `n` ints Shape of resized array. **refcheck** bool, optional If False, reference count will not be checked. Default is True. Returns: None Raises: ValueError If `a` does not own its own data or references or views to it exist, and the data memory must be changed. PyPy only: will always raise if the data memory must be changed, since there is no reliable way to determine if references or views to it exist. SystemError If the `order` keyword argument is specified. This behaviour is a bug in NumPy. See also [`resize`](numpy.resize#numpy.resize "numpy.resize") Return a new array with the specified shape. #### Notes This reallocates space for the data area if necessary. Only contiguous arrays (data elements consecutive in memory) can be resized. The purpose of the reference count check is to make sure you do not use this array as a buffer for another Python object and then reallocate the memory. However, reference counts can increase in other ways so if you are sure that you have not shared the memory for this array with another Python object, then you may safely set `refcheck` to False. #### Examples Shrinking an array: array is flattened (in the order that the data are stored in memory), resized, and reshaped: >>> import numpy as np >>> a = np.array([[0, 1], [2, 3]], order='C') >>> a.resize((2, 1)) >>> a array([[0], [1]]) >>> a = np.array([[0, 1], [2, 3]], order='F') >>> a.resize((2, 1)) >>> a array([[0], [2]]) Enlarging an array: as above, but missing entries are filled with zeros: >>> b = np.array([[0, 1], [2, 3]]) >>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple >>> b array([[0, 1, 2], [3, 0, 0]]) Referencing an array prevents resizing… >>> c = a >>> a.resize((1, 1)) Traceback (most recent call last): ... ValueError: cannot resize an array that references or is referenced ... 
Unless `refcheck` is False: >>> a.resize((1, 1), refcheck=False) >>> a array([[0]]) >>> c array([[0]]) # numpy.char.chararray.rfind method char.chararray.rfind(_sub_ , _start =0_, _end =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L1006-L1017) For each element in `self`, return the highest index in the string where substring `sub` is found, such that `sub` is contained within [`start`, `end`]. See also [`char.rfind`](numpy.char.rfind#numpy.char.rfind "numpy.char.rfind") # numpy.char.chararray.rindex method char.chararray.rindex(_sub_ , _start =0_, _end =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L1019-L1029) Like [`rfind`](numpy.char.chararray.rfind#numpy.char.chararray.rfind "numpy.char.chararray.rfind"), but raises [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError "\(in Python v3.13\)") when the substring `sub` is not found. See also [`char.rindex`](numpy.char.rindex#numpy.char.rindex "numpy.char.rindex") # numpy.char.chararray.rjust method char.chararray.rjust(_width_ , _fillchar =' '_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L1031-L1041) Return an array with the elements of `self` right-justified in a string of length `width`. See also [`char.rjust`](numpy.char.rjust#numpy.char.rjust "numpy.char.rjust") # numpy.char.chararray.rsplit method char.chararray.rsplit(_sep =None_, _maxsplit =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L1053-L1063) For each element in `self`, return a list of the words in the string, using `sep` as the delimiter string. 
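The element-wise `replace`, `rfind`, `rjust`, and `rsplit` methods described above can be sketched together on made-up data:

```python
import numpy as np

t = np.char.array(["one fish", "two fish"])
# Element-wise str.replace
print(t.replace("fish", "bird").tolist())  # ['one bird', 'two bird']

r = np.char.array(["abcabc", "xyz"])
# Highest index where the substring occurs, -1 if absent
print(r.rfind("abc").tolist())             # [3, -1]
print(r.rjust(8, "*").tolist())            # ['**abcabc', '*****xyz']

# rsplit splits from the right; maxsplit limits the number of splits
w = np.char.array(["one two three"])
print(w.rsplit(None, 1).tolist())          # [['one two', 'three']]
```

`rindex` behaves like `rfind` here, except that the `-1` case raises `ValueError` instead.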
See also [`char.rsplit`](numpy.char.rsplit#numpy.char.rsplit "numpy.char.rsplit") # numpy.char.chararray.rstrip method char.chararray.rstrip(_chars =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L1065-L1075) For each element in `self`, return a copy with the trailing characters removed. See also [`char.rstrip`](numpy.char.rstrip#numpy.char.rstrip "numpy.char.rstrip") # numpy.char.chararray.searchsorted method char.chararray.searchsorted(_v_ , _side ='left'_, _sorter =None_) Find indices where elements of v should be inserted in a to maintain order. For full documentation, see [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") See also [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") equivalent function # numpy.char.chararray.setfield method char.chararray.setfield(_val_ , _dtype_ , _offset =0_) Put a value into a specified place in a field defined by a data-type. Place `val` into `a`’s field defined by [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") and beginning `offset` bytes into the field. Parameters: **val** object Value to be placed in field. **dtype** dtype object Data-type of the field in which to place `val`. **offset** int, optional The number of bytes into the field at which to place `val`. 
Returns: None See also [`getfield`](numpy.char.chararray.getfield#numpy.char.chararray.getfield "numpy.char.chararray.getfield") #### Examples >>> import numpy as np >>> x = np.eye(3) >>> x.getfield(np.float64) array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) >>> x.setfield(3, np.int32) >>> x.getfield(np.int32) array([[3, 3, 3], [3, 3, 3], [3, 3, 3]], dtype=int32) >>> x array([[1.0e+000, 1.5e-323, 1.5e-323], [1.5e-323, 1.0e+000, 1.5e-323], [1.5e-323, 1.5e-323, 1.0e+000]]) >>> x.setfield(np.eye(3), np.int32) >>> x array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) # numpy.char.chararray.setflags method char.chararray.setflags(_write =None_, _align =None_, _uic =None_) Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively. These Boolean-valued flags affect how numpy interprets the memory area used by `a` (see Notes below). The ALIGNED flag can only be set to True if the data is actually aligned according to the type. The WRITEBACKIFCOPY flag can never be set to True. The flag WRITEABLE can only be set to True if the array owns its own memory, or the ultimate owner of the memory exposes a writeable buffer interface, or is a string. (The exception for string is made so that unpickling can be done without copying memory.) Parameters: **write** bool, optional Describes whether or not `a` can be written to. **align** bool, optional Describes whether or not `a` is aligned properly for its type. **uic** bool, optional Describes whether or not `a` is a copy of another “base” array. #### Notes Array flags provide information about how the memory area used for the array is to be interpreted. There are 7 Boolean flags in use, only three of which can be changed by the user: WRITEBACKIFCOPY, WRITEABLE, and ALIGNED. WRITEABLE (W) the data area can be written to; ALIGNED (A) the data and strides are aligned appropriately for the hardware (as determined by the compiler); WRITEBACKIFCOPY (X) this array is a copy of some other array (referenced by .base). 
When the C-API function PyArray_ResolveWritebackIfCopy is called, the base array will be updated with the contents of this array. All flags can be accessed using the single (upper case) letter as well as the full name. #### Examples >>> import numpy as np >>> y = np.array([[3, 1, 7], ... [2, 0, 0], ... [8, 5, 9]]) >>> y array([[3, 1, 7], [2, 0, 0], [8, 5, 9]]) >>> y.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : True ALIGNED : True WRITEBACKIFCOPY : False >>> y.setflags(write=0, align=0) >>> y.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : False ALIGNED : False WRITEBACKIFCOPY : False >>> y.setflags(uic=1) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: cannot set WRITEBACKIFCOPY flag to True # numpy.char.chararray.shape attribute char.chararray.shape Tuple of array dimensions. The shape property is usually used to get the current shape of an array, but may also be used to reshape the array in-place by assigning a tuple of array dimensions to it. As with [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape"), one of the new shape dimensions can be -1, in which case its value is inferred from the size of the array and the remaining dimensions. Reshaping an array in-place will fail if a copy is required. Warning Setting `arr.shape` is discouraged and may be deprecated in the future. Using [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") is the preferred approach. See also [`numpy.shape`](numpy.shape#numpy.shape "numpy.shape") Equivalent getter function. [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") Function similar to setting `shape`. [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") Method similar to setting `shape`. 
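A quick sketch of the flag-setting behaviour described above, on a small made-up array:

```python
import numpy as np

y = np.array([[3, 1, 7], [2, 0, 0]])
assert y.flags.writeable and y.flags.aligned

# Clearing WRITEABLE and ALIGNED is always allowed
y.setflags(write=False, align=False)
print(y.flags.writeable, y.flags.aligned)  # False False

# WRITEBACKIFCOPY can never be set to True by the user
try:
    y.setflags(uic=True)
except ValueError as exc:
    print(exc)  # cannot set WRITEBACKIFCOPY flag to True
```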
#### Examples >>> import numpy as np >>> x = np.array([1, 2, 3, 4]) >>> x.shape (4,) >>> y = np.zeros((2, 3, 4)) >>> y.shape (2, 3, 4) >>> y.shape = (3, 8) >>> y array([[ 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0.]]) >>> y.shape = (3, 6) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: total size of new array must be unchanged >>> np.zeros((4,2))[::2].shape = (-1,) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: Incompatible shape for in-place modification. Use `.reshape()` to make a copy with the desired shape. # numpy.char.chararray.size attribute char.chararray.size Number of elements in the array. Equal to `np.prod(a.shape)`, i.e., the product of the array’s dimensions. #### Notes `a.size` returns a standard arbitrary precision Python integer. This may not be the case with other methods of obtaining the same value (like the suggested `np.prod(a.shape)`, which returns an instance of `np.int_`), and may be relevant if the value is used further in calculations that may overflow a fixed size integer type. #### Examples >>> import numpy as np >>> x = np.zeros((3, 5, 2), dtype=np.complex128) >>> x.size 30 >>> np.prod(x.shape) 30 # numpy.char.chararray.sort method char.chararray.sort(_axis =-1_, _kind =None_, _order =None_) Sort an array in-place. Refer to [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") for full documentation. Parameters: **axis** int, optional Axis along which to sort. Default is -1, which means sort along the last axis. **kind**{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional Sorting algorithm. The default is ‘quicksort’. Note that both ‘stable’ and ‘mergesort’ use timsort under the covers and, in general, the actual implementation will vary with datatype. The ‘mergesort’ option is retained for backwards compatibility. 
**order** str or list of str, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties. See also [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") Return a sorted copy of an array. [`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort") Indirect sort. [`numpy.lexsort`](numpy.lexsort#numpy.lexsort "numpy.lexsort") Indirect stable sort on multiple keys. [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") Find elements in sorted array. [`numpy.partition`](numpy.partition#numpy.partition "numpy.partition") Partial sort. #### Notes See [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") for notes on the different sorting algorithms. #### Examples >>> import numpy as np >>> a = np.array([[1,4], [3,1]]) >>> a.sort(axis=1) >>> a array([[1, 4], [1, 3]]) >>> a.sort(axis=0) >>> a array([[1, 3], [1, 4]]) Use the `order` keyword to specify a field to use when sorting a structured array: >>> a = np.array([('a', 2), ('c', 1)], dtype=[('x', 'S1'), ('y', int)]) >>> a.sort(order='y') >>> a array([(b'c', 1), (b'a', 2)], dtype=[('x', 'S1'), ('y', '<i8')]) # numpy.char.chararray.strides attribute char.chararray.strides Tuple of bytes to step in each dimension when traversing an array. The byte offset of element `(i[0], i[1], ..., i[n])` in an array `a` is: offset = sum(np.array(i) * a.strides) A more detailed explanation of strides can be found in the NumPy reference guide. See also [`numpy.lib.stride_tricks.as_strided`](numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided") #### Examples >>> import numpy as np >>> y = np.reshape(np.arange(2*3*4), (2,3,4)) >>> y array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]) >>> y.strides (48, 16, 4) >>> y[1,1,1] 17 >>> offset=sum(y.strides * np.array((1,1,1))) >>> offset/y.itemsize 17 >>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0) >>> x.strides (32, 4, 224, 1344) >>> i = np.array([3,5,2,2]) >>> offset = sum(i * x.strides) >>> x[3,5,2,2] 813 >>> offset / x.itemsize 813 # numpy.char.chararray.strip method char.chararray.strip(_chars 
=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L1113-L1123) For each element in `self`, return a copy with the leading and trailing characters removed. See also [`char.strip`](numpy.char.strip#numpy.char.strip "numpy.char.strip") # numpy.char.chararray.swapaxes method char.chararray.swapaxes(_axis1_ , _axis2_) Return a view of the array with `axis1` and `axis2` interchanged. Refer to [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") for full documentation. See also [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") equivalent function # numpy.char.chararray.swapcase method char.chararray.swapcase()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L1125-L1135) For each element in `self`, return a copy of the string with uppercase characters converted to lowercase and vice versa. See also [`char.swapcase`](numpy.char.swapcase#numpy.char.swapcase "numpy.char.swapcase") # numpy.char.chararray.T attribute char.chararray.T View of the transposed array. Same as `self.transpose()`. See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose") #### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> a.T array([[1, 3], [2, 4]]) >>> a = np.array([1, 2, 3, 4]) >>> a array([1, 2, 3, 4]) >>> a.T array([1, 2, 3, 4]) # numpy.char.chararray.take method char.chararray.take(_indices_ , _axis =None_, _out =None_, _mode ='raise'_) Return an array formed from the elements of `a` at the given indices. Refer to [`numpy.take`](numpy.take#numpy.take "numpy.take") for full documentation. 
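The `strip`/`rstrip`, `swapcase`, and `take` methods described above can be sketched on made-up data:

```python
import numpy as np

v = np.char.array(["--Hello--", "..World.."])

# strip/rstrip remove any of the given characters from the ends
print(v.strip("-.").tolist())    # ['Hello', 'World']
print(v.rstrip("-.").tolist())   # ['--Hello', '..World']

# Element-wise str.swapcase
print(v.swapcase().tolist())     # ['--hELLO--', '..wORLD..']

# take selects whole elements by flat index
print(v.take([1, 0]).tolist())   # ['..World..', '--Hello--']
```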
See also [`numpy.take`](numpy.take#numpy.take "numpy.take") equivalent function # numpy.char.chararray.title method char.chararray.title()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L1137-L1148) For each element in `self`, return a titlecased version of the string: words start with uppercase characters, all remaining cased characters are lowercase. See also [`char.title`](numpy.char.title#numpy.char.title "numpy.char.title") # numpy.char.chararray.tofile method char.chararray.tofile(_fid_ , _sep =''_, _format ='%s'_) Write array to a file as text or binary (default). Data is always written in ‘C’ order, independent of the order of `a`. The data produced by this method can be recovered using the function fromfile(). Parameters: **fid** file or str or Path An open file object, or a string containing a filename. **sep** str Separator between array items for text output. If “” (empty), a binary file is written, equivalent to `file.write(a.tobytes())`. **format** str Format string for text file output. Each entry in the array is formatted to text by first converting it to the closest Python type, and then using “format” % item. #### Notes This is a convenience function for quick storage of array data. Information on endianness and precision is lost, so this method is not a good choice for files intended to archive data or transport data between machines with different endianness. Some of these problems can be overcome by outputting the data as text files, at the expense of speed and file size. When fid is a file object, array contents are directly written to the file, bypassing the file object’s `write` method. As a result, tofile cannot be used with file objects supporting compression (e.g., GzipFile) or file-like objects that do not support `fileno()` (e.g., BytesIO). # numpy.char.chararray.tolist method char.chararray.tolist() Return the array as an `a.ndim`-levels deep nested list of Python scalars. 
Return a copy of the array data as a (nested) Python list. Data items are converted to the nearest compatible builtin Python type, via the [`item`](numpy.ndarray.item#numpy.ndarray.item "numpy.ndarray.item") function. If `a.ndim` is 0, then since the depth of the nested list is 0, it will not be a list at all, but a simple Python scalar. Parameters: **none** Returns: **y** object, or list of object, or list of list of object, or … The possibly nested list of array elements. #### Notes The array may be recreated via `a = np.array(a.tolist())`, although this may sometimes lose precision. #### Examples For a 1D array, `a.tolist()` is almost the same as `list(a)`, except that `tolist` changes numpy scalars to Python scalars: >>> import numpy as np >>> a = np.uint32([1, 2]) >>> a_list = list(a) >>> a_list [np.uint32(1), np.uint32(2)] >>> type(a_list[0]) <class 'numpy.uint32'> >>> a_tolist = a.tolist() >>> a_tolist [1, 2] >>> type(a_tolist[0]) <class 'int'> Additionally, for a 2D array, `tolist` applies recursively: >>> a = np.array([[1, 2], [3, 4]]) >>> list(a) [array([1, 2]), array([3, 4])] >>> a.tolist() [[1, 2], [3, 4]] The base case for this recursion is a 0D array: >>> a = np.array(1) >>> list(a) Traceback (most recent call last): ... TypeError: iteration over a 0-d array >>> a.tolist() 1 # numpy.char.chararray.tostring method char.chararray.tostring(_order ='C'_) A compatibility alias for [`tobytes`](numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes"), with exactly the same behavior. Despite its name, it returns [`bytes`](https://docs.python.org/3/library/stdtypes.html#bytes "\(in Python v3.13\)") not [`str`](https://docs.python.org/3/library/stdtypes.html#str "\(in Python v3.13\)")s. Deprecated since version 1.19.0. 
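As the `tofile` notes above say, raw binary output discards dtype and shape metadata, so the caller must resupply the dtype when reading back with `np.fromfile`; and `tolist` converts NumPy scalars to builtin Python types. A minimal round-trip sketch (the temporary-file path is illustrative):

```python
import os
import tempfile

import numpy as np

a = np.arange(6, dtype=np.int32)

path = os.path.join(tempfile.mkdtemp(), "data.bin")
a.tofile(path)  # raw binary, written in C order; dtype/shape metadata is lost

# The caller must supply the dtype again when reading back
b = np.fromfile(path, dtype=np.int32)
print(b.tolist())  # [0, 1, 2, 3, 4, 5] -- plain Python ints, not np.int32
```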
# numpy.char.chararray.translate method char.chararray.translate(_table_ , _deletechars =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L1150-L1162) For each element in `self`, return a copy of the string where all characters occurring in the optional argument `deletechars` are removed, and the remaining characters have been mapped through the given translation table. See also [`char.translate`](numpy.char.translate#numpy.char.translate "numpy.char.translate") # numpy.char.chararray.transpose method char.chararray.transpose(_* axes_) Returns a view of the array with axes transposed. Refer to [`numpy.transpose`](numpy.transpose#numpy.transpose "numpy.transpose") for full documentation. Parameters: **axes** None, tuple of ints, or `n` ints * None or no argument: reverses the order of the axes. * tuple of ints: `i` in the `j`-th place in the tuple means that the array’s `i`-th axis becomes the transposed array’s `j`-th axis. * `n` ints: same as an n-tuple of the same ints (this form is intended simply as a “convenience” alternative to the tuple form). Returns: **p** ndarray View of the array with its axes suitably permuted. See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose") Equivalent function. [`ndarray.T`](numpy.ndarray.t#numpy.ndarray.T "numpy.ndarray.T") Array property returning the array transposed. [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") Give a new shape to an array without changing its data. 
#### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> a.transpose() array([[1, 3], [2, 4]]) >>> a.transpose((1, 0)) array([[1, 3], [2, 4]]) >>> a.transpose(1, 0) array([[1, 3], [2, 4]]) >>> a = np.array([1, 2, 3, 4]) >>> a array([1, 2, 3, 4]) >>> a.transpose() array([1, 2, 3, 4]) # numpy.char.chararray.upper method char.chararray.upper()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L1164-L1174) Return an array with the elements of `self` converted to uppercase. See also [`char.upper`](numpy.char.upper#numpy.char.upper "numpy.char.upper") # numpy.char.chararray.view method char.chararray.view(_[dtype][, type]_) New view of array with the same data. Note Passing None for `dtype` is different from omitting the parameter, since the former invokes `dtype(None)` which is an alias for `dtype('float64')`. Parameters: **dtype** data-type or ndarray sub-class, optional Data-type descriptor of the returned view, e.g., float32 or int16. Omitting it results in the view having the same data-type as `a`. This argument can also be specified as an ndarray sub-class, which then specifies the type of the returned object (this is equivalent to setting the `type` parameter). **type** Python type, optional Type of the returned view, e.g., ndarray or matrix. Again, omission of the parameter results in type preservation. #### Notes `a.view()` is used two different ways: `a.view(some_dtype)` or `a.view(dtype=some_dtype)` constructs a view of the array’s memory with a different data-type. This can cause a reinterpretation of the bytes of memory. `a.view(ndarray_subclass)` or `a.view(type=ndarray_subclass)` just returns an instance of `ndarray_subclass` that looks at the same array (same shape, dtype, etc.) This does not cause a reinterpretation of the memory. 
For `a.view(some_dtype)`, if `some_dtype` has a different number of bytes per entry than the previous dtype (for example, converting a regular array to a structured array), then the last axis of `a` must be contiguous. This axis will be resized in the result. Changed in version 1.23.0: Only the last axis needs to be contiguous. Previously, the entire array had to be C-contiguous. #### Examples >>> import numpy as np >>> x = np.array([(-1, 2)], dtype=[('a', np.int8), ('b', np.int8)]) Viewing array data using a different type and dtype: >>> nonneg = np.dtype([("a", np.uint8), ("b", np.uint8)]) >>> y = x.view(dtype=nonneg, type=np.recarray) >>> x["a"] array([-1], dtype=int8) >>> y.a array([255], dtype=uint8) Creating a view on a structured array so it can be used in calculations >>> x = np.array([(1, 2),(3,4)], dtype=[('a', np.int8), ('b', np.int8)]) >>> xv = x.view(dtype=np.int8).reshape(-1,2) >>> xv array([[1, 2], [3, 4]], dtype=int8) >>> xv.mean(0) array([2., 3.]) Making changes to the view changes the underlying array >>> xv[0,1] = 20 >>> x array([(1, 20), (3, 4)], dtype=[('a', 'i1'), ('b', 'i1')]) Using a view to convert an array to a recarray: >>> z = x.view(np.recarray) >>> z.a array([1, 3], dtype=int8) Views share data: >>> x[0] = (9, 10) >>> z[0] np.record((9, 10), dtype=[('a', 'i1'), ('b', 'i1')]) Views that change the dtype size (bytes per entry) should normally be avoided on arrays defined by slices, transposes, fortran-ordering, etc.: >>> x = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.int16) >>> y = x[:, ::2] >>> y array([[1, 3], [4, 6]], dtype=int16) >>> y.view(dtype=[('width', np.int16), ('length', np.int16)]) Traceback (most recent call last): ... 
ValueError: To change to a dtype of a different size, the last axis must be contiguous >>> z = y.copy() >>> z.view(dtype=[('width', np.int16), ('length', np.int16)]) array([[(1, 3)], [(4, 6)]], dtype=[('width', '<i2'), ('length', '<i2')]) However, views that change dtype are totally fine for arrays with a contiguous last axis, even if the rest of the axes are not C-contiguous: >>> x = np.arange(2 * 3 * 4, dtype=np.int8).reshape(2, 3, 4) >>> x.transpose(1, 0, 2).view(np.int16) array([[[ 256, 770], [3340, 3854]], [[1284, 1798], [4368, 4882]], [[2312, 2826], [5396, 5910]]], dtype=int16) # numpy.char.chararray.zfill method char.chararray.zfill(_width_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L1176-L1186) Return the numeric string left-filled with zeros in a string of length `width`. See also [`char.zfill`](numpy.char.zfill#numpy.char.zfill "numpy.char.zfill") # numpy.char.compare_chararrays char.compare_chararrays(_a1_ , _a2_ , _cmp_ , _rstrip_) Performs element-wise comparison of two string arrays using the comparison operator specified by `cmp`. Parameters: **a1, a2** array_like Arrays to be compared. **cmp**{“<”, “<=”, “==”, “>=”, “>”, “!=”} Type of comparison. **rstrip** Boolean If True, the spaces at the end of strings are removed before the comparison. Returns: **out** ndarray The output array of type bool with the same shape as `a1` and `a2`. Raises: ValueError If `cmp` is not valid. TypeError If at least one of `a1` or `a2` is a non-string array #### Examples >>> import numpy as np >>> a = np.array(["a", "b", "cde"]) >>> b = np.array(["a", "a", "dec"]) >>> np.char.compare_chararrays(a, b, ">", True) array([False, True, False]) # numpy.char.count char.count(_a_ , _sub_ , _start =0_, _end =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L382-L424) Returns an array with the number of non-overlapping occurrences of substring `sub` in the range [`start`, `end`). Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **sub** array-like, with `StringDType`, `bytes_`, or `str_` dtype The substring to search for. 
**start, end** array_like, with any integer dtype The range to look in, interpreted as in slice notation. Returns: **y** ndarray Output array of ints See also [`str.count`](https://docs.python.org/3/library/stdtypes.html#str.count "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> c = np.array(['aAaAaA', ' aA ', 'abBABba']) >>> c array(['aAaAaA', ' aA ', 'abBABba'], dtype='<U7') >>> np.strings.count(c, 'A') array([3, 1, 1]) >>> np.strings.count(c, 'aA') array([3, 1, 0]) >>> np.strings.count(c, 'A', start=1, end=4) array([2, 1, 1]) >>> np.strings.count(c, 'A', start=1, end=3) array([1, 0, 0]) # numpy.char.decode char.decode(_a_ , _encoding =None_, _errors =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L509-L554) Calls [`bytes.decode`](https://docs.python.org/3/library/stdtypes.html#bytes.decode "\(in Python v3.13\)") element-wise. The set of available codecs comes from the Python standard library, and may be extended at runtime. For more information, see the [`codecs`](https://docs.python.org/3/library/codecs.html#module-codecs "\(in Python v3.13\)") module. Parameters: **a** array_like, with `bytes_` dtype **encoding** str, optional The name of an encoding **errors** str, optional Specifies how to handle encoding errors Returns: **out** ndarray See also [`bytes.decode`](https://docs.python.org/3/library/stdtypes.html#bytes.decode "\(in Python v3.13\)") #### Notes The type of the result will depend on the encoding specified. #### Examples >>> import numpy as np >>> c = np.array([b'\x81\xc1\x81\xc1\x81\xc1', b'@@\x81\xc1@@', ... 
b'\x81\x82\xc2\xc1\xc2\x82\x81']) >>> c array([b'\x81\xc1\x81\xc1\x81\xc1', b'@@\x81\xc1@@', b'\x81\x82\xc2\xc1\xc2\x82\x81'], dtype='|S7') >>> np.strings.decode(c, encoding='cp037') array(['aAaAaA', ' aA ', 'abBABba'], dtype='<U7') # numpy.char.encode char.encode(_a_ , _encoding =None_, _errors =None_) Calls [`str.encode`](https://docs.python.org/3/library/stdtypes.html#str.encode "\(in Python v3.13\)") element-wise. The set of available codecs comes from the Python standard library, and may be extended at runtime. For more information, see the [`codecs`](https://docs.python.org/3/library/codecs.html#module-codecs "\(in Python v3.13\)") module. Parameters: **a** array_like, with `StringDType` or `str_` dtype **encoding** str, optional The name of an encoding **errors** str, optional Specifies how to handle encoding errors Returns: **out** ndarray See also [`str.encode`](https://docs.python.org/3/library/stdtypes.html#str.encode "\(in Python v3.13\)") #### Notes The type of the result will depend on the encoding specified. #### Examples >>> import numpy as np >>> a = np.array(['aAaAaA', ' aA ', 'abBABba']) >>> np.strings.encode(a, encoding='cp037') array([b'\x81\xc1\x81\xc1\x81\xc1', b'@@\x81\xc1@@', b'\x81\x82\xc2\xc1\xc2\x82\x81'], dtype='|S7') # numpy.char.endswith char.endswith(_a_ , _suffix_ , _start =0_, _end =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L468-L506) Returns a boolean array which is `True` where the string element in `a` ends with `suffix`, otherwise `False`. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **suffix** array-like, with `StringDType`, `bytes_`, or `str_` dtype **start, end** array_like, with any integer dtype With `start`, test beginning at that position. With `end`, stop comparing at that position. Returns: **out** ndarray Output array of bools See also [`str.endswith`](https://docs.python.org/3/library/stdtypes.html#str.endswith "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> s = np.array(['foo', 'bar']) >>> s array(['foo', 'bar'], dtype='<U3') >>> np.strings.endswith(s, 'ar') array([False, True]) >>> np.strings.endswith(s, 'a', start=1, end=2) array([False, True]) # numpy.char.equal char.equal(_x1_ , _x2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L60-L91) Return (x1 == x2) element-wise. Unlike [`numpy.equal`](numpy.equal#numpy.equal "numpy.equal"), this comparison is performed by first stripping whitespace characters from the end of the string. This behavior is provided for backward-compatibility with numarray. Parameters: **x1, x2** array_like of str or unicode Input arrays of the same shape. Returns: **out** ndarray Output array of bools. 
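The trailing-whitespace stripping that `char.equal` performs can be seen directly; a quick sketch contrasting it with the plain `==` operator, which compares strings verbatim:

```python
import numpy as np

# char.equal strips trailing whitespace from both operands before
# comparing (numarray compatibility); == does not.
x = np.array(['aa ', 'ab'])
print(np.char.equal(x, 'aa'))  # [ True False]
print(x == 'aa')               # [False False]
```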
See also [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`less`](numpy.less#numpy.less "numpy.less") #### Examples >>> import numpy as np >>> y = "aa " >>> x = "aa" >>> np.char.equal(x, y) array(True) # numpy.char.expandtabs char.expandtabs(_a_ , _tabsize =8_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L602-L651) Return a copy of each string element where all tab characters are replaced by one or more spaces, depending on the current column and the given `tabsize`. Calls [`str.expandtabs`](https://docs.python.org/3/library/stdtypes.html#str.expandtabs "\(in Python v3.13\)") element-wise. The column number is reset to zero after each newline occurring in the string. This doesn’t understand other non-printing characters or escape sequences. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype Input array **tabsize** int, optional Replace tabs with `tabsize` number of spaces. If not given defaults to 8 spaces. Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input type See also [`str.expandtabs`](https://docs.python.org/3/library/stdtypes.html#str.expandtabs "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> a = np.array(['\t\tHello\tworld']) >>> np.strings.expandtabs(a, tabsize=4) array(['        Hello   world'], dtype='<U21') # numpy.char.find char.find(_a_ , _sub_ , _start =0_, _end =None_) For each element, return the lowest index in the string where substring `sub` is found, such that `sub` is contained in the range [`start`, `end`). Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **sub** array-like, with `StringDType`, `bytes_`, or `str_` dtype The substring to search for. **start, end** array_like, with any integer dtype The range to look in, interpreted as in slice notation. Returns: **y** ndarray Output array of ints. Returns -1 if `sub` is not found. See also [`str.find`](https://docs.python.org/3/library/stdtypes.html#str.find "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> a = np.array(["NumPy is a Python library"]) >>> np.strings.find(a, "Python") array([11]) # numpy.char.greater char.greater(_x1_ , _x2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L197-L228) Return (x1 > x2) element-wise. 
Unlike [`numpy.greater`](numpy.greater#numpy.greater "numpy.greater"), this comparison is performed by first stripping whitespace characters from the end of the string. This behavior is provided for backward-compatibility with numarray. Parameters: **x1, x2** array_like of str or unicode Input arrays of the same shape. Returns: **out** ndarray Output array of bools. See also [`equal`](numpy.equal#numpy.equal "numpy.equal"), [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`less`](numpy.less#numpy.less "numpy.less") #### Examples >>> import numpy as np >>> x1 = np.array(['a', 'b', 'c']) >>> np.char.greater(x1, 'b') array([False, False, True]) # numpy.char.greater_equal char.greater_equal(_x1_ , _x2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L128-L160) Return (x1 >= x2) element-wise. Unlike [`numpy.greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), this comparison is performed by first stripping whitespace characters from the end of the string. This behavior is provided for backward-compatibility with numarray. Parameters: **x1, x2** array_like of str or unicode Input arrays of the same shape. Returns: **out** ndarray Output array of bools. 
See also [`equal`](numpy.equal#numpy.equal "numpy.equal"), [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`less`](numpy.less#numpy.less "numpy.less") #### Examples >>> import numpy as np >>> x1 = np.array(['a', 'b', 'c']) >>> np.char.greater_equal(x1, 'b') array([False, True, True]) # numpy.char.index char.index(_a_ , _sub_ , _start =0_, _end =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L314-L345) Like [`find`](numpy.char.find#numpy.char.find "numpy.char.find"), but raises [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError "\(in Python v3.13\)") when the substring is not found. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **sub** array-like, with `StringDType`, `bytes_`, or `str_` dtype **start, end** array_like, with any integer dtype, optional Returns: **out** ndarray Output array of ints. See also [`find`](numpy.char.find#numpy.char.find "numpy.char.find"), [`str.index`](https://docs.python.org/3/library/stdtypes.html#str.index "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> a = np.array(["Computer Science"]) >>> np.strings.index(a, "Science", start=0, end=None) array([9]) # numpy.char.isalnum char.isalnum(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'isalnum'>_ Returns true for each element if all characters in the string are alphanumeric and there is at least one character, false otherwise. Parameters: **x** array_like, with `StringDType`, `bytes_` or `str_` dtype **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. 
A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** ndarray Output array of bool This is a scalar if `x` is a scalar. See also [`str.isalnum`](https://docs.python.org/3/library/stdtypes.html#str.isalnum "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> a = np.array(['a', '1', 'a1', '(', '']) >>> np.strings.isalnum(a) array([ True, True, True, False, False]) # numpy.char.isalpha char.isalpha(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'isalpha'>_ Returns true for each element if all characters in the data interpreted as a string are alphabetic and there is at least one character, false otherwise. For byte strings (i.e. `bytes`), alphabetic characters are those byte values in the sequence b'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'. For Unicode strings, alphabetic characters are those characters defined in the Unicode character database as “Letter”. Parameters: **x** array_like, with `StringDType`, `bytes_`, or `str_` dtype **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. 
At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray Output array of bools This is a scalar if `x` is a scalar. See also [`str.isalpha`](https://docs.python.org/3/library/stdtypes.html#str.isalpha "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> a = np.array(['a', 'b', '0']) >>> np.strings.isalpha(a) array([ True, True, False]) >>> a = np.array([['a', 'b', '0'], ['c', '1', '2']]) >>> np.strings.isalpha(a) array([[ True, True, False], [ True, False, False]]) # numpy.char.isdecimal char.isdecimal(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'isdecimal'>_ For each element, return True if there are only decimal characters in the element. Decimal characters include digit characters, and all characters that can be used to form decimal-radix numbers, e.g. `U+0660, ARABIC-INDIC DIGIT ZERO`. Parameters: **x** array_like, with `StringDType` or `str_` dtype **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. 
Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray Output array of bools This is a scalar if `x` is a scalar. See also [`str.isdecimal`](https://docs.python.org/3/library/stdtypes.html#str.isdecimal "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> np.strings.isdecimal(['12345', '4.99', '123ABC', '']) array([ True, False, False, False]) # numpy.char.isdigit char.isdigit(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'isdigit'>_ Returns true for each element if all characters in the string are digits and there is at least one character, false otherwise. For byte strings, digits are the byte values in the sequence b'0123456789'. For Unicode strings, digits include decimal characters and digits that need special handling, such as the compatibility superscript digits. This also covers digits which cannot be used to form numbers in base 10, like the Kharosthi numbers. Parameters: **x** array_like, with `StringDType`, `bytes_`, or `str_` dtype **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. 
****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray Output array of bools This is a scalar if `x` is a scalar. See also [`str.isdigit`](https://docs.python.org/3/library/stdtypes.html#str.isdigit "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> a = np.array(['a', 'b', '0']) >>> np.strings.isdigit(a) array([False, False, True]) >>> a = np.array([['a', 'b', '0'], ['c', '1', '2']]) >>> np.strings.isdigit(a) array([[False, False, True], [False, True, True]]) # numpy.char.islower char.islower(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'islower'>_ Returns true for each element if all cased characters in the string are lowercase and there is at least one cased character, false otherwise. Parameters: **x** array_like, with `StringDType`, `bytes_` or `str_` dtype **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** ndarray Output array of bools This is a scalar if `x` is a scalar. 
See also [`str.islower`](https://docs.python.org/3/library/stdtypes.html#str.islower "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> np.strings.islower("GHC") array(False) >>> np.strings.islower("ghc") array(True) # numpy.char.isnumeric char.isnumeric(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'isnumeric'>_ For each element, return True if there are only numeric characters in the element. Numeric characters include digit characters, and all characters that have the Unicode numeric value property, e.g. `U+2155, VULGAR FRACTION ONE FIFTH`. Parameters: **x** array_like, with `StringDType` or `str_` dtype **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray Output array of bools This is a scalar if `x` is a scalar. 
See also [`str.isnumeric`](https://docs.python.org/3/library/stdtypes.html#str.isnumeric "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> np.strings.isnumeric(['123', '123abc', '9.0', '1/4', 'VIII']) array([ True, False, False, False, False]) # numpy.char.isspace char.isspace(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'isspace'>_ Returns true for each element if there are only whitespace characters in the string and there is at least one character, false otherwise. For byte strings, whitespace characters are the ones in the sequence b' \t\n\r\x0b\f'. For Unicode strings, a character is whitespace, if, in the Unicode character database, its general category is Zs (“Separator, space”), or its bidirectional class is one of WS, B, or S. Parameters: **x** array_like, with `StringDType`, `bytes_`, or `str_` dtype **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray Output array of bools This is a scalar if `x` is a scalar. 
See also [`str.isspace`](https://docs.python.org/3/library/stdtypes.html#str.isspace "\(in Python v3.13\)") #### Examples >>> np.char.isspace(list("a b c")) array([False, True, False, True, False]) >>> np.char.isspace(b'\x0a \x0b \x0c') np.True_ >>> np.char.isspace(b'\x0a \x0b \x0c N') np.False_ # numpy.char.istitle char.istitle(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'istitle'>_ Returns true for each element if the element is a titlecased string and there is at least one character, false otherwise. Parameters: **x** array_like, with `StringDType`, `bytes_` or `str_` dtype **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** ndarray Output array of bools This is a scalar if `x` is a scalar. 
See also [`str.istitle`](https://docs.python.org/3/library/stdtypes.html#str.istitle "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> np.strings.istitle("Numpy Is Great") array(True) >>> np.strings.istitle("Numpy is great") array(False) # numpy.char.isupper char.isupper(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'isupper'>_ Return true for each element if all cased characters in the string are uppercase and there is at least one character, false otherwise. Parameters: **x** array_like, with `StringDType`, `bytes_` or `str_` dtype **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** ndarray Output array of bools This is a scalar if `x` is a scalar. 
See also [`str.isupper`](https://docs.python.org/3/library/stdtypes.html#str.isupper "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> np.strings.isupper("GHC") array(True) >>> a = np.array(["hello", "HELLO", "Hello"]) >>> np.strings.isupper(a) array([False, True, False]) # numpy.char.join char.join(_sep_ , _seq_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L1295-L1328) Return a string which is the concatenation of the strings in the sequence `seq`. Calls [`str.join`](https://docs.python.org/3/library/stdtypes.html#str.join "\(in Python v3.13\)") element-wise. Parameters: **sep** array-like, with `StringDType`, `bytes_`, or `str_` dtype **seq** array-like, with `StringDType`, `bytes_`, or `str_` dtype Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types See also [`str.join`](https://docs.python.org/3/library/stdtypes.html#str.join "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> np.strings.join('-', 'osd') array('o-s-d', dtype='<U5') >>> np.strings.join(['-', '.'], ['ghc', 'osd']) array(['g-h-c', 'o.s.d'], dtype='<U5') # numpy.char.less char.less(_x1_ , _x2_) Return (x1 < x2) element-wise. Unlike [`numpy.less`](numpy.less#numpy.less "numpy.less"), this comparison is performed by first stripping whitespace characters from the end of the string. This behavior is provided for backward-compatibility with numarray. Parameters: **x1, x2** array_like of str or unicode Input arrays of the same shape. Returns: **out** ndarray Output array of bools. See also [`equal`](numpy.equal#numpy.equal "numpy.equal"), [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`greater`](numpy.greater#numpy.greater "numpy.greater") #### Examples >>> import numpy as np >>> x1 = np.array(['a', 'b', 'c']) >>> np.char.less(x1, 'b') array([True, False, False]) # numpy.char.less_equal char.less_equal(_x1_ , _x2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L163-L194) Return (x1 <= x2) element-wise. Unlike [`numpy.less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), this comparison is performed by first stripping whitespace characters from the end of the string. This behavior is provided for backward-compatibility with numarray. Parameters: **x1, x2** array_like of str or unicode Input arrays of the same shape. Returns: **out** ndarray Output array of bools. 
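The stripping behaviour of `char.less_equal` makes a visible difference for strings with trailing whitespace; a quick sketch contrasting it with the plain `<=` operator:

```python
import numpy as np

# char.less_equal compares after stripping trailing whitespace, so
# 'b ' is treated as 'b'. The <= operator compares verbatim, where
# 'b ' sorts after 'b' lexicographically.
x = np.array(['b ', 'b'])
print(np.char.less_equal(x, 'b'))  # [ True  True]
print(x <= 'b')                    # [False  True]
```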
See also [`equal`](numpy.equal#numpy.equal "numpy.equal"), [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`less`](numpy.less#numpy.less "numpy.less") #### Examples >>> import numpy as np >>> x1 = np.array(['a', 'b', 'c']) >>> np.char.less_equal(x1, 'b') array([ True, True, False]) # numpy.char.ljust char.ljust(_a_ , _width_ , _fillchar =' '_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L722-L783) Return an array with the elements of `a` left-justified in a string of length `width`. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **width** array_like, with any integer dtype The length of the resulting strings, unless `width < str_len(a)`. **fillchar** array-like, with `StringDType`, `bytes_`, or `str_` dtype Optional character to use for padding (default is space). Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types See also [`str.ljust`](https://docs.python.org/3/library/stdtypes.html#str.ljust "\(in Python v3.13\)") #### Notes While it is possible for `a` and `fillchar` to have different dtypes, passing a non-ASCII character in `fillchar` when `a` is of dtype “S” is not allowed, and a `ValueError` is raised. 
#### Examples >>> import numpy as np >>> c = np.array(['aAaAaA', ' aA ', 'abBABba']) >>> np.strings.ljust(c, width=3) array(['aAaAaA', ' aA ', 'abBABba'], dtype='<U7') >>> np.strings.ljust(c, width=9) array(['aAaAaA   ', ' aA      ', 'abBABba  '], dtype='<U9') # numpy.char.lower char.lower(_a_) Return an array with the elements converted to lowercase. Call [`str.lower`](https://docs.python.org/3/library/stdtypes.html#str.lower "\(in Python v3.13\)") element-wise. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype Input array. Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types See also [`str.lower`](https://docs.python.org/3/library/stdtypes.html#str.lower "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> c = np.array(['A1B C', '1BCA', 'BCA1']); c array(['A1B C', '1BCA', 'BCA1'], dtype='<U5') >>> np.strings.lower(c) array(['a1b c', '1bca', 'bca1'], dtype='<U5') # numpy.char.lstrip char.lstrip(_a_ , _chars =None_) For each element in `a`, return a copy with the leading characters removed. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **chars** scalar with the same dtype as `a`, optional The `chars` argument is a string specifying the set of characters to be removed. If `None`, the `chars` argument defaults to removing whitespace. The `chars` argument is not a prefix; rather, all combinations of its values are stripped. Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types See also [`str.lstrip`](https://docs.python.org/3/library/stdtypes.html#str.lstrip "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> c = np.array(['aAaAaA', ' aA ', 'abBABba']) >>> c array(['aAaAaA', ' aA ', 'abBABba'], dtype='<U7') >>> np.strings.lstrip(c, 'a') array(['AaAaA', ' aA ', 'bBABba'], dtype='<U7') >>> np.strings.lstrip(c, 'A') # leaves c unchanged array(['aAaAaA', ' aA ', 'abBABba'], dtype='<U7') >>> (np.strings.lstrip(c, ' ') == np.strings.lstrip(c, '')).all() np.False_ >>> (np.strings.lstrip(c, ' ') == np.strings.lstrip(c)).all() np.True_ # numpy.char.mod char.mod(_a_ , _values_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L196-L230) Return (a % i), that is pre-Python 2.6 string formatting (interpolation), element-wise for a pair of array_likes of str or unicode. Parameters: **a** array_like, with `np.bytes_` or `np.str_` dtype **values** array_like of values These values will be element-wise interpolated into the string. Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types #### Examples >>> import numpy as np >>> a = np.array(["NumPy is a %s library"]) >>> np.strings.mod(a, values=["Python"]) array(['NumPy is a Python library'], dtype='<U25') >>> a = np.array([b'%d bytes', b'%d bits']) >>> values = np.array([8, 64]) >>> np.strings.mod(a, values) array([b'8 bytes', b'64 bits'], dtype='|S7') # numpy.char.multiply char.multiply(_a_ , _i_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L265-L314) Return (a * i), that is string multiple concatenation, element-wise. Values in `i` of less than 0 are treated as 0 (which yields an empty string). 
Parameters: **a** array_like, with `np.bytes_` or `np.str_` dtype **i** array_like, with any integer dtype

Returns: **out** ndarray Output array of str or unicode, depending on input types

#### Notes

This is a thin wrapper around np.strings.multiply that raises `ValueError` when `i` is not an integer. It exists only for backwards-compatibility.

#### Examples

    >>> import numpy as np
    >>> a = np.array(["a", "b", "c"])
    >>> np.strings.multiply(a, 3)
    array(['aaa', 'bbb', 'ccc'], dtype='<U3')
    >>> i = np.array([1, 2, 3])
    >>> np.strings.multiply(a, i)
    array(['a', 'bb', 'ccc'], dtype='<U3')
    >>> np.strings.multiply(np.array(['a']), i)
    array(['a', 'aa', 'aaa'], dtype='<U3')
    >>> a = np.array(['a', 'b', 'c', 'd', 'e', 'f']).reshape((2, 3))
    >>> np.strings.multiply(a, 3)
    array([['aaa', 'bbb', 'ccc'],
           ['ddd', 'eee', 'fff']], dtype='<U3')
    >>> np.strings.multiply(a, i)
    array([['a', 'bb', 'ccc'],
           ['d', 'ee', 'fff']], dtype='<U3')

# numpy.char.not_equal

char.not_equal(_x1_, _x2_)

Return (x1 != x2) element-wise. Unlike [`numpy.not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal"), this comparison is performed by first stripping whitespace characters from the end of the string. This behavior is provided for backward-compatibility with numarray.

Parameters: **x1, x2** array_like of str or unicode Input arrays of the same shape.

Returns: **out** ndarray Output array of bools.

See also [`equal`](numpy.equal#numpy.equal "numpy.equal"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`less`](numpy.less#numpy.less "numpy.less")

#### Examples

    >>> import numpy as np
    >>> x1 = np.array(['a', 'b', 'c'])
    >>> np.char.not_equal(x1, 'b')
    array([ True, False, True])

# numpy.char.partition

char.partition(_a_, _sep_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/defchararray.py#L317-L356)

Partition each element in `a` around `sep`.

Calls [`str.partition`](https://docs.python.org/3/library/stdtypes.html#str.partition "\(in Python v3.13\)") element-wise.

For each element in `a`, split the element at the first occurrence of `sep`, and return 3 strings containing the part before the separator, the separator itself, and the part after the separator. If the separator is not found, return 3 strings containing the string itself, followed by two empty strings.

Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype Input array **sep**{str, unicode} Separator to split each string element in `a`.

Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types. The output array will have an extra dimension with 3 elements per input element.
See also [`str.partition`](https://docs.python.org/3/library/stdtypes.html#str.partition "\(in Python v3.13\)")

#### Examples

    >>> import numpy as np
    >>> x = np.array(["Numpy is nice!"])
    >>> np.char.partition(x, " ")
    array([['Numpy', ' ', 'is nice!']], dtype='<U8')

# numpy.char.replace

char.replace(_a_, _old_, _new_, _count=-1_)

For each element in `a`, return a copy of the string with all occurrences of substring `old` replaced by `new`.

Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **old, new** array-like, with `StringDType`, `bytes_`, or `str_` dtype **count** array-like, with any integer dtype If the optional argument `count` is given, only the first `count` occurrences are replaced.

Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types

See also [`str.replace`](https://docs.python.org/3/library/stdtypes.html#str.replace "\(in Python v3.13\)")

#### Examples

    >>> import numpy as np
    >>> a = np.array(["That is a mango", "Monkeys eat mangos"])
    >>> np.strings.replace(a, 'mango', 'banana')
    array(['That is a banana', 'Monkeys eat bananas'], dtype='<U19')

    >>> a = np.array(["The dish is fresh", "This is it"])
    >>> np.strings.replace(a, 'is', 'was')
    array(['The dwash was fresh', 'Thwas was it'], dtype='<U19')

# numpy.char.rfind

char.rfind(_a_, _sub_, _start=0_, _end=None_)

For each element in `a`, return the highest index in the string where substring `sub` is found, such that `sub` is contained within [`start`, `end`]. Returns -1 when `sub` is not found.

Parameters: **a** array-like, with `np.bytes_` or `np.str_` dtype **sub** array-like, with `np.bytes_` or `np.str_` dtype The substring to search for. **start, end** array-like, with any integer dtype, optional

Returns: **y** ndarray Output array of ints.

#### Examples

    >>> import numpy as np
    >>> a = np.array(["Computer Science"])
    >>> np.strings.rfind(a, "Science", start=0, end=None)
    array([9])
    >>> np.strings.rfind(a, "Science", start=0, end=8)
    array([-1])
    >>> b = np.array(["Computer Science", "Science"])
    >>> np.strings.rfind(b, "Science", start=0, end=None)
    array([9, 0])

# numpy.char.rindex

char.rindex(_a_, _sub_, _start=0_, _end=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L348-L379)

Like [`rfind`](numpy.char.rfind#numpy.char.rfind "numpy.char.rfind"), but raises [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError "\(in Python v3.13\)") when the substring `sub` is not found.

Parameters: **a** array-like, with `np.bytes_` or `np.str_` dtype **sub** array-like, with `np.bytes_` or `np.str_` dtype **start, end** array-like, with any integer dtype, optional

Returns: **out** ndarray Output array of ints.
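To make the rfind/rindex contrast concrete, a small sketch (the array contents are illustrative):

```python
import numpy as np

a = np.array(["Computer Science"])

# rfind signals a missing substring with -1...
print(np.char.rfind(a, "Art"))        # [-1]

# ...while rindex raises ValueError instead.
try:
    np.char.rindex(a, "Art")
except ValueError:
    print("substring not found")
```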
See also [`rfind`](numpy.char.rfind#numpy.char.rfind "numpy.char.rfind"), [`str.rindex`](https://docs.python.org/3/library/stdtypes.html#str.rindex "\(in Python v3.13\)") #### Examples >>> a = np.array(["Computer Science"]) >>> np.strings.rindex(a, "Science", start=0, end=None) array([9]) # numpy.char.rjust char.rjust(_a_ , _width_ , _fillchar =' '_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L786-L847) Return an array with the elements of `a` right-justified in a string of length `width`. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **width** array_like, with any integer dtype The length of the resulting strings, unless `width < str_len(a)`. **fillchar** array-like, with `StringDType`, `bytes_`, or `str_` dtype Optional padding character to use (default is space). Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types See also [`str.rjust`](https://docs.python.org/3/library/stdtypes.html#str.rjust "\(in Python v3.13\)") #### Notes While it is possible for `a` and `fillchar` to have different dtypes, passing a non-ASCII character in `fillchar` when `a` is of dtype “S” is not allowed, and a `ValueError` is raised. 
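Right-justifying with `fillchar='0'` is not the same as `zfill` (documented later on this page) for signed numeric strings; a quick sketch of the difference, with illustrative values:

```python
import numpy as np

nums = np.array(["7", "-7"])

# rjust pads strictly on the left, before any sign character...
print(np.char.rjust(nums, 4, fillchar="0"))   # ['0007' '00-7']

# ...whereas zfill inserts the zeros after the sign.
print(np.char.zfill(nums, 4))                 # ['0007' '-007']
```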
#### Examples

    >>> import numpy as np
    >>> a = np.array(['aAaAaA', '  aA  ', 'abBABba'])
    >>> np.strings.rjust(a, width=3)
    array(['aAaAaA', '  aA  ', 'abBABba'], dtype='<U7')
    >>> np.strings.rjust(a, width=9)
    array(['   aAaAaA', '     aA  ', '  abBABba'], dtype='<U9')

# numpy.char.rpartition

char.rpartition(_a_, _sep_)

Partition (split) each element around the right-most separator.

For each element in `a`, split the element at the last occurrence of `sep`, and return 3 strings containing the part before the separator, the separator itself, and the part after the separator. If the separator is not found, return 3 strings containing two empty strings, followed by the string itself.

Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype Input array **sep** array-like, with `StringDType`, `bytes_`, or `str_` dtype Separator to split each string element in `a`.

Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types. The output array will have an extra dimension with 3 elements per input element.

See also [`str.rpartition`](https://docs.python.org/3/library/stdtypes.html#str.rpartition "\(in Python v3.13\)")

#### Examples

    >>> import numpy as np
    >>> a = np.array(['aAaAaA', '  aA  ', 'abBABba'])
    >>> np.char.rpartition(a, 'A')
    array([['aAaAa', 'A', ''],
           ['  a', 'A', '  '],
           ['abB', 'A', 'Bba']], dtype='<U5')

# numpy.char.rsplit

char.rsplit(_a_, _sep=None_, _maxsplit=None_)

For each element in `a`, return a list of the words in the string, using `sep` as the delimiter string. Except for splitting from the right, `rsplit` behaves like `split`.

Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **sep** str or unicode, optional If `sep` is not specified or None, any whitespace string is a separator. **maxsplit** int, optional If `maxsplit` is given, at most `maxsplit` splits are done, the rightmost ones.

Returns: **out** ndarray Array of list objects

See also [`str.rsplit`](https://docs.python.org/3/library/stdtypes.html#str.rsplit "\(in Python v3.13\)")

#### Examples

    >>> import numpy as np
    >>> a = np.array(['aAaAaA', 'abBABba'])
    >>> np.strings.rsplit(a, 'A')
    array([list(['a', 'a', 'a', '']), list(['abB', 'Bba'])], dtype=object)

# numpy.char.rstrip

char.rstrip(_a_, _chars=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L945-L985)

For each element in `a`, return a copy with the trailing characters removed.

Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **chars** scalar with the same dtype as `a`, optional The `chars` argument is a string specifying the set of characters to be removed. If `None`, the `chars` argument defaults to removing whitespace. The `chars` argument is not a prefix or suffix; rather, all combinations of its values are stripped.
Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types

See also [`str.rstrip`](https://docs.python.org/3/library/stdtypes.html#str.rstrip "\(in Python v3.13\)")

#### Examples

    >>> import numpy as np
    >>> c = np.array(['aAaAaA', 'abBABba'])
    >>> c
    array(['aAaAaA', 'abBABba'], dtype='<U7')
    >>> np.strings.rstrip(c, 'a')
    array(['aAaAaA', 'abBABb'], dtype='<U7')
    >>> np.strings.rstrip(c, 'A')
    array(['aAaAa', 'abBABba'], dtype='<U7')

# numpy.char.split

char.split(_a_, _sep=None_, _maxsplit=None_)

For each element in `a`, return a list of the words in the string, using `sep` as the delimiter string.

Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **sep** str or unicode, optional If `sep` is not specified or None, any whitespace string is a separator. **maxsplit** int, optional If `maxsplit` is given, at most `maxsplit` splits are done.

Returns: **out** ndarray Array of list objects

See also [`str.split`](https://docs.python.org/3/library/stdtypes.html#str.split "\(in Python v3.13\)"), [`rsplit`](numpy.char.rsplit#numpy.char.rsplit "numpy.char.rsplit")

#### Examples

    >>> import numpy as np
    >>> x = np.array("Numpy is nice!")
    >>> np.strings.split(x, " ")
    array(list(['Numpy', 'is', 'nice!']), dtype=object)

    >>> np.strings.split(x, " ", 1)
    array(list(['Numpy', 'is nice!']), dtype=object)

# numpy.char.splitlines

char.splitlines(_a_, _keepends=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L1420-L1454)

For each element in `a`, return a list of the lines in the element, breaking at line boundaries.

Calls [`str.splitlines`](https://docs.python.org/3/library/stdtypes.html#str.splitlines "\(in Python v3.13\)") element-wise.

Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **keepends** bool, optional Line breaks are not included in the resulting list unless `keepends` is given and true.

Returns: **out** ndarray Array of list objects

See also [`str.splitlines`](https://docs.python.org/3/library/stdtypes.html#str.splitlines "\(in Python v3.13\)")

#### Examples

    >>> np.char.splitlines("first line\nsecond line")
    array(list(['first line', 'second line']), dtype=object)

    >>> a = np.array(["first\nsecond", "third\nfourth"])
    >>> np.char.splitlines(a)
    array([list(['first', 'second']), list(['third', 'fourth'])], dtype=object)

# numpy.char.startswith

char.startswith(_a_, _prefix_, _start=0_, _end=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L427-L465)

Returns a boolean array which is `True` where the string element in `a` starts with `prefix`, otherwise `False`.
Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **prefix** array-like, with `StringDType`, `bytes_`, or `str_` dtype **start, end** array_like, with any integer dtype With `start`, test beginning at that position. With `end`, stop comparing at that position.

Returns: **out** ndarray Output array of bools

See also [`str.startswith`](https://docs.python.org/3/library/stdtypes.html#str.startswith "\(in Python v3.13\)")

#### Examples

    >>> import numpy as np
    >>> s = np.array(['foo', 'bar'])
    >>> s
    array(['foo', 'bar'], dtype='<U3')
    >>> np.strings.startswith(s, 'fo')
    array([ True, False])
    >>> np.strings.startswith(s, 'o', start=1, end=2)
    array([ True, False])

# numpy.char.str_len

char.str_len(_x_, _/_, _out=None_, _*_, _where=True_, _casting='same_kind'_, _order='K'_, _dtype=None_, _subok=True_[, _signature_]) _= <ufunc 'str_len'>_

Returns the length of each element. For byte strings, this is the number of bytes, while for Unicode strings it is the number of Unicode code points.

Parameters: **x** array_like, with `StringDType`, `bytes_`, or `str_` dtype **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns: **y** ndarray Output array of ints This is a scalar if `x` is a scalar.
See also [`len`](https://docs.python.org/3/library/functions.html#len "\(in Python v3.13\)")

#### Examples

    >>> import numpy as np
    >>> a = np.array(['Grace Hopper Conference', 'Open Source Day'])
    >>> np.strings.str_len(a)
    array([23, 15])
    >>> a = np.array(['Р', 'о'])
    >>> np.strings.str_len(a)
    array([1, 1])
    >>> a = np.array([['hello', 'world'], ['Р', 'о']])
    >>> np.strings.str_len(a)
    array([[5, 5],
           [1, 1]])

# numpy.char.strip

char.strip(_a_, _chars=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L988-L1032)

For each element in `a`, return a copy with the leading and trailing characters removed.

Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **chars** scalar with the same dtype as `a`, optional The `chars` argument is a string specifying the set of characters to be removed. If `None`, the `chars` argument defaults to removing whitespace. The `chars` argument is not a prefix or suffix; rather, all combinations of its values are stripped.

Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types

See also [`str.strip`](https://docs.python.org/3/library/stdtypes.html#str.strip "\(in Python v3.13\)")

#### Examples

    >>> import numpy as np
    >>> c = np.array(['aAaAaA', '  aA  ', 'abBABba'])
    >>> c
    array(['aAaAaA', '  aA  ', 'abBABba'], dtype='<U7')
    >>> np.strings.strip(c)
    array(['aAaAaA', 'aA', 'abBABba'], dtype='<U7')
    >>> np.strings.strip(c, 'a')
    array(['AaAaA', '  aA  ', 'bBABb'], dtype='<U7')
    >>> np.strings.strip(c, 'A')
    array(['aAaAa', '  aA  ', 'abBABba'], dtype='<U7')

# numpy.char.swapcase

char.swapcase(_a_)

Return element-wise a copy of the string with uppercase characters converted to lowercase and vice versa.

Calls [`str.swapcase`](https://docs.python.org/3/library/stdtypes.html#str.swapcase "\(in Python v3.13\)") element-wise. For 8-bit strings, this method is locale-dependent.

Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype Input array.

Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types

See also [`str.swapcase`](https://docs.python.org/3/library/stdtypes.html#str.swapcase "\(in Python v3.13\)")

#### Examples

    >>> import numpy as np
    >>> c = np.array(['a1B c', '1b Ca', 'b Ca1', 'cA1b'], 'S5'); c
    array(['a1B c', '1b Ca', 'b Ca1', 'cA1b'], dtype='|S5')
    >>> np.strings.swapcase(c)
    array(['A1b C', '1B cA', 'B cA1', 'Ca1B'], dtype='|S5')

# numpy.char.title

char.title(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L1189-L1228)

Return element-wise title cased version of string or unicode.
Title case words start with uppercase characters, all remaining cased characters are lowercase. Calls [`str.title`](https://docs.python.org/3/library/stdtypes.html#str.title "\(in Python v3.13\)") element-wise. For 8-bit strings, this method is locale-dependent. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype Input array. Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types See also [`str.title`](https://docs.python.org/3/library/stdtypes.html#str.title "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> c=np.array(['a1b c','1b ca','b ca1','ca1b'],'S5'); c array(['a1b c', '1b ca', 'b ca1', 'ca1b'], dtype='|S5') >>> np.strings.title(c) array(['A1B C', '1B Ca', 'B Ca1', 'Ca1B'], dtype='|S5') # numpy.char.translate char.translate(_a_ , _table_ , _deletechars =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L1594-L1641) For each element in `a`, return a copy of the string where all characters occurring in the optional argument `deletechars` are removed, and the remaining characters have been mapped through the given translation table. Calls [`str.translate`](https://docs.python.org/3/library/stdtypes.html#str.translate "\(in Python v3.13\)") element-wise. 
Parameters: **a** array-like, with `np.bytes_` or `np.str_` dtype **table** str of length 256 **deletechars** str

Returns: **out** ndarray Output array of str or unicode, depending on input type

See also [`str.translate`](https://docs.python.org/3/library/stdtypes.html#str.translate "\(in Python v3.13\)")

#### Examples

    >>> import numpy as np
    >>> a = np.array(['a1b c', '1bca', 'bca1'])
    >>> table = a[0].maketrans('abc', '123')
    >>> deletechars = ' '
    >>> np.char.translate(a, table, deletechars)
    array(['112 3', '1231', '2311'], dtype='<U5')

# numpy.char.upper

char.upper(_a_)

Return an array with the elements converted to uppercase. Calls [`str.upper`](https://docs.python.org/3/library/stdtypes.html#str.upper "\(in Python v3.13\)") element-wise.

Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype Input array.

Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types

See also [`str.upper`](https://docs.python.org/3/library/stdtypes.html#str.upper "\(in Python v3.13\)")

#### Examples

    >>> import numpy as np
    >>> c = np.array(['a1b c', '1bca', 'bca1']); c
    array(['a1b c', '1bca', 'bca1'], dtype='<U5')
    >>> np.strings.upper(c)
    array(['A1B C', '1BCA', 'BCA1'], dtype='<U5')

# numpy.char.zfill

char.zfill(_a_, _width_)

Return the numeric string left-filled with zeros. A leading sign prefix (`+`/`-`) is handled by inserting the padding after the sign character.

Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype Input array. **width** array_like, with any integer dtype Width of string to left-fill elements in `a`.

Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types

See also [`str.zfill`](https://docs.python.org/3/library/stdtypes.html#str.zfill "\(in Python v3.13\)")

#### Examples

    >>> import numpy as np
    >>> np.strings.zfill(['1', '-1', '+1'], 3)
    array(['001', '-01', '+01'], dtype='<U3')

# numpy.choose

numpy.choose(_a_, _choices_, _out=None_, _mode='raise'_)

Construct an array from an index array and a list of arrays to choose from.

Given an "index" array `a` of integers and a sequence of `n` choice arrays, `a` and each choice array are first broadcast, as necessary, to arrays of a common shape; the output is then constructed by picking, at each position, the element of the choice array selected by the value of `a` at that position.

Parameters: **a** int array This array must contain integers in `[0, n-1]`, where `n` is the number of choices, unless `mode=wrap` or `mode=clip`, in which cases any integers are permissible. **choices** sequence of arrays Choice arrays. `a` and all of the choices must be broadcastable to the same shape. **out** array, optional If provided, the result will be inserted into this array. It should be of the appropriate shape and dtype. **mode**{'raise' (default), 'wrap', 'clip'}, optional Specifies how indices outside `[0, n-1]` will be treated: 'raise' : an exception is raised; 'wrap' : value becomes value mod `n`; 'clip' : values < 0 are mapped to 0, values > n-1 are mapped to n-1

Returns: **merged_array** array The merged result.

Raises: ValueError: shape mismatch If `a` and each choice array are not all broadcastable to the same shape.

See also [`ndarray.choose`](numpy.ndarray.choose#numpy.ndarray.choose "numpy.ndarray.choose") equivalent method [`numpy.take_along_axis`](numpy.take_along_axis#numpy.take_along_axis "numpy.take_along_axis") Preferable if `choices` is an array

#### Notes

To reduce the chance of misinterpretation, even though the following "abuse" is nominally supported, `choices` should neither be, nor be thought of as, a single array, i.e., the outermost sequence-like container should be either a list or a tuple.

#### Examples

    >>> import numpy as np
    >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13],
    ...   [20, 21, 22, 23], [30, 31, 32, 33]]
    >>> np.choose([2, 3, 1, 0], choices
    ... # the first element of the result will be the first element of the
    ... # third (2+1) "array" in choices, namely, 20; the second element
    ... # will be the second element of the fourth (3+1) choice array, i.e.,
    ... # 31, etc.
    ...
    )
    array([20, 31, 12,  3])
    >>> np.choose([2, 4, 1, 0], choices, mode='clip')  # 4 goes to 3 (4-1)
    array([20, 31, 12,  3])
    >>> # because there are 4 choice arrays
    >>> np.choose([2, 4, 1, 0], choices, mode='wrap')  # 4 goes to (4 mod 4)
    array([20,  1, 12,  3])
    >>> # i.e., 0

A couple examples illustrating how choose broadcasts:

    >>> a = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]
    >>> choices = [-10, 10]
    >>> np.choose(a, choices)
    array([[ 10, -10, 10],
           [-10, 10, -10],
           [ 10, -10, 10]])

    >>> # With thanks to Anne Archibald
    >>> a = np.array([0, 1]).reshape((2,1,1))
    >>> c1 = np.array([1, 2, 3]).reshape((1,3,1))
    >>> c2 = np.array([-1, -2, -3, -4, -5]).reshape((1,1,5))
    >>> np.choose(a, (c1, c2))  # result is 2x3x5, res[0,:,:]=c1, res[1,:,:]=c2
    array([[[ 1, 1, 1, 1, 1],
            [ 2, 2, 2, 2, 2],
            [ 3, 3, 3, 3, 3]],
           [[-1, -2, -3, -4, -5],
            [-1, -2, -3, -4, -5],
            [-1, -2, -3, -4, -5]]])

# numpy.clip

numpy.clip(_a_, _a_min=<no value>_, _a_max=<no value>_, _out=None_, _*_, _min=<no value>_, _max=<no value>_, _**kwargs_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L2241-L2330)

Clip (limit) the values in an array.

Given an interval, values outside the interval are clipped to the interval edges. For example, if an interval of `[0, 1]` is specified, values smaller than 0 become 0, and values larger than 1 become 1. Equivalent to but faster than `np.minimum(a_max, np.maximum(a, a_min))`.

No check is performed to ensure `a_min < a_max`.

Parameters: **a** array_like Array containing elements to clip. **a_min, a_max** array_like or None Minimum and maximum value. If `None`, clipping is not performed on the corresponding edge. If both `a_min` and `a_max` are `None`, the elements of the returned array stay the same. Both are broadcasted against `a`. **out** ndarray, optional The results will be placed in this array. It may be the input array for in-place clipping. `out` must be of the right shape to hold the output. Its type is preserved.
**min, max** array_like or None Array API compatible alternatives for the `a_min` and `a_max` arguments. Only one of `a_min` and `a_max` or `min` and `max` can be passed at the same time. Default: `None`. New in version 2.1.0. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns: **clipped_array** ndarray An array with the elements of `a`, but where values < `a_min` are replaced with `a_min`, and those > `a_max` with `a_max`.

See also [Output type determination](../../user/basics.ufuncs#ufuncs-output-type)

#### Notes

When `a_min` is greater than `a_max`, `clip` returns an array in which all values are equal to `a_max`, as shown in the second example.

#### Examples

    >>> import numpy as np
    >>> a = np.arange(10)
    >>> a
    array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    >>> np.clip(a, 1, 8)
    array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8])
    >>> np.clip(a, 8, 1)
    array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
    >>> np.clip(a, 3, 6, out=a)
    array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6])
    >>> a
    array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6])
    >>> a = np.arange(10)
    >>> a
    array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    >>> np.clip(a, [3, 4, 1, 1, 1, 4, 4, 4, 4, 4], 8)
    array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8])

# numpy.column_stack

numpy.column_stack(_tup_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_shape_base_impl.py#L621-L662)

Stack 1-D arrays as columns into a 2-D array.

Take a sequence of 1-D arrays and stack them as columns to make a single 2-D array. 2-D arrays are stacked as-is, just like with [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack"). 1-D arrays are turned into 2-D columns first.

Parameters: **tup** sequence of 1-D or 2-D arrays. Arrays to stack. All of them must have the same first dimension.

Returns: **stacked** 2-D array The array formed by stacking the given arrays.
See also [`stack`](numpy.stack#numpy.stack "numpy.stack"), [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack"), [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack"), [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate")

#### Examples

    >>> import numpy as np
    >>> a = np.array((1, 2, 3))
    >>> b = np.array((2, 3, 4))
    >>> np.column_stack((a, b))
    array([[1, 2],
           [2, 3],
           [3, 4]])

# numpy.common_type

numpy.common_type(_* arrays_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_type_check_impl.py#L646-L699)

Return a scalar type which is common to the input arrays.

The return type will always be an inexact (i.e. floating point) scalar type, even if all the arrays are integer arrays. If one of the inputs is an integer array, the minimum precision type that is returned is a 64-bit floating point dtype.

All input arrays except int64 and uint64 can be safely cast to the returned dtype without loss of information.

Parameters: **array1, array2, …** ndarrays Input arrays.

Returns: **out** data type code Data type code.

See also [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), [`mintypecode`](numpy.mintypecode#numpy.mintypecode "numpy.mintypecode")

#### Examples

    >>> np.common_type(np.arange(2, dtype=np.float32))
    <class 'numpy.float32'>
    >>> np.common_type(np.arange(2, dtype=np.float32), np.arange(2))
    <class 'numpy.float64'>
    >>> np.common_type(np.arange(4), np.array([45, 6.j]), np.array([45.0]))
    <class 'numpy.complex128'>

# numpy.compress

numpy.compress(_condition_, _a_, _axis=None_, _out=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L2171-L2233)

Return selected slices of an array along given axis.

When working along a given axis, a slice along that axis is returned in `output` for each index where `condition` evaluates to True. When working on a 1-D array, `compress` is equivalent to [`extract`](numpy.extract#numpy.extract "numpy.extract").

Parameters: **condition** 1-D array of bools Array that selects which entries to return.
If len(condition) is less than the size of `a` along the given axis, then output is truncated to the length of the condition array. **a** array_like Array from which to extract a part. **axis** int, optional Axis along which to take slices. If None (default), work on the flattened array. **out** ndarray, optional Output array. Its type is preserved and it must be of the right shape to hold the output. Returns: **compressed_array** ndarray A copy of `a` without the slices along axis for which `condition` is false. See also [`take`](numpy.take#numpy.take "numpy.take"), [`choose`](numpy.choose#numpy.choose "numpy.choose"), [`diag`](numpy.diag#numpy.diag "numpy.diag"), [`diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal"), [`select`](numpy.select#numpy.select "numpy.select") [`ndarray.compress`](numpy.ndarray.compress#numpy.ndarray.compress "numpy.ndarray.compress") Equivalent method in ndarray [`extract`](numpy.extract#numpy.extract "numpy.extract") Equivalent method when working on 1-D arrays [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) #### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4], [5, 6]]) >>> a array([[1, 2], [3, 4], [5, 6]]) >>> np.compress([0, 1], a, axis=0) array([[3, 4]]) >>> np.compress([False, True, True], a, axis=0) array([[3, 4], [5, 6]]) >>> np.compress([False, True], a, axis=1) array([[2], [4], [6]]) Working on the flattened array does not return slices along an axis but selects elements. >>> np.compress([False, True], a) array([2]) # numpy.concat numpy.concat(_(a1_ , _a2_ , _...)_ , _axis=0_ , _out=None_ , _dtype=None_ , _casting="same_kind"_) Join a sequence of arrays along an existing axis. Parameters: **a1, a2, …** sequence of array_like The arrays must have the same shape, except in the dimension corresponding to `axis` (the first, by default). **axis** int, optional The axis along which the arrays will be joined. If axis is None, arrays are flattened before use. Default is 0. 
**out** ndarray, optional If provided, the destination to place the result. The shape must be correct, matching that of what concatenate would have returned if no out argument were specified. **dtype** str or dtype If provided, the destination array will have this dtype. Cannot be provided together with `out`. New in version 1.20.0. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur. Defaults to ‘same_kind’. For a description of the options, please see [casting](../../glossary#term-casting). New in version 1.20.0. Returns: **res** ndarray The concatenated array. See also [`ma.concatenate`](numpy.ma.concatenate#numpy.ma.concatenate "numpy.ma.concatenate") Concatenate function that preserves input masks. [`array_split`](numpy.array_split#numpy.array_split "numpy.array_split") Split an array into multiple sub-arrays of equal or near-equal size. [`split`](numpy.split#numpy.split "numpy.split") Split array into a list of multiple sub-arrays of equal size. [`hsplit`](numpy.hsplit#numpy.hsplit "numpy.hsplit") Split array into multiple sub-arrays horizontally (column wise). [`vsplit`](numpy.vsplit#numpy.vsplit "numpy.vsplit") Split array into multiple sub-arrays vertically (row wise). [`dsplit`](numpy.dsplit#numpy.dsplit "numpy.dsplit") Split array into multiple sub-arrays along the 3rd axis (depth). [`stack`](numpy.stack#numpy.stack "numpy.stack") Stack a sequence of arrays along a new axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble arrays from blocks. [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack") Stack arrays in sequence horizontally (column wise). [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack") Stack arrays in sequence vertically (row wise). [`dstack`](numpy.dstack#numpy.dstack "numpy.dstack") Stack arrays in sequence depth wise (along third dimension). [`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. 
#### Notes When one or more of the arrays to be concatenated is a MaskedArray, this function will return a MaskedArray object instead of an ndarray, but the input masks are _not_ preserved. In cases where a MaskedArray is expected as input, use the ma.concatenate function from the masked array module instead. #### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> b = np.array([[5, 6]]) >>> np.concatenate((a, b), axis=0) array([[1, 2], [3, 4], [5, 6]]) >>> np.concatenate((a, b.T), axis=1) array([[1, 2, 5], [3, 4, 6]]) >>> np.concatenate((a, b), axis=None) array([1, 2, 3, 4, 5, 6]) This function will not preserve masking of MaskedArray inputs. >>> a = np.ma.arange(3) >>> a[1] = np.ma.masked >>> b = np.arange(2, 5) >>> a masked_array(data=[0, --, 2], mask=[False, True, False], fill_value=999999) >>> b array([2, 3, 4]) >>> np.concatenate([a, b]) masked_array(data=[0, 1, 2, 2, 3, 4], mask=False, fill_value=999999) >>> np.ma.concatenate([a, b]) masked_array(data=[0, --, 2, 2, 3, 4], mask=[False, True, False, False, False, False], fill_value=999999) # numpy.concatenate numpy.concatenate(_(a1_ , _a2_ , _...)_ , _axis=0_ , _out=None_ , _dtype=None_ , _casting="same_kind"_) Join a sequence of arrays along an existing axis. Parameters: **a1, a2, …** sequence of array_like The arrays must have the same shape, except in the dimension corresponding to `axis` (the first, by default). **axis** int, optional The axis along which the arrays will be joined. If axis is None, arrays are flattened before use. Default is 0. **out** ndarray, optional If provided, the destination to place the result. The shape must be correct, matching that of what concatenate would have returned if no out argument were specified. **dtype** str or dtype If provided, the destination array will have this dtype. Cannot be provided together with `out`. New in version 1.20.0. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur. 
Defaults to ‘same_kind’. For a description of the options, please see [casting](../../glossary#term-casting). New in version 1.20.0. Returns: **res** ndarray The concatenated array. See also [`ma.concatenate`](numpy.ma.concatenate#numpy.ma.concatenate "numpy.ma.concatenate") Concatenate function that preserves input masks. [`array_split`](numpy.array_split#numpy.array_split "numpy.array_split") Split an array into multiple sub-arrays of equal or near-equal size. [`split`](numpy.split#numpy.split "numpy.split") Split array into a list of multiple sub-arrays of equal size. [`hsplit`](numpy.hsplit#numpy.hsplit "numpy.hsplit") Split array into multiple sub-arrays horizontally (column wise). [`vsplit`](numpy.vsplit#numpy.vsplit "numpy.vsplit") Split array into multiple sub-arrays vertically (row wise). [`dsplit`](numpy.dsplit#numpy.dsplit "numpy.dsplit") Split array into multiple sub-arrays along the 3rd axis (depth). [`stack`](numpy.stack#numpy.stack "numpy.stack") Stack a sequence of arrays along a new axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble arrays from blocks. [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack") Stack arrays in sequence horizontally (column wise). [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack") Stack arrays in sequence vertically (row wise). [`dstack`](numpy.dstack#numpy.dstack "numpy.dstack") Stack arrays in sequence depth wise (along third dimension). [`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. #### Notes When one or more of the arrays to be concatenated is a MaskedArray, this function will return a MaskedArray object instead of an ndarray, but the input masks are _not_ preserved. In cases where a MaskedArray is expected as input, use the ma.concatenate function from the masked array module instead. 
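The `dtype` and `casting` parameters interact as described above; a short sketch with illustrative values (the exact error message is version-dependent):

```python
import numpy as np

a = np.array([1, 2], dtype=np.int32)
b = np.array([3.5, 4.5])  # float64

# dtype= forces the result type (it cannot be combined with out=).
r = np.concatenate((a, b), dtype=np.float32)
print(r.dtype)  # float32

# With casting='no', the implicit cast from float64 is refused.
try:
    np.concatenate((a, b), dtype=np.int64, casting="no")
except TypeError:
    print("cast refused under casting='no'")
```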
#### Examples

    >>> import numpy as np
    >>> a = np.array([[1, 2], [3, 4]])
    >>> b = np.array([[5, 6]])
    >>> np.concatenate((a, b), axis=0)
    array([[1, 2],
           [3, 4],
           [5, 6]])
    >>> np.concatenate((a, b.T), axis=1)
    array([[1, 2, 5],
           [3, 4, 6]])
    >>> np.concatenate((a, b), axis=None)
    array([1, 2, 3, 4, 5, 6])

This function will not preserve masking of MaskedArray inputs.

    >>> a = np.ma.arange(3)
    >>> a[1] = np.ma.masked
    >>> b = np.arange(2, 5)
    >>> a
    masked_array(data=[0, --, 2],
                 mask=[False, True, False],
           fill_value=999999)
    >>> b
    array([2, 3, 4])
    >>> np.concatenate([a, b])
    masked_array(data=[0, 1, 2, 2, 3, 4],
                 mask=False,
           fill_value=999999)
    >>> np.ma.concatenate([a, b])
    masked_array(data=[0, --, 2, 2, 3, 4],
                 mask=[False, True, False, False, False, False],
           fill_value=999999)

# numpy.conj

numpy.conj(_x_, _/_, _out=None_, _*_, _where=True_, _casting='same_kind'_, _order='K'_, _dtype=None_, _subok=True_[, _signature_]) _= <ufunc 'conjugate'>_

Return the complex conjugate, element-wise.

The complex conjugate of a complex number is obtained by changing the sign of its imaginary part.

Parameters: **x** array_like Input value. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns: **y** ndarray The complex conjugate of `x`, with same dtype as `y`.
This is a scalar if `x` is a scalar.

#### Notes

`conj` is an alias for [`conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate"):

    >>> np.conj is np.conjugate
    True

#### Examples

    >>> import numpy as np
    >>> np.conjugate(1+2j)
    (1-2j)

    >>> x = np.eye(2) + 1j * np.eye(2)
    >>> np.conjugate(x)
    array([[ 1.-1.j, 0.-0.j],
           [ 0.-0.j, 1.-1.j]])

# numpy.conjugate

numpy.conjugate(_x_, _/_, _out=None_, _*_, _where=True_, _casting='same_kind'_, _order='K'_, _dtype=None_, _subok=True_[, _signature_]) _= <ufunc 'conjugate'>_

Return the complex conjugate, element-wise.

The complex conjugate of a complex number is obtained by changing the sign of its imaginary part.

Parameters: **x** array_like Input value. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns: **y** ndarray The complex conjugate of `x`, with same dtype as `y`. This is a scalar if `x` is a scalar.
#### Notes [`conj`](numpy.conj#numpy.conj "numpy.conj") is an alias for `conjugate`: >>> np.conj is np.conjugate True #### Examples >>> import numpy as np >>> np.conjugate(1+2j) (1-2j) >>> x = np.eye(2) + 1j * np.eye(2) >>> np.conjugate(x) array([[ 1.-1.j, 0.-0.j], [ 0.-0.j, 1.-1.j]]) # numpy.convolve numpy.convolve(_a_ , _v_ , _mode ='full'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L772-L869) Returns the discrete, linear convolution of two one-dimensional sequences. The convolution operator is often seen in signal processing, where it models the effect of a linear time-invariant system on a signal [1]. In probability theory, the sum of two independent random variables is distributed according to the convolution of their individual distributions. If `v` is longer than `a`, the arrays are swapped before computation. Parameters: **a**(N,) array_like First one-dimensional input array. **v**(M,) array_like Second one-dimensional input array. **mode**{‘full’, ‘valid’, ‘same’}, optional ‘full’: By default, mode is ‘full’. This returns the convolution at each point of overlap, with an output shape of (N+M-1,). At the end-points of the convolution, the signals do not overlap completely, and boundary effects may be seen. ‘same’: Mode ‘same’ returns output of length `max(M, N)`. Boundary effects are still visible. ‘valid’: Mode ‘valid’ returns output of length `max(M, N) - min(M, N) + 1`. The convolution product is only given for points where the signals overlap completely. Values outside the signal boundary have no effect. Returns: **out** ndarray Discrete, linear convolution of `a` and `v`. See also [`scipy.signal.fftconvolve`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.fftconvolve.html#scipy.signal.fftconvolve "\(in SciPy v1.14.1\)") Convolve two arrays using the Fast Fourier Transform. 
[`scipy.linalg.toeplitz`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.toeplitz.html#scipy.linalg.toeplitz "\(in SciPy v1.14.1\)") Used to construct the convolution operator. [`polymul`](numpy.polymul#numpy.polymul "numpy.polymul") Polynomial multiplication. Same output as convolve, but also accepts poly1d objects as input. #### Notes The discrete convolution operation is defined as \\[(a * v)_n = \sum_{m = -\infty}^{\infty} a_m v_{n - m}\\] It can be shown that a convolution \\(x(t) * y(t)\\) in time/space is equivalent to the multiplication \\(X(f) Y(f)\\) in the Fourier domain, after appropriate padding (padding is necessary to prevent circular convolution). Since multiplication is more efficient (faster) than convolution, the function [`scipy.signal.fftconvolve`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.fftconvolve.html#scipy.signal.fftconvolve "\(in SciPy v1.14.1\)") exploits the FFT to calculate the convolution of large data-sets. #### References [1] Wikipedia, “Convolution”, #### Examples Note how the convolution operator flips the second array before “sliding” the two across one another: >>> import numpy as np >>> np.convolve([1, 2, 3], [0, 1, 0.5]) array([0. , 1. , 2.5, 4. , 1.5]) Only return the middle values of the convolution. Contains boundary effects, where zeros are taken into account: >>> np.convolve([1,2,3],[0,1,0.5], 'same') array([1. , 2.5, 4. ]) The two arrays are of the same length, so there is only one position where they completely overlap: >>> np.convolve([1,2,3],[0,1,0.5], 'valid') array([2.5]) # numpy.copy numpy.copy(_a_ , _order ='K'_, _subok =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L898-L966) Return an array copy of the given object. Parameters: **a** array_like Input data. **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout of the copy. 
‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. (Note that this function and [`ndarray.copy`](numpy.ndarray.copy#numpy.ndarray.copy "numpy.ndarray.copy") are very similar, but have different default values for their order= arguments.) **subok** bool, optional If True, then sub-classes will be passed-through, otherwise the returned array will be forced to be a base-class array (defaults to False).

Returns: **arr** ndarray Array interpretation of `a`.

See also [`ndarray.copy`](numpy.ndarray.copy#numpy.ndarray.copy "numpy.ndarray.copy") Preferred method for creating an array copy

#### Notes

This is equivalent to: >>> np.array(a, copy=True) The copy made of the data is shallow, i.e., for arrays with object dtype, the new array will point to the same objects. See Examples from [`ndarray.copy`](numpy.ndarray.copy#numpy.ndarray.copy "numpy.ndarray.copy").

#### Examples

>>> import numpy as np Create an array x, with a reference y and a copy z: >>> x = np.array([1, 2, 3]) >>> y = x >>> z = np.copy(x) Note that when we modify x, y changes, but z does not: >>> x[0] = 10 >>> x[0] == y[0] True >>> x[0] == z[0] False Note that np.copy clears the previously set WRITEABLE=False flag. >>> a = np.array([1, 2, 3]) >>> a.flags["WRITEABLE"] = False >>> b = np.copy(a) >>> b.flags["WRITEABLE"] True >>> b[0] = 3 >>> b array([3, 2, 3])

# numpy.copysign

numpy.copysign(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = <ufunc 'copysign'>

Change the sign of x1 to that of x2, element-wise. If `x2` is a scalar, its sign will be copied to all elements of `x1`.

Parameters: **x1** array_like Values to change the sign of. **x2** array_like The sign of `x2` is copied to `x1`. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output).
**out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns: **out** ndarray or scalar The values of `x1` with the sign of `x2`. This is a scalar if both `x1` and `x2` are scalars.

#### Examples

>>> import numpy as np >>> np.copysign(1.3, -1) -1.3 >>> 1/np.copysign(0, 1) inf >>> 1/np.copysign(0, -1) -inf >>> np.copysign([-1, 0, 1], -1.1) array([-1., -0., -1.]) >>> np.copysign([-1, 0, 1], np.arange(3)-1) array([-1., 0., 1.])

# numpy.copyto

numpy.copyto(_dst_ , _src_ , _casting ='same_kind'_, _where =True_) Copies values from one array to another, broadcasting as necessary. Raises a TypeError if the `casting` rule is violated. If `where` is provided, it selects which elements to copy.

Parameters: **dst** ndarray The array into which values are copied. **src** array_like The array from which values are copied. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur when copying. * ‘no’ means the data types should not be cast at all. * ‘equiv’ means only byte-order changes are allowed. * ‘safe’ means only casts which can preserve values are allowed.
* ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed. * ‘unsafe’ means any data conversions may be done. **where** array_like of bool, optional A boolean array which is broadcasted to match the dimensions of `dst`, and selects elements to copy from `src` to `dst` wherever it contains the value True.

#### Examples

>>> import numpy as np >>> A = np.array([4, 5, 6]) >>> B = [1, 2, 3] >>> np.copyto(A, B) >>> A array([1, 2, 3]) >>> A = np.array([[1, 2, 3], [4, 5, 6]]) >>> B = [[4, 5, 6], [7, 8, 9]] >>> np.copyto(A, B) >>> A array([[4, 5, 6], [7, 8, 9]])

# numpy.corrcoef

numpy.corrcoef(_x_ , _y=None_ , _rowvar=True_ , _bias=<no value>_ , _ddof=<no value>_ , _*_ , _dtype=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L2903-L3055)

Return Pearson product-moment correlation coefficients. Please refer to the documentation for [`cov`](numpy.cov#numpy.cov "numpy.cov") for more detail. The relationship between the correlation coefficient matrix, `R`, and the covariance matrix, `C`, is \\[R_{ij} = \frac{ C_{ij} } { \sqrt{ C_{ii} C_{jj} } }\\] The values of `R` are between -1 and 1, inclusive.

Parameters: **x** array_like A 1-D or 2-D array containing multiple variables and observations. Each row of `x` represents a variable, and each column a single observation of all those variables. Also see `rowvar` below. **y** array_like, optional An additional set of variables and observations. `y` has the same shape as `x`. **rowvar** bool, optional If `rowvar` is True (default), then each row represents a variable, with observations in the columns. Otherwise, the relationship is transposed: each column represents a variable, while the rows contain observations. **bias** _NoValue, optional Has no effect, do not use. Deprecated since version 1.10.0. **ddof** _NoValue, optional Has no effect, do not use. Deprecated since version 1.10.0. **dtype** data-type, optional Data-type of the result.
By default, the return data-type will have at least [`numpy.float64`](../arrays.scalars#numpy.float64 "numpy.float64") precision. New in version 1.20. Returns: **R** ndarray The correlation coefficient matrix of the variables. See also [`cov`](numpy.cov#numpy.cov "numpy.cov") Covariance matrix #### Notes Due to floating point rounding the resulting array may not be Hermitian, the diagonal elements may not be 1, and the elements may not satisfy the inequality abs(a) <= 1. The real and imaginary parts are clipped to the interval [-1, 1] in an attempt to improve on that situation but is not much help in the complex case. This function accepts but discards arguments `bias` and `ddof`. This is for backwards compatibility with previous versions of this function. These arguments had no effect on the return values of the function and can be safely ignored in this and previous versions of numpy. #### Examples >>> import numpy as np In this example we generate two random arrays, `xarr` and `yarr`, and compute the row-wise and column-wise Pearson correlation coefficients, `R`. Since `rowvar` is true by default, we first find the row-wise Pearson correlation coefficients between the variables of `xarr`. >>> import numpy as np >>> rng = np.random.default_rng(seed=42) >>> xarr = rng.random((3, 3)) >>> xarr array([[0.77395605, 0.43887844, 0.85859792], [0.69736803, 0.09417735, 0.97562235], [0.7611397 , 0.78606431, 0.12811363]]) >>> R1 = np.corrcoef(xarr) >>> R1 array([[ 1. , 0.99256089, -0.68080986], [ 0.99256089, 1. , -0.76492172], [-0.68080986, -0.76492172, 1. ]]) If we add another set of variables and observations `yarr`, we can compute the row-wise Pearson correlation coefficients between the variables in `xarr` and `yarr`. >>> yarr = rng.random((3, 3)) >>> yarr array([[0.45038594, 0.37079802, 0.92676499], [0.64386512, 0.82276161, 0.4434142 ], [0.22723872, 0.55458479, 0.06381726]]) >>> R2 = np.corrcoef(xarr, yarr) >>> R2 array([[ 1. 
, 0.99256089, -0.68080986, 0.75008178, -0.934284 , -0.99004057], [ 0.99256089, 1. , -0.76492172, 0.82502011, -0.97074098, -0.99981569], [-0.68080986, -0.76492172, 1. , -0.99507202, 0.89721355, 0.77714685], [ 0.75008178, 0.82502011, -0.99507202, 1. , -0.93657855, -0.83571711], [-0.934284 , -0.97074098, 0.89721355, -0.93657855, 1. , 0.97517215], [-0.99004057, -0.99981569, 0.77714685, -0.83571711, 0.97517215, 1. ]]) Finally if we use the option `rowvar=False`, the columns are now being treated as the variables and we will find the column-wise Pearson correlation coefficients between variables in `xarr` and `yarr`. >>> R3 = np.corrcoef(xarr, yarr, rowvar=False) >>> R3 array([[ 1. , 0.77598074, -0.47458546, -0.75078643, -0.9665554 , 0.22423734], [ 0.77598074, 1. , -0.92346708, -0.99923895, -0.58826587, -0.44069024], [-0.47458546, -0.92346708, 1. , 0.93773029, 0.23297648, 0.75137473], [-0.75078643, -0.99923895, 0.93773029, 1. , 0.55627469, 0.47536961], [-0.9665554 , -0.58826587, 0.23297648, 0.55627469, 1. , -0.46666491], [ 0.22423734, -0.44069024, 0.75137473, 0.47536961, -0.46666491, 1. ]]) # numpy.correlate numpy.correlate(_a_ , _v_ , _mode ='valid'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L692-L765) Cross-correlation of two 1-dimensional sequences. This function computes the correlation as generally defined in signal processing texts [1]: \\[c_k = \sum_n a_{n+k} \cdot \overline{v}_n\\] with a and v sequences being zero-padded where necessary and \\(\overline v\\) denoting complex conjugation. Parameters: **a, v** array_like Input sequences. **mode**{‘valid’, ‘same’, ‘full’}, optional Refer to the [`convolve`](numpy.convolve#numpy.convolve "numpy.convolve") docstring. Note that the default is ‘valid’, unlike [`convolve`](numpy.convolve#numpy.convolve "numpy.convolve"), which uses ‘full’. Returns: **out** ndarray Discrete cross-correlation of `a` and `v`. 
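The definition above implies that cross-correlation is convolution with the second input conjugated and time-reversed. A quick numerical cross-check (a sketch, shown for `mode='full'` only, where no offset bookkeeping between the two functions is needed):

```python
import numpy as np

a = np.array([1, 2, 3])
v = np.array([0, 1, 0.5])

# c_k = sum_n a_{n+k} * conj(v_n)  ==  convolution of a with reversed(conj(v))
lhs = np.correlate(a, v, mode="full")
rhs = np.convolve(a, np.conj(v)[::-1], mode="full")
print(np.allclose(lhs, rhs))  # True
```

Both sides evaluate to `[0.5, 2., 3.5, 3., 0.]`, matching the `'full'`-mode example in the docstring.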
See also [`convolve`](numpy.convolve#numpy.convolve "numpy.convolve") Discrete, linear convolution of two one-dimensional sequences. [`scipy.signal.correlate`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.correlate.html#scipy.signal.correlate "\(in SciPy v1.14.1\)") uses FFT which has superior performance on large arrays.

#### Notes

The definition of correlation above is not unique and sometimes correlation may be defined differently. Another common definition is [1]: \\[c'_k = \sum_n a_{n} \cdot \overline{v_{n+k}}\\] which is related to \\(c_k\\) by \\(c'_k = c_{-k}\\). `numpy.correlate` may perform slowly in large arrays (e.g. n = 1e5) because it does not use the FFT to compute the convolution; in that case, [`scipy.signal.correlate`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.correlate.html#scipy.signal.correlate "\(in SciPy v1.14.1\)") might be preferable.

#### References

[1] (1,2) Wikipedia, “Cross-correlation”,

#### Examples

>>> import numpy as np >>> np.correlate([1, 2, 3], [0, 1, 0.5]) array([3.5]) >>> np.correlate([1, 2, 3], [0, 1, 0.5], "same") array([2. , 3.5, 3. ]) >>> np.correlate([1, 2, 3], [0, 1, 0.5], "full") array([0.5, 2. , 3.5, 3. , 0. ]) Using complex sequences: >>> np.correlate([1+1j, 2, 3-1j], [0, 1, 0.5j], 'full') array([ 0.5-0.5j, 1.0+0.j , 1.5-1.5j, 3.0-1.j , 0.0+0.j ]) Note that you get the time reversed, complex conjugated result (\\(\overline{c_{-k}}\\)) when the two input sequences a and v change places: >>> np.correlate([0, 1, 0.5j], [1+1j, 2, 3-1j], 'full') array([ 0.0+0.j , 3.0+1.j , 1.5+1.5j, 1.0+0.j , 0.5+0.5j])

# numpy.cos

numpy.cos(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = <ufunc 'cos'>

Cosine element-wise.

Parameters: **x** array_like Input array in radians. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored.
If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns: **y** ndarray The corresponding cosine values. This is a scalar if `x` is a scalar.

#### Notes

If `out` is provided, the function writes the result into it, and returns a reference to `out`. (See Examples)

#### References

M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions. New York, NY: Dover, 1972.

#### Examples

>>> import numpy as np >>> np.cos(np.array([0, np.pi/2, np.pi])) array([ 1.00000000e+00, 6.12303177e-17, -1.00000000e+00]) >>> >>> # Example of providing the optional output parameter >>> out1 = np.array([0], dtype='d') >>> out2 = np.cos([0.1], out1) >>> out2 is out1 True >>> >>> # Example of ValueError due to provision of shape mis-matched `out` >>> np.cos(np.zeros((3,3)),np.zeros((2,2))) Traceback (most recent call last): File "", line 1, in ValueError: operands could not be broadcast together with shapes (3,3) (2,2)

# numpy.cosh

numpy.cosh(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = <ufunc 'cosh'>

Hyperbolic cosine, element-wise. Equivalent to `1/2 * (np.exp(x) + np.exp(-x))` and `np.cos(1j*x)`.

Parameters: **x** array_like Input array. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored.
If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** ndarray or scalar Output array of same shape as `x`. This is a scalar if `x` is a scalar. #### Examples >>> import numpy as np >>> np.cosh(0) 1.0 The hyperbolic cosine describes the shape of a hanging cable: >>> import matplotlib.pyplot as plt >>> x = np.linspace(-4, 4, 1000) >>> plt.plot(x, np.cosh(x)) >>> plt.show() # numpy.count_nonzero numpy.count_nonzero(_a_ , _axis =None_, _*_ , _keepdims =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L450-L517) Counts the number of non-zero values in the array `a`. The word “non-zero” is in reference to the Python 2.x built-in method `__nonzero__()` (renamed `__bool__()` in Python 3.x) of Python objects that tests an object’s “truthfulness”. For example, any number is considered truthful if it is nonzero, whereas any string is considered truthful if it is not the empty string. Thus, this function (recursively) counts how many elements in `a` (and in sub-arrays thereof) have their `__nonzero__()` or `__bool__()` method evaluated to `True`. Parameters: **a** array_like The array for which to count non-zeros. **axis** int or tuple, optional Axis or tuple of axes along which to count non-zeros. Default is None, meaning that non-zeros will be counted along a flattened version of `a`. 
**keepdims** bool, optional If this is set to True, the axes that are counted are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. Returns: **count** int or array of int Number of non-zero values in the array along a given axis. Otherwise, the total number of non-zero values in the array is returned. See also [`nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") Return the coordinates of all the non-zero values. #### Examples >>> import numpy as np >>> np.count_nonzero(np.eye(4)) 4 >>> a = np.array([[0, 1, 7, 0], ... [3, 0, 2, 19]]) >>> np.count_nonzero(a) 5 >>> np.count_nonzero(a, axis=0) array([1, 1, 2, 1]) >>> np.count_nonzero(a, axis=1) array([2, 3]) >>> np.count_nonzero(a, axis=1, keepdims=True) array([[2], [3]]) # numpy.cov numpy.cov(_m_ , _y =None_, _rowvar =True_, _bias =False_, _ddof =None_, _fweights =None_, _aweights =None_, _*_ , _dtype =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L2680-L2895) Estimate a covariance matrix, given data and weights. Covariance indicates the level to which two variables vary together. If we examine N-dimensional samples, \\(X = [x_1, x_2, ... x_N]^T\\), then the covariance matrix element \\(C_{ij}\\) is the covariance of \\(x_i\\) and \\(x_j\\). The element \\(C_{ii}\\) is the variance of \\(x_i\\). See the notes for an outline of the algorithm. Parameters: **m** array_like A 1-D or 2-D array containing multiple variables and observations. Each row of `m` represents a variable, and each column a single observation of all those variables. Also see `rowvar` below. **y** array_like, optional An additional set of variables and observations. `y` has the same form as that of `m`. **rowvar** bool, optional If `rowvar` is True (default), then each row represents a variable, with observations in the columns. 
Otherwise, the relationship is transposed: each column represents a variable, while the rows contain observations. **bias** bool, optional Default normalization (False) is by `(N - 1)`, where `N` is the number of observations given (unbiased estimate). If `bias` is True, then normalization is by `N`. These values can be overridden by using the keyword `ddof` in numpy versions >= 1.5. **ddof** int, optional If not `None` the default value implied by `bias` is overridden. Note that `ddof=1` will return the unbiased estimate, even if both `fweights` and `aweights` are specified, and `ddof=0` will return the simple average. See the notes for the details. The default value is `None`. **fweights** array_like, int, optional 1-D array of integer frequency weights; the number of times each observation vector should be repeated. **aweights** array_like, optional 1-D array of observation vector weights. These relative weights are typically large for observations considered “important” and smaller for observations considered less “important”. If `ddof=0` the array of weights can be used to assign probabilities to observation vectors. **dtype** data-type, optional Data-type of the result. By default, the return data-type will have at least [`numpy.float64`](../arrays.scalars#numpy.float64 "numpy.float64") precision. New in version 1.20. Returns: **out** ndarray The covariance matrix of the variables. See also [`corrcoef`](numpy.corrcoef#numpy.corrcoef "numpy.corrcoef") Normalized covariance matrix #### Notes Assume that the observations are in the columns of the observation array `m` and let `f = fweights` and `a = aweights` for brevity. The steps to compute the weighted covariance are as follows: >>> m = np.arange(10, dtype=np.float64) >>> f = np.arange(10) * 2 >>> a = np.arange(10) ** 2. 
>>> ddof = 1 >>> w = f * a >>> v1 = np.sum(w) >>> v2 = np.sum(w * a) >>> m -= np.sum(m * w, axis=None, keepdims=True) / v1 >>> cov = np.dot(m * w, m.T) * v1 / (v1**2 - ddof * v2) Note that when `a == 1`, the normalization factor `v1 / (v1**2 - ddof * v2)` goes over to `1 / (np.sum(f) - ddof)` as it should. #### Examples >>> import numpy as np Consider two variables, \\(x_0\\) and \\(x_1\\), which correlate perfectly, but in opposite directions: >>> x = np.array([[0, 2], [1, 1], [2, 0]]).T >>> x array([[0, 1, 2], [2, 1, 0]]) Note how \\(x_0\\) increases while \\(x_1\\) decreases. The covariance matrix shows this clearly: >>> np.cov(x) array([[ 1., -1.], [-1., 1.]]) Note that element \\(C_{0,1}\\), which shows the correlation between \\(x_0\\) and \\(x_1\\), is negative. Further, note how `x` and `y` are combined: >>> x = [-2.1, -1, 4.3] >>> y = [3, 1.1, 0.12] >>> X = np.stack((x, y), axis=0) >>> np.cov(X) array([[11.71 , -4.286 ], # may vary [-4.286 , 2.144133]]) >>> np.cov(x, y) array([[11.71 , -4.286 ], # may vary [-4.286 , 2.144133]]) >>> np.cov(x) array(11.71) # numpy.cross numpy.cross(_a_ , _b_ , _axisa =-1_, _axisb =-1_, _axisc =-1_, _axis =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L1522-L1739) Return the cross product of two (arrays of) vectors. The cross product of `a` and `b` in \\(R^3\\) is a vector perpendicular to both `a` and `b`. If `a` and `b` are arrays of vectors, the vectors are defined by the last axis of `a` and `b` by default, and these axes can have dimensions 2 or 3. Where the dimension of either `a` or `b` is 2, the third component of the input vector is assumed to be zero and the cross product calculated accordingly. In cases where both input vectors have dimension 2, the z-component of the cross product is returned. Parameters: **a** array_like Components of the first vector(s). **b** array_like Components of the second vector(s). **axisa** int, optional Axis of `a` that defines the vector(s). 
By default, the last axis. **axisb** int, optional Axis of `b` that defines the vector(s). By default, the last axis. **axisc** int, optional Axis of `c` containing the cross product vector(s). Ignored if both input vectors have dimension 2, as the return is scalar. By default, the last axis. **axis** int, optional If defined, the axis of `a`, `b` and `c` that defines the vector(s) and cross product(s). Overrides `axisa`, `axisb` and `axisc`. Returns: **c** ndarray Vector cross product(s). Raises: ValueError When the dimension of the vector(s) in `a` and/or `b` does not equal 2 or 3. See also [`inner`](numpy.inner#numpy.inner "numpy.inner") Inner product [`outer`](numpy.outer#numpy.outer "numpy.outer") Outer product. [`linalg.cross`](numpy.linalg.cross#numpy.linalg.cross "numpy.linalg.cross") An Array API compatible variation of `np.cross`, which accepts (arrays of) 3-element vectors only. [`ix_`](numpy.ix_#numpy.ix_ "numpy.ix_") Construct index arrays. #### Notes Supports full broadcasting of the inputs. Dimension-2 input arrays were deprecated in 2.0.0. If you do need this functionality, you can use: def cross2d(x, y): return x[..., 0] * y[..., 1] - x[..., 1] * y[..., 0] #### Examples Vector cross-product. >>> import numpy as np >>> x = [1, 2, 3] >>> y = [4, 5, 6] >>> np.cross(x, y) array([-3, 6, -3]) One vector with dimension 2. >>> x = [1, 2] >>> y = [4, 5, 6] >>> np.cross(x, y) array([12, -6, -3]) Equivalently: >>> x = [1, 2, 0] >>> y = [4, 5, 6] >>> np.cross(x, y) array([12, -6, -3]) Both vectors with dimension 2. >>> x = [1,2] >>> y = [4,5] >>> np.cross(x, y) array(-3) Multiple vector cross-products. Note that the direction of the cross product vector is defined by the _right-hand rule_. >>> x = np.array([[1,2,3], [4,5,6]]) >>> y = np.array([[4,5,6], [1,2,3]]) >>> np.cross(x, y) array([[-3, 6, -3], [ 3, -6, 3]]) The orientation of `c` can be changed using the `axisc` keyword. 
>>> np.cross(x, y, axisc=0) array([[-3, 3], [ 6, -6], [-3, 3]]) Change the vector definition of `x` and `y` using `axisa` and `axisb`. >>> x = np.array([[1,2,3], [4,5,6], [7, 8, 9]]) >>> y = np.array([[7, 8, 9], [4,5,6], [1,2,3]]) >>> np.cross(x, y) array([[ -6, 12, -6], [ 0, 0, 0], [ 6, -12, 6]]) >>> np.cross(x, y, axisa=0, axisb=0) array([[-24, 48, -24], [-30, 60, -30], [-36, 72, -36]]) # numpy.cumprod numpy.cumprod(_a_ , _axis =None_, _dtype =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L3453-L3516) Return the cumulative product of elements along a given axis. Parameters: **a** array_like Input array. **axis** int, optional Axis along which the cumulative product is computed. By default the input is flattened. **dtype** dtype, optional Type of the returned array, as well as of the accumulator in which the elements are multiplied. If _dtype_ is not specified, it defaults to the dtype of `a`, unless `a` has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used instead. **out** ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type of the resulting values will be cast if necessary. Returns: **cumprod** ndarray A new array holding the result is returned unless `out` is specified, in which case a reference to out is returned. See also [`cumulative_prod`](numpy.cumulative_prod#numpy.cumulative_prod "numpy.cumulative_prod") Array API compatible alternative for `cumprod`. [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) #### Notes Arithmetic is modular when using integer types, and no error is raised on overflow. #### Examples >>> import numpy as np >>> a = np.array([1,2,3]) >>> np.cumprod(a) # intermediate results 1, 1*2 ... 
# total product 1*2*3 = 6 array([1, 2, 6]) >>> a = np.array([[1, 2, 3], [4, 5, 6]]) >>> np.cumprod(a, dtype=float) # specify type of output array([ 1., 2., 6., 24., 120., 720.]) The cumulative product for each column (i.e., over the rows) of `a`: >>> np.cumprod(a, axis=0) array([[ 1, 2, 3], [ 4, 10, 18]]) The cumulative product for each row (i.e. over the columns) of `a`: >>> np.cumprod(a,axis=1) array([[ 1, 2, 6], [ 4, 20, 120]]) # numpy.cumsum numpy.cumsum(_a_ , _axis =None_, _dtype =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L2879-L2955) Return the cumulative sum of the elements along a given axis. Parameters: **a** array_like Input array. **axis** int, optional Axis along which the cumulative sum is computed. The default (None) is to compute the cumsum over the flattened array. **dtype** dtype, optional Type of the returned array and of the accumulator in which the elements are summed. If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is not specified, it defaults to the dtype of `a`, unless `a` has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used. **out** ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs- output-type) for more details. Returns: **cumsum_along_axis** ndarray. A new array holding the result is returned unless `out` is specified, in which case a reference to `out` is returned. The result has the same size as `a`, and the same shape as `a` if `axis` is not None or `a` is a 1-d array. See also [`cumulative_sum`](numpy.cumulative_sum#numpy.cumulative_sum "numpy.cumulative_sum") Array API compatible alternative for `cumsum`. [`sum`](numpy.sum#numpy.sum "numpy.sum") Sum array elements. 
[`trapezoid`](numpy.trapezoid#numpy.trapezoid "numpy.trapezoid") Integration of array values using composite trapezoidal rule. [`diff`](numpy.diff#numpy.diff "numpy.diff") Calculate the n-th discrete difference along given axis. #### Notes Arithmetic is modular when using integer types, and no error is raised on overflow. `cumsum(a)[-1]` may not be equal to `sum(a)` for floating-point values since `sum` may use a pairwise summation routine, reducing the roundoff-error. See [`sum`](numpy.sum#numpy.sum "numpy.sum") for more information. #### Examples >>> import numpy as np >>> a = np.array([[1,2,3], [4,5,6]]) >>> a array([[1, 2, 3], [4, 5, 6]]) >>> np.cumsum(a) array([ 1, 3, 6, 10, 15, 21]) >>> np.cumsum(a, dtype=float) # specifies type of output value(s) array([ 1., 3., 6., 10., 15., 21.]) >>> np.cumsum(a,axis=0) # sum over rows for each of the 3 columns array([[1, 2, 3], [5, 7, 9]]) >>> np.cumsum(a,axis=1) # sum over columns for each of the 2 rows array([[ 1, 3, 6], [ 4, 9, 15]]) `cumsum(b)[-1]` may not be equal to `sum(b)` >>> b = np.array([1, 2e-9, 3e-9] * 1000000) >>> b.cumsum()[-1] 1000000.0050045159 >>> b.sum() 1000000.0050000029 # numpy.cumulative_prod numpy.cumulative_prod(_x_ , _/_ , _*_ , _axis =None_, _dtype =None_, _out =None_, _include_initial =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L2713-L2782) Return the cumulative product of elements along a given axis. This function is an Array API compatible alternative to [`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod"). Parameters: **x** array_like Input array. **axis** int, optional Axis along which the cumulative product is computed. The default (None) is only allowed for one-dimensional arrays. For arrays with more than one dimension `axis` is required. **dtype** dtype, optional Type of the returned array, as well as of the accumulator in which the elements are multiplied. 
If `dtype` is not specified, it defaults to the dtype of `x`, unless `x` has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used instead. **out** ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type of the resulting values will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details. **include_initial** bool, optional Boolean indicating whether to include the initial value (ones) as the first value in the output. With `include_initial=True` the shape of the output is different than the shape of the input. Default: `False`. Returns: **cumulative_prod_along_axis** ndarray A new array holding the result is returned unless `out` is specified, in which case a reference to `out` is returned. The result has the same shape as `x` if `include_initial=False`. #### Notes Arithmetic is modular when using integer types, and no error is raised on overflow. #### Examples >>> a = np.array([1, 2, 3]) >>> np.cumulative_prod(a) # intermediate results 1, 1*2 ... # total product 1*2*3 = 6 array([1, 2, 6]) >>> a = np.array([1, 2, 3, 4, 5, 6]) >>> np.cumulative_prod(a, dtype=float) # specify type of output array([ 1., 2., 6., 24., 120., 720.]) The cumulative product for each column (i.e., over the rows) of `b`: >>> b = np.array([[1, 2, 3], [4, 5, 6]]) >>> np.cumulative_prod(b, axis=0) array([[ 1, 2, 3], [ 4, 10, 18]]) The cumulative product for each row (i.e. over the columns) of `b`: >>> np.cumulative_prod(b, axis=1) array([[ 1, 2, 6], [ 4, 20, 120]]) # numpy.cumulative_sum numpy.cumulative_sum(_x_ , _/_ , _*_ , _axis =None_, _dtype =None_, _out =None_, _include_initial =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L2790-L2872) Return the cumulative sum of the elements along a given axis. 
This function is an Array API compatible alternative to [`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum"). Parameters: **x** array_like Input array. **axis** int, optional Axis along which the cumulative sum is computed. The default (None) is only allowed for one-dimensional arrays. For arrays with more than one dimension `axis` is required. **dtype** dtype, optional Type of the returned array and of the accumulator in which the elements are summed. If `dtype` is not specified, it defaults to the dtype of `x`, unless `x` has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used. **out** ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs- output-type) for more details. **include_initial** bool, optional Boolean indicating whether to include the initial value (zeros) as the first value in the output. With `include_initial=True` the shape of the output is different than the shape of the input. Default: `False`. Returns: **cumulative_sum_along_axis** ndarray A new array holding the result is returned unless `out` is specified, in which case a reference to `out` is returned. The result has the same shape as `x` if `include_initial=False`. See also [`sum`](numpy.sum#numpy.sum "numpy.sum") Sum array elements. [`trapezoid`](numpy.trapezoid#numpy.trapezoid "numpy.trapezoid") Integration of array values using composite trapezoidal rule. [`diff`](numpy.diff#numpy.diff "numpy.diff") Calculate the n-th discrete difference along given axis. #### Notes Arithmetic is modular when using integer types, and no error is raised on overflow. `cumulative_sum(a)[-1]` may not be equal to `sum(a)` for floating-point values since `sum` may use a pairwise summation routine, reducing the roundoff-error. 
See [`sum`](numpy.sum#numpy.sum "numpy.sum") for more information. #### Examples >>> a = np.array([1, 2, 3, 4, 5, 6]) >>> a array([1, 2, 3, 4, 5, 6]) >>> np.cumulative_sum(a) array([ 1, 3, 6, 10, 15, 21]) >>> np.cumulative_sum(a, dtype=float) # specifies type of output value(s) array([ 1., 3., 6., 10., 15., 21.]) >>> b = np.array([[1, 2, 3], [4, 5, 6]]) >>> np.cumulative_sum(b,axis=0) # sum over rows for each of the 3 columns array([[1, 2, 3], [5, 7, 9]]) >>> np.cumulative_sum(b,axis=1) # sum over columns for each of the 2 rows array([[ 1, 3, 6], [ 4, 9, 15]]) `cumulative_sum(c)[-1]` may not be equal to `sum(c)` >>> c = np.array([1, 2e-9, 3e-9] * 1000000) >>> np.cumulative_sum(c)[-1] 1000000.0050045159 >>> c.sum() 1000000.0050000029 # numpy.datetime_as_string numpy.datetime_as_string(_arr_ , _unit =None_, _timezone ='naive'_, _casting ='same_kind'_) Convert an array of datetimes into an array of strings. Parameters: **arr** array_like of datetime64 The array of UTC timestamps to format. **unit** str One of None, ‘auto’, or a [datetime unit](../arrays.datetime#arrays-dtypes- dateunits). **timezone**{‘naive’, ‘UTC’, ‘local’} or tzinfo Timezone information to use when displaying the datetime. If ‘UTC’, end with a Z to indicate UTC time. If ‘local’, convert to the local timezone first, and suffix with a +-#### timezone offset. If a tzinfo object, then do as with ‘local’, but use the specified timezone. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’} Casting to allow when changing between datetime units. Returns: **str_arr** ndarray An array of strings the same shape as `arr`. 
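With the default `timezone='naive'`, timestamps are formatted as given, without a `Z` or UTC-offset suffix. A minimal sketch:

```python
import numpy as np

# Four timestamps at minute precision
d = np.arange('2002-10-27T04:30', 4 * 60, 60, dtype='M8[m]')

# timezone='naive' (the default): no timezone suffix is appended
naive = np.datetime_as_string(d)
print(naive)
```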
#### Examples

>>> import numpy as np
>>> import pytz
>>> d = np.arange('2002-10-27T04:30', 4*60, 60, dtype='M8[m]')
>>> d
array(['2002-10-27T04:30', '2002-10-27T05:30', '2002-10-27T06:30',
       '2002-10-27T07:30'], dtype='datetime64[m]')

Setting the timezone to UTC shows the same information, but with a Z suffix:

>>> np.datetime_as_string(d, timezone='UTC')
array(['2002-10-27T04:30Z', '2002-10-27T05:30Z', '2002-10-27T06:30Z',
       '2002-10-27T07:30Z'], dtype='<U35')

Passing a tzinfo object converts the timestamps to that timezone and appends the UTC offset:

>>> np.datetime_as_string(d, timezone=pytz.timezone('US/Eastern'))
array(['2002-10-27T00:30-0400', '2002-10-27T01:30-0400',
       '2002-10-27T01:30-0500', '2002-10-27T02:30-0500'], dtype='<U39')

Passing in a unit larger than the one in the array truncates the displayed precision:

>>> np.datetime_as_string(d, unit='h')
array(['2002-10-27T04', '2002-10-27T05', '2002-10-27T06', '2002-10-27T07'],
      dtype='<U32')

>>> np.datetime_as_string(d, unit='s')
array(['2002-10-27T04:30:00', '2002-10-27T05:30:00', '2002-10-27T06:30:00',
       '2002-10-27T07:30:00'], dtype='<U38')

`casting` controls whether the change in precision is allowed:

>>> np.datetime_as_string(d, unit='h', casting='safe')
Traceback (most recent call last):
    ...
TypeError: Cannot create a datetime string as units 'h' from a NumPy
datetime with units 'm' according to the rule 'safe'

# numpy.datetime_data numpy.datetime_data(_dtype_ , _/_) Get information about the step size of a date or time type. The returned tuple can be passed as the second argument of [`numpy.datetime64`](../arrays.scalars#numpy.datetime64 "numpy.datetime64") and [`numpy.timedelta64`](../arrays.scalars#numpy.timedelta64 "numpy.timedelta64"). Parameters: **dtype** dtype The dtype object, which must be a [`datetime64`](../arrays.scalars#numpy.datetime64 "numpy.datetime64") or [`timedelta64`](../arrays.scalars#numpy.timedelta64 "numpy.timedelta64") type. Returns: **unit** str The [datetime unit](../arrays.datetime#arrays-dtypes-dateunits) on which this dtype is based. **count** int The number of base units in a step. 
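The `(unit, count)` tuple can also be read straight off an existing array's dtype, which is handy when the step size is not known in advance. A short sketch:

```python
import numpy as np

# A timedelta dtype whose step is 25 seconds
a = np.zeros(3, dtype='timedelta64[25s]')

# datetime_data returns (base unit, number of base units per step)
unit, count = np.datetime_data(a.dtype)
print(unit, count)  # -> s 25
```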
#### Examples >>> import numpy as np >>> dt_25s = np.dtype('timedelta64[25s]') >>> np.datetime_data(dt_25s) ('s', 25) >>> np.array(10, dt_25s).astype('timedelta64[s]') array(250, dtype='timedelta64[s]') The result can be used to construct a datetime that uses the same units as a timedelta >>> np.datetime64('2010', np.datetime_data(dt_25s)) np.datetime64('2010-01-01T00:00:00','25s') # numpy.deg2rad numpy.deg2rad(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'deg2rad'>_ Convert angles from degrees to radians. Parameters: **x** array_like Angles in degrees. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray The corresponding angle in radians. This is a scalar if `x` is a scalar. See also [`rad2deg`](numpy.rad2deg#numpy.rad2deg "numpy.rad2deg") Convert angles from radians to degrees. [`unwrap`](numpy.unwrap#numpy.unwrap "numpy.unwrap") Remove large jumps in angle by wrapping. #### Notes `deg2rad(x)` is `x * pi / 180`. 
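The identity in the note (`deg2rad(x)` is `x * pi / 180`) can be checked directly, along with the round trip through `rad2deg`:

```python
import numpy as np

angles = np.array([0.0, 30.0, 90.0, 180.0, 360.0])

# deg2rad is the elementwise map x -> x * pi / 180
assert np.allclose(np.deg2rad(angles), angles * np.pi / 180)

# rad2deg inverts it
assert np.allclose(np.rad2deg(np.deg2rad(angles)), angles)
```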
#### Examples >>> import numpy as np >>> np.deg2rad(180) 3.1415926535897931 # numpy.degrees numpy.degrees(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'degrees'>_ Convert angles from radians to degrees. Parameters: **x** array_like Input array in radians. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray of floats The corresponding degree values; if `out` was supplied this is a reference to it. This is a scalar if `x` is a scalar. See also [`rad2deg`](numpy.rad2deg#numpy.rad2deg "numpy.rad2deg") equivalent function #### Examples Convert a radian array to degrees >>> import numpy as np >>> rad = np.arange(12.)*np.pi/6 >>> np.degrees(rad) array([ 0., 30., 60., 90., 120., 150., 180., 210., 240., 270., 300., 330.]) >>> out = np.zeros((rad.shape)) >>> r = np.degrees(rad, out) >>> np.all(r == out) True # numpy.delete numpy.delete(_arr_ , _obj_ , _axis =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L5272-L5449) Return a new array with sub-arrays along an axis deleted. For a one dimensional array, this returns those entries not returned by `arr[obj]`. 
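The complement relationship stated above — for 1-D input, `np.delete(arr, obj)` keeps exactly the entries *not* selected by `arr[obj]` — can be seen in a short sketch:

```python
import numpy as np

arr = np.arange(10, 16)     # [10 11 12 13 14 15]
obj = [1, 4]

kept = np.delete(arr, obj)  # entries not selected by arr[obj]
removed = arr[obj]          # entries that were selected

# kept and removed together partition the original array
print(kept)     # [10 12 13 15]
print(removed)  # [11 14]
```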
Parameters: **arr** array_like Input array. **obj** slice, int, array-like of ints or bools Indicate indices of sub-arrays to remove along the specified axis. Changed in version 1.19.0: Boolean indices are now treated as a mask of elements to remove, rather than being cast to the integers 0 and 1. **axis** int, optional The axis along which to delete the subarray defined by `obj`. If `axis` is None, `obj` is applied to the flattened array. Returns: **out** ndarray A copy of `arr` with the elements specified by `obj` removed. Note that `delete` does not occur in-place. If `axis` is None, `out` is a flattened array. See also [`insert`](numpy.insert#numpy.insert "numpy.insert") Insert elements into an array. [`append`](numpy.append#numpy.append "numpy.append") Append elements at the end of an array. #### Notes Often it is preferable to use a boolean mask. For example: >>> arr = np.arange(12) + 1 >>> mask = np.ones(len(arr), dtype=bool) >>> mask[[0,2,4]] = False >>> result = arr[mask,...] Is equivalent to `np.delete(arr, [0,2,4], axis=0)`, but allows further use of `mask`. #### Examples >>> import numpy as np >>> arr = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]]) >>> arr array([[ 1, 2, 3, 4], [ 5, 6, 7, 8], [ 9, 10, 11, 12]]) >>> np.delete(arr, 1, 0) array([[ 1, 2, 3, 4], [ 9, 10, 11, 12]]) >>> np.delete(arr, np.s_[::2], 1) array([[ 2, 4], [ 6, 8], [10, 12]]) >>> np.delete(arr, [1,3,5], None) array([ 1, 3, 5, 7, 8, 9, 10, 11, 12]) # numpy.diag numpy.diag(_v_ , _k =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_twodim_base_impl.py#L245-L315) Extract a diagonal or construct a diagonal array. See the more detailed documentation for `numpy.diagonal` if you use this function to extract a diagonal and wish to write to the resulting array; whether it returns a copy or a view depends on what version of numpy you are using. Parameters: **v** array_like If `v` is a 2-D array, return a copy of its `k`-th diagonal. 
If `v` is a 1-D array, return a 2-D array with `v` on the `k`-th diagonal. **k** int, optional Diagonal in question. The default is 0. Use `k>0` for diagonals above the main diagonal, and `k<0` for diagonals below the main diagonal. Returns: **out** ndarray The extracted diagonal or constructed diagonal array. See also [`diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal") Return specified diagonals. [`diagflat`](numpy.diagflat#numpy.diagflat "numpy.diagflat") Create a 2-D array with the flattened input as a diagonal. [`trace`](numpy.trace#numpy.trace "numpy.trace") Sum along diagonals. [`triu`](numpy.triu#numpy.triu "numpy.triu") Upper triangle of an array. [`tril`](numpy.tril#numpy.tril "numpy.tril") Lower triangle of an array. #### Examples >>> import numpy as np >>> x = np.arange(9).reshape((3,3)) >>> x array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> np.diag(x) array([0, 4, 8]) >>> np.diag(x, k=1) array([1, 5]) >>> np.diag(x, k=-1) array([3, 7]) >>> np.diag(np.diag(x)) array([[0, 0, 0], [0, 4, 0], [0, 0, 8]]) # numpy.diag_indices numpy.diag_indices(_n_ , _ndim =2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_index_tricks_impl.py#L947-L1011) Return the indices to access the main diagonal of an array. This returns a tuple of indices that can be used to access the main diagonal of an array `a` with `a.ndim >= 2` dimensions and shape (n, n, …, n). For `a.ndim = 2` this is the usual diagonal, for `a.ndim > 2` this is the set of indices to access `a[i, i, ..., i]` for `i = [0..n-1]`. Parameters: **n** int The size, along each dimension, of the arrays for which the returned indices can be used. **ndim** int, optional The number of dimensions. 
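The returned tuple is equivalent to `ndim` copies of `np.arange(n)`, so for a 3-D array it indexes the elements `a[i, i, i]`. A minimal sketch:

```python
import numpy as np

n, ndim = 3, 3
di = np.diag_indices(n, ndim)

# each component of the tuple is arange(n)
for idx in di:
    assert np.array_equal(idx, np.arange(n))

# indexing with the tuple picks out a[i, i, i]
a = np.arange(n ** ndim).reshape((n,) * ndim)
print(a[di])  # [ 0 13 26]
```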
See also [`diag_indices_from`](numpy.diag_indices_from#numpy.diag_indices_from "numpy.diag_indices_from") #### Examples >>> import numpy as np Create a set of indices to access the diagonal of a (4, 4) array: >>> di = np.diag_indices(4) >>> di (array([0, 1, 2, 3]), array([0, 1, 2, 3])) >>> a = np.arange(16).reshape(4, 4) >>> a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11], [12, 13, 14, 15]]) >>> a[di] = 100 >>> a array([[100, 1, 2, 3], [ 4, 100, 6, 7], [ 8, 9, 100, 11], [ 12, 13, 14, 100]]) Now, we create indices to manipulate a 3-D array: >>> d3 = np.diag_indices(2, 3) >>> d3 (array([0, 1]), array([0, 1]), array([0, 1])) And use it to set the diagonal of an array of zeros to 1: >>> a = np.zeros((2, 2, 2), dtype=int) >>> a[d3] = 1 >>> a array([[[1, 0], [0, 0]], [[0, 0], [0, 1]]]) # numpy.diag_indices_from numpy.diag_indices_from(_arr_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_index_tricks_impl.py#L1018-L1069) Return the indices to access the main diagonal of an n-dimensional array. See [`diag_indices`](numpy.diag_indices#numpy.diag_indices "numpy.diag_indices") for full details. Parameters: **arr** array, at least 2-D See also [`diag_indices`](numpy.diag_indices#numpy.diag_indices "numpy.diag_indices") #### Examples >>> import numpy as np Create a 4 by 4 array. >>> a = np.arange(16).reshape(4, 4) >>> a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11], [12, 13, 14, 15]]) Get the indices of the diagonal elements. >>> di = np.diag_indices_from(a) >>> di (array([0, 1, 2, 3]), array([0, 1, 2, 3])) >>> a[di] array([ 0, 5, 10, 15]) This is simply syntactic sugar for diag_indices. >>> np.diag_indices(a.shape[0]) (array([0, 1, 2, 3]), array([0, 1, 2, 3])) # numpy.diagflat numpy.diagflat(_v_ , _k =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_twodim_base_impl.py#L318-L373) Create a two-dimensional array with the flattened input as a diagonal. 
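Equivalently, `diagflat(v, k)` matches flattening the input first and then calling `np.diag` — a quick check of that equivalence:

```python
import numpy as np

v = np.array([[1, 2], [3, 4]])

# diagflat(v, k) == diag(v.ravel(), k)
assert np.array_equal(np.diagflat(v), np.diag(v.ravel()))
assert np.array_equal(np.diagflat(v, 1), np.diag(v.ravel(), 1))

print(np.diagflat(v).shape)  # (4, 4)
```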
Parameters: **v** array_like Input data, which is flattened and set as the `k`-th diagonal of the output. **k** int, optional Diagonal to set; 0, the default, corresponds to the “main” diagonal, a positive (negative) `k` giving the number of the diagonal above (below) the main. Returns: **out** ndarray The 2-D output array. See also [`diag`](numpy.diag#numpy.diag "numpy.diag") MATLAB work-alike for 1-D and 2-D arrays. [`diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal") Return specified diagonals. [`trace`](numpy.trace#numpy.trace "numpy.trace") Sum along diagonals. #### Examples >>> import numpy as np >>> np.diagflat([[1,2], [3,4]]) array([[1, 0, 0, 0], [0, 2, 0, 0], [0, 0, 3, 0], [0, 0, 0, 4]]) >>> np.diagflat([1,2], 1) array([[0, 1, 0], [0, 0, 2], [0, 0, 0]]) # numpy.diagonal numpy.diagonal(_a_ , _offset =0_, _axis1 =0_, _axis2 =1_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L1695-L1823) Return specified diagonals. If `a` is 2-D, returns the diagonal of `a` with the given offset, i.e., the collection of elements of the form `a[i, i+offset]`. If `a` has more than two dimensions, then the axes specified by `axis1` and `axis2` are used to determine the 2-D sub-array whose diagonal is returned. The shape of the resulting array can be determined by removing `axis1` and `axis2` and appending an index to the right equal to the size of the resulting diagonals. In versions of NumPy prior to 1.7, this function always returned a new, independent array containing a copy of the values in the diagonal. In NumPy 1.7 and 1.8, it continues to return a copy of the diagonal, but depending on this fact is deprecated. Writing to the resulting array continues to work as it used to, but a FutureWarning is issued. Starting in NumPy 1.9 it returns a read-only view on the original array. Attempting to write to the resulting array will produce an error. 
In some future release, it will return a read/write view and writing to the returned array will alter your original array. The returned array will have the same type as the input array. If you don’t write to the array returned by this function, then you can just ignore all of the above. If you depend on the current behavior, then we suggest copying the returned array explicitly, i.e., use `np.diagonal(a).copy()` instead of just `np.diagonal(a)`. This will work with both past and future versions of NumPy. Parameters: **a** array_like Array from which the diagonals are taken. **offset** int, optional Offset of the diagonal from the main diagonal. Can be positive or negative. Defaults to main diagonal (0). **axis1** int, optional Axis to be used as the first axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults to first axis (0). **axis2** int, optional Axis to be used as the second axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults to second axis (1). Returns: **array_of_diagonals** ndarray If `a` is 2-D, then a 1-D array containing the diagonal and of the same type as `a` is returned unless `a` is a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix"), in which case a 1-D array rather than a (2-D) [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") is returned in order to maintain backward compatibility. If `a.ndim > 2`, then the dimensions specified by `axis1` and `axis2` are removed, and a new axis inserted at the end corresponding to the diagonal. Raises: ValueError If the dimension of `a` is less than 2. See also [`diag`](numpy.diag#numpy.diag "numpy.diag") MATLAB work-a-like for 1-D and 2-D arrays. [`diagflat`](numpy.diagflat#numpy.diagflat "numpy.diagflat") Create diagonal arrays. [`trace`](numpy.trace#numpy.trace "numpy.trace") Sum along diagonals. 
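The view behaviour described in the notes can be observed directly: since NumPy 1.9 the result is a read-only view, and the recommended `np.diagonal(a).copy()` gives an independent, writable array:

```python
import numpy as np

a = np.arange(9).reshape(3, 3)
d = np.diagonal(a)

# the result is a read-only view of `a`
assert not d.flags.writeable
try:
    d[0] = 99
except ValueError:
    pass  # writing to the view raises ValueError

# an explicit copy is writable and independent of `a`
dc = np.diagonal(a).copy()
dc[0] = 99
assert a[0, 0] == 0  # the original array is untouched
```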
#### Examples >>> import numpy as np >>> a = np.arange(4).reshape(2,2) >>> a array([[0, 1], [2, 3]]) >>> a.diagonal() array([0, 3]) >>> a.diagonal(1) array([1]) A 3-D example: >>> a = np.arange(8).reshape(2,2,2); a array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) >>> a.diagonal(0, # Main diagonals of two arrays created by skipping ... 0, # across the outer(left)-most axis last and ... 1) # the "middle" (row) axis first. array([[0, 6], [1, 7]]) The sub-arrays whose main diagonals we just obtained; note that each corresponds to fixing the right-most (column) axis, and that the diagonals are “packed” in rows. >>> a[:,:,0] # main diagonal is [0 6] array([[0, 2], [4, 6]]) >>> a[:,:,1] # main diagonal is [1 7] array([[1, 3], [5, 7]]) The anti-diagonal can be obtained by reversing the order of elements using either [`numpy.flipud`](numpy.flipud#numpy.flipud "numpy.flipud") or [`numpy.fliplr`](numpy.fliplr#numpy.fliplr "numpy.fliplr"). >>> a = np.arange(9).reshape(3, 3) >>> a array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> np.fliplr(a).diagonal() # Horizontal flip array([2, 4, 6]) >>> np.flipud(a).diagonal() # Vertical flip array([6, 4, 2]) Note that the order in which the diagonal is retrieved varies depending on the flip function. # numpy.diff numpy.diff(_a_ , _n=1_ , _axis=-1_ , _prepend=<no value>_ , _append=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L1369-L1498) Calculate the n-th discrete difference along the given axis. The first difference is given by `out[i] = a[i+1] - a[i]` along the given axis, higher differences are calculated by using `diff` recursively. Parameters: **a** array_like Input array **n** int, optional The number of times values are differenced. If zero, the input is returned as- is. **axis** int, optional The axis along which the difference is taken, default is the last axis. **prepend, append** array_like, optional Values to prepend or append to `a` along axis prior to performing the difference. 
Scalar values are expanded to arrays with length 1 in the direction of axis and the shape of the input array along all other axes. Otherwise the dimension and shape must match `a` except along axis. Returns: **diff** ndarray The n-th differences. The shape of the output is the same as `a` except along `axis` where the dimension is smaller by `n`. The type of the output is the same as the type of the difference between any two elements of `a`. This is the same as the type of `a` in most cases. A notable exception is [`datetime64`](../arrays.scalars#numpy.datetime64 "numpy.datetime64"), which results in a [`timedelta64`](../arrays.scalars#numpy.timedelta64 "numpy.timedelta64") output array. See also [`gradient`](numpy.gradient#numpy.gradient "numpy.gradient"), [`ediff1d`](numpy.ediff1d#numpy.ediff1d "numpy.ediff1d"), [`cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") #### Notes Type is preserved for boolean arrays, so the result will contain `False` when consecutive elements are the same and `True` when they differ. For unsigned integer arrays, the results will also be unsigned. This should not be surprising, as the result is consistent with calculating the difference directly: >>> u8_arr = np.array([1, 0], dtype=np.uint8) >>> np.diff(u8_arr) array([255], dtype=uint8) >>> u8_arr[1,...] - u8_arr[0,...] 
np.uint8(255) If this is not desirable, then the array should be cast to a larger integer type first: >>> i16_arr = u8_arr.astype(np.int16) >>> np.diff(i16_arr) array([-1], dtype=int16) #### Examples >>> import numpy as np >>> x = np.array([1, 2, 4, 7, 0]) >>> np.diff(x) array([ 1, 2, 3, -7]) >>> np.diff(x, n=2) array([ 1, 1, -10]) >>> x = np.array([[1, 3, 6, 10], [0, 5, 6, 8]]) >>> np.diff(x) array([[2, 3, 4], [5, 1, 2]]) >>> np.diff(x, axis=0) array([[-1, 2, 0, -2]]) >>> x = np.arange('1066-10-13', '1066-10-16', dtype=np.datetime64) >>> np.diff(x) array([1, 1], dtype='timedelta64[D]') # numpy.digitize numpy.digitize(_x_ , _bins_ , _right =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L5718-L5827) Return the indices of the bins to which each value in input array belongs. `right` | order of bins | returned index `i` satisfies ---|---|--- `False` | increasing | `bins[i-1] <= x < bins[i]` `True` | increasing | `bins[i-1] < x <= bins[i]` `False` | decreasing | `bins[i-1] > x >= bins[i]` `True` | decreasing | `bins[i-1] >= x > bins[i]` If values in `x` are beyond the bounds of `bins`, 0 or `len(bins)` is returned as appropriate. Parameters: **x** array_like Input array to be binned. Prior to NumPy 1.10.0, this array had to be 1-dimensional, but can now have any shape. **bins** array_like Array of bins. It has to be 1-dimensional and monotonic. **right** bool, optional Indicating whether the intervals include the right or the left bin edge. Default behavior is (right==False) indicating that the interval does not include the right edge. The left bin end is open in this case, i.e., bins[i-1] <= x < bins[i] is the default behavior for monotonically increasing bins. Returns: **indices** ndarray of ints Output array of indices, of same shape as `x`. Raises: ValueError If `bins` is not monotonic. TypeError If the type of the input is complex. 
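For monotonically increasing bins, `digitize` can be cross-checked against `searchsorted` with the argument order (and therefore the `side`) swapped — a small sketch:

```python
import numpy as np

x = np.array([0.2, 6.4, 3.0, 1.6])
bins = np.array([0.0, 1.0, 2.5, 4.0, 10.0])

# right=True pairs with side='left', and right=False with side='right'
assert np.array_equal(np.digitize(x, bins, right=True),
                      np.searchsorted(bins, x, side='left'))
assert np.array_equal(np.digitize(x, bins, right=False),
                      np.searchsorted(bins, x, side='right'))

print(np.digitize(x, bins))  # [1 4 3 2]
```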
See also [`bincount`](numpy.bincount#numpy.bincount "numpy.bincount"), [`histogram`](numpy.histogram#numpy.histogram "numpy.histogram"), [`unique`](numpy.unique#numpy.unique "numpy.unique"), [`searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") #### Notes If values in `x` are such that they fall outside the bin range, attempting to index `bins` with the indices that `digitize` returns will result in an IndexError. New in version 1.10.0. `numpy.digitize` is implemented in terms of [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted"). This means that a binary search is used to bin the values, which scales much better for larger number of bins than the previous linear search. It also removes the requirement for the input array to be 1-dimensional. For monotonically _increasing_ `bins`, the following are equivalent: np.digitize(x, bins, right=True) np.searchsorted(bins, x, side='left') Note that as the order of the arguments are reversed, the side must be too. The [`searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") call is marginally faster, as it does not do any monotonicity checks. Perhaps more importantly, it supports all dtypes. #### Examples >>> import numpy as np >>> x = np.array([0.2, 6.4, 3.0, 1.6]) >>> bins = np.array([0.0, 1.0, 2.5, 4.0, 10.0]) >>> inds = np.digitize(x, bins) >>> inds array([1, 4, 3, 2]) >>> for n in range(x.size): ... print(bins[inds[n]-1], "<=", x[n], "<", bins[inds[n]]) ... 
0.0 <= 0.2 < 1.0 4.0 <= 6.4 < 10.0 2.5 <= 3.0 < 4.0 1.0 <= 1.6 < 2.5 >>> x = np.array([1.2, 10.0, 12.4, 15.5, 20.]) >>> bins = np.array([0, 5, 10, 15, 20]) >>> np.digitize(x,bins,right=True) array([1, 2, 3, 4, 4]) >>> np.digitize(x,bins,right=False) array([1, 3, 3, 4, 5]) # numpy.distutils.ccompiler.CCompiler_compile distutils.ccompiler.CCompiler_compile(_self_ , _sources_ , _output_dir =None_, _macros =None_, _include_dirs =None_, _debug =0_, _extra_preargs =None_, _extra_postargs =None_, _depends =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler.py#L234-L372) Compile one or more source files. Please refer to the Python distutils API reference for more details. Parameters: **sources** list of str A list of filenames **output_dir** str, optional Path to the output directory. **macros** list of tuples A list of macro definitions. **include_dirs** list of str, optional The directories to add to the default include file search path for this compilation only. **debug** bool, optional Whether or not to output debug symbols in or alongside the object file(s). **extra_preargs, extra_postargs**? Extra pre- and post-arguments. **depends** list of str, optional A list of file names that all targets depend on. Returns: **objects** list of str A list of object file names, one per source file `sources`. Raises: CompileError If compilation fails. # numpy.distutils.ccompiler.CCompiler_customize distutils.ccompiler.CCompiler_customize(_self_ , _dist_ , _need_cxx =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler.py#L479-L558) Do any platform-specific customization of a compiler instance. This method calls `distutils.sysconfig.customize_compiler` for platform- specific customization, as well as optionally remove a flag to suppress spurious warnings in case C++ code is being compiled. Parameters: **dist** object This parameter is not used for anything. 
**need_cxx** bool, optional Whether or not C++ has to be compiled. If so (True), the `"-Wstrict- prototypes"` option is removed to prevent spurious warnings. Default is False. Returns: None #### Notes All the default options used by distutils can be extracted with: from distutils import sysconfig sysconfig.get_config_vars('CC', 'CXX', 'OPT', 'BASECFLAGS', 'CCSHARED', 'LDSHARED', 'SO') # numpy.distutils.ccompiler.CCompiler_customize_cmd distutils.ccompiler.CCompiler_customize_cmd(_self_ , _cmd_ , _ignore =()_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler.py#L376-L428) Customize compiler using distutils command. Parameters: **cmd** class instance An instance inheriting from `distutils.cmd.Command`. **ignore** sequence of str, optional List of `distutils.ccompiler.CCompiler` commands (without `'set_'`) that should not be altered. Strings that are checked for are: `('include_dirs', 'define', 'undef', 'libraries', 'library_dirs', 'rpath', 'link_objects')`. Returns: None # numpy.distutils.ccompiler.CCompiler_cxx_compiler distutils.ccompiler.CCompiler_cxx_compiler(_self_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler.py#L678-L712) Return the C++ compiler. Parameters: **None** Returns: **cxx** class instance The C++ compiler, as a `distutils.ccompiler.CCompiler` instance. # numpy.distutils.ccompiler.CCompiler_find_executables distutils.ccompiler.CCompiler_find_executables(_self_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler.py#L100-L107) Does nothing here, but is called by the get_version method and can be overridden by subclasses. In particular it is redefined in the `FCompiler` class where more documentation can be found. 
# numpy.distutils.ccompiler.CCompiler_get_version distutils.ccompiler.CCompiler_get_version(_self_ , _force =False_, _ok_status =[0]_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler.py#L609-L674) Return compiler version, or None if compiler is not available. Parameters: **force** bool, optional If True, force a new determination of the version, even if the compiler already has a version attribute. Default is False. **ok_status** list of int, optional The list of status values returned by the version look-up process for which a version string is returned. If the status value is not in `ok_status`, None is returned. Default is `[0]`. Returns: **version** str or None Version string, in the format of `distutils.version.LooseVersion`. # numpy.distutils.ccompiler.CCompiler_object_filenames distutils.ccompiler.CCompiler_object_filenames(_self_ , _source_filenames_ , _strip_dir =0_, _output_dir =''_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler.py#L187-L230) Return the name of the object files for the given source files. Parameters: **source_filenames** list of str The list of paths to source files. Paths can be either relative or absolute, this is handled transparently. **strip_dir** bool, optional Whether to strip the directory from the returned paths. If True, the file name prepended by `output_dir` is returned. Default is False. **output_dir** str, optional If given, this path is prepended to the returned paths to the object files. Returns: **obj_names** list of str The list of paths to the object files corresponding to the source files in `source_filenames`. # numpy.distutils.ccompiler.CCompiler_show_customization distutils.ccompiler.CCompiler_show_customization(_self_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler.py#L450-L475) Print the compiler customizations to stdout. 
Parameters: **None**

Returns: None

#### Notes

Printing is only done if the distutils log threshold is < 2.

# numpy.distutils.ccompiler.CCompiler_spawn

distutils.ccompiler.CCompiler_spawn(_self_ , _cmd_ , _display =None_, _env =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler.py#L114-L183)

Execute a command in a sub-process.

Parameters: **cmd** str The command to execute. **display** str or sequence of str, optional The text to add to the log file kept by [`numpy.distutils`](../distutils#module-numpy.distutils "numpy.distutils"). If not given, `display` is equal to `cmd`. **env** dict, optional Environment variables for the sub-process.

Returns: None

Raises: DistutilsExecError If the command failed, i.e. the exit status was not 0.

# numpy.distutils.ccompiler.gen_lib_options

distutils.ccompiler.gen_lib_options(_compiler_ , _library_dirs_ , _runtime_library_dirs_ , _libraries_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler.py#L800-L816)

# numpy.distutils.ccompiler

#### Functions

[`CCompiler_compile`](numpy.distutils.ccompiler.ccompiler_compile#numpy.distutils.ccompiler.CCompiler_compile "numpy.distutils.ccompiler.CCompiler_compile")(self, sources[, ...]) | Compile one or more source files. ---|--- [`CCompiler_customize`](numpy.distutils.ccompiler.ccompiler_customize#numpy.distutils.ccompiler.CCompiler_customize "numpy.distutils.ccompiler.CCompiler_customize")(self, dist[, need_cxx]) | Do any platform-specific customization of a compiler instance. [`CCompiler_customize_cmd`](numpy.distutils.ccompiler.ccompiler_customize_cmd#numpy.distutils.ccompiler.CCompiler_customize_cmd "numpy.distutils.ccompiler.CCompiler_customize_cmd")(self, cmd[, ignore]) | Customize compiler using distutils command.
[`CCompiler_cxx_compiler`](numpy.distutils.ccompiler.ccompiler_cxx_compiler#numpy.distutils.ccompiler.CCompiler_cxx_compiler "numpy.distutils.ccompiler.CCompiler_cxx_compiler")(self) | Return the C++ compiler. [`CCompiler_find_executables`](numpy.distutils.ccompiler.ccompiler_find_executables#numpy.distutils.ccompiler.CCompiler_find_executables "numpy.distutils.ccompiler.CCompiler_find_executables")(self) | Does nothing here, but is called by the get_version method and can be overridden by subclasses. [`CCompiler_get_version`](numpy.distutils.ccompiler.ccompiler_get_version#numpy.distutils.ccompiler.CCompiler_get_version "numpy.distutils.ccompiler.CCompiler_get_version")(self[, force, ok_status]) | Return compiler version, or None if compiler is not available. [`CCompiler_object_filenames`](numpy.distutils.ccompiler.ccompiler_object_filenames#numpy.distutils.ccompiler.CCompiler_object_filenames "numpy.distutils.ccompiler.CCompiler_object_filenames")(self, ...[, ...]) | Return the name of the object files for the given source files. [`CCompiler_show_customization`](numpy.distutils.ccompiler.ccompiler_show_customization#numpy.distutils.ccompiler.CCompiler_show_customization "numpy.distutils.ccompiler.CCompiler_show_customization")(self) | Print the compiler customizations to stdout. [`CCompiler_spawn`](numpy.distutils.ccompiler.ccompiler_spawn#numpy.distutils.ccompiler.CCompiler_spawn "numpy.distutils.ccompiler.CCompiler_spawn")(self, cmd[, display, env]) | Execute a command in a sub-process. [`gen_lib_options`](numpy.distutils.ccompiler.gen_lib_options#numpy.distutils.ccompiler.gen_lib_options "numpy.distutils.ccompiler.gen_lib_options")(compiler, library_dirs, ...) 
| [`new_compiler`](numpy.distutils.ccompiler.new_compiler#numpy.distutils.ccompiler.new_compiler "numpy.distutils.ccompiler.new_compiler")([plat, compiler, verbose, ...]) | [`replace_method`](numpy.distutils.ccompiler.replace_method#numpy.distutils.ccompiler.replace_method "numpy.distutils.ccompiler.replace_method")(klass, method_name, func) | [`simple_version_match`](numpy.distutils.ccompiler.simple_version_match#numpy.distutils.ccompiler.simple_version_match "numpy.distutils.ccompiler.simple_version_match")([pat, ignore, start]) | Simple matching of version numbers, for use in CCompiler and FCompiler. # numpy.distutils.ccompiler.new_compiler distutils.ccompiler.new_compiler(_plat =None_, _compiler =None_, _verbose =None_, _dry_run =0_, _force =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler.py#L753-L795) # numpy.distutils.ccompiler.replace_method distutils.ccompiler.replace_method(_klass_ , _method_name_ , _func_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler.py#L89-L92) # numpy.distutils.ccompiler.simple_version_match distutils.ccompiler.simple_version_match(_pat ='[-.\\\d]+'_, _ignore =''_, _start =''_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler.py#L562-L607) Simple matching of version numbers, for use in CCompiler and FCompiler. Parameters: **pat** str, optional A regular expression matching version numbers. Default is `r'[-.\d]+'`. **ignore** str, optional A regular expression matching patterns to skip. Default is `''`, in which case nothing is skipped. **start** str, optional A regular expression matching the start of where to start looking for version numbers. Default is `''`, in which case searching is started at the beginning of the version string given to `matcher`. Returns: **matcher** callable A function that is appropriate to use as the `.version_match` attribute of a `distutils.ccompiler.CCompiler` class. 
`matcher` takes a single parameter, a version string. # numpy.distutils.ccompiler_opt.CCompilerOpt.cache_flush method distutils.ccompiler_opt.CCompilerOpt.cache_flush()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L857-L883) Force update the cache. # numpy.distutils.ccompiler_opt.CCompilerOpt.cc_normalize_flags method distutils.ccompiler_opt.CCompilerOpt.cc_normalize_flags(_flags_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L1112-L1141) Remove the conflicts that caused due gathering implied features flags. Parameters: **‘flags’ list, compiler flags** flags should be sorted from the lowest to the highest interest. Returns: list, filtered from any conflicts. #### Examples >>> self.cc_normalize_flags(['-march=armv8.2-a+fp16', '-march=armv8.2-a+dotprod']) ['armv8.2-a+fp16+dotprod'] >>> self.cc_normalize_flags( ['-msse', '-msse2', '-msse3', '-mssse3', '-msse4.1', '-msse4.2', '-mavx', '-march=core-avx2'] ) ['-march=core-avx2'] # numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features attribute distutils.ccompiler_opt.CCompilerOpt.conf_features _={'ASIMD': {'implies': 'NEON_FP16 NEON_VFPV4', 'implies_detect': False, 'interest': 4}, 'ASIMDDP': {'implies': 'ASIMD', 'interest': 6}, 'ASIMDFHM': {'implies': 'ASIMDHP', 'interest': 7}, 'ASIMDHP': {'implies': 'ASIMD', 'interest': 5}, 'AVX': {'headers': 'immintrin.h', 'implies': 'SSE42', 'implies_detect': False, 'interest': 8}, 'AVX2': {'implies': 'F16C', 'interest': 13}, 'AVX512CD': {'implies': 'AVX512F', 'interest': 21}, 'AVX512F': {'extra_checks': 'AVX512F_REDUCE', 'implies': 'FMA3 AVX2', 'implies_detect': False, 'interest': 20}, 'AVX512_CLX': {'detect': 'AVX512_CLX', 'group': 'AVX512VNNI', 'implies': 'AVX512_SKX', 'interest': 43}, 'AVX512_CNL': {'detect': 'AVX512_CNL', 'group': 'AVX512IFMA AVX512VBMI', 'implies': 'AVX512_SKX', 'implies_detect': False, 'interest': 44}, 'AVX512_ICL': {'detect': 'AVX512_ICL', 'group': 'AVX512VBMI2 AVX512BITALG 
AVX512VPOPCNTDQ', 'implies': 'AVX512_CLX AVX512_CNL', 'implies_detect': False, 'interest': 45}, 'AVX512_KNL': {'detect': 'AVX512_KNL', 'group': 'AVX512ER AVX512PF', 'implies': 'AVX512CD', 'implies_detect': False, 'interest': 40}, 'AVX512_KNM': {'detect': 'AVX512_KNM', 'group': 'AVX5124FMAPS AVX5124VNNIW AVX512VPOPCNTDQ', 'implies': 'AVX512_KNL', 'implies_detect': False, 'interest': 41}, 'AVX512_SKX': {'detect': 'AVX512_SKX', 'extra_checks': 'AVX512BW_MASK AVX512DQ_MASK', 'group': 'AVX512VL AVX512BW AVX512DQ', 'implies': 'AVX512CD', 'implies_detect': False, 'interest': 42}, 'AVX512_SPR': {'detect': 'AVX512_SPR', 'group': 'AVX512FP16', 'implies': 'AVX512_ICL', 'implies_detect': False, 'interest': 46}, 'F16C': {'implies': 'AVX', 'interest': 11}, 'FMA3': {'implies': 'F16C', 'interest': 12}, 'FMA4': {'headers': 'x86intrin.h', 'implies': 'AVX', 'interest': 10}, 'NEON': {'headers': 'arm_neon.h', 'interest': 1}, 'NEON_FP16': {'implies': 'NEON', 'interest': 2}, 'NEON_VFPV4': {'implies': 'NEON_FP16', 'interest': 3}, 'POPCNT': {'headers': 'popcntintrin.h', 'implies': 'SSE41', 'interest': 6}, 'SSE': {'headers': 'xmmintrin.h', 'implies': 'SSE2', 'interest': 1}, 'SSE2': {'headers': 'emmintrin.h', 'implies': 'SSE', 'interest': 2}, 'SSE3': {'headers': 'pmmintrin.h', 'implies': 'SSE2', 'interest': 3}, 'SSE41': {'headers': 'smmintrin.h', 'implies': 'SSSE3', 'interest': 5}, 'SSE42': {'implies': 'POPCNT', 'interest': 7}, 'SSSE3': {'headers': 'tmmintrin.h', 'implies': 'SSE3', 'interest': 4}, 'VSX': {'extra_checks': 'VSX_ASM', 'headers': 'altivec.h', 'interest': 1}, 'VSX2': {'implies': 'VSX', 'implies_detect': False, 'interest': 2}, 'VSX3': {'extra_checks': 'VSX3_HALF_DOUBLE', 'implies': 'VSX2', 'implies_detect': False, 'interest': 3}, 'VSX4': {'extra_checks': 'VSX4_MMA', 'implies': 'VSX3', 'implies_detect': False, 'interest': 4}, 'VX': {'headers': 'vecintrin.h', 'interest': 1}, 'VXE': {'implies': 'VX', 'implies_detect': False, 'interest': 2}, 'VXE2': {'implies': 'VXE', 
'implies_detect': False, 'interest': 3}, 'XOP': {'headers': 'x86intrin.h', 'implies': 'AVX', 'interest': 9}}_ # numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features_partial method distutils.ccompiler_opt.CCompilerOpt.conf_features_partial()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L330-L569) Return a dictionary of supported CPU features by the platform, and accumulate the rest of undefined options in [`conf_features`](numpy.distutils.ccompiler_opt.ccompileropt.conf_features#numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features "numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features"), the returned dict has same rules and notes in class attribute [`conf_features`](numpy.distutils.ccompiler_opt.ccompileropt.conf_features#numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features "numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features"), also its override any options that been set in ‘conf_features’. # numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_flags method distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_flags()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L2251-L2255) Returns a list of final CPU baseline compiler flags # numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_names method distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_names()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L2257-L2261) return a list of final CPU baseline feature names # numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_dispatch_names method distutils.ccompiler_opt.CCompilerOpt.cpu_dispatch_names()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L2263-L2267) return a list of final CPU dispatch feature names # numpy.distutils.ccompiler_opt.CCompilerOpt.dist_compile method distutils.ccompiler_opt.CCompilerOpt.dist_compile(_sources_ , _flags_ , _ccompiler =None_, _** 
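To illustrate the shape of the `conf_features` mapping shown above, here is a minimal sketch that reads the `'interest'` rank from a small excerpt of it (the excerpt entries are copied from the attribute above; a higher interest means a newer, more capable instruction set):

```python
# Excerpt of conf_features, copied from the attribute shown above.
conf_features = {
    'SSE':  {'headers': 'xmmintrin.h', 'implies': 'SSE2', 'interest': 1},
    'SSE2': {'headers': 'emmintrin.h', 'implies': 'SSE',  'interest': 2},
    'SSE3': {'headers': 'pmmintrin.h', 'implies': 'SSE2', 'interest': 3},
}

# Order features from lowest to highest interest, which is what the
# feature_sorted() method described below does for real feature lists.
ordered = sorted(conf_features, key=lambda name: conf_features[name]['interest'])
print(ordered)  # ['SSE', 'SSE2', 'SSE3']
```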
kwargs_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L606-L614)

Wrap `CCompiler.compile()`.

# numpy.distutils.ccompiler_opt.CCompilerOpt.dist_error

method _static_ distutils.ccompiler_opt.CCompilerOpt.dist_error(_* args_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L679-L683)

Raise a compiler error.

# numpy.distutils.ccompiler_opt.CCompilerOpt.dist_fatal

method _static_ distutils.ccompiler_opt.CCompilerOpt.dist_fatal(_* args_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L685-L689)

Raise a distutils error.

# numpy.distutils.ccompiler_opt.CCompilerOpt.dist_info

method distutils.ccompiler_opt.CCompilerOpt.dist_info()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L642-L677)

Return a tuple containing info about (platform, compiler, extra_args), required by the abstract class '_CCompiler' for discovering the platform environment. This is also used as a cache factor in order to detect changes coming from outside.

# numpy.distutils.ccompiler_opt.CCompilerOpt.dist_load_module

method _static_ distutils.ccompiler_opt.CCompilerOpt.dist_load_module(_name_ , _path_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L701-L709)

Load a module from a file; required by the abstract class '_Cache'.

# numpy.distutils.ccompiler_opt.CCompilerOpt.dist_log

method _static_ distutils.ccompiler_opt.CCompilerOpt.dist_log(_* args_, _stderr =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L691-L699)

Print a console message.

# numpy.distutils.ccompiler_opt.CCompilerOpt.dist_test

method distutils.ccompiler_opt.CCompilerOpt.dist_test(_source_ , _flags_ , _macros =[]_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L616-L640)

Return True if `CCompiler.compile()` is able to compile a source file with certain flags.
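Returning to `simple_version_match` above: a pure-Python sketch of such a matcher, based only on the behaviour documented there (the real NumPy implementation differs in its details):

```python
import re

def simple_version_match(pat=r'[-.\d]+', ignore='', start=''):
    """Return a matcher(version_string) -> version substring or None."""
    def matcher(version_string):
        version_string = version_string.replace('\n', ' ')
        pos = 0
        # If 'start' is given, the version string must begin with it.
        if start:
            m = re.match(start, version_string)
            if not m:
                return None
            pos = m.end()
        # Search for the first match of 'pat' that is not ignored.
        while True:
            m = re.search(pat, version_string[pos:])
            if not m:
                return None
            if ignore and re.match(ignore, m.group(0)):
                pos += m.end()
                continue
            return m.group(0)
    return matcher
```

For example, `simple_version_match()("gcc (GCC) 12.2.0")` yields `"12.2.0"`.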
# numpy.distutils.ccompiler_opt.CCompilerOpt.feature_ahead

method distutils.ccompiler_opt.CCompilerOpt.feature_ahead(_names_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L1405-L1441)

Return the list of features in 'names' after removing any implied features, keeping the origins.

Parameters: **'names': sequence** sequence of CPU feature names in uppercase.

Returns: list of CPU features, sorted in the same order as 'names'

#### Examples

    >>> self.feature_ahead(["SSE2", "SSE3", "SSE41"])
    ["SSE41"]
    # assume AVX2 and FMA3 imply each other and AVX2
    # has the highest interest
    >>> self.feature_ahead(["SSE2", "SSE3", "SSE41", "AVX2", "FMA3"])
    ["AVX2"]
    # assume AVX2 and FMA3 don't imply each other
    >>> self.feature_ahead(["SSE2", "SSE3", "SSE41", "AVX2", "FMA3"])
    ["AVX2", "FMA3"]

# numpy.distutils.ccompiler_opt.CCompilerOpt.feature_c_preprocessor

method distutils.ccompiler_opt.CCompilerOpt.feature_c_preprocessor(_feature_name_ , _tabs =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L1662-L1709)

Generate C preprocessor definitions and include headers of a CPU feature.

Parameters: **'feature_name': str** CPU feature name in uppercase. **'tabs': int** if > 0, indent the generated lines according to the number of tabs.

Returns: str, generated C preprocessor

#### Examples

    >>> self.feature_c_preprocessor("SSE3")
    /** SSE3 **/
    #define NPY_HAVE_SSE3 1
    #include <pmmintrin.h>

# numpy.distutils.ccompiler_opt.CCompilerOpt.feature_detect

method distutils.ccompiler_opt.CCompilerOpt.feature_detect(_names_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L1511-L1521)

Return a list of CPU features that are required to be detected, sorted from the lowest to the highest interest.
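A minimal sketch of what `feature_c_preprocessor` produces, using the SSE3 entry from the `conf_features` attribute shown earlier (simplified; the real method also handles groups and extra checks):

```python
# SSE3 entry copied from the conf_features attribute shown earlier.
conf_features = {'SSE3': {'headers': 'pmmintrin.h', 'implies': 'SSE2', 'interest': 3}}

def feature_c_preprocessor(feature_name, tabs=0):
    # Emit the guard macro and include lines for one CPU feature.
    pad = '\t' * tabs
    lines = [f"/** {feature_name} **/", f"#define NPY_HAVE_{feature_name} 1"]
    lines += [f"#include <{h}>"
              for h in conf_features[feature_name].get('headers', '').split()]
    return '\n'.join(pad + line for line in lines)

print(feature_c_preprocessor('SSE3'))
```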
# numpy.distutils.ccompiler_opt.CCompilerOpt.feature_get_til

method distutils.ccompiler_opt.CCompilerOpt.feature_get_til(_names_ , _keyisfalse_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L1484-L1509)

Same as `feature_implies_c()` but stop collecting implied features when a feature's option named by the parameter 'keyisfalse' is False; the returned features are also sorted.

# numpy.distutils.ccompiler_opt.CCompilerOpt.feature_implies

method distutils.ccompiler_opt.CCompilerOpt.feature_implies(_names_ , _keep_origins =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L1348-L1395)

Return a set of CPU features implied by 'names'.

Parameters: **names** str or sequence of str CPU feature name(s) in uppercase. **keep_origins** bool If False (default), the returned set will not contain any of the features in 'names'. This matters only when two features imply each other.

#### Examples

    >>> self.feature_implies("SSE3")
    {'SSE', 'SSE2'}
    >>> self.feature_implies("SSE2")
    {'SSE'}
    >>> self.feature_implies("SSE2", keep_origins=True)
    # 'SSE2' is included here since 'SSE' and 'SSE2' imply each other
    {'SSE', 'SSE2'}

# numpy.distutils.ccompiler_opt.CCompilerOpt.feature_implies_c

method distutils.ccompiler_opt.CCompilerOpt.feature_implies_c(_names_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L1397-L1403)

Same as `feature_implies()` but with 'names' combined into the returned set.

# numpy.distutils.ccompiler_opt.CCompilerOpt.feature_is_exist

method distutils.ccompiler_opt.CCompilerOpt.feature_is_exist(_name_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L1309-L1320)

Returns True if a certain feature exists and is covered within `_Config.conf_features`.

Parameters: **'name': str** feature name in uppercase.
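The implication chains that `feature_implies` walks can be sketched as a small transitive closure over the `'implies'` entries of `conf_features` (excerpt copied from the attribute above; note that SSE and SSE2 imply each other, which is why `keep_origins` matters):

```python
# 'implies' entries copied from the conf_features attribute shown earlier.
conf_features = {
    'SSE':  {'implies': 'SSE2'},
    'SSE2': {'implies': 'SSE'},
    'SSE3': {'implies': 'SSE2'},
}

def feature_implies(name, keep_origins=False):
    # Transitive closure over the 'implies' chains.
    seen, stack = set(), [name]
    while stack:
        for imp in conf_features[stack.pop()].get('implies', '').split():
            if imp not in seen:
                seen.add(imp)
                stack.append(imp)
    if not keep_origins:
        seen.discard(name)
    return seen

print(feature_implies('SSE3'))  # {'SSE', 'SSE2'} (set order may vary)
```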
# numpy.distutils.ccompiler_opt.CCompilerOpt.feature_names

method distutils.ccompiler_opt.CCompilerOpt.feature_names(_names =None_, _force_flags =None_, _macros =[]_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L1274-L1307)

Returns a set of CPU feature names that are supported by the platform and the **C** compiler.

Parameters: **names** sequence or None, optional Specify certain CPU features to test against the **C** compiler. If None (default), all currently supported features are tested. **Note**: feature names must be in upper-case. **force_flags** list or None, optional If None (default), the default compiler flags for every CPU feature are used during the test. **macros** list of tuples, optional A list of C macro definitions.

# numpy.distutils.ccompiler_opt.CCompilerOpt.feature_sorted

method distutils.ccompiler_opt.CCompilerOpt.feature_sorted(_names_ , _reverse =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L1322-L1346)

Sort a list of CPU features, ordered by the lowest interest first.

Parameters: **'names': sequence** sequence of supported feature names in uppercase. **'reverse': bool, optional** If True, the sort order is reversed (highest interest first).

Returns: list, sorted CPU features

# numpy.distutils.ccompiler_opt.CCompilerOpt.feature_untied

method distutils.ccompiler_opt.CCompilerOpt.feature_untied(_names_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L1443-L1482)

Same as `feature_ahead()` but when two features imply each other, keep only the one with the highest interest.

Parameters: **'names': sequence** sequence of CPU feature names in uppercase.
Returns: list of CPU features, sorted in the same order as 'names'

#### Examples

    >>> self.feature_untied(["SSE2", "SSE3", "SSE41"])
    ["SSE2", "SSE3", "SSE41"]
    # assume AVX2 and FMA3 imply each other
    >>> self.feature_untied(["SSE2", "SSE3", "SSE41", "FMA3", "AVX2"])
    ["SSE2", "SSE3", "SSE41", "AVX2"]

# numpy.distutils.ccompiler_opt.CCompilerOpt.generate_dispatch_header

method distutils.ccompiler_opt.CCompilerOpt.generate_dispatch_header(_header_path_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L2356-L2437)

Generate the dispatch header, which contains the #definitions and headers of platform-specific instruction sets for the enabled CPU baseline and dispatch-able features. It is highly recommended to inspect the generated header, as well as the source files generated via `try_dispatch()`, to get the full picture.

# numpy.distutils.ccompiler_opt.CCompilerOpt

_class_ numpy.distutils.ccompiler_opt.CCompilerOpt(_ccompiler_ , _cpu_baseline ='min'_, _cpu_dispatch ='max'_, _cache_path =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L2213-L2646)

A helper class for `CCompiler` that provides extra build options for effective control of the compiler optimizations that are directly related to CPU features.

Attributes: **conf_cache_factors** **conf_tmp_path**

#### Methods

[`cache_flush`](numpy.distutils.ccompiler_opt.ccompileropt.cache_flush#numpy.distutils.ccompiler_opt.CCompilerOpt.cache_flush "numpy.distutils.ccompiler_opt.CCompilerOpt.cache_flush")() | Force update the cache. ---|--- [`cc_normalize_flags`](numpy.distutils.ccompiler_opt.ccompileropt.cc_normalize_flags#numpy.distutils.ccompiler_opt.CCompilerOpt.cc_normalize_flags "numpy.distutils.ccompiler_opt.CCompilerOpt.cc_normalize_flags")(flags) | Remove conflicts caused by gathering the flags of implied features.
[`conf_features_partial`](numpy.distutils.ccompiler_opt.ccompileropt.conf_features_partial#numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features_partial "numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features_partial")() | Return a dictionary of supported CPU features by the platform, and accumulate the rest of undefined options in [`conf_features`](numpy.distutils.ccompiler_opt.ccompileropt.conf_features#numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features "numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features"), the returned dict has same rules and notes in class attribute [`conf_features`](numpy.distutils.ccompiler_opt.ccompileropt.conf_features#numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features "numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features"), also its override any options that been set in 'conf_features'. [`cpu_baseline_flags`](numpy.distutils.ccompiler_opt.ccompileropt.cpu_baseline_flags#numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_flags "numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_flags")() | Returns a list of final CPU baseline compiler flags [`cpu_baseline_names`](numpy.distutils.ccompiler_opt.ccompileropt.cpu_baseline_names#numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_names "numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_names")() | return a list of final CPU baseline feature names [`cpu_dispatch_names`](numpy.distutils.ccompiler_opt.ccompileropt.cpu_dispatch_names#numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_dispatch_names "numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_dispatch_names")() | return a list of final CPU dispatch feature names [`dist_compile`](numpy.distutils.ccompiler_opt.ccompileropt.dist_compile#numpy.distutils.ccompiler_opt.CCompilerOpt.dist_compile "numpy.distutils.ccompiler_opt.CCompilerOpt.dist_compile")(sources, flags[, ccompiler]) | Wrap CCompiler.compile() 
[`dist_error`](numpy.distutils.ccompiler_opt.ccompileropt.dist_error#numpy.distutils.ccompiler_opt.CCompilerOpt.dist_error "numpy.distutils.ccompiler_opt.CCompilerOpt.dist_error")(*args) | Raise a compiler error [`dist_fatal`](numpy.distutils.ccompiler_opt.ccompileropt.dist_fatal#numpy.distutils.ccompiler_opt.CCompilerOpt.dist_fatal "numpy.distutils.ccompiler_opt.CCompilerOpt.dist_fatal")(*args) | Raise a distutils error [`dist_info`](numpy.distutils.ccompiler_opt.ccompileropt.dist_info#numpy.distutils.ccompiler_opt.CCompilerOpt.dist_info "numpy.distutils.ccompiler_opt.CCompilerOpt.dist_info")() | Return a tuple containing info about (platform, compiler, extra_args), required by the abstract class '_CCompiler' for discovering the platform environment. [`dist_load_module`](numpy.distutils.ccompiler_opt.ccompileropt.dist_load_module#numpy.distutils.ccompiler_opt.CCompilerOpt.dist_load_module "numpy.distutils.ccompiler_opt.CCompilerOpt.dist_load_module")(name, path) | Load a module from file, required by the abstract class '_Cache'. [`dist_log`](numpy.distutils.ccompiler_opt.ccompileropt.dist_log#numpy.distutils.ccompiler_opt.CCompilerOpt.dist_log "numpy.distutils.ccompiler_opt.CCompilerOpt.dist_log")(*args[, stderr]) | Print a console message [`dist_test`](numpy.distutils.ccompiler_opt.ccompileropt.dist_test#numpy.distutils.ccompiler_opt.CCompilerOpt.dist_test "numpy.distutils.ccompiler_opt.CCompilerOpt.dist_test")(source, flags[, macros]) | Return True if 'CCompiler.compile()' able to compile a source file with certain flags. [`feature_ahead`](numpy.distutils.ccompiler_opt.ccompileropt.feature_ahead#numpy.distutils.ccompiler_opt.CCompilerOpt.feature_ahead "numpy.distutils.ccompiler_opt.CCompilerOpt.feature_ahead")(names) | Return list of features in 'names' after remove any implied features and keep the origins. 
[`feature_c_preprocessor`](numpy.distutils.ccompiler_opt.ccompileropt.feature_c_preprocessor#numpy.distutils.ccompiler_opt.CCompilerOpt.feature_c_preprocessor "numpy.distutils.ccompiler_opt.CCompilerOpt.feature_c_preprocessor")(feature_name[, tabs]) | Generate C preprocessor definitions and include headers of a CPU feature. [`feature_detect`](numpy.distutils.ccompiler_opt.ccompileropt.feature_detect#numpy.distutils.ccompiler_opt.CCompilerOpt.feature_detect "numpy.distutils.ccompiler_opt.CCompilerOpt.feature_detect")(names) | Return a list of CPU features that required to be detected sorted from the lowest to highest interest. [`feature_get_til`](numpy.distutils.ccompiler_opt.ccompileropt.feature_get_til#numpy.distutils.ccompiler_opt.CCompilerOpt.feature_get_til "numpy.distutils.ccompiler_opt.CCompilerOpt.feature_get_til")(names, keyisfalse) | same as `feature_implies_c()` but stop collecting implied features when feature's option that provided through parameter 'keyisfalse' is False, also sorting the returned features. [`feature_implies`](numpy.distutils.ccompiler_opt.ccompileropt.feature_implies#numpy.distutils.ccompiler_opt.CCompilerOpt.feature_implies "numpy.distutils.ccompiler_opt.CCompilerOpt.feature_implies")(names[, keep_origins]) | Return a set of CPU features that implied by 'names' [`feature_implies_c`](numpy.distutils.ccompiler_opt.ccompileropt.feature_implies_c#numpy.distutils.ccompiler_opt.CCompilerOpt.feature_implies_c "numpy.distutils.ccompiler_opt.CCompilerOpt.feature_implies_c")(names) | same as feature_implies() but combining 'names' [`feature_is_exist`](numpy.distutils.ccompiler_opt.ccompileropt.feature_is_exist#numpy.distutils.ccompiler_opt.CCompilerOpt.feature_is_exist "numpy.distutils.ccompiler_opt.CCompilerOpt.feature_is_exist")(name) | Returns True if a certain feature is exist and covered within `_Config.conf_features`. 
[`feature_names`](numpy.distutils.ccompiler_opt.ccompileropt.feature_names#numpy.distutils.ccompiler_opt.CCompilerOpt.feature_names "numpy.distutils.ccompiler_opt.CCompilerOpt.feature_names")([names, force_flags, macros]) | Returns a set of CPU feature names that supported by platform and the **C** compiler. [`feature_sorted`](numpy.distutils.ccompiler_opt.ccompileropt.feature_sorted#numpy.distutils.ccompiler_opt.CCompilerOpt.feature_sorted "numpy.distutils.ccompiler_opt.CCompilerOpt.feature_sorted")(names[, reverse]) | Sort a list of CPU features ordered by the lowest interest. [`feature_untied`](numpy.distutils.ccompiler_opt.ccompileropt.feature_untied#numpy.distutils.ccompiler_opt.CCompilerOpt.feature_untied "numpy.distutils.ccompiler_opt.CCompilerOpt.feature_untied")(names) | same as 'feature_ahead()' but if both features implied each other and keep the highest interest. [`generate_dispatch_header`](numpy.distutils.ccompiler_opt.ccompileropt.generate_dispatch_header#numpy.distutils.ccompiler_opt.CCompilerOpt.generate_dispatch_header "numpy.distutils.ccompiler_opt.CCompilerOpt.generate_dispatch_header")(header_path) | Generate the dispatch header which contains the #definitions and headers for platform-specific instruction-sets for the enabled CPU baseline and dispatch-able features. [`is_cached`](numpy.distutils.ccompiler_opt.ccompileropt.is_cached#numpy.distutils.ccompiler_opt.CCompilerOpt.is_cached "numpy.distutils.ccompiler_opt.CCompilerOpt.is_cached")() | Returns True if the class loaded from the cache file [`me`](numpy.distutils.ccompiler_opt.ccompileropt.me#numpy.distutils.ccompiler_opt.CCompilerOpt.me "numpy.distutils.ccompiler_opt.CCompilerOpt.me")(cb) | A static method that can be treated as a decorator to dynamically cache certain methods. 
[`parse_targets`](numpy.distutils.ccompiler_opt.ccompileropt.parse_targets#numpy.distutils.ccompiler_opt.CCompilerOpt.parse_targets "numpy.distutils.ccompiler_opt.CCompilerOpt.parse_targets")(source) | Fetch and parse configuration statements that required for defining the targeted CPU features, statements should be declared in the top of source in between **C** comment and start with a special mark **@targets**. [`try_dispatch`](numpy.distutils.ccompiler_opt.ccompileropt.try_dispatch#numpy.distutils.ccompiler_opt.CCompilerOpt.try_dispatch "numpy.distutils.ccompiler_opt.CCompilerOpt.try_dispatch")(sources[, src_dir, ccompiler]) | Compile one or more dispatch-able sources and generates object files, also generates abstract C config headers and macros that used later for the final runtime dispatching process. **cache_hash** | ---|--- **cc_test_cexpr** | **cc_test_flags** | **feature_can_autovec** | **feature_extra_checks** | **feature_flags** | **feature_is_supported** | **feature_test** | **report** | # numpy.distutils.ccompiler_opt.CCompilerOpt.is_cached method distutils.ccompiler_opt.CCompilerOpt.is_cached()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L2245-L2249) Returns True if the class loaded from the cache file # numpy.distutils.ccompiler_opt.CCompilerOpt.me method _static_ distutils.ccompiler_opt.CCompilerOpt.me(_cb_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L895-L911) A static method that can be treated as a decorator to dynamically cache certain methods. 
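The `me` decorator above caches method results per instance. A minimal sketch of such a per-instance memoization decorator (the actual implementation differs; the cache key used here is a hypothetical choice):

```python
def me(cb):
    # Decorator: cache the method's return value per instance and arguments.
    def cache_me(self, *args, **kwargs):
        cache = self.__dict__.setdefault('_me_cache', {})
        key = (cb.__name__, args, tuple(sorted(kwargs.items())))
        if key not in cache:
            cache[key] = cb(self, *args, **kwargs)
        return cache[key]
    return cache_me

class Demo:
    calls = 0
    @me
    def twice(self, x):
        Demo.calls += 1
        return 2 * x
```

Calling `Demo().twice(3)` twice on the same instance runs the method body only once.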
# numpy.distutils.ccompiler_opt.CCompilerOpt.parse_targets

method distutils.ccompiler_opt.CCompilerOpt.parse_targets(_source_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L1840-L1892)

Fetch and parse the configuration statements required to define the targeted CPU features. The statements must be declared at the top of the source file, inside a **C** comment, and start with the special mark **@targets**. Configuration statements are keywords representing CPU feature names, groups of statements, and policies, combined together to determine the required optimization.

Parameters: **source** str the path of the **C** source file.

Returns: * bool, True if the group has the 'baseline' option * list, list of CPU features * list, list of extra compiler flags

# numpy.distutils.ccompiler_opt.CCompilerOpt.try_dispatch

method distutils.ccompiler_opt.CCompilerOpt.try_dispatch(_sources_ , _src_dir =None_, _ccompiler =None_, _** kwargs_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L2269-L2354)

Compile one or more dispatch-able sources and generate object files, also generating the abstract C config headers and macros used later for the final runtime dispatching process. The mechanism behind it is to take each source file specified in 'sources' and branch it into several files, depending on the special configuration statements (which name the targeted CPU features and must be declared at the top of each source), then compile every branched source with the proper compiler flags.

Parameters: **sources** list Must be a list of dispatch-able source file paths, and configuration statements must be declared inside each file. **src_dir** str Path of the parent directory for the generated headers and wrapped sources. If None (default), the files are generated in-place. **ccompiler** CCompiler Distutils `CCompiler` instance to be used for compilation.
If None (default), the instance provided during initialization is used instead. ****kwargs** any Arguments to pass on to `CCompiler.compile()`.

Returns: **list** generated object files

Raises: CompileError Raised by `CCompiler.compile()` on compilation failure. DistutilsError Raised when the sanity checks of the configuration statements fail.

See also

[`parse_targets`](numpy.distutils.ccompiler_opt.ccompileropt.parse_targets#numpy.distutils.ccompiler_opt.CCompilerOpt.parse_targets "numpy.distutils.ccompiler_opt.CCompilerOpt.parse_targets") Parsing the configuration statements of dispatch-able sources.

# numpy.distutils.ccompiler_opt

Provides the [`CCompilerOpt`](numpy.distutils.ccompiler_opt.ccompileropt#numpy.distutils.ccompiler_opt.CCompilerOpt "numpy.distutils.ccompiler_opt.CCompilerOpt") class, used for handling CPU/hardware optimization: parsing the command arguments, managing the relation between the CPU baseline and the dispatch-able features, generating the required C headers, and finally compiling the sources with the proper compiler flags. [`CCompilerOpt`](numpy.distutils.ccompiler_opt.ccompileropt#numpy.distutils.ccompiler_opt.CCompilerOpt "numpy.distutils.ccompiler_opt.CCompilerOpt") doesn't provide runtime detection of CPU features; it focuses only on the compiler side, but it creates abstract C headers that can be used later for the final runtime dispatching process.

#### Functions

[`new_ccompiler_opt`](numpy.distutils.ccompiler_opt.new_ccompiler_opt#numpy.distutils.ccompiler_opt.new_ccompiler_opt "numpy.distutils.ccompiler_opt.new_ccompiler_opt")(compiler, dispatch_hpath, ...) | Create a new instance of 'CCompilerOpt' and generate the dispatch header, which contains the #definitions and headers of platform-specific instruction sets for the enabled CPU baseline and dispatch-able features.
---|--- #### Classes [`CCompilerOpt`](numpy.distutils.ccompiler_opt.ccompileropt#numpy.distutils.ccompiler_opt.CCompilerOpt "numpy.distutils.ccompiler_opt.CCompilerOpt")(ccompiler[, cpu_baseline, ...]) | A helper class for `CCompiler` that provides extra build options to effectively control compiler optimizations directly related to CPU features. ---|--- # numpy.distutils.ccompiler_opt.new_ccompiler_opt distutils.ccompiler_opt.new_ccompiler_opt(_compiler_ , _dispatch_hpath_ , _** kwargs_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/ccompiler_opt.py#L2648-L2668) Create a new instance of 'CCompilerOpt' and generate the dispatch header, which contains the #definitions and headers of platform-specific instruction sets for the enabled CPU baseline and dispatch-able features. Parameters: **compiler** CCompiler instance **dispatch_hpath** str Path of the dispatch header. ****kwargs** Passed as-is to `CCompilerOpt(…)`. Returns: New instance of `CCompilerOpt`. # numpy.distutils.core.Extension _class_ numpy.distutils.core.Extension(_name_ , _sources_ , _include_dirs =None_, _define_macros =None_, _undef_macros =None_, _library_dirs =None_, _libraries =None_, _runtime_library_dirs =None_, _extra_objects =None_, _extra_compile_args =None_, _extra_link_args =None_, _export_symbols =None_, _swig_opts =None_, _depends =None_, _language =None_, _f2py_options =None_, _module_dirs =None_, _extra_c_compile_args =None_, _extra_cxx_compile_args =None_, _extra_f77_compile_args =None_, _extra_f90_compile_args =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/extension.py#L17-L99) Parameters: **name** str Extension name. **sources** list of str List of source file locations relative to the top directory of the package. **extra_compile_args** list of str Extra command line arguments to pass to the compiler. **extra_f77_compile_args** list of str Extra command line arguments to pass to the Fortran 77 compiler.
**extra_f90_compile_args** list of str Extra command line arguments to pass to the Fortran 90 compiler. #### Methods **has_cxx_sources** | ---|--- **has_f2py_sources** | # numpy.distutils.cpuinfo.cpu distutils.cpuinfo.cpu _= _ # numpy.distutils.exec_command.exec_command distutils.exec_command.exec_command(_command_ , _execute_in =''_, _use_shell =None_, _use_tee =None_, __with_python =1_, _** env_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/exec_command.py#L177-L250) Return (status, output) of the executed command. Deprecated since version 1.17: Use subprocess.Popen instead. Parameters: **command** str A concatenated string of the executable and its arguments. **execute_in** str Before running the command, `cd execute_in`; afterwards, `cd -`. **use_shell**{bool, None}, optional If True, execute `sh -c command`. Default None (True). **use_tee**{bool, None}, optional If True, use tee. Default None (True). Returns: **res** str Both stdout and stderr messages. #### Notes On NT and DOS systems the returned status is correct for external commands. Wild cards will not work for non-posix systems or when use_shell=0. # numpy.distutils.exec_command.filepath_from_subprocess_output distutils.exec_command.filepath_from_subprocess_output(_output_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/exec_command.py#L63-L77) Convert `bytes` in the encoding used by a subprocess into a filesystem-appropriate `str`. Inherited from [`exec_command`](numpy.distutils.exec_command.exec_command#numpy.distutils.exec_command.exec_command "numpy.distutils.exec_command.exec_command"), and possibly incorrect. # numpy.distutils.exec_command.find_executable distutils.exec_command.find_executable(_exe_ , _path =None_, __cache ={}_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/exec_command.py#L116-L163) Return the full path of an executable, or None. Symbolic links are not followed.
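Since `exec_command` and `find_executable` are deprecated along with the rest of `numpy.distutils`, a minimal modern equivalent using only the standard library may be useful for migration. This is a sketch, not part of NumPy; the command being run is purely illustrative:

```python
import shutil
import subprocess
import sys

# Modern stand-in for find_executable: locate a program on PATH.
python_path = shutil.which("python3") or sys.executable

# Modern stand-in for exec_command: run the program and collect
# a (status, output) pair from the completed process.
result = subprocess.run(
    [python_path, "-c", "print('hello')"],
    capture_output=True,
    text=True,
)
status, output = result.returncode, result.stdout.strip()
```

Unlike `exec_command`, `subprocess.run` keeps stdout and stderr separate (`result.stdout` / `result.stderr`) unless you merge them explicitly.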
# numpy.distutils.exec_command.forward_bytes_to_stdout distutils.exec_command.forward_bytes_to_stdout(_val_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/exec_command.py#L80-L96) Forward bytes from a subprocess call to the console, without attempting to decode them. The assumption is that the subprocess call already returned bytes in a suitable encoding. # numpy.distutils.exec_command.get_pythonexe distutils.exec_command.get_pythonexe()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/exec_command.py#L107-L114) # numpy.distutils.exec_command exec_command Implements the exec_command function, which is (almost) equivalent to commands.getstatusoutput, except that on NT and DOS systems the returned status is actually correct (though the returned status values may differ by a factor). In addition, exec_command takes keyword arguments for (re-)defining environment variables. Provides functions: exec_command — execute a command in a specified directory and in the modified environment. find_executable — locate a command using info from the environment variable PATH. Equivalent to the posix `which` command. Author: Pearu Peterson <pearu@cens.ioc.ee> Created: 11 January 2003 Requires: Python 2.x Successfully tested on: os.name | sys.platform | comments ---|---|--- posix | linux2 | Debian (sid) Linux, Python 2.1.3+, 2.2.3+, 2.3.3 PyCrust 0.9.3, Idle 1.0.2 posix | linux2 | Red Hat 9 Linux, Python 2.1.3, 2.2.2, 2.3.2 posix | sunos5 | SunOS 5.9, Python 2.2, 2.3.2 posix | darwin | Darwin 7.2.0, Python 2.3 nt | win32 | Windows Me Python 2.3(EE), Idle 1.0, PyCrust 0.7.2 Python 2.1.1 Idle 0.8 nt | win32 | Windows 98, Python 2.1.1. Idle 0.8 nt | win32 | Cygwin 98-4.10, Python 2.1.1(MSC) - echo tests fail i.e. redefining environment variables may not work. FIXED: don't use cygwin echo!
Comment: also `cmd /c echo` will not work but redefining environment variables do work. posix | cygwin | Cygwin 98-4.10, Python 2.3.3(cygming special) nt | win32 | Windows XP, Python 2.3.3 Known bugs: * Tests, that send messages to stderr, fail when executed from MSYS prompt because the messages are lost at some point. #### Functions [`exec_command`](numpy.distutils.exec_command.exec_command#numpy.distutils.exec_command.exec_command "numpy.distutils.exec_command.exec_command")(command[, execute_in, ...]) | Return (status,output) of executed command. ---|--- [`filepath_from_subprocess_output`](numpy.distutils.exec_command.filepath_from_subprocess_output#numpy.distutils.exec_command.filepath_from_subprocess_output "numpy.distutils.exec_command.filepath_from_subprocess_output")(output) | Convert `bytes` in the encoding used by a subprocess into a filesystem-appropriate `str`. [`find_executable`](numpy.distutils.exec_command.find_executable#numpy.distutils.exec_command.find_executable "numpy.distutils.exec_command.find_executable")(exe[, path, _cache]) | Return full path of a executable or None. [`forward_bytes_to_stdout`](numpy.distutils.exec_command.forward_bytes_to_stdout#numpy.distutils.exec_command.forward_bytes_to_stdout "numpy.distutils.exec_command.forward_bytes_to_stdout")(val) | Forward bytes from a subprocess call to the console, without attempting to decode them. 
[`get_pythonexe`](numpy.distutils.exec_command.get_pythonexe#numpy.distutils.exec_command.get_pythonexe "numpy.distutils.exec_command.get_pythonexe")() | [`temp_file_name`](numpy.distutils.exec_command.temp_file_name#numpy.distutils.exec_command.temp_file_name "numpy.distutils.exec_command.temp_file_name")() | # numpy.distutils.exec_command.temp_file_name distutils.exec_command.temp_file_name()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/exec_command.py#L99-L105) # numpy.distutils.log.set_verbosity distutils.log.set_verbosity(_v_ , _force =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/log.py#L67-L77) # numpy.distutils.system_info.get_info distutils.system_info.get_info(_name_ , _notfound_action =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/system_info.py#L500-L592) notfound_action: 0 - do nothing 1 - display warning message 2 - raise error # numpy.distutils.system_info.get_standard_file distutils.system_info.get_standard_file(_fname_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/distutils/system_info.py#L381-L413) Returns a list of files named 'fname' from 1) the system-wide directory (the directory location of this module), 2) the user's HOME directory (os.environ['HOME']), and 3) the local directory. # numpy.divide numpy.divide(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'divide'>_ Divide arguments element-wise. Parameters: **x1** array_like Dividend array. **x2** array_like Divisor array. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned.
A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray or scalar The quotient `x1/x2`, element-wise. This is a scalar if both `x1` and `x2` are scalars. See also [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr") Set whether to raise or warn on overflow, underflow and division by zero. #### Notes Equivalent to `x1` / `x2` in terms of array-broadcasting. The `true_divide(x1, x2)` function is an alias for `divide(x1, x2)`. #### Examples >>> import numpy as np >>> np.divide(2.0, 4.0) 0.5 >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = np.arange(3.0) >>> np.divide(x1, x2) array([[nan, 1. , 1. ], [inf, 4. , 2.5], [inf, 7. , 4. ]]) The `/` operator can be used as a shorthand for `np.divide` on ndarrays. >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = 2 * np.ones(3) >>> x1 / x2 array([[0. , 0.5, 1. ], [1.5, 2. , 2.5], [3. , 3.5, 4. ]]) # numpy.divmod numpy.divmod(_x1_ , _x2_ , [_out1_ , _out2_ , ]_/_ , [_out=(None_ , _None)_ , ]_*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'divmod'>_ Return element-wise quotient and remainder simultaneously. `np.divmod(x, y)` is equivalent to `(x // y, x % y)`, but faster because it avoids redundant work. It is used to implement the Python built-in function `divmod` on NumPy arrays. Parameters: **x1** array_like Dividend array. **x2** array_like Divisor array.
If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out1** ndarray Element-wise quotient resulting from floor division. This is a scalar if both `x1` and `x2` are scalars. **out2** ndarray Element-wise remainder from floor division. This is a scalar if both `x1` and `x2` are scalars. See also [`floor_divide`](numpy.floor_divide#numpy.floor_divide "numpy.floor_divide") Equivalent to Python’s `//` operator. [`remainder`](numpy.remainder#numpy.remainder "numpy.remainder") Equivalent to Python’s `%` operator. [`modf`](numpy.modf#numpy.modf "numpy.modf") Equivalent to `divmod(x, 1)` for positive `x` with the return values switched. #### Examples >>> import numpy as np >>> np.divmod(np.arange(5), 3) (array([0, 0, 0, 1, 1]), array([0, 1, 2, 0, 1])) The `divmod` function can be used as a shorthand for `np.divmod` on ndarrays. >>> x = np.arange(5) >>> divmod(x, 3) (array([0, 0, 0, 1, 1]), array([0, 1, 2, 0, 1])) # numpy.dot numpy.dot(_a_ , _b_ , _out =None_) Dot product of two arrays. Specifically, * If both `a` and `b` are 1-D arrays, it is inner product of vectors (without complex conjugation). 
* If both `a` and `b` are 2-D arrays, it is matrix multiplication, but using [`matmul`](numpy.matmul#numpy.matmul "numpy.matmul") or `a @ b` is preferred. * If either `a` or `b` is 0-D (scalar), it is equivalent to [`multiply`](numpy.multiply#numpy.multiply "numpy.multiply") and using `numpy.multiply(a, b)` or `a * b` is preferred. * If `a` is an N-D array and `b` is a 1-D array, it is a sum product over the last axis of `a` and `b`. * If `a` is an N-D array and `b` is an M-D array (where `M>=2`), it is a sum product over the last axis of `a` and the second-to-last axis of `b`: dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m]) It uses an optimized BLAS library when possible (see [`numpy.linalg`](../routines.linalg#module-numpy.linalg "numpy.linalg")). Parameters: **a** array_like First argument. **b** array_like Second argument. **out** ndarray, optional Output argument. This must have the exact kind that would be returned if it was not used. In particular, it must have the right type, must be C-contiguous, and its dtype must be the dtype that would be returned for `dot(a,b)`. This is a performance feature. Therefore, if these conditions are not met, an exception is raised, instead of attempting to be flexible. Returns: **output** ndarray Returns the dot product of `a` and `b`. If `a` and `b` are both scalars or both 1-D arrays then a scalar is returned; otherwise an array is returned. If `out` is given, then it is returned. Raises: ValueError If the last dimension of `a` is not the same size as the second-to-last dimension of `b`. See also [`vdot`](numpy.vdot#numpy.vdot "numpy.vdot") Complex-conjugating dot product. [`vecdot`](numpy.vecdot#numpy.vecdot "numpy.vecdot") Vector dot product of two arrays. [`tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot") Sum products over arbitrary axes. [`einsum`](numpy.einsum#numpy.einsum "numpy.einsum") Einstein summation convention. 
[`matmul`](numpy.matmul#numpy.matmul "numpy.matmul") '@' operator as method with out parameter. [`linalg.multi_dot`](numpy.linalg.multi_dot#numpy.linalg.multi_dot "numpy.linalg.multi_dot") Chained dot product. #### Examples >>> import numpy as np >>> np.dot(3, 4) 12 Neither argument is complex-conjugated: >>> np.dot([2j, 3j], [2j, 3j]) (-13+0j) For 2-D arrays it is the matrix product: >>> a = [[1, 0], [0, 1]] >>> b = [[4, 1], [2, 2]] >>> np.dot(a, b) array([[4, 1], [2, 2]]) >>> a = np.arange(3*4*5*6).reshape((3,4,5,6)) >>> b = np.arange(3*4*5*6)[::-1].reshape((5,4,6,3)) >>> np.dot(a, b)[2,3,2,1,2,2] 499128 >>> sum(a[2,3,2,:] * b[1,2,:,2]) 499128 # numpy.dsplit numpy.dsplit(_ary_ , _indices_or_sections_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_shape_base_impl.py#L1011-L1054) Split array into multiple sub-arrays along the 3rd axis (depth). Please refer to the [`split`](numpy.split#numpy.split "numpy.split") documentation. `dsplit` is equivalent to [`split`](numpy.split#numpy.split "numpy.split") with `axis=2`; the array is always split along the third axis, provided the array dimension is greater than or equal to 3. See also [`split`](numpy.split#numpy.split "numpy.split") Split an array into multiple sub-arrays of equal size. #### Examples >>> import numpy as np >>> x = np.arange(16.0).reshape(2, 2, 4) >>> x array([[[ 0., 1., 2., 3.], [ 4., 5., 6., 7.]], [[ 8., 9., 10., 11.], [12., 13., 14., 15.]]]) >>> np.dsplit(x, 2) [array([[[ 0., 1.], [ 4., 5.]], [[ 8., 9.], [12., 13.]]]), array([[[ 2., 3.], [ 6., 7.]], [[10., 11.], [14., 15.]]])] >>> np.dsplit(x, np.array([3, 6])) [array([[[ 0., 1., 2.], [ 4., 5., 6.]], [[ 8., 9., 10.], [12., 13., 14.]]]), array([[[ 3.], [ 7.]], [[11.], [15.]]]), array([], shape=(2, 2, 0), dtype=float64)] # numpy.dstack numpy.dstack(_tup_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_shape_base_impl.py#L669-L726) Stack arrays in sequence depth wise (along third axis).
This is equivalent to concatenation along the third axis after 2-D arrays of shape `(M,N)` have been reshaped to `(M,N,1)` and 1-D arrays of shape `(N,)` have been reshaped to `(1,N,1)`. Rebuilds arrays divided by [`dsplit`](numpy.dsplit#numpy.dsplit "numpy.dsplit"). This function makes most sense for arrays with up to 3 dimensions. For instance, for pixel-data with a height (first axis), width (second axis), and r/g/b channels (third axis). The functions [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate"), [`stack`](numpy.stack#numpy.stack "numpy.stack") and [`block`](numpy.block#numpy.block "numpy.block") provide more general stacking and concatenation operations. Parameters: **tup** sequence of arrays The arrays must have the same shape along all but the third axis. 1-D or 2-D arrays must have the same shape. Returns: **stacked** ndarray The array formed by stacking the given arrays, will be at least 3-D. See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`stack`](numpy.stack#numpy.stack "numpy.stack") Join a sequence of arrays along a new axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble an nd-array from nested lists of blocks. [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack") Stack arrays in sequence vertically (row wise). [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack") Stack arrays in sequence horizontally (column wise). [`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. [`dsplit`](numpy.dsplit#numpy.dsplit "numpy.dsplit") Split array along third axis. 
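The pixel-data use case mentioned above can be sketched as follows; the channel arrays and sizes here are purely illustrative:

```python
import numpy as np

# Three separate (height, width) channel planes.
h, w = 2, 3
r = np.zeros((h, w))
g = np.ones((h, w))
b = np.full((h, w), 2.0)

# Stacked depth-wise into a single (height, width, 3) image.
img = np.dstack((r, g, b))
assert img.shape == (h, w, 3)

# dsplit reverses the operation, yielding (height, width, 1) slices.
channels = np.dsplit(img, 3)
assert channels[0].shape == (h, w, 1)
```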
#### Examples >>> import numpy as np >>> a = np.array((1,2,3)) >>> b = np.array((2,3,4)) >>> np.dstack((a,b)) array([[[1, 2], [2, 3], [3, 4]]]) >>> a = np.array([[1],[2],[3]]) >>> b = np.array([[2],[3],[4]]) >>> np.dstack((a,b)) array([[[1, 2]], [[2, 3]], [[3, 4]]]) # numpy.dtype.__class_getitem__ method dtype.__class_getitem__(_item_ , _/_) Return a parametrized wrapper around the [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") type. New in version 1.22. Returns: **alias** types.GenericAlias A parametrized [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") type. See also [**PEP 585**](https://peps.python.org/pep-0585/) Type hinting generics in standard collections. #### Examples >>> import numpy as np >>> np.dtype[np.int64] numpy.dtype[numpy.int64] # numpy.dtype.__ge__ method dtype.__ge__(_value_ , _/_) Return self>=value. # numpy.dtype.__gt__ method dtype.__gt__(_value_ , _/_) Return self>value. # numpy.dtype.__le__ method dtype.__le__(_value_ , _/_) Return self<=value. # numpy.dtype.__lt__ method dtype.__lt__(_value_ , _/_) Return self<value. # numpy.dtype.alignment attribute dtype.alignment The required alignment (bytes) of this data-type according to the compiler. More information is available in the C-API section of the manual. #### Examples >>> import numpy as np >>> x = np.dtype('i4') >>> x.alignment 4 >>> x = np.dtype(float) >>> x.alignment 8 # numpy.dtype.base attribute dtype.base Returns dtype for the base element of the subarrays, regardless of their dimension or shape. See also [`dtype.subdtype`](numpy.dtype.subdtype#numpy.dtype.subdtype "numpy.dtype.subdtype") #### Examples >>> import numpy as np >>> x = np.dtype('8f') >>> x.base dtype('float32') >>> x = np.dtype('i2') >>> x.base dtype('int16') # numpy.dtype.byteorder attribute dtype.byteorder A character indicating the byte-order of this data-type object. One of: '=' | native ---|--- '<' | little-endian '>' | big-endian '|' | not applicable All built-in data-type objects have byteorder either '=' or '|'.
#### Examples >>> import numpy as np >>> dt = np.dtype('i2') >>> dt.byteorder '=' >>> # endian is not relevant for 8 bit numbers >>> np.dtype('i1').byteorder '|' >>> # or ASCII strings >>> np.dtype('S2').byteorder '|' >>> # Even if specific code is given, and it is native >>> # '=' is the byteorder >>> import sys >>> sys_is_le = sys.byteorder == 'little' >>> native_code = '<' if sys_is_le else '>' >>> swapped_code = '>' if sys_is_le else '<' >>> dt = np.dtype(native_code + 'i2') >>> dt.byteorder '=' >>> # Swapped code shows up as itself >>> dt = np.dtype(swapped_code + 'i2') >>> dt.byteorder == swapped_code True # numpy.dtype.char attribute dtype.char A unique character code for each of the 21 different built-in types. #### Examples >>> import numpy as np >>> x = np.dtype(float) >>> x.char 'd' # numpy.dtype.descr attribute dtype.descr `__array_interface__` description of the data-type. The format is that required by the 'descr' key in the `__array_interface__` attribute. Warning: This attribute exists specifically for `__array_interface__`, and passing it directly to [`numpy.dtype`](numpy.dtype#numpy.dtype "numpy.dtype") will not accurately reconstruct some dtypes (e.g., scalar and subarray dtypes). #### Examples >>> import numpy as np >>> x = np.dtype(float) >>> x.descr [('', '<f8')] >>> dt = np.dtype([('name', np.str_, 16), ('grades', np.float64, (2,))]) >>> dt.descr [('name', '<U16'), ('grades', '<f8', (2,))] # numpy.dtype.fields attribute dtype.fields Dictionary of named fields defined for this data type, or `None`. The dictionary is indexed by keys that are the names of the fields. Each entry in the dictionary is a tuple fully describing the field: `(dtype, offset[, title])` #### Examples >>> import numpy as np >>> dt = np.dtype([('name', np.str_, 16), ('grades', np.float64, (2,))]) >>> print(dt.fields) {'grades': (dtype(('float64',(2,))), 16), 'name': (dtype('|S16'), 0)} # numpy.dtype.flags attribute dtype.flags Bit-flags describing how this data type is to be interpreted. Bit-masks are in `numpy._core.multiarray` as the constants `ITEM_HASOBJECT`, `LIST_PICKLE`, `ITEM_IS_POINTER`, `NEEDS_INIT`, `NEEDS_PYAPI`, `USE_GETITEM`, `USE_SETITEM`. A full explanation of these flags is in the C-API documentation; they are largely useful for user-defined data-types.
The following example demonstrates that operations on this particular dtype require the Python C-API. #### Examples >>> import numpy as np >>> x = np.dtype([('a', np.int32, 8), ('b', np.float64, 6)]) >>> x.flags 16 >>> np._core.multiarray.NEEDS_PYAPI 16 # numpy.dtype.hasobject attribute dtype.hasobject Boolean indicating whether this dtype contains any reference-counted objects in any fields or sub-dtypes. Recall that what is actually in the ndarray memory representing the Python object is the memory address of that object (a pointer). Special handling may be required, and this attribute is useful for distinguishing data types that may contain arbitrary Python objects from data types that won't. # numpy.dtype _class_ numpy.dtype(_dtype_ , _align=False_ , _copy=False_[, _metadata_])[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/__init__.py) Create a data type object. A numpy array is homogeneous, and contains elements described by a dtype object. A dtype object can be constructed from different combinations of fundamental numeric types. Parameters: **dtype** Object to be converted to a data type object. **align** bool, optional Add padding to the fields to match what a C compiler would output for a similar C-struct. Can be `True` only if `obj` is a dictionary or a comma-separated string. If a struct dtype is being created, this also sets a sticky alignment flag `isalignedstruct`. **copy** bool, optional Make a new copy of the data-type object. If `False`, the result may just be a reference to a built-in data-type object. **metadata** dict, optional An optional dictionary with dtype metadata.
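The effect of the `align` parameter described above can be seen by comparing the itemsize of a packed and an aligned struct. A small sketch (field names are illustrative; the offsets follow the usual C alignment rules for these field types):

```python
import numpy as np

spec = {'names': ['a', 'b'], 'formats': [np.uint8, np.float64]}

packed = np.dtype(spec)               # fields packed back to back
aligned = np.dtype(spec, align=True)  # C-struct style padding

# Packed: 'b' starts at offset 1, so the total size is 9 bytes.
assert packed.itemsize == 9
# Aligned: 'b' is padded out to offset 8, so the total size is 16 bytes.
assert aligned.itemsize == 16
# align=True also sets the sticky isalignedstruct flag.
assert aligned.isalignedstruct
```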
See also [`result_type`](numpy.result_type#numpy.result_type "numpy.result_type") #### Examples Using array-scalar type: >>> import numpy as np >>> np.dtype(np.int16) dtype('int16') Structured type, one field name 'f1', containing int16: >>> np.dtype([('f1', np.int16)]) dtype([('f1', '<i2')]) Structured type, one field named 'f1', in itself containing a structured type with one field: >>> np.dtype([('f1', [('f1', np.int16)])]) dtype([('f1', [('f1', '<i2')])]) Structured type, two fields: the first field contains an unsigned int, the second an int32: >>> np.dtype([('f1', np.uint64), ('f2', np.int32)]) dtype([('f1', '<u8'), ('f2', '<i4')]) Using array-protocol type strings: >>> np.dtype([('a','f8'),('b','S10')]) dtype([('a', '<f8'), ('b', 'S10')]) Using comma-separated field formats. The shape is (2,3): >>> np.dtype("i4, (2,3)f8") dtype([('f0', '<i4'), ('f1', '<f8', (2, 3))]) Using tuples. `int` is a fixed type, 3 the field's shape. `void` is a flexible type, here of size 10: >>> np.dtype([('hello',(np.int64,3)),('world',np.void,10)]) dtype([('hello', '<i8', (3,)), ('world', 'V10')]) Subdivide `int16` into 2 `int8`'s, called x and y. 0 and 1 are the offsets in bytes: >>> np.dtype((np.int16, {'x':(np.int8,0), 'y':(np.int8,1)})) dtype((numpy.int16, [('x', 'i1'), ('y', 'i1')])) Using dictionaries. Two fields named 'gender' and 'age': >>> np.dtype({'names':['gender','age'], 'formats':['S1',np.uint8]}) dtype([('gender', 'S1'), ('age', 'u1')]) Offsets in bytes, here 0 and 25: >>> np.dtype({'surname':('S25',0),'age':(np.uint8,25)}) dtype([('surname', 'S25'), ('age', 'u1')]) Attributes: [`alignment`](numpy.dtype.alignment#numpy.dtype.alignment "numpy.dtype.alignment") The required alignment (bytes) of this data-type according to the compiler. [`base`](numpy.dtype.base#numpy.dtype.base "numpy.dtype.base") Returns dtype for the base element of the subarrays, regardless of their dimension or shape. [`byteorder`](numpy.dtype.byteorder#numpy.dtype.byteorder "numpy.dtype.byteorder") A character indicating the byte-order of this data-type object. [`char`](../routines.char#module-numpy.char "numpy.char") A unique character code for each of the 21 different built-in types. [`descr`](numpy.dtype.descr#numpy.dtype.descr "numpy.dtype.descr") `__array_interface__` description of the data-type. [`fields`](numpy.dtype.fields#numpy.dtype.fields "numpy.dtype.fields") Dictionary of named fields defined for this data type, or `None`. [`flags`](numpy.dtype.flags#numpy.dtype.flags "numpy.dtype.flags") Bit-flags describing how this data type is to be interpreted.
[`hasobject`](numpy.dtype.hasobject#numpy.dtype.hasobject "numpy.dtype.hasobject") Boolean indicating whether this dtype contains any reference-counted objects in any fields or sub-dtypes. [`isalignedstruct`](numpy.dtype.isalignedstruct#numpy.dtype.isalignedstruct "numpy.dtype.isalignedstruct") Boolean indicating whether the dtype is a struct which maintains field alignment. [`isbuiltin`](numpy.dtype.isbuiltin#numpy.dtype.isbuiltin "numpy.dtype.isbuiltin") Integer indicating how this dtype relates to the built-in dtypes. [`isnative`](numpy.dtype.isnative#numpy.dtype.isnative "numpy.dtype.isnative") Boolean indicating whether the byte order of this dtype is native to the platform. [`itemsize`](numpy.dtype.itemsize#numpy.dtype.itemsize "numpy.dtype.itemsize") The element size of this data-type object. [`kind`](numpy.dtype.kind#numpy.dtype.kind "numpy.dtype.kind") A character code (one of ‘biufcmMOSUV’) identifying the general kind of data. [`metadata`](numpy.dtype.metadata#numpy.dtype.metadata "numpy.dtype.metadata") Either `None` or a readonly dictionary of metadata (mappingproxy). [`name`](numpy.dtype.name#numpy.dtype.name "numpy.dtype.name") A bit-width name for this data-type. [`names`](numpy.dtype.names#numpy.dtype.names "numpy.dtype.names") Ordered list of field names, or `None` if there are no fields. [`ndim`](numpy.ndim#numpy.ndim "numpy.ndim") Number of dimensions of the sub-array if this data type describes a sub-array, and `0` otherwise. [`num`](numpy.dtype.num#numpy.dtype.num "numpy.dtype.num") A unique number for each of the 21 different built-in types. [`shape`](numpy.shape#numpy.shape "numpy.shape") Shape tuple of the sub-array if this data type describes a sub-array, and `()` otherwise. [`str`](numpy.dtype.str#numpy.dtype.str "numpy.dtype.str") The array-protocol typestring of this data-type object. 
[`subdtype`](numpy.dtype.subdtype#numpy.dtype.subdtype "numpy.dtype.subdtype") Tuple `(item_dtype, shape)` if this `dtype` describes a sub-array, and None otherwise. **type** #### Methods [`newbyteorder`](numpy.dtype.newbyteorder#numpy.dtype.newbyteorder "numpy.dtype.newbyteorder")([new_order]) | Return a new dtype with a different byte order. ---|--- # numpy.dtype.isalignedstruct attribute dtype.isalignedstruct Boolean indicating whether the dtype is a struct which maintains field alignment. This flag is sticky, so when combining multiple structs together, it is preserved and produces new dtypes which are also aligned. # numpy.dtype.isbuiltin attribute dtype.isbuiltin Integer indicating how this dtype relates to the built-in dtypes. Read-only. 0 | if this is a structured array type, with fields ---|--- 1 | if this is a dtype compiled into numpy (such as ints, floats etc) 2 | if the dtype is for a user-defined numpy type A user-defined type uses the numpy C-API machinery to extend numpy to handle a new array type. See [User-defined data-types](../../user/c-info.beyond-basics#user-user-defined-data-types) in the NumPy manual. #### Examples >>> import numpy as np >>> dt = np.dtype('i2') >>> dt.isbuiltin 1 >>> dt = np.dtype('f8') >>> dt.isbuiltin 1 >>> dt = np.dtype([('field1', 'f8')]) >>> dt.isbuiltin 0 # numpy.dtype.isnative attribute dtype.isnative Boolean indicating whether the byte order of this dtype is native to the platform. # numpy.dtype.itemsize attribute dtype.itemsize The element size of this data-type object. For 18 of the 21 types this number is fixed by the data-type. For the flexible data-types, this number can be anything. #### Examples >>> import numpy as np >>> arr = np.array([[1, 2], [3, 4]]) >>> arr.dtype dtype('int64') >>> arr.itemsize 8 >>> dt = np.dtype([('name', np.str_, 16), ('grades', np.float64, (2,))]) >>> dt.itemsize 80 # numpy.dtype.kind attribute dtype.kind A character code (one of ‘biufcmMOSUV’) identifying the general kind of data. 
b | boolean ---|--- i | signed integer u | unsigned integer f | floating-point c | complex floating-point m | timedelta M | datetime O | object S | (byte-)string U | Unicode V | void #### Examples >>> import numpy as np >>> dt = np.dtype('i4') >>> dt.kind 'i' >>> dt = np.dtype('f8') >>> dt.kind 'f' >>> dt = np.dtype([('field1', 'f8')]) >>> dt.kind 'V' # numpy.dtype.metadata attribute dtype.metadata Either `None` or a readonly dictionary of metadata (mappingproxy). The metadata field can be set using any dictionary at data-type creation. NumPy currently has no uniform approach to propagating metadata; although some array operations preserve it, there is no guarantee that others will. Warning Although used in certain projects, this feature was long undocumented and is not well supported. Some aspects of metadata propagation are expected to change in the future. #### Examples >>> import numpy as np >>> dt = np.dtype(float, metadata={"key": "value"}) >>> dt.metadata["key"] 'value' >>> arr = np.array([1, 2, 3], dtype=dt) >>> arr.dtype.metadata mappingproxy({'key': 'value'}) Adding arrays with identical datatypes currently preserves the metadata: >>> (arr + arr).dtype.metadata mappingproxy({'key': 'value'}) But if the arrays have different dtype metadata, the metadata may be dropped: >>> dt2 = np.dtype(float, metadata={"key2": "value2"}) >>> arr2 = np.array([3, 2, 1], dtype=dt2) >>> (arr + arr2).dtype.metadata is None True # The metadata field is cleared so None is returned # numpy.dtype.name attribute dtype.name A bit-width name for this data-type. Un-sized flexible data-type objects do not have this attribute. #### Examples >>> import numpy as np >>> x = np.dtype(float) >>> x.name 'float64' >>> x = np.dtype([('a', np.int32, 8), ('b', np.float64, 6)]) >>> x.name 'void640' # numpy.dtype.names attribute dtype.names Ordered list of field names, or `None` if there are no fields. The names are ordered according to increasing byte offset. 
This can be used, for example, to walk through all of the named fields in offset order. #### Examples >>> dt = np.dtype([('name', np.str_, 16), ('grades', np.float64, (2,))]) >>> dt.names ('name', 'grades') # numpy.dtype.newbyteorder method dtype.newbyteorder(_new_order ='S'_, _/_) Return a new dtype with a different byte order. Changes are also made in all fields and sub-arrays of the data type. Parameters: **new_order** string, optional Byte order to force; a value from the byte order specifications below. The default value ('S') results in swapping the current byte order. `new_order` codes can be any of: * 'S' - swap dtype from current to opposite endian * {'<', 'little'} - little endian * {'>', 'big'} - big endian * {'=', 'native'} - native order * {'|', 'I'} - ignore (no change to byte order) Returns: **new_dtype** dtype New dtype object with the given change to the byte order. #### Notes Changes are also made in all fields and sub-arrays of the data type. #### Examples >>> import sys >>> sys_is_le = sys.byteorder == 'little' >>> native_code = '<' if sys_is_le else '>' >>> swapped_code = '>' if sys_is_le else '<' >>> import numpy as np >>> native_dt = np.dtype(native_code+'i2') >>> swapped_dt = np.dtype(swapped_code+'i2') >>> native_dt.newbyteorder('S') == swapped_dt True >>> native_dt.newbyteorder() == swapped_dt True >>> native_dt == swapped_dt.newbyteorder('S') True >>> native_dt == swapped_dt.newbyteorder('=') True >>> native_dt == swapped_dt.newbyteorder('N') True >>> native_dt == native_dt.newbyteorder('|') True >>> np.dtype('<i2') == native_dt.newbyteorder('<') True >>> np.dtype('<i2') == native_dt.newbyteorder('L') True >>> np.dtype('>i2') == native_dt.newbyteorder('>') True >>> np.dtype('>i2') == native_dt.newbyteorder('B') True # numpy.dtype.num attribute dtype.num A unique number for each of the 21 different built-in types. These are roughly ordered from least-to-most precision.
#### Examples

>>> import numpy as np
>>> dt = np.dtype(str)
>>> dt.num
19
>>> dt = np.dtype(float)
>>> dt.num
12

# numpy.dtype.shape

attribute

dtype.shape

Shape tuple of the sub-array if this data type describes a sub-array, and `()` otherwise.

#### Examples

>>> import numpy as np
>>> dt = np.dtype(('i4', 4))
>>> dt.shape
(4,)
>>> dt = np.dtype(('i4', (2, 3)))
>>> dt.shape
(2, 3)

# numpy.dtype.str

attribute

dtype.str

The array-protocol typestring of this data-type object.

# numpy.dtype.subdtype

attribute

dtype.subdtype

Tuple `(item_dtype, shape)` if this [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") describes a sub-array, and None otherwise. The _shape_ is the fixed shape of the sub-array described by this data type, and _item_dtype_ the data type of the array. If a field whose dtype object has this attribute is retrieved, then the extra dimensions implied by _shape_ are tacked on to the end of the retrieved array.

See also

[`dtype.base`](numpy.dtype.base#numpy.dtype.base "numpy.dtype.base")

#### Examples

>>> import numpy as np
>>> x = np.dtype('8f')
>>> x.subdtype
(dtype('float32'), (8,))
>>> x = np.dtype('i2')
>>> x.subdtype
>>>

# numpy.dtype.type

attribute

dtype.type _= None_

# numpy.ediff1d

numpy.ediff1d(_ary_ , _to_end =None_, _to_begin =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_arraysetops_impl.py#L41-L129)

The differences between consecutive elements of an array.

Parameters:

**ary** array_like

If necessary, will be flattened before the differences are taken.

**to_end** array_like, optional

Number(s) to append at the end of the returned differences.

**to_begin** array_like, optional

Number(s) to prepend at the beginning of the returned differences.

Returns:

**ediff1d** ndarray

The differences. Loosely, this is `ary.flat[1:] - ary.flat[:-1]`.
See also [`diff`](numpy.diff#numpy.diff "numpy.diff"), [`gradient`](numpy.gradient#numpy.gradient "numpy.gradient") #### Notes When applied to masked arrays, this function drops the mask information if the `to_begin` and/or `to_end` parameters are used. #### Examples >>> import numpy as np >>> x = np.array([1, 2, 4, 7, 0]) >>> np.ediff1d(x) array([ 1, 2, 3, -7]) >>> np.ediff1d(x, to_begin=-99, to_end=np.array([88, 99])) array([-99, 1, 2, ..., -7, 88, 99]) The returned array is always 1D. >>> y = [[1, 2, 4], [1, 6, 24]] >>> np.ediff1d(y) array([ 1, 2, -3, 5, 18]) # numpy.einsum numpy.einsum(_subscripts_ , _* operands_, _out =None_, _dtype =None_, _order ='K'_, _casting ='safe'_, _optimize =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/einsumfunc.py#L1057-L1499) Evaluates the Einstein summation convention on the operands. Using the Einstein summation convention, many common multi-dimensional, linear algebraic array operations can be represented in a simple fashion. In _implicit_ mode `einsum` computes these values. In _explicit_ mode, `einsum` provides further flexibility to compute other array operations that might not be considered classical Einstein summation operations, by disabling, or forcing summation over specified subscript labels. See the notes and examples for clarification. Parameters: **subscripts** str Specifies the subscripts for summation as comma separated list of subscript labels. An implicit (classical Einstein summation) calculation is performed unless the explicit indicator ‘->’ is included as well as subscript labels of the precise output form. **operands** list of array_like These are the arrays for the operation. **out** ndarray, optional If provided, the calculation is done into this array. **dtype**{data-type, None}, optional If provided, forces the calculation to use the data type specified. Note that you may have to also give a more liberal `casting` parameter to allow the conversions. Default is None. 
**order**{‘C’, ‘F’, ‘A’, ‘K’}, optional

Controls the memory layout of the output. ‘C’ means it should be C contiguous. ‘F’ means it should be Fortran contiguous. ‘A’ means it should be ‘F’ if the inputs are all ‘F’, ‘C’ otherwise. ‘K’ means it should be as close to the layout of the inputs as is possible, including arbitrarily permuted axes. Default is ‘K’.

**casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional

Controls what kind of data casting may occur. Setting this to ‘unsafe’ is not recommended, as it can adversely affect accumulations.

* ‘no’ means the data types should not be cast at all.
* ‘equiv’ means only byte-order changes are allowed.
* ‘safe’ means only casts which can preserve values are allowed.
* ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed.
* ‘unsafe’ means any data conversions may be done.

Default is ‘safe’.

**optimize**{False, True, ‘greedy’, ‘optimal’}, optional

Controls whether intermediate optimization should occur. No optimization occurs if False; True defaults to the ‘greedy’ algorithm. Also accepts an explicit contraction list from the `np.einsum_path` function. See `np.einsum_path` for more details. Defaults to False.

Returns:

**output** ndarray

The calculation based on the Einstein summation convention.

See also

[`einsum_path`](numpy.einsum_path#numpy.einsum_path "numpy.einsum_path"), [`dot`](numpy.dot#numpy.dot "numpy.dot"), [`inner`](numpy.inner#numpy.inner "numpy.inner"), [`outer`](numpy.outer#numpy.outer "numpy.outer"), [`tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot"), [`linalg.multi_dot`](numpy.linalg.multi_dot#numpy.linalg.multi_dot "numpy.linalg.multi_dot")

A similar verbose interface is provided by the [einops](https://github.com/arogozhnikov/einops) package to cover additional operations: transpose, reshape/flatten, repeat/tile, squeeze/unsqueeze and reductions.
The [opt_einsum](https://optimized-einsum.readthedocs.io/en/stable/) package optimizes contraction order for einsum-like expressions in a backend-agnostic manner.

#### Notes

The Einstein summation convention can be used to compute many multi-dimensional, linear algebraic array operations. `einsum` provides a succinct way of representing these. A non-exhaustive list of these operations, which can be computed by `einsum`, is shown below along with examples:

* Trace of an array, [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace").
* Return a diagonal, [`numpy.diag`](numpy.diag#numpy.diag "numpy.diag").
* Array axis summations, [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum").
* Transpositions and permutations, [`numpy.transpose`](numpy.transpose#numpy.transpose "numpy.transpose").
* Matrix multiplication and dot product, [`numpy.matmul`](numpy.matmul#numpy.matmul "numpy.matmul") [`numpy.dot`](numpy.dot#numpy.dot "numpy.dot").
* Vector inner and outer products, [`numpy.inner`](numpy.inner#numpy.inner "numpy.inner") [`numpy.outer`](numpy.outer#numpy.outer "numpy.outer").
* Broadcasting, element-wise and scalar multiplication, [`numpy.multiply`](numpy.multiply#numpy.multiply "numpy.multiply").
* Tensor contractions, [`numpy.tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot").
* Chained array operations, in efficient calculation order, [`numpy.einsum_path`](numpy.einsum_path#numpy.einsum_path "numpy.einsum_path").

The subscripts string is a comma-separated list of subscript labels, where each label refers to a dimension of the corresponding operand. Whenever a label is repeated it is summed, so `np.einsum('i,i', a, b)` is equivalent to [`np.inner(a,b)`](numpy.inner#numpy.inner "numpy.inner"). If a label appears only once, it is not summed, so `np.einsum('i', a)` produces a view of `a` with no changes.
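The label rules above can be checked with a short sketch (the array values here are arbitrary illustrations):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# A repeated label is summed over: 'i,i' is the inner product.
assert np.einsum('i,i', a, b) == np.inner(a, b)

# A label appearing only once is not summed: 'i' alone returns
# a view of `a` with no changes.
v = np.einsum('i', a)
assert np.shares_memory(v, a)
```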
A further example `np.einsum('ij,jk', a, b)` describes traditional matrix multiplication and is equivalent to [`np.matmul(a,b)`](numpy.matmul#numpy.matmul "numpy.matmul"). Repeated subscript labels in one operand take the diagonal. For example, `np.einsum('ii', a)` is equivalent to [`np.trace(a)`](numpy.trace#numpy.trace "numpy.trace"). In _implicit mode_ , the chosen subscripts are important since the axes of the output are reordered alphabetically. This means that `np.einsum('ij', a)` doesn’t affect a 2D array, while `np.einsum('ji', a)` takes its transpose. Additionally, `np.einsum('ij,jk', a, b)` returns a matrix multiplication, while, `np.einsum('ij,jh', a, b)` returns the transpose of the multiplication since subscript ‘h’ precedes subscript ‘i’. In _explicit mode_ the output can be directly controlled by specifying output subscript labels. This requires the identifier ‘->’ as well as the list of output subscript labels. This feature increases the flexibility of the function since summing can be disabled or forced when required. The call `np.einsum('i->', a)` is like [`np.sum(a)`](numpy.sum#numpy.sum "numpy.sum") if `a` is a 1-D array, and `np.einsum('ii->i', a)` is like [`np.diag(a)`](numpy.diag#numpy.diag "numpy.diag") if `a` is a square 2-D array. The difference is that `einsum` does not allow broadcasting by default. Additionally `np.einsum('ij,jh->ih', a, b)` directly specifies the order of the output subscript labels and therefore returns matrix multiplication, unlike the example above in implicit mode. To enable and control broadcasting, use an ellipsis. Default NumPy-style broadcasting is done by adding an ellipsis to the left of each term, like `np.einsum('...ii->...i', a)`. `np.einsum('...i->...', a)` is like [`np.sum(a, axis=-1)`](numpy.sum#numpy.sum "numpy.sum") for array `a` of any shape. 
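A brief sketch of the explicit-mode and ellipsis rules described above (the example array is an arbitrary illustration):

```python
import numpy as np

a = np.arange(9.0).reshape(3, 3)

# Explicit mode: '->' fixes the output subscripts, so 'ii->i'
# extracts the diagonal instead of summing it (as 'ii' would).
diag = np.einsum('ii->i', a)
assert np.array_equal(diag, np.diag(a))

# Ellipsis broadcasting: '...i->...' sums the last axis of any shape.
assert np.allclose(np.einsum('...i->...', a), a.sum(axis=-1))
```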
To take the trace along the first and last axes, you can do `np.einsum('i...i', a)`, or to do a matrix-matrix product with the left-most indices instead of rightmost, one can do `np.einsum('ij...,jk...->ik...', a, b)`. When there is only one operand, no axes are summed, and no output parameter is provided, a view into the operand is returned instead of a new array. Thus, taking the diagonal as `np.einsum('ii->i', a)` produces a view (changed in version 1.10.0). `einsum` also provides an alternative way to provide the subscripts and operands as `einsum(op0, sublist0, op1, sublist1, ..., [sublistout])`. If the output shape is not provided in this format `einsum` will be calculated in implicit mode, otherwise it will be performed explicitly. The examples below have corresponding `einsum` calls with the two parameter methods. Views returned from einsum are now writeable whenever the input array is writeable. For example, `np.einsum('ijk...->kji...', a)` will now have the same effect as [`np.swapaxes(a, 0, 2)`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") and `np.einsum('ii->i', a)` will return a writeable view of the diagonal of a 2D array. Added the `optimize` argument which will optimize the contraction order of an einsum expression. For a contraction with three or more operands this can greatly increase the computational efficiency at the cost of a larger memory footprint during computation. Typically a ‘greedy’ algorithm is applied which empirical tests have shown returns the optimal path in the majority of cases. In some cases ‘optimal’ will return the superlative path through a more expensive, exhaustive search. For iterative calculations it may be advisable to calculate the optimal path once and reuse that path by supplying it as an argument. An example is given below. See [`numpy.einsum_path`](numpy.einsum_path#numpy.einsum_path "numpy.einsum_path") for more details. 
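The compute-once, reuse-many-times pattern for the optimal path can be sketched as follows (shapes, seed, and the chain-dot expression are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((10, 10))
b = rng.random((10, 10))
c = rng.random((10, 10))

# Compute the contraction path once with the 'optimal' strategy...
path = np.einsum_path('ij,jk,kl->il', a, b, c, optimize='optimal')[0]

# ...then pass it back on every call, skipping repeated path searches.
for _ in range(3):
    out = np.einsum('ij,jk,kl->il', a, b, c, optimize=path)

assert np.allclose(out, a @ b @ c)
```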
#### Examples >>> a = np.arange(25).reshape(5,5) >>> b = np.arange(5) >>> c = np.arange(6).reshape(2,3) Trace of a matrix: >>> np.einsum('ii', a) 60 >>> np.einsum(a, [0,0]) 60 >>> np.trace(a) 60 Extract the diagonal (requires explicit form): >>> np.einsum('ii->i', a) array([ 0, 6, 12, 18, 24]) >>> np.einsum(a, [0,0], [0]) array([ 0, 6, 12, 18, 24]) >>> np.diag(a) array([ 0, 6, 12, 18, 24]) Sum over an axis (requires explicit form): >>> np.einsum('ij->i', a) array([ 10, 35, 60, 85, 110]) >>> np.einsum(a, [0,1], [0]) array([ 10, 35, 60, 85, 110]) >>> np.sum(a, axis=1) array([ 10, 35, 60, 85, 110]) For higher dimensional arrays summing a single axis can be done with ellipsis: >>> np.einsum('...j->...', a) array([ 10, 35, 60, 85, 110]) >>> np.einsum(a, [Ellipsis,1], [Ellipsis]) array([ 10, 35, 60, 85, 110]) Compute a matrix transpose, or reorder any number of axes: >>> np.einsum('ji', c) array([[0, 3], [1, 4], [2, 5]]) >>> np.einsum('ij->ji', c) array([[0, 3], [1, 4], [2, 5]]) >>> np.einsum(c, [1,0]) array([[0, 3], [1, 4], [2, 5]]) >>> np.transpose(c) array([[0, 3], [1, 4], [2, 5]]) Vector inner products: >>> np.einsum('i,i', b, b) 30 >>> np.einsum(b, [0], b, [0]) 30 >>> np.inner(b,b) 30 Matrix vector multiplication: >>> np.einsum('ij,j', a, b) array([ 30, 80, 130, 180, 230]) >>> np.einsum(a, [0,1], b, [1]) array([ 30, 80, 130, 180, 230]) >>> np.dot(a, b) array([ 30, 80, 130, 180, 230]) >>> np.einsum('...j,j', a, b) array([ 30, 80, 130, 180, 230]) Broadcasting and scalar multiplication: >>> np.einsum('..., ...', 3, c) array([[ 0, 3, 6], [ 9, 12, 15]]) >>> np.einsum(',ij', 3, c) array([[ 0, 3, 6], [ 9, 12, 15]]) >>> np.einsum(3, [Ellipsis], c, [Ellipsis]) array([[ 0, 3, 6], [ 9, 12, 15]]) >>> np.multiply(3, c) array([[ 0, 3, 6], [ 9, 12, 15]]) Vector outer product: >>> np.einsum('i,j', np.arange(2)+1, b) array([[0, 1, 2, 3, 4], [0, 2, 4, 6, 8]]) >>> np.einsum(np.arange(2)+1, [0], b, [1]) array([[0, 1, 2, 3, 4], [0, 2, 4, 6, 8]]) >>> np.outer(np.arange(2)+1, b) 
array([[0, 1, 2, 3, 4], [0, 2, 4, 6, 8]]) Tensor contraction: >>> a = np.arange(60.).reshape(3,4,5) >>> b = np.arange(24.).reshape(4,3,2) >>> np.einsum('ijk,jil->kl', a, b) array([[4400., 4730.], [4532., 4874.], [4664., 5018.], [4796., 5162.], [4928., 5306.]]) >>> np.einsum(a, [0,1,2], b, [1,0,3], [2,3]) array([[4400., 4730.], [4532., 4874.], [4664., 5018.], [4796., 5162.], [4928., 5306.]]) >>> np.tensordot(a,b, axes=([1,0],[0,1])) array([[4400., 4730.], [4532., 4874.], [4664., 5018.], [4796., 5162.], [4928., 5306.]]) Writeable returned arrays (since version 1.10.0): >>> a = np.zeros((3, 3)) >>> np.einsum('ii->i', a)[:] = 1 >>> a array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) Example of ellipsis use: >>> a = np.arange(6).reshape((3,2)) >>> b = np.arange(12).reshape((4,3)) >>> np.einsum('ki,jk->ij', a, b) array([[10, 28, 46, 64], [13, 40, 67, 94]]) >>> np.einsum('ki,...k->i...', a, b) array([[10, 28, 46, 64], [13, 40, 67, 94]]) >>> np.einsum('k...,jk', a, b) array([[10, 28, 46, 64], [13, 40, 67, 94]]) Chained array operations. For more complicated contractions, speed ups might be achieved by repeatedly computing a ‘greedy’ path or pre-computing the ‘optimal’ path and repeatedly applying it, using an [`einsum_path`](numpy.einsum_path#numpy.einsum_path "numpy.einsum_path") insertion (since version 1.12.0). Performance improvements can be particularly significant with larger arrays: >>> a = np.ones(64).reshape(2,4,8) Basic `einsum`: ~1520ms (benchmarked on 3.1GHz Intel i5.) >>> for iteration in range(500): ... _ = np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a) Sub-optimal `einsum` (due to repeated path calculation time): ~330ms >>> for iteration in range(500): ... _ = np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, ... optimize='optimal') Greedy `einsum` (faster optimal path approximation): ~160ms >>> for iteration in range(500): ... 
_ = np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, optimize='greedy') Optimal `einsum` (best usage pattern in some use cases): ~110ms >>> path = np.einsum_path('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, ... optimize='optimal')[0] >>> for iteration in range(500): ... _ = np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, optimize=path) # numpy.einsum_path numpy.einsum_path(_subscripts_ , _* operands_, _optimize ='greedy'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/einsumfunc.py#L742-L1046) Evaluates the lowest cost contraction order for an einsum expression by considering the creation of intermediate arrays. Parameters: **subscripts** str Specifies the subscripts for summation. ***operands** list of array_like These are the arrays for the operation. **optimize**{bool, list, tuple, ‘greedy’, ‘optimal’} Choose the type of path. If a tuple is provided, the second argument is assumed to be the maximum intermediate size created. If only a single argument is provided the largest input or output array size is used as a maximum intermediate size. * if a list is given that starts with `einsum_path`, uses this as the contraction path * if False no optimization is taken * if True defaults to the ‘greedy’ algorithm * ‘optimal’ An algorithm that combinatorially explores all possible ways of contracting the listed tensors and chooses the least costly path. Scales exponentially with the number of terms in the contraction. * ‘greedy’ An algorithm that chooses the best pair contraction at each step. Effectively, this algorithm searches the largest inner, Hadamard, and then outer products at each step. Scales cubically with the number of terms in the contraction. Equivalent to the ‘optimal’ path for most contractions. Default is ‘greedy’. Returns: **path** list of tuples A list representation of the einsum path. **string_repr** str A printable representation of the einsum path. 
See also

[`einsum`](numpy.einsum#numpy.einsum "numpy.einsum"), [`linalg.multi_dot`](numpy.linalg.multi_dot#numpy.linalg.multi_dot "numpy.linalg.multi_dot")

#### Notes

The resulting path indicates which terms of the input contraction should be contracted first; the result of this contraction is then appended to the end of the contraction list. This list can then be iterated over until all intermediate contractions are complete.

#### Examples

We can begin with a chain dot example. In this case, it is optimal to contract the `b` and `c` tensors first, as represented by the first element of the path `(1, 2)`. The resulting tensor is added to the end of the contraction and the remaining contraction `(0, 1)` is then completed.

>>> np.random.seed(123)
>>> a = np.random.rand(2, 2)
>>> b = np.random.rand(2, 5)
>>> c = np.random.rand(5, 2)
>>> path_info = np.einsum_path('ij,jk,kl->il', a, b, c, optimize='greedy')
>>> print(path_info[0])
['einsum_path', (1, 2), (0, 1)]
>>> print(path_info[1])
  Complete contraction:  ij,jk,kl->il # may vary
         Naive scaling:  4
     Optimized scaling:  3
      Naive FLOP count:  1.600e+02
  Optimized FLOP count:  5.600e+01
   Theoretical speedup:  2.857
  Largest intermediate:  4.000e+00 elements
-------------------------------------------------------------------------
scaling                  current                                remaining
-------------------------------------------------------------------------
   3                   kl,jk->jl                                ij,jl->il
   3                   jl,ij->il                                   il->il

A more complex index transformation example.

>>> I = np.random.rand(10, 10, 10, 10)
>>> C = np.random.rand(10, 10)
>>> path_info = np.einsum_path('ea,fb,abcd,gc,hd->efgh', C, C, I, C, C,
...
optimize='greedy')
>>> print(path_info[0])
['einsum_path', (0, 2), (0, 3), (0, 2), (0, 1)]
>>> print(path_info[1])
  Complete contraction:  ea,fb,abcd,gc,hd->efgh # may vary
         Naive scaling:  8
     Optimized scaling:  5
      Naive FLOP count:  8.000e+08
  Optimized FLOP count:  8.000e+05
   Theoretical speedup:  1000.000
  Largest intermediate:  1.000e+04 elements
--------------------------------------------------------------------------
scaling                  current                                remaining
--------------------------------------------------------------------------
   5               abcd,ea->bcde                      fb,gc,hd,bcde->efgh
   5               bcde,fb->cdef                         gc,hd,cdef->efgh
   5               cdef,gc->defg                            hd,defg->efgh
   5               defg,hd->efgh                               efgh->efgh

# numpy.emath.arccos

emath.arccos(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_scimath_impl.py#L495-L539)

Compute the inverse cosine of x.

Return the “principal value” (for a description of this, see [`numpy.arccos`](numpy.arccos#numpy.arccos "numpy.arccos")) of the inverse cosine of `x`. For real `x` such that `abs(x) <= 1`, this is a real number in the closed interval \\([0, \pi]\\). Otherwise, the complex principal value is returned.

Parameters:

**x** array_like or scalar

The value(s) whose arccos is (are) required.

Returns:

**out** ndarray or scalar

The inverse cosine(s) of the `x` value(s). If `x` was a scalar, so is `out`, otherwise an array object is returned.

See also

[`numpy.arccos`](numpy.arccos#numpy.arccos "numpy.arccos")

#### Notes

For an arccos() that returns `NAN` when real `x` is not in the interval `[-1,1]`, use [`numpy.arccos`](numpy.arccos#numpy.arccos "numpy.arccos").

#### Examples

>>> import numpy as np
>>> np.set_printoptions(precision=4)
>>> np.emath.arccos(1) # a scalar is returned
0.0
>>> np.emath.arccos([1,2])
array([0.-0.j , 0.-1.317j])

# numpy.emath.arcsin

emath.arcsin(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_scimath_impl.py#L542-L587)

Compute the inverse sine of x.
Return the “principal value” (for a description of this, see [`numpy.arcsin`](numpy.arcsin#numpy.arcsin "numpy.arcsin")) of the inverse sine of `x`. For real `x` such that `abs(x) <= 1`, this is a real number in the closed interval \\([-\pi/2, \pi/2]\\). Otherwise, the complex principal value is returned.

Parameters:

**x** array_like or scalar

The value(s) whose arcsin is (are) required.

Returns:

**out** ndarray or scalar

The inverse sine(s) of the `x` value(s). If `x` was a scalar, so is `out`, otherwise an array object is returned.

See also

[`numpy.arcsin`](numpy.arcsin#numpy.arcsin "numpy.arcsin")

#### Notes

For an arcsin() that returns `NAN` when real `x` is not in the interval `[-1,1]`, use [`numpy.arcsin`](numpy.arcsin#numpy.arcsin "numpy.arcsin").

#### Examples

>>> import numpy as np
>>> np.set_printoptions(precision=4)
>>> np.emath.arcsin(0)
0.0
>>> np.emath.arcsin([0,1])
array([0. , 1.5708])

# numpy.emath.arctanh

emath.arctanh(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_scimath_impl.py#L590-L643)

Compute the inverse hyperbolic tangent of `x`.

Return the “principal value” (for a description of this, see [`numpy.arctanh`](numpy.arctanh#numpy.arctanh "numpy.arctanh")) of `arctanh(x)`. For real `x` such that `abs(x) < 1`, this is a real number. If `abs(x) > 1`, or if `x` is complex, the result is complex. Finally, `x = 1` returns `inf` and `x = -1` returns `-inf`.

Parameters:

**x** array_like

The value(s) whose arctanh is (are) required.

Returns:

**out** ndarray or scalar

The inverse hyperbolic tangent(s) of the `x` value(s). If `x` was a scalar so is `out`, otherwise an array is returned.

See also

[`numpy.arctanh`](numpy.arctanh#numpy.arctanh "numpy.arctanh")

#### Notes

For an arctanh() that returns `NAN` when real `x` is not in the interval `(-1,1)`, use [`numpy.arctanh`](numpy.arctanh#numpy.arctanh "numpy.arctanh") (this latter, however, does return +/-inf for `x = +/-1`).
#### Examples

>>> import numpy as np
>>> np.set_printoptions(precision=4)
>>> np.emath.arctanh(0.5)
0.5493061443340549
>>> from numpy.testing import suppress_warnings
>>> with suppress_warnings() as sup:
...     sup.filter(RuntimeWarning)
...     np.emath.arctanh(np.eye(2))
array([[inf, 0.],
       [ 0., inf]])
>>> np.emath.arctanh([1j])
array([0.+0.7854j])

# numpy.emath.log

emath.log(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_scimath_impl.py#L242-L289)

Compute the natural logarithm of `x`.

Return the “principal value” (for a description of this, see [`numpy.log`](numpy.log#numpy.log "numpy.log")) of \\(log_e(x)\\). For real `x > 0`, this is a real number (`log(0)` returns `-inf` and `log(np.inf)` returns `inf`). Otherwise, the complex principal value is returned.

Parameters:

**x** array_like

The value(s) whose log is (are) required.

Returns:

**out** ndarray or scalar

The log of the `x` value(s). If `x` was a scalar, so is `out`, otherwise an array is returned.

See also

[`numpy.log`](numpy.log#numpy.log "numpy.log")

#### Notes

For a log() that returns `NAN` when real `x < 0`, use [`numpy.log`](numpy.log#numpy.log "numpy.log") (note, however, that otherwise [`numpy.log`](numpy.log#numpy.log "numpy.log") and this [`log`](numpy.log#numpy.log "numpy.log") are identical, i.e., both return `-inf` for `x = 0`, `inf` for `x = inf`, and, notably, the complex principal value if `x.imag != 0`).

#### Examples

>>> import numpy as np
>>> np.emath.log(np.exp(1))
1.0

Negative arguments are handled “correctly” (recall that `exp(log(x)) == x` does _not_ hold for real `x < 0`):

>>> np.emath.log(-np.exp(1)) == (1 + np.pi * 1j)
True

# numpy.emath.log10

emath.log10(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_scimath_impl.py#L292-L341)

Compute the logarithm base 10 of `x`.

Return the “principal value” (for a description of this, see [`numpy.log10`](numpy.log10#numpy.log10 "numpy.log10")) of \\(log_{10}(x)\\).
For real `x > 0`, this is a real number (`log10(0)` returns `-inf` and `log10(np.inf)` returns `inf`). Otherwise, the complex principal value is returned.

Parameters:

**x** array_like or scalar

The value(s) whose log base 10 is (are) required.

Returns:

**out** ndarray or scalar

The log base 10 of the `x` value(s). If `x` was a scalar, so is `out`, otherwise an array object is returned.

See also

[`numpy.log10`](numpy.log10#numpy.log10 "numpy.log10")

#### Notes

For a log10() that returns `NAN` when real `x < 0`, use [`numpy.log10`](numpy.log10#numpy.log10 "numpy.log10") (note, however, that otherwise [`numpy.log10`](numpy.log10#numpy.log10 "numpy.log10") and this [`log10`](numpy.log10#numpy.log10 "numpy.log10") are identical, i.e., both return `-inf` for `x = 0`, `inf` for `x = inf`, and, notably, the complex principal value if `x.imag != 0`).

#### Examples

>>> import numpy as np

(We set the printing precision so the example can be auto-tested)

>>> np.set_printoptions(precision=4)
>>> np.emath.log10(10**1)
1.0
>>> np.emath.log10([-10**1, -10**2, 10**2])
array([1.+1.3644j, 2.+1.3644j, 2.+0.j ])

# numpy.emath.log2

emath.log2(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_scimath_impl.py#L386-L433)

Compute the logarithm base 2 of `x`.

Return the “principal value” (for a description of this, see [`numpy.log2`](numpy.log2#numpy.log2 "numpy.log2")) of \\(log_2(x)\\). For real `x > 0`, this is a real number (`log2(0)` returns `-inf` and `log2(np.inf)` returns `inf`). Otherwise, the complex principal value is returned.

Parameters:

**x** array_like

The value(s) whose log base 2 is (are) required.

Returns:

**out** ndarray or scalar

The log base 2 of the `x` value(s). If `x` was a scalar, so is `out`, otherwise an array is returned.
See also

[`numpy.log2`](numpy.log2#numpy.log2 "numpy.log2")

#### Notes

For a log2() that returns `NAN` when real `x < 0`, use [`numpy.log2`](numpy.log2#numpy.log2 "numpy.log2") (note, however, that otherwise [`numpy.log2`](numpy.log2#numpy.log2 "numpy.log2") and this [`log2`](numpy.log2#numpy.log2 "numpy.log2") are identical, i.e., both return `-inf` for `x = 0`, `inf` for `x = inf`, and, notably, the complex principal value if `x.imag != 0`).

#### Examples

We set the printing precision so the example can be auto-tested:

>>> np.set_printoptions(precision=4)
>>> np.emath.log2(8)
3.0
>>> np.emath.log2([-4, -8, 8])
array([2.+4.5324j, 3.+4.5324j, 3.+0.j ])

# numpy.emath.logn

emath.logn(_n_ , _x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_scimath_impl.py#L348-L383)

Take log base n of x.

If `x` contains negative inputs, the answer is computed and returned in the complex domain.

Parameters:

**n** array_like

The integer base(s) in which the log is taken.

**x** array_like

The value(s) whose log base `n` is (are) required.

Returns:

**out** ndarray or scalar

The log base `n` of the `x` value(s). If `x` was a scalar, so is `out`, otherwise an array is returned.

#### Examples

>>> import numpy as np
>>> np.set_printoptions(precision=4)
>>> np.emath.logn(2, [4, 8])
array([2., 3.])
>>> np.emath.logn(2, [-4, -8, 8])
array([2.+4.5324j, 3.+4.5324j, 3.+0.j ])

# numpy.emath.power

emath.power(_x_ , _p_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_scimath_impl.py#L440-L492)

Return x to the power p, (x**p).

If `x` contains negative values, the output is converted to the complex domain.

Parameters:

**x** array_like

The input value(s).

**p** array_like of ints

The power(s) to which `x` is raised. If `x` contains multiple values, `p` has to either be a scalar, or contain the same number of values as `x`. In the latter case, the result is `x[0]**p[0], x[1]**p[1], ...`.

Returns:

**out** ndarray or scalar

The result of `x**p`.
If `x` and `p` are scalars, so is `out`, otherwise an array is returned. See also [`numpy.power`](numpy.power#numpy.power "numpy.power") #### Examples >>> import numpy as np >>> np.set_printoptions(precision=4) >>> np.emath.power(2, 2) 4 >>> np.emath.power([2, 4], 2) array([ 4, 16]) >>> np.emath.power([2, 4], -2) array([0.25 , 0.0625]) >>> np.emath.power([-2, 4], 2) array([ 4.-0.j, 16.+0.j]) >>> np.emath.power([2, 4], [2, 4]) array([ 4, 256]) # numpy.emath.sqrt emath.sqrt(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_scimath_impl.py#L186-L239) Compute the square root of x. For negative input elements, a complex value is returned (unlike [`numpy.sqrt`](numpy.sqrt#numpy.sqrt "numpy.sqrt") which returns NaN). Parameters: **x** array_like The input value(s). Returns: **out** ndarray or scalar The square root of `x`. If `x` was a scalar, so is `out`, otherwise an array is returned. See also [`numpy.sqrt`](numpy.sqrt#numpy.sqrt "numpy.sqrt") #### Examples For real, non-negative inputs this works just like [`numpy.sqrt`](numpy.sqrt#numpy.sqrt "numpy.sqrt"): >>> import numpy as np >>> np.emath.sqrt(1) 1.0 >>> np.emath.sqrt([1, 4]) array([1., 2.]) But it automatically handles negative inputs: >>> np.emath.sqrt(-1) 1j >>> np.emath.sqrt([-1,4]) array([0.+1.j, 2.+0.j]) Different results are expected because: floating point 0.0 and -0.0 are distinct. For more control, explicitly use complex() as follows: >>> np.emath.sqrt(complex(-4.0, 0.0)) 2j >>> np.emath.sqrt(complex(-4.0, -0.0)) -2j # numpy.empty numpy.empty(_shape_ , _dtype =float_, _order ='C'_, _*_ , _device =None_, _like =None_) Return a new array of given shape and type, without initializing entries. Parameters: **shape** int or tuple of int Shape of the empty array, e.g., `(2, 3)` or `2`. **dtype** data-type, optional Desired output data-type for the array, e.g, [`numpy.int8`](../arrays.scalars#numpy.int8 "numpy.int8"). 
Default is [`numpy.float64`](../arrays.scalars#numpy.float64 "numpy.float64"). **order**{‘C’, ‘F’}, optional, default: ‘C’ Whether to store multi-dimensional data in row-major (C-style) or column-major (Fortran-style) order in memory. **device** str, optional The device on which to place the created array. Default: `None`. For Array-API interoperability only, so must be `"cpu"` if passed. New in version 2.0.0. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: **out** ndarray Array of uninitialized (arbitrary) data of the given shape, dtype, and order. Object arrays will be initialized to None. See also [`empty_like`](numpy.empty_like#numpy.empty_like "numpy.empty_like") Return an empty array with shape and type of input. [`ones`](numpy.ones#numpy.ones "numpy.ones") Return a new array setting values to one. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Return a new array setting values to zero. [`full`](numpy.full#numpy.full "numpy.full") Return a new array of given shape filled with value. #### Notes Unlike other array creation functions (e.g. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros"), [`ones`](numpy.ones#numpy.ones "numpy.ones"), [`full`](numpy.full#numpy.full "numpy.full")), `empty` does not initialize the values of the array, and may therefore be marginally faster. However, the values stored in the newly allocated array are arbitrary. For reproducible behavior, be sure to set each element of the array before reading. 
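A minimal sketch of the write-before-read discipline the note above recommends:

```python
import numpy as np

# np.empty skips initialization, so the returned values are arbitrary:
# assign every element before reading any of them back.
out = np.empty(5, dtype=np.float64)
for i in range(5):
    out[i] = i * i

assert out.tolist() == [0.0, 1.0, 4.0, 9.0, 16.0]
```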
#### Examples >>> import numpy as np >>> np.empty([2, 2]) array([[ -9.74499359e+001, 6.69583040e-309], [ 2.13182611e-314, 3.06959433e-309]]) #uninitialized >>> np.empty([2, 2], dtype=int) array([[-1073741821, -1067949133], [ 496041986, 19249760]]) #uninitialized # numpy.empty_like numpy.empty_like(_prototype_ , _dtype =None_, _order ='K'_, _subok =True_, _shape =None_, _*_ , _device =None_) Return a new array with the same shape and type as a given array. Parameters: **prototype** array_like The shape and data-type of `prototype` define these same attributes of the returned array. **dtype** data-type, optional Overrides the data type of the result. **order**{‘C’, ‘F’, ‘A’, or ‘K’}, optional Overrides the memory layout of the result. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `prototype` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `prototype` as closely as possible. **subok** bool, optional. If True, then the newly created array will use the sub-class type of `prototype`, otherwise it will be a base-class array. Defaults to True. **shape** int or sequence of ints, optional. Overrides the shape of the result. If order=’K’ and the number of dimensions is unchanged, will try to keep order, otherwise, order=’C’ is implied. **device** str, optional The device on which to place the created array. Default: None. For Array-API interoperability only, so must be `"cpu"` if passed. New in version 2.0.0. Returns: **out** ndarray Array of uninitialized (arbitrary) data with the same shape and type as `prototype`. See also [`ones_like`](numpy.ones_like#numpy.ones_like "numpy.ones_like") Return an array of ones with shape and type of input. [`zeros_like`](numpy.zeros_like#numpy.zeros_like "numpy.zeros_like") Return an array of zeros with shape and type of input. [`full_like`](numpy.full_like#numpy.full_like "numpy.full_like") Return a new array with shape of input filled with value. 
[`empty`](numpy.empty#numpy.empty "numpy.empty") Return a new uninitialized array. #### Notes Unlike other array creation functions (e.g. [`zeros_like`](numpy.zeros_like#numpy.zeros_like "numpy.zeros_like"), [`ones_like`](numpy.ones_like#numpy.ones_like "numpy.ones_like"), [`full_like`](numpy.full_like#numpy.full_like "numpy.full_like")), `empty_like` does not initialize the values of the array, and may therefore be marginally faster. However, the values stored in the newly allocated array are arbitrary. For reproducible behavior, be sure to set each element of the array before reading. #### Examples >>> import numpy as np >>> a = ([1,2,3], [4,5,6]) # a is array-like >>> np.empty_like(a) array([[-1073741821, -1073741821, 3], # uninitialized [ 0, 0, -1073741821]]) >>> a = np.array([[1., 2., 3.],[4.,5.,6.]]) >>> np.empty_like(a) array([[ -2.00000715e+000, 1.48219694e-323, -2.00000572e+000], # uninitialized [ 4.38791518e-305, -2.00000715e+000, 4.17269252e-309]]) # numpy.equal numpy.equal(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'equal'>_ Return (x1 == x2) element-wise. Parameters: **x1, x2** array_like Input arrays. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. 
Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** ndarray or scalar Output array, element-wise comparison of `x1` and `x2`. Typically of type bool, unless `dtype=object` is passed. This is a scalar if both `x1` and `x2` are scalars. See also [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`less`](numpy.less#numpy.less "numpy.less") #### Examples >>> import numpy as np >>> np.equal([0, 1, 3], np.arange(3)) array([ True, True, False]) What is compared are values, not types. So an int (1) and an array of length one can evaluate as True: >>> np.equal(1, np.ones(1)) array([ True]) The `==` operator can be used as a shorthand for `np.equal` on ndarrays. >>> a = np.array([2, 4, 6]) >>> b = np.array([2, 4, 2]) >>> a == b array([ True, True, False]) # numpy.errstate.__call__ method errstate.__call__(_func_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_ufunc_config.py#L459-L483) Call self as a function. # numpy.errstate _class_ numpy.errstate(_** kwargs_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/__init__.py) Context manager for floating-point error handling. Using an instance of `errstate` as a context manager allows statements in that context to execute with a known error handling behavior. Upon entering the context the error handling is set with [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr") and [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall"), and upon exiting it is reset to what it was before. 
Changed in version 1.17.0: `errstate` is also usable as a function decorator, saving a level of indentation if an entire function is wrapped. Changed in version 2.0: `errstate` is now fully thread and asyncio safe, but may not be entered more than once. It is not safe to decorate async functions using `errstate`. Parameters: **kwargs**{divide, over, under, invalid} Keyword arguments. The valid keywords are the possible floating-point exceptions. Each keyword should have a string value that defines the treatment for the particular error. Possible values are {‘ignore’, ‘warn’, ‘raise’, ‘call’, ‘print’, ‘log’}. See also [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"), [`geterr`](numpy.geterr#numpy.geterr "numpy.geterr"), [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall"), [`geterrcall`](numpy.geterrcall#numpy.geterrcall "numpy.geterrcall") #### Notes For complete documentation of the types of floating-point exceptions and treatment options, see [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"). #### Examples >>> import numpy as np >>> olderr = np.seterr(all='ignore') # Set error handling to known state. >>> np.arange(3) / 0. array([nan, inf, inf]) >>> with np.errstate(divide='ignore'): ... np.arange(3) / 0. array([nan, inf, inf]) >>> np.sqrt(-1) np.float64(nan) >>> with np.errstate(invalid='raise'): ... np.sqrt(-1) Traceback (most recent call last): File "<stdin>", line 2, in <module> FloatingPointError: invalid value encountered in sqrt Outside the context the error handling behavior has not changed: >>> np.geterr() {'divide': 'ignore', 'over': 'ignore', 'under': 'ignore', 'invalid': 'ignore'} >>> olderr = np.seterr(**olderr) # restore original state #### Methods [`__call__`](numpy.errstate.__call__#numpy.errstate.__call__ "numpy.errstate.__call__")(func) | Call self as a function. 
---|--- # numpy.exceptions.AxisError _exception_ exceptions.AxisError(_axis_ , _ndim =None_, _msg_prefix =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/exceptions.py#L109-L197) Axis supplied was invalid. This is raised whenever an `axis` parameter is specified that is larger than the number of array dimensions. For compatibility with code written against older numpy versions, which raised a mixture of [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError "\(in Python v3.13\)") and [`IndexError`](https://docs.python.org/3/library/exceptions.html#IndexError "\(in Python v3.13\)") for this situation, this exception subclasses both to ensure that `except ValueError` and `except IndexError` statements continue to catch `AxisError`. Parameters: **axis** int or str The out of bounds axis or a custom exception message. If an axis is provided, then [`ndim`](numpy.ndim#numpy.ndim "numpy.ndim") should be specified as well. **ndim** int, optional The number of array dimensions. **msg_prefix** str, optional A prefix for the exception message. #### Examples >>> import numpy as np >>> array_1d = np.arange(10) >>> np.cumsum(array_1d, axis=1) Traceback (most recent call last): ... numpy.exceptions.AxisError: axis 1 is out of bounds for array of dimension 1 Negative axes are preserved: >>> np.cumsum(array_1d, axis=-2) Traceback (most recent call last): ... numpy.exceptions.AxisError: axis -2 is out of bounds for array of dimension 1 The class constructor generally takes the axis and arrays’ dimensionality as arguments: >>> print(np.exceptions.AxisError(2, 1, msg_prefix='error')) error: axis 2 is out of bounds for array of dimension 1 Alternatively, a custom exception message can be passed: >>> print(np.exceptions.AxisError('Custom error message')) Custom error message Attributes: **axis** int, optional The out of bounds axis or `None` if a custom exception message was provided. 
This should be the axis as passed by the user, before any normalization to resolve negative indices. New in version 1.22. **ndim** int, optional The number of array dimensions or `None` if a custom exception message was provided. New in version 1.22. # numpy.exceptions.ComplexWarning _exception_ exceptions.ComplexWarning[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/exceptions.py#L48-L56) The warning raised when casting a complex dtype to a real dtype. As implemented, casting a complex number to a real discards its imaginary part, but this behavior may not be what the user actually wants. # numpy.exceptions.DTypePromotionError _exception_ exceptions.DTypePromotionError[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/exceptions.py#L200-L247) Multiple DTypes could not be converted to a common one. This exception derives from `TypeError` and is raised whenever dtypes cannot be converted to a single common one. This can be because they are of a different category/class or incompatible instances of the same one (see Examples). #### Notes Many functions will use promotion to find the correct result and implementation. For these functions the error will typically be chained with a more specific error indicating that no implementation was found for the input dtypes. Typically promotion should be considered “invalid” between the dtypes of two arrays when `arr1 == arr2` can safely return all `False` because the dtypes are fundamentally different. #### Examples Datetimes and complex numbers are incompatible classes and cannot be promoted: >>> import numpy as np >>> np.result_type(np.dtype("M8[s]"), np.complex128) Traceback (most recent call last): ... DTypePromotionError: The DType <class 'numpy.dtypes.DateTime64DType'> could not be promoted by <class 'numpy.dtypes.Complex128DType'>. This means that no common DType exists for the given inputs. For example they cannot be stored in a single array unless the dtype is `object`. 
The full list of DTypes is: (<class 'numpy.dtypes.DateTime64DType'>, <class 'numpy.dtypes.Complex128DType'>) For example, with structured dtypes the structures can mismatch: the same `DTypePromotionError` is raised when two structured dtypes differ in their number of fields: >>> dtype1 = np.dtype([("field1", np.float64), ("field2", np.int64)]) >>> dtype2 = np.dtype([("field1", np.float64)]) >>> np.promote_types(dtype1, dtype2) Traceback (most recent call last): ... DTypePromotionError: field names `('field1', 'field2')` and `('field1',)` mismatch. # numpy.exceptions.RankWarning _exception_ exceptions.RankWarning[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/exceptions.py#L87-L93) Matrix rank warning. Issued by polynomial functions when the design matrix is rank deficient. # numpy.exceptions.TooHardError _exception_ exceptions.TooHardError[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/exceptions.py#L97-L106) max_work was exceeded. This is raised whenever the maximum number of candidate solutions to consider specified by the `max_work` parameter is exceeded. Assigning a finite number to max_work may have caused the operation to fail. # numpy.exceptions.VisibleDeprecationWarning _exception_ exceptions.VisibleDeprecationWarning[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/exceptions.py#L76-L84) Visible deprecation warning. By default, python will not show deprecation warnings, so this class can be used when a very visible warning is helpful, for example because the usage is most likely a user bug. # numpy.exp numpy.exp(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'exp'>_ Calculate the exponential of all elements in the input array. Parameters: **x** array_like Input values. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. 
A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** ndarray or scalar Output array, element-wise exponential of `x`. This is a scalar if `x` is a scalar. See also [`expm1`](numpy.expm1#numpy.expm1 "numpy.expm1") Calculate `exp(x) - 1` for all elements in the array. [`exp2`](numpy.exp2#numpy.exp2 "numpy.exp2") Calculate `2**x` for all elements in the array. #### Notes The irrational number `e` is also known as Euler’s number. It is approximately 2.718281, and is the base of the natural logarithm, `ln` (this means that, if \\(x = \ln y = \log_e y\\), then \\(e^x = y\\)). For real input, `exp(x)` is always positive. For complex arguments, `x = a + ib`, we can write \\(e^x = e^a e^{ib}\\). The first term, \\(e^a\\), is already known (it is the real argument, described above). The second term, \\(e^{ib}\\), is \\(\cos b + i \sin b\\), a function with magnitude 1 and a periodic phase. #### References [1] Wikipedia, “Exponential function”. [2] M. Abramovitz and I. A. Stegun, “Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables,” Dover, 1964, p. 69. #### Examples Plot the magnitude and phase of `exp(x)` in the complex plane: >>> import numpy as np >>> import matplotlib.pyplot as plt >>> x = np.linspace(-2*np.pi, 2*np.pi, 100) >>> xx = x + 1j * x[:, np.newaxis] # a + ib over complex plane >>> out = np.exp(xx) >>> plt.subplot(121) >>> plt.imshow(np.abs(out), ... 
extent=[-2*np.pi, 2*np.pi, -2*np.pi, 2*np.pi], cmap='gray') >>> plt.title('Magnitude of exp(x)') >>> plt.subplot(122) >>> plt.imshow(np.angle(out), ... extent=[-2*np.pi, 2*np.pi, -2*np.pi, 2*np.pi], cmap='hsv') >>> plt.title('Phase (angle) of exp(x)') >>> plt.show() # numpy.exp2 numpy.exp2(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'exp2'>_ Calculate `2**p` for all `p` in the input array. Parameters: **x** array_like Input values. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** ndarray or scalar Element-wise 2 to the power `x`. This is a scalar if `x` is a scalar. See also [`power`](numpy.power#numpy.power "numpy.power") #### Examples >>> import numpy as np >>> np.exp2([2, 3]) array([ 4., 8.]) # numpy.expand_dims numpy.expand_dims(_a_ , _axis_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_shape_base_impl.py#L511-L598) Expand the shape of an array. Insert a new axis that will appear at the `axis` position in the expanded array shape. Parameters: **a** array_like Input array. **axis** int or tuple of ints Position in the expanded axes where the new axis (or axes) is placed. 
Deprecated since version 1.13.0: Passing an axis where `axis > a.ndim` will be treated as `axis == a.ndim`, and passing `axis < -a.ndim - 1` will be treated as `axis == 0`. This behavior is deprecated. Returns: **result** ndarray View of `a` with the number of dimensions increased. See also [`squeeze`](numpy.squeeze#numpy.squeeze "numpy.squeeze") The inverse operation, removing singleton dimensions [`reshape`](numpy.reshape#numpy.reshape "numpy.reshape") Insert, remove, and combine dimensions, and resize existing ones [`atleast_1d`](numpy.atleast_1d#numpy.atleast_1d "numpy.atleast_1d"), [`atleast_2d`](numpy.atleast_2d#numpy.atleast_2d "numpy.atleast_2d"), [`atleast_3d`](numpy.atleast_3d#numpy.atleast_3d "numpy.atleast_3d") #### Examples >>> import numpy as np >>> x = np.array([1, 2]) >>> x.shape (2,) The following is equivalent to `x[np.newaxis, :]` or `x[np.newaxis]`: >>> y = np.expand_dims(x, axis=0) >>> y array([[1, 2]]) >>> y.shape (1, 2) The following is equivalent to `x[:, np.newaxis]`: >>> y = np.expand_dims(x, axis=1) >>> y array([[1], [2]]) >>> y.shape (2, 1) `axis` may also be a tuple: >>> y = np.expand_dims(x, axis=(0, 1)) >>> y array([[[1, 2]]]) >>> y = np.expand_dims(x, axis=(2, 0)) >>> y array([[[1], [2]]]) Note that some examples may use `None` instead of `np.newaxis`. These are the same objects: >>> np.newaxis is None True # numpy.expm1 numpy.expm1(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'expm1'>_ Calculate `exp(x) - 1` for all elements in the array. Parameters: **x** array_like Input values. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. 
**where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** ndarray or scalar Element-wise exponential minus one: `out = exp(x) - 1`. This is a scalar if `x` is a scalar. See also [`log1p`](numpy.log1p#numpy.log1p "numpy.log1p") `log(1 + x)`, the inverse of expm1. #### Notes This function provides greater precision than `exp(x) - 1` for small values of `x`. #### Examples The true value of `exp(1e-10) - 1` is `1.00000000005e-10` to about 32 significant digits. This example shows the superiority of expm1 in this case. >>> import numpy as np >>> np.expm1(1e-10) 1.00000000005e-10 >>> np.exp(1e-10) - 1 1.000000082740371e-10 # numpy.extract numpy.extract(_condition_ , _arr_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L1990-L2040) Return the elements of an array that satisfy some condition. This is equivalent to `np.compress(ravel(condition), ravel(arr))`. If `condition` is boolean `np.extract` is equivalent to `arr[condition]`. Note that [`place`](numpy.place#numpy.place "numpy.place") does the exact opposite of `extract`. Parameters: **condition** array_like An array whose nonzero or True entries indicate the elements of `arr` to extract. **arr** array_like Input array of the same size as `condition`. Returns: **extract** ndarray Rank 1 array of values from `arr` where `condition` is True. 
See also [`take`](numpy.take#numpy.take "numpy.take"), [`put`](numpy.put#numpy.put "numpy.put"), [`copyto`](numpy.copyto#numpy.copyto "numpy.copyto"), [`compress`](numpy.compress#numpy.compress "numpy.compress"), [`place`](numpy.place#numpy.place "numpy.place") #### Examples >>> import numpy as np >>> arr = np.arange(12).reshape((3, 4)) >>> arr array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> condition = np.mod(arr, 3)==0 >>> condition array([[ True, False, False, True], [False, False, True, False], [False, True, False, False]]) >>> np.extract(condition, arr) array([0, 3, 6, 9]) If `condition` is boolean: >>> arr[condition] array([0, 3, 6, 9]) # numpy.eye numpy.eye(_N_ , _M=None_ , _k=0_ , _dtype=<class 'float'>_ , _order='C'_ , _*_ , _device=None_ , _like=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_twodim_base_impl.py#L163-L235) Return a 2-D array with ones on the diagonal and zeros elsewhere. Parameters: **N** int Number of rows in the output. **M** int, optional Number of columns in the output. If None, defaults to `N`. **k** int, optional Index of the diagonal: 0 (the default) refers to the main diagonal, a positive value refers to an upper diagonal, and a negative value to a lower diagonal. **dtype** data-type, optional Data-type of the returned array. **order**{‘C’, ‘F’}, optional Whether the output should be stored in row-major (C-style) or column-major (Fortran-style) order in memory. **device** str, optional The device on which to place the created array. Default: None. For Array-API interoperability only, so must be `"cpu"` if passed. New in version 2.0.0. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. 
Returns: **I** ndarray of shape (N,M) An array where all elements are equal to zero, except for the `k`-th diagonal, whose values are equal to one. See also [`identity`](numpy.identity#numpy.identity "numpy.identity") (almost) equivalent function [`diag`](numpy.diag#numpy.diag "numpy.diag") diagonal 2-D array from a 1-D array specified by the user. #### Examples >>> import numpy as np >>> np.eye(2, dtype=int) array([[1, 0], [0, 1]]) >>> np.eye(3, k=1) array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]]) # numpy.fabs numpy.fabs(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'fabs'>_ Compute the absolute values element-wise. This function returns the absolute values (positive magnitude) of the data in `x`. Complex values are not handled, use [`absolute`](numpy.absolute#numpy.absolute "numpy.absolute") to find the absolute values of complex data. Parameters: **x** array_like The array of numbers for which the absolute values are required. If `x` is a scalar, the result `y` will also be a scalar. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray or scalar The absolute values of `x`, the returned values are always floats. 
This is a scalar if `x` is a scalar. See also [`absolute`](numpy.absolute#numpy.absolute "numpy.absolute") Absolute values including `complex` types. #### Examples >>> import numpy as np >>> np.fabs(-1) 1.0 >>> np.fabs([-1.2, 1.2]) array([ 1.2, 1.2]) # numpy.fft.fft fft.fft(_a_ , _n =None_, _axis =-1_, _norm =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/fft/_pocketfft.py#L113-L210) Compute the one-dimensional discrete Fourier Transform. This function computes the one-dimensional _n_ -point discrete Fourier Transform (DFT) with the efficient Fast Fourier Transform (FFT) algorithm [CT]. Parameters: **a** array_like Input array, can be complex. **n** int, optional Length of the transformed axis of the output. If `n` is smaller than the length of the input, the input is cropped. If it is larger, the input is padded with zeros. If `n` is not given, the length of the input along the axis specified by `axis` is used. **axis** int, optional Axis over which to compute the FFT. If not given, the last axis is used. **norm**{“backward”, “ortho”, “forward”}, optional Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. **out** complex ndarray, optional If provided, the result will be placed in this array. It should be of the appropriate shape and dtype. New in version 2.0.0. Returns: **out** complex ndarray The truncated or zero-padded input, transformed along the axis indicated by `axis`, or the last one if `axis` is not specified. Raises: IndexError If `axis` is not a valid axis of `a`. See also [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") for definition of the DFT and conventions used. 
[`ifft`](numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft") The inverse of [`fft`](../routines.fft#module-numpy.fft "numpy.fft"). [`fft2`](numpy.fft.fft2#numpy.fft.fft2 "numpy.fft.fft2") The two-dimensional FFT. [`fftn`](numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn") The _n_ -dimensional FFT. [`rfftn`](numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn") The _n_ -dimensional FFT of real input. [`fftfreq`](numpy.fft.fftfreq#numpy.fft.fftfreq "numpy.fft.fftfreq") Frequency bins for given FFT parameters. #### Notes FFT (Fast Fourier Transform) refers to a way the discrete Fourier Transform (DFT) can be calculated efficiently, by using symmetries in the calculated terms. The symmetry is highest when `n` is a power of 2, and the transform is therefore most efficient for these sizes. The DFT is defined, with the conventions used in this implementation, in the documentation for the [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") module. #### References [CT] Cooley, James W., and John W. Tukey, 1965, “An algorithm for the machine calculation of complex Fourier series,” _Math. Comput._ 19: 297-301. 
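As a small sketch (input values chosen for illustration, not from the original docs) of how the three `norm` modes described above scale the forward transform:

```python
import numpy as np

x = np.ones(4)  # the DFT of a constant signal concentrates in bin 0

back = np.fft.fft(x, norm="backward")  # default: forward pass unscaled
ortho = np.fft.fft(x, norm="ortho")    # forward pass scaled by 1/sqrt(n)
fwd = np.fft.fft(x, norm="forward")    # forward pass scaled by 1/n

# Zero-frequency bins: 4.0 ("backward"), 2.0 ("ortho"), 1.0 ("forward").
```

Whichever mode is chosen, the matching inverse transform applies the complementary factor so that `ifft(fft(x))` recovers `x`.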
#### Examples >>> import numpy as np >>> np.fft.fft(np.exp(2j * np.pi * np.arange(8) / 8)) array([-2.33486982e-16+1.14423775e-17j, 8.00000000e+00-1.25557246e-15j, 2.33486982e-16+2.33486982e-16j, 0.00000000e+00+1.22464680e-16j, -1.14423775e-17+2.33486982e-16j, 0.00000000e+00+5.20784380e-16j, 1.14423775e-17+1.14423775e-17j, 0.00000000e+00+1.22464680e-16j]) In this example, real input has an FFT which is Hermitian, i.e., symmetric in the real part and anti-symmetric in the imaginary part, as described in the [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") documentation: >>> import matplotlib.pyplot as plt >>> t = np.arange(256) >>> sp = np.fft.fft(np.sin(t)) >>> freq = np.fft.fftfreq(t.shape[-1]) >>> plt.plot(freq, sp.real, freq, sp.imag) [<matplotlib.lines.Line2D object at 0x...>, <matplotlib.lines.Line2D object at 0x...>] >>> plt.show() # numpy.fft.fft2 fft.fft2(_a_ , _s =None_, _axes =(-2, -1)_, _norm =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/fft/_pocketfft.py#L1013-L1135) Compute the 2-dimensional discrete Fourier Transform. This function computes the _n_ -dimensional discrete Fourier Transform over any axes in an _M_ -dimensional array by means of the Fast Fourier Transform (FFT). By default, the transform is computed over the last two axes of the input array, i.e., a 2-dimensional FFT. Parameters: **a** array_like Input array, can be complex **s** sequence of ints, optional Shape (length of each transformed axis) of the output (`s[0]` refers to axis 0, `s[1]` to axis 1, etc.). This corresponds to `n` for `fft(x, n)`. Along each axis, if the given shape is smaller than that of the input, the input is cropped. If it is larger, the input is padded with zeros. Changed in version 2.0: If it is `-1`, the whole input is used (no padding/trimming). If `s` is not given, the shape of the input along the axes specified by `axes` is used. Deprecated since version 2.0: If `s` is not `None`, `axes` must not be `None` either. Deprecated since version 2.0: `s` must contain only `int` s, not `None` values. 
`None` values currently mean that the default value for `n` is used in the corresponding 1-D transform, but this behaviour is deprecated. **axes** sequence of ints, optional Axes over which to compute the FFT. If not given, the last two axes are used. A repeated index in `axes` means the transform over that axis is performed multiple times. A one-element sequence means that a one-dimensional FFT is performed. Default: `(-2, -1)`. Deprecated since version 2.0: If `s` is specified, the corresponding `axes` to be transformed must not be `None`. **norm**{“backward”, “ortho”, “forward”}, optional Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. **out** complex ndarray, optional If provided, the result will be placed in this array. It should be of the appropriate shape and dtype for all axes (and hence only the last axis can have `s` not equal to the shape at that axis). New in version 2.0.0. Returns: **out** complex ndarray The truncated or zero-padded input, transformed along the axes indicated by `axes`, or the last two axes if `axes` is not given. Raises: ValueError If `s` and `axes` have different length, or `axes` not given and `len(s) != 2`. IndexError If an element of `axes` is larger than the number of axes of `a`. See also [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") Overall view of discrete Fourier transforms, with definitions and conventions used. [`ifft2`](numpy.fft.ifft2#numpy.fft.ifft2 "numpy.fft.ifft2") The inverse two-dimensional FFT. [`fft`](../routines.fft#module-numpy.fft "numpy.fft") The one-dimensional FFT. [`fftn`](numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn") The _n_ -dimensional FFT. 
[`fftshift`](numpy.fft.fftshift#numpy.fft.fftshift "numpy.fft.fftshift") Shifts zero-frequency terms to the center of the array. For two-dimensional input, swaps first and third quadrants, and second and fourth quadrants. #### Notes `fft2` is just [`fftn`](numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn") with a different default for `axes`. The output, analogously to [`fft`](../routines.fft#module-numpy.fft "numpy.fft"), contains the term for zero frequency in the low-order corner of the transformed axes, the positive frequency terms in the first half of these axes, the term for the Nyquist frequency in the middle of the axes and the negative frequency terms in the second half of the axes, in order of decreasingly negative frequency. See [`fftn`](numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn") for details and a plotting example, and [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") for definitions and conventions used. #### Examples >>> import numpy as np >>> a = np.mgrid[:5, :5][0] >>> np.fft.fft2(a) array([[ 50. +0.j , 0. +0.j , 0. +0.j , # may vary 0. +0.j , 0. +0.j ], [-12.5+17.20477401j, 0. +0.j , 0. +0.j , 0. +0.j , 0. +0.j ], [-12.5 +4.0614962j , 0. +0.j , 0. +0.j , 0. +0.j , 0. +0.j ], [-12.5 -4.0614962j , 0. +0.j , 0. +0.j , 0. +0.j , 0. +0.j ], [-12.5-17.20477401j, 0. +0.j , 0. +0.j , 0. +0.j , 0. +0.j ]]) # numpy.fft.fftfreq fft.fftfreq(_n_ , _d =1.0_, _device =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/fft/_helper.py#L125-L177) Return the Discrete Fourier Transform sample frequencies. The returned float array `f` contains the frequency bin centers in cycles per unit of the sample spacing (with zero at the start). For instance, if the sample spacing is in seconds, then the frequency unit is cycles/second. Given a window length `n` and a sample spacing `d`: f = [0, 1, ..., n/2-1, -n/2, ..., -1] / (d*n) if n is even f = [0, 1, ..., (n-1)/2, -(n-1)/2, ..., -1] / (d*n) if n is odd Parameters: **n** int Window length. 
**d** scalar, optional Sample spacing (inverse of the sampling rate). Defaults to 1. **device** str, optional The device on which to place the created array. Default: `None`. For Array-API interoperability only, so must be `"cpu"` if passed. New in version 2.0.0. Returns: **f** ndarray Array of length `n` containing the sample frequencies. #### Examples >>> import numpy as np >>> signal = np.array([-2, 8, 6, 4, 1, 0, 3, 5], dtype=float) >>> fourier = np.fft.fft(signal) >>> n = signal.size >>> timestep = 0.1 >>> freq = np.fft.fftfreq(n, d=timestep) >>> freq array([ 0. , 1.25, 2.5 , ..., -3.75, -2.5 , -1.25]) # numpy.fft.fftn fft.fftn(_a_ , _s =None_, _axes =None_, _norm =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/fft/_pocketfft.py#L749-L878) Compute the N-dimensional discrete Fourier Transform. This function computes the _N_ -dimensional discrete Fourier Transform over any number of axes in an _M_ -dimensional array by means of the Fast Fourier Transform (FFT). Parameters: **a** array_like Input array, can be complex. **s** sequence of ints, optional Shape (length of each transformed axis) of the output (`s[0]` refers to axis 0, `s[1]` to axis 1, etc.). This corresponds to `n` for `fft(x, n)`. Along any axis, if the given shape is smaller than that of the input, the input is cropped. If it is larger, the input is padded with zeros. Changed in version 2.0: If it is `-1`, the whole input is used (no padding/trimming). If `s` is not given, the shape of the input along the axes specified by `axes` is used. Deprecated since version 2.0: If `s` is not `None`, `axes` must not be `None` either. Deprecated since version 2.0: `s` must contain only `int` s, not `None` values. `None` values currently mean that the default value for `n` is used in the corresponding 1-D transform, but this behaviour is deprecated. **axes** sequence of ints, optional Axes over which to compute the FFT. 
If not given, the last `len(s)` axes are used, or all axes if `s` is also not specified. Repeated indices in `axes` mean that the transform over that axis is performed multiple times. Deprecated since version 2.0: If `s` is specified, the corresponding `axes` to be transformed must be explicitly specified too. **norm**{“backward”, “ortho”, “forward”}, optional Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. **out** complex ndarray, optional If provided, the result will be placed in this array. It should be of the appropriate shape and dtype for all axes (and hence is incompatible with passing in all but the trivial `s`). New in version 2.0.0. Returns: **out** complex ndarray The truncated or zero-padded input, transformed along the axes indicated by `axes`, or by a combination of `s` and `a`, as explained in the parameters section above. Raises: ValueError If `s` and `axes` have different lengths. IndexError If an element of `axes` is larger than the number of axes of `a`. See also [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") Overall view of discrete Fourier transforms, with definitions and conventions used. [`ifftn`](numpy.fft.ifftn#numpy.fft.ifftn "numpy.fft.ifftn") The inverse of `fftn`, the inverse _n_ -dimensional FFT. [`fft`](../routines.fft#module-numpy.fft "numpy.fft") The one-dimensional FFT, with definitions and conventions used. [`rfftn`](numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn") The _n_ -dimensional FFT of real input. [`fft2`](numpy.fft.fft2#numpy.fft.fft2 "numpy.fft.fft2") The two-dimensional FFT.
[`fftshift`](numpy.fft.fftshift#numpy.fft.fftshift "numpy.fft.fftshift") Shifts zero-frequency terms to the center of the array. #### Notes The output, analogously to [`fft`](../routines.fft#module-numpy.fft "numpy.fft"), contains the term for zero frequency in the low-order corner of all axes, the positive frequency terms in the first half of all axes, the term for the Nyquist frequency in the middle of all axes and the negative frequency terms in the second half of all axes, in order of decreasingly negative frequency. See [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") for details, definitions and conventions used. #### Examples >>> import numpy as np >>> a = np.mgrid[:3, :3, :3][0] >>> np.fft.fftn(a, axes=(1, 2)) array([[[ 0.+0.j, 0.+0.j, 0.+0.j], # may vary [ 0.+0.j, 0.+0.j, 0.+0.j], [ 0.+0.j, 0.+0.j, 0.+0.j]], [[ 9.+0.j, 0.+0.j, 0.+0.j], [ 0.+0.j, 0.+0.j, 0.+0.j], [ 0.+0.j, 0.+0.j, 0.+0.j]], [[18.+0.j, 0.+0.j, 0.+0.j], [ 0.+0.j, 0.+0.j, 0.+0.j], [ 0.+0.j, 0.+0.j, 0.+0.j]]]) >>> np.fft.fftn(a, (2, 2), axes=(0, 1)) array([[[ 2.+0.j, 2.+0.j, 2.+0.j], # may vary [ 0.+0.j, 0.+0.j, 0.+0.j]], [[-2.+0.j, -2.+0.j, -2.+0.j], [ 0.+0.j, 0.+0.j, 0.+0.j]]]) >>> import matplotlib.pyplot as plt >>> [X, Y] = np.meshgrid(2 * np.pi * np.arange(200) / 12, ... 2 * np.pi * np.arange(200) / 34) >>> S = np.sin(X) + np.cos(Y) + np.random.uniform(0, 1, X.shape) >>> FS = np.fft.fftn(S) >>> plt.imshow(np.log(np.abs(np.fft.fftshift(FS))**2)) >>> plt.show() # numpy.fft.fftshift fft.fftshift(_x_ , _axes =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/fft/_helper.py#L19-L74) Shift the zero-frequency component to the center of the spectrum. This function swaps half-spaces for all axes listed (defaults to all). Note that `y[0]` is the Nyquist component only if `len(x)` is even. Parameters: **x** array_like Input array. **axes** int or shape tuple, optional Axes over which to shift. Default is None, which shifts all axes. Returns: **y** ndarray The shifted array.
See also [`ifftshift`](numpy.fft.ifftshift#numpy.fft.ifftshift "numpy.fft.ifftshift") The inverse of `fftshift`. #### Examples >>> import numpy as np >>> freqs = np.fft.fftfreq(10, 0.1) >>> freqs array([ 0., 1., 2., ..., -3., -2., -1.]) >>> np.fft.fftshift(freqs) array([-5., -4., -3., -2., -1., 0., 1., 2., 3., 4.]) Shift the zero-frequency component only along the second axis: >>> freqs = np.fft.fftfreq(9, d=1./9).reshape(3, 3) >>> freqs array([[ 0., 1., 2.], [ 3., 4., -4.], [-3., -2., -1.]]) >>> np.fft.fftshift(freqs, axes=(1,)) array([[ 2., 0., 1.], [-4., 3., 4.], [-1., -3., -2.]]) # numpy.fft.hfft fft.hfft(_a_ , _n =None_, _axis =-1_, _norm =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/fft/_pocketfft.py#L523-L623) Compute the FFT of a signal that has Hermitian symmetry, i.e., a real spectrum. Parameters: **a** array_like The input array. **n** int, optional Length of the transformed axis of the output. For `n` output points, `n//2 + 1` input points are necessary. If the input is longer than this, it is cropped. If it is shorter than this, it is padded with zeros. If `n` is not given, it is taken to be `2*(m-1)` where `m` is the length of the input along the axis specified by `axis`. **axis** int, optional Axis over which to compute the FFT. If not given, the last axis is used. **norm**{“backward”, “ortho”, “forward”}, optional Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. **out** ndarray, optional If provided, the result will be placed in this array. It should be of the appropriate shape and dtype. New in version 2.0.0. Returns: **out** ndarray The truncated or zero-padded input, transformed along the axis indicated by `axis`, or the last one if `axis` is not specified. 
The length of the transformed axis is `n`, or, if `n` is not given, `2*m - 2` where `m` is the length of the transformed axis of the input. To get an odd number of output points, `n` must be specified, for instance as `2*m - 1` in the typical case. Raises: IndexError If `axis` is not a valid axis of `a`. See also [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft") Compute the one-dimensional FFT for real input. [`ihfft`](numpy.fft.ihfft#numpy.fft.ihfft "numpy.fft.ihfft") The inverse of `hfft`. #### Notes `hfft`/[`ihfft`](numpy.fft.ihfft#numpy.fft.ihfft "numpy.fft.ihfft") are a pair analogous to [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft")/[`irfft`](numpy.fft.irfft#numpy.fft.irfft "numpy.fft.irfft"), but for the opposite case: here the signal has Hermitian symmetry in the time domain and is real in the frequency domain. So here it’s `hfft` for which you must supply the length of the result if it is to be odd. * even: `ihfft(hfft(a, 2*len(a) - 2)) == a`, within roundoff error, * odd: `ihfft(hfft(a, 2*len(a) - 1)) == a`, within roundoff error. The correct interpretation of the Hermitian input depends on the length of the original data, as given by `n`. This is because each input shape could correspond to either an odd or even length signal. By default, `hfft` assumes an even output length which puts the last entry at the Nyquist frequency, aliasing it with its symmetric counterpart. By Hermitian symmetry, the value is thus treated as purely real. To avoid losing information, the shape of the full signal **must** be given.
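The even/odd round-trip identities in the notes above can be checked numerically. The following sketch (variable names are illustrative, not from the docs) keeps the zero-frequency term, and the last term that lands on the Nyquist bin in the even case, real so that `a` is a valid Hermitian half-spectrum:

```python
import numpy as np

# Sketch of the even/odd round-trip identities for hfft/ihfft.
# `a` plays the role of the half-spectrum; its DC term (and, for the
# even-length case, its last term, which lands on the Nyquist bin)
# must be real for the identity to hold exactly.
rng = np.random.default_rng(0)
m = 5
a = rng.standard_normal(m) + 1j * rng.standard_normal(m)
a[0] = a[0].real
a[-1] = a[-1].real

even_trip = np.fft.ihfft(np.fft.hfft(a, 2 * m - 2))  # even output length
odd_trip = np.fft.ihfft(np.fft.hfft(a, 2 * m - 1))   # odd output length
```

Both round trips recover `a` to within roundoff; dropping the real-Nyquist condition would silently lose the imaginary part of `a[-1]` in the even case.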
#### Examples >>> import numpy as np >>> signal = np.array([1, 2, 3, 4, 3, 2]) >>> np.fft.fft(signal) array([15.+0.j, -4.+0.j, 0.+0.j, -1.-0.j, 0.+0.j, -4.+0.j]) # may vary >>> np.fft.hfft(signal[:4]) # Input first half of signal array([15., -4., 0., -1., 0., -4.]) >>> np.fft.hfft(signal, 6) # Input entire signal and truncate array([15., -4., 0., -1., 0., -4.]) >>> signal = np.array([[1, 1.j], [-1.j, 2]]) >>> np.conj(signal.T) - signal # check Hermitian symmetry array([[ 0.-0.j, -0.+0.j], # may vary [ 0.+0.j, 0.-0.j]]) >>> freq_spectrum = np.fft.hfft(signal) >>> freq_spectrum array([[ 1., 1.], [ 2., -2.]]) # numpy.fft.ifft fft.ifft(_a_ , _n =None_, _axis =-1_, _norm =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/fft/_pocketfft.py#L213-L315) Compute the one-dimensional inverse discrete Fourier Transform. This function computes the inverse of the one-dimensional _n_ -point discrete Fourier transform computed by [`fft`](../routines.fft#module-numpy.fft "numpy.fft"). In other words, `ifft(fft(a)) == a` to within numerical accuracy. For a general description of the algorithm and definitions, see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft"). The input should be ordered in the same way as is returned by [`fft`](../routines.fft#module-numpy.fft "numpy.fft"), i.e., * `a[0]` should contain the zero frequency term, * `a[1:n//2]` should contain the positive-frequency terms, * `a[n//2 + 1:]` should contain the negative-frequency terms, in increasing order starting from the most negative frequency. For an even number of input points, `A[n//2]` represents the sum of the values at the positive and negative Nyquist frequencies, as the two are aliased together. See [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") for details. Parameters: **a** array_like Input array, can be complex. **n** int, optional Length of the transformed axis of the output. If `n` is smaller than the length of the input, the input is cropped. 
If it is larger, the input is padded with zeros. If `n` is not given, the length of the input along the axis specified by `axis` is used. See notes about padding issues. **axis** int, optional Axis over which to compute the inverse DFT. If not given, the last axis is used. **norm**{“backward”, “ortho”, “forward”}, optional Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. **out** complex ndarray, optional If provided, the result will be placed in this array. It should be of the appropriate shape and dtype. New in version 2.0.0. Returns: **out** complex ndarray The truncated or zero-padded input, transformed along the axis indicated by `axis`, or the last one if `axis` is not specified. Raises: IndexError If `axis` is not a valid axis of `a`. See also [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") An introduction, with definitions and general explanations. [`fft`](../routines.fft#module-numpy.fft "numpy.fft") The one-dimensional (forward) FFT, of which `ifft` is the inverse [`ifft2`](numpy.fft.ifft2#numpy.fft.ifft2 "numpy.fft.ifft2") The two-dimensional inverse FFT. [`ifftn`](numpy.fft.ifftn#numpy.fft.ifftn "numpy.fft.ifftn") The n-dimensional inverse FFT. #### Notes If the input parameter `n` is larger than the size of the input, the input is padded by appending zeros at the end. Even though this is the common approach, it might lead to surprising results. If a different padding is desired, it must be performed before calling `ifft`. 
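The padding caveat above can be seen with a short sketch (illustrative, not from the docs): a larger `n` appends zeros at the end of the spectrum, which in FFT ordering falls among the negative-frequency terms, whereas band-limited interpolation requires inserting the zeros between the positive and negative halves:

```python
import numpy as np

x = np.array([0.0, 1.0, 0.0, -1.0])   # one cycle of a sine, 4 samples
X = np.fft.fft(x)                     # spectrum: [0, -2j, 0, 2j]

# Passing a larger n simply appends zeros at the end of X; in FFT
# ordering these land in the negative-frequency half, so the result is
# generally not an interpolation of x.
naive = np.fft.ifft(X, 8)

# Band-limited interpolation instead inserts zeros *between* the
# positive- and negative-frequency halves, rescaled for the changed 1/n
# factor. (This simple split assumes no Nyquist component, which holds
# here because X[2] == 0.)
X_pad = np.concatenate([X[:2], np.zeros(4), X[2:]]) * (8 / 4)
interp = np.fft.ifft(X_pad)
```

`interp` is a real sine sampled twice as finely and agrees with `x` at the original sample points, while `naive` does not.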
#### Examples >>> import numpy as np >>> np.fft.ifft([0, 4, 0, 0]) array([ 1.+0.j, 0.+1.j, -1.+0.j, 0.-1.j]) # may vary Create and plot a band-limited signal with random phases: >>> import matplotlib.pyplot as plt >>> t = np.arange(400) >>> n = np.zeros((400,), dtype=complex) >>> n[40:60] = np.exp(1j*np.random.uniform(0, 2*np.pi, (20,))) >>> s = np.fft.ifft(n) >>> plt.plot(t, s.real, label='real') [] >>> plt.plot(t, s.imag, '--', label='imaginary') [] >>> plt.legend() >>> plt.show() # numpy.fft.ifft2 fft.ifft2(_a_ , _s =None_, _axes =(-2, -1)_, _norm =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/fft/_pocketfft.py#L1138-L1257) Compute the 2-dimensional inverse discrete Fourier Transform. This function computes the inverse of the 2-dimensional discrete Fourier Transform over any number of axes in an M-dimensional array by means of the Fast Fourier Transform (FFT). In other words, `ifft2(fft2(a)) == a` to within numerical accuracy. By default, the inverse transform is computed over the last two axes of the input array. The input, analogously to [`ifft`](numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft"), should be ordered in the same way as is returned by [`fft2`](numpy.fft.fft2#numpy.fft.fft2 "numpy.fft.fft2"), i.e. it should have the term for zero frequency in the low-order corner of the two axes, the positive frequency terms in the first half of these axes, the term for the Nyquist frequency in the middle of the axes and the negative frequency terms in the second half of both axes, in order of decreasingly negative frequency. Parameters: **a** array_like Input array, can be complex. **s** sequence of ints, optional Shape (length of each axis) of the output (`s[0]` refers to axis 0, `s[1]` to axis 1, etc.). This corresponds to `n` for `ifft(x, n)`. Along each axis, if the given shape is smaller than that of the input, the input is cropped. If it is larger, the input is padded with zeros. 
Changed in version 2.0: If it is `-1`, the whole input is used (no padding/trimming). If `s` is not given, the shape of the input along the axes specified by `axes` is used. See notes for issue on [`ifft`](numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft") zero padding. Deprecated since version 2.0: If `s` is not `None`, `axes` must not be `None` either. Deprecated since version 2.0: `s` must contain only `int` s, not `None` values. `None` values currently mean that the default value for `n` is used in the corresponding 1-D transform, but this behaviour is deprecated. **axes** sequence of ints, optional Axes over which to compute the FFT. If not given, the last two axes are used. A repeated index in `axes` means the transform over that axis is performed multiple times. A one-element sequence means that a one-dimensional FFT is performed. Default: `(-2, -1)`. Deprecated since version 2.0: If `s` is specified, the corresponding `axes` to be transformed must not be `None`. **norm**{“backward”, “ortho”, “forward”}, optional Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. **out** complex ndarray, optional If provided, the result will be placed in this array. It should be of the appropriate shape and dtype for all axes (and hence is incompatible with passing in all but the trivial `s`). New in version 2.0.0. Returns: **out** complex ndarray The truncated or zero-padded input, transformed along the axes indicated by `axes`, or the last two axes if `axes` is not given. Raises: ValueError If `s` and `axes` have different lengths, or `axes` not given and `len(s) != 2`. IndexError If an element of `axes` is larger than the number of axes of `a`.
See also [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") Overall view of discrete Fourier transforms, with definitions and conventions used. [`fft2`](numpy.fft.fft2#numpy.fft.fft2 "numpy.fft.fft2") The forward 2-dimensional FFT, of which `ifft2` is the inverse. [`ifftn`](numpy.fft.ifftn#numpy.fft.ifftn "numpy.fft.ifftn") The inverse of the _n_ -dimensional FFT. [`fft`](../routines.fft#module-numpy.fft "numpy.fft") The one-dimensional FFT. [`ifft`](numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft") The one-dimensional inverse FFT. #### Notes `ifft2` is just [`ifftn`](numpy.fft.ifftn#numpy.fft.ifftn "numpy.fft.ifftn") with a different default for `axes`. See [`ifftn`](numpy.fft.ifftn#numpy.fft.ifftn "numpy.fft.ifftn") for details and a plotting example, and [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") for definition and conventions used. Zero-padding, analogously with [`ifft`](numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft"), is performed by appending zeros to the input along the specified dimension. Although this is the common approach, it might lead to surprising results. If another form of zero padding is desired, it must be performed before `ifft2` is called. #### Examples >>> import numpy as np >>> a = 4 * np.eye(4) >>> np.fft.ifft2(a) array([[1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], # may vary [0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j], [0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j], [0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j]]) # numpy.fft.ifftn fft.ifftn(_a_ , _s =None_, _axes =None_, _norm =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/fft/_pocketfft.py#L881-L1010) Compute the N-dimensional inverse discrete Fourier Transform. This function computes the inverse of the N-dimensional discrete Fourier Transform over any number of axes in an M-dimensional array by means of the Fast Fourier Transform (FFT). In other words, `ifftn(fftn(a)) == a` to within numerical accuracy. 
For a description of the definitions and conventions used, see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft"). The input, analogously to [`ifft`](numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft"), should be ordered in the same way as is returned by [`fftn`](numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn"), i.e. it should have the term for zero frequency in all axes in the low-order corner, the positive frequency terms in the first half of all axes, the term for the Nyquist frequency in the middle of all axes and the negative frequency terms in the second half of all axes, in order of decreasingly negative frequency. Parameters: **a** array_like Input array, can be complex. **s** sequence of ints, optional Shape (length of each transformed axis) of the output (`s[0]` refers to axis 0, `s[1]` to axis 1, etc.). This corresponds to `n` for `ifft(x, n)`. Along any axis, if the given shape is smaller than that of the input, the input is cropped. If it is larger, the input is padded with zeros. Changed in version 2.0: If it is `-1`, the whole input is used (no padding/trimming). If `s` is not given, the shape of the input along the axes specified by `axes` is used. See notes for issue on [`ifft`](numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft") zero padding. Deprecated since version 2.0: If `s` is not `None`, `axes` must not be `None` either. Deprecated since version 2.0: `s` must contain only `int` s, not `None` values. `None` values currently mean that the default value for `n` is used in the corresponding 1-D transform, but this behaviour is deprecated. **axes** sequence of ints, optional Axes over which to compute the IFFT. If not given, the last `len(s)` axes are used, or all axes if `s` is also not specified. Repeated indices in `axes` mean that the inverse transform over that axis is performed multiple times. Deprecated since version 2.0: If `s` is specified, the corresponding `axes` to be transformed must be explicitly specified too.
**norm**{“backward”, “ortho”, “forward”}, optional Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. **out** complex ndarray, optional If provided, the result will be placed in this array. It should be of the appropriate shape and dtype for all axes (and hence is incompatible with passing in all but the trivial `s`). New in version 2.0.0. Returns: **out** complex ndarray The truncated or zero-padded input, transformed along the axes indicated by `axes`, or by a combination of `s` and `a`, as explained in the parameters section above. Raises: ValueError If `s` and `axes` have different lengths. IndexError If an element of `axes` is larger than the number of axes of `a`. See also [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") Overall view of discrete Fourier transforms, with definitions and conventions used. [`fftn`](numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn") The forward _n_ -dimensional FFT, of which `ifftn` is the inverse. [`ifft`](numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft") The one-dimensional inverse FFT. [`ifft2`](numpy.fft.ifft2#numpy.fft.ifft2 "numpy.fft.ifft2") The two-dimensional inverse FFT. [`ifftshift`](numpy.fft.ifftshift#numpy.fft.ifftshift "numpy.fft.ifftshift") Undoes [`fftshift`](numpy.fft.fftshift#numpy.fft.fftshift "numpy.fft.fftshift"), shifts zero-frequency terms to beginning of array. #### Notes See [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") for definitions and conventions used. Zero-padding, analogously with [`ifft`](numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft"), is performed by appending zeros to the input along the specified dimension. Although this is the common approach, it might lead to surprising results.
If another form of zero padding is desired, it must be performed before `ifftn` is called. #### Examples >>> import numpy as np >>> a = np.eye(4) >>> np.fft.ifftn(np.fft.fftn(a, axes=(0,)), axes=(1,)) array([[1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], # may vary [0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j], [0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j], [0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j]]) Create and plot an image with band-limited frequency content: >>> import matplotlib.pyplot as plt >>> n = np.zeros((200,200), dtype=complex) >>> n[60:80, 20:40] = np.exp(1j*np.random.uniform(0, 2*np.pi, (20, 20))) >>> im = np.fft.ifftn(n).real >>> plt.imshow(im) >>> plt.show() # numpy.fft.ifftshift fft.ifftshift(_x_ , _axes =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/fft/_helper.py#L77-L122) The inverse of [`fftshift`](numpy.fft.fftshift#numpy.fft.fftshift "numpy.fft.fftshift"). Although identical for even-length `x`, the functions differ by one sample for odd-length `x`. Parameters: **x** array_like Input array. **axes** int or shape tuple, optional Axes over which to calculate. Defaults to None, which shifts all axes. Returns: **y** ndarray The shifted array. See also [`fftshift`](numpy.fft.fftshift#numpy.fft.fftshift "numpy.fft.fftshift") Shift zero-frequency component to the center of the spectrum. #### Examples >>> import numpy as np >>> freqs = np.fft.fftfreq(9, d=1./9).reshape(3, 3) >>> freqs array([[ 0., 1., 2.], [ 3., 4., -4.], [-3., -2., -1.]]) >>> np.fft.ifftshift(np.fft.fftshift(freqs)) array([[ 0., 1., 2.], [ 3., 4., -4.], [-3., -2., -1.]]) # numpy.fft.ihfft fft.ihfft(_a_ , _n =None_, _axis =-1_, _norm =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/fft/_pocketfft.py#L626-L695) Compute the inverse FFT of a signal that has Hermitian symmetry. Parameters: **a** array_like Input array. **n** int, optional Length of the inverse FFT, the number of points along transformation axis in the input to use. 
If `n` is smaller than the length of the input, the input is cropped. If it is larger, the input is padded with zeros. If `n` is not given, the length of the input along the axis specified by `axis` is used. **axis** int, optional Axis over which to compute the inverse FFT. If not given, the last axis is used. **norm**{“backward”, “ortho”, “forward”}, optional Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. **out** complex ndarray, optional If provided, the result will be placed in this array. It should be of the appropriate shape and dtype. New in version 2.0.0. Returns: **out** complex ndarray The truncated or zero-padded input, transformed along the axis indicated by `axis`, or the last one if `axis` is not specified. The length of the transformed axis is `n//2 + 1`. See also [`hfft`](numpy.fft.hfft#numpy.fft.hfft "numpy.fft.hfft"), [`irfft`](numpy.fft.irfft#numpy.fft.irfft "numpy.fft.irfft") #### Notes [`hfft`](numpy.fft.hfft#numpy.fft.hfft "numpy.fft.hfft")/`ihfft` are a pair analogous to [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft")/[`irfft`](numpy.fft.irfft#numpy.fft.irfft "numpy.fft.irfft"), but for the opposite case: here the signal has Hermitian symmetry in the time domain and is real in the frequency domain. So here it’s [`hfft`](numpy.fft.hfft#numpy.fft.hfft "numpy.fft.hfft") for which you must supply the length of the result if it is to be odd: * even: `ihfft(hfft(a, 2*len(a) - 2)) == a`, within roundoff error, * odd: `ihfft(hfft(a, 2*len(a) - 1)) == a`, within roundoff error. 
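As a consistency check of the relationship sketched in the notes above, under the default “backward” normalization `ihfft` of a real spectrum agrees with a conjugated, `1/n`-scaled `rfft`. This is an illustrative sketch implied by the hfft/ihfft definitions, not an official API guarantee; the spectrum below is the one from the docs' own example:

```python
import numpy as np

spectrum = np.array([15.0, -4.0, 0.0, -1.0, 0.0, -4.0])  # real spectrum
via_ihfft = np.fft.ihfft(spectrum)
via_rfft = np.conj(np.fft.rfft(spectrum)) / len(spectrum)
# Both recover the Hermitian half-signal [1, 2, 3, 4].
```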
#### Examples >>> import numpy as np >>> spectrum = np.array([ 15, -4, 0, -1, 0, -4]) >>> np.fft.ifft(spectrum) array([1.+0.j, 2.+0.j, 3.+0.j, 4.+0.j, 3.+0.j, 2.+0.j]) # may vary >>> np.fft.ihfft(spectrum) array([ 1.-0.j, 2.-0.j, 3.-0.j, 4.-0.j]) # may vary # numpy.fft.irfft fft.irfft(_a_ , _n =None_, _axis =-1_, _norm =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/fft/_pocketfft.py#L415-L520) Computes the inverse of [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft"). This function computes the inverse of the one-dimensional _n_ -point discrete Fourier Transform of real input computed by [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft"). In other words, `irfft(rfft(a), len(a)) == a` to within numerical accuracy. (See Notes below for why `len(a)` is necessary here.) The input is expected to be in the form returned by [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft"), i.e. the real zero- frequency term followed by the complex positive frequency terms in order of increasing frequency. Since the discrete Fourier Transform of real input is Hermitian-symmetric, the negative frequency terms are taken to be the complex conjugates of the corresponding positive frequency terms. Parameters: **a** array_like The input array. **n** int, optional Length of the transformed axis of the output. For `n` output points, `n//2+1` input points are necessary. If the input is longer than this, it is cropped. If it is shorter than this, it is padded with zeros. If `n` is not given, it is taken to be `2*(m-1)` where `m` is the length of the input along the axis specified by `axis`. **axis** int, optional Axis over which to compute the inverse FFT. If not given, the last axis is used. **norm**{“backward”, “ortho”, “forward”}, optional Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. 
Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. **out** ndarray, optional If provided, the result will be placed in this array. It should be of the appropriate shape and dtype. New in version 2.0.0. Returns: **out** ndarray The truncated or zero-padded input, transformed along the axis indicated by `axis`, or the last one if `axis` is not specified. The length of the transformed axis is `n`, or, if `n` is not given, `2*(m-1)` where `m` is the length of the transformed axis of the input. To get an odd number of output points, `n` must be specified. Raises: IndexError If `axis` is not a valid axis of `a`. See also [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") For definition of the DFT and conventions used. [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft") The one-dimensional FFT of real input, of which `irfft` is the inverse. [`fft`](../routines.fft#module-numpy.fft "numpy.fft") The one-dimensional FFT. [`irfft2`](numpy.fft.irfft2#numpy.fft.irfft2 "numpy.fft.irfft2") The inverse of the two-dimensional FFT of real input. [`irfftn`](numpy.fft.irfftn#numpy.fft.irfftn "numpy.fft.irfftn") The inverse of the _n_ -dimensional FFT of real input. #### Notes Returns the real valued `n`-point inverse discrete Fourier transform of `a`, where `a` contains the non-negative frequency terms of a Hermitian-symmetric sequence. `n` is the length of the result, not the input. If you specify an `n` such that `a` must be zero-padded or truncated, the extra/removed values will be added/removed at high frequencies. One can thus resample a series to `m` points via Fourier interpolation by: `a_resamp = irfft(rfft(a), m)`. The correct interpretation of the Hermitian input depends on the length of the original data, as given by `n`. This is because each input shape could correspond to either an odd or even length signal.
By default, `irfft` assumes an even output length which puts the last entry at the Nyquist frequency; aliasing with its symmetric counterpart. By Hermitian symmetry, the value is thus treated as purely real. To avoid losing information, the correct length of the real input **must** be given. #### Examples >>> import numpy as np >>> np.fft.ifft([1, -1j, -1, 1j]) array([0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j]) # may vary >>> np.fft.irfft([1, -1j, -1]) array([0., 1., 0., 0.]) Notice how the last term in the input to the ordinary [`ifft`](numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft") is the complex conjugate of the second term, and the output has zero imaginary part everywhere. When calling `irfft`, the negative frequencies are not specified, and the output array is purely real. # numpy.fft.irfft2 fft.irfft2(_a_ , _s =None_, _axes =(-2, -1)_, _norm =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/fft/_pocketfft.py#L1606-L1687) Computes the inverse of [`rfft2`](numpy.fft.rfft2#numpy.fft.rfft2 "numpy.fft.rfft2"). Parameters: **a** array_like The input array **s** sequence of ints, optional Shape of the real output to the inverse FFT. Changed in version 2.0: If it is `-1`, the whole input is used (no padding/trimming). Deprecated since version 2.0: If `s` is not `None`, `axes` must not be `None` either. Deprecated since version 2.0: `s` must contain only `int` s, not `None` values. `None` values currently mean that the default value for `n` is used in the corresponding 1-D transform, but this behaviour is deprecated. **axes** sequence of ints, optional The axes over which to compute the inverse fft. Default: `(-2, -1)`, the last two axes. Deprecated since version 2.0: If `s` is specified, the corresponding `axes` to be transformed must not be `None`. **norm**{“backward”, “ortho”, “forward”}, optional Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. 
Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. **out** ndarray, optional If provided, the result will be placed in this array. It should be of the appropriate shape and dtype for the last transformation. New in version 2.0.0. Returns: **out** ndarray The result of the inverse real 2-D FFT. See also [`rfft2`](numpy.fft.rfft2#numpy.fft.rfft2 "numpy.fft.rfft2") The forward two-dimensional FFT of real input, of which `irfft2` is the inverse. [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft") The one-dimensional FFT for real input. [`irfft`](numpy.fft.irfft#numpy.fft.irfft "numpy.fft.irfft") The inverse of the one-dimensional FFT of real input. [`irfftn`](numpy.fft.irfftn#numpy.fft.irfftn "numpy.fft.irfftn") Compute the inverse of the N-dimensional FFT of real input. #### Notes This is really [`irfftn`](numpy.fft.irfftn#numpy.fft.irfftn "numpy.fft.irfftn") with different defaults. For more details see [`irfftn`](numpy.fft.irfftn#numpy.fft.irfftn "numpy.fft.irfftn"). #### Examples >>> import numpy as np >>> a = np.mgrid[:5, :5][0] >>> A = np.fft.rfft2(a) >>> np.fft.irfft2(A, s=a.shape) array([[0., 0., 0., 0., 0.], [1., 1., 1., 1., 1.], [2., 2., 2., 2., 2.], [3., 3., 3., 3., 3.], [4., 4., 4., 4., 4.]]) # numpy.fft.irfftn fft.irfftn(_a_ , _s =None_, _axes =None_, _norm =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/fft/_pocketfft.py#L1467-L1603) Computes the inverse of [`rfftn`](numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn"). This function computes the inverse of the N-dimensional discrete Fourier Transform for real input over any number of axes in an M-dimensional array by means of the Fast Fourier Transform (FFT). In other words, `irfftn(rfftn(a), a.shape) == a` to within numerical accuracy. 
(The `a.shape` is necessary like `len(a)` is for [`irfft`](numpy.fft.irfft#numpy.fft.irfft "numpy.fft.irfft"), and for the same reason.) The input should be ordered in the same way as is returned by [`rfftn`](numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn"), i.e. as for [`irfft`](numpy.fft.irfft#numpy.fft.irfft "numpy.fft.irfft") for the final transformation axis, and as for [`ifftn`](numpy.fft.ifftn#numpy.fft.ifftn "numpy.fft.ifftn") along all the other axes. Parameters: **a** array_like Input array. **s** sequence of ints, optional Shape (length of each transformed axis) of the output (`s[0]` refers to axis 0, `s[1]` to axis 1, etc.). `s` is also the number of input points used along this axis, except for the last axis, where `s[-1]//2+1` points of the input are used. Along any axis, if the shape indicated by `s` is smaller than that of the input, the input is cropped. If it is larger, the input is padded with zeros. Changed in version 2.0: If it is `-1`, the whole input is used (no padding/trimming). If `s` is not given, the shape of the input along the axes specified by axes is used. Except for the last axis which is taken to be `2*(m-1)` where `m` is the length of the input along that axis. Deprecated since version 2.0: If `s` is not `None`, `axes` must not be `None` either. Deprecated since version 2.0: `s` must contain only `int` s, not `None` values. `None` values currently mean that the default value for `n` is used in the corresponding 1-D transform, but this behaviour is deprecated. **axes** sequence of ints, optional Axes over which to compute the inverse FFT. If not given, the last `len(s)` axes are used, or all axes if `s` is also not specified. Repeated indices in `axes` means that the inverse transform over that axis is performed multiple times. Deprecated since version 2.0: If `s` is specified, the corresponding `axes` to be transformed must be explicitly specified too. 
**norm**{“backward”, “ortho”, “forward”}, optional Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. **out** ndarray, optional If provided, the result will be placed in this array. It should be of the appropriate shape and dtype for the last transformation. New in version 2.0.0. Returns: **out** ndarray The truncated or zero-padded input, transformed along the axes indicated by `axes`, or by a combination of `s` or `a`, as explained in the parameters section above. The length of each transformed axis is as given by the corresponding element of `s`, or the length of the input in every axis except for the last one if `s` is not given. In the final transformed axis the length of the output when `s` is not given is `2*(m-1)` where `m` is the length of the final transformed axis of the input. To get an odd number of output points in the final axis, `s` must be specified. Raises: ValueError If `s` and `axes` have different length. IndexError If an element of `axes` is larger than the number of axes of `a`. See also [`rfftn`](numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn") The forward n-dimensional FFT of real input, of which [`irfftn`](numpy.fft.irfftn#numpy.fft.irfftn "numpy.fft.irfftn") is the inverse. [`fft`](../routines.fft#module-numpy.fft "numpy.fft") The one-dimensional FFT, with definitions and conventions used. [`irfft`](numpy.fft.irfft#numpy.fft.irfft "numpy.fft.irfft") The inverse of the one-dimensional FFT of real input. [`irfft2`](numpy.fft.irfft2#numpy.fft.irfft2 "numpy.fft.irfft2") The inverse of the two-dimensional FFT of real input. #### Notes See [`fft`](../routines.fft#module-numpy.fft "numpy.fft") for definitions and conventions used. 
See [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft") for definitions and conventions used for real input. The correct interpretation of the hermitian input depends on the shape of the original data, as given by `s`. This is because each input shape could correspond to either an odd or even length signal. By default, `irfftn` assumes an even output length which puts the last entry at the Nyquist frequency; aliasing with its symmetric counterpart. When performing the final complex to real transform, the last value is thus treated as purely real. To avoid losing information, the correct shape of the real input **must** be given. #### Examples >>> import numpy as np >>> a = np.zeros((3, 2, 2)) >>> a[0, 0, 0] = 3 * 2 * 2 >>> np.fft.irfftn(a) array([[[1., 1.], [1., 1.]], [[1., 1.], [1., 1.]], [[1., 1.], [1., 1.]]]) # numpy.fft.rfft fft.rfft(_a_ , _n =None_, _axis =-1_, _norm =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/fft/_pocketfft.py#L318-L412) Compute the one-dimensional discrete Fourier Transform for real input. This function computes the one-dimensional _n_ -point discrete Fourier Transform (DFT) of a real-valued array by means of an efficient algorithm called the Fast Fourier Transform (FFT). Parameters: **a** array_like Input array **n** int, optional Number of points along transformation axis in the input to use. If `n` is smaller than the length of the input, the input is cropped. If it is larger, the input is padded with zeros. If `n` is not given, the length of the input along the axis specified by `axis` is used. **axis** int, optional Axis over which to compute the FFT. If not given, the last axis is used. **norm**{“backward”, “ortho”, “forward”}, optional Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. 
New in version 1.20.0: The “backward”, “forward” values were added. **out** complex ndarray, optional If provided, the result will be placed in this array. It should be of the appropriate shape and dtype. New in version 2.0.0. Returns: **out** complex ndarray The truncated or zero-padded input, transformed along the axis indicated by `axis`, or the last one if `axis` is not specified. If `n` is even, the length of the transformed axis is `(n/2)+1`. If `n` is odd, the length is `(n+1)/2`. Raises: IndexError If `axis` is not a valid axis of `a`. See also [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") For definition of the DFT and conventions used. [`irfft`](numpy.fft.irfft#numpy.fft.irfft "numpy.fft.irfft") The inverse of `rfft`. [`fft`](../routines.fft#module-numpy.fft "numpy.fft") The one-dimensional FFT of general (complex) input. [`fftn`](numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn") The _n_ -dimensional FFT. [`rfftn`](numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn") The _n_ -dimensional FFT of real input. #### Notes When the DFT is computed for purely real input, the output is Hermitian-symmetric, i.e. the negative frequency terms are just the complex conjugates of the corresponding positive-frequency terms, and the negative-frequency terms are therefore redundant. This function does not compute the negative frequency terms, and the length of the transformed axis of the output is therefore `n//2 + 1`. When `A = rfft(a)` and fs is the sampling frequency, `A[0]` contains the zero-frequency term 0*fs, which is real due to Hermitian symmetry. If `n` is even, `A[-1]` contains the term representing both positive and negative Nyquist frequency (+fs/2 and -fs/2), and must also be purely real. If `n` is odd, there is no term at fs/2; `A[-1]` contains the largest positive frequency (fs/2*(n-1)/n), and is complex in the general case. If the input `a` contains an imaginary part, it is silently discarded. 
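The output-length rule above (`n//2 + 1` bins, with a real Nyquist bin only for even `n`) and the silent discarding of imaginary parts can be checked directly; a minimal sketch with arbitrary input values:

```python
import numpy as np

# n = 8 (even) -> 8//2 + 1 = 5 bins; the last bin is the (real) Nyquist term.
# n = 7 (odd)  -> 7//2 + 1 = 4 bins; there is no Nyquist bin.
even = np.fft.rfft(np.arange(8.0))
odd = np.fft.rfft(np.arange(7.0))
print(even.shape, odd.shape)            # (5,) (4,)
print(abs(even[-1].imag) < 1e-12)       # True: Nyquist term is purely real

# The imaginary part of a complex input is silently discarded:
x = np.arange(4.0) + 1j
print(np.allclose(np.fft.rfft(x), np.fft.rfft(x.real)))  # True
```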
#### Examples >>> import numpy as np >>> np.fft.fft([0, 1, 0, 0]) array([ 1.+0.j, 0.-1.j, -1.+0.j, 0.+1.j]) # may vary >>> np.fft.rfft([0, 1, 0, 0]) array([ 1.+0.j, 0.-1.j, -1.+0.j]) # may vary Notice how the final element of the [`fft`](../routines.fft#module-numpy.fft "numpy.fft") output is the complex conjugate of the second element, for real input. For `rfft`, this symmetry is exploited to compute only the non-negative frequency terms. # numpy.fft.rfft2 fft.rfft2(_a_ , _s =None_, _axes =(-2, -1)_, _norm =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/fft/_pocketfft.py#L1387-L1464) Compute the 2-dimensional FFT of a real array. Parameters: **a** array Input array, taken to be real. **s** sequence of ints, optional Shape of the FFT. Changed in version 2.0: If it is `-1`, the whole input is used (no padding/trimming). Deprecated since version 2.0: If `s` is not `None`, `axes` must not be `None` either. Deprecated since version 2.0: `s` must contain only `int` s, not `None` values. `None` values currently mean that the default value for `n` is used in the corresponding 1-D transform, but this behaviour is deprecated. **axes** sequence of ints, optional Axes over which to compute the FFT. Default: `(-2, -1)`. Deprecated since version 2.0: If `s` is specified, the corresponding `axes` to be transformed must not be `None`. **norm**{“backward”, “ortho”, “forward”}, optional Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. **out** complex ndarray, optional If provided, the result will be placed in this array. It should be of the appropriate shape and dtype for all axes (and hence is incompatible with passing in all but the trivial `s`). New in version 2.0.0. 
Returns: **out** ndarray The result of the real 2-D FFT. See also [`rfftn`](numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn") Compute the N-dimensional discrete Fourier Transform for real input. #### Notes This is really just [`rfftn`](numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn") with different default behavior. For more details see [`rfftn`](numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn"). #### Examples >>> import numpy as np >>> a = np.mgrid[:5, :5][0] >>> np.fft.rfft2(a) array([[ 50. +0.j , 0. +0.j , 0. +0.j ], [-12.5+17.20477401j, 0. +0.j , 0. +0.j ], [-12.5 +4.0614962j , 0. +0.j , 0. +0.j ], [-12.5 -4.0614962j , 0. +0.j , 0. +0.j ], [-12.5-17.20477401j, 0. +0.j , 0. +0.j ]]) # numpy.fft.rfftfreq fft.rfftfreq(_n_ , _d =1.0_, _device =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/fft/_helper.py#L180-L235) Return the Discrete Fourier Transform sample frequencies (for usage with rfft, irfft). The returned float array `f` contains the frequency bin centers in cycles per unit of the sample spacing (with zero at the start). For instance, if the sample spacing is in seconds, then the frequency unit is cycles/second. Given a window length `n` and a sample spacing `d`: f = [0, 1, ..., n/2-1, n/2] / (d*n) if n is even f = [0, 1, ..., (n-1)/2-1, (n-1)/2] / (d*n) if n is odd Unlike [`fftfreq`](numpy.fft.fftfreq#numpy.fft.fftfreq "numpy.fft.fftfreq") (but like [`scipy.fftpack.rfftfreq`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.fftpack.rfftfreq.html#scipy.fftpack.rfftfreq "\(in SciPy v1.14.1\)")) the Nyquist frequency component is considered to be positive. Parameters: **n** int Window length. **d** scalar, optional Sample spacing (inverse of the sampling rate). Defaults to 1. **device** str, optional The device on which to place the created array. Default: `None`. For Array-API interoperability only, so must be `"cpu"` if passed. New in version 2.0.0. 
Returns: **f** ndarray Array of length `n//2 + 1` containing the sample frequencies. #### Examples >>> import numpy as np >>> signal = np.array([-2, 8, 6, 4, 1, 0, 3, 5, -3, 4], dtype=float) >>> fourier = np.fft.rfft(signal) >>> n = signal.size >>> sample_rate = 100 >>> freq = np.fft.fftfreq(n, d=1./sample_rate) >>> freq array([ 0., 10., 20., ..., -30., -20., -10.]) >>> freq = np.fft.rfftfreq(n, d=1./sample_rate) >>> freq array([ 0., 10., 20., 30., 40., 50.]) # numpy.fft.rfftn fft.rfftn(_a_ , _s =None_, _axes =None_, _norm =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/fft/_pocketfft.py#L1260-L1384) Compute the N-dimensional discrete Fourier Transform for real input. This function computes the N-dimensional discrete Fourier Transform over any number of axes in an M-dimensional real array by means of the Fast Fourier Transform (FFT). By default, all axes are transformed, with the real transform performed over the last axis, while the remaining transforms are complex. Parameters: **a** array_like Input array, taken to be real. **s** sequence of ints, optional Shape (length along each transformed axis) to use from the input. (`s[0]` refers to axis 0, `s[1]` to axis 1, etc.). The final element of `s` corresponds to `n` for `rfft(x, n)`, while for the remaining axes, it corresponds to `n` for `fft(x, n)`. Along any axis, if the given shape is smaller than that of the input, the input is cropped. If it is larger, the input is padded with zeros. Changed in version 2.0: If it is `-1`, the whole input is used (no padding/trimming). If `s` is not given, the shape of the input along the axes specified by `axes` is used. Deprecated since version 2.0: If `s` is not `None`, `axes` must not be `None` either. Deprecated since version 2.0: `s` must contain only `int` s, not `None` values. `None` values currently mean that the default value for `n` is used in the corresponding 1-D transform, but this behaviour is deprecated. 
**axes** sequence of ints, optional Axes over which to compute the FFT. If not given, the last `len(s)` axes are used, or all axes if `s` is also not specified. Deprecated since version 2.0: If `s` is specified, the corresponding `axes` to be transformed must be explicitly specified too. **norm**{“backward”, “ortho”, “forward”}, optional Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. **out** complex ndarray, optional If provided, the result will be placed in this array. It should be of the appropriate shape and dtype for all axes (and hence is incompatible with passing in all but the trivial `s`). New in version 2.0.0. Returns: **out** complex ndarray The truncated or zero-padded input, transformed along the axes indicated by `axes`, or by a combination of `s` and `a`, as explained in the parameters section above. The length of the last axis transformed will be `s[-1]//2+1`, while the remaining transformed axes will have lengths according to `s`, or unchanged from the input. Raises: ValueError If `s` and `axes` have different length. IndexError If an element of `axes` is larger than the number of axes of `a`. See also [`irfftn`](numpy.fft.irfftn#numpy.fft.irfftn "numpy.fft.irfftn") The inverse of `rfftn`, i.e. the inverse of the n-dimensional FFT of real input. [`fft`](../routines.fft#module-numpy.fft "numpy.fft") The one-dimensional FFT, with definitions and conventions used. [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft") The one-dimensional FFT of real input. [`fftn`](numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn") The n-dimensional FFT. [`rfft2`](numpy.fft.rfft2#numpy.fft.rfft2 "numpy.fft.rfft2") The two-dimensional FFT of real input. 
#### Notes The transform for real input is performed over the last transformation axis, as by [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft"), then the transform over the remaining axes is performed as by [`fftn`](numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn"). The order of the output is as for [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft") for the final transformation axis, and as for [`fftn`](numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn") for the remaining transformation axes. See [`fft`](../routines.fft#module-numpy.fft "numpy.fft") for details, definitions and conventions used. #### Examples >>> import numpy as np >>> a = np.ones((2, 2, 2)) >>> np.fft.rfftn(a) array([[[8.+0.j, 0.+0.j], # may vary [0.+0.j, 0.+0.j]], [[0.+0.j, 0.+0.j], [0.+0.j, 0.+0.j]]]) >>> np.fft.rfftn(a, axes=(2, 0)) array([[[4.+0.j, 0.+0.j], # may vary [4.+0.j, 0.+0.j]], [[0.+0.j, 0.+0.j], [0.+0.j, 0.+0.j]]]) # numpy.fill_diagonal numpy.fill_diagonal(_a_ , _val_ , _wrap =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_index_tricks_impl.py#L815-L944) Fill the main diagonal of the given array of any dimensionality. For an array `a` with `a.ndim >= 2`, the diagonal is the list of values `a[i, ..., i]` with indices `i` all identical. This function modifies the input array in-place without returning a value. Parameters: **a** array, at least 2-D. Array whose diagonal is to be filled in-place. **val** scalar or array_like Value(s) to write on the diagonal. If `val` is scalar, the value is written along the diagonal. If array-like, the flattened `val` is written along the diagonal, repeating if necessary to fill all diagonal entries. **wrap** bool For tall matrices in NumPy version up to 1.6.2, the diagonal “wrapped” after N columns. You can have this behavior with this option. This affects only tall matrices. 
See also [`diag_indices`](numpy.diag_indices#numpy.diag_indices "numpy.diag_indices"), [`diag_indices_from`](numpy.diag_indices_from#numpy.diag_indices_from "numpy.diag_indices_from") #### Notes This functionality can be obtained via [`diag_indices`](numpy.diag_indices#numpy.diag_indices "numpy.diag_indices"), but internally this version uses a much faster implementation that never constructs the indices and uses simple slicing. #### Examples >>> import numpy as np >>> a = np.zeros((3, 3), int) >>> np.fill_diagonal(a, 5) >>> a array([[5, 0, 0], [0, 5, 0], [0, 0, 5]]) The same function can operate on a 4-D array: >>> a = np.zeros((3, 3, 3, 3), int) >>> np.fill_diagonal(a, 4) We only show a few blocks for clarity: >>> a[0, 0] array([[4, 0, 0], [0, 0, 0], [0, 0, 0]]) >>> a[1, 1] array([[0, 0, 0], [0, 4, 0], [0, 0, 0]]) >>> a[2, 2] array([[0, 0, 0], [0, 0, 0], [0, 0, 4]]) The wrap option affects only tall matrices: >>> # tall matrices no wrap >>> a = np.zeros((5, 3), int) >>> np.fill_diagonal(a, 4) >>> a array([[4, 0, 0], [0, 4, 0], [0, 0, 4], [0, 0, 0], [0, 0, 0]]) >>> # tall matrices wrap >>> a = np.zeros((5, 3), int) >>> np.fill_diagonal(a, 4, wrap=True) >>> a array([[4, 0, 0], [0, 4, 0], [0, 0, 4], [0, 0, 0], [4, 0, 0]]) >>> # wide matrices >>> a = np.zeros((3, 5), int) >>> np.fill_diagonal(a, 4, wrap=True) >>> a array([[4, 0, 0, 0, 0], [0, 4, 0, 0, 0], [0, 0, 4, 0, 0]]) The anti-diagonal can be filled by reversing the order of elements using either [`numpy.flipud`](numpy.flipud#numpy.flipud "numpy.flipud") or [`numpy.fliplr`](numpy.fliplr#numpy.fliplr "numpy.fliplr"). >>> a = np.zeros((3, 3), int); >>> np.fill_diagonal(np.fliplr(a), [1,2,3]) # Horizontal flip >>> a array([[0, 0, 1], [0, 2, 0], [3, 0, 0]]) >>> np.fill_diagonal(np.flipud(a), [1,2,3]) # Vertical flip >>> a array([[0, 0, 3], [0, 2, 0], [1, 0, 0]]) Note that the order in which the diagonal is filled varies depending on the flip function. 
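`fill_diagonal` itself has no offset argument. Because basic slicing returns a view, a common workaround (a sketch, not part of the `fill_diagonal` API) is to call it on a sliced view so the writes land in the original array:

```python
import numpy as np

a = np.zeros((4, 4), int)

# Offset +1 (superdiagonal): drop the first column; the slice is a view,
# so fill_diagonal writes through to `a`.
np.fill_diagonal(a[:, 1:], 7)

# Offset -1 (subdiagonal): drop the first row.
np.fill_diagonal(a[1:, :], 9)

print(a)
# [[0 7 0 0]
#  [9 0 7 0]
#  [0 9 0 7]
#  [0 0 9 0]]
```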
# numpy.finfo _class_ numpy.finfo(_dtype_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/__init__.py) Machine limits for floating point types. Parameters: **dtype** float, dtype, or instance Kind of floating point or complex floating point data-type about which to get information. See also [`iinfo`](numpy.iinfo#numpy.iinfo "numpy.iinfo") The equivalent for integer data types. [`spacing`](numpy.spacing#numpy.spacing "numpy.spacing") The distance between a value and the nearest adjacent number. [`nextafter`](numpy.nextafter#numpy.nextafter "numpy.nextafter") The next floating point value after x1 towards x2. #### Notes For developers of NumPy: do not instantiate this at the module level. The initial calculation of these parameters is expensive and negatively impacts import times. These objects are cached, so calling `finfo()` repeatedly inside your functions is not a problem. Note that `smallest_normal` is not actually the smallest positive representable value in a NumPy floating point type. As in the IEEE-754 standard [1], NumPy floating point types make use of subnormal numbers to fill the gap between 0 and `smallest_normal`. However, subnormal numbers may have significantly reduced precision [2]. This function can also be used for complex data types. In that case, the output is the same as for the corresponding real float type (e.g. numpy.finfo(numpy.csingle) is the same as numpy.finfo(numpy.single)); the reported limits apply to both the real and imaginary components. #### References [1] IEEE Standard for Floating-Point Arithmetic, IEEE Std 754-2008, pp. 1-70, 2008. [2] Wikipedia, “Denormal Numbers”. #### Examples >>> import numpy as np >>> np.finfo(np.float64).dtype dtype('float64') >>> np.finfo(np.complex64).dtype dtype('float32') Attributes: **bits** int The number of bits occupied by the type. **dtype** dtype Returns the dtype for which `finfo` returns information. 
For complex input, the returned dtype is the associated `float*` dtype for its real and complex components. **eps** float The difference between 1.0 and the next smallest representable float larger than 1.0. For example, for 64-bit binary floats in the IEEE-754 standard, `eps = 2**-52`, approximately 2.22e-16. **epsneg** float The difference between 1.0 and the next smallest representable float less than 1.0. For example, for 64-bit binary floats in the IEEE-754 standard, `epsneg = 2**-53`, approximately 1.11e-16. **iexp** int The number of bits in the exponent portion of the floating point representation. **machep** int The exponent that yields `eps`. **max** floating point number of the appropriate type The largest representable number. **maxexp** int The smallest positive power of the base (2) that causes overflow. **min** floating point number of the appropriate type The smallest representable number, typically `-max`. **minexp** int The most negative power of the base (2) consistent with there being no leading 0’s in the mantissa. **negep** int The exponent that yields `epsneg`. **nexp** int The number of bits in the exponent including its sign and bias. **nmant** int The number of bits in the mantissa. **precision** int The approximate number of decimal digits to which this kind of float is precise. **resolution** floating point number of the appropriate type The approximate decimal resolution of this type, i.e., `10**-precision`. [`tiny`](numpy.finfo.tiny#numpy.finfo.tiny "numpy.finfo.tiny")float Return the value for tiny, alias of smallest_normal. [`smallest_normal`](numpy.finfo.smallest_normal#numpy.finfo.smallest_normal "numpy.finfo.smallest_normal")float Return the value for the smallest normal. **smallest_subnormal** float The smallest positive floating point number with 0 as leading bit in the mantissa following IEEE-754. # numpy.finfo.smallest_normal property _property_ finfo.smallest_normal Return the value for the smallest normal. 
Returns: **smallest_normal** float Value for the smallest normal. Warns: UserWarning If the calculated value for the smallest normal is requested for double-double. # numpy.finfo.tiny property _property_ finfo.tiny Return the value for tiny, alias of smallest_normal. Returns: **tiny** float Value for the smallest normal, alias of smallest_normal. Warns: UserWarning If the calculated value for the smallest normal is requested for double-double. # numpy.fix numpy.fix(_x_ , _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_ufunclike_impl.py#L16-L67) Round to nearest integer towards zero. Round an array of floats element-wise to the nearest integer towards zero. The rounded values have the same data-type as the input. Parameters: **x** array_like An array to be rounded **out** ndarray, optional A location into which the result is stored. If provided, it must have a shape that the input broadcasts to. If not provided or None, a freshly-allocated array is returned. Returns: **out** ndarray of floats An array with the same dimensions and data-type as the input. If the second argument is not supplied then a new array is returned with the rounded values. If a second argument is supplied the result is stored there. The return value `out` is then a reference to that array. See also [`rint`](numpy.rint#numpy.rint "numpy.rint"), [`trunc`](numpy.trunc#numpy.trunc "numpy.trunc"), [`floor`](numpy.floor#numpy.floor "numpy.floor"), [`ceil`](numpy.ceil#numpy.ceil "numpy.ceil") [`around`](numpy.around#numpy.around "numpy.around") Round to given number of decimals #### Examples >>> import numpy as np >>> np.fix(3.14) 3.0 >>> np.fix(3) 3 >>> np.fix([2.1, 2.9, -2.1, -2.9]) array([ 2., 2., -2., -2.]) # numpy.flatiter.base attribute flatiter.base A reference to the array that is iterated over. 
#### Examples >>> import numpy as np >>> x = np.arange(5) >>> fl = x.flat >>> fl.base is x True # numpy.flatiter.coords attribute flatiter.coords An N-dimensional tuple of current coordinates. #### Examples >>> import numpy as np >>> x = np.arange(6).reshape(2, 3) >>> fl = x.flat >>> fl.coords (0, 0) >>> next(fl) 0 >>> fl.coords (0, 1) # numpy.flatiter.copy method flatiter.copy() Get a copy of the iterator as a 1-D array. #### Examples >>> import numpy as np >>> x = np.arange(6).reshape(2, 3) >>> x array([[0, 1, 2], [3, 4, 5]]) >>> fl = x.flat >>> fl.copy() array([0, 1, 2, 3, 4, 5]) # numpy.flatiter _class_ numpy.flatiter[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/__init__.py) Flat iterator object to iterate over arrays. A `flatiter` iterator is returned by `x.flat` for any array `x`. It allows iterating over the array as if it were a 1-D array, either in a for-loop or by calling its `next` method. Iteration is done in row-major, C-style order (the last index varying the fastest). The iterator can also be indexed using basic slicing or advanced indexing. See also [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") Return a flat iterator over an array. [`ndarray.flatten`](numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten") Returns a flattened copy of an array. #### Notes A `flatiter` iterator cannot be constructed directly from Python code by calling the `flatiter` constructor. #### Examples >>> import numpy as np >>> x = np.arange(6).reshape(2, 3) >>> fl = x.flat >>> type(fl) <class 'numpy.flatiter'> >>> for item in fl: ... print(item) ... 0 1 2 3 4 5 >>> fl[2:4] array([2, 3]) Attributes: [`base`](numpy.flatiter.base#numpy.flatiter.base "numpy.flatiter.base") A reference to the array that is iterated over. [`coords`](numpy.flatiter.coords#numpy.flatiter.coords "numpy.flatiter.coords") An N-dimensional tuple of current coordinates. 
[`index`](numpy.flatiter.index#numpy.flatiter.index "numpy.flatiter.index") Current flat index into the array. #### Methods [`copy`](numpy.flatiter.copy#numpy.flatiter.copy "numpy.flatiter.copy")() | Get a copy of the iterator as a 1-D array. ---|--- # numpy.flatiter.index attribute flatiter.index Current flat index into the array. #### Examples >>> import numpy as np >>> x = np.arange(6).reshape(2, 3) >>> fl = x.flat >>> fl.index 0 >>> next(fl) 0 >>> fl.index 1 # numpy.flatnonzero numpy.flatnonzero(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L646-L685) Return indices that are non-zero in the flattened version of a. This is equivalent to `np.nonzero(np.ravel(a))[0]`. Parameters: **a** array_like Input data. Returns: **res** ndarray Output array, containing the indices of the elements of `a.ravel()` that are non-zero. See also [`nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") Return the indices of the non-zero elements of the input array. [`ravel`](numpy.ravel#numpy.ravel "numpy.ravel") Return a 1-D array containing the elements of the input array. #### Examples >>> import numpy as np >>> x = np.arange(-2, 3) >>> x array([-2, -1, 0, 1, 2]) >>> np.flatnonzero(x) array([0, 1, 3, 4]) Use the indices of the non-zero elements as an index array to extract these elements: >>> x.ravel()[np.flatnonzero(x)] array([-2, -1, 1, 2]) # numpy.flip numpy.flip(_m_ , _axis =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L248-L336) Reverse the order of elements in an array along the given axis. The shape of the array is preserved, but the elements are reordered. Parameters: **m** array_like Input array. **axis** None or int or tuple of ints, optional Axis or axes along which to flip over. The default, axis=None, will flip over all of the axes of the input array. If axis is negative it counts from the last to the first axis. 
If axis is a tuple of ints, flipping is performed on all of the axes specified in the tuple. Returns: **out** array_like A view of `m` with the entries of axis reversed. Since a view is returned, this operation is done in constant time. See also [`flipud`](numpy.flipud#numpy.flipud "numpy.flipud") Flip an array vertically (axis=0). [`fliplr`](numpy.fliplr#numpy.fliplr "numpy.fliplr") Flip an array horizontally (axis=1). #### Notes flip(m, 0) is equivalent to flipud(m). flip(m, 1) is equivalent to fliplr(m). flip(m, n) corresponds to `m[...,::-1,...]` with `::-1` at position n. flip(m) corresponds to `m[::-1,::-1,...,::-1]` with `::-1` at all positions. flip(m, (0, 1)) corresponds to `m[::-1,::-1,...]` with `::-1` at position 0 and position 1. #### Examples >>> import numpy as np >>> A = np.arange(8).reshape((2,2,2)) >>> A array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) >>> np.flip(A, 0) array([[[4, 5], [6, 7]], [[0, 1], [2, 3]]]) >>> np.flip(A, 1) array([[[2, 3], [0, 1]], [[6, 7], [4, 5]]]) >>> np.flip(A) array([[[7, 6], [5, 4]], [[3, 2], [1, 0]]]) >>> np.flip(A, (0, 2)) array([[[5, 4], [7, 6]], [[1, 0], [3, 2]]]) >>> rng = np.random.default_rng() >>> A = rng.normal(size=(3,4,5)) >>> np.all(np.flip(A,2) == A[:,:,::-1,...]) True # numpy.fliplr numpy.fliplr(_m_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_twodim_base_impl.py#L49-L102) Reverse the order of elements along axis 1 (left/right). For a 2-D array, this flips the entries in each row in the left/right direction. Columns are preserved, but appear in a different order than before. Parameters: **m** array_like Input array, must be at least 2-D. Returns: **f** ndarray A view of `m` with the columns reversed. Since a view is returned, this operation is \\(\mathcal O(1)\\). See also [`flipud`](numpy.flipud#numpy.flipud "numpy.flipud") Flip array in the up/down direction. [`flip`](numpy.flip#numpy.flip "numpy.flip") Flip array in one or more dimensions. 
[`rot90`](numpy.rot90#numpy.rot90 "numpy.rot90") Rotate array counterclockwise. #### Notes Equivalent to `m[:,::-1]` or `np.flip(m, axis=1)`. Requires the array to be at least 2-D. #### Examples >>> import numpy as np >>> A = np.diag([1.,2.,3.]) >>> A array([[1., 0., 0.], [0., 2., 0.], [0., 0., 3.]]) >>> np.fliplr(A) array([[0., 0., 1.], [0., 2., 0.], [3., 0., 0.]]) >>> rng = np.random.default_rng() >>> A = rng.normal(size=(2,3,5)) >>> np.all(np.fliplr(A) == A[:,::-1,...]) True # numpy.flipud numpy.flipud(_m_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_twodim_base_impl.py#L105-L160) Reverse the order of elements along axis 0 (up/down). For a 2-D array, this flips the entries in each column in the up/down direction. Rows are preserved, but appear in a different order than before. Parameters: **m** array_like Input array. Returns: **out** array_like A view of `m` with the rows reversed. Since a view is returned, this operation is \\(\mathcal O(1)\\). See also [`fliplr`](numpy.fliplr#numpy.fliplr "numpy.fliplr") Flip array in the left/right direction. [`flip`](numpy.flip#numpy.flip "numpy.flip") Flip array in one or more dimensions. [`rot90`](numpy.rot90#numpy.rot90 "numpy.rot90") Rotate array counterclockwise. #### Notes Equivalent to `m[::-1, ...]` or `np.flip(m, axis=0)`. Requires the array to be at least 1-D. #### Examples >>> import numpy as np >>> A = np.diag([1.0, 2, 3]) >>> A array([[1., 0., 0.], [0., 2., 0.], [0., 0., 3.]]) >>> np.flipud(A) array([[0., 0., 3.], [0., 2., 0.], [1., 0., 0.]]) >>> rng = np.random.default_rng() >>> A = rng.normal(size=(2,3,5)) >>> np.all(np.flipud(A) == A[::-1,...]) True >>> np.flipud([1,2]) array([2, 1]) # numpy.float_power numpy.float_power(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'float_power'>_ First array elements raised to powers from second array, element-wise. 
Raise each base in `x1` to the positionally-corresponding power in `x2`. `x1` and `x2` must be broadcastable to the same shape. This differs from the power function in that integers, float16, and float32 are promoted to floats with a minimum precision of float64 so that the result is always inexact. The intent is that the function will return a usable result for negative powers and seldom overflow for positive powers. Negative values raised to a non-integral value will return `nan`. To get complex results, cast the input to complex, or specify the `dtype` to be `complex` (see the example below). Parameters: **x1** array_like The bases. **x2** array_like The exponents. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray The bases in `x1` raised to the exponents in `x2`. This is a scalar if both `x1` and `x2` are scalars. See also [`power`](numpy.power#numpy.power "numpy.power") power function that preserves type #### Examples >>> import numpy as np Cube each element in a list. 
>>> x1 = range(6) >>> x1 [0, 1, 2, 3, 4, 5] >>> np.float_power(x1, 3) array([ 0., 1., 8., 27., 64., 125.]) Raise the bases to different exponents. >>> x2 = [1.0, 2.0, 3.0, 3.0, 2.0, 1.0] >>> np.float_power(x1, x2) array([ 0., 1., 8., 27., 16., 5.]) The effect of broadcasting. >>> x2 = np.array([[1, 2, 3, 3, 2, 1], [1, 2, 3, 3, 2, 1]]) >>> x2 array([[1, 2, 3, 3, 2, 1], [1, 2, 3, 3, 2, 1]]) >>> np.float_power(x1, x2) array([[ 0., 1., 8., 27., 16., 5.], [ 0., 1., 8., 27., 16., 5.]]) Negative values raised to a non-integral value will result in `nan` (and a warning will be generated). >>> x3 = np.array([-1, -4]) >>> with np.errstate(invalid='ignore'): ... p = np.float_power(x3, 1.5) ... >>> p array([nan, nan]) To get complex results, give the argument `dtype=complex`. >>> np.float_power(x3, 1.5, dtype=complex) array([-1.83697020e-16-1.j, -1.46957616e-15-8.j]) # numpy.floor numpy.floor(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Return the floor of the input, element-wise. The floor of the scalar `x` is the largest integer `i`, such that `i <= x`. It is often denoted as \\(\lfloor x \rfloor\\). Parameters: **x** array_like Input data. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. 
****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray or scalar The floor of each element in `x`. This is a scalar if `x` is a scalar. See also [`ceil`](numpy.ceil#numpy.ceil "numpy.ceil"), [`trunc`](numpy.trunc#numpy.trunc "numpy.trunc"), [`rint`](numpy.rint#numpy.rint "numpy.rint"), [`fix`](numpy.fix#numpy.fix "numpy.fix") #### Notes Some spreadsheet programs calculate the “floor-towards-zero”, where `floor(-2.5) == -2`. NumPy instead uses the definition of `floor` where `floor(-2.5) == -3`. The “floor-towards-zero” function is called `fix` in NumPy. #### Examples >>> import numpy as np >>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) >>> np.floor(a) array([-2., -2., -1., 0., 1., 1., 2.]) # numpy.floor_divide numpy.floor_divide(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Return the largest integer smaller or equal to the division of the inputs. It is equivalent to the Python `//` operator and pairs with the Python `%` ([`remainder`](numpy.remainder#numpy.remainder "numpy.remainder")), function so that `a = a % b + b * (a // b)` up to roundoff. Parameters: **x1** array_like Numerator. **x2** array_like Denominator. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. 
Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray y = floor(`x1`/`x2`) This is a scalar if both `x1` and `x2` are scalars. See also [`remainder`](numpy.remainder#numpy.remainder "numpy.remainder") Remainder complementary to floor_divide. [`divmod`](numpy.divmod#numpy.divmod "numpy.divmod") Simultaneous floor division and remainder. [`divide`](numpy.divide#numpy.divide "numpy.divide") Standard division. [`floor`](numpy.floor#numpy.floor "numpy.floor") Round a number to the nearest integer toward minus infinity. [`ceil`](numpy.ceil#numpy.ceil "numpy.ceil") Round a number to the nearest integer toward infinity. #### Examples >>> import numpy as np >>> np.floor_divide(7,3) 2 >>> np.floor_divide([1., 2., 3., 4.], 2.5) array([ 0., 0., 1., 1.]) The `//` operator can be used as a shorthand for `np.floor_divide` on ndarrays. >>> x1 = np.array([1., 2., 3., 4.]) >>> x1 // 2.5 array([0., 0., 1., 1.]) # numpy.fmax numpy.fmax(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Element-wise maximum of array elements. Compare two arrays and return a new array containing the element-wise maxima. If one of the elements being compared is a NaN, then the non-nan element is returned. If both elements are NaNs then the first is returned. The latter distinction is important for complex NaNs, which are defined as at least one of the real or imaginary parts being a NaN. The net effect is that NaNs are ignored when possible. Parameters: **x1, x2** array_like The arrays holding the elements to be compared. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). 
**out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray or scalar The maximum of `x1` and `x2`, element-wise. This is a scalar if both `x1` and `x2` are scalars. See also [`fmin`](numpy.fmin#numpy.fmin "numpy.fmin") Element-wise minimum of two arrays, ignores NaNs. [`maximum`](numpy.maximum#numpy.maximum "numpy.maximum") Element-wise maximum of two arrays, propagates NaNs. [`amax`](numpy.amax#numpy.amax "numpy.amax") The maximum value of an array along a given axis, propagates NaNs. [`nanmax`](numpy.nanmax#numpy.nanmax "numpy.nanmax") The maximum value of an array along a given axis, ignores NaNs. [`minimum`](numpy.minimum#numpy.minimum "numpy.minimum"), [`amin`](numpy.amin#numpy.amin "numpy.amin"), [`nanmin`](numpy.nanmin#numpy.nanmin "numpy.nanmin") #### Notes The fmax is equivalent to `np.where(x1 >= x2, x1, x2)` when neither x1 nor x2 are NaNs, but it is faster and does proper broadcasting. #### Examples >>> import numpy as np >>> np.fmax([2, 3, 4], [1, 5, 2]) array([ 2, 5, 4]) >>> np.fmax(np.eye(2), [0.5, 2]) array([[ 1. , 2. ], [ 0.5, 2. 
]]) >>> np.fmax([np.nan, 0, np.nan],[0, np.nan, np.nan]) array([ 0., 0., nan]) # numpy.fmin numpy.fmin(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Element-wise minimum of array elements. Compare two arrays and return a new array containing the element-wise minima. If one of the elements being compared is a NaN, then the non-nan element is returned. If both elements are NaNs then the first is returned. The latter distinction is important for complex NaNs, which are defined as at least one of the real or imaginary parts being a NaN. The net effect is that NaNs are ignored when possible. Parameters: **x1, x2** array_like The arrays holding the elements to be compared. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray or scalar The minimum of `x1` and `x2`, element-wise. This is a scalar if both `x1` and `x2` are scalars. See also [`fmax`](numpy.fmax#numpy.fmax "numpy.fmax") Element-wise maximum of two arrays, ignores NaNs. 
[`minimum`](numpy.minimum#numpy.minimum "numpy.minimum") Element-wise minimum of two arrays, propagates NaNs. [`amin`](numpy.amin#numpy.amin "numpy.amin") The minimum value of an array along a given axis, propagates NaNs. [`nanmin`](numpy.nanmin#numpy.nanmin "numpy.nanmin") The minimum value of an array along a given axis, ignores NaNs. [`maximum`](numpy.maximum#numpy.maximum "numpy.maximum"), [`amax`](numpy.amax#numpy.amax "numpy.amax"), [`nanmax`](numpy.nanmax#numpy.nanmax "numpy.nanmax") #### Notes The fmin is equivalent to `np.where(x1 <= x2, x1, x2)` when neither x1 nor x2 are NaNs, but it is faster and does proper broadcasting. #### Examples >>> import numpy as np >>> np.fmin([2, 3, 4], [1, 5, 2]) array([1, 3, 2]) >>> np.fmin(np.eye(2), [0.5, 2]) array([[ 0.5, 0. ], [ 0. , 1. ]]) >>> np.fmin([np.nan, 0, np.nan],[0, np.nan, np.nan]) array([ 0., 0., nan]) # numpy.fmod numpy.fmod(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Returns the element-wise remainder of division. This is the NumPy implementation of the C library function fmod, the remainder has the same sign as the dividend `x1`. It is equivalent to the Matlab(TM) `rem` function and should not be confused with the Python modulus operator `x1 % x2`. Parameters: **x1** array_like Dividend. **x2** array_like Divisor. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. 
Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** array_like The remainder of the division of `x1` by `x2`. This is a scalar if both `x1` and `x2` are scalars. See also [`remainder`](numpy.remainder#numpy.remainder "numpy.remainder") Equivalent to the Python `%` operator. [`divide`](numpy.divide#numpy.divide "numpy.divide") #### Notes The result of the modulo operation for negative dividend and divisors is bound by conventions. For `fmod`, the sign of result is the sign of the dividend, while for [`remainder`](numpy.remainder#numpy.remainder "numpy.remainder") the sign of the result is the sign of the divisor. The `fmod` function is equivalent to the Matlab(TM) `rem` function. #### Examples >>> import numpy as np >>> np.fmod([-3, -2, -1, 1, 2, 3], 2) array([-1, 0, -1, 1, 0, 1]) >>> np.remainder([-3, -2, -1, 1, 2, 3], 2) array([1, 0, 1, 1, 0, 1]) >>> np.fmod([5, 3], [2, 2.]) array([ 1., 1.]) >>> a = np.arange(-3, 3).reshape(3, 2) >>> a array([[-3, -2], [-1, 0], [ 1, 2]]) >>> np.fmod(a, [2,2]) array([[-1, 0], [-1, 0], [ 1, 0]]) # numpy.format_float_positional numpy.format_float_positional(_x_ , _precision =None_, _unique =True_, _fractional =True_, _trim ='k'_, _sign =False_, _pad_left =None_, _pad_right =None_, _min_digits =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/arrayprint.py#L1189-L1279) Format a floating-point scalar as a decimal string in positional notation. Provides control over rounding, trimming and padding. Uses and assumes IEEE unbiased rounding. Uses the “Dragon4” algorithm. Parameters: **x** python float or numpy floating scalar Value to format. **precision** non-negative integer or None, optional Maximum number of digits to print. 
May be None if [`unique`](numpy.unique#numpy.unique "numpy.unique") is `True`, but must be an integer if unique is `False`. **unique** boolean, optional If `True`, use a digit-generation strategy which gives the shortest representation which uniquely identifies the floating-point number from other values of the same type, by judicious rounding. If `precision` is given fewer digits than necessary can be printed, or if `min_digits` is given more can be printed, in which cases the last digit is rounded with unbiased rounding. If `False`, digits are generated as if printing an infinite-precision value and stopping after `precision` digits, rounding the remaining value with unbiased rounding **fractional** boolean, optional If `True`, the cutoffs of `precision` and `min_digits` refer to the total number of digits after the decimal point, including leading zeros. If `False`, `precision` and `min_digits` refer to the total number of significant digits, before or after the decimal point, ignoring leading zeros. **trim** one of ‘k’, ‘.’, ‘0’, ‘-’, optional Controls post-processing trimming of trailing digits, as follows: * ‘k’ : keep trailing zeros, keep decimal point (no trimming) * ‘.’ : trim all trailing zeros, leave decimal point * ‘0’ : trim all but the zero before the decimal point. Insert the zero if it is missing. * ‘-’ : trim trailing zeros and any trailing decimal point **sign** boolean, optional Whether to show the sign for positive values. **pad_left** non-negative integer, optional Pad the left side of the string with whitespace until at least that many characters are to the left of the decimal point. **pad_right** non-negative integer, optional Pad the right side of the string with whitespace until at least that many characters are to the right of the decimal point. **min_digits** non-negative integer or None, optional Minimum number of digits to print. 
Only has an effect if `unique=True` in which case additional digits past those necessary to uniquely identify the value may be printed, rounding the last additional digit. New in version 1.21.0. Returns: **rep** string The string representation of the floating point value See also [`format_float_scientific`](numpy.format_float_scientific#numpy.format_float_scientific "numpy.format_float_scientific") #### Examples >>> import numpy as np >>> np.format_float_positional(np.float32(np.pi)) '3.1415927' >>> np.format_float_positional(np.float16(np.pi)) '3.14' >>> np.format_float_positional(np.float16(0.3)) '0.3' >>> np.format_float_positional(np.float16(0.3), unique=False, precision=10) '0.3000488281' # numpy.format_float_scientific numpy.format_float_scientific(_x_ , _precision =None_, _unique =True_, _trim ='k'_, _sign =False_, _pad_left =None_, _exp_digits =None_, _min_digits =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/arrayprint.py#L1108-L1186) Format a floating-point scalar as a decimal string in scientific notation. Provides control over rounding, trimming and padding. Uses and assumes IEEE unbiased rounding. Uses the “Dragon4” algorithm. Parameters: **x** python float or numpy floating scalar Value to format. **precision** non-negative integer or None, optional Maximum number of digits to print. May be None if [`unique`](numpy.unique#numpy.unique "numpy.unique") is `True`, but must be an integer if unique is `False`. **unique** boolean, optional If `True`, use a digit-generation strategy which gives the shortest representation which uniquely identifies the floating-point number from other values of the same type, by judicious rounding. If `precision` is given fewer digits than necessary can be printed. If `min_digits` is given more can be printed, in which cases the last digit is rounded with unbiased rounding. 
If `False`, digits are generated as if printing an infinite-precision value and stopping after `precision` digits, rounding the remaining value with unbiased rounding **trim** one of ‘k’, ‘.’, ‘0’, ‘-’, optional Controls post-processing trimming of trailing digits, as follows: * ‘k’ : keep trailing zeros, keep decimal point (no trimming) * ‘.’ : trim all trailing zeros, leave decimal point * ‘0’ : trim all but the zero before the decimal point. Insert the zero if it is missing. * ‘-’ : trim trailing zeros and any trailing decimal point **sign** boolean, optional Whether to show the sign for positive values. **pad_left** non-negative integer, optional Pad the left side of the string with whitespace until at least that many characters are to the left of the decimal point. **exp_digits** non-negative integer, optional Pad the exponent with zeros until it contains at least this many digits. If omitted, the exponent will be at least 2 digits. **min_digits** non-negative integer or None, optional Minimum number of digits to print. This only has an effect for `unique=True`. In that case more digits than necessary to uniquely identify the value may be printed and rounded unbiased. New in version 1.21.0. Returns: **rep** string The string representation of the floating point value See also [`format_float_positional`](numpy.format_float_positional#numpy.format_float_positional "numpy.format_float_positional") #### Examples >>> import numpy as np >>> np.format_float_scientific(np.float32(np.pi)) '3.1415927e+00' >>> s = np.float32(1.23e24) >>> np.format_float_scientific(s, unique=False, precision=15) '1.230000071797338e+24' >>> np.format_float_scientific(s, exp_digits=4) '1.23e+0024' # numpy.frexp numpy.frexp(_x_ , [_out1_ , _out2_ , ]_/_ , [_out=(None_ , _None)_ , ]_*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Decompose the elements of x into mantissa and twos exponent. 
Returns (`mantissa`, `exponent`), where `x = mantissa * 2**exponent`. The mantissa lies in the open interval(-1, 1), while the twos exponent is a signed integer. Parameters: **x** array_like Array of numbers to be decomposed. **out1** ndarray, optional Output array for the mantissa. Must have the same shape as `x`. **out2** ndarray, optional Output array for the exponent. Must have the same shape as `x`. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **mantissa** ndarray Floating values between -1 and 1. This is a scalar if `x` is a scalar. **exponent** ndarray Integer exponents of 2. This is a scalar if `x` is a scalar. See also [`ldexp`](numpy.ldexp#numpy.ldexp "numpy.ldexp") Compute `y = x1 * 2**x2`, the inverse of `frexp`. #### Notes Complex dtypes are not supported, they will raise a TypeError. #### Examples >>> import numpy as np >>> x = np.arange(9) >>> y1, y2 = np.frexp(x) >>> y1 array([ 0. 
, 0.5 , 0.5 , 0.75 , 0.5 , 0.625, 0.75 , 0.875, 0.5 ]) >>> y2 array([0, 1, 2, 2, 3, 3, 3, 3, 4], dtype=int32) >>> y1 * 2**y2 array([ 0., 1., 2., 3., 4., 5., 6., 7., 8.]) # numpy.from_dlpack numpy.from_dlpack(_x_ , _/_ , _*_ , _device =None_, _copy =None_) Create a NumPy array from an object implementing the `__dlpack__` protocol. Generally, the returned NumPy array is a read-only view of the input object. See [1] and [2] for more details. Parameters: **x** object A Python object that implements the `__dlpack__` and `__dlpack_device__` methods. **device** device, optional Device on which to place the created array. Default: `None`. Must be `"cpu"` if passed which may allow importing an array that is not already CPU available. **copy** bool, optional Boolean indicating whether or not to copy the input. If `True`, the copy will be made. If `False`, the function will never copy, and will raise `BufferError` in case a copy is deemed necessary. Passing it requests a copy from the exporter who may or may not implement the capability. If `None`, the function will reuse the existing memory buffer if possible and copy otherwise. Default: `None`. Returns: **out** ndarray #### References [1] Array API documentation, [2] Python specification for DLPack, #### Examples >>> import torch >>> x = torch.arange(10) >>> # create a view of the torch tensor "x" in NumPy >>> y = np.from_dlpack(x) # numpy.frombuffer numpy.frombuffer(_buffer_ , _dtype =float_, _count =-1_, _offset =0_, _*_ , _like =None_) Interpret a buffer as a 1-dimensional array. Parameters: **buffer** buffer_like An object that exposes the buffer interface. **dtype** data-type, optional Data-type of the returned array; default: float. **count** int, optional Number of items to read. `-1` means all data in the buffer. **offset** int, optional Start reading the buffer from this offset (in bytes); default: 0. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. 
If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: **out** ndarray See also [`ndarray.tobytes`](numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes") Inverse of this operation, construct Python bytes from the raw data bytes in the array. #### Notes If the buffer has data that is not in machine byte-order, this should be specified as part of the data-type, e.g.: >>> dt = np.dtype(int) >>> dt = dt.newbyteorder('>') >>> np.frombuffer(buf, dtype=dt) The data of the resulting array will not be byteswapped, but will be interpreted correctly. This function creates a view into the original object. This should be safe in general, but it may make sense to copy the result when the original object is mutable or untrusted. #### Examples >>> import numpy as np >>> s = b'hello world' >>> np.frombuffer(s, dtype='S1', count=5, offset=6) array([b'w', b'o', b'r', b'l', b'd'], dtype='|S1') >>> np.frombuffer(b'\x01\x02', dtype=np.uint8) array([1, 2], dtype=uint8) >>> np.frombuffer(b'\x01\x02\x03\x04\x05', dtype=np.uint8, count=3) array([1, 2, 3], dtype=uint8) # numpy.fromfile numpy.fromfile(_file_ , _dtype =float_, _count =-1_, _sep =''_, _offset =0_, _*_ , _like =None_) Construct an array from data in a text or binary file. A highly efficient way of reading binary data with a known data-type, as well as parsing simply formatted text files. Data written using the `tofile` method can be read using this function. Parameters: **file** file or str or Path Open file object or filename. **dtype** data-type Data type of the returned array. For binary files, it is used to determine the size and byte-order of the items in the file. Most builtin numeric types are supported and extension types may be supported. **count** int Number of items to read. 
`-1` means all items (i.e., the complete file). **sep** str Separator between items if file is a text file. Empty (“”) separator means the file should be treated as binary. Spaces (” “) in the separator match zero or more whitespace characters. A separator consisting only of spaces must match at least one whitespace. **offset** int The offset (in bytes) from the file’s current position. Defaults to 0. Only permitted for binary files. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. See also [`load`](numpy.load#numpy.load "numpy.load"), [`save`](numpy.save#numpy.save "numpy.save") [`ndarray.tofile`](numpy.ndarray.tofile#numpy.ndarray.tofile "numpy.ndarray.tofile") [`loadtxt`](numpy.loadtxt#numpy.loadtxt "numpy.loadtxt") More flexible way of loading data from a text file. #### Notes Do not rely on the combination of `tofile` and `fromfile` for data storage, as the binary files generated are not platform independent. In particular, no byte-order or data-type information is saved. Data can be stored in the platform independent `.npy` format using [`save`](numpy.save#numpy.save "numpy.save") and [`load`](numpy.load#numpy.load "numpy.load") instead. #### Examples Construct an ndarray: >>> import numpy as np >>> dt = np.dtype([('time', [('min', np.int64), ('sec', np.int64)]), ... 
('temp', float)]) >>> x = np.zeros((1,), dtype=dt) >>> x['time']['min'] = 10; x['temp'] = 98.25 >>> x array([((10, 0), 98.25)], dtype=[('time', [('min', '<i8'), ('sec', '<i8')]), ('temp', '<f8')]) Save the raw data to disk: >>> import tempfile >>> fname = tempfile.mkstemp()[1] >>> x.tofile(fname) Read the raw data from disk: >>> np.fromfile(fname, dtype=dt) array([((10, 0), 98.25)], dtype=[('time', [('min', '<i8'), ('sec', '<i8')]), ('temp', '<f8')]) The recommended way to store and load data: >>> np.save(fname, x) >>> np.load(fname + '.npy') array([((10, 0), 98.25)], dtype=[('time', [('min', '<i8'), ('sec', '<i8')]), ('temp', '<f8')]) # numpy.fromfunction numpy.fromfunction(_function_ , _shape_ , _*_ , _dtype =float_, _like=None_ , _**kwargs_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L1847-L1917) Construct an array by executing a function over each coordinate. The resulting array therefore has a value `fn(x, y, z)` at coordinate `(x, y, z)`. Parameters: **function** callable The function is called with N parameters, where N is the rank of [`shape`](numpy.shape#numpy.shape "numpy.shape"). Each parameter represents the coordinates of the array varying along a specific axis. For example, if [`shape`](numpy.shape#numpy.shape "numpy.shape") were `(2, 2)`, then the parameters would be `array([[0, 0], [1, 1]])` and `array([[0, 1], [0, 1]])` **shape**(N,) tuple of ints Shape of the output array, which also determines the shape of the coordinate arrays passed to `function`. **dtype** data-type, optional Data-type of the coordinate arrays passed to `function`. By default, [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is float. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: **fromfunction** any The result of the call to `function` is passed back directly. Therefore the shape of `fromfunction` is completely determined by `function`. 
If `function` returns a scalar value, the shape of `fromfunction` would not match the [`shape`](numpy.shape#numpy.shape "numpy.shape") parameter. See also [`indices`](numpy.indices#numpy.indices "numpy.indices"), [`meshgrid`](numpy.meshgrid#numpy.meshgrid "numpy.meshgrid") #### Notes Keywords other than [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") and `like` are passed to `function`. #### Examples >>> import numpy as np >>> np.fromfunction(lambda i, j: i, (2, 2), dtype=float) array([[0., 0.], [1., 1.]]) >>> np.fromfunction(lambda i, j: j, (2, 2), dtype=float) array([[0., 1.], [0., 1.]]) >>> np.fromfunction(lambda i, j: i == j, (3, 3), dtype=int) array([[ True, False, False], [False, True, False], [False, False, True]]) >>> np.fromfunction(lambda i, j: i + j, (3, 3), dtype=int) array([[0, 1, 2], [1, 2, 3], [2, 3, 4]]) # numpy.fromiter numpy.fromiter(_iter_ , _dtype_ , _count =-1_, _*_ , _like =None_) Create a new 1-dimensional array from an iterable object. Parameters: **iter** iterable object An iterable object providing data for the array. **dtype** data-type The data-type of the returned array. Changed in version 1.23: Object and subarray dtypes are now supported (note that the final result is not 1-D for a subarray dtype). **count** int, optional The number of items to read from _iterable_. The default is -1, which means all data is read. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: **out** ndarray The output array. #### Notes Specify `count` to improve performance. It allows `fromiter` to pre-allocate the output array, instead of resizing it on demand. 
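The pre-allocation note for `fromiter` can be illustrated with a short sketch (not part of the official reference; the generator and sizes are arbitrary):

```python
import numpy as np

squares = (i * i for i in range(5))
# Passing count lets fromiter allocate the output array once up front
# instead of growing it repeatedly as the iterator is consumed.
a = np.fromiter(squares, dtype=np.int64, count=5)
print(a)  # → [ 0  1  4  9 16]
```

Note that the call consumes the generator, so it cannot be iterated again afterwards.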
#### Examples >>> import numpy as np >>> iterable = (x*x for x in range(5)) >>> np.fromiter(iterable, float) array([ 0., 1., 4., 9., 16.]) A carefully constructed subarray dtype will lead to higher dimensional results: >>> iterable = ((x+1, x+2) for x in range(5)) >>> np.fromiter(iterable, dtype=np.dtype((int, 2))) array([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6]]) # numpy.frompyfunc numpy.frompyfunc(_func_ , _/_ , _nin_ , _nout_ , _*_[, _identity_]) Takes an arbitrary Python function and returns a NumPy ufunc. Can be used, for example, to add broadcasting to a built-in Python function (see Examples section). Parameters: **func** Python function object An arbitrary Python function. **nin** int The number of input arguments. **nout** int The number of objects returned by `func`. **identity** object, optional The value to use for the [`identity`](numpy.ufunc.identity#numpy.ufunc.identity "numpy.ufunc.identity") attribute of the resulting object. If specified, this is equivalent to setting the underlying C `identity` field to `PyUFunc_IdentityValue`. If omitted, the identity is set to `PyUFunc_None`. Note that this is _not_ equivalent to setting the identity to `None`, which implies the operation is reorderable. Returns: **out** ufunc Returns a NumPy universal function (`ufunc`) object. See also [`vectorize`](numpy.vectorize#numpy.vectorize "numpy.vectorize") Evaluates pyfunc over input arrays using broadcasting rules of numpy. #### Notes The returned ufunc always returns PyObject arrays. 
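To complement the Notes above, a short sketch of the object-array return type and the `identity` keyword; the `py_add` name and the inputs are illustrative:

```python
import numpy as np

# frompyfunc wraps a Python callable as a ufunc; results are object arrays.
py_add = np.frompyfunc(lambda a, b: a + b, 2, 1, identity=0)

out = py_add(np.arange(3), np.arange(3))
print(out.dtype)  # object
print(out)        # [0 2 4]

# Because identity=0 was supplied, reducing an empty array is well defined
# and returns the identity value:
print(py_add.reduce(np.array([], dtype=object)))  # 0
```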
#### Examples Use frompyfunc to add broadcasting to the Python function `oct`: >>> import numpy as np >>> oct_array = np.frompyfunc(oct, 1, 1) >>> oct_array(np.array((10, 30, 100))) array(['0o12', '0o36', '0o144'], dtype=object) >>> np.array((oct(10), oct(30), oct(100))) # for comparison array(['0o12', '0o36', '0o144'], dtype='<U5') # numpy.fromregex numpy.fromregex(_file_ , _regexp_ , _dtype_ , _encoding =None_) Construct an array from a text file, using regular expression parsing. The returned array is always a structured array, and is constructed from all matches of the regular expression in the file. Groups in the regular expression are converted to fields of the structured array. Parameters: **file** file, str, or pathlib.Path Filename or file object to read. **regexp** str or regexp Regular expression used to parse the file. Groups in the regular expression correspond to fields in the dtype. **dtype** dtype or list of dtypes Dtype for the structured array; must be a structured datatype. **encoding** str, optional Encoding used to decode the inputfile. Does not apply to input streams. Returns: **output** ndarray The output array, containing the part of the content of `file` that was matched by `regexp`. `output` is always a structured array. Raises: TypeError When [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is not a valid dtype for a structured array. See also [`fromstring`](numpy.fromstring#numpy.fromstring "numpy.fromstring"), [`loadtxt`](numpy.loadtxt#numpy.loadtxt "numpy.loadtxt") #### Examples >>> import numpy as np >>> from io import StringIO >>> text = StringIO("1312 foo\n1534 bar\n444 qux") >>> regexp = r"(\d+)\s+(...)" # match [digits, whitespace, anything] >>> output = np.fromregex(text, regexp, ... [('num', np.int64), ('key', 'S3')]) >>> output array([(1312, b'foo'), (1534, b'bar'), ( 444, b'qux')], dtype=[('num', '<i8'), ('key', 'S3')]) >>> output['num'] array([1312, 1534, 444]) # numpy.fromstring numpy.fromstring(_string_ , _dtype =float_, _count =-1_, _*_ , _sep_ , _like =None_) A new 1-D array initialized from text data in a string. Parameters: **string** str A string containing the data. **dtype** data-type, optional The data type of the array; default: float. For binary input data, the data must be in exactly this format. Most builtin numeric types are supported and extension types may be supported. **count** int, optional Read this number of [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") elements from the data. If this is negative (the default), the count will be determined from the length of the data. **sep** str, optional The string separating numbers in the data; extra whitespace between elements is also ignored. Deprecated since version 1.14: Passing `sep=''`, the default, is deprecated since it will trigger the deprecated binary mode of this function. This mode interprets [`string`](https://docs.python.org/3/library/string.html#module-string "\(in Python v3.13\)") as binary bytes, rather than ASCII text with decimal numbers, an operation which is better spelt `frombuffer(string, dtype, count)`.
If [`string`](https://docs.python.org/3/library/string.html#module-string "\(in Python v3.13\)") contains unicode text, the binary mode of `fromstring` will first encode it into bytes using utf-8, which will not produce sane results. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: **arr** ndarray The constructed array. Raises: ValueError If the string is not the correct size to satisfy the requested [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") and `count`. See also [`frombuffer`](numpy.frombuffer#numpy.frombuffer "numpy.frombuffer"), [`fromfile`](numpy.fromfile#numpy.fromfile "numpy.fromfile"), [`fromiter`](numpy.fromiter#numpy.fromiter "numpy.fromiter") #### Examples >>> import numpy as np >>> np.fromstring('1 2', dtype=int, sep=' ') array([1, 2]) >>> np.fromstring('1, 2', dtype=int, sep=',') array([1, 2]) # numpy.full numpy.full(_shape_ , _fill_value_ , _dtype =None_, _order ='C'_, _*_ , _device =None_, _like =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L290-L354) Return a new array of given shape and type, filled with `fill_value`. Parameters: **shape** int or sequence of ints Shape of the new array, e.g., `(2, 3)` or `2`. **fill_value** scalar or array_like Fill value. **dtype** data-type, optional The desired data-type for the array. The default, None, means `np.array(fill_value).dtype`. **order**{‘C’, ‘F’}, optional Whether to store multidimensional data in C- or Fortran-contiguous (row- or column-wise) order in memory. **device** str, optional The device on which to place the created array. Default: None. For Array-API interoperability only, so must be `"cpu"` if passed. New in version 2.0.0.
**like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: **out** ndarray Array of `fill_value` with the given shape, dtype, and order. See also [`full_like`](numpy.full_like#numpy.full_like "numpy.full_like") Return a new array with shape of input filled with value. [`empty`](numpy.empty#numpy.empty "numpy.empty") Return a new uninitialized array. [`ones`](numpy.ones#numpy.ones "numpy.ones") Return a new array setting values to one. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Return a new array setting values to zero. #### Examples >>> import numpy as np >>> np.full((2, 2), np.inf) array([[inf, inf], [inf, inf]]) >>> np.full((2, 2), 10) array([[10, 10], [10, 10]]) >>> np.full((2, 2), [1, 2]) array([[1, 2], [1, 2]]) # numpy.full_like numpy.full_like(_a_ , _fill_value_ , _dtype =None_, _order ='K'_, _subok =True_, _shape =None_, _*_ , _device =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L367-L443) Return a full array with the same shape and type as a given array. Parameters: **a** array_like The shape and data-type of `a` define these same attributes of the returned array. **fill_value** array_like Fill value. **dtype** data-type, optional Overrides the data type of the result. **order**{‘C’, ‘F’, ‘A’, or ‘K’}, optional Overrides the memory layout of the result. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. **subok** bool, optional. If True, then the newly created array will use the sub-class type of `a`, otherwise it will be a base-class array. Defaults to True. **shape** int or sequence of ints, optional. 
Overrides the shape of the result. If order=’K’ and the number of dimensions is unchanged, will try to keep order, otherwise, order=’C’ is implied. **device** str, optional The device on which to place the created array. Default: None. For Array-API interoperability only, so must be `"cpu"` if passed. New in version 2.0.0. Returns: **out** ndarray Array of `fill_value` with the same shape and type as `a`. See also [`empty_like`](numpy.empty_like#numpy.empty_like "numpy.empty_like") Return an empty array with shape and type of input. [`ones_like`](numpy.ones_like#numpy.ones_like "numpy.ones_like") Return an array of ones with shape and type of input. [`zeros_like`](numpy.zeros_like#numpy.zeros_like "numpy.zeros_like") Return an array of zeros with shape and type of input. [`full`](numpy.full#numpy.full "numpy.full") Return a new array of given shape filled with value. #### Examples >>> import numpy as np >>> x = np.arange(6, dtype=int) >>> np.full_like(x, 1) array([1, 1, 1, 1, 1, 1]) >>> np.full_like(x, 0.1) array([0, 0, 0, 0, 0, 0]) >>> np.full_like(x, 0.1, dtype=np.double) array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1]) >>> np.full_like(x, np.nan, dtype=np.double) array([nan, nan, nan, nan, nan, nan]) >>> y = np.arange(6, dtype=np.double) >>> np.full_like(y, 0.1) array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1]) >>> y = np.zeros([2, 2, 3], dtype=int) >>> np.full_like(y, [0, 0, 255]) array([[[ 0, 0, 255], [ 0, 0, 255]], [[ 0, 0, 255], [ 0, 0, 255]]]) # numpy.gcd numpy.gcd(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'gcd'>_ Returns the greatest common divisor of `|x1|` and `|x2|`. Parameters: **x1, x2** array_like, int Arrays of values. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). Returns: **y** ndarray or scalar The greatest common divisor of the absolute value of the inputs. This is a scalar if both `x1` and `x2` are scalars.
See also [`lcm`](numpy.lcm#numpy.lcm "numpy.lcm") The lowest common multiple #### Examples >>> import numpy as np >>> np.gcd(12, 20) 4 >>> np.gcd.reduce([15, 25, 35]) 5 >>> np.gcd(np.arange(6), 20) array([20, 1, 2, 1, 4, 5]) # numpy.generic.__array__ method generic.__array__() sc.__array__(dtype) return 0-dim array from scalar with specified dtype # numpy.generic.__array_interface__ attribute generic.__array_interface__ Array protocol: Python side # numpy.generic.__array_priority__ attribute generic.__array_priority__ Array priority. # numpy.generic.__array_struct__ attribute generic.__array_struct__ Array protocol: struct # numpy.generic.__array_wrap__ method generic.__array_wrap__() __array_wrap__ implementation for scalar types # numpy.generic.__reduce__ method generic.__reduce__() Helper for pickle. # numpy.generic.__setstate__ method generic.__setstate__() # numpy.generic.base attribute generic.base Scalar attribute identical to the corresponding array attribute. Please see [`ndarray.base`](numpy.ndarray.base#numpy.ndarray.base "numpy.ndarray.base"). # numpy.generic.byteswap method generic.byteswap() Scalar method identical to the corresponding array attribute. Please see [`ndarray.byteswap`](numpy.ndarray.byteswap#numpy.ndarray.byteswap "numpy.ndarray.byteswap"). # numpy.generic.data attribute generic.data Pointer to start of data. # numpy.generic.dtype attribute generic.dtype Get array data-descriptor. # numpy.generic.flags attribute generic.flags The integer value of flags. # numpy.generic.flat attribute generic.flat A 1-D view of the scalar. # numpy.generic.imag attribute generic.imag The imaginary part of the scalar. # numpy.generic.itemsize attribute generic.itemsize The length of one element in bytes. # numpy.generic.ndim attribute generic.ndim The number of array dimensions. # numpy.generic.real attribute generic.real The real part of the scalar. 
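The scalar ("generic") attributes listed above mirror their ndarray counterparts; a brief illustrative sketch with a float64 scalar:

```python
import numpy as np

x = np.float64(3.5)     # a NumPy scalar, an instance of np.generic
print(x.ndim, x.shape)  # scalars are 0-dimensional: 0 ()
print(x.itemsize)       # bytes per element: 8 for float64
print(x.real, x.imag)   # real and imaginary parts: 3.5 0.0
print(x.flat[0])        # .flat exposes a 1-D view of the scalar: 3.5
```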
# numpy.generic.setflags method generic.setflags() Scalar method identical to the corresponding array attribute. Please see [`ndarray.setflags`](numpy.ndarray.setflags#numpy.ndarray.setflags "numpy.ndarray.setflags"). # numpy.generic.shape attribute generic.shape Tuple of array dimensions. # numpy.generic.size attribute generic.size The number of elements in the gentype. # numpy.generic.squeeze method generic.squeeze() Scalar method identical to the corresponding array attribute. Please see [`ndarray.squeeze`](numpy.ndarray.squeeze#numpy.ndarray.squeeze "numpy.ndarray.squeeze"). # numpy.generic.strides attribute generic.strides Tuple of bytes steps in each dimension. # numpy.generic.T attribute generic.T Scalar attribute identical to the corresponding array attribute. Please see [`ndarray.T`](numpy.ndarray.t#numpy.ndarray.T "numpy.ndarray.T"). # numpy.genfromtxt numpy.genfromtxt(_fname_ , _dtype=<class 'float'>_ , _comments='#'_ , _delimiter=None_ , _skip_header=0_ , _skip_footer=0_ , _converters=None_ , _missing_values=None_ , _filling_values=None_ , _usecols=None_ , _names=None_ , _excludelist=None_ , _deletechars=" !#$%&'()*+,-./:;<=>?@[\\\\]^{|}~"_ , _replace_space='_'_ , _autostrip=False_ , _case_sensitive=True_ , _defaultfmt='f%i'_ , _unpack=None_ , _usemask=False_ , _loose=True_ , _invalid_raise=True_ , _max_rows=None_ , _encoding=None_ , _*_ , _ndmin=0_ , _like=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_npyio_impl.py#L1747-L2496) Load data from a text file, with missing values handled as specified. Each line past the first `skip_header` lines is split at the `delimiter` character, and characters following the `comments` character are discarded. Parameters: **fname** file, str, pathlib.Path, list of str, generator File, filename, list, or generator to read. If the filename extension is `.gz` or `.bz2`, the file is first decompressed. Note that generators must return bytes or strings.
The strings in a list or produced by a generator are treated as lines. **dtype** dtype, optional Data type of the resulting array. If None, the dtypes will be determined by the contents of each column, individually. **comments** str, optional The character used to indicate the start of a comment. All the characters occurring on a line after a comment are discarded. **delimiter** str, int, or sequence, optional The string used to separate values. By default, any consecutive whitespaces act as delimiter. An integer or sequence of integers can also be provided as width(s) of each field. **skiprows** int, optional `skiprows` was removed in numpy 1.10. Please use `skip_header` instead. **skip_header** int, optional The number of lines to skip at the beginning of the file. **skip_footer** int, optional The number of lines to skip at the end of the file. **converters** variable, optional The set of functions that convert the data of a column to a value. The converters can also be used to provide a default value for missing data: `converters = {3: lambda s: float(s or 0)}`. **missing** variable, optional `missing` was removed in numpy 1.10. Please use `missing_values` instead. **missing_values** variable, optional The set of strings corresponding to missing data. **filling_values** variable, optional The set of values to be used as default when the data are missing. **usecols** sequence, optional Which columns to read, with 0 being the first. For example, `usecols = (1, 4, 5)` will extract the 2nd, 5th and 6th columns. **names**{None, True, str, sequence}, optional If `names` is True, the field names are read from the first line after the first `skip_header` lines. This line can optionally be preceded by a comment delimiter. Any content before the comment delimiter is discarded. If `names` is a sequence or a single-string of comma-separated names, the names will be used to define the field names in a structured dtype. 
If `names` is None, the names of the dtype fields will be used, if any. **excludelist** sequence, optional A list of names to exclude. This list is appended to the default list [‘return’,’file’,’print’]. Excluded names are appended with an underscore: for example, `file` would become `file_`. **deletechars** str, optional A string combining invalid characters that must be deleted from the names. **defaultfmt** str, optional A format used to define default field names, such as “f%i” or “f_%02i”. **autostrip** bool, optional Whether to automatically strip white spaces from the variables. **replace_space** char, optional Character(s) used in replacement of white spaces in the variable names. By default, use a ‘_’. **case_sensitive**{True, False, ‘upper’, ‘lower’}, optional If True, field names are case sensitive. If False or ‘upper’, field names are converted to upper case. If ‘lower’, field names are converted to lower case. **unpack** bool, optional If True, the returned array is transposed, so that arguments may be unpacked using `x, y, z = genfromtxt(...)`. When used with a structured data-type, arrays are returned for each field. Default is False. **usemask** bool, optional If True, return a masked array. If False, return a regular array. **loose** bool, optional If True, do not raise errors for invalid values. **invalid_raise** bool, optional If True, an exception is raised if an inconsistency is detected in the number of columns. If False, a warning is emitted and the offending lines are skipped. **max_rows** int, optional The maximum number of rows to read. Must not be used with skip_footer at the same time. If given, the value must be at least 1. Default is to read the entire file. **encoding** str, optional Encoding used to decode the inputfile. Does not apply when `fname` is a file object. 
The special value ‘bytes’ enables backward compatibility workarounds that ensure that you receive byte arrays when possible and passes latin1 encoded strings to converters. Override this value to receive unicode arrays and pass strings as input to converters. If set to None the system default is used. Changed in version 2.0: Before NumPy 2, the default was `'bytes'` for Python 2 compatibility. The default is now `None`. **ndmin** int, optional Same parameter as [`loadtxt`](numpy.loadtxt#numpy.loadtxt "numpy.loadtxt") New in version 1.23.0. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: **out** ndarray Data read from the text file. If `usemask` is True, this is a masked array. See also [`numpy.loadtxt`](numpy.loadtxt#numpy.loadtxt "numpy.loadtxt") equivalent function when no data is missing. #### Notes * When spaces are used as delimiters, or when no delimiter has been given as input, there should not be any missing data between two fields. * When variables are named (either by a flexible dtype or with a `names` sequence), there must not be any header in the file (else a ValueError exception is raised). * Individual values are not stripped of spaces by default. When using a custom converter, make sure the function does remove spaces. * Custom converters may receive unexpected values due to dtype discovery. #### References [1] NumPy User Guide, section [I/O with NumPy](https://docs.scipy.org/doc/numpy/user/basics.io.genfromtxt.html).
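The missing-data machinery described in the parameters above (`filling_values`, `converters`) can be sketched as follows; the data and fill values are illustrative only:

```python
import numpy as np
from io import StringIO

# A small CSV with an empty field in row 2, column 2.
# filling_values supplies a default wherever a field is missing.
data = np.genfromtxt(StringIO("1,2,3\n4,,6"), delimiter=",",
                     filling_values=-999)
print(data)  # row 2, column 2 becomes -999.0

# Alternatively, a per-column converter can supply the default,
# as in the converters example from the Parameters section:
data2 = np.genfromtxt(StringIO("1,2,3\n4,,6"), delimiter=",",
                      converters={1: lambda v: float(v or 0)})
print(data2)  # row 2, column 2 becomes 0.0
```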
#### Examples >>> from io import StringIO >>> import numpy as np Comma delimited file with mixed dtype >>> s = StringIO("1,1.3,abcde") >>> data = np.genfromtxt(s, dtype=[('myint','i8'),('myfloat','f8'), ... ('mystring','S5')], delimiter=",") >>> data array((1, 1.3, b'abcde'), dtype=[('myint', '<i8'), ('myfloat', '<f8'), ('mystring', 'S5')]) Using dtype = None >>> _ = s.seek(0) # needed for StringIO example only >>> data = np.genfromtxt(s, dtype=None, ... names = ['myint','myfloat','mystring'], delimiter=",") >>> data array((1, 1.3, 'abcde'), dtype=[('myint', '<i8'), ('myfloat', '<f8'), ('mystring', '<U5')]) Specifying dtype and names >>> _ = s.seek(0) >>> data = np.genfromtxt(s, dtype="i8,f8,S5", ... names=['myint','myfloat','mystring'], delimiter=",") >>> data array((1, 1.3, b'abcde'), dtype=[('myint', '<i8'), ('myfloat', '<f8'), ('mystring', 'S5')]) An example with fixed-width columns >>> s = StringIO("11.3abcde") >>> data = np.genfromtxt(s, dtype=None, names=['intvar','fltvar','strvar'], ... delimiter=[1,3,5]) >>> data array((1, 1.3, 'abcde'), dtype=[('intvar', '<i8'), ('fltvar', '<f8'), ('strvar', '<U5')]) An example to show comments >>> f = StringIO(''' ... text,# of chars ... hello world,11 ... numpy,5''') >>> np.genfromtxt(f, dtype='S12,S12', delimiter=',') array([(b'text', b''), (b'hello world', b'11'), (b'numpy', b'5')], dtype=[('f0', 'S12'), ('f1', 'S12')]) # numpy.geomspace numpy.geomspace(_start_ , _stop_ , _num =50_, _endpoint =True_, _dtype =None_, _axis =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/function_base.py#L310-L452) Return numbers spaced evenly on a log scale (a geometric progression). This is similar to [`logspace`](numpy.logspace#numpy.logspace "numpy.logspace"), but with endpoints specified directly. Each output sample is a constant multiple of the previous. Parameters: **start** array_like The starting value of the sequence. **stop** array_like The final value of the sequence, unless `endpoint` is False. In that case, `num + 1` values are spaced over the interval in log-space, of which all but the last (a sequence of length `num`) are returned. **num** integer, optional Number of samples to generate. Default is 50. **endpoint** boolean, optional If true, `stop` is the last sample. Otherwise, it is not included.
Default is True. **dtype** dtype The type of the output array. If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is not given, the data type is inferred from `start` and `stop`. The inferred dtype will never be an integer; `float` is chosen even if the arguments would produce an array of integers. **axis** int, optional The axis in the result to store the samples. Relevant only if start or stop are array-like. By default (0), the samples will be along a new axis inserted at the beginning. Use -1 to get an axis at the end. Returns: **samples** ndarray `num` samples, equally spaced on a log scale. See also [`logspace`](numpy.logspace#numpy.logspace "numpy.logspace") Similar to geomspace, but with endpoints specified using log and base. [`linspace`](numpy.linspace#numpy.linspace "numpy.linspace") Similar to geomspace, but with arithmetic instead of geometric progression. [`arange`](numpy.arange#numpy.arange "numpy.arange") Similar to linspace, with the step size specified instead of the number of samples. [How to create arrays with regularly-spaced values](../../user/how-to- partition#how-to-partition) #### Notes If the inputs or dtype are complex, the output will follow a logarithmic spiral in the complex plane. (There are an infinite number of spirals passing through two points; the output will follow the shortest such path.) #### Examples >>> import numpy as np >>> np.geomspace(1, 1000, num=4) array([ 1., 10., 100., 1000.]) >>> np.geomspace(1, 1000, num=3, endpoint=False) array([ 1., 10., 100.]) >>> np.geomspace(1, 1000, num=4, endpoint=False) array([ 1. 
, 5.62341325, 31.6227766 , 177.827941 ]) >>> np.geomspace(1, 256, num=9) array([ 1., 2., 4., 8., 16., 32., 64., 128., 256.]) Note that the above may not produce exact integers: >>> np.geomspace(1, 256, num=9, dtype=int) array([ 1, 2, 4, 7, 16, 32, 63, 127, 256]) >>> np.around(np.geomspace(1, 256, num=9)).astype(int) array([ 1, 2, 4, 8, 16, 32, 64, 128, 256]) Negative, decreasing, and complex inputs are allowed: >>> np.geomspace(1000, 1, num=4) array([1000., 100., 10., 1.]) >>> np.geomspace(-1000, -1, num=4) array([-1000., -100., -10., -1.]) >>> np.geomspace(1j, 1000j, num=4) # Straight line array([0. +1.j, 0. +10.j, 0. +100.j, 0.+1000.j]) >>> np.geomspace(-1+0j, 1+0j, num=5) # Circle array([-1.00000000e+00+1.22464680e-16j, -7.07106781e-01+7.07106781e-01j, 6.12323400e-17+1.00000000e+00j, 7.07106781e-01+7.07106781e-01j, 1.00000000e+00+0.00000000e+00j]) Graphical illustration of `endpoint` parameter: >>> import matplotlib.pyplot as plt >>> N = 10 >>> y = np.zeros(N) >>> plt.semilogx(np.geomspace(1, 1000, N, endpoint=True), y + 1, 'o') [] >>> plt.semilogx(np.geomspace(1, 1000, N, endpoint=False), y + 2, 'o') [] >>> plt.axis([0.5, 2000, 0, 3]) [0.5, 2000, 0, 3] >>> plt.grid(True, color='0.7', linestyle='-', which='both', axis='both') >>> plt.show() # numpy.get_include numpy.get_include()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_utils_impl.py#L72-L115) Return the directory that contains the NumPy *.h header files. Extension modules that need to compile against NumPy may need to use this function to locate the appropriate include directory. #### Notes When using `setuptools`, for example in `setup.py`: import numpy as np ... Extension('extension_name', ... include_dirs=[np.get_include()]) ... 
Note that a CLI tool `numpy-config` was introduced in NumPy 2.0, using that is likely preferred for build systems other than `setuptools`: $ numpy-config --cflags -I/path/to/site-packages/numpy/_core/include # Or rely on pkg-config: $ export PKG_CONFIG_PATH=$(numpy-config --pkgconfigdir) $ pkg-config --cflags -I/path/to/site-packages/numpy/_core/include #### Examples >>> np.get_include() '.../site-packages/numpy/core/include' # may vary # numpy.get_printoptions numpy.get_printoptions()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/arrayprint.py#L318-L364) Return the current print options. Returns: **print_opts** dict Dictionary of current print options with keys * precision : int * threshold : int * edgeitems : int * linewidth : int * suppress : bool * nanstr : str * infstr : str * sign : str * formatter : dict of callables * floatmode : str * legacy : str or False For a full description of these options, see [`set_printoptions`](numpy.set_printoptions#numpy.set_printoptions "numpy.set_printoptions"). See also [`set_printoptions`](numpy.set_printoptions#numpy.set_printoptions "numpy.set_printoptions"), [`printoptions`](numpy.printoptions#numpy.printoptions "numpy.printoptions") #### Examples >>> import numpy as np >>> np.get_printoptions() {'edgeitems': 3, 'threshold': 1000, ..., 'override_repr': None} >>> np.get_printoptions()['linewidth'] 75 >>> np.set_printoptions(linewidth=100) >>> np.get_printoptions()['linewidth'] 100 # numpy.getbufsize numpy.getbufsize()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_ufunc_config.py#L197-L214) Return the size of the buffer used in ufuncs. Returns: **getbufsize** int Size of ufunc buffer in bytes. #### Examples >>> import numpy as np >>> np.getbufsize() 8192 # numpy.geterr numpy.geterr()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_ufunc_config.py#L109-L154) Get the current way of handling floating-point errors. 
Returns: **res** dict A dictionary with keys “divide”, “over”, “under”, and “invalid”, whose values are from the strings “ignore”, “print”, “log”, “warn”, “raise”, and “call”. The keys represent possible floating-point exceptions, and the values define how these exceptions are handled. See also [`geterrcall`](numpy.geterrcall#numpy.geterrcall "numpy.geterrcall"), [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"), [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall") #### Notes For complete documentation of the types of floating-point exceptions and treatment options, see [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"). #### Examples >>> import numpy as np >>> np.geterr() {'divide': 'warn', 'over': 'warn', 'under': 'ignore', 'invalid': 'warn'} >>> np.arange(3.) / np.arange(3.) array([nan, 1., 1.]) RuntimeWarning: invalid value encountered in divide >>> oldsettings = np.seterr(all='warn', invalid='raise') >>> np.geterr() {'divide': 'warn', 'over': 'warn', 'under': 'warn', 'invalid': 'raise'} >>> np.arange(3.) / np.arange(3.) Traceback (most recent call last): ... FloatingPointError: invalid value encountered in divide >>> oldsettings = np.seterr(**oldsettings) # restore original # numpy.geterrcall numpy.geterrcall()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_ufunc_config.py#L307-L353) Return the current callback function used on floating-point errors. When the error handling for a floating-point error (one of “divide”, “over”, “under”, or “invalid”) is set to ‘call’ or ‘log’, the function that is called or the log instance that is written to is returned by `geterrcall`. This function or log instance has been set with [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall"). Returns: **errobj** callable, log instance or None The current error handler. If no handler was set through [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall"), `None` is returned. 
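`seterr`/`geterr` change the settings globally; for a temporary change, NumPy's `errstate` context manager (not covered above) is usually preferred. A short sketch:

```python
import numpy as np

# np.errstate scopes an error-handling change to a block and
# restores the previous settings on exit.
saved = np.geterr()
with np.errstate(divide="ignore", invalid="ignore"):
    r = np.arange(3.0) / np.arange(3.0)  # 0/0 -> nan, without a warning
print(r)                                 # [nan  1.  1.]
assert np.geterr() == saved              # previous settings restored on exit
```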
See also [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall"), [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"), [`geterr`](numpy.geterr#numpy.geterr "numpy.geterr") #### Notes For complete documentation of the types of floating-point exceptions and treatment options, see [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"). #### Examples >>> import numpy as np >>> np.geterrcall() # we did not yet set a handler, returns None >>> orig_settings = np.seterr(all='call') >>> def err_handler(type, flag): ... print("Floating point error (%s), with flag %s" % (type, flag)) >>> old_handler = np.seterrcall(err_handler) >>> np.array([1, 2, 3]) / 0.0 Floating point error (divide by zero), with flag 1 array([inf, inf, inf]) >>> cur_handler = np.geterrcall() >>> cur_handler is err_handler True >>> old_settings = np.seterr(**orig_settings) # restore original >>> old_handler = np.seterrcall(None) # restore original # numpy.gradient numpy.gradient(_f_ , _* varargs_, _axis =None_, _edge_order =1_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L976-L1362) Return the gradient of an N-dimensional array. The gradient is computed using second order accurate central differences in the interior points and either first or second order accurate one-sides (forward or backwards) differences at the boundaries. The returned gradient hence has the same shape as the input array. Parameters: **f** array_like An N-dimensional array containing samples of a scalar function. **varargs** list of scalar or array, optional Spacing between f values. Default unitary spacing for all dimensions. Spacing can be specified using: 1. single scalar to specify a sample distance for all dimensions. 2. N scalars to specify a constant sample distance for each dimension. i.e. `dx`, `dy`, `dz`, … 3. N arrays to specify the coordinates of the values along each dimension of F. The length of the array must match the size of the corresponding dimension 4. 
Any combination of N scalars/arrays with the meaning of 2. and 3. If `axis` is given, the number of varargs must equal the number of axes. Default: 1. (see Examples below). **edge_order**{1, 2}, optional Gradient is calculated using N-th order accurate differences at the boundaries. Default: 1. **axis** None or int or tuple of ints, optional Gradient is calculated only along the given axis or axes The default (axis = None) is to calculate the gradient for all the axes of the input array. axis may be negative, in which case it counts from the last to the first axis. Returns: **gradient** ndarray or tuple of ndarray A tuple of ndarrays (or a single ndarray if there is only one dimension) corresponding to the derivatives of f with respect to each dimension. Each derivative has the same shape as f. #### Notes Assuming that \\(f\in C^{3}\\) (i.e., \\(f\\) has at least 3 continuous derivatives) and let \\(h_{*}\\) be a non-homogeneous stepsize, we minimize the “consistency error” \\(\eta_{i}\\) between the true gradient and its estimate from a linear combination of the neighboring grid-points: \\[\eta_{i} = f_{i}^{\left(1\right)} - \left[ \alpha f\left(x_{i}\right) + \beta f\left(x_{i} + h_{d}\right) + \gamma f\left(x_{i}-h_{s}\right) \right]\\] By substituting \\(f(x_{i} + h_{d})\\) and \\(f(x_{i} - h_{s})\\) with their Taylor series expansion, this translates into solving the following the linear system: \\[\begin{split}\left\\{ \begin{array}{r} \alpha+\beta+\gamma=0 \\\ \beta h_{d}-\gamma h_{s}=1 \\\ \beta h_{d}^{2}+\gamma h_{s}^{2}=0 \end{array} \right.\end{split}\\] The resulting approximation of \\(f_{i}^{(1)}\\) is the following: \\[\hat f_{i}^{(1)} = \frac{ h_{s}^{2}f\left(x_{i} + h_{d}\right) + \left(h_{d}^{2} - h_{s}^{2}\right)f\left(x_{i}\right) - h_{d}^{2}f\left(x_{i}-h_{s}\right)} { h_{s}h_{d}\left(h_{d} + h_{s}\right)} + \mathcal{O}\left(\frac{h_{d}h_{s}^{2} + h_{s}h_{d}^{2}}{h_{d} + h_{s}}\right)\\] It is worth noting that if \\(h_{s}=h_{d}\\) (i.e., data 
are evenly spaced) we find the standard second order approximation: \\[\hat f_{i}^{(1)}= \frac{f\left(x_{i+1}\right) - f\left(x_{i-1}\right)}{2h} + \mathcal{O}\left(h^{2}\right)\\] With a similar procedure the forward/backward approximations used for boundaries can be derived. #### References [1] Quarteroni A., Sacco R., Saleri F. (2007) Numerical Mathematics (Texts in Applied Mathematics). New York: Springer. [2] Durran D. R. (1999) Numerical Methods for Wave Equations in Geophysical Fluid Dynamics. New York: Springer. [3] Fornberg B. (1988) Generation of Finite Difference Formulas on Arbitrarily Spaced Grids, Mathematics of Computation 51, no. 184 : 699-706. [PDF](https://www.ams.org/journals/mcom/1988-51-184/S0025-5718-1988-0935077-0/S0025-5718-1988-0935077-0.pdf). #### Examples >>> import numpy as np >>> f = np.array([1, 2, 4, 7, 11, 16]) >>> np.gradient(f) array([1. , 1.5, 2.5, 3.5, 4.5, 5. ]) >>> np.gradient(f, 2) array([0.5 , 0.75, 1.25, 1.75, 2.25, 2.5 ]) Spacing can be also specified with an array that represents the coordinates of the values F along the dimensions. For instance a uniform spacing: >>> x = np.arange(f.size) >>> np.gradient(f, x) array([1. , 1.5, 2.5, 3.5, 4.5, 5. ]) Or a non uniform one: >>> x = np.array([0., 1., 1.5, 3.5, 4., 6.]) >>> np.gradient(f, x) array([1. , 3. , 3.5, 6.7, 6.9, 2.5]) For two dimensional arrays, the return will be two arrays ordered by axis. In this example the first array stands for the gradient in rows and the second one in columns direction: >>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]])) (array([[ 2., 2., -1.], [ 2., 2., -1.]]), array([[1. , 2.5, 4. ], [1. , 1. , 1. ]])) In this example the spacing is also specified: uniform for axis=0 and non uniform for axis=1 >>> dx = 2. >>> y = [1., 1.5, 3.5] >>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]]), dx, y) (array([[ 1. , 1. , -0.5], [ 1. , 1. , -0.5]]), array([[2. , 2. , 2. ], [2. 
, 1.7, 0.5]])) It is possible to specify how boundaries are treated using `edge_order` >>> x = np.array([0, 1, 2, 3, 4]) >>> f = x**2 >>> np.gradient(f, edge_order=1) array([1., 2., 4., 6., 7.]) >>> np.gradient(f, edge_order=2) array([0., 2., 4., 6., 8.]) The `axis` keyword can be used to specify a subset of axes of which the gradient is calculated >>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]]), axis=0) array([[ 2., 2., -1.], [ 2., 2., -1.]]) The `varargs` argument defines the spacing between sample points in the input array. It can take two forms: 1. An array, specifying coordinates, which may be unevenly spaced: >>> x = np.array([0., 2., 3., 6., 8.]) >>> y = x ** 2 >>> np.gradient(y, x, edge_order=2) array([ 0., 4., 6., 12., 16.]) 2. A scalar, representing the fixed sample distance: >>> dx = 2 >>> x = np.array([0., 2., 4., 6., 8.]) >>> y = x ** 2 >>> np.gradient(y, dx, edge_order=2) array([ 0., 4., 8., 12., 16.]) It’s possible to provide different data for spacing along each dimension. The number of arguments must match the number of dimensions in the input data. >>> dx = 2 >>> dy = 3 >>> x = np.arange(0, 6, dx) >>> y = np.arange(0, 9, dy) >>> xs, ys = np.meshgrid(x, y) >>> zs = xs + 2 * ys >>> np.gradient(zs, dy, dx) # Passing two scalars (array([[2., 2., 2.], [2., 2., 2.], [2., 2., 2.]]), array([[1., 1., 1.], [1., 1., 1.], [1., 1., 1.]])) Mixing scalars and arrays is also allowed: >>> np.gradient(zs, y, dx) # Passing one array and one scalar (array([[2., 2., 2.], [2., 2., 2.], [2., 2., 2.]]), array([[1., 1., 1.], [1., 1., 1.], [1., 1., 1.]])) # numpy.greater numpy.greater(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Return the truth value of (x1 > x2) element-wise. Parameters: **x1, x2** array_like Input arrays. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). 
**out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** ndarray or scalar Output array, element-wise comparison of `x1` and `x2`. Typically of type bool, unless `dtype=object` is passed. This is a scalar if both `x1` and `x2` are scalars. See also [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`less`](numpy.less#numpy.less "numpy.less"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`equal`](numpy.equal#numpy.equal "numpy.equal"), [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal") #### Examples >>> import numpy as np >>> np.greater([4,2],[2,2]) array([ True, False]) The `>` operator can be used as a shorthand for `np.greater` on ndarrays. >>> a = np.array([4, 2]) >>> b = np.array([2, 2]) >>> a > b array([ True, False]) # numpy.greater_equal numpy.greater_equal(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Return the truth value of (x1 >= x2) element-wise. Parameters: **x1, x2** array_like Input arrays. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). 
**out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** bool or ndarray of bool Output array, element-wise comparison of `x1` and `x2`. Typically of type bool, unless `dtype=object` is passed. This is a scalar if both `x1` and `x2` are scalars. See also [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`less`](numpy.less#numpy.less "numpy.less"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`equal`](numpy.equal#numpy.equal "numpy.equal"), [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal") #### Examples >>> import numpy as np >>> np.greater_equal([4, 2, 1], [2, 2, 2]) array([ True, True, False]) The `>=` operator can be used as a shorthand for `np.greater_equal` on ndarrays. >>> a = np.array([4, 2, 1]) >>> b = np.array([2, 2, 2]) >>> a >= b array([ True, True, False]) # numpy.hamming numpy.hamming(_M_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L3367-L3463) Return the Hamming window. The Hamming window is a taper formed by using a weighted cosine. Parameters: **M** int Number of points in the output window. If zero or less, an empty array is returned. 
Returns: **out** ndarray The window, with the maximum value normalized to one (the value one appears only if the number of samples is odd). See also [`bartlett`](numpy.bartlett#numpy.bartlett "numpy.bartlett"), [`blackman`](numpy.blackman#numpy.blackman "numpy.blackman"), [`hanning`](numpy.hanning#numpy.hanning "numpy.hanning"), [`kaiser`](numpy.kaiser#numpy.kaiser "numpy.kaiser") #### Notes The Hamming window is defined as \\[w(n) = 0.54 - 0.46\cos\left(\frac{2\pi{n}}{M-1}\right) \qquad 0 \leq n \leq M-1\\] The Hamming was named for R. W. Hamming, an associate of J. W. Tukey and is described in Blackman and Tukey. It was recommended for smoothing the truncated autocovariance function in the time domain. Most references to the Hamming window come from the signal processing literature, where it is used as one of many windowing functions for smoothing values. It is also known as an apodization (which means “removing the foot”, i.e. smoothing discontinuities at the beginning and end of the sampled signal) or tapering function. #### References [1] Blackman, R.B. and Tukey, J.W., (1958) The measurement of power spectra, Dover Publications, New York. [2] E.R. Kanasewich, “Time Sequence Analysis in Geophysics”, The University of Alberta Press, 1975, pp. 109-110. [3] Wikipedia, “Window function”, [4] W.H. Press, B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling, “Numerical Recipes”, Cambridge University Press, 1986, page 425. #### Examples >>> import numpy as np >>> np.hamming(12) array([ 0.08 , 0.15302337, 0.34890909, 0.60546483, 0.84123594, # may vary 0.98136677, 0.98136677, 0.84123594, 0.60546483, 0.34890909, 0.15302337, 0.08 ]) Plot the window and the frequency response. 
    import matplotlib.pyplot as plt
    import numpy as np
    from numpy.fft import fft, fftshift

    window = np.hamming(51)
    plt.plot(window)
    plt.title("Hamming window")
    plt.ylabel("Amplitude")
    plt.xlabel("Sample")
    plt.show()

    plt.figure()
    A = fft(window, 2048) / 25.5
    mag = np.abs(fftshift(A))
    freq = np.linspace(-0.5, 0.5, len(A))
    response = 20 * np.log10(mag)
    response = np.clip(response, -100, 100)
    plt.plot(freq, response)
    plt.title("Frequency response of Hamming window")
    plt.ylabel("Magnitude [dB]")
    plt.xlabel("Normalized frequency [cycles per sample]")
    plt.axis('tight')
    plt.show()

# numpy.hanning

numpy.hanning(_M_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L3265-L3364)

Return the Hanning window.

The Hanning window is a taper formed by using a weighted cosine.

Parameters:

**M** int

Number of points in the output window. If zero or less, an empty array is returned.

Returns:

**out** ndarray, shape(M,)

The window, with the maximum value normalized to one (the value one appears only if `M` is odd).

See also

[`bartlett`](numpy.bartlett#numpy.bartlett "numpy.bartlett"), [`blackman`](numpy.blackman#numpy.blackman "numpy.blackman"), [`hamming`](numpy.hamming#numpy.hamming "numpy.hamming"), [`kaiser`](numpy.kaiser#numpy.kaiser "numpy.kaiser")

#### Notes

The Hanning window is defined as

\\[w(n) = 0.5 - 0.5\cos\left(\frac{2\pi{n}}{M-1}\right) \qquad 0 \leq n \leq M-1\\]

The Hanning was named for Julius von Hann, an Austrian meteorologist. It is also known as the Cosine Bell. Some authors prefer that it be called a Hann window, to help avoid confusion with the very similar Hamming window.

Most references to the Hanning window come from the signal processing literature, where it is used as one of many windowing functions for smoothing values. It is also known as an apodization (which means “removing the foot”, i.e. smoothing discontinuities at the beginning and end of the sampled signal) or tapering function.

#### References

[1] Blackman, R.B.
and Tukey, J.W., (1958) The measurement of power spectra, Dover Publications, New York.

[2] E.R. Kanasewich, “Time Sequence Analysis in Geophysics”, The University of Alberta Press, 1975, pp. 106-108.

[3] Wikipedia, “Window function”,

[4] W.H. Press, B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling, “Numerical Recipes”, Cambridge University Press, 1986, page 425.

#### Examples

>>> import numpy as np
>>> np.hanning(12)
array([0.        , 0.07937323, 0.29229249, 0.57115742, 0.82743037,
       0.97974649, 0.97974649, 0.82743037, 0.57115742, 0.29229249,
       0.07937323, 0.        ])

Plot the window and its frequency response.

    import matplotlib.pyplot as plt
    import numpy as np
    from numpy.fft import fft, fftshift

    window = np.hanning(51)
    plt.plot(window)
    plt.title("Hann window")
    plt.ylabel("Amplitude")
    plt.xlabel("Sample")
    plt.show()

    plt.figure()
    A = fft(window, 2048) / 25.5
    mag = np.abs(fftshift(A))
    freq = np.linspace(-0.5, 0.5, len(A))
    with np.errstate(divide='ignore', invalid='ignore'):
        response = 20 * np.log10(mag)
    response = np.clip(response, -100, 100)
    plt.plot(freq, response)
    plt.title("Frequency response of the Hann window")
    plt.ylabel("Magnitude [dB]")
    plt.xlabel("Normalized frequency [cycles per sample]")
    plt.axis('tight')
    plt.show()

# numpy.heaviside

numpy.heaviside(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _

Compute the Heaviside step function.

The Heaviside step function [1] is defined as:

                          0   if x1 < 0
    heaviside(x1, x2) =  x2   if x1 == 0
                          1   if x1 > 0

where `x2` is often taken to be 0.5, but 0 and 1 are also sometimes used.

Parameters:

**x1** array_like

Input values.

**x2** array_like

The value of the function when x1 is 0. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output).

**out** ndarray, None, or tuple of ndarray and None, optional

A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to.
If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** ndarray or scalar The output array, element-wise Heaviside step function of `x1`. This is a scalar if both `x1` and `x2` are scalars. #### References [1] Wikipedia, “Heaviside step function”, #### Examples >>> import numpy as np >>> np.heaviside([-1.5, 0, 2.0], 0.5) array([ 0. , 0.5, 1. ]) >>> np.heaviside([-1.5, 0, 2.0], 1) array([ 0., 1., 1.]) # numpy.histogram numpy.histogram(_a_ , _bins =10_, _range =None_, _density =None_, _weights =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_histograms_impl.py#L689-L903) Compute the histogram of a dataset. Parameters: **a** array_like Input data. The histogram is computed over the flattened array. **bins** int or sequence of scalars or str, optional If `bins` is an int, it defines the number of equal-width bins in the given range (10, by default). If `bins` is a sequence, it defines a monotonically increasing array of bin edges, including the rightmost edge, allowing for non- uniform bin widths. If `bins` is a string, it defines the method used to calculate the optimal bin width, as defined by [`histogram_bin_edges`](numpy.histogram_bin_edges#numpy.histogram_bin_edges "numpy.histogram_bin_edges"). **range**(float, float), optional The lower and upper range of the bins. If not provided, range is simply `(a.min(), a.max())`. 
Values outside the range are ignored. The first element of the range must be less than or equal to the second. `range` affects the automatic bin computation as well. While bin width is computed to be optimal based on the actual data within `range`, the bin count will fill the entire range including portions containing no data. **weights** array_like, optional An array of weights, of the same shape as `a`. Each value in `a` only contributes its associated weight towards the bin count (instead of 1). If `density` is True, the weights are normalized, so that the integral of the density over the range remains 1. Please note that the `dtype` of `weights` will also become the `dtype` of the returned accumulator (`hist`), so it must be large enough to hold accumulated values as well. **density** bool, optional If `False`, the result will contain the number of samples in each bin. If `True`, the result is the value of the probability _density_ function at the bin, normalized such that the _integral_ over the range is 1. Note that the sum of the histogram values will not be equal to 1 unless bins of unity width are chosen; it is not a probability _mass_ function. Returns: **hist** array The values of the histogram. See `density` and `weights` for a description of the possible semantics. If `weights` are given, `hist.dtype` will be taken from `weights`. **bin_edges** array of dtype float Return the bin edges `(length(hist)+1)`. See also [`histogramdd`](numpy.histogramdd#numpy.histogramdd "numpy.histogramdd"), [`bincount`](numpy.bincount#numpy.bincount "numpy.bincount"), [`searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted"), [`digitize`](numpy.digitize#numpy.digitize "numpy.digitize"), [`histogram_bin_edges`](numpy.histogram_bin_edges#numpy.histogram_bin_edges "numpy.histogram_bin_edges") #### Notes All but the last (righthand-most) bin is half-open. 
In other words, if `bins` is:

    [1, 2, 3, 4]

then the first bin is `[1, 2)` (including 1, but excluding 2) and the second `[2, 3)`. The last bin, however, is `[3, 4]`, which _includes_ 4.

#### Examples

>>> import numpy as np
>>> np.histogram([1, 2, 1], bins=[0, 1, 2, 3])
(array([0, 2, 1]), array([0, 1, 2, 3]))
>>> np.histogram(np.arange(4), bins=np.arange(5), density=True)
(array([0.25, 0.25, 0.25, 0.25]), array([0, 1, 2, 3, 4]))
>>> np.histogram([[1, 2, 1], [1, 0, 1]], bins=[0, 1, 2, 3])
(array([1, 4, 1]), array([0, 1, 2, 3]))

>>> a = np.arange(5)
>>> hist, bin_edges = np.histogram(a, density=True)
>>> hist
array([0.5, 0. , 0.5, 0. , 0. , 0.5, 0. , 0.5, 0. , 0.5])
>>> hist.sum()
2.4999999999999996
>>> np.sum(hist * np.diff(bin_edges))
1.0

Automated Bin Selection Methods example, using 2 peak random data with 2000 points.

    import matplotlib.pyplot as plt
    import numpy as np

    rng = np.random.RandomState(10)  # deterministic random data
    a = np.hstack((rng.normal(size=1000),
                   rng.normal(loc=5, scale=2, size=1000)))
    plt.hist(a, bins='auto')  # arguments are passed to np.histogram
    plt.title("Histogram with 'auto' bins")
    plt.show()

# numpy.histogram2d

numpy.histogram2d(_x_ , _y_ , _bins =10_, _range =None_, _density =None_, _weights =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_twodim_base_impl.py#L655-L822)

Compute the bi-dimensional histogram of two data samples.

Parameters:

**x** array_like, shape (N,)

An array containing the x coordinates of the points to be histogrammed.

**y** array_like, shape (N,)

An array containing the y coordinates of the points to be histogrammed.

**bins** int or array_like or [int, int] or [array, array], optional

The bin specification:

* If int, the number of bins for the two dimensions (nx=ny=bins).
* If array_like, the bin edges for the two dimensions (x_edges=y_edges=bins).
* If [int, int], the number of bins in each dimension (nx, ny = bins).
* If [array, array], the bin edges in each dimension (x_edges, y_edges = bins).
* A combination [int, array] or [array, int], where int is the number of bins and array is the bin edges. **range** array_like, shape(2,2), optional The leftmost and rightmost edges of the bins along each dimension (if not specified explicitly in the `bins` parameters): `[[xmin, xmax], [ymin, ymax]]`. All values outside of this range will be considered outliers and not tallied in the histogram. **density** bool, optional If False, the default, returns the number of samples in each bin. If True, returns the probability _density_ function at the bin, `bin_count / sample_count / bin_area`. **weights** array_like, shape(N,), optional An array of values `w_i` weighing each sample `(x_i, y_i)`. Weights are normalized to 1 if `density` is True. If `density` is False, the values of the returned histogram are equal to the sum of the weights belonging to the samples falling into each bin. Returns: **H** ndarray, shape(nx, ny) The bi-dimensional histogram of samples `x` and `y`. Values in `x` are histogrammed along the first dimension and values in `y` are histogrammed along the second dimension. **xedges** ndarray, shape(nx+1,) The bin edges along the first dimension. **yedges** ndarray, shape(ny+1,) The bin edges along the second dimension. See also [`histogram`](numpy.histogram#numpy.histogram "numpy.histogram") 1D histogram [`histogramdd`](numpy.histogramdd#numpy.histogramdd "numpy.histogramdd") Multidimensional histogram #### Notes When `density` is True, then the returned histogram is the sample density, defined such that the sum over bins of the product `bin_value * bin_area` is 1. Please note that the histogram does not follow the Cartesian convention where `x` values are on the abscissa and `y` values on the ordinate axis. Rather, `x` is histogrammed along the first dimension of the array (vertical), and `y` along the second dimension of the array (horizontal). This ensures compatibility with [`histogramdd`](numpy.histogramdd#numpy.histogramdd "numpy.histogramdd"). 
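The two properties described in the Notes, that `density=True` normalizes the histogram so that the sum over bins of `bin_value * bin_area` is 1, and that `x` is binned along the first axis of `H`, can be checked directly. This is an illustrative sketch (the data values are arbitrary, not from the original examples):

```python
import numpy as np

# Four points, 3 bins along x and 2 along y, with explicit ranges so the
# bin areas are easy to compute.
x = np.array([0.5, 1.5, 1.5, 2.5])
y = np.array([0.5, 0.5, 1.5, 1.5])
H, xedges, yedges = np.histogram2d(x, y, bins=(3, 2),
                                   range=[[0, 3], [0, 2]], density=True)

# H has shape (nx, ny): x is histogrammed along the first dimension.
assert H.shape == (3, 2)

# Bin areas come from the outer product of the edge differences;
# the density integrates to 1 over the full range.
areas = np.outer(np.diff(xedges), np.diff(yedges))
print(np.sum(H * areas))  # 1.0
```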
#### Examples >>> import numpy as np >>> from matplotlib.image import NonUniformImage >>> import matplotlib.pyplot as plt Construct a 2-D histogram with variable bin width. First define the bin edges: >>> xedges = [0, 1, 3, 5] >>> yedges = [0, 2, 3, 4, 6] Next we create a histogram H with random bin content: >>> x = np.random.normal(2, 1, 100) >>> y = np.random.normal(1, 1, 100) >>> H, xedges, yedges = np.histogram2d(x, y, bins=(xedges, yedges)) >>> # Histogram does not follow Cartesian convention (see Notes), >>> # therefore transpose H for visualization purposes. >>> H = H.T [`imshow`](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.imshow.html#matplotlib.pyplot.imshow "\(in Matplotlib v3.9.3\)") can only display square bins: >>> fig = plt.figure(figsize=(7, 3)) >>> ax = fig.add_subplot(131, title='imshow: square bins') >>> plt.imshow(H, interpolation='nearest', origin='lower', ... extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]]) [`pcolormesh`](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.pcolormesh.html#matplotlib.pyplot.pcolormesh "\(in Matplotlib v3.9.3\)") can display actual edges: >>> ax = fig.add_subplot(132, title='pcolormesh: actual edges', ... aspect='equal') >>> X, Y = np.meshgrid(xedges, yedges) >>> ax.pcolormesh(X, Y, H) [`NonUniformImage`](https://matplotlib.org/stable/api/image_api.html#matplotlib.image.NonUniformImage "\(in Matplotlib v3.9.3\)") can be used to display actual bin edges with interpolation: >>> ax = fig.add_subplot(133, title='NonUniformImage: interpolated', ... 
aspect='equal', xlim=xedges[[0, -1]], ylim=yedges[[0, -1]]) >>> im = NonUniformImage(ax, interpolation='bilinear') >>> xcenters = (xedges[:-1] + xedges[1:]) / 2 >>> ycenters = (yedges[:-1] + yedges[1:]) / 2 >>> im.set_data(xcenters, ycenters, H) >>> ax.add_image(im) >>> plt.show() It is also possible to construct a 2-D histogram without specifying bin edges: >>> # Generate non-symmetric test data >>> n = 10000 >>> x = np.linspace(1, 100, n) >>> y = 2*np.log(x) + np.random.rand(n) - 0.5 >>> # Compute 2d histogram. Note the order of x/y and xedges/yedges >>> H, yedges, xedges = np.histogram2d(y, x, bins=20) Now we can plot the histogram using [`pcolormesh`](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.pcolormesh.html#matplotlib.pyplot.pcolormesh "\(in Matplotlib v3.9.3\)"), and a [`hexbin`](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.hexbin.html#matplotlib.pyplot.hexbin "\(in Matplotlib v3.9.3\)") for comparison. >>> # Plot histogram using pcolormesh >>> fig, (ax1, ax2) = plt.subplots(ncols=2, sharey=True) >>> ax1.pcolormesh(xedges, yedges, H, cmap='rainbow') >>> ax1.plot(x, 2*np.log(x), 'k-') >>> ax1.set_xlim(x.min(), x.max()) >>> ax1.set_ylim(y.min(), y.max()) >>> ax1.set_xlabel('x') >>> ax1.set_ylabel('y') >>> ax1.set_title('histogram2d') >>> ax1.grid() >>> # Create hexbin plot for comparison >>> ax2.hexbin(x, y, gridsize=20, cmap='rainbow') >>> ax2.plot(x, 2*np.log(x), 'k-') >>> ax2.set_title('hexbin') >>> ax2.set_xlim(x.min(), x.max()) >>> ax2.set_xlabel('x') >>> ax2.grid() >>> plt.show() # numpy.histogram_bin_edges numpy.histogram_bin_edges(_a_ , _bins =10_, _range =None_, _weights =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_histograms_impl.py#L477-L681) Function to calculate only the edges of the bins used by the [`histogram`](numpy.histogram#numpy.histogram "numpy.histogram") function. Parameters: **a** array_like Input data. The histogram is computed over the flattened array. 
**bins** int or sequence of scalars or str, optional If `bins` is an int, it defines the number of equal-width bins in the given range (10, by default). If `bins` is a sequence, it defines the bin edges, including the rightmost edge, allowing for non-uniform bin widths. If `bins` is a string from the list below, `histogram_bin_edges` will use the method chosen to calculate the optimal bin width and consequently the number of bins (see the Notes section for more detail on the estimators) from the data that falls within the requested range. While the bin width will be optimal for the actual data in the range, the number of bins will be computed to fill the entire range, including the empty portions. For visualisation, using the ‘auto’ option is suggested. Weighted data is not supported for automated bin size selection. ‘auto’ Minimum bin width between the ‘sturges’ and ‘fd’ estimators. Provides good all-around performance. ‘fd’ (Freedman Diaconis Estimator) Robust (resilient to outliers) estimator that takes into account data variability and data size. ‘doane’ An improved version of Sturges’ estimator that works better with non-normal datasets. ‘scott’ Less robust estimator that takes into account data variability and data size. ‘stone’ Estimator based on leave-one-out cross-validation estimate of the integrated squared error. Can be regarded as a generalization of Scott’s rule. ‘rice’ Estimator does not take variability into account, only data size. Commonly overestimates number of bins required. ‘sturges’ R’s default method, only accounts for data size. Only optimal for gaussian data and underestimates number of bins for large non-gaussian datasets. ‘sqrt’ Square root (of data size) estimator, used by Excel and other programs for its speed and simplicity. **range**(float, float), optional The lower and upper range of the bins. If not provided, range is simply `(a.min(), a.max())`. Values outside the range are ignored. 
The first element of the range must be less than or equal to the second. `range` affects the automatic bin computation as well. While bin width is computed to be optimal based on the actual data within `range`, the bin count will fill the entire range including portions containing no data. **weights** array_like, optional An array of weights, of the same shape as `a`. Each value in `a` only contributes its associated weight towards the bin count (instead of 1). This is currently not used by any of the bin estimators, but may be in the future. Returns: **bin_edges** array of dtype float The edges to pass into [`histogram`](numpy.histogram#numpy.histogram "numpy.histogram") See also [`histogram`](numpy.histogram#numpy.histogram "numpy.histogram") #### Notes The methods to estimate the optimal number of bins are well founded in literature, and are inspired by the choices R provides for histogram visualisation. Note that having the number of bins proportional to \\(n^{1/3}\\) is asymptotically optimal, which is why it appears in most estimators. These are simply plug-in methods that give good starting points for number of bins. In the equations below, \\(h\\) is the binwidth and \\(n_h\\) is the number of bins. All estimators that compute bin counts are recast to bin width using the [`ptp`](numpy.ptp#numpy.ptp "numpy.ptp") of the data. The final bin count is obtained from `np.round(np.ceil(range / h))`. The final bin width is often less than what is returned by the estimators below. ‘auto’ (minimum bin width of the ‘sturges’ and ‘fd’ estimators) A compromise to get a good value. For small datasets the Sturges value will usually be chosen, while larger datasets will usually default to FD. Avoids the overly conservative behaviour of FD and Sturges for small and large datasets respectively. Switchover point is usually \\(a.size \approx 1000\\). 
‘fd’ (Freedman Diaconis Estimator) \\[h = 2 \frac{IQR}{n^{1/3}}\\] The binwidth is proportional to the interquartile range (IQR) and inversely proportional to cube root of a.size. Can be too conservative for small datasets, but is quite good for large datasets. The IQR is very robust to outliers. ‘scott’ \\[h = \sigma \sqrt[3]{\frac{24 \sqrt{\pi}}{n}}\\] The binwidth is proportional to the standard deviation of the data and inversely proportional to cube root of `x.size`. Can be too conservative for small datasets, but is quite good for large datasets. The standard deviation is not very robust to outliers. Values are very similar to the Freedman- Diaconis estimator in the absence of outliers. ‘rice’ \\[n_h = 2n^{1/3}\\] The number of bins is only proportional to cube root of `a.size`. It tends to overestimate the number of bins and it does not take into account data variability. ‘sturges’ \\[n_h = \log _{2}(n) + 1\\] The number of bins is the base 2 log of `a.size`. This estimator assumes normality of data and is too conservative for larger, non-normal datasets. This is the default method in R’s `hist` method. ‘doane’ \\[ \begin{align}\begin{aligned}n_h = 1 + \log_{2}(n) + \log_{2}\left(1 + \frac{|g_1|}{\sigma_{g_1}}\right)\\\g_1 = mean\left[\left(\frac{x - \mu}{\sigma}\right)^3\right]\\\\\sigma_{g_1} = \sqrt{\frac{6(n - 2)}{(n + 1)(n + 3)}}\end{aligned}\end{align} \\] An improved version of Sturges’ formula that produces better estimates for non-normal datasets. This estimator attempts to account for the skew of the data. ‘sqrt’ \\[n_h = \sqrt n\\] The simplest and fastest estimator. Only takes into account the data size. Additionally, if the data is of integer dtype, then the binwidth will never be less than 1. #### Examples >>> import numpy as np >>> arr = np.array([0, 0, 0, 1, 2, 3, 3, 4, 5]) >>> np.histogram_bin_edges(arr, bins='auto', range=(0, 1)) array([0. , 0.25, 0.5 , 0.75, 1. ]) >>> np.histogram_bin_edges(arr, bins=2) array([0. , 2.5, 5. 
]) For consistency with histogram, an array of pre-computed bins is passed through unmodified: >>> np.histogram_bin_edges(arr, [1, 2]) array([1, 2]) This function allows one set of bins to be computed, and reused across multiple histograms: >>> shared_bins = np.histogram_bin_edges(arr, bins='auto') >>> shared_bins array([0., 1., 2., 3., 4., 5.]) >>> group_id = np.array([0, 1, 1, 0, 1, 1, 0, 1, 1]) >>> hist_0, _ = np.histogram(arr[group_id == 0], bins=shared_bins) >>> hist_1, _ = np.histogram(arr[group_id == 1], bins=shared_bins) >>> hist_0; hist_1 array([1, 1, 0, 1, 0]) array([2, 0, 1, 1, 2]) Which gives more easily comparable results than using separate bins for each histogram: >>> hist_0, bins_0 = np.histogram(arr[group_id == 0], bins='auto') >>> hist_1, bins_1 = np.histogram(arr[group_id == 1], bins='auto') >>> hist_0; hist_1 array([1, 1, 1]) array([2, 1, 1, 2]) >>> bins_0; bins_1 array([0., 1., 2., 3.]) array([0. , 1.25, 2.5 , 3.75, 5. ]) # numpy.histogramdd numpy.histogramdd(_sample_ , _bins =10_, _range =None_, _density =None_, _weights =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_histograms_impl.py#L917-L1090) Compute the multidimensional histogram of some data. Parameters: **sample**(N, D) array, or (N, D) array_like The data to be histogrammed. Note the unusual interpretation of sample when an array_like: * When an array, each row is a coordinate in a D-dimensional space - such as `histogramdd(np.array([p1, p2, p3]))`. * When an array_like, each element is the list of values for single coordinate - such as `histogramdd((X, Y, Z))`. The first form should be preferred. **bins** sequence or int, optional The bin specification: * A sequence of arrays describing the monotonically increasing bin edges along each dimension. * The number of bins for each dimension (nx, ny, … =bins) * The number of bins for all dimensions (nx=ny=…=bins). 
**range** sequence, optional A sequence of length D, each an optional (lower, upper) tuple giving the outer bin edges to be used if the edges are not given explicitly in `bins`. An entry of None in the sequence results in the minimum and maximum values being used for the corresponding dimension. The default, None, is equivalent to passing a tuple of D None values. **density** bool, optional If False, the default, returns the number of samples in each bin. If True, returns the probability _density_ function at the bin, `bin_count / sample_count / bin_volume`. **weights**(N,) array_like, optional An array of values `w_i` weighing each sample `(x_i, y_i, z_i, …)`. Weights are normalized to 1 if density is True. If density is False, the values of the returned histogram are equal to the sum of the weights belonging to the samples falling into each bin. Returns: **H** ndarray The multidimensional histogram of sample x. See density and weights for the different possible semantics. **edges** tuple of ndarrays A tuple of D arrays describing the bin edges for each dimension. See also [`histogram`](numpy.histogram#numpy.histogram "numpy.histogram") 1-D histogram [`histogram2d`](numpy.histogram2d#numpy.histogram2d "numpy.histogram2d") 2-D histogram #### Examples >>> import numpy as np >>> rng = np.random.default_rng() >>> r = rng.normal(size=(100,3)) >>> H, edges = np.histogramdd(r, bins = (5, 8, 4)) >>> H.shape, edges[0].size, edges[1].size, edges[2].size ((5, 8, 4), 6, 9, 5) # numpy.hsplit numpy.hsplit(_ary_ , _indices_or_sections_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_shape_base_impl.py#L886-L954) Split an array into multiple sub-arrays horizontally (column-wise). Please refer to the [`split`](numpy.split#numpy.split "numpy.split") documentation. `hsplit` is equivalent to [`split`](numpy.split#numpy.split "numpy.split") with `axis=1`, the array is always split along the second axis except for 1-D arrays, where it is split at `axis=0`. 
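The equivalence stated above can be verified directly; a minimal sketch (array contents are arbitrary, chosen only for illustration):

```python
import numpy as np

# For 2-D (and higher) input, hsplit matches split with axis=1.
x = np.arange(12.0).reshape(3, 4)
for a, b in zip(np.hsplit(x, 2), np.split(x, 2, axis=1)):
    assert np.array_equal(a, b)

# For 1-D input, the split happens along axis 0 instead.
v = np.arange(6)
for a, b in zip(np.hsplit(v, 3), np.split(v, 3, axis=0)):
    assert np.array_equal(a, b)
```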
See also [`split`](numpy.split#numpy.split "numpy.split") Split an array into multiple sub-arrays of equal size. #### Examples >>> import numpy as np >>> x = np.arange(16.0).reshape(4, 4) >>> x array([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.], [12., 13., 14., 15.]]) >>> np.hsplit(x, 2) [array([[ 0., 1.], [ 4., 5.], [ 8., 9.], [12., 13.]]), array([[ 2., 3.], [ 6., 7.], [10., 11.], [14., 15.]])] >>> np.hsplit(x, np.array([3, 6])) [array([[ 0., 1., 2.], [ 4., 5., 6.], [ 8., 9., 10.], [12., 13., 14.]]), array([[ 3.], [ 7.], [11.], [15.]]), array([], shape=(4, 0), dtype=float64)] With a higher dimensional array the split is still along the second axis. >>> x = np.arange(8.0).reshape(2, 2, 2) >>> x array([[[0., 1.], [2., 3.]], [[4., 5.], [6., 7.]]]) >>> np.hsplit(x, 2) [array([[[0., 1.]], [[4., 5.]]]), array([[[2., 3.]], [[6., 7.]]])] With a 1-D array, the split is along axis 0. >>> x = np.array([0, 1, 2, 3, 4, 5]) >>> np.hsplit(x, 2) [array([0, 1, 2]), array([3, 4, 5])] # numpy.hstack numpy.hstack(_tup_ , _*_ , _dtype =None_, _casting ='same_kind'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/shape_base.py#L295-L367) Stack arrays in sequence horizontally (column wise). This is equivalent to concatenation along the second axis, except for 1-D arrays where it concatenates along the first axis. Rebuilds arrays divided by [`hsplit`](numpy.hsplit#numpy.hsplit "numpy.hsplit"). This function makes most sense for arrays with up to 3 dimensions. For instance, for pixel-data with a height (first axis), width (second axis), and r/g/b channels (third axis). The functions [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate"), [`stack`](numpy.stack#numpy.stack "numpy.stack") and [`block`](numpy.block#numpy.block "numpy.block") provide more general stacking and concatenation operations. 
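The equivalence to `concatenate` described above can be checked directly; this sketch covers both the 2-D case and the 1-D special case:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(6, 12).reshape(2, 3)

# For arrays with ndim >= 2, hstack concatenates along the second axis
assert np.array_equal(np.hstack((a, b)), np.concatenate((a, b), axis=1))

# For 1-D arrays, it concatenates along the first (only) axis
x, y = np.array([1, 2]), np.array([3, 4])
assert np.array_equal(np.hstack((x, y)), np.concatenate((x, y), axis=0))
```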
Parameters: **tup** sequence of ndarrays The arrays must have the same shape along all but the second axis, except 1-D arrays which can be any length. In the case of a single array_like input, it will be treated as a sequence of arrays; i.e., each element along the zeroth axis is treated as a separate array. **dtype** str or dtype If provided, the destination array will have this dtype. Cannot be provided together with `out`. New in version 1.24. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur. Defaults to ‘same_kind’. New in version 1.24. Returns: **stacked** ndarray The array formed by stacking the given arrays. See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`stack`](numpy.stack#numpy.stack "numpy.stack") Join a sequence of arrays along a new axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble an nd-array from nested lists of blocks. [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack") Stack arrays in sequence vertically (row wise). [`dstack`](numpy.dstack#numpy.dstack "numpy.dstack") Stack arrays in sequence depth wise (along third axis). [`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. [`hsplit`](numpy.hsplit#numpy.hsplit "numpy.hsplit") Split an array into multiple sub-arrays horizontally (column-wise). [`unstack`](numpy.unstack#numpy.unstack "numpy.unstack") Split an array into a tuple of sub-arrays along an axis. 
#### Examples >>> import numpy as np >>> a = np.array((1,2,3)) >>> b = np.array((4,5,6)) >>> np.hstack((a,b)) array([1, 2, 3, 4, 5, 6]) >>> a = np.array([[1],[2],[3]]) >>> b = np.array([[4],[5],[6]]) >>> np.hstack((a,b)) array([[1, 4], [2, 5], [3, 6]]) # numpy.hypot numpy.hypot(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) _= <ufunc 'hypot'>_ Given the “legs” of a right triangle, return its hypotenuse. Equivalent to `sqrt(x1**2 + x2**2)`, element-wise. If `x1` or `x2` is scalar_like (i.e., unambiguously cast-able to a scalar type), it is broadcast for use with each element of the other argument. (See Examples) Parameters: **x1, x2** array_like Leg of the triangle(s). If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **z** ndarray The hypotenuse of the triangle(s). This is a scalar if both `x1` and `x2` are scalars. 
#### Examples >>> import numpy as np >>> np.hypot(3*np.ones((3, 3)), 4*np.ones((3, 3))) array([[ 5., 5., 5.], [ 5., 5., 5.], [ 5., 5., 5.]]) Example showing broadcast of scalar_like argument: >>> np.hypot(3*np.ones((3, 3)), [4]) array([[ 5., 5., 5.], [ 5., 5., 5.], [ 5., 5., 5.]]) # numpy.i0 numpy.i0(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L3554-L3612) Modified Bessel function of the first kind, order 0. Usually denoted \\(I_0\\). Parameters: **x** array_like of float Argument of the Bessel function. Returns: **out** ndarray, shape = x.shape, dtype = float The modified Bessel function evaluated at each of the elements of `x`. See also [`scipy.special.i0`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.i0.html#scipy.special.i0 "\(in SciPy v1.14.1\)"), [`scipy.special.iv`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.iv.html#scipy.special.iv "\(in SciPy v1.14.1\)"), [`scipy.special.ive`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.ive.html#scipy.special.ive "\(in SciPy v1.14.1\)") #### Notes The scipy implementation is recommended over this function: it is a proper ufunc written in C, and more than an order of magnitude faster. We use the algorithm published by Clenshaw [1] and referenced by Abramowitz and Stegun [2], for which the function domain is partitioned into the two intervals [0,8] and (8,inf), and Chebyshev polynomial expansions are employed in each interval. Relative error on the domain [0,30] using IEEE arithmetic is documented [3] as having a peak of 5.8e-16 with an rms of 1.4e-16 (n = 30000). #### References [1] C. W. Clenshaw, “Chebyshev series for mathematical functions”, in _National Physical Laboratory Mathematical Tables_ , vol. 5, London: Her Majesty’s Stationery Office, 1962. [2] M. Abramowitz and I. A. Stegun, _Handbook of Mathematical Functions_ , 10th printing, New York: Dover, 1964, pp. 379. 
[3] #### Examples >>> import numpy as np >>> np.i0(0.) array(1.0) >>> np.i0([0, 1, 2, 3]) array([1. , 1.26606588, 2.2795853 , 4.88079259]) # numpy.identity numpy.identity(_n_ , _dtype =None_, _*_ , _like =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L2195-L2233) Return the identity array. The identity array is a square array with ones on the main diagonal. Parameters: **n** int Number of rows (and columns) in `n` x `n` output. **dtype** data-type, optional Data-type of the output. Defaults to `float`. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: **out** ndarray `n` x `n` array with its main diagonal set to one, and all other elements 0. #### Examples >>> import numpy as np >>> np.identity(3) array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) # numpy.iinfo _class_ numpy.iinfo(_type_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/__init__.py) Machine limits for integer types. Parameters: **int_type** integer type, dtype, or instance The kind of integer data type to get information about. See also [`finfo`](numpy.finfo#numpy.finfo "numpy.finfo") The equivalent for floating point data types. #### Examples With types: >>> import numpy as np >>> ii16 = np.iinfo(np.int16) >>> ii16.min -32768 >>> ii16.max 32767 >>> ii32 = np.iinfo(np.int32) >>> ii32.min -2147483648 >>> ii32.max 2147483647 With instances: >>> ii32 = np.iinfo(np.int32(10)) >>> ii32.min -2147483648 >>> ii32.max 2147483647 Attributes: **bits** int The number of bits occupied by the type. **dtype** dtype Returns the dtype for which `iinfo` returns information. [`min`](numpy.min#numpy.min "numpy.min")int Minimum value of given dtype. 
[`max`](numpy.max#numpy.max "numpy.max")int Maximum value of given dtype. # numpy.imag numpy.imag(_val_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_type_check_impl.py#L131-L168) Return the imaginary part of the complex argument. Parameters: **val** array_like Input array. Returns: **out** ndarray or scalar The imaginary component of the complex argument. If `val` is real, the type of `val` is used for the output. If `val` has complex elements, the returned type is float. See also [`real`](numpy.real#numpy.real "numpy.real"), [`angle`](numpy.angle#numpy.angle "numpy.angle"), [`real_if_close`](numpy.real_if_close#numpy.real_if_close "numpy.real_if_close") #### Examples >>> import numpy as np >>> a = np.array([1+2j, 3+4j, 5+6j]) >>> a.imag array([2., 4., 6.]) >>> a.imag = np.array([8, 10, 12]) >>> a array([1. +8.j, 3.+10.j, 5.+12.j]) >>> np.imag(1 + 1j) 1.0 # numpy.in1d numpy.in1d(_ar1_ , _ar2_ , _assume_unique =False_, _invert =False_, _*_ , _kind =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_arraysetops_impl.py#L762-L859) Test whether each element of a 1-D array is also present in a second array. Deprecated since version 2.0: Use [`isin`](numpy.isin#numpy.isin "numpy.isin") instead of `in1d` for new code. Returns a boolean array the same length as `ar1` that is True where an element of `ar1` is in `ar2` and False otherwise. Parameters: **ar1**(M,) array_like Input array. **ar2** array_like The values against which to test each value of `ar1`. **assume_unique** bool, optional If True, the input arrays are both assumed to be unique, which can speed up the calculation. Default is False. **invert** bool, optional If True, the values in the returned array are inverted (that is, False where an element of `ar1` is in `ar2` and True otherwise). Default is False. `np.in1d(a, b, invert=True)` is equivalent to (but is faster than) `np.invert(in1d(a, b))`. **kind**{None, ‘sort’, ‘table’}, optional The algorithm to use. 
This will not affect the final result, but will affect the speed and memory use. The default, None, will select automatically based on memory considerations. * If ‘sort’, will use a mergesort-based approach. This will have a memory usage of roughly 6 times the sum of the sizes of `ar1` and `ar2`, not accounting for size of dtypes. * If ‘table’, will use a lookup table approach similar to a counting sort. This is only available for boolean and integer arrays. This will have a memory usage of the size of `ar1` plus the max-min value of `ar2`. `assume_unique` has no effect when the ‘table’ option is used. * If None, will automatically choose ‘table’ if the required memory allocation is less than or equal to 6 times the sum of the sizes of `ar1` and `ar2`, otherwise will use ‘sort’. This is done to not use a large amount of memory by default, even though ‘table’ may be faster in most cases. Returns: **in1d**(M,) ndarray, bool The values `ar1[in1d]` are in `ar2`. See also [`isin`](numpy.isin#numpy.isin "numpy.isin") Version of this function that preserves the shape of ar1. #### Notes `in1d` can be considered as an element-wise function version of the python keyword `in`, for 1-D sequences. `in1d(a, b)` is roughly equivalent to `np.array([item in b for item in a])`. However, this idea fails if `ar2` is a set, or similar (non-sequence) container: As `ar2` is converted to an array, in those cases `asarray(ar2)` is an object array rather than the expected array of contained values. Using `kind='table'` tends to be faster than `kind='sort'` if the following relationship is true: `log10(len(ar2)) > (log10(max(ar2)-min(ar2)) - 2.27) / 0.927`, but may use greater memory. The default value for `kind` will be automatically selected based only on memory usage, so one may manually set `kind='table'` if memory constraints can be relaxed. 
#### Examples >>> import numpy as np >>> test = np.array([0, 1, 2, 5, 0]) >>> states = [0, 2] >>> mask = np.in1d(test, states) >>> mask array([ True, False, True, False, True]) >>> test[mask] array([0, 2, 0]) >>> mask = np.in1d(test, states, invert=True) >>> mask array([False, True, False, True, False]) >>> test[mask] array([1, 5]) # numpy.indices numpy.indices(_dimensions_ , _dtype=<class 'int'>_, _sparse=False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L1745-L1844) Return an array representing the indices of a grid. Compute an array where the subarrays contain index values 0, 1, … varying only along the corresponding axis. Parameters: **dimensions** sequence of ints The shape of the grid. **dtype** dtype, optional Data type of the result. **sparse** boolean, optional Return a sparse representation of the grid instead of a dense representation. Default is False. Returns: **grid** one ndarray or tuple of ndarrays If sparse is False: Returns one array of grid indices, `grid.shape = (len(dimensions),) + tuple(dimensions)`. If sparse is True: Returns a tuple of arrays, with `grid[i].shape = (1, ..., 1, dimensions[i], 1, ..., 1)` with dimensions[i] in the ith place. See also [`mgrid`](numpy.mgrid#numpy.mgrid "numpy.mgrid"), [`ogrid`](numpy.ogrid#numpy.ogrid "numpy.ogrid"), [`meshgrid`](numpy.meshgrid#numpy.meshgrid "numpy.meshgrid") #### Notes The output shape in the dense case is obtained by prepending the number of dimensions in front of the tuple of dimensions, i.e. if `dimensions` is a tuple `(r0, ..., rN-1)` of length `N`, the output shape is `(N, r0, ..., rN-1)`. Each subarray `grid[k]` contains the N-D array of indices along the `k`-th axis. Explicitly: grid[k, i0, i1, ..., iN-1] = ik #### Examples >>> import numpy as np >>> grid = np.indices((2, 3)) >>> grid.shape (2, 2, 3) >>> grid[0] # row indices array([[0, 0, 0], [1, 1, 1]]) >>> grid[1] # column indices array([[0, 1, 2], [0, 1, 2]]) The indices can be used as an index into an array. 
>>> x = np.arange(20).reshape(5, 4) >>> row, col = np.indices((2, 3)) >>> x[row, col] array([[0, 1, 2], [4, 5, 6]]) Note that it would be more straightforward in the above example to extract the required elements directly with `x[:2, :3]`. If sparse is set to true, the grid will be returned in a sparse representation. >>> i, j = np.indices((2, 3), sparse=True) >>> i.shape (2, 1) >>> j.shape (1, 3) >>> i # row indices array([[0], [1]]) >>> j # column indices array([[0, 1, 2]]) # numpy.info numpy.info(_object =None_, _maxwidth =76_, _output =None_, _toplevel ='numpy'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_utils_impl.py#L413-L573) Get help information for an array, function, class, or module. Parameters: **object** object or str, optional Input object or name to get information about. If `object` is an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") instance, information about the array is printed. If `object` is a numpy object, its docstring is given. If it is a string, available modules are searched for matching objects. If None, information about `info` itself is returned. **maxwidth** int, optional Printing width. **output** file like object, optional File like object that the output is written to, default is `None`, in which case `sys.stdout` will be used. The object has to be opened in ‘w’ or ‘a’ mode. **toplevel** str, optional Start search at this level. #### Notes When used interactively with an object, `np.info(obj)` is equivalent to `help(obj)` on the Python prompt or `obj?` on the IPython prompt. #### Examples >>> np.info(np.polyval) polyval(p, x) Evaluate the polynomial p at x. ... When using a string for `object` it is possible to get multiple results. >>> np.info('fft') *** Found in numpy *** Core FFT routines ... *** Found in numpy.fft *** fft(a, n=None, axis=-1) ... *** Repeat reference found in numpy.fft.fftpack *** *** Total of 3 references found. 
*** When the argument is an array, information about the array is printed. >>> a = np.array([[1 + 2j, 3, -4], [-5j, 6, 0]], dtype=np.complex64) >>> np.info(a) class: ndarray shape: (2, 3) strides: (24, 8) itemsize: 8 aligned: True contiguous: True fortran: False data pointer: 0x562b6e0d2860 # may vary byteorder: little byteswap: False type: complex64 # numpy.inner numpy.inner(_a_ , _b_ , _/_) Inner product of two arrays. Ordinary inner product of vectors for 1-D arrays (without complex conjugation), in higher dimensions a sum product over the last axes. Parameters: **a, b** array_like If `a` and `b` are nonscalar, their last dimensions must match. Returns: **out** ndarray If `a` and `b` are both scalars or both 1-D arrays then a scalar is returned; otherwise an array is returned. `out.shape = (*a.shape[:-1], *b.shape[:-1])` Raises: ValueError If both `a` and `b` are nonscalar and their last dimensions have different sizes. See also [`tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot") Sum products over arbitrary axes. [`dot`](numpy.dot#numpy.dot "numpy.dot") Generalised matrix product, using second last dimension of `b`. [`vecdot`](numpy.vecdot#numpy.vecdot "numpy.vecdot") Vector dot product of two arrays. [`einsum`](numpy.einsum#numpy.einsum "numpy.einsum") Einstein summation convention. 
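The output-shape rule stated above, `out.shape = (*a.shape[:-1], *b.shape[:-1])`, can be verified on a small example (a sketch with arbitrary shapes):

```python
import numpy as np

a = np.ones((2, 3, 4))
b = np.ones((5, 4))

# The last dimensions match (both 4), so inner sums over them,
# leaving the leading dimensions of a and b concatenated.
out = np.inner(a, b)
assert out.shape == (2, 3, 5)  # (*a.shape[:-1], *b.shape[:-1])

# Consistent with the tensordot formulation in the Notes
assert np.allclose(out, np.tensordot(a, b, axes=(-1, -1)))
```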
#### Notes For vectors (1-D arrays) it computes the ordinary inner-product: np.inner(a, b) = sum(a[:]*b[:]) More generally, if `ndim(a) = r > 0` and `ndim(b) = s > 0`: np.inner(a, b) = np.tensordot(a, b, axes=(-1,-1)) or explicitly: np.inner(a, b)[i0,...,ir-2,j0,...,js-2] = sum(a[i0,...,ir-2,:]*b[j0,...,js-2,:]) In addition `a` or `b` may be scalars, in which case: np.inner(a,b) = a*b #### Examples Ordinary inner product for vectors: >>> import numpy as np >>> a = np.array([1,2,3]) >>> b = np.array([0,1,0]) >>> np.inner(a, b) 2 Some multidimensional examples: >>> a = np.arange(24).reshape((2,3,4)) >>> b = np.arange(4) >>> c = np.inner(a, b) >>> c.shape (2, 3) >>> c array([[ 14, 38, 62], [ 86, 110, 134]]) >>> a = np.arange(2).reshape((1,1,2)) >>> b = np.arange(6).reshape((3,2)) >>> c = np.inner(a, b) >>> c.shape (1, 1, 3) >>> c array([[[1, 3, 5]]]) An example where `b` is a scalar: >>> np.inner(np.eye(2), 7) array([[7., 0.], [0., 7.]]) # numpy.insert numpy.insert(_arr_ , _obj_ , _values_ , _axis =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L5456-L5637) Insert values along the given axis before the given indices. Parameters: **arr** array_like Input array. **obj** slice, int, array-like of ints or bools Object that defines the index or indices before which `values` is inserted. Changed in version 2.1.2: Boolean indices are now treated as a mask of elements to insert, rather than being cast to the integers 0 and 1. Support for multiple insertions when `obj` is a single scalar or a sequence with one element (similar to calling insert multiple times). **values** array_like Values to insert into `arr`. If the type of `values` is different from that of `arr`, `values` is converted to the type of `arr`. `values` should be shaped so that `arr[...,obj,...] = values` is legal. **axis** int, optional Axis along which to insert `values`. If `axis` is None then `arr` is flattened first. 
Returns: **out** ndarray A copy of `arr` with `values` inserted. Note that `insert` does not occur in-place: a new array is returned. If `axis` is None, `out` is a flattened array. See also [`append`](numpy.append#numpy.append "numpy.append") Append elements at the end of an array. [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`delete`](numpy.delete#numpy.delete "numpy.delete") Delete elements from an array. #### Notes Note that for higher dimensional inserts `obj=0` behaves very differently from `obj=[0]`, just as `arr[:,0,:] = values` is different from `arr[:,[0],:] = values`. This is because of the difference between basic and advanced [indexing](../../user/basics.indexing#basics-indexing). #### Examples >>> import numpy as np >>> a = np.arange(6).reshape(3, 2) >>> a array([[0, 1], [2, 3], [4, 5]]) >>> np.insert(a, 1, 6) array([0, 6, 1, 2, 3, 4, 5]) >>> np.insert(a, 1, 6, axis=1) array([[0, 6, 1], [2, 6, 3], [4, 6, 5]]) Difference between sequence and scalars, showing how `obj=[1]` behaves differently from `obj=1`: >>> np.insert(a, [1], [[7],[8],[9]], axis=1) array([[0, 7, 1], [2, 8, 3], [4, 9, 5]]) >>> np.insert(a, 1, [[7],[8],[9]], axis=1) array([[0, 7, 8, 9, 1], [2, 7, 8, 9, 3], [4, 7, 8, 9, 5]]) >>> np.array_equal(np.insert(a, 1, [7, 8, 9], axis=1), ... 
np.insert(a, [1], [[7],[8],[9]], axis=1)) True >>> b = a.flatten() >>> b array([0, 1, 2, 3, 4, 5]) >>> np.insert(b, [2, 2], [6, 7]) array([0, 1, 6, 7, 2, 3, 4, 5]) >>> np.insert(b, slice(2, 4), [7, 8]) array([0, 1, 7, 2, 8, 3, 4, 5]) >>> np.insert(b, [2, 2], [7.13, False]) # type casting array([0, 1, 7, 0, 2, 3, 4, 5]) >>> x = np.arange(8).reshape(2, 4) >>> idx = (1, 3) >>> np.insert(x, idx, 999, axis=1) array([[ 0, 999, 1, 2, 999, 3], [ 4, 999, 5, 6, 999, 7]]) # numpy.interp numpy.interp(_x_ , _xp_ , _fp_ , _left =None_, _right =None_, _period =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L1505-L1642) One-dimensional linear interpolation for monotonically increasing sample points. Returns the one-dimensional piecewise linear interpolant to a function with given discrete data points (`xp`, `fp`), evaluated at `x`. Parameters: **x** array_like The x-coordinates at which to evaluate the interpolated values. **xp** 1-D sequence of floats The x-coordinates of the data points, must be increasing if argument `period` is not specified. Otherwise, `xp` is internally sorted after normalizing the periodic boundaries with `xp = xp % period`. **fp** 1-D sequence of float or complex The y-coordinates of the data points, same length as `xp`. **left** optional float or complex corresponding to fp Value to return for `x < xp[0]`, default is `fp[0]`. **right** optional float or complex corresponding to fp Value to return for `x > xp[-1]`, default is `fp[-1]`. **period** None or float, optional A period for the x-coordinates. This parameter allows the proper interpolation of angular x-coordinates. Parameters `left` and `right` are ignored if `period` is specified. Returns: **y** float or complex (corresponding to fp) or ndarray The interpolated values, same shape as `x`. 
Raises: ValueError If `xp` and `fp` have different lengths, if `xp` or `fp` are not 1-D sequences, or if `period == 0`. Warning The x-coordinate sequence is expected to be increasing, but this is not explicitly enforced. However, if the sequence `xp` is non-increasing, interpolation results are meaningless. Note that, since NaN is unsortable, `xp` also cannot contain NaNs. A simple check for `xp` being strictly increasing is: np.all(np.diff(xp) > 0) See also [`scipy.interpolate`](https://docs.scipy.org/doc/scipy/reference/interpolate.html#module-scipy.interpolate "\(in SciPy v1.14.1\)") #### Examples >>> import numpy as np >>> xp = [1, 2, 3] >>> fp = [3, 2, 0] >>> np.interp(2.5, xp, fp) 1.0 >>> np.interp([0, 1, 1.5, 2.72, 3.14], xp, fp) array([3. , 3. , 2.5 , 0.56, 0. ]) >>> UNDEF = -99.0 >>> np.interp(3.14, xp, fp, right=UNDEF) -99.0 Plot an interpolant to the sine function: >>> x = np.linspace(0, 2*np.pi, 10) >>> y = np.sin(x) >>> xvals = np.linspace(0, 2*np.pi, 50) >>> yinterp = np.interp(xvals, x, y) >>> import matplotlib.pyplot as plt >>> plt.plot(x, y, 'o') [<matplotlib.lines.Line2D object at 0x...>] >>> plt.plot(xvals, yinterp, '-x') [<matplotlib.lines.Line2D object at 0x...>] >>> plt.show() Interpolation with periodic x-coordinates: >>> x = [-180, -170, -185, 185, -10, -5, 0, 365] >>> xp = [190, -190, 350, -350] >>> fp = [5, 10, 3, 4] >>> np.interp(x, xp, fp, period=360) array([7.5 , 5. , 8.75, 6.25, 3. , 3.25, 3.5 , 3.75]) Complex interpolation: >>> x = [1.5, 4.0] >>> xp = [2,3,5] >>> fp = [1.0j, 0, 2+3j] >>> np.interp(x, xp, fp) array([0.+1.j , 1.+1.5j]) # numpy.intersect1d numpy.intersect1d(_ar1_ , _ar2_ , _assume_unique =False_, _return_indices =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_arraysetops_impl.py#L617-L706) Find the intersection of two arrays. Return the sorted, unique values that are in both of the input arrays. Parameters: **ar1, ar2** array_like Input arrays. Will be flattened if not already 1D. 
**assume_unique** bool If True, the input arrays are both assumed to be unique, which can speed up the calculation. If True but `ar1` or `ar2` are not unique, incorrect results and out-of-bounds indices could result. Default is False. **return_indices** bool If True, the indices which correspond to the intersection of the two arrays are returned. The first instance of a value is used if there are multiple. Default is False. Returns: **intersect1d** ndarray Sorted 1D array of common and unique elements. **comm1** ndarray The indices of the first occurrences of the common values in `ar1`. Only provided if `return_indices` is True. **comm2** ndarray The indices of the first occurrences of the common values in `ar2`. Only provided if `return_indices` is True. #### Examples >>> import numpy as np >>> np.intersect1d([1, 3, 4, 3], [3, 1, 2, 1]) array([1, 3]) To intersect more than two arrays, use functools.reduce: >>> from functools import reduce >>> reduce(np.intersect1d, ([1, 3, 4, 3], [3, 1, 2, 1], [6, 3, 4, 2])) array([3]) To return the indices of the values common to the input arrays along with the intersected values: >>> x = np.array([1, 1, 2, 3, 4]) >>> y = np.array([2, 1, 4, 6]) >>> xy, x_ind, y_ind = np.intersect1d(x, y, return_indices=True) >>> x_ind, y_ind (array([0, 2, 4]), array([1, 0, 2])) >>> xy, x[x_ind], y[y_ind] (array([1, 2, 4]), array([1, 2, 4]), array([1, 2, 4])) # numpy.invert numpy.invert(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) _= <ufunc 'invert'>_ Compute bit-wise inversion, or bit-wise NOT, element-wise. Computes the bit-wise NOT of the underlying binary representation of the integers in the input arrays. This ufunc implements the C/Python operator `~`. For signed integer inputs, the bit-wise NOT is computed on the two’s-complement representation, so the result equals `-(x + 1)`. 
In a two’s-complement system, this operation effectively flips all the bits, resulting in a representation that corresponds to the negative of the input plus one. This is the most common method of representing signed integers on computers [1]. An N-bit two’s-complement system can represent every integer in the range \\(-2^{N-1}\\) to \\(+2^{N-1}-1\\). Parameters: **x** array_like Only integer and boolean types are handled. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **out** ndarray or scalar Result. This is a scalar if `x` is a scalar. See also [`bitwise_and`](numpy.bitwise_and#numpy.bitwise_and "numpy.bitwise_and"), [`bitwise_or`](numpy.bitwise_or#numpy.bitwise_or "numpy.bitwise_or"), [`bitwise_xor`](numpy.bitwise_xor#numpy.bitwise_xor "numpy.bitwise_xor") [`logical_not`](numpy.logical_not#numpy.logical_not "numpy.logical_not") [`binary_repr`](numpy.binary_repr#numpy.binary_repr "numpy.binary_repr") Return the binary representation of the input number as a string. 
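For signed integer types, the two's-complement behaviour described above amounts to the identity `~x == -(x + 1)`, which can be checked element-wise:

```python
import numpy as np

x = np.array([0, 1, 13, -14], dtype=np.int8)

# In two's complement, flipping all bits of x yields -(x + 1)
assert np.array_equal(np.invert(x), -x - 1)
assert np.invert(np.int8(13)) == np.int8(-14)
```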
#### Notes `numpy.bitwise_not` is an alias for `invert`: >>> np.bitwise_not is np.invert True #### References [1] Wikipedia, “Two’s complement”, [https://en.wikipedia.org/wiki/Two’s_complement](https://en.wikipedia.org/wiki/Two's_complement) #### Examples >>> import numpy as np We’ve seen that 13 is represented by `00001101`. The invert or bit-wise NOT of 13 is then: >>> x = np.invert(np.array(13, dtype=np.uint8)) >>> x np.uint8(242) >>> np.binary_repr(x, width=8) '11110010' The result depends on the bit-width: >>> x = np.invert(np.array(13, dtype=np.uint16)) >>> x np.uint16(65522) >>> np.binary_repr(x, width=16) '1111111111110010' When using signed integer types, the result is the bit-wise NOT of the unsigned type, interpreted as a signed integer: >>> np.invert(np.array([13], dtype=np.int8)) array([-14], dtype=int8) >>> np.binary_repr(-14, width=8) '11110010' Booleans are accepted as well: >>> np.invert(np.array([True, False])) array([False, True]) The `~` operator can be used as a shorthand for `np.invert` on ndarrays. >>> x1 = np.array([True, False]) >>> ~x1 array([False, True]) # numpy.is_busday numpy.is_busday(_dates_ , _weekmask ='1111100'_, _holidays =None_, _busdaycal =None_, _out =None_) Calculates which of the given dates are valid days, and which are not. Parameters: **dates** array_like of datetime64[D] The array of dates to process. **weekmask** str or array_like of bool, optional A seven-element array indicating which of Monday through Sunday are valid days. May be specified as a length-seven list or array, like [1,1,1,1,1,0,0]; a length-seven string, like ‘1111100’; or a string like “Mon Tue Wed Thu Fri”, made up of 3-character abbreviations for weekdays, optionally separated by white space. Valid abbreviations are: Mon Tue Wed Thu Fri Sat Sun **holidays** array_like of datetime64[D], optional An array of dates to consider as invalid dates. They may be specified in any order, and NaT (not-a-time) dates are ignored. 
This list is saved in a normalized form that is suited for fast calculations of valid days. **busdaycal** busdaycalendar, optional A [`busdaycalendar`](numpy.busdaycalendar#numpy.busdaycalendar "numpy.busdaycalendar") object which specifies the valid days. If this parameter is provided, neither weekmask nor holidays may be provided. **out** array of bool, optional If provided, this array is filled with the result. Returns: **out** array of bool An array with the same shape as `dates`, containing True for each valid day, and False for each invalid day. See also [`busdaycalendar`](numpy.busdaycalendar#numpy.busdaycalendar "numpy.busdaycalendar") An object that specifies a custom set of valid days. [`busday_offset`](numpy.busday_offset#numpy.busday_offset "numpy.busday_offset") Applies an offset counted in valid days. [`busday_count`](numpy.busday_count#numpy.busday_count "numpy.busday_count") Counts how many valid days are in a half-open date range. #### Examples >>> import numpy as np >>> # The weekdays are Friday, Saturday, and Monday ... np.is_busday(['2011-07-01', '2011-07-02', '2011-07-18'], ... holidays=['2011-07-01', '2011-07-04', '2011-07-17']) array([False, False, True]) # numpy.isclose numpy.isclose(_a_ , _b_ , _rtol =1e-05_, _atol =1e-08_, _equal_nan =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L2337-L2453) Returns a boolean array where two arrays are element-wise equal within a tolerance. The tolerance values are positive, typically very small numbers. The relative difference (`rtol` * abs(`b`)) and the absolute difference `atol` are added together to compare against the absolute difference between `a` and `b`. Warning The default `atol` is not appropriate for comparing numbers with magnitudes much smaller than one (see Notes). Parameters: **a, b** array_like Input arrays to compare. **rtol** array_like The relative tolerance parameter (see Notes). **atol** array_like The absolute tolerance parameter (see Notes). 
**equal_nan** bool Whether to compare NaN’s as equal. If True, NaN’s in `a` will be considered equal to NaN’s in `b` in the output array. Returns: **y** array_like Returns a boolean array indicating where `a` and `b` are equal within the given tolerance. If both `a` and `b` are scalars, returns a single boolean value. See also [`allclose`](numpy.allclose#numpy.allclose "numpy.allclose") [`math.isclose`](https://docs.python.org/3/library/math.html#math.isclose "\(in Python v3.13\)") #### Notes For finite values, isclose uses the following equation to test whether two floating point values are equivalent: absolute(a - b) <= (atol + rtol * absolute(b)) Unlike the built-in [`math.isclose`](https://docs.python.org/3/library/math.html#math.isclose "\(in Python v3.13\)"), the above equation is not symmetric in `a` and `b` – it assumes `b` is the reference value – so that `isclose(a, b)` might be different from `isclose(b, a)`. The default value of `atol` is not appropriate when the reference value `b` has magnitude smaller than one. For example, it is unlikely that `a = 1e-9` and `b = 2e-9` should be considered “close”, yet `isclose(1e-9, 2e-9)` is `True` with default settings. Be sure to select `atol` for the use case at hand, especially for defining the threshold below which a non-zero value in `a` will be considered “close” to a very small or zero value in `b`. `isclose` is not defined for non-numeric data types. [`bool`](../arrays.scalars#numpy.bool "numpy.bool") is considered a numeric data-type for this purpose.
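Both pitfalls described in these notes are easy to reproduce. A minimal sketch (the large `rtol` is exaggerated purely to make the asymmetry visible):

```python
import numpy as np

# Asymmetry: the tolerance is scaled by |b|, the reference value,
# so swapping the arguments can change the answer.
print(np.isclose(1.0, 2.0, rtol=0.6, atol=0.0))  # True:  |1-2| <= 0.6*|2|
print(np.isclose(2.0, 1.0, rtol=0.6, atol=0.0))  # False: |2-1| >  0.6*|1|

# Default atol dominates for tiny magnitudes: these values differ by a factor of 2.
print(np.isclose(1e-9, 2e-9))            # True, because atol=1e-8 swamps both values
print(np.isclose(1e-9, 2e-9, atol=0.0))  # False once atol is removed
```

Passing `atol=0.0` turns the comparison into a purely relative one, which is usually the right choice when the reference values can be much smaller than one.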
#### Examples >>> import numpy as np >>> np.isclose([1e10,1e-7], [1.00001e10,1e-8]) array([ True, False]) >>> np.isclose([1e10,1e-8], [1.00001e10,1e-9]) array([ True, True]) >>> np.isclose([1e10,1e-8], [1.0001e10,1e-9]) array([False, True]) >>> np.isclose([1.0, np.nan], [1.0, np.nan]) array([ True, False]) >>> np.isclose([1.0, np.nan], [1.0, np.nan], equal_nan=True) array([ True, True]) >>> np.isclose([1e-8, 1e-7], [0.0, 0.0]) array([ True, False]) >>> np.isclose([1e-100, 1e-7], [0.0, 0.0], atol=0.0) array([False, False]) >>> np.isclose([1e-10, 1e-10], [1e-20, 0.0]) array([ True, True]) >>> np.isclose([1e-10, 1e-10], [1e-20, 0.999999e-10], atol=0.0) array([False, True]) # numpy.iscomplex numpy.iscomplex(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_type_check_impl.py#L175-L210) Returns a bool array with True where the input element is complex. What is tested is whether the input has a non-zero imaginary part, not if the input type is complex. Parameters: **x** array_like Input array. Returns: **out** ndarray of bools Output array. See also [`isreal`](numpy.isreal#numpy.isreal "numpy.isreal") [`iscomplexobj`](numpy.iscomplexobj#numpy.iscomplexobj "numpy.iscomplexobj") Return True if x is a complex type or an array of complex numbers. #### Examples >>> import numpy as np >>> np.iscomplex([1+1j, 1+0j, 4.5, 3, 2, 2j]) array([ True, False, False, False, False, True]) # numpy.iscomplexobj numpy.iscomplexobj(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_type_check_impl.py#L270-L309) Check for a complex type or an array of complex numbers. The type of the input is checked, not the value. Even if the input has an imaginary part equal to zero, `iscomplexobj` evaluates to True. Parameters: **x** any The input can be of any type and shape. Returns: **iscomplexobj** bool The return value, True if `x` is of a complex type or has at least one complex element.
See also [`isrealobj`](numpy.isrealobj#numpy.isrealobj "numpy.isrealobj"), [`iscomplex`](numpy.iscomplex#numpy.iscomplex "numpy.iscomplex") #### Examples >>> import numpy as np >>> np.iscomplexobj(1) False >>> np.iscomplexobj(1+0j) True >>> np.iscomplexobj([3, 1+0j, True]) True # numpy.isdtype numpy.isdtype(_dtype_ , _kind_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numerictypes.py#L381-L468) Determine if a provided dtype is of a specified data type `kind`. This function only supports NumPy’s built-in data types. Third-party dtypes are not yet supported. Parameters: **dtype** dtype The input dtype. **kind** dtype or str or tuple of dtypes/strs. dtype or dtype kind. Allowed dtype kinds are: * `'bool'` : boolean kind * `'signed integer'` : signed integer data types * `'unsigned integer'` : unsigned integer data types * `'integral'` : integer data types * `'real floating'` : real-valued floating-point data types * `'complex floating'` : complex floating-point data types * `'numeric'` : numeric data types Returns: **out** bool See also [`issubdtype`](numpy.issubdtype#numpy.issubdtype "numpy.issubdtype") #### Examples >>> import numpy as np >>> np.isdtype(np.float32, np.float64) False >>> np.isdtype(np.float32, "real floating") True >>> np.isdtype(np.complex128, ("real floating", "complex floating")) True # numpy.isfinite numpy.isfinite(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = <ufunc 'isfinite'> Test element-wise for finiteness (not infinity and not Not a Number). The result is returned as a boolean array. Parameters: **x** array_like Input values. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
**where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray, bool True where `x` is not positive infinity, negative infinity, or NaN; false otherwise. This is a scalar if `x` is a scalar. See also [`isinf`](numpy.isinf#numpy.isinf "numpy.isinf"), [`isneginf`](numpy.isneginf#numpy.isneginf "numpy.isneginf"), [`isposinf`](numpy.isposinf#numpy.isposinf "numpy.isposinf"), [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan") #### Notes Not a Number, positive infinity and negative infinity are considered to be non-finite. NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity; likewise, positive infinity is not equivalent to negative infinity, but infinity is equivalent to positive infinity. Errors result if the second argument is also supplied when `x` is a scalar input, or if the first and second arguments have different shapes. #### Examples >>> import numpy as np >>> np.isfinite(1) True >>> np.isfinite(0) True >>> np.isfinite(np.nan) False >>> np.isfinite(np.inf) False >>> np.isfinite(-np.inf) False >>> np.isfinite([np.log(-1.),1.,np.log(0)]) array([False, True, False]) >>> x = np.array([-np.inf, 0., np.inf]) >>> y = np.array([2, 2, 2]) >>> np.isfinite(x, y) array([0, 1, 0]) >>> y array([0, 1, 0]) # numpy.isfortran numpy.isfortran(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L520-L584) Check if the array is Fortran contiguous but _not_ C contiguous. This function is obsolete.
If you only want to check if an array is Fortran contiguous, use `a.flags.f_contiguous` instead. Parameters: **a** ndarray Input array. Returns: **isfortran** bool Returns True if the array is Fortran contiguous but _not_ C contiguous. #### Examples np.array allows one to specify whether the array is written in C-contiguous order (last index varies the fastest), or FORTRAN-contiguous order in memory (first index varies the fastest). >>> import numpy as np >>> a = np.array([[1, 2, 3], [4, 5, 6]], order='C') >>> a array([[1, 2, 3], [4, 5, 6]]) >>> np.isfortran(a) False >>> b = np.array([[1, 2, 3], [4, 5, 6]], order='F') >>> b array([[1, 2, 3], [4, 5, 6]]) >>> np.isfortran(b) True The transpose of a C-ordered array is a FORTRAN-ordered array. >>> a = np.array([[1, 2, 3], [4, 5, 6]], order='C') >>> a array([[1, 2, 3], [4, 5, 6]]) >>> np.isfortran(a) False >>> b = a.T >>> b array([[1, 4], [2, 5], [3, 6]]) >>> np.isfortran(b) True C-ordered arrays evaluate as False even if they are also FORTRAN-ordered. >>> np.isfortran(np.array([1, 2], order='F')) False # numpy.isin numpy.isin(_element_ , _test_elements_ , _assume_unique =False_, _invert =False_, _*_ , _kind =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_arraysetops_impl.py#L1015-L1133) Calculates `element in test_elements`, broadcasting over `element` only. Returns a boolean array of the same shape as `element` that is True where an element of `element` is in `test_elements` and False otherwise. Parameters: **element** array_like Input array. **test_elements** array_like The values against which to test each value of `element`. This argument is flattened if it is an array or array_like. See notes for behavior with non-array-like parameters. **assume_unique** bool, optional If True, the input arrays are both assumed to be unique, which can speed up the calculation. Default is False.
**invert** bool, optional If True, the values in the returned array are inverted, as if calculating `element not in test_elements`. Default is False. `np.isin(a, b, invert=True)` is equivalent to (but faster than) `np.invert(np.isin(a, b))`. **kind**{None, ‘sort’, ‘table’}, optional The algorithm to use. This will not affect the final result, but will affect the speed and memory use. The default, None, will select automatically based on memory considerations. * If ‘sort’, will use a mergesort-based approach. This will have a memory usage of roughly 6 times the sum of the sizes of `element` and `test_elements`, not accounting for size of dtypes. * If ‘table’, will use a lookup table approach similar to a counting sort. This is only available for boolean and integer arrays. This will have a memory usage of the size of `element` plus the max-min value of `test_elements`. `assume_unique` has no effect when the ‘table’ option is used. * If None, will automatically choose ‘table’ if the required memory allocation is less than or equal to 6 times the sum of the sizes of `element` and `test_elements`, otherwise will use ‘sort’. This is done to not use a large amount of memory by default, even though ‘table’ may be faster in most cases. If ‘table’ is chosen, `assume_unique` will have no effect. Returns: **isin** ndarray, bool Has the same shape as `element`. The values `element[isin]` are in `test_elements`. #### Notes `isin` is an element-wise function version of the python keyword `in`. `isin(a, b)` is roughly equivalent to `np.array([item in b for item in a])` if `a` and `b` are 1-D sequences. `element` and `test_elements` are converted to arrays if they are not already. If `test_elements` is a set (or other non-sequence collection) it will be converted to an object array with one element, rather than an array of the values contained in `test_elements`. 
This is a consequence of the [`array`](numpy.array#numpy.array "numpy.array") constructor’s way of handling non-sequence collections. Converting the set to a list usually gives the desired behavior. Using `kind='table'` tends to be faster than `kind='sort'` if the following relationship is true: `log10(len(test_elements)) > (log10(max(test_elements)-min(test_elements)) - 2.27) / 0.927`, but may use greater memory. The default value for `kind` will be automatically selected based only on memory usage, so one may manually set `kind='table'` if memory constraints can be relaxed. #### Examples >>> import numpy as np >>> element = 2*np.arange(4).reshape((2, 2)) >>> element array([[0, 2], [4, 6]]) >>> test_elements = [1, 2, 4, 8] >>> mask = np.isin(element, test_elements) >>> mask array([[False, True], [ True, False]]) >>> element[mask] array([2, 4]) The indices of the matched values can be obtained with [`nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero"): >>> np.nonzero(mask) (array([0, 1]), array([1, 0])) The test can also be inverted: >>> mask = np.isin(element, test_elements, invert=True) >>> mask array([[ True, False], [False, True]]) >>> element[mask] array([0, 6]) Because of how [`array`](numpy.array#numpy.array "numpy.array") handles sets, the following does not work as expected: >>> test_set = {1, 2, 4, 8} >>> np.isin(element, test_set) array([[False, False], [False, False]]) Casting the set to a list gives the expected result: >>> np.isin(element, list(test_set)) array([[False, True], [ True, False]]) # numpy.isinf numpy.isinf(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = <ufunc 'isinf'> Test element-wise for positive or negative infinity. Returns a boolean array of the same shape as `x`, True where `x == +/-inf`, otherwise False. Parameters: **x** array_like Input values. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored.
If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** bool (scalar) or boolean ndarray True where `x` is positive or negative infinity, false otherwise. This is a scalar if `x` is a scalar. See also [`isneginf`](numpy.isneginf#numpy.isneginf "numpy.isneginf"), [`isposinf`](numpy.isposinf#numpy.isposinf "numpy.isposinf"), [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan"), [`isfinite`](numpy.isfinite#numpy.isfinite "numpy.isfinite") #### Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). Errors result if the second argument is supplied when the first argument is a scalar, or if the first and second arguments have different shapes. #### Examples >>> import numpy as np >>> np.isinf(np.inf) True >>> np.isinf(np.nan) False >>> np.isinf(-np.inf) True >>> np.isinf([np.inf, -np.inf, 1.0, np.nan]) array([ True, True, False, False]) >>> x = np.array([-np.inf, 0., np.inf]) >>> y = np.array([2, 2, 2]) >>> np.isinf(x, y) array([1, 0, 1]) >>> y array([1, 0, 1]) # numpy.isnan numpy.isnan(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = <ufunc 'isnan'> Test element-wise for NaN and return result as a boolean array. Parameters: **x** array_like Input array.
**out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray or bool True where `x` is NaN, false otherwise. This is a scalar if `x` is a scalar. See also [`isinf`](numpy.isinf#numpy.isinf "numpy.isinf"), [`isneginf`](numpy.isneginf#numpy.isneginf "numpy.isneginf"), [`isposinf`](numpy.isposinf#numpy.isposinf "numpy.isposinf"), [`isfinite`](numpy.isfinite#numpy.isfinite "numpy.isfinite"), [`isnat`](numpy.isnat#numpy.isnat "numpy.isnat") #### Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity. #### Examples >>> import numpy as np >>> np.isnan(np.nan) True >>> np.isnan(np.inf) False >>> np.isnan([np.log(-1.),1.,np.log(0)]) array([ True, False, False]) # numpy.isnat numpy.isnat(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = <ufunc 'isnat'> Test element-wise for NaT (not a time) and return result as a boolean array. Parameters: **x** array_like Input array with datetime or timedelta data type. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored.
If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray or bool True where `x` is NaT, false otherwise. This is a scalar if `x` is a scalar. See also [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan"), [`isinf`](numpy.isinf#numpy.isinf "numpy.isinf"), [`isneginf`](numpy.isneginf#numpy.isneginf "numpy.isneginf"), [`isposinf`](numpy.isposinf#numpy.isposinf "numpy.isposinf"), [`isfinite`](numpy.isfinite#numpy.isfinite "numpy.isfinite") #### Examples >>> import numpy as np >>> np.isnat(np.datetime64("NaT")) True >>> np.isnat(np.datetime64("2016-01-01")) False >>> np.isnat(np.array(["NaT", "2016-01-01"], dtype="datetime64[ns]")) array([ True, False]) # numpy.isneginf numpy.isneginf(_x_ , _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_ufunclike_impl.py#L140-L207) Test element-wise for negative infinity, return result as bool array. Parameters: **x** array_like The input array. **out** array_like, optional A location into which the result is stored. If provided, it must have a shape that the input broadcasts to. If not provided or None, a freshly-allocated boolean array is returned. Returns: **out** ndarray A boolean array with the same dimensions as the input. 
If second argument is not supplied then a numpy boolean array is returned with values True where the corresponding element of the input is negative infinity and values False where the element of the input is not negative infinity. If a second argument is supplied the result is stored there. If the type of that array is a numeric type the result is represented as zeros and ones, if the type is boolean then as False and True. The return value `out` is then a reference to that array. See also [`isinf`](numpy.isinf#numpy.isinf "numpy.isinf"), [`isposinf`](numpy.isposinf#numpy.isposinf "numpy.isposinf"), [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan"), [`isfinite`](numpy.isfinite#numpy.isfinite "numpy.isfinite") #### Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). Errors result if the second argument is also supplied when x is a scalar input, if first and second arguments have different shapes, or if the first argument has complex values. #### Examples >>> import numpy as np >>> np.isneginf(-np.inf) True >>> np.isneginf(np.inf) False >>> np.isneginf([-np.inf, 0., np.inf]) array([ True, False, False]) >>> x = np.array([-np.inf, 0., np.inf]) >>> y = np.array([2, 2, 2]) >>> np.isneginf(x, y) array([1, 0, 0]) >>> y array([1, 0, 0]) # numpy.isposinf numpy.isposinf(_x_ , _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_ufunclike_impl.py#L70-L137) Test element-wise for positive infinity, return result as bool array. Parameters: **x** array_like The input array. **out** array_like, optional A location into which the result is stored. If provided, it must have a shape that the input broadcasts to. If not provided or None, a freshly-allocated boolean array is returned. Returns: **out** ndarray A boolean array with the same dimensions as the input. 
If second argument is not supplied then a boolean array is returned with values True where the corresponding element of the input is positive infinity and values False where the element of the input is not positive infinity. If a second argument is supplied the result is stored there. If the type of that array is a numeric type the result is represented as zeros and ones; if the type is boolean, then as False and True. The return value `out` is then a reference to that array. See also [`isinf`](numpy.isinf#numpy.isinf "numpy.isinf"), [`isneginf`](numpy.isneginf#numpy.isneginf "numpy.isneginf"), [`isfinite`](numpy.isfinite#numpy.isfinite "numpy.isfinite"), [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan") #### Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). Errors result if the second argument is also supplied when x is a scalar input, if first and second arguments have different shapes, or if the first argument has complex values. #### Examples >>> import numpy as np >>> np.isposinf(np.inf) True >>> np.isposinf(-np.inf) False >>> np.isposinf([-np.inf, 0., np.inf]) array([False, False, True]) >>> x = np.array([-np.inf, 0., np.inf]) >>> y = np.array([2, 2, 2]) >>> np.isposinf(x, y) array([0, 0, 1]) >>> y array([0, 0, 1]) # numpy.isreal numpy.isreal(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_type_check_impl.py#L213-L267) Returns a bool array with True where the input element is real. If element has complex type with zero imaginary part, the return value for that element is True. Parameters: **x** array_like Input array. Returns: **out** ndarray, bool Boolean array of same shape as `x`. See also [`iscomplex`](numpy.iscomplex#numpy.iscomplex "numpy.iscomplex") [`isrealobj`](numpy.isrealobj#numpy.isrealobj "numpy.isrealobj") Return True if x is not a complex type.
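`isreal` inspects values, while `iscomplexobj` and `isrealobj` inspect the dtype. A quick sketch of the difference:

```python
import numpy as np

# A complex-dtype array whose first element happens to have zero imaginary part.
z = np.array([1 + 0j, 1 + 1j])

print(np.isreal(z))        # value check: [ True False]
print(np.iscomplexobj(z))  # type check:  True  (the dtype is complex)
print(np.isrealobj(z))     # type check:  False
```

This is why `isreal` can return True for individual elements of an array that `iscomplexobj` classifies as complex.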
#### Notes `isreal` may behave unexpectedly for string or object arrays (see examples). #### Examples >>> import numpy as np >>> a = np.array([1+1j, 1+0j, 4.5, 3, 2, 2j], dtype=complex) >>> np.isreal(a) array([False, True, True, True, True, False]) The function does not work on string arrays. >>> a = np.array([2j, "a"], dtype="U") >>> np.isreal(a) # Warns about non-elementwise comparison False Returns True for all elements in input array of `dtype=object` even if any of the elements is complex. >>> a = np.array([1, "2", 3+4j], dtype=object) >>> np.isreal(a) array([ True, True, True]) `isreal` should not be used with object arrays. >>> a = np.array([1+2j, 2+1j], dtype=object) >>> np.isreal(a) array([ True, True]) # numpy.isrealobj numpy.isrealobj(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_type_check_impl.py#L312-L359) Return True if x is neither a complex type nor an array of complex numbers. The type of the input is checked, not the value. So even if the input has an imaginary part equal to zero, `isrealobj` evaluates to False if the data type is complex. Parameters: **x** any The input can be of any type and shape. Returns: **y** bool The return value, False if `x` is of a complex type. See also [`iscomplexobj`](numpy.iscomplexobj#numpy.iscomplexobj "numpy.iscomplexobj"), [`isreal`](numpy.isreal#numpy.isreal "numpy.isreal") #### Notes The function is only meant for arrays with numerical values but it accepts all other objects. Since it assumes array input, the return value of other objects may be True. >>> np.isrealobj('A string') True >>> np.isrealobj(False) True >>> np.isrealobj(None) True #### Examples >>> import numpy as np >>> np.isrealobj(1) True >>> np.isrealobj(1+0j) False >>> np.isrealobj([3, 1+0j, True]) False # numpy.isscalar numpy.isscalar(_element_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L1927-L2011) Returns True if the type of `element` is a scalar type.
Parameters: **element** any Input argument, can be of any type and shape. Returns: **val** bool True if `element` is a scalar type, False if it is not. See also [`ndim`](numpy.ndim#numpy.ndim "numpy.ndim") Get the number of dimensions of an array #### Notes If you need a stricter way to identify a _numerical_ scalar, use `isinstance(x, numbers.Number)`, as that returns `False` for most non- numerical elements such as strings. In most cases `np.ndim(x) == 0` should be used instead of this function, as that will also return true for 0d arrays. This is how numpy overloads functions in the style of the `dx` arguments to [`gradient`](numpy.gradient#numpy.gradient "numpy.gradient") and the `bins` argument to [`histogram`](numpy.histogram#numpy.histogram "numpy.histogram"). Some key differences: x | `isscalar(x)` | `np.ndim(x) == 0` ---|---|--- PEP 3141 numeric objects (including builtins) | `True` | `True` builtin string and buffer objects | `True` | `True` other builtin objects, like [`pathlib.Path`](https://docs.python.org/3/library/pathlib.html#pathlib.Path "\(in Python v3.13\)"), `Exception`, the result of [`re.compile`](https://docs.python.org/3/library/re.html#re.compile "\(in Python v3.13\)") | `False` | `True` third-party objects like [`matplotlib.figure.Figure`](https://matplotlib.org/stable/api/_as_gen/matplotlib.figure.Figure.html#matplotlib.figure.Figure "\(in Matplotlib v3.9.3\)") | `False` | `True` zero-dimensional numpy arrays | `False` | `True` other numpy arrays | `False` | `False` `list`, `tuple`, and other sequence objects | `False` | `False` #### Examples >>> import numpy as np >>> np.isscalar(3.1) True >>> np.isscalar(np.array(3.1)) False >>> np.isscalar([3.1]) False >>> np.isscalar(False) True >>> np.isscalar('numpy') True NumPy supports PEP 3141 numbers: >>> from fractions import Fraction >>> np.isscalar(Fraction(5, 17)) True >>> from numbers import Number >>> np.isscalar(Number()) True # numpy.issubdtype numpy.issubdtype(_arg1_ , 
_arg2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numerictypes.py#L471-L534) Returns True if first argument is a typecode lower/equal in type hierarchy. This is like the builtin [`issubclass`](https://docs.python.org/3/library/functions.html#issubclass "\(in Python v3.13\)"), but for [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype")s. Parameters: **arg1, arg2** dtype_like [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") or object coercible to one Returns: **out** bool See also [Scalars](../arrays.scalars#arrays-scalars) Overview of the numpy type hierarchy. #### Examples `issubdtype` can be used to check the type of arrays: >>> ints = np.array([1, 2, 3], dtype=np.int32) >>> np.issubdtype(ints.dtype, np.integer) True >>> np.issubdtype(ints.dtype, np.floating) False >>> floats = np.array([1, 2, 3], dtype=np.float32) >>> np.issubdtype(floats.dtype, np.integer) False >>> np.issubdtype(floats.dtype, np.floating) True Similar types of different sizes are not subdtypes of each other: >>> np.issubdtype(np.float64, np.float32) False >>> np.issubdtype(np.float32, np.float64) False but both are subtypes of [`floating`](../arrays.scalars#numpy.floating "numpy.floating"): >>> np.issubdtype(np.float64, np.floating) True >>> np.issubdtype(np.float32, np.floating) True For convenience, dtype-like objects are allowed too: >>> np.issubdtype('S1', np.bytes_) True >>> np.issubdtype('i4', np.signedinteger) True # numpy.iterable numpy.iterable(_y_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L339-L382) Check whether or not an object can be iterated over. Parameters: **y** object Input object. Returns: **b** bool Return `True` if the object has an iterator method or is a sequence and `False` otherwise. #### Notes In most cases, the results of `np.iterable(obj)` are consistent with `isinstance(obj, collections.abc.Iterable)`. 
One notable exception is the treatment of 0-dimensional arrays: >>> from collections.abc import Iterable >>> a = np.array(1.0) # 0-dimensional numpy array >>> isinstance(a, Iterable) True >>> np.iterable(a) False #### Examples >>> import numpy as np >>> np.iterable([1, 2, 3]) True >>> np.iterable(2) False # numpy.ix_ numpy.ix_(_* args_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_index_tricks_impl.py#L34-L107) Construct an open mesh from multiple sequences. This function takes N 1-D sequences and returns N outputs with N dimensions each, such that the shape is 1 in all but one dimension and the dimension with the non-unit shape value cycles through all N dimensions. Using `ix_` one can quickly construct index arrays that will index the cross product. `a[np.ix_([1,3],[2,5])]` returns the array `[[a[1,2] a[1,5]], [a[3,2] a[3,5]]]`. Parameters: **args** 1-D sequences Each sequence should be of integer or boolean type. Boolean sequences will be interpreted as boolean masks for the corresponding dimension (equivalent to passing in `np.nonzero(boolean_sequence)`). Returns: **out** tuple of ndarrays N arrays with N dimensions each, with N the number of input sequences. Together these arrays form an open mesh. 
See also [`ogrid`](numpy.ogrid#numpy.ogrid "numpy.ogrid"), [`mgrid`](numpy.mgrid#numpy.mgrid "numpy.mgrid"), [`meshgrid`](numpy.meshgrid#numpy.meshgrid "numpy.meshgrid") #### Examples >>> import numpy as np >>> a = np.arange(10).reshape(2, 5) >>> a array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]) >>> ixgrid = np.ix_([0, 1], [2, 4]) >>> ixgrid (array([[0], [1]]), array([[2, 4]])) >>> ixgrid[0].shape, ixgrid[1].shape ((2, 1), (1, 2)) >>> a[ixgrid] array([[2, 4], [7, 9]]) >>> ixgrid = np.ix_([True, True], [2, 4]) >>> a[ixgrid] array([[2, 4], [7, 9]]) >>> ixgrid = np.ix_([True, True], [False, False, True, False, True]) >>> a[ixgrid] array([[2, 4], [7, 9]]) # numpy.kaiser numpy.kaiser(_M_ , _beta_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L3617-L3745) Return the Kaiser window. The Kaiser window is a taper formed by using a Bessel function. Parameters: **M** int Number of points in the output window. If zero or less, an empty array is returned. **beta** float Shape parameter for window. Returns: **out** array The window, with the maximum value normalized to one (the value one appears only if the number of samples is odd). See also [`bartlett`](numpy.bartlett#numpy.bartlett "numpy.bartlett"), [`blackman`](numpy.blackman#numpy.blackman "numpy.blackman"), [`hamming`](numpy.hamming#numpy.hamming "numpy.hamming"), [`hanning`](numpy.hanning#numpy.hanning "numpy.hanning") #### Notes The Kaiser window is defined as \\[w(n) = I_0\left( \beta \sqrt{1-\frac{4n^2}{(M-1)^2}} \right)/I_0(\beta)\\] with \\[\quad -\frac{M-1}{2} \leq n \leq \frac{M-1}{2},\\] where \\(I_0\\) is the modified zeroth-order Bessel function. The Kaiser was named for Jim Kaiser, who discovered a simple approximation to the DPSS window based on Bessel functions. The Kaiser window is a very good approximation to the Digital Prolate Spheroidal Sequence, or Slepian window, which is the transform which maximizes the energy in the main lobe of the window relative to total energy. 
The Kaiser can approximate many other windows by varying the beta parameter. beta | Window shape ---|--- 0 | Rectangular 5 | Similar to a Hamming 6 | Similar to a Hanning 8.6 | Similar to a Blackman A beta value of 14 is probably a good starting point. Note that as beta gets large, the window narrows, and so the number of samples needs to be large enough to sample the increasingly narrow spike, otherwise NaNs will get returned. Most references to the Kaiser window come from the signal processing literature, where it is used as one of many windowing functions for smoothing values. It is also known as an apodization (which means “removing the foot”, i.e. smoothing discontinuities at the beginning and end of the sampled signal) or tapering function. #### References [1] J. F. Kaiser, “Digital Filters” - Ch 7 in “Systems analysis by digital computer”, Editors: F.F. Kuo and J.F. Kaiser, p 218-285. John Wiley and Sons, New York, (1966). [2] E.R. Kanasewich, “Time Sequence Analysis in Geophysics”, The University of Alberta Press, 1975, pp. 177-178. [3] Wikipedia, “Window function”, #### Examples >>> import numpy as np >>> import matplotlib.pyplot as plt >>> np.kaiser(12, 14) array([7.72686684e-06, 3.46009194e-03, 4.65200189e-02, # may vary 2.29737120e-01, 5.99885316e-01, 9.45674898e-01, 9.45674898e-01, 5.99885316e-01, 2.29737120e-01, 4.65200189e-02, 3.46009194e-03, 7.72686684e-06]) Plot the window and the frequency response. 
import matplotlib.pyplot as plt from numpy.fft import fft, fftshift window = np.kaiser(51, 14) plt.plot(window) plt.title("Kaiser window") plt.ylabel("Amplitude") plt.xlabel("Sample") plt.show() plt.figure() A = fft(window, 2048) / 25.5 mag = np.abs(fftshift(A)) freq = np.linspace(-0.5, 0.5, len(A)) response = 20 * np.log10(mag) response = np.clip(response, -100, 100) plt.plot(freq, response) plt.title("Frequency response of Kaiser window") plt.ylabel("Magnitude [dB]") plt.xlabel("Normalized frequency [cycles per sample]") plt.axis('tight') plt.show() # numpy.kron numpy.kron(_a_ , _b_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_shape_base_impl.py#L1085-L1197) Kronecker product of two arrays. Computes the Kronecker product, a composite array made of blocks of the second array scaled by the first. Parameters: **a, b** array_like Returns: **out** ndarray See also [`outer`](numpy.outer#numpy.outer "numpy.outer") The outer product #### Notes The function assumes that the number of dimensions of `a` and `b` is the same, if necessary prepending the smallest with ones. If `a.shape = (r0,r1,..,rN)` and `b.shape = (s0,s1,...,sN)`, the Kronecker product has shape `(r0*s0, r1*s1, ..., rN*sN)`. The elements are products of elements from `a` and `b`, organized explicitly by: kron(a,b)[k0,k1,...,kN] = a[i0,i1,...,iN] * b[j0,j1,...,jN] where: kt = it * st + jt, t = 0,...,N In the common 2-D case (N=1), the block structure can be visualized: [[ a[0,0]*b, a[0,1]*b, ... , a[0,-1]*b ], [ ... ... ], [ a[-1,0]*b, a[-1,1]*b, ...
, a[-1,-1]*b ]] #### Examples >>> import numpy as np >>> np.kron([1,10,100], [5,6,7]) array([ 5, 6, 7, ..., 500, 600, 700]) >>> np.kron([5,6,7], [1,10,100]) array([ 5, 50, 500, ..., 7, 70, 700]) >>> np.kron(np.eye(2), np.ones((2,2))) array([[1., 1., 0., 0.], [1., 1., 0., 0.], [0., 0., 1., 1.], [0., 0., 1., 1.]]) >>> a = np.arange(100).reshape((2,5,2,5)) >>> b = np.arange(24).reshape((2,3,4)) >>> c = np.kron(a,b) >>> c.shape (2, 10, 6, 20) >>> I = (1,3,0,2) >>> J = (0,2,1) >>> J1 = (0,) + J # extend to ndim=4 >>> S1 = (1,) + b.shape >>> K = tuple(np.array(I) * np.array(S1) + np.array(J1)) >>> c[K] == a[I]*b[J] True # numpy.lcm numpy.lcm(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'lcm'>_ Returns the lowest common multiple of `|x1|` and `|x2|`. Parameters: **x1, x2** array_like, int Arrays of values. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). Returns: **y** ndarray or scalar The lowest common multiple of the absolute value of the inputs. This is a scalar if both `x1` and `x2` are scalars. See also [`gcd`](numpy.gcd#numpy.gcd "numpy.gcd") The greatest common divisor #### Examples >>> import numpy as np >>> np.lcm(12, 20) 60 >>> np.lcm.reduce([3, 12, 20]) 60 >>> np.lcm.reduce([40, 12, 20]) 120 >>> np.lcm(np.arange(6), 20) array([ 0, 20, 20, 60, 20, 20]) # numpy.ldexp numpy.ldexp(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'ldexp'>_ Returns x1 * 2**x2, element-wise. The mantissas `x1` and twos exponents `x2` are used to construct floating point numbers `x1 * 2**x2`. Parameters: **x1** array_like Array of multipliers. **x2** array_like, int Array of twos exponents. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output).
**out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray or scalar The result of `x1 * 2**x2`. This is a scalar if both `x1` and `x2` are scalars. See also [`frexp`](numpy.frexp#numpy.frexp "numpy.frexp") Return (y1, y2) from `x = y1 * 2**y2`, inverse to `ldexp`. #### Notes Complex dtypes are not supported; they will raise a TypeError. `ldexp` is useful as the inverse of [`frexp`](numpy.frexp#numpy.frexp "numpy.frexp"); if used by itself, it is clearer to simply use the expression `x1 * 2**x2`. #### Examples >>> import numpy as np >>> np.ldexp(5, np.arange(4)) array([ 5., 10., 20., 40.], dtype=float16) >>> x = np.arange(6) >>> np.ldexp(*np.frexp(x)) array([ 0., 1., 2., 3., 4., 5.]) # numpy.left_shift numpy.left_shift(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'left_shift'>_ Shift the bits of an integer to the left. Bits are shifted to the left by appending `x2` 0s at the right of `x1`. Since the internal representation of numbers is in binary format, this operation is equivalent to multiplying `x1` by `2**x2`. Parameters: **x1** array_like of integer type Input values.
**x2** array_like of integer type Number of zeros to append to `x1`. Has to be non-negative. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** array of integer type Return `x1` with bits shifted `x2` times to the left. This is a scalar if both `x1` and `x2` are scalars. See also [`right_shift`](numpy.right_shift#numpy.right_shift "numpy.right_shift") Shift the bits of an integer to the right. [`binary_repr`](numpy.binary_repr#numpy.binary_repr "numpy.binary_repr") Return the binary representation of the input number as a string. 
#### Examples >>> import numpy as np >>> np.binary_repr(5) '101' >>> np.left_shift(5, 2) 20 >>> np.binary_repr(20) '10100' >>> np.left_shift(5, [1,2,3]) array([10, 20, 40]) Note that the dtype of the second argument may change the dtype of the result and can lead to unexpected results in some cases (see [Casting Rules](../../user/basics.ufuncs#ufuncs-casting)): >>> a = np.left_shift(np.uint8(255), np.int64(1)) # Expect 254 >>> print(a, type(a)) # Unexpected result due to upcasting 510 <class 'numpy.int64'> >>> b = np.left_shift(np.uint8(255), np.uint8(1)) >>> print(b, type(b)) 254 <class 'numpy.uint8'> The `<<` operator can be used as a shorthand for `np.left_shift` on ndarrays. >>> x1 = 5 >>> x2 = np.array([1, 2, 3]) >>> x1 << x2 array([10, 20, 40]) # numpy.less numpy.less(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'less'>_ Return the truth value of (x1 < x2) element-wise. Parameters: **x1, x2** array_like Input arrays. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** ndarray or scalar Output array, element-wise comparison of `x1` and `x2`.
Typically of type bool, unless `dtype=object` is passed. This is a scalar if both `x1` and `x2` are scalars. See also [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`equal`](numpy.equal#numpy.equal "numpy.equal"), [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal") #### Examples >>> import numpy as np >>> np.less([1, 2], [2, 2]) array([ True, False]) The `<` operator can be used as a shorthand for `np.less` on ndarrays. >>> a = np.array([1, 2]) >>> b = np.array([2, 2]) >>> a < b array([ True, False]) # numpy.less_equal numpy.less_equal(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'less_equal'>_ Return the truth value of (x1 <= x2) element-wise. Parameters: **x1, x2** array_like Input arrays. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** ndarray or scalar Output array, element-wise comparison of `x1` and `x2`.
Typically of type bool, unless `dtype=object` is passed. This is a scalar if both `x1` and `x2` are scalars. See also [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`less`](numpy.less#numpy.less "numpy.less"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`equal`](numpy.equal#numpy.equal "numpy.equal"), [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal") #### Examples >>> import numpy as np >>> np.less_equal([4, 2, 1], [2, 2, 2]) array([False, True, True]) The `<=` operator can be used as a shorthand for `np.less_equal` on ndarrays. >>> a = np.array([4, 2, 1]) >>> b = np.array([2, 2, 2]) >>> a <= b array([False, True, True]) # numpy.lexsort numpy.lexsort(_keys_ , _axis =-1_) Perform an indirect stable sort using a sequence of keys. Given multiple sorting keys, lexsort returns an array of integer indices that describes the sort order by multiple keys. The last key in the sequence is used for the primary sort order; ties are broken by the second-to-last key, and so on. Parameters: **keys**(k, m, n, …) array-like The `k` keys to be sorted. The _last_ key (e.g., the last row if `keys` is a 2D array) is the primary sort key. Each element of `keys` along the zeroth axis must be an array-like object of the same shape. **axis** int, optional Axis to be indirectly sorted. By default, sort over the last axis of each sequence. Separate slices along `axis` are sorted independently; see the last example. Returns: **indices**(m, n, …) ndarray of ints Array of indices that sort the keys along the specified axis. See also [`argsort`](numpy.argsort#numpy.argsort "numpy.argsort") Indirect sort. [`ndarray.sort`](numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort") In-place sort. [`sort`](numpy.sort#numpy.sort "numpy.sort") Return a sorted copy of an array. #### Examples Sort names: first by surname, then by name.
>>> import numpy as np >>> surnames = ('Hertz', 'Galilei', 'Hertz') >>> first_names = ('Heinrich', 'Galileo', 'Gustav') >>> ind = np.lexsort((first_names, surnames)) >>> ind array([1, 2, 0]) >>> [surnames[i] + ", " + first_names[i] for i in ind] ['Galilei, Galileo', 'Hertz, Gustav', 'Hertz, Heinrich'] Sort according to two numerical keys, first by elements of `a`, then breaking ties according to elements of `b`: >>> a = [1, 5, 1, 4, 3, 4, 4] # First sequence >>> b = [9, 4, 0, 4, 0, 2, 1] # Second sequence >>> ind = np.lexsort((b, a)) # Sort by `a`, then by `b` >>> ind array([2, 0, 4, 6, 5, 3, 1]) >>> [(a[i], b[i]) for i in ind] [(1, 0), (1, 9), (3, 0), (4, 1), (4, 2), (4, 4), (5, 4)] Compare against [`argsort`](numpy.argsort#numpy.argsort "numpy.argsort"), which would sort each key independently. >>> np.argsort((b, a), kind='stable') array([[2, 4, 6, 5, 1, 3, 0], [0, 2, 4, 3, 5, 6, 1]]) To sort lexicographically with [`argsort`](numpy.argsort#numpy.argsort "numpy.argsort"), we would need to provide a structured array. >>> x = np.array([(ai, bi) for ai, bi in zip(a, b)], ... dtype = np.dtype([('x', int), ('y', int)])) >>> np.argsort(x) # or np.argsort(x, order=('x', 'y')) array([2, 0, 4, 6, 5, 3, 1]) The zeroth axis of `keys` always corresponds with the sequence of keys, so 2D arrays are treated just like other sequences of keys. >>> arr = np.asarray([b, a]) >>> ind2 = np.lexsort(arr) >>> np.testing.assert_equal(ind2, ind) Accordingly, the `axis` parameter refers to an axis of _each_ key, not of the `keys` argument itself. For instance, the array `arr` is treated as a sequence of two 1-D keys, so specifying `axis=0` is equivalent to using the default axis, `axis=-1`. >>> np.testing.assert_equal(np.lexsort(arr, axis=0), ... np.lexsort(arr, axis=-1)) For higher-dimensional arrays, the axis parameter begins to matter. 
The resulting array has the same shape as each key, and the values are what we would expect if `lexsort` were performed on corresponding slices of the keys independently. For instance, >>> x = [[1, 2, 3, 4], ... [4, 3, 2, 1], ... [2, 1, 4, 3]] >>> y = [[2, 2, 1, 1], ... [1, 2, 1, 2], ... [1, 1, 2, 1]] >>> np.lexsort((x, y), axis=1) array([[2, 3, 0, 1], [2, 0, 3, 1], [1, 0, 3, 2]]) Each row of the result is what we would expect if we were to perform `lexsort` on the corresponding row of the keys: >>> for i in range(3): ... print(np.lexsort((x[i], y[i]))) [2 3 0 1] [2 0 3 1] [1 0 3 2] # numpy.lib.add_docstring lib.add_docstring(_obj_ , _docstring_) Add a docstring to a built-in obj if possible. If the obj already has a docstring, raise a RuntimeError. If this routine does not know how to add a docstring to the object, raise a TypeError. # numpy.lib.add_newdoc lib.add_newdoc(_place_ , _obj_ , _doc_ , _warn_on_python =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/function_base.py#L487-L546) Add documentation to an existing object, typically one defined in C. The purpose is to allow easier editing of the docstrings without requiring a re-compile. This exists primarily for internal use within numpy itself. Parameters: **place** str The absolute name of the module to import from. **obj** str or None The name of the object to add documentation to, typically a class or function name. **doc**{str, Tuple[str, str], List[Tuple[str, str]]} If a string, the documentation to apply to `obj`. If a tuple, then the first element is interpreted as an attribute of `obj` and the second as the docstring to apply - `(method, docstring)`. If a list, then each element of the list should be a tuple of length two - `[(method1, docstring1), (method2, docstring2), ...]` **warn_on_python** bool If True, the default, emit `UserWarning` if this is used to attach documentation to a pure-python object.
#### Notes This routine never raises an error if the docstring can't be written, but will raise an error if the object being documented does not exist. This routine cannot modify read-only docstrings, such as those of new-style classes or built-in functions. Because this routine never raises an error, the caller must check manually that the docstrings were changed. Since this function grabs the `char *` from a c-level str object and puts it into the `tp_doc` slot of the type of `obj`, it violates a number of C-API best-practices, by: * modifying a `PyTypeObject` after calling `PyType_Ready` * calling `Py_INCREF` on the str and losing the reference, so the str will never be released If possible it should be avoided. # numpy.lib.array_utils.byte_bounds lib.array_utils.byte_bounds(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_array_utils_impl.py#L11-L62) Returns pointers to the end-points of an array. Parameters: **a** ndarray Input array. It must conform to the Python-side of the array interface. Returns: **(low, high)** tuple of 2 integers The first integer is the first byte of the array, the second integer is just past the last byte of the array. If `a` is not contiguous, it will not use every byte between the (`low`, `high`) values. #### Examples >>> import numpy as np >>> I = np.eye(2, dtype='f'); I.dtype dtype('float32') >>> low, high = np.lib.array_utils.byte_bounds(I) >>> high - low == I.size*I.itemsize True >>> I = np.eye(2); I.dtype dtype('float64') >>> low, high = np.lib.array_utils.byte_bounds(I) >>> high - low == I.size*I.itemsize True # numpy.lib.array_utils Miscellaneous utils. #### Functions [`byte_bounds`](numpy.lib.array_utils.byte_bounds#numpy.lib.array_utils.byte_bounds "numpy.lib.array_utils.byte_bounds")(a) | Returns pointers to the end-points of an array.
---|--- [`normalize_axis_index`](numpy.lib.array_utils.normalize_axis_index#numpy.lib.array_utils.normalize_axis_index "numpy.lib.array_utils.normalize_axis_index")(axis, ndim[, msg_prefix]) | Normalizes an axis index, `axis`, such that it is a valid positive index into the shape of an array with `ndim` dimensions. [`normalize_axis_tuple`](numpy.lib.array_utils.normalize_axis_tuple#numpy.lib.array_utils.normalize_axis_tuple "numpy.lib.array_utils.normalize_axis_tuple")(axis, ndim[, argname, ...]) | Normalizes an axis argument into a tuple of non-negative integer axes. # numpy.lib.array_utils.normalize_axis_index lib.array_utils.normalize_axis_index(_axis_ , _ndim_ , _msg_prefix =None_) Normalizes an axis index, `axis`, such that it is a valid positive index into the shape of an array with [`ndim`](numpy.ndim#numpy.ndim "numpy.ndim") dimensions. Raises an AxisError with an appropriate message if this is not possible. Used internally by all axis-checking logic. Parameters: **axis** int The un-normalized index of the axis. Can be negative. **ndim** int The number of dimensions of the array that `axis` should be normalized against. **msg_prefix** str A prefix to put before the message, typically the name of the argument. Returns: **normalized_axis** int The normalized axis index, such that `0 <= normalized_axis < ndim` Raises: AxisError If the axis index is invalid, i.e. when `-ndim <= axis < ndim` is false. #### Examples >>> import numpy as np >>> from numpy.lib.array_utils import normalize_axis_index >>> normalize_axis_index(0, ndim=3) 0 >>> normalize_axis_index(1, ndim=3) 1 >>> normalize_axis_index(-1, ndim=3) 2 >>> normalize_axis_index(3, ndim=3) Traceback (most recent call last): ... numpy.exceptions.AxisError: axis 3 is out of bounds for array ... >>> normalize_axis_index(-4, ndim=3, msg_prefix='axes_arg') Traceback (most recent call last): ... numpy.exceptions.AxisError: axes_arg: axis -4 is out of bounds ...
# numpy.lib.array_utils.normalize_axis_tuple lib.array_utils.normalize_axis_tuple(_axis_ , _ndim_ , _argname =None_, _allow_duplicate =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L1386-L1441) Normalizes an axis argument into a tuple of non-negative integer axes. This handles shorthands such as `1` and converts them to `(1,)`, as well as performing the handling of negative indices covered by [`normalize_axis_index`](numpy.lib.array_utils.normalize_axis_index#numpy.lib.array_utils.normalize_axis_index "numpy.lib.array_utils.normalize_axis_index"). By default, this forbids axes from being specified multiple times. Used internally by multi-axis-checking logic. Parameters: **axis** int, iterable of int The un-normalized index or indices of the axis. **ndim** int The number of dimensions of the array that `axis` should be normalized against. **argname** str, optional A prefix to put before the error message, typically the name of the argument. **allow_duplicate** bool, optional If False, the default, disallow an axis from being specified twice. Returns: **normalized_axes** tuple of int The normalized axis indices, such that `0 <= normalized_axis < ndim` for each axis. Raises: AxisError If any axis provided is out of range. ValueError If an axis is repeated. See also [`normalize_axis_index`](numpy.lib.array_utils.normalize_axis_index#numpy.lib.array_utils.normalize_axis_index "numpy.lib.array_utils.normalize_axis_index") normalizing a single scalar axis # numpy.lib.Arrayterator.flat property _property_ lib.Arrayterator.flat A 1-D flat iterator for Arrayterator objects. This iterator returns the elements of the array being iterated over in [`Arrayterator`](numpy.lib.arrayterator#numpy.lib.Arrayterator "numpy.lib.Arrayterator") one by one. It is similar to [`flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter").
See also [`lib.Arrayterator`](numpy.lib.arrayterator#numpy.lib.Arrayterator "numpy.lib.Arrayterator") [`flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") #### Examples >>> a = np.arange(3 * 4 * 5 * 6).reshape(3, 4, 5, 6) >>> a_itor = np.lib.Arrayterator(a, 2) >>> for subarr in a_itor.flat: ... if not subarr: ... print(subarr, type(subarr)) ... 0 <class 'numpy.int64'> # numpy.lib.Arrayterator _class_ numpy.lib.Arrayterator(_var_ , _buf_size =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/__init__.py) Buffered iterator for big arrays. `Arrayterator` creates a buffered iterator for reading big arrays in small contiguous blocks. The class is useful for objects stored in the file system. It allows iteration over the object _without_ reading everything in memory; instead, small blocks are read and iterated over. `Arrayterator` can be used with any object that supports multidimensional slices. This includes NumPy arrays, but also variables from Scientific.IO.NetCDF or pynetcdf for example. Parameters: **var** array_like The object to iterate over. **buf_size** int, optional The buffer size. If `buf_size` is supplied, the maximum amount of data that will be read into memory is `buf_size` elements. Default is None, which will read as many elements as possible into memory. See also [`numpy.ndenumerate`](numpy.ndenumerate#numpy.ndenumerate "numpy.ndenumerate") Multidimensional array iterator. [`numpy.flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") Flat array iterator. [`numpy.memmap`](numpy.memmap#numpy.memmap "numpy.memmap") Create a memory-map to an array stored in a binary file on disk. #### Notes The algorithm works by first finding a “running dimension”, along which the blocks will be extracted. Given an array of dimensions `(d1, d2, ..., dn)`, e.g. if `buf_size` is smaller than `d1`, the first dimension will be used. If, on the other hand, `d1 < buf_size < d1*d2` the second dimension will be used, and so on.
Blocks are extracted along this dimension, and when the last block is returned the process continues from the next dimension, until all elements have been read. #### Examples >>> import numpy as np >>> a = np.arange(3 * 4 * 5 * 6).reshape(3, 4, 5, 6) >>> a_itor = np.lib.Arrayterator(a, 2) >>> a_itor.shape (3, 4, 5, 6) Now we can iterate over `a_itor`, and it will return arrays of size two. Since `buf_size` was smaller than any dimension, the first dimension will be iterated over first: >>> for subarr in a_itor: ... if not subarr.all(): ... print(subarr, subarr.shape) >>> # [[[[0 1]]]] (1, 1, 1, 2) Attributes: **var** **buf_size** **start** **stop** **step** [`shape`](numpy.lib.arrayterator.shape#numpy.lib.Arrayterator.shape "numpy.lib.Arrayterator.shape") The shape of the array to be iterated over. [`flat`](numpy.lib.arrayterator.flat#numpy.lib.Arrayterator.flat "numpy.lib.Arrayterator.flat") A 1-D flat iterator for Arrayterator objects. # numpy.lib.Arrayterator.shape property _property_ lib.Arrayterator.shape The shape of the array to be iterated over. For an example, see `Arrayterator`. # numpy.lib.format.descr_to_dtype lib.format.descr_to_dtype(_descr_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/format.py#L303-L357) Returns a dtype based off the given description. This is essentially the reverse of [`dtype_to_descr`](numpy.lib.format.dtype_to_descr#numpy.lib.format.dtype_to_descr "numpy.lib.format.dtype_to_descr"). It will remove the valueless padding fields that appear in the descr of simple dtypes like dtype('float32'), and then convert the description to its corresponding dtype. Parameters: **descr** object The object retrieved by dtype.descr. Can be passed to [`numpy.dtype`](numpy.dtype#numpy.dtype "numpy.dtype") in order to replicate the input dtype. Returns: **dtype** dtype The dtype constructed by the description.
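A minimal round-trip sketch, assuming only what the two functions document: `dtype_to_descr` produces the serializable form, and `descr_to_dtype` reconstructs the original dtype from it:

```python
import numpy as np
from numpy.lib.format import descr_to_dtype, dtype_to_descr

# Simple dtype: the descriptor is a plain type string, and it round-trips.
simple = np.dtype('float32')
assert descr_to_dtype(dtype_to_descr(simple)) == simple

# Structured dtype: the descriptor is a list of (name, format) pairs.
structured = np.dtype([('x', '<f4'), ('y', '<i8')])
assert descr_to_dtype(dtype_to_descr(structured)) == structured
```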
# numpy.lib.format.drop_metadata lib.format.drop_metadata(_dtype_ , _/_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_utils_impl.py#L715-L775) Returns the dtype unchanged if it contained no metadata, or a copy of the dtype if it (or any of its structure dtypes) contained metadata. This utility is used by `np.save` and `np.savez` to drop metadata before saving. Note Due to its limitations, this function may move to a more appropriate home or change in the future, and is considered semi-public API only. Warning This function does not preserve more strange things like record dtypes, and user dtypes may simply return the wrong thing. If you need to be sure about the latter, check the result with: `np.can_cast(new_dtype, dtype, casting="no")`. # numpy.lib.format.dtype_to_descr lib.format.dtype_to_descr(_dtype_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/format.py#L245-L301) Get a serializable descriptor from the dtype. The .descr attribute of a dtype object cannot be round-tripped through the dtype() constructor. Simple types, like dtype('float32'), have a descr which looks like a record array with one field with '' as a name. The dtype() constructor interprets this as a request to give a default name. Instead, we construct a descriptor that can be passed to dtype(). Parameters: **dtype** dtype The dtype of the array that will be written to disk. Returns: **descr** object An object that can be passed to `numpy.dtype()` in order to replicate the input dtype. # numpy.lib.format.header_data_from_array_1_0 lib.format.header_data_from_array_1_0(_array_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/format.py#L359-L384) Get the dictionary of header metadata from a numpy.ndarray. Parameters: **array** numpy.ndarray Returns: **d** dict This has the appropriate entries for writing its string representation to the header of the file.
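A short sketch of `header_data_from_array_1_0` on an ordinary array; the three keys it returns are exactly the fields a `.npy` header stores:

```python
import numpy as np
from numpy.lib.format import header_data_from_array_1_0

arr = np.zeros((3, 4), dtype='<f8')
d = header_data_from_array_1_0(arr)
print(sorted(d))                       # ['descr', 'fortran_order', 'shape']
print(d['shape'], d['fortran_order'])  # (3, 4) False
```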
# numpy.lib.format Binary serialization ## NPY format A simple format for saving numpy arrays to disk with the full information about them. The `.npy` format is the standard binary file format in NumPy for persisting a _single_ arbitrary NumPy array on disk. The format stores all of the shape and dtype information necessary to reconstruct the array correctly even on another machine with a different architecture. The format is designed to be as simple as possible while achieving its limited goals. The `.npz` format is the standard format for persisting _multiple_ NumPy arrays on disk. A `.npz` file is a zip file containing multiple `.npy` files, one for each array. ### Capabilities * Can represent all NumPy arrays including nested record arrays and object arrays. * Represents the data in its native binary form. * Supports Fortran-contiguous arrays directly. * Stores all of the necessary information to reconstruct the array including shape and dtype on a machine of a different architecture. Both little-endian and big-endian arrays are supported, and a file with little-endian numbers will yield a little-endian array on any machine reading the file. The types are described in terms of their actual sizes. For example, if a machine with a 64-bit C “long int” writes out an array with “long ints”, a reading machine with 32-bit C “long ints” will yield an array with 64-bit integers. * Is straightforward to reverse engineer. Datasets often live longer than the programs that created them. A competent developer should be able to create a solution in their preferred programming language to read most `.npy` files that they have been given without much documentation. * Allows memory-mapping of the data. See [`open_memmap`](numpy.lib.format.open_memmap#numpy.lib.format.open_memmap "numpy.lib.format.open_memmap"). * Can be read from a filelike stream object instead of an actual file. * Stores object arrays, i.e. arrays containing elements that are arbitrary Python objects. 
Files with object arrays cannot be memory-mapped, but can be read and written to disk. ### Limitations * Arbitrary subclasses of numpy.ndarray are not completely preserved. Subclasses will be accepted for writing, but only the array data will be written out. A regular numpy.ndarray object will be created upon reading the file. Warning Due to limitations in the interpretation of structured dtypes, dtypes with fields with empty names will have the names replaced by ‘f0’, ‘f1’, etc. Such arrays will not round-trip through the format entirely accurately. The data is intact; only the field names will differ. We are working on a fix for this. This fix will not require a change in the file format. The arrays with such structures can still be saved and restored, and the correct dtype may be restored by using the `loadedarray.view(correct_dtype)` method. ### File extensions We recommend using the `.npy` and `.npz` extensions for files saved in this format. This is by no means a requirement; applications may wish to use these file formats but use an extension specific to the application. In the absence of an obvious alternative, however, we suggest using `.npy` and `.npz`. ### Version numbering The version numbering of these formats is independent of NumPy version numbering. If the format is upgraded, the code in `numpy.io` will still be able to read and write Version 1.0 files. ### Format Version 1.0 The first 6 bytes are a magic string: exactly `\x93NUMPY`. The next 1 byte is an unsigned byte: the major version number of the file format, e.g. `\x01`. The next 1 byte is an unsigned byte: the minor version number of the file format, e.g. `\x00`. Note: the version of the file format is not tied to the version of the numpy package. The next 2 bytes form a little-endian unsigned short int: the length of the header data HEADER_LEN. The next HEADER_LEN bytes form the header data describing the array’s format.
It is an ASCII string which contains a Python literal expression of a dictionary. It is terminated by a newline (`\n`) and padded with spaces (`\x20`) to make the total of `len(magic string) + 2 + len(length) + HEADER_LEN` be evenly divisible by 64 for alignment purposes. The dictionary contains three keys: “descr” (dtype.descr): An object that can be passed as an argument to the [`numpy.dtype`](numpy.dtype#numpy.dtype "numpy.dtype") constructor to create the array’s dtype. “fortran_order” (bool): Whether the array data is Fortran-contiguous or not. Since Fortran-contiguous arrays are a common form of non-C-contiguity, we allow them to be written directly to disk for efficiency. “shape” (tuple of int): The shape of the array. For repeatability and readability, the dictionary keys are sorted in alphabetic order. This is for convenience only. A writer SHOULD implement this if possible. A reader MUST NOT depend on this. Following the header comes the array data. If the dtype contains Python objects (i.e. `dtype.hasobject is True`), then the data is a Python pickle of the array. Otherwise the data is the contiguous (either C- or Fortran-, depending on `fortran_order`) bytes of the array. Consumers can figure out the number of bytes by multiplying the number of elements given by the shape (noting that `shape=()` means there is 1 element) by `dtype.itemsize`. ### Format Version 2.0 The version 1.0 format only allowed the array header to have a total size of 65535 bytes. This can be exceeded by structured arrays with a large number of columns. The version 2.0 format extends the header size to 4 GiB. [`numpy.save`](numpy.save#numpy.save "numpy.save") will automatically save in 2.0 format if the data requires it, else it will always use the more compatible 1.0 format.
The description of the fourth element of the header therefore has become: “The next 4 bytes form a little-endian unsigned int: the length of the header data HEADER_LEN.”

### Format Version 3.0

This version replaces the ASCII string (which in practice was latin1) with a utf8-encoded string, and so supports structured types with any unicode field names.

### Notes

The `.npy` format, including the motivation for creating it and a comparison of alternatives, is described in the [“npy-format” NEP](https://numpy.org/neps/nep-0001-npy-format.html "\(in NumPy Enhancement Proposals\)"); however, the details have evolved over time and this document is more current.

#### Functions

[`descr_to_dtype`](numpy.lib.format.descr_to_dtype#numpy.lib.format.descr_to_dtype "numpy.lib.format.descr_to_dtype")(descr) | Returns a dtype based off the given description. ---|--- [`drop_metadata`](numpy.lib.format.drop_metadata#numpy.lib.format.drop_metadata "numpy.lib.format.drop_metadata")(dtype, /) | Returns the dtype unchanged if it contained no metadata or a copy of the dtype if it (or any of its structure dtypes) contained metadata. [`dtype_to_descr`](numpy.lib.format.dtype_to_descr#numpy.lib.format.dtype_to_descr "numpy.lib.format.dtype_to_descr")(dtype) | Get a serializable descriptor from the dtype. [`header_data_from_array_1_0`](numpy.lib.format.header_data_from_array_1_0#numpy.lib.format.header_data_from_array_1_0 "numpy.lib.format.header_data_from_array_1_0")(array) | Get the dictionary of header metadata from a numpy.ndarray. [`isfileobj`](numpy.lib.format.isfileobj#numpy.lib.format.isfileobj "numpy.lib.format.isfileobj")(f) | [`magic`](numpy.lib.format.magic#numpy.lib.format.magic "numpy.lib.format.magic")(major, minor) | Return the magic string for the given file format version. [`open_memmap`](numpy.lib.format.open_memmap#numpy.lib.format.open_memmap "numpy.lib.format.open_memmap")(filename[, mode, dtype, shape, ...]) | Open a .npy file as a memory-mapped array.
[`read_array`](numpy.lib.format.read_array#numpy.lib.format.read_array "numpy.lib.format.read_array")(fp[, allow_pickle, ...]) | Read an array from an NPY file. [`read_array_header_1_0`](numpy.lib.format.read_array_header_1_0#numpy.lib.format.read_array_header_1_0 "numpy.lib.format.read_array_header_1_0")(fp[, max_header_size]) | Read an array header from a filelike object using the 1.0 file format version. [`read_array_header_2_0`](numpy.lib.format.read_array_header_2_0#numpy.lib.format.read_array_header_2_0 "numpy.lib.format.read_array_header_2_0")(fp[, max_header_size]) | Read an array header from a filelike object using the 2.0 file format version. [`read_magic`](numpy.lib.format.read_magic#numpy.lib.format.read_magic "numpy.lib.format.read_magic")(fp) | Read the magic string to get the version of the file format. [`write_array`](numpy.lib.format.write_array#numpy.lib.format.write_array "numpy.lib.format.write_array")(fp, array[, version, ...]) | Write an array to an NPY file, including a header. [`write_array_header_1_0`](numpy.lib.format.write_array_header_1_0#numpy.lib.format.write_array_header_1_0 "numpy.lib.format.write_array_header_1_0")(fp, d) | Write the header for an array using the 1.0 format. [`write_array_header_2_0`](numpy.lib.format.write_array_header_2_0#numpy.lib.format.write_array_header_2_0 "numpy.lib.format.write_array_header_2_0")(fp, d) | Write the header for an array using the 2.0 format. # numpy.lib.format.isfileobj lib.format.isfileobj(_f_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/format.py#L999-L1008) # numpy.lib.format.magic lib.format.magic(_major_ , _minor_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/format.py#L203-L223) Return the magic string for the given file format version. Parameters: **major** int in [0, 255] **minor** int in [0, 255] Returns: **magic** str Raises: ValueError if the version cannot be formatted. 
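Taken together with the Format Version 1.0 description above, the magic string and header can be decoded by hand. The following sketch (not part of the official docs; variable names are illustrative) saves a small float64 array with `np.save` and unpacks the resulting stream, assuming the array is small enough that the 1.0 format is chosen:

```python
import ast
import io
import struct

import numpy as np
from numpy.lib import format as npy_format

# Serialize a small array, then pick the version 1.0 stream apart by hand.
buf = io.BytesIO()
np.save(buf, np.ones((3, 4), dtype="<f8"))
raw = buf.getvalue()

# Bytes 0-7: magic string plus the (major, minor) version bytes.
assert raw[:8] == npy_format.magic(1, 0)

# Bytes 8-9: little-endian unsigned short HEADER_LEN.
(header_len,) = struct.unpack("<H", raw[8:10])
assert (10 + header_len) % 64 == 0  # header is padded to a multiple of 64

# The header itself is a Python literal dict, terminated by a newline.
header = ast.literal_eval(raw[10:10 + header_len].decode("latin1"))
assert header == {"descr": "<f8", "fortran_order": False, "shape": (3, 4)}
```

`read_magic` performs the first step programmatically, and the `read_array_header_*` helpers parse the header safely with a configurable size limit.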
# numpy.lib.format.open_memmap

lib.format.open_memmap(_filename_ , _mode ='r+'_, _dtype =None_, _shape =None_, _fortran_order =False_, _version =None_, _*_ , _max_header_size =10000_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/format.py#L864-L968)

Open a .npy file as a memory-mapped array. This may be used to read an existing file or create a new one.

Parameters: **filename** str or path-like The name of the file on disk. This may _not_ be a file-like object. **mode** str, optional The mode in which to open the file; the default is ‘r+’. In addition to the standard file modes, ‘c’ is also accepted to mean “copy on write.” See [`memmap`](numpy.memmap#numpy.memmap "numpy.memmap") for the available mode strings. **dtype** data-type, optional The data type of the array if we are creating a new file in “write” mode; otherwise, [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is ignored. The default value is None, which results in a data-type of [`float64`](../arrays.scalars#numpy.float64 "numpy.float64"). **shape** tuple of int The shape of the array if we are creating a new file in “write” mode, in which case this parameter is required. Otherwise, this parameter is ignored and is thus optional. **fortran_order** bool, optional Whether the array should be Fortran-contiguous (True) or C-contiguous (False, the default) if we are creating a new file in “write” mode. **version** tuple of int (major, minor) or None If the mode is a “write” mode, then this is the version of the file format used to create the file. None means use the oldest supported version that is able to store the data. Default: None **max_header_size** int, optional Maximum allowed size of the header. Large headers may not be safe to load securely and thus require explicitly passing a larger value. See [`ast.literal_eval`](https://docs.python.org/3/library/ast.html#ast.literal_eval "\(in Python v3.13\)") for details.

Returns: **marray** memmap The memory-mapped array.
Raises: ValueError If the data or the mode is invalid. OSError If the file is not found or cannot be opened correctly.

See also

[`numpy.memmap`](numpy.memmap#numpy.memmap "numpy.memmap")

# numpy.lib.format.read_array

lib.format.read_array(_fp_ , _allow_pickle =False_, _pickle_kwargs =None_, _*_ , _max_header_size =10000_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/format.py#L762-L861)

Read an array from an NPY file.

Parameters: **fp** file_like object If this is not a real file object, then this may take extra memory and time. **allow_pickle** bool, optional Whether to allow reading pickled data. Default: False **pickle_kwargs** dict Additional keyword arguments to pass to pickle.load. These are only useful when loading object arrays saved on Python 2 when using Python 3. **max_header_size** int, optional Maximum allowed size of the header. Large headers may not be safe to load securely and thus require explicitly passing a larger value. See [`ast.literal_eval`](https://docs.python.org/3/library/ast.html#ast.literal_eval "\(in Python v3.13\)") for details. This option is ignored when `allow_pickle` is passed. In that case the file is by definition trusted and the limit is unnecessary.

Returns: **array** ndarray The array from the data on disk.

Raises: ValueError If the data is invalid, or allow_pickle=False and the file contains an object array.

# numpy.lib.format.read_array_header_1_0

lib.format.read_array_header_1_0(_fp_ , _max_header_size =10000_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/format.py#L496-L530)

Read an array header from a filelike object using the 1.0 file format version. This will leave the file object located just after the header.

Parameters: **fp** filelike object A file object or something with a `read()` method like a file. **max_header_size** int, optional Maximum allowed size of the header. Large headers may not be safe to load securely and thus require explicitly passing a larger value. See [`ast.literal_eval`](https://docs.python.org/3/library/ast.html#ast.literal_eval "\(in Python v3.13\)") for details.

Returns: **shape** tuple of int The shape of the array. **fortran_order** bool True if the array data is stored in Fortran (column-major) order; False if it is stored in C (row-major) order. **dtype** dtype The dtype of the file’s data.

Raises: ValueError If the data is invalid.

# numpy.lib.format.read_array_header_2_0

lib.format.read_array_header_2_0(_fp_ , _max_header_size =10000_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/format.py#L532-L566)

Read an array header from a filelike object using the 2.0 file format version. This will leave the file object located just after the header.

Parameters: **fp** filelike object A file object or something with a `read()` method like a file. **max_header_size** int, optional Maximum allowed size of the header. Large headers may not be safe to load securely and thus require explicitly passing a larger value. See [`ast.literal_eval`](https://docs.python.org/3/library/ast.html#ast.literal_eval "\(in Python v3.13\)") for details.

Returns: **shape** tuple of int The shape of the array. **fortran_order** bool True if the array data is stored in Fortran (column-major) order; False if it is stored in C (row-major) order. **dtype** dtype The dtype of the file’s data.

Raises: ValueError If the data is invalid.

# numpy.lib.format.read_magic

lib.format.read_magic(_fp_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/format.py#L225-L242)

Read the magic string to get the version of the file format.
Parameters: **fp** filelike object

Returns: **major** int **minor** int

# numpy.lib.format.write_array

lib.format.write_array(_fp_ , _array_ , _version =None_, _allow_pickle =True_, _pickle_kwargs =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/format.py#L684-L759)

Write an array to an NPY file, including a header.

If the array is neither C-contiguous nor Fortran-contiguous AND the file_like object is not a real file object, this function will have to copy data in memory.

Parameters: **fp** file_like object An open, writable file object, or similar object with a `.write()` method. **array** ndarray The array to write to disk. **version** (int, int) or None, optional The version number of the format. None means use the oldest supported version that is able to store the data. Default: None **allow_pickle** bool, optional Whether to allow writing pickled data. Default: True **pickle_kwargs** dict, optional Additional keyword arguments to pass to pickle.dump, excluding ‘protocol’. These are only useful when pickling objects in object arrays on Python 3 to a Python 2-compatible format.

Raises: ValueError If the array cannot be persisted. This includes the case of allow_pickle=False and array being an object array. Various other errors If the array contains Python objects as part of its dtype, the process of pickling them may raise various errors if the objects are not picklable.

# numpy.lib.format.write_array_header_1_0

lib.format.write_array_header_1_0(_fp_ , _d_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/format.py#L470-L480)

Write the header for an array using the 1.0 format.

Parameters: **fp** filelike object **d** dict This has the appropriate entries for writing its string representation to the header of the file.
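As a quick sanity check of the reader/writer pair documented here, the sketch below (illustrative only; it relies solely on `write_array`, `read_magic` and `read_array` as described above) round-trips an array through an in-memory buffer:

```python
import io

import numpy as np
from numpy.lib import format as npy_format

arr = np.arange(12.0).reshape(3, 4)

buf = io.BytesIO()
npy_format.write_array(buf, arr)  # writes magic string, header and data
buf.seek(0)

# A plain float64 array fits in the 1.0 format.
assert npy_format.read_magic(io.BytesIO(buf.getvalue())) == (1, 0)

out = npy_format.read_array(buf, allow_pickle=False)
assert np.array_equal(out, arr) and out.dtype == arr.dtype
```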
# numpy.lib.format.write_array_header_2_0

lib.format.write_array_header_2_0(_fp_ , _d_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/format.py#L483-L494)

Write the header for an array using the 2.0 format. The 2.0 format allows storing very large structured arrays.

Parameters: **fp** filelike object **d** dict This has the appropriate entries for writing its string representation to the header of the file.

# numpy.lib.introspect

Introspection helper functions.

#### Functions

[`opt_func_info`](numpy.lib.introspect.opt_func_info#numpy.lib.introspect.opt_func_info "numpy.lib.introspect.opt_func_info")([func_name, signature]) | Returns a dictionary containing the currently supported CPU dispatched features for all optimized functions. ---|---

# numpy.lib.introspect.opt_func_info

lib.introspect.opt_func_info(_func_name =None_, _signature =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/introspect.py#L9-L95)

Returns a dictionary containing the currently supported CPU dispatched features for all optimized functions.

Parameters: **func_name** str (optional) Regular expression to filter by function name. **signature** str (optional) Regular expression to filter by data type.

Returns: dict A dictionary where keys are optimized function names and values are nested dictionaries indicating supported targets based on data types.

#### Examples

Retrieve dispatch information for functions named ‘add’ or ‘abs’ and data types ‘float64’ or ‘complex64’:

>>> import numpy as np
>>> dict = np.lib.introspect.opt_func_info(
...     func_name="add|abs", signature="float64|complex64"
...
... )
>>> import json
>>> print(json.dumps(dict, indent=2))
{
  "absolute": {
    "dd": {
      "current": "SSE41",
      "available": "SSE41 baseline(SSE SSE2 SSE3)"
    },
    "Ff": {
      "current": "FMA3__AVX2",
      "available": "AVX512F FMA3__AVX2 baseline(SSE SSE2 SSE3)"
    },
    "Dd": {
      "current": "FMA3__AVX2",
      "available": "AVX512F FMA3__AVX2 baseline(SSE SSE2 SSE3)"
    }
  },
  "add": {
    "ddd": {
      "current": "FMA3__AVX2",
      "available": "FMA3__AVX2 baseline(SSE SSE2 SSE3)"
    },
    "FFF": {
      "current": "FMA3__AVX2",
      "available": "FMA3__AVX2 baseline(SSE SSE2 SSE3)"
    }
  }
}

# numpy.lib.mixins

Mixin classes for custom array types that don’t inherit from ndarray.

#### Classes

[`NDArrayOperatorsMixin`](numpy.lib.mixins.ndarrayoperatorsmixin#numpy.lib.mixins.NDArrayOperatorsMixin "numpy.lib.mixins.NDArrayOperatorsMixin")() | Mixin defining all operator special methods using __array_ufunc__. ---|---

# numpy.lib.mixins.NDArrayOperatorsMixin

_class_ numpy.lib.mixins.NDArrayOperatorsMixin[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/mixins.py#L61-L182)

Mixin defining all operator special methods using __array_ufunc__.

This class implements the special methods for almost all of Python’s builtin operators defined in the [`operator`](https://docs.python.org/3/library/operator.html#module-operator "\(in Python v3.13\)") module, including comparisons (`==`, `>`, etc.) and arithmetic (`+`, `*`, `-`, etc.), by deferring to the `__array_ufunc__` method, which subclasses must implement. It is useful for writing classes that do not inherit from [`numpy.ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), but that should support arithmetic and numpy universal functions like arrays as described in [A Mechanism for Overriding Ufuncs](https://numpy.org/neps/nep-0013-ufunc-overrides.html).
As a trivial example, consider this implementation of an `ArrayLike` class that simply wraps a NumPy array and ensures that the result of any arithmetic operation is also an `ArrayLike` object:

>>> import numbers
>>> class ArrayLike(np.lib.mixins.NDArrayOperatorsMixin):
...     def __init__(self, value):
...         self.value = np.asarray(value)
...
...     # One might also consider adding the built-in list type to this
...     # list, to support operations like np.add(array_like, list)
...     _HANDLED_TYPES = (np.ndarray, numbers.Number)
...
...     def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
...         out = kwargs.get('out', ())
...         for x in inputs + out:
...             # Only support operations with instances of
...             # _HANDLED_TYPES. Use ArrayLike instead of type(self)
...             # for isinstance to allow subclasses that don't
...             # override __array_ufunc__ to handle ArrayLike objects.
...             if not isinstance(
...                 x, self._HANDLED_TYPES + (ArrayLike,)
...             ):
...                 return NotImplemented
...
...         # Defer to the implementation of the ufunc
...         # on unwrapped values.
...         inputs = tuple(x.value if isinstance(x, ArrayLike) else x
...                        for x in inputs)
...         if out:
...             kwargs['out'] = tuple(
...                 x.value if isinstance(x, ArrayLike) else x
...                 for x in out)
...         result = getattr(ufunc, method)(*inputs, **kwargs)
...
...         if type(result) is tuple:
...             # multiple return values
...             return tuple(type(self)(x) for x in result)
...         elif method == 'at':
...             # no return value
...             return None
...         else:
...             # one return value
...             return type(self)(result)
...
...     def __repr__(self):
...         return '%s(%r)' % (type(self).__name__, self.value)

In interactions between `ArrayLike` objects and numbers or numpy arrays, the result is always another `ArrayLike`:

>>> x = ArrayLike([1, 2, 3])
>>> x - 1
ArrayLike(array([0, 1, 2]))
>>> 1 - x
ArrayLike(array([ 0, -1, -2]))
>>> np.arange(3) - x
ArrayLike(array([-1, -1, -1]))
>>> x - np.arange(3)
ArrayLike(array([1, 1, 1]))

Note that unlike `numpy.ndarray`, `ArrayLike` does not allow operations with arbitrary, unrecognized types. This ensures that interactions with ArrayLike preserve a well-defined casting hierarchy.

# numpy.lib.npyio.DataSource.abspath

method

lib.npyio.DataSource.abspath(_path_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_datasource.py#L371-L411)

Return absolute path of file in the DataSource directory.

If `path` is a URL, then `abspath` will return either the location where the file exists locally or the location where it would exist when opened using the [`open`](numpy.lib.npyio.datasource.open#numpy.lib.npyio.DataSource.open "numpy.lib.npyio.DataSource.open") method.

Parameters: **path** str or pathlib.Path Can be a local file or a remote URL.

Returns: **out** str Complete path, including the `DataSource` destination directory.

#### Notes

The functionality is based on [`os.path.abspath`](https://docs.python.org/3/library/os.path.html#os.path.abspath "\(in Python v3.13\)").

# numpy.lib.npyio.DataSource.exists

method

lib.npyio.DataSource.exists(_path_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_datasource.py#L427-L481)

Test if path exists.

Test if `path` exists as (and in this order):

* a local file.
* a remote URL that has been downloaded and stored locally in the `DataSource` directory.
* a remote URL that has not been downloaded, but is valid and accessible.

Parameters: **path** str or pathlib.Path Can be a local file or a remote URL.

Returns: **out** bool True if `path` exists.
#### Notes

When `path` is a URL, `exists` will return True if it’s either stored locally in the `DataSource` directory or is a valid remote URL. `DataSource` does not discriminate between the two; the file is accessible if it exists in either location.

# numpy.lib.npyio.DataSource

_class_ numpy.lib.npyio.DataSource(_destpath ='.'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/npyio.py)

A generic data source file (file, http, ftp, …). DataSources can be local files or remote files/URLs. The files may also be compressed or uncompressed. DataSource hides some of the low-level details of downloading the file, allowing you to simply pass in a valid file path (or URL) and obtain a file object.

Parameters: **destpath** str or None, optional Path to the directory where the source file gets downloaded to for use. If `destpath` is None, a temporary directory will be created. The default path is the current directory.

#### Notes

URLs require a scheme string (`http://`) to be used; without it they will fail:

>>> repos = np.lib.npyio.DataSource()
>>> repos.exists('www.google.com/index.html')
False
>>> repos.exists('http://www.google.com/index.html')
True

Temporary directories are deleted when the DataSource is deleted.

#### Examples

>>> ds = np.lib.npyio.DataSource('/home/guido')
>>> urlname = 'http://www.google.com/'
>>> gfile = ds.open('http://www.google.com/')
>>> ds.abspath(urlname)
'/home/guido/www.google.com/index.html'

>>> ds = np.lib.npyio.DataSource(None) # use with temporary file
>>> ds.open('/home/guido/foobar.txt')
>>> ds.abspath('/home/guido/foobar.txt')
'/tmp/.../home/guido/foobar.txt'

#### Methods

[`abspath`](numpy.lib.npyio.datasource.abspath#numpy.lib.npyio.DataSource.abspath "numpy.lib.npyio.DataSource.abspath")(path) | Return absolute path of file in the DataSource directory. ---|--- [`exists`](numpy.lib.npyio.datasource.exists#numpy.lib.npyio.DataSource.exists "numpy.lib.npyio.DataSource.exists")(path) | Test if path exists.
[`open`](numpy.lib.npyio.datasource.open#numpy.lib.npyio.DataSource.open "numpy.lib.npyio.DataSource.open")(path[, mode, encoding, newline]) | Open and return file-like object.

# numpy.lib.npyio.DataSource.open

method

lib.npyio.DataSource.open(_path_ , _mode ='r'_, _encoding =None_, _newline =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_datasource.py#L483-L529)

Open and return file-like object.

If `path` is a URL, it will be downloaded, stored in the `DataSource` directory and opened from there.

Parameters: **path** str or pathlib.Path Local file path or URL to open. **mode**{‘r’, ‘w’, ‘a’}, optional Mode to open `path`. Mode ‘r’ for reading, ‘w’ for writing, ‘a’ to append. Available modes depend on the type of object specified by `path`. Default is ‘r’. **encoding**{None, str}, optional Open text file with given encoding. The default encoding will be what `open` uses. **newline**{None, str}, optional Newline to use when reading text file.

Returns: **out** file object File object.

# numpy.lib.npyio

IO related functions.

#### Classes

[`DataSource`](numpy.lib.npyio.datasource#numpy.lib.npyio.DataSource "numpy.lib.npyio.DataSource")([destpath]) | A generic data source file (file, http, ftp, ...). ---|--- [`NpzFile`](numpy.lib.npyio.npzfile#numpy.lib.npyio.NpzFile "numpy.lib.npyio.NpzFile")(fid) | A dictionary-like object with lazy-loading of files in the zipped archive provided on construction.

# numpy.lib.npyio.NpzFile.close

method

lib.npyio.NpzFile.close()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_npyio_impl.py#L210-L221)

Close the file.

# numpy.lib.npyio.NpzFile.fid

attribute

lib.npyio.NpzFile.fid _= None_

# numpy.lib.npyio.NpzFile.get

method

lib.npyio.NpzFile.get(_key_ , _default =None_, _/_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_npyio_impl.py#L283-L287)

D.get(k[, d]) returns D[k] if k in D, else d. d defaults to None.
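Since `NpzFile` behaves like a mapping, `get` works as it does for a `dict`. A small sketch (illustrative; it uses an in-memory buffer rather than a file on disk):

```python
import io

import numpy as np

buf = io.BytesIO()
np.savez(buf, x=np.arange(3))
buf.seek(0)

npz = np.load(buf)  # returns an NpzFile for .npz input
assert np.array_equal(npz.get("x"), np.arange(3))
assert npz.get("missing") is None  # default is None, as for dict.get
npz.close()
```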
# numpy.lib.npyio.NpzFile

_class_ numpy.lib.npyio.NpzFile(_fid_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/npyio.py)

A dictionary-like object with lazy-loading of files in the zipped archive provided on construction.

`NpzFile` is used to load files in the NumPy `.npz` data archive format. It assumes that files in the archive have a `.npy` extension; other files are ignored. The arrays and file strings are lazily loaded on either getitem access using `obj['key']` or attribute lookup using `obj.f.key`. A list of all files (without `.npy` extensions) can be obtained with `obj.files` and the ZipFile object itself using `obj.zip`.

Parameters: **fid** file, str, or pathlib.Path The zipped archive to open. This is either a file-like object or a string containing the path to the archive. **own_fid** bool, optional Whether NpzFile should close the file handle. Requires that [`fid`](numpy.lib.npyio.npzfile.fid#numpy.lib.npyio.NpzFile.fid "numpy.lib.npyio.NpzFile.fid") is a file-like object.

#### Examples

>>> import numpy as np
>>> from tempfile import TemporaryFile
>>> outfile = TemporaryFile()
>>> x = np.arange(10)
>>> y = np.sin(x)
>>> np.savez(outfile, x=x, y=y)
>>> _ = outfile.seek(0)

>>> npz = np.load(outfile)
>>> isinstance(npz, np.lib.npyio.NpzFile)
True

>>> npz
NpzFile 'object' with keys: x, y

>>> sorted(npz.files)
['x', 'y']

>>> npz['x'] # getitem access
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

>>> npz.f.x # attribute lookup
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

Attributes: **files** list of str List of all files in the archive with a `.npy` extension. **zip** ZipFile instance The ZipFile object initialized with the zipped archive. **f** BagObj instance An object on which attribute access can be performed as an alternative to getitem access on the `NpzFile` instance itself. **allow_pickle** bool, optional Allow loading pickled data. Default: False **pickle_kwargs** dict, optional Additional keyword arguments to pass on to pickle.load.
These are only useful when loading object arrays saved on Python 2 when using Python 3. **max_header_size** int, optional Maximum allowed size of the header. Large headers may not be safe to load securely and thus require explicitly passing a larger value. See [`ast.literal_eval`](https://docs.python.org/3/library/ast.html#ast.literal_eval "\(in Python v3.13\)") for details. This option is ignored when `allow_pickle` is passed. In that case the file is by definition trusted and the limit is unnecessary.

#### Methods

[`close`](numpy.lib.npyio.npzfile.close#numpy.lib.npyio.NpzFile.close "numpy.lib.npyio.NpzFile.close")() | Close the file. ---|--- [`get`](numpy.lib.npyio.npzfile.get#numpy.lib.npyio.NpzFile.get "numpy.lib.npyio.NpzFile.get")(key[, default]) | D.get(k[, d]) returns D[k] if k in D, else d. [`items`](numpy.lib.npyio.npzfile.items#numpy.lib.npyio.NpzFile.items "numpy.lib.npyio.NpzFile.items")() | D.items() returns a set-like object providing a view on the items [`keys`](numpy.lib.npyio.npzfile.keys#numpy.lib.npyio.NpzFile.keys "numpy.lib.npyio.NpzFile.keys")() | D.keys() returns a set-like object providing a view on the keys [`values`](numpy.lib.npyio.npzfile.values#numpy.lib.npyio.NpzFile.values "numpy.lib.npyio.NpzFile.values")() | D.values() returns a set-like object providing a view on the values

# numpy.lib.npyio.NpzFile.items

method

lib.npyio.NpzFile.items()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_npyio_impl.py#L289-L293)

D.items() returns a set-like object providing a view on the items

# numpy.lib.npyio.NpzFile.keys

method

lib.npyio.NpzFile.keys()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_npyio_impl.py#L295-L299)

D.keys() returns a set-like object providing a view on the keys

# numpy.lib.npyio.NpzFile.values

method

lib.npyio.NpzFile.values()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_npyio_impl.py#L301-L305)

D.values() returns a set-like object providing a view on the values

#
numpy.lib.NumpyVersion

_class_ numpy.lib.NumpyVersion(_vstring_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/__init__.py)

Parse and compare numpy version strings.

NumPy has the following versioning scheme (numbers given are examples; they can be > 9 in principle):

* Released version: ‘1.8.0’, ‘1.8.1’, etc.
* Alpha: ‘1.8.0a1’, ‘1.8.0a2’, etc.
* Beta: ‘1.8.0b1’, ‘1.8.0b2’, etc.
* Release candidates: ‘1.8.0rc1’, ‘1.8.0rc2’, etc.
* Development versions: ‘1.8.0.dev-f1234afa’ (git commit hash appended)
* Development versions after a1: ‘1.8.0a1.dev-f1234afa’, ‘1.8.0b2.dev-f1234afa’, ‘1.8.1rc1.dev-f1234afa’, etc.
* Development versions (no git hash available): ‘1.8.0.dev-Unknown’

Comparisons must be made against a valid version string or another `NumpyVersion` instance. Note that all development versions of the same (pre-)release compare equal.

Parameters: **vstring** str NumPy version string (`np.__version__`).

#### Examples

>>> from numpy.lib import NumpyVersion
>>> if NumpyVersion(np.__version__) < '1.7.0':
...     print('skip')
>>> # skip

>>> NumpyVersion('1.7') # raises ValueError, add ".0"
Traceback (most recent call last):
...
ValueError: Not a valid numpy version string

# numpy.lib.scimath.arccos

lib.scimath.arccos(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_scimath_impl.py#L495-L539)

Compute the inverse cosine of x.

Return the “principal value” (for a description of this, see [`numpy.arccos`](numpy.arccos#numpy.arccos "numpy.arccos")) of the inverse cosine of `x`. For real `x` such that `abs(x) <= 1`, this is a real number in the closed interval \\([0, \pi]\\). Otherwise, the complex principal value is returned.

Parameters: **x** array_like or scalar The value(s) whose arccos is (are) required.

Returns: **out** ndarray or scalar The inverse cosine(s) of the `x` value(s). If `x` was a scalar, so is `out`, otherwise an array object is returned.
See also

[`numpy.arccos`](numpy.arccos#numpy.arccos "numpy.arccos")

#### Notes

For an arccos() that returns `NAN` when real `x` is not in the interval `[-1,1]`, use [`numpy.arccos`](numpy.arccos#numpy.arccos "numpy.arccos").

#### Examples

>>> import numpy as np
>>> np.set_printoptions(precision=4)

>>> np.emath.arccos(1) # a scalar is returned
0.0

>>> np.emath.arccos([1,2])
array([0.-0.j , 0.-1.317j])

# numpy.lib.scimath.arcsin

lib.scimath.arcsin(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_scimath_impl.py#L542-L587)

Compute the inverse sine of x.

Return the “principal value” (for a description of this, see [`numpy.arcsin`](numpy.arcsin#numpy.arcsin "numpy.arcsin")) of the inverse sine of `x`. For real `x` such that `abs(x) <= 1`, this is a real number in the closed interval \\([-\pi/2, \pi/2]\\). Otherwise, the complex principal value is returned.

Parameters: **x** array_like or scalar The value(s) whose arcsin is (are) required.

Returns: **out** ndarray or scalar The inverse sine(s) of the `x` value(s). If `x` was a scalar, so is `out`, otherwise an array object is returned.

See also

[`numpy.arcsin`](numpy.arcsin#numpy.arcsin "numpy.arcsin")

#### Notes

For an arcsin() that returns `NAN` when real `x` is not in the interval `[-1,1]`, use [`numpy.arcsin`](numpy.arcsin#numpy.arcsin "numpy.arcsin").

#### Examples

>>> import numpy as np
>>> np.set_printoptions(precision=4)

>>> np.emath.arcsin(0)
0.0

>>> np.emath.arcsin([0,1])
array([0. , 1.5708])

# numpy.lib.scimath.arctanh

lib.scimath.arctanh(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_scimath_impl.py#L590-L643)

Compute the inverse hyperbolic tangent of `x`.

Return the “principal value” (for a description of this, see [`numpy.arctanh`](numpy.arctanh#numpy.arctanh "numpy.arctanh")) of `arctanh(x)`. For real `x` such that `abs(x) < 1`, this is a real number. If `abs(x) > 1`, or if `x` is complex, the result is complex.
Finally, `x = 1` returns `inf` and `x = -1` returns `-inf`.

Parameters: **x** array_like The value(s) whose arctanh is (are) required.

Returns: **out** ndarray or scalar The inverse hyperbolic tangent(s) of the `x` value(s). If `x` was a scalar so is `out`, otherwise an array is returned.

See also

[`numpy.arctanh`](numpy.arctanh#numpy.arctanh "numpy.arctanh")

#### Notes

For an arctanh() that returns `NAN` when real `x` is not in the interval `(-1,1)`, use [`numpy.arctanh`](numpy.arctanh#numpy.arctanh "numpy.arctanh") (this latter, however, does return +/-inf for `x = +/-1`).

#### Examples

>>> import numpy as np
>>> np.set_printoptions(precision=4)

>>> np.emath.arctanh(0.5)
0.5493061443340549

>>> from numpy.testing import suppress_warnings
>>> with suppress_warnings() as sup:
...     sup.filter(RuntimeWarning)
...     np.emath.arctanh(np.eye(2))
array([[inf, 0.],
       [ 0., inf]])
>>> np.emath.arctanh([1j])
array([0.+0.7854j])

# numpy.lib.scimath

Wrapper functions providing more user-friendly calling of certain math functions whose output data-type is different from the input data-type in certain domains of the input.

For example, for functions like [`log`](numpy.lib.scimath.log#numpy.lib.scimath.log "numpy.lib.scimath.log") with branch cuts, the versions in this module provide the mathematically valid answers in the complex plane:

>>> import math
>>> np.emath.log(-math.exp(1)) == (1+1j*math.pi)
True

Similarly, [`sqrt`](numpy.lib.scimath.sqrt#numpy.lib.scimath.sqrt "numpy.lib.scimath.sqrt"), other base logarithms, [`power`](numpy.lib.scimath.power#numpy.lib.scimath.power "numpy.lib.scimath.power") and trig functions are correctly handled. See their respective docstrings for specific examples.

#### Functions

[`arccos`](numpy.lib.scimath.arccos#numpy.lib.scimath.arccos "numpy.lib.scimath.arccos")(x) | Compute the inverse cosine of x. ---|--- [`arcsin`](numpy.lib.scimath.arcsin#numpy.lib.scimath.arcsin "numpy.lib.scimath.arcsin")(x) | Compute the inverse sine of x.
[`arctanh`](numpy.lib.scimath.arctanh#numpy.lib.scimath.arctanh "numpy.lib.scimath.arctanh")(x) | Compute the inverse hyperbolic tangent of `x`. [`log`](numpy.lib.scimath.log#numpy.lib.scimath.log "numpy.lib.scimath.log")(x) | Compute the natural logarithm of `x`. [`log10`](numpy.lib.scimath.log10#numpy.lib.scimath.log10 "numpy.lib.scimath.log10")(x) | Compute the logarithm base 10 of `x`. [`log2`](numpy.lib.scimath.log2#numpy.lib.scimath.log2 "numpy.lib.scimath.log2")(x) | Compute the logarithm base 2 of `x`. [`logn`](numpy.lib.scimath.logn#numpy.lib.scimath.logn "numpy.lib.scimath.logn")(n, x) | Take log base n of x. [`power`](numpy.lib.scimath.power#numpy.lib.scimath.power "numpy.lib.scimath.power")(x, p) | Return x to the power p, (x**p). [`sqrt`](numpy.lib.scimath.sqrt#numpy.lib.scimath.sqrt "numpy.lib.scimath.sqrt")(x) | Compute the square root of x.

# numpy.lib.scimath.log

lib.scimath.log(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_scimath_impl.py#L242-L289)

Compute the natural logarithm of `x`.

Return the “principal value” (for a description of this, see [`numpy.log`](numpy.log#numpy.log "numpy.log")) of \\(log_e(x)\\). For real `x > 0`, this is a real number (`log(0)` returns `-inf` and `log(np.inf)` returns `inf`). Otherwise, the complex principal value is returned.

Parameters: **x** array_like The value(s) whose log is (are) required.

Returns: **out** ndarray or scalar The log of the `x` value(s). If `x` was a scalar, so is `out`, otherwise an array is returned.

See also

[`numpy.log`](numpy.log#numpy.log "numpy.log")

#### Notes

For a log() that returns `NAN` when real `x < 0`, use [`numpy.log`](numpy.log#numpy.log "numpy.log") (note, however, that otherwise [`numpy.log`](numpy.log#numpy.log "numpy.log") and this [`log`](numpy.log#numpy.log "numpy.log") are identical, i.e., both return `-inf` for `x = 0`, `inf` for `x = inf`, and, notably, the complex principal value if `x.imag != 0`).
#### Examples >>> import numpy as np >>> np.emath.log(np.exp(1)) 1.0 Negative arguments are handled “correctly” (recall that `exp(log(x)) == x` does _not_ hold for real `x < 0`): >>> np.emath.log(-np.exp(1)) == (1 + np.pi * 1j) True # numpy.lib.scimath.log10 lib.scimath.log10(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_scimath_impl.py#L292-L341) Compute the logarithm base 10 of `x`. Return the “principal value” (for a description of this, see [`numpy.log10`](numpy.log10#numpy.log10 "numpy.log10")) of \\(log_{10}(x)\\). For real `x > 0`, this is a real number (`log10(0)` returns `-inf` and `log10(np.inf)` returns `inf`). Otherwise, the complex principal value is returned. Parameters: **x** array_like or scalar The value(s) whose log base 10 is (are) required. Returns: **out** ndarray or scalar The log base 10 of the `x` value(s). If `x` was a scalar, so is `out`, otherwise an array object is returned. See also [`numpy.log10`](numpy.log10#numpy.log10 "numpy.log10") #### Notes For a log10() that returns `NAN` when real `x < 0`, use [`numpy.log10`](numpy.log10#numpy.log10 "numpy.log10") (note, however, that otherwise [`numpy.log10`](numpy.log10#numpy.log10 "numpy.log10") and this [`log10`](numpy.log10#numpy.log10 "numpy.log10") are identical, i.e., both return `-inf` for `x = 0`, `inf` for `x = inf`, and, notably, the complex principal value if `x.imag != 0`). #### Examples >>> import numpy as np (We set the printing precision so the example can be auto-tested) >>> np.set_printoptions(precision=4) >>> np.emath.log10(10**1) 1.0 >>> np.emath.log10([-10**1, -10**2, 10**2]) array([1.+1.3644j, 2.+1.3644j, 2.+0.j ]) # numpy.lib.scimath.log2 lib.scimath.log2(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_scimath_impl.py#L386-L433) Compute the logarithm base 2 of `x`. Return the “principal value” (for a description of this, see [`numpy.log2`](numpy.log2#numpy.log2 "numpy.log2")) of \\(log_2(x)\\). 
For real `x > 0`, this is a real number (`log2(0)` returns `-inf` and `log2(np.inf)` returns `inf`). Otherwise, the complex principal value is returned. Parameters: **x** array_like The value(s) whose log base 2 is (are) required. Returns: **out** ndarray or scalar The log base 2 of the `x` value(s). If `x` was a scalar, so is `out`, otherwise an array is returned. See also [`numpy.log2`](numpy.log2#numpy.log2 "numpy.log2") #### Notes For a log2() that returns `NAN` when real `x < 0`, use [`numpy.log2`](numpy.log2#numpy.log2 "numpy.log2") (note, however, that otherwise [`numpy.log2`](numpy.log2#numpy.log2 "numpy.log2") and this [`log2`](numpy.log2#numpy.log2 "numpy.log2") are identical, i.e., both return `-inf` for `x = 0`, `inf` for `x = inf`, and, notably, the complex principal value if `x.imag != 0`). #### Examples We set the printing precision so the example can be auto-tested: >>> np.set_printoptions(precision=4) >>> np.emath.log2(8) 3.0 >>> np.emath.log2([-4, -8, 8]) array([2.+4.5324j, 3.+4.5324j, 3.+0.j ]) # numpy.lib.scimath.logn lib.scimath.logn(_n_ , _x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_scimath_impl.py#L348-L383) Take log base n of x. If `x` contains negative inputs, the answer is computed and returned in the complex domain. Parameters: **n** array_like The integer base(s) in which the log is taken. **x** array_like The value(s) whose log base `n` is (are) required. Returns: **out** ndarray or scalar The log base `n` of the `x` value(s). If `x` was a scalar, so is `out`, otherwise an array is returned. #### Examples >>> import numpy as np >>> np.set_printoptions(precision=4) >>> np.emath.logn(2, [4, 8]) array([2., 3.]) >>> np.emath.logn(2, [-4, -8, 8]) array([2.+4.5324j, 3.+4.5324j, 3.+0.j ]) # numpy.lib.scimath.power lib.scimath.power(_x_ , _p_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_scimath_impl.py#L440-L492) Return x to the power p, (x**p). 
If `x` contains negative values, the output is converted to the complex domain. Parameters: **x** array_like The input value(s). **p** array_like of ints The power(s) to which `x` is raised. If `x` contains multiple values, `p` has to either be a scalar, or contain the same number of values as `x`. In the latter case, the result is `x[0]**p[0], x[1]**p[1], ...`. Returns: **out** ndarray or scalar The result of `x**p`. If `x` and `p` are scalars, so is `out`, otherwise an array is returned. See also [`numpy.power`](numpy.power#numpy.power "numpy.power") #### Examples >>> import numpy as np >>> np.set_printoptions(precision=4) >>> np.emath.power(2, 2) 4 >>> np.emath.power([2, 4], 2) array([ 4, 16]) >>> np.emath.power([2, 4], -2) array([0.25 , 0.0625]) >>> np.emath.power([-2, 4], 2) array([ 4.-0.j, 16.+0.j]) >>> np.emath.power([2, 4], [2, 4]) array([ 4, 256]) # numpy.lib.scimath.sqrt lib.scimath.sqrt(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_scimath_impl.py#L186-L239) Compute the square root of x. For negative input elements, a complex value is returned (unlike [`numpy.sqrt`](numpy.sqrt#numpy.sqrt "numpy.sqrt") which returns NaN). Parameters: **x** array_like The input value(s). Returns: **out** ndarray or scalar The square root of `x`. If `x` was a scalar, so is `out`, otherwise an array is returned. See also [`numpy.sqrt`](numpy.sqrt#numpy.sqrt "numpy.sqrt") #### Examples For real, non-negative inputs this works just like [`numpy.sqrt`](numpy.sqrt#numpy.sqrt "numpy.sqrt"): >>> import numpy as np >>> np.emath.sqrt(1) 1.0 >>> np.emath.sqrt([1, 4]) array([1., 2.]) But it automatically handles negative inputs: >>> np.emath.sqrt(-1) 1j >>> np.emath.sqrt([-1,4]) array([0.+1.j, 2.+0.j]) Different results are expected because floating-point 0.0 and -0.0 are distinct. 
For more control, explicitly use complex() as follows: >>> np.emath.sqrt(complex(-4.0, 0.0)) 2j >>> np.emath.sqrt(complex(-4.0, -0.0)) -2j # numpy.lib.stride_tricks.as_strided lib.stride_tricks.as_strided(_x_ , _shape =None_, _strides =None_, _subok =False_, _writeable =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_stride_tricks_impl.py#L37-L111) Create a view into the array with the given shape and strides. Warning This function has to be used with extreme care, see Notes. Parameters: **x** ndarray Array to create a new view into. **shape** sequence of int, optional The shape of the new array. Defaults to `x.shape`. **strides** sequence of int, optional The strides of the new array. Defaults to `x.strides`. **subok** bool, optional If True, subclasses are preserved. **writeable** bool, optional If set to False, the returned array will always be readonly. Otherwise it will be writable if the original array was. It is advisable to set this to False if possible (see Notes). Returns: **view** ndarray See also [`broadcast_to`](numpy.broadcast_to#numpy.broadcast_to "numpy.broadcast_to") broadcast an array to a given shape. [`reshape`](numpy.reshape#numpy.reshape "numpy.reshape") reshape an array. [`lib.stride_tricks.sliding_window_view`](numpy.lib.stride_tricks.sliding_window_view#numpy.lib.stride_tricks.sliding_window_view "numpy.lib.stride_tricks.sliding_window_view") user-friendly and safe function for the creation of sliding window views. #### Notes `as_strided` creates a view into the array given the exact strides and shape. This means it manipulates the internal data structure of ndarray and, if done incorrectly, the array elements can point to invalid memory and can corrupt results or crash your program. It is advisable to always use the original `x.strides` when calculating new strides to avoid reliance on a contiguous memory layout. 
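A minimal sketch of that advice: build an overlapping windowed view of a 1-D array, deriving every stride from `x.strides` rather than from `x.itemsize`, and mark the view read-only (for real code, prefer `sliding_window_view`):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

x = np.arange(6)

# Derive the new strides from x.strides rather than assuming a
# contiguous layout, so the view stays valid for strided inputs.
window = 3
n_windows = x.shape[0] - window + 1
v = as_strided(
    x,
    shape=(n_windows, window),
    strides=(x.strides[0], x.strides[0]),
    writeable=False,  # windows overlap; forbid accidental writes
)
print(v)
# [[0 1 2]
#  [1 2 3]
#  [2 3 4]
#  [3 4 5]]
```

Because `writeable=False`, an accidental in-place write raises a `ValueError` instead of silently mutating several overlapping windows at once.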
Furthermore, arrays created with this function often contain self-overlapping memory, so that two elements refer to the same memory location. Vectorized write operations on such arrays will typically be unpredictable. They may even give different results for small, large, or transposed arrays. Since writing to these arrays has to be tested and done with great care, you may want to use `writeable=False` to avoid accidental write operations. For these reasons it is advisable to avoid `as_strided` when possible. # numpy.lib.stride_tricks Utilities that manipulate strides to achieve desirable effects. An explanation of strides can be found in [The N-dimensional array (ndarray)](../arrays.ndarray#arrays-ndarray). #### Functions [`as_strided`](numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided")(x[, shape, strides, subok, writeable]) | Create a view into the array with the given shape and strides. ---|--- [`sliding_window_view`](numpy.lib.stride_tricks.sliding_window_view#numpy.lib.stride_tricks.sliding_window_view "numpy.lib.stride_tricks.sliding_window_view")(x, window_shape[, axis, ...]) | Create a sliding window view into the array with the given window shape. # numpy.lib.stride_tricks.sliding_window_view lib.stride_tricks.sliding_window_view(_x_ , _window_shape_ , _axis =None_, _*_ , _subok =False_, _writeable =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_stride_tricks_impl.py#L119-L337) Create a sliding window view into the array with the given window shape. Also known as rolling or moving window, the window slides across all dimensions of the array and extracts subsets of the array at all window positions. New in version 1.20.0. Parameters: **x** array_like Array to create the sliding window view from. **window_shape** int or tuple of int Size of window over each axis that takes part in the sliding window. If `axis` is not present, must have same length as the number of input array dimensions. 
Single integers `i` are treated as if they were the tuple `(i,)`. **axis** int or tuple of int, optional Axis or axes along which the sliding window is applied. By default, the sliding window is applied to all axes and `window_shape[i]` will refer to axis `i` of `x`. If `axis` is given as a `tuple of int`, `window_shape[i]` will refer to the axis `axis[i]` of `x`. Single integers `i` are treated as if they were the tuple `(i,)`. **subok** bool, optional If True, sub-classes will be passed-through, otherwise the returned array will be forced to be a base-class array (default). **writeable** bool, optional When true, allow writing to the returned view. The default is false, as this should be used with caution: the returned view contains the same memory location multiple times, so writing to one location will cause others to change. Returns: **view** ndarray Sliding window view of the array. The sliding window dimensions are inserted at the end, and the original dimensions are trimmed as required by the size of the sliding window. That is, `view.shape = x_shape_trimmed + window_shape`, where `x_shape_trimmed` is `x.shape` with every entry reduced by one less than the corresponding window size. See also [`lib.stride_tricks.as_strided`](numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided") A lower-level and less safe routine for creating arbitrary views from custom shape and strides. [`broadcast_to`](numpy.broadcast_to#numpy.broadcast_to "numpy.broadcast_to") broadcast an array to a given shape. #### Notes For many applications using a sliding window view can be convenient, but potentially very slow. 
Often specialized solutions exist, for example: * [`scipy.signal.fftconvolve`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.fftconvolve.html#scipy.signal.fftconvolve "\(in SciPy v1.14.1\)") * filtering functions in [`scipy.ndimage`](https://docs.scipy.org/doc/scipy/reference/ndimage.html#module-scipy.ndimage "\(in SciPy v1.14.1\)") * moving window functions provided by [bottleneck](https://github.com/pydata/bottleneck). As a rough estimate, a sliding window approach with an input size of `N` and a window size of `W` will scale as `O(N*W)` where frequently a special algorithm can achieve `O(N)`. That means that the sliding window variant for a window size of 100 can be 100 times slower than a more specialized version. Nevertheless, for small window sizes, when no custom algorithm exists, or as a prototyping and developing tool, this function can be a good solution. #### Examples >>> import numpy as np >>> from numpy.lib.stride_tricks import sliding_window_view >>> x = np.arange(6) >>> x.shape (6,) >>> v = sliding_window_view(x, 3) >>> v.shape (4, 3) >>> v array([[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5]]) This also works in more dimensions, e.g. >>> i, j = np.ogrid[:3, :4] >>> x = 10*i + j >>> x.shape (3, 4) >>> x array([[ 0, 1, 2, 3], [10, 11, 12, 13], [20, 21, 22, 23]]) >>> shape = (2,2) >>> v = sliding_window_view(x, shape) >>> v.shape (2, 3, 2, 2) >>> v array([[[[ 0, 1], [10, 11]], [[ 1, 2], [11, 12]], [[ 2, 3], [12, 13]]], [[[10, 11], [20, 21]], [[11, 12], [21, 22]], [[12, 13], [22, 23]]]]) The axis can be specified explicitly: >>> v = sliding_window_view(x, 3, 0) >>> v.shape (1, 4, 3) >>> v array([[[ 0, 10, 20], [ 1, 11, 21], [ 2, 12, 22], [ 3, 13, 23]]]) The same axis can be used several times. 
In that case, every use reduces the corresponding original dimension: >>> v = sliding_window_view(x, (2, 3), (1, 1)) >>> v.shape (3, 1, 2, 3) >>> v array([[[[ 0, 1, 2], [ 1, 2, 3]]], [[[10, 11, 12], [11, 12, 13]]], [[[20, 21, 22], [21, 22, 23]]]]) Combining with stepped slicing (`::step`), this can be used to take sliding views which skip elements: >>> x = np.arange(7) >>> sliding_window_view(x, 5)[:, ::2] array([[0, 2, 4], [1, 3, 5], [2, 4, 6]]) or views which move by multiple elements >>> x = np.arange(7) >>> sliding_window_view(x, 3)[::2, :] array([[0, 1, 2], [2, 3, 4], [4, 5, 6]]) A common application of `sliding_window_view` is the calculation of running statistics. The simplest example is the [moving average](https://en.wikipedia.org/wiki/Moving_average): >>> x = np.arange(6) >>> x.shape (6,) >>> v = sliding_window_view(x, 3) >>> v.shape (4, 3) >>> v array([[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5]]) >>> moving_average = v.mean(axis=-1) >>> moving_average array([1., 2., 3., 4.]) Note that a sliding window approach is often **not** optimal (see Notes). # numpy.lib.user_array.container _class_ numpy.lib.user_array.container(_data_ , _dtype =None_, _copy =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/user_array.py) Standard container-class for easy multiple-inheritance. #### Methods **copy** | ---|--- **tostring** | **byteswap** | **astype** | # numpy.linalg.cholesky linalg.cholesky(_a_ , _/_ , _*_ , _upper =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L740-L840) Cholesky decomposition. Return the lower or upper Cholesky decomposition, `L * L.H` or `U.H * U`, of the square matrix `a`, where `L` is lower-triangular, `U` is upper-triangular, and `.H` is the conjugate transpose operator (which is the ordinary transpose if `a` is real-valued). `a` must be Hermitian (symmetric if real-valued) and positive-definite. No checking is performed to verify whether `a` is Hermitian or not. 
In addition, only the lower or upper-triangular and diagonal elements of `a` are used. Only `L` or `U` is actually returned. Parameters: **a**(…, M, M) array_like Hermitian (symmetric if all elements are real), positive-definite input matrix. **upper** bool If `True`, the result must be the upper-triangular Cholesky factor. If `False`, the result must be the lower-triangular Cholesky factor. Default: `False`. Returns: **L**(…, M, M) array_like Lower or upper-triangular Cholesky factor of `a`. Returns a matrix object if `a` is a matrix object. Raises: LinAlgError If the decomposition fails, for example, if `a` is not positive-definite. See also [`scipy.linalg.cholesky`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.cholesky.html#scipy.linalg.cholesky "\(in SciPy v1.14.1\)") Similar function in SciPy. [`scipy.linalg.cholesky_banded`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.cholesky_banded.html#scipy.linalg.cholesky_banded "\(in SciPy v1.14.1\)") Cholesky decompose a banded Hermitian positive-definite matrix. [`scipy.linalg.cho_factor`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.cho_factor.html#scipy.linalg.cho_factor "\(in SciPy v1.14.1\)") Cholesky decomposition of a matrix, to use in [`scipy.linalg.cho_solve`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.cho_solve.html#scipy.linalg.cho_solve "\(in SciPy v1.14.1\)"). #### Notes Broadcasting rules apply, see the [`numpy.linalg`](../routines.linalg#module- numpy.linalg "numpy.linalg") documentation for details. The Cholesky decomposition is often used as a fast way of solving \\[A \mathbf{x} = \mathbf{b}\\] (when `A` is both Hermitian/symmetric and positive-definite). 
First, we solve for \\(\mathbf{y}\\) in \\[L \mathbf{y} = \mathbf{b},\\] and then for \\(\mathbf{x}\\) in \\[L^{H} \mathbf{x} = \mathbf{y}.\\] #### Examples >>> import numpy as np >>> A = np.array([[1,-2j],[2j,5]]) >>> A array([[ 1.+0.j, -0.-2.j], [ 0.+2.j, 5.+0.j]]) >>> L = np.linalg.cholesky(A) >>> L array([[1.+0.j, 0.+0.j], [0.+2.j, 1.+0.j]]) >>> np.dot(L, L.T.conj()) # verify that L * L.H = A array([[1.+0.j, 0.-2.j], [0.+2.j, 5.+0.j]]) >>> A = [[1,-2j],[2j,5]] # what happens if A is only array_like? >>> np.linalg.cholesky(A) # an ndarray object is returned array([[1.+0.j, 0.+0.j], [0.+2.j, 1.+0.j]]) >>> # But a matrix object is returned if A is a matrix object >>> np.linalg.cholesky(np.matrix(A)) matrix([[ 1.+0.j, 0.+0.j], [ 0.+2.j, 1.+0.j]]) >>> # The upper-triangular Cholesky factor can also be obtained. >>> np.linalg.cholesky(A, upper=True) array([[1.-0.j, 0.-2.j], [0.-0.j, 1.-0.j]]) # numpy.linalg.cond linalg.cond(_x_ , _p =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L1885-L2003) Compute the condition number of a matrix. This function is capable of returning the condition number using one of seven different norms, depending on the value of `p` (see Parameters below). Parameters: **x**(…, M, N) array_like The matrix whose condition number is sought. **p**{None, 1, -1, 2, -2, inf, -inf, ‘fro’}, optional Order of the norm used in the condition number computation: p | norm for matrices ---|--- None | 2-norm, computed directly using the `SVD` ‘fro’ | Frobenius norm inf | max(sum(abs(x), axis=1)) -inf | min(sum(abs(x), axis=1)) 1 | max(sum(abs(x), axis=0)) -1 | min(sum(abs(x), axis=0)) 2 | 2-norm (largest sing. value) -2 | smallest singular value inf means the [`numpy.inf`](../constants#numpy.inf "numpy.inf") object, and the Frobenius norm is the root-of-sum-of-squares norm. Returns: **c**{float, inf} The condition number of the matrix. May be infinite. 
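The norm-based orders in the table above can be reproduced by hand from `norm` and `inv`; a quick sketch for `p = 1`, using the same matrix that appears in the Examples below:

```python
import numpy as np
from numpy import linalg as LA

a = np.array([[1., 0., -1.],
              [0., 1., 0.],
              [1., 0., 1.]])

# By definition, cond(a, 1) == norm(a, 1) * norm(inv(a), 1).
c = LA.cond(a, 1)
manual = LA.norm(a, 1) * LA.norm(LA.inv(a), 1)
assert np.isclose(c, manual)  # both equal 2.0 for this matrix
```

The same identity holds for the other finite orders in the table; only `p=None` differs, being computed directly from the singular values rather than from an explicit inverse.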
See also [`numpy.linalg.norm`](numpy.linalg.norm#numpy.linalg.norm "numpy.linalg.norm") #### Notes The condition number of `x` is defined as the norm of `x` times the norm of the inverse of `x` [1]; the norm can be the usual L2-norm (root-of-sum-of- squares) or one of a number of other matrix norms. #### References [1] G. Strang, _Linear Algebra and Its Applications_ , Orlando, FL, Academic Press, Inc., 1980, pg. 285. #### Examples >>> import numpy as np >>> from numpy import linalg as LA >>> a = np.array([[1, 0, -1], [0, 1, 0], [1, 0, 1]]) >>> a array([[ 1, 0, -1], [ 0, 1, 0], [ 1, 0, 1]]) >>> LA.cond(a) 1.4142135623730951 >>> LA.cond(a, 'fro') 3.1622776601683795 >>> LA.cond(a, np.inf) 2.0 >>> LA.cond(a, -np.inf) 1.0 >>> LA.cond(a, 1) 2.0 >>> LA.cond(a, -1) 1.0 >>> LA.cond(a, 2) 1.4142135623730951 >>> LA.cond(a, -2) 0.70710678118654746 # may vary >>> (min(LA.svd(a, compute_uv=False)) * ... min(LA.svd(LA.inv(a), compute_uv=False))) 0.70710678118654746 # may vary # numpy.linalg.cross linalg.cross(_x1_ , _x2_ , _/_ , _*_ , _axis =-1_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L3223-L3293) Returns the cross product of 3-element vectors. If `x1` and/or `x2` are multi-dimensional arrays, then the cross-product of each pair of corresponding 3-element vectors is independently computed. This function is Array API compatible, contrary to [`numpy.cross`](numpy.cross#numpy.cross "numpy.cross"). Parameters: **x1** array_like The first input array. **x2** array_like The second input array. Must be compatible with `x1` for all non-compute axes. The size of the axis over which to compute the cross-product must be the same size as the respective axis in `x1`. **axis** int, optional The axis (dimension) of `x1` and `x2` containing the vectors for which to compute the cross-product. Default: `-1`. Returns: **out** ndarray An array containing the cross products. 
See also [`numpy.cross`](numpy.cross#numpy.cross "numpy.cross") #### Examples Vector cross-product. >>> x = np.array([1, 2, 3]) >>> y = np.array([4, 5, 6]) >>> np.linalg.cross(x, y) array([-3, 6, -3]) Multiple vector cross-products. Note that the direction of the cross product vector is defined by the _right-hand rule_. >>> x = np.array([[1,2,3], [4,5,6]]) >>> y = np.array([[4,5,6], [1,2,3]]) >>> np.linalg.cross(x, y) array([[-3, 6, -3], [ 3, -6, 3]]) >>> x = np.array([[1, 2], [3, 4], [5, 6]]) >>> y = np.array([[4, 5], [6, 1], [2, 3]]) >>> np.linalg.cross(x, y, axis=0) array([[-24, 6], [ 18, 24], [-6, -18]]) # numpy.linalg.det linalg.det(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L2331-L2385) Compute the determinant of an array. Parameters: **a**(…, M, M) array_like Input array to compute determinants for. Returns: **det**(…) array_like Determinant of `a`. See also [`slogdet`](numpy.linalg.slogdet#numpy.linalg.slogdet "numpy.linalg.slogdet") Another way to represent the determinant, more suitable for large matrices where underflow/overflow may occur. [`scipy.linalg.det`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.det.html#scipy.linalg.det "\(in SciPy v1.14.1\)") Similar function in SciPy. #### Notes Broadcasting rules apply, see the [`numpy.linalg`](../routines.linalg#module- numpy.linalg "numpy.linalg") documentation for details. The determinant is computed via LU factorization using the LAPACK routine `z/dgetrf`. 
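For matrices whose determinant would under- or overflow in double precision, the `slogdet` routine mentioned above returns the sign and the natural log of the magnitude separately; a small sketch:

```python
import numpy as np

# The determinant of 0.1 * I_400 is 1e-400, which underflows to 0.0
# in double precision, but slogdet recovers sign and log-magnitude.
a = np.eye(400) * 0.1
sign, logabsdet = np.linalg.slogdet(a)
print(np.linalg.det(a))   # 0.0 due to underflow
print(sign, logabsdet)    # 1.0 and 400 * log(0.1) ~ -921.03
```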
#### Examples The determinant of a 2-D array [[a, b], [c, d]] is ad - bc: >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> np.linalg.det(a) -2.0 # may vary Computing determinants for a stack of matrices: >>> a = np.array([ [[1, 2], [3, 4]], [[1, 2], [2, 1]], [[1, 3], [3, 1]] ]) >>> a.shape (3, 2, 2) >>> np.linalg.det(a) array([-2., -3., -8.]) # numpy.linalg.diagonal linalg.diagonal(_x_ , _/_ , _*_ , _offset =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L3042-L3129) Returns specified diagonals of a matrix (or a stack of matrices) `x`. This function is Array API compatible, contrary to [`numpy.diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal"), the matrix is assumed to be defined by the last two dimensions. Parameters: **x**(…,M,N) array_like Input array having shape (…, M, N) and whose innermost two dimensions form MxN matrices. **offset** int, optional Offset specifying the off-diagonal relative to the main diagonal, where: * offset = 0: the main diagonal. * offset > 0: off-diagonal above the main diagonal. * offset < 0: off-diagonal below the main diagonal. Returns: **out**(…,min(N,M)) ndarray An array containing the diagonals and whose shape is determined by removing the last two dimensions and appending a dimension equal to the size of the resulting diagonals. The returned array must have the same data type as `x`. 
See also [`numpy.diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal") #### Examples >>> a = np.arange(4).reshape(2, 2); a array([[0, 1], [2, 3]]) >>> np.linalg.diagonal(a) array([0, 3]) A 3-D example: >>> a = np.arange(8).reshape(2, 2, 2); a array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) >>> np.linalg.diagonal(a) array([[0, 3], [4, 7]]) Diagonals adjacent to the main diagonal can be obtained by using the `offset` argument: >>> a = np.arange(9).reshape(3, 3) >>> a array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> np.linalg.diagonal(a, offset=1) # First superdiagonal array([1, 5]) >>> np.linalg.diagonal(a, offset=2) # Second superdiagonal array([2]) >>> np.linalg.diagonal(a, offset=-1) # First subdiagonal array([3, 7]) >>> np.linalg.diagonal(a, offset=-2) # Second subdiagonal array([6]) The anti-diagonal can be obtained by reversing the order of elements using either [`numpy.flipud`](numpy.flipud#numpy.flipud "numpy.flipud") or [`numpy.fliplr`](numpy.fliplr#numpy.fliplr "numpy.fliplr"). >>> a = np.arange(9).reshape(3, 3) >>> a array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> np.linalg.diagonal(np.fliplr(a)) # Horizontal flip array([2, 4, 6]) >>> np.linalg.diagonal(np.flipud(a)) # Vertical flip array([6, 4, 2]) Note that the order in which the diagonal is retrieved varies depending on the flip function. # numpy.linalg.eig linalg.eig(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L1331-L1482) Compute the eigenvalues and right eigenvectors of a square array. Parameters: **a**(…, M, M) array Matrices for which the eigenvalues and right eigenvectors will be computed Returns: A namedtuple with the following attributes: **eigenvalues**(…, M) array The eigenvalues, each repeated according to its multiplicity. The eigenvalues are not necessarily ordered. The resulting array will be of complex type, unless the imaginary part is zero in which case it will be cast to a real type. 
When `a` is real the resulting eigenvalues will be real (0 imaginary part) or occur in conjugate pairs **eigenvectors**(…, M, M) array The normalized (unit “length”) eigenvectors, such that the column `eigenvectors[:,i]` is the eigenvector corresponding to the eigenvalue `eigenvalues[i]`. Raises: LinAlgError If the eigenvalue computation does not converge. See also [`eigvals`](numpy.linalg.eigvals#numpy.linalg.eigvals "numpy.linalg.eigvals") eigenvalues of a non-symmetric array. [`eigh`](numpy.linalg.eigh#numpy.linalg.eigh "numpy.linalg.eigh") eigenvalues and eigenvectors of a real symmetric or complex Hermitian (conjugate symmetric) array. [`eigvalsh`](numpy.linalg.eigvalsh#numpy.linalg.eigvalsh "numpy.linalg.eigvalsh") eigenvalues of a real symmetric or complex Hermitian (conjugate symmetric) array. [`scipy.linalg.eig`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eig.html#scipy.linalg.eig "\(in SciPy v1.14.1\)") Similar function in SciPy that also solves the generalized eigenvalue problem. [`scipy.linalg.schur`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.schur.html#scipy.linalg.schur "\(in SciPy v1.14.1\)") Best choice for unitary and other non-Hermitian normal matrices. #### Notes Broadcasting rules apply, see the [`numpy.linalg`](../routines.linalg#module- numpy.linalg "numpy.linalg") documentation for details. This is implemented using the `_geev` LAPACK routines which compute the eigenvalues and eigenvectors of general square arrays. The number `w` is an eigenvalue of `a` if there exists a vector `v` such that `a @ v = w * v`. Thus, the arrays `a`, `eigenvalues`, and `eigenvectors` satisfy the equations `a @ eigenvectors[:,i] = eigenvalues[i] * eigenvectors[:,i]` for \\(i \in \\{0,...,M-1\\}\\). The array `eigenvectors` may not be of maximum rank, that is, some of the columns may be linearly dependent, although round-off error may obscure that fact. 
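The defining relation stated above, `a @ eigenvectors[:, i] == eigenvalues[i] * eigenvectors[:, i]`, is easy to check numerically for every returned pair:

```python
import numpy as np
from numpy import linalg as LA

a = np.array([[1., -1.],
              [1.,  1.]])
eigenvalues, eigenvectors = LA.eig(a)

# Verify a @ v_i == w_i * v_i for each eigenpair (complex-safe).
for i in range(a.shape[0]):
    assert np.allclose(a @ eigenvectors[:, i],
                       eigenvalues[i] * eigenvectors[:, i])
```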
If the eigenvalues are all different, then theoretically the eigenvectors are linearly independent and `a` can be diagonalized by a similarity transformation using `eigenvectors`, i.e., `inv(eigenvectors) @ a @ eigenvectors` is diagonal. For non-Hermitian normal matrices the SciPy function [`scipy.linalg.schur`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.schur.html#scipy.linalg.schur "\(in SciPy v1.14.1\)") is preferred because the matrix `eigenvectors` is guaranteed to be unitary, which is not the case when using `eig`. The Schur factorization produces an upper triangular matrix rather than a diagonal matrix, but for normal matrices only the diagonal of the upper triangular matrix is needed; the rest is roundoff error. Finally, it is emphasized that `eigenvectors` consists of the _right_ (as in right-hand side) eigenvectors of `a`. A vector `y` satisfying `y.T @ a = z * y.T` for some number `z` is called a _left_ eigenvector of `a`, and, in general, the left and right eigenvectors of a matrix are not necessarily the (perhaps conjugate) transposes of each other. #### References G. Strang, _Linear Algebra and Its Applications_ , 2nd Ed., Orlando, FL, Academic Press, Inc., 1980, Various pp. #### Examples >>> import numpy as np >>> from numpy import linalg as LA (Almost) trivial example with real eigenvalues and eigenvectors. >>> eigenvalues, eigenvectors = LA.eig(np.diag((1, 2, 3))) >>> eigenvalues array([1., 2., 3.]) >>> eigenvectors array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) Real matrix possessing complex eigenvalues and eigenvectors; note that the eigenvalues are complex conjugates of each other. >>> eigenvalues, eigenvectors = LA.eig(np.array([[1, -1], [1, 1]])) >>> eigenvalues array([1.+1.j, 1.-1.j]) >>> eigenvectors array([[0.70710678+0.j , 0.70710678-0.j ], [0. -0.70710678j, 0. +0.70710678j]]) Complex-valued matrix with real eigenvalues (but complex-valued eigenvectors); note that `a.conj().T == a`, i.e., `a` is Hermitian. 
>>> a = np.array([[1, 1j], [-1j, 1]]) >>> eigenvalues, eigenvectors = LA.eig(a) >>> eigenvalues array([2.+0.j, 0.+0.j]) >>> eigenvectors array([[ 0. +0.70710678j, 0.70710678+0.j ], # may vary [ 0.70710678+0.j , -0. +0.70710678j]]) Be careful about round-off error! >>> a = np.array([[1 + 1e-9, 0], [0, 1 - 1e-9]]) >>> # Theor. eigenvalues are 1 +/- 1e-9 >>> eigenvalues, eigenvectors = LA.eig(a) >>> eigenvalues array([1., 1.]) >>> eigenvectors array([[1., 0.], [0., 1.]]) # numpy.linalg.eigh linalg.eigh(_a_ , _UPLO ='L'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L1485-L1630) Return the eigenvalues and eigenvectors of a complex Hermitian (conjugate symmetric) or a real symmetric matrix. Returns two objects, a 1-D array containing the eigenvalues of `a`, and a 2-D square array or matrix (depending on the input type) of the corresponding eigenvectors (in columns). Parameters: **a**(…, M, M) array Hermitian or real symmetric matrices whose eigenvalues and eigenvectors are to be computed. **UPLO**{‘L’, ‘U’}, optional Specifies whether the calculation is done with the lower triangular part of `a` (‘L’, default) or the upper triangular part (‘U’). Irrespective of this value only the real parts of the diagonal will be considered in the computation to preserve the notion of a Hermitian matrix. It therefore follows that the imaginary part of the diagonal will always be treated as zero. Returns: A namedtuple with the following attributes: **eigenvalues**(…, M) ndarray The eigenvalues in ascending order, each repeated according to its multiplicity. **eigenvectors**{(…, M, M) ndarray, (…, M, M) matrix} The column `eigenvectors[:, i]` is the normalized eigenvector corresponding to the eigenvalue `eigenvalues[i]`. Will return a matrix object if `a` is a matrix object. Raises: LinAlgError If the eigenvalue computation does not converge. 
See also [`eigvalsh`](numpy.linalg.eigvalsh#numpy.linalg.eigvalsh "numpy.linalg.eigvalsh") eigenvalues of real symmetric or complex Hermitian (conjugate symmetric) arrays. [`eig`](numpy.linalg.eig#numpy.linalg.eig "numpy.linalg.eig") eigenvalues and right eigenvectors for non-symmetric arrays. [`eigvals`](numpy.linalg.eigvals#numpy.linalg.eigvals "numpy.linalg.eigvals") eigenvalues of non-symmetric arrays. [`scipy.linalg.eigh`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eigh.html#scipy.linalg.eigh "\(in SciPy v1.14.1\)") Similar function in SciPy (but also solves the generalized eigenvalue problem). #### Notes Broadcasting rules apply, see the [`numpy.linalg`](../routines.linalg#module- numpy.linalg "numpy.linalg") documentation for details. The eigenvalues/eigenvectors are computed using LAPACK routines `_syevd`, `_heevd`. The eigenvalues of real symmetric or complex Hermitian matrices are always real. [1] The array `eigenvectors` of (column) eigenvectors is unitary and `a`, `eigenvalues`, and `eigenvectors` satisfy the equations `dot(a, eigenvectors[:, i]) = eigenvalues[i] * eigenvectors[:, i]`. #### References [1] G. Strang, _Linear Algebra and Its Applications_ , 2nd Ed., Orlando, FL, Academic Press, Inc., 1980, pg. 222. #### Examples >>> import numpy as np >>> from numpy import linalg as LA >>> a = np.array([[1, -2j], [2j, 5]]) >>> a array([[ 1.+0.j, -0.-2.j], [ 0.+2.j, 5.+0.j]]) >>> eigenvalues, eigenvectors = LA.eigh(a) >>> eigenvalues array([0.17157288, 5.82842712]) >>> eigenvectors array([[-0.92387953+0.j , -0.38268343+0.j ], # may vary [ 0. +0.38268343j, 0. -0.92387953j]]) >>> (np.dot(a, eigenvectors[:, 0]) - ... eigenvalues[0] * eigenvectors[:, 0]) # verify 1st eigenval/vec pair array([5.55111512e-17+0.0000000e+00j, 0.00000000e+00+1.2490009e-16j]) >>> (np.dot(a, eigenvectors[:, 1]) - ... 
eigenvalues[1] * eigenvectors[:, 1]) # verify 2nd eigenval/vec pair array([0.+0.j, 0.+0.j]) >>> A = np.matrix(a) # what happens if input is a matrix object >>> A matrix([[ 1.+0.j, -0.-2.j], [ 0.+2.j, 5.+0.j]]) >>> eigenvalues, eigenvectors = LA.eigh(A) >>> eigenvalues array([0.17157288, 5.82842712]) >>> eigenvectors matrix([[-0.92387953+0.j , -0.38268343+0.j ], # may vary [ 0. +0.38268343j, 0. -0.92387953j]]) >>> # demonstrate the treatment of the imaginary part of the diagonal >>> a = np.array([[5+2j, 9-2j], [0+2j, 2-1j]]) >>> a array([[5.+2.j, 9.-2.j], [0.+2.j, 2.-1.j]]) >>> # with UPLO='L' this is numerically equivalent to using LA.eig() with: >>> b = np.array([[5.+0.j, 0.-2.j], [0.+2.j, 2.-0.j]]) >>> b array([[5.+0.j, 0.-2.j], [0.+2.j, 2.+0.j]]) >>> wa, va = LA.eigh(a) >>> wb, vb = LA.eig(b) >>> wa array([1., 6.]) >>> wb array([6.+0.j, 1.+0.j]) >>> va array([[-0.4472136 +0.j , -0.89442719+0.j ], # may vary [ 0. +0.89442719j, 0. -0.4472136j ]]) >>> vb array([[ 0.89442719+0.j , -0. +0.4472136j], [-0. +0.4472136j, 0.89442719+0.j ]]) # numpy.linalg.eigvals linalg.eigvals(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L1133-L1222) Compute the eigenvalues of a general matrix. Main difference between `eigvals` and [`eig`](numpy.linalg.eig#numpy.linalg.eig "numpy.linalg.eig"): the eigenvectors aren’t returned. Parameters: **a**(…, M, M) array_like A complex- or real-valued matrix whose eigenvalues will be computed. Returns: **w**(…, M,) ndarray The eigenvalues, each repeated according to its multiplicity. They are not necessarily ordered, nor are they necessarily real for real matrices. Raises: LinAlgError If the eigenvalue computation does not converge. 
See also [`eig`](numpy.linalg.eig#numpy.linalg.eig "numpy.linalg.eig") eigenvalues and right eigenvectors of general arrays [`eigvalsh`](numpy.linalg.eigvalsh#numpy.linalg.eigvalsh "numpy.linalg.eigvalsh") eigenvalues of real symmetric or complex Hermitian (conjugate symmetric) arrays. [`eigh`](numpy.linalg.eigh#numpy.linalg.eigh "numpy.linalg.eigh") eigenvalues and eigenvectors of real symmetric or complex Hermitian (conjugate symmetric) arrays. [`scipy.linalg.eigvals`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eigvals.html#scipy.linalg.eigvals "\(in SciPy v1.14.1\)") Similar function in SciPy. #### Notes Broadcasting rules apply, see the [`numpy.linalg`](../routines.linalg#module- numpy.linalg "numpy.linalg") documentation for details. This is implemented using the `_geev` LAPACK routines which compute the eigenvalues and eigenvectors of general square arrays. #### Examples Illustration, using the fact that the eigenvalues of a diagonal matrix are its diagonal elements, that multiplying a matrix on the left by an orthogonal matrix, `Q`, and on the right by `Q.T` (the transpose of `Q`), preserves the eigenvalues of the “middle” matrix. In other words, if `Q` is orthogonal, then `Q * A * Q.T` has the same eigenvalues as `A`: >>> import numpy as np >>> from numpy import linalg as LA >>> x = np.random.random() >>> Q = np.array([[np.cos(x), -np.sin(x)], [np.sin(x), np.cos(x)]]) >>> LA.norm(Q[0, :]), LA.norm(Q[1, :]), np.dot(Q[0, :],Q[1, :]) (1.0, 1.0, 0.0) Now multiply a diagonal matrix by `Q` on one side and by `Q.T` on the other: >>> D = np.diag((-1,1)) >>> LA.eigvals(D) array([-1., 1.]) >>> A = np.dot(Q, D) >>> A = np.dot(A, Q.T) >>> LA.eigvals(A) array([ 1., -1.]) # random # numpy.linalg.eigvalsh linalg.eigvalsh(_a_ , _UPLO ='L'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L1229-L1320) Compute the eigenvalues of a complex Hermitian or real symmetric matrix. 
Main difference from eigh: the eigenvectors are not computed. Parameters: **a**(…, M, M) array_like A complex- or real-valued matrix whose eigenvalues are to be computed. **UPLO**{‘L’, ‘U’}, optional Specifies whether the calculation is done with the lower triangular part of `a` (‘L’, default) or the upper triangular part (‘U’). Irrespective of this value only the real parts of the diagonal will be considered in the computation to preserve the notion of a Hermitian matrix. It therefore follows that the imaginary part of the diagonal will always be treated as zero. Returns: **w**(…, M,) ndarray The eigenvalues in ascending order, each repeated according to its multiplicity. Raises: LinAlgError If the eigenvalue computation does not converge. See also [`eigh`](numpy.linalg.eigh#numpy.linalg.eigh "numpy.linalg.eigh") eigenvalues and eigenvectors of real symmetric or complex Hermitian (conjugate symmetric) arrays. [`eigvals`](numpy.linalg.eigvals#numpy.linalg.eigvals "numpy.linalg.eigvals") eigenvalues of general real or complex arrays. [`eig`](numpy.linalg.eig#numpy.linalg.eig "numpy.linalg.eig") eigenvalues and right eigenvectors of general real or complex arrays. [`scipy.linalg.eigvalsh`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eigvalsh.html#scipy.linalg.eigvalsh "\(in SciPy v1.14.1\)") Similar function in SciPy. #### Notes Broadcasting rules apply, see the [`numpy.linalg`](../routines.linalg#module- numpy.linalg "numpy.linalg") documentation for details. The eigenvalues are computed using LAPACK routines `_syevd`, `_heevd`. 
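Since `eigvalsh` and `eigh` share the same LAPACK drivers, their eigenvalues should agree up to floating-point error and come back sorted in ascending order. A quick sketch (the test matrix is arbitrary):

```python
import numpy as np

a = np.array([[1, -2j],
              [2j, 5]])

w_only = np.linalg.eigvalsh(a)     # eigenvalues only
w_full, v = np.linalg.eigh(a)      # eigenvalues and eigenvectors

# Same drivers, same eigenvalues, same ascending order.
assert np.allclose(w_only, w_full)
assert np.all(np.diff(w_only) >= 0)
```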
#### Examples >>> import numpy as np >>> from numpy import linalg as LA >>> a = np.array([[1, -2j], [2j, 5]]) >>> LA.eigvalsh(a) array([ 0.17157288, 5.82842712]) # may vary >>> # demonstrate the treatment of the imaginary part of the diagonal >>> a = np.array([[5+2j, 9-2j], [0+2j, 2-1j]]) >>> a array([[5.+2.j, 9.-2.j], [0.+2.j, 2.-1.j]]) >>> # with UPLO='L' this is numerically equivalent to using LA.eigvals() >>> # with: >>> b = np.array([[5.+0.j, 0.-2.j], [0.+2.j, 2.-0.j]]) >>> b array([[5.+0.j, 0.-2.j], [0.+2.j, 2.+0.j]]) >>> wa = LA.eigvalsh(a) >>> wb = LA.eigvals(b) >>> wa; wb array([1., 6.]) array([6.+0.j, 1.+0.j]) # numpy.linalg.inv linalg.inv(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L496-L610) Compute the inverse of a matrix. Given a square matrix `a`, return the matrix `ainv` satisfying `a @ ainv = ainv @ a = eye(a.shape[0])`. Parameters: **a**(…, M, M) array_like Matrix to be inverted. Returns: **ainv**(…, M, M) ndarray or matrix Inverse of the matrix `a`. Raises: LinAlgError If `a` is not square or inversion fails. See also [`scipy.linalg.inv`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.inv.html#scipy.linalg.inv "\(in SciPy v1.14.1\)") Similar function in SciPy. [`numpy.linalg.cond`](numpy.linalg.cond#numpy.linalg.cond "numpy.linalg.cond") Compute the condition number of a matrix. [`numpy.linalg.svd`](numpy.linalg.svd#numpy.linalg.svd "numpy.linalg.svd") Compute the singular value decomposition of a matrix. #### Notes Broadcasting rules apply, see the [`numpy.linalg`](../routines.linalg#module- numpy.linalg "numpy.linalg") documentation for details. If `a` is detected to be singular, a [`LinAlgError`](numpy.linalg.linalgerror#numpy.linalg.LinAlgError "numpy.linalg.LinAlgError") is raised. 
If `a` is ill-conditioned, a [`LinAlgError`](numpy.linalg.linalgerror#numpy.linalg.LinAlgError "numpy.linalg.LinAlgError") may or may not be raised, and results may be inaccurate due to floating-point errors. #### References [1] Wikipedia, “Condition number”, #### Examples >>> import numpy as np >>> from numpy.linalg import inv >>> a = np.array([[1., 2.], [3., 4.]]) >>> ainv = inv(a) >>> np.allclose(a @ ainv, np.eye(2)) True >>> np.allclose(ainv @ a, np.eye(2)) True If a is a matrix object, then the return value is a matrix as well: >>> ainv = inv(np.matrix(a)) >>> ainv matrix([[-2. , 1. ], [ 1.5, -0.5]]) Inverses of several matrices can be computed at once: >>> a = np.array([[[1., 2.], [3., 4.]], [[1, 3], [3, 5]]]) >>> inv(a) array([[[-2. , 1. ], [ 1.5 , -0.5 ]], [[-1.25, 0.75], [ 0.75, -0.25]]]) If a matrix is close to singular, the computed inverse may not satisfy `a @ ainv = ainv @ a = eye(a.shape[0])` even if a [`LinAlgError`](numpy.linalg.linalgerror#numpy.linalg.LinAlgError "numpy.linalg.LinAlgError") is not raised: >>> a = np.array([[2,4,6],[2,0,2],[6,8,14]]) >>> inv(a) # No errors raised array([[-1.12589991e+15, -5.62949953e+14, 5.62949953e+14], [-1.12589991e+15, -5.62949953e+14, 5.62949953e+14], [ 1.12589991e+15, 5.62949953e+14, -5.62949953e+14]]) >>> a @ inv(a) array([[ 0. , -0.5 , 0. ], # may vary [-0.5 , 0.625, 0.25 ], [ 0. , 0. , 1. ]]) To detect ill-conditioned matrices, you can use [`numpy.linalg.cond`](numpy.linalg.cond#numpy.linalg.cond "numpy.linalg.cond") to compute its _condition number_ [1]. The larger the condition number, the more ill-conditioned the matrix is. As a rule of thumb, if the condition number `cond(a) = 10**k`, then you may lose up to `k` digits of accuracy on top of what would be lost to the numerical method due to loss of precision from arithmetic methods. 
>>> from numpy.linalg import cond >>> cond(a) np.float64(8.659885634118668e+17) # may vary It is also possible to detect ill-conditioning by inspecting the matrix’s singular values directly. The ratio between the largest and the smallest singular value is the condition number: >>> from numpy.linalg import svd >>> sigma = svd(a, compute_uv=False) # Do not compute singular vectors >>> sigma.max()/sigma.min() 8.659885634118668e+17 # may vary # numpy.linalg.LinAlgError _exception_ linalg.LinAlgError[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/__init__.py) Generic Python-exception-derived object raised by linalg functions. General purpose exception class, derived from Python’s ValueError class, programmatically raised in linalg functions when a Linear Algebra-related condition would prevent further correct execution of the function. Parameters: **None** #### Examples >>> import numpy as np >>> from numpy import linalg as LA >>> LA.inv(np.zeros((2,2))) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "...linalg.py", line 350, in inv return wrap(solve(a, identity(a.shape[0], dtype=a.dtype))) File "...linalg.py", line 249, in solve raise LinAlgError('Singular matrix') numpy.linalg.LinAlgError: Singular matrix # numpy.linalg.lstsq linalg.lstsq(_a_ , _b_ , _rcond =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L2394-L2540) Return the least-squares solution to a linear matrix equation. Computes the vector `x` that approximately solves the equation `a @ x = b`. The equation may be under-, well-, or over-determined (i.e., the number of linearly independent rows of `a` can be less than, equal to, or greater than its number of linearly independent columns). If `a` is square and of full rank, then `x` (but for round-off error) is the “exact” solution of the equation. Else, `x` minimizes the Euclidean 2-norm \\(||b - ax||\\). If there are multiple minimizing solutions, the one with the smallest 2-norm \\(||x||\\) is returned.
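The minimum-norm behaviour described above can be seen with an under-determined system; a small sketch (the system is illustrative only):

```python
import numpy as np

# One equation, two unknowns: x0 + x1 = 2 has infinitely many solutions.
a = np.array([[1.0, 1.0]])
b = np.array([2.0])

x, residuals, rank, s = np.linalg.lstsq(a, b, rcond=None)

# The returned solution solves the system exactly ...
assert np.allclose(a @ x, b)
# ... and among all exact solutions it has the smallest 2-norm:
# here [1, 1] rather than, say, [2, 0].
assert np.allclose(x, [1.0, 1.0])

# Another exact solution with a larger norm, for comparison.
other = np.array([2.0, 0.0])
assert np.allclose(a @ other, b)
assert np.linalg.norm(x) < np.linalg.norm(other)
```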
Parameters: **a**(M, N) array_like “Coefficient” matrix. **b**{(M,), (M, K)} array_like Ordinate or “dependent variable” values. If `b` is two-dimensional, the least- squares solution is calculated for each of the `K` columns of `b`. **rcond** float, optional Cut-off ratio for small singular values of `a`. For the purposes of rank determination, singular values are treated as zero if they are smaller than `rcond` times the largest singular value of `a`. The default uses the machine precision times `max(M, N)`. Passing `-1` will use machine precision. Changed in version 2.0: Previously, the default was `-1`, but a warning was given that this would change. Returns: **x**{(N,), (N, K)} ndarray Least-squares solution. If `b` is two-dimensional, the solutions are in the `K` columns of `x`. **residuals**{(1,), (K,), (0,)} ndarray Sums of squared residuals: Squared Euclidean 2-norm for each column in `b - a @ x`. If the rank of `a` is < N or M <= N, this is an empty array. If `b` is 1-dimensional, this is a (1,) shape array. Otherwise the shape is (K,). **rank** int Rank of matrix `a`. **s**(min(M, N),) ndarray Singular values of `a`. Raises: LinAlgError If computation does not converge. See also [`scipy.linalg.lstsq`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.lstsq.html#scipy.linalg.lstsq "\(in SciPy v1.14.1\)") Similar function in SciPy. #### Notes If `b` is a matrix, then all array results are returned as matrices. #### Examples Fit a line, `y = mx + c`, through some noisy data-points: >>> import numpy as np >>> x = np.array([0, 1, 2, 3]) >>> y = np.array([-1, 0.2, 0.9, 2.1]) By examining the coefficients, we see that the line should have a gradient of roughly 1 and cut the y-axis at, more or less, -1. We can rewrite the line equation as `y = Ap`, where `A = [[x 1]]` and `p = [[m], [c]]`. 
Now use `lstsq` to solve for `p`: >>> A = np.vstack([x, np.ones(len(x))]).T >>> A array([[ 0., 1.], [ 1., 1.], [ 2., 1.], [ 3., 1.]]) >>> m, c = np.linalg.lstsq(A, y)[0] >>> m, c (1.0, -0.95) # may vary Plot the data along with the fitted line: >>> import matplotlib.pyplot as plt >>> _ = plt.plot(x, y, 'o', label='Original data', markersize=10) >>> _ = plt.plot(x, m*x + c, 'r', label='Fitted line') >>> _ = plt.legend() >>> plt.show() # numpy.linalg.matmul linalg.matmul(_x1_ , _x2_ , _/_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L3302-L3383) Computes the matrix product. This function is Array API compatible, contrary to [`numpy.matmul`](numpy.matmul#numpy.matmul "numpy.matmul"). Parameters: **x1** array_like The first input array. **x2** array_like The second input array. Returns: **out** ndarray The matrix product of the inputs. This is a scalar only when both `x1`, `x2` are 1-d vectors. Raises: ValueError If the last dimension of `x1` is not the same size as the second-to-last dimension of `x2`. If a scalar value is passed in. See also [`numpy.matmul`](numpy.matmul#numpy.matmul "numpy.matmul") #### Examples For 2-D arrays it is the matrix product: >>> a = np.array([[1, 0], ... [0, 1]]) >>> b = np.array([[4, 1], ... [2, 2]]) >>> np.linalg.matmul(a, b) array([[4, 1], [2, 2]]) For 2-D mixed with 1-D, the result is the usual matrix-vector product: >>> a = np.array([[1, 0], ... [0, 1]]) >>> b = np.array([1, 2]) >>> np.linalg.matmul(a, b) array([1, 2]) >>> np.linalg.matmul(b, a) array([1, 2]) Broadcasting is conventional for stacks of arrays: >>> a = np.arange(2 * 2 * 4).reshape((2, 2, 4)) >>> b = np.arange(2 * 2 * 4).reshape((2, 4, 2)) >>> np.linalg.matmul(a,b).shape (2, 2, 2) >>> np.linalg.matmul(a, b)[0, 1, 1] 98 >>> sum(a[0, 1, :] * b[0 , :, 1]) 98 Vector, vector returns the scalar inner product, but neither argument is complex-conjugated: >>> np.linalg.matmul([2j, 3j], [2j, 3j]) (-13+0j) Scalar multiplication raises an error.
>>> np.linalg.matmul([1,2], 3) Traceback (most recent call last): ... ValueError: matmul: Input operand 1 does not have enough dimensions ... # numpy.linalg.matrix_norm linalg.matrix_norm(_x_ , _/_ , _*_ , _keepdims =False_, _ord ='fro'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L3418-L3473) Computes the matrix norm of a matrix (or a stack of matrices) `x`. This function is Array API compatible. Parameters: **x** array_like Input array having shape (…, M, N) and whose two innermost dimensions form `MxN` matrices. **keepdims** bool, optional If this is set to True, the axes which are normed over are left in the result as dimensions with size one. Default: False. **ord**{1, -1, 2, -2, inf, -inf, ‘fro’, ‘nuc’}, optional The order of the norm. For details see the table under `Notes` in [`numpy.linalg.norm`](numpy.linalg.norm#numpy.linalg.norm "numpy.linalg.norm"). See also [`numpy.linalg.norm`](numpy.linalg.norm#numpy.linalg.norm "numpy.linalg.norm") Generic norm function #### Examples >>> from numpy import linalg as LA >>> a = np.arange(9) - 4 >>> a array([-4, -3, -2, ..., 2, 3, 4]) >>> b = a.reshape((3, 3)) >>> b array([[-4, -3, -2], [-1, 0, 1], [ 2, 3, 4]]) >>> LA.matrix_norm(b) 7.745966692414834 >>> LA.matrix_norm(b, ord='fro') 7.745966692414834 >>> LA.matrix_norm(b, ord=np.inf) 9.0 >>> LA.matrix_norm(b, ord=-np.inf) 2.0 >>> LA.matrix_norm(b, ord=1) 7.0 >>> LA.matrix_norm(b, ord=-1) 6.0 >>> LA.matrix_norm(b, ord=2) 7.3484692283495345 >>> LA.matrix_norm(b, ord=-2) 1.8570331885190563e-016 # may vary # numpy.linalg.matrix_power linalg.matrix_power(_a_ , _n_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L617-L731) Raise a square matrix to the (integer) power `n`. For positive integers `n`, the power is computed by repeated matrix squarings and matrix multiplications. If `n == 0`, the identity matrix of the same shape as M is returned. If `n < 0`, the inverse is computed and then raised to the `abs(n)`. 
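The three cases described above (positive, zero, and negative exponents) can be sketched as follows; the matrix is arbitrary but invertible:

```python
import numpy as np
from numpy.linalg import matrix_power, inv

a = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Positive powers are repeated matrix products (computed by squaring).
assert np.allclose(matrix_power(a, 3), a @ a @ a)

# n == 0 gives the identity matrix of the same shape.
assert np.allclose(matrix_power(a, 0), np.eye(2))

# Negative powers invert first, then raise the inverse to abs(n).
assert np.allclose(matrix_power(a, -2), inv(a) @ inv(a))
```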
Note Stacks of object matrices are not currently supported. Parameters: **a**(…, M, M) array_like Matrix to be “powered”. **n** int The exponent can be any integer, positive, negative, or zero. Returns: **a**n**(…, M, M) ndarray or matrix object The return value is the same shape and type as `a`; if the exponent is positive or zero then the type of the elements is the same as those of `a`. If the exponent is negative the elements are floating-point. Raises: LinAlgError For matrices that are not square or that (for negative powers) cannot be inverted numerically. #### Examples >>> import numpy as np >>> from numpy.linalg import matrix_power >>> i = np.array([[0, 1], [-1, 0]]) # matrix equiv. of the imaginary unit >>> matrix_power(i, 3) # should = -i array([[ 0, -1], [ 1, 0]]) >>> matrix_power(i, 0) array([[1, 0], [0, 1]]) >>> matrix_power(i, -3) # should = 1/(-i) = i, but w/ f.p. elements array([[ 0., 1.], [-1., 0.]]) A somewhat more sophisticated example: >>> q = np.zeros((4, 4)) >>> q[0:2, 0:2] = -i >>> q[2:4, 2:4] = i >>> q # one of the three quaternion units not equal to 1 array([[ 0., -1., 0., 0.], [ 1., 0., 0., 0.], [ 0., 0., 0., 1.], [ 0., 0., -1., 0.]]) >>> matrix_power(q, 2) # = -np.eye(4) array([[-1., 0., 0., 0.], [ 0., -1., 0., 0.], [ 0., 0., -1., 0.], [ 0., 0., 0., -1.]]) # numpy.linalg.matrix_rank linalg.matrix_rank(_A_ , _tol =None_, _hermitian =False_, _*_ , _rtol =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L2010-L2119) Return matrix rank of array using the SVD method. Rank of the array is the number of singular values of the array that are greater than `tol`. Parameters: **A**{(M,), (…, M, N)} array_like Input vector or stack of matrices. **tol**(…) array_like, float, optional Threshold below which SVD values are considered zero. If `tol` is None, and `S` is an array with singular values for `M`, and `eps` is the epsilon value for datatype of `S`, then `tol` is set to `S.max() * max(M, N) * eps`.
**hermitian** bool, optional If True, `A` is assumed to be Hermitian (symmetric if real-valued), enabling a more efficient method for finding singular values. Defaults to False. **rtol**(…) array_like, float, optional Parameter for the relative tolerance component. Only `tol` or `rtol` can be set at a time. Defaults to `max(M, N) * eps`. New in version 2.0.0. Returns: **rank**(…) array_like Rank of A. #### Notes The default threshold to detect rank deficiency is a test on the magnitude of the singular values of `A`. By default, we identify singular values less than `S.max() * max(M, N) * eps` as indicating rank deficiency (with the symbols defined above). This is the algorithm MATLAB uses [1]. It also appears in _Numerical recipes_ in the discussion of SVD solutions for linear least squares [2]. This default threshold is designed to detect rank deficiency accounting for the numerical errors of the SVD computation. Imagine that there is a column in `A` that is an exact (in floating point) linear combination of other columns in `A`. Computing the SVD on `A` will not produce a singular value exactly equal to 0 in general: any difference of the smallest SVD value from 0 will be caused by numerical imprecision in the calculation of the SVD. Our threshold for small SVD values takes this numerical imprecision into account, and the default threshold will detect such numerical rank deficiency. The threshold may declare a matrix `A` rank deficient even if the linear combination of some columns of `A` is not exactly equal to another column of `A` but only numerically very close to another column of `A`. We chose our default threshold because it is in wide use. Other thresholds are possible. For example, elsewhere in the 2007 edition of _Numerical recipes_ there is an alternative threshold of `S.max() * np.finfo(A.dtype).eps / 2. * np.sqrt(m + n + 1.)`. The authors describe this threshold as being based on “expected roundoff error” (p 71). 
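The effect of the threshold can be sketched directly; both matrices below are illustrative:

```python
import numpy as np

# The identity has all singular values equal to 1.
I = np.eye(4)
assert np.linalg.matrix_rank(I) == 4
# Raising the threshold above 1 makes every singular value "small",
# so the reported rank drops to 0.
assert np.linalg.matrix_rank(I, tol=1.5) == 0

# A column that is an exact floating-point copy of another column is
# detected by the default threshold despite SVD roundoff.
a = np.ones((4, 3))
a[:, 1] = np.arange(4)
a[:, 2] = a[:, 0]            # exact linear dependence
assert np.linalg.matrix_rank(a) == 2
```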
The thresholds above deal with floating point roundoff error in the calculation of the SVD. However, you may have more information about the sources of error in `A` that would make you consider other tolerance values to detect _effective_ rank deficiency. The most useful measure of the tolerance depends on the operations you intend to use on your matrix. For example, if your data come from uncertain measurements with uncertainties greater than floating point epsilon, choosing a tolerance near that uncertainty may be preferable. The tolerance may be absolute if the uncertainties are absolute rather than relative. #### References [1] MATLAB reference documentation, “Rank” [2] W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, “Numerical Recipes (3rd edition)”, Cambridge University Press, 2007, page 795. #### Examples >>> import numpy as np >>> from numpy.linalg import matrix_rank >>> matrix_rank(np.eye(4)) # Full rank matrix 4 >>> I=np.eye(4); I[-1,-1] = 0. # rank deficient matrix >>> matrix_rank(I) 3 >>> matrix_rank(np.ones((4,))) # 1 dimension - rank 1 unless all 0 1 >>> matrix_rank(np.zeros((4,))) 0 # numpy.linalg.matrix_transpose linalg.matrix_transpose(_x_ , _/_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L3405-L3407) Transposes a matrix (or a stack of matrices) `x`. This function is Array API compatible. Parameters: **x** array_like Input array having shape (…, M, N) and whose two innermost dimensions form `MxN` matrices. Returns: **out** ndarray An array containing the transpose for each matrix and having shape (…, N, M). See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose") Generic transpose method. 
#### Examples >>> import numpy as np >>> np.matrix_transpose([[1, 2], [3, 4]]) array([[1, 3], [2, 4]]) >>> np.matrix_transpose([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) array([[[1, 3], [2, 4]], [[5, 7], [6, 8]]]) # numpy.linalg.multi_dot linalg.multi_dot(_arrays_ , _*_ , _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L2841-L2958) Compute the dot product of two or more arrays in a single function call, while automatically selecting the fastest evaluation order. `multi_dot` chains [`numpy.dot`](numpy.dot#numpy.dot "numpy.dot") and uses optimal parenthesization of the matrices [1] [2]. Depending on the shapes of the matrices, this can speed up the multiplication a lot. If the first argument is 1-D it is treated as a row vector. If the last argument is 1-D it is treated as a column vector. The other arguments must be 2-D. Think of `multi_dot` as: def multi_dot(arrays): return functools.reduce(np.dot, arrays) Parameters: **arrays** sequence of array_like If the first argument is 1-D it is treated as row vector. If the last argument is 1-D it is treated as column vector. The other arguments must be 2-D. **out** ndarray, optional Output argument. This must have the exact kind that would be returned if it was not used. In particular, it must have the right type, must be C-contiguous, and its dtype must be the dtype that would be returned for `dot(a, b)`. This is a performance feature. Therefore, if these conditions are not met, an exception is raised, instead of attempting to be flexible. Returns: **output** ndarray Returns the dot product of the supplied arrays. See also [`numpy.dot`](numpy.dot#numpy.dot "numpy.dot") dot multiplication with two arguments. #### Notes The cost for a matrix multiplication can be calculated with the following function: def cost(A, B): return A.shape[0] * A.shape[1] * B.shape[1] Assume we have three matrices \\(A_{10x100}, B_{100x5}, C_{5x50}\\). 
The costs for the two different parenthesizations are as follows: cost((AB)C) = 10*100*5 + 10*5*50 = 5000 + 2500 = 7500 cost(A(BC)) = 10*100*50 + 100*5*50 = 50000 + 25000 = 75000 #### References [1] Cormen, “Introduction to Algorithms”, Chapter 15.2, p. 370-378 [2] #### Examples `multi_dot` allows you to write: >>> import numpy as np >>> from numpy.linalg import multi_dot >>> # Prepare some data >>> A = np.random.random((10000, 100)) >>> B = np.random.random((100, 1000)) >>> C = np.random.random((1000, 5)) >>> D = np.random.random((5, 333)) >>> # the actual dot multiplication >>> _ = multi_dot([A, B, C, D]) instead of: >>> _ = np.dot(np.dot(np.dot(A, B), C), D) >>> # or >>> _ = A.dot(B).dot(C).dot(D) # numpy.linalg.norm linalg.norm(_x_ , _ord =None_, _axis =None_, _keepdims =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L2575-L2831) Matrix or vector norm. This function is able to return one of eight different matrix norms, or one of an infinite number of vector norms (described below), depending on the value of the `ord` parameter. Parameters: **x** array_like Input array. If `axis` is None, `x` must be 1-D or 2-D, unless `ord` is None. If both `axis` and `ord` are None, the 2-norm of `x.ravel` will be returned. **ord**{int, float, inf, -inf, ‘fro’, ‘nuc’}, optional Order of the norm (see table under `Notes` for what values are supported for matrices and vectors respectively). inf means numpy’s [`inf`](../constants#numpy.inf "numpy.inf") object. The default is None. **axis**{None, int, 2-tuple of ints}, optional. If `axis` is an integer, it specifies the axis of `x` along which to compute the vector norms. If `axis` is a 2-tuple, it specifies the axes that hold 2-D matrices, and the matrix norms of these matrices are computed. If `axis` is None then either a vector norm (when `x` is 1-D) or a matrix norm (when `x` is 2-D) is returned. The default is None. 
**keepdims** bool, optional If this is set to True, the axes which are normed over are left in the result as dimensions with size one. With this option the result will broadcast correctly against the original `x`. Returns: **n** float or ndarray Norm of the matrix or vector(s). See also [`scipy.linalg.norm`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.norm.html#scipy.linalg.norm "\(in SciPy v1.14.1\)") Similar function in SciPy. #### Notes For values of `ord < 1`, the result is, strictly speaking, not a mathematical ‘norm’, but it may still be useful for various numerical purposes. The following norms can be calculated:

ord | norm for matrices | norm for vectors
---|---|---
None | Frobenius norm | 2-norm
‘fro’ | Frobenius norm | –
‘nuc’ | nuclear norm | –
inf | max(sum(abs(x), axis=1)) | max(abs(x))
-inf | min(sum(abs(x), axis=1)) | min(abs(x))
0 | – | sum(x != 0)
1 | max(sum(abs(x), axis=0)) | as below
-1 | min(sum(abs(x), axis=0)) | as below
2 | 2-norm (largest sing. value) | as below
-2 | smallest singular value | as below
other | – | sum(abs(x)**ord)**(1./ord)

The Frobenius norm is given by [1]: \\(||A||_F = [\sum_{i,j} abs(a_{i,j})^2]^{1/2}\\) The nuclear norm is the sum of the singular values. Both the Frobenius and nuclear norm orders are only defined for matrices and raise a ValueError when `x.ndim != 2`. #### References [1] G. H. Golub and C. F. Van Loan, _Matrix Computations_ , Baltimore, MD, Johns Hopkins University Press, 1985, pg. 15. #### Examples >>> import numpy as np >>> from numpy import linalg as LA >>> a = np.arange(9) - 4 >>> a array([-4, -3, -2, ..., 2, 3, 4]) >>> b = a.reshape((3, 3)) >>> b array([[-4, -3, -2], [-1, 0, 1], [ 2, 3, 4]]) >>> LA.norm(a) 7.745966692414834 >>> LA.norm(b) 7.745966692414834 >>> LA.norm(b, 'fro') 7.745966692414834 >>> LA.norm(a, np.inf) 4.0 >>> LA.norm(b, np.inf) 9.0 >>> LA.norm(a, -np.inf) 0.0 >>> LA.norm(b, -np.inf) 2.0 >>> LA.norm(a, 1) 20.0 >>> LA.norm(b, 1) 7.0 >>> LA.norm(a, -1) -4.6566128774142013e-010 >>> LA.norm(b, -1) 6.0 >>> LA.norm(a, 2) 7.745966692414834 >>> LA.norm(b, 2) 7.3484692283495345 >>> LA.norm(a, -2) 0.0 >>> LA.norm(b, -2) 1.8570331885190563e-016 # may vary >>> LA.norm(a, 3) 5.8480354764257312 # may vary >>> LA.norm(a, -3) 0.0 Using the `axis` argument to compute vector norms: >>> c = np.array([[ 1, 2, 3], ... [-1, 1, 4]]) >>> LA.norm(c, axis=0) array([ 1.41421356, 2.23606798, 5. ]) >>> LA.norm(c, axis=1) array([ 3.74165739, 4.24264069]) >>> LA.norm(c, ord=1, axis=1) array([ 6., 6.]) Using the `axis` argument to compute matrix norms: >>> m = np.arange(8).reshape(2,2,2) >>> LA.norm(m, axis=(1,2)) array([ 3.74165739, 11.22497216]) >>> LA.norm(m[0, :, :]), LA.norm(m[1, :, :]) (3.7416573867739413, 11.224972160321824) # numpy.linalg.outer linalg.outer(_x1_ , _x2_ , _/_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L850-L918) Compute the outer product of two vectors. This function is Array API compatible. Compared to `np.outer` it accepts 1-dimensional inputs only. Parameters: **x1**(M,) array_like One-dimensional input array of size `M`. Must have a numeric data type. **x2**(N,) array_like One-dimensional input array of size `N`. Must have a numeric data type.
Returns: **out**(M, N) ndarray `out[i, j] = a[i] * b[j]` See also [`outer`](numpy.outer#numpy.outer "numpy.outer") #### Examples Make a (_very_ coarse) grid for computing a Mandelbrot set: >>> rl = np.linalg.outer(np.ones((5,)), np.linspace(-2, 2, 5)) >>> rl array([[-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.]]) >>> im = np.linalg.outer(1j*np.linspace(2, -2, 5), np.ones((5,))) >>> im array([[0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j], [0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j], [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], [0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j], [0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j]]) >>> grid = rl + im >>> grid array([[-2.+2.j, -1.+2.j, 0.+2.j, 1.+2.j, 2.+2.j], [-2.+1.j, -1.+1.j, 0.+1.j, 1.+1.j, 2.+1.j], [-2.+0.j, -1.+0.j, 0.+0.j, 1.+0.j, 2.+0.j], [-2.-1.j, -1.-1.j, 0.-1.j, 1.-1.j, 2.-1.j], [-2.-2.j, -1.-2.j, 0.-2.j, 1.-2.j, 2.-2.j]]) An example using a “vector” of letters: >>> x = np.array(['a', 'b', 'c'], dtype=object) >>> np.linalg.outer(x, [1, 2, 3]) array([['a', 'aa', 'aaa'], ['b', 'bb', 'bbb'], ['c', 'cc', 'ccc']], dtype=object) # numpy.linalg.pinv linalg.pinv(_a_ , _rcond=None_ , _hermitian=False_ , _*_ , _rtol=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L2128-L2240) Compute the (Moore-Penrose) pseudo-inverse of a matrix. Calculate the generalized inverse of a matrix using its singular-value decomposition (SVD) and including all _large_ singular values. Parameters: **a**(…, M, N) array_like Matrix or stack of matrices to be pseudo-inverted. **rcond**(…) array_like of float, optional Cutoff for small singular values. Singular values less than or equal to `rcond * largest_singular_value` are set to zero. Broadcasts against the stack of matrices. Default: `1e-15`. **hermitian** bool, optional If True, `a` is assumed to be Hermitian (symmetric if real-valued), enabling a more efficient method for finding singular values. Defaults to False.
**rtol**(…) array_like of float, optional Same as `rcond`, but it’s an Array API compatible parameter name. Only `rcond` or `rtol` can be set at a time. If none of them are provided then NumPy’s `1e-15` default is used. If `rtol=None` is passed then the API standard default is used. New in version 2.0.0. Returns: **B**(…, N, M) ndarray The pseudo-inverse of `a`. If `a` is a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") instance, then so is `B`. Raises: LinAlgError If the SVD computation does not converge. See also [`scipy.linalg.pinv`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.pinv.html#scipy.linalg.pinv "\(in SciPy v1.14.1\)") Similar function in SciPy. [`scipy.linalg.pinvh`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.pinvh.html#scipy.linalg.pinvh "\(in SciPy v1.14.1\)") Compute the (Moore-Penrose) pseudo-inverse of a Hermitian matrix. #### Notes The pseudo-inverse of a matrix A, denoted \\(A^+\\), is defined as: “the matrix that ‘solves’ [the least-squares problem] \\(Ax = b\\),” i.e., if \\(\bar{x}\\) is said solution, then \\(A^+\\) is that matrix such that \\(\bar{x} = A^+b\\). It can be shown that if \\(Q_1 \Sigma Q_2^T = A\\) is the singular value decomposition of A, then \\(A^+ = Q_2 \Sigma^+ Q_1^T\\), where \\(Q_{1,2}\\) are orthogonal matrices, \\(\Sigma\\) is a diagonal matrix consisting of A’s so-called singular values, (followed, typically, by zeros), and then \\(\Sigma^+\\) is simply the diagonal matrix consisting of the reciprocals of A’s singular values (again, followed by zeros). [1] #### References [1] G. Strang, _Linear Algebra and Its Applications_ , 2nd Ed., Orlando, FL, Academic Press, Inc., 1980, pp. 139-142. 
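The SVD construction described in the Notes can be reproduced by hand and compared against `pinv`; a sketch with a random full-rank matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(5, 3))   # full column rank almost surely

# Pseudo-inverse straight from the SVD: invert the nonzero singular
# values and swap the orthogonal factors, as in the Notes.
u, s, vt = np.linalg.svd(a, full_matrices=False)
a_pinv_by_hand = vt.T @ np.diag(1.0 / s) @ u.T

assert np.allclose(a_pinv_by_hand, np.linalg.pinv(a))

# The defining Moore-Penrose identities hold.
assert np.allclose(a @ a_pinv_by_hand @ a, a)
assert np.allclose(a_pinv_by_hand @ a @ a_pinv_by_hand, a_pinv_by_hand)
```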
#### Examples The following example checks that `a * a+ * a == a` and `a+ * a * a+ == a+`: >>> import numpy as np >>> rng = np.random.default_rng() >>> a = rng.normal(size=(9, 6)) >>> B = np.linalg.pinv(a) >>> np.allclose(a, np.dot(a, np.dot(B, a))) True >>> np.allclose(B, np.dot(B, np.dot(a, B))) True # numpy.linalg.qr linalg.qr(_a_ , _mode ='reduced'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L928-L1128) Compute the qr factorization of a matrix. Factor the matrix `a` as _qr_ , where `q` is orthonormal and `r` is upper-triangular. Parameters: **a** array_like, shape (…, M, N) An array-like object with a dimensionality of at least 2. **mode**{‘reduced’, ‘complete’, ‘r’, ‘raw’}, optional, default: ‘reduced’ If K = min(M, N), then * ‘reduced’ : returns Q, R with dimensions (…, M, K), (…, K, N) * ‘complete’ : returns Q, R with dimensions (…, M, M), (…, M, N) * ‘r’ : returns R only with dimensions (…, K, N) * ‘raw’ : returns h, tau with dimensions (…, N, M), (…, K,) The options ‘reduced’, ‘complete’, and ‘raw’ are new in numpy 1.8; see the notes for more information. The default is ‘reduced’, and to maintain backward compatibility with earlier versions of numpy both it and the old default ‘full’ can be omitted. Note that the array h returned in ‘raw’ mode is transposed for calling Fortran. The ‘economic’ mode is deprecated. The modes ‘full’ and ‘economic’ may be passed using only the first letter for backwards compatibility, but all others must be spelled out. See the Notes for more explanation. Returns: When mode is ‘reduced’ or ‘complete’, the result will be a namedtuple with the attributes `Q` and `R`. **Q** ndarray of float or complex, optional A matrix with orthonormal columns. When mode = ‘complete’ the result is an orthogonal/unitary matrix depending on whether or not a is real/complex. The determinant may be either +/- 1 in that case. 
In case the number of dimensions in the input array is greater than 2 then a stack of the matrices with above properties is returned. **R** ndarray of float or complex, optional The upper-triangular matrix or a stack of upper-triangular matrices if the number of dimensions in the input array is greater than 2. **(h, tau)** ndarrays of np.double or np.cdouble, optional The array h contains the Householder reflectors that generate q along with r. The tau array contains scaling factors for the reflectors. In the deprecated ‘economic’ mode only h is returned. Raises: LinAlgError If factoring fails. See also [`scipy.linalg.qr`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.qr.html#scipy.linalg.qr "\(in SciPy v1.14.1\)") Similar function in SciPy. [`scipy.linalg.rq`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.rq.html#scipy.linalg.rq "\(in SciPy v1.14.1\)") Compute RQ decomposition of a matrix. #### Notes This is an interface to the LAPACK routines `dgeqrf`, `zgeqrf`, `dorgqr`, and `zungqr`. Subclasses of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") are preserved except for the ‘raw’ mode. So if `a` is of type [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix"), all the return values will be matrices too. New ‘reduced’, ‘complete’, and ‘raw’ options for mode were added in NumPy 1.8.0 and the old option ‘full’ was made an alias of ‘reduced’. In addition the options ‘full’ and ‘economic’ were deprecated. Because ‘full’ was the previous default and ‘reduced’ is the new default, backward compatibility can be maintained by letting `mode` default. The ‘raw’ option was added so that LAPACK routines that can multiply arrays by q using the Householder reflectors can be used. Note that in this case the returned arrays are of type np.double or np.cdouble and the h array is transposed to be FORTRAN compatible. 
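The output shapes listed for the four modes can be verified directly. The sketch below assumes a 9 x 6 input, so K = min(M, N) = 6:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(9, 6))          # M = 9, N = 6, so K = min(M, N) = 6

Q, R = np.linalg.qr(a, mode='reduced')
print(Q.shape, R.shape)              # (9, 6) (6, 6)

Q, R = np.linalg.qr(a, mode='complete')
print(Q.shape, R.shape)              # (9, 9) (9, 6)

R = np.linalg.qr(a, mode='r')
print(R.shape)                       # (6, 6)

h, tau = np.linalg.qr(a, mode='raw')
print(h.shape, tau.shape)            # (6, 9) (6,) -- h is transposed for LAPACK
```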
No routines using the ‘raw’ return are currently exposed by numpy, but some are available in lapack_lite and just await the necessary work. #### Examples >>> import numpy as np >>> rng = np.random.default_rng() >>> a = rng.normal(size=(9, 6)) >>> Q, R = np.linalg.qr(a) >>> np.allclose(a, np.dot(Q, R)) # a does equal QR True >>> R2 = np.linalg.qr(a, mode='r') >>> np.allclose(R, R2) # mode='r' returns the same R as mode='reduced' True >>> a = np.random.normal(size=(3, 2, 2)) # Stack of 2 x 2 matrices as input >>> Q, R = np.linalg.qr(a) >>> Q.shape (3, 2, 2) >>> R.shape (3, 2, 2) >>> np.allclose(a, np.matmul(Q, R)) True Example illustrating a common use of `qr`: solving least squares problems. What are the least-squares-best `m` and `y0` in `y = y0 + mx` for the following data: {(0,1), (1,2), (1,2), (2,3)}? (Graph the points and you’ll see that it should be y0 = 1, m = 1.) The answer is provided by solving the over-determined matrix equation `Ax = b`, where: A = array([[0, 1], [1, 1], [1, 1], [2, 1]]) x = array([[m], [y0]]) b = array([[1], [2], [2], [3]]) If A = QR such that Q is orthonormal (which is always possible via Gram-Schmidt), then `x = inv(R) * (Q.T) * b`. (In numpy practice, however, we simply use [`lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq").) >>> A = np.array([[0, 1], [1, 1], [1, 1], [2, 1]]) >>> A array([[0, 1], [1, 1], [1, 1], [2, 1]]) >>> b = np.array([1, 2, 2, 3]) >>> Q, R = np.linalg.qr(A) >>> p = np.dot(Q.T, b) >>> np.dot(np.linalg.inv(R), p) array([ 1., 1.]) # numpy.linalg.slogdet linalg.slogdet(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L2246-L2328) Compute the sign and (natural) logarithm of the determinant of an array. If an array has a very small or very large determinant, then a call to [`det`](numpy.linalg.det#numpy.linalg.det "numpy.linalg.det") may overflow or underflow. 
This routine is more robust against such issues, because it computes the logarithm of the determinant rather than the determinant itself. Parameters: **a**(…, M, M) array_like Input array, has to be a square 2-D array. Returns: A namedtuple with the following attributes: **sign**(…) array_like A number representing the sign of the determinant. For a real matrix, this is 1, 0, or -1. For a complex matrix, this is a complex number with absolute value 1 (i.e., it is on the unit circle), or else 0. **logabsdet**(…) array_like The natural log of the absolute value of the determinant. If the determinant is zero, then `sign` will be 0 and `logabsdet` will be -inf. In all cases, the determinant is equal to `sign * np.exp(logabsdet)`. See also [`det`](numpy.linalg.det#numpy.linalg.det "numpy.linalg.det") #### Notes Broadcasting rules apply, see the [`numpy.linalg`](../routines.linalg#module-numpy.linalg "numpy.linalg") documentation for details. The determinant is computed via LU factorization using the LAPACK routine `z/dgetrf`. 
#### Examples The determinant of a 2-D array `[[a, b], [c, d]]` is `ad - bc`: >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> (sign, logabsdet) = np.linalg.slogdet(a) >>> (sign, logabsdet) (-1, 0.69314718055994529) # may vary >>> sign * np.exp(logabsdet) -2.0 Computing log-determinants for a stack of matrices: >>> a = np.array([ [[1, 2], [3, 4]], [[1, 2], [2, 1]], [[1, 3], [3, 1]] ]) >>> a.shape (3, 2, 2) >>> sign, logabsdet = np.linalg.slogdet(a) >>> (sign, logabsdet) (array([-1., -1., -1.]), array([ 0.69314718, 1.09861229, 2.07944154])) >>> sign * np.exp(logabsdet) array([-2., -3., -8.]) This routine succeeds where ordinary [`det`](numpy.linalg.det#numpy.linalg.det "numpy.linalg.det") does not: >>> np.linalg.det(np.eye(500) * 0.1) 0.0 >>> np.linalg.slogdet(np.eye(500) * 0.1) (1, -1151.2925464970228) # numpy.linalg.solve linalg.solve(_a_ , _b_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L320-L412) Solve a linear matrix equation, or system of linear scalar equations. Computes the “exact” solution, `x`, of the well-determined, i.e., full rank, linear matrix equation `ax = b`. Parameters: **a**(…, M, M) array_like Coefficient matrix. **b**{(M,), (…, M, K)}, array_like Ordinate or “dependent variable” values. Returns: **x**{(…, M,), (…, M, K)} ndarray Solution to the system a x = b. Returned shape is (…, M) if b is shape (M,) and (…, M, K) if b is (…, M, K), where the “…” part is broadcasted between a and b. Raises: LinAlgError If `a` is singular or not square. See also [`scipy.linalg.solve`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solve.html#scipy.linalg.solve "\(in SciPy v1.14.1\)") Similar function in SciPy. #### Notes Broadcasting rules apply, see the [`numpy.linalg`](../routines.linalg#module-numpy.linalg "numpy.linalg") documentation for details. The solutions are computed using LAPACK routine `_gesv`. 
`a` must be square and of full-rank, i.e., all rows (or, equivalently, columns) must be linearly independent; if either is not true, use [`lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq") for the least-squares best “solution” of the system/equation. Changed in version 2.0: The b array is only treated as a shape (M,) column vector if it is exactly 1-dimensional. In all other instances it is treated as a stack of (M, K) matrices. Previously b would be treated as a stack of (M,) vectors if b.ndim was equal to a.ndim - 1. #### References [1] G. Strang, _Linear Algebra and Its Applications_ , 2nd Ed., Orlando, FL, Academic Press, Inc., 1980, pg. 22. #### Examples Solve the system of equations: `x0 + 2 * x1 = 1` and `3 * x0 + 5 * x1 = 2`: >>> import numpy as np >>> a = np.array([[1, 2], [3, 5]]) >>> b = np.array([1, 2]) >>> x = np.linalg.solve(a, b) >>> x array([-1., 1.]) Check that the solution is correct: >>> np.allclose(np.dot(a, x), b) True # numpy.linalg.svd linalg.svd(_a_ , _full_matrices =True_, _compute_uv =True_, _hermitian =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L1639-L1824) Singular Value Decomposition. When `a` is a 2D array, and `full_matrices=False`, then it is factorized as `u @ np.diag(s) @ vh = (u * s) @ vh`, where `u` and the Hermitian transpose of `vh` are 2D arrays with orthonormal columns and `s` is a 1D array of `a`’s singular values. When `a` is higher-dimensional, SVD is applied in stacked mode as explained below. Parameters: **a**(…, M, N) array_like A real or complex array with `a.ndim >= 2`. **full_matrices** bool, optional If True (default), `u` and `vh` have the shapes `(..., M, M)` and `(..., N, N)`, respectively. Otherwise, the shapes are `(..., M, K)` and `(..., K, N)`, respectively, where `K = min(M, N)`. **compute_uv** bool, optional Whether or not to compute `u` and `vh` in addition to `s`. True by default. 
**hermitian** bool, optional If True, `a` is assumed to be Hermitian (symmetric if real-valued), enabling a more efficient method for finding singular values. Defaults to False. Returns: When `compute_uv` is True, the result is a namedtuple with the following attribute names: **U**{ (…, M, M), (…, M, K) } array Unitary array(s). The first `a.ndim - 2` dimensions have the same size as those of the input `a`. The size of the last two dimensions depends on the value of `full_matrices`. Only returned when `compute_uv` is True. **S**(…, K) array Vector(s) with the singular values, within each vector sorted in descending order. The first `a.ndim - 2` dimensions have the same size as those of the input `a`. **Vh**{ (…, N, N), (…, K, N) } array Unitary array(s). The first `a.ndim - 2` dimensions have the same size as those of the input `a`. The size of the last two dimensions depends on the value of `full_matrices`. Only returned when `compute_uv` is True. Raises: LinAlgError If SVD computation does not converge. See also [`scipy.linalg.svd`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.svd.html#scipy.linalg.svd "\(in SciPy v1.14.1\)") Similar function in SciPy. [`scipy.linalg.svdvals`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.svdvals.html#scipy.linalg.svdvals "\(in SciPy v1.14.1\)") Compute singular values of a matrix. #### Notes The decomposition is performed using LAPACK routine `_gesdd`. SVD is usually described for the factorization of a 2D matrix \\(A\\). The higher-dimensional case will be discussed below. In the 2D case, SVD is written as \\(A = U S V^H\\), where \\(A = a\\), \\(U= u\\), \\(S= \mathtt{np.diag}(s)\\) and \\(V^H = vh\\). The 1D array `s` contains the singular values of `a` and `u` and `vh` are unitary. The rows of `vh` are the eigenvectors of \\(A^H A\\) and the columns of `u` are the eigenvectors of \\(A A^H\\). In both cases the corresponding (possibly non-zero) eigenvalues are given by `s**2`. 
If `a` has more than two dimensions, then broadcasting rules apply, as explained in [Linear algebra on several matrices at once](../routines.linalg#routines-linalg-broadcasting). This means that SVD is working in “stacked” mode: it iterates over all indices of the first `a.ndim - 2` dimensions and for each combination SVD is applied to the last two indices. The matrix `a` can be reconstructed from the decomposition with either `(u * s[..., None, :]) @ vh` or `u @ (s[..., None] * vh)`. If `a` is a `matrix` object (as opposed to an `ndarray`), then so are all the return values. #### Examples >>> import numpy as np >>> rng = np.random.default_rng() >>> a = rng.normal(size=(9, 6)) + 1j*rng.normal(size=(9, 6)) >>> b = rng.normal(size=(2, 7, 8, 3)) + 1j*rng.normal(size=(2, 7, 8, 3)) Reconstruction based on full SVD, 2D case: >>> U, S, Vh = np.linalg.svd(a, full_matrices=True) >>> U.shape, S.shape, Vh.shape ((9, 9), (6,), (6, 6)) >>> np.allclose(a, np.dot(U[:, :6] * S, Vh)) True >>> smat = np.zeros((9, 6), dtype=complex) >>> smat[:6, :6] = np.diag(S) >>> np.allclose(a, np.dot(U, np.dot(smat, Vh))) True Reconstruction based on reduced SVD, 2D case: >>> U, S, Vh = np.linalg.svd(a, full_matrices=False) >>> U.shape, S.shape, Vh.shape ((9, 6), (6,), (6, 6)) >>> np.allclose(a, np.dot(U * S, Vh)) True >>> smat = np.diag(S) >>> np.allclose(a, np.dot(U, np.dot(smat, Vh))) True Reconstruction based on full SVD, 4D case: >>> U, S, Vh = np.linalg.svd(b, full_matrices=True) >>> U.shape, S.shape, Vh.shape ((2, 7, 8, 8), (2, 7, 3), (2, 7, 3, 3)) >>> np.allclose(b, np.matmul(U[..., :3] * S[..., None, :], Vh)) True >>> np.allclose(b, np.matmul(U[..., :3], S[..., None] * Vh)) True Reconstruction based on reduced SVD, 4D case: >>> U, S, Vh = np.linalg.svd(b, full_matrices=False) >>> U.shape, S.shape, Vh.shape ((2, 7, 8, 3), (2, 7, 3), (2, 7, 3, 3)) >>> np.allclose(b, np.matmul(U * S[..., None, :], 
Vh)) True >>> np.allclose(b, np.matmul(U, S[..., None] * Vh)) True # numpy.linalg.svdvals linalg.svdvals(_x_ , _/_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L1831-L1878) Returns the singular values of a matrix (or a stack of matrices) `x`. When `x` is a stack of matrices, the function will compute the singular values for each matrix in the stack. This function is Array API compatible. Calling `np.linalg.svdvals(x)` to get singular values is the same as calling `np.linalg.svd(x, compute_uv=False, hermitian=False)`. Parameters: **x**(…, M, N) array_like Input array having shape (…, M, N) and whose last two dimensions form matrices on which to perform singular value decomposition. Should have a floating-point data type. Returns: **out** ndarray An array with shape (…, K) that contains the vector(s) of singular values of length K, where K = min(M, N). See also [`scipy.linalg.svdvals`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.svdvals.html#scipy.linalg.svdvals "\(in SciPy v1.14.1\)") Compute singular values of a matrix. #### Examples >>> np.linalg.svdvals([[1, 2, 3, 4, 5], ... [1, 4, 9, 16, 25], ... [1, 8, 27, 64, 125]]) array([146.68862757, 5.57510612, 0.60393245]) Determine the rank of a matrix using singular values: >>> s = np.linalg.svdvals([[1, 2, 3], ... [2, 4, 6], ... [-1, 1, -1]]); s array([8.38434191e+00, 1.64402274e+00, 2.31534378e-16]) >>> np.count_nonzero(s > 1e-10) # Matrix of rank 2 2 # numpy.linalg.tensordot linalg.tensordot(_x1_ , _x2_ , _/_ , _*_ , _axes =2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L3392-L3394) Compute tensor dot product along specified axes. Given two tensors, `a` and `b`, and an array_like object containing two array_like objects, `(a_axes, b_axes)`, sum the products of `a`’s and `b`’s elements (components) over the axes specified by `a_axes` and `b_axes`. 
The third argument can be a single non-negative integer_like scalar, `N`; if it is such, then the last `N` dimensions of `a` and the first `N` dimensions of `b` are summed over. Parameters: **a, b** array_like Tensors to “dot”. **axes** int or (2,) array_like * integer_like If an int N, sum over the last N axes of `a` and the first N axes of `b` in order. The sizes of the corresponding axes must match. * (2,) array_like Or, a list of axes to be summed over, first sequence applying to `a`, second to `b`. Both elements array_like must be of the same length. Returns: **output** ndarray The tensor dot product of the input. See also [`dot`](numpy.dot#numpy.dot "numpy.dot"), [`einsum`](numpy.einsum#numpy.einsum "numpy.einsum") #### Notes Three common use cases are: * `axes = 0` : tensor product \\(a\otimes b\\) * `axes = 1` : tensor dot product \\(a\cdot b\\) * `axes = 2` : (default) tensor double contraction \\(a:b\\) When `axes` is integer_like, the sequence of axes for evaluation runs from the -Nth axis to the -1st axis in `a`, and from the 0th axis to the (N-1)th axis in `b`. For example, `axes = 2` is equal to `axes = [[-2, -1], [0, 1]]`. When there is more than one axis to sum over - and they are not the last (first) axes of `a` (`b`) - the argument `axes` should consist of two sequences of the same length, with the first axis to sum over given first in both sequences, the second axis second, and so forth. The calculation can equivalently be expressed with `numpy.einsum`. The shape of the result consists of the non-contracted axes of the first tensor, followed by the non-contracted axes of the second. 
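The equivalence between the integer form of `axes` and the explicit axis-pair form noted above can be checked with a small sketch (array shapes chosen arbitrarily for illustration):

```python
import numpy as np

a = np.arange(24.0).reshape(2, 3, 4)
b = np.arange(12.0).reshape(3, 4)

# axes=2 sums over the last 2 axes of `a` and the first 2 axes of `b` ...
c1 = np.tensordot(a, b, axes=2)
# ... which is shorthand for this explicit axis-pair form.
c2 = np.tensordot(a, b, axes=[[-2, -1], [0, 1]])

print(np.array_equal(c1, c2))  # True
print(c1.shape)                # (2,): only the non-contracted axis of `a` remains
```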
#### Examples An example on integer_like: >>> a_0 = np.array([[1, 2], [3, 4]]) >>> b_0 = np.array([[5, 6], [7, 8]]) >>> c_0 = np.tensordot(a_0, b_0, axes=0) >>> c_0.shape (2, 2, 2, 2) >>> c_0 array([[[[ 5, 6], [ 7, 8]], [[10, 12], [14, 16]]], [[[15, 18], [21, 24]], [[20, 24], [28, 32]]]]) An example on array_like: >>> a = np.arange(60.).reshape(3,4,5) >>> b = np.arange(24.).reshape(4,3,2) >>> c = np.tensordot(a,b, axes=([1,0],[0,1])) >>> c.shape (5, 2) >>> c array([[4400., 4730.], [4532., 4874.], [4664., 5018.], [4796., 5162.], [4928., 5306.]]) A slower but equivalent way of computing the same… >>> d = np.zeros((5,2)) >>> for i in range(5): ... for j in range(2): ... for k in range(3): ... for n in range(4): ... d[i,j] += a[k,n,i] * b[n,k,j] >>> c == d array([[ True, True], [ True, True], [ True, True], [ True, True], [ True, True]]) An extended example taking advantage of the overloading of + and *: >>> a = np.array(range(1, 9)) >>> a.shape = (2, 2, 2) >>> A = np.array(('a', 'b', 'c', 'd'), dtype=object) >>> A.shape = (2, 2) >>> a; A array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) array([['a', 'b'], ['c', 'd']], dtype=object) >>> np.tensordot(a, A) # third argument default is 2 for double-contraction array(['abbcccdddd', 'aaaaabbbbbbcccccccdddddddd'], dtype=object) >>> np.tensordot(a, A, 1) array([[['acc', 'bdd'], ['aaacccc', 'bbbdddd']], [['aaaaacccccc', 'bbbbbdddddd'], ['aaaaaaacccccccc', 'bbbbbbbdddddddd']]], dtype=object) >>> np.tensordot(a, A, 0) # tensor product (result too long to incl.) array([[[[['a', 'b'], ['c', 'd']], ... 
>>> np.tensordot(a, A, (0, 1)) array([[['abbbbb', 'cddddd'], ['aabbbbbb', 'ccdddddd']], [['aaabbbbbbb', 'cccddddddd'], ['aaaabbbbbbbb', 'ccccdddddddd']]], dtype=object) >>> np.tensordot(a, A, (2, 1)) array([[['abb', 'cdd'], ['aaabbbb', 'cccdddd']], [['aaaaabbbbbb', 'cccccdddddd'], ['aaaaaaabbbbbbbb', 'cccccccdddddddd']]], dtype=object) >>> np.tensordot(a, A, ((0, 1), (0, 1))) array(['abbbcccccddddddd', 'aabbbbccccccdddddddd'], dtype=object) >>> np.tensordot(a, A, ((2, 1), (1, 0))) array(['acccbbdddd', 'aaaaacccccccbbbbbbdddddddd'], dtype=object) # numpy.linalg.tensorinv linalg.tensorinv(_a_ , _ind =2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L419-L487) Compute the ‘inverse’ of an N-dimensional array. The result is an inverse for `a` relative to the tensordot operation `tensordot(a, b, ind)`, i.e., up to floating-point accuracy, `tensordot(tensorinv(a), a, ind)` is the “identity” tensor for the tensordot operation. Parameters: **a** array_like Tensor to ‘invert’. Its shape must be ‘square’, i.e., `prod(a.shape[:ind]) == prod(a.shape[ind:])`. **ind** int, optional Number of first indices that are involved in the inverse sum. Must be a positive integer, default is 2. Returns: **b** ndarray `a`’s tensordot inverse, shape `a.shape[ind:] + a.shape[:ind]`. Raises: LinAlgError If `a` is singular or not ‘square’ (in the above sense). 
See also [`numpy.tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot"), [`tensorsolve`](numpy.linalg.tensorsolve#numpy.linalg.tensorsolve "numpy.linalg.tensorsolve") #### Examples >>> import numpy as np >>> a = np.eye(4*6) >>> a.shape = (4, 6, 8, 3) >>> ainv = np.linalg.tensorinv(a, ind=2) >>> ainv.shape (8, 3, 4, 6) >>> rng = np.random.default_rng() >>> b = rng.normal(size=(4, 6)) >>> np.allclose(np.tensordot(ainv, b), np.linalg.tensorsolve(a, b)) True >>> a = np.eye(4*6) >>> a.shape = (24, 8, 3) >>> ainv = np.linalg.tensorinv(a, ind=1) >>> ainv.shape (8, 3, 24) >>> rng = np.random.default_rng() >>> b = rng.normal(size=24) >>> np.allclose(np.tensordot(ainv, b, 1), np.linalg.tensorsolve(a, b)) True # numpy.linalg.tensorsolve linalg.tensorsolve(_a_ , _b_ , _axes =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L237-L313) Solve the tensor equation `a x = b` for x. It is assumed that all indices of `x` are summed over in the product, together with the rightmost indices of `a`, as is done in, for example, `tensordot(a, x, axes=x.ndim)`. Parameters: **a** array_like Coefficient tensor, of shape `b.shape + Q`. `Q`, a tuple, equals the shape of that sub-tensor of `a` consisting of the appropriate number of its rightmost indices, and must be such that `prod(Q) == prod(b.shape)` (in which sense `a` is said to be ‘square’). **b** array_like Right-hand tensor, which can be of any shape. **axes** tuple of ints, optional Axes in `a` to reorder to the right, before inversion. If None (default), no reordering is done. Returns: **x** ndarray, shape Q Raises: LinAlgError If `a` is singular or not ‘square’ (in the above sense). 
See also [`numpy.tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot"), [`tensorinv`](numpy.linalg.tensorinv#numpy.linalg.tensorinv "numpy.linalg.tensorinv"), [`numpy.einsum`](numpy.einsum#numpy.einsum "numpy.einsum") #### Examples >>> import numpy as np >>> a = np.eye(2*3*4) >>> a.shape = (2*3, 4, 2, 3, 4) >>> rng = np.random.default_rng() >>> b = rng.normal(size=(2*3, 4)) >>> x = np.linalg.tensorsolve(a, b) >>> x.shape (2, 3, 4) >>> np.allclose(np.tensordot(a, x, axes=3), b) True # numpy.linalg.trace linalg.trace(_x_ , _/_ , _*_ , _offset =0_, _dtype =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L3138-L3214) Returns the sum along the specified diagonals of a matrix (or a stack of matrices) `x`. This function is Array API compatible, contrary to [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace"). Parameters: **x**(…,M,N) array_like Input array having shape (…, M, N) and whose innermost two dimensions form MxN matrices. **offset** int, optional Offset specifying the off-diagonal relative to the main diagonal, where: * offset = 0: the main diagonal. * offset > 0: off-diagonal above the main diagonal. * offset < 0: off-diagonal below the main diagonal. **dtype** dtype, optional Data type of the returned array. Returns: **out** ndarray An array containing the traces and whose shape is determined by removing the last two dimensions and storing the traces in the last array dimension. For example, if x has rank k and shape: (I, J, K, …, L, M, N), then an output array has rank k-2 and shape: (I, J, K, …, L) where: out[i, j, k, ..., l] = trace(a[i, j, k, ..., l, :, :]) The returned array must have a data type as described by the dtype parameter above. See also [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace") #### Examples >>> np.linalg.trace(np.eye(3)) 3.0 >>> a = np.arange(8).reshape((2, 2, 2)) >>> np.linalg.trace(a) array([3, 11]) Trace is computed with the last two axes as the 2-d sub-arrays. 
This behavior differs from [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace") which uses the first two axes by default. >>> a = np.arange(24).reshape((3, 2, 2, 2)) >>> np.linalg.trace(a).shape (3, 2) Traces adjacent to the main diagonal can be obtained by using the `offset` argument: >>> a = np.arange(9).reshape((3, 3)); a array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> np.linalg.trace(a, offset=1) # First superdiagonal 6 >>> np.linalg.trace(a, offset=2) # Second superdiagonal 2 >>> np.linalg.trace(a, offset=-1) # First subdiagonal 10 >>> np.linalg.trace(a, offset=-2) # Second subdiagonal 6 # numpy.linalg.vecdot linalg.vecdot(_x1_ , _x2_ , _/_ , _*_ , _axis =-1_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L3583-L3629) Computes the vector dot product. This function is restricted to arguments compatible with the Array API, contrary to [`numpy.vecdot`](numpy.vecdot#numpy.vecdot "numpy.vecdot"). Let \\(\mathbf{a}\\) be a vector in `x1` and \\(\mathbf{b}\\) be a corresponding vector in `x2`. The dot product is defined as: \\[\mathbf{a} \cdot \mathbf{b} = \sum_{i=0}^{n-1} \overline{a_i}b_i\\] over the dimension specified by `axis` and where \\(\overline{a_i}\\) denotes the complex conjugate if \\(a_i\\) is complex and the identity otherwise. Parameters: **x1** array_like First input array. **x2** array_like Second input array. **axis** int, optional Axis over which to compute the dot product. Default: `-1`. Returns: **output** ndarray The vector dot product of the input. See also [`numpy.vecdot`](numpy.vecdot#numpy.vecdot "numpy.vecdot") #### Examples Get the projected size along a given normal for an array of vectors. 
>>> v = np.array([[0., 5., 0.], [0., 0., 10.], [0., 6., 8.]]) >>> n = np.array([0., 0.6, 0.8]) >>> np.linalg.vecdot(v, n) array([ 3., 8., 10.]) # numpy.linalg.vector_norm linalg.vector_norm(_x_ , _/_ , _*_ , _axis =None_, _keepdims =False_, _ord =2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/linalg/_linalg.py#L3481-L3575) Computes the vector norm of a vector (or batch of vectors) `x`. This function is Array API compatible. Parameters: **x** array_like Input array. **axis**{None, int, 2-tuple of ints}, optional If an integer, `axis` specifies the axis (dimension) along which to compute vector norms. If an n-tuple, `axis` specifies the axes (dimensions) along which to compute batched vector norms. If `None`, the vector norm must be computed over all array values (i.e., equivalent to computing the vector norm of a flattened array). Default: `None`. **keepdims** bool, optional If this is set to True, the axes which are normed over are left in the result as dimensions with size one. Default: False. **ord**{int, float, inf, -inf}, optional The order of the norm. For details see the table under `Notes` in [`numpy.linalg.norm`](numpy.linalg.norm#numpy.linalg.norm "numpy.linalg.norm"). 
See also [`numpy.linalg.norm`](numpy.linalg.norm#numpy.linalg.norm "numpy.linalg.norm") Generic norm function #### Examples >>> from numpy import linalg as LA >>> a = np.arange(9) + 1 >>> a array([1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> b = a.reshape((3, 3)) >>> b array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> LA.vector_norm(b) 16.881943016134134 >>> LA.vector_norm(b, ord=np.inf) 9.0 >>> LA.vector_norm(b, ord=-np.inf) 1.0 >>> LA.vector_norm(b, ord=0) 9.0 >>> LA.vector_norm(b, ord=1) 45.0 >>> LA.vector_norm(b, ord=-1) 0.3534857623790153 >>> LA.vector_norm(b, ord=2) 16.881943016134134 >>> LA.vector_norm(b, ord=-2) 0.8058837395885292 # numpy.linspace numpy.linspace(_start_ , _stop_ , _num =50_, _endpoint =True_, _retstep =False_, _dtype =None_, _axis =0_, _*_ , _device =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/function_base.py#L25-L187) Return evenly spaced numbers over a specified interval. Returns `num` evenly spaced samples, calculated over the interval [`start`, `stop`]. The endpoint of the interval can optionally be excluded. Changed in version 1.20.0: Values are rounded towards `-inf` instead of `0` when an integer `dtype` is specified. The old behavior can still be obtained with `np.linspace(start, stop, num).astype(int)` Parameters: **start** array_like The starting value of the sequence. **stop** array_like The end value of the sequence, unless `endpoint` is set to False. In that case, the sequence consists of all but the last of `num + 1` evenly spaced samples, so that `stop` is excluded. Note that the step size changes when `endpoint` is False. **num** int, optional Number of samples to generate. Default is 50. Must be non-negative. **endpoint** bool, optional If True, `stop` is the last sample. Otherwise, it is not included. Default is True. **retstep** bool, optional If True, return (`samples`, `step`), where `step` is the spacing between samples. **dtype** dtype, optional The type of the output array. 
If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is not given, the data type is inferred from `start` and `stop`. The inferred dtype will never be an integer; `float` is chosen even if the arguments would produce an array of integers. **axis** int, optional The axis in the result to store the samples. Relevant only if start or stop are array-like. By default (0), the samples will be along a new axis inserted at the beginning. Use -1 to get an axis at the end. **device** str, optional The device on which to place the created array. Default: None. For Array-API interoperability only, so must be `"cpu"` if passed. New in version 2.0.0. Returns: **samples** ndarray There are `num` equally spaced samples in the closed interval `[start, stop]` or the half-open interval `[start, stop)` (depending on whether `endpoint` is True or False). **step** float, optional Only returned if `retstep` is True. Size of spacing between samples. See also [`arange`](numpy.arange#numpy.arange "numpy.arange") Similar to `linspace`, but uses a step size (instead of the number of samples). [`geomspace`](numpy.geomspace#numpy.geomspace "numpy.geomspace") Similar to `linspace`, but with numbers spaced evenly on a log scale (a geometric progression). [`logspace`](numpy.logspace#numpy.logspace "numpy.logspace") Similar to [`geomspace`](numpy.geomspace#numpy.geomspace "numpy.geomspace"), but with the end points specified as logarithms. [How to create arrays with regularly-spaced values](../../user/how-to-partition#how-to-partition) #### Examples >>> import numpy as np >>> np.linspace(2.0, 3.0, num=5) array([2. , 2.25, 2.5 , 2.75, 3. ]) >>> np.linspace(2.0, 3.0, num=5, endpoint=False) array([2. , 2.2, 2.4, 2.6, 2.8]) >>> np.linspace(2.0, 3.0, num=5, retstep=True) (array([2. , 2.25, 2.5 , 2.75, 3. 
]), 0.25) Graphical illustration: >>> import matplotlib.pyplot as plt >>> N = 8 >>> y = np.zeros(N) >>> x1 = np.linspace(0, 10, N, endpoint=True) >>> x2 = np.linspace(0, 10, N, endpoint=False) >>> plt.plot(x1, y, 'o') [] >>> plt.plot(x2, y + 0.5, 'o') [] >>> plt.ylim([-0.5, 1]) (-0.5, 1) >>> plt.show() # numpy.load numpy.load(_file_ , _mmap_mode =None_, _allow_pickle =False_, _fix_imports =True_, _encoding ='ASCII'_, _*_ , _max_header_size =10000_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_npyio_impl.py#L308-L494) Load arrays or pickled objects from `.npy`, `.npz` or pickled files. Warning Loading files that contain object arrays uses the `pickle` module, which is not secure against erroneous or maliciously constructed data. Consider passing `allow_pickle=False` to load data that is known not to contain object arrays for the safer handling of untrusted sources. Parameters: **file** file-like object, string, or pathlib.Path The file to read. File-like objects must support the `seek()` and `read()` methods and must always be opened in binary mode. Pickled files require that the file-like object support the `readline()` method as well. **mmap_mode**{None, ‘r+’, ‘r’, ‘w+’, ‘c’}, optional If not None, then memory-map the file, using the given mode (see [`numpy.memmap`](numpy.memmap#numpy.memmap "numpy.memmap") for a detailed description of the modes). A memory-mapped array is kept on disk. However, it can be accessed and sliced like any ndarray. Memory mapping is especially useful for accessing small fragments of large files without reading the entire file into memory. **allow_pickle** bool, optional Allow loading pickled object arrays stored in npy files. Reasons for disallowing pickles include security, as loading pickled data can execute arbitrary code. If pickles are disallowed, loading object arrays will fail. 
Default: False **fix_imports** bool, optional Only useful when loading Python 2 generated pickled files on Python 3, which includes npy/npz files containing object arrays. If `fix_imports` is True, pickle will try to map the old Python 2 names to the new names used in Python 3. **encoding** str, optional What encoding to use when reading Python 2 strings. Only useful when loading Python 2 generated pickled files in Python 3, which includes npy/npz files containing object arrays. Values other than ‘latin1’, ‘ASCII’, and ‘bytes’ are not allowed, as they can corrupt numerical data. Default: ‘ASCII’ **max_header_size** int, optional Maximum allowed size of the header. Large headers may not be safe to load securely and thus require explicitly passing a larger value. See [`ast.literal_eval`](https://docs.python.org/3/library/ast.html#ast.literal_eval "\(in Python v3.13\)") for details. This option is ignored when `allow_pickle` is passed. In that case the file is by definition trusted and the limit is unnecessary. Returns: **result** array, tuple, dict, etc. Data stored in the file. For `.npz` files, the returned instance of NpzFile class must be closed to avoid leaking file descriptors. Raises: OSError If the input file does not exist or cannot be read. UnpicklingError If `allow_pickle=True`, but the file cannot be loaded as a pickle. ValueError The file contains an object array, but `allow_pickle=False` given. EOFError When calling `np.load` multiple times on the same file handle, if all data has already been read See also [`save`](numpy.save#numpy.save "numpy.save"), [`savez`](numpy.savez#numpy.savez "numpy.savez"), [`savez_compressed`](numpy.savez_compressed#numpy.savez_compressed "numpy.savez_compressed"), [`loadtxt`](numpy.loadtxt#numpy.loadtxt "numpy.loadtxt") [`memmap`](numpy.memmap#numpy.memmap "numpy.memmap") Create a memory-map to an array stored in a file on disk. 
[`lib.format.open_memmap`](numpy.lib.format.open_memmap#numpy.lib.format.open_memmap "numpy.lib.format.open_memmap") Create or load a memory-mapped `.npy` file. #### Notes * If the file contains pickle data, then whatever object is stored in the pickle is returned. * If the file is a `.npy` file, then a single array is returned. * If the file is a `.npz` file, then a dictionary-like object is returned, containing `{filename: array}` key-value pairs, one for each file in the archive. * If the file is a `.npz` file, the returned value supports the context manager protocol in a similar fashion to the open function: with load('foo.npz') as data: a = data['a'] The underlying file descriptor is closed when exiting the ‘with’ block. #### Examples >>> import numpy as np Store data to disk, and load it again: >>> np.save('/tmp/123', np.array([[1, 2, 3], [4, 5, 6]])) >>> np.load('/tmp/123.npy') array([[1, 2, 3], [4, 5, 6]]) Store compressed data to disk, and load it again: >>> a=np.array([[1, 2, 3], [4, 5, 6]]) >>> b=np.array([1, 2]) >>> np.savez('/tmp/123.npz', a=a, b=b) >>> data = np.load('/tmp/123.npz') >>> data['a'] array([[1, 2, 3], [4, 5, 6]]) >>> data['b'] array([1, 2]) >>> data.close() Mem-map the stored array, and then access the second row directly from disk: >>> X = np.load('/tmp/123.npy', mmap_mode='r') >>> X[1, :] memmap([4, 5, 6]) # numpy.loadtxt numpy.loadtxt(_fname_ , _dtype=<class 'float'>_ , _comments='#'_ , _delimiter=None_ , _converters=None_ , _skiprows=0_ , _usecols=None_ , _unpack=False_ , _ndmin=0_ , _encoding=None_ , _max_rows=None_ , _*_ , _quotechar=None_ , _like=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_npyio_impl.py#L1129-L1400) Load data from a text file. Parameters: **fname** file, str, pathlib.Path, list of str, generator File, filename, list, or generator to read. If the filename extension is `.gz` or `.bz2`, the file is first decompressed. Note that generators must return bytes or strings.
The strings in a list or produced by a generator are treated as lines. **dtype** data-type, optional Data-type of the resulting array; default: float. If this is a structured data-type, the resulting array will be 1-dimensional, and each row will be interpreted as an element of the array. In this case, the number of columns used must match the number of fields in the data-type. **comments** str or sequence of str or None, optional The characters or list of characters used to indicate the start of a comment. None implies no comments. For backwards compatibility, byte strings will be decoded as ‘latin1’. The default is ‘#’. **delimiter** str, optional The character used to separate the values. For backwards compatibility, byte strings will be decoded as ‘latin1’. The default is whitespace. Changed in version 1.23.0: Only single character delimiters are supported. Newline characters cannot be used as the delimiter. **converters** dict or callable, optional Converter functions to customize value parsing. If `converters` is callable, the function is applied to all columns, else it must be a dict that maps column number to a parser function. See examples for further details. Default: None. Changed in version 1.23.0: The ability to pass a single callable to be applied to all columns was added. **skiprows** int, optional Skip the first `skiprows` lines, including comments; default: 0. **usecols** int or sequence, optional Which columns to read, with 0 being the first. For example, `usecols = (1,4,5)` will extract the 2nd, 5th and 6th columns. The default, None, results in all columns being read. **unpack** bool, optional If True, the returned array is transposed, so that arguments may be unpacked using `x, y, z = loadtxt(...)`. When used with a structured data-type, arrays are returned for each field. Default is False. **ndmin** int, optional The returned array will have at least `ndmin` dimensions. Otherwise mono-dimensional axes will be squeezed.
Legal values: 0 (default), 1 or 2. **encoding** str, optional Encoding used to decode the input file. Does not apply to input streams. The special value ‘bytes’ enables backward compatibility workarounds that ensure you receive byte arrays as results if possible and pass ‘latin1’ encoded strings to converters. Override this value to receive unicode arrays and pass strings as input to converters. If set to None the system default is used. The default value is None. Changed in version 2.0: Before NumPy 2, the default was `'bytes'` for Python 2 compatibility. The default is now `None`. **max_rows** int, optional Read `max_rows` rows of content after `skiprows` lines. The default is to read all the rows. Note that empty rows containing no data such as empty lines and comment lines are not counted towards `max_rows`, while such lines are counted in `skiprows`. Changed in version 1.23.0: Lines containing no data, including comment lines (e.g., lines starting with ‘#’ or as specified via `comments`) are not counted towards `max_rows`. **quotechar** unicode character or None, optional The character used to denote the start and end of a quoted item. Occurrences of the delimiter or comment characters are ignored within a quoted item. The default value is `quotechar=None`, which means quoting support is disabled. If two consecutive instances of `quotechar` are found within a quoted field, the first is treated as an escape character. See examples. New in version 1.23.0. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: **out** ndarray Data read from the text file.
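The interaction between `skiprows` and `max_rows` described above (comment lines count toward `skiprows` but not toward `max_rows`) can be sketched as follows; the file content here is illustrative, not from the official docs:

```python
import numpy as np
from io import StringIO

# Illustrative data: a header comment, then values with an
# interleaved comment line.
s = StringIO("# header\n1\n# interleaved comment\n2\n3\n")

# skiprows=1 skips the header line (comment lines ARE counted here);
# max_rows=2 then reads two rows of *data* -- the interleaved comment
# line is not counted towards max_rows.
out = np.loadtxt(s, skiprows=1, max_rows=2)
print(out)  # -> [1. 2.]
```

So the returned array contains the values 1 and 2, while the value 3 is left unread.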
See also [`load`](numpy.load#numpy.load "numpy.load"), [`fromstring`](numpy.fromstring#numpy.fromstring "numpy.fromstring"), [`fromregex`](numpy.fromregex#numpy.fromregex "numpy.fromregex") [`genfromtxt`](numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") Load data with missing values handled as specified. [`scipy.io.loadmat`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.loadmat.html#scipy.io.loadmat "\(in SciPy v1.14.1\)") reads MATLAB data files #### Notes This function aims to be a fast reader for simply formatted files. The [`genfromtxt`](numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") function provides more sophisticated handling of, e.g., lines with missing values. Each row in the input text file must have the same number of values to be able to read all values. If all rows do not have the same number of values, a subset of up to n columns (where n is the least number of values present in all rows) can be read by specifying the columns via `usecols`. The strings produced by the Python float.hex method can be used as input for floats. #### Examples >>> import numpy as np >>> from io import StringIO # StringIO behaves like a file object >>> c = StringIO("0 1\n2 3") >>> np.loadtxt(c) array([[0., 1.], [2., 3.]]) >>> d = StringIO("M 21 72\nF 35 58") >>> np.loadtxt(d, dtype={'names': ('gender', 'age', 'weight'), ... 'formats': ('S1', 'i4', 'f4')}) array([(b'M', 21, 72.), (b'F', 35, 58.)], dtype=[('gender', 'S1'), ('age', '<i4'), ('weight', '<f4')]) >>> c = StringIO("1,0,2\n3,0,4") >>> x, y = np.loadtxt(c, delimiter=',', usecols=(0, 2), unpack=True) >>> x array([1., 3.]) >>> y array([2., 4.]) The `converters` argument is used to specify functions to preprocess the text prior to parsing. `converters` can be a dictionary that maps preprocessing functions to each column: >>> s = StringIO("1.618, 2.296\n3.141, 4.669\n") >>> conv = { ... 0: lambda x: np.floor(float(x)), # conversion fn for column 0 ... 1: lambda x: np.ceil(float(x)), # conversion fn for column 1 ...
} >>> np.loadtxt(s, delimiter=",", converters=conv) array([[1., 3.], [3., 5.]]) `converters` can be a callable instead of a dictionary, in which case it is applied to all columns: >>> s = StringIO("0xDE 0xAD\n0xC0 0xDE") >>> import functools >>> conv = functools.partial(int, base=16) >>> np.loadtxt(s, converters=conv) array([[222., 173.], [192., 222.]]) This example shows how `converters` can be used to convert a field with a trailing minus sign into a negative number. >>> s = StringIO("10.01 31.25-\n19.22 64.31\n17.57- 63.94") >>> def conv(fld): ... return -float(fld[:-1]) if fld.endswith("-") else float(fld) ... >>> np.loadtxt(s, converters=conv) array([[ 10.01, -31.25], [ 19.22, 64.31], [-17.57, 63.94]]) Using a callable as the converter can be particularly useful for handling values with different formatting, e.g. floats with underscores: >>> s = StringIO("1 2.7 100_000") >>> np.loadtxt(s, converters=float) array([1.e+00, 2.7e+00, 1.e+05]) This idea can be extended to automatically handle values specified in many different formats, such as hex values: >>> def conv(val): ... try: ... return float(val) ... except ValueError: ... return float.fromhex(val) >>> s = StringIO("1, 2.5, 3_000, 0b4, 0x1.4000000000000p+2") >>> np.loadtxt(s, delimiter=",", converters=conv) array([1.0e+00, 2.5e+00, 3.0e+03, 1.8e+02, 5.0e+00]) Or a format where the `-` sign comes after the number: >>> s = StringIO("10.01 31.25-\n19.22 64.31\n17.57- 63.94") >>> conv = lambda x: -float(x[:-1]) if x.endswith("-") else float(x) >>> np.loadtxt(s, converters=conv) array([[ 10.01, -31.25], [ 19.22, 64.31], [-17.57, 63.94]]) Support for quoted fields is enabled with the `quotechar` parameter. 
Comment and delimiter characters are ignored when they appear within a quoted item delineated by `quotechar`: >>> s = StringIO('"alpha, #42", 10.0\n"beta, #64", 2.0\n') >>> dtype = np.dtype([("label", "U12"), ("value", float)]) >>> np.loadtxt(s, dtype=dtype, delimiter=",", quotechar='"') array([('alpha, #42', 10.), ('beta, #64', 2.)], dtype=[('label', '<U12'), ('value', '<f8')]) Quoted fields can be separated by multiple whitespace characters: >>> s = StringIO('"alpha, #42" 10.0\n"beta, #64" 2.0\n') >>> dtype = np.dtype([("label", "U12"), ("value", float)]) >>> np.loadtxt(s, dtype=dtype, delimiter=None, quotechar='"') array([('alpha, #42', 10.), ('beta, #64', 2.)], dtype=[('label', '<U12'), ('value', '<f8')]) Two consecutive quote characters within a quoted field are treated as a single quote character: >>> s = StringIO('"Hello, my name is ""Monty""!"') >>> np.loadtxt(s, dtype="U", delimiter=",", quotechar='"') array('Hello, my name is "Monty"!', dtype='<U26') Read a subset of columns when all rows do not contain an equal number of values: >>> d = StringIO("1 2\n2 4\n3 9 12\n4 16 20") >>> np.loadtxt(d, usecols=(0, 1)) array([[ 1., 2.], [ 2., 4.], [ 3., 9.], [ 4., 16.]]) # numpy.log numpy.log(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'log'>_ Natural logarithm, element-wise. The natural logarithm `log` is the inverse of the exponential function, so that `log(exp(x)) = x`. The natural logarithm is logarithm in base [`e`](../constants#numpy.e "numpy.e"). Parameters: **x** array_like Input value. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value.
Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray The natural logarithm of `x`, element-wise. This is a scalar if `x` is a scalar. See also [`log10`](numpy.log10#numpy.log10 "numpy.log10"), [`log2`](numpy.log2#numpy.log2 "numpy.log2"), [`log1p`](numpy.log1p#numpy.log1p "numpy.log1p"), [`emath.log`](numpy.emath.log#numpy.emath.log "numpy.emath.log") #### Notes Logarithm is a multivalued function: for each `x` there is an infinite number of `z` such that `exp(z) = x`. The convention is to return the `z` whose imaginary part lies in `(-pi, pi]`. For real-valued input data types, `log` always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag. For complex-valued input, `log` is a complex analytical function that has a branch cut `[-inf, 0]` and is continuous from above on it. `log` handles the floating-point negative zero as an infinitesimal negative number, conforming to the C99 standard. In the cases where the input has a negative real part and a very small negative complex part (approaching 0), the result is so close to `-pi` that it evaluates to exactly `-pi`. #### References [1] M. Abramowitz and I.A. Stegun, “Handbook of Mathematical Functions”, 10th printing, 1964, pp. 67. [2] Wikipedia, “Logarithm”. #### Examples >>> import numpy as np >>> np.log([1, np.e, np.e**2, 0]) array([ 0., 1., 2., -inf]) # numpy.log10 numpy.log10(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'log10'>_ Return the base 10 logarithm of the input array, element-wise. Parameters: **x** array_like Input values.
**out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray The logarithm to the base 10 of `x`, element-wise. NaNs are returned where x is negative. This is a scalar if `x` is a scalar. See also [`emath.log10`](numpy.emath.log10#numpy.emath.log10 "numpy.emath.log10") #### Notes Logarithm is a multivalued function: for each `x` there is an infinite number of `z` such that `10**z = x`. The convention is to return the `z` whose imaginary part lies in `(-pi, pi]`. For real-valued input data types, `log10` always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag. For complex-valued input, `log10` is a complex analytical function that has a branch cut `[-inf, 0]` and is continuous from above on it. `log10` handles the floating-point negative zero as an infinitesimal negative number, conforming to the C99 standard. In the cases where the input has a negative real part and a very small negative complex part (approaching 0), the result is so close to `-pi` that it evaluates to exactly `-pi`. #### References [1] M. Abramowitz and I.A. Stegun, “Handbook of Mathematical Functions”, 10th printing, 1964, pp. 67.
[2] Wikipedia, “Logarithm”. #### Examples >>> import numpy as np >>> np.log10([1e-15, -3.]) array([-15., nan]) # numpy.log1p numpy.log1p(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'log1p'>_ Return the natural logarithm of one plus the input array, element-wise. Calculates `log(1 + x)`. Parameters: **x** array_like Input values. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray Natural logarithm of `1 + x`, element-wise. This is a scalar if `x` is a scalar. See also [`expm1`](numpy.expm1#numpy.expm1 "numpy.expm1") `exp(x) - 1`, the inverse of `log1p`. #### Notes For real-valued input, `log1p` is accurate also for `x` so small that `1 + x == 1` in floating-point accuracy. Logarithm is a multivalued function: for each `x` there is an infinite number of `z` such that `exp(z) = 1 + x`. The convention is to return the `z` whose imaginary part lies in `[-pi, pi]`. For real-valued input data types, `log1p` always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag.
For complex-valued input, `log1p` is a complex analytical function that has a branch cut `[-inf, -1]` and is continuous from above on it. `log1p` handles the floating-point negative zero as an infinitesimal negative number, conforming to the C99 standard. #### References [1] M. Abramowitz and I.A. Stegun, “Handbook of Mathematical Functions”, 10th printing, 1964, pp. 67. [2] Wikipedia, “Logarithm”. #### Examples >>> import numpy as np >>> np.log1p(1e-99) 1e-99 >>> np.log(1 + 1e-99) 0.0 # numpy.log2 numpy.log2(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'log2'>_ Base-2 logarithm of `x`. Parameters: **x** array_like Input values. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray Base-2 logarithm of `x`. This is a scalar if `x` is a scalar. See also [`log`](numpy.log#numpy.log "numpy.log"), [`log10`](numpy.log10#numpy.log10 "numpy.log10"), [`log1p`](numpy.log1p#numpy.log1p "numpy.log1p"), [`emath.log2`](numpy.emath.log2#numpy.emath.log2 "numpy.emath.log2") #### Notes Logarithm is a multivalued function: for each `x` there is an infinite number of `z` such that `2**z = x`.
The convention is to return the `z` whose imaginary part lies in `(-pi, pi]`. For real-valued input data types, `log2` always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag. For complex-valued input, `log2` is a complex analytical function that has a branch cut `[-inf, 0]` and is continuous from above on it. `log2` handles the floating-point negative zero as an infinitesimal negative number, conforming to the C99 standard. In the cases where the input has a negative real part and a very small negative complex part (approaching 0), the result is so close to `-pi` that it evaluates to exactly `-pi`. #### Examples >>> import numpy as np >>> x = np.array([0, 1, 2, 2**4]) >>> np.log2(x) array([-inf, 0., 1., 4.]) >>> xi = np.array([0+1.j, 1, 2+0.j, 4.j]) >>> np.log2(xi) array([ 0.+2.26618007j, 0.+0.j , 1.+0.j , 2.+2.26618007j]) # numpy.logaddexp numpy.logaddexp(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'logaddexp'>_ Logarithm of the sum of exponentiations of the inputs. Calculates `log(exp(x1) + exp(x2))`. This function is useful in statistics where the calculated probabilities of events may be so small as to exceed the range of normal floating point numbers. In such cases the logarithm of the calculated probability is stored. This function allows adding probabilities stored in such a fashion. Parameters: **x1, x2** array_like Input values. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
**where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **result** ndarray Logarithm of `exp(x1) + exp(x2)`. This is a scalar if both `x1` and `x2` are scalars. See also [`logaddexp2`](numpy.logaddexp2#numpy.logaddexp2 "numpy.logaddexp2") Logarithm of the sum of exponentiations of inputs in base 2. #### Examples >>> import numpy as np >>> prob1 = np.log(1e-50) >>> prob2 = np.log(2.5e-50) >>> prob12 = np.logaddexp(prob1, prob2) >>> prob12 -113.87649168120691 >>> np.exp(prob12) 3.5000000000000057e-50 # numpy.logaddexp2 numpy.logaddexp2(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'logaddexp2'>_ Logarithm of the sum of exponentiations of the inputs in base-2. Calculates `log2(2**x1 + 2**x2)`. This function is useful in machine learning when the calculated probabilities of events may be so small as to exceed the range of normal floating point numbers. In such cases the base-2 logarithm of the calculated probability can be used instead. This function allows adding probabilities stored in such a fashion. Parameters: **x1, x2** array_like Input values. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned.
A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **result** ndarray Base-2 logarithm of `2**x1 + 2**x2`. This is a scalar if both `x1` and `x2` are scalars. See also [`logaddexp`](numpy.logaddexp#numpy.logaddexp "numpy.logaddexp") Logarithm of the sum of exponentiations of the inputs. #### Examples >>> import numpy as np >>> prob1 = np.log2(1e-50) >>> prob2 = np.log2(2.5e-50) >>> prob12 = np.logaddexp2(prob1, prob2) >>> prob1, prob2, prob12 (-166.09640474436813, -164.77447664948076, -164.28904982231052) >>> 2**prob12 3.4999999999999914e-50 # numpy.logical_and numpy.logical_and(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'logical_and'>_ Compute the truth value of x1 AND x2 element-wise. Parameters: **x1, x2** array_like Input arrays. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result.
Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray or bool Boolean result of the logical AND operation applied to the elements of `x1` and `x2`; the shape is determined by broadcasting. This is a scalar if both `x1` and `x2` are scalars. See also [`logical_or`](numpy.logical_or#numpy.logical_or "numpy.logical_or"), [`logical_not`](numpy.logical_not#numpy.logical_not "numpy.logical_not"), [`logical_xor`](numpy.logical_xor#numpy.logical_xor "numpy.logical_xor") [`bitwise_and`](numpy.bitwise_and#numpy.bitwise_and "numpy.bitwise_and") #### Examples >>> import numpy as np >>> np.logical_and(True, False) False >>> np.logical_and([True, False], [False, False]) array([False, False]) >>> x = np.arange(5) >>> np.logical_and(x>1, x<4) array([False, False, True, True, False]) The `&` operator can be used as a shorthand for `np.logical_and` on boolean ndarrays. >>> a = np.array([True, False]) >>> b = np.array([False, False]) >>> a & b array([False, False]) # numpy.logical_not numpy.logical_not(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'logical_not'>_ Compute the truth value of NOT x element-wise. Parameters: **x** array_like Logical NOT is applied to the elements of `x`. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input.
At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** bool or ndarray of bool Boolean result with the same shape as `x` of the NOT operation on elements of `x`. This is a scalar if `x` is a scalar. See also [`logical_and`](numpy.logical_and#numpy.logical_and "numpy.logical_and"), [`logical_or`](numpy.logical_or#numpy.logical_or "numpy.logical_or"), [`logical_xor`](numpy.logical_xor#numpy.logical_xor "numpy.logical_xor") #### Examples >>> import numpy as np >>> np.logical_not(3) False >>> np.logical_not([True, False, 0, 1]) array([False, True, True, False]) >>> x = np.arange(5) >>> np.logical_not(x<3) array([False, False, False, True, True]) # numpy.logical_or numpy.logical_or(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'logical_or'>_ Compute the truth value of x1 OR x2 element-wise. Parameters: **x1, x2** array_like Logical OR is applied to the elements of `x1` and `x2`. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value.
Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray or bool Boolean result of the logical OR operation applied to the elements of `x1` and `x2`; the shape is determined by broadcasting. This is a scalar if both `x1` and `x2` are scalars. See also [`logical_and`](numpy.logical_and#numpy.logical_and "numpy.logical_and"), [`logical_not`](numpy.logical_not#numpy.logical_not "numpy.logical_not"), [`logical_xor`](numpy.logical_xor#numpy.logical_xor "numpy.logical_xor") [`bitwise_or`](numpy.bitwise_or#numpy.bitwise_or "numpy.bitwise_or") #### Examples >>> import numpy as np >>> np.logical_or(True, False) True >>> np.logical_or([True, False], [False, False]) array([ True, False]) >>> x = np.arange(5) >>> np.logical_or(x < 1, x > 3) array([ True, False, False, False, True]) The `|` operator can be used as a shorthand for `np.logical_or` on boolean ndarrays. >>> a = np.array([True, False]) >>> b = np.array([False, False]) >>> a | b array([ True, False]) # numpy.logical_xor numpy.logical_xor(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'logical_xor'>_ Compute the truth value of x1 XOR x2, element-wise. Parameters: **x1, x2** array_like Logical XOR is applied to the elements of `x1` and `x2`. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
**where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. **kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** bool or ndarray of bool Boolean result of the logical XOR operation applied to the elements of `x1` and `x2`; the shape is determined by broadcasting. This is a scalar if both `x1` and `x2` are scalars. See also [`logical_and`](numpy.logical_and#numpy.logical_and "numpy.logical_and"), [`logical_or`](numpy.logical_or#numpy.logical_or "numpy.logical_or"), [`logical_not`](numpy.logical_not#numpy.logical_not "numpy.logical_not"), [`bitwise_xor`](numpy.bitwise_xor#numpy.bitwise_xor "numpy.bitwise_xor") #### Examples >>> import numpy as np >>> np.logical_xor(True, False) True >>> np.logical_xor([True, True, False, False], [True, False, True, False]) array([False, True, True, False]) >>> x = np.arange(5) >>> np.logical_xor(x < 1, x > 3) array([ True, False, False, False, True]) Simple example showing support of broadcasting: >>> np.logical_xor(0, np.eye(2)) array([[ True, False], [False, True]]) # numpy.logspace numpy.logspace(_start_, _stop_, _num=50_, _endpoint=True_, _base=10.0_, _dtype=None_, _axis=0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/function_base.py#L195-L302) Return numbers spaced evenly on a log scale. In linear space, the sequence starts at `base ** start` (`base` to the power of `start`) and ends with `base ** stop` (see `endpoint` below). Changed in version 1.25.0: Non-scalar `base` is now supported. Parameters: **start** array_like `base ** start` is the starting value of the sequence.
**stop** array_like `base ** stop` is the final value of the sequence, unless `endpoint` is False. In that case, `num + 1` values are spaced over the interval in log-space, of which all but the last (a sequence of length `num`) are returned. **num** integer, optional Number of samples to generate. Default is 50. **endpoint** boolean, optional If true, `stop` is the last sample. Otherwise, it is not included. Default is True. **base** array_like, optional The base of the log space. The step size between the elements in `ln(samples) / ln(base)` (or `log_base(samples)`) is uniform. Default is 10.0. **dtype** dtype The type of the output array. If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is not given, the data type is inferred from `start` and `stop`. The inferred type will never be an integer; `float` is chosen even if the arguments would produce an array of integers. **axis** int, optional The axis in the result to store the samples. Relevant only if start, stop, or base are array-like. By default (0), the samples will be along a new axis inserted at the beginning. Use -1 to get an axis at the end. Returns: **samples** ndarray `num` samples, equally spaced on a log scale. See also [`arange`](numpy.arange#numpy.arange "numpy.arange") Similar to linspace, with the step size specified instead of the number of samples. Note that, when used with a float endpoint, the endpoint may or may not be included. [`linspace`](numpy.linspace#numpy.linspace "numpy.linspace") Similar to logspace, but with the samples uniformly distributed in linear space, instead of log space. [`geomspace`](numpy.geomspace#numpy.geomspace "numpy.geomspace") Similar to logspace, but with endpoints specified directly. [How to create arrays with regularly-spaced values](../../user/how-to- partition#how-to-partition) #### Notes If base is a scalar, logspace is equivalent to the code >>> y = np.linspace(start, stop, num=num, endpoint=endpoint) ... >>> power(base, y).astype(dtype) ... 
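The scalar-base equivalence stated in the Notes can be checked numerically; a minimal sketch (variable names are illustrative, not part of the reference):

```python
import numpy as np

# logspace(start, stop) is equivalent to raising `base` to
# linearly spaced exponents (scalar-base case from the Notes).
start, stop, num, base = 2.0, 3.0, 4, 10.0
y = np.linspace(start, stop, num=num, endpoint=True)
assert np.allclose(np.logspace(start, stop, num=num, base=base),
                   np.power(base, y))
```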
#### Examples >>> import numpy as np >>> np.logspace(2.0, 3.0, num=4) array([ 100. , 215.443469 , 464.15888336, 1000. ]) >>> np.logspace(2.0, 3.0, num=4, endpoint=False) array([100. , 177.827941 , 316.22776602, 562.34132519]) >>> np.logspace(2.0, 3.0, num=4, base=2.0) array([4. , 5.0396842 , 6.34960421, 8. ]) >>> np.logspace(2.0, 3.0, num=4, base=[2.0, 3.0], axis=-1) array([[ 4. , 5.0396842 , 6.34960421, 8. ], [ 9. , 12.98024613, 18.72075441, 27. ]]) Graphical illustration: >>> import matplotlib.pyplot as plt >>> N = 10 >>> x1 = np.logspace(0.1, 1, N, endpoint=True) >>> x2 = np.logspace(0.1, 1, N, endpoint=False) >>> y = np.zeros(N) >>> plt.plot(x1, y, 'o') [<matplotlib.lines.Line2D object at 0x...>] >>> plt.plot(x2, y + 0.5, 'o') [<matplotlib.lines.Line2D object at 0x...>] >>> plt.ylim([-0.5, 1]) (-0.5, 1) >>> plt.show() # numpy.ma.all ma.all(_self_, _axis=None_, _out=None_, _keepdims=<no value>_) Returns True if all elements evaluate to True. The output array is masked where all the values along the given axis are masked: if the output would have been a scalar and all the values are masked, then the output is [`masked`](../maskedarray.baseclass#numpy.ma.masked "numpy.ma.masked"). Refer to [`numpy.all`](numpy.all#numpy.all "numpy.all") for full documentation. See also [`numpy.ndarray.all`](numpy.ndarray.all#numpy.ndarray.all "numpy.ndarray.all") corresponding function for ndarrays [`numpy.all`](numpy.all#numpy.all "numpy.all") equivalent function #### Examples >>> import numpy as np >>> np.ma.array([1,2,3]).all() True >>> a = np.ma.array([1,2,3], mask=True) >>> (a.all() is np.ma.masked) True # numpy.ma.allclose ma.allclose(_a_, _b_, _masked_equal=True_, _rtol=1e-05_, _atol=1e-08_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L8487-L8593) Returns True if two arrays are element-wise equal within a tolerance.
This function is equivalent to [`allclose`](numpy.allclose#numpy.allclose "numpy.allclose") except that masked values are treated as equal (default) or unequal, depending on the [`masked_equal`](numpy.ma.masked_equal#numpy.ma.masked_equal "numpy.ma.masked_equal") argument. Parameters: **a, b** array_like Input arrays to compare. **masked_equal** bool, optional Whether masked values in `a` and `b` are considered equal (True) or not (False). They are considered equal by default. **rtol** float, optional Relative tolerance. The relative difference is equal to `rtol * b`. Default is 1e-5. **atol** float, optional Absolute tolerance. The absolute difference is equal to `atol`. Default is 1e-8. Returns: **y** bool Returns True if the two arrays are equal within the given tolerance, False otherwise. If either array contains NaN, then False is returned. See also [`all`](numpy.all#numpy.all "numpy.all"), [`any`](numpy.any#numpy.any "numpy.any") [`numpy.allclose`](numpy.allclose#numpy.allclose "numpy.allclose") the non-masked [`allclose`](numpy.allclose#numpy.allclose "numpy.allclose"). #### Notes If the following equation is element-wise True, then [`allclose`](numpy.allclose#numpy.allclose "numpy.allclose") returns True: absolute(`a` - `b`) <= (`atol` + `rtol` * absolute(`b`)) Return True if all elements of `a` and `b` are equal subject to given tolerances. #### Examples >>> import numpy as np >>> a = np.ma.array([1e10, 1e-7, 42.0], mask=[0, 0, 1]) >>> a masked_array(data=[10000000000.0, 1e-07, --], mask=[False, False, True], fill_value=1e+20) >>> b = np.ma.array([1e10, 1e-8, -42.0], mask=[0, 0, 1]) >>> np.ma.allclose(a, b) False >>> a = np.ma.array([1e10, 1e-8, 42.0], mask=[0, 0, 1]) >>> b = np.ma.array([1.00001e10, 1e-9, -42.0], mask=[0, 0, 1]) >>> np.ma.allclose(a, b) True >>> np.ma.allclose(a, b, masked_equal=False) False Masked values are not compared directly. 
>>> a = np.ma.array([1e10, 1e-8, 42.0], mask=[0, 0, 1]) >>> b = np.ma.array([1.00001e10, 1e-9, 42.0], mask=[0, 0, 1]) >>> np.ma.allclose(a, b) True >>> np.ma.allclose(a, b, masked_equal=False) False # numpy.ma.allequal ma.allequal(_a_ , _b_ , _fill_value =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L8428-L8484) Return True if all entries of a and b are equal, using fill_value as a truth value where either or both are masked. Parameters: **a, b** array_like Input arrays to compare. **fill_value** bool, optional Whether masked values in a or b are considered equal (True) or not (False). Returns: **y** bool Returns True if the two arrays are equal within the given tolerance, False otherwise. If either array contains NaN, then False is returned. See also [`all`](numpy.all#numpy.all "numpy.all"), [`any`](numpy.any#numpy.any "numpy.any") [`numpy.ma.allclose`](numpy.ma.allclose#numpy.ma.allclose "numpy.ma.allclose") #### Examples >>> import numpy as np >>> a = np.ma.array([1e10, 1e-7, 42.0], mask=[0, 0, 1]) >>> a masked_array(data=[10000000000.0, 1e-07, --], mask=[False, False, True], fill_value=1e+20) >>> b = np.array([1e10, 1e-7, -42.0]) >>> b array([ 1.00000000e+10, 1.00000000e-07, -4.20000000e+01]) >>> np.ma.allequal(a, b, fill_value=False) False >>> np.ma.allequal(a, b) True # numpy.ma.amax ma.amax(_a_ , _axis=None_ , _out=None_ , _keepdims= _, _initial= _, _where= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L3168-L3182) Return the maximum of an array or maximum along an axis. [`amax`](numpy.amax#numpy.amax "numpy.amax") is an alias of [`max`](numpy.max#numpy.max "numpy.max"). 
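The reference gives no example for `ma.amax`; a short illustrative sketch (values chosen arbitrarily) of how masked entries are excluded from the reduction:

```python
import numpy as np
import numpy.ma as ma

# The largest unmasked value wins, even when a bigger value is masked.
a = ma.array([1, 5, 3], mask=[False, True, False])  # 5 is masked
print(ma.amax(a))  # 3
```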
See also [`max`](numpy.max#numpy.max "numpy.max") alias of this function [`ndarray.max`](numpy.ndarray.max#numpy.ndarray.max "numpy.ndarray.max") equivalent method # numpy.ma.amin ma.amin(_a_ , _axis=None_ , _out=None_ , _keepdims= _, _initial= _, _where= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L3306-L3320) Return the minimum of an array or minimum along an axis. [`amin`](numpy.amin#numpy.amin "numpy.amin") is an alias of [`min`](numpy.min#numpy.min "numpy.min"). See also [`min`](numpy.min#numpy.min "numpy.min") alias of this function [`ndarray.min`](numpy.ndarray.min#numpy.ndarray.min "numpy.ndarray.min") equivalent method # numpy.ma.anom ma.anom(_self_ , _axis =None_, _dtype =None_)_= _ Compute the anomalies (deviations from the arithmetic mean) along the given axis. Returns an array of anomalies, with the same shape as the input and where the arithmetic mean is computed along the given axis. Parameters: **axis** int, optional Axis over which the anomalies are taken. The default is to use the mean of the flattened array as reference. **dtype** dtype, optional Type to use in computing the variance. For arrays of integer type the default is float32; for arrays of float types it is the same as the array type. See also [`mean`](numpy.mean#numpy.mean "numpy.mean") Compute the mean of the array. #### Examples >>> import numpy as np >>> a = np.ma.array([1,2,3]) >>> a.anom() masked_array(data=[-1., 0., 1.], mask=False, fill_value=1e+20) # numpy.ma.anomalies ma.anomalies(_self_ , _axis =None_, _dtype =None_)_= _ Compute the anomalies (deviations from the arithmetic mean) along the given axis. Returns an array of anomalies, with the same shape as the input and where the arithmetic mean is computed along the given axis. Parameters: **axis** int, optional Axis over which the anomalies are taken. The default is to use the mean of the flattened array as reference. **dtype** dtype, optional Type to use in computing the variance. 
For arrays of integer type the default is float32; for arrays of float types it is the same as the array type. See also [`mean`](numpy.mean#numpy.mean "numpy.mean") Compute the mean of the array. #### Examples >>> import numpy as np >>> a = np.ma.array([1,2,3]) >>> a.anom() masked_array(data=[-1., 0., 1.], mask=False, fill_value=1e+20) # numpy.ma.any ma.any(_self_ , _axis=None_ , _out=None_ , _keepdims= _)_= _ Returns True if any of the elements of `a` evaluate to True. Masked values are considered as False during computation. Refer to [`numpy.any`](numpy.any#numpy.any "numpy.any") for full documentation. See also [`numpy.ndarray.any`](numpy.ndarray.any#numpy.ndarray.any "numpy.ndarray.any") corresponding function for ndarrays [`numpy.any`](numpy.any#numpy.any "numpy.any") equivalent function # numpy.ma.append ma.append(_a_ , _b_ , _axis =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L8920-L8959) Append values to the end of an array. Parameters: **a** array_like Values are appended to a copy of this array. **b** array_like These values are appended to a copy of `a`. It must be of the correct shape (the same shape as `a`, excluding `axis`). If `axis` is not specified, `b` can be any shape and will be flattened before use. **axis** int, optional The axis along which `v` are appended. If `axis` is not given, both `a` and `b` are flattened before use. Returns: **append** MaskedArray A copy of `a` with `b` appended to `axis`. Note that [`append`](numpy.append#numpy.append "numpy.append") does not occur in-place: a new array is allocated and filled. If `axis` is None, the result is a flattened array. See also [`numpy.append`](numpy.append#numpy.append "numpy.append") Equivalent function in the top-level NumPy module. 
#### Examples >>> import numpy as np >>> import numpy.ma as ma >>> a = ma.masked_values([1, 2, 3], 2) >>> b = ma.masked_values([[4, 5, 6], [7, 8, 9]], 7) >>> ma.append(a, b) masked_array(data=[1, --, 3, 4, 5, 6, --, 8, 9], mask=[False, True, False, False, False, False, True, False, False], fill_value=999999) # numpy.ma.apply_along_axis ma.apply_along_axis(_func1d_ , _axis_ , _arr_ , _* args_, _** kwargs_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L390-L469) Apply a function to 1-D slices along the given axis. Execute `func1d(a, *args, **kwargs)` where `func1d` operates on 1-D arrays and `a` is a 1-D slice of `arr` along `axis`. This is equivalent to (but faster than) the following use of [`ndindex`](numpy.ndindex#numpy.ndindex "numpy.ndindex") and [`s_`](numpy.s_#numpy.s_ "numpy.s_"), which sets each of `ii`, `jj`, and `kk` to a tuple of indices: Ni, Nk = a.shape[:axis], a.shape[axis+1:] for ii in ndindex(Ni): for kk in ndindex(Nk): f = func1d(arr[ii + s_[:,] + kk]) Nj = f.shape for jj in ndindex(Nj): out[ii + jj + kk] = f[jj] Equivalently, eliminating the inner loop, this can be expressed as: Ni, Nk = a.shape[:axis], a.shape[axis+1:] for ii in ndindex(Ni): for kk in ndindex(Nk): out[ii + s_[...,] + kk] = func1d(arr[ii + s_[:,] + kk]) Parameters: **func1d** function (M,) -> (Nj…) This function should accept 1-D arrays. It is applied to 1-D slices of `arr` along the specified axis. **axis** integer Axis along which `arr` is sliced. **arr** ndarray (Ni…, M, Nk…) Input array. **args** any Additional arguments to `func1d`. **kwargs** any Additional named arguments to `func1d`. Returns: **out** ndarray (Ni…, Nj…, Nk…) The output array. The shape of `out` is identical to the shape of `arr`, except along the `axis` dimension. This axis is removed, and replaced with new dimensions equal to the shape of the return value of `func1d`. So if `func1d` returns a scalar `out` will have one fewer dimensions than `arr`. 
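A minimal sketch of the scalar-returning case described above (the sliced axis is removed; `peak_to_peak` is an illustrative helper, not part of NumPy):

```python
import numpy as np

def peak_to_peak(v):
    """Illustrative func1d: one scalar per 1-D slice."""
    return v.max() - v.min()

b = np.array([[1, 2, 3],
              [4, 8, 6]])
# func1d returns a scalar, so the sliced axis is removed:
# shape (2, 3) -> shape (2,), one value per row.
out = np.apply_along_axis(peak_to_peak, 1, b)
print(out)  # [2 4]
```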
See also [`apply_over_axes`](numpy.apply_over_axes#numpy.apply_over_axes "numpy.apply_over_axes") Apply a function repeatedly over multiple axes. #### Examples >>> import numpy as np >>> def my_func(a): ... """Average first and last element of a 1-D array""" ... return (a[0] + a[-1]) * 0.5 >>> b = np.array([[1,2,3], [4,5,6], [7,8,9]]) >>> np.apply_along_axis(my_func, 0, b) array([4., 5., 6.]) >>> np.apply_along_axis(my_func, 1, b) array([2., 5., 8.]) For a function that returns a 1D array, the number of dimensions in `outarr` is the same as `arr`. >>> b = np.array([[8,1,7], [4,3,9], [5,2,6]]) >>> np.apply_along_axis(sorted, 1, b) array([[1, 7, 8], [3, 4, 9], [2, 5, 6]]) For a function that returns a higher dimensional array, those dimensions are inserted in place of the `axis` dimension. >>> b = np.array([[1,2,3], [4,5,6], [7,8,9]]) >>> np.apply_along_axis(np.diag, -1, b) array([[[1, 0, 0], [0, 2, 0], [0, 0, 3]], [[4, 0, 0], [0, 5, 0], [0, 0, 6]], [[7, 0, 0], [0, 8, 0], [0, 0, 9]]]) # numpy.ma.apply_over_axes ma.apply_over_axes(_func_ , _a_ , _axes_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L473-L495) Apply a function repeatedly over multiple axes. `func` is called as `res = func(a, axis)`, where `axis` is the first element of `axes`. The result `res` of the function call must have either the same dimensions as `a` or one less dimension. If `res` has one less dimension than `a`, a dimension is inserted before `axis`. The call to `func` is then repeated for each axis in `axes`, with `res` as the first argument. Parameters: **func** function This function must take two arguments, `func(a, axis)`. **a** array_like Input array. **axes** array_like Axes over which `func` is applied; the elements must be integers. Returns: **apply_over_axis** ndarray The output array. The number of dimensions is the same as `a`, but the shape can be different. This depends on whether `func` changes the shape of its output with respect to its input. 
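The dimension-preserving behaviour described above can be seen with a plain reduction; a small sketch:

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)
# Each reduced axis is kept with length 1, so ndim is preserved:
# reducing axes 0 and 2 of a (2, 3, 4) array yields shape (1, 3, 1).
res = np.apply_over_axes(np.sum, a, [0, 2])
print(res.shape)  # (1, 3, 1)
```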
See also [`apply_along_axis`](numpy.apply_along_axis#numpy.apply_along_axis "numpy.apply_along_axis") Apply a function to 1-D slices of an array along the given axis. #### Examples >>> import numpy as np >>> a = np.ma.arange(24).reshape(2,3,4) >>> a[:,0,1] = np.ma.masked >>> a[:,1,:] = np.ma.masked >>> a masked_array( data=[[[0, --, 2, 3], [--, --, --, --], [8, 9, 10, 11]], [[12, --, 14, 15], [--, --, --, --], [20, 21, 22, 23]]], mask=[[[False, True, False, False], [ True, True, True, True], [False, False, False, False]], [[False, True, False, False], [ True, True, True, True], [False, False, False, False]]], fill_value=999999) >>> np.ma.apply_over_axes(np.ma.sum, a, [0,2]) masked_array( data=[[[46], [--], [124]]], mask=[[[False], [ True], [False]]], fill_value=999999) Tuple axis arguments to ufuncs are equivalent: >>> np.ma.sum(a, axis=(0,2)).reshape((1,-1,1)) masked_array( data=[[[46], [--], [124]]], mask=[[[False], [ True], [False]]], fill_value=999999) # numpy.ma.arange ma.arange([_start_ , ]_stop_ , [_step_ , ]_dtype=None_ , _*_ , _device=None_ , _like=None_)_= _ Return evenly spaced values within a given interval. `arange` can be called with a varying number of positional arguments: * `arange(stop)`: Values are generated within the half-open interval `[0, stop)` (in other words, the interval including `start` but excluding `stop`). * `arange(start, stop)`: Values are generated within the half-open interval `[start, stop)`. * `arange(start, stop, step)` Values are generated within the half-open interval `[start, stop)`, with spacing between values given by `step`. For integer arguments the function is roughly equivalent to the Python built- in [`range`](https://docs.python.org/3/library/stdtypes.html#range "\(in Python v3.13\)"), but returns an ndarray rather than a `range` instance. When using a non-integer step, such as 0.1, it is often better to use [`numpy.linspace`](numpy.linspace#numpy.linspace "numpy.linspace"). 
See the Warning sections below for more information. Parameters: **start** integer or real, optional Start of interval. The interval includes this value. The default start value is 0. **stop** integer or real End of interval. The interval does not include this value, except in some cases where `step` is not an integer and floating point round-off affects the length of `out`. **step** integer or real, optional Spacing between values. For any output `out`, this is the distance between two adjacent values, `out[i+1] - out[i]`. The default step size is 1. If `step` is specified as a positional argument, `start` must also be given. **dtype** dtype, optional The type of the output array. If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is not given, infer the data type from the other input arguments. **device** str, optional The device on which to place the created array. Default: `None`. For Array-API interoperability only, so must be `"cpu"` if passed. New in version 2.0.0. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: **arange** MaskedArray Array of evenly spaced values. For floating point arguments, the length of the result is `ceil((stop - start)/step)`. Because of floating point overflow, this rule may result in the last element of `out` being greater than `stop`. Warning The length of the output might not be numerically stable. Another stability issue is due to the internal implementation of [`numpy.arange`](numpy.arange#numpy.arange "numpy.arange"). The actual step value used to populate the array is `dtype(start + step) - dtype(start)` and not `step`.
Precision loss can occur here, due to casting or due to using floating points when `start` is much larger than `step`. This can lead to unexpected behaviour. For example: >>> np.arange(0, 5, 0.5, dtype=int) array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) >>> np.arange(-3, 3, 0.5, dtype=int) array([-3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8]) In such cases, the use of [`numpy.linspace`](numpy.linspace#numpy.linspace "numpy.linspace") should be preferred. The built-in [`range`](https://docs.python.org/3/library/stdtypes.html#range "\(in Python v3.13\)") generates [Python built-in integers that have arbitrary size](https://docs.python.org/3/c-api/long.html "\(in Python v3.13\)"), while [`numpy.arange`](numpy.arange#numpy.arange "numpy.arange") produces [`numpy.int32`](../arrays.scalars#numpy.int32 "numpy.int32") or [`numpy.int64`](../arrays.scalars#numpy.int64 "numpy.int64") numbers. This may result in incorrect results for large integer values: >>> power = 40 >>> modulo = 10000 >>> x1 = [(n ** power) % modulo for n in range(8)] >>> x2 = [(n ** power) % modulo for n in np.arange(8)] >>> print(x1) [0, 1, 7776, 8801, 6176, 625, 6576, 4001] # correct >>> print(x2) [0, 1, 7776, 7185, 0, 5969, 4816, 3361] # incorrect See also [`numpy.linspace`](numpy.linspace#numpy.linspace "numpy.linspace") Evenly spaced numbers with careful handling of endpoints. [`numpy.ogrid`](numpy.ogrid#numpy.ogrid "numpy.ogrid") Arrays of evenly spaced numbers in N-dimensions. [`numpy.mgrid`](numpy.mgrid#numpy.mgrid "numpy.mgrid") Grid-shaped arrays of evenly spaced numbers in N-dimensions. 
[How to create arrays with regularly-spaced values](../../user/how-to- partition#how-to-partition) #### Examples >>> import numpy as np >>> np.arange(3) array([0, 1, 2]) >>> np.arange(3.0) array([ 0., 1., 2.]) >>> np.arange(3,7) array([3, 4, 5, 6]) >>> np.arange(3,7,2) array([3, 5]) # numpy.ma.argmax ma.argmax(_self_ , _axis =None_, _fill_value =None_, _out =None_)_= _ Returns array of indices of the maximum values along the given axis. Masked values are treated as if they had the value fill_value. Parameters: **axis**{None, integer} If None, the index is into the flattened array, otherwise along the specified axis **fill_value** scalar or None, optional Value used to fill in the masked values. If None, the output of maximum_fill_value(self._data) is used instead. **out**{None, array}, optional Array into which the result can be placed. Its type is preserved and it must be of the right shape to hold the output. Returns: **index_array**{integer_array} #### Examples >>> import numpy as np >>> a = np.arange(6).reshape(2,3) >>> a.argmax() 5 >>> a.argmax(0) array([1, 1, 1]) >>> a.argmax(1) array([2, 2]) # numpy.ma.argmin ma.argmin(_self_ , _axis =None_, _fill_value =None_, _out =None_)_= _ Return array of indices to the minimum values along the given axis. Parameters: **axis**{None, integer} If None, the index is into the flattened array, otherwise along the specified axis **fill_value** scalar or None, optional Value used to fill in the masked values. If None, the output of minimum_fill_value(self._data) is used instead. **out**{None, array}, optional Array into which the result can be placed. Its type is preserved and it must be of the right shape to hold the output. Returns: ndarray or scalar If multi-dimension input, returns a new ndarray of indices to the minimum values along the given axis. Otherwise, returns a scalar of index to the minimum values along the given axis. 
#### Examples >>> import numpy as np >>> x = np.ma.array(np.arange(4), mask=[1,1,0,0]) >>> x.shape = (2,2) >>> x masked_array( data=[[--, --], [2, 3]], mask=[[ True, True], [False, False]], fill_value=999999) >>> x.argmin(axis=0, fill_value=-1) array([0, 0]) >>> x.argmin(axis=0, fill_value=9) array([1, 1]) # numpy.ma.argsort ma.argsort(_a_ , _axis= _, _kind=None_ , _order=None_ , _endwith=True_ , _fill_value=None_ , _*_ , _stable=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7250-L7263) Return an ndarray of indices that sort the array along the specified axis. Masked values are filled beforehand to `fill_value`. Parameters: **axis** int, optional Axis along which to sort. If None, the default, the flattened array is used. **kind**{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional The sorting algorithm used. **order** list, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. Not all fields need be specified. **endwith**{True, False}, optional Whether missing values (if any) should be treated as the largest values (True) or the smallest values (False) When the array contains unmasked values at the same extremes of the datatype, the ordering of these values and the masked values is undefined. **fill_value** scalar or None, optional Value used internally for the masked values. If `fill_value` is not None, it supersedes `endwith`. **stable** bool, optional Only for compatibility with `np.argsort`. Ignored. Returns: **index_array** ndarray, int Array of indices that sort `a` along the specified axis. In other words, `a[index_array]` yields a sorted `a`. See also [`ma.MaskedArray.sort`](numpy.ma.maskedarray.sort#numpy.ma.MaskedArray.sort "numpy.ma.MaskedArray.sort") Describes sorting algorithms used. [`lexsort`](numpy.lexsort#numpy.lexsort "numpy.lexsort") Indirect stable sort with multiple keys. 
[`numpy.ndarray.sort`](numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort") Inplace sort. #### Notes See [`sort`](numpy.sort#numpy.sort "numpy.sort") for notes on the different sorting algorithms. #### Examples >>> import numpy as np >>> a = np.ma.array([3,2,1], mask=[False, False, True]) >>> a masked_array(data=[3, 2, --], mask=[False, False, True], fill_value=999999) >>> a.argsort() array([1, 0, 2]) # numpy.ma.around ma.around _= _ Round an array to the given number of decimals. [`around`](numpy.around#numpy.around "numpy.around") is an alias of [`round`](numpy.round#numpy.round "numpy.round"). See also [`ndarray.round`](numpy.ndarray.round#numpy.ndarray.round "numpy.ndarray.round") equivalent method [`round`](numpy.round#numpy.round "numpy.round") alias for this function [`ceil`](numpy.ceil#numpy.ceil "numpy.ceil"), [`fix`](numpy.fix#numpy.fix "numpy.fix"), [`floor`](numpy.floor#numpy.floor "numpy.floor"), [`rint`](numpy.rint#numpy.rint "numpy.rint"), [`trunc`](numpy.trunc#numpy.trunc "numpy.trunc") # numpy.ma.array ma.array(_data_ , _dtype =None_, _copy =False_, _order =None_, _mask =np.False__, _fill_value =None_, _keep_mask =True_, _hard_mask =False_, _shrink =True_, _subok =True_, _ndmin =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L6905-L6918) An array class with possibly masked values. Masked values of True exclude the corresponding element from any computation. Construction: x = MaskedArray(data, mask=nomask, dtype=None, copy=False, subok=True, ndmin=0, fill_value=None, keep_mask=True, hard_mask=None, shrink=True, order=None) Parameters: **data** array_like Input data. **mask** sequence, optional Mask. Must be convertible to an array of booleans with the same shape as `data`. True indicates a masked (i.e. invalid) data. **dtype** dtype, optional Data type of the output. If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is None, the type of the data argument (`data.dtype`) is used. 
If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is not None and different from `data.dtype`, a copy is performed. **copy** bool, optional Whether to copy the input data (True), or to use a reference instead. Default is False. **subok** bool, optional Whether to return a subclass of [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") if possible (True) or a plain [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray"). Default is True. **ndmin** int, optional Minimum number of dimensions. Default is 0. **fill_value** scalar, optional Value used to fill in the masked values when necessary. If None, a default based on the data-type is used. **keep_mask** bool, optional Whether to combine `mask` with the mask of the input data, if any (True), or to use only `mask` for the output (False). Default is True. **hard_mask** bool, optional Whether to use a hard mask or not. With a hard mask, masked values cannot be unmasked. Default is False. **shrink** bool, optional Whether to force compression of an empty mask. Default is True. **order**{‘C’, ‘F’, ‘A’}, optional Specify the order of the array. If order is ‘C’, then the array will be in C-contiguous order (last-index varies the fastest). If order is ‘F’, then the returned array will be in Fortran-contiguous order (first-index varies the fastest). If order is ‘A’ (default), then the returned array may be in any order (either C-, Fortran-contiguous, or even discontiguous), unless a copy is required, in which case it will be C-contiguous. #### Examples >>> import numpy as np The `mask` can be initialized with an array of boolean values with the same shape as `data`. >>> data = np.arange(6).reshape((2, 3)) >>> np.ma.MaskedArray(data, mask=[[False, True, False], ... 
[False, False, True]]) masked_array( data=[[0, --, 2], [3, 4, --]], mask=[[False, True, False], [False, False, True]], fill_value=999999) Alternatively, the `mask` can be initialized to homogeneous boolean array with the same shape as `data` by passing in a scalar boolean value: >>> np.ma.MaskedArray(data, mask=False) masked_array( data=[[0, 1, 2], [3, 4, 5]], mask=[[False, False, False], [False, False, False]], fill_value=999999) >>> np.ma.MaskedArray(data, mask=True) masked_array( data=[[--, --, --], [--, --, --]], mask=[[ True, True, True], [ True, True, True]], fill_value=999999, dtype=int64) Note The recommended practice for initializing `mask` with a scalar boolean value is to use `True`/`False` rather than `np.True_`/`np.False_`. The reason is [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask") is represented internally as `np.False_`. >>> np.False_ is np.ma.nomask True # numpy.ma.asanyarray ma.asanyarray(_a_ , _dtype =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L8646-L8693) Convert the input to a masked array, conserving subclasses. If `a` is a subclass of [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray"), its class is conserved. No copy is performed if the input is already an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). Parameters: **a** array_like Input data, in any form that can be converted to an array. **dtype** dtype, optional By default, the data-type is inferred from the input data. **order**{‘C’, ‘F’}, optional Whether to use row-major (‘C’) or column-major (‘FORTRAN’) memory representation. Default is ‘C’. Returns: **out** MaskedArray MaskedArray interpretation of `a`. See also [`asarray`](numpy.asarray#numpy.asarray "numpy.asarray") Similar to [`asanyarray`](numpy.asanyarray#numpy.asanyarray "numpy.asanyarray"), but does not conserve subclass. 
#### Examples >>> import numpy as np >>> x = np.arange(10.).reshape(2, 5) >>> x array([[0., 1., 2., 3., 4.], [5., 6., 7., 8., 9.]]) >>> np.ma.asanyarray(x) masked_array( data=[[0., 1., 2., 3., 4.], [5., 6., 7., 8., 9.]], mask=False, fill_value=1e+20) >>> type(np.ma.asanyarray(x)) <class 'numpy.ma.core.MaskedArray'> # numpy.ma.asarray ma.asarray(_a_, _dtype=None_, _order=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L8596-L8643) Convert the input to a masked array of the given data-type. No copy is performed if the input is already an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). If `a` is a subclass of [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray"), a base class [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") is returned. Parameters: **a** array_like Input data, in any form that can be converted to a masked array. This includes lists, lists of tuples, tuples, tuples of tuples, tuples of lists, ndarrays and masked arrays. **dtype** dtype, optional By default, the data-type is inferred from the input data. **order**{‘C’, ‘F’}, optional Whether to use row-major (‘C’) or column-major (‘FORTRAN’) memory representation. Default is ‘C’. Returns: **out** MaskedArray Masked array interpretation of `a`. See also [`asanyarray`](numpy.asanyarray#numpy.asanyarray "numpy.asanyarray") Similar to [`asarray`](numpy.asarray#numpy.asarray "numpy.asarray"), but conserves subclasses. #### Examples >>> import numpy as np >>> x = np.arange(10.).reshape(2, 5) >>> x array([[0., 1., 2., 3., 4.], [5., 6., 7., 8., 9.]]) >>> np.ma.asarray(x) masked_array( data=[[0., 1., 2., 3., 4.], [5., 6., 7., 8., 9.]], mask=False, fill_value=1e+20) >>> type(np.ma.asarray(x)) <class 'numpy.ma.core.MaskedArray'> # numpy.ma.atleast_1d ma.atleast_1d Convert inputs to arrays with at least one dimension. Scalar inputs are converted to 1-dimensional arrays, whilst higher-dimensional inputs are preserved.
Parameters:

**arys1, arys2, …** array_like One or more input arrays.

Returns:

**ret** ndarray An array, or tuple of arrays, each with `a.ndim >= 1`. Copies are made only if necessary.

See also

[`atleast_2d`](numpy.atleast_2d#numpy.atleast_2d "numpy.atleast_2d"), [`atleast_3d`](numpy.atleast_3d#numpy.atleast_3d "numpy.atleast_3d")

#### Notes

The function is applied to both the _data and the _mask, if any.

#### Examples

    >>> import numpy as np
    >>> np.atleast_1d(1.0)
    array([1.])
    >>> x = np.arange(9.0).reshape(3,3)
    >>> np.atleast_1d(x)
    array([[0., 1., 2.],
           [3., 4., 5.],
           [6., 7., 8.]])
    >>> np.atleast_1d(x) is x
    True
    >>> np.atleast_1d(1, [3, 4])
    (array([1]), array([3, 4]))

# numpy.ma.atleast_2d

ma.atleast_2d _= <numpy.ma.extras._fromnxfunction_allargs object>_

View inputs as arrays with at least two dimensions.

Parameters:

**arys1, arys2, …** array_like One or more array-like sequences. Non-array inputs are converted to arrays. Arrays that already have two or more dimensions are preserved.

Returns:

**res, res2, …** ndarray An array, or tuple of arrays, each with `a.ndim >= 2`. Copies are avoided where possible, and views with two or more dimensions are returned.

See also

[`atleast_1d`](numpy.atleast_1d#numpy.atleast_1d "numpy.atleast_1d"), [`atleast_3d`](numpy.atleast_3d#numpy.atleast_3d "numpy.atleast_3d")

#### Notes

The function is applied to both the _data and the _mask, if any.

#### Examples

    >>> import numpy as np
    >>> np.atleast_2d(3.0)
    array([[3.]])
    >>> x = np.arange(3.0)
    >>> np.atleast_2d(x)
    array([[0., 1., 2.]])
    >>> np.atleast_2d(x).base is x
    True
    >>> np.atleast_2d(1, [1, 2], [[1, 2]])
    (array([[1]]), array([[1, 2]]), array([[1, 2]]))

# numpy.ma.atleast_3d

ma.atleast_3d _= <numpy.ma.extras._fromnxfunction_allargs object>_

View inputs as arrays with at least three dimensions.

Parameters:

**arys1, arys2, …** array_like One or more array-like sequences. Non-array inputs are converted to arrays. Arrays that already have three or more dimensions are preserved.

Returns:

**res1, res2, …** ndarray An array, or tuple of arrays, each with `a.ndim >= 3`.
Copies are avoided where possible, and views with three or more dimensions are returned. For example, a 1-D array of shape `(N,)` becomes a view of shape `(1, N, 1)`, and a 2-D array of shape `(M, N)` becomes a view of shape `(M, N, 1)`.

See also

[`atleast_1d`](numpy.atleast_1d#numpy.atleast_1d "numpy.atleast_1d"), [`atleast_2d`](numpy.atleast_2d#numpy.atleast_2d "numpy.atleast_2d")

#### Notes

The function is applied to both the _data and the _mask, if any.

#### Examples

    >>> import numpy as np
    >>> np.atleast_3d(3.0)
    array([[[3.]]])
    >>> x = np.arange(3.0)
    >>> np.atleast_3d(x).shape
    (1, 3, 1)
    >>> x = np.arange(12.0).reshape(4,3)
    >>> np.atleast_3d(x).shape
    (4, 3, 1)
    >>> np.atleast_3d(x).base is x.base  # x is a reshape, so not base itself
    True
    >>> for arr in np.atleast_3d([1, 2], [[1, 2]], [[[1, 2]]]):
    ...     print(arr, arr.shape)
    ...
    [[[1]
      [2]]] (1, 2, 1)
    [[[1]
      [2]]] (1, 2, 1)
    [[[1 2]]] (1, 1, 2)

# numpy.ma.average

ma.average(_a_ , _axis=None_ , _weights=None_ , _returned=False_ , _*_ , _keepdims=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L548-L713)

Return the weighted average of array over the given axis.

Parameters:

**a** array_like Data to be averaged. Masked entries are not taken into account in the computation.

**axis** None or int or tuple of ints, optional Axis or axes along which to average `a`. The default, `axis=None`, will average over all of the elements of the input array. If axis is a tuple of ints, averaging is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before.

**weights** array_like, optional An array of weights associated with the values in `a`. Each value in `a` contributes to the average according to its associated weight. The array of weights must be the same shape as `a` if no axis is specified, otherwise the weights must have dimensions and shape consistent with `a` along the specified axis.
If `weights=None`, then all data in `a` are assumed to have a weight equal to one. The calculation is:

    avg = sum(a * weights) / sum(weights)

where the sum is over all included elements. The only constraint on the values of `weights` is that `sum(weights)` must not be 0.

**returned** bool, optional Flag indicating whether a tuple `(result, sum of weights)` should be returned as output (True), or just the result (False). Default is False.

**keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `a`. _Note:_ `keepdims` will not work with instances of [`numpy.matrix`](numpy.matrix#numpy.matrix "numpy.matrix") or other classes whose methods do not support `keepdims`. New in version 1.23.0.

Returns:

**average, [sum_of_weights]** (tuple of) scalar or MaskedArray The average along the specified axis. When returned is `True`, return a tuple with the average as the first element and the sum of the weights as the second element. The return type is `np.float64` if `a` is of integer type and floats smaller than [`float64`](../arrays.scalars#numpy.float64 "numpy.float64"), or the input data-type, otherwise. If returned, `sum_of_weights` is always [`float64`](../arrays.scalars#numpy.float64 "numpy.float64").

Raises:

ZeroDivisionError When all weights along axis are zero.

TypeError When `weights` does not have the same shape as `a`, and `axis=None`.

ValueError When `weights` does not have dimensions and shape consistent with `a` along specified `axis`.
#### Examples >>> import numpy as np >>> a = np.ma.array([1., 2., 3., 4.], mask=[False, False, True, True]) >>> np.ma.average(a, weights=[3, 1, 0, 0]) 1.25 >>> x = np.ma.arange(6.).reshape(3, 2) >>> x masked_array( data=[[0., 1.], [2., 3.], [4., 5.]], mask=False, fill_value=1e+20) >>> data = np.arange(8).reshape((2, 2, 2)) >>> data array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) >>> np.ma.average(data, axis=(0, 1), weights=[[1./4, 3./4], [1., 1./2]]) masked_array(data=[3.4, 4.4], mask=[False, False], fill_value=1e+20) >>> np.ma.average(data, axis=0, weights=[[1./4, 3./4], [1., 1./2]]) Traceback (most recent call last): ... ValueError: Shape of weights must be consistent with shape of a along specified axis. >>> avg, sumweights = np.ma.average(x, axis=0, weights=[1, 2, 3], ... returned=True) >>> avg masked_array(data=[2.6666666666666665, 3.6666666666666665], mask=[False, False], fill_value=1e+20) With `keepdims=True`, the following result has shape (3, 1). >>> np.ma.average(x, axis=1, keepdims=True) masked_array( data=[[0.5], [2.5], [4.5]], mask=False, fill_value=1e+20) # numpy.ma.choose ma.choose(_indices_ , _choices_ , _out =None_, _mode ='raise'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L8052-L8125) Use an index array to construct a new array from a list of choices. Given an array of integers and a list of n choice arrays, this method will create a new array that merges each of the choice arrays. Where a value in `index` is i, the new array will have the value that choices[i] contains in the same place. Parameters: **indices** ndarray of ints This array must contain integers in `[0, n-1]`, where n is the number of choices. **choices** sequence of arrays Choice arrays. The index array and all of the choices should be broadcastable to the same shape. **out** array, optional If provided, the result will be inserted into this array. It should be of the appropriate shape and [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"). 
**mode**{‘raise’, ‘wrap’, ‘clip’}, optional Specifies how out-of-bounds indices will behave.

* ‘raise’ : raise an error
* ‘wrap’ : wrap around
* ‘clip’ : clip to the range

Returns:

**merged_array** array

See also

[`choose`](numpy.choose#numpy.choose "numpy.choose") equivalent function

#### Examples

    >>> import numpy as np
    >>> choice = np.array([[1,1,1], [2,2,2], [3,3,3]])
    >>> a = np.array([2, 1, 0])
    >>> np.ma.choose(a, choice)
    masked_array(data=[3, 2, 1], mask=False, fill_value=999999)

# numpy.ma.clip

ma.clip _= <numpy.ma.core._convert2ma object>_

Clip (limit) the values in an array. Given an interval, values outside the interval are clipped to the interval edges. For example, if an interval of `[0, 1]` is specified, values smaller than 0 become 0, and values larger than 1 become 1. Equivalent to but faster than `np.minimum(a_max, np.maximum(a, a_min))`. No check is performed to ensure `a_min < a_max`.

Parameters:

**a** array_like Array containing elements to clip.

**a_min, a_max** array_like or None Minimum and maximum value. If `None`, clipping is not performed on the corresponding edge. If both `a_min` and `a_max` are `None`, the elements of the returned array stay the same. Both are broadcasted against `a`.

**out** ndarray, optional The results will be placed in this array. It may be the input array for in-place clipping. `out` must be of the right shape to hold the output. Its type is preserved.

**min, max** array_like or None Array API compatible alternatives for `a_min` and `a_max` arguments. Either `a_min` and `a_max` or `min` and `max` can be passed at the same time. Default: `None`. New in version 2.1.0.

****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns:

**clipped_array** MaskedArray An array with the elements of `a`, but where values < `a_min` are replaced with `a_min`, and those > `a_max` with `a_max`.
See also [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) #### Notes When `a_min` is greater than `a_max`, [`clip`](numpy.clip#numpy.clip "numpy.clip") returns an array in which all values are equal to `a_max`, as shown in the second example. #### Examples >>> import numpy as np >>> a = np.arange(10) >>> a array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.clip(a, 1, 8) array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8]) >>> np.clip(a, 8, 1) array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) >>> np.clip(a, 3, 6, out=a) array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6]) >>> a array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6]) >>> a = np.arange(10) >>> a array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.clip(a, [3, 4, 1, 1, 1, 4, 4, 4, 4, 4], 8) array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8]) # numpy.ma.clump_masked ma.clump_masked(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L2232-L2265) Returns a list of slices corresponding to the masked clumps of a 1-D array. (A “clump” is defined as a contiguous region of the array). Parameters: **a** ndarray A one-dimensional masked array. Returns: **slices** list of slice The list of slices, one for each continuous region of masked elements in `a`. 
See also [`flatnotmasked_edges`](numpy.ma.flatnotmasked_edges#numpy.ma.flatnotmasked_edges "numpy.ma.flatnotmasked_edges"), [`flatnotmasked_contiguous`](numpy.ma.flatnotmasked_contiguous#numpy.ma.flatnotmasked_contiguous "numpy.ma.flatnotmasked_contiguous"), [`notmasked_edges`](numpy.ma.notmasked_edges#numpy.ma.notmasked_edges "numpy.ma.notmasked_edges") [`notmasked_contiguous`](numpy.ma.notmasked_contiguous#numpy.ma.notmasked_contiguous "numpy.ma.notmasked_contiguous"), [`clump_unmasked`](numpy.ma.clump_unmasked#numpy.ma.clump_unmasked "numpy.ma.clump_unmasked") #### Examples >>> import numpy as np >>> a = np.ma.masked_array(np.arange(10)) >>> a[[0, 1, 2, 6, 8, 9]] = np.ma.masked >>> np.ma.clump_masked(a) [slice(0, 3, None), slice(6, 7, None), slice(8, 10, None)] # numpy.ma.clump_unmasked ma.clump_unmasked(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L2196-L2229) Return list of slices corresponding to the unmasked clumps of a 1-D array. (A “clump” is defined as a contiguous region of the array). Parameters: **a** ndarray A one-dimensional masked array. Returns: **slices** list of slice The list of slices, one for each continuous region of unmasked elements in `a`. 
See also

[`flatnotmasked_edges`](numpy.ma.flatnotmasked_edges#numpy.ma.flatnotmasked_edges "numpy.ma.flatnotmasked_edges"), [`flatnotmasked_contiguous`](numpy.ma.flatnotmasked_contiguous#numpy.ma.flatnotmasked_contiguous "numpy.ma.flatnotmasked_contiguous"), [`notmasked_edges`](numpy.ma.notmasked_edges#numpy.ma.notmasked_edges "numpy.ma.notmasked_edges"), [`notmasked_contiguous`](numpy.ma.notmasked_contiguous#numpy.ma.notmasked_contiguous "numpy.ma.notmasked_contiguous"), [`clump_masked`](numpy.ma.clump_masked#numpy.ma.clump_masked "numpy.ma.clump_masked")

#### Examples

    >>> import numpy as np
    >>> a = np.ma.masked_array(np.arange(10))
    >>> a[[0, 1, 2, 6, 8, 9]] = np.ma.masked
    >>> np.ma.clump_unmasked(a)
    [slice(3, 6, None), slice(7, 8, None)]

# numpy.ma.column_stack

ma.column_stack _= <numpy.ma.extras._fromnxfunction_seq object>_

Stack 1-D arrays as columns into a 2-D array. Take a sequence of 1-D arrays and stack them as columns to make a single 2-D array. 2-D arrays are stacked as-is, just like with [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack"). 1-D arrays are turned into 2-D columns first.

Parameters:

**tup** sequence of 1-D or 2-D arrays. Arrays to stack. All of them must have the same first dimension.

Returns:

**stacked** 2-D array The array formed by stacking the given arrays.

See also

[`stack`](numpy.stack#numpy.stack "numpy.stack"), [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack"), [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack"), [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate")

#### Notes

The function is applied to both the _data and the _mask, if any.

#### Examples

    >>> import numpy as np
    >>> a = np.array((1,2,3))
    >>> b = np.array((2,3,4))
    >>> np.column_stack((a,b))
    array([[1, 2],
           [2, 3],
           [3, 4]])

# numpy.ma.common_fill_value

ma.common_fill_value(_a_ , _b_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L591-L621)

Return the common filling value of two masked arrays, if any.
If `a.fill_value == b.fill_value`, return the fill value, otherwise return None. Parameters: **a, b** MaskedArray The masked arrays for which to compare fill values. Returns: **fill_value** scalar or None The common fill value, or None. #### Examples >>> import numpy as np >>> x = np.ma.array([0, 1.], fill_value=3) >>> y = np.ma.array([0, 1.], fill_value=3) >>> np.ma.common_fill_value(x, y) 3.0 # numpy.ma.compress_cols ma.compress_cols(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L1032-L1070) Suppress whole columns of a 2-D array that contain masked values. This is equivalent to `np.ma.compress_rowcols(a, 1)`, see [`compress_rowcols`](numpy.ma.compress_rowcols#numpy.ma.compress_rowcols "numpy.ma.compress_rowcols") for details. Parameters: **x** array_like, MaskedArray The array to operate on. If not a MaskedArray instance (or if no array elements are masked), `x` is interpreted as a MaskedArray with `mask` set to [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask"). Must be a 2D array. Returns: **compressed_array** ndarray The compressed array. See also [`compress_rowcols`](numpy.ma.compress_rowcols#numpy.ma.compress_rowcols "numpy.ma.compress_rowcols") #### Examples >>> import numpy as np >>> a = np.ma.array(np.arange(9).reshape(3, 3), mask=[[1, 0, 0], ... [1, 0, 0], ... [0, 0, 0]]) >>> np.ma.compress_cols(a) array([[1, 2], [4, 5], [7, 8]]) # numpy.ma.compress_nd ma.compress_nd(_x_ , _axis =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L879-L934) Suppress slices from multiple dimensions which contain masked values. Parameters: **x** array_like, MaskedArray The array to operate on. If not a MaskedArray instance (or if no array elements are masked), `x` is interpreted as a MaskedArray with `mask` set to [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask"). 
**axis** tuple of ints or int, optional Which dimensions to suppress slices from can be configured with this parameter. - If axis is a tuple of ints, those are the axes to suppress slices from. - If axis is an int, then that is the only axis to suppress slices from. - If axis is None, all axis are selected. Returns: **compress_array** ndarray The compressed array. #### Examples >>> import numpy as np >>> arr = [[1, 2], [3, 4]] >>> mask = [[0, 1], [0, 0]] >>> x = np.ma.array(arr, mask=mask) >>> np.ma.compress_nd(x, axis=0) array([[3, 4]]) >>> np.ma.compress_nd(x, axis=1) array([[1], [3]]) >>> np.ma.compress_nd(x) array([[3]]) # numpy.ma.compress_rowcols ma.compress_rowcols(_x_ , _axis =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L937-L990) Suppress the rows and/or columns of a 2-D array that contain masked values. The suppression behavior is selected with the `axis` parameter. * If axis is None, both rows and columns are suppressed. * If axis is 0, only rows are suppressed. * If axis is 1 or -1, only columns are suppressed. Parameters: **x** array_like, MaskedArray The array to operate on. If not a MaskedArray instance (or if no array elements are masked), `x` is interpreted as a MaskedArray with `mask` set to [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask"). Must be a 2D array. **axis** int, optional Axis along which to perform the operation. Default is None. Returns: **compressed_array** ndarray The compressed array. #### Examples >>> import numpy as np >>> x = np.ma.array(np.arange(9).reshape(3, 3), mask=[[1, 0, 0], ... [1, 0, 0], ... 
[0, 0, 0]]) >>> x masked_array( data=[[--, 1, 2], [--, 4, 5], [6, 7, 8]], mask=[[ True, False, False], [ True, False, False], [False, False, False]], fill_value=999999) >>> np.ma.compress_rowcols(x) array([[7, 8]]) >>> np.ma.compress_rowcols(x, 0) array([[6, 7, 8]]) >>> np.ma.compress_rowcols(x, 1) array([[1, 2], [4, 5], [7, 8]]) # numpy.ma.compress_rows ma.compress_rows(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L993-L1029) Suppress whole rows of a 2-D array that contain masked values. This is equivalent to `np.ma.compress_rowcols(a, 0)`, see [`compress_rowcols`](numpy.ma.compress_rowcols#numpy.ma.compress_rowcols "numpy.ma.compress_rowcols") for details. Parameters: **x** array_like, MaskedArray The array to operate on. If not a MaskedArray instance (or if no array elements are masked), `x` is interpreted as a MaskedArray with `mask` set to [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask"). Must be a 2D array. Returns: **compressed_array** ndarray The compressed array. See also [`compress_rowcols`](numpy.ma.compress_rowcols#numpy.ma.compress_rowcols "numpy.ma.compress_rowcols") #### Examples >>> import numpy as np >>> a = np.ma.array(np.arange(9).reshape(3, 3), mask=[[1, 0, 0], ... [1, 0, 0], ... [0, 0, 0]]) >>> np.ma.compress_rows(a) array([[6, 7, 8]]) # numpy.ma.compressed ma.compressed(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7309-L7345) Return all the non-masked data as a 1-D array. This function is equivalent to calling the “compressed” method of a [`ma.MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray"), see [`ma.MaskedArray.compressed`](numpy.ma.maskedarray.compressed#numpy.ma.MaskedArray.compressed "numpy.ma.MaskedArray.compressed") for details. See also [`ma.MaskedArray.compressed`](numpy.ma.maskedarray.compressed#numpy.ma.MaskedArray.compressed "numpy.ma.MaskedArray.compressed") Equivalent method. 
#### Examples

Create an array with negative values masked:

    >>> import numpy as np
    >>> x = np.array([[1, -1, 0], [2, -1, 3], [7, 4, -1]])
    >>> masked_x = np.ma.masked_array(x, mask=x < 0)
    >>> masked_x
    masked_array(
      data=[[1, --, 0],
            [2, --, 3],
            [7, 4, --]],
      mask=[[False, True, False],
            [False, True, False],
            [False, False, True]],
      fill_value=999999)

Compress the masked array into a 1-D array of non-masked values:

    >>> np.ma.compressed(masked_x)
    array([1, 0, 2, 3, 7, 4])

# numpy.ma.concatenate

ma.concatenate(_arrays_ , _axis =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7348-L7406)

Concatenate a sequence of arrays along the given axis.

Parameters:

**arrays** sequence of array_like The arrays must have the same shape, except in the dimension corresponding to `axis` (the first, by default).

**axis** int, optional The axis along which the arrays will be joined. Default is 0.

Returns:

**result** MaskedArray The concatenated array with any masked entries preserved.

See also

[`numpy.concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Equivalent function in the top-level NumPy module.

#### Examples

    >>> import numpy as np
    >>> import numpy.ma as ma
    >>> a = ma.arange(3)
    >>> a[1] = ma.masked
    >>> b = ma.arange(2, 5)
    >>> a
    masked_array(data=[0, --, 2], mask=[False, True, False], fill_value=999999)
    >>> b
    masked_array(data=[2, 3, 4], mask=False, fill_value=999999)
    >>> ma.concatenate([a, b])
    masked_array(data=[0, --, 2, 2, 3, 4],
                 mask=[False, True, False, False, False, False],
           fill_value=999999)

# numpy.ma.conjugate

ma.conjugate(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) _= <numpy.ma.core._MaskedUnaryOperation object>_

Return the complex conjugate, element-wise. The complex conjugate of a complex number is obtained by changing the sign of its imaginary part.

Parameters:

**x** array_like Input value.
**out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns:

**y** ndarray The complex conjugate of `x`, with the same dtype as `x`. This is a scalar if `x` is a scalar.

#### Notes

[`conj`](numpy.conj#numpy.conj "numpy.conj") is an alias for [`conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate"):

    >>> np.conj is np.conjugate
    True

#### Examples

    >>> import numpy as np
    >>> np.conjugate(1+2j)
    (1-2j)
    >>> x = np.eye(2) + 1j * np.eye(2)
    >>> np.conjugate(x)
    array([[ 1.-1.j, 0.-0.j],
           [ 0.-0.j, 1.-1.j]])

# numpy.ma.convolve

ma.convolve(_a_ , _v_ , _mode ='full'_, _propagate_mask =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L8400-L8425)

Returns the discrete, linear convolution of two one-dimensional sequences.

Parameters:

**a, v** array_like Input sequences.

**mode**{‘valid’, ‘same’, ‘full’}, optional Refer to the `np.convolve` docstring.

**propagate_mask** bool If True, then if any masked element is included in the sum for a result element, then the result is masked. If False, then the result element is only masked if no non-masked cells contribute towards it.

Returns:

**out** MaskedArray Discrete, linear convolution of `a` and `v`.
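Unlike most entries on this page, `ma.convolve` carries no doctest example. A minimal sketch (plain `numpy.ma` calls, array values chosen here only for illustration) of how `propagate_mask` changes the result:

```python
import numpy as np

a = np.ma.array([1, 2, 3], mask=[False, True, False])  # middle value masked
v = np.ma.array([1, 1])

# propagate_mask=True (the default): any masked contributor masks that
# output slot, so the two slots overlapping the masked value are masked.
full = np.ma.convolve(a, v, mode='full')
print(full)   # [1 -- -- 3]

# propagate_mask=False: masked entries are filled with 0 and excluded; an
# output slot is masked only when every contributor is masked.
loose = np.ma.convolve(a, v, mode='full', propagate_mask=False)
print(loose)  # [1 1 3 3]
```

With `propagate_mask=False` the masked value simply drops out of each sum, which is often what you want when treating masked entries as missing rather than invalid.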
See also

[`numpy.convolve`](numpy.convolve#numpy.convolve "numpy.convolve") Equivalent function in the top-level NumPy module.

# numpy.ma.copy

ma.copy(_self_ , _*args_ , _**params_) _= <numpy.ma.core._frommethod object>_

Equivalent to `a.copy(order='C')`. Return a copy of the array.

Parameters:

**order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout of the copy. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. (Note that this function and [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") are very similar but have different default values for their order= arguments, and this function always passes sub-classes through.)

See also

[`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") Similar function with different default behavior

[`numpy.copyto`](numpy.copyto#numpy.copyto "numpy.copyto")

#### Notes

This function is the preferred method for creating an array copy. The function [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") is similar, but it defaults to using order ‘K’, and will not pass sub-classes through by default.

#### Examples

    >>> import numpy as np
    >>> x = np.array([[1,2,3],[4,5,6]], order='F')
    >>> y = x.copy()
    >>> x.fill(0)
    >>> x
    array([[0, 0, 0],
           [0, 0, 0]])
    >>> y
    array([[1, 2, 3],
           [4, 5, 6]])
    >>> y.flags['C_CONTIGUOUS']
    True

For arrays containing Python objects (e.g. dtype=object), the copy is a shallow one.
The new array will contain the same object which may lead to surprises if that object can be modified (is mutable):

    >>> a = np.array([1, 'm', [2, 3, 4]], dtype=object)
    >>> b = a.copy()
    >>> b[2][0] = 10
    >>> a
    array([1, 'm', list([10, 3, 4])], dtype=object)

To ensure all elements within an `object` array are copied, use [`copy.deepcopy`](https://docs.python.org/3/library/copy.html#copy.deepcopy "\(in Python v3.13\)"):

    >>> import copy
    >>> a = np.array([1, 'm', [2, 3, 4]], dtype=object)
    >>> c = copy.deepcopy(a)
    >>> c[2][0] = 10
    >>> c
    array([1, 'm', list([10, 3, 4])], dtype=object)
    >>> a
    array([1, 'm', list([2, 3, 4])], dtype=object)

# numpy.ma.corrcoef

ma.corrcoef(_x_ , _y=None_ , _rowvar=True_ , _bias=<no value>_ , _allow_masked=True_ , _ddof=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L1714-L1788)

Return Pearson product-moment correlation coefficients. Except for the handling of missing data this function does the same as [`numpy.corrcoef`](numpy.corrcoef#numpy.corrcoef "numpy.corrcoef"). For more details and examples, see [`numpy.corrcoef`](numpy.corrcoef#numpy.corrcoef "numpy.corrcoef").

Parameters:

**x** array_like A 1-D or 2-D array containing multiple variables and observations. Each row of `x` represents a variable, and each column a single observation of all those variables. Also see `rowvar` below.

**y** array_like, optional An additional set of variables and observations. `y` has the same shape as `x`.

**rowvar** bool, optional If `rowvar` is True (default), then each row represents a variable, with observations in the columns. Otherwise, the relationship is transposed: each column represents a variable, while the rows contain observations.

**bias** _NoValue, optional Has no effect, do not use. Deprecated since version 1.10.0.

**allow_masked** bool, optional If True, masked values are propagated pair-wise: if a value is masked in `x`, the corresponding value is masked in `y`. If False, raises an exception.
Because `bias` is deprecated, this argument needs to be treated as keyword only to avoid a warning. **ddof** _NoValue, optional Has no effect, do not use. Deprecated since version 1.10.0. See also [`numpy.corrcoef`](numpy.corrcoef#numpy.corrcoef "numpy.corrcoef") Equivalent function in top-level NumPy module. [`cov`](numpy.cov#numpy.cov "numpy.cov") Estimate the covariance matrix. #### Notes This function accepts but discards arguments `bias` and `ddof`. This is for backwards compatibility with previous versions of this function. These arguments had no effect on the return values of the function and can be safely ignored in this and previous versions of numpy. #### Examples >>> import numpy as np >>> x = np.ma.array([[0, 1], [1, 1]], mask=[0, 1, 0, 1]) >>> np.ma.corrcoef(x) masked_array( data=[[--, --], [--, --]], mask=[[ True, True], [ True, True]], fill_value=1e+20, dtype=float64) # numpy.ma.correlate ma.correlate(_a_ , _v_ , _mode ='valid'_, _propagate_mask =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L8341-L8397) Cross-correlation of two 1-dimensional sequences. Parameters: **a, v** array_like Input sequences. **mode**{‘valid’, ‘same’, ‘full’}, optional Refer to the `np.convolve` docstring. Note that the default is ‘valid’, unlike [`convolve`](numpy.convolve#numpy.convolve "numpy.convolve"), which uses ‘full’. **propagate_mask** bool If True, then a result element is masked if any masked element contributes towards it. If False, then a result element is only masked if no non-masked element contribute towards it Returns: **out** MaskedArray Discrete cross-correlation of `a` and `v`. See also [`numpy.correlate`](numpy.correlate#numpy.correlate "numpy.correlate") Equivalent function in the top-level NumPy module. 
#### Examples

Basic correlation:

    >>> a = np.ma.array([1, 2, 3])
    >>> v = np.ma.array([0, 1, 0])
    >>> np.ma.correlate(a, v, mode='valid')
    masked_array(data=[2], mask=[False], fill_value=999999)

Correlation with masked elements:

    >>> a = np.ma.array([1, 2, 3], mask=[False, True, False])
    >>> v = np.ma.array([0, 1, 0])
    >>> np.ma.correlate(a, v, mode='valid', propagate_mask=True)
    masked_array(data=[--], mask=[ True], fill_value=999999, dtype=int64)

Correlation with different modes and mixed array types:

    >>> a = np.ma.array([1, 2, 3])
    >>> v = np.ma.array([0, 1, 0])
    >>> np.ma.correlate(a, v, mode='full')
    masked_array(data=[0, 1, 2, 3, 0],
                 mask=[False, False, False, False, False],
           fill_value=999999)

# numpy.ma.count

ma.count(_self_ , _axis=None_ , _keepdims=<no value>_) _= <numpy.ma.core._frommethod object>_

Count the non-masked elements of the array along the given axis.

Parameters:

**axis** None or int or tuple of ints, optional Axis or axes along which the count is performed. The default, None, performs the count over all the dimensions of the input array. `axis` may be negative, in which case it counts from the last to the first axis. If this is a tuple of ints, the count is performed on multiple axes, instead of a single axis or all the axes as before.

**keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array.

Returns:

**result** ndarray or scalar An array with the same shape as the input array, with the specified axis removed. If the array is a 0-d array, or if `axis` is None, a scalar is returned.

See also

[`ma.count_masked`](numpy.ma.count_masked#numpy.ma.count_masked "numpy.ma.count_masked") Count masked elements in array or along a given axis.
#### Examples >>> import numpy.ma as ma >>> a = ma.arange(6).reshape((2, 3)) >>> a[1, :] = ma.masked >>> a masked_array( data=[[0, 1, 2], [--, --, --]], mask=[[False, False, False], [ True, True, True]], fill_value=999999) >>> a.count() 3 When the `axis` keyword is specified an array of appropriate size is returned. >>> a.count(axis=0) array([1, 1, 1]) >>> a.count(axis=1) array([3, 0]) # numpy.ma.count_masked ma.count_masked(_arr_ , _axis =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L48-L99) Count the number of masked elements along the given axis. Parameters: **arr** array_like An array with (possibly) masked elements. **axis** int, optional Axis along which to count. If None (default), a flattened version of the array is used. Returns: **count** int, ndarray The total number of masked elements (axis=None) or the number of masked elements along each slice of the given axis. See also [`MaskedArray.count`](numpy.ma.maskedarray.count#numpy.ma.MaskedArray.count "numpy.ma.MaskedArray.count") Count non-masked elements. #### Examples >>> import numpy as np >>> a = np.arange(9).reshape((3,3)) >>> a = np.ma.array(a) >>> a[1, 0] = np.ma.masked >>> a[1, 2] = np.ma.masked >>> a[2, 1] = np.ma.masked >>> a masked_array( data=[[0, 1, 2], [--, 4, --], [6, --, 8]], mask=[[False, False, False], [ True, False, True], [False, True, False]], fill_value=999999) >>> np.ma.count_masked(a) 3 When the `axis` keyword is used an array is returned. >>> np.ma.count_masked(a, axis=0) array([1, 1, 1]) >>> np.ma.count_masked(a, axis=1) array([0, 2, 1]) # numpy.ma.cov ma.cov(_x_ , _y =None_, _rowvar =True_, _bias =False_, _allow_masked =True_, _ddof =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L1619-L1711) Estimate the covariance matrix. Except for the handling of missing data this function does the same as [`numpy.cov`](numpy.cov#numpy.cov "numpy.cov"). 
For more details and examples, see [`numpy.cov`](numpy.cov#numpy.cov "numpy.cov"). By default, masked values are recognized as such. If `x` and `y` have the same shape, a common mask is allocated: if `x[i,j]` is masked, then `y[i,j]` will also be masked. Setting `allow_masked` to False will raise an exception if values are missing in either of the input arrays. Parameters: **x** array_like A 1-D or 2-D array containing multiple variables and observations. Each row of `x` represents a variable, and each column a single observation of all those variables. Also see `rowvar` below. **y** array_like, optional An additional set of variables and observations. `y` has the same shape as `x`. **rowvar** bool, optional If `rowvar` is True (default), then each row represents a variable, with observations in the columns. Otherwise, the relationship is transposed: each column represents a variable, while the rows contain observations. **bias** bool, optional Default normalization (False) is by `(N-1)`, where `N` is the number of observations given (unbiased estimate). If `bias` is True, then normalization is by `N`. This keyword can be overridden by the keyword `ddof` in numpy versions >= 1.5. **allow_masked** bool, optional If True, masked values are propagated pair-wise: if a value is masked in `x`, the corresponding value is masked in `y`. If False, raises a `ValueError` exception when some values are missing. **ddof**{None, int}, optional If not `None` normalization is by `(N - ddof)`, where `N` is the number of observations; this overrides the value implied by `bias`. The default value is `None`. Raises: ValueError Raised if some values are missing and `allow_masked` is False. 
See also [`numpy.cov`](numpy.cov#numpy.cov "numpy.cov") #### Examples >>> import numpy as np >>> x = np.ma.array([[0, 1], [1, 1]], mask=[0, 1, 0, 1]) >>> y = np.ma.array([[1, 0], [0, 1]], mask=[0, 0, 1, 1]) >>> np.ma.cov(x, y) masked_array( data=[[--, --, --, --], [--, --, --, --], [--, --, --, --], [--, --, --, --]], mask=[[ True, True, True, True], [ True, True, True, True], [ True, True, True, True], [ True, True, True, True]], fill_value=1e+20, dtype=float64) # numpy.ma.cumprod ma.cumprod(_self_ , _axis =None_, _dtype =None_, _out =None_)_= _ Return the cumulative product of the array elements over the given axis. Masked values are set to 1 internally during the computation. However, their position is saved, and the result will be masked at the same locations. Refer to [`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod") for full documentation. See also [`numpy.ndarray.cumprod`](numpy.ndarray.cumprod#numpy.ndarray.cumprod "numpy.ndarray.cumprod") corresponding function for ndarrays [`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod") equivalent function #### Notes The mask is lost if `out` is not a valid MaskedArray ! Arithmetic is modular when using integer types, and no error is raised on overflow. # numpy.ma.cumsum ma.cumsum(_self_ , _axis =None_, _dtype =None_, _out =None_)_= _ Return the cumulative sum of the array elements over the given axis. Masked values are set to 0 internally during the computation. However, their position is saved, and the result will be masked at the same locations. Refer to [`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") for full documentation. See also [`numpy.ndarray.cumsum`](numpy.ndarray.cumsum#numpy.ndarray.cumsum "numpy.ndarray.cumsum") corresponding function for ndarrays [`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") equivalent function #### Notes The mask is lost if `out` is not a valid [`ma.MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") ! 
Arithmetic is modular when using integer types, and no error is raised on overflow. #### Examples >>> import numpy as np >>> marr = np.ma.array(np.arange(10), mask=[0,0,0,1,1,1,0,0,0,0]) >>> marr.cumsum() masked_array(data=[0, 1, 3, --, --, --, 9, 16, 24, 33], mask=[False, False, False, True, True, True, False, False, False, False], fill_value=999999) # numpy.ma.default_fill_value ma.default_fill_value(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L252-L305) Return the default fill value for the argument object. The default filling value depends on the datatype of the input array or the type of the input scalar:

datatype | default
---|---
bool | True
int | 999999
float | 1.e20
complex | 1.e20+0j
object | ‘?’
string | ‘N/A’

For structured types, a structured scalar is returned, with each field the default fill value for its type. For subarray types, the fill value is an array of the same size containing the default scalar fill value. Parameters: **obj** ndarray, dtype or scalar The array data-type or scalar for which the default fill value is returned. Returns: **fill_value** scalar The default fill value. #### Examples >>> import numpy as np >>> np.ma.default_fill_value(1) 999999 >>> np.ma.default_fill_value(np.array([1.1, 2., np.pi])) 1e+20 >>> np.ma.default_fill_value(np.dtype(complex)) (1e+20+0j) # numpy.ma.diag ma.diag(_v_ , _k =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7409-L7457) Extract a diagonal or construct a diagonal array. This function is the equivalent of [`numpy.diag`](numpy.diag#numpy.diag "numpy.diag") that takes masked values into account, see [`numpy.diag`](numpy.diag#numpy.diag "numpy.diag") for details. See also [`numpy.diag`](numpy.diag#numpy.diag "numpy.diag") Equivalent function for ndarrays.
#### Examples Create an array with negative values masked: >>> import numpy as np >>> x = np.array([[11.2, -3.973, 18], [0.801, -1.41, 12], [7, 33, -12]]) >>> masked_x = np.ma.masked_array(x, mask=x < 0) >>> masked_x masked_array( data=[[11.2, --, 18.0], [0.801, --, 12.0], [7.0, 33.0, --]], mask=[[False, True, False], [False, True, False], [False, False, True]], fill_value=1e+20) Isolate the main diagonal from the masked array: >>> np.ma.diag(masked_x) masked_array(data=[11.2, --, --], mask=[False, True, True], fill_value=1e+20) Isolate the first diagonal below the main diagonal: >>> np.ma.diag(masked_x, -1) masked_array(data=[0.801, 33.0], mask=[False, False], fill_value=1e+20) # numpy.ma.diagflat ma.diagflat _= _ Create a two-dimensional array with the flattened input as a diagonal. Parameters: **v** array_like Input data, which is flattened and set as the `k`-th diagonal of the output. **k** int, optional Diagonal to set; 0, the default, corresponds to the “main” diagonal, a positive (negative) `k` giving the number of the diagonal above (below) the main. Returns: **out** ndarray The 2-D output array. See also [`diag`](numpy.diag#numpy.diag "numpy.diag") MATLAB work-alike for 1-D and 2-D arrays. [`diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal") Return specified diagonals. [`trace`](numpy.trace#numpy.trace "numpy.trace") Sum along diagonals. #### Notes The function is applied to both the _data and the _mask, if any. #### Examples >>> import numpy as np >>> np.diagflat([[1,2], [3,4]]) array([[1, 0, 0, 0], [0, 2, 0, 0], [0, 0, 3, 0], [0, 0, 0, 4]]) >>> np.diagflat([1,2], 1) array([[0, 1, 0], [0, 0, 2], [0, 0, 0]]) # numpy.ma.diff ma.diff(_a_ , _/_ , _n=1_ , _axis=-1_ , _prepend= _, _append= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7819-L7952) Calculate the n-th discrete difference along the given axis.
The first difference is given by `out[i] = a[i+1] - a[i]` along the given axis; higher differences are calculated by using [`diff`](numpy.diff#numpy.diff "numpy.diff") recursively. Preserves the input mask. Parameters: **a** array_like Input array. **n** int, optional The number of times values are differenced. If zero, the input is returned as-is. **axis** int, optional The axis along which the difference is taken, default is the last axis. **prepend, append** array_like, optional Values to prepend or append to `a` along axis prior to performing the difference. Scalar values are expanded to arrays with length 1 in the direction of axis and the shape of the input array along all other axes. Otherwise the dimension and shape must match `a` except along axis. Returns: **diff** MaskedArray The n-th differences. The shape of the output is the same as `a` except along `axis` where the dimension is smaller by `n`. The type of the output is the same as the type of the difference between any two elements of `a`. This is the same as the type of `a` in most cases. A notable exception is [`datetime64`](../arrays.scalars#numpy.datetime64 "numpy.datetime64"), which results in a [`timedelta64`](../arrays.scalars#numpy.timedelta64 "numpy.timedelta64") output array. See also [`numpy.diff`](numpy.diff#numpy.diff "numpy.diff") Equivalent function in the top-level NumPy module. #### Notes Type is preserved for boolean arrays, so the result will contain `False` when consecutive elements are the same and `True` when they differ. For unsigned integer arrays, the results will also be unsigned. This should not be surprising, as the result is consistent with calculating the difference directly: >>> u8_arr = np.array([1, 0], dtype=np.uint8) >>> np.ma.diff(u8_arr) masked_array(data=[255], mask=False, fill_value=np.uint64(999999), dtype=uint8) >>> u8_arr[1,...] - u8_arr[0,...]
np.uint8(255) If this is not desirable, then the array should be cast to a larger integer type first: >>> i16_arr = u8_arr.astype(np.int16) >>> np.ma.diff(i16_arr) masked_array(data=[-1], mask=False, fill_value=np.int64(999999), dtype=int16) #### Examples >>> import numpy as np >>> a = np.array([1, 2, 3, 4, 7, 0, 2, 3]) >>> x = np.ma.masked_where(a < 2, a) >>> np.ma.diff(x) masked_array(data=[--, 1, 1, 3, --, --, 1], mask=[ True, False, False, False, True, True, False], fill_value=999999) >>> np.ma.diff(x, n=2) masked_array(data=[--, 0, 2, --, --, --], mask=[ True, False, False, True, True, True], fill_value=999999) >>> a = np.array([[1, 3, 1, 5, 10], [0, 1, 5, 6, 8]]) >>> x = np.ma.masked_equal(a, value=1) >>> np.ma.diff(x) masked_array( data=[[--, --, --, 5], [--, --, 1, 2]], mask=[[ True, True, True, False], [ True, True, False, False]], fill_value=1) >>> np.ma.diff(x, axis=0) masked_array(data=[[--, --, --, 1, -2]], mask=[[ True, True, True, False, False]], fill_value=1) # numpy.ma.dot ma.dot(_a_ , _b_ , _strict =False_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L8203-L8281) Return the dot product of two arrays. This function is the equivalent of [`numpy.dot`](numpy.dot#numpy.dot "numpy.dot") that takes masked values into account. Note that `strict` and `out` are in different positions than in the method version. In order to maintain compatibility with the corresponding method, it is recommended that the optional arguments be treated as keyword only. At some point that may be mandatory. Parameters: **a, b** masked_array_like Input arrays. **strict** bool, optional Whether masked data are propagated (True) or set to 0 (False) for the computation. Default is False. Propagating the mask means that if a masked value appears in a row or column, the whole row or column is considered masked. **out** masked_array, optional Output argument. This must have the exact kind that would be returned if it was not used.
In particular, it must have the right type, must be C-contiguous, and its dtype must be the dtype that would be returned for `dot(a,b)`. This is a performance feature. Therefore, if these conditions are not met, an exception is raised, instead of attempting to be flexible. See also [`numpy.dot`](numpy.dot#numpy.dot "numpy.dot") Equivalent function for ndarrays. #### Examples >>> import numpy as np >>> a = np.ma.array([[1, 2, 3], [4, 5, 6]], mask=[[1, 0, 0], [0, 0, 0]]) >>> b = np.ma.array([[1, 2], [3, 4], [5, 6]], mask=[[1, 0], [0, 0], [0, 0]]) >>> np.ma.dot(a, b) masked_array( data=[[21, 26], [45, 64]], mask=[[False, False], [False, False]], fill_value=999999) >>> np.ma.dot(a, b, strict=True) masked_array( data=[[--, --], [--, 64]], mask=[[ True, True], [ True, False]], fill_value=999999) # numpy.ma.dstack ma.dstack _= _ Stack arrays in sequence depth wise (along third axis). This is equivalent to concatenation along the third axis after 2-D arrays of shape `(M,N)` have been reshaped to `(M,N,1)` and 1-D arrays of shape `(N,)` have been reshaped to `(1,N,1)`. Rebuilds arrays divided by [`dsplit`](numpy.dsplit#numpy.dsplit "numpy.dsplit"). This function makes most sense for arrays with up to 3 dimensions. For instance, for pixel-data with a height (first axis), width (second axis), and r/g/b channels (third axis). The functions [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate"), [`stack`](numpy.stack#numpy.stack "numpy.stack") and [`block`](numpy.block#numpy.block "numpy.block") provide more general stacking and concatenation operations. Parameters: **tup** sequence of arrays The arrays must have the same shape along all but the third axis. 1-D or 2-D arrays must have the same shape. Returns: **stacked** ndarray The array formed by stacking the given arrays, will be at least 3-D. See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. 
[`stack`](numpy.stack#numpy.stack "numpy.stack") Join a sequence of arrays along a new axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble an nd-array from nested lists of blocks. [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack") Stack arrays in sequence vertically (row wise). [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack") Stack arrays in sequence horizontally (column wise). [`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. [`dsplit`](numpy.dsplit#numpy.dsplit "numpy.dsplit") Split array along third axis. #### Notes The function is applied to both the _data and the _mask, if any. #### Examples >>> import numpy as np >>> a = np.array((1,2,3)) >>> b = np.array((2,3,4)) >>> np.dstack((a,b)) array([[[1, 2], [2, 3], [3, 4]]]) >>> a = np.array([[1],[2],[3]]) >>> b = np.array([[2],[3],[4]]) >>> np.dstack((a,b)) array([[[1, 2]], [[2, 3]], [[3, 4]]]) # numpy.ma.ediff1d ma.ediff1d(_arr_ , _to_end =None_, _to_begin =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L1267-L1302) Compute the differences between consecutive elements of an array. This function is the equivalent of [`numpy.ediff1d`](numpy.ediff1d#numpy.ediff1d "numpy.ediff1d") that takes masked values into account, see [`numpy.ediff1d`](numpy.ediff1d#numpy.ediff1d "numpy.ediff1d") for details. See also [`numpy.ediff1d`](numpy.ediff1d#numpy.ediff1d "numpy.ediff1d") Equivalent function for ndarrays. #### Examples >>> import numpy as np >>> arr = np.ma.array([1, 2, 4, 7, 0]) >>> np.ma.ediff1d(arr) masked_array(data=[ 1, 2, 3, -7], mask=False, fill_value=999999) # numpy.ma.empty ma.empty(_shape_ , _dtype =float_, _order ='C'_, _*_ , _device =None_, _like =None_)_= _ Return a new array of given shape and type, without initializing entries. Parameters: **shape** int or tuple of int Shape of the empty array, e.g., `(2, 3)` or `2`. 
**dtype** data-type, optional Desired output data-type for the array, e.g, [`numpy.int8`](../arrays.scalars#numpy.int8 "numpy.int8"). Default is [`numpy.float64`](../arrays.scalars#numpy.float64 "numpy.float64"). **order**{‘C’, ‘F’}, optional, default: ‘C’ Whether to store multi-dimensional data in row-major (C-style) or column-major (Fortran-style) order in memory. **device** str, optional The device on which to place the created array. Default: `None`. For Array-API interoperability only, so must be `"cpu"` if passed. New in version 2.0.0. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: **out** MaskedArray Array of uninitialized (arbitrary) data of the given shape, dtype, and order. Object arrays will be initialized to None. See also [`empty_like`](numpy.empty_like#numpy.empty_like "numpy.empty_like") Return an empty array with shape and type of input. [`ones`](numpy.ones#numpy.ones "numpy.ones") Return a new array setting values to one. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Return a new array setting values to zero. [`full`](numpy.full#numpy.full "numpy.full") Return a new array of given shape filled with value. #### Notes Unlike other array creation functions (e.g. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros"), [`ones`](numpy.ones#numpy.ones "numpy.ones"), [`full`](numpy.full#numpy.full "numpy.full")), [`empty`](numpy.empty#numpy.empty "numpy.empty") does not initialize the values of the array, and may therefore be marginally faster. However, the values stored in the newly allocated array are arbitrary. For reproducible behavior, be sure to set each element of the array before reading. 
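Note that, unlike the plain `numpy.empty` shown in the examples below, `np.ma.empty` returns a `MaskedArray`. A minimal sketch (not from the NumPy docs) of that distinction:

```python
import numpy as np

# np.ma.empty allocates uninitialized storage, like numpy.empty,
# but wraps it in a MaskedArray.
a = np.ma.empty((2, 3), dtype=float)
print(type(a))                           # a MaskedArray subclass of ndarray
print(a.shape)                           # (2, 3)
print(np.ma.getmask(a) is np.ma.nomask)  # True: no entries are masked yet
```

The data is uninitialized and arbitrary, but the mask starts out as `nomask`, so no entries are hidden until you mask them explicitly.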
#### Examples >>> import numpy as np >>> np.empty([2, 2]) array([[ -9.74499359e+001, 6.69583040e-309], [ 2.13182611e-314, 3.06959433e-309]]) #uninitialized >>> np.empty([2, 2], dtype=int) array([[-1073741821, -1067949133], [ 496041986, 19249760]]) #uninitialized # numpy.ma.empty_like ma.empty_like(_prototype_ , _dtype =None_, _order ='K'_, _subok =True_, _shape =None_, _*_ , _device =None_)_= _ Return a new array with the same shape and type as a given array. Parameters: **prototype** array_like The shape and data-type of `prototype` define these same attributes of the returned array. **dtype** data-type, optional Overrides the data type of the result. **order**{‘C’, ‘F’, ‘A’, or ‘K’}, optional Overrides the memory layout of the result. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `prototype` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `prototype` as closely as possible. **subok** bool, optional. If True, then the newly created array will use the sub-class type of `prototype`, otherwise it will be a base-class array. Defaults to True. **shape** int or sequence of ints, optional. Overrides the shape of the result. If order=’K’ and the number of dimensions is unchanged, will try to keep order, otherwise, order=’C’ is implied. **device** str, optional The device on which to place the created array. Default: None. For Array-API interoperability only, so must be `"cpu"` if passed. New in version 2.0.0. Returns: **out** MaskedArray Array of uninitialized (arbitrary) data with the same shape and type as `prototype`. See also [`ones_like`](numpy.ones_like#numpy.ones_like "numpy.ones_like") Return an array of ones with shape and type of input. [`zeros_like`](numpy.zeros_like#numpy.zeros_like "numpy.zeros_like") Return an array of zeros with shape and type of input. [`full_like`](numpy.full_like#numpy.full_like "numpy.full_like") Return a new array with shape of input filled with value. 
[`empty`](numpy.empty#numpy.empty "numpy.empty") Return a new uninitialized array. #### Notes Unlike other array creation functions (e.g. [`zeros_like`](numpy.zeros_like#numpy.zeros_like "numpy.zeros_like"), [`ones_like`](numpy.ones_like#numpy.ones_like "numpy.ones_like"), [`full_like`](numpy.full_like#numpy.full_like "numpy.full_like")), [`empty_like`](numpy.empty_like#numpy.empty_like "numpy.empty_like") does not initialize the values of the array, and may therefore be marginally faster. However, the values stored in the newly allocated array are arbitrary. For reproducible behavior, be sure to set each element of the array before reading. #### Examples >>> import numpy as np >>> a = ([1,2,3], [4,5,6]) # a is array-like >>> np.empty_like(a) array([[-1073741821, -1073741821, 3], # uninitialized [ 0, 0, -1073741821]]) >>> a = np.array([[1., 2., 3.],[4.,5.,6.]]) >>> np.empty_like(a) array([[ -2.00000715e+000, 1.48219694e-323, -2.00000572e+000], # uninitialized [ 4.38791518e-305, -2.00000715e+000, 4.17269252e-309]]) # numpy.ma.expand_dims ma.expand_dims(_a_ , _axis_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_shape_base_impl.py#L511-L598) Expand the shape of an array. Insert a new axis that will appear at the `axis` position in the expanded array shape. Parameters: **a** array_like Input array. **axis** int or tuple of ints Position in the expanded axes where the new axis (or axes) is placed. Deprecated since version 1.13.0: Passing an axis where `axis > a.ndim` will be treated as `axis == a.ndim`, and passing `axis < -a.ndim - 1` will be treated as `axis == 0`. This behavior is deprecated. Returns: **result** ndarray View of `a` with the number of dimensions increased. 
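Although the examples below use plain ndarrays, the function also accepts masked arrays, in which case the mask is carried along with the new axis. A short sketch (illustrative values):

```python
import numpy as np

# expand_dims on a MaskedArray preserves the mask.
x = np.ma.array([1, 2, 4], mask=[0, 1, 0])
y = np.ma.expand_dims(x, axis=0)
print(y.shape)          # (1, 3)
print(y.mask.tolist())  # [[False, True, False]]
```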
See also [`squeeze`](numpy.squeeze#numpy.squeeze "numpy.squeeze") The inverse operation, removing singleton dimensions [`reshape`](numpy.reshape#numpy.reshape "numpy.reshape") Insert, remove, and combine dimensions, and resize existing ones [`atleast_1d`](numpy.atleast_1d#numpy.atleast_1d "numpy.atleast_1d"), [`atleast_2d`](numpy.atleast_2d#numpy.atleast_2d "numpy.atleast_2d"), [`atleast_3d`](numpy.atleast_3d#numpy.atleast_3d "numpy.atleast_3d") #### Examples >>> import numpy as np >>> x = np.array([1, 2]) >>> x.shape (2,) The following is equivalent to `x[np.newaxis, :]` or `x[np.newaxis]`: >>> y = np.expand_dims(x, axis=0) >>> y array([[1, 2]]) >>> y.shape (1, 2) The following is equivalent to `x[:, np.newaxis]`: >>> y = np.expand_dims(x, axis=1) >>> y array([[1], [2]]) >>> y.shape (2, 1) `axis` may also be a tuple: >>> y = np.expand_dims(x, axis=(0, 1)) >>> y array([[[1, 2]]]) >>> y = np.expand_dims(x, axis=(2, 0)) >>> y array([[[1], [2]]]) Note that some examples may use `None` instead of `np.newaxis`. These are the same objects: >>> np.newaxis is None True # numpy.ma.filled ma.filled(_a_ , _fill_value =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L624-L683) Return input as an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), with masked values replaced by `fill_value`. If `a` is not a [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray"), `a` itself is returned. If `a` is a [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") with no masked values, then `a.data` is returned. If `a` is a [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") and `fill_value` is None, `fill_value` is set to `a.fill_value`. Parameters: **a** MaskedArray or array_like An input object. **fill_value** array_like, optional. Can be scalar or non-scalar. If non-scalar, the resulting filled array should be broadcastable over input array. 
Default is None. Returns: **a** ndarray The filled array. See also [`compressed`](numpy.ma.compressed#numpy.ma.compressed "numpy.ma.compressed") #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> x = ma.array(np.arange(9).reshape(3, 3), mask=[[1, 0, 0], ... [1, 0, 0], ... [0, 0, 0]]) >>> x.filled() array([[999999, 1, 2], [999999, 4, 5], [ 6, 7, 8]]) >>> x.filled(fill_value=333) array([[333, 1, 2], [333, 4, 5], [ 6, 7, 8]]) >>> x.filled(fill_value=np.arange(3)) array([[0, 1, 2], [0, 4, 5], [6, 7, 8]]) # numpy.ma.fix_invalid ma.fix_invalid(_a_ , _mask =np.False__, _copy =True_, _fill_value =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L768-L825) Return input with invalid data masked and replaced by a fill value. Invalid data means values of [`nan`](../constants#numpy.nan "numpy.nan"), [`inf`](../constants#numpy.inf "numpy.inf"), etc. Parameters: **a** array_like Input array, a (subclass of) ndarray. **mask** sequence, optional Mask. Must be convertible to an array of booleans with the same shape as `data`. True indicates a masked (i.e. invalid) data. **copy** bool, optional Whether to use a copy of `a` (True) or to fix `a` in place (False). Default is True. **fill_value** scalar, optional Value used for fixing invalid data. Default is None, in which case the `a.fill_value` is used. Returns: **b** MaskedArray The input array with invalid entries fixed. #### Notes A copy is performed by default. 
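The note above about copying can be illustrated with a short sketch (illustrative values, not from the NumPy docs): with `copy=False` the invalid entries are overwritten in the original data buffer rather than in a copy.

```python
import numpy as np

x = np.ma.array([1.0, np.nan, np.inf])
b = np.ma.fix_invalid(x, copy=False)  # b shares x's data buffer

print(b.mask.tolist())  # [False, True, True]
print(x.data)           # invalid entries were overwritten in place with
                        # the default fill value (1e+20 for floats)
```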
#### Examples >>> import numpy as np >>> x = np.ma.array([1., -1, np.nan, np.inf], mask=[1] + [0]*3) >>> x masked_array(data=[--, -1.0, nan, inf], mask=[ True, False, False, False], fill_value=1e+20) >>> np.ma.fix_invalid(x) masked_array(data=[--, -1.0, --, --], mask=[ True, False, True, True], fill_value=1e+20) >>> fixed = np.ma.fix_invalid(x) >>> fixed.data array([ 1.e+00, -1.e+00, 1.e+20, 1.e+20]) >>> x.data array([ 1., -1., nan, inf]) # numpy.ma.flatnotmasked_contiguous ma.flatnotmasked_contiguous(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L2034-L2086) Find contiguous unmasked data in a masked array. Parameters: **a** array_like The input array. Returns: **slice_list** list A sorted sequence of `slice` objects (start index, end index). See also [`flatnotmasked_edges`](numpy.ma.flatnotmasked_edges#numpy.ma.flatnotmasked_edges "numpy.ma.flatnotmasked_edges"), [`notmasked_contiguous`](numpy.ma.notmasked_contiguous#numpy.ma.notmasked_contiguous "numpy.ma.notmasked_contiguous"), [`notmasked_edges`](numpy.ma.notmasked_edges#numpy.ma.notmasked_edges "numpy.ma.notmasked_edges") [`clump_masked`](numpy.ma.clump_masked#numpy.ma.clump_masked "numpy.ma.clump_masked"), [`clump_unmasked`](numpy.ma.clump_unmasked#numpy.ma.clump_unmasked "numpy.ma.clump_unmasked") #### Notes Only accepts 2-D arrays at most. #### Examples >>> import numpy as np >>> a = np.ma.arange(10) >>> np.ma.flatnotmasked_contiguous(a) [slice(0, 10, None)] >>> mask = (a < 3) | (a > 8) | (a == 5) >>> a[mask] = np.ma.masked >>> np.array(a[~a.mask]) array([3, 4, 6, 7, 8]) >>> np.ma.flatnotmasked_contiguous(a) [slice(3, 5, None), slice(6, 9, None)] >>> a[:] = np.ma.masked >>> np.ma.flatnotmasked_contiguous(a) [] # numpy.ma.flatnotmasked_edges ma.flatnotmasked_edges(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L1926-L1979) Find the indices of the first and last unmasked values. 
Expects a 1-D [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray"), returns None if all values are masked. Parameters: **a** array_like Input 1-D [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") Returns: **edges** ndarray or None The indices of first and last non-masked value in the array. Returns None if all values are masked. See also [`flatnotmasked_contiguous`](numpy.ma.flatnotmasked_contiguous#numpy.ma.flatnotmasked_contiguous "numpy.ma.flatnotmasked_contiguous"), [`notmasked_contiguous`](numpy.ma.notmasked_contiguous#numpy.ma.notmasked_contiguous "numpy.ma.notmasked_contiguous"), [`notmasked_edges`](numpy.ma.notmasked_edges#numpy.ma.notmasked_edges "numpy.ma.notmasked_edges") [`clump_masked`](numpy.ma.clump_masked#numpy.ma.clump_masked "numpy.ma.clump_masked"), [`clump_unmasked`](numpy.ma.clump_unmasked#numpy.ma.clump_unmasked "numpy.ma.clump_unmasked") #### Notes Only accepts 1-D arrays. #### Examples >>> import numpy as np >>> a = np.ma.arange(10) >>> np.ma.flatnotmasked_edges(a) array([0, 9]) >>> mask = (a < 3) | (a > 8) | (a == 5) >>> a[mask] = np.ma.masked >>> np.array(a[~a.mask]) array([3, 4, 6, 7, 8]) >>> np.ma.flatnotmasked_edges(a) array([3, 8]) >>> a[:] = np.ma.masked >>> print(np.ma.flatnotmasked_edges(a)) None # numpy.ma.flatten_mask ma.flatten_mask(_mask_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L1813-L1867) Returns a completely flattened version of the mask, where nested fields are collapsed. Parameters: **mask** array_like Input array, which will be interpreted as booleans. Returns: **flattened_mask** ndarray of bools The flattened input. 
#### Examples >>> import numpy as np >>> mask = np.array([0, 0, 1]) >>> np.ma.flatten_mask(mask) array([False, False, True]) >>> mask = np.array([(0, 0), (0, 1)], dtype=[('a', bool), ('b', bool)]) >>> np.ma.flatten_mask(mask) array([False, False, False, True]) >>> mdtype = [('a', bool), ('b', [('ba', bool), ('bb', bool)])] >>> mask = np.array([(0, (0, 0)), (0, (0, 1))], dtype=mdtype) >>> np.ma.flatten_mask(mask) array([False, False, False, False, False, True]) # numpy.ma.flatten_structured_array ma.flatten_structured_array(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L2555-L2608) Flatten a structured array. The data type of the output is chosen such that it can represent all of the (nested) fields. Parameters: **a** structured array Returns: **output** masked array or ndarray A flattened masked array if the input is a masked array, otherwise a standard ndarray. #### Examples >>> import numpy as np >>> ndtype = [('a', int), ('b', float)] >>> a = np.array([(1, 1), (2, 2)], dtype=ndtype) >>> np.ma.flatten_structured_array(a) array([[1., 1.], [2., 2.]]) # numpy.ma.frombuffer ma.frombuffer(_buffer_ , _dtype =float_, _count =-1_, _offset =0_, _*_ , _like =None_)_= _ Interpret a buffer as a 1-dimensional array. Parameters: **buffer** buffer_like An object that exposes the buffer interface. **dtype** data-type, optional Data-type of the returned array; default: float. **count** int, optional Number of items to read. `-1` means all data in the buffer. **offset** int, optional Start reading the buffer from this offset (in bytes); default: 0. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. 
Returns: out: MaskedArray See also [`ndarray.tobytes`](numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes") Inverse of this operation, construct Python bytes from the raw data bytes in the array. #### Notes If the buffer has data that is not in machine byte-order, this should be specified as part of the data-type, e.g.: >>> dt = np.dtype(int) >>> dt = dt.newbyteorder('>') >>> np.frombuffer(buf, dtype=dt) The data of the resulting array will not be byteswapped, but will be interpreted correctly. This function creates a view into the original object. This should be safe in general, but it may make sense to copy the result when the original object is mutable or untrusted. #### Examples >>> import numpy as np >>> s = b'hello world' >>> np.frombuffer(s, dtype='S1', count=5, offset=6) array([b'w', b'o', b'r', b'l', b'd'], dtype='|S1') >>> np.frombuffer(b'\x01\x02', dtype=np.uint8) array([1, 2], dtype=uint8) >>> np.frombuffer(b'\x01\x02\x03\x04\x05', dtype=np.uint8, count=3) array([1, 2, 3], dtype=uint8) # numpy.ma.fromflex ma.fromflex(_fxarray_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L8706-L8768) Build a masked array from a suitable flexible-type array. The input array has to have a data-type with `_data` and `_mask` fields. This type of array is output by [`MaskedArray.toflex`](numpy.ma.maskedarray.toflex#numpy.ma.MaskedArray.toflex "numpy.ma.MaskedArray.toflex"). Parameters: **fxarray** ndarray The structured input array, containing `_data` and `_mask` fields. If present, other fields are discarded. Returns: **result** MaskedArray The constructed masked array. See also [`MaskedArray.toflex`](numpy.ma.maskedarray.toflex#numpy.ma.MaskedArray.toflex "numpy.ma.MaskedArray.toflex") Build a flexible-type array from a masked array. 
#### Examples >>> import numpy as np >>> x = np.ma.array(np.arange(9).reshape(3, 3), mask=[0] + [1, 0] * 4) >>> rec = x.toflex() >>> rec array([[(0, False), (1, True), (2, False)], [(3, True), (4, False), (5, True)], [(6, False), (7, True), (8, False)]], dtype=[('_data', '<i8'), ('_mask', '?')]) >>> x2 = np.ma.fromflex(rec) >>> x2 masked_array( data=[[0, --, 2], [--, 4, --], [6, --, 8]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=999999) Extra fields can be present in the structured array but are discarded: >>> dt = [('_data', '<i4'), ('_mask', '|b1'), ('field3', '<f4')] >>> rec2 = np.zeros((2, 2), dtype=dt) >>> rec2 array([[(0, False, 0.), (0, False, 0.)], [(0, False, 0.), (0, False, 0.)]], dtype=[('_data', '<i4'), ('_mask', '?'), ('field3', '<f4')]) >>> y = np.ma.fromflex(rec2) >>> y masked_array( data=[[0, 0], [0, 0]], mask=[[False, False], [False, False]], fill_value=np.int64(999999), dtype=int32) # numpy.ma.fromfunction ma.fromfunction(_function_ , _shape_ , _** dtype_)_= _ Construct an array by executing a function over each coordinate. The resulting array therefore has a value `fn(x, y, z)` at coordinate `(x, y, z)`. Parameters: **function** callable The function is called with N parameters, where N is the rank of [`shape`](numpy.shape#numpy.shape "numpy.shape"). Each parameter represents the coordinates of the array varying along a specific axis. For example, if [`shape`](numpy.shape#numpy.shape "numpy.shape") were `(2, 2)`, then the parameters would be `array([[0, 0], [1, 1]])` and `array([[0, 1], [0, 1]])` **shape**(N,) tuple of ints Shape of the output array, which also determines the shape of the coordinate arrays passed to `function`. **dtype** data-type, optional Data-type of the coordinate arrays passed to `function`. By default, [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is float. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it.
In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: fromfunction: MaskedArray The result of the call to `function` is passed back directly. Therefore the shape of [`fromfunction`](numpy.fromfunction#numpy.fromfunction "numpy.fromfunction") is completely determined by `function`. If `function` returns a scalar value, the shape of [`fromfunction`](numpy.fromfunction#numpy.fromfunction "numpy.fromfunction") would not match the [`shape`](numpy.shape#numpy.shape "numpy.shape") parameter. See also [`indices`](numpy.indices#numpy.indices "numpy.indices"), [`meshgrid`](numpy.meshgrid#numpy.meshgrid "numpy.meshgrid") #### Notes Keywords other than [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") and `like` are passed to `function`. #### Examples >>> import numpy as np >>> np.fromfunction(lambda i, j: i, (2, 2), dtype=float) array([[0., 0.], [1., 1.]]) >>> np.fromfunction(lambda i, j: j, (2, 2), dtype=float) array([[0., 1.], [0., 1.]]) >>> np.fromfunction(lambda i, j: i == j, (3, 3), dtype=int) array([[ True, False, False], [False, True, False], [False, False, True]]) >>> np.fromfunction(lambda i, j: i + j, (3, 3), dtype=int) array([[0, 1, 2], [1, 2, 3], [2, 3, 4]]) # numpy.ma.getdata ma.getdata(_a_ , _subok =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L713-L762) Return the data of a masked array as an ndarray. Return the data of `a` (if any) as an ndarray if `a` is a `MaskedArray`, else return `a` as a ndarray or subclass (depending on `subok`) if not. Parameters: **a** array_like Input `MaskedArray`, alternatively a ndarray or a subclass thereof. **subok** bool Whether to force the output to be a `pure` ndarray (False) or to return a subclass of ndarray if appropriate (True, default). See also [`getmask`](numpy.ma.getmask#numpy.ma.getmask "numpy.ma.getmask") Return the mask of a masked array, or nomask. 
[`getmaskarray`](numpy.ma.getmaskarray#numpy.ma.getmaskarray "numpy.ma.getmaskarray") Return the mask of a masked array, or full array of False. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> a = ma.masked_equal([[1,2],[3,4]], 2) >>> a masked_array( data=[[1, --], [3, 4]], mask=[[False, True], [False, False]], fill_value=2) >>> ma.getdata(a) array([[1, 2], [3, 4]]) Equivalently use the `MaskedArray` `data` attribute. >>> a.data array([[1, 2], [3, 4]]) # numpy.ma.getmask ma.getmask(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L1408-L1465) Return the mask of a masked array, or nomask. Return the mask of `a` as an ndarray if `a` is a [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") and the mask is not [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask"), else return [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask"). To guarantee a full array of booleans of the same shape as a, use [`getmaskarray`](numpy.ma.getmaskarray#numpy.ma.getmaskarray "numpy.ma.getmaskarray"). Parameters: **a** array_like Input [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") for which the mask is required. See also [`getdata`](numpy.ma.getdata#numpy.ma.getdata "numpy.ma.getdata") Return the data of a masked array as an ndarray. [`getmaskarray`](numpy.ma.getmaskarray#numpy.ma.getmaskarray "numpy.ma.getmaskarray") Return the mask of a masked array, or full array of False. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> a = ma.masked_equal([[1,2],[3,4]], 2) >>> a masked_array( data=[[1, --], [3, 4]], mask=[[False, True], [False, False]], fill_value=2) >>> ma.getmask(a) array([[False, True], [False, False]]) Equivalently use the [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") `mask` attribute. 
>>> a.mask array([[False, True], [False, False]]) Result when mask == [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask") >>> b = ma.masked_array([[1,2],[3,4]]) >>> b masked_array( data=[[1, 2], [3, 4]], mask=False, fill_value=999999) >>> ma.nomask False >>> ma.getmask(b) == ma.nomask True >>> b.mask == ma.nomask True # numpy.ma.getmaskarray ma.getmaskarray(_arr_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L1471-L1522) Return the mask of a masked array, or full boolean array of False. Return the mask of `arr` as an ndarray if `arr` is a [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") and the mask is not [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask"), else return a full boolean array of False of the same shape as `arr`. Parameters: **arr** array_like Input [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") for which the mask is required. See also [`getmask`](numpy.ma.getmask#numpy.ma.getmask "numpy.ma.getmask") Return the mask of a masked array, or nomask. [`getdata`](numpy.ma.getdata#numpy.ma.getdata "numpy.ma.getdata") Return the data of a masked array as an ndarray. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> a = ma.masked_equal([[1,2],[3,4]], 2) >>> a masked_array( data=[[1, --], [3, 4]], mask=[[False, True], [False, False]], fill_value=2) >>> ma.getmaskarray(a) array([[False, True], [False, False]]) Result when mask == `nomask` >>> b = ma.masked_array([[1,2],[3,4]]) >>> b masked_array( data=[[1, 2], [3, 4]], mask=False, fill_value=999999) >>> ma.getmaskarray(b) array([[False, False], [False, False]]) # numpy.ma.harden_mask ma.harden_mask(_self_)_= _ Force the mask to hard, preventing unmasking by assignment. Whether the mask of a masked array is hard or soft is determined by its [`hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") property. 
`harden_mask` sets [`hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") to `True` (and returns the modified self). See also [`ma.MaskedArray.hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") [`ma.MaskedArray.soften_mask`](numpy.ma.maskedarray.soften_mask#numpy.ma.MaskedArray.soften_mask "numpy.ma.MaskedArray.soften_mask") # numpy.ma.hsplit ma.hsplit _= _ Split an array into multiple sub-arrays horizontally (column-wise). Please refer to the [`split`](numpy.split#numpy.split "numpy.split") documentation. [`hsplit`](numpy.hsplit#numpy.hsplit "numpy.hsplit") is equivalent to [`split`](numpy.split#numpy.split "numpy.split") with `axis=1`, the array is always split along the second axis except for 1-D arrays, where it is split at `axis=0`. See also [`split`](numpy.split#numpy.split "numpy.split") Split an array into multiple sub-arrays of equal size. #### Notes The function is applied to both the _data and the _mask, if any. #### Examples >>> import numpy as np >>> x = np.arange(16.0).reshape(4, 4) >>> x array([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.], [12., 13., 14., 15.]]) >>> np.hsplit(x, 2) [array([[ 0., 1.], [ 4., 5.], [ 8., 9.], [12., 13.]]), array([[ 2., 3.], [ 6., 7.], [10., 11.], [14., 15.]])] >>> np.hsplit(x, np.array([3, 6])) [array([[ 0., 1., 2.], [ 4., 5., 6.], [ 8., 9., 10.], [12., 13., 14.]]), array([[ 3.], [ 7.], [11.], [15.]]), array([], shape=(4, 0), dtype=float64)] With a higher dimensional array the split is still along the second axis. >>> x = np.arange(8.0).reshape(2, 2, 2) >>> x array([[[0., 1.], [2., 3.]], [[4., 5.], [6., 7.]]]) >>> np.hsplit(x, 2) [array([[[0., 1.]], [[4., 5.]]]), array([[[2., 3.]], [[6., 7.]]])] With a 1-D array, the split is along axis 0. >>> x = np.array([0, 1, 2, 3, 4, 5]) >>> np.hsplit(x, 2) [array([0, 1, 2]), array([3, 4, 5])] # numpy.ma.hstack ma.hstack _= _ Stack arrays in sequence horizontally (column wise). 
This is equivalent to concatenation along the second axis, except for 1-D arrays where it concatenates along the first axis. Rebuilds arrays divided by [`hsplit`](numpy.hsplit#numpy.hsplit "numpy.hsplit"). This function makes most sense for arrays with up to 3 dimensions. For instance, for pixel-data with a height (first axis), width (second axis), and r/g/b channels (third axis). The functions [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate"), [`stack`](numpy.stack#numpy.stack "numpy.stack") and [`block`](numpy.block#numpy.block "numpy.block") provide more general stacking and concatenation operations. Parameters: **tup** sequence of ndarrays The arrays must have the same shape along all but the second axis, except 1-D arrays which can be any length. In the case of a single array_like input, it will be treated as a sequence of arrays; i.e., each element along the zeroth axis is treated as a separate array. **dtype** str or dtype If provided, the destination array will have this dtype. Cannot be provided together with `out`. New in version 1.24. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur. Defaults to ‘same_kind’. New in version 1.24. Returns: **stacked** ndarray The array formed by stacking the given arrays. See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`stack`](numpy.stack#numpy.stack "numpy.stack") Join a sequence of arrays along a new axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble an nd-array from nested lists of blocks. [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack") Stack arrays in sequence vertically (row wise). [`dstack`](numpy.dstack#numpy.dstack "numpy.dstack") Stack arrays in sequence depth wise (along third axis). [`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. 
[`hsplit`](numpy.hsplit#numpy.hsplit "numpy.hsplit") Split an array into multiple sub-arrays horizontally (column-wise). [`unstack`](numpy.unstack#numpy.unstack "numpy.unstack") Split an array into a tuple of sub-arrays along an axis. #### Notes The function is applied to both the _data and the _mask, if any. #### Examples >>> import numpy as np >>> a = np.array((1,2,3)) >>> b = np.array((4,5,6)) >>> np.hstack((a,b)) array([1, 2, 3, 4, 5, 6]) >>> a = np.array([[1],[2],[3]]) >>> b = np.array([[4],[5],[6]]) >>> np.hstack((a,b)) array([[1, 4], [2, 5], [3, 6]]) # numpy.ma.identity ma.identity(_n_ , _dtype =None_)_= _ Return the identity array. The identity array is a square array with ones on the main diagonal. Parameters: **n** int Number of rows (and columns) in `n` x `n` output. **dtype** data-type, optional Data-type of the output. Defaults to `float`. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: **out** MaskedArray `n` x `n` array with its main diagonal set to one, and all other elements 0. #### Examples >>> import numpy as np >>> np.identity(3) array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) # numpy.ma.in1d ma.in1d(_ar1_ , _ar2_ , _assume_unique =False_, _invert =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L1425-L1470) Test whether each element of an array is also present in a second array. The output is always a masked array. See [`numpy.in1d`](numpy.in1d#numpy.in1d "numpy.in1d") for more details. We recommend using [`isin`](numpy.isin#numpy.isin "numpy.isin") instead of [`in1d`](numpy.in1d#numpy.in1d "numpy.in1d") for new code. 
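As a quick sketch of the recommendation above (array values chosen for illustration): `isin` keeps the shape of its first argument, whereas `in1d` always works on a flattened view.

```python
import numpy as np

# np.ma.isin preserves the shape of its first argument, while
# np.ma.in1d returns a flattened (1-D) result.
element = np.ma.array([[1, 2], [3, 4]])
flat = np.ma.in1d(element, [1, 4])   # 1-D result
kept = np.ma.isin(element, [1, 4])   # same shape as `element`
print(flat.shape, kept.shape)        # (4,) (2, 2)
```

This shape preservation is why `isin` is the recommended spelling for new code.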
See also [`isin`](numpy.isin#numpy.isin "numpy.isin") Version of this function that preserves the shape of ar1. [`numpy.in1d`](numpy.in1d#numpy.in1d "numpy.in1d") Equivalent function for ndarrays. #### Examples >>> import numpy as np >>> ar1 = np.ma.array([0, 1, 2, 5, 0]) >>> ar2 = [0, 2] >>> np.ma.in1d(ar1, ar2) masked_array(data=[ True, False, True, False, True], mask=False, fill_value=True) # numpy.ma.indices ma.indices(_dimensions_ , _dtype=<class 'int'>_, _sparse=False_) Return an array representing the indices of a grid. Compute an array where the subarrays contain index values 0, 1, … varying only along the corresponding axis. Parameters: **dimensions** sequence of ints The shape of the grid. **dtype** dtype, optional Data type of the result. **sparse** boolean, optional Return a sparse representation of the grid instead of a dense representation. Default is False. Returns: **grid** one MaskedArray or tuple of MaskedArrays If sparse is False: Returns one array of grid indices, `grid.shape = (len(dimensions),) + tuple(dimensions)`. If sparse is True: Returns a tuple of arrays, with `grid[i].shape = (1, ..., 1, dimensions[i], 1, ..., 1)` with dimensions[i] in the ith place. See also [`mgrid`](numpy.mgrid#numpy.mgrid "numpy.mgrid"), [`ogrid`](numpy.ogrid#numpy.ogrid "numpy.ogrid"), [`meshgrid`](numpy.meshgrid#numpy.meshgrid "numpy.meshgrid") #### Notes The output shape in the dense case is obtained by prepending the number of dimensions in front of the tuple of dimensions, i.e. if `dimensions` is a tuple `(r0, ..., rN-1)` of length `N`, the output shape is `(N, r0, ..., rN-1)`. The subarrays `grid[k]` contain the N-D array of indices along the `k-th` axis. Explicitly: grid[k, i0, i1, ..., iN-1] = ik #### Examples >>> import numpy as np >>> grid = np.indices((2, 3)) >>> grid.shape (2, 2, 3) >>> grid[0] # row indices array([[0, 0, 0], [1, 1, 1]]) >>> grid[1] # column indices array([[0, 1, 2], [0, 1, 2]]) The indices can be used as an index into an array.
>>> x = np.arange(20).reshape(5, 4) >>> row, col = np.indices((2, 3)) >>> x[row, col] array([[0, 1, 2], [4, 5, 6]]) Note that it would be more straightforward in the above example to extract the required elements directly with `x[:2, :3]`. If sparse is set to true, the grid will be returned in a sparse representation. >>> i, j = np.indices((2, 3), sparse=True) >>> i.shape (2, 1) >>> j.shape (1, 3) >>> i # row indices array([[0], [1]]) >>> j # column indices array([[0, 1, 2]]) # numpy.ma.inner ma.inner(_a_ , _b_ , _/_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L8284-L8298) Inner product of two arrays. Ordinary inner product of vectors for 1-D arrays (without complex conjugation), in higher dimensions a sum product over the last axes. Parameters: **a, b** array_like If `a` and `b` are nonscalar, their last dimensions must match. Returns: **out** ndarray If `a` and `b` are both scalars or both 1-D arrays then a scalar is returned; otherwise an array is returned. `out.shape = (*a.shape[:-1], *b.shape[:-1])` Raises: ValueError If both `a` and `b` are nonscalar and their last dimensions have different sizes. See also [`tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot") Sum products over arbitrary axes. [`dot`](numpy.dot#numpy.dot "numpy.dot") Generalised matrix product, using second last dimension of `b`. [`vecdot`](numpy.vecdot#numpy.vecdot "numpy.vecdot") Vector dot product of two arrays. [`einsum`](numpy.einsum#numpy.einsum "numpy.einsum") Einstein summation convention. #### Notes Masked values are replaced by 0. 
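As a minimal sketch of the note above (values chosen for illustration), a masked entry contributes nothing to the sum:

```python
import numpy as np

# Masked entries are treated as 0 in the product, so only the
# unmasked positions contribute to the inner product.
a = np.ma.array([1, 2, 3], mask=[False, True, False])
b = np.array([4, 5, 6])
print(np.ma.inner(a, b))  # 1*4 + 0*5 + 3*6 = 22
```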
For vectors (1-D arrays) it computes the ordinary inner-product: np.inner(a, b) = sum(a[:]*b[:]) More generally, if `ndim(a) = r > 0` and `ndim(b) = s > 0`: np.inner(a, b) = np.tensordot(a, b, axes=(-1,-1)) or explicitly: np.inner(a, b)[i0,...,ir-2,j0,...,js-2] = sum(a[i0,...,ir-2,:]*b[j0,...,js-2,:]) In addition `a` or `b` may be scalars, in which case: np.inner(a,b) = a*b #### Examples Ordinary inner product for vectors: >>> import numpy as np >>> a = np.array([1,2,3]) >>> b = np.array([0,1,0]) >>> np.inner(a, b) 2 Some multidimensional examples: >>> a = np.arange(24).reshape((2,3,4)) >>> b = np.arange(4) >>> c = np.inner(a, b) >>> c.shape (2, 3) >>> c array([[ 14, 38, 62], [ 86, 110, 134]]) >>> a = np.arange(2).reshape((1,1,2)) >>> b = np.arange(6).reshape((3,2)) >>> c = np.inner(a, b) >>> c.shape (1, 1, 3) >>> c array([[[1, 3, 5]]]) An example where `b` is a scalar: >>> np.inner(np.eye(2), 7) array([[7., 0.], [0., 7.]]) # numpy.ma.innerproduct ma.innerproduct(_a_ , _b_ , _/_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L8284-L8298) Inner product of two arrays. Ordinary inner product of vectors for 1-D arrays (without complex conjugation), in higher dimensions a sum product over the last axes. Parameters: **a, b** array_like If `a` and `b` are nonscalar, their last dimensions must match. Returns: **out** ndarray If `a` and `b` are both scalars or both 1-D arrays then a scalar is returned; otherwise an array is returned. `out.shape = (*a.shape[:-1], *b.shape[:-1])` Raises: ValueError If both `a` and `b` are nonscalar and their last dimensions have different sizes. See also [`tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot") Sum products over arbitrary axes. [`dot`](numpy.dot#numpy.dot "numpy.dot") Generalised matrix product, using second last dimension of `b`. [`vecdot`](numpy.vecdot#numpy.vecdot "numpy.vecdot") Vector dot product of two arrays. 
[`einsum`](numpy.einsum#numpy.einsum "numpy.einsum") Einstein summation convention. #### Notes Masked values are replaced by 0. For vectors (1-D arrays) it computes the ordinary inner-product: np.inner(a, b) = sum(a[:]*b[:]) More generally, if `ndim(a) = r > 0` and `ndim(b) = s > 0`: np.inner(a, b) = np.tensordot(a, b, axes=(-1,-1)) or explicitly: np.inner(a, b)[i0,...,ir-2,j0,...,js-2] = sum(a[i0,...,ir-2,:]*b[j0,...,js-2,:]) In addition `a` or `b` may be scalars, in which case: np.inner(a,b) = a*b #### Examples Ordinary inner product for vectors: >>> import numpy as np >>> a = np.array([1,2,3]) >>> b = np.array([0,1,0]) >>> np.inner(a, b) 2 Some multidimensional examples: >>> a = np.arange(24).reshape((2,3,4)) >>> b = np.arange(4) >>> c = np.inner(a, b) >>> c.shape (2, 3) >>> c array([[ 14, 38, 62], [ 86, 110, 134]]) >>> a = np.arange(2).reshape((1,1,2)) >>> b = np.arange(6).reshape((3,2)) >>> c = np.inner(a, b) >>> c.shape (1, 1, 3) >>> c array([[[1, 3, 5]]]) An example where `b` is a scalar: >>> np.inner(np.eye(2), 7) array([[7., 0.], [0., 7.]]) # numpy.ma.intersect1d ma.intersect1d(_ar1_ , _ar2_ , _assume_unique =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L1355-L1385) Returns the unique elements common to both arrays. Masked values are considered equal one to the other. The output is always a masked array. See [`numpy.intersect1d`](numpy.intersect1d#numpy.intersect1d "numpy.intersect1d") for more details. See also [`numpy.intersect1d`](numpy.intersect1d#numpy.intersect1d "numpy.intersect1d") Equivalent function for ndarrays. 
#### Examples >>> import numpy as np >>> x = np.ma.array([1, 3, 3, 3], mask=[0, 0, 0, 1]) >>> y = np.ma.array([3, 1, 1, 1], mask=[0, 0, 0, 1]) >>> np.ma.intersect1d(x, y) masked_array(data=[1, 3, --], mask=[False, False, True], fill_value=999999) # numpy.ma.is_mask ma.is_mask(_m_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L1525-L1591) Return True if m is a valid, standard mask. This function does not check the contents of the input, only that the type is MaskType. In particular, this function returns False if the mask has a flexible dtype. Parameters: **m** array_like Array to test. Returns: **result** bool True if `m.dtype.type` is MaskType, False otherwise. See also [`ma.isMaskedArray`](numpy.ma.ismaskedarray#numpy.ma.isMaskedArray "numpy.ma.isMaskedArray") Test whether input is an instance of MaskedArray. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> m = ma.masked_equal([0, 1, 0, 2, 3], 0) >>> m masked_array(data=[--, 1, --, 2, 3], mask=[ True, False, True, False, False], fill_value=0) >>> ma.is_mask(m) False >>> ma.is_mask(m.mask) True Input must be an ndarray (or have similar attributes) for it to be considered a valid mask. >>> m = [False, True, False] >>> ma.is_mask(m) False >>> m = np.array([False, True, False]) >>> m array([False, True, False]) >>> ma.is_mask(m) True Arrays with complex dtypes don’t return True. >>> dtype = np.dtype({'names':['monty', 'pithon'], ... 'formats':[bool, bool]}) >>> dtype dtype([('monty', '|b1'), ('pithon', '|b1')]) >>> m = np.array([(True, False), (False, True), (True, False)], ... dtype=dtype) >>> m array([( True, False), (False, True), ( True, False)], dtype=[('monty', '?'), ('pithon', '?')]) >>> ma.is_mask(m) False # numpy.ma.is_masked ma.is_masked(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L6922-L6973) Determine whether input has masked values. 
Accepts any object as input, but always returns False unless the input is a MaskedArray containing masked values. Parameters: **x** array_like Array to check for masked values. Returns: **result** bool True if `x` is a MaskedArray with masked values, False otherwise. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> x = ma.masked_equal([0, 1, 0, 2, 3], 0) >>> x masked_array(data=[--, 1, --, 2, 3], mask=[ True, False, True, False, False], fill_value=0) >>> ma.is_masked(x) True >>> x = ma.masked_equal([0, 1, 0, 2, 3], 42) >>> x masked_array(data=[0, 1, 0, 2, 3], mask=False, fill_value=42) >>> ma.is_masked(x) False Always returns False if `x` isn’t a MaskedArray. >>> x = [False, True, False] >>> ma.is_masked(x) False >>> x = 'a string' >>> ma.is_masked(x) False # numpy.ma.isarray ma.isarray(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L6732-L6781) Test whether input is an instance of MaskedArray. This function returns True if `x` is an instance of MaskedArray and returns False otherwise. Any object is accepted as input. Parameters: **x** object Object to test. Returns: **result** bool True if `x` is a MaskedArray. See also [`isMA`](numpy.ma.isma#numpy.ma.isMA "numpy.ma.isMA") Alias to isMaskedArray. `isarray` Alias to isMaskedArray. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> a = np.eye(3, 3) >>> a array([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.]]) >>> m = ma.masked_values(a, 0) >>> m masked_array( data=[[1.0, --, --], [--, 1.0, --], [--, --, 1.0]], mask=[[False, True, True], [ True, False, True], [ True, True, False]], fill_value=0.0) >>> ma.isMaskedArray(a) False >>> ma.isMaskedArray(m) True >>> ma.isMaskedArray([0, 1, 2]) False # numpy.ma.isin ma.isin(_element_ , _test_elements_ , _assume_unique =False_, _invert =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L1473-L1499) Calculates `element in test_elements`, broadcasting over `element` only. 
The output is always a masked array of the same shape as `element`. See [`numpy.isin`](numpy.isin#numpy.isin "numpy.isin") for more details. See also [`in1d`](numpy.in1d#numpy.in1d "numpy.in1d") Flattened version of this function. [`numpy.isin`](numpy.isin#numpy.isin "numpy.isin") Equivalent function for ndarrays. #### Examples >>> import numpy as np >>> element = np.ma.array([1, 2, 3, 4, 5, 6]) >>> test_elements = [0, 2] >>> np.ma.isin(element, test_elements) masked_array(data=[False, True, False, False, False, False], mask=False, fill_value=True) # numpy.ma.isMA ma.isMA(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L6732-L6781) Test whether input is an instance of MaskedArray. This function returns True if `x` is an instance of MaskedArray and returns False otherwise. Any object is accepted as input. Parameters: **x** object Object to test. Returns: **result** bool True if `x` is a MaskedArray. See also `isMA` Alias to isMaskedArray. [`isarray`](numpy.ma.isarray#numpy.ma.isarray "numpy.ma.isarray") Alias to isMaskedArray. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> a = np.eye(3, 3) >>> a array([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.]]) >>> m = ma.masked_values(a, 0) >>> m masked_array( data=[[1.0, --, --], [--, 1.0, --], [--, --, 1.0]], mask=[[False, True, True], [ True, False, True], [ True, True, False]], fill_value=0.0) >>> ma.isMaskedArray(a) False >>> ma.isMaskedArray(m) True >>> ma.isMaskedArray([0, 1, 2]) False # numpy.ma.isMaskedArray ma.isMaskedArray(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L6732-L6781) Test whether input is an instance of MaskedArray. This function returns True if `x` is an instance of MaskedArray and returns False otherwise. Any object is accepted as input. Parameters: **x** object Object to test. Returns: **result** bool True if `x` is a MaskedArray. See also [`isMA`](numpy.ma.isma#numpy.ma.isMA "numpy.ma.isMA") Alias to isMaskedArray. 
[`isarray`](numpy.ma.isarray#numpy.ma.isarray "numpy.ma.isarray") Alias to isMaskedArray. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> a = np.eye(3, 3) >>> a array([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.]]) >>> m = ma.masked_values(a, 0) >>> m masked_array( data=[[1.0, --, --], [--, 1.0, --], [--, --, 1.0]], mask=[[False, True, True], [ True, False, True], [ True, True, False]], fill_value=0.0) >>> ma.isMaskedArray(a) False >>> ma.isMaskedArray(m) True >>> ma.isMaskedArray([0, 1, 2]) False # numpy.ma.left_shift ma.left_shift(_a_ , _n_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7460-L7505) Shift the bits of an integer to the left. This is the masked array version of [`numpy.left_shift`](numpy.left_shift#numpy.left_shift "numpy.left_shift"), for details see that function. See also [`numpy.left_shift`](numpy.left_shift#numpy.left_shift "numpy.left_shift") #### Examples Shift with a masked array: >>> arr = np.ma.array([10, 20, 30], mask=[False, True, False]) >>> np.ma.left_shift(arr, 1) masked_array(data=[20, --, 60], mask=[False, True, False], fill_value=999999) Large shift: >>> np.ma.left_shift(10, 10) masked_array(data=10240, mask=False, fill_value=999999) Shift with a scalar and an array: >>> scalar = 10 >>> arr = np.ma.array([1, 2, 3], mask=[False, True, False]) >>> np.ma.left_shift(scalar, arr) masked_array(data=[20, --, 80], mask=[False, True, False], fill_value=999999) # numpy.ma.make_mask ma.make_mask(_m_ , _copy=False_ , _shrink=True_ , _dtype=<class 'numpy.bool'>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L1604-L1692) Create a boolean mask from an array. Return `m` as a boolean mask, creating a copy if necessary or requested. The function can accept any sequence that is convertible to integers, or `nomask`. The contents need not be 0s and 1s: values of 0 are interpreted as False, everything else as True. Parameters: **m** array_like Potential mask.
**copy** bool, optional Whether to return a copy of `m` (True) or `m` itself (False). **shrink** bool, optional Whether to shrink `m` to `nomask` if all its values are False. **dtype** dtype, optional Data-type of the output mask. By default, the output mask has a dtype of MaskType (bool). If the dtype is flexible, each field has a boolean dtype. This is ignored when `m` is `nomask`, in which case `nomask` is always returned. Returns: **result** ndarray A boolean mask derived from `m`. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> m = [True, False, True, True] >>> ma.make_mask(m) array([ True, False, True, True]) >>> m = [1, 0, 1, 1] >>> ma.make_mask(m) array([ True, False, True, True]) >>> m = [1, 0, 2, -3] >>> ma.make_mask(m) array([ True, False, True, True]) Effect of the `shrink` parameter. >>> m = np.zeros(4) >>> m array([0., 0., 0., 0.]) >>> ma.make_mask(m) False >>> ma.make_mask(m, shrink=False) array([False, False, False, False]) Using a flexible [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"). >>> m = [1, 0, 1, 1] >>> n = [0, 1, 0, 0] >>> arr = [] >>> for man, mouse in zip(m, n): ... arr.append((man, mouse)) >>> arr [(1, 0), (0, 1), (1, 0), (1, 0)] >>> dtype = np.dtype({'names':['man', 'mouse'], ... 'formats':[np.int64, np.int64]}) >>> arr = np.array(arr, dtype=dtype) >>> arr array([(1, 0), (0, 1), (1, 0), (1, 0)], dtype=[('man', '<i8'), ('mouse', '<i8')]) >>> ma.make_mask(arr, dtype=dtype) array([(True, False), (False, True), (True, False), (True, False)], dtype=[('man', '|b1'), ('mouse', '|b1')]) # numpy.ma.make_mask_descr ma.make_mask_descr(_ndtype_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L1374-L1405) Construct a dtype description list from a given dtype. Returns a new dtype object, with the type of all fields in `ndtype` converted to a boolean type. Field names are not altered. Parameters: **ndtype** dtype The dtype to convert. Returns: **result** dtype A dtype that looks like `ndtype`, in which the type of all fields is boolean.
#### Examples >>> import numpy as np >>> import numpy.ma as ma >>> dtype = np.dtype({'names':['foo', 'bar'], ... 'formats':[np.float32, np.int64]}) >>> dtype dtype([('foo', '<f4'), ('bar', '<i8')]) >>> ma.make_mask_descr(dtype) dtype([('foo', '|b1'), ('bar', '|b1')]) >>> ma.make_mask_descr(np.float32) dtype('bool') # numpy.ma.make_mask_none ma.make_mask_none(_newshape_ , _dtype =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L1695-L1743) Return a boolean mask of the given shape, filled with False. This function returns a boolean ndarray with all entries False, that can be used in common mask manipulations. If a complex dtype is specified, the type of each field is converted to a boolean type. Parameters: **newshape** tuple A tuple indicating the shape of the mask. **dtype**{None, dtype}, optional If None, use a MaskType instance. Otherwise, use a new datatype with the same fields as [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), converted to boolean types. Returns: **result** ndarray An ndarray of appropriate shape and dtype, filled with False. See also [`make_mask`](numpy.ma.make_mask#numpy.ma.make_mask "numpy.ma.make_mask") Create a boolean mask from an array. [`make_mask_descr`](numpy.ma.make_mask_descr#numpy.ma.make_mask_descr "numpy.ma.make_mask_descr") Construct a dtype description list from a given dtype. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> ma.make_mask_none((3,)) array([False, False, False]) Defining a more complex dtype. >>> dtype = np.dtype({'names':['foo', 'bar'], ... 'formats':[np.float32, np.int64]}) >>> dtype dtype([('foo', '<f4'), ('bar', '<i8')]) >>> ma.make_mask_none((3,), dtype=dtype) array([(False, False), (False, False), (False, False)], dtype=[('foo', '|b1'), ('bar', '|b1')]) # numpy.ma.mask_cols ma.mask_cols(_a_ , _axis=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L1213-L1260) Mask columns of a 2D array that contain masked values. This function is a shortcut to `mask_rowcols` with `axis` equal to 1.
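A small sketch of that equivalence (array chosen for illustration):

```python
import numpy as np

# mask_cols(a) behaves like mask_rowcols(a, axis=1): any column that
# contains a masked value becomes fully masked. Both functions also
# modify the input array's mask, so a fresh array is built each time.
def demo_array():
    a = np.zeros((3, 3), dtype=int)
    a[1, 1] = 1
    return np.ma.masked_equal(a, 1)

cols = np.ma.mask_cols(demo_array())
rowcols = np.ma.mask_rowcols(demo_array(), axis=1)
print((cols.mask == rowcols.mask).all())  # identical masks
```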
See also [`mask_rowcols`](numpy.ma.mask_rowcols#numpy.ma.mask_rowcols "numpy.ma.mask_rowcols") Mask rows and/or columns of a 2D array. [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. #### Examples >>> import numpy as np >>> a = np.zeros((3, 3), dtype=int) >>> a[1, 1] = 1 >>> a array([[0, 0, 0], [0, 1, 0], [0, 0, 0]]) >>> a = np.ma.masked_equal(a, 1) >>> a masked_array( data=[[0, 0, 0], [0, --, 0], [0, 0, 0]], mask=[[False, False, False], [False, True, False], [False, False, False]], fill_value=1) >>> np.ma.mask_cols(a) masked_array( data=[[0, --, 0], [0, --, 0], [0, --, 0]], mask=[[False, True, False], [False, True, False], [False, True, False]], fill_value=1) # numpy.ma.mask_or ma.mask_or(_m1_ , _m2_ , _copy =False_, _shrink =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L1756-L1810) Combine two masks with the `logical_or` operator. The result may be a view on `m1` or `m2` if the other is [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask") (i.e. False). Parameters: **m1, m2** array_like Input masks. **copy** bool, optional If copy is False and one of the inputs is [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask"), return a view of the other input mask. Defaults to False. **shrink** bool, optional Whether to shrink the output to [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask") if all its values are False. Defaults to True. Returns: **mask** output mask The result masks values that are masked in either `m1` or `m2`. Raises: ValueError If `m1` and `m2` have different flexible dtypes. 
#### Examples >>> import numpy as np >>> m1 = np.ma.make_mask([0, 1, 1, 0]) >>> m2 = np.ma.make_mask([1, 0, 0, 0]) >>> np.ma.mask_or(m1, m2) array([ True, True, True, False]) # numpy.ma.mask_rowcols ma.mask_rowcols(_a_ , _axis =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L1073-L1159) Mask rows and/or columns of a 2D array that contain masked values. Mask whole rows and/or columns of a 2D array that contain masked values. The masking behavior is selected using the `axis` parameter. * If `axis` is None, rows _and_ columns are masked. * If `axis` is 0, only rows are masked. * If `axis` is 1 or -1, only columns are masked. Parameters: **a** array_like, MaskedArray The array to mask. If not a MaskedArray instance (or if no array elements are masked), the result is a MaskedArray with `mask` set to [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask") (False). Must be a 2D array. **axis** int, optional Axis along which to perform the operation. If None, applies to a flattened version of the array. Returns: **a** MaskedArray A modified version of the input array, masked depending on the value of the `axis` parameter. Raises: NotImplementedError If input array `a` is not 2D. See also [`mask_rows`](numpy.ma.mask_rows#numpy.ma.mask_rows "numpy.ma.mask_rows") Mask rows of a 2D array that contain masked values. [`mask_cols`](numpy.ma.mask_cols#numpy.ma.mask_cols "numpy.ma.mask_cols") Mask cols of a 2D array that contain masked values. [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. #### Notes The input array’s mask is modified by this function. 
#### Examples >>> import numpy as np >>> a = np.zeros((3, 3), dtype=int) >>> a[1, 1] = 1 >>> a array([[0, 0, 0], [0, 1, 0], [0, 0, 0]]) >>> a = np.ma.masked_equal(a, 1) >>> a masked_array( data=[[0, 0, 0], [0, --, 0], [0, 0, 0]], mask=[[False, False, False], [False, True, False], [False, False, False]], fill_value=1) >>> np.ma.mask_rowcols(a) masked_array( data=[[0, --, 0], [--, --, --], [0, --, 0]], mask=[[False, True, False], [ True, True, True], [False, True, False]], fill_value=1) # numpy.ma.mask_rows ma.mask_rows(_a_ , _axis= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L1162-L1210) Mask rows of a 2D array that contain masked values. This function is a shortcut to `mask_rowcols` with `axis` equal to 0. See also [`mask_rowcols`](numpy.ma.mask_rowcols#numpy.ma.mask_rowcols "numpy.ma.mask_rowcols") Mask rows and/or columns of a 2D array. [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. #### Examples >>> import numpy as np >>> a = np.zeros((3, 3), dtype=int) >>> a[1, 1] = 1 >>> a array([[0, 0, 0], [0, 1, 0], [0, 0, 0]]) >>> a = np.ma.masked_equal(a, 1) >>> a masked_array( data=[[0, 0, 0], [0, --, 0], [0, 0, 0]], mask=[[False, False, False], [False, True, False], [False, False, False]], fill_value=1) >>> np.ma.mask_rows(a) masked_array( data=[[0, 0, 0], [--, --, --], [0, 0, 0]], mask=[[False, False, False], [ True, True, True], [False, False, False]], fill_value=1) # numpy.ma.masked_all ma.masked_all(_shape_ , _dtype= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L102-L160) Empty masked array with all elements masked. Return an empty masked array of the given shape and dtype, where all the data are masked. Parameters: **shape** int or tuple of ints Shape of the required MaskedArray, e.g., `(2, 3)` or `2`. **dtype** dtype, optional Data type of the output. Returns: **a** MaskedArray A masked array with all data masked. 
See also [`masked_all_like`](numpy.ma.masked_all_like#numpy.ma.masked_all_like "numpy.ma.masked_all_like") Empty masked array modelled on an existing array. #### Notes Unlike other masked array creation functions (e.g. [`numpy.ma.zeros`](numpy.ma.zeros#numpy.ma.zeros "numpy.ma.zeros"), [`numpy.ma.ones`](numpy.ma.ones#numpy.ma.ones "numpy.ma.ones"), `numpy.ma.full`), `masked_all` does not initialize the values of the array, and may therefore be marginally faster. However, the values stored in the newly allocated array are arbitrary. For reproducible behavior, be sure to set each element of the array before reading. #### Examples >>> import numpy as np >>> np.ma.masked_all((3, 3)) masked_array( data=[[--, --, --], [--, --, --], [--, --, --]], mask=[[ True, True, True], [ True, True, True], [ True, True, True]], fill_value=1e+20, dtype=float64) The [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") parameter defines the underlying data type. >>> a = np.ma.masked_all((3, 3)) >>> a.dtype dtype('float64') >>> a = np.ma.masked_all((3, 3), dtype=np.int32) >>> a.dtype dtype('int32') # numpy.ma.masked_all_like ma.masked_all_like(_arr_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L163-L224) Empty masked array with the properties of an existing array. Return an empty masked array of the same shape and dtype as the array `arr`, where all the data are masked. Parameters: **arr** ndarray An array describing the shape and dtype of the required MaskedArray. Returns: **a** MaskedArray A masked array with all data masked. Raises: AttributeError If `arr` doesn’t have a shape attribute (i.e. not an ndarray) See also [`masked_all`](numpy.ma.masked_all#numpy.ma.masked_all "numpy.ma.masked_all") Empty masked array with all elements masked. #### Notes Unlike other masked array creation functions (e.g. 
[`numpy.ma.zeros_like`](numpy.ma.zeros_like#numpy.ma.zeros_like "numpy.ma.zeros_like"), [`numpy.ma.ones_like`](numpy.ma.ones_like#numpy.ma.ones_like "numpy.ma.ones_like"), `numpy.ma.full_like`), `masked_all_like` does not initialize the values of the array, and may therefore be marginally faster. However, the values stored in the newly allocated array are arbitrary. For reproducible behavior, be sure to set each element of the array before reading. #### Examples >>> import numpy as np >>> arr = np.zeros((2, 3), dtype=np.float32) >>> arr array([[0., 0., 0.], [0., 0., 0.]], dtype=float32) >>> np.ma.masked_all_like(arr) masked_array( data=[[--, --, --], [--, --, --]], mask=[[ True, True, True], [ True, True, True]], fill_value=np.float64(1e+20), dtype=float32) The dtype of the masked array matches the dtype of `arr`. >>> arr.dtype dtype('float32') >>> np.ma.masked_all_like(arr).dtype dtype('float32') # numpy.ma.masked_array numpy.ma.masked_array[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/__init__.py) alias of [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") # numpy.ma.masked_array.mask property _property_ ma.masked_array.mask Current mask. # numpy.ma.masked_equal ma.masked_equal(_x_ , _value_ , _copy =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L2140-L2170) Mask an array where equal to a given value. Return a MaskedArray, masked where the data in array `x` are equal to `value`. The fill_value of the returned MaskedArray is set to `value`. For floating point arrays, consider using `masked_values(x, value)`. See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. [`masked_values`](numpy.ma.masked_values#numpy.ma.masked_values "numpy.ma.masked_values") Mask using floating point equality. 
#### Examples >>> import numpy as np >>> import numpy.ma as ma >>> a = np.arange(4) >>> a array([0, 1, 2, 3]) >>> ma.masked_equal(a, 2) masked_array(data=[0, 1, --, 3], mask=[False, False, True, False], fill_value=2) # numpy.ma.masked_greater ma.masked_greater(_x_ , _value_ , _copy =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L2005-L2029) Mask an array where greater than a given value. This function is a shortcut to `masked_where`, with `condition` = (x > value). See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> a = np.arange(4) >>> a array([0, 1, 2, 3]) >>> ma.masked_greater(a, 2) masked_array(data=[0, 1, 2, --], mask=[False, False, False, True], fill_value=999999) # numpy.ma.masked_greater_equal ma.masked_greater_equal(_x_ , _value_ , _copy =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L2032-L2056) Mask an array where greater than or equal to a given value. This function is a shortcut to `masked_where`, with `condition` = (x >= value). See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> a = np.arange(4) >>> a array([0, 1, 2, 3]) >>> ma.masked_greater_equal(a, 2) masked_array(data=[0, 1, --, --], mask=[False, False, True, True], fill_value=999999) # numpy.ma.masked_inside ma.masked_inside(_x_ , _v1_ , _v2_ , _copy =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L2173-L2211) Mask an array inside a given interval. Shortcut to `masked_where`, where `condition` is True for `x` inside the interval [v1,v2] (v1 <= x <= v2). The boundaries `v1` and `v2` can be given in either order. 
See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. #### Notes The array `x` is prefilled with its filling value. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> x = [0.31, 1.2, 0.01, 0.2, -0.4, -1.1] >>> ma.masked_inside(x, -0.3, 0.3) masked_array(data=[0.31, 1.2, --, --, -0.4, -1.1], mask=[False, False, True, True, False, False], fill_value=1e+20) The order of `v1` and `v2` doesn’t matter. >>> ma.masked_inside(x, 0.3, -0.3) masked_array(data=[0.31, 1.2, --, --, -0.4, -1.1], mask=[False, False, True, True, False, False], fill_value=1e+20) # numpy.ma.masked_invalid ma.masked_invalid(_a_ , _copy =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L2397-L2431) Mask an array where invalid values occur (NaNs or infs). This function is a shortcut to `masked_where`, with `condition` = ~(np.isfinite(a)). Any pre-existing mask is conserved. Only applies to arrays with a dtype where NaNs or infs make sense (i.e. floating point types), but accepts any array_like object. See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> a = np.arange(5, dtype=float) >>> a[2] = np.nan >>> a[3] = np.inf >>> a array([ 0., 1., nan, inf, 4.]) >>> ma.masked_invalid(a) masked_array(data=[0.0, 1.0, --, --, 4.0], mask=[False, False, True, True, False], fill_value=1e+20) # numpy.ma.masked_less ma.masked_less(_x_ , _value_ , _copy =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L2059-L2083) Mask an array where less than a given value. This function is a shortcut to `masked_where`, with `condition` = (x < value). See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. 
#### Examples >>> import numpy as np >>> import numpy.ma as ma >>> a = np.arange(4) >>> a array([0, 1, 2, 3]) >>> ma.masked_less(a, 2) masked_array(data=[--, --, 2, 3], mask=[ True, True, False, False], fill_value=999999) # numpy.ma.masked_less_equal ma.masked_less_equal(_x_ , _value_ , _copy =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L2086-L2110) Mask an array where less than or equal to a given value. This function is a shortcut to `masked_where`, with `condition` = (x <= value). See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> a = np.arange(4) >>> a array([0, 1, 2, 3]) >>> ma.masked_less_equal(a, 2) masked_array(data=[--, --, --, 3], mask=[ True, True, True, False], fill_value=999999) # numpy.ma.masked_not_equal ma.masked_not_equal(_x_ , _value_ , _copy =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L2113-L2137) Mask an array where _not_ equal to a given value. This function is a shortcut to `masked_where`, with `condition` = (x != value). See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> a = np.arange(4) >>> a array([0, 1, 2, 3]) >>> ma.masked_not_equal(a, 2) masked_array(data=[--, --, 2, --], mask=[ True, True, False, True], fill_value=999999) # numpy.ma.masked_object ma.masked_object(_x_ , _value_ , _copy =True_, _shrink =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L2255-L2321) Mask the array `x` where the data are exactly equal to value. 
This function is similar to [`masked_values`](numpy.ma.masked_values#numpy.ma.masked_values "numpy.ma.masked_values"), but only suitable for object arrays: for floating point, use [`masked_values`](numpy.ma.masked_values#numpy.ma.masked_values "numpy.ma.masked_values") instead. Parameters: **x** array_like Array to mask **value** object Comparison value **copy**{True, False}, optional Whether to return a copy of `x`. **shrink**{True, False}, optional Whether to collapse a mask full of False to nomask Returns: **result** MaskedArray The result of masking `x` where equal to `value`. See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. [`masked_equal`](numpy.ma.masked_equal#numpy.ma.masked_equal "numpy.ma.masked_equal") Mask where equal to a given value (integers). [`masked_values`](numpy.ma.masked_values#numpy.ma.masked_values "numpy.ma.masked_values") Mask using floating point equality. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> food = np.array(['green_eggs', 'ham'], dtype=object) >>> # don't eat spoiled food >>> eat = ma.masked_object(food, 'green_eggs') >>> eat masked_array(data=[--, 'ham'], mask=[ True, False], fill_value='green_eggs', dtype=object) >>> # plain ol` ham is boring >>> fresh_food = np.array(['cheese', 'ham', 'pineapple'], dtype=object) >>> eat = ma.masked_object(fresh_food, 'green_eggs') >>> eat masked_array(data=['cheese', 'ham', 'pineapple'], mask=False, fill_value='green_eggs', dtype=object) Note that `mask` is set to `nomask` if possible. >>> eat masked_array(data=['cheese', 'ham', 'pineapple'], mask=False, fill_value='green_eggs', dtype=object) # numpy.ma.masked_outside ma.masked_outside(_x_ , _v1_ , _v2_ , _copy =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L2214-L2252) Mask an array outside a given interval. Shortcut to `masked_where`, where `condition` is True for `x` outside the interval [v1,v2] (x < v1)|(x > v2). 
The boundaries `v1` and `v2` can be given in either order. See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. #### Notes The array `x` is prefilled with its filling value. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> x = [0.31, 1.2, 0.01, 0.2, -0.4, -1.1] >>> ma.masked_outside(x, -0.3, 0.3) masked_array(data=[--, --, 0.01, 0.2, --, --], mask=[ True, True, False, False, True, True], fill_value=1e+20) The order of `v1` and `v2` doesn’t matter. >>> ma.masked_outside(x, 0.3, -0.3) masked_array(data=[--, --, 0.01, 0.2, --, --], mask=[ True, True, False, False, True, True], fill_value=1e+20) # numpy.ma.masked_values ma.masked_values(_x_ , _value_ , _rtol =1e-05_, _atol =1e-08_, _copy =True_, _shrink =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L2324-L2394) Mask using floating point equality. Return a MaskedArray, masked where the data in array `x` are approximately equal to `value`, determined using [`isclose`](numpy.isclose#numpy.isclose "numpy.isclose"). The default tolerances for `masked_values` are the same as those for [`isclose`](numpy.isclose#numpy.isclose "numpy.isclose"). For integer types, exact equality is used, in the same way as [`masked_equal`](numpy.ma.masked_equal#numpy.ma.masked_equal "numpy.ma.masked_equal"). The fill_value is set to `value` and the mask is set to `nomask` if possible. Parameters: **x** array_like Array to mask. **value** float Masking value. **rtol, atol** float, optional Tolerance parameters passed on to [`isclose`](numpy.isclose#numpy.isclose "numpy.isclose") **copy** bool, optional Whether to return a copy of `x`. **shrink** bool, optional Whether to collapse a mask full of False to `nomask`. Returns: **result** MaskedArray The result of masking `x` where approximately equal to `value`. 
See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. [`masked_equal`](numpy.ma.masked_equal#numpy.ma.masked_equal "numpy.ma.masked_equal") Mask where equal to a given value (integers). #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> x = np.array([1, 1.1, 2, 1.1, 3]) >>> ma.masked_values(x, 1.1) masked_array(data=[1.0, --, 2.0, --, 3.0], mask=[False, True, False, True, False], fill_value=1.1) Note that `mask` is set to `nomask` if possible. >>> ma.masked_values(x, 2.1) masked_array(data=[1. , 1.1, 2. , 1.1, 3. ], mask=False, fill_value=2.1) Unlike [`masked_equal`](numpy.ma.masked_equal#numpy.ma.masked_equal "numpy.ma.masked_equal"), `masked_values` can perform approximate equalities. >>> ma.masked_values(x, 2.1, atol=1e-1) masked_array(data=[1.0, 1.1, --, 1.1, 3.0], mask=[False, False, True, False, False], fill_value=2.1) # numpy.ma.masked_where ma.masked_where(_condition_ , _a_ , _copy =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L1882-L2002) Mask an array where a condition is met. Return `a` as an array masked where `condition` is True. Any masked values of `a` or `condition` are also masked in the output. Parameters: **condition** array_like Masking condition. When `condition` tests floating point values for equality, consider using `masked_values` instead. **a** array_like Array to mask. **copy** bool If True (default) make a copy of `a` in the result. If False modify `a` in place and return a view. Returns: **result** MaskedArray The result of masking `a` where `condition` is True. See also [`masked_values`](numpy.ma.masked_values#numpy.ma.masked_values "numpy.ma.masked_values") Mask using floating point equality. [`masked_equal`](numpy.ma.masked_equal#numpy.ma.masked_equal "numpy.ma.masked_equal") Mask where equal to a given value. 
[`masked_not_equal`](numpy.ma.masked_not_equal#numpy.ma.masked_not_equal "numpy.ma.masked_not_equal") Mask where _not_ equal to a given value. [`masked_less_equal`](numpy.ma.masked_less_equal#numpy.ma.masked_less_equal "numpy.ma.masked_less_equal") Mask where less than or equal to a given value. [`masked_greater_equal`](numpy.ma.masked_greater_equal#numpy.ma.masked_greater_equal "numpy.ma.masked_greater_equal") Mask where greater than or equal to a given value. [`masked_less`](numpy.ma.masked_less#numpy.ma.masked_less "numpy.ma.masked_less") Mask where less than a given value. [`masked_greater`](numpy.ma.masked_greater#numpy.ma.masked_greater "numpy.ma.masked_greater") Mask where greater than a given value. [`masked_inside`](numpy.ma.masked_inside#numpy.ma.masked_inside "numpy.ma.masked_inside") Mask inside a given interval. [`masked_outside`](numpy.ma.masked_outside#numpy.ma.masked_outside "numpy.ma.masked_outside") Mask outside a given interval. [`masked_invalid`](numpy.ma.masked_invalid#numpy.ma.masked_invalid "numpy.ma.masked_invalid") Mask invalid values (NaNs or infs). #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> a = np.arange(4) >>> a array([0, 1, 2, 3]) >>> ma.masked_where(a <= 2, a) masked_array(data=[--, --, --, 3], mask=[ True, True, True, False], fill_value=999999) Mask array `b` conditional on `a`. 
>>> b = ['a', 'b', 'c', 'd'] >>> ma.masked_where(a == 2, b) masked_array(data=['a', 'b', --, 'd'], mask=[False, False, True, False], fill_value='N/A', dtype='<U1') Effect of the `copy` argument. >>> c = ma.masked_where(a <= 2, a) >>> c masked_array(data=[--, --, --, 3], mask=[ True, True, True, False], fill_value=999999) >>> c[0] = 99 >>> c masked_array(data=[99, --, --, 3], mask=[False, True, True, False], fill_value=999999) >>> a array([0, 1, 2, 3]) >>> c = ma.masked_where(a <= 2, a, copy=False) >>> c[0] = 99 >>> c masked_array(data=[99, --, --, 3], mask=[False, True, True, False], fill_value=999999) >>> a array([99, 1, 2, 3]) When `condition` or `a` contain masked values. >>> a = np.arange(4) >>> a = ma.masked_where(a == 2, a) >>> a masked_array(data=[0, 1, --, 3], mask=[False, False, True, False], fill_value=999999) >>> b = np.arange(4) >>> b = ma.masked_where(b == 0, b) >>> b masked_array(data=[--, 1, 2, 3], mask=[ True, False, False, False], fill_value=999999) >>> ma.masked_where(a == 3, b) masked_array(data=[--, 1, --, --], mask=[ True, False, True, True], fill_value=999999) # numpy.ma.MaskedArray.__abs__ method ma.MaskedArray.__abs__(_self_) # numpy.ma.MaskedArray.__add__ method ma.MaskedArray.__add__(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4316-L4323) Add self to other, and return a new masked array. # numpy.ma.MaskedArray.__and__ method ma.MaskedArray.__and__(_value_ , _/_) Return self&value. # numpy.ma.MaskedArray.__array__ method ma.MaskedArray.__array__([_dtype_ , ]_*_ , _copy=None_) For the `dtype` parameter: a new reference to self is returned if `dtype` is not given or matches the array’s data type; a new array of the given data type is returned if `dtype` differs from the current data type of the array. For the `copy` parameter: a new reference to self is returned if `copy=False` or `copy=None` and copying isn’t forced by the `dtype` parameter; a new array is returned for `copy=True`, regardless of the `dtype` parameter. 
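As a sketch of the `copy` behaviour described above (conversions such as `np.asarray` and `np.array` go through `__array__`; the details here are an illustration, not part of the reference text):

```python
import numpy as np

a = np.ma.masked_equal([1, 2, 3], 2)

# Plain conversion drops the MaskedArray wrapper but can reuse the data buffer.
plain = np.asarray(a)
print(type(plain) is np.ndarray)
print(np.shares_memory(plain, a.data))

# copy=True forces a freshly allocated array.
forced = np.array(a, copy=True)
print(np.shares_memory(forced, a.data))
```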
A more detailed explanation of the `__array__` interface can be found in [The __array__() method](../../user/basics.interoperability#dunder-array-interface). # numpy.ma.MaskedArray.__array_priority__ attribute ma.MaskedArray.__array_priority__ _= 15_ # numpy.ma.MaskedArray.__array_wrap__ method ma.MaskedArray.__array_wrap__(_obj_ , _context =None_, _return_scalar =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L3143-L3201) Special hook for ufuncs. Wraps the numpy array and sets the mask according to context. # numpy.ma.MaskedArray.__bool__ method ma.MaskedArray.__bool__(_/_) True if self else False # numpy.ma.MaskedArray.__contains__ method ma.MaskedArray.__contains__(_key_ , _/_) Return key in self. # numpy.ma.MaskedArray.__copy__ method ma.MaskedArray.__copy__() Used if [`copy.copy`](https://docs.python.org/3/library/copy.html#copy.copy "\(in Python v3.13\)") is called on an array. Returns a copy of the array. Equivalent to `a.copy(order='K')`. # numpy.ma.MaskedArray.__deepcopy__ method ma.MaskedArray.__deepcopy__(_memo_ , _/_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L6564-L6578) Used if [`copy.deepcopy`](https://docs.python.org/3/library/copy.html#copy.deepcopy "\(in Python v3.13\)") is called on an array. # numpy.ma.MaskedArray.__delitem__ method ma.MaskedArray.__delitem__(_key_ , _/_) Delete self[key]. # numpy.ma.MaskedArray.__div__ method ma.MaskedArray.__div__(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4365-L4372) Divide other into self, and return a new masked array. # numpy.ma.MaskedArray.__divmod__ method ma.MaskedArray.__divmod__(_value_ , _/_) Return divmod(self, value). # numpy.ma.MaskedArray.__eq__ method ma.MaskedArray.__eq__(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4277-L4288) Check whether other equals self elementwise. 
When either of the elements is masked, the result is masked as well, but the underlying boolean data are still set, with self and other considered equal if both are masked, and unequal otherwise. For structured arrays, all fields are combined, with masked values ignored. The result is masked if all fields were masked, with self and other considered equal only if both were fully masked. # numpy.ma.MaskedArray.__float__ method ma.MaskedArray.__float__()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4552-L4563) Convert to float. # numpy.ma.MaskedArray.__floordiv__ method ma.MaskedArray.__floordiv__(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4390-L4397) Divide other into self, and return a new masked array. # numpy.ma.MaskedArray.__ge__ method ma.MaskedArray.__ge__(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4310-L4311) Return self>=value. # numpy.ma.MaskedArray.__getitem__ method ma.MaskedArray.__getitem__(_indx_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L3293-L3417) x.__getitem__(y) <==> x[y] Return the item described by i, as a masked array. # numpy.ma.MaskedArray.__getstate__ method ma.MaskedArray.__getstate__()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L6530-L6537) Return the internal state of the masked array, for pickling purposes. # numpy.ma.MaskedArray.__gt__ method ma.MaskedArray.__gt__(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4313-L4314) Return self>value. # numpy.ma.MaskedArray.__iadd__ method ma.MaskedArray.__iadd__(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4422-L4438) Add other to self in-place. # numpy.ma.MaskedArray.__iand__ method ma.MaskedArray.__iand__(_value_ , _/_) Return self&=value. 
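The in-place operators such as `__iadd__` above modify the data buffer directly while respecting the mask; a minimal sketch:

```python
import numpy as np

a = np.ma.array([1, 2, 3], mask=[False, True, False])
a += 10   # __iadd__: updates in place; the masked slot stays masked

# tolist() renders masked entries as None
print(a.tolist())
```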
# numpy.ma.MaskedArray.__idiv__ method ma.MaskedArray.__idiv__(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4474-L4491) Divide self by other in-place. # numpy.ma.MaskedArray.__ifloordiv__ method ma.MaskedArray.__ifloordiv__(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4493-L4510) Floor divide self by other in-place. # numpy.ma.MaskedArray.__ilshift__ method ma.MaskedArray.__ilshift__(_value_ , _/_) Return self<<=value. # numpy.ma.MaskedArray.__imod__ method ma.MaskedArray.__imod__(_value_ , _/_) Return self%=value. # numpy.ma.MaskedArray.__imul__ method ma.MaskedArray.__imul__(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4457-L4472) Multiply self by other in-place. # numpy.ma.MaskedArray.__int__ method ma.MaskedArray.__int__()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4565-L4575) Convert to int. # numpy.ma.MaskedArray.__ior__ method ma.MaskedArray.__ior__(_value_ , _/_) Return self|=value. # numpy.ma.MaskedArray.__ipow__ method ma.MaskedArray.__ipow__(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4531-L4550) Raise self to the power other, in place. # numpy.ma.MaskedArray.__irshift__ method ma.MaskedArray.__irshift__(_value_ , _/_) Return self>>=value. # numpy.ma.MaskedArray.__isub__ method ma.MaskedArray.__isub__(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4440-L4455) Subtract other from self in-place. # numpy.ma.MaskedArray.__itruediv__ method ma.MaskedArray.__itruediv__(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4512-L4529) True divide self by other in-place. # numpy.ma.MaskedArray.__ixor__ method ma.MaskedArray.__ixor__(_value_ , _/_) Return self^=value. # numpy.ma.MaskedArray.__le__ method ma.MaskedArray.__le__(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4304-L4305) Return self<=value. 
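Comparison operators on masked arrays (`__eq__`, `__le__`, and friends above) propagate masks elementwise; a sketch of the rule quoted for `__eq__`:

```python
import numpy as np

a = np.ma.array([1, 2, 3], mask=[False, True, False])
b = np.ma.array([1, 9, 4], mask=[False, True, False])

eq = a == b
print(bool(eq[0]))              # unmasked, equal values
print(eq[1] is np.ma.masked)    # masked in either operand -> masked result
print(eq.data[1])               # underlying data: both masked counts as equal
print(bool(eq[2]))              # unmasked, different values
```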
# numpy.ma.MaskedArray.__len__ method ma.MaskedArray.__len__(_/_) Return len(self). # numpy.ma.MaskedArray.__lshift__ method ma.MaskedArray.__lshift__(_value_ , _/_) Return self<<value. # numpy.ma.MaskedArray.__rrshift__ method ma.MaskedArray.__rrshift__(_value_ , _/_) Return value>>self. # numpy.ma.MaskedArray.__rshift__ method ma.MaskedArray.__rshift__(_value_ , _/_) Return self>>value. # numpy.ma.MaskedArray.__rsub__ method ma.MaskedArray.__rsub__(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4343-L4348) Subtract self from other, and return a new masked array. # numpy.ma.MaskedArray.__rtruediv__ method ma.MaskedArray.__rtruediv__(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4383-L4388) Divide self into other, and return a new masked array. # numpy.ma.MaskedArray.__rxor__ method ma.MaskedArray.__rxor__(_value_ , _/_) Return value^self. # numpy.ma.MaskedArray.__setitem__ method ma.MaskedArray.__setitem__(_indx_ , _value_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L3422-L3490) x.__setitem__(i, y) <==> x[i]=y Set item described by index. If value is masked, masks those locations. # numpy.ma.MaskedArray.__setmask__ method ma.MaskedArray.__setmask__(_mask_ , _copy =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L3521-L3589) Set the mask. # numpy.ma.MaskedArray.__setstate__ method ma.MaskedArray.__setstate__(_state_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L6539-L6554) Restore the internal state of the masked array, for pickling purposes. `state` is typically the output of `__getstate__`, and is a 5-tuple: * class name * a tuple giving the shape of the data * a typecode for the data * a binary string for the data * a binary string for the mask. # numpy.ma.MaskedArray.__str__ method ma.MaskedArray.__str__()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4087-L4088) Return str(self). 
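Assigning the `masked` constant through `__setitem__` masks a location, as noted above; assigning an ordinary value unmasks it again (a sketch):

```python
import numpy as np

a = np.ma.array([1, 2, 3])
a[1] = np.ma.masked      # __setitem__ masks this slot
print(a.mask.tolist())

a[1] = 20                # a concrete value clears the mask there
print(a.tolist())
```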
# numpy.ma.MaskedArray.__sub__ method ma.MaskedArray.__sub__(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4334-L4341) Subtract other from self, and return a new masked array. # numpy.ma.MaskedArray.__truediv__ method ma.MaskedArray.__truediv__(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4374-L4381) Divide other into self, and return a new masked array. # numpy.ma.MaskedArray.__xor__ method ma.MaskedArray.__xor__(_value_ , _/_) Return self^value. # numpy.ma.MaskedArray.all method ma.MaskedArray.all(_axis=None_ , _out=None_ , _keepdims= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L5018-L5057) Returns True if all elements evaluate to True. The output array is masked where all the values along the given axis are masked: if the output would have been a scalar and that all the values are masked, then the output is `masked`. Refer to [`numpy.all`](numpy.all#numpy.all "numpy.all") for full documentation. See also [`numpy.ndarray.all`](numpy.ndarray.all#numpy.ndarray.all "numpy.ndarray.all") corresponding function for ndarrays [`numpy.all`](numpy.all#numpy.all "numpy.all") equivalent function #### Examples >>> import numpy as np >>> np.ma.array([1,2,3]).all() True >>> a = np.ma.array([1,2,3], mask=True) >>> (a.all() is np.ma.masked) True # numpy.ma.MaskedArray.anom method ma.MaskedArray.anom(_axis =None_, _dtype =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L5472-L5508) Compute the anomalies (deviations from the arithmetic mean) along the given axis. Returns an array of anomalies, with the same shape as the input and where the arithmetic mean is computed along the given axis. Parameters: **axis** int, optional Axis over which the anomalies are taken. The default is to use the mean of the flattened array as reference. **dtype** dtype, optional Type to use in computing the variance. 
For arrays of integer type the default is float32; for arrays of float types it is the same as the array type. See also [`mean`](numpy.mean#numpy.mean "numpy.mean") Compute the mean of the array. #### Examples >>> import numpy as np >>> a = np.ma.array([1,2,3]) >>> a.anom() masked_array(data=[-1., 0., 1.], mask=False, fill_value=1e+20) # numpy.ma.MaskedArray.any method ma.MaskedArray.any(_axis=None_ , _out=None_ , _keepdims= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L5059-L5087) Returns True if any of the elements of `a` evaluate to True. Masked values are considered as False during computation. Refer to [`numpy.any`](numpy.any#numpy.any "numpy.any") for full documentation. See also [`numpy.ndarray.any`](numpy.ndarray.any#numpy.ndarray.any "numpy.ndarray.any") corresponding function for ndarrays [`numpy.any`](numpy.any#numpy.any "numpy.any") equivalent function # numpy.ma.MaskedArray.argmax method ma.MaskedArray.argmax(_axis=None_ , _fill_value=None_ , _out=None_ , _*_ , _keepdims= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L5775-L5813) Returns array of indices of the maximum values along the given axis. Masked values are treated as if they had the value fill_value. Parameters: **axis**{None, integer} If None, the index is into the flattened array, otherwise along the specified axis **fill_value** scalar or None, optional Value used to fill in the masked values. If None, the output of maximum_fill_value(self._data) is used instead. **out**{None, array}, optional Array into which the result can be placed. Its type is preserved and it must be of the right shape to hold the output. 
Returns: **index_array**{integer_array} #### Examples >>> import numpy as np >>> a = np.arange(6).reshape(2,3) >>> a.argmax() 5 >>> a.argmax(0) array([1, 1, 1]) >>> a.argmax(1) array([2, 2]) # numpy.ma.MaskedArray.argmin method ma.MaskedArray.argmin(_axis=None_ , _fill_value=None_ , _out=None_ , _*_ , _keepdims= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L5727-L5773) Return array of indices to the minimum values along the given axis. Parameters: **axis**{None, integer} If None, the index is into the flattened array, otherwise along the specified axis **fill_value** scalar or None, optional Value used to fill in the masked values. If None, the output of minimum_fill_value(self._data) is used instead. **out**{None, array}, optional Array into which the result can be placed. Its type is preserved and it must be of the right shape to hold the output. Returns: ndarray or scalar If multi-dimension input, returns a new ndarray of indices to the minimum values along the given axis. Otherwise, returns a scalar of index to the minimum values along the given axis. #### Examples >>> import numpy as np >>> x = np.ma.array(np.arange(4), mask=[1,1,0,0]) >>> x.shape = (2,2) >>> x masked_array( data=[[--, --], [2, 3]], mask=[[ True, True], [False, False]], fill_value=999999) >>> x.argmin(axis=0, fill_value=-1) array([0, 0]) >>> x.argmin(axis=0, fill_value=9) array([1, 1]) # numpy.ma.MaskedArray.argsort method ma.MaskedArray.argsort(_axis= _, _kind=None_ , _order=None_ , _endwith=True_ , _fill_value=None_ , _*_ , _stable=False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L5647-L5725) Return an ndarray of indices that sort the array along the specified axis. Masked values are filled beforehand to [`fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value"). Parameters: **axis** int, optional Axis along which to sort. If None, the default, the flattened array is used. 
**kind**{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional The sorting algorithm used. **order** list, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. Not all fields need be specified. **endwith**{True, False}, optional Whether missing values (if any) should be treated as the largest values (True) or the smallest values (False). When the array contains unmasked values at the same extremes of the datatype, the ordering of these values and the masked values is undefined. **fill_value** scalar or None, optional Value used internally for the masked values. If `fill_value` is not None, it supersedes `endwith`. **stable** bool, optional Only for compatibility with `np.argsort`. Ignored. Returns: **index_array** ndarray, int Array of indices that sort `a` along the specified axis. In other words, `a[index_array]` yields a sorted `a`. See also [`ma.MaskedArray.sort`](numpy.ma.maskedarray.sort#numpy.ma.MaskedArray.sort "numpy.ma.MaskedArray.sort") Describes sorting algorithms used. [`lexsort`](numpy.lexsort#numpy.lexsort "numpy.lexsort") Indirect stable sort with multiple keys. [`numpy.ndarray.sort`](numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort") Inplace sort. #### Notes See [`sort`](numpy.sort#numpy.sort "numpy.sort") for notes on the different sorting algorithms. #### Examples >>> import numpy as np >>> a = np.ma.array([3,2,1], mask=[False, False, True]) >>> a masked_array(data=[3, 2, --], mask=[False, False, True], fill_value=999999) >>> a.argsort() array([1, 0, 2]) # numpy.ma.MaskedArray.astype method ma.MaskedArray.astype(_dtype_ , _order ='K'_, _casting ='unsafe'_, _subok =True_, _copy =True_) Copy of the array, cast to a specified type. Parameters: **dtype** str or dtype Typecode or data-type to which the array is cast. **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout order of the result. 
‘C’ means C order, ‘F’ means Fortran order, ‘A’ means ‘F’ order if all the arrays are Fortran contiguous, ‘C’ order otherwise, and ‘K’ means as close to the order the array elements appear in memory as possible. Default is ‘K’. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur. Defaults to ‘unsafe’ for backwards compatibility. * ‘no’ means the data types should not be cast at all. * ‘equiv’ means only byte-order changes are allowed. * ‘safe’ means only casts which can preserve values are allowed. * ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed. * ‘unsafe’ means any data conversions may be done. **subok** bool, optional If True, then sub-classes will be passed-through (default), otherwise the returned array will be forced to be a base-class array. **copy** bool, optional By default, astype always returns a newly allocated array. If this is set to false, and the [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), `order`, and `subok` requirements are satisfied, the input array is returned instead of a copy. Returns: **arr_t** ndarray Unless [`copy`](numpy.copy#numpy.copy "numpy.copy") is False and the other conditions for returning the input array are satisfied (see description for [`copy`](numpy.copy#numpy.copy "numpy.copy") input parameter), `arr_t` is a new array of the same shape as the input array, with dtype, order given by [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), `order`. Raises: ComplexWarning When casting from complex to float or int. To avoid this, one should use `a.real.astype(t)`. #### Examples >>> import numpy as np >>> x = np.array([1, 2, 2.5]) >>> x array([1. , 2. , 2.5]) >>> x.astype(int) array([1, 2, 2]) # numpy.ma.MaskedArray.base attribute ma.MaskedArray.base Base object if memory is from some other object. 
#### Examples The base of an array that owns its memory is None: >>> import numpy as np >>> x = np.array([1,2,3,4]) >>> x.base is None True Slicing creates a view, whose memory is shared with x: >>> y = x[2:] >>> y.base is x True # numpy.ma.MaskedArray.byteswap method ma.MaskedArray.byteswap(_inplace =False_) Swap the bytes of the array elements. Toggle between little-endian and big-endian data representation by returning a byteswapped array, optionally swapped in-place. Arrays of byte-strings are not swapped. The real and imaginary parts of a complex number are swapped individually. Parameters: **inplace** bool, optional If `True`, swap bytes in-place, default is `False`. Returns: **out** ndarray The byteswapped array. If `inplace` is `True`, this is a view to self. #### Examples >>> import numpy as np >>> A = np.array([1, 256, 8755], dtype=np.int16) >>> list(map(hex, A)) ['0x1', '0x100', '0x2233'] >>> A.byteswap(inplace=True) array([ 256, 1, 13090], dtype=int16) >>> list(map(hex, A)) ['0x100', '0x1', '0x3322'] Arrays of byte-strings are not swapped >>> A = np.array([b'ceg', b'fac']) >>> A.byteswap() array([b'ceg', b'fac'], dtype='|S3') `A.view(A.dtype.newbyteorder()).byteswap()` produces an array with the same values but different representation in memory >>> A = np.array([1, 2, 3],dtype=np.int64) >>> A.view(np.uint8) array([1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0], dtype=uint8) >>> A.view(A.dtype.newbyteorder()).byteswap(inplace=True) array([1, 2, 3], dtype='>i8') >>> A.view(np.uint8) array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3], dtype=uint8) # numpy.ma.MaskedArray.choose method ma.MaskedArray.choose(_choices_ , _out =None_, _mode ='raise'_) Use an index array to construct a new array from a set of choices. Refer to [`numpy.choose`](numpy.choose#numpy.choose "numpy.choose") for full documentation. 
See also [`numpy.choose`](numpy.choose#numpy.choose "numpy.choose") equivalent function # numpy.ma.MaskedArray.clip method ma.MaskedArray.clip(_min =None_, _max =None_, _out =None_, _** kwargs_) Return an array whose values are limited to `[min, max]`. One of max or min must be given. Refer to [`numpy.clip`](numpy.clip#numpy.clip "numpy.clip") for full documentation. See also [`numpy.clip`](numpy.clip#numpy.clip "numpy.clip") equivalent function # numpy.ma.MaskedArray.compress method ma.MaskedArray.compress(_condition_ , _axis =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L3983-L4054) Return `a` where condition is `True`. If condition is a [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray"), missing values are considered as `False`. Parameters: **condition** var Boolean 1-d array selecting which entries to return. If len(condition) is less than the size of a along the axis, then output is truncated to length of condition array. **axis**{None, int}, optional Axis along which the operation must be performed. **out**{None, ndarray}, optional Alternative output array in which to place the result. It must have the same shape as the expected output but the type will be cast if necessary. Returns: **result** MaskedArray A [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") object. #### Notes Please note the difference with [`compressed`](numpy.ma.maskedarray.compressed#numpy.ma.MaskedArray.compressed "numpy.ma.MaskedArray.compressed") ! The output of [`compress`](numpy.compress#numpy.compress "numpy.compress") has a mask, the output of [`compressed`](numpy.ma.maskedarray.compressed#numpy.ma.MaskedArray.compressed "numpy.ma.MaskedArray.compressed") does not. 
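The distinction drawn in the note above is easy to check directly; a minimal sketch with made-up data:

```python
import numpy as np

x = np.ma.array([1, 2, 3], mask=[0, 1, 0])

# compress() keeps the result as a MaskedArray (mask and all)...
c1 = x.compress([1, 0, 1])
assert isinstance(c1, np.ma.MaskedArray)

# ...while compressed() drops the masked entries and returns a plain ndarray.
c2 = x.compressed()
assert type(c2) is np.ndarray
```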
#### Examples >>> import numpy as np >>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4) >>> x masked_array( data=[[1, --, 3], [--, 5, --], [7, --, 9]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=999999) >>> x.compress([1, 0, 1]) masked_array(data=[1, 3], mask=[False, False], fill_value=999999) >>> x.compress([1, 0, 1], axis=1) masked_array( data=[[1, 3], [--, --], [7, 9]], mask=[[False, False], [ True, True], [False, False]], fill_value=999999) # numpy.ma.MaskedArray.compressed method ma.MaskedArray.compressed()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L3947-L3981) Return all the non-masked data as a 1-D array. Returns: **data** ndarray A new [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") holding the non-masked data is returned. #### Notes The result is **not** a MaskedArray! #### Examples >>> import numpy as np >>> x = np.ma.array(np.arange(5), mask=[0]*2 + [1]*3) >>> x.compressed() array([0, 1]) >>> type(x.compressed()) <class 'numpy.ndarray'> N-D arrays are compressed to 1-D. >>> arr = [[1, 2], [3, 4]] >>> mask = [[1, 0], [0, 1]] >>> x = np.ma.array(arr, mask=mask) >>> x.compressed() array([2, 3]) # numpy.ma.MaskedArray.conj method ma.MaskedArray.conj() Complex-conjugate all elements. Refer to [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") for full documentation. See also [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") equivalent function # numpy.ma.MaskedArray.conjugate method ma.MaskedArray.conjugate() Return the complex conjugate, element-wise. Refer to [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") for full documentation. See also [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") equivalent function # numpy.ma.MaskedArray.copy method ma.MaskedArray.copy(_order ='C'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L2638-L2648) Return a copy of the array. 
Parameters: **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout of the copy. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. (Note that this function and [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") are very similar but have different default values for their order= arguments, and this function always passes sub-classes through.) See also [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") Similar function with different default behavior [`numpy.copyto`](numpy.copyto#numpy.copyto "numpy.copyto") #### Notes This function is the preferred method for creating an array copy. The function [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") is similar, but it defaults to using order ‘K’, and will not pass sub-classes through by default. #### Examples >>> import numpy as np >>> x = np.array([[1,2,3],[4,5,6]], order='F') >>> y = x.copy() >>> x.fill(0) >>> x array([[0, 0, 0], [0, 0, 0]]) >>> y array([[1, 2, 3], [4, 5, 6]]) >>> y.flags['C_CONTIGUOUS'] True For arrays containing Python objects (e.g. dtype=object), the copy is a shallow one. 
The new array will contain the same object which may lead to surprises if that object can be modified (is mutable): >>> a = np.array([1, 'm', [2, 3, 4]], dtype=object) >>> b = a.copy() >>> b[2][0] = 10 >>> a array([1, 'm', list([10, 3, 4])], dtype=object) To ensure all elements within an `object` array are copied, use [`copy.deepcopy`](https://docs.python.org/3/library/copy.html#copy.deepcopy "\(in Python v3.13\)"): >>> import copy >>> a = np.array([1, 'm', [2, 3, 4]], dtype=object) >>> c = copy.deepcopy(a) >>> c[2][0] = 10 >>> c array([1, 'm', list([10, 3, 4])], dtype=object) >>> a array([1, 'm', list([2, 3, 4])], dtype=object) # numpy.ma.MaskedArray.count method ma.MaskedArray.count(_axis=None_ , _keepdims=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4633-L4727) Count the non-masked elements of the array along the given axis. Parameters: **axis** None or int or tuple of ints, optional Axis or axes along which the count is performed. The default, None, performs the count over all the dimensions of the input array. `axis` may be negative, in which case it counts from the last to the first axis. If this is a tuple of ints, the count is performed on multiple axes, instead of a single axis or all the axes as before. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. Returns: **result** ndarray or scalar An array with the same shape as the input array, with the specified axis removed. If the array is a 0-d array, or if `axis` is None, a scalar is returned. See also [`ma.count_masked`](numpy.ma.count_masked#numpy.ma.count_masked "numpy.ma.count_masked") Count masked elements in array or along a given axis. 
#### Examples >>> import numpy.ma as ma >>> a = ma.arange(6).reshape((2, 3)) >>> a[1, :] = ma.masked >>> a masked_array( data=[[0, 1, 2], [--, --, --]], mask=[[False, False, False], [ True, True, True]], fill_value=999999) >>> a.count() 3 When the `axis` keyword is specified an array of appropriate size is returned. >>> a.count(axis=0) array([1, 1, 1]) >>> a.count(axis=1) array([3, 0]) # numpy.ma.MaskedArray.ctypes attribute ma.MaskedArray.ctypes An object to simplify the interaction of the array with the ctypes module. This attribute creates an object that makes it easier to use arrays when calling shared libraries with the ctypes module. The returned object has, among others, data, shape, and strides attributes (see Notes below) which themselves return ctypes objects that can be used as arguments to a shared library. Parameters: **None** Returns: **c** Python object Possessing attributes data, shape, strides, etc. See also [`numpy.ctypeslib`](../routines.ctypeslib#module-numpy.ctypeslib "numpy.ctypeslib") #### Notes Below are the public attributes of this object which were documented in “Guide to NumPy” (we have omitted undocumented public attributes, as well as documented private attributes): _ctypes.data A pointer to the memory area of the array as a Python integer. This memory area may contain data that is not aligned, or not in correct byte-order. The memory area may not even be writeable. The array flags and data-type of this array should be respected when passing this attribute to arbitrary C-code to avoid trouble that can include Python crashing. User Beware! The value of this attribute is exactly the same as: `self.__array_interface__['data'][0]`. 
Note that unlike `data_as`, a reference won’t be kept to the array: code like `ctypes.c_void_p((a + b).ctypes.data)` will result in a pointer to a deallocated array, and should be spelt `(a + b).ctypes.data_as(ctypes.c_void_p)` _ctypes.shape (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the C-integer corresponding to `dtype('p')` on this platform (see [`c_intp`](../routines.ctypeslib#numpy.ctypeslib.c_intp "numpy.ctypeslib.c_intp")). This base-type could be [`ctypes.c_int`](https://docs.python.org/3/library/ctypes.html#ctypes.c_int "\(in Python v3.13\)"), [`ctypes.c_long`](https://docs.python.org/3/library/ctypes.html#ctypes.c_long "\(in Python v3.13\)"), or [`ctypes.c_longlong`](https://docs.python.org/3/library/ctypes.html#ctypes.c_longlong "\(in Python v3.13\)") depending on the platform. The ctypes array contains the shape of the underlying array. _ctypes.strides (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the same as for the shape attribute. This ctypes array contains the strides information from the underlying array. This strides information is important for showing how many bytes must be jumped to get to the next element in the array. _ctypes.data_as(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_internal.py#L279-L296) Return the data pointer cast to a particular c-types object. For example, calling `self._as_parameter_` is equivalent to `self.data_as(ctypes.c_void_p)`. Perhaps you want to use the data as a pointer to a ctypes array of floating-point data: `self.data_as(ctypes.POINTER(ctypes.c_double))`. The returned pointer will keep a reference to the array. _ctypes.shape_as(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_internal.py#L298-L305) Return the shape tuple as an array of some other c-types type. For example: `self.shape_as(ctypes.c_short)`. 
_ctypes.strides_as(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_internal.py#L307-L314) Return the strides tuple as an array of some other c-types type. For example: `self.strides_as(ctypes.c_longlong)`. If the ctypes module is not available, then the ctypes attribute of array objects still returns something useful, but ctypes objects are not returned and errors may be raised instead. In particular, the object will still have the `as_parameter` attribute which will return an integer equal to the data attribute. #### Examples >>> import numpy as np >>> import ctypes >>> x = np.array([[0, 1], [2, 3]], dtype=np.int32) >>> x array([[0, 1], [2, 3]], dtype=int32) >>> x.ctypes.data 31962608 # may vary >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)) <__main__.LP_c_uint object at 0x7ff2fc1fc200> # may vary >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)).contents c_uint(0) >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint64)).contents c_ulong(4294967296) >>> x.ctypes.shape # may vary >>> x.ctypes.strides # may vary # numpy.ma.MaskedArray.cumprod method ma.MaskedArray.cumprod(_axis =None_, _dtype =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L5385-L5414) Return the cumulative product of the array elements over the given axis. Masked values are set to 1 internally during the computation. However, their position is saved, and the result will be masked at the same locations. Refer to [`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod") for full documentation. See also [`numpy.ndarray.cumprod`](numpy.ndarray.cumprod#numpy.ndarray.cumprod "numpy.ndarray.cumprod") corresponding function for ndarrays [`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod") equivalent function #### Notes The mask is lost if `out` is not a valid MaskedArray ! Arithmetic is modular when using integer types, and no error is raised on overflow. 
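The cumprod entry above ships no doctest; a short sketch of the behaviour it describes (masked values act as 1 in the running product, and their positions stay masked), using made-up data:

```python
import numpy as np

marr = np.ma.array([1, 2, 3, 4], mask=[0, 1, 0, 0])

# The masked entry contributes 1 to the running product, but the result
# remains masked at that position: data [1, --, 3, 12].
result = marr.cumprod()

# tolist() converts masked entries to None, which makes this easy to assert:
assert result.tolist() == [1, None, 3, 12]
assert bool(result.mask[1])
```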
# numpy.ma.MaskedArray.cumsum method ma.MaskedArray.cumsum(_axis =None_, _dtype =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L5301-L5341) Return the cumulative sum of the array elements over the given axis. Masked values are set to 0 internally during the computation. However, their position is saved, and the result will be masked at the same locations. Refer to [`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") for full documentation. See also [`numpy.ndarray.cumsum`](numpy.ndarray.cumsum#numpy.ndarray.cumsum "numpy.ndarray.cumsum") corresponding function for ndarrays [`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") equivalent function #### Notes The mask is lost if `out` is not a valid [`ma.MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") ! Arithmetic is modular when using integer types, and no error is raised on overflow. #### Examples >>> import numpy as np >>> marr = np.ma.array(np.arange(10), mask=[0,0,0,1,1,1,0,0,0,0]) >>> marr.cumsum() masked_array(data=[0, 1, 3, --, --, --, 9, 16, 24, 33], mask=[False, False, False, True, True, True, False, False, False, False], fill_value=999999) # numpy.ma.MaskedArray.diagonal method ma.MaskedArray.diagonal(_offset =0_, _axis1 =0_, _axis2 =1_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L2638-L2648) Return specified diagonals. In NumPy 1.9 the returned array is a read-only view instead of a copy as in previous NumPy versions. In a future version the read-only restriction will be removed. Refer to [`numpy.diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal") for full documentation. See also [`numpy.diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal") equivalent function # numpy.ma.MaskedArray.dtype property _property_ ma.MaskedArray.dtype Data-type of the array’s elements. Warning Setting `arr.dtype` is discouraged and may be deprecated in the future. 
Setting will replace the `dtype` without modifying the memory (see also [`ndarray.view`](numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") and [`ndarray.astype`](numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype")). Parameters: **None** Returns: **d** numpy dtype object See also [`ndarray.astype`](numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype") Cast the values contained in the array to a new data-type. [`ndarray.view`](numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") Create a view of the same data but a different data-type. [`numpy.dtype`](numpy.dtype#numpy.dtype "numpy.dtype") #### Examples >>> x array([[0, 1], [2, 3]]) >>> x.dtype dtype('int32') >>> type(x.dtype) <class 'numpy.dtype[int32]'> # numpy.ma.MaskedArray.dump method ma.MaskedArray.dump(_file_) Dump a pickle of the array to the specified file. The array can be read back with pickle.load or numpy.load. Parameters: **file** str or Path A string naming the dump file. # numpy.ma.MaskedArray.dumps method ma.MaskedArray.dumps() Returns the pickle of the array as a string. pickle.loads will convert the string back to an array. Parameters: **None** # numpy.ma.MaskedArray.fill method ma.MaskedArray.fill(_value_) Fill the array with a scalar value. Parameters: **value** scalar All elements of `a` will be assigned this value. #### Examples >>> import numpy as np >>> a = np.array([1, 2]) >>> a.fill(0) >>> a array([0, 0]) >>> a = np.empty(2) >>> a.fill(1) >>> a array([1., 1.]) Fill expects a scalar value and always behaves the same as assigning to a single array element. The following is a rare example where this distinction is important: >>> a = np.array([None, None], dtype=object) >>> a[0] = np.array(3) >>> a array([array(3), None], dtype=object) >>> a.fill(np.array(3)) >>> a array([array(3), array(3)], dtype=object) Where other forms of assignments will unpack the array being assigned: >>> a[...] 
= np.array(3) >>> a array([3, 3], dtype=object) # numpy.ma.MaskedArray.filled method ma.MaskedArray.filled(_fill_value =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L3866-L3945) Return a copy of self, with masked values filled with a given value. **However** , if there are no masked values to fill, self will be returned instead as an ndarray. Parameters: **fill_value** array_like, optional The value to use for invalid entries. Can be scalar or non-scalar. If non-scalar, the resulting ndarray must be broadcastable over input array. Default is None, in which case, the [`fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value") attribute of the array is used instead. Returns: **filled_array** ndarray A copy of `self` with invalid entries replaced by _fill_value_ (be it the function argument or the attribute of `self`), or `self` itself as an ndarray if there are no invalid entries to be replaced. #### Notes The result is **not** a MaskedArray! #### Examples >>> import numpy as np >>> x = np.ma.array([1,2,3,4,5], mask=[0,0,1,0,1], fill_value=-999) >>> x.filled() array([ 1, 2, -999, 4, -999]) >>> x.filled(fill_value=1000) array([ 1, 2, 1000, 4, 1000]) >>> type(x.filled()) <class 'numpy.ndarray'> Subclassing is preserved. This means that if, e.g., the data part of the masked array is a recarray, `filled` returns a recarray: >>> x = np.array([(-1, 2), (-3, 4)], dtype='i8,i8').view(np.recarray) >>> m = np.ma.array(x, mask=[(True, False), (False, True)]) >>> m.filled() rec.array([(999999, 2), ( -3, 999999)], dtype=[('f0', '<i8'), ('f1', '<i8')]) # numpy.ma.MaskedArray.flatten method ma.MaskedArray.flatten(_order ='C'_) Return a copy of the array collapsed into one dimension. Refer to [`numpy.ndarray.flatten`](numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten") for full documentation. #### Examples >>> import numpy as np >>> a = np.array([[1,2], [3,4]]) >>> a.flatten() array([1, 2, 3, 4]) >>> a.flatten('F') array([1, 3, 2, 4]) # numpy.ma.MaskedArray.get_fill_value method ma.MaskedArray.get_fill_value()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L3801-L3841) The filling value of the masked array is a scalar. 
When setting, None will set to a default based on the data type. #### Examples >>> import numpy as np >>> for dt in [np.int32, np.int64, np.float64, np.complex128]: ... np.ma.array([0, 1], dtype=dt).get_fill_value() ... np.int64(999999) np.int64(999999) np.float64(1e+20) np.complex128(1e+20+0j) >>> x = np.ma.array([0, 1.], fill_value=-np.inf) >>> x.fill_value np.float64(-inf) >>> x.fill_value = np.pi >>> x.fill_value np.float64(3.1415926535897931) Reset to default: >>> x.fill_value = None >>> x.fill_value np.float64(1e+20) # numpy.ma.MaskedArray.harden_mask method ma.MaskedArray.harden_mask()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L3630-L3646) Force the mask to hard, preventing unmasking by assignment. Whether the mask of a masked array is hard or soft is determined by its [`hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") property. `harden_mask` sets [`hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") to `True` (and returns the modified self). See also [`ma.MaskedArray.hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") [`ma.MaskedArray.soften_mask`](numpy.ma.maskedarray.soften_mask#numpy.ma.MaskedArray.soften_mask "numpy.ma.MaskedArray.soften_mask") # numpy.ma.MaskedArray.ids method ma.MaskedArray.ids()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4963-L4988) Return the addresses of the data and mask areas. Parameters: **None** #### Examples >>> import numpy as np >>> x = np.ma.array([1, 2, 3], mask=[0, 1, 1]) >>> x.ids() (166670640, 166659832) # may vary If the array has no mask, the address of `nomask` is returned. This address is typically not close to the data in memory: >>> x = np.ma.array([1, 2, 3]) >>> x.ids() (166691080, 3083169284) # may vary # numpy.ma.MaskedArray.imag property _property_ ma.MaskedArray.imag The imaginary part of the masked array. 
This property is a view on the imaginary part of this `MaskedArray`. See also [`real`](numpy.real#numpy.real "numpy.real") #### Examples >>> import numpy as np >>> x = np.ma.array([1+1.j, -2j, 3.45+1.6j], mask=[False, True, False]) >>> x.imag masked_array(data=[1.0, --, 1.6], mask=[False, True, False], fill_value=1e+20) # numpy.ma.MaskedArray.iscontiguous method ma.MaskedArray.iscontiguous()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4990-L5016) Return a boolean indicating whether the data is contiguous. Parameters: **None** #### Examples >>> import numpy as np >>> x = np.ma.array([1, 2, 3]) >>> x.iscontiguous() True `iscontiguous` returns one of the flags of the masked array: >>> x.flags C_CONTIGUOUS : True F_CONTIGUOUS : True OWNDATA : False WRITEABLE : True ALIGNED : True WRITEBACKIFCOPY : False # numpy.ma.MaskedArray.item method ma.MaskedArray.item(_* args_) Copy an element of an array to a standard Python scalar and return it. Parameters: ***args** Arguments (variable number and type) * none: in this case, the method only works for arrays with one element (`a.size == 1`), which element is copied into a standard Python scalar object and returned. * int_type: this argument is interpreted as a flat index into the array, specifying which element to copy and return. * tuple of int_types: functions as does a single int_type argument, except that the argument is interpreted as an nd-index into the array. Returns: **z** Standard Python scalar object A copy of the specified element of the array as a suitable Python scalar #### Notes When the data type of `a` is longdouble or clongdouble, item() returns a scalar array object because there is no available Python scalar that would not lose information. Void arrays return a buffer object for item(), unless fields are defined, in which case a tuple is returned. `item` is very similar to a[args], except, instead of an array scalar, a standard Python scalar is returned. 
This can be useful for speeding up access to elements of the array and doing arithmetic on elements of the array using Python’s optimized math. #### Examples >>> import numpy as np >>> np.random.seed(123) >>> x = np.random.randint(9, size=(3, 3)) >>> x array([[2, 2, 6], [1, 3, 6], [1, 0, 1]]) >>> x.item(3) 1 >>> x.item(7) 0 >>> x.item((0, 1)) 2 >>> x.item((2, 2)) 1 For an array with object dtype, elements are returned as-is. >>> a = np.array([np.int64(1)], dtype=object) >>> a.item() #return np.int64 np.int64(1) # numpy.ma.MaskedArray.itemsize attribute ma.MaskedArray.itemsize Length of one array element in bytes. #### Examples >>> import numpy as np >>> x = np.array([1,2,3], dtype=np.float64) >>> x.itemsize 8 >>> x = np.array([1,2,3], dtype=np.complex128) >>> x.itemsize 16 # numpy.ma.MaskedArray.max method ma.MaskedArray.max(_axis=None_ , _out=None_ , _fill_value=None_ , _keepdims=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L6004-L6108) Return the maximum along a given axis. Parameters: **axis** None or int or tuple of ints, optional Axis along which to operate. By default, `axis` is None and the flattened input is used. If this is a tuple of ints, the maximum is selected over multiple axes, instead of a single axis or all the axes as before. **out** array_like, optional Alternative output array in which to place the result. Must be of the same shape and buffer length as the expected output. **fill_value** scalar or None, optional Value used to fill in the masked values. If None, use the output of maximum_fill_value(). **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. Returns: **amax** array_like New array holding the result. If `out` was specified, `out` is returned. 
See also [`ma.maximum_fill_value`](numpy.ma.maximum_fill_value#numpy.ma.maximum_fill_value "numpy.ma.maximum_fill_value") Returns the maximum filling value for a given datatype. #### Examples >>> import numpy.ma as ma >>> x = [[-1., 2.5], [4., -2.], [3., 0.]] >>> mask = [[0, 0], [1, 0], [1, 0]] >>> masked_x = ma.masked_array(x, mask) >>> masked_x masked_array( data=[[-1.0, 2.5], [--, -2.0], [--, 0.0]], mask=[[False, False], [ True, False], [ True, False]], fill_value=1e+20) >>> ma.max(masked_x) 2.5 >>> ma.max(masked_x, axis=0) masked_array(data=[-1.0, 2.5], mask=[False, False], fill_value=1e+20) >>> ma.max(masked_x, axis=1, keepdims=True) masked_array( data=[[2.5], [-2.0], [0.0]], mask=[[False], [False], [False]], fill_value=1e+20) >>> mask = [[1, 1], [1, 1], [1, 1]] >>> masked_x = ma.masked_array(x, mask) >>> ma.max(masked_x, axis=1) masked_array(data=[--, --, --], mask=[ True, True, True], fill_value=1e+20, dtype=float64) # numpy.ma.MaskedArray.mean method ma.MaskedArray.mean(_axis=None_ , _dtype=None_ , _out=None_ , _keepdims=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L5416-L5470) Returns the average of the array elements along given axis. Masked entries are ignored, and result elements which are not finite will be masked. Refer to [`numpy.mean`](numpy.mean#numpy.mean "numpy.mean") for full documentation. See also [`numpy.ndarray.mean`](numpy.ndarray.mean#numpy.ndarray.mean "numpy.ndarray.mean") corresponding function for ndarrays [`numpy.mean`](numpy.mean#numpy.mean "numpy.mean") Equivalent function [`numpy.ma.average`](numpy.ma.average#numpy.ma.average "numpy.ma.average") Weighted average. 
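Since the See also above points to `numpy.ma.average`, a quick sketch of how the two relate on made-up data: `mean` ignores masked entries outright, while `ma.average` additionally takes weights (and likewise drops masked entries and their weights):

```python
import numpy as np

a = np.ma.array([1.0, 2.0, 3.0, 4.0], mask=[0, 0, 0, 1])

# mean() ignores the masked entry: (1 + 2 + 3) / 3
assert a.mean() == 2.0

# ma.average with weights; the weight on the masked entry is irrelevant.
w = [1, 1, 2, 5]
avg = np.ma.average(a, weights=w)   # (1*1 + 2*1 + 3*2) / (1 + 1 + 2)
assert avg == 2.25
```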
#### Examples >>> import numpy as np >>> a = np.ma.array([1,2,3], mask=[False, False, True]) >>> a masked_array(data=[1, 2, --], mask=[False, False, True], fill_value=999999) >>> a.mean() 1.5 # numpy.ma.MaskedArray.min method ma.MaskedArray.min(_axis=None_ , _out=None_ , _fill_value=None_ , _keepdims=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L5906-L6002) Return the minimum along a given axis. Parameters: **axis** None or int or tuple of ints, optional Axis along which to operate. By default, `axis` is None and the flattened input is used. If this is a tuple of ints, the minimum is selected over multiple axes, instead of a single axis or all the axes as before. **out** array_like, optional Alternative output array in which to place the result. Must be of the same shape and buffer length as the expected output. **fill_value** scalar or None, optional Value used to fill in the masked values. If None, use the output of `minimum_fill_value`. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. Returns: **amin** array_like New array holding the result. If `out` was specified, `out` is returned. See also [`ma.minimum_fill_value`](numpy.ma.minimum_fill_value#numpy.ma.minimum_fill_value "numpy.ma.minimum_fill_value") Returns the minimum filling value for a given datatype. 
#### Examples >>> import numpy.ma as ma >>> x = [[1., -2., 3.], [0.2, -0.7, 0.1]] >>> mask = [[1, 1, 0], [0, 0, 1]] >>> masked_x = ma.masked_array(x, mask) >>> masked_x masked_array( data=[[--, --, 3.0], [0.2, -0.7, --]], mask=[[ True, True, False], [False, False, True]], fill_value=1e+20) >>> ma.min(masked_x) -0.7 >>> ma.min(masked_x, axis=-1) masked_array(data=[3.0, -0.7], mask=[False, False], fill_value=1e+20) >>> ma.min(masked_x, axis=0, keepdims=True) masked_array(data=[[0.2, -0.7, 3.0]], mask=[[False, False, False]], fill_value=1e+20) >>> mask = [[1, 1, 1,], [1, 1, 1]] >>> masked_x = ma.masked_array(x, mask) >>> ma.min(masked_x, axis=0) masked_array(data=[--, --, --], mask=[ True, True, True], fill_value=1e+20, dtype=float64) # numpy.ma.MaskedArray.nbytes attribute ma.MaskedArray.nbytes Total bytes consumed by the elements of the array. See also [`sys.getsizeof`](https://docs.python.org/3/library/sys.html#sys.getsizeof "\(in Python v3.13\)") Memory consumed by the object itself, without parents in the case of a view. This does include memory consumed by non-element attributes. #### Notes Does not include memory consumed by non-element attributes of the array object. #### Examples >>> import numpy as np >>> x = np.zeros((3,5,2), dtype=np.complex128) >>> x.nbytes 480 >>> np.prod(x.shape) * x.itemsize 480 # numpy.ma.MaskedArray.ndim attribute ma.MaskedArray.ndim Number of array dimensions. #### Examples >>> import numpy as np >>> x = np.array([1, 2, 3]) >>> x.ndim 1 >>> y = np.zeros((2, 3, 4)) >>> y.ndim 3 # numpy.ma.MaskedArray.nonzero method ma.MaskedArray.nonzero()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L5089-L5186) Return the indices of unmasked elements that are not zero. Returns a tuple of arrays, one for each dimension, containing the indices of the non-zero elements in that dimension. 
The corresponding non-zero values can be obtained with: a[a.nonzero()] To group the indices by element, rather than dimension, use instead: np.transpose(a.nonzero()) The result of this is always a 2d array, with a row for each non-zero element. Parameters: **None** Returns: **tuple_of_arrays** tuple Indices of elements that are non-zero. See also [`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") Function operating on ndarrays. [`flatnonzero`](numpy.flatnonzero#numpy.flatnonzero "numpy.flatnonzero") Return indices that are non-zero in the flattened version of the input array. [`numpy.ndarray.nonzero`](numpy.ndarray.nonzero#numpy.ndarray.nonzero "numpy.ndarray.nonzero") Equivalent ndarray method. [`count_nonzero`](numpy.count_nonzero#numpy.count_nonzero "numpy.count_nonzero") Counts the number of non-zero elements in the input array. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> x = ma.array(np.eye(3)) >>> x masked_array( data=[[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]], mask=False, fill_value=1e+20) >>> x.nonzero() (array([0, 1, 2]), array([0, 1, 2])) Masked elements are ignored. >>> x[1, 1] = ma.masked >>> x masked_array( data=[[1.0, 0.0, 0.0], [0.0, --, 0.0], [0.0, 0.0, 1.0]], mask=[[False, False, False], [False, True, False], [False, False, False]], fill_value=1e+20) >>> x.nonzero() (array([0, 2]), array([0, 2])) Indices can also be grouped by element. >>> np.transpose(x.nonzero()) array([[0, 0], [2, 2]]) A common use for `nonzero` is to find the indices of an array, where a condition is True. Given an array `a`, the condition `a > 3` is a boolean array and, since False is interpreted as 0, `ma.nonzero(a > 3)` yields the indices of `a` where the condition is true. 
>>> a = ma.array([[1,2,3],[4,5,6],[7,8,9]]) >>> a > 3 masked_array( data=[[False, False, False], [ True, True, True], [ True, True, True]], mask=False, fill_value=True) >>> ma.nonzero(a > 3) (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) The `nonzero` method of the condition array can also be called. >>> (a > 3).nonzero() (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) # numpy.ma.MaskedArray.prod method ma.MaskedArray.prod(_axis=None_ , _dtype=None_ , _out=None_ , _keepdims=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L5343-L5382) Return the product of the array elements over the given axis. Masked elements are set to 1 internally for computation. Refer to [`numpy.prod`](numpy.prod#numpy.prod "numpy.prod") for full documentation. See also [`numpy.ndarray.prod`](numpy.ndarray.prod#numpy.ndarray.prod "numpy.ndarray.prod") corresponding function for ndarrays [`numpy.prod`](numpy.prod#numpy.prod "numpy.prod") equivalent function #### Notes Arithmetic is modular when using integer types, and no error is raised on overflow. # numpy.ma.MaskedArray.product method ma.MaskedArray.product(_axis=None_ , _dtype=None_ , _out=None_ , _keepdims=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L5343-L5382) Return the product of the array elements over the given axis. Masked elements are set to 1 internally for computation. Refer to [`numpy.prod`](numpy.prod#numpy.prod "numpy.prod") for full documentation. See also [`numpy.ndarray.prod`](numpy.ndarray.prod#numpy.ndarray.prod "numpy.ndarray.prod") corresponding function for ndarrays [`numpy.prod`](numpy.prod#numpy.prod "numpy.prod") equivalent function #### Notes Arithmetic is modular when using integer types, and no error is raised on overflow. 
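Since `prod` and `product` carry no Examples section above, here is a minimal sketch (an editor's addition, not part of the upstream reference) of the behaviour described: masked entries are treated as 1, so only unmasked values contribute to the product.

```python
import numpy as np

# Masked entries are set to 1 internally, so only the
# unmasked values (1 and 3) contribute to the product.
x = np.ma.array([1, 2, 3, 4], mask=[0, 1, 0, 1])
print(x.prod())        # 3

# Along an axis, the mask is honoured per element: column 0
# multiplies 1*3, while in column 1 only the 4 is unmasked.
y = np.ma.array([[1, 2], [3, 4]], mask=[[0, 1], [0, 0]])
print(y.prod(axis=0))  # [3 4]
```

`product` is an alias sharing the same implementation, so the calls above behave identically with either name.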
# numpy.ma.MaskedArray.ptp method ma.MaskedArray.ptp(_axis =None_, _out =None_, _fill_value =None_, _keepdims =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L6110-L6197) Return (maximum - minimum) along the given dimension (i.e. peak-to-peak value). Warning [`ptp`](numpy.ptp#numpy.ptp "numpy.ptp") preserves the data type of the array. This means the return value for an input of signed integers with n bits (e.g. `np.int8`, `np.int16`, etc) is also a signed integer with n bits. In that case, peak-to-peak values greater than `2**(n-1)-1` will be returned as negative values. An example with a work-around is shown below. Parameters: **axis**{None, int}, optional Axis along which to find the peaks. If None (default) the flattened array is used. **out**{None, array_like}, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type will be cast if necessary. **fill_value** scalar or None, optional Value used to fill in the masked values. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. Returns: **ptp** ndarray. A new array holding the result, unless `out` was specified, in which case a reference to `out` is returned. #### Examples >>> import numpy as np >>> x = np.ma.MaskedArray([[4, 9, 2, 10], ... [6, 9, 7, 12]]) >>> x.ptp(axis=1) masked_array(data=[8, 6], mask=False, fill_value=999999) >>> x.ptp(axis=0) masked_array(data=[2, 0, 5, 2], mask=False, fill_value=999999) >>> x.ptp() 10 This example shows that a negative value can be returned when the input is an array of signed integers. >>> y = np.ma.MaskedArray([[1, 127], ... [0, 127], ... [-1, 127], ... 
[-2, 127]], dtype=np.int8) >>> y.ptp(axis=1) masked_array(data=[ 126, 127, -128, -127], mask=False, fill_value=np.int64(999999), dtype=int8) A work-around is to use the `view()` method to view the result as unsigned integers with the same bit width: >>> y.ptp(axis=1).view(np.uint8) masked_array(data=[126, 127, 128, 129], mask=False, fill_value=np.uint64(999999), dtype=uint8) # numpy.ma.MaskedArray.put method ma.MaskedArray.put(_indices_ , _values_ , _mode ='raise'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4877-L4961) Set storage-indexed locations to corresponding values. Sets self._data.flat[n] = values[n] for each n in indices. If `values` is shorter than [`indices`](numpy.indices#numpy.indices "numpy.indices") then it will repeat. If `values` has some masked values, the initial mask is updated in consequence, else the corresponding values are unmasked. Parameters: **indices** 1-D array_like Target indices, interpreted as integers. **values** array_like Values to place in self._data copy at target indices. **mode**{‘raise’, ‘wrap’, ‘clip’}, optional Specifies how out-of-bounds indices will behave. ‘raise’ : raise an error. ‘wrap’ : wrap around. ‘clip’ : clip to the range. #### Notes `values` can be a scalar or length 1 array. 
#### Examples >>> import numpy as np >>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4) >>> x masked_array( data=[[1, --, 3], [--, 5, --], [7, --, 9]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=999999) >>> x.put([0,4,8],[10,20,30]) >>> x masked_array( data=[[10, --, 3], [--, 20, --], [7, --, 30]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=999999) >>> x.put(4,999) >>> x masked_array( data=[[10, --, 3], [--, 999, --], [7, --, 30]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=999999) # numpy.ma.MaskedArray.ravel method ma.MaskedArray.ravel(_order ='C'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4729-L4789) Returns a 1D version of self, as a view. Parameters: **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional The elements of `a` are read using this index order. ‘C’ means to index the elements in C-like order, with the last axis index changing fastest, back to the first axis index changing slowest. ‘F’ means to index the elements in Fortran-like index order, with the first index changing fastest, and the last index changing slowest. Note that the ‘C’ and ‘F’ options take no account of the memory layout of the underlying array, and only refer to the order of axis indexing. ‘A’ means to read the elements in Fortran-like index order if `m` is Fortran _contiguous_ in memory, C-like order otherwise. ‘K’ means to read the elements in the order they occur in memory, except for reversing the data when strides are negative. By default, ‘C’ index order is used. (Masked arrays currently use ‘A’ on the data when ‘K’ is passed.) Returns: MaskedArray Output view is of shape `(self.size,)` (or `(np.ma.product(self.shape),)`). 
#### Examples >>> import numpy as np >>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4) >>> x masked_array( data=[[1, --, 3], [--, 5, --], [7, --, 9]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=999999) >>> x.ravel() masked_array(data=[1, --, 3, --, 5, --, 7, --, 9], mask=[False, True, False, True, False, True, False, True, False], fill_value=999999) # numpy.ma.MaskedArray.real property _property_ ma.MaskedArray.real The real part of the masked array. This property is a view on the real part of this `MaskedArray`. See also [`imag`](numpy.imag#numpy.imag "numpy.imag") #### Examples >>> import numpy as np >>> x = np.ma.array([1+1.j, -2j, 3.45+1.6j], mask=[False, True, False]) >>> x.real masked_array(data=[1.0, --, 3.45], mask=[False, True, False], fill_value=1e+20) # numpy.ma.MaskedArray.repeat method ma.MaskedArray.repeat(_repeats_ , _axis =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L2638-L2648) Repeat elements of an array. Refer to [`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat") for full documentation. See also [`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat") equivalent function # numpy.ma.MaskedArray.reshape method ma.MaskedArray.reshape(_* s_, _** kwargs_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4792-L4857) Give a new shape to the array without changing its data. Returns a masked array containing the same data, but with a new shape. The result is a view on the original array; if this is not possible, a ValueError is raised. Parameters: **shape** int or tuple of ints The new shape should be compatible with the original shape. If an integer is supplied, then the result will be a 1-D array of that length. **order**{‘C’, ‘F’}, optional Determines whether the array data should be viewed as in C (row-major) or FORTRAN (column-major) order. Returns: **reshaped_array** array A new view on the array. 
See also [`reshape`](numpy.reshape#numpy.reshape "numpy.reshape") Equivalent function in the masked array module. [`numpy.ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") Equivalent method on ndarray object. [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") Equivalent function in the NumPy module. #### Notes The reshaping operation cannot guarantee that a copy will not be made, to modify the shape in place, use `a.shape = s` #### Examples >>> import numpy as np >>> x = np.ma.array([[1,2],[3,4]], mask=[1,0,0,1]) >>> x masked_array( data=[[--, 2], [3, --]], mask=[[ True, False], [False, True]], fill_value=999999) >>> x = x.reshape((4,1)) >>> x masked_array( data=[[--], [2], [3], [--]], mask=[[ True], [False], [False], [ True]], fill_value=999999) # numpy.ma.MaskedArray.resize method ma.MaskedArray.resize(_newshape_ , _refcheck =True_, _order =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L4859-L4875) Warning This method does nothing, except raise a ValueError exception. A masked array does not own its data and therefore cannot safely be resized in place. Use the [`numpy.ma.resize`](numpy.ma.resize#numpy.ma.resize "numpy.ma.resize") function instead. This method is difficult to implement safely and may be deprecated in future releases of NumPy. # numpy.ma.MaskedArray.round method ma.MaskedArray.round(_decimals =0_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L5610-L5645) Return each element rounded to the given number of decimals. Refer to [`numpy.around`](numpy.around#numpy.around "numpy.around") for full documentation. See also [`numpy.ndarray.round`](numpy.ndarray.round#numpy.ndarray.round "numpy.ndarray.round") corresponding function for ndarrays [`numpy.around`](numpy.around#numpy.around "numpy.around") equivalent function #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> x = ma.array([1.35, 2.5, 1.5, 1.75, 2.25, 2.75], ... 
mask=[0, 0, 0, 1, 0, 0]) >>> ma.round(x) masked_array(data=[1.0, 2.0, 2.0, --, 2.0, 3.0], mask=[False, False, False, True, False, False], fill_value=1e+20) # numpy.ma.MaskedArray.searchsorted method ma.MaskedArray.searchsorted(_v_ , _side ='left'_, _sorter =None_) Find indices where elements of v should be inserted in a to maintain order. For full documentation, see [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") See also [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") equivalent function # numpy.ma.MaskedArray.set_fill_value method ma.MaskedArray.set_fill_value(_value =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L3843-L3860) # numpy.ma.MaskedArray.shape property _property_ ma.MaskedArray.shape Tuple of array dimensions. The shape property is usually used to get the current shape of an array, but may also be used to reshape the array in-place by assigning a tuple of array dimensions to it. As with [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape"), one of the new shape dimensions can be -1, in which case its value is inferred from the size of the array and the remaining dimensions. Reshaping an array in-place will fail if a copy is required. Warning Setting `arr.shape` is discouraged and may be deprecated in the future. Using [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") is the preferred approach. See also [`numpy.shape`](numpy.shape#numpy.shape "numpy.shape") Equivalent getter function. [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") Function similar to setting `shape`. [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") Method similar to setting `shape`. 
#### Examples >>> import numpy as np >>> x = np.array([1, 2, 3, 4]) >>> x.shape (4,) >>> y = np.zeros((2, 3, 4)) >>> y.shape (2, 3, 4) >>> y.shape = (3, 8) >>> y array([[ 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0.]]) >>> y.shape = (3, 6) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: total size of new array must be unchanged >>> np.zeros((4,2))[::2].shape = (-1,) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: Incompatible shape for in-place modification. Use `.reshape()` to make a copy with the desired shape. # numpy.ma.MaskedArray.shrink_mask method ma.MaskedArray.shrink_mask()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L3734-L3764) Reduce a mask to nomask when possible. Parameters: **None** Returns: None #### Examples >>> import numpy as np >>> x = np.ma.array([[1,2 ], [3, 4]], mask=[0]*4) >>> x.mask array([[False, False], [False, False]]) >>> x.shrink_mask() masked_array( data=[[1, 2], [3, 4]], mask=False, fill_value=999999) >>> x.mask False # numpy.ma.MaskedArray.size attribute ma.MaskedArray.size Number of elements in the array. Equal to `np.prod(a.shape)`, i.e., the product of the array’s dimensions. #### Notes `a.size` returns a standard arbitrary precision Python integer. This may not be the case with other methods of obtaining the same value (like the suggested `np.prod(a.shape)`, which returns an instance of `np.int_`), and may be relevant if the value is used further in calculations that may overflow a fixed size integer type. #### Examples >>> import numpy as np >>> x = np.zeros((3, 5, 2), dtype=np.complex128) >>> x.size 30 >>> np.prod(x.shape) 30 # numpy.ma.MaskedArray.soften_mask method ma.MaskedArray.soften_mask()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L3648-L3664) Force the mask to soft (default), allowing unmasking by assignment. 
Whether the mask of a masked array is hard or soft is determined by its [`hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") property. `soften_mask` sets [`hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") to `False` (and returns the modified self). See also [`ma.MaskedArray.hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") [`ma.MaskedArray.harden_mask`](numpy.ma.maskedarray.harden_mask#numpy.ma.MaskedArray.harden_mask "numpy.ma.MaskedArray.harden_mask") # numpy.ma.MaskedArray.sort method ma.MaskedArray.sort(_axis =-1_, _kind =None_, _order =None_, _endwith =True_, _fill_value =None_, _*_ , _stable =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L5815-L5904) Sort the array, in-place Parameters: **a** array_like Array to be sorted. **axis** int, optional Axis along which to sort. If None, the array is flattened before sorting. The default is -1, which sorts along the last axis. **kind**{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional The sorting algorithm used. **order** list, optional When `a` is a structured array, this argument specifies which fields to compare first, second, and so on. This list does not need to include all of the fields. **endwith**{True, False}, optional Whether missing values (if any) should be treated as the largest values (True) or the smallest values (False) When the array contains unmasked values sorting at the same extremes of the datatype, the ordering of these values and the masked values is undefined. **fill_value** scalar or None, optional Value used internally for the masked values. If `fill_value` is not None, it supersedes `endwith`. **stable** bool, optional Only for compatibility with `np.sort`. Ignored. Returns: **sorted_array** ndarray Array of the same type and shape as `a`. 
See also [`numpy.ndarray.sort`](numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort") Method to sort an array in-place. [`argsort`](numpy.argsort#numpy.argsort "numpy.argsort") Indirect sort. [`lexsort`](numpy.lexsort#numpy.lexsort "numpy.lexsort") Indirect stable sort on multiple keys. [`searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") Find elements in a sorted array. #### Notes See `sort` for notes on the different sorting algorithms. #### Examples >>> import numpy as np >>> a = np.ma.array([1, 2, 5, 4, 3],mask=[0, 1, 0, 1, 0]) >>> # Default >>> a.sort() >>> a masked_array(data=[1, 3, 5, --, --], mask=[False, False, False, True, True], fill_value=999999) >>> a = np.ma.array([1, 2, 5, 4, 3],mask=[0, 1, 0, 1, 0]) >>> # Put missing values in the front >>> a.sort(endwith=False) >>> a masked_array(data=[--, --, 1, 3, 5], mask=[ True, True, False, False, False], fill_value=999999) >>> a = np.ma.array([1, 2, 5, 4, 3],mask=[0, 1, 0, 1, 0]) >>> # fill_value takes over endwith >>> a.sort(endwith=False, fill_value=3) >>> a masked_array(data=[1, --, --, 3, 5], mask=[False, True, True, False, False], fill_value=999999) # numpy.ma.MaskedArray.squeeze method ma.MaskedArray.squeeze(_axis =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L2638-L2648) Remove axes of length one from `a`. Refer to [`numpy.squeeze`](numpy.squeeze#numpy.squeeze "numpy.squeeze") for full documentation. See also [`numpy.squeeze`](numpy.squeeze#numpy.squeeze "numpy.squeeze") equivalent function # numpy.ma.MaskedArray.std method ma.MaskedArray.std(_axis=None_ , _dtype=None_ , _out=None_ , _ddof=0_ , _keepdims=<no value>_, _mean=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L5586-L5608) Returns the standard deviation of the array elements along given axis. Masked entries are ignored. Refer to [`numpy.std`](numpy.std#numpy.std "numpy.std") for full documentation. 
See also [`numpy.ndarray.std`](numpy.ndarray.std#numpy.ndarray.std "numpy.ndarray.std") corresponding function for ndarrays [`numpy.std`](numpy.std#numpy.std "numpy.std") Equivalent function # numpy.ma.MaskedArray.strides attribute ma.MaskedArray.strides Tuple of bytes to step in each dimension when traversing an array. The byte offset of element `(i[0], i[1], ..., i[n])` in an array `a` is: offset = sum(np.array(i) * a.strides) A more detailed explanation of strides can be found in [The N-dimensional array (ndarray)](../arrays.ndarray#arrays-ndarray). Warning Setting `arr.strides` is discouraged and may be deprecated in the future. [`numpy.lib.stride_tricks.as_strided`](numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided") should be preferred to create a new view of the same data in a safer way. See also [`numpy.lib.stride_tricks.as_strided`](numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided") #### Notes Imagine an array of 32-bit integers (each 4 bytes): x = np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]], dtype=np.int32) This array is stored in memory as 40 bytes, one after the other (known as a contiguous block of memory). The strides of an array tell us how many bytes we have to skip in memory to move to the next position along a certain axis. For example, we have to skip 4 bytes (1 value) to move to the next column, but 20 bytes (5 values) to get to the same position in the next row. As such, the strides for the array `x` will be `(20, 4)`. 
#### Examples >>> import numpy as np >>> y = np.reshape(np.arange(2*3*4), (2,3,4)) >>> y array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]) >>> y.strides (48, 16, 4) >>> y[1,1,1] 17 >>> offset=sum(y.strides * np.array((1,1,1))) >>> offset/y.itemsize 17 >>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0) >>> x.strides (32, 4, 224, 1344) >>> i = np.array([3,5,2,2]) >>> offset = sum(i * x.strides) >>> x[3,5,2,2] 813 >>> offset / x.itemsize 813 # numpy.ma.MaskedArray.sum method ma.MaskedArray.sum(_axis=None_ , _dtype=None_ , _out=None_ , _keepdims=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L5238-L5299) Return the sum of the array elements over the given axis. Masked elements are set to 0 internally. Refer to [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") for full documentation. See also [`numpy.ndarray.sum`](numpy.ndarray.sum#numpy.ndarray.sum "numpy.ndarray.sum") corresponding function for ndarrays [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") equivalent function #### Examples >>> import numpy as np >>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4) >>> x masked_array( data=[[1, --, 3], [--, 5, --], [7, --, 9]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=999999) >>> x.sum() 25 >>> x.sum(axis=1) masked_array(data=[4, 5, 16], mask=[False, False, False], fill_value=999999) >>> x.sum(axis=0) masked_array(data=[8, 5, 12], mask=[False, False, False], fill_value=999999) >>> print(type(x.sum(axis=0, dtype=np.int64)[0])) <class 'numpy.int64'> # numpy.ma.MaskedArray.swapaxes method ma.MaskedArray.swapaxes(_axis1_ , _axis2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L2638-L2648) Return a view of the array with `axis1` and `axis2` interchanged. Refer to [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") for full documentation. 
See also [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") equivalent function # numpy.ma.MaskedArray.T property _property_ ma.MaskedArray.T View of the transposed array. Same as `self.transpose()`. See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose") #### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> a.T array([[1, 3], [2, 4]]) >>> a = np.array([1, 2, 3, 4]) >>> a array([1, 2, 3, 4]) >>> a.T array([1, 2, 3, 4]) # numpy.ma.MaskedArray.take method ma.MaskedArray.take(_indices_ , _axis =None_, _out =None_, _mode ='raise'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L6211-L6300) Take elements from a masked array along an axis. This function does the same thing as “fancy” indexing (indexing arrays using arrays) for masked arrays. It can be easier to use if you need elements along a given axis. Parameters: **a** masked_array The source masked array. **indices** array_like The indices of the values to extract. Also allow scalars for indices. **axis** int, optional The axis over which to select values. By default, the flattened input array is used. **out** MaskedArray, optional If provided, the result will be placed in this array. It should be of the appropriate shape and dtype. Note that `out` is always buffered if `mode=’raise’`; use other modes for better performance. **mode**{‘raise’, ‘wrap’, ‘clip’}, optional Specifies how out-of-bounds indices will behave. * ‘raise’ – raise an error (default) * ‘wrap’ – wrap around * ‘clip’ – clip to the range ‘clip’ mode means that all indices that are too large are replaced by the index that addresses the last element along that axis. Note that this disables indexing with negative numbers. Returns: **out** MaskedArray The returned array has the same type as `a`. See also [`numpy.take`](numpy.take#numpy.take "numpy.take") Equivalent function for ndarrays. 
[`compress`](numpy.compress#numpy.compress "numpy.compress") Take elements using a boolean mask. [`take_along_axis`](numpy.take_along_axis#numpy.take_along_axis "numpy.take_along_axis") Take elements by matching the array and the index arrays. #### Notes This function behaves similarly to [`numpy.take`](numpy.take#numpy.take "numpy.take"), but it handles masked values. The mask is retained in the output array, and masked values in the input array remain masked in the output. #### Examples >>> import numpy as np >>> a = np.ma.array([4, 3, 5, 7, 6, 8], mask=[0, 0, 1, 0, 1, 0]) >>> indices = [0, 1, 4] >>> np.ma.take(a, indices) masked_array(data=[4, 3, --], mask=[False, False, True], fill_value=999999) When [`indices`](numpy.indices#numpy.indices "numpy.indices") is not one- dimensional, the output also has these dimensions: >>> np.ma.take(a, [[0, 1], [2, 3]]) masked_array(data=[[4, 3], [--, 7]], mask=[[False, False], [ True, False]], fill_value=999999) # numpy.ma.MaskedArray.tobytes method ma.MaskedArray.tobytes(_fill_value =None_, _order ='C'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L6414-L6451) Return the array data as a string containing the raw bytes in the array. The array is filled with a fill value before the string conversion. Parameters: **fill_value** scalar, optional Value used to fill in the masked values. Default is None, in which case `MaskedArray.fill_value` is used. **order**{‘C’,’F’,’A’}, optional Order of the data item in the copy. Default is ‘C’. * ‘C’ – C order (row major). * ‘F’ – Fortran order (column major). * ‘A’ – Any, current order of array. * None – Same as ‘A’. 
See also [`numpy.ndarray.tobytes`](numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes") [`tolist`](numpy.ma.maskedarray.tolist#numpy.ma.MaskedArray.tolist "numpy.ma.MaskedArray.tolist"), [`tofile`](numpy.ma.maskedarray.tofile#numpy.ma.MaskedArray.tofile "numpy.ma.MaskedArray.tofile") #### Notes As for [`ndarray.tobytes`](numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes"), information about the shape, dtype, etc., but also about [`fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value"), will be lost. #### Examples >>> import numpy as np >>> x = np.ma.array(np.array([[1, 2], [3, 4]]), mask=[[0, 1], [1, 0]]) >>> x.tobytes() b'\x01\x00\x00\x00\x00\x00\x00\x00?B\x0f\x00\x00\x00\x00\x00?B\x0f\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00' # numpy.ma.MaskedArray.tofile method ma.MaskedArray.tofile(_fid_ , _sep =''_, _format ='%s'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L6453-L6466) Save a masked array to a file in binary format. Warning This function is not implemented yet. Raises: NotImplementedError When `tofile` is called. # numpy.ma.MaskedArray.toflex method ma.MaskedArray.toflex()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L6468-L6526) Transforms a masked array into a flexible-type array. The flexible type array that is returned will have two fields: * the `_data` field stores the `_data` part of the array. * the `_mask` field stores the `_mask` part of the array. Parameters: **None** Returns: **record** ndarray A new flexible-type [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") with two fields: the first element containing a value, the second element containing the corresponding mask boolean. The returned record shape matches self.shape. 
#### Notes A side-effect of transforming a masked array into a flexible [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") is that meta information (`fill_value`, …) will be lost. #### Examples >>> import numpy as np >>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4) >>> x masked_array( data=[[1, --, 3], [--, 5, --], [7, --, 9]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=999999) >>> x.toflex() array([[(1, False), (2, True), (3, False)], [(4, True), (5, False), (6, True)], [(7, False), (8, True), (9, False)]], dtype=[('_data', '<i8'), ('_mask', '?')]) # numpy.ma.MaskedArray.tolist method ma.MaskedArray.tolist(_fill_value =None_) Return the data portion of the masked array as a hierarchical Python list. Data items are converted to the nearest compatible Python type. Masked values are converted to `fill_value`. If `fill_value` is None, the corresponding entries in the output list will be `None`. Parameters: **fill_value** scalar, optional The value to use for invalid entries. Default is None. Returns: **result** list The Python list representation of the masked array. #### Examples >>> import numpy as np >>> x = np.ma.array([[1,2,3], [4,5,6], [7,8,9]], mask=[0] + [1,0]*4) >>> x.tolist() [[1, None, 3], [None, 5, None], [7, None, 9]] >>> x.tolist(-999) [[1, -999, 3], [-999, 5, -999], [7, -999, 9]] # numpy.ma.MaskedArray.torecords method ma.MaskedArray.torecords()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L6468-L6526) Transforms a masked array into a flexible-type array. The flexible type array that is returned will have two fields: * the `_data` field stores the `_data` part of the array. * the `_mask` field stores the `_mask` part of the array. Parameters: **None** Returns: **record** ndarray A new flexible-type [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") with two fields: the first element containing a value, the second element containing the corresponding mask boolean. The returned record shape matches self.shape. #### Notes A side-effect of transforming a masked array into a flexible [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") is that meta information (`fill_value`, …) will be lost. 
#### Examples

>>> import numpy as np >>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4) >>> x masked_array( data=[[1, --, 3], [--, 5, --], [7, --, 9]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=999999) >>> x.toflex() array([[(1, False), (2, True), (3, False)], [(4, True), (5, False), (6, True)], [(7, False), (8, True), (9, False)]], dtype=[('_data', '<i8'), ('_mask', '?')])

# numpy.ma.MaskedArray.transpose

method ma.MaskedArray.transpose(_*axes_) Returns a view of the array with axes transposed. Refer to [`numpy.transpose`](numpy.transpose#numpy.transpose "numpy.transpose") for full documentation.

#### Examples

>>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> a.transpose() array([[1, 3], [2, 4]]) >>> a.transpose((1, 0)) array([[1, 3], [2, 4]]) >>> a.transpose(1, 0) array([[1, 3], [2, 4]]) >>> a = np.array([1, 2, 3, 4]) >>> a array([1, 2, 3, 4]) >>> a.transpose() array([1, 2, 3, 4])

# numpy.ma.MaskedArray.unshare_mask

method ma.MaskedArray.unshare_mask()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L3711-L3727) Copy the mask and set the [`sharedmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.sharedmask "numpy.ma.MaskedArray.sharedmask") flag to `False`. Whether the mask is shared between masked arrays can be seen from the [`sharedmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.sharedmask "numpy.ma.MaskedArray.sharedmask") property. `unshare_mask` ensures the mask is not shared. A copy of the mask is only made if it was shared. See also [`sharedmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.sharedmask "numpy.ma.MaskedArray.sharedmask")

# numpy.ma.MaskedArray.var

method ma.MaskedArray.var(_axis=None_ , _dtype=None_ , _out=None_ , _ddof=0_ , _keepdims=<no value>_ , _mean=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L5510-L5583) Compute the variance along the specified axis. Returns the variance of the array elements, a measure of the spread of a distribution. The variance is computed for the flattened array by default, otherwise over the specified axis. Parameters: **a** array_like Array containing numbers whose variance is desired.
If `a` is not an array, a conversion is attempted. **axis** None or int or tuple of ints, optional Axis or axes along which the variance is computed. The default is to compute the variance of the flattened array. If this is a tuple of ints, a variance is performed over multiple axes, instead of a single axis or all the axes as before. **dtype** data-type, optional Type to use in computing the variance. For arrays of integer type the default is [`float64`](../arrays.scalars#numpy.float64 "numpy.float64"); for arrays of float types it is the same as the array type. **out** ndarray, optional Alternate output array in which to place the result. It must have the same shape as the expected output, but the type is cast if necessary. **ddof**{int, float}, optional “Delta Degrees of Freedom”: the divisor used in the calculation is `N - ddof`, where `N` represents the number of elements. By default `ddof` is zero. See notes for details about use of `ddof`. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the [`var`](numpy.var#numpy.var "numpy.var") method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non- default value will be. If the sub-class’ method does not implement `keepdims` any exceptions will be raised. **where** array_like of bool, optional Elements to include in the variance. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.20.0. **mean** array like, optional Provide the mean to prevent its recalculation. The mean should have a shape as if it was calculated with `keepdims=True`. The axis for the calculation of the mean should be the same as used in the call to this var function. New in version 2.0.0. 
**correction**{int, float}, optional Array API compatible name for the `ddof` parameter. Only one of them can be provided at the same time. New in version 2.0.0. Returns: **variance** ndarray, see dtype parameter above If `out=None`, returns a new array containing the variance; otherwise, a reference to the output array is returned. See also [`std`](numpy.std#numpy.std "numpy.std"), [`mean`](numpy.mean#numpy.mean "numpy.mean"), [`nanmean`](numpy.nanmean#numpy.nanmean "numpy.nanmean"), [`nanstd`](numpy.nanstd#numpy.nanstd "numpy.nanstd"), [`nanvar`](numpy.nanvar#numpy.nanvar "numpy.nanvar") [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) #### Notes There are several common variants of the array variance calculation. Assuming the input `a` is a one-dimensional NumPy array and `mean` is either provided as an argument or computed as `a.mean()`, NumPy computes the variance of an array as: N = len(a) d2 = abs(a - mean)**2 # abs is for complex `a` var = d2.sum() / (N - ddof) # note use of `ddof` Different values of the argument `ddof` are useful in different contexts. NumPy’s default `ddof=0` corresponds with the expression: \\[\frac{\sum_i{|a_i - \bar{a}|^2 }}{N}\\] which is sometimes called the “population variance” in the field of statistics because it applies the definition of variance to `a` as if `a` were a complete population of possible observations. Many other libraries define the variance of an array differently, e.g.: \\[\frac{\sum_i{|a_i - \bar{a}|^2}}{N - 1}\\] In statistics, the resulting quantity is sometimes called the “sample variance” because if `a` is a random sample from a larger population, this calculation provides an unbiased estimate of the variance of the population. The use of \\(N-1\\) in the denominator is often called “Bessel’s correction” because it corrects for bias (toward lower values) in the variance estimate introduced when the sample mean of `a` is used in place of the true mean of the population. 
For this quantity, use `ddof=1`. Note that for complex numbers, the absolute value is taken before squaring, so that the result is always real and nonnegative. For floating-point input, the variance is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for [`float32`](../arrays.scalars#numpy.float32 "numpy.float32") (see example below). Specifying a higher-accuracy accumulator using the `dtype` keyword can alleviate this issue. #### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> np.var(a) 1.25 >>> np.var(a, axis=0) array([1., 1.]) >>> np.var(a, axis=1) array([0.25, 0.25]) In single precision, var() can be inaccurate: >>> a = np.zeros((2, 512*512), dtype=np.float32) >>> a[0, :] = 1.0 >>> a[1, :] = 0.1 >>> np.var(a) np.float32(0.20250003) Computing the variance in float64 is more accurate: >>> np.var(a, dtype=np.float64) 0.20249999932944759 # may vary >>> ((1-0.55)**2 + (0.1-0.55)**2)/2 0.2025 Specifying a where argument: >>> a = np.array([[14, 8, 11, 10], [7, 9, 10, 11], [10, 15, 5, 10]]) >>> np.var(a) 6.833333333333333 # may vary >>> np.var(a, where=[[True], [True], [False]]) 4.0 Using the mean keyword to save computation time: >>> import numpy as np >>> from timeit import timeit >>> >>> a = np.array([[14, 8, 11, 10], [7, 9, 10, 11], [10, 15, 5, 10]]) >>> mean = np.mean(a, axis=1, keepdims=True) >>> >>> g = globals() >>> n = 10000 >>> t1 = timeit("var = np.var(a, axis=1, mean=mean)", globals=g, number=n) >>> t2 = timeit("var = np.var(a, axis=1)", globals=g, number=n) >>> print(f'Percentage execution time saved {100*(t2-t1)/t2:.0f}%') Percentage execution time saved 32% # numpy.ma.MaskedArray.view method ma.MaskedArray.view(_dtype =None_, _type =None_, _fill_value =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L3203-L3291) Return a view of the MaskedArray data. 
Parameters: **dtype** data-type or ndarray sub-class, optional Data-type descriptor of the returned view, e.g., float32 or int16. The default, None, results in the view having the same data-type as `a`. As with `ndarray.view`, dtype can also be specified as an ndarray sub-class, which then specifies the type of the returned object (this is equivalent to setting the `type` parameter). **type** Python type, optional Type of the returned view, either ndarray or a subclass. The default None results in type preservation. **fill_value** scalar, optional The value to use for invalid entries (None by default). If None, then this argument is inferred from the passed [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), or in its absence the original array, as discussed in the notes below. See also [`numpy.ndarray.view`](numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") Equivalent method on ndarray object. #### Notes `a.view()` is used two different ways: `a.view(some_dtype)` or `a.view(dtype=some_dtype)` constructs a view of the array’s memory with a different data-type. This can cause a reinterpretation of the bytes of memory. `a.view(ndarray_subclass)` or `a.view(type=ndarray_subclass)` just returns an instance of `ndarray_subclass` that looks at the same array (same shape, dtype, etc.) This does not cause a reinterpretation of the memory. If [`fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value") is not specified, but [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is specified (and is not an ndarray sub-class), the [`fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value") of the MaskedArray will be reset. 
If neither [`fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value") nor [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") are specified (or if [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is an ndarray sub-class), then the fill value is preserved. Finally, if [`fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value") is specified, but [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is not, the fill value is set to the specified value. For `a.view(some_dtype)`, if `some_dtype` has a different number of bytes per entry than the previous dtype (for example, converting a regular array to a structured array), then the behavior of the view cannot be predicted just from the superficial appearance of `a` (shown by `print(a)`). It also depends on exactly how `a` is stored in memory. Therefore if `a` is C-ordered versus fortran-ordered, versus defined as a slice or transpose, etc., the view may give different results.

# numpy.ma.MaskType

numpy.ma.MaskType[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) alias of [`bool`](../arrays.scalars#numpy.bool "numpy.bool")

# numpy.ma.max

ma.max(_obj_ , _axis=None_ , _out=None_ , _fill_value=None_ , _keepdims=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7060-L7069) Return the maximum along a given axis. Parameters: **axis** None or int or tuple of ints, optional Axis along which to operate. By default, `axis` is None and the flattened input is used. If this is a tuple of ints, the maximum is selected over multiple axes, instead of a single axis or all the axes as before. **out** array_like, optional Alternative output array in which to place the result. Must be of the same shape and buffer length as the expected output. **fill_value** scalar or None, optional Value used to fill in the masked values. If None, use the output of maximum_fill_value().
**keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. Returns: **amax** array_like New array holding the result. If `out` was specified, `out` is returned. See also [`ma.maximum_fill_value`](numpy.ma.maximum_fill_value#numpy.ma.maximum_fill_value "numpy.ma.maximum_fill_value") Returns the maximum filling value for a given datatype.

#### Examples

>>> import numpy.ma as ma >>> x = [[-1., 2.5], [4., -2.], [3., 0.]] >>> mask = [[0, 0], [1, 0], [1, 0]] >>> masked_x = ma.masked_array(x, mask) >>> masked_x masked_array( data=[[-1.0, 2.5], [--, -2.0], [--, 0.0]], mask=[[False, False], [ True, False], [ True, False]], fill_value=1e+20) >>> ma.max(masked_x) 2.5 >>> ma.max(masked_x, axis=0) masked_array(data=[-1.0, 2.5], mask=[False, False], fill_value=1e+20) >>> ma.max(masked_x, axis=1, keepdims=True) masked_array( data=[[2.5], [-2.0], [0.0]], mask=[[False], [False], [False]], fill_value=1e+20) >>> mask = [[1, 1], [1, 1], [1, 1]] >>> masked_x = ma.masked_array(x, mask) >>> ma.max(masked_x, axis=1) masked_array(data=[--, --, --], mask=[ True, True, True], fill_value=1e+20, dtype=float64)

# numpy.ma.maximum_fill_value

ma.maximum_fill_value(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L374-L423) Return the minimum value that can be represented by the dtype of an object. This function is useful for calculating a fill value suitable for taking the maximum of an array with a given dtype. Parameters: **obj** ndarray, dtype or scalar An object that can be queried for its numeric type. Returns: **val** scalar The minimum representable value. Raises: TypeError If `obj` isn’t a suitable numeric type. See also [`minimum_fill_value`](numpy.ma.minimum_fill_value#numpy.ma.minimum_fill_value "numpy.ma.minimum_fill_value") The inverse function.
[`set_fill_value`](numpy.ma.set_fill_value#numpy.ma.set_fill_value "numpy.ma.set_fill_value") Set the filling value of a masked array. [`MaskedArray.fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value") Return current fill value.

#### Examples

>>> import numpy as np >>> import numpy.ma as ma >>> a = np.int8() >>> ma.maximum_fill_value(a) -128 >>> a = np.int32() >>> ma.maximum_fill_value(a) -2147483648 An array of numeric data can also be passed. >>> a = np.array([1, 2, 3], dtype=np.int8) >>> ma.maximum_fill_value(a) -128 >>> a = np.array([1, 2, 3], dtype=np.float32) >>> ma.maximum_fill_value(a) -inf

# numpy.ma.mean

ma.mean(_self_ , _axis=None_ , _dtype=None_ , _out=None_ , _keepdims=<no value>_) _= <numpy.ma.core._frommethod object>_ Returns the average of the array elements along given axis. Masked entries are ignored, and result elements which are not finite will be masked. Refer to [`numpy.mean`](numpy.mean#numpy.mean "numpy.mean") for full documentation. See also [`numpy.ndarray.mean`](numpy.ndarray.mean#numpy.ndarray.mean "numpy.ndarray.mean") corresponding function for ndarrays [`numpy.mean`](numpy.mean#numpy.mean "numpy.mean") Equivalent function [`numpy.ma.average`](numpy.ma.average#numpy.ma.average "numpy.ma.average") Weighted average.

#### Examples

>>> import numpy as np >>> a = np.ma.array([1,2,3], mask=[False, False, True]) >>> a masked_array(data=[1, 2, --], mask=[False, False, True], fill_value=999999) >>> a.mean() 1.5

# numpy.ma.median

ma.median(_a_ , _axis =None_, _out =None_, _overwrite_input =False_, _keepdims =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L716-L791) Compute the median along the specified axis. Returns the median of the array elements. Parameters: **a** array_like Input array or object that can be converted to an array. **axis** int, optional Axis along which the medians are computed. The default (None) is to compute the median along a flattened version of the array.
**out** ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type will be cast if necessary. **overwrite_input** bool, optional If True, then allow use of memory of input array (a) for calculations. The input array will be modified by the call to median. This will save memory when you do not need to preserve the contents of the input array. Treat the input as undefined, but it will probably be fully or partially sorted. Default is False. Note that, if `overwrite_input` is True, and the input is not already an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), an error will be raised. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. Returns: **median** ndarray A new array holding the result is returned unless out is specified, in which case a reference to out is returned. Return data-type is [`float64`](../arrays.scalars#numpy.float64 "numpy.float64") for integers and floats smaller than [`float64`](../arrays.scalars#numpy.float64 "numpy.float64"), or the input data-type, otherwise. See also [`mean`](numpy.mean#numpy.mean "numpy.mean") #### Notes Given a vector `V` with `N` non masked values, the median of `V` is the middle value of a sorted copy of `V` (`Vs`) - i.e. `Vs[(N-1)/2]`, when `N` is odd, or `{Vs[N/2 - 1] + Vs[N/2]}/2` when `N` is even. 
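The rule in the notes can be checked directly against a sorted copy of the unmasked values; a small sketch (the array is illustrative):

```python
import numpy as np

x = np.ma.array([9, 1, 7, 3, 5, 2], mask=[0, 0, 1, 0, 0, 0])
vs = np.sort(x.compressed())   # sorted copy of the N unmasked values
n = vs.size                    # N = 5 here, so the median is Vs[(N-1)/2]
if n % 2:
    expected = vs[(n - 1) // 2]
else:
    expected = (vs[n // 2 - 1] + vs[n // 2]) / 2
assert np.ma.median(x) == expected
```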
#### Examples

>>> import numpy as np >>> x = np.ma.array(np.arange(8), mask=[0]*4 + [1]*4) >>> np.ma.median(x) 1.5 >>> x = np.ma.array(np.arange(10).reshape(2, 5), mask=[0]*6 + [1]*4) >>> np.ma.median(x) 2.5 >>> np.ma.median(x, axis=-1, overwrite_input=True) masked_array(data=[2.0, 5.0], mask=[False, False], fill_value=1e+20)

# numpy.ma.min

ma.min(_obj_ , _axis=None_ , _out=None_ , _fill_value=None_ , _keepdims=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7048-L7057) Return the minimum along a given axis. Parameters: **axis** None or int or tuple of ints, optional Axis along which to operate. By default, `axis` is None and the flattened input is used. If this is a tuple of ints, the minimum is selected over multiple axes, instead of a single axis or all the axes as before. **out** array_like, optional Alternative output array in which to place the result. Must be of the same shape and buffer length as the expected output. **fill_value** scalar or None, optional Value used to fill in the masked values. If None, use the output of [`minimum_fill_value`](numpy.ma.minimum_fill_value#numpy.ma.minimum_fill_value "numpy.ma.minimum_fill_value"). **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. Returns: **amin** array_like New array holding the result. If `out` was specified, `out` is returned. See also [`ma.minimum_fill_value`](numpy.ma.minimum_fill_value#numpy.ma.minimum_fill_value "numpy.ma.minimum_fill_value") Returns the minimum filling value for a given datatype.
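Conceptually, `ma.min` fills the masked entries with `minimum_fill_value` (the largest representable value for the dtype) so they can never win the comparison. A sketch of that equivalence, assuming not every element is masked:

```python
import numpy as np

x = np.ma.array([4.0, -2.0, 7.0], mask=[0, 1, 0])
fill = np.ma.minimum_fill_value(x)   # inf for floating-point dtypes
lo = np.ma.min(x)                    # the masked -2.0 is ignored
same = x.filled(fill).min()          # masked slots hold inf, so cannot be the minimum
```

Here `lo` and `same` are both `4.0`; the masked `-2.0` never participates.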
#### Examples

>>> import numpy.ma as ma >>> x = [[1., -2., 3.], [0.2, -0.7, 0.1]] >>> mask = [[1, 1, 0], [0, 0, 1]] >>> masked_x = ma.masked_array(x, mask) >>> masked_x masked_array( data=[[--, --, 3.0], [0.2, -0.7, --]], mask=[[ True, True, False], [False, False, True]], fill_value=1e+20) >>> ma.min(masked_x) -0.7 >>> ma.min(masked_x, axis=-1) masked_array(data=[3.0, -0.7], mask=[False, False], fill_value=1e+20) >>> ma.min(masked_x, axis=0, keepdims=True) masked_array(data=[[0.2, -0.7, 3.0]], mask=[[False, False, False]], fill_value=1e+20) >>> mask = [[1, 1, 1,], [1, 1, 1]] >>> masked_x = ma.masked_array(x, mask) >>> ma.min(masked_x, axis=0) masked_array(data=[--, --, --], mask=[ True, True, True], fill_value=1e+20, dtype=float64)

# numpy.ma.minimum_fill_value

ma.minimum_fill_value(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L322-L371) Return the maximum value that can be represented by the dtype of an object. This function is useful for calculating a fill value suitable for taking the minimum of an array with a given dtype. Parameters: **obj** ndarray, dtype or scalar An object that can be queried for its numeric type. Returns: **val** scalar The maximum representable value. Raises: TypeError If `obj` isn’t a suitable numeric type. See also [`maximum_fill_value`](numpy.ma.maximum_fill_value#numpy.ma.maximum_fill_value "numpy.ma.maximum_fill_value") The inverse function. [`set_fill_value`](numpy.ma.set_fill_value#numpy.ma.set_fill_value "numpy.ma.set_fill_value") Set the filling value of a masked array. [`MaskedArray.fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value") Return current fill value.

#### Examples

>>> import numpy as np >>> import numpy.ma as ma >>> a = np.int8() >>> ma.minimum_fill_value(a) 127 >>> a = np.int32() >>> ma.minimum_fill_value(a) 2147483647 An array of numeric data can also be passed.
>>> a = np.array([1, 2, 3], dtype=np.int8) >>> ma.minimum_fill_value(a) 127 >>> a = np.array([1, 2, 3], dtype=np.float32) >>> ma.minimum_fill_value(a) inf

# numpy.ma.mr_

ma.mr_ _= <numpy.ma.extras.mr_class object>_ Translate slice objects to concatenation along the first axis. This is the masked array version of [`r_`](numpy.r_#numpy.r_ "numpy.r_"). See also [`r_`](numpy.r_#numpy.r_ "numpy.r_")

#### Examples

>>> import numpy as np >>> np.ma.mr_[np.ma.array([1,2,3]), 0, 0, np.ma.array([4,5,6])] masked_array(data=[1, 2, 3, ..., 4, 5, 6], mask=False, fill_value=999999)

# numpy.ma.ndenumerate

ma.ndenumerate(_a_ , _compressed =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L1857-L1923) Multidimensional index iterator. Return an iterator yielding pairs of array coordinates and values, skipping elements that are masked. With `compressed=False`, [`ma.masked`](../maskedarray.baseclass#numpy.ma.masked "numpy.ma.masked") is yielded as the value of masked elements. This behavior differs from that of [`numpy.ndenumerate`](numpy.ndenumerate#numpy.ndenumerate "numpy.ndenumerate"), which yields the value of the underlying data array. Parameters: **a** array_like An array with (possibly) masked elements. **compressed** bool, optional If True (default), masked elements are skipped. See also [`numpy.ndenumerate`](numpy.ndenumerate#numpy.ndenumerate "numpy.ndenumerate") Equivalent function ignoring any mask.

#### Notes

New in version 1.23.0.

#### Examples

>>> import numpy as np >>> a = np.ma.arange(9).reshape((3, 3)) >>> a[1, 0] = np.ma.masked >>> a[1, 2] = np.ma.masked >>> a[2, 1] = np.ma.masked >>> a masked_array( data=[[0, 1, 2], [--, 4, --], [6, --, 8]], mask=[[False, False, False], [ True, False, True], [False, True, False]], fill_value=999999) >>> for index, x in np.ma.ndenumerate(a): ... print(index, x) (0, 0) 0 (0, 1) 1 (0, 2) 2 (1, 1) 4 (2, 0) 6 (2, 2) 8 >>> for index, x in np.ma.ndenumerate(a, compressed=False): ...
print(index, x) (0, 0) 0 (0, 1) 1 (0, 2) 2 (1, 0) -- (1, 1) 4 (1, 2) -- (2, 0) 6 (2, 1) -- (2, 2) 8

# numpy.ma.ndim

ma.ndim(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7797-L7802) Return the number of dimensions of an array. Parameters: **a** array_like Input array. If it is not already an ndarray, a conversion is attempted. Returns: **number_of_dimensions** int The number of dimensions in `a`. Scalars are zero-dimensional. See also [`ndarray.ndim`](numpy.ndarray.ndim#numpy.ndarray.ndim "numpy.ndarray.ndim") equivalent method [`shape`](numpy.shape#numpy.shape "numpy.shape") dimensions of array [`ndarray.shape`](numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape") dimensions of array

#### Examples

>>> import numpy as np >>> np.ndim([[1,2,3],[4,5,6]]) 2 >>> np.ndim(np.array([[1,2,3],[4,5,6]])) 2 >>> np.ndim(1) 0

# numpy.ma.nonzero

ma.nonzero(_self_) _= <numpy.ma.core._frommethod object>_ Return the indices of unmasked elements that are not zero. Returns a tuple of arrays, one for each dimension, containing the indices of the non-zero elements in that dimension. The corresponding non-zero values can be obtained with: a[a.nonzero()] To group the indices by element, rather than dimension, use instead: np.transpose(a.nonzero()) The result of this is always a 2d array, with a row for each non-zero element. Parameters: **None** Returns: **tuple_of_arrays** tuple Indices of elements that are non-zero. See also [`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") Function operating on ndarrays. [`flatnonzero`](numpy.flatnonzero#numpy.flatnonzero "numpy.flatnonzero") Return indices that are non-zero in the flattened version of the input array. [`numpy.ndarray.nonzero`](numpy.ndarray.nonzero#numpy.ndarray.nonzero "numpy.ndarray.nonzero") Equivalent ndarray method. [`count_nonzero`](numpy.count_nonzero#numpy.count_nonzero "numpy.count_nonzero") Counts the number of non-zero elements in the input array.
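As noted above, the indices returned by `nonzero` can be fed straight back into the array; for a masked array, masked entries are excluded from both the indices and the selected values. A small sketch (values are illustrative):

```python
import numpy as np
import numpy.ma as ma

a = ma.array([0, 3, 0, 5, 7], mask=[0, 0, 0, 1, 0])
idx = a.nonzero()   # indices of unmasked non-zeros; the masked 5 is skipped
vals = a[idx]       # feed the indices straight back to get the values 3 and 7
```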
#### Examples >>> import numpy as np >>> import numpy.ma as ma >>> x = ma.array(np.eye(3)) >>> x masked_array( data=[[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]], mask=False, fill_value=1e+20) >>> x.nonzero() (array([0, 1, 2]), array([0, 1, 2])) Masked elements are ignored. >>> x[1, 1] = ma.masked >>> x masked_array( data=[[1.0, 0.0, 0.0], [0.0, --, 0.0], [0.0, 0.0, 1.0]], mask=[[False, False, False], [False, True, False], [False, False, False]], fill_value=1e+20) >>> x.nonzero() (array([0, 2]), array([0, 2])) Indices can also be grouped by element. >>> np.transpose(x.nonzero()) array([[0, 0], [2, 2]]) A common use for `nonzero` is to find the indices of an array, where a condition is True. Given an array `a`, the condition `a` > 3 is a boolean array and since False is interpreted as 0, ma.nonzero(a > 3) yields the indices of the `a` where the condition is true. >>> a = ma.array([[1,2,3],[4,5,6],[7,8,9]]) >>> a > 3 masked_array( data=[[False, False, False], [ True, True, True], [ True, True, True]], mask=False, fill_value=True) >>> ma.nonzero(a > 3) (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) The `nonzero` method of the condition array can also be called. >>> (a > 3).nonzero() (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) # numpy.ma.notmasked_contiguous ma.notmasked_contiguous(_a_ , _axis =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L2089-L2164) Find contiguous unmasked data in a masked array along the given axis. Parameters: **a** array_like The input array. **axis** int, optional Axis along which to perform the operation. If None (default), applies to a flattened version of the array, and this is the same as [`flatnotmasked_contiguous`](numpy.ma.flatnotmasked_contiguous#numpy.ma.flatnotmasked_contiguous "numpy.ma.flatnotmasked_contiguous"). Returns: **endpoints** list A list of slices (start and end indexes) of unmasked indexes in the array. 
If the input is 2d and axis is specified, the result is a list of lists. See also [`flatnotmasked_edges`](numpy.ma.flatnotmasked_edges#numpy.ma.flatnotmasked_edges "numpy.ma.flatnotmasked_edges"), [`flatnotmasked_contiguous`](numpy.ma.flatnotmasked_contiguous#numpy.ma.flatnotmasked_contiguous "numpy.ma.flatnotmasked_contiguous"), [`notmasked_edges`](numpy.ma.notmasked_edges#numpy.ma.notmasked_edges "numpy.ma.notmasked_edges") [`clump_masked`](numpy.ma.clump_masked#numpy.ma.clump_masked "numpy.ma.clump_masked"), [`clump_unmasked`](numpy.ma.clump_unmasked#numpy.ma.clump_unmasked "numpy.ma.clump_unmasked") #### Notes Only accepts 2-D arrays at most. #### Examples >>> import numpy as np >>> a = np.arange(12).reshape((3, 4)) >>> mask = np.zeros_like(a) >>> mask[1:, :-1] = 1; mask[0, 1] = 1; mask[-1, 0] = 0 >>> ma = np.ma.array(a, mask=mask) >>> ma masked_array( data=[[0, --, 2, 3], [--, --, --, 7], [8, --, --, 11]], mask=[[False, True, False, False], [ True, True, True, False], [False, True, True, False]], fill_value=999999) >>> np.array(ma[~ma.mask]) array([ 0, 2, 3, 7, 8, 11]) >>> np.ma.notmasked_contiguous(ma) [slice(0, 1, None), slice(2, 4, None), slice(7, 9, None), slice(11, 12, None)] >>> np.ma.notmasked_contiguous(ma, axis=0) [[slice(0, 1, None), slice(2, 3, None)], [], [slice(0, 1, None)], [slice(0, 3, None)]] >>> np.ma.notmasked_contiguous(ma, axis=1) [[slice(0, 1, None), slice(2, 4, None)], [slice(3, 4, None)], [slice(0, 1, None), slice(3, 4, None)]] # numpy.ma.notmasked_edges ma.notmasked_edges(_a_ , _axis =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L1982-L2031) Find the indices of the first and last unmasked values along an axis. If all values are masked, return None. Otherwise, return a list of two tuples, corresponding to the indices of the first and last unmasked values respectively. Parameters: **a** array_like The input array. **axis** int, optional Axis along which to perform the operation. 
If None (default), applies to a flattened version of the array. Returns: **edges** ndarray or list An array of start and end indexes if there are any masked data in the array. If there are no masked data in the array, `edges` is a list of the first and last index. See also [`flatnotmasked_contiguous`](numpy.ma.flatnotmasked_contiguous#numpy.ma.flatnotmasked_contiguous "numpy.ma.flatnotmasked_contiguous"), [`flatnotmasked_edges`](numpy.ma.flatnotmasked_edges#numpy.ma.flatnotmasked_edges "numpy.ma.flatnotmasked_edges"), [`notmasked_contiguous`](numpy.ma.notmasked_contiguous#numpy.ma.notmasked_contiguous "numpy.ma.notmasked_contiguous") [`clump_masked`](numpy.ma.clump_masked#numpy.ma.clump_masked "numpy.ma.clump_masked"), [`clump_unmasked`](numpy.ma.clump_unmasked#numpy.ma.clump_unmasked "numpy.ma.clump_unmasked")

#### Examples

>>> import numpy as np >>> a = np.arange(9).reshape((3, 3)) >>> m = np.zeros_like(a) >>> m[1:, 1:] = 1 >>> am = np.ma.array(a, mask=m) >>> np.array(am[~am.mask]) array([0, 1, 2, 3, 6]) >>> np.ma.notmasked_edges(am) array([0, 6])

# numpy.ma.ones

ma.ones(_shape_ , _dtype =None_, _order ='C'_) _= <numpy.ma.core._convert2ma object>_ Return a new array of given shape and type, filled with ones. Parameters: **shape** int or sequence of ints Shape of the new array, e.g., `(2, 3)` or `2`. **dtype** data-type, optional The desired data-type for the array, e.g., [`numpy.int8`](../arrays.scalars#numpy.int8 "numpy.int8"). Default is [`numpy.float64`](../arrays.scalars#numpy.float64 "numpy.float64"). **order**{‘C’, ‘F’}, optional, default: C Whether to store multi-dimensional data in row-major (C-style) or column-major (Fortran-style) order in memory. **device** str, optional The device on which to place the created array. Default: None. For Array-API interoperability only, so must be `"cpu"` if passed. New in version 2.0.0. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays.
If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: **out** MaskedArray Array of ones with the given shape, dtype, and order. See also [`ones_like`](numpy.ones_like#numpy.ones_like "numpy.ones_like") Return an array of ones with shape and type of input. [`empty`](numpy.empty#numpy.empty "numpy.empty") Return a new uninitialized array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Return a new array setting values to zero. [`full`](numpy.full#numpy.full "numpy.full") Return a new array of given shape filled with value.

#### Examples

>>> import numpy as np >>> np.ones(5) array([1., 1., 1., 1., 1.]) >>> np.ones((5,), dtype=int) array([1, 1, 1, 1, 1]) >>> np.ones((2, 1)) array([[1.], [1.]]) >>> s = (2,2) >>> np.ones(s) array([[1., 1.], [1., 1.]])

# numpy.ma.ones_like

ma.ones_like _= <numpy.ma.core._convert2ma object>_ Return an array of ones with the same shape and type as a given array. Parameters: **a** array_like The shape and data-type of `a` define these same attributes of the returned array. **dtype** data-type, optional Overrides the data type of the result. **order**{‘C’, ‘F’, ‘A’, or ‘K’}, optional Overrides the memory layout of the result. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. **subok** bool, optional. If True, then the newly created array will use the sub-class type of `a`, otherwise it will be a base-class array. Defaults to True. **shape** int or sequence of ints, optional. Overrides the shape of the result. If order=’K’ and the number of dimensions is unchanged, will try to keep order, otherwise, order=’C’ is implied. **device** str, optional The device on which to place the created array. Default: None. For Array-API interoperability only, so must be `"cpu"` if passed.
New in version 2.0.0. Returns: **out** MaskedArray Array of ones with the same shape and type as `a`. See also [`empty_like`](numpy.empty_like#numpy.empty_like "numpy.empty_like") Return an empty array with shape and type of input. [`zeros_like`](numpy.zeros_like#numpy.zeros_like "numpy.zeros_like") Return an array of zeros with shape and type of input. [`full_like`](numpy.full_like#numpy.full_like "numpy.full_like") Return a new array with shape of input filled with value. [`ones`](numpy.ones#numpy.ones "numpy.ones") Return a new array setting values to one. #### Examples >>> import numpy as np >>> x = np.arange(6) >>> x = x.reshape((2, 3)) >>> x array([[0, 1, 2], [3, 4, 5]]) >>> np.ones_like(x) array([[1, 1, 1], [1, 1, 1]]) >>> y = np.arange(3, dtype=float) >>> y array([0., 1., 2.]) >>> np.ones_like(y) array([1., 1., 1.]) # numpy.ma.outer ma.outer(_a_ , _b_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L8304-L8316) Compute the outer product of two vectors. Given two vectors `a` and `b` of length `M` and `N`, respectively, the outer product [1] is:

    [[a_0*b_0      a_0*b_1     ...  a_0*b_{N-1} ]
     [a_1*b_0        .
     [ ...                .
     [a_{M-1}*b_0               a_{M-1}*b_{N-1} ]]

Parameters: **a**(M,) array_like First input vector. Input is flattened if not already 1-dimensional. **b**(N,) array_like Second input vector. Input is flattened if not already 1-dimensional. **out**(M, N) ndarray, optional A location where the result is stored Returns: **out**(M, N) ndarray `out[i, j] = a[i] * b[j]` See also [`inner`](numpy.inner#numpy.inner "numpy.inner") [`einsum`](numpy.einsum#numpy.einsum "numpy.einsum") `einsum('i,j->ij', a.ravel(), b.ravel())` is the equivalent. [`ufunc.outer`](numpy.ufunc.outer#numpy.ufunc.outer "numpy.ufunc.outer") A generalization to dimensions other than 1D and other operations. `np.multiply.outer(a.ravel(), b.ravel())` is the equivalent.
[`linalg.outer`](numpy.linalg.outer#numpy.linalg.outer "numpy.linalg.outer") An Array API compatible variation of `np.outer`, which accepts 1-dimensional inputs only. [`tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot") `np.tensordot(a.ravel(), b.ravel(), axes=((), ()))` is the equivalent. #### Notes Masked values are replaced by 0. #### References [1] G. H. Golub and C. F. Van Loan, _Matrix Computations_ , 3rd ed., Baltimore, MD, Johns Hopkins University Press, 1996, pg. 8. #### Examples Make a (_very_ coarse) grid for computing a Mandelbrot set: >>> import numpy as np >>> rl = np.outer(np.ones((5,)), np.linspace(-2, 2, 5)) >>> rl array([[-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.]]) >>> im = np.outer(1j*np.linspace(2, -2, 5), np.ones((5,))) >>> im array([[0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j], [0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j], [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], [0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j], [0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j]]) >>> grid = rl + im >>> grid array([[-2.+2.j, -1.+2.j, 0.+2.j, 1.+2.j, 2.+2.j], [-2.+1.j, -1.+1.j, 0.+1.j, 1.+1.j, 2.+1.j], [-2.+0.j, -1.+0.j, 0.+0.j, 1.+0.j, 2.+0.j], [-2.-1.j, -1.-1.j, 0.-1.j, 1.-1.j, 2.-1.j], [-2.-2.j, -1.-2.j, 0.-2.j, 1.-2.j, 2.-2.j]]) An example using a “vector” of letters: >>> x = np.array(['a', 'b', 'c'], dtype=object) >>> np.outer(x, [1, 2, 3]) array([['a', 'aa', 'aaa'], ['b', 'bb', 'bbb'], ['c', 'cc', 'ccc']], dtype=object) # numpy.ma.outerproduct ma.outerproduct(_a_ , _b_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L8304-L8316) Compute the outer product of two vectors. Given two vectors `a` and `b` of length `M` and `N`, respectively, the outer product [1] is:

    [[a_0*b_0      a_0*b_1     ...  a_0*b_{N-1} ]
     [a_1*b_0        .
     [ ...                .
     [a_{M-1}*b_0               a_{M-1}*b_{N-1} ]]

Parameters: **a**(M,) array_like First input vector. Input is flattened if not already 1-dimensional.
**b**(N,) array_like Second input vector. Input is flattened if not already 1-dimensional. **out**(M, N) ndarray, optional A location where the result is stored Returns: **out**(M, N) ndarray `out[i, j] = a[i] * b[j]` See also [`inner`](numpy.inner#numpy.inner "numpy.inner") [`einsum`](numpy.einsum#numpy.einsum "numpy.einsum") `einsum('i,j->ij', a.ravel(), b.ravel())` is the equivalent. [`ufunc.outer`](numpy.ufunc.outer#numpy.ufunc.outer "numpy.ufunc.outer") A generalization to dimensions other than 1D and other operations. `np.multiply.outer(a.ravel(), b.ravel())` is the equivalent. [`linalg.outer`](numpy.linalg.outer#numpy.linalg.outer "numpy.linalg.outer") An Array API compatible variation of `np.outer`, which accepts 1-dimensional inputs only. [`tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot") `np.tensordot(a.ravel(), b.ravel(), axes=((), ()))` is the equivalent. #### Notes Masked values are replaced by 0. #### References [1] G. H. Golub and C. F. Van Loan, _Matrix Computations_ , 3rd ed., Baltimore, MD, Johns Hopkins University Press, 1996, pg. 8. 
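The note above that masked values are replaced by 0 can be illustrated with a short sketch (the input values here are illustrative, not from the reference text). Result entries whose row or column corresponds to a masked input element come out masked, and their underlying data is computed with the fill value 0:

```python
import numpy as np

# A sketch of ma.outer on masked inputs: the second element of `a` is
# masked, so the whole second row of the result is masked, and its
# underlying data was computed as 0 * b.
a = np.ma.array([1, 2, 3], mask=[False, True, False])
b = np.ma.array([4, 5], mask=[False, False])

res = np.ma.outer(a, b)
print(res.shape)   # (3, 2)
print(res)         # the row from the masked element prints as [-- --]
```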
#### Examples Make a (_very_ coarse) grid for computing a Mandelbrot set: >>> import numpy as np >>> rl = np.outer(np.ones((5,)), np.linspace(-2, 2, 5)) >>> rl array([[-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.]]) >>> im = np.outer(1j*np.linspace(2, -2, 5), np.ones((5,))) >>> im array([[0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j], [0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j], [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], [0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j], [0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j]]) >>> grid = rl + im >>> grid array([[-2.+2.j, -1.+2.j, 0.+2.j, 1.+2.j, 2.+2.j], [-2.+1.j, -1.+1.j, 0.+1.j, 1.+1.j, 2.+1.j], [-2.+0.j, -1.+0.j, 0.+0.j, 1.+0.j, 2.+0.j], [-2.-1.j, -1.-1.j, 0.-1.j, 1.-1.j, 2.-1.j], [-2.-2.j, -1.-2.j, 0.-2.j, 1.-2.j, 2.-2.j]]) An example using a “vector” of letters: >>> x = np.array(['a', 'b', 'c'], dtype=object) >>> np.outer(x, [1, 2, 3]) array([['a', 'aa', 'aaa'], ['b', 'bb', 'bbb'], ['c', 'cc', 'ccc']], dtype=object) # numpy.ma.polyfit ma.polyfit(_x_ , _y_ , _deg_ , _rcond =None_, _full =False_, _w =None_, _cov =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L2287-L2319) Least squares polynomial fit. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials- package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials). Fit a polynomial `p(x) = p[0] * x**deg + ... + p[deg]` of degree `deg` to points `(x, y)`. Returns a vector of coefficients `p` that minimises the squared error in the order `deg`, `deg-1`, … `0`. The [`Polynomial.fit`](numpy.polynomial.polynomial.polynomial.fit#numpy.polynomial.polynomial.Polynomial.fit "numpy.polynomial.polynomial.Polynomial.fit") class method is recommended for new code as it is more stable numerically. 
See the documentation of the method for more information. Parameters: **x** array_like, shape (M,) x-coordinates of the M sample points `(x[i], y[i])`. **y** array_like, shape (M,) or (M, K) y-coordinates of the sample points. Several data sets of sample points sharing the same x-coordinates can be fitted at once by passing in a 2D-array that contains one dataset per column. **deg** int Degree of the fitting polynomial **rcond** float, optional Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is len(x)*eps, where eps is the relative precision of the float type, about 2e-16 in most cases. **full** bool, optional Switch determining nature of return value. When it is False (the default) just the coefficients are returned, when True diagnostic information from the singular value decomposition is also returned. **w** array_like, shape (M,), optional Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse- variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None. **cov** bool or str, optional If given and not `False`, return not just the estimate but also its covariance matrix. By default, the covariance are scaled by chi2/dof, where dof = M - (deg + 1), i.e., the weights are presumed to be unreliable except in a relative sense and everything is scaled such that the reduced chi2 is unity. This scaling is omitted if `cov='unscaled'`, as is relevant for the case that the weights are w = 1/sigma, with sigma known to be a reliable estimate of the uncertainty. Returns: **p** ndarray, shape (deg + 1,) or (deg + 1, K) Polynomial coefficients, highest power first. If `y` was 2-D, the coefficients for `k`-th data set are in `p[:,k]`. 
residuals, rank, singular_values, rcond These values are only returned if `full == True` * residuals – sum of squared residuals of the least squares fit * rank – the effective rank of the scaled Vandermonde coefficient matrix * singular_values – singular values of the scaled Vandermonde coefficient matrix * rcond – value of `rcond`. For more details, see [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq"). **V** ndarray, shape (deg + 1, deg + 1) or (deg + 1, deg + 1, K) Present only if `full == False` and `cov == True`. The covariance matrix of the polynomial coefficient estimates. The diagonal of this matrix contains the variance estimates for each coefficient. If y is a 2-D array, then the covariance matrix for the `k`-th data set is in `V[:,:,k]` Warns: RankWarning The rank of the coefficient matrix in the least-squares fit is deficient. The warning is only raised if `full == False`. The warnings can be turned off by >>> import warnings >>> warnings.simplefilter('ignore', np.exceptions.RankWarning) See also [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval") Compute polynomial values. [`linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq") Computes a least-squares fit. [`scipy.interpolate.UnivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html#scipy.interpolate.UnivariateSpline "\(in SciPy v1.14.1\)") Computes spline fits. #### Notes Any masked values in x are propagated to y, and vice-versa. The solution minimizes the squared error \\[E = \sum_{j=0}^k |p(x_j) - y_j|^2\\] in the equations:

    x[0]**n * p[0] + ... + x[0] * p[n-1] + p[n] = y[0]
    x[1]**n * p[0] + ... + x[1] * p[n-1] + p[n] = y[1]
    ...
    x[k]**n * p[0] + ... + x[k] * p[n-1] + p[n] = y[k]

The coefficient matrix of the coefficients `p` is a Vandermonde matrix.
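The masked-value propagation described in these notes can be sketched with illustrative data (not taken from the reference text): a masked x-value excludes that sample point from the fit entirely.

```python
import numpy as np

# The unmasked points (0, 0), (1, 2), (3, 6) lie exactly on y = 2*x,
# so a degree-1 fit recovers slope 2 and intercept 0 up to floating-
# point error; the outlier at x=2 is masked and therefore ignored.
x = np.ma.array([0.0, 1.0, 2.0, 3.0], mask=[False, False, True, False])
y = np.array([0.0, 2.0, 100.0, 6.0])

coeffs = np.ma.polyfit(x, y, 1)
print(coeffs)  # approximately [2., 0.]
```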
[`polyfit`](numpy.polyfit#numpy.polyfit "numpy.polyfit") issues a [`RankWarning`](numpy.exceptions.rankwarning#numpy.exceptions.RankWarning "numpy.exceptions.RankWarning") when the least-squares fit is badly conditioned. This implies that the best fit is not well-defined due to numerical error. The results may be improved by lowering the polynomial degree or by replacing `x` by `x` \- `x`.mean(). The `rcond` parameter can also be set to a value smaller than its default, but the resulting fit may be spurious: including contributions from the small singular values can add numerical noise to the result. Note that fitting polynomial coefficients is inherently badly conditioned when the degree of the polynomial is large or the interval of sample points is badly centered. The quality of the fit should always be checked in these cases. When polynomial fits are not satisfactory, splines may be a good alternative. #### References [1] Wikipedia, “Curve fitting”, [2] Wikipedia, “Polynomial interpolation”, #### Examples >>> import numpy as np >>> import warnings >>> x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0]) >>> y = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0]) >>> z = np.polyfit(x, y, 3) >>> z array([ 0.08703704, -0.81349206, 1.69312169, -0.03968254]) # may vary It is convenient to use [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d") objects for dealing with polynomials: >>> p = np.poly1d(z) >>> p(0.5) 0.6143849206349179 # may vary >>> p(3.5) -0.34732142857143039 # may vary >>> p(10) 22.579365079365115 # may vary High-order polynomials may oscillate wildly: >>> with warnings.catch_warnings(): ... warnings.simplefilter('ignore', np.exceptions.RankWarning) ... p30 = np.poly1d(np.polyfit(x, y, 30)) ... 
>>> p30(4) -0.80000000000000204 # may vary >>> p30(5) -0.99999999999999445 # may vary >>> p30(4.5) -0.10547061179440398 # may vary Illustration: >>> import matplotlib.pyplot as plt >>> xp = np.linspace(-2, 6, 100) >>> _ = plt.plot(x, y, '.', xp, p(xp), '-', xp, p30(xp), '--') >>> plt.ylim(-2,2) (-2, 2) >>> plt.show() # numpy.ma.power ma.power(_a_ , _b_ , _third =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7169-L7245) Returns element-wise base array raised to power from second array. This is the masked array version of [`numpy.power`](numpy.power#numpy.power "numpy.power"). For details see [`numpy.power`](numpy.power#numpy.power "numpy.power"). See also [`numpy.power`](numpy.power#numpy.power "numpy.power") #### Notes The _out_ argument to [`numpy.power`](numpy.power#numpy.power "numpy.power") is not supported, `third` has to be None. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> x = [11.2, -3.973, 0.801, -1.41] >>> mask = [0, 0, 0, 1] >>> masked_x = ma.masked_array(x, mask) >>> masked_x masked_array(data=[11.2, -3.973, 0.801, --], mask=[False, False, False, True], fill_value=1e+20) >>> ma.power(masked_x, 2) masked_array(data=[125.43999999999998, 15.784728999999999, 0.6416010000000001, --], mask=[False, False, False, True], fill_value=1e+20) >>> y = [-0.5, 2, 0, 17] >>> masked_y = ma.masked_array(y, mask) >>> masked_y masked_array(data=[-0.5, 2.0, 0.0, --], mask=[False, False, False, True], fill_value=1e+20) >>> ma.power(masked_x, masked_y) masked_array(data=[0.2988071523335984, 15.784728999999999, 1.0, --], mask=[False, False, False, True], fill_value=1e+20) # numpy.ma.prod ma.prod(_self_ , _axis=None_ , _dtype=None_ , _out=None_ , _keepdims= _)_= _ Return the product of the array elements over the given axis. Masked elements are set to 1 internally for computation. Refer to [`numpy.prod`](numpy.prod#numpy.prod "numpy.prod") for full documentation. 
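A minimal sketch of the behaviour just described, with illustrative values: masked elements contribute the multiplicative identity 1 to the product.

```python
import numpy as np

# Masked entries are treated as 1 in the product, so the masked 2
# is skipped: 1 * 3 * 4 == 12.
x = np.ma.array([1, 2, 3, 4], mask=[False, True, False, False])

print(x.prod())        # 12
print(np.ma.prod(x))   # 12, via the module-level function
```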
See also [`numpy.ndarray.prod`](numpy.ndarray.prod#numpy.ndarray.prod "numpy.ndarray.prod") corresponding function for ndarrays [`numpy.prod`](numpy.prod#numpy.prod "numpy.prod") equivalent function #### Notes Arithmetic is modular when using integer types, and no error is raised on overflow. # numpy.ma.ptp ma.ptp(_obj_ , _axis=None_ , _out=None_ , _fill_value=None_ , _keepdims= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7073-L7081) Return (maximum - minimum) along the given dimension (i.e. peak-to-peak value). Warning [`ptp`](numpy.ptp#numpy.ptp "numpy.ptp") preserves the data type of the array. This means the return value for an input of signed integers with n bits (e.g. `np.int8`, `np.int16`, etc) is also a signed integer with n bits. In that case, peak-to-peak values greater than `2**(n-1)-1` will be returned as negative values. An example with a work-around is shown below. Parameters: **axis**{None, int}, optional Axis along which to find the peaks. If None (default) the flattened array is used. **out**{None, array_like}, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type will be cast if necessary. **fill_value** scalar or None, optional Value used to fill in the masked values. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. Returns: **ptp** ndarray. A new array holding the result, unless `out` was specified, in which case a reference to `out` is returned. #### Examples >>> import numpy as np >>> x = np.ma.MaskedArray([[4, 9, 2, 10], ... 
[6, 9, 7, 12]]) >>> x.ptp(axis=1) masked_array(data=[8, 6], mask=False, fill_value=999999) >>> x.ptp(axis=0) masked_array(data=[2, 0, 5, 2], mask=False, fill_value=999999) >>> x.ptp() 10 This example shows that a negative value can be returned when the input is an array of signed integers. >>> y = np.ma.MaskedArray([[1, 127], ... [0, 127], ... [-1, 127], ... [-2, 127]], dtype=np.int8) >>> y.ptp(axis=1) masked_array(data=[ 126, 127, -128, -127], mask=False, fill_value=np.int64(999999), dtype=int8) A work-around is to use the `view()` method to view the result as unsigned integers with the same bit width: >>> y.ptp(axis=1).view(np.uint8) masked_array(data=[126, 127, 128, 129], mask=False, fill_value=np.uint64(999999), dtype=uint8) # numpy.ma.put ma.put(_a_ , _indices_ , _values_ , _mode ='raise'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7545-L7583) Set storage-indexed locations to corresponding values. This function is equivalent to [`MaskedArray.put`](numpy.ma.maskedarray.put#numpy.ma.MaskedArray.put "numpy.ma.MaskedArray.put"), see that method for details. See also [`MaskedArray.put`](numpy.ma.maskedarray.put#numpy.ma.MaskedArray.put "numpy.ma.MaskedArray.put") #### Examples Putting values in a masked array: >>> a = np.ma.array([1, 2, 3, 4], mask=[False, True, False, False]) >>> np.ma.put(a, [1, 3], [10, 30]) >>> a masked_array(data=[ 1, 10, 3, 30], mask=False, fill_value=999999) Using put with a 2D array: >>> b = np.ma.array([[1, 2], [3, 4]], mask=[[False, True], [False, False]]) >>> np.ma.put(b, [[0, 1], [1, 0]], [[10, 20], [30, 40]]) >>> b masked_array( data=[[40, 30], [ 3, 4]], mask=False, fill_value=999999) # numpy.ma.putmask ma.putmask(_a_ , _mask_ , _values_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7586-L7640) Changes elements of an array based on conditional and input values. 
This is the masked array version of [`numpy.putmask`](numpy.putmask#numpy.putmask "numpy.putmask"), for details see [`numpy.putmask`](numpy.putmask#numpy.putmask "numpy.putmask"). See also [`numpy.putmask`](numpy.putmask#numpy.putmask "numpy.putmask") #### Notes Using a masked array as `values` will **not** transform a [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") into a [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray"). #### Examples >>> import numpy as np >>> arr = [[1, 2], [3, 4]] >>> mask = [[1, 0], [0, 0]] >>> x = np.ma.array(arr, mask=mask) >>> np.ma.putmask(x, x < 4, 10*x) >>> x masked_array( data=[[--, 20], [30, 4]], mask=[[ True, False], [False, False]], fill_value=999999) >>> x.data array([[10, 20], [30, 4]]) # numpy.ma.ravel ma.ravel(_self_ , _order ='C'_)_= _ Returns a 1D version of self, as a view. Parameters: **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional The elements of `a` are read using this index order. ‘C’ means to index the elements in C-like order, with the last axis index changing fastest, back to the first axis index changing slowest. ‘F’ means to index the elements in Fortran-like index order, with the first index changing fastest, and the last index changing slowest. Note that the ‘C’ and ‘F’ options take no account of the memory layout of the underlying array, and only refer to the order of axis indexing. ‘A’ means to read the elements in Fortran-like index order if `m` is Fortran _contiguous_ in memory, C-like order otherwise. ‘K’ means to read the elements in the order they occur in memory, except for reversing the data when strides are negative. By default, ‘C’ index order is used. (Masked arrays currently use ‘A’ on the data when ‘K’ is passed.) Returns: MaskedArray Output view is of shape `(self.size,)` (or `(np.ma.product(self.shape),)`). 
#### Examples >>> import numpy as np >>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4) >>> x masked_array( data=[[1, --, 3], [--, 5, --], [7, --, 9]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=999999) >>> x.ravel() masked_array(data=[1, --, 3, --, 5, --, 7, --, 9], mask=[False, True, False, True, False, True, False, True, False], fill_value=999999) # numpy.ma.reshape ma.reshape(_a_ , _new_shape_ , _order ='C'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7682-L7729) Returns an array containing the same data with a new shape. Refer to [`MaskedArray.reshape`](numpy.ma.maskedarray.reshape#numpy.ma.MaskedArray.reshape "numpy.ma.MaskedArray.reshape") for full documentation. See also [`MaskedArray.reshape`](numpy.ma.maskedarray.reshape#numpy.ma.MaskedArray.reshape "numpy.ma.MaskedArray.reshape") equivalent function #### Examples Reshaping a 1-D array: >>> a = np.ma.array([1, 2, 3, 4]) >>> np.ma.reshape(a, (2, 2)) masked_array( data=[[1, 2], [3, 4]], mask=False, fill_value=999999) Reshaping a 2-D array: >>> b = np.ma.array([[1, 2], [3, 4]]) >>> np.ma.reshape(b, (1, 4)) masked_array(data=[[1, 2, 3, 4]], mask=False, fill_value=999999) Reshaping a 1-D array with a mask: >>> c = np.ma.array([1, 2, 3, 4], mask=[False, True, False, False]) >>> np.ma.reshape(c, (2, 2)) masked_array( data=[[1, --], [3, 4]], mask=[[False, True], [False, False]], fill_value=999999) # numpy.ma.resize ma.resize(_x_ , _new_shape_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7732-L7794) Return a new masked array with the specified size and shape. This is the masked equivalent of the [`numpy.resize`](numpy.resize#numpy.resize "numpy.resize") function. The new array is filled with repeated copies of `x` (in the order that the data are stored in memory). If `x` is masked, the new array will be masked, and the new mask will be a repetition of the old one. 
See also [`numpy.resize`](numpy.resize#numpy.resize "numpy.resize") Equivalent function in the top level NumPy module. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> a = ma.array([[1, 2] ,[3, 4]]) >>> a[0, 1] = ma.masked >>> a masked_array( data=[[1, --], [3, 4]], mask=[[False, True], [False, False]], fill_value=999999) >>> np.resize(a, (3, 3)) masked_array( data=[[1, 2, 3], [4, 1, 2], [3, 4, 1]], mask=False, fill_value=999999) >>> ma.resize(a, (3, 3)) masked_array( data=[[1, --, 3], [4, 1, --], [3, 4, 1]], mask=[[False, True, False], [False, False, True], [False, False, False]], fill_value=999999) A MaskedArray is always returned, regardless of the input type. >>> a = np.array([[1, 2] ,[3, 4]]) >>> ma.resize(a, (3, 3)) masked_array( data=[[1, 2, 3], [4, 1, 2], [3, 4, 1]], mask=False, fill_value=999999) # numpy.ma.right_shift ma.right_shift(_a_ , _n_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7508-L7542) Shift the bits of an integer to the right. This is the masked array version of [`numpy.right_shift`](numpy.right_shift#numpy.right_shift "numpy.right_shift"), for details see that function. See also [`numpy.right_shift`](numpy.right_shift#numpy.right_shift "numpy.right_shift") #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> x = [11, 3, 8, 1] >>> mask = [0, 0, 0, 1] >>> masked_x = ma.masked_array(x, mask) >>> masked_x masked_array(data=[11, 3, 8, --], mask=[False, False, False, True], fill_value=999999) >>> ma.right_shift(masked_x,1) masked_array(data=[5, 1, 4, --], mask=[False, False, False, True], fill_value=999999) # numpy.ma.round ma.round(_a_ , _decimals =0_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L8128-L8181) Return a copy of a, rounded to ‘decimals’ places. When ‘decimals’ is negative, it specifies the number of positions to the left of the decimal point. The real and imaginary parts of complex numbers are rounded separately. 
Nothing is done if the array is not of float type and ‘decimals’ is greater than or equal to 0. Parameters: **decimals** int Number of decimals to round to. May be negative. **out** array_like Existing array to use for output. If not given, returns a default copy of a. #### Notes If out is given and does not have a mask attribute, the mask of a is lost! #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> x = [11.2, -3.973, 0.801, -1.41] >>> mask = [0, 0, 0, 1] >>> masked_x = ma.masked_array(x, mask) >>> masked_x masked_array(data=[11.2, -3.973, 0.801, --], mask=[False, False, False, True], fill_value=1e+20) >>> ma.round_(masked_x) masked_array(data=[11.0, -4.0, 1.0, --], mask=[False, False, False, True], fill_value=1e+20) >>> ma.round(masked_x, decimals=1) masked_array(data=[11.2, -4.0, 0.8, --], mask=[False, False, False, True], fill_value=1e+20) >>> ma.round_(masked_x, decimals=-1) masked_array(data=[10.0, -0.0, 0.0, --], mask=[False, False, False, True], fill_value=1e+20) # numpy.ma.round_ ma.round_(_a_ , _decimals =0_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L8128-L8181) Return a copy of a, rounded to ‘decimals’ places. When ‘decimals’ is negative, it specifies the number of positions to the left of the decimal point. The real and imaginary parts of complex numbers are rounded separately. Nothing is done if the array is not of float type and ‘decimals’ is greater than or equal to 0. Parameters: **decimals** int Number of decimals to round to. May be negative. **out** array_like Existing array to use for output. If not given, returns a default copy of a. #### Notes If out is given and does not have a mask attribute, the mask of a is lost! 
#### Examples >>> import numpy as np >>> import numpy.ma as ma >>> x = [11.2, -3.973, 0.801, -1.41] >>> mask = [0, 0, 0, 1] >>> masked_x = ma.masked_array(x, mask) >>> masked_x masked_array(data=[11.2, -3.973, 0.801, --], mask=[False, False, False, True], fill_value=1e+20) >>> ma.round_(masked_x) masked_array(data=[11.0, -4.0, 1.0, --], mask=[False, False, False, True], fill_value=1e+20) >>> ma.round(masked_x, decimals=1) masked_array(data=[11.2, -4.0, 0.8, --], mask=[False, False, False, True], fill_value=1e+20) >>> ma.round_(masked_x, decimals=-1) masked_array(data=[10.0, -0.0, 0.0, --], mask=[False, False, False, True], fill_value=1e+20) # numpy.ma.set_fill_value ma.set_fill_value(_a_ , _fill_value_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L512-L575) Set the filling value of a, if a is a masked array. This function changes the fill value of the masked array `a` in place. If `a` is not a masked array, the function returns silently, without doing anything. Parameters: **a** array_like Input array. **fill_value** dtype Filling value. A consistency test is performed to make sure the value is compatible with the dtype of `a`. Returns: None Nothing returned by this function. See also [`maximum_fill_value`](numpy.ma.maximum_fill_value#numpy.ma.maximum_fill_value "numpy.ma.maximum_fill_value") Return the default fill value for a dtype. [`MaskedArray.fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value") Return current fill value. [`MaskedArray.set_fill_value`](numpy.ma.maskedarray.set_fill_value#numpy.ma.MaskedArray.set_fill_value "numpy.ma.MaskedArray.set_fill_value") Equivalent method. 
#### Examples >>> import numpy as np >>> import numpy.ma as ma >>> a = np.arange(5) >>> a array([0, 1, 2, 3, 4]) >>> a = ma.masked_where(a < 3, a) >>> a masked_array(data=[--, --, --, 3, 4], mask=[ True, True, True, False, False], fill_value=999999) >>> ma.set_fill_value(a, -999) >>> a masked_array(data=[--, --, --, 3, 4], mask=[ True, True, True, False, False], fill_value=-999) Nothing happens if `a` is not a masked array. >>> a = list(range(5)) >>> a [0, 1, 2, 3, 4] >>> ma.set_fill_value(a, 100) >>> a [0, 1, 2, 3, 4] >>> a = np.arange(5) >>> a array([0, 1, 2, 3, 4]) >>> ma.set_fill_value(a, 100) >>> a array([0, 1, 2, 3, 4]) # numpy.ma.setdiff1d ma.setdiff1d(_ar1_ , _ar2_ , _assume_unique =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L1526-L1552) Set difference of 1D arrays with unique elements. The output is always a masked array. See [`numpy.setdiff1d`](numpy.setdiff1d#numpy.setdiff1d "numpy.setdiff1d") for more details. See also [`numpy.setdiff1d`](numpy.setdiff1d#numpy.setdiff1d "numpy.setdiff1d") Equivalent function for ndarrays. #### Examples >>> import numpy as np >>> x = np.ma.array([1, 2, 3, 4], mask=[0, 1, 0, 1]) >>> np.ma.setdiff1d(x, [1, 2]) masked_array(data=[3, --], mask=[False, True], fill_value=999999) # numpy.ma.setxor1d ma.setxor1d(_ar1_ , _ar2_ , _assume_unique =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L1388-L1422) Set exclusive-or of 1-D arrays with unique elements. The output is always a masked array. See [`numpy.setxor1d`](numpy.setxor1d#numpy.setxor1d "numpy.setxor1d") for more details. See also [`numpy.setxor1d`](numpy.setxor1d#numpy.setxor1d "numpy.setxor1d") Equivalent function for ndarrays. 
#### Examples >>> import numpy as np >>> ar1 = np.ma.array([1, 2, 3, 2, 4]) >>> ar2 = np.ma.array([2, 3, 5, 7, 5]) >>> np.ma.setxor1d(ar1, ar2) masked_array(data=[1, 4, 5, 7], mask=False, fill_value=999999) # numpy.ma.shape ma.shape(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7807-L7809) Return the shape of an array. Parameters: **a** array_like Input array. Returns: **shape** tuple of ints The elements of the shape tuple give the lengths of the corresponding array dimensions. See also [`len`](https://docs.python.org/3/library/functions.html#len "\(in Python v3.13\)") `len(a)` is equivalent to `np.shape(a)[0]` for N-D arrays with `N>=1`. [`ndarray.shape`](numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape") Equivalent array method. #### Examples >>> import numpy as np >>> np.shape(np.eye(3)) (3, 3) >>> np.shape([[1, 3]]) (1, 2) >>> np.shape([0]) (1,) >>> np.shape(0) () >>> a = np.array([(1, 2), (3, 4), (5, 6)], ... dtype=[('x', 'i4'), ('y', 'i4')]) >>> np.shape(a) (3,) >>> a.shape (3,) # numpy.ma.size ma.size(_obj_ , _axis =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7813-L7815) Return the number of elements along a given axis. Parameters: **a** array_like Input data. **axis** int, optional Axis along which the elements are counted. By default, give the total number of elements. Returns: **element_count** int Number of elements along the specified axis. See also [`shape`](numpy.shape#numpy.shape "numpy.shape") dimensions of array [`ndarray.shape`](numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape") dimensions of array [`ndarray.size`](numpy.ndarray.size#numpy.ndarray.size "numpy.ndarray.size") number of elements in array #### Examples >>> import numpy as np >>> a = np.array([[1,2,3],[4,5,6]]) >>> np.size(a) 6 >>> np.size(a,1) 3 >>> np.size(a,0) 2 # numpy.ma.soften_mask ma.soften_mask(_self_)_= _ Force the mask to soft (default), allowing unmasking by assignment. 
Whether the mask of a masked array is hard or soft is determined by its [`hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") property. `soften_mask` sets [`hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") to `False` (and returns the modified self). See also [`ma.MaskedArray.hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") [`ma.MaskedArray.harden_mask`](numpy.ma.maskedarray.harden_mask#numpy.ma.MaskedArray.harden_mask "numpy.ma.MaskedArray.harden_mask") # numpy.ma.sort ma.sort(_a_ , _axis =-1_, _kind =None_, _order =None_, _endwith =True_, _fill_value =None_, _*_ , _stable =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7266-L7306) Return a sorted copy of the masked array. Equivalent to creating a copy of the array and applying the MaskedArray `sort()` method. Refer to `MaskedArray.sort` for the full documentation See also [`MaskedArray.sort`](numpy.ma.maskedarray.sort#numpy.ma.MaskedArray.sort "numpy.ma.MaskedArray.sort") equivalent method #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> x = [11.2, -3.973, 0.801, -1.41] >>> mask = [0, 0, 0, 1] >>> masked_x = ma.masked_array(x, mask) >>> masked_x masked_array(data=[11.2, -3.973, 0.801, --], mask=[False, False, False, True], fill_value=1e+20) >>> ma.sort(masked_x) masked_array(data=[-3.973, 0.801, 11.2, --], mask=[False, False, False, True], fill_value=1e+20) # numpy.ma.squeeze ma.squeeze _= _ Remove axes of length one from `a`. Parameters: **a** array_like Input data. **axis** None or int or tuple of ints, optional Selects a subset of the entries of length one in the shape. If an axis is selected with shape entry greater than one, an error is raised. Returns: **squeezed** MaskedArray The input array, but with all or a subset of the dimensions of length 1 removed. This is always `a` itself or a view into `a`. 
Note that if all axes are squeezed, the result is a 0d array and not a scalar. Raises: ValueError If `axis` is not None, and an axis being squeezed is not of length 1 See also [`expand_dims`](numpy.expand_dims#numpy.expand_dims "numpy.expand_dims") The inverse operation, adding entries of length one [`reshape`](numpy.reshape#numpy.reshape "numpy.reshape") Insert, remove, and combine dimensions, and resize existing ones #### Examples >>> import numpy as np >>> x = np.array([[[0], [1], [2]]]) >>> x.shape (1, 3, 1) >>> np.squeeze(x).shape (3,) >>> np.squeeze(x, axis=0).shape (3, 1) >>> np.squeeze(x, axis=1).shape Traceback (most recent call last): ... ValueError: cannot select an axis to squeeze out which has size not equal to one >>> np.squeeze(x, axis=2).shape (1, 3) >>> x = np.array([[1234]]) >>> x.shape (1, 1) >>> np.squeeze(x) array(1234) # 0d array >>> np.squeeze(x).shape () >>> np.squeeze(x)[()] 1234 # numpy.ma.stack ma.stack _= _ Join a sequence of arrays along a new axis. The `axis` parameter specifies the index of the new axis in the dimensions of the result. For example, if `axis=0` it will be the first dimension and if `axis=-1` it will be the last dimension. Parameters: **arrays** sequence of ndarrays Each array must have the same shape. In the case of a single ndarray array_like input, it will be treated as a sequence of arrays; i.e., each element along the zeroth axis is treated as a separate array. **axis** int, optional The axis in the result array along which the input arrays are stacked. **out** ndarray, optional If provided, the destination to place the result. The shape must be correct, matching that of what stack would have returned if no out argument were specified. **dtype** str or dtype If provided, the destination array will have this dtype. Cannot be provided together with `out`. New in version 1.24. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur. Defaults to ‘same_kind’. 
New in version 1.24. Returns: **stacked** ndarray The stacked array has one more dimension than the input arrays. See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble an nd-array from nested lists of blocks. [`split`](numpy.split#numpy.split "numpy.split") Split array into a list of multiple sub-arrays of equal size. [`unstack`](numpy.unstack#numpy.unstack "numpy.unstack") Split an array into a tuple of sub-arrays along an axis. #### Notes The function is applied to both the _data and the _mask, if any. #### Examples >>> import numpy as np >>> rng = np.random.default_rng() >>> arrays = [rng.normal(size=(3,4)) for _ in range(10)] >>> np.stack(arrays, axis=0).shape (10, 3, 4) >>> np.stack(arrays, axis=1).shape (3, 10, 4) >>> np.stack(arrays, axis=2).shape (3, 4, 10) >>> a = np.array([1, 2, 3]) >>> b = np.array([4, 5, 6]) >>> np.stack((a, b)) array([[1, 2, 3], [4, 5, 6]]) >>> np.stack((a, b), axis=-1) array([[1, 4], [2, 5], [3, 6]]) # numpy.ma.std ma.std(_self_ , _axis=None_ , _dtype=None_ , _out=None_ , _ddof=0_ , _keepdims=<no value>_ , _mean=<no value>_) Returns the standard deviation of the array elements along the given axis. Masked entries are ignored. Refer to [`numpy.std`](numpy.std#numpy.std "numpy.std") for full documentation. See also [`numpy.ndarray.std`](numpy.ndarray.std#numpy.ndarray.std "numpy.ndarray.std") corresponding function for ndarrays [`numpy.std`](numpy.std#numpy.std "numpy.std") Equivalent function # numpy.ma.sum ma.sum(_self_ , _axis=None_ , _dtype=None_ , _out=None_ , _keepdims=<no value>_) Return the sum of the array elements over the given axis. Masked elements are set to 0 internally. Refer to [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") for full documentation. 
See also [`numpy.ndarray.sum`](numpy.ndarray.sum#numpy.ndarray.sum "numpy.ndarray.sum") corresponding function for ndarrays [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") equivalent function #### Examples >>> import numpy as np >>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4) >>> x masked_array( data=[[1, --, 3], [--, 5, --], [7, --, 9]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=999999) >>> x.sum() 25 >>> x.sum(axis=1) masked_array(data=[4, 5, 16], mask=[False, False, False], fill_value=999999) >>> x.sum(axis=0) masked_array(data=[8, 5, 12], mask=[False, False, False], fill_value=999999) >>> print(type(x.sum(axis=0, dtype=np.int64)[0])) <class 'numpy.int64'> # numpy.ma.swapaxes ma.swapaxes(_self_ , _*args_ , _**params_) Return a view of the array with `axis1` and `axis2` interchanged, equivalent to `a.swapaxes(axis1, axis2)`. Refer to [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") for full documentation. See also [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") equivalent function # numpy.ma.take ma.take(_a_ , _indices_ , _axis =None_, _out =None_, _mode ='raise'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7161-L7166) Take elements from a masked array along an axis; refer to [`numpy.take`](numpy.take#numpy.take "numpy.take") for full documentation. # numpy.ma.trace ma.trace(_self_ , _offset=0_ , _axis1=0_ , _axis2=1_ , _dtype=None_ , _out=None_) Return the sum along diagonals of the array, equivalent to `a.trace(offset=0, axis1=0, axis2=1, dtype=None, out=None)`. Refer to [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace") for full documentation. See also [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace") equivalent function # numpy.ma.transpose ma.transpose(_a_ , _axes =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7643-L7679) Permute the dimensions of an array. This function is exactly equivalent to [`numpy.transpose`](numpy.transpose#numpy.transpose "numpy.transpose"). 
See also [`numpy.transpose`](numpy.transpose#numpy.transpose "numpy.transpose") Equivalent function in top-level NumPy module. #### Examples >>> import numpy as np >>> import numpy.ma as ma >>> x = ma.arange(4).reshape((2,2)) >>> x[1, 1] = ma.masked >>> x masked_array( data=[[0, 1], [2, --]], mask=[[False, False], [False, True]], fill_value=999999) >>> ma.transpose(x) masked_array( data=[[0, 2], [1, --]], mask=[[False, False], [False, True]], fill_value=999999) # numpy.ma.union1d ma.union1d(_ar1_ , _ar2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L1502-L1523) Union of two arrays. The output is always a masked array. See [`numpy.union1d`](numpy.union1d#numpy.union1d "numpy.union1d") for more details. See also [`numpy.union1d`](numpy.union1d#numpy.union1d "numpy.union1d") Equivalent function for ndarrays. #### Examples >>> import numpy as np >>> ar1 = np.ma.array([1, 2, 3, 4]) >>> ar2 = np.ma.array([3, 4, 5, 6]) >>> np.ma.union1d(ar1, ar2) masked_array(data=[1, 2, 3, 4, 5, 6], mask=False, fill_value=999999) # numpy.ma.unique ma.unique(_ar1_ , _return_index =False_, _return_inverse =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L1305-L1352) Finds the unique elements of an array. Masked values are considered the same element (masked). The output array is always a masked array. See [`numpy.unique`](numpy.unique#numpy.unique "numpy.unique") for more details. See also [`numpy.unique`](numpy.unique#numpy.unique "numpy.unique") Equivalent function for ndarrays. 
#### Examples >>> import numpy as np >>> a = [1, 2, 1000, 2, 3] >>> mask = [0, 0, 1, 0, 0] >>> masked_a = np.ma.masked_array(a, mask) >>> masked_a masked_array(data=[1, 2, --, 2, 3], mask=[False, False, True, False, False], fill_value=999999) >>> np.ma.unique(masked_a) masked_array(data=[1, 2, 3, --], mask=[False, False, False, True], fill_value=999999) >>> np.ma.unique(masked_a, return_index=True) (masked_array(data=[1, 2, 3, --], mask=[False, False, False, True], fill_value=999999), array([0, 1, 4, 2])) >>> np.ma.unique(masked_a, return_inverse=True) (masked_array(data=[1, 2, 3, --], mask=[False, False, False, True], fill_value=999999), array([0, 1, 3, 1, 2])) >>> np.ma.unique(masked_a, return_index=True, return_inverse=True) (masked_array(data=[1, 2, 3, --], mask=[False, False, False, True], fill_value=999999), array([0, 1, 4, 2]), array([0, 1, 3, 1, 2])) # numpy.ma.vander ma.vander(_x_ , _n =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/extras.py#L2273-L2282) Generate a Vandermonde matrix. The columns of the output matrix are powers of the input vector. The order of the powers is determined by the `increasing` boolean argument. Specifically, when `increasing` is False, the `i`-th output column is the input vector raised element-wise to the power of `N - i - 1`. Such a matrix with a geometric progression in each row is named for Alexandre- Theophile Vandermonde. Parameters: **x** array_like 1-D input array. **N** int, optional Number of columns in the output. If `N` is not specified, a square array is returned (`N = len(x)`). **increasing** bool, optional Order of the powers of the columns. If True, the powers increase from left to right, if False (the default) they are reversed. Returns: **out** ndarray Vandermonde matrix. If `increasing` is False, the first column is `x^(N-1)`, the second `x^(N-2)` and so forth. If `increasing` is True, the columns are `x^0, x^1, ..., x^(N-1)`. 
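For the masked version, masked entries in `x` produce rows of zeros in the output; a minimal sketch:

```python
import numpy as np

x = np.ma.array([1, 2, 3], mask=[False, True, False])
m = np.ma.vander(x)   # 3x3 Vandermonde matrix, decreasing powers
print(m)
# the row corresponding to the masked entry (index 1) is all zeros
```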
See also [`polynomial.polynomial.polyvander`](numpy.polynomial.polynomial.polyvander#numpy.polynomial.polynomial.polyvander "numpy.polynomial.polynomial.polyvander") #### Notes Masked values in the input array result in rows of zeros. #### Examples >>> import numpy as np >>> x = np.array([1, 2, 3, 5]) >>> N = 3 >>> np.vander(x, N) array([[ 1, 1, 1], [ 4, 2, 1], [ 9, 3, 1], [25, 5, 1]]) >>> np.column_stack([x**(N-1-i) for i in range(N)]) array([[ 1, 1, 1], [ 4, 2, 1], [ 9, 3, 1], [25, 5, 1]]) >>> x = np.array([1, 2, 3, 5]) >>> np.vander(x) array([[ 1, 1, 1, 1], [ 8, 4, 2, 1], [ 27, 9, 3, 1], [125, 25, 5, 1]]) >>> np.vander(x, increasing=True) array([[ 1, 1, 1, 1], [ 1, 2, 4, 8], [ 1, 3, 9, 27], [ 1, 5, 25, 125]]) The determinant of a square Vandermonde matrix is the product of the differences between the values of the input vector: >>> np.linalg.det(np.vander(x)) 48.000000000000043 # may vary >>> (5-3)*(5-2)*(5-1)*(3-2)*(3-1)*(2-1) 48 # numpy.ma.var ma.var(_self_ , _axis=None_ , _dtype=None_ , _out=None_ , _ddof=0_ , _keepdims=<no value>_ , _mean=<no value>_) Compute the variance along the specified axis. Returns the variance of the array elements, a measure of the spread of a distribution. The variance is computed for the flattened array by default, otherwise over the specified axis. Parameters: **a** array_like Array containing numbers whose variance is desired. If `a` is not an array, a conversion is attempted. **axis** None or int or tuple of ints, optional Axis or axes along which the variance is computed. The default is to compute the variance of the flattened array. If this is a tuple of ints, a variance is performed over multiple axes, instead of a single axis or all the axes as before. **dtype** data-type, optional Type to use in computing the variance. For arrays of integer type the default is [`float64`](../arrays.scalars#numpy.float64 "numpy.float64"); for arrays of float types it is the same as the array type. 
**out** ndarray, optional Alternate output array in which to place the result. It must have the same shape as the expected output, but the type is cast if necessary. **ddof**{int, float}, optional “Delta Degrees of Freedom”: the divisor used in the calculation is `N - ddof`, where `N` represents the number of elements. By default `ddof` is zero. See notes for details about use of `ddof`. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the [`var`](numpy.var#numpy.var "numpy.var") method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non- default value will be. If the sub-class’ method does not implement `keepdims` any exceptions will be raised. **where** array_like of bool, optional Elements to include in the variance. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.20.0. **mean** array like, optional Provide the mean to prevent its recalculation. The mean should have a shape as if it was calculated with `keepdims=True`. The axis for the calculation of the mean should be the same as used in the call to this var function. New in version 2.0.0. **correction**{int, float}, optional Array API compatible name for the `ddof` parameter. Only one of them can be provided at the same time. New in version 2.0.0. Returns: **variance** ndarray, see dtype parameter above If `out=None`, returns a new array containing the variance; otherwise, a reference to the output array is returned. 
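For the masked version, masked entries are excluded from the calculation; a small sketch (with arbitrary values) that also shows `ddof`:

```python
import numpy as np

# the masked value 100.0 does not contribute to the statistics
a = np.ma.array([1.0, 2.0, 3.0, 100.0], mask=[False, False, False, True])
v0 = a.var()        # ddof=0: divide by N   -> variance of [1, 2, 3] is 2/3
v1 = a.var(ddof=1)  # ddof=1: divide by N-1 -> 1.0 (Bessel's correction)
print(v0, v1)
```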
See also [`std`](numpy.std#numpy.std "numpy.std"), [`mean`](numpy.mean#numpy.mean "numpy.mean"), [`nanmean`](numpy.nanmean#numpy.nanmean "numpy.nanmean"), [`nanstd`](numpy.nanstd#numpy.nanstd "numpy.nanstd"), [`nanvar`](numpy.nanvar#numpy.nanvar "numpy.nanvar") [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) #### Notes There are several common variants of the array variance calculation. Assuming the input `a` is a one-dimensional NumPy array and `mean` is either provided as an argument or computed as `a.mean()`, NumPy computes the variance of an array as: N = len(a) d2 = abs(a - mean)**2 # abs is for complex `a` var = d2.sum() / (N - ddof) # note use of `ddof` Different values of the argument `ddof` are useful in different contexts. NumPy’s default `ddof=0` corresponds with the expression: \\[\frac{\sum_i{|a_i - \bar{a}|^2 }}{N}\\] which is sometimes called the “population variance” in the field of statistics because it applies the definition of variance to `a` as if `a` were a complete population of possible observations. Many other libraries define the variance of an array differently, e.g.: \\[\frac{\sum_i{|a_i - \bar{a}|^2}}{N - 1}\\] In statistics, the resulting quantity is sometimes called the “sample variance” because if `a` is a random sample from a larger population, this calculation provides an unbiased estimate of the variance of the population. The use of \\(N-1\\) in the denominator is often called “Bessel’s correction” because it corrects for bias (toward lower values) in the variance estimate introduced when the sample mean of `a` is used in place of the true mean of the population. For this quantity, use `ddof=1`. Note that for complex numbers, the absolute value is taken before squaring, so that the result is always real and nonnegative. For floating-point input, the variance is computed using the same precision the input has. 
Depending on the input data, this can cause the results to be inaccurate, especially for [`float32`](../arrays.scalars#numpy.float32 "numpy.float32") (see example below). Specifying a higher-accuracy accumulator using the `dtype` keyword can alleviate this issue. #### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> np.var(a) 1.25 >>> np.var(a, axis=0) array([1., 1.]) >>> np.var(a, axis=1) array([0.25, 0.25]) In single precision, var() can be inaccurate: >>> a = np.zeros((2, 512*512), dtype=np.float32) >>> a[0, :] = 1.0 >>> a[1, :] = 0.1 >>> np.var(a) np.float32(0.20250003) Computing the variance in float64 is more accurate: >>> np.var(a, dtype=np.float64) 0.20249999932944759 # may vary >>> ((1-0.55)**2 + (0.1-0.55)**2)/2 0.2025 Specifying a where argument: >>> a = np.array([[14, 8, 11, 10], [7, 9, 10, 11], [10, 15, 5, 10]]) >>> np.var(a) 6.833333333333333 # may vary >>> np.var(a, where=[[True], [True], [False]]) 4.0 Using the mean keyword to save computation time: >>> import numpy as np >>> from timeit import timeit >>> >>> a = np.array([[14, 8, 11, 10], [7, 9, 10, 11], [10, 15, 5, 10]]) >>> mean = np.mean(a, axis=1, keepdims=True) >>> >>> g = globals() >>> n = 10000 >>> t1 = timeit("var = np.var(a, axis=1, mean=mean)", globals=g, number=n) >>> t2 = timeit("var = np.var(a, axis=1)", globals=g, number=n) >>> print(f'Percentage execution time saved {100*(t2-t1)/t2:.0f}%') Percentage execution time saved 32% # numpy.ma.vstack ma.vstack _= _ Stack arrays in sequence vertically (row wise). This is equivalent to concatenation along the first axis after 1-D arrays of shape `(N,)` have been reshaped to `(1,N)`. Rebuilds arrays divided by [`vsplit`](numpy.vsplit#numpy.vsplit "numpy.vsplit"). This function makes most sense for arrays with up to 3 dimensions. For instance, for pixel-data with a height (first axis), width (second axis), and r/g/b channels (third axis). 
The functions [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate"), [`stack`](numpy.stack#numpy.stack "numpy.stack") and [`block`](numpy.block#numpy.block "numpy.block") provide more general stacking and concatenation operations. Parameters: **tup** sequence of ndarrays The arrays must have the same shape along all but the first axis. 1-D arrays must have the same length. In the case of a single array_like input, it will be treated as a sequence of arrays; i.e., each element along the zeroth axis is treated as a separate array. **dtype** str or dtype If provided, the destination array will have this dtype. Cannot be provided together with `out`. New in version 1.24. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur. Defaults to ‘same_kind’. New in version 1.24. Returns: **stacked** ndarray The array formed by stacking the given arrays, will be at least 2-D. See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`stack`](numpy.stack#numpy.stack "numpy.stack") Join a sequence of arrays along a new axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble an nd-array from nested lists of blocks. [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack") Stack arrays in sequence horizontally (column wise). [`dstack`](numpy.dstack#numpy.dstack "numpy.dstack") Stack arrays in sequence depth wise (along third axis). [`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. [`vsplit`](numpy.vsplit#numpy.vsplit "numpy.vsplit") Split an array into multiple sub-arrays vertically (row-wise). [`unstack`](numpy.unstack#numpy.unstack "numpy.unstack") Split an array into a tuple of sub-arrays along an axis. #### Notes The function is applied to both the _data and the _mask, if any. 
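The note that the function applies to both the data and the mask can be seen directly in a small sketch:

```python
import numpy as np

a = np.ma.array([1, 2, 3], mask=[False, True, False])
b = np.ma.array([4, 5, 6], mask=[True, False, False])
s = np.ma.vstack((a, b))   # stacks the data and the masks together
print(s.shape)
print(s.mask)
```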
#### Examples >>> import numpy as np >>> a = np.array([1, 2, 3]) >>> b = np.array([4, 5, 6]) >>> np.vstack((a,b)) array([[1, 2, 3], [4, 5, 6]]) >>> a = np.array([[1], [2], [3]]) >>> b = np.array([[4], [5], [6]]) >>> np.vstack((a,b)) array([[1], [2], [3], [4], [5], [6]]) # numpy.ma.where ma.where(_condition_ , _x=<no value>_ , _y=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/core.py#L7960-L8049) Return a masked array with elements from `x` or `y`, depending on condition. Note When only `condition` is provided, this function is identical to [`nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero"). The rest of this documentation covers only the case where all three arguments are provided. Parameters: **condition** array_like, bool Where True, yield `x`, otherwise yield `y`. **x, y** array_like, optional Values from which to choose. `x`, `y` and `condition` need to be broadcastable to some shape. Returns: **out** MaskedArray A masked array with [`masked`](../maskedarray.baseclass#numpy.ma.masked "numpy.ma.masked") elements where the condition is masked, elements from `x` where `condition` is True, and elements from `y` elsewhere. See also [`numpy.where`](numpy.where#numpy.where "numpy.where") Equivalent function in the top-level NumPy module. [`nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") The function that is called when x and y are omitted #### Examples >>> import numpy as np >>> x = np.ma.array(np.arange(9.).reshape(3, 3), mask=[[0, 1, 0], ... [1, 0, 1], ... 
[0, 1, 0]]) >>> x masked_array( data=[[0.0, --, 2.0], [--, 4.0, --], [6.0, --, 8.0]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=1e+20) >>> np.ma.where(x > 5, x, -3.1416) masked_array( data=[[-3.1416, --, -3.1416], [--, -3.1416, --], [6.0, --, 8.0]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=1e+20) # numpy.ma.zeros ma.zeros(_shape_ , _dtype =float_, _order ='C'_, _*_ , _like =None_)_= _ Return a new array of given shape and type, filled with zeros. Parameters: **shape** int or tuple of ints Shape of the new array, e.g., `(2, 3)` or `2`. **dtype** data-type, optional The desired data-type for the array, e.g., [`numpy.int8`](../arrays.scalars#numpy.int8 "numpy.int8"). Default is [`numpy.float64`](../arrays.scalars#numpy.float64 "numpy.float64"). **order**{‘C’, ‘F’}, optional, default: ‘C’ Whether to store multi-dimensional data in row-major (C-style) or column-major (Fortran-style) order in memory. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: **out** MaskedArray Array of zeros with the given shape, dtype, and order. See also [`zeros_like`](numpy.zeros_like#numpy.zeros_like "numpy.zeros_like") Return an array of zeros with shape and type of input. [`empty`](numpy.empty#numpy.empty "numpy.empty") Return a new uninitialized array. [`ones`](numpy.ones#numpy.ones "numpy.ones") Return a new array setting values to one. [`full`](numpy.full#numpy.full "numpy.full") Return a new array of given shape filled with value. 
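Unlike `numpy.zeros`, the masked variant returns a `MaskedArray` (with nothing masked initially); a quick sketch:

```python
import numpy as np

z = np.ma.zeros((2, 3))
print(type(z).__name__)  # MaskedArray
print(z.mask)            # no entries are masked in a fresh array
```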
#### Examples >>> import numpy as np >>> np.zeros(5) array([ 0., 0., 0., 0., 0.]) >>> np.zeros((5,), dtype=int) array([0, 0, 0, 0, 0]) >>> np.zeros((2, 1)) array([[ 0.], [ 0.]]) >>> s = (2,2) >>> np.zeros(s) array([[ 0., 0.], [ 0., 0.]]) >>> np.zeros((2,), dtype=[('x', 'i4'), ('y', 'i4')]) # custom dtype array([(0, 0), (0, 0)], dtype=[('x', '<i4'), ('y', '<i4')]) # numpy.ma.zeros_like ma.zeros_like(_a_ , _dtype =None_, _order ='K'_, _subok =True_, _shape =None_, _device =None_) Return an array of zeros with the same shape and type as a given array. Parameters: **a** array_like The shape and data-type of `a` define these same attributes of the returned array. **dtype** data-type, optional Overrides the data type of the result. **order**{‘C’, ‘F’, ‘A’, or ‘K’}, optional Overrides the memory layout of the result. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. **subok** bool, optional. If True, then the newly created array will use the sub-class type of `a`, otherwise it will be a base-class array. Defaults to True. **shape** int or sequence of ints, optional. Overrides the shape of the result. If order=’K’ and the number of dimensions is unchanged, will try to keep order, otherwise, order=’C’ is implied. **device** str, optional The device on which to place the created array. Default: None. For Array-API interoperability only, so must be `"cpu"` if passed. New in version 2.0.0. Returns: **out** MaskedArray Array of zeros with the same shape and type as `a`. See also [`empty_like`](numpy.empty_like#numpy.empty_like "numpy.empty_like") Return an empty array with shape and type of input. [`ones_like`](numpy.ones_like#numpy.ones_like "numpy.ones_like") Return an array of ones with shape and type of input. [`full_like`](numpy.full_like#numpy.full_like "numpy.full_like") Return a new array with shape of input filled with value. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Return a new array setting values to zero. 
#### Examples >>> import numpy as np >>> x = np.arange(6) >>> x = x.reshape((2, 3)) >>> x array([[0, 1, 2], [3, 4, 5]]) >>> np.zeros_like(x) array([[0, 0, 0], [0, 0, 0]]) >>> y = np.arange(3, dtype=float) >>> y array([0., 1., 2.]) >>> np.zeros_like(y) array([0., 0., 0.]) # numpy.mask_indices numpy.mask_indices(_n_ , _mask_func_ , _k =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_twodim_base_impl.py#L825-L891) Return the indices to access (n, n) arrays, given a masking function. Assume `mask_func` is a function that, for a square array a of size `(n, n)` with a possible offset argument `k`, when called as `mask_func(a, k)` returns a new array with zeros in certain locations (functions like [`triu`](numpy.triu#numpy.triu "numpy.triu") or [`tril`](numpy.tril#numpy.tril "numpy.tril") do precisely this). Then this function returns the indices where the non-zero values would be located. Parameters: **n** int The returned indices will be valid to access arrays of shape (n, n). **mask_func** callable A function whose call signature is similar to that of [`triu`](numpy.triu#numpy.triu "numpy.triu"), [`tril`](numpy.tril#numpy.tril "numpy.tril"). That is, `mask_func(x, k)` returns a boolean array, shaped like `x`. `k` is an optional argument to the function. **k** scalar An optional argument which is passed through to `mask_func`. Functions like [`triu`](numpy.triu#numpy.triu "numpy.triu"), [`tril`](numpy.tril#numpy.tril "numpy.tril") take a second argument that is interpreted as an offset. Returns: **indices** tuple of arrays. The `n` arrays of indices corresponding to the locations where `mask_func(np.ones((n, n)), k)` is True. 
See also [`triu`](numpy.triu#numpy.triu "numpy.triu"), [`tril`](numpy.tril#numpy.tril "numpy.tril"), [`triu_indices`](numpy.triu_indices#numpy.triu_indices "numpy.triu_indices"), [`tril_indices`](numpy.tril_indices#numpy.tril_indices "numpy.tril_indices") #### Examples >>> import numpy as np These are the indices that would allow you to access the upper triangular part of any 3x3 array: >>> iu = np.mask_indices(3, np.triu) For example, if `a` is a 3x3 array: >>> a = np.arange(9).reshape(3, 3) >>> a array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> a[iu] array([0, 1, 2, 4, 5, 8]) An offset can be passed also to the masking function. This gets us the indices starting on the first diagonal right of the main one: >>> iu1 = np.mask_indices(3, np.triu, 1) with which we now extract only three elements: >>> a[iu1] array([1, 2, 5]) # numpy.matlib.empty matlib.empty(_shape_ , _dtype =None_, _order ='C'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matlib.py#L24-L63) Return a new matrix of given shape and type, without initializing entries. Parameters: **shape** int or tuple of int Shape of the empty matrix. **dtype** data-type, optional Desired output data-type. **order**{‘C’, ‘F’}, optional Whether to store multi-dimensional data in row-major (C-style) or column-major (Fortran-style) order in memory. See also [`numpy.empty`](numpy.empty#numpy.empty "numpy.empty") Equivalent array function. [`matlib.zeros`](numpy.matlib.zeros#numpy.matlib.zeros "numpy.matlib.zeros") Return a matrix of zeros. [`matlib.ones`](numpy.matlib.ones#numpy.matlib.ones "numpy.matlib.ones") Return a matrix of ones. #### Notes Unlike other matrix creation functions (e.g. [`matlib.zeros`](numpy.matlib.zeros#numpy.matlib.zeros "numpy.matlib.zeros"), [`matlib.ones`](numpy.matlib.ones#numpy.matlib.ones "numpy.matlib.ones")), `matlib.empty` does not initialize the values of the matrix, and may therefore be marginally faster. However, the values stored in the newly allocated matrix are arbitrary. 
For reproducible behavior, be sure to set each element of the matrix before reading. #### Examples >>> import numpy.matlib >>> np.matlib.empty((2, 2)) # filled with random data matrix([[ 6.76425276e-320, 9.79033856e-307], # random [ 7.39337286e-309, 3.22135945e-309]]) >>> np.matlib.empty((2, 2), dtype=int) matrix([[ 6600475, 0], # random [ 6586976, 22740995]]) # numpy.matlib.eye matlib.eye(_n_ , _M=None_ , _k=0_ , _dtype=float_ , _order='C'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matlib.py#L190-L230) Return a matrix with ones on the diagonal and zeros elsewhere. Parameters: **n** int Number of rows in the output. **M** int, optional Number of columns in the output, defaults to `n`. **k** int, optional Index of the diagonal: 0 refers to the main diagonal, a positive value refers to an upper diagonal, and a negative value to a lower diagonal. **dtype** dtype, optional Data-type of the returned matrix. **order**{‘C’, ‘F’}, optional Whether the output should be stored in row-major (C-style) or column-major (Fortran-style) order in memory. Returns: **I** matrix An `n` x `M` matrix where all elements are equal to zero, except for the `k`-th diagonal, whose values are equal to one. See also [`numpy.eye`](numpy.eye#numpy.eye "numpy.eye") Equivalent array function. [`identity`](numpy.identity#numpy.identity "numpy.identity") Square identity matrix. #### Examples >>> import numpy.matlib >>> np.matlib.eye(3, k=1, dtype=float) matrix([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]]) # numpy.matlib.identity matlib.identity(_n_ , _dtype =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matlib.py#L154-L188) Returns the square identity matrix of given size. Parameters: **n** int Size of the returned identity matrix. **dtype** data-type, optional Data-type of the output. Defaults to `float`. Returns: **out** matrix `n` x `n` matrix with its main diagonal set to one, and all other elements zero. 
See also [`numpy.identity`](numpy.identity#numpy.identity "numpy.identity") Equivalent array function. [`matlib.eye`](numpy.matlib.eye#numpy.matlib.eye "numpy.matlib.eye") More general matrix identity function. #### Examples >>> import numpy.matlib >>> np.matlib.identity(3, dtype=int) matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) # numpy.matlib.ones matlib.ones(_shape_ , _dtype =None_, _order ='C'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matlib.py#L65-L108) Matrix of ones. Return a matrix of given shape and type, filled with ones. Parameters: **shape**{sequence of ints, int} Shape of the matrix **dtype** data-type, optional The desired data-type for the matrix, default is np.float64. **order**{‘C’, ‘F’}, optional Whether to store matrix in C- or Fortran-contiguous order, default is ‘C’. Returns: **out** matrix Matrix of ones of given shape, dtype, and order. See also [`ones`](numpy.ones#numpy.ones "numpy.ones") Array of ones. [`matlib.zeros`](numpy.matlib.zeros#numpy.matlib.zeros "numpy.matlib.zeros") Zero matrix. #### Notes If [`shape`](numpy.shape#numpy.shape "numpy.shape") has length one i.e. `(N,)`, or is a scalar `N`, `out` becomes a single row matrix of shape `(1,N)`. #### Examples >>> np.matlib.ones((2,3)) matrix([[1., 1., 1.], [1., 1., 1.]]) >>> np.matlib.ones(2) matrix([[1., 1.]]) # numpy.matlib.rand matlib.rand(_* args_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matlib.py#L232-L276) Return a matrix of random values with given shape. Create a matrix of the given shape and propagate it with random samples from a uniform distribution over `[0, 1)`. Parameters: ***args** Arguments Shape of the output. If given as N integers, each integer specifies the size of one dimension. If given as a tuple, this tuple gives the complete shape. Returns: **out** ndarray The matrix of random values with shape given by `*args`. 
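A minimal sketch of the two call styles (assuming the legacy `numpy.matlib` module is available):

```python
import numpy as np
import numpy.matlib

m1 = np.matlib.rand(2, 3)    # N integers: one size per dimension
m2 = np.matlib.rand((2, 3))  # a tuple gives the complete shape
print(m1.shape, m2.shape)    # both are np.matrix instances
```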
See also [`randn`](numpy.matlib.randn#numpy.matlib.randn "numpy.matlib.randn"), [`numpy.random.RandomState.rand`](../random/generated/numpy.random.randomstate.rand#numpy.random.RandomState.rand "numpy.random.RandomState.rand") #### Examples >>> np.random.seed(123) >>> import numpy.matlib >>> np.matlib.rand(2, 3) matrix([[0.69646919, 0.28613933, 0.22685145], [0.55131477, 0.71946897, 0.42310646]]) >>> np.matlib.rand((2, 3)) matrix([[0.9807642 , 0.68482974, 0.4809319 ], [0.39211752, 0.34317802, 0.72904971]]) If the first argument is a tuple, other arguments are ignored: >>> np.matlib.rand((2, 3), 4) matrix([[0.43857224, 0.0596779 , 0.39804426], [0.73799541, 0.18249173, 0.17545176]]) # numpy.matlib.randn matlib.randn(_* args_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matlib.py#L278-L329) Return a random matrix with data from the “standard normal” distribution. `randn` generates a matrix filled with random floats sampled from a univariate “normal” (Gaussian) distribution of mean 0 and variance 1. Parameters: ***args** Arguments Shape of the output. If given as N integers, each integer specifies the size of one dimension. If given as a tuple, this tuple gives the complete shape. Returns: **Z** matrix of floats A matrix of floating-point samples drawn from the standard normal distribution. See also [`rand`](numpy.matlib.rand#numpy.matlib.rand "numpy.matlib.rand"), [`numpy.random.RandomState.randn`](../random/generated/numpy.random.randomstate.randn#numpy.random.RandomState.randn "numpy.random.RandomState.randn") #### Notes For random samples from the normal distribution with mean `mu` and standard deviation `sigma`, use: sigma * np.matlib.randn(...) 
+ mu #### Examples >>> np.random.seed(123) >>> import numpy.matlib >>> np.matlib.randn(1) matrix([[-1.0856306]]) >>> np.matlib.randn(1, 2, 3) matrix([[ 0.99734545, 0.2829785 , -1.50629471], [-0.57860025, 1.65143654, -2.42667924]]) Two-by-four matrix of samples from the normal distribution with mean 3 and standard deviation 2.5: >>> 2.5 * np.matlib.randn((2, 4)) + 3 matrix([[1.92771843, 6.16484065, 0.83314899, 1.30278462], [2.76322758, 6.72847407, 1.40274501, 1.8900451 ]]) # numpy.matlib.repmat matlib.repmat(_a_ , _m_ , _n_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matlib.py#L331-L379) Repeat a 0-D to 2-D array or matrix MxN times. Parameters: **a** array_like The array or matrix to be repeated. **m, n** int The number of times `a` is repeated along the first and second axes. Returns: **out** ndarray The result of repeating `a`. #### Examples >>> import numpy.matlib >>> a0 = np.array(1) >>> np.matlib.repmat(a0, 2, 3) array([[1, 1, 1], [1, 1, 1]]) >>> a1 = np.arange(4) >>> np.matlib.repmat(a1, 2, 2) array([[0, 1, 2, 3, 0, 1, 2, 3], [0, 1, 2, 3, 0, 1, 2, 3]]) >>> a2 = np.asmatrix(np.arange(6).reshape(2, 3)) >>> np.matlib.repmat(a2, 2, 3) matrix([[0, 1, 2, 0, 1, 2, 0, 1, 2], [3, 4, 5, 3, 4, 5, 3, 4, 5], [0, 1, 2, 0, 1, 2, 0, 1, 2], [3, 4, 5, 3, 4, 5, 3, 4, 5]]) # numpy.matlib.zeros matlib.zeros(_shape_ , _dtype =None_, _order ='C'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matlib.py#L110-L152) Return a matrix of given shape and type, filled with zeros. Parameters: **shape** int or sequence of ints Shape of the matrix **dtype** data-type, optional The desired data-type for the matrix, default is float. **order**{‘C’, ‘F’}, optional Whether to store the result in C- or Fortran-contiguous order, default is ‘C’. Returns: **out** matrix Zero matrix of given shape, dtype, and order. See also [`numpy.zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Equivalent array function. 
[`matlib.ones`](numpy.matlib.ones#numpy.matlib.ones "numpy.matlib.ones") Return a matrix of ones. #### Notes If [`shape`](numpy.shape#numpy.shape "numpy.shape") has length one i.e. `(N,)`, or is a scalar `N`, `out` becomes a single row matrix of shape `(1,N)`. #### Examples >>> import numpy.matlib >>> np.matlib.zeros((2, 3)) matrix([[0., 0., 0.], [0., 0., 0.]]) >>> np.matlib.zeros(2) matrix([[0., 0.]]) # numpy.matmul numpy.matmul(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_ , _axes_ , _axis_]) = <ufunc 'matmul'> Matrix product of two arrays. Parameters: **x1, x2** array_like Input arrays, scalars not allowed. **out** ndarray, optional A location into which the result is stored. If provided, it must have a shape that matches the signature `(n,k),(k,m)->(n,m)`. If not provided or None, a freshly-allocated array is returned. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray The matrix product of the inputs. This is a scalar only when both x1, x2 are 1-d vectors. Raises: ValueError If the last dimension of `x1` is not the same size as the second-to-last dimension of `x2`. If a scalar value is passed in. See also [`vecdot`](numpy.vecdot#numpy.vecdot "numpy.vecdot") Complex-conjugating dot product for stacks of vectors. [`matvec`](numpy.matvec#numpy.matvec "numpy.matvec") Matrix-vector product for stacks of matrices and vectors. [`vecmat`](numpy.vecmat#numpy.vecmat "numpy.vecmat") Vector-matrix product for stacks of vectors and matrices. [`tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot") Sum products over arbitrary axes. [`einsum`](numpy.einsum#numpy.einsum "numpy.einsum") Einstein summation convention. [`dot`](numpy.dot#numpy.dot "numpy.dot") Alternative matrix product with different broadcasting rules. #### Notes The behavior depends on the arguments in the following way. 
* If both arguments are 2-D they are multiplied like conventional matrices. * If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly. * If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed. (For stacks of vectors, use `vecmat`.) * If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed. (For stacks of vectors, use `matvec`.) `matmul` differs from `dot` in two important ways: * Multiplication by scalars is not allowed, use `*` instead. * Stacks of matrices are broadcast together as if the matrices were elements, respecting the signature `(n,k),(k,m)->(n,m)`: >>> a = np.ones([9, 5, 7, 4]) >>> c = np.ones([9, 5, 4, 3]) >>> np.dot(a, c).shape (9, 5, 7, 9, 5, 3) >>> np.matmul(a, c).shape (9, 5, 7, 3) >>> # n is 7, k is 4, m is 3 The matmul function implements the semantics of the `@` operator introduced in Python 3.5 following [**PEP 465**](https://peps.python.org/pep-0465/). It uses an optimized BLAS library when possible (see [`numpy.linalg`](../routines.linalg#module-numpy.linalg "numpy.linalg")). #### Examples For 2-D arrays it is the matrix product: >>> import numpy as np >>> a = np.array([[1, 0], ... [0, 1]]) >>> b = np.array([[4, 1], ... [2, 2]]) >>> np.matmul(a, b) array([[4, 1], [2, 2]]) For 2-D mixed with 1-D, the result is the usual. >>> a = np.array([[1, 0], ... 
[0, 1]]) >>> b = np.array([1, 2]) >>> np.matmul(a, b) array([1, 2]) >>> np.matmul(b, a) array([1, 2]) Broadcasting is conventional for stacks of arrays >>> a = np.arange(2 * 2 * 4).reshape((2, 2, 4)) >>> b = np.arange(2 * 2 * 4).reshape((2, 4, 2)) >>> np.matmul(a,b).shape (2, 2, 2) >>> np.matmul(a, b)[0, 1, 1] 98 >>> sum(a[0, 1, :] * b[0 , :, 1]) 98 Vector, vector returns the scalar inner product, but neither argument is complex-conjugated: >>> np.matmul([2j, 3j], [2j, 3j]) (-13+0j) Scalar multiplication raises an error. >>> np.matmul([1,2], 3) Traceback (most recent call last): ... ValueError: matmul: Input operand 1 does not have enough dimensions ... The `@` operator can be used as a shorthand for `np.matmul` on ndarrays. >>> x1 = np.array([2j, 3j]) >>> x2 = np.array([2j, 3j]) >>> x1 @ x2 (-13+0j) # numpy.matrix.A property _property_ matrix.A Return `self` as an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") object. Equivalent to `np.asarray(self)`. Parameters: **None** Returns: **ret** ndarray `self` as an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") #### Examples >>> x = np.matrix(np.arange(12).reshape((3,4))); x matrix([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.getA() array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) # numpy.matrix.A1 property _property_ matrix.A1 Return `self` as a flattened [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). Equivalent to `np.asarray(x).ravel()` Parameters: **None** Returns: **ret** ndarray `self`, 1-D, as an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") #### Examples >>> x = np.matrix(np.arange(12).reshape((3,4))); x matrix([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.getA1() array([ 0, 1, 2, ..., 9, 10, 11]) # numpy.matrix.all method matrix.all(_axis =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L577-L615) Test whether all matrix elements along a given axis evaluate to True. 
Parameters: **See `numpy.all` for complete descriptions** See also [`numpy.all`](numpy.all#numpy.all "numpy.all") #### Notes This is the same as [`ndarray.all`](numpy.ndarray.all#numpy.ndarray.all "numpy.ndarray.all"), but it returns a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object. #### Examples >>> x = np.matrix(np.arange(12).reshape((3,4))); x matrix([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> y = x[0]; y matrix([[0, 1, 2, 3]]) >>> (x == y) matrix([[ True, True, True, True], [False, False, False, False], [False, False, False, False]]) >>> (x == y).all() False >>> (x == y).all(0) matrix([[False, False, False, False]]) >>> (x == y).all(1) matrix([[ True], [False], [False]]) # numpy.matrix.any method matrix.any(_axis =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L554-L575) Test whether any array element along a given axis evaluates to True. Refer to [`numpy.any`](numpy.any#numpy.any "numpy.any") for full documentation. Parameters: **axis** int, optional Axis along which logical OR is performed **out** ndarray, optional Output to existing array instead of creating new one, must have same shape as expected output Returns: **any** bool, ndarray Returns a single bool if `axis` is `None`; otherwise, returns [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") # numpy.matrix.argmax method matrix.argmax(_axis =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L652-L689) Indexes of the maximum values along an axis. Return the indexes of the first occurrences of the maximum values along the specified axis. If axis is None, the index is for the flattened matrix. 
Parameters: **See `numpy.argmax` for complete descriptions** See also [`numpy.argmax`](numpy.argmax#numpy.argmax "numpy.argmax") #### Notes This is the same as [`ndarray.argmax`](numpy.ndarray.argmax#numpy.ndarray.argmax "numpy.ndarray.argmax"), but returns a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object where [`ndarray.argmax`](numpy.ndarray.argmax#numpy.ndarray.argmax "numpy.ndarray.argmax") would return an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). #### Examples >>> x = np.matrix(np.arange(12).reshape((3,4))); x matrix([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.argmax() 11 >>> x.argmax(0) matrix([[2, 2, 2, 2]]) >>> x.argmax(1) matrix([[3], [3], [3]]) # numpy.matrix.argmin method matrix.argmin(_axis =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L726-L763) Indexes of the minimum values along an axis. Return the indexes of the first occurrences of the minimum values along the specified axis. If axis is None, the index is for the flattened matrix. Parameters: **See `numpy.argmin` for complete descriptions.** See also [`numpy.argmin`](numpy.argmin#numpy.argmin "numpy.argmin") #### Notes This is the same as [`ndarray.argmin`](numpy.ndarray.argmin#numpy.ndarray.argmin "numpy.ndarray.argmin"), but returns a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object where [`ndarray.argmin`](numpy.ndarray.argmin#numpy.ndarray.argmin "numpy.ndarray.argmin") would return an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). #### Examples >>> x = -np.matrix(np.arange(12).reshape((3,4))); x matrix([[ 0, -1, -2, -3], [ -4, -5, -6, -7], [ -8, -9, -10, -11]]) >>> x.argmin() 11 >>> x.argmin(0) matrix([[2, 2, 2, 2]]) >>> x.argmin(1) matrix([[3], [3], [3]]) # numpy.matrix.argpartition method matrix.argpartition(_kth_ , _axis =-1_, _kind ='introselect'_, _order =None_) Returns the indices that would partition this array. 
Refer to [`numpy.argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition") for full documentation. See also [`numpy.argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition") equivalent function # numpy.matrix.argsort method matrix.argsort(_axis =-1_, _kind =None_, _order =None_) Returns the indices that would sort this array. Refer to [`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort") for full documentation. See also [`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort") equivalent function # numpy.matrix.astype method matrix.astype(_dtype_ , _order ='K'_, _casting ='unsafe'_, _subok =True_, _copy =True_) Copy of the array, cast to a specified type. Parameters: **dtype** str or dtype Typecode or data-type to which the array is cast. **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout order of the result. ‘C’ means C order, ‘F’ means Fortran order, ‘A’ means ‘F’ order if all the arrays are Fortran contiguous, ‘C’ order otherwise, and ‘K’ means as close to the order the array elements appear in memory as possible. Default is ‘K’. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur. Defaults to ‘unsafe’ for backwards compatibility. * ‘no’ means the data types should not be cast at all. * ‘equiv’ means only byte-order changes are allowed. * ‘safe’ means only casts which can preserve values are allowed. * ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed. * ‘unsafe’ means any data conversions may be done. **subok** bool, optional If True, then sub-classes will be passed-through (default), otherwise the returned array will be forced to be a base-class array. **copy** bool, optional By default, astype always returns a newly allocated array. 
If this is set to false, and the [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), `order`, and `subok` requirements are satisfied, the input array is returned instead of a copy. Returns: **arr_t** ndarray Unless [`copy`](numpy.copy#numpy.copy "numpy.copy") is False and the other conditions for returning the input array are satisfied (see description for [`copy`](numpy.copy#numpy.copy "numpy.copy") input parameter), `arr_t` is a new array of the same shape as the input array, with dtype, order given by [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), `order`. Raises: ComplexWarning When casting from complex to float or int. To avoid this, one should use `a.real.astype(t)`. #### Examples >>> import numpy as np >>> x = np.array([1, 2, 2.5]) >>> x array([1. , 2. , 2.5]) >>> x.astype(int) array([1, 2, 2]) # numpy.matrix.base attribute matrix.base Base object if memory is from some other object. #### Examples The base of an array that owns its memory is None: >>> import numpy as np >>> x = np.array([1,2,3,4]) >>> x.base is None True Slicing creates a view, whose memory is shared with x: >>> y = x[2:] >>> y.base is x True # numpy.matrix.byteswap method matrix.byteswap(_inplace =False_) Swap the bytes of the array elements. Toggle between little-endian and big-endian data representation by returning a byteswapped array, optionally swapped in-place. Arrays of byte-strings are not swapped. The real and imaginary parts of a complex number are swapped individually. Parameters: **inplace** bool, optional If `True`, swap bytes in-place, default is `False`. Returns: **out** ndarray The byteswapped array. If `inplace` is `True`, this is a view to self. 
#### Examples >>> import numpy as np >>> A = np.array([1, 256, 8755], dtype=np.int16) >>> list(map(hex, A)) ['0x1', '0x100', '0x2233'] >>> A.byteswap(inplace=True) array([ 256, 1, 13090], dtype=int16) >>> list(map(hex, A)) ['0x100', '0x1', '0x3322'] Arrays of byte-strings are not swapped >>> A = np.array([b'ceg', b'fac']) >>> A.byteswap() array([b'ceg', b'fac'], dtype='|S3') `A.view(A.dtype.newbyteorder()).byteswap()` produces an array with the same values but different representation in memory >>> A = np.array([1, 2, 3],dtype=np.int64) >>> A.view(np.uint8) array([1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0], dtype=uint8) >>> A.view(A.dtype.newbyteorder()).byteswap(inplace=True) array([1, 2, 3], dtype='>i8') >>> A.view(np.uint8) array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3], dtype=uint8) # numpy.matrix.choose method matrix.choose(_choices_ , _out =None_, _mode ='raise'_) Use an index array to construct a new array from a set of choices. Refer to [`numpy.choose`](numpy.choose#numpy.choose "numpy.choose") for full documentation. See also [`numpy.choose`](numpy.choose#numpy.choose "numpy.choose") equivalent function # numpy.matrix.clip method matrix.clip(_min =None_, _max =None_, _out =None_, _** kwargs_) Return an array whose values are limited to `[min, max]`. One of max or min must be given. Refer to [`numpy.clip`](numpy.clip#numpy.clip "numpy.clip") for full documentation. See also [`numpy.clip`](numpy.clip#numpy.clip "numpy.clip") equivalent function # numpy.matrix.compress method matrix.compress(_condition_ , _axis =None_, _out =None_) Return selected slices of this array along given axis. Refer to [`numpy.compress`](numpy.compress#numpy.compress "numpy.compress") for full documentation. See also [`numpy.compress`](numpy.compress#numpy.compress "numpy.compress") equivalent function # numpy.matrix.conj method matrix.conj() Complex-conjugate all elements. 
Refer to [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") for full documentation. See also [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") equivalent function # numpy.matrix.conjugate method matrix.conjugate() Return the complex conjugate, element-wise. Refer to [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") for full documentation. See also [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") equivalent function # numpy.matrix.copy method matrix.copy(_order ='C'_) Return a copy of the array. Parameters: **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout of the copy. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. (Note that this function and [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") are very similar but have different default values for their order= arguments, and this function always passes sub-classes through.) See also [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") Similar function with different default behavior [`numpy.copyto`](numpy.copyto#numpy.copyto "numpy.copyto") #### Notes This function is the preferred method for creating an array copy. The function [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") is similar, but it defaults to using order ‘K’, and will not pass sub-classes through by default. #### Examples >>> import numpy as np >>> x = np.array([[1,2,3],[4,5,6]], order='F') >>> y = x.copy() >>> x.fill(0) >>> x array([[0, 0, 0], [0, 0, 0]]) >>> y array([[1, 2, 3], [4, 5, 6]]) >>> y.flags['C_CONTIGUOUS'] True For arrays containing Python objects (e.g. dtype=object), the copy is a shallow one. 
The new array will contain the same object which may lead to surprises if that object can be modified (is mutable): >>> a = np.array([1, 'm', [2, 3, 4]], dtype=object) >>> b = a.copy() >>> b[2][0] = 10 >>> a array([1, 'm', list([10, 3, 4])], dtype=object) To ensure all elements within an `object` array are copied, use [`copy.deepcopy`](https://docs.python.org/3/library/copy.html#copy.deepcopy "\(in Python v3.13\)"): >>> import copy >>> a = np.array([1, 'm', [2, 3, 4]], dtype=object) >>> c = copy.deepcopy(a) >>> c[2][0] = 10 >>> c array([1, 'm', list([10, 3, 4])], dtype=object) >>> a array([1, 'm', list([2, 3, 4])], dtype=object) # numpy.matrix.ctypes attribute matrix.ctypes An object to simplify the interaction of the array with the ctypes module. This attribute creates an object that makes it easier to use arrays when calling shared libraries with the ctypes module. The returned object has, among others, data, shape, and strides attributes (see Notes below) which themselves return ctypes objects that can be used as arguments to a shared library. Parameters: **None** Returns: **c** Python object Possessing attributes data, shape, strides, etc. See also [`numpy.ctypeslib`](../routines.ctypeslib#module-numpy.ctypeslib "numpy.ctypeslib") #### Notes Below are the public attributes of this object which were documented in “Guide to NumPy” (we have omitted undocumented public attributes, as well as documented private attributes): _ctypes.data A pointer to the memory area of the array as a Python integer. This memory area may contain data that is not aligned, or not in correct byte-order. The memory area may not even be writeable. The array flags and data-type of this array should be respected when passing this attribute to arbitrary C-code to avoid trouble that can include Python crashing. User Beware! The value of this attribute is exactly the same as: `self.__array_interface__['data'][0]`. 
Note that unlike `data_as`, a reference won’t be kept to the array: code like `ctypes.c_void_p((a + b).ctypes.data)` will result in a pointer to a deallocated array, and should be spelt `(a + b).ctypes.data_as(ctypes.c_void_p)` _ctypes.shape (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the C-integer corresponding to `dtype('p')` on this platform (see [`c_intp`](../routines.ctypeslib#numpy.ctypeslib.c_intp "numpy.ctypeslib.c_intp")). This base-type could be [`ctypes.c_int`](https://docs.python.org/3/library/ctypes.html#ctypes.c_int "\(in Python v3.13\)"), [`ctypes.c_long`](https://docs.python.org/3/library/ctypes.html#ctypes.c_long "\(in Python v3.13\)"), or [`ctypes.c_longlong`](https://docs.python.org/3/library/ctypes.html#ctypes.c_longlong "\(in Python v3.13\)") depending on the platform. The ctypes array contains the shape of the underlying array. _ctypes.strides (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the same as for the shape attribute. This ctypes array contains the strides information from the underlying array. This strides information is important for showing how many bytes must be jumped to get to the next element in the array. _ctypes.data_as(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_internal.py#L279-L296) Return the data pointer cast to a particular c-types object. For example, calling `self._as_parameter_` is equivalent to `self.data_as(ctypes.c_void_p)`. Perhaps you want to use the data as a pointer to a ctypes array of floating-point data: `self.data_as(ctypes.POINTER(ctypes.c_double))`. The returned pointer will keep a reference to the array. _ctypes.shape_as(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_internal.py#L298-L305) Return the shape tuple as an array of some other c-types type. For example: `self.shape_as(ctypes.c_short)`. 
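The pointer and shape helpers described above can be exercised without any external shared library; a minimal sketch reading array memory back through a ctypes pointer:

```python
import ctypes

import numpy as np

x = np.array([1.0, 2.0, 3.0])

# Cast the data pointer to a double pointer; the returned object keeps
# a reference to x, so the memory stays alive while ptr is in use.
ptr = x.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
print(ptr[0], ptr[2])  # reads 1.0 and 3.0 straight from the array buffer

# The shape as a ctypes array of a chosen integer type.
print(list(x.ctypes.shape_as(ctypes.c_longlong)))  # [3]
```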
_ctypes.strides_as(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_internal.py#L307-L314) Return the strides tuple as an array of some other c-types type. For example: `self.strides_as(ctypes.c_longlong)`. If the ctypes module is not available, then the ctypes attribute of array objects still returns something useful, but ctypes objects are not returned and errors may be raised instead. In particular, the object will still have the `_as_parameter_` attribute which will return an integer equal to the data attribute. #### Examples >>> import numpy as np >>> import ctypes >>> x = np.array([[0, 1], [2, 3]], dtype=np.int32) >>> x array([[0, 1], [2, 3]], dtype=int32) >>> x.ctypes.data 31962608 # may vary >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)) <__main__.LP_c_uint object at 0x7ff2fc1fc200> # may vary >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)).contents c_uint(0) >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint64)).contents c_ulong(4294967296) >>> x.ctypes.shape # may vary >>> x.ctypes.strides # may vary # numpy.matrix.cumprod method matrix.cumprod(_axis =None_, _dtype =None_, _out =None_) Return the cumulative product of the elements along the given axis. Refer to [`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod") for full documentation. See also [`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod") equivalent function # numpy.matrix.cumsum method matrix.cumsum(_axis =None_, _dtype =None_, _out =None_) Return the cumulative sum of the elements along the given axis. Refer to [`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") for full documentation. See also [`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") equivalent function # numpy.matrix.data attribute matrix.data Python buffer object pointing to the start of the array’s data. # numpy.matrix.diagonal method matrix.diagonal(_offset =0_, _axis1 =0_, _axis2 =1_) Return specified diagonals. 
In NumPy 1.9 the returned array is a read-only view instead of a copy as in previous NumPy versions. In a future version the read-only restriction will be removed. Refer to [`numpy.diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal") for full documentation. See also [`numpy.diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal") equivalent function # numpy.matrix.dump method matrix.dump(_file_) Dump a pickle of the array to the specified file. The array can be read back with pickle.load or numpy.load. Parameters: **file** str or Path A string naming the dump file. # numpy.matrix.dumps method matrix.dumps() Returns the pickle of the array as a string. pickle.loads will convert the string back to an array. Parameters: **None** # numpy.matrix.fill method matrix.fill(_value_) Fill the array with a scalar value. Parameters: **value** scalar All elements of `a` will be assigned this value. #### Examples >>> import numpy as np >>> a = np.array([1, 2]) >>> a.fill(0) >>> a array([0, 0]) >>> a = np.empty(2) >>> a.fill(1) >>> a array([1., 1.]) Fill expects a scalar value and always behaves the same as assigning to a single array element. The following is a rare example where this distinction is important: >>> a = np.array([None, None], dtype=object) >>> a[0] = np.array(3) >>> a array([array(3), None], dtype=object) >>> a.fill(np.array(3)) >>> a array([array(3), array(3)], dtype=object) Where other forms of assignments will unpack the array being assigned: >>> a[...] = np.array(3) >>> a array([3, 3], dtype=object) # numpy.matrix.flags attribute matrix.flags Information about the memory layout of the array. #### Notes The `flags` object can be accessed dictionary-like (as in `a.flags['WRITEABLE']`), or by using lowercased attribute names (as in `a.flags.writeable`). Short flag names are only supported in dictionary access. 
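The two access styles for the flags object can be illustrated with a short sketch (dictionary access also accepts the short flag names listed under Attributes below):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# Dictionary-style access; both the full and the short name work here.
print(a.flags['C_CONTIGUOUS'])  # True
print(a.flags['C'])             # True (short name, dictionary access only)

# Lowercased attribute access; full names only.
print(a.flags.writeable)        # True

# One of the three user-settable flags:
a.flags.writeable = False
# a[0, 0] = 1 would now raise ValueError: assignment destination is read-only
```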
Only the WRITEBACKIFCOPY, WRITEABLE, and ALIGNED flags can be changed by the user, via direct assignment to the attribute or dictionary entry, or by calling [`ndarray.setflags`](numpy.ndarray.setflags#numpy.ndarray.setflags "numpy.ndarray.setflags"). The array flags cannot be set arbitrarily: * WRITEBACKIFCOPY can only be set `False`. * ALIGNED can only be set `True` if the data is truly aligned. * WRITEABLE can only be set `True` if the array owns its own memory or the ultimate owner of the memory exposes a writeable buffer interface or is a string. Arrays can be both C-style and Fortran-style contiguous simultaneously. This is clear for 1-dimensional arrays, but can also be true for higher dimensional arrays. Even for contiguous arrays a stride for a given dimension `arr.strides[dim]` may be _arbitrary_ if `arr.shape[dim] == 1` or the array has no elements. It does _not_ generally hold that `self.strides[-1] == self.itemsize` for C-style contiguous arrays or `self.strides[0] == self.itemsize` for Fortran-style contiguous arrays is true. Attributes: **C_CONTIGUOUS (C)** The data is in a single, C-style contiguous segment. **F_CONTIGUOUS (F)** The data is in a single, Fortran-style contiguous segment. **OWNDATA (O)** The array owns the memory it uses or borrows it from another object. **WRITEABLE (W)** The data area can be written to. Setting this to False locks the data, making it read-only. A view (slice, etc.) inherits WRITEABLE from its base array at creation time, but a view of a writeable array may be subsequently locked while the base array remains writeable. (The opposite is not true, in that a view of a locked array may not be made writeable. However, currently, locking a base object does not lock any views that already reference it, so under that circumstance it is possible to alter the contents of a locked array via a previously created writeable view onto it.) Attempting to change a non- writeable array raises a RuntimeError exception. 
**ALIGNED (A)** The data and all elements are aligned appropriately for the hardware. **WRITEBACKIFCOPY (X)** This array is a copy of some other array. The C-API function PyArray_ResolveWritebackIfCopy must be called before deallocating this array, so that the base array is updated with the contents of this array. **FNC** F_CONTIGUOUS and not C_CONTIGUOUS. **FORC** F_CONTIGUOUS or C_CONTIGUOUS (one-segment test). **BEHAVED (B)** ALIGNED and WRITEABLE. **CARRAY (CA)** BEHAVED and C_CONTIGUOUS. **FARRAY (FA)** BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS. # numpy.matrix.flat attribute matrix.flat A 1-D iterator over the array. This is a [`numpy.flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") instance, which acts similarly to, but is not a subclass of, Python’s built-in iterator object. See also [`flatten`](numpy.matrix.flatten#numpy.matrix.flatten "numpy.matrix.flatten") Return a copy of the array collapsed into one dimension. [`flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") #### Examples >>> import numpy as np >>> x = np.arange(1, 7).reshape(2, 3) >>> x array([[1, 2, 3], [4, 5, 6]]) >>> x.flat[3] 4 >>> x.T array([[1, 4], [2, 5], [3, 6]]) >>> x.T.flat[3] 5 >>> type(x.flat) <class 'numpy.flatiter'> An assignment example: >>> x.flat = 3; x array([[3, 3, 3], [3, 3, 3]]) >>> x.flat[[1,4]] = 1; x array([[3, 1, 3], [3, 1, 3]]) # numpy.matrix.flatten method matrix.flatten(_order ='C'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L382-L417) Return a flattened copy of the matrix. All `N` elements of the matrix are placed into a single row. Parameters: **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional ‘C’ means to flatten in row-major (C-style) order. ‘F’ means to flatten in column-major (Fortran-style) order. ‘A’ means to flatten in column-major order if `m` is Fortran _contiguous_ in memory, row-major order otherwise. ‘K’ means to flatten `m` in the order the elements occur in memory. The default is ‘C’. 
Returns: **y** matrix A copy of the matrix, flattened to a `(1, N)` matrix where `N` is the number of elements in the original matrix. See also [`ravel`](numpy.ravel#numpy.ravel "numpy.ravel") Return a flattened array. [`flat`](numpy.matrix.flat#numpy.matrix.flat "numpy.matrix.flat") A 1-D flat iterator over the matrix. #### Examples >>> m = np.matrix([[1,2], [3,4]]) >>> m.flatten() matrix([[1, 2, 3, 4]]) >>> m.flatten('F') matrix([[1, 3, 2, 4]]) # numpy.matrix.getA method matrix.getA()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L843-L871) Return `self` as an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") object. Equivalent to `np.asarray(self)`. Parameters: **None** Returns: **ret** ndarray `self` as an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") #### Examples >>> x = np.matrix(np.arange(12).reshape((3,4))); x matrix([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.getA() array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) # numpy.matrix.getA1 method matrix.getA1()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L873-L900) Return `self` as a flattened [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). Equivalent to `np.asarray(x).ravel()` Parameters: **None** Returns: **ret** ndarray `self`, 1-D, as an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") #### Examples >>> x = np.matrix(np.arange(12).reshape((3,4))); x matrix([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.getA1() array([ 0, 1, 2, ..., 9, 10, 11]) # numpy.matrix.getfield method matrix.getfield(_dtype_ , _offset =0_) Returns a field of the given array as a certain type. A field is a view of the array data with a given data-type. The values in the view are determined by the given type and the offset into the current array in bytes. The offset needs to be such that the view dtype fits in the array dtype; for example an array of dtype complex128 has 16-byte elements. 
If taking a view with a 32-bit integer (4 bytes), the offset needs to be between 0 and 12 bytes. Parameters: **dtype** str or dtype The data type of the view. The dtype size of the view can not be larger than that of the array itself. **offset** int Number of bytes to skip before beginning the element view. #### Examples >>> import numpy as np >>> x = np.diag([1.+1.j]*2) >>> x[1, 1] = 2 + 4.j >>> x array([[1.+1.j, 0.+0.j], [0.+0.j, 2.+4.j]]) >>> x.getfield(np.float64) array([[1., 0.], [0., 2.]]) By choosing an offset of 8 bytes we can select the complex part of the array for our view: >>> x.getfield(np.float64, offset=8) array([[1., 0.], [0., 4.]]) # numpy.matrix.getH method matrix.getH()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L974-L1007) Returns the (complex) conjugate transpose of `self`. Equivalent to `np.transpose(self)` if `self` is real-valued. Parameters: **None** Returns: **ret** matrix object complex conjugate transpose of `self` #### Examples >>> x = np.matrix(np.arange(12).reshape((3,4))) >>> z = x - 1j*x; z matrix([[ 0. +0.j, 1. -1.j, 2. -2.j, 3. -3.j], [ 4. -4.j, 5. -5.j, 6. -6.j, 7. -7.j], [ 8. -8.j, 9. -9.j, 10.-10.j, 11.-11.j]]) >>> z.getH() matrix([[ 0. -0.j, 4. +4.j, 8. +8.j], [ 1. +1.j, 5. +5.j, 9. +9.j], [ 2. +2.j, 6. +6.j, 10.+10.j], [ 3. +3.j, 7. +7.j, 11.+11.j]]) # numpy.matrix.getI method matrix.getI()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L798-L841) Returns the (multiplicative) inverse of invertible `self`. Parameters: **None** Returns: **ret** matrix object If `self` is non-singular, `ret` is such that `ret * self` == `self * ret` == `np.matrix(np.eye(self[0,:].size))` all return `True`. Raises: numpy.linalg.LinAlgError: Singular matrix If `self` is singular. See also [`linalg.inv`](numpy.linalg.inv#numpy.linalg.inv "numpy.linalg.inv") #### Examples >>> m = np.matrix('[1, 2; 3, 4]'); m matrix([[1, 2], [3, 4]]) >>> m.getI() matrix([[-2. , 1. 
], [ 1.5, -0.5]]) >>> m.getI() * m matrix([[ 1., 0.], # may vary [ 0., 1.]]) # numpy.matrix.getT method matrix.getT()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L941-L972) Returns the transpose of the matrix. Does _not_ conjugate! For the complex conjugate transpose, use `.H`. Parameters: **None** Returns: **ret** matrix object The (non-conjugated) transpose of the matrix. See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose"), [`getH`](numpy.matrix.geth#numpy.matrix.getH "numpy.matrix.getH") #### Examples >>> m = np.matrix('[1, 2; 3, 4]') >>> m matrix([[1, 2], [3, 4]]) >>> m.getT() matrix([[1, 3], [2, 4]]) # numpy.matrix.H property _property_ matrix.H Returns the (complex) conjugate transpose of `self`. Equivalent to `np.transpose(self)` if `self` is real-valued. Parameters: **None** Returns: **ret** matrix object complex conjugate transpose of `self` #### Examples >>> x = np.matrix(np.arange(12).reshape((3,4))) >>> z = x - 1j*x; z matrix([[ 0. +0.j, 1. -1.j, 2. -2.j, 3. -3.j], [ 4. -4.j, 5. -5.j, 6. -6.j, 7. -7.j], [ 8. -8.j, 9. -9.j, 10.-10.j, 11.-11.j]]) >>> z.getH() matrix([[ 0. -0.j, 4. +4.j, 8. +8.j], [ 1. +1.j, 5. +5.j, 9. +9.j], [ 2. +2.j, 6. +6.j, 10.+10.j], [ 3. +3.j, 7. +7.j, 11.+11.j]]) # numpy.matrix _class_ numpy.matrix(_data_ , _dtype =None_, _copy =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/__init__.py) Returns a matrix from an array-like object, or from a string of data. A matrix is a specialized 2-D array that retains its 2-D nature through operations. It has certain special operators, such as `*` (matrix multiplication) and `**` (matrix power). Note It is no longer recommended to use this class, even for linear algebra. Instead use regular arrays. The class may be removed in the future. 
Parameters: **data** array_like or string If [`data`](numpy.matrix.data#numpy.matrix.data "numpy.matrix.data") is a string, it is interpreted as a matrix with commas or spaces separating columns, and semicolons separating rows. **dtype** data-type Data-type of the output matrix. **copy** bool If [`data`](numpy.matrix.data#numpy.matrix.data "numpy.matrix.data") is already an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), then this flag determines whether the data is copied (the default), or whether a view is constructed. See also [`array`](numpy.array#numpy.array "numpy.array") #### Examples >>> import numpy as np >>> a = np.matrix('1 2; 3 4') >>> a matrix([[1, 2], [3, 4]]) >>> np.matrix([[1, 2], [3, 4]]) matrix([[1, 2], [3, 4]]) Attributes: [`A`](numpy.matrix.a#numpy.matrix.A "numpy.matrix.A") Return `self` as an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") object. [`A1`](numpy.matrix.a1#numpy.matrix.A1 "numpy.matrix.A1") Return `self` as a flattened [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). [`H`](numpy.matrix.h#numpy.matrix.H "numpy.matrix.H") Returns the (complex) conjugate transpose of `self`. [`I`](numpy.matrix.i#numpy.matrix.I "numpy.matrix.I") Returns the (multiplicative) inverse of invertible `self`. [`T`](numpy.matrix.t#numpy.matrix.T "numpy.matrix.T") Returns the transpose of the matrix. [`base`](numpy.matrix.base#numpy.matrix.base "numpy.matrix.base") Base object if memory is from some other object. [`ctypes`](numpy.matrix.ctypes#numpy.matrix.ctypes "numpy.matrix.ctypes") An object to simplify the interaction of the array with the ctypes module. [`data`](numpy.matrix.data#numpy.matrix.data "numpy.matrix.data") Python buffer object pointing to the start of the array’s data. **device** [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Data-type of the array’s elements. [`flags`](numpy.matrix.flags#numpy.matrix.flags "numpy.matrix.flags") Information about the memory layout of the array. 
[`flat`](numpy.matrix.flat#numpy.matrix.flat "numpy.matrix.flat") A 1-D iterator over the array. [`imag`](numpy.imag#numpy.imag "numpy.imag") The imaginary part of the array. **itemset** [`itemsize`](numpy.matrix.itemsize#numpy.matrix.itemsize "numpy.matrix.itemsize") Length of one array element in bytes. [`mT`](numpy.matrix.mt#numpy.matrix.mT "numpy.matrix.mT") View of the matrix transposed array. [`nbytes`](numpy.matrix.nbytes#numpy.matrix.nbytes "numpy.matrix.nbytes") Total bytes consumed by the elements of the array. [`ndim`](numpy.ndim#numpy.ndim "numpy.ndim") Number of array dimensions. **newbyteorder** [`real`](numpy.real#numpy.real "numpy.real") The real part of the array. [`shape`](numpy.shape#numpy.shape "numpy.shape") Tuple of array dimensions. [`size`](numpy.size#numpy.size "numpy.size") Number of elements in the array. [`strides`](numpy.matrix.strides#numpy.matrix.strides "numpy.matrix.strides") Tuple of bytes to step in each dimension when traversing an array. #### Methods [`all`](numpy.matrix.all#numpy.matrix.all "numpy.matrix.all")([axis, out]) | Test whether all matrix elements along a given axis evaluate to True. ---|--- [`any`](numpy.matrix.any#numpy.matrix.any "numpy.matrix.any")([axis, out]) | Test whether any array element along a given axis evaluates to True. [`argmax`](numpy.matrix.argmax#numpy.matrix.argmax "numpy.matrix.argmax")([axis, out]) | Indexes of the maximum values along an axis. [`argmin`](numpy.matrix.argmin#numpy.matrix.argmin "numpy.matrix.argmin")([axis, out]) | Indexes of the minimum values along an axis. [`argpartition`](numpy.matrix.argpartition#numpy.matrix.argpartition "numpy.matrix.argpartition")(kth[, axis, kind, order]) | Returns the indices that would partition this array. [`argsort`](numpy.matrix.argsort#numpy.matrix.argsort "numpy.matrix.argsort")([axis, kind, order]) | Returns the indices that would sort this array. 
[`astype`](numpy.matrix.astype#numpy.matrix.astype "numpy.matrix.astype")(dtype[, order, casting, subok, copy]) | Copy of the array, cast to a specified type. [`byteswap`](numpy.matrix.byteswap#numpy.matrix.byteswap "numpy.matrix.byteswap")([inplace]) | Swap the bytes of the array elements [`choose`](numpy.matrix.choose#numpy.matrix.choose "numpy.matrix.choose")(choices[, out, mode]) | Use an index array to construct a new array from a set of choices. [`clip`](numpy.matrix.clip#numpy.matrix.clip "numpy.matrix.clip")([min, max, out]) | Return an array whose values are limited to `[min, max]`. [`compress`](numpy.matrix.compress#numpy.matrix.compress "numpy.matrix.compress")(condition[, axis, out]) | Return selected slices of this array along given axis. [`conj`](numpy.matrix.conj#numpy.matrix.conj "numpy.matrix.conj")() | Complex-conjugate all elements. [`conjugate`](numpy.matrix.conjugate#numpy.matrix.conjugate "numpy.matrix.conjugate")() | Return the complex conjugate, element-wise. [`copy`](numpy.matrix.copy#numpy.matrix.copy "numpy.matrix.copy")([order]) | Return a copy of the array. [`cumprod`](numpy.matrix.cumprod#numpy.matrix.cumprod "numpy.matrix.cumprod")([axis, dtype, out]) | Return the cumulative product of the elements along the given axis. [`cumsum`](numpy.matrix.cumsum#numpy.matrix.cumsum "numpy.matrix.cumsum")([axis, dtype, out]) | Return the cumulative sum of the elements along the given axis. [`diagonal`](numpy.matrix.diagonal#numpy.matrix.diagonal "numpy.matrix.diagonal")([offset, axis1, axis2]) | Return specified diagonals. [`dump`](numpy.matrix.dump#numpy.matrix.dump "numpy.matrix.dump")(file) | Dump a pickle of the array to the specified file. [`dumps`](numpy.matrix.dumps#numpy.matrix.dumps "numpy.matrix.dumps")() | Returns the pickle of the array as a string. [`fill`](numpy.matrix.fill#numpy.matrix.fill "numpy.matrix.fill")(value) | Fill the array with a scalar value. 
[`flatten`](numpy.matrix.flatten#numpy.matrix.flatten "numpy.matrix.flatten")([order]) | Return a flattened copy of the matrix. [`getA`](numpy.matrix.geta#numpy.matrix.getA "numpy.matrix.getA")() | Return `self` as an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") object. [`getA1`](numpy.matrix.geta1#numpy.matrix.getA1 "numpy.matrix.getA1")() | Return `self` as a flattened [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). [`getH`](numpy.matrix.geth#numpy.matrix.getH "numpy.matrix.getH")() | Returns the (complex) conjugate transpose of `self`. [`getI`](numpy.matrix.geti#numpy.matrix.getI "numpy.matrix.getI")() | Returns the (multiplicative) inverse of invertible `self`. [`getT`](numpy.matrix.gett#numpy.matrix.getT "numpy.matrix.getT")() | Returns the transpose of the matrix. [`getfield`](numpy.matrix.getfield#numpy.matrix.getfield "numpy.matrix.getfield")(dtype[, offset]) | Returns a field of the given array as a certain type. [`item`](numpy.matrix.item#numpy.matrix.item "numpy.matrix.item")(*args) | Copy an element of an array to a standard Python scalar and return it. [`max`](numpy.matrix.max#numpy.matrix.max "numpy.matrix.max")([axis, out]) | Return the maximum value along an axis. [`mean`](numpy.matrix.mean#numpy.matrix.mean "numpy.matrix.mean")([axis, dtype, out]) | Returns the average of the matrix elements along the given axis. [`min`](numpy.matrix.min#numpy.matrix.min "numpy.matrix.min")([axis, out]) | Return the minimum value along an axis. [`nonzero`](numpy.matrix.nonzero#numpy.matrix.nonzero "numpy.matrix.nonzero")() | Return the indices of the elements that are non-zero. [`partition`](numpy.matrix.partition#numpy.matrix.partition "numpy.matrix.partition")(kth[, axis, kind, order]) | Partially sorts the elements in the array in such a way that the value of the element in k-th position is in the position it would be in a sorted array. 
[`prod`](numpy.matrix.prod#numpy.matrix.prod "numpy.matrix.prod")([axis, dtype, out]) | Return the product of the array elements over the given axis. [`ptp`](numpy.matrix.ptp#numpy.matrix.ptp "numpy.matrix.ptp")([axis, out]) | Peak-to-peak (maximum - minimum) value along the given axis. [`put`](numpy.matrix.put#numpy.matrix.put "numpy.matrix.put")(indices, values[, mode]) | Set `a.flat[n] = values[n]` for all `n` in indices. [`ravel`](numpy.matrix.ravel#numpy.matrix.ravel "numpy.matrix.ravel")([order]) | Return a flattened matrix. [`repeat`](numpy.matrix.repeat#numpy.matrix.repeat "numpy.matrix.repeat")(repeats[, axis]) | Repeat elements of an array. [`reshape`](numpy.matrix.reshape#numpy.matrix.reshape "numpy.matrix.reshape")(shape, /, *[, order, copy]) | Returns an array containing the same data with a new shape. [`resize`](numpy.matrix.resize#numpy.matrix.resize "numpy.matrix.resize")(new_shape[, refcheck]) | Change shape and size of array in-place. [`round`](numpy.matrix.round#numpy.matrix.round "numpy.matrix.round")([decimals, out]) | Return `a` with each element rounded to the given number of decimals. [`searchsorted`](numpy.matrix.searchsorted#numpy.matrix.searchsorted "numpy.matrix.searchsorted")(v[, side, sorter]) | Find indices where elements of v should be inserted in a to maintain order. [`setfield`](numpy.matrix.setfield#numpy.matrix.setfield "numpy.matrix.setfield")(val, dtype[, offset]) | Put a value into a specified place in a field defined by a data-type. [`setflags`](numpy.matrix.setflags#numpy.matrix.setflags "numpy.matrix.setflags")([write, align, uic]) | Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively. [`sort`](numpy.matrix.sort#numpy.matrix.sort "numpy.matrix.sort")([axis, kind, order]) | Sort an array in-place. [`squeeze`](numpy.matrix.squeeze#numpy.matrix.squeeze "numpy.matrix.squeeze")([axis]) | Return a possibly reshaped matrix. 
[`std`](numpy.matrix.std#numpy.matrix.std "numpy.matrix.std")([axis, dtype, out, ddof]) | Return the standard deviation of the array elements along the given axis. [`sum`](numpy.matrix.sum#numpy.matrix.sum "numpy.matrix.sum")([axis, dtype, out]) | Returns the sum of the matrix elements, along the given axis. [`swapaxes`](numpy.matrix.swapaxes#numpy.matrix.swapaxes "numpy.matrix.swapaxes")(axis1, axis2) | Return a view of the array with `axis1` and `axis2` interchanged. [`take`](numpy.matrix.take#numpy.matrix.take "numpy.matrix.take")(indices[, axis, out, mode]) | Return an array formed from the elements of `a` at the given indices. [`tobytes`](numpy.matrix.tobytes#numpy.matrix.tobytes "numpy.matrix.tobytes")([order]) | Construct Python bytes containing the raw data bytes in the array. [`tofile`](numpy.matrix.tofile#numpy.matrix.tofile "numpy.matrix.tofile")(fid[, sep, format]) | Write array to a file as text or binary (default). [`tolist`](numpy.matrix.tolist#numpy.matrix.tolist "numpy.matrix.tolist")() | Return the matrix as a (possibly nested) list. [`tostring`](numpy.matrix.tostring#numpy.matrix.tostring "numpy.matrix.tostring")([order]) | A compatibility alias for [`tobytes`](numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes"), with exactly the same behavior. [`trace`](numpy.matrix.trace#numpy.matrix.trace "numpy.matrix.trace")([offset, axis1, axis2, dtype, out]) | Return the sum along diagonals of the array. [`transpose`](numpy.matrix.transpose#numpy.matrix.transpose "numpy.matrix.transpose")(*axes) | Returns a view of the array with axes transposed. [`var`](numpy.matrix.var#numpy.matrix.var "numpy.matrix.var")([axis, dtype, out, ddof]) | Returns the variance of the matrix elements, along the given axis. [`view`](numpy.matrix.view#numpy.matrix.view "numpy.matrix.view")([dtype][, type]) | New view of array with the same data. 
**dot** | ---|--- **to_device** | # numpy.matrix.I property _property_ matrix.I Returns the (multiplicative) inverse of invertible `self`. Parameters: **None** Returns: **ret** matrix object If `self` is non-singular, `ret` is such that `ret * self` == `self * ret` == `np.matrix(np.eye(self[0,:].size))` all return `True`. Raises: numpy.linalg.LinAlgError: Singular matrix If `self` is singular. See also [`linalg.inv`](numpy.linalg.inv#numpy.linalg.inv "numpy.linalg.inv") #### Examples >>> m = np.matrix('[1, 2; 3, 4]'); m matrix([[1, 2], [3, 4]]) >>> m.getI() matrix([[-2. , 1. ], [ 1.5, -0.5]]) >>> m.getI() * m matrix([[ 1., 0.], # may vary [ 0., 1.]]) # numpy.matrix.item method matrix.item(_* args_) Copy an element of an array to a standard Python scalar and return it. Parameters: ***args** Arguments (variable number and type) * none: in this case, the method only works for arrays with one element (`a.size == 1`), which element is copied into a standard Python scalar object and returned. * int_type: this argument is interpreted as a flat index into the array, specifying which element to copy and return. * tuple of int_types: functions as does a single int_type argument, except that the argument is interpreted as an nd-index into the array. Returns: **z** Standard Python scalar object A copy of the specified element of the array as a suitable Python scalar #### Notes When the data type of `a` is longdouble or clongdouble, item() returns a scalar array object because there is no available Python scalar that would not lose information. Void arrays return a buffer object for item(), unless fields are defined, in which case a tuple is returned. `item` is very similar to a[args], except, instead of an array scalar, a standard Python scalar is returned. This can be useful for speeding up access to elements of the array and doing arithmetic on elements of the array using Python’s optimized math. 
#### Examples >>> import numpy as np >>> np.random.seed(123) >>> x = np.random.randint(9, size=(3, 3)) >>> x array([[2, 2, 6], [1, 3, 6], [1, 0, 1]]) >>> x.item(3) 1 >>> x.item(7) 0 >>> x.item((0, 1)) 2 >>> x.item((2, 2)) 1 For an array with object dtype, elements are returned as-is. >>> a = np.array([np.int64(1)], dtype=object) >>> a.item() #return np.int64 np.int64(1) # numpy.matrix.itemsize attribute matrix.itemsize Length of one array element in bytes. #### Examples >>> import numpy as np >>> x = np.array([1,2,3], dtype=np.float64) >>> x.itemsize 8 >>> x = np.array([1,2,3], dtype=np.complex128) >>> x.itemsize 16 # numpy.matrix.max method matrix.max(_axis =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L617-L650) Return the maximum value along an axis. Parameters: **See `amax` for complete descriptions** See also [`amax`](numpy.amax#numpy.amax "numpy.amax"), [`ndarray.max`](numpy.ndarray.max#numpy.ndarray.max "numpy.ndarray.max") #### Notes This is the same as [`ndarray.max`](numpy.ndarray.max#numpy.ndarray.max "numpy.ndarray.max"), but returns a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object where [`ndarray.max`](numpy.ndarray.max#numpy.ndarray.max "numpy.ndarray.max") would return an ndarray. #### Examples >>> x = np.matrix(np.arange(12).reshape((3,4))); x matrix([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.max() 11 >>> x.max(0) matrix([[ 8, 9, 10, 11]]) >>> x.max(1) matrix([[ 3], [ 7], [11]]) # numpy.matrix.mean method matrix.mean(_axis =None_, _dtype =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L419-L451) Returns the average of the matrix elements along the given axis. Refer to [`numpy.mean`](numpy.mean#numpy.mean "numpy.mean") for full documentation. 
See also [`numpy.mean`](numpy.mean#numpy.mean "numpy.mean") #### Notes Same as [`ndarray.mean`](numpy.ndarray.mean#numpy.ndarray.mean "numpy.ndarray.mean") except that, where that returns an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), this returns a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object. #### Examples >>> x = np.matrix(np.arange(12).reshape((3, 4))) >>> x matrix([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.mean() 5.5 >>> x.mean(0) matrix([[4., 5., 6., 7.]]) >>> x.mean(1) matrix([[ 1.5], [ 5.5], [ 9.5]]) # numpy.matrix.min method matrix.min(_axis =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L691-L724) Return the minimum value along an axis. Parameters: **See `amin` for complete descriptions.** See also [`amin`](numpy.amin#numpy.amin "numpy.amin"), [`ndarray.min`](numpy.ndarray.min#numpy.ndarray.min "numpy.ndarray.min") #### Notes This is the same as [`ndarray.min`](numpy.ndarray.min#numpy.ndarray.min "numpy.ndarray.min"), but returns a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object where [`ndarray.min`](numpy.ndarray.min#numpy.ndarray.min "numpy.ndarray.min") would return an ndarray. #### Examples >>> x = -np.matrix(np.arange(12).reshape((3,4))); x matrix([[ 0, -1, -2, -3], [ -4, -5, -6, -7], [ -8, -9, -10, -11]]) >>> x.min() -11 >>> x.min(0) matrix([[ -8, -9, -10, -11]]) >>> x.min(1) matrix([[ -3], [ -7], [-11]]) # numpy.matrix.mT attribute matrix.mT View of the matrix transposed array. The matrix transpose is the transpose of the last two dimensions, even if the array is of higher dimension. New in version 2.0. Raises: ValueError If the array is of dimension less than 2. 
#### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> a.mT array([[1, 3], [2, 4]]) >>> a = np.arange(8).reshape((2, 2, 2)) >>> a array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) >>> a.mT array([[[0, 2], [1, 3]], [[4, 6], [5, 7]]]) # numpy.matrix.nbytes attribute matrix.nbytes Total bytes consumed by the elements of the array. See also [`sys.getsizeof`](https://docs.python.org/3/library/sys.html#sys.getsizeof "\(in Python v3.13\)") Memory consumed by the object itself without parents in the case of a view. This does include memory consumed by non-element attributes. #### Notes Does not include memory consumed by non-element attributes of the array object. #### Examples >>> import numpy as np >>> x = np.zeros((3,5,2), dtype=np.complex128) >>> x.nbytes 480 >>> np.prod(x.shape) * x.itemsize 480 # numpy.matrix.nonzero method matrix.nonzero() Return the indices of the elements that are non-zero. Refer to [`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") for full documentation. See also [`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") equivalent function # numpy.matrix.partition method matrix.partition(_kth_ , _axis =-1_, _kind ='introselect'_, _order =None_) Partially sorts the elements in the array in such a way that the value of the element in k-th position is in the position it would be in a sorted array. In the output array, all elements smaller than the k-th element are located to the left of this element and all equal or greater are located to its right. The ordering of the elements in the two partitions on either side of the k-th element in the output array is undefined. Parameters: **kth** int or sequence of ints Element index to partition by. The kth element value will be in its final sorted position and all smaller elements will be moved before it and all equal or greater elements behind it. The order of all elements in the partitions is undefined. 
If provided with a sequence of kth it will partition all elements indexed by kth of them into their sorted position at once. Deprecated since version 1.22.0: Passing booleans as index is deprecated. **axis** int, optional Axis along which to sort. Default is -1, which means sort along the last axis. **kind**{‘introselect’}, optional Selection algorithm. Default is ‘introselect’. **order** str or list of str, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need to be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties. See also [`numpy.partition`](numpy.partition#numpy.partition "numpy.partition") Return a partitioned copy of an array. [`argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition") Indirect partition. [`sort`](numpy.sort#numpy.sort "numpy.sort") Full sort. #### Notes See `np.partition` for notes on the different algorithms. #### Examples >>> import numpy as np >>> a = np.array([3, 4, 2, 1]) >>> a.partition(3) >>> a array([2, 1, 3, 4]) # may vary >>> a.partition((1, 3)) >>> a array([1, 2, 3, 4]) # numpy.matrix.prod method matrix.prod(_axis =None_, _dtype =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L521-L552) Return the product of the array elements over the given axis. Refer to [`prod`](numpy.prod#numpy.prod "numpy.prod") for full documentation. See also [`prod`](numpy.prod#numpy.prod "numpy.prod"), [`ndarray.prod`](numpy.ndarray.prod#numpy.ndarray.prod "numpy.ndarray.prod") #### Notes Same as [`ndarray.prod`](numpy.ndarray.prod#numpy.ndarray.prod "numpy.ndarray.prod"), except, where that returns an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), this returns a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object instead. 
#### Examples >>> x = np.matrix(np.arange(12).reshape((3,4))); x matrix([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.prod() 0 >>> x.prod(0) matrix([[ 0, 45, 120, 231]]) >>> x.prod(1) matrix([[ 0], [ 840], [7920]]) # numpy.matrix.ptp method matrix.ptp(_axis =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L765-L796) Peak-to-peak (maximum - minimum) value along the given axis. Refer to [`numpy.ptp`](numpy.ptp#numpy.ptp "numpy.ptp") for full documentation. See also [`numpy.ptp`](numpy.ptp#numpy.ptp "numpy.ptp") #### Notes Same as [`ndarray.ptp`](numpy.ndarray.ptp#numpy.ndarray.ptp "numpy.ndarray.ptp"), except, where that would return an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") object, this returns a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object. #### Examples >>> x = np.matrix(np.arange(12).reshape((3,4))); x matrix([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.ptp() 11 >>> x.ptp(0) matrix([[8, 8, 8, 8]]) >>> x.ptp(1) matrix([[3], [3], [3]]) # numpy.matrix.put method matrix.put(_indices_ , _values_ , _mode ='raise'_) Set `a.flat[n] = values[n]` for all `n` in indices. Refer to [`numpy.put`](numpy.put#numpy.put "numpy.put") for full documentation. See also [`numpy.put`](numpy.put#numpy.put "numpy.put") equivalent function # numpy.matrix.ravel method matrix.ravel(_order ='C'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L903-L939) Return a flattened matrix. Refer to [`numpy.ravel`](numpy.ravel#numpy.ravel "numpy.ravel") for more documentation. Parameters: **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional The elements of `m` are read using this index order. ‘C’ means to index the elements in C-like order, with the last axis index changing fastest, back to the first axis index changing slowest. ‘F’ means to index the elements in Fortran-like index order, with the first index changing fastest, and the last index changing slowest. 
Note that the ‘C’ and ‘F’ options take no account of the memory layout of the underlying array, and only refer to the order of axis indexing. ‘A’ means to read the elements in Fortran-like index order if `m` is Fortran _contiguous_ in memory, C-like order otherwise. ‘K’ means to read the elements in the order they occur in memory, except for reversing the data when strides are negative. By default, ‘C’ index order is used. Returns: **ret** matrix Return the matrix flattened to shape `(1, N)` where `N` is the number of elements in the original matrix. A copy is made only if necessary. See also [`matrix.flatten`](numpy.matrix.flatten#numpy.matrix.flatten "numpy.matrix.flatten") returns a similar output matrix but always a copy [`matrix.flat`](numpy.matrix.flat#numpy.matrix.flat "numpy.matrix.flat") a flat iterator on the array. [`numpy.ravel`](numpy.ravel#numpy.ravel "numpy.ravel") related function which returns an ndarray # numpy.matrix.repeat method matrix.repeat(_repeats_ , _axis =None_) Repeat elements of an array. Refer to [`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat") for full documentation. See also [`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat") equivalent function # numpy.matrix.reshape method matrix.reshape(_shape_ , _/_ , _*_ , _order ='C'_, _copy =None_) Returns an array containing the same data with a new shape. Refer to [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") for full documentation. See also [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") equivalent function #### Notes Unlike the free function [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape"), this method on [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") allows the elements of the shape parameter to be passed in as separate arguments. For example, `a.reshape(10, 11)` is equivalent to `a.reshape((10, 11))`. # numpy.matrix.resize method matrix.resize(_new_shape_ , _refcheck =True_) Change shape and size of array in-place. 
Parameters: **new_shape** tuple of ints, or `n` ints Shape of resized array. **refcheck** bool, optional If False, reference count will not be checked. Default is True. Returns: None Raises: ValueError If `a` does not own its own data or references or views to it exist, and the data memory must be changed. PyPy only: will always raise if the data memory must be changed, since there is no reliable way to determine if references or views to it exist. SystemError If the `order` keyword argument is specified. This behaviour is a bug in NumPy. See also [`resize`](numpy.resize#numpy.resize "numpy.resize") Return a new array with the specified shape. #### Notes This reallocates space for the data area if necessary. Only contiguous arrays (data elements consecutive in memory) can be resized. The purpose of the reference count check is to make sure you do not use this array as a buffer for another Python object and then reallocate the memory. However, reference counts can increase in other ways so if you are sure that you have not shared the memory for this array with another Python object, then you may safely set `refcheck` to False. #### Examples Shrinking an array: array is flattened (in the order that the data are stored in memory), resized, and reshaped: >>> import numpy as np >>> a = np.array([[0, 1], [2, 3]], order='C') >>> a.resize((2, 1)) >>> a array([[0], [1]]) >>> a = np.array([[0, 1], [2, 3]], order='F') >>> a.resize((2, 1)) >>> a array([[0], [2]]) Enlarging an array: as above, but missing entries are filled with zeros: >>> b = np.array([[0, 1], [2, 3]]) >>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple >>> b array([[0, 1, 2], [3, 0, 0]]) Referencing an array prevents resizing… >>> c = a >>> a.resize((1, 1)) Traceback (most recent call last): ... ValueError: cannot resize an array that references or is referenced ... 
Unless `refcheck` is False: >>> a.resize((1, 1), refcheck=False) >>> a array([[0]]) >>> c array([[0]]) # numpy.matrix.round method matrix.round(_decimals =0_, _out =None_) Return `a` with each element rounded to the given number of decimals. Refer to [`numpy.around`](numpy.around#numpy.around "numpy.around") for full documentation. See also [`numpy.around`](numpy.around#numpy.around "numpy.around") equivalent function # numpy.matrix.searchsorted method matrix.searchsorted(_v_ , _side ='left'_, _sorter =None_) Find indices where elements of v should be inserted in a to maintain order. For full documentation, see [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") See also [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") equivalent function # numpy.matrix.setfield method matrix.setfield(_val_ , _dtype_ , _offset =0_) Put a value into a specified place in a field defined by a data-type. Place `val` into `a`’s field defined by [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") and beginning `offset` bytes into the field. Parameters: **val** object Value to be placed in field. **dtype** dtype object Data-type of the field in which to place `val`. **offset** int, optional The number of bytes into the field at which to place `val`. 
Returns: None See also [`getfield`](numpy.matrix.getfield#numpy.matrix.getfield "numpy.matrix.getfield") #### Examples >>> import numpy as np >>> x = np.eye(3) >>> x.getfield(np.float64) array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) >>> x.setfield(3, np.int32) >>> x.getfield(np.int32) array([[3, 3, 3], [3, 3, 3], [3, 3, 3]], dtype=int32) >>> x array([[1.0e+000, 1.5e-323, 1.5e-323], [1.5e-323, 1.0e+000, 1.5e-323], [1.5e-323, 1.5e-323, 1.0e+000]]) >>> x.setfield(np.eye(3), np.int32) >>> x array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) # numpy.matrix.setflags method matrix.setflags(_write =None_, _align =None_, _uic =None_) Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively. These Boolean-valued flags affect how numpy interprets the memory area used by `a` (see Notes below). The ALIGNED flag can only be set to True if the data is actually aligned according to the type. The WRITEBACKIFCOPY flag can never be set to True. The flag WRITEABLE can only be set to True if the array owns its own memory, or the ultimate owner of the memory exposes a writeable buffer interface, or is a string. (The exception for string is made so that unpickling can be done without copying memory.) Parameters: **write** bool, optional Describes whether or not `a` can be written to. **align** bool, optional Describes whether or not `a` is aligned properly for its type. **uic** bool, optional Describes whether or not `a` is a copy of another “base” array. #### Notes Array flags provide information about how the memory area used for the array is to be interpreted. There are 7 Boolean flags in use, only three of which can be changed by the user: WRITEBACKIFCOPY, WRITEABLE, and ALIGNED. WRITEABLE (W) the data area can be written to; ALIGNED (A) the data and strides are aligned appropriately for the hardware (as determined by the compiler); WRITEBACKIFCOPY (X) this array is a copy of some other array (referenced by .base). 
When the C-API function PyArray_ResolveWritebackIfCopy is called, the base array will be updated with the contents of this array. All flags can be accessed using the single (upper case) letter as well as the full name. #### Examples >>> import numpy as np >>> y = np.array([[3, 1, 7], ... [2, 0, 0], ... [8, 5, 9]]) >>> y array([[3, 1, 7], [2, 0, 0], [8, 5, 9]]) >>> y.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : True ALIGNED : True WRITEBACKIFCOPY : False >>> y.setflags(write=0, align=0) >>> y.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : False ALIGNED : False WRITEBACKIFCOPY : False >>> y.setflags(uic=1) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: cannot set WRITEBACKIFCOPY flag to True # numpy.matrix.sort method matrix.sort(_axis =-1_, _kind =None_, _order =None_) Sort an array in-place. Refer to [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") for full documentation. Parameters: **axis** int, optional Axis along which to sort. Default is -1, which means sort along the last axis. **kind**{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional Sorting algorithm. The default is ‘quicksort’. Note that both ‘stable’ and ‘mergesort’ use timsort under the covers and, in general, the actual implementation will vary with datatype. The ‘mergesort’ option is retained for backwards compatibility. **order** str or list of str, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties. See also [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") Return a sorted copy of an array. [`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort") Indirect sort. 
[`numpy.lexsort`](numpy.lexsort#numpy.lexsort "numpy.lexsort") Indirect stable sort on multiple keys. [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") Find elements in sorted array. [`numpy.partition`](numpy.partition#numpy.partition "numpy.partition") Partial sort.

#### Notes See [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") for notes on the different sorting algorithms.

#### Examples >>> import numpy as np >>> a = np.array([[1,4], [3,1]]) >>> a.sort(axis=1) >>> a array([[1, 4], [1, 3]]) >>> a.sort(axis=0) >>> a array([[1, 3], [1, 4]]) Use the `order` keyword to specify a field to use when sorting a structured array: >>> a = np.array([('a', 2), ('c', 1)], dtype=[('x', 'S1'), ('y', int)]) >>> a.sort(order='y') >>> a array([(b'c', 1), (b'a', 2)], dtype=[('x', 'S1'), ('y', '<i8')])

# numpy.matrix.squeeze method matrix.squeeze(_axis =None_) Return a possibly reshaped matrix. Refer to [`numpy.squeeze`](numpy.squeeze#numpy.squeeze "numpy.squeeze") for full documentation. If the matrix has a single column, that column is returned as the single row of a matrix; otherwise the matrix is returned unchanged.

#### Examples >>> c = np.matrix([[1], [2]]) >>> c matrix([[1], [2]]) >>> c.squeeze() matrix([[1, 2]]) >>> r = c.T >>> r matrix([[1, 2]]) >>> r.squeeze() matrix([[1, 2]]) >>> m = np.matrix([[1, 2], [3, 4]]) >>> m.squeeze() matrix([[1, 2], [3, 4]])

# numpy.matrix.std method matrix.std(_axis =None_, _dtype =None_, _out =None_, _ddof =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L453-L485) Return the standard deviation of the array elements along the given axis. Refer to [`numpy.std`](numpy.std#numpy.std "numpy.std") for full documentation.

See also [`numpy.std`](numpy.std#numpy.std "numpy.std")

#### Notes This is the same as [`ndarray.std`](numpy.ndarray.std#numpy.ndarray.std "numpy.ndarray.std"), except that where an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") would be returned, a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object is returned instead.
#### Examples >>> x = np.matrix(np.arange(12).reshape((3, 4))) >>> x matrix([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.std() 3.4520525295346629 # may vary >>> x.std(0) matrix([[ 3.26598632, 3.26598632, 3.26598632, 3.26598632]]) # may vary >>> x.std(1) matrix([[ 1.11803399], [ 1.11803399], [ 1.11803399]]) # numpy.matrix.strides attribute matrix.strides Tuple of bytes to step in each dimension when traversing an array. The byte offset of element `(i[0], i[1], ..., i[n])` in an array `a` is: offset = sum(np.array(i) * a.strides) A more detailed explanation of strides can be found in [The N-dimensional array (ndarray)](../arrays.ndarray#arrays-ndarray). Warning Setting `arr.strides` is discouraged and may be deprecated in the future. [`numpy.lib.stride_tricks.as_strided`](numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided") should be preferred to create a new view of the same data in a safer way. See also [`numpy.lib.stride_tricks.as_strided`](numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided") #### Notes Imagine an array of 32-bit integers (each 4 bytes): x = np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]], dtype=np.int32) This array is stored in memory as 40 bytes, one after the other (known as a contiguous block of memory). The strides of an array tell us how many bytes we have to skip in memory to move to the next position along a certain axis. For example, we have to skip 4 bytes (1 value) to move to the next column, but 20 bytes (5 values) to get to the same position in the next row. As such, the strides for the array `x` will be `(20, 4)`. 
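The stride arithmetic described above can be verified directly. A quick sketch, using the same 2x5 int32 array as the explanation:

```python
import numpy as np

# The 2x5 int32 array from the explanation above: 4 bytes per element,
# so one row spans 5 * 4 = 20 bytes.
x = np.array([[0, 1, 2, 3, 4],
              [5, 6, 7, 8, 9]], dtype=np.int32)

print(x.strides)   # (20, 4): 20 bytes to the next row, 4 to the next column
print(x.nbytes)    # 40: the whole array is one contiguous block

# Byte offset of element (i, j) is i*20 + j*4, per the offset formula above.
i, j = 1, 3
offset = sum(np.array((i, j)) * x.strides)
print(offset, x[i, j] == x.flat[offset // x.itemsize])
```

Dividing the byte offset by `itemsize` recovers the flat index of the element, which is the same identity the `strides` examples below rely on.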
#### Examples >>> import numpy as np >>> y = np.reshape(np.arange(2*3*4), (2,3,4)) >>> y array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]) >>> y.strides (48, 16, 4) >>> y[1,1,1] 17 >>> offset=sum(y.strides * np.array((1,1,1))) >>> offset/y.itemsize 17 >>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0) >>> x.strides (32, 4, 224, 1344) >>> i = np.array([3,5,2,2]) >>> offset = sum(i * x.strides) >>> x[3,5,2,2] 813 >>> offset / x.itemsize 813 # numpy.matrix.sum method matrix.sum(_axis =None_, _dtype =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L293-L325) Returns the sum of the matrix elements, along the given axis. Refer to [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") for full documentation. See also [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") #### Notes This is the same as [`ndarray.sum`](numpy.ndarray.sum#numpy.ndarray.sum "numpy.ndarray.sum"), except that where an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") would be returned, a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object is returned instead. #### Examples >>> x = np.matrix([[1, 2], [4, 3]]) >>> x.sum() 10 >>> x.sum(axis=1) matrix([[3], [7]]) >>> x.sum(axis=1, dtype='float') matrix([[3.], [7.]]) >>> out = np.zeros((2, 1), dtype='float') >>> x.sum(axis=1, dtype='float', out=np.asmatrix(out)) matrix([[3.], [7.]]) # numpy.matrix.swapaxes method matrix.swapaxes(_axis1_ , _axis2_) Return a view of the array with `axis1` and `axis2` interchanged. Refer to [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") for full documentation. See also [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") equivalent function # numpy.matrix.T property _property_ matrix.T Returns the transpose of the matrix. Does _not_ conjugate! For the complex conjugate transpose, use `.H`. 
Parameters: **None** Returns: **ret** matrix object The (non-conjugated) transpose of the matrix.

See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose"), [`getH`](numpy.matrix.geth#numpy.matrix.getH "numpy.matrix.getH")

#### Examples >>> m = np.matrix('[1, 2; 3, 4]') >>> m matrix([[1, 2], [3, 4]]) >>> m.getT() matrix([[1, 3], [2, 4]])

# numpy.matrix.take method matrix.take(_indices_ , _axis =None_, _out =None_, _mode ='raise'_) Return an array formed from the elements of `a` at the given indices. Refer to [`numpy.take`](numpy.take#numpy.take "numpy.take") for full documentation.

See also [`numpy.take`](numpy.take#numpy.take "numpy.take") equivalent function

# numpy.matrix.tobytes method matrix.tobytes(_order ='C'_) Construct Python bytes containing the raw data bytes in the array. Constructs Python bytes showing a copy of the raw contents of data memory. The bytes object is produced in C-order by default. This behavior is controlled by the `order` parameter.

Parameters: **order**{‘C’, ‘F’, ‘A’}, optional Controls the memory layout of the bytes object. ‘C’ means C-order, ‘F’ means F-order, ‘A’ (short for _Any_) means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. Default is ‘C’. Returns: **s** bytes Python bytes exhibiting a copy of `a`’s raw data.

See also [`frombuffer`](numpy.frombuffer#numpy.frombuffer "numpy.frombuffer") Inverse of this operation, construct a 1-dimensional array from Python bytes.

#### Examples >>> import numpy as np >>> x = np.array([[0, 1], [2, 3]], dtype='<u2') >>> x.tobytes() b'\x00\x00\x01\x00\x02\x00\x03\x00' >>> x.tobytes('C') == x.tobytes() True >>> x.tobytes('F') b'\x00\x00\x02\x00\x01\x00\x03\x00'

# numpy.matrix.tofile method matrix.tofile(_fid_ , _sep =''_, _format ='%s'_) Write array to a file as text or binary (default). Data is always written in ‘C’ order, independent of the order of `a`. The data produced by this method can be recovered using the function fromfile().
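A minimal `tofile`/`fromfile` round trip, sketched with a temporary file (the file name is illustrative):

```python
import os
import tempfile
import numpy as np

a = np.arange(6, dtype=np.int32).reshape(2, 3)
fname = os.path.join(tempfile.mkdtemp(), "a.bin")

# Binary mode (default): raw bytes only; shape and dtype are not stored,
# so they must be supplied again when reading back.
a.tofile(fname)
b = np.fromfile(fname, dtype=np.int32).reshape(a.shape)
print(np.array_equal(a, b))  # True

# Text mode: a non-empty `sep` writes a delimited text file instead.
a.tofile(fname, sep=",")
print(np.fromfile(fname, dtype=np.int32, sep=","))  # [0 1 2 3 4 5]
```

Note that the round trip recovers only the flat data; the original shape has to be restored with `reshape`, which is why the notes below warn against using this format for archival.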
Parameters: **fid** file or str or Path An open file object, or a string containing a filename. **sep** str Separator between array items for text output. If “” (empty), a binary file is written, equivalent to `file.write(a.tobytes())`. **format** str Format string for text file output. Each entry in the array is formatted to text by first converting it to the closest Python type, and then using “format” % item.

#### Notes This is a convenience function for quick storage of array data. Information on endianness and precision is lost, so this method is not a good choice for files intended to archive data or transport data between machines with different endianness. Some of these problems can be overcome by outputting the data as text files, at the expense of speed and file size. When fid is a file object, array contents are directly written to the file, bypassing the file object’s `write` method. As a result, tofile cannot be used with file objects supporting compression (e.g., GzipFile) or file-like objects that do not support `fileno()` (e.g., BytesIO).

# numpy.matrix.tolist method matrix.tolist()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L270-L290) Return the matrix as a (possibly nested) list. See [`ndarray.tolist`](numpy.ndarray.tolist#numpy.ndarray.tolist "numpy.ndarray.tolist") for full documentation.

See also [`ndarray.tolist`](numpy.ndarray.tolist#numpy.ndarray.tolist "numpy.ndarray.tolist")

#### Examples >>> x = np.matrix(np.arange(12).reshape((3,4))); x matrix([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.tolist() [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]

# numpy.matrix.tostring method matrix.tostring(_order ='C'_) A compatibility alias for [`tobytes`](numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes"), with exactly the same behavior.
Despite its name, it returns [`bytes`](https://docs.python.org/3/library/stdtypes.html#bytes "\(in Python v3.13\)") not [`str`](https://docs.python.org/3/library/stdtypes.html#str "\(in Python v3.13\)")s. Deprecated since version 1.19.0. # numpy.matrix.trace method matrix.trace(_offset =0_, _axis1 =0_, _axis2 =1_, _dtype =None_, _out =None_) Return the sum along diagonals of the array. Refer to [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace") for full documentation. See also [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace") equivalent function # numpy.matrix.transpose method matrix.transpose(_* axes_) Returns a view of the array with axes transposed. Refer to [`numpy.transpose`](numpy.transpose#numpy.transpose "numpy.transpose") for full documentation. Parameters: **axes** None, tuple of ints, or `n` ints * None or no argument: reverses the order of the axes. * tuple of ints: `i` in the `j`-th place in the tuple means that the array’s `i`-th axis becomes the transposed array’s `j`-th axis. * `n` ints: same as an n-tuple of the same ints (this form is intended simply as a “convenience” alternative to the tuple form). Returns: **p** ndarray View of the array with its axes suitably permuted. See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose") Equivalent function. [`ndarray.T`](numpy.ndarray.t#numpy.ndarray.T "numpy.ndarray.T") Array property returning the array transposed. [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") Give a new shape to an array without changing its data. 
#### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> a.transpose() array([[1, 3], [2, 4]]) >>> a.transpose((1, 0)) array([[1, 3], [2, 4]]) >>> a.transpose(1, 0) array([[1, 3], [2, 4]]) >>> a = np.array([1, 2, 3, 4]) >>> a array([1, 2, 3, 4]) >>> a.transpose() array([1, 2, 3, 4]) # numpy.matrix.var method matrix.var(_axis =None_, _dtype =None_, _out =None_, _ddof =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/matrixlib/defmatrix.py#L487-L519) Returns the variance of the matrix elements, along the given axis. Refer to [`numpy.var`](numpy.var#numpy.var "numpy.var") for full documentation. See also [`numpy.var`](numpy.var#numpy.var "numpy.var") #### Notes This is the same as [`ndarray.var`](numpy.ndarray.var#numpy.ndarray.var "numpy.ndarray.var"), except that where an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") would be returned, a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object is returned instead. #### Examples >>> x = np.matrix(np.arange(12).reshape((3, 4))) >>> x matrix([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.var() 11.916666666666666 >>> x.var(0) matrix([[ 10.66666667, 10.66666667, 10.66666667, 10.66666667]]) # may vary >>> x.var(1) matrix([[1.25], [1.25], [1.25]]) # numpy.matrix.view method matrix.view(_[dtype][, type]_) New view of array with the same data. Note Passing None for `dtype` is different from omitting the parameter, since the former invokes `dtype(None)` which is an alias for `dtype('float64')`. Parameters: **dtype** data-type or ndarray sub-class, optional Data-type descriptor of the returned view, e.g., float32 or int16. Omitting it results in the view having the same data-type as `a`. This argument can also be specified as an ndarray sub-class, which then specifies the type of the returned object (this is equivalent to setting the `type` parameter). **type** Python type, optional Type of the returned view, e.g., ndarray or matrix. 
Again, omission of the parameter results in type preservation. #### Notes `a.view()` is used two different ways: `a.view(some_dtype)` or `a.view(dtype=some_dtype)` constructs a view of the array’s memory with a different data-type. This can cause a reinterpretation of the bytes of memory. `a.view(ndarray_subclass)` or `a.view(type=ndarray_subclass)` just returns an instance of `ndarray_subclass` that looks at the same array (same shape, dtype, etc.) This does not cause a reinterpretation of the memory. For `a.view(some_dtype)`, if `some_dtype` has a different number of bytes per entry than the previous dtype (for example, converting a regular array to a structured array), then the last axis of `a` must be contiguous. This axis will be resized in the result. Changed in version 1.23.0: Only the last axis needs to be contiguous. Previously, the entire array had to be C-contiguous. #### Examples >>> import numpy as np >>> x = np.array([(-1, 2)], dtype=[('a', np.int8), ('b', np.int8)]) Viewing array data using a different type and dtype: >>> nonneg = np.dtype([("a", np.uint8), ("b", np.uint8)]) >>> y = x.view(dtype=nonneg, type=np.recarray) >>> x["a"] array([-1], dtype=int8) >>> y.a array([255], dtype=uint8) Creating a view on a structured array so it can be used in calculations >>> x = np.array([(1, 2),(3,4)], dtype=[('a', np.int8), ('b', np.int8)]) >>> xv = x.view(dtype=np.int8).reshape(-1,2) >>> xv array([[1, 2], [3, 4]], dtype=int8) >>> xv.mean(0) array([2., 3.]) Making changes to the view changes the underlying array >>> xv[0,1] = 20 >>> x array([(1, 20), (3, 4)], dtype=[('a', 'i1'), ('b', 'i1')]) Using a view to convert an array to a recarray: >>> z = x.view(np.recarray) >>> z.a array([1, 3], dtype=int8) Views share data: >>> x[0] = (9, 10) >>> z[0] np.record((9, 10), dtype=[('a', 'i1'), ('b', 'i1')]) Views that change the dtype size (bytes per entry) should normally be avoided on arrays defined by slices, transposes, fortran-ordering, etc.: >>> x = np.array([[1, 
2, 3], [4, 5, 6]], dtype=np.int16) >>> y = x[:, ::2] >>> y array([[1, 3], [4, 6]], dtype=int16) >>> y.view(dtype=[('width', np.int16), ('length', np.int16)]) Traceback (most recent call last): ... ValueError: To change to a dtype of a different size, the last axis must be contiguous >>> z = y.copy() >>> z.view(dtype=[('width', np.int16), ('length', np.int16)]) array([[(1, 3)], [(4, 6)]], dtype=[('width', '<i2'), ('length', '<i2')]) However, views that change dtype are fine for arrays with a contiguous last axis, even if the rest of the axes are not C-contiguous: >>> x = np.arange(2 * 3 * 4, dtype=np.int8).reshape(2, 3, 4) >>> x.transpose(1, 0, 2).view(np.int16) array([[[ 256, 770], [3340, 3854]], [[1284, 1798], [4368, 4882]], [[2312, 2826], [5396, 5910]]], dtype=int16)

# numpy.matrix_transpose numpy.matrix_transpose(_x_ , _/_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L709-L751) Transposes a matrix (or a stack of matrices) `x`. This function is Array API compatible.

Parameters: **x** array_like Input array having shape (…, M, N) and whose two innermost dimensions form `MxN` matrices. Returns: **out** ndarray An array containing the transpose for each matrix and having shape (…, N, M).

See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose") Generic transpose method.

#### Examples >>> import numpy as np >>> np.matrix_transpose([[1, 2], [3, 4]]) array([[1, 3], [2, 4]]) >>> np.matrix_transpose([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) array([[[1, 3], [2, 4]], [[5, 7], [6, 8]]])

# numpy.matvec numpy.matvec(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_ , _axes_ , _axis_]) _= <ufunc 'matvec'>_ Matrix-vector dot product of two arrays. Given a matrix (or stack of matrices) \\(\mathbf{A}\\) in `x1` and a vector (or stack of vectors) \\(\mathbf{v}\\) in `x2`, the matrix-vector product is defined as: \\[\mathbf{A} \cdot \mathbf{v} = \sum_{j=0}^{n-1} A_{ij} v_j\\] where the sum is over the last dimensions in `x1` and `x2` (unless `axes` is specified).
(For a matrix-vector product with the vector conjugated, use `np.vecmat(x2, x1.mT)`.) New in version 2.2.0.

Parameters: **x1, x2** array_like Input arrays, scalars not allowed. **out** ndarray, optional A location into which the result is stored. If provided, it must have the broadcasted shape of `x1` and `x2` with the summation axis removed. If not provided or None, a freshly-allocated array is used. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray The matrix-vector product of the inputs. Raises: ValueError If the last dimensions of `x1` and `x2` are not the same size. If a scalar value is passed in.

See also [`vecdot`](numpy.vecdot#numpy.vecdot "numpy.vecdot") Vector-vector product. [`vecmat`](numpy.vecmat#numpy.vecmat "numpy.vecmat") Vector-matrix product. [`matmul`](numpy.matmul#numpy.matmul "numpy.matmul") Matrix-matrix product. [`einsum`](numpy.einsum#numpy.einsum "numpy.einsum") Einstein summation convention.

#### Examples Rotate a set of vectors from Y to X along Z. >>> a = np.array([[0., 1., 0.], ... [-1., 0., 0.], ... [0., 0., 1.]]) >>> v = np.array([[1., 0., 0.], ... [0., 1., 0.], ... [0., 0., 1.], ... [0., 6., 8.]]) >>> np.matvec(a, v) array([[ 0., -1., 0.], [ 1., 0., 0.], [ 0., 0., 1.], [ 6., 0., 8.]])

# numpy.max numpy.max(_a_ , _axis=None_ , _out=None_ , _keepdims=<no value>_ , _initial=<no value>_ , _where=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L3052-L3165) Return the maximum of an array or maximum along an axis.

Parameters: **a** array_like Input data. **axis** None or int or tuple of ints, optional Axis or axes along which to operate. By default, flattened input is used. If this is a tuple of ints, the maximum is selected over multiple axes, instead of a single axis or all the axes as before. **out** ndarray, optional Alternative output array in which to place the result. Must be of the same shape and buffer length as the expected output.
See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the `max` method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non-default value will be. If the sub-class’ method does not implement `keepdims` any exceptions will be raised. **initial** scalar, optional The minimum value of an output element. Must be present to allow computation on empty slice. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. **where** array_like of bool, optional Elements to compare for the maximum. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. Returns: **max** ndarray or scalar Maximum of `a`. If `axis` is None, the result is a scalar value. If `axis` is an int, the result is an array of dimension `a.ndim - 1`. If `axis` is a tuple, the result is an array of dimension `a.ndim - len(axis)`. See also [`amin`](numpy.amin#numpy.amin "numpy.amin") The minimum value of an array along a given axis, propagating any NaNs. [`nanmax`](numpy.nanmax#numpy.nanmax "numpy.nanmax") The maximum value of an array along a given axis, ignoring any NaNs. [`maximum`](numpy.maximum#numpy.maximum "numpy.maximum") Element-wise maximum of two arrays, propagating any NaNs. [`fmax`](numpy.fmax#numpy.fmax "numpy.fmax") Element-wise maximum of two arrays, ignoring any NaNs. [`argmax`](numpy.argmax#numpy.argmax "numpy.argmax") Return the indices of the maximum values. 
[`nanmin`](numpy.nanmin#numpy.nanmin "numpy.nanmin"), [`minimum`](numpy.minimum#numpy.minimum "numpy.minimum"), [`fmin`](numpy.fmin#numpy.fmin "numpy.fmin")

#### Notes NaN values are propagated, that is if at least one item is NaN, the corresponding max value will be NaN as well. To ignore NaN values (MATLAB behavior), please use nanmax. Don’t use `max` for element-wise comparison of 2 arrays; when `a.shape[0]` is 2, `maximum(a[0], a[1])` is faster than `max(a, axis=0)`.

#### Examples >>> import numpy as np >>> a = np.arange(4).reshape((2,2)) >>> a array([[0, 1], [2, 3]]) >>> np.max(a) # Maximum of the flattened array 3 >>> np.max(a, axis=0) # Maxima along the first axis array([2, 3]) >>> np.max(a, axis=1) # Maxima along the second axis array([1, 3]) >>> np.max(a, where=[False, True], initial=-1, axis=0) array([-1, 3]) >>> b = np.arange(5, dtype=float) >>> b[2] = np.nan >>> np.max(b) np.float64(nan) >>> np.max(b, where=~np.isnan(b), initial=-1) 4.0 >>> np.nanmax(b) 4.0 You can use an initial value to compute the maximum of an empty slice, or to initialize it to a different value: >>> np.max([[-50], [10]], axis=-1, initial=0) array([ 0, 10]) Notice that the initial value is used as one of the elements for which the maximum is determined, unlike the default argument of Python’s max function, which is only used for empty iterables. >>> np.max([5], initial=6) 6 >>> max([5], default=6) 5

# numpy.maximum numpy.maximum(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) _= <ufunc 'maximum'>_ Element-wise maximum of array elements. Compare two arrays and return a new array containing the element-wise maxima. If one of the elements being compared is a NaN, then that element is returned. If both elements are NaNs then the first is returned. The latter distinction is important for complex NaNs, which are defined as at least one of the real or imaginary parts being a NaN.
The net effect is that NaNs are propagated.

Parameters: **x1, x2** array_like The arrays holding the elements to be compared. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray or scalar The maximum of `x1` and `x2`, element-wise. This is a scalar if both `x1` and `x2` are scalars.

See also [`minimum`](numpy.minimum#numpy.minimum "numpy.minimum") Element-wise minimum of two arrays, propagates NaNs. [`fmax`](numpy.fmax#numpy.fmax "numpy.fmax") Element-wise maximum of two arrays, ignores NaNs. [`amax`](numpy.amax#numpy.amax "numpy.amax") The maximum value of an array along a given axis, propagates NaNs. [`nanmax`](numpy.nanmax#numpy.nanmax "numpy.nanmax") The maximum value of an array along a given axis, ignores NaNs. [`fmin`](numpy.fmin#numpy.fmin "numpy.fmin"), [`amin`](numpy.amin#numpy.amin "numpy.amin"), [`nanmin`](numpy.nanmin#numpy.nanmin "numpy.nanmin")

#### Notes The maximum is equivalent to `np.where(x1 >= x2, x1, x2)` when neither x1 nor x2 are nans, but it is faster and does proper broadcasting.
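The `where`/`out` caveat above matters in practice: positions where the condition is False keep whatever `out` already held, so `out` should be initialized explicitly. A small sketch:

```python
import numpy as np

x1 = np.array([1.0, 5.0, 3.0])
x2 = np.array([4.0, 2.0, 9.0])

# Initialize `out` so the masked-out slot holds a defined value;
# with the default out=None that slot would be left uninitialized.
out = np.zeros(3)
np.maximum(x1, x2, out=out, where=[True, False, True])
print(out)  # [4. 0. 9.]
```

The middle element stays at its initialized value of 0 because the condition is False there; only the first and last slots receive the element-wise maxima.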
#### Examples >>> import numpy as np >>> np.maximum([2, 3, 4], [1, 5, 2]) array([2, 5, 4]) >>> np.maximum(np.eye(2), [0.5, 2]) # broadcasting array([[ 1. , 2. ], [ 0.5, 2. ]]) >>> np.maximum([np.nan, 0, np.nan], [0, np.nan, np.nan]) array([nan, nan, nan]) >>> np.maximum(np.inf, 1) inf

# numpy.may_share_memory numpy.may_share_memory(_a_ , _b_ , _/_ , _max_work =None_) Determine if two arrays might share memory. A return of True does not necessarily mean that the two arrays share any element. It just means that they _might_. Only the memory bounds of a and b are checked by default.

Parameters: **a, b** ndarray Input arrays **max_work** int, optional Effort to spend on solving the overlap problem. See [`shares_memory`](numpy.shares_memory#numpy.shares_memory "numpy.shares_memory") for details. Default for `may_share_memory` is to do a bounds check. Returns: **out** bool

See also [`shares_memory`](numpy.shares_memory#numpy.shares_memory "numpy.shares_memory")

#### Examples >>> import numpy as np >>> np.may_share_memory(np.array([1,2]), np.array([5,8,9])) False >>> x = np.zeros([3, 4]) >>> np.may_share_memory(x[:,0], x[:,1]) True

# numpy.mean numpy.mean(_a_ , _axis=None_ , _dtype=None_ , _out=None_ , _keepdims=<no value>_ , _*_ , _where=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L3735-L3861) Compute the arithmetic mean along the specified axis. Returns the average of the array elements. The average is taken over the flattened array by default, otherwise over the specified axis. [`float64`](../arrays.scalars#numpy.float64 "numpy.float64") intermediate and return values are used for integer inputs.

Parameters: **a** array_like Array containing numbers whose mean is desired. If `a` is not an array, a conversion is attempted. **axis** None or int or tuple of ints, optional Axis or axes along which the means are computed. The default is to compute the mean of the flattened array.
If this is a tuple of ints, a mean is performed over multiple axes, instead of a single axis or all the axes as before. **dtype** data-type, optional Type to use in computing the mean. For integer inputs, the default is [`float64`](../arrays.scalars#numpy.float64 "numpy.float64"); for floating point inputs, it is the same as the input dtype. **out** ndarray, optional Alternate output array in which to place the result. The default is `None`; if provided, it must have the same shape as the expected output, but the type will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the `mean` method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non-default value will be. If the sub-class’ method does not implement `keepdims` any exceptions will be raised. **where** array_like of bool, optional Elements to include in the mean. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.20.0. Returns: **m** ndarray, see dtype parameter above If `out=None`, returns a new array containing the mean values, otherwise a reference to the output array is returned.
See also [`average`](numpy.average#numpy.average "numpy.average") Weighted average [`std`](numpy.std#numpy.std "numpy.std"), [`var`](numpy.var#numpy.var "numpy.var"), [`nanmean`](numpy.nanmean#numpy.nanmean "numpy.nanmean"), [`nanstd`](numpy.nanstd#numpy.nanstd "numpy.nanstd"), [`nanvar`](numpy.nanvar#numpy.nanvar "numpy.nanvar") #### Notes The arithmetic mean is the sum of the elements along the axis divided by the number of elements. Note that for floating-point input, the mean is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for [`float32`](../arrays.scalars#numpy.float32 "numpy.float32") (see example below). Specifying a higher-precision accumulator using the [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") keyword can alleviate this issue. By default, [`float16`](../arrays.scalars#numpy.float16 "numpy.float16") results are computed using [`float32`](../arrays.scalars#numpy.float32 "numpy.float32") intermediates for extra precision. 
#### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> np.mean(a) 2.5 >>> np.mean(a, axis=0) array([2., 3.]) >>> np.mean(a, axis=1) array([1.5, 3.5]) In single precision, `mean` can be inaccurate: >>> a = np.zeros((2, 512*512), dtype=np.float32) >>> a[0, :] = 1.0 >>> a[1, :] = 0.1 >>> np.mean(a) np.float32(0.54999924) Computing the mean in float64 is more accurate: >>> np.mean(a, dtype=np.float64) 0.55000000074505806 # may vary Computing the mean in timedelta64 is available: >>> b = np.array([1, 3], dtype="timedelta64[D]") >>> np.mean(b) np.timedelta64(2,'D') Specifying a where argument: >>> a = np.array([[5, 9, 13], [14, 10, 12], [11, 15, 19]]) >>> np.mean(a) 12.0 >>> np.mean(a, where=[[True], [False], [False]]) 9.0 # numpy.median numpy.median(_a_ , _axis =None_, _out =None_, _overwrite_input =False_, _keepdims =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L3916-L4002) Compute the median along the specified axis. Returns the median of the array elements. Parameters: **a** array_like Input array or object that can be converted to an array. **axis**{int, sequence of int, None}, optional Axis or axes along which the medians are computed. The default, axis=None, will compute the median along a flattened version of the array. If a sequence of axes, the array is first flattened along the given axes, then the median is computed along the resulting flattened axis. **out** ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary. **overwrite_input** bool, optional If True, then allow use of memory of input array `a` for calculations. The input array will be modified by the call to `median`. This will save memory when you do not need to preserve the contents of the input array. Treat the input as undefined, but it will probably be fully or partially sorted. 
Default is False. If `overwrite_input` is `True` and `a` is not already an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), an error will be raised. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `arr`. Returns: **median** ndarray A new array holding the result. If the input contains integers or floats smaller than `float64`, then the output data-type is `np.float64`. Otherwise, the data-type of the output is the same as that of the input. If `out` is specified, that array is returned instead.

See also [`mean`](numpy.mean#numpy.mean "numpy.mean"), [`percentile`](numpy.percentile#numpy.percentile "numpy.percentile")

#### Notes Given a vector `V` of length `N`, the median of `V` is the middle value of a sorted copy of `V`, `V_sorted`: that is, `V_sorted[(N-1)/2]` when `N` is odd, and the average of the two middle values of `V_sorted` when `N` is even.

#### Examples >>> import numpy as np >>> a = np.array([[10, 7, 4], [3, 2, 1]]) >>> a array([[10, 7, 4], [ 3, 2, 1]]) >>> np.median(a) np.float64(3.5) >>> np.median(a, axis=0) array([6.5, 4.5, 2.5]) >>> np.median(a, axis=1) array([7., 2.]) >>> np.median(a, axis=(0, 1)) np.float64(3.5) >>> m = np.median(a, axis=0) >>> out = np.zeros_like(m) >>> np.median(a, axis=0, out=m) array([6.5, 4.5, 2.5]) >>> m array([6.5, 4.5, 2.5]) >>> b = a.copy() >>> np.median(b, axis=1, overwrite_input=True) array([7., 2.]) >>> assert not np.all(a==b) >>> b = a.copy() >>> np.median(b, axis=None, overwrite_input=True) np.float64(3.5) >>> assert not np.all(a==b)

# numpy.memmap.flush method memmap.flush()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/memmap.py#L322-L338) Write any changes in the array to the file on disk. For further information, see [`memmap`](numpy.memmap#numpy.memmap "numpy.memmap").
Parameters: **None** See also [`memmap`](numpy.memmap#numpy.memmap "numpy.memmap") # numpy.memmap _class_ numpy.memmap(_filename_ , _dtype= _, _mode='r+'_ , _offset=0_ , _shape=None_ , _order='C'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/__init__.py) Create a memory-map to an array stored in a _binary_ file on disk. Memory-mapped files are used for accessing small segments of large files on disk, without reading the entire file into memory. NumPy’s memmap’s are array- like objects. This differs from Python’s `mmap` module, which uses file-like objects. This subclass of ndarray has some unpleasant interactions with some operations, because it doesn’t quite fit properly as a subclass. An alternative to using this subclass is to create the `mmap` object yourself, then create an ndarray with ndarray.__new__ directly, passing the object created in its ‘buffer=’ parameter. This class may at some point be turned into a factory function which returns a view into an mmap buffer. Flush the memmap instance to write the changes to the file. Currently there is no API to close the underlying `mmap`. It is tricky to ensure the resource is actually closed, since it may be shared between different memmap instances. Parameters: **filename** str, file-like object, or pathlib.Path instance The file name or file object to be used as the array data buffer. **dtype** data-type, optional The data-type used to interpret the file contents. Default is [`uint8`](../arrays.scalars#numpy.uint8 "numpy.uint8"). **mode**{‘r+’, ‘r’, ‘w+’, ‘c’}, optional The file is opened in this mode: ‘r’ | Open existing file for reading only. ---|--- ‘r+’ | Open existing file for reading and writing. ‘w+’ | Create or overwrite existing file for reading and writing. If `mode == 'w+'` then [`shape`](numpy.shape#numpy.shape "numpy.shape") must also be specified. ‘c’ | Copy-on-write: assignments affect data in memory, but changes are not saved to disk. The file on disk is read-only. 
Default is ‘r+’. **offset** int, optional In the file, array data starts at this offset. Since `offset` is measured in bytes, it should normally be a multiple of the byte-size of [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"). When `mode != 'r'`, even positive offsets beyond end of file are valid; The file will be extended to accommodate the additional data. By default, `memmap` will start at the beginning of the file, even if `filename` is a file pointer `fp` and `fp.tell() != 0`. **shape** int or sequence of ints, optional The desired shape of the array. If `mode == 'r'` and the number of remaining bytes after `offset` is not a multiple of the byte-size of [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), you must specify [`shape`](numpy.shape#numpy.shape "numpy.shape"). By default, the returned array will be 1-D with the number of elements determined by file size and data-type. Changed in version 2.0: The shape parameter can now be any integer sequence type, previously types were limited to tuple and int. **order**{‘C’, ‘F’}, optional Specify the order of the ndarray memory layout: [row- major](../../glossary#term-row-major), C-style or [column- major](../../glossary#term-column-major), Fortran-style. This only has an effect if the shape is greater than 1-D. The default order is ‘C’. See also [`lib.format.open_memmap`](numpy.lib.format.open_memmap#numpy.lib.format.open_memmap "numpy.lib.format.open_memmap") Create or load a memory-mapped `.npy` file. #### Notes The memmap object can be used anywhere an ndarray is accepted. Given a memmap `fp`, `isinstance(fp, numpy.ndarray)` returns `True`. Memory-mapped files cannot be larger than 2GB on 32-bit systems. When a memmap causes a file to be created or extended beyond its current size in the filesystem, the contents of the new part are unspecified. On systems with POSIX filesystem semantics, the extended part will be filled with zero bytes. 
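The class description above mentions an alternative to the `memmap` subclass: create the `mmap` object yourself and wrap its buffer in a plain ndarray. A minimal sketch of that approach, using a scratch temporary file purely for demonstration (the file name and sizes are arbitrary):

```python
import mmap
import os
import tempfile

import numpy as np

# Write six float64 values to a temporary scratch file.
fd, fname = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(np.arange(6, dtype=np.float64).tobytes())

# Map the file ourselves, then view the buffer through a plain ndarray
# instead of the memmap subclass.
with open(fname, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)
    arr = np.ndarray((6,), dtype=np.float64, buffer=mm)
    arr[0] = 99.0        # assignments go straight to the mapped pages
    vals = arr.copy()    # detach a snapshot before closing the mapping
    del arr              # drop the buffer export so the mmap can close
    mm.close()

os.remove(fname)
print(vals.tolist())  # [99.0, 1.0, 2.0, 3.0, 4.0, 5.0]
```

Note that `arr` here is an ordinary `np.ndarray`, so it avoids the subclass interactions mentioned above, but you become responsible for keeping the `mmap` alive while the array exists and for closing it afterwards.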
#### Examples >>> import numpy as np >>> data = np.arange(12, dtype='float32') >>> data.resize((3,4)) This example uses a temporary file so that doctest doesn’t write files to your directory. You would use a ‘normal’ filename. >>> from tempfile import mkdtemp >>> import os.path as path >>> filename = path.join(mkdtemp(), 'newfile.dat') Create a memmap with dtype and shape that matches our data: >>> fp = np.memmap(filename, dtype='float32', mode='w+', shape=(3,4)) >>> fp memmap([[0., 0., 0., 0.], [0., 0., 0., 0.], [0., 0., 0., 0.]], dtype=float32) Write data to memmap array: >>> fp[:] = data[:] >>> fp memmap([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.]], dtype=float32) >>> fp.filename == path.abspath(filename) True Flushes memory changes to disk in order to read them back >>> fp.flush() Load the memmap and verify data was stored: >>> newfp = np.memmap(filename, dtype='float32', mode='r', shape=(3,4)) >>> newfp memmap([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.]], dtype=float32) Read-only memmap: >>> fpr = np.memmap(filename, dtype='float32', mode='r', shape=(3,4)) >>> fpr.flags.writeable False Copy-on-write memmap: >>> fpc = np.memmap(filename, dtype='float32', mode='c', shape=(3,4)) >>> fpc.flags.writeable True It’s possible to assign to copy-on-write array, but values are only written into the memory copy of the array, and not written to disk: >>> fpc memmap([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.]], dtype=float32) >>> fpc[0,:] = 0 >>> fpc memmap([[ 0., 0., 0., 0.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.]], dtype=float32) File on disk is unchanged: >>> fpr memmap([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.]], dtype=float32) Offset into a memmap: >>> fpo = np.memmap(filename, dtype='float32', mode='r', offset=16) >>> fpo memmap([ 4., 5., 6., 7., 8., 9., 10., 11.], dtype=float32) Attributes: **filename** str or pathlib.Path instance Path to the mapped file. **offset** int Offset position in the file. 
**mode** str File mode. #### Methods [`flush`](numpy.memmap.flush#numpy.memmap.flush "numpy.memmap.flush")() | Write any changes in the array to the file on disk. ---|--- # numpy.meshgrid numpy.meshgrid(_* xi_, _copy =True_, _sparse =False_, _indexing ='xy'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L5117-L5265) Return a tuple of coordinate matrices from coordinate vectors. Make N-D coordinate arrays for vectorized evaluations of N-D scalar/vector fields over N-D grids, given one-dimensional coordinate arrays x1, x2,…, xn. Parameters: **x1, x2,…, xn** array_like 1-D arrays representing the coordinates of a grid. **indexing**{‘xy’, ‘ij’}, optional Cartesian (‘xy’, default) or matrix (‘ij’) indexing of output. See Notes for more details. **sparse** bool, optional If True the shape of the returned coordinate array for dimension _i_ is reduced from `(N1, ..., Ni, ... Nn)` to `(1, ..., 1, Ni, 1, ..., 1)`. These sparse coordinate grids are intended to be used with [Broadcasting](../../user/basics.broadcasting#basics-broadcasting). When all coordinates are used in an expression, broadcasting still leads to a fully-dimensional result array. Default is False. **copy** bool, optional If False, views into the original arrays are returned in order to conserve memory. Default is True. Please note that `sparse=False, copy=False` will likely return non-contiguous arrays. Furthermore, more than one element of a broadcast array may refer to a single memory location. If you need to write to the arrays, make copies first. Returns: **X1, X2,…, XN** tuple of ndarrays For vectors `x1`, `x2`,…, `xn` with lengths `Ni=len(xi)`, returns `(N1, N2, N3,..., Nn)` shaped arrays if indexing=’ij’ or `(N2, N1, N3,..., Nn)` shaped arrays if indexing=’xy’ with the elements of `xi` repeated to fill the matrix along the first dimension for `x1`, the second for `x2` and so on. 
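The shape rule in the Returns entry can be verified directly; a minimal sketch with two short coordinate vectors:

```python
import numpy as np

x = np.arange(3)  # length N1 = 3
y = np.arange(2)  # length N2 = 2

# 'ij' (matrix) indexing: outputs have shape (N1, N2).
xi, yi = np.meshgrid(x, y, indexing='ij')
print(xi.shape, yi.shape)  # (3, 2) (3, 2)

# 'xy' (Cartesian, the default): the first two axes are swapped, (N2, N1).
xc, yc = np.meshgrid(x, y, indexing='xy')
print(xc.shape, yc.shape)  # (2, 3) (2, 3)

# In 2-D the two conventions are transposes of one another.
print(np.array_equal(xi, xc.T))  # True
```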
See also [`mgrid`](numpy.mgrid#numpy.mgrid "numpy.mgrid") Construct a multi-dimensional “meshgrid” using indexing notation. [`ogrid`](numpy.ogrid#numpy.ogrid "numpy.ogrid") Construct an open multi-dimensional “meshgrid” using indexing notation. [How to index ndarrays](../../user/how-to-index#how-to-index) #### Notes This function supports both indexing conventions through the indexing keyword argument. Giving the string ‘ij’ returns a meshgrid with matrix indexing, while ‘xy’ returns a meshgrid with Cartesian indexing. In the 2-D case with inputs of length M and N, the outputs are of shape (N, M) for ‘xy’ indexing and (M, N) for ‘ij’ indexing. In the 3-D case with inputs of length M, N and P, outputs are of shape (N, M, P) for ‘xy’ indexing and (M, N, P) for ‘ij’ indexing. The difference is illustrated by the following code snippet: xv, yv = np.meshgrid(x, y, indexing='ij') for i in range(nx): for j in range(ny): # treat xv[i,j], yv[i,j] xv, yv = np.meshgrid(x, y, indexing='xy') for i in range(nx): for j in range(ny): # treat xv[j,i], yv[j,i] In the 1-D and 0-D case, the indexing and sparse keywords have no effect. #### Examples >>> import numpy as np >>> nx, ny = (3, 2) >>> x = np.linspace(0, 1, nx) >>> y = np.linspace(0, 1, ny) >>> xv, yv = np.meshgrid(x, y) >>> xv array([[0. , 0.5, 1. ], [0. , 0.5, 1. ]]) >>> yv array([[0., 0., 0.], [1., 1., 1.]]) The result of `meshgrid` is a coordinate grid: >>> import matplotlib.pyplot as plt >>> plt.plot(xv, yv, marker='o', color='k', linestyle='none') >>> plt.show() You can create sparse output arrays to save memory and computation time. >>> xv, yv = np.meshgrid(x, y, sparse=True) >>> xv array([[0. , 0.5, 1. ]]) >>> yv array([[0.], [1.]]) `meshgrid` is very useful to evaluate functions on a grid. If the function depends on all coordinates, both dense and sparse outputs can be used. 
>>> x = np.linspace(-5, 5, 101) >>> y = np.linspace(-5, 5, 101) >>> # full coordinate arrays >>> xx, yy = np.meshgrid(x, y) >>> zz = np.sqrt(xx**2 + yy**2) >>> xx.shape, yy.shape, zz.shape ((101, 101), (101, 101), (101, 101)) >>> # sparse coordinate arrays >>> xs, ys = np.meshgrid(x, y, sparse=True) >>> zs = np.sqrt(xs**2 + ys**2) >>> xs.shape, ys.shape, zs.shape ((1, 101), (101, 1), (101, 101)) >>> np.array_equal(zz, zs) True >>> h = plt.contourf(x, y, zs) >>> plt.axis('scaled') >>> plt.colorbar() >>> plt.show() # numpy.mgrid numpy.mgrid _= _ An instance which returns a dense multi-dimensional “meshgrid”. An instance which returns a dense (or fleshed out) mesh-grid when indexed, so that each returned argument has the same shape. The dimensions and number of the output arrays are equal to the number of indexing dimensions. If the step length is not a complex number, then the stop is not inclusive. However, if the step length is a **complex number** (e.g. 5j), then the integer part of its magnitude is interpreted as specifying the number of points to create between the start and stop values, where the stop value **is inclusive**. Returns: **mesh-grid** ndarray A single array, containing a set of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray")s all of the same dimensions. stacked along the first axis. See also [`ogrid`](numpy.ogrid#numpy.ogrid "numpy.ogrid") like `mgrid` but returns open (not fleshed out) mesh grids [`meshgrid`](numpy.meshgrid#numpy.meshgrid "numpy.meshgrid") return coordinate matrices from coordinate vectors [`r_`](numpy.r_#numpy.r_ "numpy.r_") array concatenator [How to create arrays with regularly-spaced values](../../user/how-to- partition#how-to-partition) #### Examples >>> import numpy as np >>> np.mgrid[0:5, 0:5] array([[[0, 0, 0, 0, 0], [1, 1, 1, 1, 1], [2, 2, 2, 2, 2], [3, 3, 3, 3, 3], [4, 4, 4, 4, 4]], [[0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4]]]) >>> np.mgrid[-1:1:5j] array([-1. , -0.5, 0. 
, 0.5, 1. ]) >>> np.mgrid[0:4].shape (4,) >>> np.mgrid[0:4, 0:5].shape (2, 4, 5) >>> np.mgrid[0:4, 0:5, 0:6].shape (3, 4, 5, 6) # numpy.min numpy.min(_a_ , _axis=None_ , _out=None_ , _keepdims= _, _initial= _, _where= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L3190-L3303) Return the minimum of an array or minimum along an axis. Parameters: **a** array_like Input data. **axis** None or int or tuple of ints, optional Axis or axes along which to operate. By default, flattened input is used. If this is a tuple of ints, the minimum is selected over multiple axes, instead of a single axis or all the axes as before. **out** ndarray, optional Alternative output array in which to place the result. Must be of the same shape and buffer length as the expected output. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the `min` method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non-default value will be. If the sub-class’ method does not implement `keepdims` any exceptions will be raised. **initial** scalar, optional The maximum value of an output element. Must be present to allow computation on empty slice. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. **where** array_like of bool, optional Elements to compare for the minimum. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. Returns: **min** ndarray or scalar Minimum of `a`. If `axis` is None, the result is a scalar value. If `axis` is an int, the result is an array of dimension `a.ndim - 1`. 
If `axis` is a tuple, the result is an array of dimension `a.ndim - len(axis)`. See also [`amax`](numpy.amax#numpy.amax "numpy.amax") The maximum value of an array along a given axis, propagating any NaNs. [`nanmin`](numpy.nanmin#numpy.nanmin "numpy.nanmin") The minimum value of an array along a given axis, ignoring any NaNs. [`minimum`](numpy.minimum#numpy.minimum "numpy.minimum") Element-wise minimum of two arrays, propagating any NaNs. [`fmin`](numpy.fmin#numpy.fmin "numpy.fmin") Element-wise minimum of two arrays, ignoring any NaNs. [`argmin`](numpy.argmin#numpy.argmin "numpy.argmin") Return the indices of the minimum values. [`nanmax`](numpy.nanmax#numpy.nanmax "numpy.nanmax"), [`maximum`](numpy.maximum#numpy.maximum "numpy.maximum"), [`fmax`](numpy.fmax#numpy.fmax "numpy.fmax") #### Notes NaN values are propagated, that is, if at least one item is NaN, the corresponding min value will be NaN as well. To ignore NaN values (MATLAB behavior), please use nanmin. Don’t use `min` for element-wise comparison of 2 arrays; when `a.shape[0]` is 2, `minimum(a[0], a[1])` is faster than `min(a, axis=0)`. #### Examples >>> import numpy as np >>> a = np.arange(4).reshape((2,2)) >>> a array([[0, 1], [2, 3]]) >>> np.min(a) # Minimum of the flattened array 0 >>> np.min(a, axis=0) # Minima along the first axis array([0, 1]) >>> np.min(a, axis=1) # Minima along the second axis array([0, 2]) >>> np.min(a, where=[False, True], initial=10, axis=0) array([10, 1]) >>> b = np.arange(5, dtype=float) >>> b[2] = np.nan >>> np.min(b) np.float64(nan) >>> np.min(b, where=~np.isnan(b), initial=10) 0.0 >>> np.nanmin(b) 0.0 >>> np.min([[-50], [10]], axis=-1, initial=0) array([-50, 0]) Notice that the initial value is used as one of the elements for which the minimum is determined, unlike the `default` argument of Python’s `min` function, which is only used for empty iterables. 
>>> np.min([6], initial=5) 5 >>> min([6], default=5) 6 # numpy.min_scalar_type numpy.min_scalar_type(_a_ , _/_) For scalar `a`, returns the data type with the smallest size and smallest scalar kind which can hold its value. For non-scalar array `a`, returns the vector’s dtype unmodified. Floating point values are not demoted to integers, and complex values are not demoted to floats. Parameters: **a** scalar or array_like The value whose minimal data type is to be found. Returns: **out** dtype The minimal data type. See also [`result_type`](numpy.result_type#numpy.result_type "numpy.result_type"), [`promote_types`](numpy.promote_types#numpy.promote_types "numpy.promote_types"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), [`can_cast`](numpy.can_cast#numpy.can_cast "numpy.can_cast") #### Examples >>> import numpy as np >>> np.min_scalar_type(10) dtype('uint8') >>> np.min_scalar_type(-260) dtype('int16') >>> np.min_scalar_type(3.1) dtype('float16') >>> np.min_scalar_type(1e50) dtype('float64') >>> np.min_scalar_type(np.arange(4,dtype='f8')) dtype('float64') # numpy.minimum numpy.minimum(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Element-wise minimum of array elements. Compare two arrays and return a new array containing the element-wise minima. If one of the elements being compared is a NaN, then that element is returned. If both elements are NaNs then the first is returned. The latter distinction is important for complex NaNs, which are defined as at least one of the real or imaginary parts being a NaN. The net effect is that NaNs are propagated. Parameters: **x1, x2** array_like The arrays holding the elements to be compared. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. 
If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray or scalar The minimum of `x1` and `x2`, element-wise. This is a scalar if both `x1` and `x2` are scalars. See also [`maximum`](numpy.maximum#numpy.maximum "numpy.maximum") Element-wise maximum of two arrays, propagates NaNs. [`fmin`](numpy.fmin#numpy.fmin "numpy.fmin") Element-wise minimum of two arrays, ignores NaNs. [`amin`](numpy.amin#numpy.amin "numpy.amin") The minimum value of an array along a given axis, propagates NaNs. [`nanmin`](numpy.nanmin#numpy.nanmin "numpy.nanmin") The minimum value of an array along a given axis, ignores NaNs. [`fmax`](numpy.fmax#numpy.fmax "numpy.fmax"), [`amax`](numpy.amax#numpy.amax "numpy.amax"), [`nanmax`](numpy.nanmax#numpy.nanmax "numpy.nanmax") #### Notes The minimum is equivalent to `np.where(x1 <= x2, x1, x2)` when neither x1 nor x2 are NaNs, but it is faster and does proper broadcasting. #### Examples >>> import numpy as np >>> np.minimum([2, 3, 4], [1, 5, 2]) array([1, 3, 2]) >>> np.minimum(np.eye(2), [0.5, 2]) # broadcasting array([[ 0.5, 0. ], [ 0. , 1. 
]]) >>> np.minimum([np.nan, 0, np.nan],[0, np.nan, np.nan]) array([nan, nan, nan]) >>> np.minimum(-np.inf, 1) -inf # numpy.mintypecode numpy.mintypecode(_typechars_ , _typeset ='GDFgdf'_, _default ='d'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_type_check_impl.py#L25-L77) Return the character for the minimum-size type to which given types can be safely cast. The returned type character must represent the smallest size dtype such that an array of the returned type can handle the data from an array of all types in `typechars` (or if `typechars` is an array, then its dtype.char). Parameters: **typechars** list of str or array_like If a list of strings, each string should represent a dtype. If array_like, the character representation of the array dtype is used. **typeset** str or list of str, optional The set of characters that the returned character is chosen from. The default set is ‘GDFgdf’. **default** str, optional The default character, this is returned if none of the characters in `typechars` matches a character in `typeset`. Returns: **typechar** str The character representing the minimum-size type that was found. See also [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") #### Examples >>> import numpy as np >>> np.mintypecode(['d', 'f', 'S']) 'd' >>> x = np.array([1.1, 2-3.j]) >>> np.mintypecode(x) 'D' >>> np.mintypecode('abceh', default='G') 'G' # numpy.mod numpy.mod(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Returns the element-wise remainder of division. Computes the remainder complementary to the [`floor_divide`](numpy.floor_divide#numpy.floor_divide "numpy.floor_divide") function. It is equivalent to the Python modulus operator `x1 % x2` and has the same sign as the divisor `x2`. The MATLAB function equivalent to `np.remainder` is `mod`. 
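Because the result takes the sign of the divisor `x2`, `np.mod` differs from C-style remainders on negative operands; a quick sketch of the convention:

```python
import numpy as np

# np.mod follows the sign of the divisor x2 (floor-division remainder),
# unlike np.fmod / C's %, which follow the sign of the dividend x1.
print(np.mod(-5, 3))   # 1,  since -5 == -2*3 + 1
print(np.mod(5, -3))   # -1, since  5 == -2*(-3) + (-1)
print(np.fmod(-5, 3))  # -2, truncated-division remainder instead
```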
Warning This should not be confused with: * Python 3.7’s [`math.remainder`](https://docs.python.org/3/library/math.html#math.remainder "\(in Python v3.13\)") and C’s `remainder`, which compute the IEEE remainder, the complement to `round(x1 / x2)`. * The MATLAB `rem` function and the C `%` operator, which are the complement to `int(x1 / x2)`. Parameters: **x1** array_like Dividend array. **x2** array_like Divisor array. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray The element-wise remainder of the quotient `floor_divide(x1, x2)`. This is a scalar if both `x1` and `x2` are scalars. See also [`floor_divide`](numpy.floor_divide#numpy.floor_divide "numpy.floor_divide") Equivalent of Python `//` operator. [`divmod`](numpy.divmod#numpy.divmod "numpy.divmod") Simultaneous floor division and remainder. [`fmod`](numpy.fmod#numpy.fmod "numpy.fmod") Equivalent of the MATLAB `rem` function. [`divide`](numpy.divide#numpy.divide "numpy.divide"), [`floor`](numpy.floor#numpy.floor "numpy.floor") #### Notes Returns 0 when `x2` is 0 and both `x1` and `x2` are (arrays of) integers. 
`mod` is an alias of `remainder`. #### Examples >>> import numpy as np >>> np.remainder([4, 7], [2, 3]) array([0, 1]) >>> np.remainder(np.arange(7), 5) array([0, 1, 2, 3, 4, 0, 1]) The `%` operator can be used as a shorthand for `np.remainder` on ndarrays. >>> x1 = np.arange(7) >>> x1 % 5 array([0, 1, 2, 3, 4, 0, 1]) # numpy.modf numpy.modf(_x_ , [_out1_ , _out2_ , ]_/_ , [_out=(None_ , _None)_ , ]_*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Return the fractional and integral parts of an array, element-wise. The fractional and integral parts are negative if the given number is negative. Parameters: **x** array_like Input array. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y1** ndarray Fractional part of `x`. This is a scalar if `x` is a scalar. **y2** ndarray Integral part of `x`. This is a scalar if `x` is a scalar. See also [`divmod`](numpy.divmod#numpy.divmod "numpy.divmod") `divmod(x, 1)` is equivalent to `modf` with the return values switched, except it always has a positive remainder. #### Notes For integer input the return values are floats. #### Examples >>> import numpy as np >>> np.modf([0, 3.5]) (array([ 0. 
, 0.5]), array([ 0., 3.])) >>> np.modf(-0.5) (-0.5, -0) # numpy.moveaxis numpy.moveaxis(_a_ , _source_ , _destination_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L1448-L1515) Move axes of an array to new positions. Other axes remain in their original order. Parameters: **a** np.ndarray The array whose axes should be reordered. **source** int or sequence of int Original positions of the axes to move. These must be unique. **destination** int or sequence of int Destination positions for each of the original axes. These must also be unique. Returns: **result** np.ndarray Array with moved axes. This array is a view of the input array. See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose") Permute the dimensions of an array. [`swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") Interchange two axes of an array. #### Examples >>> import numpy as np >>> x = np.zeros((3, 4, 5)) >>> np.moveaxis(x, 0, -1).shape (4, 5, 3) >>> np.moveaxis(x, -1, 0).shape (5, 3, 4) These all achieve the same result: >>> np.transpose(x).shape (5, 4, 3) >>> np.swapaxes(x, 0, -1).shape (5, 4, 3) >>> np.moveaxis(x, [0, 1], [-1, -2]).shape (5, 4, 3) >>> np.moveaxis(x, [0, 1, 2], [-1, -2, -3]).shape (5, 4, 3) # numpy.multiply numpy.multiply(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Multiply arguments element-wise. Parameters: **x1, x2** array_like Input arrays to be multiplied. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. 
**where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray The product of `x1` and `x2`, element-wise. This is a scalar if both `x1` and `x2` are scalars. #### Notes Equivalent to `x1` * `x2` in terms of array broadcasting. #### Examples >>> import numpy as np >>> np.multiply(2.0, 4.0) 8.0 >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = np.arange(3.0) >>> np.multiply(x1, x2) array([[ 0., 1., 4.], [ 0., 4., 10.], [ 0., 7., 16.]]) The `*` operator can be used as a shorthand for `np.multiply` on ndarrays. >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = np.arange(3.0) >>> x1 * x2 array([[ 0., 1., 4.], [ 0., 4., 10.], [ 0., 7., 16.]]) # numpy.nan_to_num numpy.nan_to_num(_x_ , _copy =True_, _nan =0.0_, _posinf =None_, _neginf =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_type_check_impl.py#L373-L481) Replace NaN with zero and infinity with large finite numbers (default behaviour) or with the numbers defined by the user using the [`nan`](../constants#numpy.nan "numpy.nan"), `posinf` and/or `neginf` keywords. If `x` is inexact, NaN is replaced by zero or by the user defined value in [`nan`](../constants#numpy.nan "numpy.nan") keyword, infinity is replaced by the largest finite floating point values representable by `x.dtype` or by the user defined value in `posinf` keyword and -infinity is replaced by the most negative finite floating point values representable by `x.dtype` or by the user defined value in `neginf` keyword. 
For complex dtypes, the above is applied to each of the real and imaginary components of `x` separately. If `x` is not inexact, then no replacements are made. Parameters: **x** scalar or array_like Input data. **copy** bool, optional Whether to create a copy of `x` (True) or to replace values in-place (False). The in-place operation only occurs if casting to an array does not require a copy. Default is True. **nan** int, float, optional Value to be used to fill NaN values. If no value is passed then NaN values will be replaced with 0.0. **posinf** int, float, optional Value to be used to fill positive infinity values. If no value is passed then positive infinity values will be replaced with a very large number. **neginf** int, float, optional Value to be used to fill negative infinity values. If no value is passed then negative infinity values will be replaced with a very small (or negative) number. Returns: **out** ndarray `x`, with the non-finite values replaced. If [`copy`](numpy.copy#numpy.copy "numpy.copy") is False, this may be `x` itself. See also [`isinf`](numpy.isinf#numpy.isinf "numpy.isinf") Shows which elements are positive or negative infinity. [`isneginf`](numpy.isneginf#numpy.isneginf "numpy.isneginf") Shows which elements are negative infinity. [`isposinf`](numpy.isposinf#numpy.isposinf "numpy.isposinf") Shows which elements are positive infinity. [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan") Shows which elements are Not a Number (NaN). [`isfinite`](numpy.isfinite#numpy.isfinite "numpy.isfinite") Shows which elements are finite (not NaN, not infinity) #### Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity. 
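The `copy` parameter above allows in-place replacement; a minimal sketch (the fill values `1e6`/`-1e6` are arbitrary choices for the demonstration), assuming the input is already a float array so no cast or copy is needed:

```python
import numpy as np

x = np.array([np.nan, np.inf, -np.inf, 2.0])

# With copy=False the replacement happens in the existing array,
# and the returned array is x itself.
out = np.nan_to_num(x, copy=False, nan=0.0, posinf=1e6, neginf=-1e6)
print(out is x)       # True
print(x.tolist())     # [0.0, 1000000.0, -1000000.0, 2.0]
```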
#### Examples >>> import numpy as np >>> np.nan_to_num(np.inf) 1.7976931348623157e+308 >>> np.nan_to_num(-np.inf) -1.7976931348623157e+308 >>> np.nan_to_num(np.nan) 0.0 >>> x = np.array([np.inf, -np.inf, np.nan, -128, 128]) >>> np.nan_to_num(x) array([ 1.79769313e+308, -1.79769313e+308, 0.00000000e+000, # may vary -1.28000000e+002, 1.28000000e+002]) >>> np.nan_to_num(x, nan=-9999, posinf=33333333, neginf=33333333) array([ 3.3333333e+07, 3.3333333e+07, -9.9990000e+03, -1.2800000e+02, 1.2800000e+02]) >>> y = np.array([complex(np.inf, np.nan), np.nan, complex(np.nan, np.inf)]) >>> np.nan_to_num(y) array([ 1.79769313e+308 +0.00000000e+000j, # may vary 0.00000000e+000 +0.00000000e+000j, 0.00000000e+000 +1.79769313e+308j]) >>> np.nan_to_num(y, nan=111111, posinf=222222) array([222222.+111111.j, 111111. +0.j, 111111.+222222.j]) # numpy.nanargmax numpy.nanargmax(_a_ , _axis=None_ , _out=None_ , _*_ , _keepdims= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_nanfunctions_impl.py#L572-L627) Return the indices of the maximum values in the specified axis ignoring NaNs. For all-NaN slices `ValueError` is raised. Warning: the results cannot be trusted if a slice contains only NaNs and -Infs. Parameters: **a** array_like Input data. **axis** int, optional Axis along which to operate. By default flattened input is used. **out** array, optional If provided, the result will be inserted into this array. It should be of the appropriate shape and dtype. New in version 1.22.0. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. New in version 1.22.0. Returns: **index_array** ndarray An array of indices or a single index value. 
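The all-NaN-slice behaviour described above can be observed directly; a minimal sketch:

```python
import numpy as np

a = np.array([[np.nan, np.nan],   # row 0 is an all-NaN slice
              [1.0, 2.0]])

# Along axis=1, row 0 has no non-NaN value, so nanargmax raises ValueError.
try:
    np.nanargmax(a, axis=1)
    all_nan_ok = True
except ValueError:
    all_nan_ok = False
print(all_nan_ok)          # False: the all-NaN row triggered the error

# Slices with at least one non-NaN element are fine.
print(np.nanargmax(a[1]))  # 1 (the 2.0 at index 1)
```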
See also [`argmax`](numpy.argmax#numpy.argmax "numpy.argmax"), [`nanargmin`](numpy.nanargmin#numpy.nanargmin "numpy.nanargmin") #### Examples >>> import numpy as np >>> a = np.array([[np.nan, 4], [2, 3]]) >>> np.argmax(a) 0 >>> np.nanargmax(a) 1 >>> np.nanargmax(a, axis=0) array([1, 0]) >>> np.nanargmax(a, axis=1) array([1, 1]) # numpy.nanargmin numpy.nanargmin(_a_ , _axis=None_ , _out=None_ , _*_ , _keepdims= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_nanfunctions_impl.py#L511-L565) Return the indices of the minimum values in the specified axis ignoring NaNs. For all-NaN slices `ValueError` is raised. Warning: the results cannot be trusted if a slice contains only NaNs and Infs. Parameters: **a** array_like Input data. **axis** int, optional Axis along which to operate. By default flattened input is used. **out** array, optional If provided, the result will be inserted into this array. It should be of the appropriate shape and dtype. New in version 1.22.0. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. New in version 1.22.0. Returns: **index_array** ndarray An array of indices or a single index value. See also [`argmin`](numpy.argmin#numpy.argmin "numpy.argmin"), [`nanargmax`](numpy.nanargmax#numpy.nanargmax "numpy.nanargmax") #### Examples >>> import numpy as np >>> a = np.array([[np.nan, 4], [2, 3]]) >>> np.argmin(a) 0 >>> np.nanargmin(a) 2 >>> np.nanargmin(a, axis=0) array([1, 1]) >>> np.nanargmin(a, axis=1) array([1, 0]) # numpy.nancumprod numpy.nancumprod(_a_ , _axis =None_, _dtype =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_nanfunctions_impl.py#L887-L946) Return the cumulative product of array elements over a given axis treating Not a Numbers (NaNs) as one. 
The cumulative product does not change when NaNs are encountered and leading NaNs are replaced by ones. Ones are returned for slices that are all-NaN or empty. Parameters: **a** array_like Input array. **axis** int, optional Axis along which the cumulative product is computed. By default the input is flattened. **dtype** dtype, optional Type of the returned array, as well as of the accumulator in which the elements are multiplied. If _dtype_ is not specified, it defaults to the dtype of `a`, unless `a` has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used instead. **out** ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type of the resulting values will be cast if necessary. Returns: **nancumprod** ndarray A new array holding the result is returned unless `out` is specified, in which case it is returned. See also [`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod") Cumulative product across array propagating NaNs. [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan") Show which elements are NaN. #### Examples >>> import numpy as np >>> np.nancumprod(1) array([1]) >>> np.nancumprod([1]) array([1]) >>> np.nancumprod([1, np.nan]) array([1., 1.]) >>> a = np.array([[1, 2], [3, np.nan]]) >>> np.nancumprod(a) array([1., 2., 6., 6.]) >>> np.nancumprod(a, axis=0) array([[1., 2.], [3., 2.]]) >>> np.nancumprod(a, axis=1) array([[1., 2.], [3., 3.]]) # numpy.nancumsum numpy.nancumsum(_a_ , _axis =None_, _dtype =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_nanfunctions_impl.py#L818-L880) Return the cumulative sum of array elements over a given axis treating Not a Numbers (NaNs) as zero. The cumulative sum does not change when NaNs are encountered and leading NaNs are replaced by zeros. Zeros are returned for slices that are all-NaN or empty. 
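The semantics above amount to zero-filling NaNs and then taking an ordinary cumulative sum; a small sketch of that equivalence (the sample values are my own):

```python
import numpy as np

# nancumsum treats NaN as zero, so it matches cumsum over a
# zero-filled copy of the input.
a = np.array([np.nan, 1.0, np.nan, 2.0])
result = np.nancumsum(a)
expected = np.cumsum(np.where(np.isnan(a), 0.0, a))
assert np.array_equal(result, expected)  # both are [0., 1., 1., 3.]
```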
Parameters: **a** array_like Input array. **axis** int, optional Axis along which the cumulative sum is computed. The default (None) is to compute the cumsum over the flattened array. **dtype** dtype, optional Type of the returned array and of the accumulator in which the elements are summed. If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is not specified, it defaults to the dtype of `a`, unless `a` has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used. **out** ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details. Returns: **nancumsum** ndarray A new array holding the result is returned unless `out` is specified, in which case it is returned. The result has the same size as `a`, and the same shape as `a` if `axis` is not None or `a` is a 1-d array. See also [`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") Cumulative sum across array propagating NaNs. [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan") Show which elements are NaN. #### Examples >>> import numpy as np >>> np.nancumsum(1) array([1]) >>> np.nancumsum([1]) array([1]) >>> np.nancumsum([1, np.nan]) array([1., 1.]) >>> a = np.array([[1, 2], [3, np.nan]]) >>> np.nancumsum(a) array([1., 3., 6., 6.]) >>> np.nancumsum(a, axis=0) array([[1., 2.], [4., 2.]]) >>> np.nancumsum(a, axis=1) array([[1., 3.], [3., 3.]]) # numpy.nanmax numpy.nanmax(_a_ , _axis=None_ , _out=None_ , _keepdims= _, _initial= _, _where= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_nanfunctions_impl.py#L383-L504) Return the maximum of an array or maximum along an axis, ignoring any NaNs. When all-NaN slices are encountered a `RuntimeWarning` is raised and NaN is returned for that slice.
Parameters: **a** array_like Array containing numbers whose maximum is desired. If `a` is not an array, a conversion is attempted. **axis**{int, tuple of int, None}, optional Axis or axes along which the maximum is computed. The default is to compute the maximum of the flattened array. **out** ndarray, optional Alternate output array in which to place the result. The default is `None`; if provided, it must have the same shape as the expected output, but the type will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `a`. If the value is anything but the default, then `keepdims` will be passed through to the [`max`](numpy.max#numpy.max "numpy.max") method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). If the sub-class's method does not implement `keepdims`, any exceptions will be raised. **initial** scalar, optional The minimum value of an output element. Must be present to allow computation on an empty slice. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.22.0. **where** array_like of bool, optional Elements to compare for the maximum. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.22.0. Returns: **nanmax** ndarray An array with the same shape as `a`, with the specified axis removed. If `a` is a 0-d array, or if axis is None, an ndarray scalar is returned. The same dtype as `a` is returned. See also [`nanmin`](numpy.nanmin#numpy.nanmin "numpy.nanmin") The minimum value of an array along a given axis, ignoring any NaNs. [`amax`](numpy.amax#numpy.amax "numpy.amax") The maximum value of an array along a given axis, propagating any NaNs.
[`fmax`](numpy.fmax#numpy.fmax "numpy.fmax") Element-wise maximum of two arrays, ignoring any NaNs. [`maximum`](numpy.maximum#numpy.maximum "numpy.maximum") Element-wise maximum of two arrays, propagating any NaNs. [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan") Shows which elements are Not a Number (NaN). [`isfinite`](numpy.isfinite#numpy.isfinite "numpy.isfinite") Shows which elements are neither NaN nor infinity. [`amin`](numpy.amin#numpy.amin "numpy.amin"), [`fmin`](numpy.fmin#numpy.fmin "numpy.fmin"), [`minimum`](numpy.minimum#numpy.minimum "numpy.minimum") #### Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity. Positive infinity is treated as a very large number and negative infinity is treated as a very small (i.e. negative) number. If the input has an integer type, the function is equivalent to np.max. #### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, np.nan]]) >>> np.nanmax(a) 3.0 >>> np.nanmax(a, axis=0) array([3., 2.]) >>> np.nanmax(a, axis=1) array([2., 3.]) When positive infinity and negative infinity are present: >>> np.nanmax([1, 2, np.nan, -np.inf]) 2.0 >>> np.nanmax([1, 2, np.nan, np.inf]) inf # numpy.nanmean numpy.nanmean(_a_ , _axis=None_ , _dtype=None_ , _out=None_ , _keepdims= _, _*_ , _where= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_nanfunctions_impl.py#L954-L1056) Compute the arithmetic mean along the specified axis, ignoring NaNs. Returns the average of the array elements. The average is taken over the flattened array by default, otherwise over the specified axis. [`float64`](../arrays.scalars#numpy.float64 "numpy.float64") intermediate and return values are used for integer inputs. For all-NaN slices, NaN is returned and a `RuntimeWarning` is raised. Parameters: **a** array_like Array containing numbers whose mean is desired. If `a` is not an array, a conversion is attempted.
**axis**{int, tuple of int, None}, optional Axis or axes along which the means are computed. The default is to compute the mean of the flattened array. **dtype** data-type, optional Type to use in computing the mean. For integer inputs, the default is [`float64`](../arrays.scalars#numpy.float64 "numpy.float64"); for inexact inputs, it is the same as the input dtype. **out** ndarray, optional Alternate output array in which to place the result. The default is `None`; if provided, it must have the same shape as the expected output, but the type will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `a`. If the value is anything but the default, then `keepdims` will be passed through to the [`mean`](numpy.mean#numpy.mean "numpy.mean") or [`sum`](numpy.sum#numpy.sum "numpy.sum") methods of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). If the sub-class's method does not implement `keepdims`, any exceptions will be raised. **where** array_like of bool, optional Elements to include in the mean. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.22.0. Returns: **m** ndarray, see dtype parameter above If `out=None`, returns a new array containing the mean values, otherwise a reference to the output array is returned. NaN is returned for slices that contain only NaNs.
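Since `where` (new in 1.22.0) has no example in the official set, here is a hedged sketch: the mask composes with the usual NaN skipping (the values are my own):

```python
import numpy as np

# Average only the elements selected by `where`; NaN compares False,
# so the mask below excludes it as well.
a = np.array([[1.0, np.nan, 9.0], [2.0, 4.0, 6.0]])
m = np.nanmean(a, where=(a < 5.0))
assert m == (1.0 + 2.0 + 4.0) / 3  # mean of 1, 2 and 4
```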
See also [`average`](numpy.average#numpy.average "numpy.average") Weighted average [`mean`](numpy.mean#numpy.mean "numpy.mean") Arithmetic mean taken while not ignoring NaNs [`var`](numpy.var#numpy.var "numpy.var"), [`nanvar`](numpy.nanvar#numpy.nanvar "numpy.nanvar") #### Notes The arithmetic mean is the sum of the non-NaN elements along the axis divided by the number of non-NaN elements. Note that for floating-point input, the mean is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for [`float32`](../arrays.scalars#numpy.float32 "numpy.float32"). Specifying a higher-precision accumulator using the [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") keyword can alleviate this issue. #### Examples >>> import numpy as np >>> a = np.array([[1, np.nan], [3, 4]]) >>> np.nanmean(a) 2.6666666666666665 >>> np.nanmean(a, axis=0) array([2., 4.]) >>> np.nanmean(a, axis=1) array([1., 3.5]) # may vary # numpy.nanmedian numpy.nanmedian(_a_ , _axis=None_ , _out=None_ , _overwrite_input=False_ , _keepdims= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_nanfunctions_impl.py#L1127-L1219) Compute the median along the specified axis, while ignoring NaNs. Returns the median of the array elements. Parameters: **a** array_like Input array or object that can be converted to an array. **axis**{int, sequence of int, None}, optional Axis or axes along which the medians are computed. The default is to compute the median along a flattened version of the array. A sequence of axes is supported since version 1.9.0. **out** ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary. **overwrite_input** bool, optional If True, then allow use of memory of input array `a` for calculations. 
The input array will be modified by the call to [`median`](numpy.median#numpy.median "numpy.median"). This will save memory when you do not need to preserve the contents of the input array. Treat the input as undefined, but it will probably be fully or partially sorted. Default is False. If `overwrite_input` is `True` and `a` is not already an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), an error will be raised. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `a`. If this is anything but the default value it will be passed through (in the special case of an empty array) to the [`mean`](numpy.mean#numpy.mean "numpy.mean") function of the underlying array. If the array is a sub-class and [`mean`](numpy.mean#numpy.mean "numpy.mean") does not have the kwarg `keepdims` this will raise a RuntimeError. Returns: **median** ndarray A new array holding the result. If the input contains integers or floats smaller than `float64`, then the output data-type is `np.float64`. Otherwise, the data-type of the output is the same as that of the input. If `out` is specified, that array is returned instead. See also [`mean`](numpy.mean#numpy.mean "numpy.mean"), [`median`](numpy.median#numpy.median "numpy.median"), [`percentile`](numpy.percentile#numpy.percentile "numpy.percentile") #### Notes Given a vector `V` of length `N`, the median of `V` is the middle value of a sorted copy of `V`, `V_sorted` - i.e., `V_sorted[(N-1)/2]`, when `N` is odd and the average of the two middle values of `V_sorted` when `N` is even. #### Examples >>> import numpy as np >>> a = np.array([[10.0, 7, 4], [3, 2, 1]]) >>> a[0, 1] = np.nan >>> a array([[10., nan, 4.], [ 3., 2., 1.]]) >>> np.median(a) np.float64(nan) >>> np.nanmedian(a) 3.0 >>> np.nanmedian(a, axis=0) array([6.5, 2. , 2.5]) >>> np.median(a, axis=1) array([nan, 2.]) >>> b = a.copy() >>> np.nanmedian(b, axis=1, overwrite_input=True) array([7., 2.]) >>> assert not np.all(a==b) >>> b = a.copy() >>> np.nanmedian(b, axis=None, overwrite_input=True) 3.0 >>> assert not np.all(a==b) # numpy.nanmin numpy.nanmin(_a_ , _axis=None_ , _out=None_ , _keepdims= _, _initial= _, _where= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_nanfunctions_impl.py#L253-L375) Return minimum of an array or minimum along an axis, ignoring any NaNs. When all-NaN slices are encountered a `RuntimeWarning` is raised and NaN is returned for that slice. Parameters: **a** array_like Array containing numbers whose minimum is desired. If `a` is not an array, a conversion is attempted. **axis**{int, tuple of int, None}, optional Axis or axes along which the minimum is computed. The default is to compute the minimum of the flattened array. **out** ndarray, optional Alternate output array in which to place the result. The default is `None`; if provided, it must have the same shape as the expected output, but the type will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `a`. If the value is anything but the default, then `keepdims` will be passed through to the [`min`](numpy.min#numpy.min "numpy.min") method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). If the sub-class's method does not implement `keepdims`, any exceptions will be raised. **initial** scalar, optional The maximum value of an output element. Must be present to allow computation on an empty slice. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.22.0.
**where** array_like of bool, optional Elements to compare for the minimum. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.22.0. Returns: **nanmin** ndarray An array with the same shape as `a`, with the specified axis removed. If `a` is a 0-d array, or if axis is None, an ndarray scalar is returned. The same dtype as `a` is returned. See also [`nanmax`](numpy.nanmax#numpy.nanmax "numpy.nanmax") The maximum value of an array along a given axis, ignoring any NaNs. [`amin`](numpy.amin#numpy.amin "numpy.amin") The minimum value of an array along a given axis, propagating any NaNs. [`fmin`](numpy.fmin#numpy.fmin "numpy.fmin") Element-wise minimum of two arrays, ignoring any NaNs. [`minimum`](numpy.minimum#numpy.minimum "numpy.minimum") Element-wise minimum of two arrays, propagating any NaNs. [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan") Shows which elements are Not a Number (NaN). [`isfinite`](numpy.isfinite#numpy.isfinite "numpy.isfinite") Shows which elements are neither NaN nor infinity. [`amax`](numpy.amax#numpy.amax "numpy.amax"), [`fmax`](numpy.fmax#numpy.fmax "numpy.fmax"), [`maximum`](numpy.maximum#numpy.maximum "numpy.maximum") #### Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity. Positive infinity is treated as a very large number and negative infinity is treated as a very small (i.e. negative) number. If the input has an integer type, the function is equivalent to np.min.
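`initial` and `where` (both new in 1.22.0) have no official examples here; a sketch with illustrative data:

```python
import numpy as np

# `where` restricts the reduction; `initial` supplies the baseline
# value and permits reducing an otherwise empty selection.
a = np.array([5.0, np.nan, 3.0, 8.0])
# NaN compares False, so the mask drops it along with 3.0.
result = np.nanmin(a, initial=10.0, where=(a >= 4.0))
assert result == 5.0  # min of {5.0, 8.0, 10.0}
```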
#### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, np.nan]]) >>> np.nanmin(a) 1.0 >>> np.nanmin(a, axis=0) array([1., 2.]) >>> np.nanmin(a, axis=1) array([1., 3.]) When positive infinity and negative infinity are present: >>> np.nanmin([1, 2, np.nan, np.inf]) 1.0 >>> np.nanmin([1, 2, np.nan, -np.inf]) -inf # numpy.nanpercentile numpy.nanpercentile(_a_ , _q_ , _axis=None_ , _out=None_ , _overwrite_input=False_ , _method='linear'_ , _keepdims= _, _*_ , _weights=None_ , _interpolation=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_nanfunctions_impl.py#L1228-L1410) Compute the qth percentile of the data along the specified axis, while ignoring nan values. Returns the qth percentile(s) of the array elements. Parameters: **a** array_like Input array or object that can be converted to an array, containing nan values to be ignored. **q** array_like of float Percentile or sequence of percentiles to compute, which must be between 0 and 100 inclusive. **axis**{int, tuple of int, None}, optional Axis or axes along which the percentiles are computed. The default is to compute the percentile(s) along a flattened version of the array. **out** ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary. **overwrite_input** bool, optional If True, then allow the input array `a` to be modified by intermediate calculations, to save memory. In this case, the contents of the input `a` after this function completes is undefined. **method** str, optional This parameter specifies the method to use for estimating the percentile. There are many different methods, some unique to NumPy. See the notes for explanation. The options sorted by their R type as summarized in the H&F paper [1] are: 1. ‘inverted_cdf’ 2. ‘averaged_inverted_cdf’ 3. ‘closest_observation’ 4. ‘interpolated_inverted_cdf’ 5. ‘hazen’ 6. ‘weibull’ 7. ‘linear’ (default) 8. ‘median_unbiased’ 9. ‘normal_unbiased’ The first three methods are discontinuous. NumPy further defines the following discontinuous variations of the default ‘linear’ (7.) option: * ‘lower’ * ‘higher’ * ‘midpoint’ * ‘nearest’ Changed in version 1.22.0: This argument was previously called “interpolation” and only offered the “linear” default and last four options. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original array `a`. If this is anything but the default value it will be passed through (in the special case of an empty array) to the [`mean`](numpy.mean#numpy.mean "numpy.mean") function of the underlying array. If the array is a sub-class and [`mean`](numpy.mean#numpy.mean "numpy.mean") does not have the kwarg `keepdims` this will raise a RuntimeError. **weights** array_like, optional An array of weights associated with the values in `a`. Each value in `a` contributes to the percentile according to its associated weight. The weights array can either be 1-D (in which case its length must be the size of `a` along the given axis) or of the same shape as `a`. If `weights=None`, then all data in `a` are assumed to have a weight equal to one. Only `method="inverted_cdf"` supports weights. New in version 2.0.0. **interpolation** str, optional Deprecated name for the method keyword argument. Deprecated since version 1.22.0. Returns: **percentile** scalar or ndarray If `q` is a single percentile and `axis=None`, then the result is a scalar. If multiple percentiles are given, first axis of the result corresponds to the percentiles. The other axes are the axes that remain after the reduction of `a`. If the input contains integers or floats smaller than `float64`, the output data-type is `float64`. Otherwise, the output data-type is the same as that of the input.
If `out` is specified, that array is returned instead. See also [`nanmean`](numpy.nanmean#numpy.nanmean "numpy.nanmean") [`nanmedian`](numpy.nanmedian#numpy.nanmedian "numpy.nanmedian") equivalent to `nanpercentile(..., 50)` [`percentile`](numpy.percentile#numpy.percentile "numpy.percentile"), [`median`](numpy.median#numpy.median "numpy.median"), [`mean`](numpy.mean#numpy.mean "numpy.mean") [`nanquantile`](numpy.nanquantile#numpy.nanquantile "numpy.nanquantile") equivalent to nanpercentile, except q in range [0, 1]. #### Notes The behavior of `numpy.nanpercentile` with percentage `q` is that of [`numpy.quantile`](numpy.quantile#numpy.quantile "numpy.quantile") with argument `q/100` (ignoring nan values). For more information, please see [`numpy.quantile`](numpy.quantile#numpy.quantile "numpy.quantile"). #### References [1] R. J. Hyndman and Y. Fan, “Sample quantiles in statistical packages,” The American Statistician, 50(4), pp. 361-365, 1996 #### Examples >>> import numpy as np >>> a = np.array([[10., 7., 4.], [3., 2., 1.]]) >>> a[0][1] = np.nan >>> a array([[10., nan, 4.], [ 3., 2., 1.]]) >>> np.percentile(a, 50) np.float64(nan) >>> np.nanpercentile(a, 50) 3.0 >>> np.nanpercentile(a, 50, axis=0) array([6.5, 2. , 2.5]) >>> np.nanpercentile(a, 50, axis=1, keepdims=True) array([[7.], [2.]]) >>> m = np.nanpercentile(a, 50, axis=0) >>> out = np.zeros_like(m) >>> np.nanpercentile(a, 50, axis=0, out=out) array([6.5, 2. , 2.5]) >>> m array([6.5, 2. , 2.5]) >>> b = a.copy() >>> np.nanpercentile(b, 50, axis=1, overwrite_input=True) array([7., 2.]) >>> assert not np.all(a==b) # numpy.nanprod numpy.nanprod(_a_ , _axis=None_ , _dtype=None_ , _out=None_ , _keepdims= _, _initial= _, _where= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_nanfunctions_impl.py#L737-L811) Return the product of array elements over a given axis treating Not a Numbers (NaNs) as ones. One is returned for slices that are all-NaN or empty. 
Parameters: **a** array_like Array containing numbers whose product is desired. If `a` is not an array, a conversion is attempted. **axis**{int, tuple of int, None}, optional Axis or axes along which the product is computed. The default is to compute the product of the flattened array. **dtype** data-type, optional The type of the returned array and of the accumulator in which the elements are summed. By default, the dtype of `a` is used. An exception is when `a` has an integer type with less precision than the platform (u)intp. In that case, the default will be either (u)int32 or (u)int64 depending on whether the platform is 32 or 64 bits. For inexact inputs, dtype must be inexact. **out** ndarray, optional Alternate output array in which to place the result. The default is `None`. If provided, it must have the same shape as the expected output, but the type will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details. The casting of NaN to integer can yield unexpected results. **keepdims** bool, optional If True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `arr`. **initial** scalar, optional The starting value for this product. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.22.0. **where** array_like of bool, optional Elements to include in the product. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.22.0. Returns: **nanprod** ndarray A new array holding the result is returned unless `out` is specified, in which case it is returned. See also [`numpy.prod`](numpy.prod#numpy.prod "numpy.prod") Product across array propagating NaNs. [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan") Show which elements are NaN. 
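As with the other nan-reductions, `initial` and `where` pass through to `reduce`; a hedged sketch (data invented for illustration):

```python
import numpy as np

# NaN counts as 1 in the product; an empty `where` selection falls
# back to `initial`.
a = np.array([[1.0, np.nan], [3.0, 2.0]])
rows = np.nanprod(a, axis=1)
assert rows.tolist() == [1.0, 6.0]
empty = np.nanprod(a, where=(a > 10.0), initial=1.0)
assert empty == 1.0
```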
#### Examples >>> import numpy as np >>> np.nanprod(1) 1 >>> np.nanprod([1]) 1 >>> np.nanprod([1, np.nan]) 1.0 >>> a = np.array([[1, 2], [3, np.nan]]) >>> np.nanprod(a) 6.0 >>> np.nanprod(a, axis=0) array([3., 2.]) # numpy.nanquantile numpy.nanquantile(_a_ , _q_ , _axis=None_ , _out=None_ , _overwrite_input=False_ , _method='linear'_ , _keepdims= _, _*_ , _weights=None_ , _interpolation=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_nanfunctions_impl.py#L1419-L1602) Compute the qth quantile of the data along the specified axis, while ignoring nan values. Returns the qth quantile(s) of the array elements. Parameters: **a** array_like Input array or object that can be converted to an array, containing nan values to be ignored **q** array_like of float Probability or sequence of probabilities for the quantiles to compute. Values must be between 0 and 1 inclusive. **axis**{int, tuple of int, None}, optional Axis or axes along which the quantiles are computed. The default is to compute the quantile(s) along a flattened version of the array. **out** ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary. **overwrite_input** bool, optional If True, then allow the input array `a` to be modified by intermediate calculations, to save memory. In this case, the contents of the input `a` after this function completes is undefined. **method** str, optional This parameter specifies the method to use for estimating the quantile. There are many different methods, some unique to NumPy. See the notes for explanation. The options sorted by their R type as summarized in the H&F paper [1] are: 1. ‘inverted_cdf’ 2. ‘averaged_inverted_cdf’ 3. ‘closest_observation’ 4. ‘interpolated_inverted_cdf’ 5. ‘hazen’ 6. ‘weibull’ 7. ‘linear’ (default) 8. ‘median_unbiased’ 9. ‘normal_unbiased’ The first three methods are discontinuous. 
NumPy further defines the following discontinuous variations of the default ‘linear’ (7.) option: * ‘lower’ * ‘higher’ * ‘midpoint’ * ‘nearest’ Changed in version 1.22.0: This argument was previously called “interpolation” and only offered the “linear” default and last four options. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original array `a`. If this is anything but the default value it will be passed through (in the special case of an empty array) to the [`mean`](numpy.mean#numpy.mean "numpy.mean") function of the underlying array. If the array is a sub-class and [`mean`](numpy.mean#numpy.mean "numpy.mean") does not have the kwarg `keepdims` this will raise a RuntimeError. **weights** array_like, optional An array of weights associated with the values in `a`. Each value in `a` contributes to the quantile according to its associated weight. The weights array can either be 1-D (in which case its length must be the size of `a` along the given axis) or of the same shape as `a`. If `weights=None`, then all data in `a` are assumed to have a weight equal to one. Only `method="inverted_cdf"` supports weights. New in version 2.0.0. **interpolation** str, optional Deprecated name for the method keyword argument. Deprecated since version 1.22.0. Returns: **quantile** scalar or ndarray If `q` is a single probability and `axis=None`, then the result is a scalar. If multiple probability levels are given, first axis of the result corresponds to the quantiles. The other axes are the axes that remain after the reduction of `a`. If the input contains integers or floats smaller than `float64`, the output data-type is `float64`. Otherwise, the output data-type is the same as that of the input. If `out` is specified, that array is returned instead.
See also [`quantile`](numpy.quantile#numpy.quantile "numpy.quantile") [`nanmean`](numpy.nanmean#numpy.nanmean "numpy.nanmean"), [`nanmedian`](numpy.nanmedian#numpy.nanmedian "numpy.nanmedian") [`nanmedian`](numpy.nanmedian#numpy.nanmedian "numpy.nanmedian") equivalent to `nanquantile(..., 0.5)` [`nanpercentile`](numpy.nanpercentile#numpy.nanpercentile "numpy.nanpercentile") same as nanquantile, but with q in the range [0, 100]. #### Notes The behavior of `numpy.nanquantile` is the same as that of [`numpy.quantile`](numpy.quantile#numpy.quantile "numpy.quantile") (ignoring nan values). For more information, please see [`numpy.quantile`](numpy.quantile#numpy.quantile "numpy.quantile"). #### References [1] R. J. Hyndman and Y. Fan, “Sample quantiles in statistical packages,” The American Statistician, 50(4), pp. 361-365, 1996 #### Examples >>> import numpy as np >>> a = np.array([[10., 7., 4.], [3., 2., 1.]]) >>> a[0][1] = np.nan >>> a array([[10., nan, 4.], [ 3., 2., 1.]]) >>> np.quantile(a, 0.5) np.float64(nan) >>> np.nanquantile(a, 0.5) 3.0 >>> np.nanquantile(a, 0.5, axis=0) array([6.5, 2. , 2.5]) >>> np.nanquantile(a, 0.5, axis=1, keepdims=True) array([[7.], [2.]]) >>> m = np.nanquantile(a, 0.5, axis=0) >>> out = np.zeros_like(m) >>> np.nanquantile(a, 0.5, axis=0, out=out) array([6.5, 2. , 2.5]) >>> m array([6.5, 2. , 2.5]) >>> b = a.copy() >>> np.nanquantile(b, 0.5, axis=1, overwrite_input=True) array([7., 2.]) >>> assert not np.all(a==b) # numpy.nanstd numpy.nanstd(_a_ , _axis=None_ , _dtype=None_ , _out=None_ , _ddof=0_ , _keepdims= _, _*_ , _where= _, _mean= _, _correction= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_nanfunctions_impl.py#L1905-L2028) Compute the standard deviation along the specified axis, while ignoring NaNs. Returns the standard deviation, a measure of the spread of a distribution, of the non-NaN array elements. The standard deviation is computed for the flattened array by default, otherwise over the specified axis. 
For all-NaN slices or slices with zero degrees of freedom, NaN is returned and a `RuntimeWarning` is raised. Parameters: **a** array_like Calculate the standard deviation of the non-NaN values. **axis**{int, tuple of int, None}, optional Axis or axes along which the standard deviation is computed. The default is to compute the standard deviation of the flattened array. **dtype** dtype, optional Type to use in computing the standard deviation. For arrays of integer type the default is float64, for arrays of float types it is the same as the array type. **out** ndarray, optional Alternative output array in which to place the result. It must have the same shape as the expected output but the type (of the calculated values) will be cast if necessary. **ddof**{int, float}, optional Means Delta Degrees of Freedom. The divisor used in calculations is `N - ddof`, where `N` represents the number of non-NaN elements. By default `ddof` is zero. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `a`. If this value is anything but the default it is passed through as-is to the relevant functions of the sub-classes. If these functions do not have a `keepdims` kwarg, a RuntimeError will be raised. **where** array_like of bool, optional Elements to include in the standard deviation. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.22.0. **mean** array_like, optional Provide the mean to prevent its recalculation. The mean should have a shape as if it was calculated with `keepdims=True`. The axis for the calculation of the mean should be the same as used in the call to this std function. New in version 2.0.0. **correction**{int, float}, optional Array API compatible name for the `ddof` parameter. Only one of them can be provided at the same time. New in version 2.0.0. 
Returns: **standard_deviation** ndarray, see dtype parameter above. If `out` is None, return a new array containing the standard deviation, otherwise return a reference to the output array. If ddof is >= the number of non-NaN elements in a slice or the slice contains only NaNs, then the result for that slice is NaN. See also [`var`](numpy.var#numpy.var "numpy.var"), [`mean`](numpy.mean#numpy.mean "numpy.mean"), [`std`](numpy.std#numpy.std "numpy.std") [`nanvar`](numpy.nanvar#numpy.nanvar "numpy.nanvar"), [`nanmean`](numpy.nanmean#numpy.nanmean "numpy.nanmean") [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) #### Notes The standard deviation is the square root of the average of the squared deviations from the mean: `std = sqrt(mean(abs(x - x.mean())**2))`. The average squared deviation is normally calculated as `x.sum() / N`, where `N = len(x)`. If, however, `ddof` is specified, the divisor `N - ddof` is used instead. In standard statistical practice, `ddof=1` provides an unbiased estimator of the variance of the infinite population. `ddof=0` provides a maximum likelihood estimate of the variance for normally distributed variables. The standard deviation computed in this function is the square root of the estimated variance, so even with `ddof=1`, it will not be an unbiased estimate of the standard deviation per se. Note that, for complex numbers, [`std`](numpy.std#numpy.std "numpy.std") takes the absolute value before squaring, so that the result is always real and nonnegative. For floating-point input, the _std_ is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for float32 (see example below). Specifying a higher- accuracy accumulator using the [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") keyword can alleviate this issue. 
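The float32 precision caveat in the Notes above can be observed directly; in this sketch (illustrative values) the true standard deviation is 1, and the `dtype` keyword requests a float64 accumulator:

```python
import numpy as np

rng = np.random.default_rng(0)
# Values with a large constant offset: the true standard deviation is still 1
a = (1e4 + rng.standard_normal(100_000)).astype(np.float32)
a[0] = np.nan  # ignored by nanstd

lo = np.nanstd(a)                    # accumulates at float32 precision
hi = np.nanstd(a, dtype=np.float64)  # higher-accuracy accumulator

print(abs(hi - 1.0) < 0.05)          # True: the float64 result is close to 1
```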
#### Examples >>> import numpy as np >>> a = np.array([[1, np.nan], [3, 4]]) >>> np.nanstd(a) 1.247219128924647 >>> np.nanstd(a, axis=0) array([1., 0.]) >>> np.nanstd(a, axis=1) array([0., 0.5]) # may vary # numpy.nansum numpy.nansum(_a_ , _axis=None_ , _dtype=None_ , _out=None_ , _keepdims=<no value>_ , _initial=<no value>_ , _where=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_nanfunctions_impl.py#L635-L729) Return the sum of array elements over a given axis treating Not a Numbers (NaNs) as zero. In NumPy versions <= 1.9.0 NaN is returned for slices that are all-NaN or empty. In later versions zero is returned. Parameters: **a** array_like Array containing numbers whose sum is desired. If `a` is not an array, a conversion is attempted. **axis**{int, tuple of int, None}, optional Axis or axes along which the sum is computed. The default is to compute the sum of the flattened array. **dtype** data-type, optional The type of the returned array and of the accumulator in which the elements are summed. By default, the dtype of `a` is used. An exception is when `a` has an integer type with less precision than the platform (u)intp. In that case, the default will be either (u)int32 or (u)int64 depending on whether the platform is 32 or 64 bits. For inexact inputs, dtype must be inexact. **out** ndarray, optional Alternate output array in which to place the result. The default is `None`. If provided, it must have the same shape as the expected output, but the type will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details. The casting of NaN to integer can yield unexpected results. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `a`. 
If the value is anything but the default, then `keepdims` will be passed through to the [`mean`](numpy.mean#numpy.mean "numpy.mean") or [`sum`](numpy.sum#numpy.sum "numpy.sum") methods of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). If a sub-class's method does not implement `keepdims`, any exceptions will be raised. **initial** scalar, optional Starting value for the sum. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.22.0. **where** array_like of bool, optional Elements to include in the sum. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.22.0. Returns: **nansum** ndarray. A new array holding the result is returned unless `out` is specified, in which case it is returned. The result has the same size as `a`, and the same shape as `a` if `axis` is not None or `a` is a 1-d array. See also [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") Sum across array propagating NaNs. [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan") Show which elements are NaN. [`isfinite`](numpy.isfinite#numpy.isfinite "numpy.isfinite") Show which elements are not NaN or +/-inf. #### Notes If both positive and negative infinity are present, the sum will be Not A Number (NaN). #### Examples >>> import numpy as np >>> np.nansum(1) 1 >>> np.nansum([1]) 1 >>> np.nansum([1, np.nan]) 1.0 >>> a = np.array([[1, 1], [1, np.nan]]) >>> np.nansum(a) 3.0 >>> np.nansum(a, axis=0) array([2., 1.]) >>> np.nansum([1, np.nan, np.inf]) inf >>> np.nansum([1, np.nan, -np.inf]) -inf >>> from numpy.testing import suppress_warnings >>> with np.errstate(invalid="ignore"): ... 
np.nansum([1, np.nan, np.inf, -np.inf]) # both +/- infinity present np.float64(nan) # numpy.nanvar numpy.nanvar(_a_ , _axis=None_ , _dtype=None_ , _out=None_ , _ddof=0_ , _keepdims=<no value>_ , _*_ , _where=<no value>_ , _mean=<no value>_ , _correction=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_nanfunctions_impl.py#L1715-L1896) Compute the variance along the specified axis, while ignoring NaNs. Returns the variance of the array elements, a measure of the spread of a distribution. The variance is computed for the flattened array by default, otherwise over the specified axis. For all-NaN slices or slices with zero degrees of freedom, NaN is returned and a `RuntimeWarning` is raised. Parameters: **a** array_like Array containing numbers whose variance is desired. If `a` is not an array, a conversion is attempted. **axis**{int, tuple of int, None}, optional Axis or axes along which the variance is computed. The default is to compute the variance of the flattened array. **dtype** data-type, optional Type to use in computing the variance. For arrays of integer type the default is [`float64`](../arrays.scalars#numpy.float64 "numpy.float64"); for arrays of float types it is the same as the array type. **out** ndarray, optional Alternate output array in which to place the result. It must have the same shape as the expected output, but the type is cast if necessary. **ddof**{int, float}, optional “Delta Degrees of Freedom”: the divisor used in the calculation is `N - ddof`, where `N` represents the number of non-NaN elements. By default `ddof` is zero. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `a`. **where** array_like of bool, optional Elements to include in the variance. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.22.0. 
**mean** array_like, optional Provide the mean to prevent its recalculation. The mean should have a shape as if it was calculated with `keepdims=True`. The axis for the calculation of the mean should be the same as used in the call to this var function. New in version 2.0.0. **correction**{int, float}, optional Array API compatible name for the `ddof` parameter. Only one of them can be provided at the same time. New in version 2.0.0. Returns: **variance** ndarray, see dtype parameter above If `out` is None, return a new array containing the variance, otherwise return a reference to the output array. If ddof is >= the number of non-NaN elements in a slice or the slice contains only NaNs, then the result for that slice is NaN. See also [`std`](numpy.std#numpy.std "numpy.std") Standard deviation [`mean`](numpy.mean#numpy.mean "numpy.mean") Average [`var`](numpy.var#numpy.var "numpy.var") Variance while not ignoring NaNs [`nanstd`](numpy.nanstd#numpy.nanstd "numpy.nanstd"), [`nanmean`](numpy.nanmean#numpy.nanmean "numpy.nanmean") [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) #### Notes The variance is the average of the squared deviations from the mean, i.e., `var = mean(abs(x - x.mean())**2)`. The mean is normally calculated as `x.sum() / N`, where `N = len(x)`. If, however, `ddof` is specified, the divisor `N - ddof` is used instead. In standard statistical practice, `ddof=1` provides an unbiased estimator of the variance of a hypothetical infinite population. `ddof=0` provides a maximum likelihood estimate of the variance for normally distributed variables. Note that for complex numbers, the absolute value is taken before squaring, so that the result is always real and nonnegative. For floating-point input, the variance is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for [`float32`](../arrays.scalars#numpy.float32 "numpy.float32") (see example below). 
Specifying a higher-accuracy accumulator using the `dtype` keyword can alleviate this issue. For this function to work on sub-classes of ndarray, they must define [`sum`](numpy.sum#numpy.sum "numpy.sum") with the kwarg `keepdims` #### Examples >>> import numpy as np >>> a = np.array([[1, np.nan], [3, 4]]) >>> np.nanvar(a) 1.5555555555555554 >>> np.nanvar(a, axis=0) array([1., 0.]) >>> np.nanvar(a, axis=1) array([0., 0.25]) # may vary # numpy.ndarray.__abs__ method ndarray.__abs__(_self_) # numpy.ndarray.__add__ method ndarray.__add__(_value_ , _/_) Return self+value. # numpy.ndarray.__and__ method ndarray.__and__(_value_ , _/_) Return self&value. # numpy.ndarray.__array__ method ndarray.__array__([_dtype_ , ]_*_ , _copy=None_) For `dtype` parameter it returns a new reference to self if `dtype` is not given or it matches array’s data type. A new array of provided data type is returned if `dtype` is different from the current data type of the array. For `copy` parameter it returns a new reference to self if `copy=False` or `copy=None` and copying isn’t enforced by `dtype` parameter. The method returns a new array for `copy=True`, regardless of `dtype` parameter. A more detailed explanation of the `__array__` interface can be found in [The __array__() method](../../user/basics.interoperability#dunder-array- interface). # numpy.ndarray.__array_wrap__ method ndarray.__array_wrap__(_array_ , [_context_ , ]_/_) Returns a view of [`array`](numpy.array#numpy.array "numpy.array") with the same type as self. # numpy.ndarray.__bool__ method ndarray.__bool__(_/_) True if self else False # numpy.ndarray.__class_getitem__ method ndarray.__class_getitem__(_item_ , _/_) Return a parametrized wrapper around the [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") type. New in version 1.22. Returns: **alias** types.GenericAlias A parametrized [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") type. 
See also [**PEP 585**](https://peps.python.org/pep-0585/) Type hinting generics in standard collections. [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray") An ndarray alias [generic](https://docs.python.org/3/glossary.html#term- generic-type "\(in Python v3.13\)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). #### Examples >>> from typing import Any >>> import numpy as np >>> np.ndarray[Any, np.dtype[Any]] numpy.ndarray[typing.Any, numpy.dtype[typing.Any]] # numpy.ndarray.__complex__ method ndarray.__complex__() # numpy.ndarray.__contains__ method ndarray.__contains__(_key_ , _/_) Return key in self. # numpy.ndarray.__copy__ method ndarray.__copy__() Used if [`copy.copy`](https://docs.python.org/3/library/copy.html#copy.copy "\(in Python v3.13\)") is called on an array. Returns a copy of the array. Equivalent to `a.copy(order='K')`. # numpy.ndarray.__deepcopy__ method ndarray.__deepcopy__(_memo_ , _/_) Used if [`copy.deepcopy`](https://docs.python.org/3/library/copy.html#copy.deepcopy "\(in Python v3.13\)") is called on an array. # numpy.ndarray.__divmod__ method ndarray.__divmod__(_value_ , _/_) Return divmod(self, value). # numpy.ndarray.__eq__ method ndarray.__eq__(_value_ , _/_) Return self==value. # numpy.ndarray.__float__ method ndarray.__float__(_self_) # numpy.ndarray.__floordiv__ method ndarray.__floordiv__(_value_ , _/_) Return self//value. # numpy.ndarray.__ge__ method ndarray.__ge__(_value_ , _/_) Return self>=value. # numpy.ndarray.__getitem__ method ndarray.__getitem__(_key_ , _/_) Return self[key]. # numpy.ndarray.__gt__ method ndarray.__gt__(_value_ , _/_) Return self>value. # numpy.ndarray.__iadd__ method ndarray.__iadd__(_value_ , _/_) Return self+=value. # numpy.ndarray.__iand__ method ndarray.__iand__(_value_ , _/_) Return self&=value. # numpy.ndarray.__ifloordiv__ method ndarray.__ifloordiv__(_value_ , _/_) Return self//=value. 
# numpy.ndarray.__ilshift__ method ndarray.__ilshift__(_value_ , _/_) Return self<<=value. # numpy.ndarray.__imod__ method ndarray.__imod__(_value_ , _/_) Return self%=value. # numpy.ndarray.__imul__ method ndarray.__imul__(_value_ , _/_) Return self*=value. # numpy.ndarray.__int__ method ndarray.__int__(_self_) # numpy.ndarray.__invert__ method ndarray.__invert__(_/_) ~self # numpy.ndarray.__ior__ method ndarray.__ior__(_value_ , _/_) Return self|=value. # numpy.ndarray.__ipow__ method ndarray.__ipow__(_value_ , _/_) Return self**=value. # numpy.ndarray.__irshift__ method ndarray.__irshift__(_value_ , _/_) Return self>>=value. # numpy.ndarray.__isub__ method ndarray.__isub__(_value_ , _/_) Return self-=value. # numpy.ndarray.__itruediv__ method ndarray.__itruediv__(_value_ , _/_) Return self/=value. # numpy.ndarray.__ixor__ method ndarray.__ixor__(_value_ , _/_) Return self^=value. # numpy.ndarray.__le__ method ndarray.__le__(_value_ , _/_) Return self<=value. # numpy.ndarray.__len__ method ndarray.__len__(_/_) Return len(self). # numpy.ndarray.__lshift__ method ndarray.__lshift__(_value_ , _/_) Return self<<value. # numpy.ndarray.__rshift__ method ndarray.__rshift__(_value_ , _/_) Return self>>value. # numpy.ndarray.__setitem__ method ndarray.__setitem__(_key_ , _value_ , _/_) Set self[key] to value. # numpy.ndarray.__setstate__ method ndarray.__setstate__(_state_ , _/_) For unpickling. The `state` argument must be a sequence that contains the following elements: Parameters: **version** int optional pickle version. If omitted defaults to 0. **shape** tuple **dtype** data-type **isFortran** bool **rawdata** string or list a binary string with the data (or a list if ‘a’ is an object array) # numpy.ndarray.__str__ method ndarray.__str__(_/_) Return str(self). # numpy.ndarray.__sub__ method ndarray.__sub__(_value_ , _/_) Return self-value. # numpy.ndarray.__truediv__ method ndarray.__truediv__(_value_ , _/_) Return self/value. # numpy.ndarray.__xor__ method ndarray.__xor__(_value_ , _/_) Return self^value. 
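The in-place variants listed above (`__iadd__`, `__imul__`, …) modify the array's buffer rather than allocating a new array, which is visible through any other reference to it; a brief illustration:

```python
import numpy as np

a = np.arange(3)
b = a                 # second reference to the same array
a += 10               # ndarray.__iadd__: modifies the buffer in place
print(b.tolist())     # [10, 11, 12] -- b sees the change

c = a
a = a + 10            # ndarray.__add__: allocates a new array
print(c.tolist())     # [10, 11, 12] -- c still refers to the old array
```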
# numpy.ndarray.all method ndarray.all(_axis =None_, _out =None_, _keepdims =False_, _*_ , _where =True_) Returns True if all elements evaluate to True. Refer to [`numpy.all`](numpy.all#numpy.all "numpy.all") for full documentation. See also [`numpy.all`](numpy.all#numpy.all "numpy.all") equivalent function # numpy.ndarray.any method ndarray.any(_axis =None_, _out =None_, _keepdims =False_, _*_ , _where =True_) Returns True if any of the elements of `a` evaluate to True. Refer to [`numpy.any`](numpy.any#numpy.any "numpy.any") for full documentation. See also [`numpy.any`](numpy.any#numpy.any "numpy.any") equivalent function # numpy.ndarray.argmax method ndarray.argmax(_axis =None_, _out =None_, _*_ , _keepdims =False_) Return indices of the maximum values along the given axis. Refer to [`numpy.argmax`](numpy.argmax#numpy.argmax "numpy.argmax") for full documentation. See also [`numpy.argmax`](numpy.argmax#numpy.argmax "numpy.argmax") equivalent function # numpy.ndarray.argmin method ndarray.argmin(_axis =None_, _out =None_, _*_ , _keepdims =False_) Return indices of the minimum values along the given axis. Refer to [`numpy.argmin`](numpy.argmin#numpy.argmin "numpy.argmin") for detailed documentation. See also [`numpy.argmin`](numpy.argmin#numpy.argmin "numpy.argmin") equivalent function # numpy.ndarray.argpartition method ndarray.argpartition(_kth_ , _axis =-1_, _kind ='introselect'_, _order =None_) Returns the indices that would partition this array. Refer to [`numpy.argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition") for full documentation. See also [`numpy.argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition") equivalent function # numpy.ndarray.argsort method ndarray.argsort(_axis =-1_, _kind =None_, _order =None_) Returns the indices that would sort this array. Refer to [`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort") for full documentation. 
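A compact sketch of the `arg*` methods above, which return indices rather than values (array values chosen for illustration):

```python
import numpy as np

a = np.array([30, 10, 20])

order = a.argsort()
print(a[order].tolist())      # [10, 20, 30]: indices that sort the array

k = a.argpartition(1)
print(a[k[1]])                # 20: element 1 is in its final sorted position

print(int(a.argmax()), int(a.argmin()))  # 0 1: positions of the max and min
```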
See also [`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort") equivalent function # numpy.ndarray.astype method ndarray.astype(_dtype_ , _order ='K'_, _casting ='unsafe'_, _subok =True_, _copy =True_) Copy of the array, cast to a specified type. Parameters: **dtype** str or dtype Typecode or data-type to which the array is cast. **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout order of the result. ‘C’ means C order, ‘F’ means Fortran order, ‘A’ means ‘F’ order if all the arrays are Fortran contiguous, ‘C’ order otherwise, and ‘K’ means as close to the order the array elements appear in memory as possible. Default is ‘K’. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur. Defaults to ‘unsafe’ for backwards compatibility. * ‘no’ means the data types should not be cast at all. * ‘equiv’ means only byte-order changes are allowed. * ‘safe’ means only casts which can preserve values are allowed. * ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed. * ‘unsafe’ means any data conversions may be done. **subok** bool, optional If True, then sub-classes will be passed-through (default), otherwise the returned array will be forced to be a base-class array. **copy** bool, optional By default, astype always returns a newly allocated array. If this is set to false, and the [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), `order`, and `subok` requirements are satisfied, the input array is returned instead of a copy. Returns: **arr_t** ndarray Unless [`copy`](numpy.copy#numpy.copy "numpy.copy") is False and the other conditions for returning the input array are satisfied (see description for [`copy`](numpy.copy#numpy.copy "numpy.copy") input parameter), `arr_t` is a new array of the same shape as the input array, with dtype, order given by [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), `order`. 
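The `casting` options above can be checked cheaply: under the default `'unsafe'` rule a float-to-int cast truncates, while `'safe'` refuses it with a `TypeError`:

```python
import numpy as np

x = np.array([1.7, 2.5])

print(x.astype(np.int64).tolist())   # [1, 2]: the 'unsafe' default truncates

try:
    x.astype(np.int64, casting="safe")
except TypeError:
    print("safe cast refused")       # float64 -> int64 can lose information
```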
Raises: ComplexWarning When casting from complex to float or int. To avoid this, one should use `a.real.astype(t)`. #### Examples >>> import numpy as np >>> x = np.array([1, 2, 2.5]) >>> x array([1. , 2. , 2.5]) >>> x.astype(int) array([1, 2, 2]) # numpy.ndarray.base attribute ndarray.base Base object if memory is from some other object. #### Examples The base of an array that owns its memory is None: >>> import numpy as np >>> x = np.array([1,2,3,4]) >>> x.base is None True Slicing creates a view, whose memory is shared with x: >>> y = x[2:] >>> y.base is x True # numpy.ndarray.byteswap method ndarray.byteswap(_inplace =False_) Swap the bytes of the array elements Toggle between low-endian and big-endian data representation by returning a byteswapped array, optionally swapped in-place. Arrays of byte-strings are not swapped. The real and imaginary parts of a complex number are swapped individually. Parameters: **inplace** bool, optional If `True`, swap bytes in-place, default is `False`. Returns: **out** ndarray The byteswapped array. If `inplace` is `True`, this is a view to self. 
#### Examples >>> import numpy as np >>> A = np.array([1, 256, 8755], dtype=np.int16) >>> list(map(hex, A)) ['0x1', '0x100', '0x2233'] >>> A.byteswap(inplace=True) array([ 256, 1, 13090], dtype=int16) >>> list(map(hex, A)) ['0x100', '0x1', '0x3322'] Arrays of byte-strings are not swapped >>> A = np.array([b'ceg', b'fac']) >>> A.byteswap() array([b'ceg', b'fac'], dtype='|S3') `A.view(A.dtype.newbyteorder()).byteswap()` produces an array with the same values but different representation in memory >>> A = np.array([1, 2, 3],dtype=np.int64) >>> A.view(np.uint8) array([1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0], dtype=uint8) >>> A.view(A.dtype.newbyteorder()).byteswap(inplace=True) array([1, 2, 3], dtype='>i8') >>> A.view(np.uint8) array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3], dtype=uint8) # numpy.ndarray.choose method ndarray.choose(_choices_ , _out =None_, _mode ='raise'_) Use an index array to construct a new array from a set of choices. Refer to [`numpy.choose`](numpy.choose#numpy.choose "numpy.choose") for full documentation. See also [`numpy.choose`](numpy.choose#numpy.choose "numpy.choose") equivalent function # numpy.ndarray.clip method ndarray.clip(_min =None_, _max =None_, _out =None_, _** kwargs_) Return an array whose values are limited to `[min, max]`. One of max or min must be given. Refer to [`numpy.clip`](numpy.clip#numpy.clip "numpy.clip") for full documentation. See also [`numpy.clip`](numpy.clip#numpy.clip "numpy.clip") equivalent function # numpy.ndarray.compress method ndarray.compress(_condition_ , _axis =None_, _out =None_) Return selected slices of this array along given axis. Refer to [`numpy.compress`](numpy.compress#numpy.compress "numpy.compress") for full documentation. See also [`numpy.compress`](numpy.compress#numpy.compress "numpy.compress") equivalent function # numpy.ndarray.conj method ndarray.conj() Complex-conjugate all elements. 
Refer to [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") for full documentation. See also [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") equivalent function # numpy.ndarray.conjugate method ndarray.conjugate() Return the complex conjugate, element-wise. Refer to [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") for full documentation. See also [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") equivalent function # numpy.ndarray.copy method ndarray.copy(_order ='C'_) Return a copy of the array. Parameters: **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout of the copy. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. (Note that this function and [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") are very similar but have different default values for their order= arguments, and this function always passes sub-classes through.) See also [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") Similar function with different default behavior [`numpy.copyto`](numpy.copyto#numpy.copyto "numpy.copyto") #### Notes This function is the preferred method for creating an array copy. The function [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") is similar, but it defaults to using order ‘K’, and will not pass sub-classes through by default. #### Examples >>> import numpy as np >>> x = np.array([[1,2,3],[4,5,6]], order='F') >>> y = x.copy() >>> x.fill(0) >>> x array([[0, 0, 0], [0, 0, 0]]) >>> y array([[1, 2, 3], [4, 5, 6]]) >>> y.flags['C_CONTIGUOUS'] True For arrays containing Python objects (e.g. dtype=object), the copy is a shallow one. 
The new array will contain the same object which may lead to surprises if that object can be modified (is mutable): >>> a = np.array([1, 'm', [2, 3, 4]], dtype=object) >>> b = a.copy() >>> b[2][0] = 10 >>> a array([1, 'm', list([10, 3, 4])], dtype=object) To ensure all elements within an `object` array are copied, use [`copy.deepcopy`](https://docs.python.org/3/library/copy.html#copy.deepcopy "\(in Python v3.13\)"): >>> import copy >>> a = np.array([1, 'm', [2, 3, 4]], dtype=object) >>> c = copy.deepcopy(a) >>> c[2][0] = 10 >>> c array([1, 'm', list([10, 3, 4])], dtype=object) >>> a array([1, 'm', list([2, 3, 4])], dtype=object) # numpy.ndarray.ctypes attribute ndarray.ctypes An object to simplify the interaction of the array with the ctypes module. This attribute creates an object that makes it easier to use arrays when calling shared libraries with the ctypes module. The returned object has, among others, data, shape, and strides attributes (see Notes below) which themselves return ctypes objects that can be used as arguments to a shared library. Parameters: **None** Returns: **c** Python object Possessing attributes data, shape, strides, etc. See also [`numpy.ctypeslib`](../routines.ctypeslib#module-numpy.ctypeslib "numpy.ctypeslib") #### Notes Below are the public attributes of this object which were documented in “Guide to NumPy” (we have omitted undocumented public attributes, as well as documented private attributes): _ctypes.data A pointer to the memory area of the array as a Python integer. This memory area may contain data that is not aligned, or not in correct byte-order. The memory area may not even be writeable. The array flags and data-type of this array should be respected when passing this attribute to arbitrary C-code to avoid trouble that can include Python crashing. User Beware! The value of this attribute is exactly the same as: `self.__array_interface__['data'][0]`. 
Note that unlike `data_as`, a reference won’t be kept to the array: code like `ctypes.c_void_p((a + b).ctypes.data)` will result in a pointer to a deallocated array, and should be spelt `(a + b).ctypes.data_as(ctypes.c_void_p)` _ctypes.shape (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the C-integer corresponding to `dtype('p')` on this platform (see [`c_intp`](../routines.ctypeslib#numpy.ctypeslib.c_intp "numpy.ctypeslib.c_intp")). This base-type could be [`ctypes.c_int`](https://docs.python.org/3/library/ctypes.html#ctypes.c_int "\(in Python v3.13\)"), [`ctypes.c_long`](https://docs.python.org/3/library/ctypes.html#ctypes.c_long "\(in Python v3.13\)"), or [`ctypes.c_longlong`](https://docs.python.org/3/library/ctypes.html#ctypes.c_longlong "\(in Python v3.13\)") depending on the platform. The ctypes array contains the shape of the underlying array. _ctypes.strides (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the same as for the shape attribute. This ctypes array contains the strides information from the underlying array. This strides information is important for showing how many bytes must be jumped to get to the next element in the array. _ctypes.data_as(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_internal.py#L279-L296) Return the data pointer cast to a particular c-types object. For example, calling `self._as_parameter_` is equivalent to `self.data_as(ctypes.c_void_p)`. Perhaps you want to use the data as a pointer to a ctypes array of floating-point data: `self.data_as(ctypes.POINTER(ctypes.c_double))`. The returned pointer will keep a reference to the array. _ctypes.shape_as(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_internal.py#L298-L305) Return the shape tuple as an array of some other c-types type. For example: `self.shape_as(ctypes.c_short)`. 
_ctypes.strides_as(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_internal.py#L307-L314) Return the strides tuple as an array of some other c-types type. For example: `self.strides_as(ctypes.c_longlong)`. If the ctypes module is not available, then the ctypes attribute of array objects still returns something useful, but ctypes objects are not returned and errors may be raised instead. In particular, the object will still have the `_as_parameter_` attribute which will return an integer equal to the data attribute. #### Examples >>> import numpy as np >>> import ctypes >>> x = np.array([[0, 1], [2, 3]], dtype=np.int32) >>> x array([[0, 1], [2, 3]], dtype=int32) >>> x.ctypes.data 31962608 # may vary >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)) <__main__.LP_c_uint object at 0x7ff2fc1fc200> # may vary >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)).contents c_uint(0) >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint64)).contents c_ulong(4294967296) >>> x.ctypes.shape # may vary >>> x.ctypes.strides # may vary # numpy.ndarray.cumprod method ndarray.cumprod(_axis =None_, _dtype =None_, _out =None_) Return the cumulative product of the elements along the given axis. Refer to [`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod") for full documentation. See also [`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod") equivalent function # numpy.ndarray.cumsum method ndarray.cumsum(_axis =None_, _dtype =None_, _out =None_) Return the cumulative sum of the elements along the given axis. Refer to [`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") for full documentation. See also [`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") equivalent function # numpy.ndarray.data attribute ndarray.data Python buffer object pointing to the start of the array’s data. # numpy.ndarray.diagonal method ndarray.diagonal(_offset =0_, _axis1 =0_, _axis2 =1_) Return specified diagonals. 
In NumPy 1.9 the returned array is a read-only view instead of a copy as in previous NumPy versions. In a future version the read-only restriction will be removed. Refer to [`numpy.diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal") for full documentation. See also [`numpy.diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal") equivalent function # numpy.ndarray.dtype attribute ndarray.dtype Data-type of the array’s elements. Warning Setting `arr.dtype` is discouraged and may be deprecated in the future. Setting will replace the `dtype` without modifying the memory (see also [`ndarray.view`](numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") and [`ndarray.astype`](numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype")). Parameters: **None** Returns: **d** numpy dtype object See also [`ndarray.astype`](numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype") Cast the values contained in the array to a new data-type. [`ndarray.view`](numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") Create a view of the same data but a different data-type. [`numpy.dtype`](numpy.dtype#numpy.dtype "numpy.dtype") #### Examples >>> x array([[0, 1], [2, 3]]) >>> x.dtype dtype('int32') >>> type(x.dtype) <class 'numpy.dtype[int32]'> # numpy.ndarray.dump method ndarray.dump(_file_) Dump a pickle of the array to the specified file. The array can be read back with pickle.load or numpy.load. Parameters: **file** str or Path A string naming the dump file. # numpy.ndarray.dumps method ndarray.dumps() Returns the pickle of the array as a string. pickle.loads will convert the string back to an array. Parameters: **None** # numpy.ndarray.fill method ndarray.fill(_value_) Fill the array with a scalar value. Parameters: **value** scalar All elements of `a` will be assigned this value. 
#### Examples >>> import numpy as np >>> a = np.array([1, 2]) >>> a.fill(0) >>> a array([0, 0]) >>> a = np.empty(2) >>> a.fill(1) >>> a array([1., 1.]) Fill expects a scalar value and always behaves the same as assigning to a single array element. The following is a rare example where this distinction is important: >>> a = np.array([None, None], dtype=object) >>> a[0] = np.array(3) >>> a array([array(3), None], dtype=object) >>> a.fill(np.array(3)) >>> a array([array(3), array(3)], dtype=object) Where other forms of assignments will unpack the array being assigned: >>> a[...] = np.array(3) >>> a array([3, 3], dtype=object) # numpy.ndarray.flags attribute ndarray.flags Information about the memory layout of the array. #### Notes The `flags` object can be accessed dictionary-like (as in `a.flags['WRITEABLE']`), or by using lowercased attribute names (as in `a.flags.writeable`). Short flag names are only supported in dictionary access. Only the WRITEBACKIFCOPY, WRITEABLE, and ALIGNED flags can be changed by the user, via direct assignment to the attribute or dictionary entry, or by calling [`ndarray.setflags`](numpy.ndarray.setflags#numpy.ndarray.setflags "numpy.ndarray.setflags"). The array flags cannot be set arbitrarily: * WRITEBACKIFCOPY can only be set `False`. * ALIGNED can only be set `True` if the data is truly aligned. * WRITEABLE can only be set `True` if the array owns its own memory or the ultimate owner of the memory exposes a writeable buffer interface or is a string. Arrays can be both C-style and Fortran-style contiguous simultaneously. This is clear for 1-dimensional arrays, but can also be true for higher dimensional arrays. Even for contiguous arrays a stride for a given dimension `arr.strides[dim]` may be _arbitrary_ if `arr.shape[dim] == 1` or the array has no elements. 
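The contiguity corner cases described above can be checked directly through `flags`; a minimal sketch:

```python
import numpy as np

a = np.arange(3)          # 1-D arrays are both C- and F-contiguous
print(a.flags['C_CONTIGUOUS'], a.flags['F_CONTIGUOUS'])   # True True

b = np.ones((3, 1))       # a length-1 dimension: also both contiguous
print(b.flags.c_contiguous, b.flags.f_contiguous)         # True True

v = np.ones((4, 4))[:, :2]  # a sliced view: C-contiguity is lost
print(v.flags.c_contiguous)                               # False
```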
It does _not_ generally hold that `self.strides[-1] == self.itemsize` for C-style contiguous arrays or `self.strides[0] == self.itemsize` for Fortran-style contiguous arrays. Attributes: **C_CONTIGUOUS (C)** The data is in a single, C-style contiguous segment. **F_CONTIGUOUS (F)** The data is in a single, Fortran-style contiguous segment. **OWNDATA (O)** The array owns the memory it uses or borrows it from another object. **WRITEABLE (W)** The data area can be written to. Setting this to False locks the data, making it read-only. A view (slice, etc.) inherits WRITEABLE from its base array at creation time, but a view of a writeable array may be subsequently locked while the base array remains writeable. (The opposite is not true, in that a view of a locked array may not be made writeable. However, currently, locking a base object does not lock any views that already reference it, so under that circumstance it is possible to alter the contents of a locked array via a previously created writeable view onto it.) Attempting to change a non-writeable array raises a RuntimeError exception. **ALIGNED (A)** The data and all elements are aligned appropriately for the hardware. **WRITEBACKIFCOPY (X)** This array is a copy of some other array. The C-API function PyArray_ResolveWritebackIfCopy must be called before deallocating this array; the base array will then be updated with the contents of this array. **FNC** F_CONTIGUOUS and not C_CONTIGUOUS. **FORC** F_CONTIGUOUS or C_CONTIGUOUS (one-segment test). **BEHAVED (B)** ALIGNED and WRITEABLE. **CARRAY (CA)** BEHAVED and C_CONTIGUOUS. **FARRAY (FA)** BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS. # numpy.ndarray.flat attribute ndarray.flat A 1-D iterator over the array. This is a [`numpy.flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") instance, which acts similarly to, but is not a subclass of, Python’s built-in iterator object.
See also [`flatten`](numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten") Return a copy of the array collapsed into one dimension. [`flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") #### Examples >>> import numpy as np >>> x = np.arange(1, 7).reshape(2, 3) >>> x array([[1, 2, 3], [4, 5, 6]]) >>> x.flat[3] 4 >>> x.T array([[1, 4], [2, 5], [3, 6]]) >>> x.T.flat[3] 5 >>> type(x.flat) <class 'numpy.flatiter'> An assignment example: >>> x.flat = 3; x array([[3, 3, 3], [3, 3, 3]]) >>> x.flat[[1,4]] = 1; x array([[3, 1, 3], [3, 1, 3]]) # numpy.ndarray.flatten method ndarray.flatten(_order ='C'_) Return a copy of the array collapsed into one dimension. Parameters: **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional ‘C’ means to flatten in row-major (C-style) order. ‘F’ means to flatten in column-major (Fortran-style) order. ‘A’ means to flatten in column-major order if `a` is Fortran _contiguous_ in memory, row-major order otherwise. ‘K’ means to flatten `a` in the order the elements occur in memory. The default is ‘C’. Returns: **y** ndarray A copy of the input array, flattened to one dimension. See also [`ravel`](numpy.ravel#numpy.ravel "numpy.ravel") Return a flattened array. [`flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") A 1-D flat iterator over the array. #### Examples >>> import numpy as np >>> a = np.array([[1,2], [3,4]]) >>> a.flatten() array([1, 2, 3, 4]) >>> a.flatten('F') array([1, 3, 2, 4]) # numpy.ndarray.getfield method ndarray.getfield(_dtype_ , _offset =0_) Returns a field of the given array as a certain type. A field is a view of the array data with a given data-type. The values in the view are determined by the given type and the offset into the current array in bytes. The offset needs to be such that the view dtype fits in the array dtype; for example an array of dtype complex128 has 16-byte elements. If taking a view with a 32-bit integer (4 bytes), the offset needs to be between 0 and 12 bytes.
Parameters: **dtype** str or dtype The data type of the view. The dtype size of the view can not be larger than that of the array itself. **offset** int Number of bytes to skip before beginning the element view. #### Examples >>> import numpy as np >>> x = np.diag([1.+1.j]*2) >>> x[1, 1] = 2 + 4.j >>> x array([[1.+1.j, 0.+0.j], [0.+0.j, 2.+4.j]]) >>> x.getfield(np.float64) array([[1., 0.], [0., 2.]]) By choosing an offset of 8 bytes we can select the complex part of the array for our view: >>> x.getfield(np.float64, offset=8) array([[1., 0.], [0., 4.]]) # numpy.ndarray _class_ numpy.ndarray(_shape_ , _dtype =float_, _buffer =None_, _offset =0_, _strides =None_, _order =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/arrayobject.c) An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(…)`) for instantiating an array. For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array. Parameters: **(for the __new__ method; see Notes below)** **shape** tuple of ints Shape of created array. **dtype** data-type, optional Any object that can be interpreted as a numpy data type. **buffer** object exposing buffer interface, optional Used to fill the array with data. **offset** int, optional Offset of array data in buffer. **strides** tuple of ints, optional Strides of data in memory. 
**order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also [`array`](numpy.array#numpy.array "numpy.array") Construct an array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Create an array, each element of which is zero. [`empty`](numpy.empty#numpy.empty "numpy.empty") Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Create a data-type. [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray") An ndarray alias [generic](https://docs.python.org/3/glossary.html#term- generic-type "\(in Python v3.13\)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). #### Notes There are two modes of creating an array using `__new__`: 1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used. 2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted. No `__init__` method is needed because the array is fully initialized after the `__new__` method. #### Examples These examples illustrate the low-level `ndarray` constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray. First mode, `buffer` is None: >>> import numpy as np >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. skip first element array([2, 3]) Attributes: [`T`](numpy.ndarray.t#numpy.ndarray.T "numpy.ndarray.T")ndarray View of the transposed array. [`data`](numpy.ndarray.data#numpy.ndarray.data "numpy.ndarray.data")buffer Python buffer object pointing to the start of the array’s data. [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype")dtype object Data-type of the array’s elements. 
[`flags`](numpy.ndarray.flags#numpy.ndarray.flags "numpy.ndarray.flags")dict Information about the memory layout of the array. [`flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat")numpy.flatiter object A 1-D iterator over the array. [`imag`](numpy.imag#numpy.imag "numpy.imag")ndarray The imaginary part of the array. [`real`](numpy.real#numpy.real "numpy.real")ndarray The real part of the array. [`size`](numpy.size#numpy.size "numpy.size")int Number of elements in the array. [`itemsize`](numpy.ndarray.itemsize#numpy.ndarray.itemsize "numpy.ndarray.itemsize")int Length of one array element in bytes. [`nbytes`](numpy.ndarray.nbytes#numpy.ndarray.nbytes "numpy.ndarray.nbytes")int Total bytes consumed by the elements of the array. [`ndim`](numpy.ndim#numpy.ndim "numpy.ndim")int Number of array dimensions. [`shape`](numpy.shape#numpy.shape "numpy.shape")tuple of ints Tuple of array dimensions. [`strides`](numpy.ndarray.strides#numpy.ndarray.strides "numpy.ndarray.strides")tuple of ints Tuple of bytes to step in each dimension when traversing an array. [`ctypes`](numpy.ndarray.ctypes#numpy.ndarray.ctypes "numpy.ndarray.ctypes")ctypes object An object to simplify the interaction of the array with the ctypes module. [`base`](numpy.ndarray.base#numpy.ndarray.base "numpy.ndarray.base")ndarray Base object if memory is from some other object. #### Methods [`all`](numpy.ndarray.all#numpy.ndarray.all "numpy.ndarray.all")([axis, out, keepdims, where]) | Returns True if all elements evaluate to True. ---|--- [`any`](numpy.ndarray.any#numpy.ndarray.any "numpy.ndarray.any")([axis, out, keepdims, where]) | Returns True if any of the elements of `a` evaluate to True. [`argmax`](numpy.ndarray.argmax#numpy.ndarray.argmax "numpy.ndarray.argmax")([axis, out, keepdims]) | Return indices of the maximum values along the given axis. 
[`argmin`](numpy.ndarray.argmin#numpy.ndarray.argmin "numpy.ndarray.argmin")([axis, out, keepdims]) | Return indices of the minimum values along the given axis. [`argpartition`](numpy.ndarray.argpartition#numpy.ndarray.argpartition "numpy.ndarray.argpartition")(kth[, axis, kind, order]) | Returns the indices that would partition this array. [`argsort`](numpy.ndarray.argsort#numpy.ndarray.argsort "numpy.ndarray.argsort")([axis, kind, order]) | Returns the indices that would sort this array. [`astype`](numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype")(dtype[, order, casting, subok, copy]) | Copy of the array, cast to a specified type. [`byteswap`](numpy.ndarray.byteswap#numpy.ndarray.byteswap "numpy.ndarray.byteswap")([inplace]) | Swap the bytes of the array elements [`choose`](numpy.ndarray.choose#numpy.ndarray.choose "numpy.ndarray.choose")(choices[, out, mode]) | Use an index array to construct a new array from a set of choices. [`clip`](numpy.ndarray.clip#numpy.ndarray.clip "numpy.ndarray.clip")([min, max, out]) | Return an array whose values are limited to `[min, max]`. [`compress`](numpy.ndarray.compress#numpy.ndarray.compress "numpy.ndarray.compress")(condition[, axis, out]) | Return selected slices of this array along given axis. [`conj`](numpy.ndarray.conj#numpy.ndarray.conj "numpy.ndarray.conj")() | Complex-conjugate all elements. [`conjugate`](numpy.ndarray.conjugate#numpy.ndarray.conjugate "numpy.ndarray.conjugate")() | Return the complex conjugate, element-wise. [`copy`](numpy.ndarray.copy#numpy.ndarray.copy "numpy.ndarray.copy")([order]) | Return a copy of the array. [`cumprod`](numpy.ndarray.cumprod#numpy.ndarray.cumprod "numpy.ndarray.cumprod")([axis, dtype, out]) | Return the cumulative product of the elements along the given axis. [`cumsum`](numpy.ndarray.cumsum#numpy.ndarray.cumsum "numpy.ndarray.cumsum")([axis, dtype, out]) | Return the cumulative sum of the elements along the given axis. 
[`diagonal`](numpy.ndarray.diagonal#numpy.ndarray.diagonal "numpy.ndarray.diagonal")([offset, axis1, axis2]) | Return specified diagonals. [`dump`](numpy.ndarray.dump#numpy.ndarray.dump "numpy.ndarray.dump")(file) | Dump a pickle of the array to the specified file. [`dumps`](numpy.ndarray.dumps#numpy.ndarray.dumps "numpy.ndarray.dumps")() | Returns the pickle of the array as a string. [`fill`](numpy.ndarray.fill#numpy.ndarray.fill "numpy.ndarray.fill")(value) | Fill the array with a scalar value. [`flatten`](numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten")([order]) | Return a copy of the array collapsed into one dimension. [`getfield`](numpy.ndarray.getfield#numpy.ndarray.getfield "numpy.ndarray.getfield")(dtype[, offset]) | Returns a field of the given array as a certain type. [`item`](numpy.ndarray.item#numpy.ndarray.item "numpy.ndarray.item")(*args) | Copy an element of an array to a standard Python scalar and return it. [`max`](numpy.ndarray.max#numpy.ndarray.max "numpy.ndarray.max")([axis, out, keepdims, initial, where]) | Return the maximum along a given axis. [`mean`](numpy.ndarray.mean#numpy.ndarray.mean "numpy.ndarray.mean")([axis, dtype, out, keepdims, where]) | Returns the average of the array elements along given axis. [`min`](numpy.ndarray.min#numpy.ndarray.min "numpy.ndarray.min")([axis, out, keepdims, initial, where]) | Return the minimum along a given axis. [`nonzero`](numpy.ndarray.nonzero#numpy.ndarray.nonzero "numpy.ndarray.nonzero")() | Return the indices of the elements that are non-zero. [`partition`](numpy.ndarray.partition#numpy.ndarray.partition "numpy.ndarray.partition")(kth[, axis, kind, order]) | Partially sorts the elements in the array in such a way that the value of the element in k-th position is in the position it would be in a sorted array. 
[`prod`](numpy.ndarray.prod#numpy.ndarray.prod "numpy.ndarray.prod")([axis, dtype, out, keepdims, initial, ...]) | Return the product of the array elements over the given axis [`put`](numpy.ndarray.put#numpy.ndarray.put "numpy.ndarray.put")(indices, values[, mode]) | Set `a.flat[n] = values[n]` for all `n` in indices. [`ravel`](numpy.ndarray.ravel#numpy.ndarray.ravel "numpy.ndarray.ravel")([order]) | Return a flattened array. [`repeat`](numpy.ndarray.repeat#numpy.ndarray.repeat "numpy.ndarray.repeat")(repeats[, axis]) | Repeat elements of an array. [`reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape")(shape, /, *[, order, copy]) | Returns an array containing the same data with a new shape. [`resize`](numpy.ndarray.resize#numpy.ndarray.resize "numpy.ndarray.resize")(new_shape[, refcheck]) | Change shape and size of array in-place. [`round`](numpy.ndarray.round#numpy.ndarray.round "numpy.ndarray.round")([decimals, out]) | Return `a` with each element rounded to the given number of decimals. [`searchsorted`](numpy.ndarray.searchsorted#numpy.ndarray.searchsorted "numpy.ndarray.searchsorted")(v[, side, sorter]) | Find indices where elements of v should be inserted in a to maintain order. [`setfield`](numpy.ndarray.setfield#numpy.ndarray.setfield "numpy.ndarray.setfield")(val, dtype[, offset]) | Put a value into a specified place in a field defined by a data-type. [`setflags`](numpy.ndarray.setflags#numpy.ndarray.setflags "numpy.ndarray.setflags")([write, align, uic]) | Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively. [`sort`](numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort")([axis, kind, order]) | Sort an array in-place. [`squeeze`](numpy.ndarray.squeeze#numpy.ndarray.squeeze "numpy.ndarray.squeeze")([axis]) | Remove axes of length one from `a`. 
[`std`](numpy.ndarray.std#numpy.ndarray.std "numpy.ndarray.std")([axis, dtype, out, ddof, keepdims, where]) | Returns the standard deviation of the array elements along given axis. [`sum`](numpy.ndarray.sum#numpy.ndarray.sum "numpy.ndarray.sum")([axis, dtype, out, keepdims, initial, where]) | Return the sum of the array elements over the given axis. [`swapaxes`](numpy.ndarray.swapaxes#numpy.ndarray.swapaxes "numpy.ndarray.swapaxes")(axis1, axis2) | Return a view of the array with `axis1` and `axis2` interchanged. [`take`](numpy.ndarray.take#numpy.ndarray.take "numpy.ndarray.take")(indices[, axis, out, mode]) | Return an array formed from the elements of `a` at the given indices. [`tobytes`](numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes")([order]) | Construct Python bytes containing the raw data bytes in the array. [`tofile`](numpy.ndarray.tofile#numpy.ndarray.tofile "numpy.ndarray.tofile")(fid[, sep, format]) | Write array to a file as text or binary (default). [`tolist`](numpy.ndarray.tolist#numpy.ndarray.tolist "numpy.ndarray.tolist")() | Return the array as an `a.ndim`-levels deep nested list of Python scalars. [`tostring`](numpy.ndarray.tostring#numpy.ndarray.tostring "numpy.ndarray.tostring")([order]) | A compatibility alias for [`tobytes`](numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes"), with exactly the same behavior. [`trace`](numpy.ndarray.trace#numpy.ndarray.trace "numpy.ndarray.trace")([offset, axis1, axis2, dtype, out]) | Return the sum along diagonals of the array. [`transpose`](numpy.ndarray.transpose#numpy.ndarray.transpose "numpy.ndarray.transpose")(*axes) | Returns a view of the array with axes transposed. [`var`](numpy.ndarray.var#numpy.ndarray.var "numpy.ndarray.var")([axis, dtype, out, ddof, keepdims, where]) | Returns the variance of the array elements, along given axis. [`view`](numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view")([dtype][, type]) | New view of array with the same data. 
**dot** | ---|--- **to_device** | # numpy.ndarray.imag attribute ndarray.imag The imaginary part of the array. #### Examples >>> import numpy as np >>> x = np.sqrt([1+0j, 0+1j]) >>> x.imag array([ 0. , 0.70710678]) >>> x.imag.dtype dtype('float64') # numpy.ndarray.item method ndarray.item(_* args_) Copy an element of an array to a standard Python scalar and return it. Parameters: ***args** Arguments (variable number and type) * none: in this case, the method only works for arrays with one element (`a.size == 1`), which element is copied into a standard Python scalar object and returned. * int_type: this argument is interpreted as a flat index into the array, specifying which element to copy and return. * tuple of int_types: functions as does a single int_type argument, except that the argument is interpreted as an nd-index into the array. Returns: **z** Standard Python scalar object A copy of the specified element of the array as a suitable Python scalar #### Notes When the data type of `a` is longdouble or clongdouble, item() returns a scalar array object because there is no available Python scalar that would not lose information. Void arrays return a buffer object for item(), unless fields are defined, in which case a tuple is returned. `item` is very similar to a[args], except, instead of an array scalar, a standard Python scalar is returned. This can be useful for speeding up access to elements of the array and doing arithmetic on elements of the array using Python’s optimized math. #### Examples >>> import numpy as np >>> np.random.seed(123) >>> x = np.random.randint(9, size=(3, 3)) >>> x array([[2, 2, 6], [1, 3, 6], [1, 0, 1]]) >>> x.item(3) 1 >>> x.item(7) 0 >>> x.item((0, 1)) 2 >>> x.item((2, 2)) 1 For an array with object dtype, elements are returned as-is. >>> a = np.array([np.int64(1)], dtype=object) >>> a.item() #return np.int64 np.int64(1) # numpy.ndarray.itemsize attribute ndarray.itemsize Length of one array element in bytes. 
#### Examples >>> import numpy as np >>> x = np.array([1,2,3], dtype=np.float64) >>> x.itemsize 8 >>> x = np.array([1,2,3], dtype=np.complex128) >>> x.itemsize 16 # numpy.ndarray.max method ndarray.max(_axis=None_ , _out=None_ , _keepdims=False_ , _initial=<no value>_ , _where=True_) Return the maximum along a given axis. Refer to [`numpy.amax`](numpy.amax#numpy.amax "numpy.amax") for full documentation. See also [`numpy.amax`](numpy.amax#numpy.amax "numpy.amax") equivalent function # numpy.ndarray.mean method ndarray.mean(_axis =None_, _dtype =None_, _out =None_, _keepdims =False_, _*_ , _where =True_) Returns the average of the array elements along given axis. Refer to [`numpy.mean`](numpy.mean#numpy.mean "numpy.mean") for full documentation. See also [`numpy.mean`](numpy.mean#numpy.mean "numpy.mean") equivalent function # numpy.ndarray.min method ndarray.min(_axis=None_ , _out=None_ , _keepdims=False_ , _initial=<no value>_ , _where=True_) Return the minimum along a given axis. Refer to [`numpy.amin`](numpy.amin#numpy.amin "numpy.amin") for full documentation. See also [`numpy.amin`](numpy.amin#numpy.amin "numpy.amin") equivalent function # numpy.ndarray.nbytes attribute ndarray.nbytes Total bytes consumed by the elements of the array. See also [`sys.getsizeof`](https://docs.python.org/3/library/sys.html#sys.getsizeof "\(in Python v3.13\)") Memory consumed by the object itself, without parents in the case of a view. This does include memory consumed by non-element attributes. #### Notes Does not include memory consumed by non-element attributes of the array object. #### Examples >>> import numpy as np >>> x = np.zeros((3,5,2), dtype=np.complex128) >>> x.nbytes 480 >>> np.prod(x.shape) * x.itemsize 480 # numpy.ndarray.ndim attribute ndarray.ndim Number of array dimensions. #### Examples >>> import numpy as np >>> x = np.array([1, 2, 3]) >>> x.ndim 1 >>> y = np.zeros((2, 3, 4)) >>> y.ndim 3 # numpy.ndarray.nonzero method ndarray.nonzero() Return the indices of the elements that are non-zero.
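A minimal sketch of `nonzero`, including the common idiom of indexing back with the returned index arrays:

```python
import numpy as np

a = np.array([[3, 0, 0],
              [0, 4, 0]])
rows, cols = a.nonzero()
print(rows, cols)      # row and column indices of non-zero entries: [0 1] [0 1]
print(a[rows, cols])   # the non-zero values themselves: [3 4]
```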
Refer to [`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") for full documentation. See also [`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") equivalent function # numpy.ndarray.partition method ndarray.partition(_kth_ , _axis =-1_, _kind ='introselect'_, _order =None_) Partially sorts the elements in the array in such a way that the value of the element in k-th position is in the position it would be in a sorted array. In the output array, all elements smaller than the k-th element are located to the left of this element and all equal or greater are located to its right. The ordering of the elements in the two partitions on either side of the k-th element in the output array is undefined. Parameters: **kth** int or sequence of ints Element index to partition by. The kth element value will be in its final sorted position and all smaller elements will be moved before it and all equal or greater elements behind it. The order of all elements in the partitions is undefined. If provided with a sequence of kth values, it will partition all elements indexed by those values into their sorted position at once. Deprecated since version 1.22.0: Passing booleans as index is deprecated. **axis** int, optional Axis along which to sort. Default is -1, which means sort along the last axis. **kind**{‘introselect’}, optional Selection algorithm. Default is ‘introselect’. **order** str or list of str, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need to be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties. See also [`numpy.partition`](numpy.partition#numpy.partition "numpy.partition") Return a partitioned copy of an array. [`argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition") Indirect partition.
[`sort`](numpy.sort#numpy.sort "numpy.sort") Full sort. #### Notes See `np.partition` for notes on the different algorithms. #### Examples >>> import numpy as np >>> a = np.array([3, 4, 2, 1]) >>> a.partition(3) >>> a array([2, 1, 3, 4]) # may vary >>> a.partition((1, 3)) >>> a array([1, 2, 3, 4]) # numpy.ndarray.prod method ndarray.prod(_axis =None_, _dtype =None_, _out =None_, _keepdims =False_, _initial =1_, _where =True_) Return the product of the array elements over the given axis Refer to [`numpy.prod`](numpy.prod#numpy.prod "numpy.prod") for full documentation. See also [`numpy.prod`](numpy.prod#numpy.prod "numpy.prod") equivalent function # numpy.ndarray.ptp attribute ndarray.ptp # numpy.ndarray.put method ndarray.put(_indices_ , _values_ , _mode ='raise'_) Set `a.flat[n] = values[n]` for all `n` in indices. Refer to [`numpy.put`](numpy.put#numpy.put "numpy.put") for full documentation. See also [`numpy.put`](numpy.put#numpy.put "numpy.put") equivalent function # numpy.ndarray.ravel method ndarray.ravel([_order_]) Return a flattened array. Refer to [`numpy.ravel`](numpy.ravel#numpy.ravel "numpy.ravel") for full documentation. See also [`numpy.ravel`](numpy.ravel#numpy.ravel "numpy.ravel") equivalent function [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") a flat iterator on the array. # numpy.ndarray.real attribute ndarray.real The real part of the array. See also [`numpy.real`](numpy.real#numpy.real "numpy.real") equivalent function #### Examples >>> import numpy as np >>> x = np.sqrt([1+0j, 0+1j]) >>> x.real array([ 1. , 0.70710678]) >>> x.real.dtype dtype('float64') # numpy.ndarray.repeat method ndarray.repeat(_repeats_ , _axis =None_) Repeat elements of an array. Refer to [`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat") for full documentation. 
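A minimal sketch of `repeat` with and without the `axis` argument:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
print(a.repeat(2))          # no axis: repeat on the flattened array
print(a.repeat(2, axis=0))  # repeat each row twice
```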
See also [`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat") equivalent function # numpy.ndarray.reshape method ndarray.reshape(_shape_ , _/_ , _*_ , _order ='C'_, _copy =None_) Returns an array containing the same data with a new shape. Refer to [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") for full documentation. See also [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") equivalent function #### Notes Unlike the free function [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape"), this method on [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") allows the elements of the shape parameter to be passed in as separate arguments. For example, `a.reshape(10, 11)` is equivalent to `a.reshape((10, 11))`. # numpy.ndarray.resize method ndarray.resize(_new_shape_ , _refcheck =True_) Change shape and size of array in-place. Parameters: **new_shape** tuple of ints, or `n` ints Shape of resized array. **refcheck** bool, optional If False, reference count will not be checked. Default is True. Returns: None Raises: ValueError If `a` does not own its own data or references or views to it exist, and the data memory must be changed. PyPy only: will always raise if the data memory must be changed, since there is no reliable way to determine if references or views to it exist. SystemError If the `order` keyword argument is specified. This behaviour is a bug in NumPy. See also [`resize`](numpy.resize#numpy.resize "numpy.resize") Return a new array with the specified shape. #### Notes This reallocates space for the data area if necessary. Only contiguous arrays (data elements consecutive in memory) can be resized. The purpose of the reference count check is to make sure you do not use this array as a buffer for another Python object and then reallocate the memory. 
However, reference counts can increase in other ways so if you are sure that you have not shared the memory for this array with another Python object, then you may safely set `refcheck` to False. #### Examples Shrinking an array: array is flattened (in the order that the data are stored in memory), resized, and reshaped: >>> import numpy as np >>> a = np.array([[0, 1], [2, 3]], order='C') >>> a.resize((2, 1)) >>> a array([[0], [1]]) >>> a = np.array([[0, 1], [2, 3]], order='F') >>> a.resize((2, 1)) >>> a array([[0], [2]]) Enlarging an array: as above, but missing entries are filled with zeros: >>> b = np.array([[0, 1], [2, 3]]) >>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple >>> b array([[0, 1, 2], [3, 0, 0]]) Referencing an array prevents resizing… >>> c = a >>> a.resize((1, 1)) Traceback (most recent call last): ... ValueError: cannot resize an array that references or is referenced ... Unless `refcheck` is False: >>> a.resize((1, 1), refcheck=False) >>> a array([[0]]) >>> c array([[0]]) # numpy.ndarray.round method ndarray.round(_decimals =0_, _out =None_) Return `a` with each element rounded to the given number of decimals. Refer to [`numpy.around`](numpy.around#numpy.around "numpy.around") for full documentation. See also [`numpy.around`](numpy.around#numpy.around "numpy.around") equivalent function # numpy.ndarray.searchsorted method ndarray.searchsorted(_v_ , _side ='left'_, _sorter =None_) Find indices where elements of v should be inserted in a to maintain order. For full documentation, see [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") See also [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") equivalent function # numpy.ndarray.setfield method ndarray.setfield(_val_ , _dtype_ , _offset =0_) Put a value into a specified place in a field defined by a data-type. 
Place `val` into `a`’s field defined by [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") and beginning `offset` bytes into the field. Parameters: **val** object Value to be placed in field. **dtype** dtype object Data-type of the field in which to place `val`. **offset** int, optional The number of bytes into the field at which to place `val`. Returns: None See also [`getfield`](numpy.ndarray.getfield#numpy.ndarray.getfield "numpy.ndarray.getfield") #### Examples >>> import numpy as np >>> x = np.eye(3) >>> x.getfield(np.float64) array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) >>> x.setfield(3, np.int32) >>> x.getfield(np.int32) array([[3, 3, 3], [3, 3, 3], [3, 3, 3]], dtype=int32) >>> x array([[1.0e+000, 1.5e-323, 1.5e-323], [1.5e-323, 1.0e+000, 1.5e-323], [1.5e-323, 1.5e-323, 1.0e+000]]) >>> x.setfield(np.eye(3), np.int32) >>> x array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) # numpy.ndarray.setflags method ndarray.setflags(_write =None_, _align =None_, _uic =None_) Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively. These Boolean-valued flags affect how numpy interprets the memory area used by `a` (see Notes below). The ALIGNED flag can only be set to True if the data is actually aligned according to the type. The WRITEBACKIFCOPY flag can never be set to True. The flag WRITEABLE can only be set to True if the array owns its own memory, or the ultimate owner of the memory exposes a writeable buffer interface, or is a string. (The exception for string is made so that unpickling can be done without copying memory.) Parameters: **write** bool, optional Describes whether or not `a` can be written to. **align** bool, optional Describes whether or not `a` is aligned properly for its type. **uic** bool, optional Describes whether or not `a` is a copy of another “base” array. #### Notes Array flags provide information about how the memory area used for the array is to be interpreted. 
There are 7 Boolean flags in use, only three of which can be changed by the user: WRITEBACKIFCOPY, WRITEABLE, and ALIGNED. WRITEABLE (W) the data area can be written to; ALIGNED (A) the data and strides are aligned appropriately for the hardware (as determined by the compiler); WRITEBACKIFCOPY (X) this array is a copy of some other array (referenced by .base). When the C-API function PyArray_ResolveWritebackIfCopy is called, the base array will be updated with the contents of this array. All flags can be accessed using the single (upper case) letter as well as the full name. #### Examples >>> import numpy as np >>> y = np.array([[3, 1, 7], ... [2, 0, 0], ... [8, 5, 9]]) >>> y array([[3, 1, 7], [2, 0, 0], [8, 5, 9]]) >>> y.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : True ALIGNED : True WRITEBACKIFCOPY : False >>> y.setflags(write=0, align=0) >>> y.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : False ALIGNED : False WRITEBACKIFCOPY : False >>> y.setflags(uic=1) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: cannot set WRITEBACKIFCOPY flag to True # numpy.ndarray.shape attribute ndarray.shape Tuple of array dimensions. The shape property is usually used to get the current shape of an array, but may also be used to reshape the array in-place by assigning a tuple of array dimensions to it. As with [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape"), one of the new shape dimensions can be -1, in which case its value is inferred from the size of the array and the remaining dimensions. Reshaping an array in-place will fail if a copy is required. Warning Setting `arr.shape` is discouraged and may be deprecated in the future. Using [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") is the preferred approach. See also [`numpy.shape`](numpy.shape#numpy.shape "numpy.shape") Equivalent getter function. 
[`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") Function similar to setting `shape`. [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") Method similar to setting `shape`. #### Examples >>> import numpy as np >>> x = np.array([1, 2, 3, 4]) >>> x.shape (4,) >>> y = np.zeros((2, 3, 4)) >>> y.shape (2, 3, 4) >>> y.shape = (3, 8) >>> y array([[ 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0.]]) >>> y.shape = (3, 6) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: total size of new array must be unchanged >>> np.zeros((4,2))[::2].shape = (-1,) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: Incompatible shape for in-place modification. Use `.reshape()` to make a copy with the desired shape. # numpy.ndarray.size attribute ndarray.size Number of elements in the array. Equal to `np.prod(a.shape)`, i.e., the product of the array’s dimensions. #### Notes `a.size` returns a standard arbitrary precision Python integer. This may not be the case with other methods of obtaining the same value (like the suggested `np.prod(a.shape)`, which returns an instance of `np.int_`), and may be relevant if the value is used further in calculations that may overflow a fixed size integer type. #### Examples >>> import numpy as np >>> x = np.zeros((3, 5, 2), dtype=np.complex128) >>> x.size 30 >>> np.prod(x.shape) 30 # numpy.ndarray.sort method ndarray.sort(_axis =-1_, _kind =None_, _order =None_) Sort an array in-place. Refer to [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") for full documentation. Parameters: **axis** int, optional Axis along which to sort. Default is -1, which means sort along the last axis. **kind**{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional Sorting algorithm. The default is ‘quicksort’. 
Note that both ‘stable’ and ‘mergesort’ use timsort under the covers and, in general, the actual implementation will vary with datatype. The ‘mergesort’ option is retained for backwards compatibility. **order** str or list of str, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties. See also [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") Return a sorted copy of an array. [`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort") Indirect sort. [`numpy.lexsort`](numpy.lexsort#numpy.lexsort "numpy.lexsort") Indirect stable sort on multiple keys. [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") Find elements in sorted array. [`numpy.partition`](numpy.partition#numpy.partition "numpy.partition") Partial sort. #### Notes See [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") for notes on the different sorting algorithms. 
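The tie-breaking behaviour of `order` described above can be checked with a short sketch; the field names and data here are illustrative, not from the official docs:

```python
import numpy as np

# Sort on 'y'; ties on 'y' are broken by the remaining field 'x',
# in the order the fields appear in the dtype.
a = np.array([('b', 1), ('a', 1), ('c', 0)],
             dtype=[('x', 'S1'), ('y', int)])
a.sort(order='y')
# Sorted order: ('c', 0) first, then the y == 1 ties ordered by 'x'.
print(a['x'].tolist())  # [b'c', b'a', b'b']
print(a['y'].tolist())  # [0, 1, 1]
```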
#### Examples >>> import numpy as np >>> a = np.array([[1,4], [3,1]]) >>> a.sort(axis=1) >>> a array([[1, 4], [1, 3]]) >>> a.sort(axis=0) >>> a array([[1, 3], [1, 4]]) Use the `order` keyword to specify a field to use when sorting a structured array: >>> a = np.array([('a', 2), ('c', 1)], dtype=[('x', 'S1'), ('y', int)]) >>> a.sort(order='y') >>> a array([(b'c', 1), (b'a', 2)], dtype=[('x', 'S1'), ('y', '<i8')]) # numpy.ndarray.strides attribute ndarray.strides Tuple of bytes to step in each dimension when traversing an array. The byte offset of element `(i[0], i[1], ..., i[n])` in an array `a` is: offset = sum(np.array(i) * a.strides) #### Examples >>> import numpy as np >>> y = np.reshape(np.arange(2*3*4), (2,3,4)) >>> y array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]) >>> y.strides (48, 16, 4) >>> y[1,1,1] 17 >>> offset=sum(y.strides * np.array((1,1,1))) >>> offset/y.itemsize 17 >>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0) >>> x.strides (32, 4, 224, 1344) >>> i = np.array([3,5,2,2]) >>> offset = sum(i * x.strides) >>> x[3,5,2,2] 813 >>> offset / x.itemsize 813 # numpy.ndarray.sum method ndarray.sum(_axis =None_, _dtype =None_, _out =None_, _keepdims =False_, _initial =0_, _where =True_) Return the sum of the array elements over the given axis. Refer to [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") for full documentation. See also [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") equivalent function # numpy.ndarray.swapaxes method ndarray.swapaxes(_axis1_ , _axis2_) Return a view of the array with `axis1` and `axis2` interchanged. Refer to [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") for full documentation. See also [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") equivalent function # numpy.ndarray.T attribute ndarray.T View of the transposed array. Same as `self.transpose()`. 
See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose") #### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> a.T array([[1, 3], [2, 4]]) >>> a = np.array([1, 2, 3, 4]) >>> a array([1, 2, 3, 4]) >>> a.T array([1, 2, 3, 4]) # numpy.ndarray.take method ndarray.take(_indices_ , _axis =None_, _out =None_, _mode ='raise'_) Return an array formed from the elements of `a` at the given indices. Refer to [`numpy.take`](numpy.take#numpy.take "numpy.take") for full documentation. See also [`numpy.take`](numpy.take#numpy.take "numpy.take") equivalent function # numpy.ndarray.tobytes method ndarray.tobytes(_order ='C'_) Construct Python bytes containing the raw data bytes in the array. Constructs Python bytes showing a copy of the raw contents of data memory. The bytes object is produced in C-order by default. This behavior is controlled by the `order` parameter. Parameters: **order**{‘C’, ‘F’, ‘A’}, optional Controls the memory layout of the bytes object. ‘C’ means C-order, ‘F’ means F-order, ‘A’ (short for _Any_) means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. Default is ‘C’. Returns: **s** bytes Python bytes exhibiting a copy of `a`’s raw data. See also [`frombuffer`](numpy.frombuffer#numpy.frombuffer "numpy.frombuffer") Inverse of this operation, construct a 1-dimensional array from Python bytes. #### Examples >>> import numpy as np >>> x = np.array([[0, 1], [2, 3]], dtype='<u2') >>> x.tobytes() b'\x00\x00\x01\x00\x02\x00\x03\x00' >>> x.tobytes('C') == x.tobytes() True >>> x.tobytes('F') b'\x00\x00\x02\x00\x01\x00\x03\x00' # numpy.ndarray.tofile method ndarray.tofile(_fid_ , _sep =''_, _format ='%s'_) Write array to a file as text or binary (default). Data is always written in ‘C’ order, independent of the order of `a`. The data produced by this method can be recovered using the function fromfile(). Parameters: **fid** file or str or Path An open file object, or a string containing a filename. 
**sep** str Separator between array items for text output. If “” (empty), a binary file is written, equivalent to `file.write(a.tobytes())`. **format** str Format string for text file output. Each entry in the array is formatted to text by first converting it to the closest Python type, and then using “format” % item. #### Notes This is a convenience function for quick storage of array data. Information on endianness and precision is lost, so this method is not a good choice for files intended to archive data or transport data between machines with different endianness. Some of these problems can be overcome by outputting the data as text files, at the expense of speed and file size. When fid is a file object, array contents are directly written to the file, bypassing the file object’s `write` method. As a result, tofile cannot be used with file objects supporting compression (e.g., GzipFile) or file-like objects that do not support `fileno()` (e.g., BytesIO). # numpy.ndarray.tolist method ndarray.tolist() Return the array as an `a.ndim`-levels deep nested list of Python scalars. Return a copy of the array data as a (nested) Python list. Data items are converted to the nearest compatible builtin Python type, via the [`item`](numpy.ndarray.item#numpy.ndarray.item "numpy.ndarray.item") function. If `a.ndim` is 0, then since the depth of the nested list is 0, it will not be a list at all, but a simple Python scalar. Parameters: **none** Returns: **y** object, or list of object, or list of list of object, or … The possibly nested list of array elements. #### Notes The array may be recreated via `a = np.array(a.tolist())`, although this may sometimes lose precision. 
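A quick round-trip sketch of that note; with exactly representable binary floats the recreated array matches (a minimal sketch, not part of the official docs):

```python
import numpy as np

# Round-trip an array through a nested Python list.
# The values 1.5, 2.0, 3.25, 4.0 are exact binary floats,
# so no precision is lost in this particular round trip.
a = np.array([[1.5, 2.0], [3.25, 4.0]])
b = np.array(a.tolist())

print(np.array_equal(a, b))  # True
print(type(a.tolist()[0][0]))  # plain Python float, not a numpy scalar
```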
#### Examples For a 1D array, `a.tolist()` is almost the same as `list(a)`, except that `tolist` changes numpy scalars to Python scalars: >>> import numpy as np >>> a = np.uint32([1, 2]) >>> a_list = list(a) >>> a_list [np.uint32(1), np.uint32(2)] >>> type(a_list[0]) <class 'numpy.uint32'> >>> a_tolist = a.tolist() >>> a_tolist [1, 2] >>> type(a_tolist[0]) <class 'int'> Additionally, for a 2D array, `tolist` applies recursively: >>> a = np.array([[1, 2], [3, 4]]) >>> list(a) [array([1, 2]), array([3, 4])] >>> a.tolist() [[1, 2], [3, 4]] The base case for this recursion is a 0D array: >>> a = np.array(1) >>> list(a) Traceback (most recent call last): ... TypeError: iteration over a 0-d array >>> a.tolist() 1 # numpy.ndarray.tostring method ndarray.tostring(_order ='C'_) A compatibility alias for [`tobytes`](numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes"), with exactly the same behavior. Despite its name, it returns [`bytes`](https://docs.python.org/3/library/stdtypes.html#bytes "\(in Python v3.13\)") not [`str`](https://docs.python.org/3/library/stdtypes.html#str "\(in Python v3.13\)")s. Deprecated since version 1.19.0. # numpy.ndarray.trace method ndarray.trace(_offset =0_, _axis1 =0_, _axis2 =1_, _dtype =None_, _out =None_) Return the sum along diagonals of the array. Refer to [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace") for full documentation. See also [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace") equivalent function # numpy.ndarray.transpose method ndarray.transpose(_* axes_) Returns a view of the array with axes transposed. Refer to [`numpy.transpose`](numpy.transpose#numpy.transpose "numpy.transpose") for full documentation. Parameters: **axes** None, tuple of ints, or `n` ints * None or no argument: reverses the order of the axes. * tuple of ints: `i` in the `j`-th place in the tuple means that the array’s `i`-th axis becomes the transposed array’s `j`-th axis. 
* `n` ints: same as an n-tuple of the same ints (this form is intended simply as a “convenience” alternative to the tuple form). Returns: **p** ndarray View of the array with its axes suitably permuted. See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose") Equivalent function. [`ndarray.T`](numpy.ndarray.t#numpy.ndarray.T "numpy.ndarray.T") Array property returning the array transposed. [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") Give a new shape to an array without changing its data. #### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> a.transpose() array([[1, 3], [2, 4]]) >>> a.transpose((1, 0)) array([[1, 3], [2, 4]]) >>> a.transpose(1, 0) array([[1, 3], [2, 4]]) >>> a = np.array([1, 2, 3, 4]) >>> a array([1, 2, 3, 4]) >>> a.transpose() array([1, 2, 3, 4]) # numpy.ndarray.var method ndarray.var(_axis =None_, _dtype =None_, _out =None_, _ddof =0_, _keepdims =False_, _*_ , _where =True_) Returns the variance of the array elements, along given axis. Refer to [`numpy.var`](numpy.var#numpy.var "numpy.var") for full documentation. See also [`numpy.var`](numpy.var#numpy.var "numpy.var") equivalent function # numpy.ndarray.view method ndarray.view(_[dtype][, type]_) New view of array with the same data. Note Passing None for `dtype` is different from omitting the parameter, since the former invokes `dtype(None)` which is an alias for `dtype('float64')`. Parameters: **dtype** data-type or ndarray sub-class, optional Data-type descriptor of the returned view, e.g., float32 or int16. Omitting it results in the view having the same data-type as `a`. This argument can also be specified as an ndarray sub-class, which then specifies the type of the returned object (this is equivalent to setting the `type` parameter). **type** Python type, optional Type of the returned view, e.g., ndarray or matrix. Again, omission of the parameter results in type preservation. 
#### Notes `a.view()` is used two different ways: `a.view(some_dtype)` or `a.view(dtype=some_dtype)` constructs a view of the array’s memory with a different data-type. This can cause a reinterpretation of the bytes of memory. `a.view(ndarray_subclass)` or `a.view(type=ndarray_subclass)` just returns an instance of `ndarray_subclass` that looks at the same array (same shape, dtype, etc.) This does not cause a reinterpretation of the memory. For `a.view(some_dtype)`, if `some_dtype` has a different number of bytes per entry than the previous dtype (for example, converting a regular array to a structured array), then the last axis of `a` must be contiguous. This axis will be resized in the result. Changed in version 1.23.0: Only the last axis needs to be contiguous. Previously, the entire array had to be C-contiguous. #### Examples >>> import numpy as np >>> x = np.array([(-1, 2)], dtype=[('a', np.int8), ('b', np.int8)]) Viewing array data using a different type and dtype: >>> nonneg = np.dtype([("a", np.uint8), ("b", np.uint8)]) >>> y = x.view(dtype=nonneg, type=np.recarray) >>> x["a"] array([-1], dtype=int8) >>> y.a array([255], dtype=uint8) Creating a view on a structured array so it can be used in calculations >>> x = np.array([(1, 2),(3,4)], dtype=[('a', np.int8), ('b', np.int8)]) >>> xv = x.view(dtype=np.int8).reshape(-1,2) >>> xv array([[1, 2], [3, 4]], dtype=int8) >>> xv.mean(0) array([2., 3.]) Making changes to the view changes the underlying array >>> xv[0,1] = 20 >>> x array([(1, 20), (3, 4)], dtype=[('a', 'i1'), ('b', 'i1')]) Using a view to convert an array to a recarray: >>> z = x.view(np.recarray) >>> z.a array([1, 3], dtype=int8) Views share data: >>> x[0] = (9, 10) >>> z[0] np.record((9, 10), dtype=[('a', 'i1'), ('b', 'i1')]) Views that change the dtype size (bytes per entry) should normally be avoided on arrays defined by slices, transposes, fortran-ordering, etc.: >>> x = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.int16) >>> y = x[:, ::2] >>> y 
array([[1, 3], [4, 6]], dtype=int16) >>> y.view(dtype=[('width', np.int16), ('length', np.int16)]) Traceback (most recent call last): ... ValueError: To change to a dtype of a different size, the last axis must be contiguous >>> z = y.copy() >>> z.view(dtype=[('width', np.int16), ('length', np.int16)]) array([[(1, 3)], [(4, 6)]], dtype=[('width', '<i2'), ('length', '<i2')]) However, views that change dtype are totally fine for arrays with a contiguous last axis, even if the rest of the axes are not C-contiguous: >>> x = np.arange(2 * 3 * 4, dtype=np.int8).reshape(2, 3, 4) >>> x.transpose(1, 0, 2).view(np.int16) array([[[ 256, 770], [3340, 3854]], [[1284, 1798], [4368, 4882]], [[2312, 2826], [5396, 5910]]], dtype=int16) # numpy.ndenumerate _class_ numpy.ndenumerate(_arr_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/__init__.py) Multidimensional index iterator. Return an iterator yielding pairs of array coordinates and values. Parameters: **arr** ndarray Input array. See also [`ndindex`](numpy.ndindex#numpy.ndindex "numpy.ndindex"), [`flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") #### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> for index, x in np.ndenumerate(a): ... print(index, x) (0, 0) 1 (0, 1) 2 (1, 0) 3 (1, 1) 4 # numpy.ndim numpy.ndim(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L3523-L3559) Return the number of dimensions of an array. Parameters: **a** array_like Input array. If it is not already an ndarray, a conversion is attempted. Returns: **number_of_dimensions** int The number of dimensions in `a`. Scalars are zero-dimensional. 
See also [`ndarray.ndim`](numpy.ndarray.ndim#numpy.ndarray.ndim "numpy.ndarray.ndim") equivalent method [`shape`](numpy.shape#numpy.shape "numpy.shape") dimensions of array [`ndarray.shape`](numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape") dimensions of array #### Examples >>> import numpy as np >>> np.ndim([[1,2,3],[4,5,6]]) 2 >>> np.ndim(np.array([[1,2,3],[4,5,6]])) 2 >>> np.ndim(1) 0 # numpy.ndindex _class_ numpy.ndindex(_* shape_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/__init__.py) An N-dimensional iterator object to index arrays. Given the shape of an array, an `ndindex` instance iterates over the N-dimensional index of the array. At each iteration a tuple of indices is returned, the last dimension is iterated over first. Parameters: **shape** ints, or a single tuple of ints The size of each dimension of the array can be passed as individual parameters or as the elements of a tuple. See also [`ndenumerate`](numpy.ndenumerate#numpy.ndenumerate "numpy.ndenumerate"), [`flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") #### Examples >>> import numpy as np Dimensions as individual arguments >>> for index in np.ndindex(3, 2, 1): ... print(index) (0, 0, 0) (0, 1, 0) (1, 0, 0) (1, 1, 0) (2, 0, 0) (2, 1, 0) Same dimensions - but in a tuple `(3, 2, 1)` >>> for index in np.ndindex((3, 2, 1)): ... print(index) (0, 0, 0) (0, 1, 0) (1, 0, 0) (1, 1, 0) (2, 0, 0) (2, 1, 0) #### Methods [`ndincr`](numpy.ndindex.ndincr#numpy.ndindex.ndincr "numpy.ndindex.ndincr")() | Increment the multi-dimensional index by one. ---|--- # numpy.ndindex.ndincr method ndindex.ndincr()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_index_tricks_impl.py#L701-L715) Increment the multi-dimensional index by one. This method is for backward compatibility only: do not use. Deprecated since version 1.20.0: This method has been advised against since numpy 1.8.0, but only started emitting DeprecationWarning as of this version. 
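As a sanity check on the iteration order described above, `np.ndindex` visits indices in C order (last dimension varying fastest), which matches `itertools.product` over per-dimension ranges. A minimal sketch, not part of the official docs:

```python
import itertools
import numpy as np

shape = (3, 2, 1)

# ndindex accepts the dimensions as individual ints or as one tuple,
# and iterates with the last dimension varying fastest (C order).
via_ndindex = list(np.ndindex(*shape))
via_tuple = list(np.ndindex(shape))
via_product = list(itertools.product(*(range(n) for n in shape)))

print(via_ndindex == via_tuple == via_product)  # True
print(via_ndindex[:2])  # [(0, 0, 0), (0, 1, 0)]
```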
# numpy.nditer.close method nditer.close() Resolve all writeback semantics in writeable operands. See also [Modifying array values](../arrays.nditer#nditer-context-manager) # numpy.nditer.copy method nditer.copy() Get a copy of the iterator in its current state. #### Examples >>> import numpy as np >>> x = np.arange(10) >>> y = x + 1 >>> it = np.nditer([x, y]) >>> next(it) (array(0), array(1)) >>> it2 = it.copy() >>> next(it2) (array(1), array(2)) # numpy.nditer.debug_print method nditer.debug_print() Print the current state of the [`nditer`](numpy.nditer#numpy.nditer "numpy.nditer") instance and debug info to stdout. # numpy.nditer.enable_external_loop method nditer.enable_external_loop() When the “external_loop” was not used during construction, but is desired, this modifies the iterator to behave as if the flag was specified. # numpy.nditer _class_ numpy.nditer(_op_ , _flags =None_, _op_flags =None_, _op_dtypes =None_, _order ='K'_, _casting ='safe'_, _op_axes =None_, _itershape =None_, _buffersize =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/__init__.py) Efficient multi-dimensional iterator object to iterate over arrays. To get started using this object, see the [introductory guide to array iteration](../arrays.nditer#arrays-nditer). Parameters: **op** ndarray or sequence of array_like The array(s) to iterate over. **flags** sequence of str, optional Flags to control the behavior of the iterator. * `buffered` enables buffering when required. * `c_index` causes a C-order index to be tracked. * `f_index` causes a Fortran-order index to be tracked. * `multi_index` causes a multi-index, or a tuple of indices with one per iteration dimension, to be tracked. * `common_dtype` causes all the operands to be converted to a common data type, with copying or buffering as necessary. * `copy_if_overlap` causes the iterator to determine if read operands have overlap with write operands, and make temporary copies as necessary to avoid overlap. 
False positives (needless copying) are possible in some cases. * `delay_bufalloc` delays allocation of the buffers until a reset() call is made. Allows `allocate` operands to be initialized before their values are copied into the buffers. * `external_loop` causes the `values` given to be one-dimensional arrays with multiple values instead of zero-dimensional arrays. * `grow_inner` allows the `value` array sizes to be made larger than the buffer size when both `buffered` and `external_loop` is used. * `ranged` allows the iterator to be restricted to a sub-range of the iterindex values. * `refs_ok` enables iteration of reference types, such as object arrays. * `reduce_ok` enables iteration of `readwrite` operands which are broadcasted, also known as reduction operands. * `zerosize_ok` allows [`itersize`](numpy.nditer.itersize#numpy.nditer.itersize "numpy.nditer.itersize") to be zero. **op_flags** list of list of str, optional This is a list of flags for each operand. At minimum, one of `readonly`, `readwrite`, or `writeonly` must be specified. * `readonly` indicates the operand will only be read from. * `readwrite` indicates the operand will be read from and written to. * `writeonly` indicates the operand will only be written to. * `no_broadcast` prevents the operand from being broadcasted. * `contig` forces the operand data to be contiguous. * `aligned` forces the operand data to be aligned. * `nbo` forces the operand data to be in native byte order. * `copy` allows a temporary read-only copy if required. * `updateifcopy` allows a temporary read-write copy if required. * `allocate` causes the array to be allocated if it is None in the `op` parameter. * `no_subtype` prevents an `allocate` operand from using a subtype. * `arraymask` indicates that this operand is the mask to use for selecting elements when writing to operands with the ‘writemasked’ flag set. 
The iterator does not enforce this, but when writing from a buffer back to the array, it only copies those elements indicated by this mask. * `writemasked` indicates that only elements where the chosen `arraymask` operand is True will be written to. * `overlap_assume_elementwise` can be used to mark operands that are accessed only in the iterator order, to allow less conservative copying when `copy_if_overlap` is present. **op_dtypes** dtype or tuple of dtype(s), optional The required data type(s) of the operands. If copying or buffering is enabled, the data will be converted to/from their original types. **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the iteration order. ‘C’ means C order, ‘F’ means Fortran order, ‘A’ means ‘F’ order if all the arrays are Fortran contiguous, ‘C’ order otherwise, and ‘K’ means as close to the order the array elements appear in memory as possible. This also affects the element memory order of `allocate` operands, as they are allocated to be compatible with iteration order. Default is ‘K’. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur when making a copy or buffering. Setting this to ‘unsafe’ is not recommended, as it can adversely affect accumulations. * ‘no’ means the data types should not be cast at all. * ‘equiv’ means only byte-order changes are allowed. * ‘safe’ means only casts which can preserve values are allowed. * ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed. * ‘unsafe’ means any data conversions may be done. **op_axes** list of list of ints, optional If provided, is a list of ints or None for each operands. The list of axes for an operand is a mapping from the dimensions of the iterator to the dimensions of the operand. A value of -1 can be placed for entries, causing that dimension to be treated as [`newaxis`](../constants#numpy.newaxis "numpy.newaxis"). 
**itershape** tuple of ints, optional The desired shape of the iterator. This allows `allocate` operands with a dimension mapped by op_axes not corresponding to a dimension of a different operand to get a value not equal to 1 for that dimension. **buffersize** int, optional When buffering is enabled, controls the size of the temporary buffers. Set to 0 for the default value. #### Notes `nditer` supersedes [`flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter"). The iterator implementation behind `nditer` is also exposed by the NumPy C API. The Python exposure supplies two iteration interfaces, one which follows the Python iterator protocol, and another which mirrors the C-style do-while pattern. The native Python approach is better in most cases, but if you need the coordinates or index of an iterator, use the C-style pattern. #### Examples Here is how we might write an `iter_add` function, using the Python iterator protocol: >>> import numpy as np >>> def iter_add_py(x, y, out=None): ... addop = np.add ... it = np.nditer([x, y, out], [], ... [['readonly'], ['readonly'], ['writeonly','allocate']]) ... with it: ... for (a, b, c) in it: ... addop(a, b, out=c) ... return it.operands[2] Here is the same function, but following the C-style pattern: >>> def iter_add(x, y, out=None): ... addop = np.add ... it = np.nditer([x, y, out], [], ... [['readonly'], ['readonly'], ['writeonly','allocate']]) ... with it: ... while not it.finished: ... addop(it[0], it[1], out=it[2]) ... it.iternext() ... return it.operands[2] Here is an example outer product function: >>> def outer_it(x, y, out=None): ... mulop = np.multiply ... it = np.nditer([x, y, out], ['external_loop'], ... [['readonly'], ['readonly'], ['writeonly', 'allocate']], ... op_axes=[list(range(x.ndim)) + [-1] * y.ndim, ... [-1] * x.ndim + list(range(y.ndim)), ... None]) ... with it: ... for (a, b, c) in it: ... mulop(a, b, out=c) ... 
return it.operands[2] >>> a = np.arange(2)+1 >>> b = np.arange(3)+1 >>> outer_it(a,b) array([[1, 2, 3], [2, 4, 6]]) Here is an example function which operates like a “lambda” ufunc: >>> def luf(lambdaexpr, *args, **kwargs): ... '''luf(lambdaexpr, op1, ..., opn, out=None, order='K', casting='safe', buffersize=0)''' ... nargs = len(args) ... op = (kwargs.get('out',None),) + args ... it = np.nditer(op, ['buffered','external_loop'], ... [['writeonly','allocate','no_broadcast']] + ... [['readonly','nbo','aligned']]*nargs, ... order=kwargs.get('order','K'), ... casting=kwargs.get('casting','safe'), ... buffersize=kwargs.get('buffersize',0)) ... while not it.finished: ... it[0] = lambdaexpr(*it[1:]) ... it.iternext() ... return it.operands[0] >>> a = np.arange(5) >>> b = np.ones(5) >>> luf(lambda i,j:i*i + j/2, a, b) array([ 0.5, 1.5, 4.5, 9.5, 16.5]) If operand flags `"writeonly"` or `"readwrite"` are used the operands may be views into the original data with the `WRITEBACKIFCOPY` flag. In this case `nditer` must be used as a context manager or the [`nditer.close`](numpy.nditer.close#numpy.nditer.close "numpy.nditer.close") method must be called before using the result. The temporary data will be written back to the original data when the [`__exit__`](https://docs.python.org/3/reference/datamodel.html#object.__exit__ "\(in Python v3.13\)") function is called but not before: >>> a = np.arange(6, dtype='i4')[::-2] >>> with np.nditer(a, [], ... [['writeonly', 'updateifcopy']], ... casting='unsafe', ... op_dtypes=[np.dtype('f4')]) as i: ... x = i.operands[0] ... x[:] = [-1, -2, -3] ... # a still unchanged here >>> a, x (array([-1, -2, -3], dtype=int32), array([-1., -2., -3.], dtype=float32)) It is important to note that once the iterator is exited, dangling references (like `x` in the example) may or may not share data with the original data `a`. If writeback semantics were active, i.e. 
if `x.base.flags.writebackifcopy` is `True`, then exiting the iterator will sever the connection between `x` and `a`, writing to `x` will no longer write to `a`. If writeback semantics are not active, then `x.data` will still point at some part of `a.data`, and writing to one will affect the other. Context management and the [`close`](numpy.nditer.close#numpy.nditer.close "numpy.nditer.close") method appeared in version 1.15.0. Attributes: **dtypes** tuple of dtype(s) The data types of the values provided in [`value`](numpy.nditer.value#numpy.nditer.value "numpy.nditer.value"). This may be different from the operand data types if buffering is enabled. Valid only before the iterator is closed. **finished** bool Whether the iteration over the operands is finished or not. **has_delayed_bufalloc** bool If True, the iterator was created with the `delay_bufalloc` flag, and no reset() function was called on it yet. **has_index** bool If True, the iterator was created with either the `c_index` or the `f_index` flag, and the property [`index`](numpy.nditer.index#numpy.nditer.index "numpy.nditer.index") can be used to retrieve it. **has_multi_index** bool If True, the iterator was created with the `multi_index` flag, and the property [`multi_index`](numpy.nditer.multi_index#numpy.nditer.multi_index "numpy.nditer.multi_index") can be used to retrieve it. **index** When the `c_index` or `f_index` flag was used, this property provides access to the index. Raises a ValueError if accessed and `has_index` is False. **iterationneedsapi** bool Whether iteration requires access to the Python API, for example if one of the operands is an object array. **iterindex** int An index which matches the order of iteration. **itersize** int Size of the iterator. **itviews** Structured view(s) of [`operands`](numpy.nditer.operands#numpy.nditer.operands "numpy.nditer.operands") in memory, matching the reordered and optimized iterator access pattern. Valid only before the iterator is closed. 
**multi_index** When the `multi_index` flag was used, this property provides access to the index. Raises a ValueError if accessed and `has_multi_index` is False. **ndim** int The dimensions of the iterator. **nop** int The number of iterator operands. [`operands`](numpy.nditer.operands#numpy.nditer.operands "numpy.nditer.operands") tuple of operand(s) The array(s) to be iterated over. Valid only before the iterator is closed. **shape** tuple of ints Shape tuple, the shape of the iterator. **value** Value of `operands` at current iteration. Normally, this is a tuple of array scalars, but if the flag `external_loop` is used, it is a tuple of one dimensional arrays. #### Methods [`close`](numpy.nditer.close#numpy.nditer.close "numpy.nditer.close")() | Resolve all writeback semantics in writeable operands. ---|--- [`copy`](numpy.nditer.copy#numpy.nditer.copy "numpy.nditer.copy")() | Get a copy of the iterator in its current state. [`debug_print`](numpy.nditer.debug_print#numpy.nditer.debug_print "numpy.nditer.debug_print")() | Print the current state of the `nditer` instance and debug info to stdout. [`enable_external_loop`](numpy.nditer.enable_external_loop#numpy.nditer.enable_external_loop "numpy.nditer.enable_external_loop")() | When the "external_loop" was not used during construction, but is desired, this modifies the iterator to behave as if the flag was specified. [`iternext`](numpy.nditer.iternext#numpy.nditer.iternext "numpy.nditer.iternext")() | Check whether iterations are left, and perform a single internal iteration without returning the result. [`remove_axis`](numpy.nditer.remove_axis#numpy.nditer.remove_axis "numpy.nditer.remove_axis")(i, /) | Removes axis `i` from the iterator. [`remove_multi_index`](numpy.nditer.remove_multi_index#numpy.nditer.remove_multi_index "numpy.nditer.remove_multi_index")() | When the "multi_index" flag was specified, this removes it, allowing the internal iteration structure to be optimized further. 
[`reset`](numpy.nditer.reset#numpy.nditer.reset "numpy.nditer.reset")() | Reset the iterator to its initial state. # numpy.nditer.index attribute nditer.index # numpy.nditer.iternext method nditer.iternext() Check whether iterations are left, and perform a single internal iteration without returning the result. Used in the C-style do-while pattern. For an example, see [`nditer`](numpy.nditer#numpy.nditer "numpy.nditer"). Returns: **iternext** bool Whether or not there are iterations left. # numpy.nditer.itersize attribute nditer.itersize # numpy.nditer.multi_index attribute nditer.multi_index # numpy.nditer.operands attribute nditer.operands The array(s) to be iterated over. Valid only before the iterator is closed. # numpy.nditer.remove_axis method nditer.remove_axis(_i_ , _/_) Removes axis `i` from the iterator. Requires that the flag “multi_index” be enabled. # numpy.nditer.remove_multi_index method nditer.remove_multi_index() When the “multi_index” flag was specified, this removes it, allowing the internal iteration structure to be optimized further. # numpy.nditer.reset method nditer.reset() Reset the iterator to its initial state. # numpy.nditer.value attribute nditer.value # numpy.negative numpy.negative(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Numerical negative, element-wise. Parameters: **x** array_like or scalar Input array. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result.
Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. **\*\*kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray or scalar Returned array or scalar: `y = -x`. This is a scalar if `x` is a scalar. #### Examples >>> import numpy as np >>> np.negative([1.,-1.]) array([-1., 1.]) The unary `-` operator can be used as a shorthand for `np.negative` on ndarrays. >>> x1 = np.array(([1., -1.])) >>> -x1 array([-1., 1.]) # numpy.nested_iters numpy.nested_iters(_op_ , _axes_ , _flags =None_, _op_flags =None_, _op_dtypes =None_, _order ='K'_, _casting ='safe'_, _buffersize =0_) Create nditers for use in nested loops. Create a tuple of [`nditer`](numpy.nditer#numpy.nditer "numpy.nditer") objects which iterate in nested loops over different axes of the op argument. The first iterator is used in the outermost loop, the last in the innermost loop. Advancing one will change the subsequent iterators to point at its new element. Parameters: **op** ndarray or sequence of array_like The array(s) to iterate over. **axes** list of list of int Each item is used as an “op_axes” argument to an nditer. **flags, op_flags, op_dtypes, order, casting, buffersize (optional)** See [`nditer`](numpy.nditer#numpy.nditer "numpy.nditer") parameters of the same name. Returns: **iters** tuple of nditer An nditer for each item in `axes`, outermost first. See also [`nditer`](numpy.nditer#numpy.nditer "numpy.nditer") #### Examples Basic usage. Note how y is the “flattened” version of [a[:, 0, :], a[:, 1, :], a[:, 2, :]] since we specified the first iter’s axes as [1] >>> import numpy as np >>> a = np.arange(12).reshape(2, 3, 2) >>> i, j = np.nested_iters(a, [[1], [0, 2]], flags=["multi_index"]) >>> for x in i: ... print(i.multi_index) ... for y in j: ...
print('', j.multi_index, y) (0,) (0, 0) 0 (0, 1) 1 (1, 0) 6 (1, 1) 7 (1,) (0, 0) 2 (0, 1) 3 (1, 0) 8 (1, 1) 9 (2,) (0, 0) 4 (0, 1) 5 (1, 0) 10 (1, 1) 11 # numpy.nextafter numpy.nextafter(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Return the next floating-point value after x1 towards x2, element-wise. Parameters: **x1** array_like Values to find the next representable value of. **x2** array_like The direction where to look for the next representable value of `x1`. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. **\*\*kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **out** ndarray or scalar The next representable values of `x1` in the direction of `x2`. This is a scalar if both `x1` and `x2` are scalars.
#### Examples >>> import numpy as np >>> eps = np.finfo(np.float64).eps >>> np.nextafter(1, 2) == eps + 1 True >>> np.nextafter([1, 2], [2, 1]) == [eps + 1, 2 - eps] array([ True, True]) # numpy.nonzero numpy.nonzero(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L2018-L2111) Return the indices of the elements that are non-zero. Returns a tuple of arrays, one for each dimension of `a`, containing the indices of the non-zero elements in that dimension. The values in `a` are always tested and returned in row-major, C-style order. To group the indices by element, rather than dimension, use [`argwhere`](numpy.argwhere#numpy.argwhere "numpy.argwhere"), which returns a row for each non-zero element. Note When called on a zero-d array or scalar, `nonzero(a)` is treated as `nonzero(atleast_1d(a))`. Deprecated since version 1.17.0: Use [`atleast_1d`](numpy.atleast_1d#numpy.atleast_1d "numpy.atleast_1d") explicitly if this behavior is deliberate. Parameters: **a** array_like Input array. Returns: **tuple_of_arrays** tuple Indices of elements that are non-zero. See also [`flatnonzero`](numpy.flatnonzero#numpy.flatnonzero "numpy.flatnonzero") Return indices that are non-zero in the flattened version of the input array. [`ndarray.nonzero`](numpy.ndarray.nonzero#numpy.ndarray.nonzero "numpy.ndarray.nonzero") Equivalent ndarray method. [`count_nonzero`](numpy.count_nonzero#numpy.count_nonzero "numpy.count_nonzero") Counts the number of non-zero elements in the input array. #### Notes While the nonzero values can be obtained with `a[nonzero(a)]`, it is recommended to use `x[x.astype(bool)]` or `x[x != 0]` instead, which will correctly handle 0-d arrays. 
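The recommendation in the note above can be sketched briefly (the array names here are illustrative): boolean masking selects the same elements as `nonzero`-based indexing for arrays with one or more dimensions, and it also works for 0-d arrays, where calling `nonzero` directly is deprecated.

```python
import numpy as np

x = np.array([[3, 0], [0, 4]])

# For ndim >= 1, both spellings select the same nonzero elements:
print(x[np.nonzero(x)])  # [3 4]
print(x[x != 0])         # [3 4]

# For a 0-d array, boolean masking still works and returns a 1-D result,
# whereas nonzero() on a 0-d array is deprecated:
s = np.array(5)
print(s[s != 0])         # [5]
```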
#### Examples >>> import numpy as np >>> x = np.array([[3, 0, 0], [0, 4, 0], [5, 6, 0]]) >>> x array([[3, 0, 0], [0, 4, 0], [5, 6, 0]]) >>> np.nonzero(x) (array([0, 1, 2, 2]), array([0, 1, 0, 1])) >>> x[np.nonzero(x)] array([3, 4, 5, 6]) >>> np.transpose(np.nonzero(x)) array([[0, 0], [1, 1], [2, 0], [2, 1]]) A common use for `nonzero` is to find the indices of an array where a condition is True. Given an array `a`, the condition `a > 3` is a boolean array and since False is interpreted as 0, `np.nonzero(a > 3)` yields the indices of `a` where the condition is true. >>> a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> a > 3 array([[False, False, False], [ True, True, True], [ True, True, True]]) >>> np.nonzero(a > 3) (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) Using this result to index `a` is equivalent to using the mask directly: >>> a[np.nonzero(a > 3)] array([4, 5, 6, 7, 8, 9]) >>> a[a > 3] # prefer this spelling array([4, 5, 6, 7, 8, 9]) `nonzero` can also be called as a method of the array. >>> (a > 3).nonzero() (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) # numpy.not_equal numpy.not_equal(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Return (x1 != x2) element-wise. Parameters: **x1, x2** array_like Input arrays. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result.
Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. **\*\*kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **out** ndarray or scalar Output array, element-wise comparison of `x1` and `x2`. Typically of type bool, unless `dtype=object` is passed. This is a scalar if both `x1` and `x2` are scalars. See also [`equal`](numpy.equal#numpy.equal "numpy.equal"), [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`less`](numpy.less#numpy.less "numpy.less"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal") #### Examples >>> import numpy as np >>> np.not_equal([1.,2.], [1., 3.]) array([False, True]) >>> np.not_equal([1, 2], [[1, 3],[1, 4]]) array([[False, True], [False, True]]) The `!=` operator can be used as a shorthand for `np.not_equal` on ndarrays. >>> a = np.array([1., 2.]) >>> b = np.array([1., 3.]) >>> a != b array([False, True]) # numpy.number.__class_getitem__ method number.__class_getitem__(_item_ , _/_) Return a parametrized wrapper around the [`number`](../arrays.scalars#numpy.number "numpy.number") type. New in version 1.22. Returns: **alias** types.GenericAlias A parametrized [`number`](../arrays.scalars#numpy.number "numpy.number") type. See also [**PEP 585**](https://peps.python.org/pep-0585/) Type hinting generics in standard collections. #### Examples >>> from typing import Any >>> import numpy as np >>> np.signedinteger[Any] numpy.signedinteger[typing.Any] # numpy.ogrid numpy.ogrid _= _ An instance which returns an open multi-dimensional “meshgrid”. An instance which returns an open (i.e. not fleshed out) mesh-grid when indexed, so that only one dimension of each returned array is greater than 1.
The dimension and number of the output arrays are equal to the number of indexing dimensions. If the step length is not a complex number, then the stop is not inclusive. However, if the step length is a **complex number** (e.g. 5j), then the integer part of its magnitude is interpreted as specifying the number of points to create between the start and stop values, where the stop value **is inclusive**. Returns: **mesh-grid** ndarray or tuple of ndarrays If the input is a single slice, returns an array. If the input is multiple slices, returns a tuple of arrays, with only one dimension not equal to 1. See also [`mgrid`](numpy.mgrid#numpy.mgrid "numpy.mgrid") like `ogrid` but returns dense (or fleshed out) mesh grids [`meshgrid`](numpy.meshgrid#numpy.meshgrid "numpy.meshgrid") return coordinate matrices from coordinate vectors [`r_`](numpy.r_#numpy.r_ "numpy.r_") array concatenator [How to create arrays with regularly-spaced values](../../user/how-to- partition#how-to-partition) #### Examples >>> from numpy import ogrid >>> ogrid[-1:1:5j] array([-1. , -0.5, 0. , 0.5, 1. ]) >>> ogrid[0:5, 0:5] (array([[0], [1], [2], [3], [4]]), array([[0, 1, 2, 3, 4]])) # numpy.ones numpy.ones(_shape_ , _dtype =None_, _order ='C'_, _*_ , _device =None_, _like =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L137-L201) Return a new array of given shape and type, filled with ones. Parameters: **shape** int or sequence of ints Shape of the new array, e.g., `(2, 3)` or `2`. **dtype** data-type, optional The desired data-type for the array, e.g., [`numpy.int8`](../arrays.scalars#numpy.int8 "numpy.int8"). Default is [`numpy.float64`](../arrays.scalars#numpy.float64 "numpy.float64"). **order**{‘C’, ‘F’}, optional, default: C Whether to store multi-dimensional data in row-major (C-style) or column-major (Fortran-style) order in memory. **device** str, optional The device on which to place the created array. Default: None. 
For Array-API interoperability only, so must be `"cpu"` if passed. New in version 2.0.0. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: **out** ndarray Array of ones with the given shape, dtype, and order. See also [`ones_like`](numpy.ones_like#numpy.ones_like "numpy.ones_like") Return an array of ones with shape and type of input. [`empty`](numpy.empty#numpy.empty "numpy.empty") Return a new uninitialized array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Return a new array setting values to zero. [`full`](numpy.full#numpy.full "numpy.full") Return a new array of given shape filled with value. #### Examples >>> import numpy as np >>> np.ones(5) array([1., 1., 1., 1., 1.]) >>> np.ones((5,), dtype=int) array([1, 1, 1, 1, 1]) >>> np.ones((2, 1)) array([[1.], [1.]]) >>> s = (2,2) >>> np.ones(s) array([[1., 1.], [1., 1.]]) # numpy.ones_like numpy.ones_like(_a_ , _dtype =None_, _order ='K'_, _subok =True_, _shape =None_, _*_ , _device =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L213-L281) Return an array of ones with the same shape and type as a given array. Parameters: **a** array_like The shape and data-type of `a` define these same attributes of the returned array. **dtype** data-type, optional Overrides the data type of the result. **order**{‘C’, ‘F’, ‘A’, or ‘K’}, optional Overrides the memory layout of the result. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. **subok** bool, optional. If True, then the newly created array will use the sub-class type of `a`, otherwise it will be a base-class array. 
Defaults to True. **shape** int or sequence of ints, optional. Overrides the shape of the result. If order=’K’ and the number of dimensions is unchanged, will try to keep order, otherwise, order=’C’ is implied. **device** str, optional The device on which to place the created array. Default: None. For Array-API interoperability only, so must be `"cpu"` if passed. New in version 2.0.0. Returns: **out** ndarray Array of ones with the same shape and type as `a`. See also [`empty_like`](numpy.empty_like#numpy.empty_like "numpy.empty_like") Return an empty array with shape and type of input. [`zeros_like`](numpy.zeros_like#numpy.zeros_like "numpy.zeros_like") Return an array of zeros with shape and type of input. [`full_like`](numpy.full_like#numpy.full_like "numpy.full_like") Return a new array with shape of input filled with value. [`ones`](numpy.ones#numpy.ones "numpy.ones") Return a new array setting values to one. #### Examples >>> import numpy as np >>> x = np.arange(6) >>> x = x.reshape((2, 3)) >>> x array([[0, 1, 2], [3, 4, 5]]) >>> np.ones_like(x) array([[1, 1, 1], [1, 1, 1]]) >>> y = np.arange(3, dtype=float) >>> y array([0., 1., 2.]) >>> np.ones_like(y) array([1., 1., 1.]) # numpy.outer numpy.outer(_a_ , _b_ , _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L876-L961) Compute the outer product of two vectors. Given two vectors `a` and `b` of length `M` and `N`, respectively, the outer product [1] is:

    [[a_0*b_0  a_0*b_1 ...  a_0*b_{N-1} ]
     [a_1*b_0     .
     [ ...            .
     [a_{M-1}*b_0            a_{M-1}*b_{N-1} ]]

Parameters: **a**(M,) array_like First input vector. Input is flattened if not already 1-dimensional. **b**(N,) array_like Second input vector. Input is flattened if not already 1-dimensional.
**out**(M, N) ndarray, optional A location where the result is stored Returns: **out**(M, N) ndarray `out[i, j] = a[i] * b[j]` See also [`inner`](numpy.inner#numpy.inner "numpy.inner") [`einsum`](numpy.einsum#numpy.einsum "numpy.einsum") `einsum('i,j->ij', a.ravel(), b.ravel())` is the equivalent. [`ufunc.outer`](numpy.ufunc.outer#numpy.ufunc.outer "numpy.ufunc.outer") A generalization to dimensions other than 1D and other operations. `np.multiply.outer(a.ravel(), b.ravel())` is the equivalent. [`linalg.outer`](numpy.linalg.outer#numpy.linalg.outer "numpy.linalg.outer") An Array API compatible variation of `np.outer`, which accepts 1-dimensional inputs only. [`tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot") `np.tensordot(a.ravel(), b.ravel(), axes=((), ()))` is the equivalent. #### References [1] G. H. Golub and C. F. Van Loan, _Matrix Computations_ , 3rd ed., Baltimore, MD, Johns Hopkins University Press, 1996, pg. 8. #### Examples Make a (_very_ coarse) grid for computing a Mandelbrot set: >>> import numpy as np >>> rl = np.outer(np.ones((5,)), np.linspace(-2, 2, 5)) >>> rl array([[-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.]]) >>> im = np.outer(1j*np.linspace(2, -2, 5), np.ones((5,))) >>> im array([[0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j], [0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j], [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], [0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j], [0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j]]) >>> grid = rl + im >>> grid array([[-2.+2.j, -1.+2.j, 0.+2.j, 1.+2.j, 2.+2.j], [-2.+1.j, -1.+1.j, 0.+1.j, 1.+1.j, 2.+1.j], [-2.+0.j, -1.+0.j, 0.+0.j, 1.+0.j, 2.+0.j], [-2.-1.j, -1.-1.j, 0.-1.j, 1.-1.j, 2.-1.j], [-2.-2.j, -1.-2.j, 0.-2.j, 1.-2.j, 2.-2.j]]) An example using a “vector” of letters: >>> x = np.array(['a', 'b', 'c'], dtype=object) >>> np.outer(x, [1, 2, 3]) array([['a', 'aa', 'aaa'], ['b', 'bb', 'bbb'], ['c', 'cc', 'ccc']], dtype=object) # numpy.packbits 
numpy.packbits(_a_ , _/_ , _axis =None_, _bitorder ='big'_) Packs the elements of a binary-valued array into bits in a uint8 array. The result is padded to full bytes by inserting zero bits at the end. Parameters: **a** array_like An array of integers or booleans whose elements should be packed to bits. **axis** int, optional The dimension over which bit-packing is done. `None` implies packing the flattened array. **bitorder**{‘big’, ‘little’}, optional The order of the input bits. ‘big’ will mimic bin(val), `[0, 0, 0, 0, 0, 0, 1, 1] => 3 = 0b00000011`, ‘little’ will reverse the order so `[1, 1, 0, 0, 0, 0, 0, 0] => 3`. Defaults to ‘big’. Returns: **packed** ndarray Array of type uint8 whose elements represent bits corresponding to the logical (0 or nonzero) value of the input elements. The shape of `packed` has the same number of dimensions as the input (unless `axis` is None, in which case the output is 1-D). See also [`unpackbits`](numpy.unpackbits#numpy.unpackbits "numpy.unpackbits") Unpacks elements of a uint8 array into a binary-valued output array. #### Examples >>> import numpy as np >>> a = np.array([[[1,0,1], ... [0,1,0]], ... [[1,1,0], ... [0,0,1]]]) >>> b = np.packbits(a, axis=-1) >>> b array([[[160], [ 64]], [[192], [ 32]]], dtype=uint8) Note that in binary 160 = 1010 0000, 64 = 0100 0000, 192 = 1100 0000, and 32 = 0010 0000. # numpy.pad numpy.pad(_array_ , _pad_width_ , _mode ='constant'_, _** kwargs_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_arraypad_impl.py#L545-L891) Pad an array. Parameters: **array** array_like of rank N The array to pad. **pad_width**{sequence, array_like, int} Number of values padded to the edges of each axis. `((before_1, after_1), ... (before_N, after_N))` unique pad widths for each axis. `(before, after)` or `((before, after),)` yields same before and after pad for each axis. `(pad,)` or `int` is a shortcut for before = after = pad width for all axes. 
**mode** str or function, optional One of the following string values or a user supplied function.

‘constant’ (default)
Pads with a constant value.

‘edge’
Pads with the edge values of array.

‘linear_ramp’
Pads with the linear ramp between end_value and the array edge value.

‘maximum’
Pads with the maximum value of all or part of the vector along each axis.

‘mean’
Pads with the mean value of all or part of the vector along each axis.

‘median’
Pads with the median value of all or part of the vector along each axis.

‘minimum’
Pads with the minimum value of all or part of the vector along each axis.

‘reflect’
Pads with the reflection of the vector mirrored on the first and last values of the vector along each axis.

‘symmetric’
Pads with the reflection of the vector mirrored along the edge of the array.

‘wrap’
Pads with the wrap of the vector along the axis. The first values are used to pad the end and the end values are used to pad the beginning.

‘empty’
Pads with undefined values.

`<function>`
Padding function, see Notes.

**stat_length** sequence or int, optional Used in ‘maximum’, ‘mean’, ‘median’, and ‘minimum’. Number of values at edge of each axis used to calculate the statistic value. `((before_1, after_1), ... (before_N, after_N))` unique statistic lengths for each axis. `(before, after)` or `((before, after),)` yields same before and after statistic lengths for each axis. `(stat_length,)` or `int` is a shortcut for `before = after = statistic` length for all axes. Default is `None`, to use the entire axis. **constant_values** sequence or scalar, optional Used in ‘constant’. The values to set the padded values for each axis. `((before_1, after_1), ... (before_N, after_N))` unique pad constants for each axis. `(before, after)` or `((before, after),)` yields same before and after constants for each axis. `(constant,)` or `constant` is a shortcut for `before = after = constant` for all axes. Default is 0. **end_values** sequence or scalar, optional Used in ‘linear_ramp’.
The values used for the ending value of the linear_ramp and that will form the edge of the padded array. `((before_1, after_1), ... (before_N, after_N))` unique end values for each axis. `(before, after)` or `((before, after),)` yields same before and after end values for each axis. `(constant,)` or `constant` is a shortcut for `before = after = constant` for all axes. Default is 0. **reflect_type**{‘even’, ‘odd’}, optional Used in ‘reflect’, and ‘symmetric’. The ‘even’ style is the default with an unaltered reflection around the edge value. For the ‘odd’ style, the extended part of the array is created by subtracting the reflected values from two times the edge value. Returns: **pad** ndarray Padded array of rank equal to [`array`](numpy.array#numpy.array "numpy.array") with shape increased according to `pad_width`. #### Notes For an array with rank greater than 1, some of the padding of later axes is calculated from padding of previous axes. This is easiest to think about with a rank 2 array where the corners of the padded array are calculated by using padded values from the first axis. The padding function, if used, should modify a rank 1 array in-place. It has the following signature:

    padding_func(vector, iaxis_pad_width, iaxis, kwargs)

where

**vector** ndarray
A rank 1 array already padded with zeros. Padded values are `vector[:iaxis_pad_width[0]]` and `vector[-iaxis_pad_width[1]:]`.

**iaxis_pad_width** tuple
A 2-tuple of ints, `iaxis_pad_width[0]` represents the number of values padded at the beginning of vector where `iaxis_pad_width[1]` represents the number of values padded at the end of vector.

**iaxis** int
The axis currently being calculated.

**kwargs** dict
Any keyword arguments the function requires.
#### Examples >>> import numpy as np >>> a = [1, 2, 3, 4, 5] >>> np.pad(a, (2, 3), 'constant', constant_values=(4, 6)) array([4, 4, 1, ..., 6, 6, 6]) >>> np.pad(a, (2, 3), 'edge') array([1, 1, 1, ..., 5, 5, 5]) >>> np.pad(a, (2, 3), 'linear_ramp', end_values=(5, -4)) array([ 5, 3, 1, 2, 3, 4, 5, 2, -1, -4]) >>> np.pad(a, (2,), 'maximum') array([5, 5, 1, 2, 3, 4, 5, 5, 5]) >>> np.pad(a, (2,), 'mean') array([3, 3, 1, 2, 3, 4, 5, 3, 3]) >>> np.pad(a, (2,), 'median') array([3, 3, 1, 2, 3, 4, 5, 3, 3]) >>> a = [[1, 2], [3, 4]] >>> np.pad(a, ((3, 2), (2, 3)), 'minimum') array([[1, 1, 1, 2, 1, 1, 1], [1, 1, 1, 2, 1, 1, 1], [1, 1, 1, 2, 1, 1, 1], [1, 1, 1, 2, 1, 1, 1], [3, 3, 3, 4, 3, 3, 3], [1, 1, 1, 2, 1, 1, 1], [1, 1, 1, 2, 1, 1, 1]]) >>> a = [1, 2, 3, 4, 5] >>> np.pad(a, (2, 3), 'reflect') array([3, 2, 1, 2, 3, 4, 5, 4, 3, 2]) >>> np.pad(a, (2, 3), 'reflect', reflect_type='odd') array([-1, 0, 1, 2, 3, 4, 5, 6, 7, 8]) >>> np.pad(a, (2, 3), 'symmetric') array([2, 1, 1, 2, 3, 4, 5, 5, 4, 3]) >>> np.pad(a, (2, 3), 'symmetric', reflect_type='odd') array([0, 1, 1, 2, 3, 4, 5, 5, 6, 7]) >>> np.pad(a, (2, 3), 'wrap') array([4, 5, 1, 2, 3, 4, 5, 1, 2, 3]) >>> def pad_with(vector, pad_width, iaxis, kwargs): ... pad_value = kwargs.get('padder', 10) ... vector[:pad_width[0]] = pad_value ... 
vector[-pad_width[1]:] = pad_value >>> a = np.arange(6) >>> a = a.reshape((2, 3)) >>> np.pad(a, 2, pad_with) array([[10, 10, 10, 10, 10, 10, 10], [10, 10, 10, 10, 10, 10, 10], [10, 10, 0, 1, 2, 10, 10], [10, 10, 3, 4, 5, 10, 10], [10, 10, 10, 10, 10, 10, 10], [10, 10, 10, 10, 10, 10, 10]]) >>> np.pad(a, 2, pad_with, padder=100) array([[100, 100, 100, 100, 100, 100, 100], [100, 100, 100, 100, 100, 100, 100], [100, 100, 0, 1, 2, 100, 100], [100, 100, 3, 4, 5, 100, 100], [100, 100, 100, 100, 100, 100, 100], [100, 100, 100, 100, 100, 100, 100]]) # numpy.partition numpy.partition(_a_ , _kth_ , _axis =-1_, _kind ='introselect'_, _order =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L758-L869) Return a partitioned copy of an array. Creates a copy of the array and partially sorts it in such a way that the value of the element in k-th position is in the position it would be in a sorted array. In the output array, all elements smaller than the k-th element are located to the left of this element and all equal or greater are located to its right. The ordering of the elements in the two partitions on either side of the k-th element in the output array is undefined. Parameters: **a** array_like Array to be sorted. **kth** int or sequence of ints Element index to partition by. The k-th value of the element will be in its final sorted position and all smaller elements will be moved before it and all equal or greater elements behind it. The order of all elements in the partitions is undefined. If provided with a sequence of k-th values, it will partition all elements indexed by them into their sorted position at once. Deprecated since version 1.22.0: Passing booleans as index is deprecated. **axis** int or None, optional Axis along which to sort. If None, the array is flattened before sorting. The default is -1, which sorts along the last axis. **kind**{‘introselect’}, optional Selection algorithm. Default is ‘introselect’.
**order** str or list of str, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string. Not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties. Returns: **partitioned_array** ndarray Array of the same type and shape as `a`. See also [`ndarray.partition`](numpy.ndarray.partition#numpy.ndarray.partition "numpy.ndarray.partition") Method to sort an array in-place. [`argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition") Indirect partition. [`sort`](numpy.sort#numpy.sort "numpy.sort") Full sorting. #### Notes The various selection algorithms are characterized by their average speed, worst case performance, work space size, and whether they are stable. A stable sort keeps items with the same key in the same relative order. The available algorithms have the following properties:

kind | speed | worst case | work space | stable
---|---|---|---|---
‘introselect’ | 1 | O(n) | 0 | no

All the partition algorithms make temporary copies of the data when partitioning along any but the last axis. Consequently, partitioning along the last axis is faster and uses less space than partitioning along any other axis. The sort order for complex numbers is lexicographic. If both the real and imaginary parts are non-nan then the order is determined by the real parts except when they are equal, in which case the order is determined by the imaginary parts. The sort order treats `np.nan` as bigger than `np.inf`. #### Examples >>> import numpy as np >>> a = np.array([7, 1, 7, 7, 1, 5, 7, 2, 3, 2, 6, 2, 3, 0]) >>> p = np.partition(a, 4) >>> p array([0, 1, 2, 1, 2, 5, 2, 3, 3, 6, 7, 7, 7, 7]) # may vary `p[4]` is 2; all elements in `p[:4]` are less than or equal to `p[4]`, and all elements in `p[5:]` are greater than or equal to `p[4]`.
The partition is: [0, 1, 2, 1], [2], [5, 2, 3, 3, 6, 7, 7, 7, 7] The next example shows the use of multiple values passed to `kth`. >>> p2 = np.partition(a, (4, 8)) >>> p2 array([0, 1, 2, 1, 2, 3, 3, 2, 5, 6, 7, 7, 7, 7]) `p2[4]` is 2 and `p2[8]` is 5. All elements in `p2[:4]` are less than or equal to `p2[4]`, all elements in `p2[5:8]` are greater than or equal to `p2[4]` and less than or equal to `p2[8]`, and all elements in `p2[9:]` are greater than or equal to `p2[8]`. The partition is: [0, 1, 2, 1], [2], [3, 3, 2], [5], [6, 7, 7, 7, 7] # numpy.percentile numpy.percentile(_a_ , _q_ , _axis =None_, _out =None_, _overwrite_input =False_, _method ='linear'_, _keepdims =False_, _*_ , _weights =None_, _interpolation =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L4067-L4274) Compute the q-th percentile of the data along the specified axis. Returns the q-th percentile(s) of the array elements. Parameters: **a** array_like of real numbers Input array or object that can be converted to an array. **q** array_like of float Percentage or sequence of percentages for the percentiles to compute. Values must be between 0 and 100 inclusive. **axis**{int, tuple of int, None}, optional Axis or axes along which the percentiles are computed. The default is to compute the percentile(s) along a flattened version of the array. **out** ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary. **overwrite_input** bool, optional If True, then allow the input array `a` to be modified by intermediate calculations, to save memory. In this case, the contents of the input `a` after this function completes is undefined. **method** str, optional This parameter specifies the method to use for estimating the percentile. There are many different methods, some unique to NumPy. See the notes for explanation. 
The options sorted by their R type as summarized in the H&F paper [1] are:

1. ‘inverted_cdf’
2. ‘averaged_inverted_cdf’
3. ‘closest_observation’
4. ‘interpolated_inverted_cdf’
5. ‘hazen’
6. ‘weibull’
7. ‘linear’ (default)
8. ‘median_unbiased’
9. ‘normal_unbiased’

The first three methods are discontinuous. NumPy further defines the following discontinuous variations of the default ‘linear’ (7.) option:

* ‘lower’
* ‘higher’
* ‘midpoint’
* ‘nearest’

Changed in version 1.22.0: This argument was previously called “interpolation” and only offered the “linear” default and last four options. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original array `a`. **weights** array_like, optional An array of weights associated with the values in `a`. Each value in `a` contributes to the percentile according to its associated weight. The weights array can either be 1-D (in which case its length must be the size of `a` along the given axis) or of the same shape as `a`. If `weights=None`, then all data in `a` are assumed to have a weight equal to one. Only `method="inverted_cdf"` supports weights. See the notes for more details. New in version 2.0.0. **interpolation** str, optional Deprecated name for the method keyword argument. Deprecated since version 1.22.0. Returns: **percentile** scalar or ndarray If `q` is a single percentile and `axis=None`, then the result is a scalar. If multiple percentiles are given, first axis of the result corresponds to the percentiles. The other axes are the axes that remain after the reduction of `a`. If the input contains integers or floats smaller than `float64`, the output data-type is `float64`. Otherwise, the output data-type is the same as that of the input. If `out` is specified, that array is returned instead.
See also [`mean`](numpy.mean#numpy.mean "numpy.mean") [`median`](numpy.median#numpy.median "numpy.median") equivalent to `percentile(..., 50)` [`nanpercentile`](numpy.nanpercentile#numpy.nanpercentile "numpy.nanpercentile") [`quantile`](numpy.quantile#numpy.quantile "numpy.quantile") equivalent to percentile, except q in the range [0, 1]. #### Notes The behavior of `numpy.percentile` with percentage `q` is that of [`numpy.quantile`](numpy.quantile#numpy.quantile "numpy.quantile") with argument `q/100`. For more information, please see [`numpy.quantile`](numpy.quantile#numpy.quantile "numpy.quantile"). #### References [1] R. J. Hyndman and Y. Fan, “Sample quantiles in statistical packages,” The American Statistician, 50(4), pp. 361-365, 1996 #### Examples >>> import numpy as np >>> a = np.array([[10, 7, 4], [3, 2, 1]]) >>> a array([[10, 7, 4], [ 3, 2, 1]]) >>> np.percentile(a, 50) 3.5 >>> np.percentile(a, 50, axis=0) array([6.5, 4.5, 2.5]) >>> np.percentile(a, 50, axis=1) array([7., 2.]) >>> np.percentile(a, 50, axis=1, keepdims=True) array([[7.], [2.]]) >>> m = np.percentile(a, 50, axis=0) >>> out = np.zeros_like(m) >>> np.percentile(a, 50, axis=0, out=out) array([6.5, 4.5, 2.5]) >>> m array([6.5, 4.5, 2.5]) >>> b = a.copy() >>> np.percentile(b, 50, axis=1, overwrite_input=True) array([7., 2.]) >>> assert not np.all(a == b) The different methods can be visualized graphically: import matplotlib.pyplot as plt a = np.arange(4) p = np.linspace(0, 100, 6001) ax = plt.gca() lines = [ ('linear', '-', 'C0'), ('inverted_cdf', ':', 'C1'), # Almost the same as `inverted_cdf`: ('averaged_inverted_cdf', '-.', 'C1'), ('closest_observation', ':', 'C2'), ('interpolated_inverted_cdf', '--', 'C1'), ('hazen', '--', 'C3'), ('weibull', '-.', 'C4'), ('median_unbiased', '--', 'C5'), ('normal_unbiased', '-.', 'C6'), ] for method, style, color in lines: ax.plot( p, np.percentile(a, p, method=method), label=method, linestyle=style, color=color) ax.set( title='Percentiles for different 
methods and data: ' + str(a), xlabel='Percentile', ylabel='Estimated percentile value', yticks=a) ax.legend(bbox_to_anchor=(1.03, 1)) plt.tight_layout() plt.show() # numpy.permute_dims numpy.permute_dims(_a_ , _axes =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L630-L703) Returns an array with axes transposed. For a 1-D array, this returns an unchanged view of the original array, as a transposed vector is simply the same vector. To convert a 1-D array into a 2-D column vector, an additional dimension must be added, e.g., `np.atleast_2d(a).T` achieves this, as does `a[:, np.newaxis]`. For a 2-D array, this is the standard matrix transpose. For an n-D array, if axes are given, their order indicates how the axes are permuted (see Examples). If axes are not provided, then `transpose(a).shape == a.shape[::-1]`. Parameters: **a** array_like Input array. **axes** tuple or list of ints, optional If specified, it must be a tuple or list which contains a permutation of [0, 1, …, N-1] where N is the number of axes of `a`. Negative indices can also be used to specify axes. The i-th axis of the returned array will correspond to the axis numbered `axes[i]` of the input. If not specified, defaults to `range(a.ndim)[::-1]`, which reverses the order of the axes. Returns: **p** ndarray `a` with its axes permuted. A view is returned whenever possible. See also [`ndarray.transpose`](numpy.ndarray.transpose#numpy.ndarray.transpose "numpy.ndarray.transpose") Equivalent method. [`moveaxis`](numpy.moveaxis#numpy.moveaxis "numpy.moveaxis") Move axes of an array to new positions. [`argsort`](numpy.argsort#numpy.argsort "numpy.argsort") Return the indices that would sort an array. #### Notes Use `transpose(a, argsort(axes))` to invert the transposition of tensors when using the `axes` keyword argument. 
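The inversion trick from the Notes can be checked directly; a small sketch (array values chosen for illustration):

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)
axes = (1, 2, 0)
t = np.transpose(a, axes)            # shape (3, 4, 2)

# argsort(axes) yields the inverse permutation, undoing the transpose.
back = np.transpose(t, np.argsort(axes))
print(back.shape)
```

Because both transposes return views, no data is copied in the round trip.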
#### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> np.transpose(a) array([[1, 3], [2, 4]]) >>> a = np.array([1, 2, 3, 4]) >>> a array([1, 2, 3, 4]) >>> np.transpose(a) array([1, 2, 3, 4]) >>> a = np.ones((1, 2, 3)) >>> np.transpose(a, (1, 0, 2)).shape (2, 1, 3) >>> a = np.ones((2, 3, 4, 5)) >>> np.transpose(a).shape (5, 4, 3, 2) >>> a = np.arange(3*4*5).reshape((3, 4, 5)) >>> np.transpose(a, (-1, 0, -2)).shape (5, 3, 4) # numpy.piecewise numpy.piecewise(_x_ , _condlist_ , _funclist_ , _* args_, _** kw_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L658-L778) Evaluate a piecewise-defined function. Given a set of conditions and corresponding functions, evaluate each function on the input data wherever its condition is true. Parameters: **x** ndarray or scalar The input domain. **condlist** list of bool arrays or bool scalars Each boolean array corresponds to a function in `funclist`. Wherever `condlist[i]` is True, `funclist[i](x)` is used as the output value. Each boolean array in `condlist` selects a piece of `x`, and should therefore be of the same shape as `x`. The length of `condlist` must correspond to that of `funclist`. If one extra function is given, i.e. if `len(funclist) == len(condlist) + 1`, then that extra function is the default value, used wherever all conditions are false. **funclist** list of callables, f(x,*args,**kw), or scalars Each function is evaluated over `x` wherever its corresponding condition is True. It should take a 1d array as input and give a 1d array or a scalar value as output. If, instead of a callable, a scalar is provided then a constant function (`lambda x: scalar`) is assumed. **args** tuple, optional Any further arguments given to `piecewise` are passed to the functions upon execution, i.e., if called `piecewise(..., ..., 1, 'a')`, then each function is called as `f(x, 1, 'a')`. 
**kw** dict, optional Keyword arguments used in calling `piecewise` are passed to the functions upon execution, i.e., if called `piecewise(..., ..., alpha=1)`, then each function is called as `f(x, alpha=1)`. Returns: **out** ndarray The output is the same shape and type as x and is found by calling the functions in `funclist` on the appropriate portions of `x`, as defined by the boolean arrays in `condlist`. Portions not covered by any condition have a default value of 0. See also [`choose`](numpy.choose#numpy.choose "numpy.choose"), [`select`](numpy.select#numpy.select "numpy.select"), [`where`](numpy.where#numpy.where "numpy.where") #### Notes This is similar to choose or select, except that functions are evaluated on elements of `x` that satisfy the corresponding condition from `condlist`. The result is:

              |--
              |funclist[0](x[condlist[0]])
        out = |funclist[1](x[condlist[1]])
              |...
              |funclist[n2](x[condlist[n2]])
              |--

#### Examples >>> import numpy as np Define the signum function, which is -1 for `x < 0` and +1 for `x >= 0`. >>> x = np.linspace(-2.5, 2.5, 6) >>> np.piecewise(x, [x < 0, x >= 0], [-1, 1]) array([-1., -1., -1., 1., 1., 1.]) Define the absolute value, which is `-x` for `x < 0` and `x` for `x >= 0`. >>> np.piecewise(x, [x < 0, x >= 0], [lambda x: -x, lambda x: x]) array([2.5, 1.5, 0.5, 0.5, 1.5, 2.5]) Apply the same function to a scalar value. >>> y = -2 >>> np.piecewise(y, [y < 0, y >= 0], [lambda x: -x, lambda x: x]) array(2) # numpy.place numpy.place(_arr_ , _mask_ , _vals_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L2047-L2085) Change elements of an array based on conditional and input values. Similar to `np.copyto(arr, vals, where=mask)`, the difference is that `place` uses the first N elements of `vals`, where N is the number of True values in `mask`, while [`copyto`](numpy.copyto#numpy.copyto "numpy.copyto") uses the elements where `mask` is True. 
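The difference from `copyto` described above can be sketched with a small example (values chosen for illustration):

```python
import numpy as np

mask = np.array([True, False, True, False, True])

# place: consumes vals in order, cycling when there are fewer
# vals than True entries in the mask.
a1 = np.zeros(5, dtype=int)
np.place(a1, mask, [10, 20])         # uses 10, 20, then 10 again

# copyto: takes vals from the positions where mask is True.
a2 = np.zeros(5, dtype=int)
np.copyto(a2, np.array([1, 2, 3, 4, 5]), where=mask)
print(a1, a2)
```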
Note that [`extract`](numpy.extract#numpy.extract "numpy.extract") does the exact opposite of `place`. Parameters: **arr** ndarray Array to put data into. **mask** array_like Boolean mask array. Must have the same size as `arr`. **vals** 1-D sequence Values to put into `arr`. Only the first N elements are used, where N is the number of True values in `mask`. If `vals` is smaller than N, it will be repeated, and if elements of `arr` are to be masked, this sequence must be non-empty. See also [`copyto`](numpy.copyto#numpy.copyto "numpy.copyto"), [`put`](numpy.put#numpy.put "numpy.put"), [`take`](numpy.take#numpy.take "numpy.take"), [`extract`](numpy.extract#numpy.extract "numpy.extract") #### Examples >>> import numpy as np >>> arr = np.arange(6).reshape(2, 3) >>> np.place(arr, arr>2, [44, 55]) >>> arr array([[ 0, 1, 2], [44, 55, 44]]) # numpy.poly numpy.poly(_seq_of_zeros_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_polynomial_impl.py#L34-L156) Find the coefficients of a polynomial with the given sequence of roots. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials-package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials). Returns the coefficients of the polynomial whose leading coefficient is one for the given sequence of zeros (multiple roots must be included in the sequence as many times as their multiplicity; see Examples). A square matrix (or array, which will be treated as a matrix) can also be given, in which case the coefficients of the characteristic polynomial of the matrix are returned. Parameters: **seq_of_zeros** array_like, shape (N,) or (N, N) A sequence of polynomial roots, or a square array or matrix object. Returns: **c** ndarray 1D array of polynomial coefficients from highest to lowest degree: `c[0] * x**(N) + c[1] * x**(N-1) + ... 
+ c[N-1] * x + c[N]` where c[0] always equals 1. Raises: ValueError If input is the wrong shape (the input must be a 1-D or square 2-D array). See also [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval") Compute polynomial values. [`roots`](numpy.roots#numpy.roots "numpy.roots") Return the roots of a polynomial. [`polyfit`](numpy.polyfit#numpy.polyfit "numpy.polyfit") Least squares polynomial fit. [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d") A one-dimensional polynomial class. #### Notes Specifying the roots of a polynomial still leaves one degree of freedom, typically represented by an undetermined leading coefficient. [1] In the case of this function, that coefficient - the first one in the returned array - is always taken as one. (If for some reason you have one other point, the only automatic way presently to leverage that information is to use `polyfit`.) The characteristic polynomial, \\(p_a(t)\\), of an `n`-by-`n` matrix **A** is given by \\(p_a(t) = \mathrm{det}(t\, \mathbf{I} - \mathbf{A})\\), where **I** is the `n`-by-`n` identity matrix. [2] #### References [1] M. Sullivan and M. Sullivan, III, “Algebra and Trigonometry, Enhanced With Graphing Utilities,” Prentice-Hall, pg. 318, 1996. [2] G. Strang, “Linear Algebra and Its Applications, 2nd Edition,” Academic Press, pg. 182, 1980. #### Examples Given a sequence of a polynomial’s zeros: >>> import numpy as np >>> np.poly((0, 0, 0)) # Multiple root example array([1., 0., 0., 0.]) The line above represents z**3 + 0*z**2 + 0*z + 0. >>> np.poly((-1./2, 0, 1./2)) array([ 1. , 0. , -0.25, 0. ]) The line above represents z**3 - z/4 >>> np.poly((np.random.random(1)[0], 0, np.random.random(1)[0])) array([ 1. , -0.77086955, 0.08618131, 0. ]) # random Given a square array object: >>> P = np.array([[0, 1./3], [-1./2, 0]]) >>> np.poly(P) array([1. , 0. , 0.16666667]) Note how in all cases the leading coefficient is always 1. 
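Because `poly` normalizes the leading coefficient to one, it round-trips with `roots` for monic polynomials; a quick sketch:

```python
import numpy as np

c = np.array([1.0, -6.0, 11.0, -6.0])   # (x - 1)(x - 2)(x - 3)
r = np.roots(c)                          # roots 1, 2, 3 (up to rounding)
c_back = np.poly(r)                      # recover the monic coefficients
print(np.sort(r), c_back)
```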
# numpy.poly1d.__call__ method poly1d.__call__(_val_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_polynomial_impl.py#L1330-L1331) Call self as a function. # numpy.poly1d.c property _property_ poly1d.c The polynomial coefficients # numpy.poly1d.coef property _property_ poly1d.coef The polynomial coefficients # numpy.poly1d.coefficients property _property_ poly1d.coefficients The polynomial coefficients # numpy.poly1d.coeffs property _property_ poly1d.coeffs The polynomial coefficients # numpy.poly1d.deriv method poly1d.deriv(_m =1_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_polynomial_impl.py#L1443-L1454) Return a derivative of this polynomial. Refer to [`polyder`](numpy.polyder#numpy.polyder "numpy.polyder") for full documentation. See also [`polyder`](numpy.polyder#numpy.polyder "numpy.polyder") equivalent function # numpy.poly1d _class_ numpy.poly1d(_c_or_r_ , _r =False_, _variable =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/__init__.py) A one-dimensional polynomial class. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials- package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials). A convenience class, used to encapsulate “natural” operations on polynomials so that said operations may take on their customary form in code (see Examples). Parameters: **c_or_r** array_like The polynomial’s coefficients, in decreasing powers, or if the value of the second parameter is True, the polynomial’s roots (values where the polynomial evaluates to 0). For example, `poly1d([1, 2, 3])` returns an object that represents \\(x^2 + 2x + 3\\), whereas `poly1d([1, 2, 3], True)` returns one that represents \\((x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x -6\\). 
**r** bool, optional If True, `c_or_r` specifies the polynomial’s roots; the default is False. **variable** str, optional Changes the variable used when printing `p` from `x` to [`variable`](numpy.poly1d.variable#numpy.poly1d.variable "numpy.poly1d.variable") (see Examples). #### Examples Construct the polynomial \\(x^2 + 2x + 3\\): >>> import numpy as np >>> p = np.poly1d([1, 2, 3]) >>> print(np.poly1d(p)) 2 1 x + 2 x + 3 Evaluate the polynomial at \\(x = 0.5\\): >>> p(0.5) 4.25 Find the roots: >>> p.r array([-1.+1.41421356j, -1.-1.41421356j]) >>> p(p.r) array([ -4.44089210e-16+0.j, -4.44089210e-16+0.j]) # may vary These numbers in the previous line represent (0, 0) to machine precision. Show the coefficients: >>> p.c array([1, 2, 3]) Display the order (the leading zero-coefficients are removed): >>> p.order 2 Show the coefficient of the k-th power in the polynomial (which is equivalent to `p.c[-(k+1)]`): >>> p[1] 2 Polynomials can be added, subtracted, multiplied, and divided (returns quotient and remainder): >>> p * p poly1d([ 1, 4, 10, 12, 9]) >>> (p**3 + 4) / p (poly1d([ 1., 4., 10., 12., 9.]), poly1d([4.])) `asarray(p)` gives the coefficient array, so polynomials can be used in all functions that accept arrays: >>> p**2 # square of polynomial poly1d([ 1, 4, 10, 12, 9]) >>> np.square(p) # square of individual coefficients array([1, 4, 9]) The variable used in the string representation of `p` can be modified, using the [`variable`](numpy.poly1d.variable#numpy.poly1d.variable "numpy.poly1d.variable") parameter: >>> p = np.poly1d([1,2,3], variable='z') >>> print(p) 2 1 z + 2 z + 3 Construct a polynomial from its roots: >>> np.poly1d([1, 2], True) poly1d([ 1., -3., 2.]) This is the same polynomial as obtained by: >>> np.poly1d([1, -1]) * np.poly1d([1, -2]) poly1d([ 1, -3, 2]) Attributes: [`c`](numpy.poly1d.c#numpy.poly1d.c "numpy.poly1d.c") The polynomial coefficients [`coef`](numpy.poly1d.coef#numpy.poly1d.coef "numpy.poly1d.coef") The polynomial coefficients 
[`coefficients`](numpy.poly1d.coefficients#numpy.poly1d.coefficients "numpy.poly1d.coefficients") The polynomial coefficients [`coeffs`](numpy.poly1d.coeffs#numpy.poly1d.coeffs "numpy.poly1d.coeffs") The polynomial coefficients [`o`](numpy.poly1d.o#numpy.poly1d.o "numpy.poly1d.o") The order or degree of the polynomial [`order`](numpy.poly1d.order#numpy.poly1d.order "numpy.poly1d.order") The order or degree of the polynomial [`r`](numpy.poly1d.r#numpy.poly1d.r "numpy.poly1d.r") The roots of the polynomial, where self(x) == 0 [`roots`](numpy.roots#numpy.roots "numpy.roots") The roots of the polynomial, where self(x) == 0 [`variable`](numpy.poly1d.variable#numpy.poly1d.variable "numpy.poly1d.variable") The name of the polynomial variable #### Methods [`__call__`](numpy.poly1d.__call__#numpy.poly1d.__call__ "numpy.poly1d.__call__")(val) | Call self as a function. ---|--- [`deriv`](numpy.poly1d.deriv#numpy.poly1d.deriv "numpy.poly1d.deriv")([m]) | Return a derivative of this polynomial. [`integ`](numpy.poly1d.integ#numpy.poly1d.integ "numpy.poly1d.integ")([m, k]) | Return an antiderivative (indefinite integral) of this polynomial. # numpy.poly1d.integ method poly1d.integ(_m =1_, _k =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_polynomial_impl.py#L1430-L1441) Return an antiderivative (indefinite integral) of this polynomial. Refer to [`polyint`](numpy.polyint#numpy.polyint "numpy.polyint") for full documentation. 
See also [`polyint`](numpy.polyint#numpy.polyint "numpy.polyint") equivalent function # numpy.poly1d.o property _property_ poly1d.o The order or degree of the polynomial # numpy.poly1d.order property _property_ poly1d.order The order or degree of the polynomial # numpy.poly1d.r property _property_ poly1d.r The roots of the polynomial, where self(x) == 0 # numpy.poly1d.variable property _property_ poly1d.variable The name of the polynomial variable # numpy.polyadd numpy.polyadd(_a1_ , _a2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_polynomial_impl.py#L786-L852) Find the sum of two polynomials. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials- package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials). Returns the polynomial resulting from the sum of two input polynomials. Each input must be either a poly1d object or a 1D sequence of polynomial coefficients, from highest to lowest degree. Parameters: **a1, a2** array_like or poly1d object Input polynomials. Returns: **out** ndarray or poly1d object The sum of the inputs. If either input is a poly1d object, then the output is also a poly1d object. Otherwise, it is a 1D array of polynomial coefficients from highest to lowest degree. See also [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d") A one-dimensional polynomial class. 
[`poly`](numpy.poly#numpy.poly "numpy.poly"), `polyadd`, [`polyder`](numpy.polyder#numpy.polyder "numpy.polyder"), [`polydiv`](numpy.polydiv#numpy.polydiv "numpy.polydiv"), [`polyfit`](numpy.polyfit#numpy.polyfit "numpy.polyfit"), [`polyint`](numpy.polyint#numpy.polyint "numpy.polyint"), [`polysub`](numpy.polysub#numpy.polysub "numpy.polysub"), [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval") #### Examples >>> import numpy as np >>> np.polyadd([1, 2], [9, 5, 4]) array([9, 6, 6]) Using poly1d objects: >>> p1 = np.poly1d([1, 2]) >>> p2 = np.poly1d([9, 5, 4]) >>> print(p1) 1 x + 2 >>> print(p2) 2 9 x + 5 x + 4 >>> print(np.polyadd(p1, p2)) 2 9 x + 6 x + 6 # numpy.polyder numpy.polyder(_p_ , _m =1_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_polynomial_impl.py#L367-L442) Return the derivative of the specified order of a polynomial. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials- package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials). Parameters: **p** poly1d or sequence Polynomial to differentiate. A sequence is interpreted as polynomial coefficients, see [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d"). **m** int, optional Order of differentiation (default: 1) Returns: **der** poly1d A new polynomial representing the derivative. See also [`polyint`](numpy.polyint#numpy.polyint "numpy.polyint") Anti-derivative of a polynomial. [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d") Class for one-dimensional polynomials. #### Examples The derivative of the polynomial \\(x^3 + x^2 + x^1 + 1\\) is: >>> import numpy as np >>> p = np.poly1d([1,1,1,1]) >>> p2 = np.polyder(p) >>> p2 poly1d([3, 2, 1]) which evaluates to: >>> p2(2.) 17.0 We can verify this, approximating the derivative with `(f(x + h) - f(x))/h`: >>> (p(2. 
+ 0.001) - p(2.)) / 0.001 17.007000999997857 The fourth-order derivative of a 3rd-order polynomial is zero: >>> np.polyder(p, 2) poly1d([6, 2]) >>> np.polyder(p, 3) poly1d([6]) >>> np.polyder(p, 4) poly1d([0]) # numpy.polydiv numpy.polydiv(_u_ , _v_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_polynomial_impl.py#L979-L1050) Returns the quotient and remainder of polynomial division. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials-package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials). The input arrays are the coefficients (including any coefficients equal to zero) of the “numerator” (dividend) and “denominator” (divisor) polynomials, respectively. Parameters: **u** array_like or poly1d Dividend polynomial’s coefficients. **v** array_like or poly1d Divisor polynomial’s coefficients. Returns: **q** ndarray Coefficients, including those equal to zero, of the quotient. **r** ndarray Coefficients, including those equal to zero, of the remainder. See also [`poly`](numpy.poly#numpy.poly "numpy.poly"), [`polyadd`](numpy.polyadd#numpy.polyadd "numpy.polyadd"), [`polyder`](numpy.polyder#numpy.polyder "numpy.polyder"), `polydiv`, [`polyfit`](numpy.polyfit#numpy.polyfit "numpy.polyfit"), [`polyint`](numpy.polyint#numpy.polyint "numpy.polyint"), [`polymul`](numpy.polymul#numpy.polymul "numpy.polymul"), [`polysub`](numpy.polysub#numpy.polysub "numpy.polysub"), [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval") #### Notes Both `u` and `v` must be 0-d or 1-d (ndim = 0 or 1), but `u.ndim` need not equal `v.ndim`. In other words, all four possible combinations - `u.ndim = v.ndim = 0`, `u.ndim = v.ndim = 1`, `u.ndim = 1, v.ndim = 0`, and `u.ndim = 0, v.ndim = 1` - work. 
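The dimension combinations described in the Notes can be sketched briefly (coefficients chosen for illustration):

```python
import numpy as np

u = np.array([1.0, -3.0, 2.0])   # x**2 - 3x + 2
v = np.array([1.0, -1.0])        # x - 1

q, r = np.polydiv(u, v)          # exact division: quotient x - 2
print(q, r)

# A 0-d divisor is treated as a constant polynomial:
q2, r2 = np.polydiv(u, 2.0)
print(q2, r2)
```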
#### Examples \\[\frac{3x^2 + 5x + 2}{2x + 1} = 1.5x + 1.75, remainder 0.25\\] >>> import numpy as np >>> x = np.array([3.0, 5.0, 2.0]) >>> y = np.array([2.0, 1.0]) >>> np.polydiv(x, y) (array([1.5 , 1.75]), array([0.25])) # numpy.polyfit numpy.polyfit(_x_ , _y_ , _deg_ , _rcond =None_, _full =False_, _w =None_, _cov =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_polynomial_impl.py#L449-L695) Least squares polynomial fit. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials- package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials). Fit a polynomial `p(x) = p[0] * x**deg + ... + p[deg]` of degree `deg` to points `(x, y)`. Returns a vector of coefficients `p` that minimises the squared error in the order `deg`, `deg-1`, … `0`. The [`Polynomial.fit`](numpy.polynomial.polynomial.polynomial.fit#numpy.polynomial.polynomial.Polynomial.fit "numpy.polynomial.polynomial.Polynomial.fit") class method is recommended for new code as it is more stable numerically. See the documentation of the method for more information. Parameters: **x** array_like, shape (M,) x-coordinates of the M sample points `(x[i], y[i])`. **y** array_like, shape (M,) or (M, K) y-coordinates of the sample points. Several data sets of sample points sharing the same x-coordinates can be fitted at once by passing in a 2D-array that contains one dataset per column. **deg** int Degree of the fitting polynomial **rcond** float, optional Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is len(x)*eps, where eps is the relative precision of the float type, about 2e-16 in most cases. **full** bool, optional Switch determining nature of return value. 
When it is False (the default) just the coefficients are returned; when True, diagnostic information from the singular value decomposition is also returned. **w** array_like, shape (M,), optional Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None. **cov** bool or str, optional If given and not `False`, return not just the estimate but also its covariance matrix. By default, the covariance is scaled by chi2/dof, where dof = M - (deg + 1), i.e., the weights are presumed to be unreliable except in a relative sense and everything is scaled such that the reduced chi2 is unity. This scaling is omitted if `cov='unscaled'`, as is relevant for the case that the weights are w = 1/sigma, with sigma known to be a reliable estimate of the uncertainty. Returns: **p** ndarray, shape (deg + 1,) or (deg + 1, K) Polynomial coefficients, highest power first. If `y` was 2-D, the coefficients for the `k`-th data set are in `p[:,k]`. residuals, rank, singular_values, rcond These values are only returned if `full == True` * residuals – sum of squared residuals of the least squares fit * rank – the effective rank of the scaled Vandermonde coefficient matrix * singular_values – singular values of the scaled Vandermonde coefficient matrix * rcond – value of `rcond`. For more details, see [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq"). **V** ndarray, shape (deg + 1, deg + 1) or (deg + 1, deg + 1, K) Present only if `full == False` and `cov == True`. The covariance matrix of the polynomial coefficient estimates. The diagonal of this matrix holds the variance estimates for each coefficient. 
If y is a 2-D array, then the covariance matrix for the `k`-th data set is in `V[:,:,k]` Warns: RankWarning The rank of the coefficient matrix in the least-squares fit is deficient. The warning is only raised if `full == False`. The warnings can be turned off by >>> import warnings >>> warnings.simplefilter('ignore', np.exceptions.RankWarning) See also [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval") Compute polynomial values. [`linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq") Computes a least-squares fit. [`scipy.interpolate.UnivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html#scipy.interpolate.UnivariateSpline "\(in SciPy v1.14.1\)") Computes spline fits. #### Notes The solution minimizes the squared error \\[E = \sum_{j=0}^k |p(x_j) - y_j|^2\\] in the equations: x[0]**n * p[0] + ... + x[0] * p[n-1] + p[n] = y[0] x[1]**n * p[0] + ... + x[1] * p[n-1] + p[n] = y[1] ... x[k]**n * p[0] + ... + x[k] * p[n-1] + p[n] = y[k] The coefficient matrix of the coefficients `p` is a Vandermonde matrix. `polyfit` issues a [`RankWarning`](numpy.exceptions.rankwarning#numpy.exceptions.RankWarning "numpy.exceptions.RankWarning") when the least-squares fit is badly conditioned. This implies that the best fit is not well-defined due to numerical error. The results may be improved by lowering the polynomial degree or by replacing `x` by `x - x.mean()`. The `rcond` parameter can also be set to a value smaller than its default, but the resulting fit may be spurious: including contributions from the small singular values can add numerical noise to the result. Note that fitting polynomial coefficients is inherently badly conditioned when the degree of the polynomial is large or the interval of sample points is badly centered. The quality of the fit should always be checked in these cases. When polynomial fits are not satisfactory, splines may be a good alternative. 
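Retrieving the covariance matrix described above; a minimal sketch (synthetic data and seed chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 1.0 + rng.normal(scale=0.01, size=x.size)

p, V = np.polyfit(x, y, 1, cov=True)
# p holds [slope, intercept]; sqrt(diag(V)) estimates their uncertainties.
print(p, np.sqrt(np.diag(V)))
```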
#### References [1] Wikipedia, “Curve fitting”, [2] Wikipedia, “Polynomial interpolation”, #### Examples >>> import numpy as np >>> import warnings >>> x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0]) >>> y = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0]) >>> z = np.polyfit(x, y, 3) >>> z array([ 0.08703704, -0.81349206, 1.69312169, -0.03968254]) # may vary It is convenient to use [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d") objects for dealing with polynomials: >>> p = np.poly1d(z) >>> p(0.5) 0.6143849206349179 # may vary >>> p(3.5) -0.34732142857143039 # may vary >>> p(10) 22.579365079365115 # may vary High-order polynomials may oscillate wildly: >>> with warnings.catch_warnings(): ... warnings.simplefilter('ignore', np.exceptions.RankWarning) ... p30 = np.poly1d(np.polyfit(x, y, 30)) ... >>> p30(4) -0.80000000000000204 # may vary >>> p30(5) -0.99999999999999445 # may vary >>> p30(4.5) -0.10547061179440398 # may vary Illustration: >>> import matplotlib.pyplot as plt >>> xp = np.linspace(-2, 6, 100) >>> _ = plt.plot(x, y, '.', xp, p(xp), '-', xp, p30(xp), '--') >>> plt.ylim(-2,2) (-2, 2) >>> plt.show() # numpy.polyint numpy.polyint(_p_ , _m =1_, _k =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_polynomial_impl.py#L260-L360) Return an antiderivative (indefinite integral) of a polynomial. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials- package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials). The returned order `m` antiderivative `P` of polynomial `p` satisfies \\(\frac{d^m}{dx^m}P(x) = p(x)\\) and is defined up to `m - 1` integration constants `k`. The constants determine the low-order polynomial part \\[\frac{k_{m-1}}{0!} x^0 + \ldots + \frac{k_0}{(m-1)!}x^{m-1}\\] of `P` so that \\(P^{(j)}(0) = k_{m-j-1}\\). 
Parameters: **p** array_like or poly1d Polynomial to integrate. A sequence is interpreted as polynomial coefficients, see [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d"). **m** int, optional Order of the antiderivative. (Default: 1) **k** list of `m` scalars or scalar, optional Integration constants. They are given in the order of integration: those corresponding to highest-order terms come first. If `None` (default), all constants are assumed to be zero. If `m = 1`, a single scalar can be given instead of a list. See also [`polyder`](numpy.polyder#numpy.polyder "numpy.polyder") derivative of a polynomial [`poly1d.integ`](numpy.poly1d.integ#numpy.poly1d.integ "numpy.poly1d.integ") equivalent method #### Examples The defining property of the antiderivative: >>> import numpy as np >>> p = np.poly1d([1,1,1]) >>> P = np.polyint(p) >>> P poly1d([ 0.33333333, 0.5 , 1. , 0. ]) # may vary >>> np.polyder(P) == p True The integration constants default to zero, but can be specified: >>> P = np.polyint(p, 3) >>> P(0) 0.0 >>> np.polyder(P)(0) 0.0 >>> np.polyder(P, 2)(0) 0.0 >>> P = np.polyint(p, 3, k=[6,5,3]) >>> P poly1d([ 0.01666667, 0.04166667, 0.16666667, 3. , 5. , 3. ]) # may vary Note that 3 = 6 / 2!, and that the constants are given in the order of integration. The constant of the highest-order polynomial term comes first: >>> np.polyder(P, 2)(0) 6.0 >>> np.polyder(P, 1)(0) 5.0 >>> P(0) 3.0 # numpy.polymul numpy.polymul(_a1_ , _a2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_polynomial_impl.py#L911-L972) Find the product of two polynomials. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials-package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials). Finds the polynomial resulting from the multiplication of the two input polynomials. 
Each input must be either a poly1d object or a 1D sequence of polynomial coefficients, from highest to lowest degree. Parameters: **a1, a2** array_like or poly1d object Input polynomials. Returns: **out** ndarray or poly1d object The polynomial resulting from the multiplication of the inputs. If either input is a poly1d object, then the output is also a poly1d object. Otherwise, it is a 1D array of polynomial coefficients from highest to lowest degree. See also [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d") A one-dimensional polynomial class. [`poly`](numpy.poly#numpy.poly "numpy.poly"), [`polyadd`](numpy.polyadd#numpy.polyadd "numpy.polyadd"), [`polyder`](numpy.polyder#numpy.polyder "numpy.polyder"), [`polydiv`](numpy.polydiv#numpy.polydiv "numpy.polydiv"), [`polyfit`](numpy.polyfit#numpy.polyfit "numpy.polyfit"), [`polyint`](numpy.polyint#numpy.polyint "numpy.polyint"), [`polysub`](numpy.polysub#numpy.polysub "numpy.polysub"), [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval") [`convolve`](numpy.convolve#numpy.convolve "numpy.convolve") Array convolution. Same output as polymul, but has a parameter for overlap mode. #### Examples >>> import numpy as np >>> np.polymul([1, 2, 3], [9, 5, 1]) array([ 9, 23, 38, 17, 3]) Using poly1d objects:
>>> p1 = np.poly1d([1, 2, 3])
>>> p2 = np.poly1d([9, 5, 1])
>>> print(p1)
   2
1 x + 2 x + 3
>>> print(p2)
   2
9 x + 5 x + 1
>>> print(np.polymul(p1, p2))
   4       3       2
9 x + 23 x + 38 x + 17 x + 3
# numpy.polynomial.chebyshev.cheb2poly polynomial.chebyshev.cheb2poly(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L397-L455) Convert a Chebyshev series to a polynomial. Convert an array representing the coefficients of a Chebyshev series, ordered from lowest degree to highest, to an array of the coefficients of the equivalent polynomial (relative to the “standard” basis) ordered from lowest to highest degree.
Parameters: **c** array_like 1-D array containing the Chebyshev series coefficients, ordered from lowest order term to highest. Returns: **pol** ndarray 1-D array containing the coefficients of the equivalent polynomial (relative to the “standard” basis) ordered from lowest order term to highest. See also [`poly2cheb`](numpy.polynomial.chebyshev.poly2cheb#numpy.polynomial.chebyshev.poly2cheb "numpy.polynomial.chebyshev.poly2cheb") #### Notes The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance. #### Examples >>> from numpy import polynomial as P >>> c = P.Chebyshev(range(4)) >>> c Chebyshev([0., 1., 2., 3.], domain=[-1., 1.], window=[-1., 1.], symbol='x') >>> p = c.convert(kind=P.Polynomial) >>> p Polynomial([-2., -8., 4., 12.], domain=[-1., 1.], window=[-1., 1.], ... >>> P.chebyshev.cheb2poly(range(4)) array([-2., -8., 4., 12.]) # numpy.polynomial.chebyshev.chebadd polynomial.chebyshev.chebadd(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L569-L608) Add one Chebyshev series to another. Returns the sum of two Chebyshev series `c1` \+ `c2`. The arguments are sequences of coefficients ordered from lowest order term to highest, i.e., [1,2,3] represents the series `T_0 + 2*T_1 + 3*T_2`. Parameters: **c1, c2** array_like 1-D arrays of Chebyshev series coefficients ordered from low to high. Returns: **out** ndarray Array representing the Chebyshev series of their sum. 
See also [`chebsub`](numpy.polynomial.chebyshev.chebsub#numpy.polynomial.chebyshev.chebsub "numpy.polynomial.chebyshev.chebsub"), [`chebmulx`](numpy.polynomial.chebyshev.chebmulx#numpy.polynomial.chebyshev.chebmulx "numpy.polynomial.chebyshev.chebmulx"), [`chebmul`](numpy.polynomial.chebyshev.chebmul#numpy.polynomial.chebyshev.chebmul "numpy.polynomial.chebyshev.chebmul"), [`chebdiv`](numpy.polynomial.chebyshev.chebdiv#numpy.polynomial.chebyshev.chebdiv "numpy.polynomial.chebyshev.chebdiv"), [`chebpow`](numpy.polynomial.chebyshev.chebpow#numpy.polynomial.chebyshev.chebpow "numpy.polynomial.chebyshev.chebpow") #### Notes Unlike multiplication, division, etc., the sum of two Chebyshev series is a Chebyshev series (without having to “reproject” the result onto the basis set) so addition, just like that of “standard” polynomials, is simply “component- wise.” #### Examples >>> from numpy.polynomial import chebyshev as C >>> c1 = (1,2,3) >>> c2 = (3,2,1) >>> C.chebadd(c1,c2) array([4., 4., 4.]) # numpy.polynomial.chebyshev.chebcompanion polynomial.chebyshev.chebcompanion(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L1629-L1665) Return the scaled companion matrix of c. The basis polynomials are scaled so that the companion matrix is symmetric when `c` is a Chebyshev basis polynomial. This provides better eigenvalue estimates than the unscaled case and for basis polynomials the eigenvalues are guaranteed to be real if [`numpy.linalg.eigvalsh`](numpy.linalg.eigvalsh#numpy.linalg.eigvalsh "numpy.linalg.eigvalsh") is used to obtain them. Parameters: **c** array_like 1-D array of Chebyshev series coefficients ordered from low to high degree. Returns: **mat** ndarray Scaled companion matrix of dimensions (deg, deg). 
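No example accompanies `chebcompanion` in the source. A short sketch of its typical use, recovering the roots of the basis polynomial `T_3` from the symmetric scaled companion matrix (the expected root values follow from the closed form `cos(pi*(2k - 1)/(2n))`):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Coefficients (low to high) of the degree-3 basis polynomial T_3.
c = [0, 0, 0, 1]
m = C.chebcompanion(c)

# For a Chebyshev basis polynomial the scaled companion matrix is symmetric ...
assert m.shape == (3, 3)
assert np.allclose(m, m.T)

# ... so numpy.linalg.eigvalsh gives real eigenvalues: the roots of T_3,
# cos(pi*(2k - 1)/(2n)) for k = 1..n with n = 3.
roots = np.sort(np.linalg.eigvalsh(m))
expected = np.sort(np.cos(np.pi * (2 * np.arange(1, 4) - 1) / 6))
assert np.allclose(roots, expected)
```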
# numpy.polynomial.chebyshev.chebder polynomial.chebyshev.chebder(_c_ , _m =1_, _scl =1_, _axis =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L874-L961) Differentiate a Chebyshev series. Returns the Chebyshev series coefficients `c` differentiated `m` times along `axis`. At each iteration the result is multiplied by `scl` (the scaling factor is for use in a linear change of variable). The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series `1*T_0 + 2*T_1 + 3*T_2` while [[1,2],[1,2]] represents `1*T_0(x)*T_0(y) + 1*T_1(x)*T_0(y) + 2*T_0(x)*T_1(y) + 2*T_1(x)*T_1(y)` if axis=0 is `x` and axis=1 is `y`. Parameters: **c** array_like Array of Chebyshev series coefficients. If c is multidimensional the different axes correspond to different variables with the degree in each axis given by the corresponding index. **m** int, optional Number of derivatives taken, must be non-negative. (Default: 1) **scl** scalar, optional Each differentiation is multiplied by `scl`. The end result is multiplication by `scl**m`. This is for use in a linear change of variable. (Default: 1) **axis** int, optional Axis over which the derivative is taken. (Default: 0). Returns: **der** ndarray Chebyshev series of the derivative. See also [`chebint`](numpy.polynomial.chebyshev.chebint#numpy.polynomial.chebyshev.chebint "numpy.polynomial.chebyshev.chebint") #### Notes In general, the result of differentiating a C-series needs to be “reprojected” onto the C-series basis set. Thus, typically, the result of this function is “unintuitive,” albeit correct; see Examples section below.
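The “reprojection” described above can be cross-checked by differentiating in the power basis instead. A minimal sketch (converting via `cheb2poly` is one of several ways to do this):

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from numpy.polynomial import polynomial as Poly

c = [1, 2, 3, 4]  # 1*T_0 + 2*T_1 + 3*T_2 + 4*T_3

# Differentiate in the Chebyshev basis ...
dc = C.chebder(c)

# ... and, independently, in the power basis.
dp = Poly.polyder(C.cheb2poly(c))

# Both routes agree once expressed in the same basis.
assert np.allclose(C.cheb2poly(dc), dp)

# scl multiplies each differentiation: chebder(c, scl=-1) == -chebder(c).
assert np.allclose(C.chebder(c, scl=-1), -dc)
```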
#### Examples >>> from numpy.polynomial import chebyshev as C >>> c = (1,2,3,4) >>> C.chebder(c) array([14., 12., 24.]) >>> C.chebder(c,3) array([96.]) >>> C.chebder(c,scl=-1) array([-14., -12., -24.]) >>> C.chebder(c,2,-1) array([12., 96.]) # numpy.polynomial.chebyshev.chebdiv polynomial.chebyshev.chebdiv(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L749-L813) Divide one Chebyshev series by another. Returns the quotient-with-remainder of two Chebyshev series `c1` / `c2`. The arguments are sequences of coefficients from lowest order “term” to highest, e.g., [1,2,3] represents the series `T_0 + 2*T_1 + 3*T_2`. Parameters: **c1, c2** array_like 1-D arrays of Chebyshev series coefficients ordered from low to high. Returns: **[quo, rem]** ndarrays Of Chebyshev series coefficients representing the quotient and remainder. See also [`chebadd`](numpy.polynomial.chebyshev.chebadd#numpy.polynomial.chebyshev.chebadd "numpy.polynomial.chebyshev.chebadd"), [`chebsub`](numpy.polynomial.chebyshev.chebsub#numpy.polynomial.chebyshev.chebsub "numpy.polynomial.chebyshev.chebsub"), [`chebmulx`](numpy.polynomial.chebyshev.chebmulx#numpy.polynomial.chebyshev.chebmulx "numpy.polynomial.chebyshev.chebmulx"), [`chebmul`](numpy.polynomial.chebyshev.chebmul#numpy.polynomial.chebyshev.chebmul "numpy.polynomial.chebyshev.chebmul"), [`chebpow`](numpy.polynomial.chebyshev.chebpow#numpy.polynomial.chebyshev.chebpow "numpy.polynomial.chebyshev.chebpow") #### Notes In general, the (polynomial) division of one C-series by another results in quotient and remainder terms that are not in the Chebyshev polynomial basis set. Thus, to express these results as C-series, it is typically necessary to “reproject” the results onto said basis set, which typically produces “unintuitive” (but correct) results; see Examples section below. 
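One way to convince oneself that the “unintuitive” quotient and remainder are nevertheless correct is to recombine them with `chebmul` and `chebadd`. A minimal sketch:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

c1 = (1, 2, 3)
c2 = (3, 2, 1)
quo, rem = C.chebdiv(c1, c2)

# Recombine: c1 == quo*c2 + rem, as Chebyshev series.
recombined = C.chebadd(C.chebmul(quo, c2), rem)
assert np.allclose(recombined, c1)
```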
#### Examples >>> from numpy.polynomial import chebyshev as C >>> c1 = (1,2,3) >>> c2 = (3,2,1) >>> C.chebdiv(c1,c2) # quotient "intuitive," remainder not (array([3.]), array([-8., -4.])) >>> c2 = (0,1,2,3) >>> C.chebdiv(c2,c1) # neither "intuitive" (array([0., 2.]), array([-2., -4.])) # numpy.polynomial.chebyshev.chebdomain polynomial.chebyshev.chebdomain _= array([-1., 1.])_ An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(…)`) for instantiating an array. For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array. Parameters: **(for the __new__ method; see Notes below)** **shape** tuple of ints Shape of created array. **dtype** data-type, optional Any object that can be interpreted as a numpy data type. **buffer** object exposing buffer interface, optional Used to fill the array with data. **offset** int, optional Offset of array data in buffer. **strides** tuple of ints, optional Strides of data in memory. **order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also [`array`](numpy.array#numpy.array "numpy.array") Construct an array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Create an array, each element of which is zero. [`empty`](numpy.empty#numpy.empty "numpy.empty") Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). 
[`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Create a data-type. [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray") An ndarray alias [generic](https://docs.python.org/3/glossary.html#term- generic-type "\(in Python v3.13\)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). #### Notes There are two modes of creating an array using `__new__`: 1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used. 2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted. No `__init__` method is needed because the array is fully initialized after the `__new__` method. #### Examples These examples illustrate the low-level [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray. First mode, `buffer` is None: >>> import numpy as np >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. skip first element array([2, 3]) Attributes: **T** ndarray Transpose of the array. **data** buffer The array’s elements, in memory. **dtype** dtype object Describes the format of the elements in the array. **flags** dict Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc. **flat** numpy.flatiter object Flattened version of the array as an iterator. The iterator allows assignments, e.g., `x.flat = 3` (See [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") for assignment examples; TODO). **imag** ndarray Imaginary part of the array. **real** ndarray Real part of the array. **size** int Number of elements in the array. 
**itemsize** int The memory use of each array element in bytes. **nbytes** int The total number of bytes required to store the array data, i.e., `itemsize * size`. **ndim** int The array’s number of dimensions. **shape** tuple of ints Shape of the array. **strides** tuple of ints The step-size required to move from one element to the next in memory. For example, a contiguous `(3, 4)` array of type `int16` in C-order has strides `(8, 2)`. This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (`2 * 4`). **ctypes** ctypes object Class containing properties of the array needed for interaction with ctypes. **base** ndarray If the array is a view into another array, that array is its `base` (unless that array is also a view). The `base` array is where the array data is actually stored. # numpy.polynomial.chebyshev.chebfit polynomial.chebyshev.chebfit(_x_ , _y_ , _deg_ , _rcond =None_, _full =False_, _w =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L1504-L1626) Least squares fit of Chebyshev series to data. Return the coefficients of a Chebyshev series of degree `deg` that is the least squares fit to the data values `y` given at points `x`. If `y` is 1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple fits are done, one for each column of `y`, and the resulting coefficients are stored in the corresponding columns of a 2-D return. The fitted polynomial(s) are in the form \\[p(x) = c_0 + c_1 * T_1(x) + ... + c_n * T_n(x),\\] where `n` is `deg`. Parameters: **x** array_like, shape (M,) x-coordinates of the M sample points `(x[i], y[i])`. **y** array_like, shape (M,) or (M, K) y-coordinates of the sample points. Several data sets of sample points sharing the same x-coordinates can be fitted at once by passing in a 2D-array that contains one dataset per column. 
**deg** int or 1-D array_like Degree(s) of the fitting polynomials. If `deg` is a single integer, all terms up to and including the `deg`’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead. **rcond** float, optional Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is `len(x)*eps`, where eps is the relative precision of the float type, about 2e-16 in most cases. **full** bool, optional Switch determining nature of return value. When it is False (the default) just the coefficients are returned, when True diagnostic information from the singular value decomposition is also returned. **w** array_like, shape (`M`,), optional Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None. Returns: **coef** ndarray, shape (M,) or (M, K) Chebyshev coefficients ordered from low to high. If `y` was 2-D, the coefficients for the data in column k of `y` are in column `k`. **[residuals, rank, singular_values, rcond]** list These values are only returned if `full == True` * residuals – sum of squared residuals of the least squares fit * rank – the numerical rank of the scaled Vandermonde matrix * singular_values – singular values of the scaled Vandermonde matrix * rcond – value of `rcond`. For more details, see [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq"). Warns: RankWarning The rank of the coefficient matrix in the least-squares fit is deficient. The warning is only raised if `full == False`.
The warnings can be turned off by >>> import warnings >>> warnings.simplefilter('ignore', np.exceptions.RankWarning) See also [`numpy.polynomial.polynomial.polyfit`](numpy.polynomial.polynomial.polyfit#numpy.polynomial.polynomial.polyfit "numpy.polynomial.polynomial.polyfit") [`numpy.polynomial.legendre.legfit`](numpy.polynomial.legendre.legfit#numpy.polynomial.legendre.legfit "numpy.polynomial.legendre.legfit") [`numpy.polynomial.laguerre.lagfit`](numpy.polynomial.laguerre.lagfit#numpy.polynomial.laguerre.lagfit "numpy.polynomial.laguerre.lagfit") [`numpy.polynomial.hermite.hermfit`](numpy.polynomial.hermite.hermfit#numpy.polynomial.hermite.hermfit "numpy.polynomial.hermite.hermfit") [`numpy.polynomial.hermite_e.hermefit`](numpy.polynomial.hermite_e.hermefit#numpy.polynomial.hermite_e.hermefit "numpy.polynomial.hermite_e.hermefit") [`chebval`](numpy.polynomial.chebyshev.chebval#numpy.polynomial.chebyshev.chebval "numpy.polynomial.chebyshev.chebval") Evaluates a Chebyshev series. [`chebvander`](numpy.polynomial.chebyshev.chebvander#numpy.polynomial.chebyshev.chebvander "numpy.polynomial.chebyshev.chebvander") Vandermonde matrix of Chebyshev series. [`chebweight`](numpy.polynomial.chebyshev.chebweight#numpy.polynomial.chebyshev.chebweight "numpy.polynomial.chebyshev.chebweight") Chebyshev weight function. [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq") Computes a least-squares fit from the matrix. [`scipy.interpolate.UnivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html#scipy.interpolate.UnivariateSpline "\(in SciPy v1.14.1\)") Computes spline fits. #### Notes The solution is the coefficients of the Chebyshev series `p` that minimizes the sum of the weighted squared errors \\[E = \sum_j w_j^2 * |y_j - p(x_j)|^2,\\] where \\(w_j\\) are the weights. 
This problem is solved by setting up as the (typically) overdetermined matrix equation \\[V(x) * c = w * y,\\] where `V` is the weighted pseudo Vandermonde matrix of `x`, `c` are the coefficients to be solved for, `w` are the weights, and `y` are the observed values. This equation is then solved using the singular value decomposition of `V`. If some of the singular values of `V` are so small that they are neglected, then a [`RankWarning`](numpy.exceptions.rankwarning#numpy.exceptions.RankWarning "numpy.exceptions.RankWarning") will be issued. This means that the coefficient values may be poorly determined. Using a lower order fit will usually get rid of the warning. The `rcond` parameter can also be set to a value smaller than its default, but the resulting fit may be spurious and have large contributions from roundoff error. Fits using Chebyshev series are usually better conditioned than fits using power series, but much can depend on the distribution of the sample points and the smoothness of the data. If the quality of the fit is inadequate splines may be a good alternative. #### References [1] Wikipedia, “Curve fitting”, # numpy.polynomial.chebyshev.chebfromroots polynomial.chebyshev.chebfromroots(_roots_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L514-L566) Generate a Chebyshev series with given roots. The function returns the coefficients of the polynomial \\[p(x) = (x - r_0) * (x - r_1) * ... * (x - r_n),\\] in Chebyshev form, where the \\(r_n\\) are the roots specified in [`roots`](numpy.roots#numpy.roots "numpy.roots"). If a zero has multiplicity n, then it must appear in [`roots`](numpy.roots#numpy.roots "numpy.roots") n times. For instance, if 2 is a root of multiplicity three and 3 is a root of multiplicity 2, then [`roots`](numpy.roots#numpy.roots "numpy.roots") looks something like [2, 2, 2, 3, 3]. The roots can appear in any order. If the returned coefficients are `c`, then \\[p(x) = c_0 + c_1 * T_1(x) + ... 
+ c_n * T_n(x)\\] The coefficient of the last term is not generally 1 for monic polynomials in Chebyshev form. Parameters: **roots** array_like Sequence containing the roots. Returns: **out** ndarray 1-D array of coefficients. If all roots are real then `out` is a real array, if some of the roots are complex, then `out` is complex even if all the coefficients in the result are real (see Examples below). See also [`numpy.polynomial.polynomial.polyfromroots`](numpy.polynomial.polynomial.polyfromroots#numpy.polynomial.polynomial.polyfromroots "numpy.polynomial.polynomial.polyfromroots") [`numpy.polynomial.legendre.legfromroots`](numpy.polynomial.legendre.legfromroots#numpy.polynomial.legendre.legfromroots "numpy.polynomial.legendre.legfromroots") [`numpy.polynomial.laguerre.lagfromroots`](numpy.polynomial.laguerre.lagfromroots#numpy.polynomial.laguerre.lagfromroots "numpy.polynomial.laguerre.lagfromroots") [`numpy.polynomial.hermite.hermfromroots`](numpy.polynomial.hermite.hermfromroots#numpy.polynomial.hermite.hermfromroots "numpy.polynomial.hermite.hermfromroots") [`numpy.polynomial.hermite_e.hermefromroots`](numpy.polynomial.hermite_e.hermefromroots#numpy.polynomial.hermite_e.hermefromroots "numpy.polynomial.hermite_e.hermefromroots") #### Examples >>> import numpy.polynomial.chebyshev as C >>> C.chebfromroots((-1,0,1)) # x^3 - x relative to the standard basis array([ 0. , -0.25, 0. , 0.25]) >>> j = complex(0,1) >>> C.chebfromroots((-j,j)) # x^2 + 1 relative to the standard basis array([1.5+0.j, 0. +0.j, 0.5+0.j]) # numpy.polynomial.chebyshev.chebgauss polynomial.chebyshev.chebgauss(_deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L1793-L1832) Gauss-Chebyshev quadrature. Computes the sample points and weights for Gauss-Chebyshev quadrature. 
These sample points and weights will correctly integrate polynomials of degree \\(2*deg - 1\\) or less over the interval \\([-1, 1]\\) with the weight function \\(f(x) = 1/\sqrt{1 - x^2}\\). Parameters: **deg** int Number of sample points and weights. It must be >= 1. Returns: **x** ndarray 1-D ndarray containing the sample points. **y** ndarray 1-D ndarray containing the weights. #### Notes The results have only been tested up to degree 100; higher degrees may be problematic. For Gauss-Chebyshev there are closed form solutions for the sample points and weights. If n = `deg`, then \\[x_i = \cos(\pi (2 i - 1) / (2 n))\\] \\[w_i = \pi / n\\] # numpy.polynomial.chebyshev.chebgrid2d polynomial.chebyshev.chebgrid2d(_x_ , _y_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L1214-L1258) Evaluate a 2-D Chebyshev series on the Cartesian product of x and y. This function returns the values: \\[p(a,b) = \sum_{i,j} c_{i,j} * T_i(a) * T_j(b),\\] where the points `(a, b)` consist of all pairs formed by taking `a` from `x` and `b` from `y`. The resulting points form a grid with `x` in the first dimension and `y` in the second. The parameters `x` and `y` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than two dimensions, ones are implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape + y.shape. Parameters: **x, y** array_like, compatible objects The two dimensional series is evaluated at the points in the Cartesian product of `x` and `y`. If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar.
**c** array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns: **values** ndarray, compatible object The values of the two dimensional Chebyshev series at points in the Cartesian product of `x` and `y`. See also [`chebval`](numpy.polynomial.chebyshev.chebval#numpy.polynomial.chebyshev.chebval "numpy.polynomial.chebyshev.chebval"), [`chebval2d`](numpy.polynomial.chebyshev.chebval2d#numpy.polynomial.chebyshev.chebval2d "numpy.polynomial.chebyshev.chebval2d"), [`chebval3d`](numpy.polynomial.chebyshev.chebval3d#numpy.polynomial.chebyshev.chebval3d "numpy.polynomial.chebyshev.chebval3d"), [`chebgrid3d`](numpy.polynomial.chebyshev.chebgrid3d#numpy.polynomial.chebyshev.chebgrid3d "numpy.polynomial.chebyshev.chebgrid3d") # numpy.polynomial.chebyshev.chebgrid3d polynomial.chebyshev.chebgrid3d(_x_ , _y_ , _z_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L1306-L1353) Evaluate a 3-D Chebyshev series on the Cartesian product of x, y, and z. This function returns the values: \\[p(a,b,c) = \sum_{i,j,k} c_{i,j,k} * T_i(a) * T_j(b) * T_k(c)\\] where the points `(a, b, c)` consist of all triples formed by taking `a` from `x`, `b` from `y`, and `c` from `z`. The resulting points form a grid with `x` in the first dimension, `y` in the second, and `z` in the third. The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than three dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape + y.shape + z.shape.
Parameters: **x, y, z** array_like, compatible objects The three dimensional series is evaluated at the points in the Cartesian product of `x`, `y`, and `z`. If `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j,k is contained in `c[i,j,k]`. If `c` has dimension greater than three the remaining indices enumerate multiple sets of coefficients. Returns: **values** ndarray, compatible object The values of the three dimensional Chebyshev series at points in the Cartesian product of `x`, `y`, and `z`. See also [`chebval`](numpy.polynomial.chebyshev.chebval#numpy.polynomial.chebyshev.chebval "numpy.polynomial.chebyshev.chebval"), [`chebval2d`](numpy.polynomial.chebyshev.chebval2d#numpy.polynomial.chebyshev.chebval2d "numpy.polynomial.chebyshev.chebval2d"), [`chebgrid2d`](numpy.polynomial.chebyshev.chebgrid2d#numpy.polynomial.chebyshev.chebgrid2d "numpy.polynomial.chebyshev.chebgrid2d"), [`chebval3d`](numpy.polynomial.chebyshev.chebval3d#numpy.polynomial.chebyshev.chebval3d "numpy.polynomial.chebyshev.chebval3d") # numpy.polynomial.chebyshev.chebint polynomial.chebyshev.chebint(_c_ , _m =1_, _k =[]_, _lbnd =0_, _scl =1_, _axis =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L964-L1086) Integrate a Chebyshev series. Returns the Chebyshev series coefficients `c` integrated `m` times from `lbnd` along `axis`. At each iteration the resulting series is **multiplied** by `scl` and an integration constant, `k`, is added. The scaling factor is for use in a linear change of variable. (“Buyer beware”: note that, depending on what one is doing, one may want `scl` to be the reciprocal of what one might expect; for more information, see the Notes section below.)
The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series `T_0 + 2*T_1 + 3*T_2` while [[1,2],[1,2]] represents `1*T_0(x)*T_0(y) + 1*T_1(x)*T_0(y) + 2*T_0(x)*T_1(y) + 2*T_1(x)*T_1(y)` if axis=0 is `x` and axis=1 is `y`. Parameters: **c** array_like Array of Chebyshev series coefficients. If c is multidimensional the different axes correspond to different variables with the degree in each axis given by the corresponding index. **m** int, optional Order of integration, must be positive. (Default: 1) **k**{[], list, scalar}, optional Integration constant(s). The value of the first integral at zero is the first value in the list, the value of the second integral at zero is the second value, etc. If `k == []` (the default), all constants are set to zero. If `m == 1`, a single scalar can be given instead of a list. **lbnd** scalar, optional The lower bound of the integral. (Default: 0) **scl** scalar, optional Following each integration the result is _multiplied_ by `scl` before the integration constant is added. (Default: 1) **axis** int, optional Axis over which the integral is taken. (Default: 0). Returns: **S** ndarray C-series coefficients of the integral. Raises: ValueError If `m < 1`, `len(k) > m`, `np.ndim(lbnd) != 0`, or `np.ndim(scl) != 0`. See also [`chebder`](numpy.polynomial.chebyshev.chebder#numpy.polynomial.chebyshev.chebder "numpy.polynomial.chebyshev.chebder") #### Notes Note that the result of each integration is _multiplied_ by `scl`. Why is this important to note? Say one is making a linear change of variable \\(u = ax + b\\) in an integral relative to `x`. Then \\(dx = du/a\\), so one will need to set `scl` equal to \\(1/a\\), perhaps not what one would have first thought. Also note that, in general, the result of integrating a C-series needs to be “reprojected” onto the C-series basis set.
Thus, typically, the result of this function is “unintuitive,” albeit correct; see Examples section below. #### Examples >>> from numpy.polynomial import chebyshev as C >>> c = (1,2,3) >>> C.chebint(c) array([ 0.5, -0.5, 0.5, 0.5]) >>> C.chebint(c,3) array([ 0.03125 , -0.1875 , 0.04166667, -0.05208333, 0.01041667, # may vary 0.00625 ]) >>> C.chebint(c, k=3) array([ 3.5, -0.5, 0.5, 0.5]) >>> C.chebint(c,lbnd=-2) array([ 8.5, -0.5, 0.5, 0.5]) >>> C.chebint(c,scl=-2) array([-1., 1., -1., -1.]) # numpy.polynomial.chebyshev.chebinterpolate polynomial.chebyshev.chebinterpolate(_func_ , _deg_ , _args =()_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L1729-L1790) Interpolate a function at the Chebyshev points of the first kind. Returns the Chebyshev series that interpolates `func` at the Chebyshev points of the first kind in the interval [-1, 1]. The interpolating series tends to a minmax approximation to `func` with increasing `deg` if the function is continuous in the interval. Parameters: **func** function The function to be approximated. It must be a function of a single variable of the form `f(x, a, b, c...)`, where `a, b, c...` are extra arguments passed in the `args` parameter. **deg** int Degree of the interpolating polynomial. **args** tuple, optional Extra arguments to be used in the function call. Default is no extra arguments. Returns: **coef** ndarray, shape (deg + 1,) Chebyshev coefficients of the interpolating series ordered from low to high. #### Notes The Chebyshev polynomials used in the interpolation are orthogonal when sampled at the Chebyshev points of the first kind. If it is desired to constrain some of the coefficients they can simply be set to the desired value after the interpolation; no new interpolation or fit is needed. This is especially useful if it is known a priori that some of the coefficients are zero.
For instance, if the function is even then the coefficients of the terms of odd degree in the result can be set to zero. #### Examples >>> import numpy as np >>> import numpy.polynomial.chebyshev as C >>> C.chebinterpolate(lambda x: np.tanh(x) + 0.5, 8) array([ 5.00000000e-01, 8.11675684e-01, -9.86864911e-17, -5.42457905e-02, -2.71387850e-16, 4.51658839e-03, 2.46716228e-17, -3.79694221e-04, -3.26899002e-16]) # numpy.polynomial.chebyshev.chebline polynomial.chebyshev.chebline(_off_ , _scl_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L476-L511) Chebyshev series whose graph is a straight line. Parameters: **off, scl** scalars The specified line is given by `off + scl*x`. Returns: **y** ndarray This module’s representation of the Chebyshev series for `off + scl*x`. See also [`numpy.polynomial.polynomial.polyline`](numpy.polynomial.polynomial.polyline#numpy.polynomial.polynomial.polyline "numpy.polynomial.polynomial.polyline") [`numpy.polynomial.legendre.legline`](numpy.polynomial.legendre.legline#numpy.polynomial.legendre.legline "numpy.polynomial.legendre.legline") [`numpy.polynomial.laguerre.lagline`](numpy.polynomial.laguerre.lagline#numpy.polynomial.laguerre.lagline "numpy.polynomial.laguerre.lagline") [`numpy.polynomial.hermite.hermline`](numpy.polynomial.hermite.hermline#numpy.polynomial.hermite.hermline "numpy.polynomial.hermite.hermline") [`numpy.polynomial.hermite_e.hermeline`](numpy.polynomial.hermite_e.hermeline#numpy.polynomial.hermite_e.hermeline "numpy.polynomial.hermite_e.hermeline") #### Examples >>> import numpy.polynomial.chebyshev as C >>> C.chebline(3,2) array([3, 2]) >>> C.chebval(-3, C.chebline(3,2)) # should be -3 -3.0 # numpy.polynomial.chebyshev.chebmul polynomial.chebyshev.chebmul(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L700-L746) Multiply one Chebyshev series by another. Returns the product of two Chebyshev series `c1` * `c2`.
The arguments are sequences of coefficients, from lowest order “term” to highest, e.g., [1,2,3] represents the series `T_0 + 2*T_1 + 3*T_2`. Parameters: **c1, c2** array_like 1-D arrays of Chebyshev series coefficients ordered from low to high. Returns: **out** ndarray Of Chebyshev series coefficients representing their product. See also [`chebadd`](numpy.polynomial.chebyshev.chebadd#numpy.polynomial.chebyshev.chebadd "numpy.polynomial.chebyshev.chebadd"), [`chebsub`](numpy.polynomial.chebyshev.chebsub#numpy.polynomial.chebyshev.chebsub "numpy.polynomial.chebyshev.chebsub"), [`chebmulx`](numpy.polynomial.chebyshev.chebmulx#numpy.polynomial.chebyshev.chebmulx "numpy.polynomial.chebyshev.chebmulx"), [`chebdiv`](numpy.polynomial.chebyshev.chebdiv#numpy.polynomial.chebyshev.chebdiv "numpy.polynomial.chebyshev.chebdiv"), [`chebpow`](numpy.polynomial.chebyshev.chebpow#numpy.polynomial.chebyshev.chebpow "numpy.polynomial.chebyshev.chebpow") #### Notes In general, the (polynomial) product of two C-series results in terms that are not in the Chebyshev polynomial basis set. Thus, to express the product as a C-series, it is typically necessary to “reproject” the product onto said basis set, which typically produces “unintuitive” (but correct) results; see Examples section below. #### Examples >>> from numpy.polynomial import chebyshev as C >>> c1 = (1,2,3) >>> c2 = (3,2,1) >>> C.chebmul(c1,c2) # multiplication requires "reprojection" array([ 6.5, 12. , 12. , 4. , 1.5]) # numpy.polynomial.chebyshev.chebmulx polynomial.chebyshev.chebmulx(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L655-L697) Multiply a Chebyshev series by x. Multiply the polynomial `c` by x, where x is the independent variable. Parameters: **c** array_like 1-D array of Chebyshev series coefficients ordered from low to high. Returns: **out** ndarray Array representing the result of the multiplication. 
See also [`chebadd`](numpy.polynomial.chebyshev.chebadd#numpy.polynomial.chebyshev.chebadd "numpy.polynomial.chebyshev.chebadd"), [`chebsub`](numpy.polynomial.chebyshev.chebsub#numpy.polynomial.chebyshev.chebsub "numpy.polynomial.chebyshev.chebsub"), [`chebmul`](numpy.polynomial.chebyshev.chebmul#numpy.polynomial.chebyshev.chebmul "numpy.polynomial.chebyshev.chebmul"), [`chebdiv`](numpy.polynomial.chebyshev.chebdiv#numpy.polynomial.chebyshev.chebdiv "numpy.polynomial.chebyshev.chebdiv"), [`chebpow`](numpy.polynomial.chebyshev.chebpow#numpy.polynomial.chebyshev.chebpow "numpy.polynomial.chebyshev.chebpow") #### Examples >>> from numpy.polynomial import chebyshev as C >>> C.chebmulx([1,2,3]) array([1. , 2.5, 1. , 1.5]) # numpy.polynomial.chebyshev.chebone polynomial.chebyshev.chebone _= array([1])_ An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(…)`) for instantiating an array. For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array. Parameters: **(for the __new__ method; see Notes below)** **shape** tuple of ints Shape of created array. **dtype** data-type, optional Any object that can be interpreted as a numpy data type. **buffer** object exposing buffer interface, optional Used to fill the array with data. **offset** int, optional Offset of array data in buffer. **strides** tuple of ints, optional Strides of data in memory. 
**order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also [`array`](numpy.array#numpy.array "numpy.array") Construct an array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Create an array, each element of which is zero. [`empty`](numpy.empty#numpy.empty "numpy.empty") Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Create a data-type. [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray") An ndarray alias [generic](https://docs.python.org/3/glossary.html#term- generic-type "\(in Python v3.13\)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). #### Notes There are two modes of creating an array using `__new__`: 1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used. 2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted. No `__init__` method is needed because the array is fully initialized after the `__new__` method. #### Examples These examples illustrate the low-level [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray. First mode, `buffer` is None: >>> import numpy as np >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. skip first element array([2, 3]) Attributes: **T** ndarray Transpose of the array. **data** buffer The array’s elements, in memory. **dtype** dtype object Describes the format of the elements in the array. **flags** dict Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc. 
**flat** numpy.flatiter object Flattened version of the array as an iterator. The iterator allows assignments, e.g., `x.flat = 3` (See [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") for assignment examples; TODO). **imag** ndarray Imaginary part of the array. **real** ndarray Real part of the array. **size** int Number of elements in the array. **itemsize** int The memory use of each array element in bytes. **nbytes** int The total number of bytes required to store the array data, i.e., `itemsize * size`. **ndim** int The array’s number of dimensions. **shape** tuple of ints Shape of the array. **strides** tuple of ints The step-size required to move from one element to the next in memory. For example, a contiguous `(3, 4)` array of type `int16` in C-order has strides `(8, 2)`. This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (`2 * 4`). **ctypes** ctypes object Class containing properties of the array needed for interaction with ctypes. **base** ndarray If the array is a view into another array, that array is its `base` (unless that array is also a view). The `base` array is where the array data is actually stored. # numpy.polynomial.chebyshev.chebpow polynomial.chebyshev.chebpow(_c_ , _pow_ , _maxpower =16_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L816-L871) Raise a Chebyshev series to a power. Returns the Chebyshev series `c` raised to the power [`pow`](numpy.pow#numpy.pow "numpy.pow"). The argument `c` is a sequence of coefficients ordered from low to high. i.e., [1,2,3] is the series `T_0 + 2*T_1 + 3*T_2.` Parameters: **c** array_like 1-D array of Chebyshev series coefficients ordered from low to high. **pow** integer Power to which the series will be raised **maxpower** integer, optional Maximum power allowed. This is mainly to limit growth of the series to unmanageable size. 
Default is 16 Returns: **coef** ndarray Chebyshev series of power. See also [`chebadd`](numpy.polynomial.chebyshev.chebadd#numpy.polynomial.chebyshev.chebadd "numpy.polynomial.chebyshev.chebadd"), [`chebsub`](numpy.polynomial.chebyshev.chebsub#numpy.polynomial.chebyshev.chebsub "numpy.polynomial.chebyshev.chebsub"), [`chebmulx`](numpy.polynomial.chebyshev.chebmulx#numpy.polynomial.chebyshev.chebmulx "numpy.polynomial.chebyshev.chebmulx"), [`chebmul`](numpy.polynomial.chebyshev.chebmul#numpy.polynomial.chebyshev.chebmul "numpy.polynomial.chebyshev.chebmul"), [`chebdiv`](numpy.polynomial.chebyshev.chebdiv#numpy.polynomial.chebyshev.chebdiv "numpy.polynomial.chebyshev.chebdiv") #### Examples >>> from numpy.polynomial import chebyshev as C >>> C.chebpow([1, 2, 3, 4], 2) array([15.5, 22. , 16. , ..., 12.5, 12. , 8. ]) # numpy.polynomial.chebyshev.chebpts1 polynomial.chebyshev.chebpts1(_npts_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L1857-L1885) Chebyshev points of the first kind. The Chebyshev points of the first kind are the points `cos(x)`, where `x = [pi*(k + .5)/npts for k in range(npts)]`. Parameters: **npts** int Number of sample points desired. Returns: **pts** ndarray The Chebyshev points of the first kind. See also [`chebpts2`](numpy.polynomial.chebyshev.chebpts2#numpy.polynomial.chebyshev.chebpts2 "numpy.polynomial.chebyshev.chebpts2") # numpy.polynomial.chebyshev.chebpts2 polynomial.chebyshev.chebpts2(_npts_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L1888-L1913) Chebyshev points of the second kind. The Chebyshev points of the second kind are the points `cos(x)`, where `x = [pi*k/(npts - 1) for k in range(npts)]` sorted in ascending order. Parameters: **npts** int Number of sample points desired. Returns: **pts** ndarray The Chebyshev points of the second kind. 
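Neither `chebpts1` nor `chebpts2` carries an Examples section above. The following sketch (illustrative, not from the original docstrings) checks the defining properties of both point sets:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Chebyshev points of the first kind: cos(pi*(k + .5)/npts), in ascending order
pts1 = C.chebpts1(4)
# Chebyshev points of the second kind: cos(pi*k/(npts - 1)), ascending,
# always including the interval endpoints -1 and 1
pts2 = C.chebpts2(4)

# Both sets lie in [-1, 1] and are symmetric about 0
assert np.all(np.abs(pts1) <= 1) and np.allclose(pts1, -pts1[::-1])
assert pts2[0] == -1.0 and pts2[-1] == 1.0
```

The first-kind points avoid the endpoints (useful for interpolation with `chebinterpolate`), while the second-kind points include them.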
# numpy.polynomial.chebyshev.chebroots polynomial.chebyshev.chebroots(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L1668-L1726) Compute the roots of a Chebyshev series. Return the roots (a.k.a. “zeros”) of the polynomial \\[p(x) = \sum_i c[i] * T_i(x).\\] Parameters: **c** 1-D array_like 1-D array of coefficients. Returns: **out** ndarray Array of the roots of the series. If all the roots are real, then `out` is also real, otherwise it is complex. See also [`numpy.polynomial.polynomial.polyroots`](numpy.polynomial.polynomial.polyroots#numpy.polynomial.polynomial.polyroots "numpy.polynomial.polynomial.polyroots") [`numpy.polynomial.legendre.legroots`](numpy.polynomial.legendre.legroots#numpy.polynomial.legendre.legroots "numpy.polynomial.legendre.legroots") [`numpy.polynomial.laguerre.lagroots`](numpy.polynomial.laguerre.lagroots#numpy.polynomial.laguerre.lagroots "numpy.polynomial.laguerre.lagroots") [`numpy.polynomial.hermite.hermroots`](numpy.polynomial.hermite.hermroots#numpy.polynomial.hermite.hermroots "numpy.polynomial.hermite.hermroots") [`numpy.polynomial.hermite_e.hermeroots`](numpy.polynomial.hermite_e.hermeroots#numpy.polynomial.hermite_e.hermeroots "numpy.polynomial.hermite_e.hermeroots") #### Notes The root estimates are obtained as the eigenvalues of the companion matrix. Roots far from the origin of the complex plane may have large errors due to the numerical instability of the series for such values. Roots with multiplicity greater than 1 will also show larger errors as the value of the series near such points is relatively insensitive to errors in the roots. Isolated roots near the origin can be improved by a few iterations of Newton’s method. The Chebyshev series basis polynomials aren’t powers of `x`, so the results of this function may seem unintuitive. 
#### Examples >>> import numpy.polynomial.chebyshev as cheb >>> cheb.chebroots((-1, 1,-1, 1)) # T3 - T2 + T1 - T0 has real roots array([ -5.00000000e-01, 2.60860684e-17, 1.00000000e+00]) # may vary # numpy.polynomial.chebyshev.chebsub polynomial.chebyshev.chebsub(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L611-L652) Subtract one Chebyshev series from another. Returns the difference of two Chebyshev series `c1` \- `c2`. The sequences of coefficients are from lowest order term to highest, i.e., [1,2,3] represents the series `T_0 + 2*T_1 + 3*T_2`. Parameters: **c1, c2** array_like 1-D arrays of Chebyshev series coefficients ordered from low to high. Returns: **out** ndarray Of Chebyshev series coefficients representing their difference. See also [`chebadd`](numpy.polynomial.chebyshev.chebadd#numpy.polynomial.chebyshev.chebadd "numpy.polynomial.chebyshev.chebadd"), [`chebmulx`](numpy.polynomial.chebyshev.chebmulx#numpy.polynomial.chebyshev.chebmulx "numpy.polynomial.chebyshev.chebmulx"), [`chebmul`](numpy.polynomial.chebyshev.chebmul#numpy.polynomial.chebyshev.chebmul "numpy.polynomial.chebyshev.chebmul"), [`chebdiv`](numpy.polynomial.chebyshev.chebdiv#numpy.polynomial.chebyshev.chebdiv "numpy.polynomial.chebyshev.chebdiv"), [`chebpow`](numpy.polynomial.chebyshev.chebpow#numpy.polynomial.chebyshev.chebpow "numpy.polynomial.chebyshev.chebpow") #### Notes Unlike multiplication, division, etc., the difference of two Chebyshev series is a Chebyshev series (without having to “reproject” the result onto the basis set) so subtraction, just like that of “standard” polynomials, is simply “component-wise.” #### Examples >>> from numpy.polynomial import chebyshev as C >>> c1 = (1,2,3) >>> c2 = (3,2,1) >>> C.chebsub(c1,c2) array([-2., 0., 2.]) >>> C.chebsub(c2,c1) # -C.chebsub(c1,c2) array([ 2., 0., -2.]) # numpy.polynomial.chebyshev.chebtrim polynomial.chebyshev.chebtrim(_c_ , _tol 
=0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polyutils.py#L144-L192) Remove “small” “trailing” coefficients from a polynomial. “Small” means “small in absolute value” and is controlled by the parameter `tol`; “trailing” means highest order coefficient(s), e.g., in `[0, 1, 1, 0, 0]` (which represents `0 + x + x**2 + 0*x**3 + 0*x**4`) both the 3-rd and 4-th order coefficients would be “trimmed.” Parameters: **c** array_like 1-d array of coefficients, ordered from lowest order to highest. **tol** number, optional Trailing (i.e., highest order) elements with absolute value less than or equal to `tol` (default value is zero) are removed. Returns: **trimmed** ndarray 1-d array with trailing zeros removed. If the resulting series would be empty, a series containing a single zero is returned. Raises: ValueError If `tol` < 0 #### Examples >>> from numpy.polynomial import polyutils as pu >>> pu.trimcoef((0,0,3,0,5,0,0)) array([0., 0., 3., 0., 5.]) >>> pu.trimcoef((0,0,1e-3,0,1e-5,0,0),1e-3) # item == tol is trimmed array([0.]) >>> i = complex(0,1) # works for complex >>> pu.trimcoef((3e-4,1e-3*(1-i),5e-4,2e-5*(1+i)), 1e-3) array([0.0003+0.j , 0.001 -0.001j]) # numpy.polynomial.chebyshev.chebval polynomial.chebyshev.chebval(_x_ , _c_ , _tensor =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L1089-L1168) Evaluate a Chebyshev series at points x. If `c` is of length `n + 1`, this function returns the value: \\[p(x) = c_0 * T_0(x) + c_1 * T_1(x) + ... + c_n * T_n(x)\\] The parameter `x` is converted to an array only if it is a tuple or a list, otherwise it is treated as a scalar. In either case, either `x` or its elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` is a 1-D array, then `p(x)` will have the same shape as `x`. If `c` is multidimensional, then the shape of the result depends on the value of `tensor`. 
If `tensor` is true the shape will be c.shape[1:] + x.shape. If `tensor` is false the shape will be c.shape[1:]. Note that scalars have shape (). Trailing zeros in the coefficients will be used in the evaluation, so they should be avoided if efficiency is a concern. Parameters: **x** array_like, compatible object If `x` is a list or tuple, it is converted to an ndarray; otherwise it is left unchanged and treated as a scalar. In either case, `x` or its elements must support addition and multiplication with themselves and with the elements of `c`. **c** array_like Array of coefficients ordered so that the coefficients for terms of degree n are contained in c[n]. If `c` is multidimensional the remaining indices enumerate multiple polynomials. In the two dimensional case the coefficients may be thought of as stored in the columns of `c`. **tensor** boolean, optional If True, the shape of the coefficient array is extended with ones on the right, one for each dimension of `x`. Scalars have dimension 0 for this action. The result is that every column of coefficients in `c` is evaluated for every element of `x`. If False, `x` is broadcast over the columns of `c` for the evaluation. This keyword is useful when `c` is multidimensional. The default value is True. Returns: **values** ndarray, algebra_like The shape of the return value is described above. See also [`chebval2d`](numpy.polynomial.chebyshev.chebval2d#numpy.polynomial.chebyshev.chebval2d "numpy.polynomial.chebyshev.chebval2d"), [`chebgrid2d`](numpy.polynomial.chebyshev.chebgrid2d#numpy.polynomial.chebyshev.chebgrid2d "numpy.polynomial.chebyshev.chebgrid2d"), [`chebval3d`](numpy.polynomial.chebyshev.chebval3d#numpy.polynomial.chebyshev.chebval3d "numpy.polynomial.chebyshev.chebval3d"), [`chebgrid3d`](numpy.polynomial.chebyshev.chebgrid3d#numpy.polynomial.chebyshev.chebgrid3d "numpy.polynomial.chebyshev.chebgrid3d") #### Notes The evaluation uses Clenshaw recursion, aka synthetic division. 
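The `chebval` entry has no Examples section; here is a short sketch (not from the original docstring) that can be checked by hand against `T_0 = 1`, `T_1 = x`, `T_2 = 2*x**2 - 1`:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

coef = (1, 2, 3)  # 1*T_0 + 2*T_1 + 3*T_2
# p(2.5) = 1 + 2*2.5 + 3*(2*2.5**2 - 1) = 40.5
assert C.chebval(2.5, coef) == 40.5

# A list of points is converted to an ndarray of the same shape
vals = C.chebval([1.0, 2.5], coef)
assert np.allclose(vals, [6.0, 40.5])

# With a 2-D coefficient array and tensor=True, every column of
# coefficients is evaluated at every point: shape c.shape[1:] + x.shape
c2 = np.array([[1, 4], [2, 5], [3, 6]])
assert C.chebval([0.5, 2.5], c2, tensor=True).shape == (2, 2)
```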
# numpy.polynomial.chebyshev.chebval2d polynomial.chebyshev.chebval2d(_x_ , _y_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L1171-L1211) Evaluate a 2-D Chebyshev series at points (x, y). This function returns the values: \\[p(x,y) = \sum_{i,j} c_{i,j} * T_i(x) * T_j(y)\\] The parameters `x` and `y` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars, and they must have the same shape after conversion. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` is a 1-D array, a one is implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape. Parameters: **x, y** array_like, compatible objects The two dimensional series is evaluated at the points `(x, y)`, where `x` and `y` must have the same shape. If `x` or `y` is a list or tuple, it is first converted to an ndarray; otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in `c[i,j]`. If `c` has dimension greater than 2 the remaining indices enumerate multiple sets of coefficients. Returns: **values** ndarray, compatible object The values of the two dimensional Chebyshev series at points formed from pairs of corresponding values from `x` and `y`. 
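As an illustration of `chebval2d` (a hedged sketch, not part of the original docstring), the evaluation agrees with an explicit double sum over the basis products:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

c = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # c[i, j] multiplies T_i(x) * T_j(y)
x, y = 0.5, -0.5

# Manual double sum: chebval with a unit coefficient vector gives T_i
expected = sum(
    c[i, j] * C.chebval(x, np.eye(2)[i]) * C.chebval(y, np.eye(2)[j])
    for i in range(2) for j in range(2)
)
assert np.allclose(C.chebval2d(x, y, c), expected)
```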
See also [`chebval`](numpy.polynomial.chebyshev.chebval#numpy.polynomial.chebyshev.chebval "numpy.polynomial.chebyshev.chebval"), [`chebgrid2d`](numpy.polynomial.chebyshev.chebgrid2d#numpy.polynomial.chebyshev.chebgrid2d "numpy.polynomial.chebyshev.chebgrid2d"), [`chebval3d`](numpy.polynomial.chebyshev.chebval3d#numpy.polynomial.chebyshev.chebval3d "numpy.polynomial.chebyshev.chebval3d"), [`chebgrid3d`](numpy.polynomial.chebyshev.chebgrid3d#numpy.polynomial.chebyshev.chebgrid3d "numpy.polynomial.chebyshev.chebgrid3d") # numpy.polynomial.chebyshev.chebval3d polynomial.chebyshev.chebval3d(_x_ , _y_ , _z_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L1261-L1303) Evaluate a 3-D Chebyshev series at points (x, y, z). This function returns the values: \\[p(x,y,z) = \sum_{i,j,k} c_{i,j,k} * T_i(x) * T_j(y) * T_k(z)\\] The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars, and they must have the same shape after conversion. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than 3 dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape. Parameters: **x, y, z** array_like, compatible object The three dimensional series is evaluated at the points `(x, y, z)`, where `x`, `y`, and `z` must have the same shape. If any of `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray; otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j,k is contained in `c[i,j,k]`. If `c` has dimension greater than 3 the remaining indices enumerate multiple sets of coefficients. 
Returns: **values** ndarray, compatible object The values of the multidimensional polynomial on points formed with triples of corresponding values from `x`, `y`, and `z`. See also [`chebval`](numpy.polynomial.chebyshev.chebval#numpy.polynomial.chebyshev.chebval "numpy.polynomial.chebyshev.chebval"), [`chebval2d`](numpy.polynomial.chebyshev.chebval2d#numpy.polynomial.chebyshev.chebval2d "numpy.polynomial.chebyshev.chebval2d"), [`chebgrid2d`](numpy.polynomial.chebyshev.chebgrid2d#numpy.polynomial.chebyshev.chebgrid2d "numpy.polynomial.chebyshev.chebgrid2d"), [`chebgrid3d`](numpy.polynomial.chebyshev.chebgrid3d#numpy.polynomial.chebyshev.chebgrid3d "numpy.polynomial.chebyshev.chebgrid3d") # numpy.polynomial.chebyshev.chebvander polynomial.chebyshev.chebvander(_x_ , _deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L1356-L1406) Pseudo-Vandermonde matrix of given degree. Returns the pseudo-Vandermonde matrix of degree `deg` and sample points `x`. The pseudo-Vandermonde matrix is defined by \\[V[..., i] = T_i(x),\\] where `0 <= i <= deg`. The leading indices of `V` index the elements of `x` and the last index is the degree of the Chebyshev polynomial. If `c` is a 1-D array of coefficients of length `n + 1` and `V` is the matrix `V = chebvander(x, n)`, then `np.dot(V, c)` and `chebval(x, c)` are the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of Chebyshev series of the same degree and sample points. Parameters: **x** array_like Array of points. The dtype is converted to float64 or complex128 depending on whether any of the elements are complex. If `x` is scalar it is converted to a 1-D array. **deg** int Degree of the resulting matrix. Returns: **vander** ndarray The pseudo Vandermonde matrix. The shape of the returned matrix is `x.shape + (deg + 1,)`, where the last index is the degree of the corresponding Chebyshev polynomial. 
The dtype will be the same as the converted `x`. # numpy.polynomial.chebyshev.chebvander2d polynomial.chebyshev.chebvander2d(_x_ , _y_ , _deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L1409-L1453) Pseudo-Vandermonde matrix of given degrees. Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y)`. The pseudo-Vandermonde matrix is defined by \\[V[..., (deg[1] + 1)*i + j] = T_i(x) * T_j(y),\\] where `0 <= i <= deg[0]` and `0 <= j <= deg[1]`. The leading indices of `V` index the points `(x, y)` and the last index encodes the degrees of the Chebyshev polynomials. If `V = chebvander2d(x, y, [xdeg, ydeg])`, then the columns of `V` correspond to the elements of a 2-D coefficient array `c` of shape (xdeg + 1, ydeg + 1) in the order \\[c_{00}, c_{01}, c_{02} ... , c_{10}, c_{11}, c_{12} ...\\] and `np.dot(V, c.flat)` and `chebval2d(x, y, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 2-D Chebyshev series of the same degrees and sample points. Parameters: **x, y** array_like Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. **deg** list of ints List of maximum degrees of the form [x_deg, y_deg]. Returns: **vander2d** ndarray The shape of the returned matrix is `x.shape + (order,)`, where \\(order = (deg[0]+1)*(deg[1]+1)\\). The dtype will be the same as the converted `x` and `y`. 
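The flattening convention described above can be verified directly. This sketch (illustrative only, not from the original docstring) checks `np.dot(V, c.flat)` against `chebval2d`:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 5)
y = rng.uniform(-1, 1, 5)
c = rng.uniform(size=(3, 4))      # shape (xdeg + 1, ydeg + 1)

V = C.chebvander2d(x, y, [2, 3])  # shape x.shape + ((2+1)*(3+1),)
assert V.shape == (5, 12)

# Columns of V pair with c.flat; the product reproduces chebval2d
assert np.allclose(V @ c.ravel(), C.chebval2d(x, y, c))
```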
See also [`chebvander`](numpy.polynomial.chebyshev.chebvander#numpy.polynomial.chebyshev.chebvander "numpy.polynomial.chebyshev.chebvander"), [`chebvander3d`](numpy.polynomial.chebyshev.chebvander3d#numpy.polynomial.chebyshev.chebvander3d "numpy.polynomial.chebyshev.chebvander3d"), [`chebval2d`](numpy.polynomial.chebyshev.chebval2d#numpy.polynomial.chebyshev.chebval2d "numpy.polynomial.chebyshev.chebval2d"), [`chebval3d`](numpy.polynomial.chebyshev.chebval3d#numpy.polynomial.chebyshev.chebval3d "numpy.polynomial.chebyshev.chebval3d") # numpy.polynomial.chebyshev.chebvander3d polynomial.chebyshev.chebvander3d(_x_ , _y_ , _z_ , _deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L1456-L1501) Pseudo-Vandermonde matrix of given degrees. Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y, z)`. If `l`, `m`, `n` are the given degrees in `x`, `y`, `z`, then the pseudo-Vandermonde matrix is defined by \\[V[..., (m+1)(n+1)i + (n+1)j + k] = T_i(x)*T_j(y)*T_k(z),\\] where `0 <= i <= l`, `0 <= j <= m`, and `0 <= k <= n`. The leading indices of `V` index the points `(x, y, z)` and the last index encodes the degrees of the Chebyshev polynomials. If `V = chebvander3d(x, y, z, [xdeg, ydeg, zdeg])`, then the columns of `V` correspond to the elements of a 3-D coefficient array `c` of shape (xdeg + 1, ydeg + 1, zdeg + 1) in the order \\[c_{000}, c_{001}, c_{002},... , c_{010}, c_{011}, c_{012},...\\] and `np.dot(V, c.flat)` and `chebval3d(x, y, z, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 3-D Chebyshev series of the same degrees and sample points. Parameters: **x, y, z** array_like Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. 
**deg** list of ints List of maximum degrees of the form [x_deg, y_deg, z_deg]. Returns: **vander3d** ndarray The shape of the returned matrix is `x.shape + (order,)`, where \\(order = (deg[0]+1)*(deg[1]+1)*(deg[2]+1)\\). The dtype will be the same as the converted `x`, `y`, and `z`. See also [`chebvander`](numpy.polynomial.chebyshev.chebvander#numpy.polynomial.chebyshev.chebvander "numpy.polynomial.chebyshev.chebvander"), [`chebvander2d`](numpy.polynomial.chebyshev.chebvander2d#numpy.polynomial.chebyshev.chebvander2d "numpy.polynomial.chebyshev.chebvander2d"), [`chebval2d`](numpy.polynomial.chebyshev.chebval2d#numpy.polynomial.chebyshev.chebval2d "numpy.polynomial.chebyshev.chebval2d"), [`chebval3d`](numpy.polynomial.chebyshev.chebval3d#numpy.polynomial.chebyshev.chebval3d "numpy.polynomial.chebyshev.chebval3d") # numpy.polynomial.chebyshev.chebweight polynomial.chebyshev.chebweight(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L1835-L1854) The weight function of the Chebyshev polynomials. The weight function is \\(1/\sqrt{1 - x^2}\\) and the interval of integration is \\([-1, 1]\\). The Chebyshev polynomials are orthogonal, but not normalized, with respect to this weight function. Parameters: **x** array_like Values at which the weight function will be computed. Returns: **w** ndarray The weight function at `x`. # numpy.polynomial.chebyshev.chebx polynomial.chebyshev.chebx _= array([0, 1])_ An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(…)`) for instantiating an array. 
For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array. The remainder of this `ndarray` description (parameters, notes, examples, and attributes) is identical to the copy given under `chebone` above. 
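The module constants `chebone` and `chebx` are, despite the lengthy `ndarray` descriptions, just plain coefficient arrays for the series `1` and `x`. A quick sketch (illustrative, not from the original docs):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# chebone and chebx are coefficient arrays for the series 1 and x
assert np.array_equal(C.chebone, [1])
assert np.array_equal(C.chebx, [0, 1])

# They behave like any other coefficient sequence
assert C.chebval(0.25, C.chebx) == 0.25   # p(x) = x
assert C.chebval(7.0, C.chebone) == 1.0   # p(x) = 1
```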
# numpy.polynomial.chebyshev.Chebyshev.__call__ method polynomial.chebyshev.Chebyshev.__call__(_arg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L513-L515) Call self as a function. # numpy.polynomial.chebyshev.Chebyshev.basis method _classmethod_ polynomial.chebyshev.Chebyshev.basis(_deg_ , _domain =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1120-L1157) Series basis polynomial of degree `deg`. Returns the series representing the basis polynomial of degree `deg`. Parameters: **deg** int Degree of the basis polynomial for the series. Must be >= 0. **domain**{None, array_like}, optional If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None. **window**{None, array_like}, optional If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None. **symbol** str, optional Symbol representing the independent variable. Default is ‘x’. Returns: **new_series** series A series with the coefficient of the `deg` term set to one and all others zero. # numpy.polynomial.chebyshev.Chebyshev.cast method _classmethod_ polynomial.chebyshev.Chebyshev.cast(_series_ , _domain =None_, _window =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1159-L1197) Convert series to series of this class. The `series` is expected to be an instance of some polynomial series of one of the types supported by the numpy.polynomial module, but could be some other class that supports the convert method. Parameters: **series** series The series instance to be converted. 
**domain**{None, array_like}, optional If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None. **window**{None, array_like}, optional If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None. Returns: **new_series** series A series of the same kind as the calling class and equal to `series` when evaluated. See also [`convert`](numpy.polynomial.chebyshev.chebyshev.convert#numpy.polynomial.chebyshev.Chebyshev.convert "numpy.polynomial.chebyshev.Chebyshev.convert") similar instance method # numpy.polynomial.chebyshev.Chebyshev.convert method polynomial.chebyshev.Chebyshev.convert(_domain =None_, _kind =None_, _window =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L785-L820) Convert series to a different kind and/or domain and/or window. Parameters: **domain** array_like, optional The domain of the converted series. If the value is None, the default domain of `kind` is used. **kind** class, optional The polynomial series type class to which the current instance should be converted. If kind is None, then the class of the current instance is used. **window** array_like, optional The window of the converted series. If the value is None, the default window of `kind` is used. Returns: **new_series** series The returned class can be of different type than the current instance and/or have a different domain and/or different window. #### Notes Conversion between domains and class types can result in numerically ill-defined series. # numpy.polynomial.chebyshev.Chebyshev.copy method polynomial.chebyshev.Chebyshev.copy()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L666-L675) Return a copy. Returns: **new_series** series Copy of self. 
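The relationship between `cast` (a classmethod) and `convert` (an instance method) can be sketched with the standard `Polynomial` class; the coefficient values shown follow from the identity `2*x**2 = T_0 + T_2`:

```python
import numpy as np
from numpy.polynomial import Chebyshev, Polynomial

p = Polynomial([0, 1, 2])        # 0 + x + 2*x**2 in the power basis
c = Chebyshev.cast(p)            # same function, expressed in the Chebyshev basis
print(c.coef)                    # [1. 1. 1.]: 0 + x + 2*x**2 == T_0 + T_1 + T_2

# convert is the equivalent instance method; both represent the same function.
x = np.linspace(-1, 1, 9)
print(np.allclose(p.convert(kind=Chebyshev)(x), p(x)))  # True
```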
# numpy.polynomial.chebyshev.Chebyshev.cutdeg method polynomial.chebyshev.Chebyshev.cutdeg(_deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L710-L731) Truncate series to the given degree. Reduce the degree of the series to `deg` by discarding the high order terms. If `deg` is greater than the current degree a copy of the current series is returned. This can be useful in least squares where the coefficients of the high degree terms may be very small. Parameters: **deg** non-negative int The series is reduced to degree `deg` by discarding the high order terms. The value of `deg` must be a non-negative integer. Returns: **new_series** series New instance of series with reduced degree. # numpy.polynomial.chebyshev.Chebyshev.degree method polynomial.chebyshev.Chebyshev.degree()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L677-L708) The degree of the series. Returns: **degree** int Degree of the series, one less than the number of coefficients. #### Examples Create a polynomial object for `1 + 7*x + 4*x**2`: >>> poly = np.polynomial.Polynomial([1, 7, 4]) >>> print(poly) 1.0 + 7.0·x + 4.0·x² >>> poly.degree() 2 Note that this method does not check for non-zero coefficients. You must trim the polynomial to remove any trailing zeroes: >>> poly = np.polynomial.Polynomial([1, 7, 0]) >>> print(poly) 1.0 + 7.0·x + 0.0·x² >>> poly.degree() 2 >>> poly.trim().degree() 1 # numpy.polynomial.chebyshev.Chebyshev.deriv method polynomial.chebyshev.Chebyshev.deriv(_m =1_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L884-L904) Differentiate. Return a series instance that is the derivative of the current series. Parameters: **m** non-negative int Find the derivative of order `m`. Returns: **new_series** series A new series representing the derivative. The domain is the same as the domain of the differentiated series. 
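A minimal sketch of `cutdeg`, `degree`, and `deriv` together on a Chebyshev instance; the derivative coefficients follow from `d/dx T_1 = T_0` and `d/dx T_2 = 4*T_1`:

```python
from numpy.polynomial import Chebyshev

c = Chebyshev([1, 2, 3])    # 1*T_0 + 2*T_1 + 3*T_2
print(c.degree())           # 2
d = c.deriv()               # 2*(d/dx T_1) + 3*(d/dx T_2) = 2*T_0 + 12*T_1
print(d.coef)               # [ 2. 12.]
print(c.cutdeg(1).coef)     # [1. 2.]: the degree-2 term is discarded
```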
# numpy.polynomial.chebyshev.Chebyshev.domain attribute polynomial.chebyshev.Chebyshev.domain _= array([-1., 1.])_ # numpy.polynomial.chebyshev.Chebyshev.fit method _classmethod_ polynomial.chebyshev.Chebyshev.fit(_x_ , _y_ , _deg_ , _domain =None_, _rcond =None_, _full =False_, _w =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L951-L1040) Least squares fit to data. Return a series instance that is the least squares fit to the data `y` sampled at `x`. The domain of the returned instance can be specified and this will often result in a superior fit with less chance of ill conditioning. Parameters: **x** array_like, shape (M,) x-coordinates of the M sample points `(x[i], y[i])`. **y** array_like, shape (M,) y-coordinates of the M sample points `(x[i], y[i])`. **deg** int or 1-D array_like Degree(s) of the fitting polynomials. If `deg` is a single integer all terms up to and including the `deg`’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead. **domain**{None, [beg, end], []}, optional Domain to use for the returned series. If `None`, then a minimal domain that covers the points `x` is chosen. If `[]` the class domain is used. The default value was the class domain in NumPy 1.4 and `None` in later versions. The `[]` option was added in numpy 1.5.0. **rcond** float, optional Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is `len(x)*eps`, where eps is the relative precision of the float type, about 2e-16 in most cases. **full** bool, optional Switch determining nature of return value. When it is False (the default) just the coefficients are returned, when True diagnostic information from the singular value decomposition is also returned. **w** array_like, shape (M,), optional Weights. 
If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None. **window**{[beg, end]}, optional Window to use for the returned series. The default value is the default class domain. **symbol** str, optional Symbol representing the independent variable. Default is ‘x’. Returns: **new_series** series A series that represents the least squares fit to the data and has the domain and window specified in the call. If the coefficients for the unscaled and unshifted basis polynomials are of interest, do `new_series.convert().coef`. **[resid, rank, sv, rcond]** list These values are only returned if `full == True` * resid – sum of squared residuals of the least squares fit * rank – the numerical rank of the scaled Vandermonde matrix * sv – singular values of the scaled Vandermonde matrix * rcond – value of `rcond`. For more details, see [`linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq"). # numpy.polynomial.chebyshev.Chebyshev.fromroots method _classmethod_ polynomial.chebyshev.Chebyshev.fromroots(_roots_ , _domain =[]_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1042-L1083) Return series instance that has the specified roots. Returns a series representing the product `(x - r[0])*(x - r[1])*...*(x - r[n-1])`, where `r` is a list of roots. Parameters: **roots** array_like List of roots. **domain**{[], None, array_like}, optional Domain for the resulting series. If None the domain is the interval from the smallest root to the largest. If [] the domain is the class domain. The default is []. **window**{None, array_like}, optional Window for the returned series. If None the class window is used. The default is None. 
**symbol** str, optional Symbol representing the independent variable. Default is ‘x’. Returns: **new_series** series Series with the specified roots. # numpy.polynomial.chebyshev.Chebyshev.has_samecoef method polynomial.chebyshev.Chebyshev.has_samecoef(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L188-L207) Check if coefficients match. Parameters: **other** class instance The other class must have the `coef` attribute. Returns: **bool** boolean True if the coefficients are the same, False otherwise. # numpy.polynomial.chebyshev.Chebyshev.has_samedomain method polynomial.chebyshev.Chebyshev.has_samedomain(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L209-L223) Check if domains match. Parameters: **other** class instance The other class must have the `domain` attribute. Returns: **bool** boolean True if the domains are the same, False otherwise. # numpy.polynomial.chebyshev.Chebyshev.has_sametype method polynomial.chebyshev.Chebyshev.has_sametype(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L241-L255) Check if types match. Parameters: **other** object Class instance. Returns: **bool** boolean True if other is same class as self # numpy.polynomial.chebyshev.Chebyshev.has_samewindow method polynomial.chebyshev.Chebyshev.has_samewindow(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L225-L239) Check if windows match. Parameters: **other** class instance The other class must have the `window` attribute. Returns: **bool** boolean True if the windows are the same, False otherwise. # numpy.polynomial.chebyshev.Chebyshev _class_ numpy.polynomial.chebyshev.Chebyshev(_coef_ , _domain =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L1920-L2003) A Chebyshev series class. 
The Chebyshev class provides the standard Python numerical methods ‘+’, ‘-’, ‘*’, ‘//’, ‘%’, ‘divmod’, ‘**’, and ‘()’ as well as the attributes and methods listed below. Parameters: **coef** array_like Chebyshev coefficients in order of increasing degree, i.e., `(1, 2, 3)` gives `1*T_0(x) + 2*T_1(x) + 3*T_2(x)`. **domain**(2,) array_like, optional Domain to use. The interval `[domain[0], domain[1]]` is mapped to the interval `[window[0], window[1]]` by shifting and scaling. The default value is [-1., 1.]. **window**(2,) array_like, optional Window, see [`domain`](numpy.polynomial.chebyshev.chebyshev.domain#numpy.polynomial.chebyshev.Chebyshev.domain "numpy.polynomial.chebyshev.Chebyshev.domain") for its use. The default value is [-1., 1.]. **symbol** str, optional Symbol used to represent the independent variable in string representations of the polynomial expression, e.g. for printing. The symbol must be a valid Python identifier. Default value is ‘x’. New in version 1.24. Attributes: **symbol** #### Methods [`__call__`](numpy.polynomial.chebyshev.chebyshev.__call__#numpy.polynomial.chebyshev.Chebyshev.__call__ "numpy.polynomial.chebyshev.Chebyshev.__call__")(arg) | Call self as a function. ---|--- [`basis`](numpy.polynomial.chebyshev.chebyshev.basis#numpy.polynomial.chebyshev.Chebyshev.basis "numpy.polynomial.chebyshev.Chebyshev.basis")(deg[, domain, window, symbol]) | Series basis polynomial of degree `deg`. [`cast`](numpy.polynomial.chebyshev.chebyshev.cast#numpy.polynomial.chebyshev.Chebyshev.cast "numpy.polynomial.chebyshev.Chebyshev.cast")(series[, domain, window]) | Convert series to series of this class. [`convert`](numpy.polynomial.chebyshev.chebyshev.convert#numpy.polynomial.chebyshev.Chebyshev.convert "numpy.polynomial.chebyshev.Chebyshev.convert")([domain, kind, window]) | Convert series to a different kind and/or domain and/or window. 
[`copy`](numpy.polynomial.chebyshev.chebyshev.copy#numpy.polynomial.chebyshev.Chebyshev.copy "numpy.polynomial.chebyshev.Chebyshev.copy")() | Return a copy. [`cutdeg`](numpy.polynomial.chebyshev.chebyshev.cutdeg#numpy.polynomial.chebyshev.Chebyshev.cutdeg "numpy.polynomial.chebyshev.Chebyshev.cutdeg")(deg) | Truncate series to the given degree. [`degree`](numpy.polynomial.chebyshev.chebyshev.degree#numpy.polynomial.chebyshev.Chebyshev.degree "numpy.polynomial.chebyshev.Chebyshev.degree")() | The degree of the series. [`deriv`](numpy.polynomial.chebyshev.chebyshev.deriv#numpy.polynomial.chebyshev.Chebyshev.deriv "numpy.polynomial.chebyshev.Chebyshev.deriv")([m]) | Differentiate. [`fit`](numpy.polynomial.chebyshev.chebyshev.fit#numpy.polynomial.chebyshev.Chebyshev.fit "numpy.polynomial.chebyshev.Chebyshev.fit")(x, y, deg[, domain, rcond, full, w, ...]) | Least squares fit to data. [`fromroots`](numpy.polynomial.chebyshev.chebyshev.fromroots#numpy.polynomial.chebyshev.Chebyshev.fromroots "numpy.polynomial.chebyshev.Chebyshev.fromroots")(roots[, domain, window, symbol]) | Return series instance that has the specified roots. [`has_samecoef`](numpy.polynomial.chebyshev.chebyshev.has_samecoef#numpy.polynomial.chebyshev.Chebyshev.has_samecoef "numpy.polynomial.chebyshev.Chebyshev.has_samecoef")(other) | Check if coefficients match. [`has_samedomain`](numpy.polynomial.chebyshev.chebyshev.has_samedomain#numpy.polynomial.chebyshev.Chebyshev.has_samedomain "numpy.polynomial.chebyshev.Chebyshev.has_samedomain")(other) | Check if domains match. [`has_sametype`](numpy.polynomial.chebyshev.chebyshev.has_sametype#numpy.polynomial.chebyshev.Chebyshev.has_sametype "numpy.polynomial.chebyshev.Chebyshev.has_sametype")(other) | Check if types match. [`has_samewindow`](numpy.polynomial.chebyshev.chebyshev.has_samewindow#numpy.polynomial.chebyshev.Chebyshev.has_samewindow "numpy.polynomial.chebyshev.Chebyshev.has_samewindow")(other) | Check if windows match. 
[`identity`](numpy.polynomial.chebyshev.chebyshev.identity#numpy.polynomial.chebyshev.Chebyshev.identity "numpy.polynomial.chebyshev.Chebyshev.identity")([domain, window, symbol]) | Identity function. [`integ`](numpy.polynomial.chebyshev.chebyshev.integ#numpy.polynomial.chebyshev.Chebyshev.integ "numpy.polynomial.chebyshev.Chebyshev.integ")([m, k, lbnd]) | Integrate. [`interpolate`](numpy.polynomial.chebyshev.chebyshev.interpolate#numpy.polynomial.chebyshev.Chebyshev.interpolate "numpy.polynomial.chebyshev.Chebyshev.interpolate")(func, deg[, domain, args]) | Interpolate a function at the Chebyshev points of the first kind. [`linspace`](numpy.polynomial.chebyshev.chebyshev.linspace#numpy.polynomial.chebyshev.Chebyshev.linspace "numpy.polynomial.chebyshev.Chebyshev.linspace")([n, domain]) | Return x, y values at equally spaced points in domain. [`mapparms`](numpy.polynomial.chebyshev.chebyshev.mapparms#numpy.polynomial.chebyshev.Chebyshev.mapparms "numpy.polynomial.chebyshev.Chebyshev.mapparms")() | Return the mapping parameters. [`roots`](numpy.polynomial.chebyshev.chebyshev.roots#numpy.polynomial.chebyshev.Chebyshev.roots "numpy.polynomial.chebyshev.Chebyshev.roots")() | Return the roots of the series polynomial. [`trim`](numpy.polynomial.chebyshev.chebyshev.trim#numpy.polynomial.chebyshev.Chebyshev.trim "numpy.polynomial.chebyshev.Chebyshev.trim")([tol]) | Remove trailing coefficients [`truncate`](numpy.polynomial.chebyshev.chebyshev.truncate#numpy.polynomial.chebyshev.Chebyshev.truncate "numpy.polynomial.chebyshev.Chebyshev.truncate")(size) | Truncate series to length `size`. # numpy.polynomial.chebyshev.Chebyshev.identity method _classmethod_ polynomial.chebyshev.Chebyshev.identity(_domain =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1085-L1118) Identity function. If `p` is the returned series, then `p(x) == x` for all values of x. 
Parameters: **domain**{None, array_like}, optional If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None. **window**{None, array_like}, optional If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None. **symbol** str, optional Symbol representing the independent variable. Default is ‘x’. Returns: **new_series** series Series representing the identity. # numpy.polynomial.chebyshev.Chebyshev.integ method polynomial.chebyshev.Chebyshev.integ(_m =1_, _k =[]_, _lbnd =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L851-L882) Integrate. Return a series instance that is the definite integral of the current series. Parameters: **m** non-negative int The number of integrations to perform. **k** array_like Integration constants. The first constant is applied to the first integration, the second to the second, and so on. The list of values must be less than or equal to `m` in length and any missing values are set to zero. **lbnd** Scalar The lower bound of the definite integral. Returns: **new_series** series A new series representing the integral. The domain is the same as the domain of the integrated series. # numpy.polynomial.chebyshev.Chebyshev.interpolate method _classmethod_ polynomial.chebyshev.Chebyshev.interpolate(_func_ , _deg_ , _domain =None_, _args =()_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L1960-L1998) Interpolate a function at the Chebyshev points of the first kind. Returns the series that interpolates `func` at the Chebyshev points of the first kind scaled and shifted to the [`domain`](numpy.polynomial.chebyshev.chebyshev.domain#numpy.polynomial.chebyshev.Chebyshev.domain "numpy.polynomial.chebyshev.Chebyshev.domain"). 
The resulting series tends to a minimax approximation of `func` when the function is continuous in the domain. Parameters: **func** function The function to be interpolated. It must be a function of a single variable of the form `f(x, a, b, c...)`, where `a, b, c...` are extra arguments passed in the `args` parameter. **deg** int Degree of the interpolating polynomial. **domain**{None, [beg, end]}, optional Domain over which `func` is interpolated. The default is None, in which case the domain is [-1, 1]. **args** tuple, optional Extra arguments to be used in the function call. Default is no extra arguments. Returns: **polynomial** Chebyshev instance Interpolating Chebyshev instance. #### Notes See `numpy.polynomial.chebinterpolate` for more details. # numpy.polynomial.chebyshev.Chebyshev.linspace method polynomial.chebyshev.Chebyshev.linspace(_n =100_, _domain =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L921-L949) Return x, y values at equally spaced points in domain. Returns the x, y values at `n` linearly spaced points across the domain. Here y is the value of the polynomial at the points x. By default the domain is the same as that of the series instance. This method is intended mostly as a plotting aid. Parameters: **n** int, optional Number of point pairs to return. The default value is 100. **domain**{None, array_like}, optional If not None, the specified domain is used instead of that of the calling instance. It should be of the form `[beg,end]`. The default is None, in which case the class domain is used. Returns: **x, y** ndarray x is equal to linspace(self.domain[0], self.domain[1], n) and y is the series evaluated at each element of x. # numpy.polynomial.chebyshev.Chebyshev.mapparms method polynomial.chebyshev.Chebyshev.mapparms()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L822-L849) Return the mapping parameters. 
The returned values define a linear map `off + scl*x` that is applied to the input arguments before the series is evaluated. The map depends on the `domain` and `window`; if the current `domain` is equal to the `window` the resulting map is the identity. If the coefficients of the series instance are to be used by themselves outside this class, then the linear function must be substituted for the `x` in the standard representation of the base polynomials. Returns: **off, scl** float or complex The mapping function is defined by `off + scl*x`. #### Notes If the current domain is the interval `[l1, r1]` and the window is `[l2, r2]`, then the linear mapping function `L` is defined by the equations: L(l1) = l2 L(r1) = r2 # numpy.polynomial.chebyshev.Chebyshev.roots method polynomial.chebyshev.Chebyshev.roots()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L906-L919) Return the roots of the series polynomial. Compute the roots for the series. Note that the accuracy of the roots decreases the further outside the [`domain`](numpy.polynomial.chebyshev.chebyshev.domain#numpy.polynomial.chebyshev.Chebyshev.domain "numpy.polynomial.chebyshev.Chebyshev.domain") they lie. Returns: **roots** ndarray Array containing the roots of the series. # numpy.polynomial.chebyshev.Chebyshev.trim method polynomial.chebyshev.Chebyshev.trim(_tol =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L733-L754) Remove trailing coefficients Remove trailing coefficients until a coefficient is reached whose absolute value is greater than `tol` or the beginning of the series is reached. If all the coefficients would be removed the series is set to `[0]`. A new series instance is returned with the new coefficients. The current instance remains unchanged. Parameters: **tol** non-negative number. All trailing coefficients less than `tol` will be removed. Returns: **new_series** series New instance of series with trimmed coefficients. 
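A minimal sketch of `trim` and `roots` with hypothetical coefficient values; the roots follow from `-T_0 + T_2 == 2*x**2 - 2`:

```python
import numpy as np
from numpy.polynomial import Chebyshev

# trim: trailing coefficients smaller than tol are dropped
c = Chebyshev([1, 2, 0, 1e-12])
print(c.trim(tol=1e-9).coef)        # [1. 2.]

# roots: -T_0 + T_2 == 2*x**2 - 2, which vanishes at x = -1 and x = 1
r = Chebyshev([-1, 0, 1]).roots()
print(np.sort(r))                   # approximately [-1.  1.]
```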
# numpy.polynomial.chebyshev.Chebyshev.truncate method polynomial.chebyshev.Chebyshev.truncate(_size_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L756-L783) Truncate series to length `size`. Reduce the series to length `size` by discarding the high degree terms. The value of `size` must be a positive integer. This can be useful in least squares where the coefficients of the high degree terms may be very small. Parameters: **size** positive int The series is reduced to length `size` by discarding the high degree terms. The value of `size` must be a positive integer. Returns: **new_series** series New instance of series with truncated coefficients. # numpy.polynomial.chebyshev.chebzero polynomial.chebyshev.chebzero _= array([0])_ 
# numpy.polynomial.chebyshev.poly2cheb polynomial.chebyshev.poly2cheb(_pol_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/chebyshev.py#L347-L394) Convert a polynomial to a Chebyshev series. Convert an array representing the coefficients of a polynomial (relative to the “standard” basis) ordered from lowest degree to highest, to an array of the coefficients of the equivalent Chebyshev series, ordered from lowest to highest degree. 
Parameters: **pol** array_like 1-D array containing the polynomial coefficients. Returns: **c** ndarray 1-D array containing the coefficients of the equivalent Chebyshev series. See also [`cheb2poly`](numpy.polynomial.chebyshev.cheb2poly#numpy.polynomial.chebyshev.cheb2poly "numpy.polynomial.chebyshev.cheb2poly") #### Notes The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance. #### Examples >>> from numpy import polynomial as P >>> p = P.Polynomial(range(4)) >>> p Polynomial([0., 1., 2., 3.], domain=[-1., 1.], window=[-1., 1.], symbol='x') >>> c = p.convert(kind=P.Chebyshev) >>> c Chebyshev([1. , 3.25, 1. , 0.75], domain=[-1., 1.], window=[-1., ... >>> P.chebyshev.poly2cheb(range(4)) array([1. , 3.25, 1. , 0.75]) # numpy.polynomial.hermite.herm2poly polynomial.hermite.herm2poly(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L142-L197) Convert a Hermite series to a polynomial. Convert an array representing the coefficients of a Hermite series, ordered from lowest degree to highest, to an array of the coefficients of the equivalent polynomial (relative to the “standard” basis) ordered from lowest to highest degree. Parameters: **c** array_like 1-D array containing the Hermite series coefficients, ordered from lowest order term to highest. Returns: **pol** ndarray 1-D array containing the coefficients of the equivalent polynomial (relative to the “standard” basis) ordered from lowest order term to highest. See also [`poly2herm`](numpy.polynomial.hermite.poly2herm#numpy.polynomial.hermite.poly2herm "numpy.polynomial.hermite.poly2herm") #### Notes The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance. #### Examples >>> from numpy.polynomial.hermite import herm2poly >>> herm2poly([1., 2.75, 0.5, 0.375]) array([0., 1., 2., 3.]) # numpy.polynomial.hermite.hermadd polynomial.hermite.hermadd(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L314-L351) Add one Hermite series to another. Returns the sum of two Hermite series `c1` + `c2`. The arguments are sequences of coefficients ordered from lowest order term to highest, i.e., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. Parameters: **c1, c2** array_like 1-D arrays of Hermite series coefficients ordered from low to high. Returns: **out** ndarray Array representing the Hermite series of their sum. See also [`hermsub`](numpy.polynomial.hermite.hermsub#numpy.polynomial.hermite.hermsub "numpy.polynomial.hermite.hermsub"), [`hermmulx`](numpy.polynomial.hermite.hermmulx#numpy.polynomial.hermite.hermmulx "numpy.polynomial.hermite.hermmulx"), [`hermmul`](numpy.polynomial.hermite.hermmul#numpy.polynomial.hermite.hermmul "numpy.polynomial.hermite.hermmul"), [`hermdiv`](numpy.polynomial.hermite.hermdiv#numpy.polynomial.hermite.hermdiv "numpy.polynomial.hermite.hermdiv"), [`hermpow`](numpy.polynomial.hermite.hermpow#numpy.polynomial.hermite.hermpow "numpy.polynomial.hermite.hermpow") #### Notes Unlike multiplication, division, etc., the sum of two Hermite series is a Hermite series (without having to “reproject” the result onto the basis set) so addition, just like that of “standard” polynomials, is simply “component-wise.” #### Examples >>> from numpy.polynomial.hermite import hermadd >>> hermadd([1, 2, 3], [1, 2, 3, 4]) array([2., 4., 6., 4.]) # numpy.polynomial.hermite.hermcompanion polynomial.hermite.hermcompanion(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L1440-L1484) Return the scaled companion matrix of c. The basis polynomials are scaled so that the companion matrix is symmetric when `c` is a Hermite basis polynomial. 
This provides better eigenvalue estimates than the unscaled case and for basis polynomials the eigenvalues are guaranteed to be real if [`numpy.linalg.eigvalsh`](numpy.linalg.eigvalsh#numpy.linalg.eigvalsh "numpy.linalg.eigvalsh") is used to obtain them. Parameters: **c** array_like 1-D array of Hermite series coefficients ordered from low to high degree. Returns: **mat** ndarray Scaled companion matrix of dimensions (deg, deg). #### Examples >>> from numpy.polynomial.hermite import hermcompanion >>> hermcompanion([1, 0, 1]) array([[0. , 0.35355339], [0.70710678, 0. ]]) # numpy.polynomial.hermite.hermder polynomial.hermite.hermder(_c_ , _m =1_, _scl =1_, _axis =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L598-L676) Differentiate a Hermite series. Returns the Hermite series coefficients `c` differentiated `m` times along `axis`. At each iteration the result is multiplied by `scl` (the scaling factor is for use in a linear change of variable). The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series `1*H_0 + 2*H_1 + 3*H_2` while [[1,2],[1,2]] represents `1*H_0(x)*H_0(y) + 1*H_1(x)*H_0(y) + 2*H_0(x)*H_1(y) + 2*H_1(x)*H_1(y)` if axis=0 is `x` and axis=1 is `y`. Parameters: **c** array_like Array of Hermite series coefficients. If `c` is multidimensional the different axis correspond to different variables with the degree in each axis given by the corresponding index. **m** int, optional Number of derivatives taken, must be non-negative. (Default: 1) **scl** scalar, optional Each differentiation is multiplied by `scl`. The end result is multiplication by `scl**m`. This is for use in a linear change of variable. (Default: 1) **axis** int, optional Axis over which the derivative is taken. (Default: 0). Returns: **der** ndarray Hermite series of the derivative. 
See also [`hermint`](numpy.polynomial.hermite.hermint#numpy.polynomial.hermite.hermint "numpy.polynomial.hermite.hermint") #### Notes In general, the result of differentiating a Hermite series does not resemble the same operation on a power series. Thus the result of this function may be “unintuitive,” albeit correct; see Examples section below. #### Examples >>> from numpy.polynomial.hermite import hermder >>> hermder([ 1. , 0.5, 0.5, 0.5]) array([1., 2., 3.]) >>> hermder([-0.5, 1./2., 1./8., 1./12., 1./16.], m=2) array([1., 2., 3.]) # numpy.polynomial.hermite.hermdiv polynomial.hermite.hermdiv(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L513-L558) Divide one Hermite series by another. Returns the quotient-with-remainder of two Hermite series `c1` / `c2`. The arguments are sequences of coefficients from lowest order “term” to highest, e.g., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. Parameters: **c1, c2** array_like 1-D arrays of Hermite series coefficients ordered from low to high. Returns: **[quo, rem]** ndarrays Of Hermite series coefficients representing the quotient and remainder. See also [`hermadd`](numpy.polynomial.hermite.hermadd#numpy.polynomial.hermite.hermadd "numpy.polynomial.hermite.hermadd"), [`hermsub`](numpy.polynomial.hermite.hermsub#numpy.polynomial.hermite.hermsub "numpy.polynomial.hermite.hermsub"), [`hermmulx`](numpy.polynomial.hermite.hermmulx#numpy.polynomial.hermite.hermmulx "numpy.polynomial.hermite.hermmulx"), [`hermmul`](numpy.polynomial.hermite.hermmul#numpy.polynomial.hermite.hermmul "numpy.polynomial.hermite.hermmul"), [`hermpow`](numpy.polynomial.hermite.hermpow#numpy.polynomial.hermite.hermpow "numpy.polynomial.hermite.hermpow") #### Notes In general, the (polynomial) division of one Hermite series by another results in quotient and remainder terms that are not in the Hermite polynomial basis set. 
Thus, to express these results as a Hermite series, it is necessary to “reproject” the results onto the Hermite basis set, which may produce “unintuitive” (but correct) results; see Examples section below. #### Examples >>> from numpy.polynomial.hermite import hermdiv >>> hermdiv([ 52., 29., 52., 7., 6.], [0, 1, 2]) (array([1., 2., 3.]), array([0.])) >>> hermdiv([ 54., 31., 52., 7., 6.], [0, 1, 2]) (array([1., 2., 3.]), array([2., 2.])) >>> hermdiv([ 53., 30., 52., 7., 6.], [0, 1, 2]) (array([1., 2., 3.]), array([1., 1.])) # numpy.polynomial.hermite.hermdomain polynomial.hermite.hermdomain _= array([-1., 1.])_ An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(…)`) for instantiating an array. For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array. Parameters: **(for the __new__ method; see Notes below)** **shape** tuple of ints Shape of created array. **dtype** data-type, optional Any object that can be interpreted as a numpy data type. **buffer** object exposing buffer interface, optional Used to fill the array with data. **offset** int, optional Offset of array data in buffer. **strides** tuple of ints, optional Strides of data in memory. **order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also [`array`](numpy.array#numpy.array "numpy.array") Construct an array. 
[`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Create an array, each element of which is zero. [`empty`](numpy.empty#numpy.empty "numpy.empty") Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Create a data-type. [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray") An ndarray alias [generic](https://docs.python.org/3/glossary.html#term- generic-type "\(in Python v3.13\)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). #### Notes There are two modes of creating an array using `__new__`: 1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used. 2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted. No `__init__` method is needed because the array is fully initialized after the `__new__` method. #### Examples These examples illustrate the low-level [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray. First mode, `buffer` is None: >>> import numpy as np >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. skip first element array([2, 3]) Attributes: **T** ndarray Transpose of the array. **data** buffer The array’s elements, in memory. **dtype** dtype object Describes the format of the elements in the array. **flags** dict Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc. **flat** numpy.flatiter object Flattened version of the array as an iterator. 
The iterator allows assignments, e.g., `x.flat = 3` (See [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") for assignment examples; TODO). **imag** ndarray Imaginary part of the array. **real** ndarray Real part of the array. **size** int Number of elements in the array. **itemsize** int The memory use of each array element in bytes. **nbytes** int The total number of bytes required to store the array data, i.e., `itemsize * size`. **ndim** int The array’s number of dimensions. **shape** tuple of ints Shape of the array. **strides** tuple of ints The step-size required to move from one element to the next in memory. For example, a contiguous `(3, 4)` array of type `int16` in C-order has strides `(8, 2)`. This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (`2 * 4`). **ctypes** ctypes object Class containing properties of the array needed for interaction with ctypes. **base** ndarray If the array is a view into another array, that array is its `base` (unless that array is also a view). The `base` array is where the array data is actually stored. # numpy.polynomial.hermite.hermfit polynomial.hermite.hermfit(_x_ , _y_ , _deg_ , _rcond =None_, _full =False_, _w =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L1306-L1437) Least squares fit of Hermite series to data. Return the coefficients of a Hermite series of degree `deg` that is the least squares fit to the data values `y` given at points `x`. If `y` is 1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple fits are done, one for each column of `y`, and the resulting coefficients are stored in the corresponding columns of a 2-D return. The fitted polynomial(s) are in the form \\[p(x) = c_0 + c_1 * H_1(x) + ... + c_n * H_n(x),\\] where `n` is `deg`. 
Parameters: **x** array_like, shape (M,) x-coordinates of the M sample points `(x[i], y[i])`. **y** array_like, shape (M,) or (M, K) y-coordinates of the sample points. Several data sets of sample points sharing the same x-coordinates can be fitted at once by passing in a 2-D array that contains one dataset per column. **deg** int or 1-D array_like Degree(s) of the fitting polynomials. If `deg` is a single integer all terms up to and including the `deg`’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead. **rcond** float, optional Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is len(x)*eps, where eps is the relative precision of the float type, about 2e-16 in most cases. **full** bool, optional Switch determining nature of return value. When it is False (the default) just the coefficients are returned; when True, diagnostic information from the singular value decomposition is also returned. **w** array_like, shape (`M`,), optional Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None. Returns: **coef** ndarray, shape (deg + 1,) or (deg + 1, K) Hermite coefficients ordered from low to high. If `y` was 2-D, the coefficients for the data in column `k` of `y` are in column `k`. **[residuals, rank, singular_values, rcond]** list These values are only returned if `full == True` * residuals – sum of squared residuals of the least squares fit * rank – the numerical rank of the scaled Vandermonde matrix * singular_values – singular values of the scaled Vandermonde matrix * rcond – value of `rcond`.
For more details, see [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq"). Warns: RankWarning The rank of the coefficient matrix in the least-squares fit is deficient. The warning is only raised if `full == False`. The warnings can be turned off by >>> import warnings >>> warnings.simplefilter('ignore', np.exceptions.RankWarning) See also [`numpy.polynomial.chebyshev.chebfit`](numpy.polynomial.chebyshev.chebfit#numpy.polynomial.chebyshev.chebfit "numpy.polynomial.chebyshev.chebfit") [`numpy.polynomial.legendre.legfit`](numpy.polynomial.legendre.legfit#numpy.polynomial.legendre.legfit "numpy.polynomial.legendre.legfit") [`numpy.polynomial.laguerre.lagfit`](numpy.polynomial.laguerre.lagfit#numpy.polynomial.laguerre.lagfit "numpy.polynomial.laguerre.lagfit") [`numpy.polynomial.polynomial.polyfit`](numpy.polynomial.polynomial.polyfit#numpy.polynomial.polynomial.polyfit "numpy.polynomial.polynomial.polyfit") [`numpy.polynomial.hermite_e.hermefit`](numpy.polynomial.hermite_e.hermefit#numpy.polynomial.hermite_e.hermefit "numpy.polynomial.hermite_e.hermefit") [`hermval`](numpy.polynomial.hermite.hermval#numpy.polynomial.hermite.hermval "numpy.polynomial.hermite.hermval") Evaluates a Hermite series. [`hermvander`](numpy.polynomial.hermite.hermvander#numpy.polynomial.hermite.hermvander "numpy.polynomial.hermite.hermvander") Vandermonde matrix of Hermite series. [`hermweight`](numpy.polynomial.hermite.hermweight#numpy.polynomial.hermite.hermweight "numpy.polynomial.hermite.hermweight") Hermite weight function [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq") Computes a least-squares fit from the matrix. [`scipy.interpolate.UnivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html#scipy.interpolate.UnivariateSpline "\(in SciPy v1.14.1\)") Computes spline fits. 
#### Notes The solution is the coefficients of the Hermite series `p` that minimizes the sum of the weighted squared errors \\[E = \sum_j w_j^2 * |y_j - p(x_j)|^2,\\] where the \\(w_j\\) are the weights. This problem is solved by setting up the (typically) overdetermined matrix equation \\[V(x) * c = w * y,\\] where `V` is the weighted pseudo Vandermonde matrix of `x`, `c` are the coefficients to be solved for, `w` are the weights, `y` are the observed values. This equation is then solved using the singular value decomposition of `V`. If some of the singular values of `V` are so small that they are neglected, then a [`RankWarning`](numpy.exceptions.rankwarning#numpy.exceptions.RankWarning "numpy.exceptions.RankWarning") will be issued. This means that the coefficient values may be poorly determined. Using a lower order fit will usually get rid of the warning. The `rcond` parameter can also be set to a value smaller than its default, but the resulting fit may be spurious and have large contributions from roundoff error. Fits using Hermite series are probably most useful when the data can be approximated by `sqrt(w(x)) * p(x)`, where `w(x)` is the Hermite weight. In that case the weight `sqrt(w(x[i]))` should be used together with data values `y[i]/sqrt(w(x[i]))`. The weight function is available as [`hermweight`](numpy.polynomial.hermite.hermweight#numpy.polynomial.hermite.hermweight "numpy.polynomial.hermite.hermweight"). 
#### References [1] Wikipedia, “Curve fitting”, #### Examples >>> import numpy as np >>> from numpy.polynomial.hermite import hermfit, hermval >>> x = np.linspace(-10, 10) >>> rng = np.random.default_rng() >>> err = rng.normal(scale=1./10, size=len(x)) >>> y = hermval(x, [1, 2, 3]) + err >>> hermfit(x, y, 2) array([1.02294967, 2.00016403, 2.99994614]) # may vary # numpy.polynomial.hermite.hermfromroots polynomial.hermite.hermfromroots(_roots_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L258-L311) Generate a Hermite series with given roots. The function returns the coefficients of the polynomial \\[p(x) = (x - r_0) * (x - r_1) * ... * (x - r_n),\\] in Hermite form, where the \\(r_n\\) are the roots specified in [`roots`](numpy.roots#numpy.roots "numpy.roots"). If a zero has multiplicity n, then it must appear in [`roots`](numpy.roots#numpy.roots "numpy.roots") n times. For instance, if 2 is a root of multiplicity three and 3 is a root of multiplicity 2, then [`roots`](numpy.roots#numpy.roots "numpy.roots") looks something like [2, 2, 2, 3, 3]. The roots can appear in any order. If the returned coefficients are `c`, then \\[p(x) = c_0 + c_1 * H_1(x) + ... + c_n * H_n(x)\\] The coefficient of the last term is not generally 1 for monic polynomials in Hermite form. Parameters: **roots** array_like Sequence containing the roots. Returns: **out** ndarray 1-D array of coefficients. If all roots are real then `out` is a real array, if some of the roots are complex, then `out` is complex even if all the coefficients in the result are real (see Examples below). 
See also [`numpy.polynomial.polynomial.polyfromroots`](numpy.polynomial.polynomial.polyfromroots#numpy.polynomial.polynomial.polyfromroots "numpy.polynomial.polynomial.polyfromroots") [`numpy.polynomial.legendre.legfromroots`](numpy.polynomial.legendre.legfromroots#numpy.polynomial.legendre.legfromroots "numpy.polynomial.legendre.legfromroots") [`numpy.polynomial.laguerre.lagfromroots`](numpy.polynomial.laguerre.lagfromroots#numpy.polynomial.laguerre.lagfromroots "numpy.polynomial.laguerre.lagfromroots") [`numpy.polynomial.chebyshev.chebfromroots`](numpy.polynomial.chebyshev.chebfromroots#numpy.polynomial.chebyshev.chebfromroots "numpy.polynomial.chebyshev.chebfromroots") [`numpy.polynomial.hermite_e.hermefromroots`](numpy.polynomial.hermite_e.hermefromroots#numpy.polynomial.hermite_e.hermefromroots "numpy.polynomial.hermite_e.hermefromroots") #### Examples >>> from numpy.polynomial.hermite import hermfromroots, hermval >>> coef = hermfromroots((-1, 0, 1)) >>> hermval((-1, 0, 1), coef) array([0., 0., 0.]) >>> coef = hermfromroots((-1j, 1j)) >>> hermval((-1j, 1j), coef) array([0.+0.j, 0.+0.j]) # numpy.polynomial.hermite.hermgauss polynomial.hermite.hermgauss(_deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L1592-L1659) Gauss-Hermite quadrature. Computes the sample points and weights for Gauss-Hermite quadrature. These sample points and weights will correctly integrate polynomials of degree \\(2*deg - 1\\) or less over the interval \\([-\inf, \inf]\\) with the weight function \\(f(x) = \exp(-x^2)\\). Parameters: **deg** int Number of sample points and weights. It must be >= 1. Returns: **x** ndarray 1-D ndarray containing the sample points. **y** ndarray 1-D ndarray containing the weights. #### Notes The results have only been tested up to degree 100, higher degrees may be problematic. 
The weights are determined by using the fact that \\[w_k = c / (H'_n(x_k) * H_{n-1}(x_k))\\] where \\(c\\) is a constant independent of \\(k\\) and \\(x_k\\) is the k’th root of \\(H_n\\), and then scaling the results to get the right value when integrating 1. #### Examples >>> from numpy.polynomial.hermite import hermgauss >>> hermgauss(2) (array([-0.70710678, 0.70710678]), array([0.88622693, 0.88622693])) # numpy.polynomial.hermite.hermgrid2d polynomial.hermite.hermgrid2d(_x_ , _y_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L946-L1002) Evaluate a 2-D Hermite series on the Cartesian product of x and y. This function returns the values: \\[p(a,b) = \sum_{i,j} c_{i,j} * H_i(a) * H_j(b)\\] where the points `(a, b)` consist of all pairs formed by taking `a` from `x` and `b` from `y`. The resulting points form a grid with `x` in the first dimension and `y` in the second. The parameters `x` and `y` are converted to arrays only if they are tuples or lists, otherwise they are treated as scalars. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than two dimensions, ones are implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape + y.shape. Parameters: **x, y** array_like, compatible objects The two dimensional series is evaluated at the points in the Cartesian product of `x` and `y`. If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficients for terms of degree i,j are contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients.
Returns: **values** ndarray, compatible object The values of the two dimensional polynomial at points in the Cartesian product of `x` and `y`. See also [`hermval`](numpy.polynomial.hermite.hermval#numpy.polynomial.hermite.hermval "numpy.polynomial.hermite.hermval"), [`hermval2d`](numpy.polynomial.hermite.hermval2d#numpy.polynomial.hermite.hermval2d "numpy.polynomial.hermite.hermval2d"), [`hermval3d`](numpy.polynomial.hermite.hermval3d#numpy.polynomial.hermite.hermval3d "numpy.polynomial.hermite.hermval3d"), [`hermgrid3d`](numpy.polynomial.hermite.hermgrid3d#numpy.polynomial.hermite.hermgrid3d "numpy.polynomial.hermite.hermgrid3d") #### Examples >>> from numpy.polynomial.hermite import hermgrid2d >>> x = [1, 2, 3] >>> y = [4, 5] >>> c = [[1, 2, 3], [4, 5, 6]] >>> hermgrid2d(x, y, c) array([[1035., 1599.], [1867., 2883.], [2699., 4167.]]) # numpy.polynomial.hermite.hermgrid3d polynomial.hermite.hermgrid3d(_x_ , _y_ , _z_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L1061-L1122) Evaluate a 3-D Hermite series on the Cartesian product of x, y, and z. This function returns the values: \\[p(a,b,c) = \sum_{i,j,k} c_{i,j,k} * H_i(a) * H_j(b) * H_k(c)\\] where the points `(a, b, c)` consist of all triples formed by taking `a` from `x`, `b` from `y`, and `c` from `z`. The resulting points form a grid with `x` in the first dimension, `y` in the second, and `z` in the third. The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists, otherwise they are treated as scalars. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than three dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape + y.shape + z.shape.
Parameters: **x, y, z** array_like, compatible objects The three dimensional series is evaluated at the points in the Cartesian product of `x`, `y`, and `z`. If `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficients for terms of degree i,j,k are contained in `c[i,j,k]`. If `c` has dimension greater than three the remaining indices enumerate multiple sets of coefficients. Returns: **values** ndarray, compatible object The values of the three dimensional polynomial at points in the Cartesian product of `x`, `y`, and `z`. See also [`hermval`](numpy.polynomial.hermite.hermval#numpy.polynomial.hermite.hermval "numpy.polynomial.hermite.hermval"), [`hermval2d`](numpy.polynomial.hermite.hermval2d#numpy.polynomial.hermite.hermval2d "numpy.polynomial.hermite.hermval2d"), [`hermgrid2d`](numpy.polynomial.hermite.hermgrid2d#numpy.polynomial.hermite.hermgrid2d "numpy.polynomial.hermite.hermgrid2d"), [`hermval3d`](numpy.polynomial.hermite.hermval3d#numpy.polynomial.hermite.hermval3d "numpy.polynomial.hermite.hermval3d") #### Examples >>> from numpy.polynomial.hermite import hermgrid3d >>> x = [1, 2] >>> y = [4, 5] >>> z = [6, 7] >>> c = [[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]] >>> hermgrid3d(x, y, z, c) array([[[ 40077., 54117.], [ 49293., 66561.]], [[ 72375., 97719.], [ 88975., 120131.]]]) # numpy.polynomial.hermite.hermint polynomial.hermite.hermint(_c_ , _m =1_, _k =[]_, _lbnd =0_, _scl =1_, _axis =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L679-L796) Integrate a Hermite series. Returns the Hermite series coefficients `c` integrated `m` times from `lbnd` along `axis`. At each iteration the resulting series is **multiplied** by `scl` and an integration constant, `k`, is added. The scaling factor is for use in a linear change of variable.
(“Buyer beware”: note that, depending on what one is doing, one may want `scl` to be the reciprocal of what one might expect; for more information, see the Notes section below.) The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series `H_0 + 2*H_1 + 3*H_2` while [[1,2],[1,2]] represents `1*H_0(x)*H_0(y) + 1*H_1(x)*H_0(y) + 2*H_0(x)*H_1(y) + 2*H_1(x)*H_1(y)` if axis=0 is `x` and axis=1 is `y`. Parameters: **c** array_like Array of Hermite series coefficients. If c is multidimensional the different axis correspond to different variables with the degree in each axis given by the corresponding index. **m** int, optional Order of integration, must be positive. (Default: 1) **k**{[], list, scalar}, optional Integration constant(s). The value of the first integral at `lbnd` is the first value in the list, the value of the second integral at `lbnd` is the second value, etc. If `k == []` (the default), all constants are set to zero. If `m == 1`, a single scalar can be given instead of a list. **lbnd** scalar, optional The lower bound of the integral. (Default: 0) **scl** scalar, optional Following each integration the result is _multiplied_ by `scl` before the integration constant is added. (Default: 1) **axis** int, optional Axis over which the integral is taken. (Default: 0). Returns: **S** ndarray Hermite series coefficients of the integral. Raises: ValueError If `m < 0`, `len(k) > m`, `np.ndim(lbnd) != 0`, or `np.ndim(scl) != 0`. See also [`hermder`](numpy.polynomial.hermite.hermder#numpy.polynomial.hermite.hermder "numpy.polynomial.hermite.hermder") #### Notes Note that the result of each integration is _multiplied_ by `scl`. Why is this important to note? Say one is making a linear change of variable \\(u = ax + b\\) in an integral relative to `x`. Then \\(dx = du/a\\), so one will need to set `scl` equal to \\(1/a\\) \- perhaps not what one would have first thought. 
Also note that, in general, the result of integrating a Hermite series needs to be “reprojected” onto the Hermite basis set. Thus, typically, the result of this function is “unintuitive,” albeit correct; see Examples section below. #### Examples >>> from numpy.polynomial.hermite import hermint >>> hermint([1,2,3]) # integrate once, value 0 at 0. array([1. , 0.5, 0.5, 0.5]) >>> hermint([1,2,3], m=2) # integrate twice, value & deriv 0 at 0 array([-0.5 , 0.5 , 0.125 , 0.08333333, 0.0625 ]) # may vary >>> hermint([1,2,3], k=1) # integrate once, value 1 at 0. array([2. , 0.5, 0.5, 0.5]) >>> hermint([1,2,3], lbnd=-1) # integrate once, value 0 at -1 array([-2. , 0.5, 0.5, 0.5]) >>> hermint([1,2,3], m=2, k=[1,2], lbnd=-1) array([ 1.66666667, -0.5 , 0.125 , 0.08333333, 0.0625 ]) # may vary # numpy.polynomial.hermite.Hermite.__call__ method polynomial.hermite.Hermite.__call__(_arg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L513-L515) Call self as a function. # numpy.polynomial.hermite.Hermite.basis method _classmethod_ polynomial.hermite.Hermite.basis(_deg_ , _domain =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1120-L1157) Series basis polynomial of degree `deg`. Returns the series representing the basis polynomial of degree `deg`. Parameters: **deg** int Degree of the basis polynomial for the series. Must be >= 0. **domain**{None, array_like}, optional If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None. **window**{None, array_like}, optional If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None. **symbol** str, optional Symbol representing the independent variable. Default is ‘x’.
Returns: **new_series** series A series with the coefficient of the `deg` term set to one and all others zero. # numpy.polynomial.hermite.Hermite.cast method _classmethod_ polynomial.hermite.Hermite.cast(_series_ , _domain =None_, _window =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1159-L1197) Convert series to series of this class. The `series` is expected to be an instance of some polynomial series of one of the types supported by the numpy.polynomial module, but could be some other class that supports the convert method. Parameters: **series** series The series instance to be converted. **domain**{None, array_like}, optional If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None. **window**{None, array_like}, optional If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None. Returns: **new_series** series A series of the same kind as the calling class and equal to `series` when evaluated. See also [`convert`](numpy.polynomial.hermite.hermite.convert#numpy.polynomial.hermite.Hermite.convert "numpy.polynomial.hermite.Hermite.convert") similar instance method # numpy.polynomial.hermite.Hermite.convert method polynomial.hermite.Hermite.convert(_domain =None_, _kind =None_, _window =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L785-L820) Convert series to a different kind and/or domain and/or window. Parameters: **domain** array_like, optional The domain of the converted series. If the value is None, the default domain of `kind` is used. **kind** class, optional The polynomial series type class to which the current instance should be converted. If kind is None, then the class of the current instance is used.
**window** array_like, optional The window of the converted series. If the value is None, the default window of `kind` is used. Returns: **new_series** series The returned class can be of different type than the current instance and/or have a different domain and/or different window. #### Notes Conversion between domains and class types can result in numerically ill defined series. # numpy.polynomial.hermite.Hermite.copy method polynomial.hermite.Hermite.copy()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L666-L675) Return a copy. Returns: **new_series** series Copy of self. # numpy.polynomial.hermite.Hermite.cutdeg method polynomial.hermite.Hermite.cutdeg(_deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L710-L731) Truncate series to the given degree. Reduce the degree of the series to `deg` by discarding the high order terms. If `deg` is greater than the current degree a copy of the current series is returned. This can be useful in least squares where the coefficients of the high degree terms may be very small. Parameters: **deg** non-negative int The series is reduced to degree `deg` by discarding the high order terms. The value of `deg` must be a non-negative integer. Returns: **new_series** series New instance of series with reduced degree. # numpy.polynomial.hermite.Hermite.degree method polynomial.hermite.Hermite.degree()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L677-L708) The degree of the series. Returns: **degree** int Degree of the series, one less than the number of coefficients. #### Examples Create a polynomial object for `1 + 7*x + 4*x**2`: >>> poly = np.polynomial.Polynomial([1, 7, 4]) >>> print(poly) 1.0 + 7.0·x + 4.0·x² >>> poly.degree() 2 Note that this method does not check for non-zero coefficients. 
You must trim the polynomial to remove any trailing zeroes: >>> poly = np.polynomial.Polynomial([1, 7, 0]) >>> print(poly) 1.0 + 7.0·x + 0.0·x² >>> poly.degree() 2 >>> poly.trim().degree() 1 # numpy.polynomial.hermite.Hermite.deriv method polynomial.hermite.Hermite.deriv(_m =1_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L884-L904) Differentiate. Return a series instance that is the derivative of the current series. Parameters: **m** non-negative int Find the derivative of order `m`. Returns: **new_series** series A new series representing the derivative. The domain is the same as the domain of the differentiated series. # numpy.polynomial.hermite.Hermite.domain attribute polynomial.hermite.Hermite.domain _= array([-1., 1.])_ # numpy.polynomial.hermite.Hermite.fit method _classmethod_ polynomial.hermite.Hermite.fit(_x_ , _y_ , _deg_ , _domain =None_, _rcond =None_, _full =False_, _w =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L951-L1040) Least squares fit to data. Return a series instance that is the least squares fit to the data `y` sampled at `x`. The domain of the returned instance can be specified and this will often result in a superior fit with less chance of ill conditioning. Parameters: **x** array_like, shape (M,) x-coordinates of the M sample points `(x[i], y[i])`. **y** array_like, shape (M,) y-coordinates of the M sample points `(x[i], y[i])`. **deg** int or 1-D array_like Degree(s) of the fitting polynomials. If `deg` is a single integer all terms up to and including the `deg`’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead. **domain**{None, [beg, end], []}, optional Domain to use for the returned series. If `None`, then a minimal domain that covers the points `x` is chosen. If `[]` the class domain is used.
The default value was the class domain in NumPy 1.4 and `None` in later versions. The `[]` option was added in numpy 1.5.0. **rcond** float, optional Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is `len(x)*eps`, where eps is the relative precision of the float type, about 2e-16 in most cases. **full** bool, optional Switch determining nature of return value. When it is False (the default) just the coefficients are returned; when True, diagnostic information from the singular value decomposition is also returned. **w** array_like, shape (M,), optional Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None. **window**{[beg, end]}, optional Window to use for the returned series. The default value is the default class domain. **symbol** str, optional Symbol representing the independent variable. Default is ‘x’. Returns: **new_series** series A series that represents the least squares fit to the data and has the domain and window specified in the call. If the coefficients for the unscaled and unshifted basis polynomials are of interest, do `new_series.convert().coef`. **[resid, rank, sv, rcond]** list These values are only returned if `full == True` * resid – sum of squared residuals of the least squares fit * rank – the numerical rank of the scaled Vandermonde matrix * sv – singular values of the scaled Vandermonde matrix * rcond – value of `rcond`. For more details, see [`linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq").
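As a quick sketch of how `fit` is typically used (the data here is made up for illustration), a degree-2 least squares fit to noisy samples of a quadratic recovers its Hermite-basis coefficients:

```python
import numpy as np
from numpy.polynomial.hermite import Hermite

# Illustrative data (values are arbitrary): noisy samples of 1 + 2x + 3x**2.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 1 + 2 * x + 3 * x**2 + 0.01 * rng.standard_normal(50)

# Degree-2 least squares fit; with domain=None a minimal domain covering x
# is chosen, here [-1, 1], which coincides with the default window.
series = Hermite.fit(x, y, deg=2)

# Coefficients in the unscaled Hermite basis. Since H_0 = 1, H_1 = 2x and
# H_2 = 4x**2 - 2, the exact expansion of 1 + 2x + 3x**2 is
# 2.5*H_0 + 1.0*H_1 + 0.75*H_2, so this prints approximately [2.5, 1., 0.75].
print(series.convert().coef)
```

As the Returns section notes, `new_series.convert().coef` is the way to read off the coefficients of the unscaled, unshifted basis polynomials.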
# numpy.polynomial.hermite.Hermite.fromroots method _classmethod_ polynomial.hermite.Hermite.fromroots(_roots_ , _domain =[]_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1042-L1083) Return series instance that has the specified roots. Returns a series representing the product `(x - r[0])*(x - r[1])*...*(x - r[n-1])`, where `r` is a list of roots. Parameters: **roots** array_like List of roots. **domain**{[], None, array_like}, optional Domain for the resulting series. If None the domain is the interval from the smallest root to the largest. If [] the domain is the class domain. The default is []. **window**{None, array_like}, optional Window for the returned series. If None the class window is used. The default is None. **symbol** str, optional Symbol representing the independent variable. Default is ‘x’. Returns: **new_series** series Series with the specified roots. # numpy.polynomial.hermite.Hermite.has_samecoef method polynomial.hermite.Hermite.has_samecoef(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L188-L207) Check if coefficients match. Parameters: **other** class instance The other class must have the `coef` attribute. Returns: **bool** boolean True if the coefficients are the same, False otherwise. # numpy.polynomial.hermite.Hermite.has_samedomain method polynomial.hermite.Hermite.has_samedomain(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L209-L223) Check if domains match. Parameters: **other** class instance The other class must have the `domain` attribute. Returns: **bool** boolean True if the domains are the same, False otherwise. # numpy.polynomial.hermite.Hermite.has_sametype method polynomial.hermite.Hermite.has_sametype(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L241-L255) Check if types match. 
Parameters: **other** object Class instance. Returns: **bool** boolean True if other is the same class as self # numpy.polynomial.hermite.Hermite.has_samewindow method polynomial.hermite.Hermite.has_samewindow(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L225-L239) Check if windows match. Parameters: **other** class instance The other class must have the `window` attribute. Returns: **bool** boolean True if the windows are the same, False otherwise. # numpy.polynomial.hermite.Hermite _class_ numpy.polynomial.hermite.Hermite(_coef_ , _domain =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L1697-L1740) A Hermite series class. The Hermite class provides the standard Python numerical methods ‘+’, ‘-’, ‘*’, ‘//’, ‘%’, ‘divmod’, ‘**’, and ‘()’ as well as the attributes and methods listed below. Parameters: **coef** array_like Hermite coefficients in order of increasing degree, i.e., `(1, 2, 3)` gives `1*H_0(x) + 2*H_1(x) + 3*H_2(x)`. **domain**(2,) array_like, optional Domain to use. The interval `[domain[0], domain[1]]` is mapped to the interval `[window[0], window[1]]` by shifting and scaling. The default value is [-1., 1.]. **window**(2,) array_like, optional Window, see [`domain`](numpy.polynomial.hermite.hermite.domain#numpy.polynomial.hermite.Hermite.domain "numpy.polynomial.hermite.Hermite.domain") for its use. The default value is [-1., 1.]. **symbol** str, optional Symbol used to represent the independent variable in string representations of the polynomial expression, e.g. for printing. The symbol must be a valid Python identifier. Default value is ‘x’. New in version 1.24. Attributes: **symbol** #### Methods [`__call__`](numpy.polynomial.hermite.hermite.__call__#numpy.polynomial.hermite.Hermite.__call__ "numpy.polynomial.hermite.Hermite.__call__")(arg) | Call self as a function.
---|--- [`basis`](numpy.polynomial.hermite.hermite.basis#numpy.polynomial.hermite.Hermite.basis "numpy.polynomial.hermite.Hermite.basis")(deg[, domain, window, symbol]) | Series basis polynomial of degree `deg`. [`cast`](numpy.polynomial.hermite.hermite.cast#numpy.polynomial.hermite.Hermite.cast "numpy.polynomial.hermite.Hermite.cast")(series[, domain, window]) | Convert series to series of this class. [`convert`](numpy.polynomial.hermite.hermite.convert#numpy.polynomial.hermite.Hermite.convert "numpy.polynomial.hermite.Hermite.convert")([domain, kind, window]) | Convert series to a different kind and/or domain and/or window. [`copy`](numpy.polynomial.hermite.hermite.copy#numpy.polynomial.hermite.Hermite.copy "numpy.polynomial.hermite.Hermite.copy")() | Return a copy. [`cutdeg`](numpy.polynomial.hermite.hermite.cutdeg#numpy.polynomial.hermite.Hermite.cutdeg "numpy.polynomial.hermite.Hermite.cutdeg")(deg) | Truncate series to the given degree. [`degree`](numpy.polynomial.hermite.hermite.degree#numpy.polynomial.hermite.Hermite.degree "numpy.polynomial.hermite.Hermite.degree")() | The degree of the series. [`deriv`](numpy.polynomial.hermite.hermite.deriv#numpy.polynomial.hermite.Hermite.deriv "numpy.polynomial.hermite.Hermite.deriv")([m]) | Differentiate. [`fit`](numpy.polynomial.hermite.hermite.fit#numpy.polynomial.hermite.Hermite.fit "numpy.polynomial.hermite.Hermite.fit")(x, y, deg[, domain, rcond, full, w, ...]) | Least squares fit to data. [`fromroots`](numpy.polynomial.hermite.hermite.fromroots#numpy.polynomial.hermite.Hermite.fromroots "numpy.polynomial.hermite.Hermite.fromroots")(roots[, domain, window, symbol]) | Return series instance that has the specified roots. [`has_samecoef`](numpy.polynomial.hermite.hermite.has_samecoef#numpy.polynomial.hermite.Hermite.has_samecoef "numpy.polynomial.hermite.Hermite.has_samecoef")(other) | Check if coefficients match. 
[`has_samedomain`](numpy.polynomial.hermite.hermite.has_samedomain#numpy.polynomial.hermite.Hermite.has_samedomain "numpy.polynomial.hermite.Hermite.has_samedomain")(other) | Check if domains match. [`has_sametype`](numpy.polynomial.hermite.hermite.has_sametype#numpy.polynomial.hermite.Hermite.has_sametype "numpy.polynomial.hermite.Hermite.has_sametype")(other) | Check if types match. [`has_samewindow`](numpy.polynomial.hermite.hermite.has_samewindow#numpy.polynomial.hermite.Hermite.has_samewindow "numpy.polynomial.hermite.Hermite.has_samewindow")(other) | Check if windows match. [`identity`](numpy.polynomial.hermite.hermite.identity#numpy.polynomial.hermite.Hermite.identity "numpy.polynomial.hermite.Hermite.identity")([domain, window, symbol]) | Identity function. [`integ`](numpy.polynomial.hermite.hermite.integ#numpy.polynomial.hermite.Hermite.integ "numpy.polynomial.hermite.Hermite.integ")([m, k, lbnd]) | Integrate. [`linspace`](numpy.polynomial.hermite.hermite.linspace#numpy.polynomial.hermite.Hermite.linspace "numpy.polynomial.hermite.Hermite.linspace")([n, domain]) | Return x, y values at equally spaced points in domain. [`mapparms`](numpy.polynomial.hermite.hermite.mapparms#numpy.polynomial.hermite.Hermite.mapparms "numpy.polynomial.hermite.Hermite.mapparms")() | Return the mapping parameters. [`roots`](numpy.polynomial.hermite.hermite.roots#numpy.polynomial.hermite.Hermite.roots "numpy.polynomial.hermite.Hermite.roots")() | Return the roots of the series polynomial. [`trim`](numpy.polynomial.hermite.hermite.trim#numpy.polynomial.hermite.Hermite.trim "numpy.polynomial.hermite.Hermite.trim")([tol]) | Remove trailing coefficients [`truncate`](numpy.polynomial.hermite.hermite.truncate#numpy.polynomial.hermite.Hermite.truncate "numpy.polynomial.hermite.Hermite.truncate")(size) | Truncate series to length `size`. 
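To make the methods listed above concrete, here is a minimal sketch of constructing and using a `Hermite` instance (the coefficient values are arbitrary):

```python
import numpy as np
from numpy.polynomial.hermite import Hermite

# 1*H_0 + 2*H_1 + 3*H_2. With the default domain and window ([-1, 1] both)
# the domain-to-window map is the identity, so
# p(x) = 1 + 2*(2x) + 3*(4x**2 - 2) = -5 + 4x + 12x**2.
p = Hermite([1, 2, 3])
print(p(0))           # H_0(0)=1, H_1(0)=0, H_2(0)=-2, so 1 + 0 - 6 = -5.0

# The standard arithmetic operators return new Hermite instances.
q = p + Hermite([1, 1])
print(q.coef)         # [2., 3., 3.]

# deriv() works in the Hermite basis: d/dx p = 4 + 24x = 4*H_0 + 12*H_1.
print(p.deriv().coef)  # [ 4., 12.]
```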
# numpy.polynomial.hermite.Hermite.identity method _classmethod_ polynomial.hermite.Hermite.identity(_domain =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1085-L1118) Identity function. If `p` is the returned series, then `p(x) == x` for all values of x. Parameters: **domain**{None, array_like}, optional If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None. **window**{None, array_like}, optional If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None. **symbol** str, optional Symbol representing the independent variable. Default is ‘x’. Returns: **new_series** series Series representing the identity. # numpy.polynomial.hermite.Hermite.integ method polynomial.hermite.Hermite.integ(_m =1_, _k =[]_, _lbnd =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L851-L882) Integrate. Return a series instance that is the definite integral of the current series. Parameters: **m** non-negative int The number of integrations to perform. **k** array_like Integration constants. The first constant is applied to the first integration, the second to the second, and so on. The list of values must be less than or equal to `m` in length and any missing values are set to zero. **lbnd** Scalar The lower bound of the definite integral. Returns: **new_series** series A new series representing the integral. The domain is the same as the domain of the integrated series. # numpy.polynomial.hermite.Hermite.linspace method polynomial.hermite.Hermite.linspace(_n =100_, _domain =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L921-L949) Return x, y values at equally spaced points in domain.
Returns the x, y values at `n` linearly spaced points across the domain. Here y is the value of the polynomial at the points x. By default the domain is the same as that of the series instance. This method is intended mostly as a plotting aid. Parameters: **n** int, optional Number of point pairs to return. The default value is 100. **domain**{None, array_like}, optional If not None, the specified domain is used instead of that of the calling instance. It should be of the form `[beg,end]`. The default is None, in which case the class domain is used. Returns: **x, y** ndarray x is equal to linspace(self.domain[0], self.domain[1], n) and y is the series evaluated at each element of x. # numpy.polynomial.hermite.Hermite.mapparms method polynomial.hermite.Hermite.mapparms()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L822-L849) Return the mapping parameters. The returned values define a linear map `off + scl*x` that is applied to the input arguments before the series is evaluated. The map depends on the `domain` and `window`; if the current `domain` is equal to the `window` the resulting map is the identity. If the coefficients of the series instance are to be used by themselves outside this class, then the linear function must be substituted for the `x` in the standard representation of the base polynomials. Returns: **off, scl** float or complex The mapping function is defined by `off + scl*x`. #### Notes If the current domain is the interval `[l1, r1]` and the window is `[l2, r2]`, then the linear mapping function `L` is defined by the equations: L(l1) = l2 L(r1) = r2 # numpy.polynomial.hermite.Hermite.roots method polynomial.hermite.Hermite.roots()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L906-L919) Return the roots of the series polynomial. Compute the roots for the series.
Note that the accuracy of the roots decreases the further outside the [`domain`](numpy.polynomial.hermite.hermite.domain#numpy.polynomial.hermite.Hermite.domain "numpy.polynomial.hermite.Hermite.domain") they lie. Returns: **roots** ndarray Array containing the roots of the series. # numpy.polynomial.hermite.Hermite.trim method polynomial.hermite.Hermite.trim(_tol =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L733-L754) Remove trailing coefficients Remove trailing coefficients until a coefficient is reached whose absolute value is greater than `tol` or the beginning of the series is reached. If all the coefficients would be removed the series is set to `[0]`. A new series instance is returned with the new coefficients. The current instance remains unchanged. Parameters: **tol** non-negative number. All trailing coefficients less than `tol` will be removed. Returns: **new_series** series New instance of series with trimmed coefficients. # numpy.polynomial.hermite.Hermite.truncate method polynomial.hermite.Hermite.truncate(_size_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L756-L783) Truncate series to length [`size`](numpy.size#numpy.size "numpy.size"). Reduce the series to length [`size`](numpy.size#numpy.size "numpy.size") by discarding the high degree terms. The value of [`size`](numpy.size#numpy.size "numpy.size") must be a positive integer. This can be useful in least squares where the coefficients of the high degree terms may be very small. Parameters: **size** positive int The series is reduced to length [`size`](numpy.size#numpy.size "numpy.size") by discarding the high degree terms. The value of [`size`](numpy.size#numpy.size "numpy.size") must be a positive integer. Returns: **new_series** series New instance of series with truncated coefficients.
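A short sketch contrasting the three pruning methods (`trim` keys on coefficient magnitude, `truncate` on length, `cutdeg` on degree); the coefficient values are chosen arbitrarily:

```python
import numpy as np
from numpy.polynomial.hermite import Hermite

p = Hermite([1, 2, 0, 1e-9])

print(p.trim(tol=1e-6).coef)  # drops trailing coefficients <= tol: [1., 2.]
print(p.truncate(2).coef)     # keeps the first 2 coefficients:     [1., 2.]
print(p.cutdeg(1).coef)       # keeps terms of degree <= 1:         [1., 2.]

# All three return a new instance; the original is never modified.
print(p.coef)                 # still 4 coefficients
```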
# numpy.polynomial.hermite.hermline polynomial.hermite.hermline(_off_ , _scl_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L218-L255) Hermite series whose graph is a straight line. Parameters: **off, scl** scalars The specified line is given by `off + scl*x`. Returns: **y** ndarray This module’s representation of the Hermite series for `off + scl*x`. See also [`numpy.polynomial.polynomial.polyline`](numpy.polynomial.polynomial.polyline#numpy.polynomial.polynomial.polyline "numpy.polynomial.polynomial.polyline") [`numpy.polynomial.chebyshev.chebline`](numpy.polynomial.chebyshev.chebline#numpy.polynomial.chebyshev.chebline "numpy.polynomial.chebyshev.chebline") [`numpy.polynomial.legendre.legline`](numpy.polynomial.legendre.legline#numpy.polynomial.legendre.legline "numpy.polynomial.legendre.legline") [`numpy.polynomial.laguerre.lagline`](numpy.polynomial.laguerre.lagline#numpy.polynomial.laguerre.lagline "numpy.polynomial.laguerre.lagline") [`numpy.polynomial.hermite_e.hermeline`](numpy.polynomial.hermite_e.hermeline#numpy.polynomial.hermite_e.hermeline "numpy.polynomial.hermite_e.hermeline") #### Examples >>> from numpy.polynomial.hermite import hermline, hermval >>> hermval(0,hermline(3, 2)) 3.0 >>> hermval(1,hermline(3, 2)) 5.0 # numpy.polynomial.hermite.hermmul polynomial.hermite.hermmul(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L447-L510) Multiply one Hermite series by another. Returns the product of two Hermite series `c1` * `c2`. The arguments are sequences of coefficients, from lowest order “term” to highest, e.g., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. Parameters: **c1, c2** array_like 1-D arrays of Hermite series coefficients ordered from low to high. Returns: **out** ndarray Of Hermite series coefficients representing their product. 
See also [`hermadd`](numpy.polynomial.hermite.hermadd#numpy.polynomial.hermite.hermadd "numpy.polynomial.hermite.hermadd"), [`hermsub`](numpy.polynomial.hermite.hermsub#numpy.polynomial.hermite.hermsub "numpy.polynomial.hermite.hermsub"), [`hermmulx`](numpy.polynomial.hermite.hermmulx#numpy.polynomial.hermite.hermmulx "numpy.polynomial.hermite.hermmulx"), [`hermdiv`](numpy.polynomial.hermite.hermdiv#numpy.polynomial.hermite.hermdiv "numpy.polynomial.hermite.hermdiv"), [`hermpow`](numpy.polynomial.hermite.hermpow#numpy.polynomial.hermite.hermpow "numpy.polynomial.hermite.hermpow") #### Notes In general, the (polynomial) product of two C-series results in terms that are not in the Hermite polynomial basis set. Thus, to express the product as a Hermite series, it is necessary to “reproject” the product onto said basis set, which may produce “unintuitive” (but correct) results; see Examples section below. #### Examples >>> from numpy.polynomial.hermite import hermmul >>> hermmul([1, 2, 3], [0, 1, 2]) array([52., 29., 52., 7., 6.]) # numpy.polynomial.hermite.hermmulx polynomial.hermite.hermmulx(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L394-L444) Multiply a Hermite series by x. Multiply the Hermite series `c` by x, where x is the independent variable. Parameters: **c** array_like 1-D array of Hermite series coefficients ordered from low to high. Returns: **out** ndarray Array representing the result of the multiplication. 
See also [`hermadd`](numpy.polynomial.hermite.hermadd#numpy.polynomial.hermite.hermadd "numpy.polynomial.hermite.hermadd"), [`hermsub`](numpy.polynomial.hermite.hermsub#numpy.polynomial.hermite.hermsub "numpy.polynomial.hermite.hermsub"), [`hermmul`](numpy.polynomial.hermite.hermmul#numpy.polynomial.hermite.hermmul "numpy.polynomial.hermite.hermmul"), [`hermdiv`](numpy.polynomial.hermite.hermdiv#numpy.polynomial.hermite.hermdiv "numpy.polynomial.hermite.hermdiv"), [`hermpow`](numpy.polynomial.hermite.hermpow#numpy.polynomial.hermite.hermpow "numpy.polynomial.hermite.hermpow") #### Notes The multiplication uses the recursion relationship for Hermite polynomials in the form \\[xP_i(x) = (P_{i + 1}(x)/2 + i*P_{i - 1}(x))\\] #### Examples >>> from numpy.polynomial.hermite import hermmulx >>> hermmulx([1, 2, 3]) array([2. , 6.5, 1. , 1.5]) # numpy.polynomial.hermite.hermone polynomial.hermite.hermone _= array([1])_ An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(…)`) for instantiating an array. For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array. Parameters: **(for the __new__ method; see Notes below)** **shape** tuple of ints Shape of created array. **dtype** data-type, optional Any object that can be interpreted as a numpy data type. **buffer** object exposing buffer interface, optional Used to fill the array with data. 
**offset** int, optional Offset of array data in buffer. **strides** tuple of ints, optional Strides of data in memory. **order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also [`array`](numpy.array#numpy.array "numpy.array") Construct an array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Create an array, each element of which is zero. [`empty`](numpy.empty#numpy.empty "numpy.empty") Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Create a data-type. [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray") An ndarray alias [generic](https://docs.python.org/3/glossary.html#term-generic-type "\(in Python v3.13\)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). #### Notes There are two modes of creating an array using `__new__`: 1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used. 2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted. No `__init__` method is needed because the array is fully initialized after the `__new__` method. #### Examples These examples illustrate the low-level [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray. First mode, `buffer` is None: >>> import numpy as np >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. skip first element array([2, 3]) Attributes: **T** ndarray Transpose of the array. **data** buffer The array’s elements, in memory. **dtype** dtype object Describes the format of the elements in the array.
**flags** dict Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc. **flat** numpy.flatiter object Flattened version of the array as an iterator. The iterator allows assignments, e.g., `x.flat = 3` (See [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") for assignment examples; TODO). **imag** ndarray Imaginary part of the array. **real** ndarray Real part of the array. **size** int Number of elements in the array. **itemsize** int The memory use of each array element in bytes. **nbytes** int The total number of bytes required to store the array data, i.e., `itemsize * size`. **ndim** int The array’s number of dimensions. **shape** tuple of ints Shape of the array. **strides** tuple of ints The step-size required to move from one element to the next in memory. For example, a contiguous `(3, 4)` array of type `int16` in C-order has strides `(8, 2)`. This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (`2 * 4`). **ctypes** ctypes object Class containing properties of the array needed for interaction with ctypes. **base** ndarray If the array is a view into another array, that array is its `base` (unless that array is also a view). The `base` array is where the array data is actually stored. # numpy.polynomial.hermite.hermpow polynomial.hermite.hermpow(_c_ , _pow_ , _maxpower =16_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L561-L595) Raise a Hermite series to a power. Returns the Hermite series `c` raised to the power [`pow`](numpy.pow#numpy.pow "numpy.pow"). The argument `c` is a sequence of coefficients ordered from low to high. i.e., [1,2,3] is the series `P_0 + 2*P_1 + 3*P_2.` Parameters: **c** array_like 1-D array of Hermite series coefficients ordered from low to high. 
**pow** integer Power to which the series will be raised **maxpower** integer, optional Maximum power allowed. This is mainly to limit growth of the series to unmanageable size. Default is 16 Returns: **coef** ndarray Hermite series of power. See also [`hermadd`](numpy.polynomial.hermite.hermadd#numpy.polynomial.hermite.hermadd "numpy.polynomial.hermite.hermadd"), [`hermsub`](numpy.polynomial.hermite.hermsub#numpy.polynomial.hermite.hermsub "numpy.polynomial.hermite.hermsub"), [`hermmulx`](numpy.polynomial.hermite.hermmulx#numpy.polynomial.hermite.hermmulx "numpy.polynomial.hermite.hermmulx"), [`hermmul`](numpy.polynomial.hermite.hermmul#numpy.polynomial.hermite.hermmul "numpy.polynomial.hermite.hermmul"), [`hermdiv`](numpy.polynomial.hermite.hermdiv#numpy.polynomial.hermite.hermdiv "numpy.polynomial.hermite.hermdiv") #### Examples >>> from numpy.polynomial.hermite import hermpow >>> hermpow([1, 2, 3], 2) array([81., 52., 82., 12., 9.]) # numpy.polynomial.hermite.hermroots polynomial.hermite.hermroots(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L1487-L1548) Compute the roots of a Hermite series. Return the roots (a.k.a. “zeros”) of the polynomial \\[p(x) = \sum_i c[i] * H_i(x).\\] Parameters: **c** 1-D array_like 1-D array of coefficients. Returns: **out** ndarray Array of the roots of the series. If all the roots are real, then `out` is also real, otherwise it is complex. 
See also [`numpy.polynomial.polynomial.polyroots`](numpy.polynomial.polynomial.polyroots#numpy.polynomial.polynomial.polyroots "numpy.polynomial.polynomial.polyroots") [`numpy.polynomial.legendre.legroots`](numpy.polynomial.legendre.legroots#numpy.polynomial.legendre.legroots "numpy.polynomial.legendre.legroots") [`numpy.polynomial.laguerre.lagroots`](numpy.polynomial.laguerre.lagroots#numpy.polynomial.laguerre.lagroots "numpy.polynomial.laguerre.lagroots") [`numpy.polynomial.chebyshev.chebroots`](numpy.polynomial.chebyshev.chebroots#numpy.polynomial.chebyshev.chebroots "numpy.polynomial.chebyshev.chebroots") [`numpy.polynomial.hermite_e.hermeroots`](numpy.polynomial.hermite_e.hermeroots#numpy.polynomial.hermite_e.hermeroots "numpy.polynomial.hermite_e.hermeroots") #### Notes The root estimates are obtained as the eigenvalues of the companion matrix. Roots far from the origin of the complex plane may have large errors due to the numerical instability of the series for such values. Roots with multiplicity greater than 1 will also show larger errors as the value of the series near such points is relatively insensitive to errors in the roots. Isolated roots near the origin can be improved by a few iterations of Newton’s method. The Hermite series basis polynomials aren’t powers of `x` so the results of this function may seem unintuitive. #### Examples >>> from numpy.polynomial.hermite import hermroots, hermfromroots >>> coef = hermfromroots([-1, 0, 1]) >>> coef array([0. , 0.25 , 0. , 0.125]) >>> hermroots(coef) array([-1.00000000e+00, -1.38777878e-17, 1.00000000e+00]) # numpy.polynomial.hermite.hermsub polynomial.hermite.hermsub(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L354-L391) Subtract one Hermite series from another. Returns the difference of two Hermite series `c1` \- `c2`. The sequences of coefficients are from lowest order term to highest, i.e., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`.
Parameters: **c1, c2** array_like 1-D arrays of Hermite series coefficients ordered from low to high. Returns: **out** ndarray Of Hermite series coefficients representing their difference. See also [`hermadd`](numpy.polynomial.hermite.hermadd#numpy.polynomial.hermite.hermadd "numpy.polynomial.hermite.hermadd"), [`hermmulx`](numpy.polynomial.hermite.hermmulx#numpy.polynomial.hermite.hermmulx "numpy.polynomial.hermite.hermmulx"), [`hermmul`](numpy.polynomial.hermite.hermmul#numpy.polynomial.hermite.hermmul "numpy.polynomial.hermite.hermmul"), [`hermdiv`](numpy.polynomial.hermite.hermdiv#numpy.polynomial.hermite.hermdiv "numpy.polynomial.hermite.hermdiv"), [`hermpow`](numpy.polynomial.hermite.hermpow#numpy.polynomial.hermite.hermpow "numpy.polynomial.hermite.hermpow") #### Notes Unlike multiplication, division, etc., the difference of two Hermite series is a Hermite series (without having to “reproject” the result onto the basis set) so subtraction, just like that of “standard” polynomials, is simply “component-wise.” #### Examples >>> from numpy.polynomial.hermite import hermsub >>> hermsub([1, 2, 3, 4], [1, 2, 3]) array([0., 0., 0., 4.]) # numpy.polynomial.hermite.hermtrim polynomial.hermite.hermtrim(_c_ , _tol =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polyutils.py#L144-L192) Remove “small” “trailing” coefficients from a polynomial. “Small” means “small in absolute value” and is controlled by the parameter `tol`; “trailing” means highest order coefficient(s), e.g., in `[0, 1, 1, 0, 0]` (which represents `0 + x + x**2 + 0*x**3 + 0*x**4`) both the 3-rd and 4-th order coefficients would be “trimmed.” Parameters: **c** array_like 1-d array of coefficients, ordered from lowest order to highest. **tol** number, optional Trailing (i.e., highest order) elements with absolute value less than or equal to `tol` (default value is zero) are removed. Returns: **trimmed** ndarray 1-d array with trailing zeros removed. 
If the resulting series would be empty, a series containing a single zero is returned. Raises: ValueError If `tol` < 0 #### Examples >>> from numpy.polynomial import polyutils as pu >>> pu.trimcoef((0,0,3,0,5,0,0)) array([0., 0., 3., 0., 5.]) >>> pu.trimcoef((0,0,1e-3,0,1e-5,0,0),1e-3) # item == tol is trimmed array([0.]) >>> i = complex(0,1) # works for complex >>> pu.trimcoef((3e-4,1e-3*(1-i),5e-4,2e-5*(1+i)), 1e-3) array([0.0003+0.j , 0.001 -0.001j]) # numpy.polynomial.hermite.hermval polynomial.hermite.hermval(_x_ , _c_ , _tensor =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L799-L890) Evaluate a Hermite series at points x. If `c` is of length `n + 1`, this function returns the value: \\[p(x) = c_0 * H_0(x) + c_1 * H_1(x) + ... + c_n * H_n(x)\\] The parameter `x` is converted to an array only if it is a tuple or a list, otherwise it is treated as a scalar. In either case, either `x` or its elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` is a 1-D array, then `p(x)` will have the same shape as `x`. If `c` is multidimensional, then the shape of the result depends on the value of `tensor`. If `tensor` is true the shape will be c.shape[1:] + x.shape. If `tensor` is false the shape will be c.shape[1:]. Note that scalars have shape (,). Trailing zeros in the coefficients will be used in the evaluation, so they should be avoided if efficiency is a concern. Parameters: **x** array_like, compatible object If `x` is a list or tuple, it is converted to an ndarray, otherwise it is left unchanged and treated as a scalar. In either case, `x` or its elements must support addition and multiplication with themselves and with the elements of `c`. **c** array_like Array of coefficients ordered so that the coefficients for terms of degree n are contained in c[n]. If `c` is multidimensional the remaining indices enumerate multiple polynomials.
In the two dimensional case the coefficients may be thought of as stored in the columns of `c`. **tensor** boolean, optional If True, the shape of the coefficient array is extended with ones on the right, one for each dimension of `x`. Scalars have dimension 0 for this action. The result is that every column of coefficients in `c` is evaluated for every element of `x`. If False, `x` is broadcast over the columns of `c` for the evaluation. This keyword is useful when `c` is multidimensional. The default value is True. Returns: **values** ndarray, algebra_like The shape of the return value is described above. See also [`hermval2d`](numpy.polynomial.hermite.hermval2d#numpy.polynomial.hermite.hermval2d "numpy.polynomial.hermite.hermval2d"), [`hermgrid2d`](numpy.polynomial.hermite.hermgrid2d#numpy.polynomial.hermite.hermgrid2d "numpy.polynomial.hermite.hermgrid2d"), [`hermval3d`](numpy.polynomial.hermite.hermval3d#numpy.polynomial.hermite.hermval3d "numpy.polynomial.hermite.hermval3d"), [`hermgrid3d`](numpy.polynomial.hermite.hermgrid3d#numpy.polynomial.hermite.hermgrid3d "numpy.polynomial.hermite.hermgrid3d") #### Notes The evaluation uses Clenshaw recursion, aka synthetic division. #### Examples >>> from numpy.polynomial.hermite import hermval >>> coef = [1,2,3] >>> hermval(1, coef) 11.0 >>> hermval([[1,2],[3,4]], coef) array([[ 11., 51.], [115., 203.]]) # numpy.polynomial.hermite.hermval2d polynomial.hermite.hermval2d(_x_ , _y_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L893-L943) Evaluate a 2-D Hermite series at points (x, y). This function returns the values: \\[p(x,y) = \sum_{i,j} c_{i,j} * H_i(x) * H_j(y)\\] The parameters `x` and `y` are converted to arrays only if they are tuples or lists, otherwise they are treated as scalars and they must have the same shape after conversion.
In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` is a 1-D array, a one is implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape. Parameters: **x, y** array_like, compatible objects The two dimensional series is evaluated at the points `(x, y)`, where `x` and `y` must have the same shape. If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns: **values** ndarray, compatible object The values of the two dimensional polynomial at points formed with pairs of corresponding values from `x` and `y`. See also [`hermval`](numpy.polynomial.hermite.hermval#numpy.polynomial.hermite.hermval "numpy.polynomial.hermite.hermval"), [`hermgrid2d`](numpy.polynomial.hermite.hermgrid2d#numpy.polynomial.hermite.hermgrid2d "numpy.polynomial.hermite.hermgrid2d"), [`hermval3d`](numpy.polynomial.hermite.hermval3d#numpy.polynomial.hermite.hermval3d "numpy.polynomial.hermite.hermval3d"), [`hermgrid3d`](numpy.polynomial.hermite.hermgrid3d#numpy.polynomial.hermite.hermgrid3d "numpy.polynomial.hermite.hermgrid3d") #### Examples >>> from numpy.polynomial.hermite import hermval2d >>> x = [1, 2] >>> y = [4, 5] >>> c = [[1, 2, 3], [4, 5, 6]] >>> hermval2d(x, y, c) array([1035., 2883.]) # numpy.polynomial.hermite.hermval3d polynomial.hermite.hermval3d(_x_ , _y_ , _z_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L1005-L1058) Evaluate a 3-D Hermite series at points (x, y, z).
This function returns the values: \\[p(x,y,z) = \sum_{i,j,k} c_{i,j,k} * H_i(x) * H_j(y) * H_k(z)\\] The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars, and they must have the same shape after conversion. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than 3 dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape. Parameters: **x, y, z** array_like, compatible object The three dimensional series is evaluated at the points `(x, y, z)`, where `x`, `y`, and `z` must have the same shape. If any of `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j,k is contained in `c[i,j,k]`. If `c` has dimension greater than 3 the remaining indices enumerate multiple sets of coefficients. Returns: **values** ndarray, compatible object The values of the multidimensional polynomial on points formed with triples of corresponding values from `x`, `y`, and `z`.
See also [`hermval`](numpy.polynomial.hermite.hermval#numpy.polynomial.hermite.hermval "numpy.polynomial.hermite.hermval"), [`hermval2d`](numpy.polynomial.hermite.hermval2d#numpy.polynomial.hermite.hermval2d "numpy.polynomial.hermite.hermval2d"), [`hermgrid2d`](numpy.polynomial.hermite.hermgrid2d#numpy.polynomial.hermite.hermgrid2d "numpy.polynomial.hermite.hermgrid2d"), [`hermgrid3d`](numpy.polynomial.hermite.hermgrid3d#numpy.polynomial.hermite.hermgrid3d "numpy.polynomial.hermite.hermgrid3d") #### Examples >>> from numpy.polynomial.hermite import hermval3d >>> x = [1, 2] >>> y = [4, 5] >>> z = [6, 7] >>> c = [[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]] >>> hermval3d(x, y, z, c) array([ 40077., 120131.]) # numpy.polynomial.hermite.hermvander polynomial.hermite.hermvander(_x_ , _deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L1125-L1184) Pseudo-Vandermonde matrix of given degree. Returns the pseudo-Vandermonde matrix of degree `deg` and sample points `x`. The pseudo-Vandermonde matrix is defined by \\[V[..., i] = H_i(x),\\] where `0 <= i <= deg`. The leading indices of `V` index the elements of `x` and the last index is the degree of the Hermite polynomial. If `c` is a 1-D array of coefficients of length `n + 1` and `V` is the array `V = hermvander(x, n)`, then `np.dot(V, c)` and `hermval(x, c)` are the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of Hermite series of the same degree and sample points. Parameters: **x** array_like Array of points. The dtype is converted to float64 or complex128 depending on whether any of the elements are complex. If `x` is scalar it is converted to a 1-D array. **deg** int Degree of the resulting matrix. Returns: **vander** ndarray The pseudo-Vandermonde matrix. The shape of the returned matrix is `x.shape + (deg + 1,)`, where the last index is the degree of the corresponding Hermite polynomial.
The dtype will be the same as the converted `x`. #### Examples >>> import numpy as np >>> from numpy.polynomial.hermite import hermvander >>> x = np.array([-1, 0, 1]) >>> hermvander(x, 3) array([[ 1., -2., 2., 4.], [ 1., 0., -2., -0.], [ 1., 2., 2., -4.]]) # numpy.polynomial.hermite.hermvander2d polynomial.hermite.hermvander2d(_x_ , _y_ , _deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L1187-L1243) Pseudo-Vandermonde matrix of given degrees. Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y)`. The pseudo-Vandermonde matrix is defined by \\[V[..., (deg[1] + 1)*i + j] = H_i(x) * H_j(y),\\] where `0 <= i <= deg[0]` and `0 <= j <= deg[1]`. The leading indices of `V` index the points `(x, y)` and the last index encodes the degrees of the Hermite polynomials. If `V = hermvander2d(x, y, [xdeg, ydeg])`, then the columns of `V` correspond to the elements of a 2-D coefficient array `c` of shape (xdeg + 1, ydeg + 1) in the order \\[c_{00}, c_{01}, c_{02} ... , c_{10}, c_{11}, c_{12} ...\\] and `np.dot(V, c.flat)` and `hermval2d(x, y, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 2-D Hermite series of the same degrees and sample points. Parameters: **x, y** array_like Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. **deg** list of ints List of maximum degrees of the form [x_deg, y_deg]. Returns: **vander2d** ndarray The shape of the returned matrix is `x.shape + (order,)`, where \\(order = (deg[0]+1)*(deg[1]+1)\\). The dtype will be the same as the converted `x` and `y`. 
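The documented equivalence between `hermvander2d` and `hermval2d` can be checked directly; the sample points and coefficient array below are arbitrary illustrative values, not taken from the docstring:

```python
import numpy as np
from numpy.polynomial.hermite import hermvander2d, hermval2d

# Arbitrary sample points and a 2-D coefficient array with xdeg=2, ydeg=3.
x = np.array([0.5, 1.0, 2.0])
y = np.array([1.0, 2.0, 3.0])
c = np.arange(12.0).reshape(3, 4)  # shape (xdeg + 1, ydeg + 1)

V = hermvander2d(x, y, [2, 3])     # shape x.shape + (order,) == (3, 12)
# Flattening c in row-major order matches the column ordering of V,
# so the matrix product reproduces hermval2d up to roundoff.
same = np.allclose(V @ c.ravel(), hermval2d(x, y, c))
```

This is the basis of using the pseudo-Vandermonde matrix for least squares fitting: the fit reduces to an ordinary linear least squares problem in the flattened coefficients.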
See also [`hermvander`](numpy.polynomial.hermite.hermvander#numpy.polynomial.hermite.hermvander "numpy.polynomial.hermite.hermvander"), [`hermvander3d`](numpy.polynomial.hermite.hermvander3d#numpy.polynomial.hermite.hermvander3d "numpy.polynomial.hermite.hermvander3d"), [`hermval2d`](numpy.polynomial.hermite.hermval2d#numpy.polynomial.hermite.hermval2d "numpy.polynomial.hermite.hermval2d"), [`hermval3d`](numpy.polynomial.hermite.hermval3d#numpy.polynomial.hermite.hermval3d "numpy.polynomial.hermite.hermval3d") #### Examples >>> import numpy as np >>> from numpy.polynomial.hermite import hermvander2d >>> x = np.array([-1, 0, 1]) >>> y = np.array([-1, 0, 1]) >>> hermvander2d(x, y, [2, 2]) array([[ 1., -2., 2., -2., 4., -4., 2., -4., 4.], [ 1., 0., -2., 0., 0., -0., -2., -0., 4.], [ 1., 2., 2., 2., 4., 4., 2., 4., 4.]]) # numpy.polynomial.hermite.hermvander3d polynomial.hermite.hermvander3d(_x_ , _y_ , _z_ , _deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L1246-L1303) Pseudo-Vandermonde matrix of given degrees. Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y, z)`. If `l`, `m`, `n` are the given degrees in `x`, `y`, `z`, then the pseudo-Vandermonde matrix is defined by \\[V[..., (m+1)(n+1)i + (n+1)j + k] = H_i(x)*H_j(y)*H_k(z),\\] where `0 <= i <= l`, `0 <= j <= m`, and `0 <= k <= n`. The leading indices of `V` index the points `(x, y, z)` and the last index encodes the degrees of the Hermite polynomials. If `V = hermvander3d(x, y, z, [xdeg, ydeg, zdeg])`, then the columns of `V` correspond to the elements of a 3-D coefficient array `c` of shape (xdeg + 1, ydeg + 1, zdeg + 1) in the order \\[c_{000}, c_{001}, c_{002},... , c_{010}, c_{011}, c_{012},...\\] and `np.dot(V, c.flat)` and `hermval3d(x, y, z, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 3-D Hermite series of the same degrees and sample points.
Parameters: **x, y, z** array_like Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. **deg** list of ints List of maximum degrees of the form [x_deg, y_deg, z_deg]. Returns: **vander3d** ndarray The shape of the returned matrix is `x.shape + (order,)`, where \\(order = (deg[0]+1)*(deg[1]+1)*(deg[2]+1)\\). The dtype will be the same as the converted `x`, `y`, and `z`. See also [`hermvander`](numpy.polynomial.hermite.hermvander#numpy.polynomial.hermite.hermvander "numpy.polynomial.hermite.hermvander"), [`hermvander2d`](numpy.polynomial.hermite.hermvander2d#numpy.polynomial.hermite.hermvander2d "numpy.polynomial.hermite.hermvander2d"), [`hermval2d`](numpy.polynomial.hermite.hermval2d#numpy.polynomial.hermite.hermval2d "numpy.polynomial.hermite.hermval2d"), [`hermval3d`](numpy.polynomial.hermite.hermval3d#numpy.polynomial.hermite.hermval3d "numpy.polynomial.hermite.hermval3d") #### Examples >>> import numpy as np >>> from numpy.polynomial.hermite import hermvander3d >>> x = np.array([-1, 0, 1]) >>> y = np.array([-1, 0, 1]) >>> z = np.array([-1, 0, 1]) >>> hermvander3d(x, y, z, [0, 1, 2]) array([[ 1., -2., 2., -2., 4., -4.], [ 1., 0., -2., 0., 0., -0.], [ 1., 2., 2., 2., 4., 4.]]) # numpy.polynomial.hermite.hermweight polynomial.hermite.hermweight(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L1662-L1690) Weight function of the Hermite polynomials. The weight function is \\(\exp(-x^2)\\) and the interval of integration is \\((-\infty, \infty)\\). The Hermite polynomials are orthogonal, but not normalized, with respect to this weight function. Parameters: **x** array_like Values at which the weight function will be computed. Returns: **w** ndarray The weight function at `x`. #### Examples >>> import numpy as np >>> from numpy.polynomial.hermite import hermweight >>> x = np.arange(-2, 2) >>> hermweight(x) array([0.01831564, 0.36787944, 1. , 0.36787944]) # numpy.polynomial.hermite.hermx polynomial.hermite.hermx _= array([0.
, 0.5])_ An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(…)`) for instantiating an array. For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array. Parameters: **(for the __new__ method; see Notes below)** **shape** tuple of ints Shape of created array. **dtype** data-type, optional Any object that can be interpreted as a numpy data type. **buffer** object exposing buffer interface, optional Used to fill the array with data. **offset** int, optional Offset of array data in buffer. **strides** tuple of ints, optional Strides of data in memory. **order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also [`array`](numpy.array#numpy.array "numpy.array") Construct an array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Create an array, each element of which is zero. [`empty`](numpy.empty#numpy.empty "numpy.empty") Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Create a data-type. [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray") An ndarray alias [generic](https://docs.python.org/3/glossary.html#term-generic-type "\(in Python v3.13\)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). #### Notes There are two modes of creating an array using `__new__`: 1.
If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used. 2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted. No `__init__` method is needed because the array is fully initialized after the `__new__` method. #### Examples These examples illustrate the low-level [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray. First mode, `buffer` is None: >>> import numpy as np >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. skip first element array([2, 3]) Attributes: **T** ndarray Transpose of the array. **data** buffer The array’s elements, in memory. **dtype** dtype object Describes the format of the elements in the array. **flags** dict Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc. **flat** numpy.flatiter object Flattened version of the array as an iterator. The iterator allows assignments, e.g., `x.flat = 3` (See [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") for assignment examples; TODO). **imag** ndarray Imaginary part of the array. **real** ndarray Real part of the array. **size** int Number of elements in the array. **itemsize** int The memory use of each array element in bytes. **nbytes** int The total number of bytes required to store the array data, i.e., `itemsize * size`. **ndim** int The array’s number of dimensions. **shape** tuple of ints Shape of the array. **strides** tuple of ints The step-size required to move from one element to the next in memory. For example, a contiguous `(3, 4)` array of type `int16` in C-order has strides `(8, 2)`. 
This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (`2 * 4`). **ctypes** ctypes object Class containing properties of the array needed for interaction with ctypes. **base** ndarray If the array is a view into another array, that array is its `base` (unless that array is also a view). The `base` array is where the array data is actually stored. # numpy.polynomial.hermite.hermzero polynomial.hermite.hermzero _= array([0])_ An ndarray constant holding the coefficients of the zero Hermite series; see the `ndarray` description under `hermx` above. # numpy.polynomial.hermite.poly2herm polynomial.hermite.poly2herm(_pol_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite.py#L96-L139) Convert a polynomial to a Hermite series. Convert an array representing the coefficients of a polynomial (relative to the “standard” basis) ordered from lowest degree to highest, to an array of the coefficients of the equivalent Hermite series, ordered from lowest to highest degree. Parameters: **pol** array_like 1-D array containing the polynomial coefficients Returns: **c** ndarray 1-D array containing the coefficients of the equivalent Hermite series. See also [`herm2poly`](numpy.polynomial.hermite.herm2poly#numpy.polynomial.hermite.herm2poly "numpy.polynomial.hermite.herm2poly") #### Notes The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance. #### Examples >>> import numpy as np >>> from numpy.polynomial.hermite import poly2herm >>> poly2herm(np.arange(4)) array([1.
, 2.75 , 0.5 , 0.375]) # numpy.polynomial.hermite_e.herme2poly polynomial.hermite_e.herme2poly(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L144-L198) Convert a Hermite series to a polynomial. Convert an array representing the coefficients of a Hermite series, ordered from lowest degree to highest, to an array of the coefficients of the equivalent polynomial (relative to the “standard” basis) ordered from lowest to highest degree. Parameters: **c** array_like 1-D array containing the Hermite series coefficients, ordered from lowest order term to highest. Returns: **pol** ndarray 1-D array containing the coefficients of the equivalent polynomial (relative to the “standard” basis) ordered from lowest order term to highest. See also [`poly2herme`](numpy.polynomial.hermite_e.poly2herme#numpy.polynomial.hermite_e.poly2herme "numpy.polynomial.hermite_e.poly2herme") #### Notes The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance. #### Examples >>> from numpy.polynomial.hermite_e import herme2poly >>> herme2poly([ 2., 10., 2., 3.]) array([0., 1., 2., 3.]) # numpy.polynomial.hermite_e.hermeadd polynomial.hermite_e.hermeadd(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L314-L351) Add one Hermite series to another. Returns the sum of two Hermite series `c1` \+ `c2`. The arguments are sequences of coefficients ordered from lowest order term to highest, i.e., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. Parameters: **c1, c2** array_like 1-D arrays of Hermite series coefficients ordered from low to high. Returns: **out** ndarray Array representing the Hermite series of their sum. 
See also [`hermesub`](numpy.polynomial.hermite_e.hermesub#numpy.polynomial.hermite_e.hermesub "numpy.polynomial.hermite_e.hermesub"), [`hermemulx`](numpy.polynomial.hermite_e.hermemulx#numpy.polynomial.hermite_e.hermemulx "numpy.polynomial.hermite_e.hermemulx"), [`hermemul`](numpy.polynomial.hermite_e.hermemul#numpy.polynomial.hermite_e.hermemul "numpy.polynomial.hermite_e.hermemul"), [`hermediv`](numpy.polynomial.hermite_e.hermediv#numpy.polynomial.hermite_e.hermediv "numpy.polynomial.hermite_e.hermediv"), [`hermepow`](numpy.polynomial.hermite_e.hermepow#numpy.polynomial.hermite_e.hermepow "numpy.polynomial.hermite_e.hermepow") #### Notes Unlike multiplication, division, etc., the sum of two Hermite series is a Hermite series (without having to “reproject” the result onto the basis set) so addition, just like that of “standard” polynomials, is simply “component- wise.” #### Examples >>> from numpy.polynomial.hermite_e import hermeadd >>> hermeadd([1, 2, 3], [1, 2, 3, 4]) array([2., 4., 6., 4.]) # numpy.polynomial.hermite_e.hermecompanion polynomial.hermite_e.hermecompanion(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L1365-L1402) Return the scaled companion matrix of c. The basis polynomials are scaled so that the companion matrix is symmetric when `c` is an HermiteE basis polynomial. This provides better eigenvalue estimates than the unscaled case and for basis polynomials the eigenvalues are guaranteed to be real if [`numpy.linalg.eigvalsh`](numpy.linalg.eigvalsh#numpy.linalg.eigvalsh "numpy.linalg.eigvalsh") is used to obtain them. Parameters: **c** array_like 1-D array of HermiteE series coefficients ordered from low to high degree. Returns: **mat** ndarray Scaled companion matrix of dimensions (deg, deg). 
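The docstring gives no example, but the behaviour can be illustrated with a basis polynomial (the input below is an arbitrary illustrative choice): the eigenvalues of the scaled companion matrix of a basis polynomial are its roots, and for `He_2(x) = x**2 - 1` (coefficients `[0, 0, 1]`) those roots are ±1:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermecompanion

# Scaled companion matrix of the basis polynomial He_2, coefficients [0, 0, 1].
m = hermecompanion([0, 0, 1])      # shape (deg, deg) == (2, 2)
# The matrix is symmetric for a basis polynomial, so eigvalsh applies
# and the eigenvalues are guaranteed real.
roots = np.linalg.eigvalsh(m)      # approximately [-1., 1.]
```

This is exactly how `hermeroots` computes the roots of a HermiteE series internally: symmetry of the scaled matrix makes the eigenvalue problem well conditioned.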
# numpy.polynomial.hermite_e.hermeder polynomial.hermite_e.hermeder(_c_ , _m =1_, _scl =1_, _axis =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L596-L674) Differentiate a Hermite_e series. Returns the series coefficients `c` differentiated `m` times along `axis`. At each iteration the result is multiplied by `scl` (the scaling factor is for use in a linear change of variable). The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series `1*He_0 + 2*He_1 + 3*He_2` while [[1,2],[1,2]] represents `1*He_0(x)*He_0(y) + 1*He_1(x)*He_0(y) + 2*He_0(x)*He_1(y) + 2*He_1(x)*He_1(y)` if axis=0 is `x` and axis=1 is `y`. Parameters: **c** array_like Array of Hermite_e series coefficients. If `c` is multidimensional the different axes correspond to different variables with the degree in each axis given by the corresponding index. **m** int, optional Number of derivatives taken, must be non-negative. (Default: 1) **scl** scalar, optional Each differentiation is multiplied by `scl`. The end result is multiplication by `scl**m`. This is for use in a linear change of variable. (Default: 1) **axis** int, optional Axis over which the derivative is taken. (Default: 0). Returns: **der** ndarray Hermite series of the derivative. See also [`hermeint`](numpy.polynomial.hermite_e.hermeint#numpy.polynomial.hermite_e.hermeint "numpy.polynomial.hermite_e.hermeint") #### Notes In general, the result of differentiating a Hermite series does not resemble the same operation on a power series. Thus the result of this function may be “unintuitive,” albeit correct; see Examples section below.
#### Examples >>> from numpy.polynomial.hermite_e import hermeder >>> hermeder([ 1., 1., 1., 1.]) array([1., 2., 3.]) >>> hermeder([-0.25, 1., 1./2., 1./3., 1./4 ], m=2) array([1., 2., 3.]) # numpy.polynomial.hermite_e.hermediv polynomial.hermite_e.hermediv(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L513-L556) Divide one Hermite series by another. Returns the quotient-with-remainder of two Hermite series `c1` / `c2`. The arguments are sequences of coefficients from lowest order “term” to highest, e.g., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. Parameters: **c1, c2** array_like 1-D arrays of Hermite series coefficients ordered from low to high. Returns: **[quo, rem]** ndarrays Of Hermite series coefficients representing the quotient and remainder. See also [`hermeadd`](numpy.polynomial.hermite_e.hermeadd#numpy.polynomial.hermite_e.hermeadd "numpy.polynomial.hermite_e.hermeadd"), [`hermesub`](numpy.polynomial.hermite_e.hermesub#numpy.polynomial.hermite_e.hermesub "numpy.polynomial.hermite_e.hermesub"), [`hermemulx`](numpy.polynomial.hermite_e.hermemulx#numpy.polynomial.hermite_e.hermemulx "numpy.polynomial.hermite_e.hermemulx"), [`hermemul`](numpy.polynomial.hermite_e.hermemul#numpy.polynomial.hermite_e.hermemul "numpy.polynomial.hermite_e.hermemul"), [`hermepow`](numpy.polynomial.hermite_e.hermepow#numpy.polynomial.hermite_e.hermepow "numpy.polynomial.hermite_e.hermepow") #### Notes In general, the (polynomial) division of one Hermite series by another results in quotient and remainder terms that are not in the Hermite polynomial basis set. Thus, to express these results as a Hermite series, it is necessary to “reproject” the results onto the Hermite basis set, which may produce “unintuitive” (but correct) results; see Examples section below. 
#### Examples >>> from numpy.polynomial.hermite_e import hermediv >>> hermediv([ 14., 15., 28., 7., 6.], [0, 1, 2]) (array([1., 2., 3.]), array([0.])) >>> hermediv([ 15., 17., 28., 7., 6.], [0, 1, 2]) (array([1., 2., 3.]), array([1., 2.])) # numpy.polynomial.hermite_e.hermedomain polynomial.hermite_e.hermedomain _= array([-1., 1.])_ An ndarray constant giving the default domain of HermiteE series; see the `ndarray` description under `hermx` above. # numpy.polynomial.hermite_e.hermefit polynomial.hermite_e.hermefit(_x_ , _y_ , _deg_ , _rcond =None_, _full =False_, _w =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L1231-L1362) Least squares fit of Hermite series to data. Return the coefficients of a HermiteE series of degree `deg` that is the least squares fit to the data values `y` given at points `x`. If `y` is 1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple fits are done, one for each column of `y`, and the resulting coefficients are stored in the corresponding columns of a 2-D return. The fitted polynomial(s) are in the form \\[p(x) = c_0 + c_1 * He_1(x) + ... + c_n * He_n(x),\\] where `n` is `deg`. Parameters: **x** array_like, shape (M,) x-coordinates of the M sample points `(x[i], y[i])`. **y** array_like, shape (M,) or (M, K) y-coordinates of the sample points. Several data sets of sample points sharing the same x-coordinates can be fitted at once by passing in a 2D-array that contains one dataset per column. **deg** int or 1-D array_like Degree(s) of the fitting polynomials.
If `deg` is a single integer all terms up to and including the `deg`’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead. **rcond** float, optional Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is len(x)*eps, where eps is the relative precision of the float type, about 2e-16 in most cases. **full** bool, optional Switch determining nature of return value. When it is False (the default) just the coefficients are returned, when True diagnostic information from the singular value decomposition is also returned. **w** array_like, shape (`M`,), optional Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse- variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None. Returns: **coef** ndarray, shape (M,) or (M, K) Hermite coefficients ordered from low to high. If `y` was 2-D, the coefficients for the data in column k of `y` are in column `k`. **[residuals, rank, singular_values, rcond]** list These values are only returned if `full == True` * residuals – sum of squared residuals of the least squares fit * rank – the numerical rank of the scaled Vandermonde matrix * singular_values – singular values of the scaled Vandermonde matrix * rcond – value of `rcond`. For more details, see [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq"). Warns: RankWarning The rank of the coefficient matrix in the least-squares fit is deficient. The warning is only raised if `full = False`. 
The warnings can be turned off by >>> import warnings >>> warnings.simplefilter('ignore', np.exceptions.RankWarning) See also [`numpy.polynomial.chebyshev.chebfit`](numpy.polynomial.chebyshev.chebfit#numpy.polynomial.chebyshev.chebfit "numpy.polynomial.chebyshev.chebfit") [`numpy.polynomial.legendre.legfit`](numpy.polynomial.legendre.legfit#numpy.polynomial.legendre.legfit "numpy.polynomial.legendre.legfit") [`numpy.polynomial.polynomial.polyfit`](numpy.polynomial.polynomial.polyfit#numpy.polynomial.polynomial.polyfit "numpy.polynomial.polynomial.polyfit") [`numpy.polynomial.hermite.hermfit`](numpy.polynomial.hermite.hermfit#numpy.polynomial.hermite.hermfit "numpy.polynomial.hermite.hermfit") [`numpy.polynomial.laguerre.lagfit`](numpy.polynomial.laguerre.lagfit#numpy.polynomial.laguerre.lagfit "numpy.polynomial.laguerre.lagfit") [`hermeval`](numpy.polynomial.hermite_e.hermeval#numpy.polynomial.hermite_e.hermeval "numpy.polynomial.hermite_e.hermeval") Evaluates a Hermite series. [`hermevander`](numpy.polynomial.hermite_e.hermevander#numpy.polynomial.hermite_e.hermevander "numpy.polynomial.hermite_e.hermevander") pseudo Vandermonde matrix of Hermite series. [`hermeweight`](numpy.polynomial.hermite_e.hermeweight#numpy.polynomial.hermite_e.hermeweight "numpy.polynomial.hermite_e.hermeweight") HermiteE weight function. [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq") Computes a least-squares fit from the matrix. [`scipy.interpolate.UnivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html#scipy.interpolate.UnivariateSpline "\(in SciPy v1.14.1\)") Computes spline fits. #### Notes The solution is the coefficients of the HermiteE series `p` that minimizes the sum of the weighted squared errors \\[E = \sum_j w_j^2 * |y_j - p(x_j)|^2,\\] where the \\(w_j\\) are the weights. 
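The weighted objective above can be evaluated directly with `hermeval`; a minimal sketch (sample points and coefficients chosen arbitrarily for illustration):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermefit, hermeval

# Fit noise-free data generated from a known HermiteE series, then
# evaluate the weighted squared-error objective E by hand.
x = np.linspace(-1.0, 1.0, 50)
y = hermeval(x, [1.0, 2.0, 3.0])   # data lying exactly on a degree-2 series
w = np.ones_like(x)                 # uniform weights
coef = hermefit(x, y, 2, w=w)
E = np.sum(w**2 * np.abs(y - hermeval(x, coef))**2)
# E is numerically zero here, since the fit recovers [1, 2, 3] exactly
# up to roundoff.
```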
This problem is solved by setting up the (typically) overdetermined matrix equation \\[V(x) * c = w * y,\\] where `V` is the pseudo Vandermonde matrix of `x`, the elements of `c` are the coefficients to be solved for, and the elements of `y` are the observed values. This equation is then solved using the singular value decomposition of `V`. If some of the singular values of `V` are so small that they are neglected, then a [`RankWarning`](numpy.exceptions.rankwarning#numpy.exceptions.RankWarning "numpy.exceptions.RankWarning") will be issued. This means that the coefficient values may be poorly determined. Using a lower order fit will usually get rid of the warning. The `rcond` parameter can also be set to a value smaller than its default, but the resulting fit may be spurious and have large contributions from roundoff error. Fits using HermiteE series are probably most useful when the data can be approximated by `sqrt(w(x)) * p(x)`, where `w(x)` is the HermiteE weight. In that case the weight `sqrt(w(x[i]))` should be used together with data values `y[i]/sqrt(w(x[i]))`. The weight function is available as [`hermeweight`](numpy.polynomial.hermite_e.hermeweight#numpy.polynomial.hermite_e.hermeweight "numpy.polynomial.hermite_e.hermeweight"). #### Examples >>> import numpy as np >>> from numpy.polynomial.hermite_e import hermefit, hermeval >>> x = np.linspace(-10, 10) >>> rng = np.random.default_rng() >>> err = rng.normal(scale=1./10, size=len(x)) >>> y = hermeval(x, [1, 2, 3]) + err >>> hermefit(x, y, 2) array([1.02284196, 2.00032805, 2.99978457]) # may vary # numpy.polynomial.hermite_e.hermefromroots polynomial.hermite_e.hermefromroots(_roots_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L258-L311) Generate a HermiteE series with given roots. The function returns the coefficients of the polynomial \\[p(x) = (x - r_0) * (x - r_1) * ...
* (x - r_n),\\] in HermiteE form, where the \\(r_n\\) are the roots specified in [`roots`](numpy.roots#numpy.roots "numpy.roots"). If a zero has multiplicity n, then it must appear in [`roots`](numpy.roots#numpy.roots "numpy.roots") n times. For instance, if 2 is a root of multiplicity three and 3 is a root of multiplicity 2, then [`roots`](numpy.roots#numpy.roots "numpy.roots") looks something like [2, 2, 2, 3, 3]. The roots can appear in any order. If the returned coefficients are `c`, then \\[p(x) = c_0 + c_1 * He_1(x) + ... + c_n * He_n(x)\\] The coefficient of the last term is not generally 1 for monic polynomials in HermiteE form. Parameters: **roots** array_like Sequence containing the roots. Returns: **out** ndarray 1-D array of coefficients. If all roots are real then `out` is a real array, if some of the roots are complex, then `out` is complex even if all the coefficients in the result are real (see Examples below). See also [`numpy.polynomial.polynomial.polyfromroots`](numpy.polynomial.polynomial.polyfromroots#numpy.polynomial.polynomial.polyfromroots "numpy.polynomial.polynomial.polyfromroots") [`numpy.polynomial.legendre.legfromroots`](numpy.polynomial.legendre.legfromroots#numpy.polynomial.legendre.legfromroots "numpy.polynomial.legendre.legfromroots") [`numpy.polynomial.laguerre.lagfromroots`](numpy.polynomial.laguerre.lagfromroots#numpy.polynomial.laguerre.lagfromroots "numpy.polynomial.laguerre.lagfromroots") [`numpy.polynomial.hermite.hermfromroots`](numpy.polynomial.hermite.hermfromroots#numpy.polynomial.hermite.hermfromroots "numpy.polynomial.hermite.hermfromroots") [`numpy.polynomial.chebyshev.chebfromroots`](numpy.polynomial.chebyshev.chebfromroots#numpy.polynomial.chebyshev.chebfromroots "numpy.polynomial.chebyshev.chebfromroots") #### Examples >>> from numpy.polynomial.hermite_e import hermefromroots, hermeval >>> coef = hermefromroots((-1, 0, 1)) >>> hermeval((-1, 0, 1), coef) array([0., 0., 0.]) >>> coef = hermefromroots((-1j, 1j)) >>> 
hermeval((-1j, 1j), coef) array([0.+0.j, 0.+0.j]) # numpy.polynomial.hermite_e.hermegauss polynomial.hermite_e.hermegauss(_deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L1510-L1571) Gauss-HermiteE quadrature. Computes the sample points and weights for Gauss-HermiteE quadrature. These sample points and weights will correctly integrate polynomials of degree \\(2*deg - 1\\) or less over the interval \\((-\infty, \infty)\\) with the weight function \\(f(x) = \exp(-x^2/2)\\). Parameters: **deg** int Number of sample points and weights. It must be >= 1. Returns: **x** ndarray 1-D ndarray containing the sample points. **y** ndarray 1-D ndarray containing the weights. #### Notes The results have only been tested up to degree 100; higher degrees may be problematic. The weights are determined by using the fact that \\[w_k = c / (He'_n(x_k) * He_{n-1}(x_k))\\] where \\(c\\) is a constant independent of \\(k\\) and \\(x_k\\) is the k’th root of \\(He_n\\), and then scaling the results to get the right value when integrating 1. # numpy.polynomial.hermite_e.hermegrid2d polynomial.hermite_e.hermegrid2d(_x_ , _y_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L933-L977) Evaluate a 2-D HermiteE series on the Cartesian product of x and y. This function returns the values: \\[p(a,b) = \sum_{i,j} c_{i,j} * He_i(a) * He_j(b)\\] where the points `(a, b)` consist of all pairs formed by taking `a` from `x` and `b` from `y`. The resulting points form a grid with `x` in the first dimension and `y` in the second. The parameters `x` and `y` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than two dimensions, ones are implicitly appended to its shape to make it 2-D.
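As a concrete sketch of the Cartesian-product evaluation described above (coefficient values chosen arbitrarily), each grid entry matches a pointwise `hermeval2d` evaluation:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegrid2d, hermeval2d

x = np.array([0.0, 1.0])
y = np.array([-1.0, 0.0, 1.0])
c = np.array([[1.0, 2.0], [3.0, 4.0]])  # arbitrary 2x2 coefficient array
g = hermegrid2d(x, y, c)
# The grid has x along the first axis and y along the second: g.shape == (2, 3)
# and g[i, j] equals the point evaluation at (x[i], y[j]).
assert g.shape == (2, 3)
assert np.allclose(g[1, 2], hermeval2d(x[1], y[2], c))
```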
The shape of the result will be c.shape[2:] + x.shape + y.shape. Parameters: **x, y** array_like, compatible objects The two dimensional series is evaluated at the points in the Cartesian product of `x` and `y`. If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficients for terms of degree i,j are contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns: **values** ndarray, compatible object The values of the two dimensional polynomial at points in the Cartesian product of `x` and `y`. See also [`hermeval`](numpy.polynomial.hermite_e.hermeval#numpy.polynomial.hermite_e.hermeval "numpy.polynomial.hermite_e.hermeval"), [`hermeval2d`](numpy.polynomial.hermite_e.hermeval2d#numpy.polynomial.hermite_e.hermeval2d "numpy.polynomial.hermite_e.hermeval2d"), [`hermeval3d`](numpy.polynomial.hermite_e.hermeval3d#numpy.polynomial.hermite_e.hermeval3d "numpy.polynomial.hermite_e.hermeval3d"), [`hermegrid3d`](numpy.polynomial.hermite_e.hermegrid3d#numpy.polynomial.hermite_e.hermegrid3d "numpy.polynomial.hermite_e.hermegrid3d") # numpy.polynomial.hermite_e.hermegrid3d polynomial.hermite_e.hermegrid3d(_x_ , _y_ , _z_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L1025-L1072) Evaluate a 3-D HermiteE series on the Cartesian product of x, y, and z. This function returns the values: \\[p(a,b,c) = \sum_{i,j,k} c_{i,j,k} * He_i(a) * He_j(b) * He_k(c)\\] where the points `(a, b, c)` consist of all triples formed by taking `a` from `x`, `b` from `y`, and `c` from `z`. The resulting points form a grid with `x` in the first dimension, `y` in the second, and `z` in the third. The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars.
In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than three dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape + y.shape + z.shape. Parameters: **x, y, z** array_like, compatible objects The three dimensional series is evaluated at the points in the Cartesian product of `x`, `y`, and `z`. If `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficients for terms of degree i,j,k are contained in `c[i,j,k]`. If `c` has dimension greater than three the remaining indices enumerate multiple sets of coefficients. Returns: **values** ndarray, compatible object The values of the three dimensional polynomial at points in the Cartesian product of `x`, `y`, and `z`. See also [`hermeval`](numpy.polynomial.hermite_e.hermeval#numpy.polynomial.hermite_e.hermeval "numpy.polynomial.hermite_e.hermeval"), [`hermeval2d`](numpy.polynomial.hermite_e.hermeval2d#numpy.polynomial.hermite_e.hermeval2d "numpy.polynomial.hermite_e.hermeval2d"), [`hermegrid2d`](numpy.polynomial.hermite_e.hermegrid2d#numpy.polynomial.hermite_e.hermegrid2d "numpy.polynomial.hermite_e.hermegrid2d"), [`hermeval3d`](numpy.polynomial.hermite_e.hermeval3d#numpy.polynomial.hermite_e.hermeval3d "numpy.polynomial.hermite_e.hermeval3d") # numpy.polynomial.hermite_e.hermeint polynomial.hermite_e.hermeint(_c_ , _m =1_, _k =[]_, _lbnd =0_, _scl =1_, _axis =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L677-L794) Integrate a Hermite_e series. Returns the Hermite_e series coefficients `c` integrated `m` times from `lbnd` along `axis`. At each iteration the resulting series is **multiplied** by `scl` and an integration constant, `k`, is added.
The scaling factor is for use in a linear change of variable. (“Buyer beware”: note that, depending on what one is doing, one may want `scl` to be the reciprocal of what one might expect; for more information, see the Notes section below.) The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series `He_0 + 2*He_1 + 3*He_2` while [[1,2],[1,2]] represents `1*He_0(x)*He_0(y) + 1*He_1(x)*He_0(y) + 2*He_0(x)*He_1(y) + 2*He_1(x)*He_1(y)` if axis=0 is `x` and axis=1 is `y`. Parameters: **c** array_like Array of Hermite_e series coefficients. If c is multidimensional the different axes correspond to different variables with the degree in each axis given by the corresponding index. **m** int, optional Order of integration, must be positive. (Default: 1) **k**{[], list, scalar}, optional Integration constant(s). The value of the first integral at `lbnd` is the first value in the list, the value of the second integral at `lbnd` is the second value, etc. If `k == []` (the default), all constants are set to zero. If `m == 1`, a single scalar can be given instead of a list. **lbnd** scalar, optional The lower bound of the integral. (Default: 0) **scl** scalar, optional Following each integration the result is _multiplied_ by `scl` before the integration constant is added. (Default: 1) **axis** int, optional Axis over which the integral is taken. (Default: 0). Returns: **S** ndarray Hermite_e series coefficients of the integral. Raises: ValueError If `m < 0`, `len(k) > m`, `np.ndim(lbnd) != 0`, or `np.ndim(scl) != 0`. See also [`hermeder`](numpy.polynomial.hermite_e.hermeder#numpy.polynomial.hermite_e.hermeder "numpy.polynomial.hermite_e.hermeder") #### Notes Note that the result of each integration is _multiplied_ by `scl`. Why is this important to note? Say one is making a linear change of variable \\(u = ax + b\\) in an integral relative to `x`.
Then \\(dx = du/a\\), so one will need to set `scl` equal to \\(1/a\\) - perhaps not what one would have first thought. Also note that, in general, the result of integrating a C-series needs to be “reprojected” onto the C-series basis set. Thus, typically, the result of this function is “unintuitive,” albeit correct; see Examples section below. #### Examples >>> from numpy.polynomial.hermite_e import hermeint >>> hermeint([1, 2, 3]) # integrate once, value 0 at 0. array([1., 1., 1., 1.]) >>> hermeint([1, 2, 3], m=2) # integrate twice, value & deriv 0 at 0 array([-0.25 , 1. , 0.5 , 0.33333333, 0.25 ]) # may vary >>> hermeint([1, 2, 3], k=1) # integrate once, value 1 at 0. array([2., 1., 1., 1.]) >>> hermeint([1, 2, 3], lbnd=-1) # integrate once, value 0 at -1 array([-1., 1., 1., 1.]) >>> hermeint([1, 2, 3], m=2, k=[1, 2], lbnd=-1) array([ 1.83333333, 0. , 0.5 , 0.33333333, 0.25 ]) # may vary # numpy.polynomial.hermite_e.hermeline polynomial.hermite_e.hermeline(_off_ , _scl_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L219-L255) Hermite series whose graph is a straight line. Parameters: **off, scl** scalars The specified line is given by `off + scl*x`. Returns: **y** ndarray This module’s representation of the Hermite series for `off + scl*x`.
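Since `He_0(x) = 1` and `He_1(x) = x`, the representation of `off + scl*x` is simply the coefficient pair `[off, scl]`; a quick check:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeline, hermeval

c = hermeline(3, 2)        # series for 3 + 2*x
# He_0(x) = 1 and He_1(x) = x, so the coefficients are just [3., 2.]
print(c)                   # [3. 2.]
print(hermeval(5, c))      # 3 + 2*5 = 13.0
```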
See also [`numpy.polynomial.polynomial.polyline`](numpy.polynomial.polynomial.polyline#numpy.polynomial.polynomial.polyline "numpy.polynomial.polynomial.polyline") [`numpy.polynomial.chebyshev.chebline`](numpy.polynomial.chebyshev.chebline#numpy.polynomial.chebyshev.chebline "numpy.polynomial.chebyshev.chebline") [`numpy.polynomial.legendre.legline`](numpy.polynomial.legendre.legline#numpy.polynomial.legendre.legline "numpy.polynomial.legendre.legline") [`numpy.polynomial.laguerre.lagline`](numpy.polynomial.laguerre.lagline#numpy.polynomial.laguerre.lagline "numpy.polynomial.laguerre.lagline") [`numpy.polynomial.hermite.hermline`](numpy.polynomial.hermite.hermline#numpy.polynomial.hermite.hermline "numpy.polynomial.hermite.hermline") #### Examples >>> from numpy.polynomial.hermite_e import hermeline, hermeval >>> hermeval(0, hermeline(3, 2)) 3.0 >>> hermeval(1, hermeline(3, 2)) 5.0 # numpy.polynomial.hermite_e.hermemul polynomial.hermite_e.hermemul(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L447-L510) Multiply one Hermite series by another. Returns the product of two Hermite series `c1` * `c2`. The arguments are sequences of coefficients, from lowest order “term” to highest, e.g., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. Parameters: **c1, c2** array_like 1-D arrays of Hermite series coefficients ordered from low to high. Returns: **out** ndarray Of Hermite series coefficients representing their product.
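A useful sanity check on the reprojected product is that it evaluates to the pointwise product of the two factor series:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermemul, hermeval

c1, c2 = [1, 2, 3], [0, 1, 2]
prod = hermemul(c1, c2)      # degree-2 times degree-2 gives a degree-4 series
x = np.linspace(-2.0, 2.0, 9)
# The product series agrees pointwise with the product of the factors.
assert np.allclose(hermeval(x, prod), hermeval(x, c1) * hermeval(x, c2))
```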
See also [`hermeadd`](numpy.polynomial.hermite_e.hermeadd#numpy.polynomial.hermite_e.hermeadd "numpy.polynomial.hermite_e.hermeadd"), [`hermesub`](numpy.polynomial.hermite_e.hermesub#numpy.polynomial.hermite_e.hermesub "numpy.polynomial.hermite_e.hermesub"), [`hermemulx`](numpy.polynomial.hermite_e.hermemulx#numpy.polynomial.hermite_e.hermemulx "numpy.polynomial.hermite_e.hermemulx"), [`hermediv`](numpy.polynomial.hermite_e.hermediv#numpy.polynomial.hermite_e.hermediv "numpy.polynomial.hermite_e.hermediv"), [`hermepow`](numpy.polynomial.hermite_e.hermepow#numpy.polynomial.hermite_e.hermepow "numpy.polynomial.hermite_e.hermepow") #### Notes In general, the (polynomial) product of two C-series results in terms that are not in the Hermite polynomial basis set. Thus, to express the product as a Hermite series, it is necessary to “reproject” the product onto said basis set, which may produce “unintuitive” (but correct) results; see Examples section below. #### Examples >>> from numpy.polynomial.hermite_e import hermemul >>> hermemul([1, 2, 3], [0, 1, 2]) array([14., 15., 28., 7., 6.]) # numpy.polynomial.hermite_e.hermemulx polynomial.hermite_e.hermemulx(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L394-L444) Multiply a Hermite series by x. Multiply the Hermite series `c` by x, where x is the independent variable. Parameters: **c** array_like 1-D array of Hermite series coefficients ordered from low to high. Returns: **out** ndarray Array representing the result of the multiplication. 
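Equivalently, the returned coefficients represent `x` times the original series, which can be verified by evaluation:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermemulx, hermeval

c = [1, 2, 3]
cx = hermemulx(c)            # coefficients of x * (He_0 + 2*He_1 + 3*He_2)
x = np.linspace(-1.0, 1.0, 5)
assert np.allclose(hermeval(x, cx), x * hermeval(x, c))
```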
See also [`hermeadd`](numpy.polynomial.hermite_e.hermeadd#numpy.polynomial.hermite_e.hermeadd "numpy.polynomial.hermite_e.hermeadd"), [`hermesub`](numpy.polynomial.hermite_e.hermesub#numpy.polynomial.hermite_e.hermesub "numpy.polynomial.hermite_e.hermesub"), [`hermemul`](numpy.polynomial.hermite_e.hermemul#numpy.polynomial.hermite_e.hermemul "numpy.polynomial.hermite_e.hermemul"), [`hermediv`](numpy.polynomial.hermite_e.hermediv#numpy.polynomial.hermite_e.hermediv "numpy.polynomial.hermite_e.hermediv"), [`hermepow`](numpy.polynomial.hermite_e.hermepow#numpy.polynomial.hermite_e.hermepow "numpy.polynomial.hermite_e.hermepow") #### Notes The multiplication uses the recursion relationship for Hermite polynomials in the form \\[xP_i(x) = P_{i + 1}(x) + iP_{i - 1}(x)\\] #### Examples >>> from numpy.polynomial.hermite_e import hermemulx >>> hermemulx([1, 2, 3]) array([2., 7., 2., 3.]) # numpy.polynomial.hermite_e.hermeone polynomial.hermite_e.hermeone _= array([1])_ The HermiteE series representing the constant 1, stored as a plain ndarray holding the single coefficient `[1]`. # numpy.polynomial.hermite_e.hermepow polynomial.hermite_e.hermepow(_c_ , _pow_ , _maxpower =16_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L559-L593) Raise a Hermite series to a power. Returns the Hermite series `c` raised to the power `pow`. The argument `c` is a sequence of coefficients ordered from low to high, i.e., [1,2,3] is the series `P_0 + 2*P_1 + 3*P_2`. Parameters: **c** array_like 1-D array of Hermite series coefficients ordered from low to high.
**pow** integer Power to which the series will be raised **maxpower** integer, optional Maximum power allowed. This is mainly to limit growth of the series to unmanageable size. Default is 16 Returns: **coef** ndarray Hermite series of power. See also [`hermeadd`](numpy.polynomial.hermite_e.hermeadd#numpy.polynomial.hermite_e.hermeadd "numpy.polynomial.hermite_e.hermeadd"), [`hermesub`](numpy.polynomial.hermite_e.hermesub#numpy.polynomial.hermite_e.hermesub "numpy.polynomial.hermite_e.hermesub"), [`hermemulx`](numpy.polynomial.hermite_e.hermemulx#numpy.polynomial.hermite_e.hermemulx "numpy.polynomial.hermite_e.hermemulx"), [`hermemul`](numpy.polynomial.hermite_e.hermemul#numpy.polynomial.hermite_e.hermemul "numpy.polynomial.hermite_e.hermemul"), [`hermediv`](numpy.polynomial.hermite_e.hermediv#numpy.polynomial.hermite_e.hermediv "numpy.polynomial.hermite_e.hermediv") #### Examples >>> from numpy.polynomial.hermite_e import hermepow >>> hermepow([1, 2, 3], 2) array([23., 28., 46., 12., 9.]) # numpy.polynomial.hermite_e.hermeroots polynomial.hermite_e.hermeroots(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L1405-L1466) Compute the roots of a HermiteE series. Return the roots (a.k.a. “zeros”) of the polynomial \\[p(x) = \sum_i c[i] * He_i(x).\\] Parameters: **c** 1-D array_like 1-D array of coefficients. Returns: **out** ndarray Array of the roots of the series. If all the roots are real, then `out` is also real, otherwise it is complex. 
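A returned root can always be checked by substituting it back into the series with `hermeval`; a small sketch using the series `2*He_1 + He_3 = x**3 - x`:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeroots, hermeval

c = [0.0, 2.0, 0.0, 1.0]      # 2*He_1 + He_3 = x**3 - x
r = hermeroots(c)
assert np.allclose(np.sort(r), [-1.0, 0.0, 1.0])
# The series vanishes (to roundoff) at every returned root.
assert np.allclose(hermeval(r, c), 0.0)
```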
See also [`numpy.polynomial.polynomial.polyroots`](numpy.polynomial.polynomial.polyroots#numpy.polynomial.polynomial.polyroots "numpy.polynomial.polynomial.polyroots") [`numpy.polynomial.legendre.legroots`](numpy.polynomial.legendre.legroots#numpy.polynomial.legendre.legroots "numpy.polynomial.legendre.legroots") [`numpy.polynomial.laguerre.lagroots`](numpy.polynomial.laguerre.lagroots#numpy.polynomial.laguerre.lagroots "numpy.polynomial.laguerre.lagroots") [`numpy.polynomial.hermite.hermroots`](numpy.polynomial.hermite.hermroots#numpy.polynomial.hermite.hermroots "numpy.polynomial.hermite.hermroots") [`numpy.polynomial.chebyshev.chebroots`](numpy.polynomial.chebyshev.chebroots#numpy.polynomial.chebyshev.chebroots "numpy.polynomial.chebyshev.chebroots") #### Notes The root estimates are obtained as the eigenvalues of the companion matrix. Roots far from the origin of the complex plane may have large errors due to the numerical instability of the series for such values. Roots with multiplicity greater than 1 will also show larger errors as the value of the series near such points is relatively insensitive to errors in the roots. Isolated roots near the origin can be improved by a few iterations of Newton’s method. The HermiteE series basis polynomials aren’t powers of `x` so the results of this function may seem unintuitive. #### Examples >>> from numpy.polynomial.hermite_e import hermeroots, hermefromroots >>> coef = hermefromroots([-1, 0, 1]) >>> coef array([0., 2., 0., 1.]) >>> hermeroots(coef) array([-1., 0., 1.]) # may vary # numpy.polynomial.hermite_e.hermesub polynomial.hermite_e.hermesub(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L354-L391) Subtract one Hermite series from another. Returns the difference of two Hermite series `c1` - `c2`. The sequences of coefficients are from lowest order term to highest, i.e., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`.
Parameters: **c1, c2** array_like 1-D arrays of Hermite series coefficients ordered from low to high. Returns: **out** ndarray Of Hermite series coefficients representing their difference. See also [`hermeadd`](numpy.polynomial.hermite_e.hermeadd#numpy.polynomial.hermite_e.hermeadd "numpy.polynomial.hermite_e.hermeadd"), [`hermemulx`](numpy.polynomial.hermite_e.hermemulx#numpy.polynomial.hermite_e.hermemulx "numpy.polynomial.hermite_e.hermemulx"), [`hermemul`](numpy.polynomial.hermite_e.hermemul#numpy.polynomial.hermite_e.hermemul "numpy.polynomial.hermite_e.hermemul"), [`hermediv`](numpy.polynomial.hermite_e.hermediv#numpy.polynomial.hermite_e.hermediv "numpy.polynomial.hermite_e.hermediv"), [`hermepow`](numpy.polynomial.hermite_e.hermepow#numpy.polynomial.hermite_e.hermepow "numpy.polynomial.hermite_e.hermepow") #### Notes Unlike multiplication, division, etc., the difference of two Hermite series is a Hermite series (without having to “reproject” the result onto the basis set) so subtraction, just like that of “standard” polynomials, is simply “component-wise.” #### Examples >>> from numpy.polynomial.hermite_e import hermesub >>> hermesub([1, 2, 3, 4], [1, 2, 3]) array([0., 0., 0., 4.]) # numpy.polynomial.hermite_e.hermetrim polynomial.hermite_e.hermetrim(_c_ , _tol =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polyutils.py#L144-L192) Remove “small” “trailing” coefficients from a polynomial. “Small” means “small in absolute value” and is controlled by the parameter `tol`; “trailing” means highest order coefficient(s), e.g., in `[0, 1, 1, 0, 0]` (which represents `0 + x + x**2 + 0*x**3 + 0*x**4`) both the 3-rd and 4-th order coefficients would be “trimmed.” Parameters: **c** array_like 1-d array of coefficients, ordered from lowest order to highest. **tol** number, optional Trailing (i.e., highest order) elements with absolute value less than or equal to `tol` (default value is zero) are removed. 
Returns: **trimmed** ndarray 1-d array with trailing zeros removed. If the resulting series would be empty, a series containing a single zero is returned. Raises: ValueError If `tol` < 0 #### Examples >>> from numpy.polynomial import polyutils as pu >>> pu.trimcoef((0,0,3,0,5,0,0)) array([0., 0., 3., 0., 5.]) >>> pu.trimcoef((0,0,1e-3,0,1e-5,0,0),1e-3) # item == tol is trimmed array([0.]) >>> i = complex(0,1) # works for complex >>> pu.trimcoef((3e-4,1e-3*(1-i),5e-4,2e-5*(1+i)), 1e-3) array([0.0003+0.j , 0.001 -0.001j]) # numpy.polynomial.hermite_e.hermeval polynomial.hermite_e.hermeval(_x_ , _c_ , _tensor =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L797-L887) Evaluate a HermiteE series at points x. If `c` is of length `n + 1`, this function returns the value: \\[p(x) = c_0 * He_0(x) + c_1 * He_1(x) + ... + c_n * He_n(x)\\] The parameter `x` is converted to an array only if it is a tuple or a list, otherwise it is treated as a scalar. In either case, either `x` or its elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` is a 1-D array, then `p(x)` will have the same shape as `x`. If `c` is multidimensional, then the shape of the result depends on the value of `tensor`. If `tensor` is true the shape will be c.shape[1:] + x.shape. If `tensor` is false the shape will be c.shape[1:]. Note that scalars have shape (). Trailing zeros in the coefficients will be used in the evaluation, so they should be avoided if efficiency is a concern. Parameters: **x** array_like, compatible object If `x` is a list or tuple, it is converted to an ndarray, otherwise it is left unchanged and treated as a scalar. In either case, `x` or its elements must support addition and multiplication with themselves and with the elements of `c`. **c** array_like Array of coefficients ordered so that the coefficients for terms of degree n are contained in c[n].
If `c` is multidimensional the remaining indices enumerate multiple polynomials. In the two dimensional case the coefficients may be thought of as stored in the columns of `c`. **tensor** boolean, optional If True, the shape of the coefficient array is extended with ones on the right, one for each dimension of `x`. Scalars have dimension 0 for this action. The result is that every column of coefficients in `c` is evaluated for every element of `x`. If False, `x` is broadcast over the columns of `c` for the evaluation. This keyword is useful when `c` is multidimensional. The default value is True. Returns: **values** ndarray, algebra_like The shape of the return value is described above. See also [`hermeval2d`](numpy.polynomial.hermite_e.hermeval2d#numpy.polynomial.hermite_e.hermeval2d "numpy.polynomial.hermite_e.hermeval2d"), [`hermegrid2d`](numpy.polynomial.hermite_e.hermegrid2d#numpy.polynomial.hermite_e.hermegrid2d "numpy.polynomial.hermite_e.hermegrid2d"), [`hermeval3d`](numpy.polynomial.hermite_e.hermeval3d#numpy.polynomial.hermite_e.hermeval3d "numpy.polynomial.hermite_e.hermeval3d"), [`hermegrid3d`](numpy.polynomial.hermite_e.hermegrid3d#numpy.polynomial.hermite_e.hermegrid3d "numpy.polynomial.hermite_e.hermegrid3d") #### Notes The evaluation uses Clenshaw recursion, aka synthetic division. #### Examples >>> from numpy.polynomial.hermite_e import hermeval >>> coef = [1,2,3] >>> hermeval(1, coef) 3.0 >>> hermeval([[1,2],[3,4]], coef) array([[ 3., 14.], [31., 54.]]) # numpy.polynomial.hermite_e.hermeval2d polynomial.hermite_e.hermeval2d(_x_ , _y_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L890-L930) Evaluate a 2-D HermiteE series at points (x, y). This function returns the values: \\[p(x,y) = \sum_{i,j} c_{i,j} * He_i(x) * He_j(y)\\] The parameters `x` and `y` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars, and they must have the same shape after conversion.
In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` is a 1-D array, a one is implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape. Parameters: **x, y** array_like, compatible objects The two dimensional series is evaluated at the points `(x, y)`, where `x` and `y` must have the same shape. If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns: **values** ndarray, compatible object The values of the two dimensional polynomial at points formed with pairs of corresponding values from `x` and `y`. See also [`hermeval`](numpy.polynomial.hermite_e.hermeval#numpy.polynomial.hermite_e.hermeval "numpy.polynomial.hermite_e.hermeval"), [`hermegrid2d`](numpy.polynomial.hermite_e.hermegrid2d#numpy.polynomial.hermite_e.hermegrid2d "numpy.polynomial.hermite_e.hermegrid2d"), [`hermeval3d`](numpy.polynomial.hermite_e.hermeval3d#numpy.polynomial.hermite_e.hermeval3d "numpy.polynomial.hermite_e.hermeval3d"), [`hermegrid3d`](numpy.polynomial.hermite_e.hermegrid3d#numpy.polynomial.hermite_e.hermegrid3d "numpy.polynomial.hermite_e.hermegrid3d") # numpy.polynomial.hermite_e.hermeval3d polynomial.hermite_e.hermeval3d(_x_ , _y_ , _z_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L980-L1022) Evaluate a 3-D Hermite_e series at points (x, y, z).
This function returns the values: \\[p(x,y,z) = \sum_{i,j,k} c_{i,j,k} * He_i(x) * He_j(y) * He_k(z)\\] The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars, and they must have the same shape after conversion. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than 3 dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape. Parameters: **x, y, z** array_like, compatible object The three dimensional series is evaluated at the points `(x, y, z)`, where `x`, `y`, and `z` must have the same shape. If any of `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j,k is contained in `c[i,j,k]`. If `c` has dimension greater than 3 the remaining indices enumerate multiple sets of coefficients. Returns: **values** ndarray, compatible object The values of the multidimensional polynomial on points formed with triples of corresponding values from `x`, `y`, and `z`.
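For example (an illustrative sketch added here, not part of the original reference), a 2×2×2 coefficient array of ones gives the series \\((1 + x)(1 + y)(1 + z)\\), since \\(He_0(t) = 1\\) and \\(He_1(t) = t\\):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval3d

# All eight coefficients equal to 1 => p(x, y, z) = (1 + x)(1 + y)(1 + z)
c = np.ones((2, 2, 2))
val = hermeval3d(1.0, 2.0, 3.0, c)  # (1 + 1) * (1 + 2) * (1 + 3) = 24.0
```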
See also [`hermeval`](numpy.polynomial.hermite_e.hermeval#numpy.polynomial.hermite_e.hermeval "numpy.polynomial.hermite_e.hermeval"), [`hermeval2d`](numpy.polynomial.hermite_e.hermeval2d#numpy.polynomial.hermite_e.hermeval2d "numpy.polynomial.hermite_e.hermeval2d"), [`hermegrid2d`](numpy.polynomial.hermite_e.hermegrid2d#numpy.polynomial.hermite_e.hermegrid2d "numpy.polynomial.hermite_e.hermegrid2d"), [`hermegrid3d`](numpy.polynomial.hermite_e.hermegrid3d#numpy.polynomial.hermite_e.hermegrid3d "numpy.polynomial.hermite_e.hermegrid3d") # numpy.polynomial.hermite_e.hermevander polynomial.hermite_e.hermevander(_x_ , _deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L1075-L1133) Pseudo-Vandermonde matrix of given degree. Returns the pseudo-Vandermonde matrix of degree `deg` and sample points `x`. The pseudo-Vandermonde matrix is defined by \\[V[..., i] = He_i(x),\\] where `0 <= i <= deg`. The leading indices of `V` index the elements of `x` and the last index is the degree of the HermiteE polynomial. If `c` is a 1-D array of coefficients of length `n + 1` and `V` is the array `V = hermevander(x, n)`, then `np.dot(V, c)` and `hermeval(x, c)` are the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of HermiteE series of the same degree and sample points. Parameters: **x** array_like Array of points. The dtype is converted to float64 or complex128 depending on whether any of the elements are complex. If `x` is scalar it is converted to a 1-D array. **deg** int Degree of the resulting matrix. Returns: **vander** ndarray The pseudo-Vandermonde matrix. The shape of the returned matrix is `x.shape + (deg + 1,)`, where the last index is the degree of the corresponding HermiteE polynomial. The dtype will be the same as the converted `x`.
#### Examples >>> import numpy as np >>> from numpy.polynomial.hermite_e import hermevander >>> x = np.array([-1, 0, 1]) >>> hermevander(x, 3) array([[ 1., -1., 0., 2.], [ 1., 0., -1., -0.], [ 1., 1., 0., -2.]]) # numpy.polynomial.hermite_e.hermevander2d polynomial.hermite_e.hermevander2d(_x_ , _y_ , _deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L1136-L1180) Pseudo-Vandermonde matrix of given degrees. Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y)`. The pseudo-Vandermonde matrix is defined by \\[V[..., (deg[1] + 1)*i + j] = He_i(x) * He_j(y),\\] where `0 <= i <= deg[0]` and `0 <= j <= deg[1]`. The leading indices of `V` index the points `(x, y)` and the last index encodes the degrees of the HermiteE polynomials. If `V = hermevander2d(x, y, [xdeg, ydeg])`, then the columns of `V` correspond to the elements of a 2-D coefficient array `c` of shape (xdeg + 1, ydeg + 1) in the order \\[c_{00}, c_{01}, c_{02} ... , c_{10}, c_{11}, c_{12} ...\\] and `np.dot(V, c.flat)` and `hermeval2d(x, y, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 2-D HermiteE series of the same degrees and sample points. Parameters: **x, y** array_like Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. **deg** list of ints List of maximum degrees of the form [x_deg, y_deg]. Returns: **vander2d** ndarray The shape of the returned matrix is `x.shape + (order,)`, where \\(order = (deg[0]+1)*(deg[1]+1)\\). The dtype will be the same as the converted `x` and `y`. 
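The stated equivalence between `np.dot(V, c.flat)` and `hermeval2d(x, y, c)` can be checked directly; a minimal sketch:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander2d, hermeval2d

x = np.array([0.5, -1.0])
y = np.array([0.25, 2.0])
c = np.arange(4.0).reshape(2, 2)   # 2-D coefficient array for degrees [1, 1]

V = hermevander2d(x, y, [1, 1])    # shape (2, (1+1)*(1+1)) == (2, 4)
direct = hermeval2d(x, y, c)       # direct evaluation of the 2-D series
via_vander = V @ c.ravel()         # matrix product with flattened coefficients
```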
See also [`hermevander`](numpy.polynomial.hermite_e.hermevander#numpy.polynomial.hermite_e.hermevander "numpy.polynomial.hermite_e.hermevander"), [`hermevander3d`](numpy.polynomial.hermite_e.hermevander3d#numpy.polynomial.hermite_e.hermevander3d "numpy.polynomial.hermite_e.hermevander3d"), [`hermeval2d`](numpy.polynomial.hermite_e.hermeval2d#numpy.polynomial.hermite_e.hermeval2d "numpy.polynomial.hermite_e.hermeval2d"), [`hermeval3d`](numpy.polynomial.hermite_e.hermeval3d#numpy.polynomial.hermite_e.hermeval3d "numpy.polynomial.hermite_e.hermeval3d") # numpy.polynomial.hermite_e.hermevander3d polynomial.hermite_e.hermevander3d(_x_ , _y_ , _z_ , _deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L1183-L1228) Pseudo-Vandermonde matrix of given degrees. Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y, z)`. If `l`, `m`, `n` are the given degrees in `x`, `y`, `z`, then the pseudo-Vandermonde matrix is defined by \\[V[..., (m+1)(n+1)i + (n+1)j + k] = He_i(x)*He_j(y)*He_k(z),\\] where `0 <= i <= l`, `0 <= j <= m`, and `0 <= k <= n`. The leading indices of `V` index the points `(x, y, z)` and the last index encodes the degrees of the HermiteE polynomials. If `V = hermevander3d(x, y, z, [xdeg, ydeg, zdeg])`, then the columns of `V` correspond to the elements of a 3-D coefficient array `c` of shape (xdeg + 1, ydeg + 1, zdeg + 1) in the order \\[c_{000}, c_{001}, c_{002},... , c_{010}, c_{011}, c_{012},...\\] and `np.dot(V, c.flat)` and `hermeval3d(x, y, z, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 3-D HermiteE series of the same degrees and sample points. Parameters: **x, y, z** array_like Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays.
**deg** list of ints List of maximum degrees of the form [x_deg, y_deg, z_deg]. Returns: **vander3d** ndarray The shape of the returned matrix is `x.shape + (order,)`, where \\(order = (deg[0]+1)*(deg[1]+1)*(deg[2]+1)\\). The dtype will be the same as the converted `x`, `y`, and `z`. See also [`hermevander`](numpy.polynomial.hermite_e.hermevander#numpy.polynomial.hermite_e.hermevander "numpy.polynomial.hermite_e.hermevander"), [`hermevander2d`](numpy.polynomial.hermite_e.hermevander2d#numpy.polynomial.hermite_e.hermevander2d "numpy.polynomial.hermite_e.hermevander2d"), [`hermeval2d`](numpy.polynomial.hermite_e.hermeval2d#numpy.polynomial.hermite_e.hermeval2d "numpy.polynomial.hermite_e.hermeval2d"), [`hermeval3d`](numpy.polynomial.hermite_e.hermeval3d#numpy.polynomial.hermite_e.hermeval3d "numpy.polynomial.hermite_e.hermeval3d") # numpy.polynomial.hermite_e.hermeweight polynomial.hermite_e.hermeweight(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L1574-L1592) Weight function of the Hermite_e polynomials. The weight function is \\(\exp(-x^2/2)\\) and the interval of integration is \\([-\infty, \infty]\\). The HermiteE polynomials are orthogonal, but not normalized, with respect to this weight function. Parameters: **x** array_like Values at which the weight function will be computed. Returns: **w** ndarray The weight function at `x`. # numpy.polynomial.hermite_e.hermex polynomial.hermite_e.hermex _= array([0, 1])_ An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(…)`) for instantiating an array.
For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array. Parameters: **(for the __new__ method; see Notes below)** **shape** tuple of ints Shape of created array. **dtype** data-type, optional Any object that can be interpreted as a numpy data type. **buffer** object exposing buffer interface, optional Used to fill the array with data. **offset** int, optional Offset of array data in buffer. **strides** tuple of ints, optional Strides of data in memory. **order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also [`array`](numpy.array#numpy.array "numpy.array") Construct an array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Create an array, each element of which is zero. [`empty`](numpy.empty#numpy.empty "numpy.empty") Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Create a data-type. [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray") An ndarray alias [generic](https://docs.python.org/3/glossary.html#term- generic-type "\(in Python v3.13\)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). #### Notes There are two modes of creating an array using `__new__`: 1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used. 2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted. No `__init__` method is needed because the array is fully initialized after the `__new__` method. #### Examples These examples illustrate the low-level [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray. 
First mode, `buffer` is None: >>> import numpy as np >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. skip first element array([2, 3]) Attributes: **T** ndarray Transpose of the array. **data** buffer The array’s elements, in memory. **dtype** dtype object Describes the format of the elements in the array. **flags** dict Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc. **flat** numpy.flatiter object Flattened version of the array as an iterator. The iterator allows assignments, e.g., `x.flat = 3` (See [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") for assignment examples; TODO). **imag** ndarray Imaginary part of the array. **real** ndarray Real part of the array. **size** int Number of elements in the array. **itemsize** int The memory use of each array element in bytes. **nbytes** int The total number of bytes required to store the array data, i.e., `itemsize * size`. **ndim** int The array’s number of dimensions. **shape** tuple of ints Shape of the array. **strides** tuple of ints The step-size required to move from one element to the next in memory. For example, a contiguous `(3, 4)` array of type `int16` in C-order has strides `(8, 2)`. This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (`2 * 4`). **ctypes** ctypes object Class containing properties of the array needed for interaction with ctypes. **base** ndarray If the array is a view into another array, that array is its `base` (unless that array is also a view). The `base` array is where the array data is actually stored. 
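`hermex` itself is simply the two-element coefficient series for \\(He_1(x) = x\\), so evaluating it returns its argument; a quick sketch:

```python
from numpy.polynomial.hermite_e import hermex, hermeval

# hermex is the coefficient series [0, 1], i.e. 0*He_0(x) + 1*He_1(x) = x.
val = hermeval(3.0, hermex)  # evaluates He_1 at 3.0, giving 3.0
```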
# numpy.polynomial.hermite_e.hermezero polynomial.hermite_e.hermezero _= array([0])_ An ndarray constant representing the zero HermiteE series. For details of the array object itself, see the `hermex` entry above or the [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") reference. # numpy.polynomial.hermite_e.HermiteE.__call__ method polynomial.hermite_e.HermiteE.__call__(_arg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L513-L515) Call self as a function. # numpy.polynomial.hermite_e.HermiteE.basis method _classmethod_ polynomial.hermite_e.HermiteE.basis(_deg_ , _domain =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1120-L1157) Series basis polynomial of degree `deg`. Returns the series representing the basis polynomial of degree `deg`. Parameters: **deg** int Degree of the basis polynomial for the series. Must be >= 0. **domain**{None, array_like}, optional If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None. **window**{None, array_like}, optional If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None. **symbol** str, optional Symbol representing the independent variable. Default is ‘x’. Returns: **new_series** series A series with the coefficient of the `deg` term set to one and all others zero.
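For instance (a sketch under the default domain and window, which map identically), `basis(2)` produces the pure \\(He_2\\) series:

```python
from numpy.polynomial import HermiteE

he2 = HermiteE.basis(2)  # series with coef [0., 0., 1.], i.e. He_2(x) = x**2 - 1
val = he2(2.0)           # He_2(2) = 4 - 1 = 3.0
```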
# numpy.polynomial.hermite_e.HermiteE.cast method _classmethod_ polynomial.hermite_e.HermiteE.cast(_series_ , _domain =None_, _window =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1159-L1197) Convert series to series of this class. The `series` is expected to be an instance of some polynomial series of one of the types supported by the numpy.polynomial module, but could be some other class that supports the convert method. Parameters: **series** series The series instance to be converted. **domain**{None, array_like}, optional If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None. **window**{None, array_like}, optional If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None. Returns: **new_series** series A series of the same kind as the calling class and equal to `series` when evaluated. See also [`convert`](numpy.polynomial.hermite_e.hermitee.convert#numpy.polynomial.hermite_e.HermiteE.convert "numpy.polynomial.hermite_e.HermiteE.convert") similar instance method # numpy.polynomial.hermite_e.HermiteE.convert method polynomial.hermite_e.HermiteE.convert(_domain =None_, _kind =None_, _window =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L785-L820) Convert series to a different kind and/or domain and/or window. Parameters: **domain** array_like, optional The domain of the converted series. If the value is None, the default domain of `kind` is used. **kind** class, optional The polynomial series type class to which the current instance should be converted. If kind is None, then the class of the current instance is used. **window** array_like, optional The window of the converted series.
If the value is None, the default window of `kind` is used. Returns: **new_series** series The returned class can be of different type than the current instance and/or have a different domain and/or different window. #### Notes Conversion between domains and class types can result in numerically ill defined series. # numpy.polynomial.hermite_e.HermiteE.copy method polynomial.hermite_e.HermiteE.copy()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L666-L675) Return a copy. Returns: **new_series** series Copy of self. # numpy.polynomial.hermite_e.HermiteE.cutdeg method polynomial.hermite_e.HermiteE.cutdeg(_deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L710-L731) Truncate series to the given degree. Reduce the degree of the series to `deg` by discarding the high order terms. If `deg` is greater than the current degree a copy of the current series is returned. This can be useful in least squares where the coefficients of the high degree terms may be very small. Parameters: **deg** non-negative int The series is reduced to degree `deg` by discarding the high order terms. The value of `deg` must be a non-negative integer. Returns: **new_series** series New instance of series with reduced degree. # numpy.polynomial.hermite_e.HermiteE.degree method polynomial.hermite_e.HermiteE.degree()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L677-L708) The degree of the series. Returns: **degree** int Degree of the series, one less than the number of coefficients. #### Examples Create a polynomial object for `1 + 7*x + 4*x**2`: >>> poly = np.polynomial.Polynomial([1, 7, 4]) >>> print(poly) 1.0 + 7.0·x + 4.0·x² >>> poly.degree() 2 Note that this method does not check for non-zero coefficients. 
You must trim the polynomial to remove any trailing zeroes: >>> poly = np.polynomial.Polynomial([1, 7, 0]) >>> print(poly) 1.0 + 7.0·x + 0.0·x² >>> poly.degree() 2 >>> poly.trim().degree() 1 # numpy.polynomial.hermite_e.HermiteE.deriv method polynomial.hermite_e.HermiteE.deriv(_m =1_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L884-L904) Differentiate. Return a series instance that is the derivative of the current series. Parameters: **m** non-negative int Find the derivative of order `m`. Returns: **new_series** series A new series representing the derivative. The domain is the same as the domain of the differentiated series. # numpy.polynomial.hermite_e.HermiteE.domain attribute polynomial.hermite_e.HermiteE.domain _= array([-1., 1.])_ # numpy.polynomial.hermite_e.HermiteE.fit method _classmethod_ polynomial.hermite_e.HermiteE.fit(_x_ , _y_ , _deg_ , _domain =None_, _rcond =None_, _full =False_, _w =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L951-L1040) Least squares fit to data. Return a series instance that is the least squares fit to the data `y` sampled at `x`. The domain of the returned instance can be specified and this will often result in a superior fit with less chance of ill conditioning. Parameters: **x** array_like, shape (M,) x-coordinates of the M sample points `(x[i], y[i])`. **y** array_like, shape (M,) y-coordinates of the M sample points `(x[i], y[i])`. **deg** int or 1-D array_like Degree(s) of the fitting polynomials. If `deg` is a single integer all terms up to and including the `deg`’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead. **domain**{None, [beg, end], []}, optional Domain to use for the returned series. If `None`, then a minimal domain that covers the points `x` is chosen. If `[]` the class domain is used.
The default value was the class domain in NumPy 1.4 and `None` in later versions. The `[]` option was added in NumPy 1.5.0. **rcond** float, optional Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is `len(x)*eps`, where eps is the relative precision of the float type, about 2e-16 in most cases. **full** bool, optional Switch determining nature of return value. When it is False (the default) just the coefficients are returned, when True diagnostic information from the singular value decomposition is also returned. **w** array_like, shape (M,), optional Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse- variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None. **window**{[beg, end]}, optional Window to use for the returned series. The default value is the default class domain. **symbol** str, optional Symbol representing the independent variable. Default is ‘x’. Returns: **new_series** series A series that represents the least squares fit to the data and has the domain and window specified in the call. If the coefficients for the unscaled and unshifted basis polynomials are of interest, do `new_series.convert().coef`. **[resid, rank, sv, rcond]** list These values are only returned if `full == True` * resid – sum of squared residuals of the least squares fit * rank – the numerical rank of the scaled Vandermonde matrix * sv – singular values of the scaled Vandermonde matrix * rcond – value of `rcond`. For more details, see [`linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq").
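A minimal usage sketch (exact, noise-free data, so the least squares fit recovers the series): since \\(x^2 = He_2(x) + He_0(x)\\), fitting a sampled parabola should give coefficients close to `[1, 0, 1]` in the unscaled basis:

```python
import numpy as np
from numpy.polynomial import HermiteE

x = np.linspace(-1.0, 1.0, 51)
y = x**2                           # exact quadratic, no noise
series = HermiteE.fit(x, y, deg=2)
coef = series.convert().coef       # coefficients in the unscaled, unshifted basis
```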
# numpy.polynomial.hermite_e.HermiteE.fromroots method _classmethod_ polynomial.hermite_e.HermiteE.fromroots(_roots_ , _domain =[]_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1042-L1083) Return series instance that has the specified roots. Returns a series representing the product `(x - r[0])*(x - r[1])*...*(x - r[n-1])`, where `r` is a list of roots. Parameters: **roots** array_like List of roots. **domain**{[], None, array_like}, optional Domain for the resulting series. If None the domain is the interval from the smallest root to the largest. If [] the domain is the class domain. The default is []. **window**{None, array_like}, optional Window for the returned series. If None the class window is used. The default is None. **symbol** str, optional Symbol representing the independent variable. Default is ‘x’. Returns: **new_series** series Series with the specified roots. # numpy.polynomial.hermite_e.HermiteE.has_samecoef method polynomial.hermite_e.HermiteE.has_samecoef(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L188-L207) Check if coefficients match. Parameters: **other** class instance The other class must have the `coef` attribute. Returns: **bool** boolean True if the coefficients are the same, False otherwise. # numpy.polynomial.hermite_e.HermiteE.has_samedomain method polynomial.hermite_e.HermiteE.has_samedomain(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L209-L223) Check if domains match. Parameters: **other** class instance The other class must have the `domain` attribute. Returns: **bool** boolean True if the domains are the same, False otherwise. # numpy.polynomial.hermite_e.HermiteE.has_sametype method polynomial.hermite_e.HermiteE.has_sametype(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L241-L255) Check if types match. 
Parameters: **other** object Class instance. Returns: **bool** boolean True if other is same class as self # numpy.polynomial.hermite_e.HermiteE.has_samewindow method polynomial.hermite_e.HermiteE.has_samewindow(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L225-L239) Check if windows match. Parameters: **other** class instance The other class must have the `window` attribute. Returns: **bool** boolean True if the windows are the same, False otherwise. # numpy.polynomial.hermite_e.HermiteE _class_ numpy.polynomial.hermite_e.HermiteE(_coef_ , _domain =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L1599-L1642) A HermiteE series class. The HermiteE class provides the standard Python numerical methods ‘+’, ‘-’, ‘*’, ‘//’, ‘%’, ‘divmod’, ‘**’, and ‘()’ as well as the attributes and methods listed below. Parameters: **coef** array_like HermiteE coefficients in order of increasing degree, i.e., `(1, 2, 3)` gives `1*He_0(x) + 2*He_1(x) + 3*He_2(x)`. **domain**(2,) array_like, optional Domain to use. The interval `[domain[0], domain[1]]` is mapped to the interval `[window[0], window[1]]` by shifting and scaling. The default value is [-1., 1.]. **window**(2,) array_like, optional Window, see [`domain`](numpy.polynomial.hermite_e.hermitee.domain#numpy.polynomial.hermite_e.HermiteE.domain "numpy.polynomial.hermite_e.HermiteE.domain") for its use. The default value is [-1., 1.]. **symbol** str, optional Symbol used to represent the independent variable in string representations of the polynomial expression, e.g. for printing. The symbol must be a valid Python identifier. Default value is ‘x’. New in version 1.24. Attributes: **symbol** #### Methods [`__call__`](numpy.polynomial.hermite_e.hermitee.__call__#numpy.polynomial.hermite_e.HermiteE.__call__ "numpy.polynomial.hermite_e.HermiteE.__call__")(arg) | Call self as a function.
---|--- [`basis`](numpy.polynomial.hermite_e.hermitee.basis#numpy.polynomial.hermite_e.HermiteE.basis "numpy.polynomial.hermite_e.HermiteE.basis")(deg[, domain, window, symbol]) | Series basis polynomial of degree `deg`. [`cast`](numpy.polynomial.hermite_e.hermitee.cast#numpy.polynomial.hermite_e.HermiteE.cast "numpy.polynomial.hermite_e.HermiteE.cast")(series[, domain, window]) | Convert series to series of this class. [`convert`](numpy.polynomial.hermite_e.hermitee.convert#numpy.polynomial.hermite_e.HermiteE.convert "numpy.polynomial.hermite_e.HermiteE.convert")([domain, kind, window]) | Convert series to a different kind and/or domain and/or window. [`copy`](numpy.polynomial.hermite_e.hermitee.copy#numpy.polynomial.hermite_e.HermiteE.copy "numpy.polynomial.hermite_e.HermiteE.copy")() | Return a copy. [`cutdeg`](numpy.polynomial.hermite_e.hermitee.cutdeg#numpy.polynomial.hermite_e.HermiteE.cutdeg "numpy.polynomial.hermite_e.HermiteE.cutdeg")(deg) | Truncate series to the given degree. [`degree`](numpy.polynomial.hermite_e.hermitee.degree#numpy.polynomial.hermite_e.HermiteE.degree "numpy.polynomial.hermite_e.HermiteE.degree")() | The degree of the series. [`deriv`](numpy.polynomial.hermite_e.hermitee.deriv#numpy.polynomial.hermite_e.HermiteE.deriv "numpy.polynomial.hermite_e.HermiteE.deriv")([m]) | Differentiate. [`fit`](numpy.polynomial.hermite_e.hermitee.fit#numpy.polynomial.hermite_e.HermiteE.fit "numpy.polynomial.hermite_e.HermiteE.fit")(x, y, deg[, domain, rcond, full, w, ...]) | Least squares fit to data. [`fromroots`](numpy.polynomial.hermite_e.hermitee.fromroots#numpy.polynomial.hermite_e.HermiteE.fromroots "numpy.polynomial.hermite_e.HermiteE.fromroots")(roots[, domain, window, symbol]) | Return series instance that has the specified roots. [`has_samecoef`](numpy.polynomial.hermite_e.hermitee.has_samecoef#numpy.polynomial.hermite_e.HermiteE.has_samecoef "numpy.polynomial.hermite_e.HermiteE.has_samecoef")(other) | Check if coefficients match. 
[`has_samedomain`](numpy.polynomial.hermite_e.hermitee.has_samedomain#numpy.polynomial.hermite_e.HermiteE.has_samedomain "numpy.polynomial.hermite_e.HermiteE.has_samedomain")(other) | Check if domains match. [`has_sametype`](numpy.polynomial.hermite_e.hermitee.has_sametype#numpy.polynomial.hermite_e.HermiteE.has_sametype "numpy.polynomial.hermite_e.HermiteE.has_sametype")(other) | Check if types match. [`has_samewindow`](numpy.polynomial.hermite_e.hermitee.has_samewindow#numpy.polynomial.hermite_e.HermiteE.has_samewindow "numpy.polynomial.hermite_e.HermiteE.has_samewindow")(other) | Check if windows match. [`identity`](numpy.polynomial.hermite_e.hermitee.identity#numpy.polynomial.hermite_e.HermiteE.identity "numpy.polynomial.hermite_e.HermiteE.identity")([domain, window, symbol]) | Identity function. [`integ`](numpy.polynomial.hermite_e.hermitee.integ#numpy.polynomial.hermite_e.HermiteE.integ "numpy.polynomial.hermite_e.HermiteE.integ")([m, k, lbnd]) | Integrate. [`linspace`](numpy.polynomial.hermite_e.hermitee.linspace#numpy.polynomial.hermite_e.HermiteE.linspace "numpy.polynomial.hermite_e.HermiteE.linspace")([n, domain]) | Return x, y values at equally spaced points in domain. [`mapparms`](numpy.polynomial.hermite_e.hermitee.mapparms#numpy.polynomial.hermite_e.HermiteE.mapparms "numpy.polynomial.hermite_e.HermiteE.mapparms")() | Return the mapping parameters. [`roots`](numpy.polynomial.hermite_e.hermitee.roots#numpy.polynomial.hermite_e.HermiteE.roots "numpy.polynomial.hermite_e.HermiteE.roots")() | Return the roots of the series polynomial. [`trim`](numpy.polynomial.hermite_e.hermitee.trim#numpy.polynomial.hermite_e.HermiteE.trim "numpy.polynomial.hermite_e.HermiteE.trim")([tol]) | Remove trailing coefficients [`truncate`](numpy.polynomial.hermite_e.hermitee.truncate#numpy.polynomial.hermite_e.HermiteE.truncate "numpy.polynomial.hermite_e.HermiteE.truncate")(size) | Truncate series to length `size`. 
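As a quick orientation to the class listed above, the following sketch (not taken from the NumPy docs themselves) constructs a `HermiteE` instance, evaluates it, and uses the coefficient-wise arithmetic:

```python
import numpy as np
from numpy.polynomial.hermite_e import HermiteE

# (1, 2, 3) represents 1*He_0(x) + 2*He_1(x) + 3*He_2(x)
p = HermiteE([1, 2, 3])

# He_0(x) = 1, He_1(x) = x, He_2(x) = x**2 - 1,
# so p(0) = 1 + 2*0 + 3*(-1) = -2
print(p(0.0))

# Addition of two series in the same basis is coefficient-wise
q = p + p
print(q.coef)  # array([2., 4., 6.])
```

With the default `domain` and `window` both equal to `[-1., 1.]`, the internal variable map is the identity, so `p(x)` evaluates the series directly at `x`.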
# numpy.polynomial.hermite_e.HermiteE.identity method _classmethod_ polynomial.hermite_e.HermiteE.identity(_domain =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1085-L1118) Identity function. If `p` is the returned series, then `p(x) == x` for all values of x. Parameters: **domain**{None, array_like}, optional If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None. **window**{None, array_like}, optional If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None. **symbol** str, optional Symbol representing the independent variable. Default is ‘x’. Returns: **new_series** series Series representing the identity. # numpy.polynomial.hermite_e.HermiteE.integ method polynomial.hermite_e.HermiteE.integ(_m =1_, _k =[]_, _lbnd =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L851-L882) Integrate. Return a series instance that is the definite integral of the current series. Parameters: **m** non-negative int The number of integrations to perform. **k** array_like Integration constants. The first constant is applied to the first integration, the second to the second, and so on. The list of values must be less than or equal to `m` in length and any missing values are set to zero. **lbnd** scalar The lower bound of the definite integral. Returns: **new_series** series A new series representing the integral. The domain is the same as the domain of the integrated series.
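A small sketch (not from the original docs) illustrating the two methods above: `identity()` returns a series that reproduces its input, and differentiating the result of `integ()` recovers the original coefficients:

```python
import numpy as np
from numpy.polynomial.hermite_e import HermiteE

# identity(): the returned series p satisfies p(x) == x
ident = HermiteE.identity()
print(ident(3.5))  # 3.5

# integ() and deriv() are inverse operations on the coefficients
p = HermiteE([1, 2, 3])
P = p.integ(m=1, k=[0])  # one antiderivative, integration constant 0
print(np.allclose(P.deriv(1).coef, p.coef))  # True
```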
# numpy.polynomial.hermite_e.HermiteE.linspace method polynomial.hermite_e.HermiteE.linspace(_n =100_, _domain =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L921-L949) Return x, y values at equally spaced points in domain. Returns the x, y values at `n` linearly spaced points across the domain. Here y is the value of the polynomial at the points x. By default the domain is the same as that of the series instance. This method is intended mostly as a plotting aid. Parameters: **n** int, optional Number of point pairs to return. The default value is 100. **domain**{None, array_like}, optional If not None, the specified domain is used instead of that of the calling instance. It should be of the form `[beg, end]`. The default is None, in which case the class domain is used. Returns: **x, y** ndarray x is equal to linspace(self.domain[0], self.domain[1], n) and y is the series evaluated at each element of x. # numpy.polynomial.hermite_e.HermiteE.mapparms method polynomial.hermite_e.HermiteE.mapparms()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L822-L849) Return the mapping parameters. The returned values define a linear map `off + scl*x` that is applied to the input arguments before the series is evaluated. The map depends on the `domain` and `window`; if the current `domain` is equal to the `window` the resulting map is the identity. If the coefficients of the series instance are to be used by themselves outside this class, then the linear function must be substituted for the `x` in the standard representation of the base polynomials. Returns: **off, scl** float or complex The mapping function is defined by `off + scl*x`.
#### Notes If the current domain is the interval `[l1, r1]` and the window is `[l2, r2]`, then the linear mapping function `L` is defined by the equations: L(l1) = l2 L(r1) = r2 # numpy.polynomial.hermite_e.HermiteE.roots method polynomial.hermite_e.HermiteE.roots()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L906-L919) Return the roots of the series polynomial. Compute the roots for the series. Note that the accuracy of the roots decreases the further outside the [`domain`](numpy.polynomial.hermite_e.hermitee.domain#numpy.polynomial.hermite_e.HermiteE.domain "numpy.polynomial.hermite_e.HermiteE.domain") they lie. Returns: **roots** ndarray Array containing the roots of the series. # numpy.polynomial.hermite_e.HermiteE.trim method polynomial.hermite_e.HermiteE.trim(_tol =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L733-L754) Remove trailing coefficients Remove trailing coefficients until a coefficient is reached whose absolute value is greater than `tol` or the beginning of the series is reached. If all the coefficients would be removed the series is set to `[0]`. A new series instance is returned with the new coefficients. The current instance remains unchanged. Parameters: **tol** non-negative number All trailing coefficients less than `tol` will be removed. Returns: **new_series** series New instance of series with trimmed coefficients. # numpy.polynomial.hermite_e.HermiteE.truncate method polynomial.hermite_e.HermiteE.truncate(_size_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L756-L783) Truncate series to length `size`. Reduce the series to length `size` by discarding the high degree terms. The value of `size` must be a positive integer.
This can be useful in least squares where the coefficients of the high degree terms may be very small. Parameters: **size** positive int The series is reduced to length `size` by discarding the high degree terms. The value of `size` must be a positive integer. Returns: **new_series** series New instance of series with truncated coefficients. # numpy.polynomial.hermite_e.poly2herme polynomial.hermite_e.poly2herme(_pol_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/hermite_e.py#L97-L141) Convert a polynomial to a Hermite series. Convert an array representing the coefficients of a polynomial (relative to the “standard” basis) ordered from lowest degree to highest, to an array of the coefficients of the equivalent Hermite series, ordered from lowest to highest degree. Parameters: **pol** array_like 1-D array containing the polynomial coefficients Returns: **c** ndarray 1-D array containing the coefficients of the equivalent Hermite series. See also [`herme2poly`](numpy.polynomial.hermite_e.herme2poly#numpy.polynomial.hermite_e.herme2poly "numpy.polynomial.hermite_e.herme2poly") #### Notes The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance. #### Examples >>> import numpy as np >>> from numpy.polynomial.hermite_e import poly2herme >>> poly2herme(np.arange(4)) array([ 2., 10., 2., 3.]) # numpy.polynomial.laguerre.lag2poly polynomial.laguerre.lag2poly(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L142-L194) Convert a Laguerre series to a polynomial. Convert an array representing the coefficients of a Laguerre series, ordered from lowest degree to highest, to an array of the coefficients of the equivalent polynomial (relative to the “standard” basis) ordered from lowest to highest degree.
Parameters: **c** array_like 1-D array containing the Laguerre series coefficients, ordered from lowest order term to highest. Returns: **pol** ndarray 1-D array containing the coefficients of the equivalent polynomial (relative to the “standard” basis) ordered from lowest order term to highest. See also [`poly2lag`](numpy.polynomial.laguerre.poly2lag#numpy.polynomial.laguerre.poly2lag "numpy.polynomial.laguerre.poly2lag") #### Notes The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance. #### Examples >>> from numpy.polynomial.laguerre import lag2poly >>> lag2poly([ 23., -63., 58., -18.]) array([0., 1., 2., 3.]) # numpy.polynomial.laguerre.lagadd polynomial.laguerre.lagadd(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L309-L346) Add one Laguerre series to another. Returns the sum of two Laguerre series `c1` \+ `c2`. The arguments are sequences of coefficients ordered from lowest order term to highest, i.e., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. Parameters: **c1, c2** array_like 1-D arrays of Laguerre series coefficients ordered from low to high. Returns: **out** ndarray Array representing the Laguerre series of their sum. 
See also [`lagsub`](numpy.polynomial.laguerre.lagsub#numpy.polynomial.laguerre.lagsub "numpy.polynomial.laguerre.lagsub"), [`lagmulx`](numpy.polynomial.laguerre.lagmulx#numpy.polynomial.laguerre.lagmulx "numpy.polynomial.laguerre.lagmulx"), [`lagmul`](numpy.polynomial.laguerre.lagmul#numpy.polynomial.laguerre.lagmul "numpy.polynomial.laguerre.lagmul"), [`lagdiv`](numpy.polynomial.laguerre.lagdiv#numpy.polynomial.laguerre.lagdiv "numpy.polynomial.laguerre.lagdiv"), [`lagpow`](numpy.polynomial.laguerre.lagpow#numpy.polynomial.laguerre.lagpow "numpy.polynomial.laguerre.lagpow") #### Notes Unlike multiplication, division, etc., the sum of two Laguerre series is a Laguerre series (without having to “reproject” the result onto the basis set) so addition, just like that of “standard” polynomials, is simply “component- wise.” #### Examples >>> from numpy.polynomial.laguerre import lagadd >>> lagadd([1, 2, 3], [1, 2, 3, 4]) array([2., 4., 6., 4.]) # numpy.polynomial.laguerre.lagcompanion polynomial.laguerre.lagcompanion(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L1423-L1466) Return the companion matrix of c. The usual companion matrix of the Laguerre polynomials is already symmetric when `c` is a basis Laguerre polynomial, so no scaling is applied. Parameters: **c** array_like 1-D array of Laguerre series coefficients ordered from low to high degree. Returns: **mat** ndarray Companion matrix of dimensions (deg, deg). #### Examples >>> from numpy.polynomial.laguerre import lagcompanion >>> lagcompanion([1, 2, 3]) array([[ 1. , -0.33333333], [-1. , 4.33333333]]) # numpy.polynomial.laguerre.lagder polynomial.laguerre.lagder(_c_ , _m =1_, _scl =1_, _axis =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L592-L673) Differentiate a Laguerre series. Returns the Laguerre series coefficients `c` differentiated `m` times along `axis`. 
At each iteration the result is multiplied by `scl` (the scaling factor is for use in a linear change of variable). The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series `1*L_0 + 2*L_1 + 3*L_2` while [[1,2],[1,2]] represents `1*L_0(x)*L_0(y) + 1*L_1(x)*L_0(y) + 2*L_0(x)*L_1(y) + 2*L_1(x)*L_1(y)` if axis=0 is `x` and axis=1 is `y`. Parameters: **c** array_like Array of Laguerre series coefficients. If `c` is multidimensional the different axes correspond to different variables with the degree in each axis given by the corresponding index. **m** int, optional Number of derivatives taken, must be non-negative. (Default: 1) **scl** scalar, optional Each differentiation is multiplied by `scl`. The end result is multiplication by `scl**m`. This is for use in a linear change of variable. (Default: 1) **axis** int, optional Axis over which the derivative is taken. (Default: 0). Returns: **der** ndarray Laguerre series of the derivative. See also [`lagint`](numpy.polynomial.laguerre.lagint#numpy.polynomial.laguerre.lagint "numpy.polynomial.laguerre.lagint") #### Notes In general, the result of differentiating a Laguerre series does not resemble the same operation on a power series. Thus the result of this function may be “unintuitive,” albeit correct; see Examples section below. #### Examples >>> from numpy.polynomial.laguerre import lagder >>> lagder([ 1., 1., 1., -3.]) array([1., 2., 3.]) >>> lagder([ 1., 0., 0., -4., 3.], m=2) array([1., 2., 3.]) # numpy.polynomial.laguerre.lagdiv polynomial.laguerre.lagdiv(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L509-L552) Divide one Laguerre series by another. Returns the quotient-with-remainder of two Laguerre series `c1` / `c2`. The arguments are sequences of coefficients from lowest order “term” to highest, e.g., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`.
Parameters: **c1, c2** array_like 1-D arrays of Laguerre series coefficients ordered from low to high. Returns: **[quo, rem]** ndarrays Of Laguerre series coefficients representing the quotient and remainder. See also [`lagadd`](numpy.polynomial.laguerre.lagadd#numpy.polynomial.laguerre.lagadd "numpy.polynomial.laguerre.lagadd"), [`lagsub`](numpy.polynomial.laguerre.lagsub#numpy.polynomial.laguerre.lagsub "numpy.polynomial.laguerre.lagsub"), [`lagmulx`](numpy.polynomial.laguerre.lagmulx#numpy.polynomial.laguerre.lagmulx "numpy.polynomial.laguerre.lagmulx"), [`lagmul`](numpy.polynomial.laguerre.lagmul#numpy.polynomial.laguerre.lagmul "numpy.polynomial.laguerre.lagmul"), [`lagpow`](numpy.polynomial.laguerre.lagpow#numpy.polynomial.laguerre.lagpow "numpy.polynomial.laguerre.lagpow") #### Notes In general, the (polynomial) division of one Laguerre series by another results in quotient and remainder terms that are not in the Laguerre polynomial basis set. Thus, to express these results as a Laguerre series, it is necessary to “reproject” the results onto the Laguerre basis set, which may produce “unintuitive” (but correct) results; see Examples section below. #### Examples >>> from numpy.polynomial.laguerre import lagdiv >>> lagdiv([ 8., -13., 38., -51., 36.], [0, 1, 2]) (array([1., 2., 3.]), array([0.])) >>> lagdiv([ 9., -12., 38., -51., 36.], [0, 1, 2]) (array([1., 2., 3.]), array([1., 1.])) # numpy.polynomial.laguerre.lagdomain polynomial.laguerre.lagdomain _= array([0., 1.])_ An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) 
Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(…)`) for instantiating an array. For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array. Parameters: **(for the __new__ method; see Notes below)** **shape** tuple of ints Shape of created array. **dtype** data-type, optional Any object that can be interpreted as a numpy data type. **buffer** object exposing buffer interface, optional Used to fill the array with data. **offset** int, optional Offset of array data in buffer. **strides** tuple of ints, optional Strides of data in memory. **order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also [`array`](numpy.array#numpy.array "numpy.array") Construct an array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Create an array, each element of which is zero. [`empty`](numpy.empty#numpy.empty "numpy.empty") Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Create a data-type. [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray") An ndarray alias [generic](https://docs.python.org/3/glossary.html#term-generic-type "\(in Python v3.13\)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). #### Notes There are two modes of creating an array using `__new__`: 1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used. 2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted.
No `__init__` method is needed because the array is fully initialized after the `__new__` method. #### Examples These examples illustrate the low-level [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray. First mode, `buffer` is None: >>> import numpy as np >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. skip first element array([2, 3]) Attributes: **T** ndarray Transpose of the array. **data** buffer The array’s elements, in memory. **dtype** dtype object Describes the format of the elements in the array. **flags** dict Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc. **flat** numpy.flatiter object Flattened version of the array as an iterator. The iterator allows assignments, e.g., `x.flat = 3` (See [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") for assignment examples; TODO). **imag** ndarray Imaginary part of the array. **real** ndarray Real part of the array. **size** int Number of elements in the array. **itemsize** int The memory use of each array element in bytes. **nbytes** int The total number of bytes required to store the array data, i.e., `itemsize * size`. **ndim** int The array’s number of dimensions. **shape** tuple of ints Shape of the array. **strides** tuple of ints The step-size required to move from one element to the next in memory. For example, a contiguous `(3, 4)` array of type `int16` in C-order has strides `(8, 2)`. This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (`2 * 4`). 
**ctypes** ctypes object Class containing properties of the array needed for interaction with ctypes. **base** ndarray If the array is a view into another array, that array is its `base` (unless that array is also a view). The `base` array is where the array data is actually stored. # numpy.polynomial.laguerre.lagfit polynomial.laguerre.lagfit(_x_ , _y_ , _deg_ , _rcond =None_, _full =False_, _w =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L1289-L1420) Least squares fit of Laguerre series to data. Return the coefficients of a Laguerre series of degree `deg` that is the least squares fit to the data values `y` given at points `x`. If `y` is 1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple fits are done, one for each column of `y`, and the resulting coefficients are stored in the corresponding columns of a 2-D return. The fitted polynomial(s) are in the form \\[p(x) = c_0 + c_1 * L_1(x) + ... + c_n * L_n(x),\\] where `n` is `deg`. Parameters: **x** array_like, shape (M,) x-coordinates of the M sample points `(x[i], y[i])`. **y** array_like, shape (M,) or (M, K) y-coordinates of the sample points. Several data sets of sample points sharing the same x-coordinates can be fitted at once by passing in a 2D-array that contains one dataset per column. **deg** int or 1-D array_like Degree(s) of the fitting polynomials. If `deg` is a single integer all terms up to and including the `deg`’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead. **rcond** float, optional Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is len(x)*eps, where eps is the relative precision of the float type, about 2e-16 in most cases. **full** bool, optional Switch determining nature of return value. 
When it is False (the default) just the coefficients are returned, when True diagnostic information from the singular value decomposition is also returned. **w** array_like, shape (`M`,), optional Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse- variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None. Returns: **coef** ndarray, shape (M,) or (M, K) Laguerre coefficients ordered from low to high. If `y` was 2-D, the coefficients for the data in column _k_ of `y` are in column _k_. **[residuals, rank, singular_values, rcond]** list These values are only returned if `full == True` * residuals – sum of squared residuals of the least squares fit * rank – the numerical rank of the scaled Vandermonde matrix * singular_values – singular values of the scaled Vandermonde matrix * rcond – value of `rcond`. For more details, see [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq"). Warns: RankWarning The rank of the coefficient matrix in the least-squares fit is deficient. The warning is only raised if `full == False`. 
The warnings can be turned off by >>> import warnings >>> warnings.simplefilter('ignore', np.exceptions.RankWarning) See also [`numpy.polynomial.polynomial.polyfit`](numpy.polynomial.polynomial.polyfit#numpy.polynomial.polynomial.polyfit "numpy.polynomial.polynomial.polyfit") [`numpy.polynomial.legendre.legfit`](numpy.polynomial.legendre.legfit#numpy.polynomial.legendre.legfit "numpy.polynomial.legendre.legfit") [`numpy.polynomial.chebyshev.chebfit`](numpy.polynomial.chebyshev.chebfit#numpy.polynomial.chebyshev.chebfit "numpy.polynomial.chebyshev.chebfit") [`numpy.polynomial.hermite.hermfit`](numpy.polynomial.hermite.hermfit#numpy.polynomial.hermite.hermfit "numpy.polynomial.hermite.hermfit") [`numpy.polynomial.hermite_e.hermefit`](numpy.polynomial.hermite_e.hermefit#numpy.polynomial.hermite_e.hermefit "numpy.polynomial.hermite_e.hermefit") [`lagval`](numpy.polynomial.laguerre.lagval#numpy.polynomial.laguerre.lagval "numpy.polynomial.laguerre.lagval") Evaluates a Laguerre series. [`lagvander`](numpy.polynomial.laguerre.lagvander#numpy.polynomial.laguerre.lagvander "numpy.polynomial.laguerre.lagvander") pseudo Vandermonde matrix of Laguerre series. [`lagweight`](numpy.polynomial.laguerre.lagweight#numpy.polynomial.laguerre.lagweight "numpy.polynomial.laguerre.lagweight") Laguerre weight function. [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq") Computes a least-squares fit from the matrix. [`scipy.interpolate.UnivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html#scipy.interpolate.UnivariateSpline "\(in SciPy v1.14.1\)") Computes spline fits. #### Notes The solution is the coefficients of the Laguerre series `p` that minimizes the sum of the weighted squared errors \\[E = \sum_j w_j^2 * |y_j - p(x_j)|^2,\\] where the \\(w_j\\) are the weights. 
This problem is solved by setting up the (typically) overdetermined matrix equation \\[V(x) * c = w * y,\\] where `V` is the weighted pseudo Vandermonde matrix of `x`, `c` are the coefficients to be solved for, `w` are the weights, and `y` are the observed values. This equation is then solved using the singular value decomposition of `V`. If some of the singular values of `V` are so small that they are neglected, then a [`RankWarning`](numpy.exceptions.rankwarning#numpy.exceptions.RankWarning "numpy.exceptions.RankWarning") will be issued. This means that the coefficient values may be poorly determined. Using a lower order fit will usually get rid of the warning. The `rcond` parameter can also be set to a value smaller than its default, but the resulting fit may be spurious and have large contributions from roundoff error. Fits using Laguerre series are probably most useful when the data can be approximated by `sqrt(w(x)) * p(x)`, where `w(x)` is the Laguerre weight. In that case the weight `sqrt(w(x[i]))` should be used together with data values `y[i]/sqrt(w(x[i]))`. The weight function is available as [`lagweight`](numpy.polynomial.laguerre.lagweight#numpy.polynomial.laguerre.lagweight "numpy.polynomial.laguerre.lagweight"). #### References [1] Wikipedia, “Curve fitting”, #### Examples >>> import numpy as np >>> from numpy.polynomial.laguerre import lagfit, lagval >>> x = np.linspace(0, 10) >>> rng = np.random.default_rng() >>> err = rng.normal(scale=1./10, size=len(x)) >>> y = lagval(x, [1, 2, 3]) + err >>> lagfit(x, y, 2) array([1.00578369, 1.99417356, 2.99827656]) # may vary # numpy.polynomial.laguerre.lagfromroots polynomial.laguerre.lagfromroots(_roots_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L253-L306) Generate a Laguerre series with given roots. The function returns the coefficients of the polynomial \\[p(x) = (x - r_0) * (x - r_1) * ...
* (x - r_n),\\] in Laguerre form, where the \\(r_n\\) are the roots specified in [`roots`](numpy.roots#numpy.roots "numpy.roots"). If a zero has multiplicity n, then it must appear in [`roots`](numpy.roots#numpy.roots "numpy.roots") n times. For instance, if 2 is a root of multiplicity three and 3 is a root of multiplicity 2, then [`roots`](numpy.roots#numpy.roots "numpy.roots") looks something like [2, 2, 2, 3, 3]. The roots can appear in any order. If the returned coefficients are `c`, then \\[p(x) = c_0 + c_1 * L_1(x) + ... + c_n * L_n(x)\\] The coefficient of the last term is not generally 1 for monic polynomials in Laguerre form. Parameters: **roots** array_like Sequence containing the roots. Returns: **out** ndarray 1-D array of coefficients. If all roots are real then `out` is a real array, if some of the roots are complex, then `out` is complex even if all the coefficients in the result are real (see Examples below). See also [`numpy.polynomial.polynomial.polyfromroots`](numpy.polynomial.polynomial.polyfromroots#numpy.polynomial.polynomial.polyfromroots "numpy.polynomial.polynomial.polyfromroots") [`numpy.polynomial.legendre.legfromroots`](numpy.polynomial.legendre.legfromroots#numpy.polynomial.legendre.legfromroots "numpy.polynomial.legendre.legfromroots") [`numpy.polynomial.chebyshev.chebfromroots`](numpy.polynomial.chebyshev.chebfromroots#numpy.polynomial.chebyshev.chebfromroots "numpy.polynomial.chebyshev.chebfromroots") [`numpy.polynomial.hermite.hermfromroots`](numpy.polynomial.hermite.hermfromroots#numpy.polynomial.hermite.hermfromroots "numpy.polynomial.hermite.hermfromroots") [`numpy.polynomial.hermite_e.hermefromroots`](numpy.polynomial.hermite_e.hermefromroots#numpy.polynomial.hermite_e.hermefromroots "numpy.polynomial.hermite_e.hermefromroots") #### Examples >>> from numpy.polynomial.laguerre import lagfromroots, lagval >>> coef = lagfromroots((-1, 0, 1)) >>> lagval((-1, 0, 1), coef) array([0., 0., 0.]) >>> coef = lagfromroots((-1j, 1j)) >>> 
lagval((-1j, 1j), coef) array([0.+0.j, 0.+0.j]) # numpy.polynomial.laguerre.laggauss polynomial.laguerre.laggauss(_deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L1533-L1597) Gauss-Laguerre quadrature. Computes the sample points and weights for Gauss-Laguerre quadrature. These sample points and weights will correctly integrate polynomials of degree \\(2*deg - 1\\) or less over the interval \\([0, \infty)\\) with the weight function \\(f(x) = \exp(-x)\\). Parameters: **deg** int Number of sample points and weights. It must be >= 1. Returns: **x** ndarray 1-D ndarray containing the sample points. **y** ndarray 1-D ndarray containing the weights. #### Notes The results have only been tested up to degree 100; higher degrees may be problematic. The weights are determined by using the fact that \\[w_k = c / (L'_n(x_k) * L_{n-1}(x_k))\\] where \\(c\\) is a constant independent of \\(k\\) and \\(x_k\\) is the k’th root of \\(L_n\\), and then scaling the results to get the right value when integrating 1. #### Examples >>> from numpy.polynomial.laguerre import laggauss >>> laggauss(2) (array([0.58578644, 3.41421356]), array([0.85355339, 0.14644661])) # numpy.polynomial.laguerre.laggrid2d polynomial.laguerre.laggrid2d(_x_ , _y_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L941-L994) Evaluate a 2-D Laguerre series on the Cartesian product of x and y. This function returns the values: \\[p(a,b) = \sum_{i,j} c_{i,j} * L_i(a) * L_j(b)\\] where the points `(a, b)` consist of all pairs formed by taking `a` from `x` and `b` from `y`. The resulting points form a grid with `x` in the first dimension and `y` in the second. The parameters `x` and `y` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`.
If `c` has fewer than two dimensions, ones are implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape + y.shape. Parameters: **x, y** array_like, compatible objects The two dimensional series is evaluated at the points in the Cartesian product of `x` and `y`. If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns: **values** ndarray, compatible object The values of the two dimensional Laguerre series at points in the Cartesian product of `x` and `y`. See also [`lagval`](numpy.polynomial.laguerre.lagval#numpy.polynomial.laguerre.lagval "numpy.polynomial.laguerre.lagval"), [`lagval2d`](numpy.polynomial.laguerre.lagval2d#numpy.polynomial.laguerre.lagval2d "numpy.polynomial.laguerre.lagval2d"), [`lagval3d`](numpy.polynomial.laguerre.lagval3d#numpy.polynomial.laguerre.lagval3d "numpy.polynomial.laguerre.lagval3d"), [`laggrid3d`](numpy.polynomial.laguerre.laggrid3d#numpy.polynomial.laguerre.laggrid3d "numpy.polynomial.laguerre.laggrid3d") #### Examples >>> from numpy.polynomial.laguerre import laggrid2d >>> c = [[1, 2], [3, 4]] >>> laggrid2d([0, 1], [0, 1], c) array([[10., 4.], [ 3., 1.]]) # numpy.polynomial.laguerre.laggrid3d polynomial.laguerre.laggrid3d(_x_ , _y_ , _z_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L1050-L1108) Evaluate a 3-D Laguerre series on the Cartesian product of x, y, and z. This function returns the values: \\[p(a,b,c) = \sum_{i,j,k} c_{i,j,k} * L_i(a) * L_j(b) * L_k(c)\\] where the points `(a, b, c)` consist of all triples formed by taking `a` from `x`, `b` from `y`, and `c` from `z`.
The resulting points form a grid with `x` in the first dimension, `y` in the second, and `z` in the third. The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than three dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape + y.shape + z.shape. Parameters: **x, y, z** array_like, compatible objects The three dimensional series is evaluated at the points in the Cartesian product of `x`, `y`, and `z`. If `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j,k is contained in `c[i,j,k]`. If `c` has dimension greater than three the remaining indices enumerate multiple sets of coefficients. Returns: **values** ndarray, compatible object The values of the three dimensional polynomial at points in the Cartesian product of `x`, `y`, and `z`.
See also [`lagval`](numpy.polynomial.laguerre.lagval#numpy.polynomial.laguerre.lagval "numpy.polynomial.laguerre.lagval"), [`lagval2d`](numpy.polynomial.laguerre.lagval2d#numpy.polynomial.laguerre.lagval2d "numpy.polynomial.laguerre.lagval2d"), [`laggrid2d`](numpy.polynomial.laguerre.laggrid2d#numpy.polynomial.laguerre.laggrid2d "numpy.polynomial.laguerre.laggrid2d"), [`lagval3d`](numpy.polynomial.laguerre.lagval3d#numpy.polynomial.laguerre.lagval3d "numpy.polynomial.laguerre.lagval3d") #### Examples >>> from numpy.polynomial.laguerre import laggrid3d >>> c = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]] >>> laggrid3d([0, 1], [0, 1], [2, 4], c) array([[[ -4., -44.], [ -2., -18.]], [[ -2., -14.], [ -1., -5.]]]) # numpy.polynomial.laguerre.lagint polynomial.laguerre.lagint(_c_ , _m =1_, _k =[]_, _lbnd =0_, _scl =1_, _axis =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L676-L795) Integrate a Laguerre series. Returns the Laguerre series coefficients `c` integrated `m` times from `lbnd` along `axis`. At each iteration the resulting series is **multiplied** by `scl` and an integration constant, `k`, is added. The scaling factor is for use in a linear change of variable. (“Buyer beware”: note that, depending on what one is doing, one may want `scl` to be the reciprocal of what one might expect; for more information, see the Notes section below.) The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series `L_0 + 2*L_1 + 3*L_2` while [[1,2],[1,2]] represents `1*L_0(x)*L_0(y) + 1*L_1(x)*L_0(y) + 2*L_0(x)*L_1(y) + 2*L_1(x)*L_1(y)` if axis=0 is `x` and axis=1 is `y`. Parameters: **c** array_like Array of Laguerre series coefficients. If `c` is multidimensional the different axis correspond to different variables with the degree in each axis given by the corresponding index. **m** int, optional Order of integration, must be positive. 
(Default: 1) **k**{[], list, scalar}, optional Integration constant(s). The value of the first integral at `lbnd` is the first value in the list, the value of the second integral at `lbnd` is the second value, etc. If `k == []` (the default), all constants are set to zero. If `m == 1`, a single scalar can be given instead of a list. **lbnd** scalar, optional The lower bound of the integral. (Default: 0) **scl** scalar, optional Following each integration the result is _multiplied_ by `scl` before the integration constant is added. (Default: 1) **axis** int, optional Axis over which the integral is taken. (Default: 0). Returns: **S** ndarray Laguerre series coefficients of the integral. Raises: ValueError If `m < 0`, `len(k) > m`, `np.ndim(lbnd) != 0`, or `np.ndim(scl) != 0`. See also [`lagder`](numpy.polynomial.laguerre.lagder#numpy.polynomial.laguerre.lagder "numpy.polynomial.laguerre.lagder") #### Notes Note that the result of each integration is _multiplied_ by `scl`. Why is this important to note? Say one is making a linear change of variable \\(u = ax + b\\) in an integral relative to `x`. Then \\(dx = du/a\\), so one will need to set `scl` equal to \\(1/a\\) \- perhaps not what one would have first thought. Also note that, in general, the result of integrating a C-series needs to be “reprojected” onto the C-series basis set. Thus, typically, the result of this function is “unintuitive,” albeit correct; see Examples section below. #### Examples >>> from numpy.polynomial.laguerre import lagint >>> lagint([1,2,3]) array([ 1., 1., 1., -3.]) >>> lagint([1,2,3], m=2) array([ 1., 0., 0., -4., 3.]) >>> lagint([1,2,3], k=1) array([ 2., 1., 1., -3.]) >>> lagint([1,2,3], lbnd=-1) array([11.5, 1. , 1. , -3. ]) >>> lagint([1,2], m=2, k=[1,2], lbnd=-1) array([ 11.16666667, -5. , -3. , 2. 
]) # may vary # numpy.polynomial.laguerre.lagline polynomial.laguerre.lagline(_off_ , _scl_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L215-L250) Laguerre series whose graph is a straight line. Parameters: **off, scl** scalars The specified line is given by `off + scl*x`. Returns: **y** ndarray This module’s representation of the Laguerre series for `off + scl*x`. See also [`numpy.polynomial.polynomial.polyline`](numpy.polynomial.polynomial.polyline#numpy.polynomial.polynomial.polyline "numpy.polynomial.polynomial.polyline") [`numpy.polynomial.chebyshev.chebline`](numpy.polynomial.chebyshev.chebline#numpy.polynomial.chebyshev.chebline "numpy.polynomial.chebyshev.chebline") [`numpy.polynomial.legendre.legline`](numpy.polynomial.legendre.legline#numpy.polynomial.legendre.legline "numpy.polynomial.legendre.legline") [`numpy.polynomial.hermite.hermline`](numpy.polynomial.hermite.hermline#numpy.polynomial.hermite.hermline "numpy.polynomial.hermite.hermline") [`numpy.polynomial.hermite_e.hermeline`](numpy.polynomial.hermite_e.hermeline#numpy.polynomial.hermite_e.hermeline "numpy.polynomial.hermite_e.hermeline") #### Examples >>> from numpy.polynomial.laguerre import lagline, lagval >>> lagval(0,lagline(3, 2)) 3.0 >>> lagval(1,lagline(3, 2)) 5.0 # numpy.polynomial.laguerre.lagmul polynomial.laguerre.lagmul(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L443-L506) Multiply one Laguerre series by another. Returns the product of two Laguerre series `c1` * `c2`. The arguments are sequences of coefficients, from lowest order “term” to highest, e.g., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. Parameters: **c1, c2** array_like 1-D arrays of Laguerre series coefficients ordered from low to high. Returns: **out** ndarray Of Laguerre series coefficients representing their product. 
See also [`lagadd`](numpy.polynomial.laguerre.lagadd#numpy.polynomial.laguerre.lagadd "numpy.polynomial.laguerre.lagadd"), [`lagsub`](numpy.polynomial.laguerre.lagsub#numpy.polynomial.laguerre.lagsub "numpy.polynomial.laguerre.lagsub"), [`lagmulx`](numpy.polynomial.laguerre.lagmulx#numpy.polynomial.laguerre.lagmulx "numpy.polynomial.laguerre.lagmulx"), [`lagdiv`](numpy.polynomial.laguerre.lagdiv#numpy.polynomial.laguerre.lagdiv "numpy.polynomial.laguerre.lagdiv"), [`lagpow`](numpy.polynomial.laguerre.lagpow#numpy.polynomial.laguerre.lagpow "numpy.polynomial.laguerre.lagpow") #### Notes In general, the (polynomial) product of two C-series results in terms that are not in the Laguerre polynomial basis set. Thus, to express the product as a Laguerre series, it is necessary to “reproject” the product onto said basis set, which may produce “unintuitive” (but correct) results; see Examples section below. #### Examples >>> from numpy.polynomial.laguerre import lagmul >>> lagmul([1, 2, 3], [0, 1, 2]) array([ 8., -13., 38., -51., 36.]) # numpy.polynomial.laguerre.lagmulx polynomial.laguerre.lagmulx(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L389-L440) Multiply a Laguerre series by x. Multiply the Laguerre series `c` by x, where x is the independent variable. Parameters: **c** array_like 1-D array of Laguerre series coefficients ordered from low to high. Returns: **out** ndarray Array representing the result of the multiplication. 
See also [`lagadd`](numpy.polynomial.laguerre.lagadd#numpy.polynomial.laguerre.lagadd "numpy.polynomial.laguerre.lagadd"), [`lagsub`](numpy.polynomial.laguerre.lagsub#numpy.polynomial.laguerre.lagsub "numpy.polynomial.laguerre.lagsub"), [`lagmul`](numpy.polynomial.laguerre.lagmul#numpy.polynomial.laguerre.lagmul "numpy.polynomial.laguerre.lagmul"), [`lagdiv`](numpy.polynomial.laguerre.lagdiv#numpy.polynomial.laguerre.lagdiv "numpy.polynomial.laguerre.lagdiv"), [`lagpow`](numpy.polynomial.laguerre.lagpow#numpy.polynomial.laguerre.lagpow "numpy.polynomial.laguerre.lagpow") #### Notes The multiplication uses the recursion relationship for Laguerre polynomials in the form \\[xP_i(x) = (-(i + 1)*P_{i + 1}(x) + (2i + 1)P_{i}(x) - iP_{i - 1}(x))\\] #### Examples >>> from numpy.polynomial.laguerre import lagmulx >>> lagmulx([1, 2, 3]) array([-1., -1., 11., -9.]) # numpy.polynomial.laguerre.lagone polynomial.laguerre.lagone _= array([1])_ An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(…)`) for instantiating an array. For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array. Parameters: **(for the __new__ method; see Notes below)** **shape** tuple of ints Shape of created array. **dtype** data-type, optional Any object that can be interpreted as a numpy data type. **buffer** object exposing buffer interface, optional Used to fill the array with data. 
**offset** int, optional Offset of array data in buffer. **strides** tuple of ints, optional Strides of data in memory. **order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also [`array`](numpy.array#numpy.array "numpy.array") Construct an array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Create an array, each element of which is zero. [`empty`](numpy.empty#numpy.empty "numpy.empty") Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Create a data-type. [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray") An ndarray alias [generic](https://docs.python.org/3/glossary.html#term-generic-type "\(in Python v3.13\)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). #### Notes There are two modes of creating an array using `__new__`: 1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used. 2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted. No `__init__` method is needed because the array is fully initialized after the `__new__` method. #### Examples These examples illustrate the low-level [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray. First mode, `buffer` is None: >>> import numpy as np >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. skip first element array([2, 3]) Attributes: **T** ndarray Transpose of the array. **data** buffer The array’s elements, in memory. **dtype** dtype object Describes the format of the elements in the array.
**flags** dict Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc. **flat** numpy.flatiter object Flattened version of the array as an iterator. The iterator allows assignments, e.g., `x.flat = 3` (See [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") for assignment examples; TODO). **imag** ndarray Imaginary part of the array. **real** ndarray Real part of the array. **size** int Number of elements in the array. **itemsize** int The memory use of each array element in bytes. **nbytes** int The total number of bytes required to store the array data, i.e., `itemsize * size`. **ndim** int The array’s number of dimensions. **shape** tuple of ints Shape of the array. **strides** tuple of ints The step-size required to move from one element to the next in memory. For example, a contiguous `(3, 4)` array of type `int16` in C-order has strides `(8, 2)`. This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (`2 * 4`). **ctypes** ctypes object Class containing properties of the array needed for interaction with ctypes. **base** ndarray If the array is a view into another array, that array is its `base` (unless that array is also a view). The `base` array is where the array data is actually stored. # numpy.polynomial.laguerre.lagpow polynomial.laguerre.lagpow(_c_ , _pow_ , _maxpower =16_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L555-L589) Raise a Laguerre series to a power. Returns the Laguerre series `c` raised to the power [`pow`](numpy.pow#numpy.pow "numpy.pow"). The argument `c` is a sequence of coefficients ordered from low to high. i.e., [1,2,3] is the series `P_0 + 2*P_1 + 3*P_2.` Parameters: **c** array_like 1-D array of Laguerre series coefficients ordered from low to high. 
**pow** integer Power to which the series will be raised. **maxpower** integer, optional Maximum power allowed. This is mainly to limit growth of the series to unmanageable size. Default is 16. Returns: **coef** ndarray Laguerre series of `c` raised to the power `pow`. See also [`lagadd`](numpy.polynomial.laguerre.lagadd#numpy.polynomial.laguerre.lagadd "numpy.polynomial.laguerre.lagadd"), [`lagsub`](numpy.polynomial.laguerre.lagsub#numpy.polynomial.laguerre.lagsub "numpy.polynomial.laguerre.lagsub"), [`lagmulx`](numpy.polynomial.laguerre.lagmulx#numpy.polynomial.laguerre.lagmulx "numpy.polynomial.laguerre.lagmulx"), [`lagmul`](numpy.polynomial.laguerre.lagmul#numpy.polynomial.laguerre.lagmul "numpy.polynomial.laguerre.lagmul"), [`lagdiv`](numpy.polynomial.laguerre.lagdiv#numpy.polynomial.laguerre.lagdiv "numpy.polynomial.laguerre.lagdiv") #### Examples >>> from numpy.polynomial.laguerre import lagpow >>> lagpow([1, 2, 3], 2) array([ 14., -16., 56., -72., 54.]) # numpy.polynomial.laguerre.lagroots polynomial.laguerre.lagroots(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L1469-L1530) Compute the roots of a Laguerre series. Return the roots (a.k.a. “zeros”) of the polynomial \\[p(x) = \sum_i c[i] * L_i(x).\\] Parameters: **c** 1-D array_like 1-D array of coefficients. Returns: **out** ndarray Array of the roots of the series. If all the roots are real, then `out` is also real, otherwise it is complex.
See also [`numpy.polynomial.polynomial.polyroots`](numpy.polynomial.polynomial.polyroots#numpy.polynomial.polynomial.polyroots "numpy.polynomial.polynomial.polyroots") [`numpy.polynomial.legendre.legroots`](numpy.polynomial.legendre.legroots#numpy.polynomial.legendre.legroots "numpy.polynomial.legendre.legroots") [`numpy.polynomial.chebyshev.chebroots`](numpy.polynomial.chebyshev.chebroots#numpy.polynomial.chebyshev.chebroots "numpy.polynomial.chebyshev.chebroots") [`numpy.polynomial.hermite.hermroots`](numpy.polynomial.hermite.hermroots#numpy.polynomial.hermite.hermroots "numpy.polynomial.hermite.hermroots") [`numpy.polynomial.hermite_e.hermeroots`](numpy.polynomial.hermite_e.hermeroots#numpy.polynomial.hermite_e.hermeroots "numpy.polynomial.hermite_e.hermeroots") #### Notes The root estimates are obtained as the eigenvalues of the companion matrix. Roots far from the origin of the complex plane may have large errors due to the numerical instability of the series for such values. Roots with multiplicity greater than 1 will also show larger errors as the value of the series near such points is relatively insensitive to errors in the roots. Isolated roots near the origin can be improved by a few iterations of Newton’s method. The Laguerre series basis polynomials aren’t powers of `x` so the results of this function may seem unintuitive. #### Examples >>> from numpy.polynomial.laguerre import lagroots, lagfromroots >>> coef = lagfromroots([0, 1, 2]) >>> coef array([ 2., -8., 12., -6.]) >>> lagroots(coef) array([-4.4408921e-16, 1.0000000e+00, 2.0000000e+00]) # numpy.polynomial.laguerre.lagsub polynomial.laguerre.lagsub(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L349-L386) Subtract one Laguerre series from another. Returns the difference of two Laguerre series `c1` \- `c2`. The sequences of coefficients are from lowest order term to highest, i.e., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`.
Parameters: **c1, c2** array_like 1-D arrays of Laguerre series coefficients ordered from low to high. Returns: **out** ndarray Of Laguerre series coefficients representing their difference. See also [`lagadd`](numpy.polynomial.laguerre.lagadd#numpy.polynomial.laguerre.lagadd "numpy.polynomial.laguerre.lagadd"), [`lagmulx`](numpy.polynomial.laguerre.lagmulx#numpy.polynomial.laguerre.lagmulx "numpy.polynomial.laguerre.lagmulx"), [`lagmul`](numpy.polynomial.laguerre.lagmul#numpy.polynomial.laguerre.lagmul "numpy.polynomial.laguerre.lagmul"), [`lagdiv`](numpy.polynomial.laguerre.lagdiv#numpy.polynomial.laguerre.lagdiv "numpy.polynomial.laguerre.lagdiv"), [`lagpow`](numpy.polynomial.laguerre.lagpow#numpy.polynomial.laguerre.lagpow "numpy.polynomial.laguerre.lagpow") #### Notes Unlike multiplication, division, etc., the difference of two Laguerre series is a Laguerre series (without having to “reproject” the result onto the basis set) so subtraction, just like that of “standard” polynomials, is simply “component-wise.” #### Examples >>> from numpy.polynomial.laguerre import lagsub >>> lagsub([1, 2, 3, 4], [1, 2, 3]) array([0., 0., 0., 4.]) # numpy.polynomial.laguerre.lagtrim polynomial.laguerre.lagtrim(_c_ , _tol =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polyutils.py#L144-L192) Remove “small” “trailing” coefficients from a polynomial. “Small” means “small in absolute value” and is controlled by the parameter `tol`; “trailing” means highest order coefficient(s), e.g., in `[0, 1, 1, 0, 0]` (which represents `0 + x + x**2 + 0*x**3 + 0*x**4`) both the 3-rd and 4-th order coefficients would be “trimmed.” Parameters: **c** array_like 1-d array of coefficients, ordered from lowest order to highest. **tol** number, optional Trailing (i.e., highest order) elements with absolute value less than or equal to `tol` (default value is zero) are removed. Returns: **trimmed** ndarray 1-d array with trailing zeros removed. 
If the resulting series would be empty, a series containing a single zero is returned. Raises: ValueError If `tol` < 0 #### Examples >>> from numpy.polynomial import polyutils as pu >>> pu.trimcoef((0,0,3,0,5,0,0)) array([0., 0., 3., 0., 5.]) >>> pu.trimcoef((0,0,1e-3,0,1e-5,0,0),1e-3) # item == tol is trimmed array([0.]) >>> i = complex(0,1) # works for complex >>> pu.trimcoef((3e-4,1e-3*(1-i),5e-4,2e-5*(1+i)), 1e-3) array([0.0003+0.j , 0.001 -0.001j]) # numpy.polynomial.laguerre.Laguerre.__call__ method polynomial.laguerre.Laguerre.__call__(_arg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L513-L515) Call self as a function. # numpy.polynomial.laguerre.Laguerre.basis method _classmethod_ polynomial.laguerre.Laguerre.basis(_deg_ , _domain =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1120-L1157) Series basis polynomial of degree `deg`. Returns the series representing the basis polynomial of degree `deg`. Parameters: **deg** int Degree of the basis polynomial for the series. Must be >= 0. **domain**{None, array_like}, optional If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None. **window**{None, array_like}, optional If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None. **symbol** str, optional Symbol representing the independent variable. Default is ‘x’. Returns: **new_series** series A series with the coefficient of the `deg` term set to one and all others zero.
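As a quick sketch of what `basis` returns (this example is illustrative and not part of the original reference text; variable names are hypothetical):

```python
import numpy as np
from numpy.polynomial.laguerre import Laguerre

# The degree-2 basis series has a single coefficient of 1 in the L_2 slot.
b2 = Laguerre.basis(2)
print(b2.coef)   # [0. 0. 1.]

# Laguerre polynomials satisfy L_n(0) = 1, so the basis series evaluates to 1
# at x = 0 (the default domain and window are both [0, 1], an identity map).
print(b2(0.0))
```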
# numpy.polynomial.laguerre.Laguerre.cast method _classmethod_ polynomial.laguerre.Laguerre.cast(_series_ , _domain =None_, _window =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1159-L1197) Convert series to series of this class. The `series` is expected to be an instance of some polynomial series of one of the types supported by the numpy.polynomial module, but could be some other class that supports the convert method. Parameters: **series** series The series instance to be converted. **domain**{None, array_like}, optional If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None. **window**{None, array_like}, optional If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None. Returns: **new_series** series A series of the same kind as the calling class and equal to `series` when evaluated. See also [`convert`](numpy.polynomial.laguerre.laguerre.convert#numpy.polynomial.laguerre.Laguerre.convert "numpy.polynomial.laguerre.Laguerre.convert") similar instance method # numpy.polynomial.laguerre.Laguerre.convert method polynomial.laguerre.Laguerre.convert(_domain =None_, _kind =None_, _window =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L785-L820) Convert series to a different kind and/or domain and/or window. Parameters: **domain** array_like, optional The domain of the converted series. If the value is None, the default domain of `kind` is used. **kind** class, optional The polynomial series type class to which the current instance should be converted. If kind is None, then the class of the current instance is used. **window** array_like, optional The window of the converted series.
If the value is None, the default window of `kind` is used. Returns: **new_series** series The returned class can be of different type than the current instance and/or have a different domain and/or different window. #### Notes Conversion between domains and class types can result in numerically ill defined series. # numpy.polynomial.laguerre.Laguerre.copy method polynomial.laguerre.Laguerre.copy()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L666-L675) Return a copy. Returns: **new_series** series Copy of self. # numpy.polynomial.laguerre.Laguerre.cutdeg method polynomial.laguerre.Laguerre.cutdeg(_deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L710-L731) Truncate series to the given degree. Reduce the degree of the series to `deg` by discarding the high order terms. If `deg` is greater than the current degree a copy of the current series is returned. This can be useful in least squares where the coefficients of the high degree terms may be very small. Parameters: **deg** non-negative int The series is reduced to degree `deg` by discarding the high order terms. The value of `deg` must be a non-negative integer. Returns: **new_series** series New instance of series with reduced degree. # numpy.polynomial.laguerre.Laguerre.degree method polynomial.laguerre.Laguerre.degree()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L677-L708) The degree of the series. Returns: **degree** int Degree of the series, one less than the number of coefficients. #### Examples Create a polynomial object for `1 + 7*x + 4*x**2`: >>> poly = np.polynomial.Polynomial([1, 7, 4]) >>> print(poly) 1.0 + 7.0·x + 4.0·x² >>> poly.degree() 2 Note that this method does not check for non-zero coefficients. 
You must trim the polynomial to remove any trailing zeroes: >>> poly = np.polynomial.Polynomial([1, 7, 0]) >>> print(poly) 1.0 + 7.0·x + 0.0·x² >>> poly.degree() 2 >>> poly.trim().degree() 1 # numpy.polynomial.laguerre.Laguerre.deriv method polynomial.laguerre.Laguerre.deriv(_m =1_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L884-L904) Differentiate. Return a series instance that is the derivative of the current series. Parameters: **m** non-negative int Find the derivative of order `m`. Returns: **new_series** series A new series representing the derivative. The domain is the same as the domain of the differentiated series. # numpy.polynomial.laguerre.Laguerre.domain attribute polynomial.laguerre.Laguerre.domain _= array([0., 1.])_ # numpy.polynomial.laguerre.Laguerre.fit method _classmethod_ polynomial.laguerre.Laguerre.fit(_x_ , _y_ , _deg_ , _domain =None_, _rcond =None_, _full =False_, _w =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L951-L1040) Least squares fit to data. Return a series instance that is the least squares fit to the data `y` sampled at `x`. The domain of the returned instance can be specified and this will often result in a superior fit with less chance of ill conditioning. Parameters: **x** array_like, shape (M,) x-coordinates of the M sample points `(x[i], y[i])`. **y** array_like, shape (M,) y-coordinates of the M sample points `(x[i], y[i])`. **deg** int or 1-D array_like Degree(s) of the fitting polynomials. If `deg` is a single integer all terms up to and including the `deg`’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead. **domain**{None, [beg, end], []}, optional Domain to use for the returned series. If `None`, then a minimal domain that covers the points `x` is chosen. If `[]` the class domain is used.
The default value was the class domain in NumPy 1.4 and `None` in later versions. The `[]` option was added in numpy 1.5.0. **rcond** float, optional Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is `len(x)*eps`, where eps is the relative precision of the float type, about 2e-16 in most cases. **full** bool, optional Switch determining nature of return value. When it is False (the default) just the coefficients are returned, when True diagnostic information from the singular value decomposition is also returned. **w** array_like, shape (M,), optional Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None. **window**{[beg, end]}, optional Window to use for the returned series. The default value is the default class domain. **symbol** str, optional Symbol representing the independent variable. Default is ‘x’. Returns: **new_series** series A series that represents the least squares fit to the data and has the domain and window specified in the call. If the coefficients for the unscaled and unshifted basis polynomials are of interest, do `new_series.convert().coef`. **[resid, rank, sv, rcond]** list These values are only returned if `full == True` * resid – sum of squared residuals of the least squares fit * rank – the numerical rank of the scaled Vandermonde matrix * sv – singular values of the scaled Vandermonde matrix * rcond – value of `rcond`. For more details, see [`linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq").
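A minimal sketch of using `fit` (not from the original reference; the data, the degree, and the names `x`, `y`, and `series` are illustrative assumptions):

```python
import numpy as np
from numpy.polynomial.laguerre import Laguerre

# Sample a smooth decaying curve and fit a degree-4 Laguerre series to it.
x = np.linspace(0, 5, 50)
y = np.exp(-x / 2)
series = Laguerre.fit(x, y, deg=4)

# The returned series is evaluated like a function; convert() exposes the
# coefficients in the unscaled, unshifted basis.
max_err = np.max(np.abs(series(x) - y))
print(max_err < 1e-2)   # a degree-4 fit tracks this curve closely
coef = series.convert().coef
```

Note that `fit` chooses a minimal domain covering `x` by default, which is usually what you want for numerical stability.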
# numpy.polynomial.laguerre.Laguerre.fromroots method _classmethod_ polynomial.laguerre.Laguerre.fromroots(_roots_ , _domain =[]_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1042-L1083) Return series instance that has the specified roots. Returns a series representing the product `(x - r[0])*(x - r[1])*...*(x - r[n-1])`, where `r` is a list of roots. Parameters: **roots** array_like List of roots. **domain**{[], None, array_like}, optional Domain for the resulting series. If None the domain is the interval from the smallest root to the largest. If [] the domain is the class domain. The default is []. **window**{None, array_like}, optional Window for the returned series. If None the class window is used. The default is None. **symbol** str, optional Symbol representing the independent variable. Default is ‘x’. Returns: **new_series** series Series with the specified roots. # numpy.polynomial.laguerre.Laguerre.has_samecoef method polynomial.laguerre.Laguerre.has_samecoef(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L188-L207) Check if coefficients match. Parameters: **other** class instance The other class must have the `coef` attribute. Returns: **bool** boolean True if the coefficients are the same, False otherwise. # numpy.polynomial.laguerre.Laguerre.has_samedomain method polynomial.laguerre.Laguerre.has_samedomain(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L209-L223) Check if domains match. Parameters: **other** class instance The other class must have the `domain` attribute. Returns: **bool** boolean True if the domains are the same, False otherwise. # numpy.polynomial.laguerre.Laguerre.has_sametype method polynomial.laguerre.Laguerre.has_sametype(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L241-L255) Check if types match. 
Parameters: **other** object Class instance. Returns: **bool** boolean True if other is same class as self. # numpy.polynomial.laguerre.Laguerre.has_samewindow method polynomial.laguerre.Laguerre.has_samewindow(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L225-L239) Check if windows match. Parameters: **other** class instance The other class must have the `window` attribute. Returns: **bool** boolean True if the windows are the same, False otherwise. # numpy.polynomial.laguerre.Laguerre _class_ numpy.polynomial.laguerre.Laguerre(_coef_ , _domain =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L1632-L1675) A Laguerre series class. The Laguerre class provides the standard Python numerical methods ‘+’, ‘-’, ‘*’, ‘//’, ‘%’, ‘divmod’, ‘**’, and ‘()’ as well as the attributes and methods listed below. Parameters: **coef** array_like Laguerre coefficients in order of increasing degree, i.e., `(1, 2, 3)` gives `1*L_0(x) + 2*L_1(x) + 3*L_2(x)`. **domain**(2,) array_like, optional Domain to use. The interval `[domain[0], domain[1]]` is mapped to the interval `[window[0], window[1]]` by shifting and scaling. The default value is [0., 1.]. **window**(2,) array_like, optional Window, see [`domain`](numpy.polynomial.laguerre.laguerre.domain#numpy.polynomial.laguerre.Laguerre.domain "numpy.polynomial.laguerre.Laguerre.domain") for its use. The default value is [0., 1.]. **symbol** str, optional Symbol used to represent the independent variable in string representations of the polynomial expression, e.g. for printing. The symbol must be a valid Python identifier. Default value is ‘x’. New in version 1.24. Attributes: **symbol** #### Methods [`__call__`](numpy.polynomial.laguerre.laguerre.__call__#numpy.polynomial.laguerre.Laguerre.__call__ "numpy.polynomial.laguerre.Laguerre.__call__")(arg) | Call self as a function. 
---|--- [`basis`](numpy.polynomial.laguerre.laguerre.basis#numpy.polynomial.laguerre.Laguerre.basis "numpy.polynomial.laguerre.Laguerre.basis")(deg[, domain, window, symbol]) | Series basis polynomial of degree `deg`. [`cast`](numpy.polynomial.laguerre.laguerre.cast#numpy.polynomial.laguerre.Laguerre.cast "numpy.polynomial.laguerre.Laguerre.cast")(series[, domain, window]) | Convert series to series of this class. [`convert`](numpy.polynomial.laguerre.laguerre.convert#numpy.polynomial.laguerre.Laguerre.convert "numpy.polynomial.laguerre.Laguerre.convert")([domain, kind, window]) | Convert series to a different kind and/or domain and/or window. [`copy`](numpy.polynomial.laguerre.laguerre.copy#numpy.polynomial.laguerre.Laguerre.copy "numpy.polynomial.laguerre.Laguerre.copy")() | Return a copy. [`cutdeg`](numpy.polynomial.laguerre.laguerre.cutdeg#numpy.polynomial.laguerre.Laguerre.cutdeg "numpy.polynomial.laguerre.Laguerre.cutdeg")(deg) | Truncate series to the given degree. [`degree`](numpy.polynomial.laguerre.laguerre.degree#numpy.polynomial.laguerre.Laguerre.degree "numpy.polynomial.laguerre.Laguerre.degree")() | The degree of the series. [`deriv`](numpy.polynomial.laguerre.laguerre.deriv#numpy.polynomial.laguerre.Laguerre.deriv "numpy.polynomial.laguerre.Laguerre.deriv")([m]) | Differentiate. [`fit`](numpy.polynomial.laguerre.laguerre.fit#numpy.polynomial.laguerre.Laguerre.fit "numpy.polynomial.laguerre.Laguerre.fit")(x, y, deg[, domain, rcond, full, w, ...]) | Least squares fit to data. [`fromroots`](numpy.polynomial.laguerre.laguerre.fromroots#numpy.polynomial.laguerre.Laguerre.fromroots "numpy.polynomial.laguerre.Laguerre.fromroots")(roots[, domain, window, symbol]) | Return series instance that has the specified roots. [`has_samecoef`](numpy.polynomial.laguerre.laguerre.has_samecoef#numpy.polynomial.laguerre.Laguerre.has_samecoef "numpy.polynomial.laguerre.Laguerre.has_samecoef")(other) | Check if coefficients match. 
[`has_samedomain`](numpy.polynomial.laguerre.laguerre.has_samedomain#numpy.polynomial.laguerre.Laguerre.has_samedomain "numpy.polynomial.laguerre.Laguerre.has_samedomain")(other) | Check if domains match. [`has_sametype`](numpy.polynomial.laguerre.laguerre.has_sametype#numpy.polynomial.laguerre.Laguerre.has_sametype "numpy.polynomial.laguerre.Laguerre.has_sametype")(other) | Check if types match. [`has_samewindow`](numpy.polynomial.laguerre.laguerre.has_samewindow#numpy.polynomial.laguerre.Laguerre.has_samewindow "numpy.polynomial.laguerre.Laguerre.has_samewindow")(other) | Check if windows match. [`identity`](numpy.polynomial.laguerre.laguerre.identity#numpy.polynomial.laguerre.Laguerre.identity "numpy.polynomial.laguerre.Laguerre.identity")([domain, window, symbol]) | Identity function. [`integ`](numpy.polynomial.laguerre.laguerre.integ#numpy.polynomial.laguerre.Laguerre.integ "numpy.polynomial.laguerre.Laguerre.integ")([m, k, lbnd]) | Integrate. [`linspace`](numpy.polynomial.laguerre.laguerre.linspace#numpy.polynomial.laguerre.Laguerre.linspace "numpy.polynomial.laguerre.Laguerre.linspace")([n, domain]) | Return x, y values at equally spaced points in domain. [`mapparms`](numpy.polynomial.laguerre.laguerre.mapparms#numpy.polynomial.laguerre.Laguerre.mapparms "numpy.polynomial.laguerre.Laguerre.mapparms")() | Return the mapping parameters. [`roots`](numpy.polynomial.laguerre.laguerre.roots#numpy.polynomial.laguerre.Laguerre.roots "numpy.polynomial.laguerre.Laguerre.roots")() | Return the roots of the series polynomial. [`trim`](numpy.polynomial.laguerre.laguerre.trim#numpy.polynomial.laguerre.Laguerre.trim "numpy.polynomial.laguerre.Laguerre.trim")([tol]) | Remove trailing coefficients [`truncate`](numpy.polynomial.laguerre.laguerre.truncate#numpy.polynomial.laguerre.Laguerre.truncate "numpy.polynomial.laguerre.Laguerre.truncate")(size) | Truncate series to length `size`. 
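To make the class interface above concrete, a small usage sketch (the coefficients and roots are arbitrary examples, not from the reference):

```python
import numpy as np
from numpy.polynomial.laguerre import Laguerre

# 1*L_0 + 2*L_1 + 3*L_2; every Laguerre polynomial equals 1 at x = 0,
# and the default domain/window ([0, 1] -> [0, 1]) is the identity map
p = Laguerre([1, 2, 3])
value_at_zero = p(0.0)  # 1 + 2 + 3

# fromroots builds a series with prescribed roots; roots() recovers them
q = Laguerre.fromroots([1, 2])
recovered = np.sort(q.roots())
```

Since both series use the class default domain, `p.has_samedomain(q)` is True here.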
# numpy.polynomial.laguerre.Laguerre.identity method _classmethod_ polynomial.laguerre.Laguerre.identity(_domain =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1085-L1118) Identity function. If `p` is the returned series, then `p(x) == x` for all values of x. Parameters: **domain**{None, array_like}, optional If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None. **window**{None, array_like}, optional If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None. **symbol** str, optional Symbol representing the independent variable. Default is ‘x’. Returns: **new_series** series Series representing the identity. # numpy.polynomial.laguerre.Laguerre.integ method polynomial.laguerre.Laguerre.integ(_m =1_, _k =[]_, _lbnd =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L851-L882) Integrate. Return a series instance that is the definite integral of the current series. Parameters: **m** non-negative int The number of integrations to perform. **k** array_like Integration constants. The first constant is applied to the first integration, the second to the second, and so on. The list of values must be less than or equal to `m` in length, and any missing values are set to zero. **lbnd** Scalar The lower bound of the definite integral. Returns: **new_series** series A new series representing the integral. The domain is the same as the domain of the integrated series. 
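A quick numerical check of the two methods above (an illustrative sketch; the coefficients are arbitrary):

```python
import numpy as np
from numpy.polynomial.laguerre import Laguerre

# identity(): the returned series satisfies p(x) == x
ident = Laguerre.identity()
pts = np.array([0.0, 0.5, 3.0])

# integrating once and then differentiating once recovers the coefficients
p = Laguerre([1.0, 2.0, 3.0])
q = p.integ(m=1).deriv(m=1)
```
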
# numpy.polynomial.laguerre.Laguerre.linspace method polynomial.laguerre.Laguerre.linspace(_n =100_, _domain =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L921-L949) Return x, y values at equally spaced points in domain. Returns the x, y values at `n` linearly spaced points across the domain. Here y is the value of the polynomial at the points x. By default the domain is the same as that of the series instance. This method is intended mostly as a plotting aid. Parameters: **n** int, optional Number of point pairs to return. The default value is 100. **domain**{None, array_like}, optional If not None, the specified domain is used instead of that of the calling instance. It should be of the form `[beg, end]`. The default is None, in which case the class domain is used. Returns: **x, y** ndarray x is equal to linspace(self.domain[0], self.domain[1], n) and y is the series evaluated at each element of x. # numpy.polynomial.laguerre.Laguerre.mapparms method polynomial.laguerre.Laguerre.mapparms()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L822-L849) Return the mapping parameters. The returned values define a linear map `off + scl*x` that is applied to the input arguments before the series is evaluated. The map depends on the `domain` and `window`; if the current `domain` is equal to the `window` the resulting map is the identity. If the coefficients of the series instance are to be used by themselves outside this class, then the linear function must be substituted for the `x` in the standard representation of the base polynomials. Returns: **off, scl** float or complex The mapping function is defined by `off + scl*x`. 
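For example (an illustrative sketch): with a domain of `[0, 4]` mapped onto the default window `[0, 1]`, the map `off + scl*x` must send 0 to 0 and 4 to 1, and `linspace` samples the domain:

```python
import numpy as np
from numpy.polynomial.laguerre import Laguerre

p = Laguerre([1.0, 2.0, 3.0], domain=[0, 4])

# off + scl*x maps the domain onto the window: here off = 0, scl = 0.25
off, scl = p.mapparms()

# five equally spaced points across the domain, with p evaluated there
x, y = p.linspace(n=5)
```
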
#### Notes If the current domain is the interval `[l1, r1]` and the window is `[l2, r2]`, then the linear mapping function `L` is defined by the equations: L(l1) = l2 L(r1) = r2 # numpy.polynomial.laguerre.Laguerre.roots method polynomial.laguerre.Laguerre.roots()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L906-L919) Return the roots of the series polynomial. Compute the roots for the series. Note that the accuracy of the roots decreases the further outside the [`domain`](numpy.polynomial.laguerre.laguerre.domain#numpy.polynomial.laguerre.Laguerre.domain "numpy.polynomial.laguerre.Laguerre.domain") they lie. Returns: **roots** ndarray Array containing the roots of the series. # numpy.polynomial.laguerre.Laguerre.trim method polynomial.laguerre.Laguerre.trim(_tol =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L733-L754) Remove trailing coefficients. Remove trailing coefficients until a coefficient is reached whose absolute value is greater than `tol` or the beginning of the series is reached. If all the coefficients would be removed the series is set to `[0]`. A new series instance is returned with the new coefficients. The current instance remains unchanged. Parameters: **tol** non-negative number All trailing coefficients less than `tol` will be removed. Returns: **new_series** series New instance of series with trimmed coefficients. # numpy.polynomial.laguerre.Laguerre.truncate method polynomial.laguerre.Laguerre.truncate(_size_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L756-L783) Truncate series to length [`size`](numpy.size#numpy.size "numpy.size"). Reduce the series to length [`size`](numpy.size#numpy.size "numpy.size") by discarding the high degree terms. The value of [`size`](numpy.size#numpy.size "numpy.size") must be a positive integer. This can be useful in least squares where the coefficients of the high degree terms may be very small. 
Parameters: **size** positive int The series is reduced to length [`size`](numpy.size#numpy.size "numpy.size") by discarding the high degree terms. The value of [`size`](numpy.size#numpy.size "numpy.size") must be a positive integer. Returns: **new_series** series New instance of series with truncated coefficients. # numpy.polynomial.laguerre.lagval polynomial.laguerre.lagval(_x_ , _c_ , _tensor =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L798-L888) Evaluate a Laguerre series at points x. If `c` is of length `n + 1`, this function returns the value: \\[p(x) = c_0 * L_0(x) + c_1 * L_1(x) + ... + c_n * L_n(x)\\] The parameter `x` is converted to an array only if it is a tuple or a list, otherwise it is treated as a scalar. In either case, either `x` or its elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` is a 1-D array, then `p(x)` will have the same shape as `x`. If `c` is multidimensional, then the shape of the result depends on the value of `tensor`. If `tensor` is true the shape will be c.shape[1:] + x.shape. If `tensor` is false the shape will be c.shape[1:]. Note that scalars have shape `()`. Trailing zeros in the coefficients will be used in the evaluation, so they should be avoided if efficiency is a concern. Parameters: **x** array_like, compatible object If `x` is a list or tuple, it is converted to an ndarray, otherwise it is left unchanged and treated as a scalar. In either case, `x` or its elements must support addition and multiplication with themselves and with the elements of `c`. **c** array_like Array of coefficients ordered so that the coefficients for terms of degree n are contained in c[n]. If `c` is multidimensional the remaining indices enumerate multiple polynomials. In the two dimensional case the coefficients may be thought of as stored in the columns of `c`. 
**tensor** boolean, optional If True, the shape of the coefficient array is extended with ones on the right, one for each dimension of `x`. Scalars have dimension 0 for this action. The result is that every column of coefficients in `c` is evaluated for every element of `x`. If False, `x` is broadcast over the columns of `c` for the evaluation. This keyword is useful when `c` is multidimensional. The default value is True. Returns: **values** ndarray, algebra_like The shape of the return value is described above. See also [`lagval2d`](numpy.polynomial.laguerre.lagval2d#numpy.polynomial.laguerre.lagval2d "numpy.polynomial.laguerre.lagval2d"), [`laggrid2d`](numpy.polynomial.laguerre.laggrid2d#numpy.polynomial.laguerre.laggrid2d "numpy.polynomial.laguerre.laggrid2d"), [`lagval3d`](numpy.polynomial.laguerre.lagval3d#numpy.polynomial.laguerre.lagval3d "numpy.polynomial.laguerre.lagval3d"), [`laggrid3d`](numpy.polynomial.laguerre.laggrid3d#numpy.polynomial.laguerre.laggrid3d "numpy.polynomial.laguerre.laggrid3d") #### Notes The evaluation uses Clenshaw recursion, aka synthetic division. #### Examples >>> from numpy.polynomial.laguerre import lagval >>> coef = [1, 2, 3] >>> lagval(1, coef) -0.5 >>> lagval([[1, 2],[3, 4]], coef) array([[-0.5, -4. ], [-4.5, -2. ]]) # numpy.polynomial.laguerre.lagval2d polynomial.laguerre.lagval2d(_x_ , _y_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L891-L938) Evaluate a 2-D Laguerre series at points (x, y). This function returns the values: \\[p(x,y) = \sum_{i,j} c_{i,j} * L_i(x) * L_j(y)\\] The parameters `x` and `y` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars, and they must have the same shape after conversion. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` is a 1-D array a one is implicitly appended to its shape to make it 2-D. 
The shape of the result will be c.shape[2:] + x.shape. Parameters: **x, y** array_like, compatible objects The two dimensional series is evaluated at the points `(x, y)`, where `x` and `y` must have the same shape. If `x` or `y` is a list or tuple, it is first converted to an ndarray; otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns: **values** ndarray, compatible object The values of the two dimensional polynomial at points formed with pairs of corresponding values from `x` and `y`. See also [`lagval`](numpy.polynomial.laguerre.lagval#numpy.polynomial.laguerre.lagval "numpy.polynomial.laguerre.lagval"), [`laggrid2d`](numpy.polynomial.laguerre.laggrid2d#numpy.polynomial.laguerre.laggrid2d "numpy.polynomial.laguerre.laggrid2d"), [`lagval3d`](numpy.polynomial.laguerre.lagval3d#numpy.polynomial.laguerre.lagval3d "numpy.polynomial.laguerre.lagval3d"), [`laggrid3d`](numpy.polynomial.laguerre.laggrid3d#numpy.polynomial.laguerre.laggrid3d "numpy.polynomial.laguerre.laggrid3d") #### Examples >>> from numpy.polynomial.laguerre import lagval2d >>> c = [[1, 2],[3, 4]] >>> lagval2d(1, 1, c) 1.0 # numpy.polynomial.laguerre.lagval3d polynomial.laguerre.lagval3d(_x_ , _y_ , _z_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L997-L1047) Evaluate a 3-D Laguerre series at points (x, y, z). This function returns the values: \\[p(x,y,z) = \sum_{i,j,k} c_{i,j,k} * L_i(x) * L_j(y) * L_k(z)\\] The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars, and they must have the same shape after conversion. 
In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than 3 dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape. Parameters: **x, y, z** array_like, compatible object The three dimensional series is evaluated at the points `(x, y, z)`, where `x`, `y`, and `z` must have the same shape. If any of `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray; otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j,k is contained in `c[i,j,k]`. If `c` has dimension greater than 3 the remaining indices enumerate multiple sets of coefficients. Returns: **values** ndarray, compatible object The values of the multidimensional polynomial on points formed with triples of corresponding values from `x`, `y`, and `z`. See also [`lagval`](numpy.polynomial.laguerre.lagval#numpy.polynomial.laguerre.lagval "numpy.polynomial.laguerre.lagval"), [`lagval2d`](numpy.polynomial.laguerre.lagval2d#numpy.polynomial.laguerre.lagval2d "numpy.polynomial.laguerre.lagval2d"), [`laggrid2d`](numpy.polynomial.laguerre.laggrid2d#numpy.polynomial.laguerre.laggrid2d "numpy.polynomial.laguerre.laggrid2d"), [`laggrid3d`](numpy.polynomial.laguerre.laggrid3d#numpy.polynomial.laguerre.laggrid3d "numpy.polynomial.laguerre.laggrid3d") #### Examples >>> from numpy.polynomial.laguerre import lagval3d >>> c = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]] >>> lagval3d(1, 1, 2, c) -1.0 # numpy.polynomial.laguerre.lagvander polynomial.laguerre.lagvander(_x_ , _deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L1111-L1169) Pseudo-Vandermonde matrix of given degree. Returns the pseudo-Vandermonde matrix of degree `deg` and sample points `x`. 
The pseudo-Vandermonde matrix is defined by \\[V[..., i] = L_i(x)\\] where `0 <= i <= deg`. The leading indices of `V` index the elements of `x` and the last index is the degree of the Laguerre polynomial. If `c` is a 1-D array of coefficients of length `n + 1` and `V` is the array `V = lagvander(x, n)`, then `np.dot(V, c)` and `lagval(x, c)` are the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of Laguerre series of the same degree and sample points. Parameters: **x** array_like Array of points. The dtype is converted to float64 or complex128 depending on whether any of the elements are complex. If `x` is scalar it is converted to a 1-D array. **deg** int Degree of the resulting matrix. Returns: **vander** ndarray The pseudo-Vandermonde matrix. The shape of the returned matrix is `x.shape + (deg + 1,)`, where the last index is the degree of the corresponding Laguerre polynomial. The dtype will be the same as the converted `x`. #### Examples >>> import numpy as np >>> from numpy.polynomial.laguerre import lagvander >>> x = np.array([0, 1, 2]) >>> lagvander(x, 3) array([[ 1. , 1. , 1. , 1. ], [ 1. , 0. , -0.5 , -0.66666667], [ 1. , -1. , -1. , -0.33333333]]) # numpy.polynomial.laguerre.lagvander2d polynomial.laguerre.lagvander2d(_x_ , _y_ , _deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L1172-L1226) Pseudo-Vandermonde matrix of given degrees. Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y)`. The pseudo-Vandermonde matrix is defined by \\[V[..., (deg[1] + 1)*i + j] = L_i(x) * L_j(y),\\] where `0 <= i <= deg[0]` and `0 <= j <= deg[1]`. The leading indices of `V` index the points `(x, y)` and the last index encodes the degrees of the Laguerre polynomials. 
If `V = lagvander2d(x, y, [xdeg, ydeg])`, then the columns of `V` correspond to the elements of a 2-D coefficient array `c` of shape (xdeg + 1, ydeg + 1) in the order \\[c_{00}, c_{01}, c_{02} ... , c_{10}, c_{11}, c_{12} ...\\] and `np.dot(V, c.flat)` and `lagval2d(x, y, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 2-D Laguerre series of the same degrees and sample points. Parameters: **x, y** array_like Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. **deg** list of ints List of maximum degrees of the form [x_deg, y_deg]. Returns: **vander2d** ndarray The shape of the returned matrix is `x.shape + (order,)`, where \\(order = (deg[0]+1)*(deg[1]+1)\\). The dtype will be the same as the converted `x` and `y`. See also [`lagvander`](numpy.polynomial.laguerre.lagvander#numpy.polynomial.laguerre.lagvander "numpy.polynomial.laguerre.lagvander"), [`lagvander3d`](numpy.polynomial.laguerre.lagvander3d#numpy.polynomial.laguerre.lagvander3d "numpy.polynomial.laguerre.lagvander3d"), [`lagval2d`](numpy.polynomial.laguerre.lagval2d#numpy.polynomial.laguerre.lagval2d "numpy.polynomial.laguerre.lagval2d"), [`lagval3d`](numpy.polynomial.laguerre.lagval3d#numpy.polynomial.laguerre.lagval3d "numpy.polynomial.laguerre.lagval3d") #### Examples >>> import numpy as np >>> from numpy.polynomial.laguerre import lagvander2d >>> x = np.array([0]) >>> y = np.array([2]) >>> lagvander2d(x, y, [2, 1]) array([[ 1., -1., 1., -1., 1., -1.]]) # numpy.polynomial.laguerre.lagvander3d polynomial.laguerre.lagvander3d(_x_ , _y_ , _z_ , _deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L1229-L1286) Pseudo-Vandermonde matrix of given degrees. 
Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y, z)`. If `l`, `m`, `n` are the given degrees in `x`, `y`, `z`, then the pseudo-Vandermonde matrix is defined by \\[V[..., (m+1)(n+1)i + (n+1)j + k] = L_i(x)*L_j(y)*L_k(z),\\] where `0 <= i <= l`, `0 <= j <= m`, and `0 <= k <= n`. The leading indices of `V` index the points `(x, y, z)` and the last index encodes the degrees of the Laguerre polynomials. If `V = lagvander3d(x, y, z, [xdeg, ydeg, zdeg])`, then the columns of `V` correspond to the elements of a 3-D coefficient array `c` of shape (xdeg + 1, ydeg + 1, zdeg + 1) in the order \\[c_{000}, c_{001}, c_{002},... , c_{010}, c_{011}, c_{012},...\\] and `np.dot(V, c.flat)` and `lagval3d(x, y, z, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 3-D Laguerre series of the same degrees and sample points. Parameters: **x, y, z** array_like Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. **deg** list of ints List of maximum degrees of the form [x_deg, y_deg, z_deg]. Returns: **vander3d** ndarray The shape of the returned matrix is `x.shape + (order,)`, where \\(order = (deg[0]+1)*(deg[1]+1)*(deg[2]+1)\\). The dtype will be the same as the converted `x`, `y`, and `z`. 
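The flattening equivalence stated above (`np.dot(V, c.flat)` matching `lagval3d(x, y, z, c)`) can be checked directly; the points and coefficients below are arbitrary illustrative values:

```python
import numpy as np
from numpy.polynomial.laguerre import lagvander3d, lagval3d

x = np.array([0.5, 1.5])
y = np.array([0.25, 2.0])
z = np.array([1.0, 3.0])

# coefficient array of shape (xdeg+1, ydeg+1, zdeg+1) = (2, 3, 4)
c = np.arange(24.0).reshape(2, 3, 4)

V = lagvander3d(x, y, z, [1, 2, 3])  # shape (2, 24); order = 2*3*4
flat = V @ c.ravel()                 # same as np.dot(V, c.flat)
direct = lagval3d(x, y, z, c)
```
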
See also [`lagvander`](numpy.polynomial.laguerre.lagvander#numpy.polynomial.laguerre.lagvander "numpy.polynomial.laguerre.lagvander"), `lagvander3d`, [`lagval2d`](numpy.polynomial.laguerre.lagval2d#numpy.polynomial.laguerre.lagval2d "numpy.polynomial.laguerre.lagval2d"), [`lagval3d`](numpy.polynomial.laguerre.lagval3d#numpy.polynomial.laguerre.lagval3d "numpy.polynomial.laguerre.lagval3d") #### Examples >>> import numpy as np >>> from numpy.polynomial.laguerre import lagvander3d >>> x = np.array([0]) >>> y = np.array([2]) >>> z = np.array([0]) >>> lagvander3d(x, y, z, [2, 1, 3]) array([[ 1., 1., 1., 1., -1., -1., -1., -1., 1., 1., 1., 1., -1., -1., -1., -1., 1., 1., 1., 1., -1., -1., -1., -1.]]) # numpy.polynomial.laguerre.lagweight polynomial.laguerre.lagweight(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L1600-L1626) Weight function of the Laguerre polynomials. The weight function is \\(exp(-x)\\) and the interval of integration is \\([0, \infty)\\). The Laguerre polynomials are orthogonal, but not normalized, with respect to this weight function. Parameters: **x** array_like Values at which the weight function will be computed. Returns: **w** ndarray The weight function at `x`. #### Examples >>> import numpy as np >>> from numpy.polynomial.laguerre import lagweight >>> x = np.array([0, 1, 2]) >>> lagweight(x) array([1. , 0.36787944, 0.13533528]) # numpy.polynomial.laguerre.lagx polynomial.laguerre.lagx _= array([ 1, -1])_ An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). 
The parameters given here refer to a low-level method (`ndarray(…)`) for instantiating an array. For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array. Parameters: **(for the __new__ method; see Notes below)** **shape** tuple of ints Shape of created array. **dtype** data-type, optional Any object that can be interpreted as a numpy data type. **buffer** object exposing buffer interface, optional Used to fill the array with data. **offset** int, optional Offset of array data in buffer. **strides** tuple of ints, optional Strides of data in memory. **order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also [`array`](numpy.array#numpy.array "numpy.array") Construct an array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Create an array, each element of which is zero. [`empty`](numpy.empty#numpy.empty "numpy.empty") Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Create a data-type. [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray") An ndarray alias [generic](https://docs.python.org/3/glossary.html#term- generic-type "\(in Python v3.13\)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). #### Notes There are two modes of creating an array using `__new__`: 1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used. 2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted. No `__init__` method is needed because the array is fully initialized after the `__new__` method. #### Examples These examples illustrate the low-level [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray. 
First mode, `buffer` is None: >>> import numpy as np >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. skip first element array([2, 3]) Attributes: **T** ndarray Transpose of the array. **data** buffer The array’s elements, in memory. **dtype** dtype object Describes the format of the elements in the array. **flags** dict Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc. **flat** numpy.flatiter object Flattened version of the array as an iterator. The iterator allows assignments, e.g., `x.flat = 3` (See [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") for assignment examples; TODO). **imag** ndarray Imaginary part of the array. **real** ndarray Real part of the array. **size** int Number of elements in the array. **itemsize** int The memory use of each array element in bytes. **nbytes** int The total number of bytes required to store the array data, i.e., `itemsize * size`. **ndim** int The array’s number of dimensions. **shape** tuple of ints Shape of the array. **strides** tuple of ints The step-size required to move from one element to the next in memory. For example, a contiguous `(3, 4)` array of type `int16` in C-order has strides `(8, 2)`. This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (`2 * 4`). **ctypes** ctypes object Class containing properties of the array needed for interaction with ctypes. **base** ndarray If the array is a view into another array, that array is its `base` (unless that array is also a view). The `base` array is where the array data is actually stored. 
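The constant `lagx = array([1, -1])` documented above encodes `x` in the Laguerre basis: since `L_1(x) = 1 - x`, one has `x = L_0(x) - L_1(x)`. A quick check with `lagval` (an illustrative sketch):

```python
import numpy as np
from numpy.polynomial.laguerre import lagval, lagx

pts = np.array([0.0, 1.0, 2.5])
# evaluating the series L_0 - L_1 returns the points themselves
result = lagval(pts, lagx)
```
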
# numpy.polynomial.laguerre.lagzero

polynomial.laguerre.lagzero _= array([0])_

An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.)

Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(…)`) for instantiating an array. For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array.

Parameters (for the `__new__` method; see Notes below):

* **shape** (tuple of ints): Shape of the created array.
* **dtype** (data-type, optional): Any object that can be interpreted as a numpy data type.
* **buffer** (object exposing the buffer interface, optional): Used to fill the array with data.
* **offset** (int, optional): Offset of array data in buffer.
* **strides** (tuple of ints, optional): Strides of data in memory.
* **order** ({'C', 'F'}, optional): Row-major (C-style) or column-major (Fortran-style) order.

See also

* [`array`](numpy.array#numpy.array "numpy.array"): Construct an array.
* [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros"): Create an array, each element of which is zero.
* [`empty`](numpy.empty#numpy.empty "numpy.empty"): Create an array, but leave its allocated memory unchanged (i.e., it contains "garbage").
* [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"): Create a data-type.
* [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray"): An ndarray alias [generic](https://docs.python.org/3/glossary.html#term-generic-type "\(in Python v3.13\)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type").

#### Notes

There are two modes of creating an array using `__new__`:

1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used.
2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted.

No `__init__` method is needed because the array is fully initialized after the `__new__` method.

#### Examples

These examples illustrate the low-level [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray.

First mode, `buffer` is None:

```
>>> import numpy as np
>>> np.ndarray(shape=(2,2), dtype=float, order='F')
array([[0.0e+000, 0.0e+000],  # random
       [     nan, 2.5e-323]])
```

Second mode:

```
>>> np.ndarray((2,), buffer=np.array([1,2,3]),
...            offset=np.int_().itemsize,
...            dtype=int)  # offset = 1*itemsize, i.e. skip first element
array([2, 3])
```

Attributes:

* **T** (ndarray): Transpose of the array.
* **data** (buffer): The array's elements, in memory.
* **dtype** (dtype object): Describes the format of the elements in the array.
* **flags** (dict): Dictionary containing information related to memory use, e.g., 'C_CONTIGUOUS', 'OWNDATA', 'WRITEABLE', etc.
* **flat** (numpy.flatiter object): Flattened version of the array as an iterator. The iterator allows assignments, e.g., `x.flat = 3` (see [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") for assignment examples; TODO).
* **imag** (ndarray): Imaginary part of the array.
* **real** (ndarray): Real part of the array.
* **size** (int): Number of elements in the array.
* **itemsize** (int): The memory use of each array element in bytes.
* **nbytes** (int): The total number of bytes required to store the array data, i.e., `itemsize * size`.
* **ndim** (int): The array's number of dimensions.
* **shape** (tuple of ints): Shape of the array.
* **strides** (tuple of ints): The step size required to move from one element to the next in memory. For example, a contiguous `(3, 4)` array of type `int16` in C order has strides `(8, 2)`: moving from element to element within a row requires jumps of 2 bytes, and moving from row to row requires jumps of 8 bytes (`2 * 4`).
* **ctypes** (ctypes object): Class containing properties of the array needed for interaction with ctypes.
* **base** (ndarray): If the array is a view into another array, that array is its `base` (unless that array is also a view). The `base` array is where the array data is actually stored.

# numpy.polynomial.laguerre.poly2lag

polynomial.laguerre.poly2lag(_pol_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/laguerre.py#L96-L139)

Convert a polynomial to a Laguerre series.

Convert an array representing the coefficients of a polynomial (relative to the "standard" basis) ordered from lowest degree to highest, to an array of the coefficients of the equivalent Laguerre series, ordered from lowest to highest degree.

Parameters:

* **pol** (array_like): 1-D array containing the polynomial coefficients.

Returns:

* **c** (ndarray): 1-D array containing the coefficients of the equivalent Laguerre series.

See also [`lag2poly`](numpy.polynomial.laguerre.lag2poly#numpy.polynomial.laguerre.lag2poly "numpy.polynomial.laguerre.lag2poly")

#### Notes

The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance.

#### Examples

```
>>> import numpy as np
>>> from numpy.polynomial.laguerre import poly2lag
>>> poly2lag(np.arange(4))
array([ 23., -63.,  58., -18.])
```

# numpy.polynomial.legendre.leg2poly

polynomial.legendre.leg2poly(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L149-L208)

Convert a Legendre series to a polynomial.
Convert an array representing the coefficients of a Legendre series, ordered from lowest degree to highest, to an array of the coefficients of the equivalent polynomial (relative to the "standard" basis) ordered from lowest to highest degree.

Parameters:

* **c** (array_like): 1-D array containing the Legendre series coefficients, ordered from lowest order term to highest.

Returns:

* **pol** (ndarray): 1-D array containing the coefficients of the equivalent polynomial (relative to the "standard" basis) ordered from lowest order term to highest.

See also [`poly2leg`](numpy.polynomial.legendre.poly2leg#numpy.polynomial.legendre.poly2leg "numpy.polynomial.legendre.poly2leg")

#### Notes

The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance.

#### Examples

```
>>> from numpy import polynomial as P
>>> c = P.Legendre(range(4))
>>> c
Legendre([0., 1., 2., 3.], domain=[-1., 1.], window=[-1., 1.], symbol='x')
>>> p = c.convert(kind=P.Polynomial)
>>> p
Polynomial([-1. , -3.5,  3. ,  7.5], domain=[-1., 1.], window=[-1., ...
>>> P.legendre.leg2poly(range(4))
array([-1. , -3.5,  3. ,  7.5])
```

# numpy.polynomial.legendre.legadd

polynomial.legendre.legadd(_c1_, _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L324-L363)

Add one Legendre series to another.

Returns the sum of two Legendre series `c1 + c2`. The arguments are sequences of coefficients ordered from lowest order term to highest, i.e., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`.

Parameters:

* **c1, c2** (array_like): 1-D arrays of Legendre series coefficients ordered from low to high.

Returns:

* **out** (ndarray): Array representing the Legendre series of their sum.
See also [`legsub`](numpy.polynomial.legendre.legsub#numpy.polynomial.legendre.legsub "numpy.polynomial.legendre.legsub"), [`legmulx`](numpy.polynomial.legendre.legmulx#numpy.polynomial.legendre.legmulx "numpy.polynomial.legendre.legmulx"), [`legmul`](numpy.polynomial.legendre.legmul#numpy.polynomial.legendre.legmul "numpy.polynomial.legendre.legmul"), [`legdiv`](numpy.polynomial.legendre.legdiv#numpy.polynomial.legendre.legdiv "numpy.polynomial.legendre.legdiv"), [`legpow`](numpy.polynomial.legendre.legpow#numpy.polynomial.legendre.legpow "numpy.polynomial.legendre.legpow")

#### Notes

Unlike multiplication, division, etc., the sum of two Legendre series is a Legendre series (without having to "reproject" the result onto the basis set), so addition, just like that of "standard" polynomials, is simply "component-wise."

#### Examples

```
>>> from numpy.polynomial import legendre as L
>>> c1 = (1,2,3)
>>> c2 = (3,2,1)
>>> L.legadd(c1,c2)
array([4., 4., 4.])
```

# numpy.polynomial.legendre.legcompanion

polynomial.legendre.legcompanion(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L1373-L1408)

Return the scaled companion matrix of c.

The basis polynomials are scaled so that the companion matrix is symmetric when `c` is a Legendre basis polynomial. This provides better eigenvalue estimates than the unscaled case, and for basis polynomials the eigenvalues are guaranteed to be real if [`numpy.linalg.eigvalsh`](numpy.linalg.eigvalsh#numpy.linalg.eigvalsh "numpy.linalg.eigvalsh") is used to obtain them.

Parameters:

* **c** (array_like): 1-D array of Legendre series coefficients ordered from low to high degree.

Returns:

* **mat** (ndarray): Scaled companion matrix of dimensions (deg, deg).

# numpy.polynomial.legendre.legder

polynomial.legendre.legder(_c_, _m=1_, _scl=1_, _axis=0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L614-L701)

Differentiate a Legendre series.
Returns the Legendre series coefficients `c` differentiated `m` times along `axis`. At each iteration the result is multiplied by `scl` (the scaling factor is for use in a linear change of variable). The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series `1*L_0 + 2*L_1 + 3*L_2` while [[1,2],[1,2]] represents `1*L_0(x)*L_0(y) + 1*L_1(x)*L_0(y) + 2*L_0(x)*L_1(y) + 2*L_1(x)*L_1(y)` if axis=0 is `x` and axis=1 is `y`.

Parameters:

* **c** (array_like): Array of Legendre series coefficients. If `c` is multidimensional, the different axes correspond to different variables, with the degree in each axis given by the corresponding index.
* **m** (int, optional): Number of derivatives taken; must be non-negative. (Default: 1)
* **scl** (scalar, optional): Each differentiation is multiplied by `scl`. The end result is multiplication by `scl**m`. This is for use in a linear change of variable. (Default: 1)
* **axis** (int, optional): Axis over which the derivative is taken. (Default: 0)

Returns:

* **der** (ndarray): Legendre series of the derivative.

See also [`legint`](numpy.polynomial.legendre.legint#numpy.polynomial.legendre.legint "numpy.polynomial.legendre.legint")

#### Notes

In general, the result of differentiating a Legendre series does not resemble the same operation on a power series. Thus the result of this function may be "unintuitive," albeit correct; see the Examples section below.

#### Examples

```
>>> from numpy.polynomial import legendre as L
>>> c = (1,2,3,4)
>>> L.legder(c)
array([ 6.,  9., 20.])
>>> L.legder(c, 3)
array([60.])
>>> L.legder(c, scl=-1)
array([ -6.,  -9., -20.])
>>> L.legder(c, 2,-1)
array([ 9., 60.])
```

# numpy.polynomial.legendre.legdiv

polynomial.legendre.legdiv(_c1_, _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L534-L580)

Divide one Legendre series by another.

Returns the quotient-with-remainder of two Legendre series `c1` / `c2`.
The arguments are sequences of coefficients from lowest order "term" to highest, e.g., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`.

Parameters:

* **c1, c2** (array_like): 1-D arrays of Legendre series coefficients ordered from low to high.

Returns:

* **quo, rem** (ndarrays): Of Legendre series coefficients representing the quotient and remainder.

See also [`legadd`](numpy.polynomial.legendre.legadd#numpy.polynomial.legendre.legadd "numpy.polynomial.legendre.legadd"), [`legsub`](numpy.polynomial.legendre.legsub#numpy.polynomial.legendre.legsub "numpy.polynomial.legendre.legsub"), [`legmulx`](numpy.polynomial.legendre.legmulx#numpy.polynomial.legendre.legmulx "numpy.polynomial.legendre.legmulx"), [`legmul`](numpy.polynomial.legendre.legmul#numpy.polynomial.legendre.legmul "numpy.polynomial.legendre.legmul"), [`legpow`](numpy.polynomial.legendre.legpow#numpy.polynomial.legendre.legpow "numpy.polynomial.legendre.legpow")

#### Notes

In general, the (polynomial) division of one Legendre series by another results in quotient and remainder terms that are not in the Legendre polynomial basis set. Thus, to express these results as a Legendre series, it is necessary to "reproject" the results onto the Legendre basis set, which may produce "unintuitive" (but correct) results; see the Examples section below.

#### Examples

```
>>> from numpy.polynomial import legendre as L
>>> c1 = (1,2,3)
>>> c2 = (3,2,1)
>>> L.legdiv(c1,c2)  # quotient "intuitive," remainder not
(array([3.]), array([-8., -4.]))
>>> c2 = (0,1,2,3)
>>> L.legdiv(c2,c1)  # neither "intuitive"
(array([-0.07407407,  1.66666667]), array([-1.03703704, -2.51851852]))  # may vary
```

# numpy.polynomial.legendre.legdomain

polynomial.legendre.legdomain _= array([-1., 1.])_

# numpy.polynomial.legendre.Legendre.__call__

method

polynomial.legendre.Legendre.__call__(_arg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L513-L515)

Call self as a function.

# numpy.polynomial.legendre.Legendre.basis

method

_classmethod_ polynomial.legendre.Legendre.basis(_deg_, _domain=None_, _window=None_, _symbol='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1120-L1157)

Series basis polynomial of degree `deg`.

Returns the series representing the basis polynomial of degree `deg`.

Parameters:

* **deg** (int): Degree of the basis polynomial for the series. Must be >= 0.
* **domain** ({None, array_like}, optional): If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None.
* **window** ({None, array_like}, optional): If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None.
* **symbol** (str, optional): Symbol representing the independent variable. Default is 'x'.

Returns:

* **new_series** (series): A series with the coefficient of the `deg` term set to one and all others zero.
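As a quick sketch of `basis` (an illustration added here, not part of the original reference page), requesting degree 2 yields a series whose only nonzero coefficient is the `deg` term:

```python
import numpy as np
from numpy.polynomial import Legendre

# Legendre.basis(2) represents P_2(x): only the degree-2 coefficient is 1
p2 = Legendre.basis(2)
print(p2.coef)    # [0. 0. 1.]

# P_2(x) = (3x**2 - 1)/2, so P_2(1) == 1
print(p2(1.0))
```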
# numpy.polynomial.legendre.Legendre.cast

method

_classmethod_ polynomial.legendre.Legendre.cast(_series_, _domain=None_, _window=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1159-L1197)

Convert series to series of this class.

The `series` is expected to be an instance of some polynomial series of one of the types supported by the numpy.polynomial module, but could be some other class that supports the convert method.

Parameters:

* **series** (series): The series instance to be converted.
* **domain** ({None, array_like}, optional): If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None.
* **window** ({None, array_like}, optional): If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None.

Returns:

* **new_series** (series): A series of the same kind as the calling class and equal to `series` when evaluated.

See also [`convert`](numpy.polynomial.legendre.legendre.convert#numpy.polynomial.legendre.Legendre.convert "numpy.polynomial.legendre.Legendre.convert") (similar instance method)

# numpy.polynomial.legendre.Legendre.convert

method

polynomial.legendre.Legendre.convert(_domain=None_, _kind=None_, _window=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L785-L820)

Convert series to a different kind and/or domain and/or window.

Parameters:

* **domain** (array_like, optional): The domain of the converted series. If the value is None, the default domain of `kind` is used.
* **kind** (class, optional): The polynomial series type class to which the current instance should be converted. If kind is None, then the class of the current instance is used.
* **window** (array_like, optional): The window of the converted series.
If the value is None, the default window of `kind` is used.

Returns:

* **new_series** (series): The returned series can be of a different type than the current instance and/or have a different domain and/or a different window.

#### Notes

Conversion between domains and class types can result in numerically ill-defined series.

# numpy.polynomial.legendre.Legendre.copy

method

polynomial.legendre.Legendre.copy()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L666-L675)

Return a copy.

Returns:

* **new_series** (series): Copy of self.

# numpy.polynomial.legendre.Legendre.cutdeg

method

polynomial.legendre.Legendre.cutdeg(_deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L710-L731)

Truncate series to the given degree.

Reduce the degree of the series to `deg` by discarding the high order terms. If `deg` is greater than the current degree, a copy of the current series is returned. This can be useful in least squares where the coefficients of the high degree terms may be very small.

Parameters:

* **deg** (non-negative int): The series is reduced to degree `deg` by discarding the high order terms. The value of `deg` must be a non-negative integer.

Returns:

* **new_series** (series): New instance of series with reduced degree.

# numpy.polynomial.legendre.Legendre.degree

method

polynomial.legendre.Legendre.degree()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L677-L708)

The degree of the series.

Returns:

* **degree** (int): Degree of the series, one less than the number of coefficients.

#### Examples

Create a polynomial object for `1 + 7*x + 4*x**2`:

```
>>> poly = np.polynomial.Polynomial([1, 7, 4])
>>> print(poly)
1.0 + 7.0·x + 4.0·x²
>>> poly.degree()
2
```

Note that this method does not check for non-zero coefficients.
You must trim the polynomial to remove any trailing zeroes:

```
>>> poly = np.polynomial.Polynomial([1, 7, 0])
>>> print(poly)
1.0 + 7.0·x + 0.0·x²
>>> poly.degree()
2
>>> poly.trim().degree()
1
```

# numpy.polynomial.legendre.Legendre.deriv

method

polynomial.legendre.Legendre.deriv(_m=1_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L884-L904)

Differentiate.

Return a series instance that is the derivative of the current series.

Parameters:

* **m** (non-negative int): Find the derivative of order `m`.

Returns:

* **new_series** (series): A new series representing the derivative. The domain is the same as the domain of the differentiated series.

# numpy.polynomial.legendre.Legendre.domain

attribute

polynomial.legendre.Legendre.domain _= array([-1., 1.])_

# numpy.polynomial.legendre.Legendre.fit

method

_classmethod_ polynomial.legendre.Legendre.fit(_x_, _y_, _deg_, _domain=None_, _rcond=None_, _full=False_, _w=None_, _window=None_, _symbol='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L951-L1040)

Least squares fit to data.

Return a series instance that is the least squares fit to the data `y` sampled at `x`. The domain of the returned instance can be specified, and this will often result in a superior fit with less chance of ill conditioning.

Parameters:

* **x** (array_like, shape (M,)): x-coordinates of the M sample points `(x[i], y[i])`.
* **y** (array_like, shape (M,)): y-coordinates of the M sample points `(x[i], y[i])`.
* **deg** (int or 1-D array_like): Degree(s) of the fitting polynomials. If `deg` is a single integer, all terms up to and including the `deg`'th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead.
* **domain** ({None, [beg, end], []}, optional): Domain to use for the returned series. If `None`, then a minimal domain that covers the points `x` is chosen. If `[]` the class domain is used. The default value was the class domain in NumPy 1.4 and `None` in later versions. The `[]` option was added in numpy 1.5.0.
* **rcond** (float, optional): Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is `len(x)*eps`, where eps is the relative precision of the float type, about 2e-16 in most cases.
* **full** (bool, optional): Switch determining the nature of the return value. When it is False (the default) just the coefficients are returned; when True, diagnostic information from the singular value decomposition is also returned.
* **w** (array_like, shape (M,), optional): Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None.
* **window** ({[beg, end]}, optional): Window to use for the returned series. The default value is the default class domain.
* **symbol** (str, optional): Symbol representing the independent variable. Default is 'x'.

Returns:

* **new_series** (series): A series that represents the least squares fit to the data and has the domain and window specified in the call. If the coefficients for the unscaled and unshifted basis polynomials are of interest, do `new_series.convert().coef`.
* **[resid, rank, sv, rcond]** (list): These values are only returned if `full == True`.
  * resid – sum of squared residuals of the least squares fit
  * rank – the numerical rank of the scaled Vandermonde matrix
  * sv – singular values of the scaled Vandermonde matrix
  * rcond – value of `rcond`

For more details, see [`linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq").
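As a quick illustration of `fit` (a sketch added here, not from the original page; the data and degree are made up for the example), fitting a noiseless straight line recovers it exactly:

```python
import numpy as np
from numpy.polynomial import Legendre

x = np.linspace(-1, 1, 9)
y = 1 + 2 * x                      # exact line, no noise

line = Legendre.fit(x, y, deg=1)   # a minimal domain covering x is chosen
print(np.allclose(line(x), y))     # True

# Coefficients for the unscaled and unshifted basis polynomials:
print(line.convert().coef)         # approximately [1., 2.]
```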
# numpy.polynomial.legendre.Legendre.fromroots

method

_classmethod_ polynomial.legendre.Legendre.fromroots(_roots_, _domain=[]_, _window=None_, _symbol='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1042-L1083)

Return series instance that has the specified roots.

Returns a series representing the product `(x - r[0])*(x - r[1])*...*(x - r[n-1])`, where `r` is a list of roots.

Parameters:

* **roots** (array_like): List of roots.
* **domain** ({[], None, array_like}, optional): Domain for the resulting series. If None the domain is the interval from the smallest root to the largest. If [] the domain is the class domain. The default is [].
* **window** ({None, array_like}, optional): Window for the returned series. If None the class window is used. The default is None.
* **symbol** (str, optional): Symbol representing the independent variable. Default is 'x'.

Returns:

* **new_series** (series): Series with the specified roots.

# numpy.polynomial.legendre.Legendre.has_samecoef

method

polynomial.legendre.Legendre.has_samecoef(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L188-L207)

Check if coefficients match.

Parameters:

* **other** (class instance): The other class must have the `coef` attribute.

Returns:

* **bool** (boolean): True if the coefficients are the same, False otherwise.

# numpy.polynomial.legendre.Legendre.has_samedomain

method

polynomial.legendre.Legendre.has_samedomain(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L209-L223)

Check if domains match.

Parameters:

* **other** (class instance): The other class must have the `domain` attribute.

Returns:

* **bool** (boolean): True if the domains are the same, False otherwise.

# numpy.polynomial.legendre.Legendre.has_sametype

method

polynomial.legendre.Legendre.has_sametype(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L241-L255)

Check if types match.
Parameters:

* **other** (object): Class instance.

Returns:

* **bool** (boolean): True if `other` is the same class as `self`, False otherwise.

# numpy.polynomial.legendre.Legendre.has_samewindow

method

polynomial.legendre.Legendre.has_samewindow(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L225-L239)

Check if windows match.

Parameters:

* **other** (class instance): The other class must have the `window` attribute.

Returns:

* **bool** (boolean): True if the windows are the same, False otherwise.

# numpy.polynomial.legendre.Legendre

_class_ numpy.polynomial.legendre.Legendre(_coef_, _domain=None_, _window=None_, _symbol='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L1562-L1605)

A Legendre series class.

The Legendre class provides the standard Python numerical methods '+', '-', '*', '//', '%', 'divmod', '**', and '()' as well as the attributes and methods listed below.

Parameters:

* **coef** (array_like): Legendre coefficients in order of increasing degree, i.e., `(1, 2, 3)` gives `1*P_0(x) + 2*P_1(x) + 3*P_2(x)`.
* **domain** ((2,) array_like, optional): Domain to use. The interval `[domain[0], domain[1]]` is mapped to the interval `[window[0], window[1]]` by shifting and scaling. The default value is [-1., 1.].
* **window** ((2,) array_like, optional): Window, see [`domain`](numpy.polynomial.legendre.legendre.domain#numpy.polynomial.legendre.Legendre.domain "numpy.polynomial.legendre.Legendre.domain") for its use. The default value is [-1., 1.].
* **symbol** (str, optional): Symbol used to represent the independent variable in string representations of the polynomial expression, e.g. for printing. The symbol must be a valid Python identifier. Default value is 'x'. New in version 1.24.

Attributes:

* **symbol**

#### Methods

| Method | Description |
|---|---|
| [`__call__`](numpy.polynomial.legendre.legendre.__call__#numpy.polynomial.legendre.Legendre.__call__ "numpy.polynomial.legendre.Legendre.__call__")(arg) | Call self as a function. |
| [`basis`](numpy.polynomial.legendre.legendre.basis#numpy.polynomial.legendre.Legendre.basis "numpy.polynomial.legendre.Legendre.basis")(deg[, domain, window, symbol]) | Series basis polynomial of degree `deg`. |
| [`cast`](numpy.polynomial.legendre.legendre.cast#numpy.polynomial.legendre.Legendre.cast "numpy.polynomial.legendre.Legendre.cast")(series[, domain, window]) | Convert series to series of this class. |
| [`convert`](numpy.polynomial.legendre.legendre.convert#numpy.polynomial.legendre.Legendre.convert "numpy.polynomial.legendre.Legendre.convert")([domain, kind, window]) | Convert series to a different kind and/or domain and/or window. |
| [`copy`](numpy.polynomial.legendre.legendre.copy#numpy.polynomial.legendre.Legendre.copy "numpy.polynomial.legendre.Legendre.copy")() | Return a copy. |
| [`cutdeg`](numpy.polynomial.legendre.legendre.cutdeg#numpy.polynomial.legendre.Legendre.cutdeg "numpy.polynomial.legendre.Legendre.cutdeg")(deg) | Truncate series to the given degree. |
| [`degree`](numpy.polynomial.legendre.legendre.degree#numpy.polynomial.legendre.Legendre.degree "numpy.polynomial.legendre.Legendre.degree")() | The degree of the series. |
| [`deriv`](numpy.polynomial.legendre.legendre.deriv#numpy.polynomial.legendre.Legendre.deriv "numpy.polynomial.legendre.Legendre.deriv")([m]) | Differentiate. |
| [`fit`](numpy.polynomial.legendre.legendre.fit#numpy.polynomial.legendre.Legendre.fit "numpy.polynomial.legendre.Legendre.fit")(x, y, deg[, domain, rcond, full, w, ...]) | Least squares fit to data. |
| [`fromroots`](numpy.polynomial.legendre.legendre.fromroots#numpy.polynomial.legendre.Legendre.fromroots "numpy.polynomial.legendre.Legendre.fromroots")(roots[, domain, window, symbol]) | Return series instance that has the specified roots. |
| [`has_samecoef`](numpy.polynomial.legendre.legendre.has_samecoef#numpy.polynomial.legendre.Legendre.has_samecoef "numpy.polynomial.legendre.Legendre.has_samecoef")(other) | Check if coefficients match. |
| [`has_samedomain`](numpy.polynomial.legendre.legendre.has_samedomain#numpy.polynomial.legendre.Legendre.has_samedomain "numpy.polynomial.legendre.Legendre.has_samedomain")(other) | Check if domains match. |
| [`has_sametype`](numpy.polynomial.legendre.legendre.has_sametype#numpy.polynomial.legendre.Legendre.has_sametype "numpy.polynomial.legendre.Legendre.has_sametype")(other) | Check if types match. |
| [`has_samewindow`](numpy.polynomial.legendre.legendre.has_samewindow#numpy.polynomial.legendre.Legendre.has_samewindow "numpy.polynomial.legendre.Legendre.has_samewindow")(other) | Check if windows match. |
| [`identity`](numpy.polynomial.legendre.legendre.identity#numpy.polynomial.legendre.Legendre.identity "numpy.polynomial.legendre.Legendre.identity")([domain, window, symbol]) | Identity function. |
| [`integ`](numpy.polynomial.legendre.legendre.integ#numpy.polynomial.legendre.Legendre.integ "numpy.polynomial.legendre.Legendre.integ")([m, k, lbnd]) | Integrate. |
| [`linspace`](numpy.polynomial.legendre.legendre.linspace#numpy.polynomial.legendre.Legendre.linspace "numpy.polynomial.legendre.Legendre.linspace")([n, domain]) | Return x, y values at equally spaced points in domain. |
| [`mapparms`](numpy.polynomial.legendre.legendre.mapparms#numpy.polynomial.legendre.Legendre.mapparms "numpy.polynomial.legendre.Legendre.mapparms")() | Return the mapping parameters. |
| [`roots`](numpy.polynomial.legendre.legendre.roots#numpy.polynomial.legendre.Legendre.roots "numpy.polynomial.legendre.Legendre.roots")() | Return the roots of the series polynomial. |
| [`trim`](numpy.polynomial.legendre.legendre.trim#numpy.polynomial.legendre.Legendre.trim "numpy.polynomial.legendre.Legendre.trim")([tol]) | Remove trailing coefficients. |
| [`truncate`](numpy.polynomial.legendre.legendre.truncate#numpy.polynomial.legendre.Legendre.truncate "numpy.polynomial.legendre.Legendre.truncate")(size) | Truncate series to length `size`. |
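A minimal usage sketch of the class (an illustration added here, not from the original page; the coefficient values are arbitrary):

```python
import numpy as np
from numpy.polynomial import Legendre

p = Legendre([1, 2, 3])   # 1*P_0 + 2*P_1 + 3*P_2 on the default domain [-1, 1]
q = Legendre([3, 2, 1])

# '+' acts component-wise on the Legendre coefficients
print((p + q).coef)       # [4. 4. 4.]

# Calling evaluates the series: P_0(0)=1, P_1(0)=0, P_2(0)=-0.5
print(p(0.0))             # 1 - 1.5 = -0.5
```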
# numpy.polynomial.legendre.Legendre.identity method _classmethod_ polynomial.legendre.Legendre.identity(_domain =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1085-L1118) Identity function. If `p` is the returned series, then `p(x) == x` for all values of x. Parameters: **domain**{None, array_like}, optional If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None. **window**{None, array_like}, optional If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None. **symbol** str, optional Symbol representing the independent variable. Default is ‘x’. Returns: **new_series** series Series representing the identity. # numpy.polynomial.legendre.Legendre.integ method polynomial.legendre.Legendre.integ(_m =1_, _k =[]_, _lbnd =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L851-L882) Integrate. Return a series instance that is the definite integral of the current series. Parameters: **m** non-negative int The number of integrations to perform. **k** array_like Integration constants. The first constant is applied to the first integration, the second to the second, and so on. The list of values must be less than or equal to `m` in length and any missing values are set to zero. **lbnd** scalar The lower bound of the definite integral. Returns: **new_series** series A new series representing the integral. The domain is the same as the domain of the integrated series.
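A short sketch of these two methods, assuming only `numpy`:

```python
import numpy as np
from numpy.polynomial import Legendre

# identity: returns the series p with p(x) == x, here on the domain [-2, 2]
p = Legendre.identity(domain=[-2, 2])
print(p(np.array([-1.5, 0.0, 0.7])))  # approximately [-1.5, 0.0, 0.7]

# integ: antiderivative of L_1(x) = x; the difference q(b) - q(a) is the
# definite integral of x over [a, b], independent of the integration constant
q = Legendre([0, 1]).integ(m=1, k=[0])
print(q(2.0) - q(0.0))  # integral of x over [0, 2] is 2, up to roundoff
```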
# numpy.polynomial.legendre.Legendre.linspace method polynomial.legendre.Legendre.linspace(_n =100_, _domain =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L921-L949) Return x, y values at equally spaced points in domain. Returns the x, y values at `n` linearly spaced points across the domain. Here y is the value of the polynomial at the points x. By default the domain is the same as that of the series instance. This method is intended mostly as a plotting aid. Parameters: **n** int, optional Number of point pairs to return. The default value is 100. **domain**{None, array_like}, optional If not None, the specified domain is used instead of that of the calling instance. It should be of the form `[beg,end]`. The default is None, in which case the class domain is used. Returns: **x, y** ndarray x is equal to linspace(self.domain[0], self.domain[1], n) and y is the series evaluated at each element of x. # numpy.polynomial.legendre.Legendre.mapparms method polynomial.legendre.Legendre.mapparms()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L822-L849) Return the mapping parameters. The returned values define a linear map `off + scl*x` that is applied to the input arguments before the series is evaluated. The map depends on the `domain` and `window`; if the current `domain` is equal to the `window` the resulting map is the identity. If the coefficients of the series instance are to be used by themselves outside this class, then the linear function must be substituted for the `x` in the standard representation of the base polynomials. Returns: **off, scl** float or complex The mapping function is defined by `off + scl*x`.
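A quick sketch of `linspace` and `mapparms` together, assuming only `numpy`:

```python
import numpy as np
from numpy.polynomial import Legendre

p = Legendre([1, 2, 3], domain=[0, 4])

# linspace: sample the series at n equally spaced points of its domain
x, y = p.linspace(n=5)
print(x)  # [0. 1. 2. 3. 4.]

# mapparms: the affine map off + scl*x taking the domain onto the window,
# here [0, 4] -> [-1, 1], so off = -1 and scl = 0.5
off, scl = p.mapparms()
print(off, scl)
```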
#### Notes If the current domain is the interval `[l1, r1]` and the window is `[l2, r2]`, then the linear mapping function `L` is defined by the equations: L(l1) = l2 L(r1) = r2 # numpy.polynomial.legendre.Legendre.roots method polynomial.legendre.Legendre.roots()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L906-L919) Return the roots of the series polynomial. Compute the roots for the series. Note that the accuracy of the roots decreases the further outside the [`domain`](numpy.polynomial.legendre.legendre.domain#numpy.polynomial.legendre.Legendre.domain "numpy.polynomial.legendre.Legendre.domain") they lie. Returns: **roots** ndarray Array containing the roots of the series. # numpy.polynomial.legendre.Legendre.trim method polynomial.legendre.Legendre.trim(_tol =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L733-L754) Remove trailing coefficients. Remove trailing coefficients until a coefficient is reached whose absolute value is greater than `tol` or the beginning of the series is reached. If all the coefficients would be removed the series is set to `[0]`. A new series instance is returned with the new coefficients. The current instance remains unchanged. Parameters: **tol** non-negative number All trailing coefficients less than `tol` will be removed. Returns: **new_series** series New instance of series with trimmed coefficients. # numpy.polynomial.legendre.Legendre.truncate method polynomial.legendre.Legendre.truncate(_size_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L756-L783) Truncate series to length `size`. Reduce the series to length `size` by discarding the high degree terms. The value of `size` must be a positive integer. This can be useful in least squares where the coefficients of the high degree terms may be very small.
Parameters: **size** positive int The series is reduced to length `size` by discarding the high degree terms. The value of `size` must be a positive integer. Returns: **new_series** series New instance of series with truncated coefficients. # numpy.polynomial.legendre.legfit polynomial.legendre.legfit(_x_ , _y_ , _deg_ , _rcond =None_, _full =False_, _w =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L1246-L1370) Least squares fit of Legendre series to data. Return the coefficients of a Legendre series of degree `deg` that is the least squares fit to the data values `y` given at points `x`. If `y` is 1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple fits are done, one for each column of `y`, and the resulting coefficients are stored in the corresponding columns of a 2-D return. The fitted polynomial(s) are in the form \\[p(x) = c_0 + c_1 * L_1(x) + ... + c_n * L_n(x),\\] where `n` is `deg`. Parameters: **x** array_like, shape (M,) x-coordinates of the M sample points `(x[i], y[i])`. **y** array_like, shape (M,) or (M, K) y-coordinates of the sample points. Several data sets of sample points sharing the same x-coordinates can be fitted at once by passing in a 2D-array that contains one dataset per column. **deg** int or 1-D array_like Degree(s) of the fitting polynomials. If `deg` is a single integer all terms up to and including the `deg`’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead. **rcond** float, optional Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is len(x)*eps, where eps is the relative precision of the float type, about 2e-16 in most cases. **full** bool, optional Switch determining nature of return value.
When it is False (the default) just the coefficients are returned, when True diagnostic information from the singular value decomposition is also returned. **w** array_like, shape (`M`,), optional Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None. Returns: **coef** ndarray, shape (deg + 1,) or (deg + 1, K) Legendre coefficients ordered from low to high. If `y` was 2-D, the coefficients for the data in column k of `y` are in column `k`. If `deg` is specified as a list, coefficients for terms not included in the fit are set equal to zero in the returned `coef`. **[residuals, rank, singular_values, rcond]** list These values are only returned if `full == True` * residuals – sum of squared residuals of the least squares fit * rank – the numerical rank of the scaled Vandermonde matrix * singular_values – singular values of the scaled Vandermonde matrix * rcond – value of `rcond`. For more details, see [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq"). Warns: RankWarning The rank of the coefficient matrix in the least-squares fit is deficient. The warning is only raised if `full == False`.
The warnings can be turned off by >>> import warnings >>> warnings.simplefilter('ignore', np.exceptions.RankWarning) See also [`numpy.polynomial.polynomial.polyfit`](numpy.polynomial.polynomial.polyfit#numpy.polynomial.polynomial.polyfit "numpy.polynomial.polynomial.polyfit") [`numpy.polynomial.chebyshev.chebfit`](numpy.polynomial.chebyshev.chebfit#numpy.polynomial.chebyshev.chebfit "numpy.polynomial.chebyshev.chebfit") [`numpy.polynomial.laguerre.lagfit`](numpy.polynomial.laguerre.lagfit#numpy.polynomial.laguerre.lagfit "numpy.polynomial.laguerre.lagfit") [`numpy.polynomial.hermite.hermfit`](numpy.polynomial.hermite.hermfit#numpy.polynomial.hermite.hermfit "numpy.polynomial.hermite.hermfit") [`numpy.polynomial.hermite_e.hermefit`](numpy.polynomial.hermite_e.hermefit#numpy.polynomial.hermite_e.hermefit "numpy.polynomial.hermite_e.hermefit") [`legval`](numpy.polynomial.legendre.legval#numpy.polynomial.legendre.legval "numpy.polynomial.legendre.legval") Evaluates a Legendre series. [`legvander`](numpy.polynomial.legendre.legvander#numpy.polynomial.legendre.legvander "numpy.polynomial.legendre.legvander") Vandermonde matrix of Legendre series. [`legweight`](numpy.polynomial.legendre.legweight#numpy.polynomial.legendre.legweight "numpy.polynomial.legendre.legweight") Legendre weight function (= 1). [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq") Computes a least-squares fit from the matrix. [`scipy.interpolate.UnivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html#scipy.interpolate.UnivariateSpline "\(in SciPy v1.14.1\)") Computes spline fits. #### Notes The solution is the coefficients of the Legendre series `p` that minimizes the sum of the weighted squared errors \\[E = \sum_j w_j^2 * |y_j - p(x_j)|^2,\\] where \\(w_j\\) are the weights. 
This problem is solved by setting up the (typically) overdetermined matrix equation \\[V(x) * c = w * y,\\] where `V` is the weighted pseudo Vandermonde matrix of `x`, `c` are the coefficients to be solved for, `w` are the weights, and `y` are the observed values. This equation is then solved using the singular value decomposition of `V`. If some of the singular values of `V` are so small that they are neglected, then a [`RankWarning`](numpy.exceptions.rankwarning#numpy.exceptions.RankWarning "numpy.exceptions.RankWarning") will be issued. This means that the coefficient values may be poorly determined. Using a lower order fit will usually get rid of the warning. The `rcond` parameter can also be set to a value smaller than its default, but the resulting fit may be spurious and have large contributions from roundoff error. Fits using Legendre series are usually better conditioned than fits using power series, but much can depend on the distribution of the sample points and the smoothness of the data. If the quality of the fit is inadequate splines may be a good alternative. #### References [1] Wikipedia, “Curve fitting”. # numpy.polynomial.legendre.legfromroots polynomial.legendre.legfromroots(_roots_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L269-L321) Generate a Legendre series with given roots. The function returns the coefficients of the polynomial \\[p(x) = (x - r_0) * (x - r_1) * ... * (x - r_n),\\] in Legendre form, where the \\(r_n\\) are the roots specified in `roots`. If a zero has multiplicity n, then it must appear in `roots` n times. For instance, if 2 is a root of multiplicity three and 3 is a root of multiplicity 2, then `roots` looks something like [2, 2, 2, 3, 3]. The roots can appear in any order. If the returned coefficients are `c`, then \\[p(x) = c_0 + c_1 * L_1(x) + ...
+ c_n * L_n(x)\\] The coefficient of the last term is not generally 1 for monic polynomials in Legendre form. Parameters: **roots** array_like Sequence containing the roots. Returns: **out** ndarray 1-D array of coefficients. If all roots are real then `out` is a real array, if some of the roots are complex, then `out` is complex even if all the coefficients in the result are real (see Examples below). See also [`numpy.polynomial.polynomial.polyfromroots`](numpy.polynomial.polynomial.polyfromroots#numpy.polynomial.polynomial.polyfromroots "numpy.polynomial.polynomial.polyfromroots") [`numpy.polynomial.chebyshev.chebfromroots`](numpy.polynomial.chebyshev.chebfromroots#numpy.polynomial.chebyshev.chebfromroots "numpy.polynomial.chebyshev.chebfromroots") [`numpy.polynomial.laguerre.lagfromroots`](numpy.polynomial.laguerre.lagfromroots#numpy.polynomial.laguerre.lagfromroots "numpy.polynomial.laguerre.lagfromroots") [`numpy.polynomial.hermite.hermfromroots`](numpy.polynomial.hermite.hermfromroots#numpy.polynomial.hermite.hermfromroots "numpy.polynomial.hermite.hermfromroots") [`numpy.polynomial.hermite_e.hermefromroots`](numpy.polynomial.hermite_e.hermefromroots#numpy.polynomial.hermite_e.hermefromroots "numpy.polynomial.hermite_e.hermefromroots") #### Examples >>> import numpy.polynomial.legendre as L >>> L.legfromroots((-1,0,1)) # x^3 - x relative to the standard basis array([ 0. , -0.4, 0. , 0.4]) >>> j = complex(0,1) >>> L.legfromroots((-j,j)) # x^2 + 1 relative to the standard basis array([ 1.33333333+0.j, 0.00000000+0.j, 0.66666667+0.j]) # may vary # numpy.polynomial.legendre.leggauss polynomial.legendre.leggauss(_deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L1472-L1534) Gauss-Legendre quadrature. Computes the sample points and weights for Gauss-Legendre quadrature. 
These sample points and weights will correctly integrate polynomials of degree \\(2*deg - 1\\) or less over the interval \\([-1, 1]\\) with the weight function \\(f(x) = 1\\). Parameters: **deg** int Number of sample points and weights. It must be >= 1. Returns: **x** ndarray 1-D ndarray containing the sample points. **y** ndarray 1-D ndarray containing the weights. #### Notes The results have only been tested up to degree 100; higher degrees may be problematic. The weights are determined by using the fact that \\[w_k = c / (L'_n(x_k) * L_{n-1}(x_k))\\] where \\(c\\) is a constant independent of \\(k\\) and \\(x_k\\) is the k’th root of \\(L_n\\), and then scaling the results to get the right value when integrating 1. # numpy.polynomial.legendre.leggrid2d polynomial.legendre.leggrid2d(_x_ , _y_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L956-L1000) Evaluate a 2-D Legendre series on the Cartesian product of x and y. This function returns the values: \\[p(a,b) = \sum_{i,j} c_{i,j} * L_i(a) * L_j(b)\\] where the points `(a, b)` consist of all pairs formed by taking `a` from `x` and `b` from `y`. The resulting points form a grid with `x` in the first dimension and `y` in the second. The parameters `x` and `y` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than two dimensions, ones are implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape + y.shape. Parameters: **x, y** array_like, compatible objects The two dimensional series is evaluated at the points in the Cartesian product of `x` and `y`. If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar.
**c** array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns: **values** ndarray, compatible object The values of the two dimensional Legendre series at points in the Cartesian product of `x` and `y`. See also [`legval`](numpy.polynomial.legendre.legval#numpy.polynomial.legendre.legval "numpy.polynomial.legendre.legval"), [`legval2d`](numpy.polynomial.legendre.legval2d#numpy.polynomial.legendre.legval2d "numpy.polynomial.legendre.legval2d"), [`legval3d`](numpy.polynomial.legendre.legval3d#numpy.polynomial.legendre.legval3d "numpy.polynomial.legendre.legval3d"), [`leggrid3d`](numpy.polynomial.legendre.leggrid3d#numpy.polynomial.legendre.leggrid3d "numpy.polynomial.legendre.leggrid3d") # numpy.polynomial.legendre.leggrid3d polynomial.legendre.leggrid3d(_x_ , _y_ , _z_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L1048-L1095) Evaluate a 3-D Legendre series on the Cartesian product of x, y, and z. This function returns the values: \\[p(a,b,c) = \sum_{i,j,k} c_{i,j,k} * L_i(a) * L_j(b) * L_k(c)\\] where the points `(a, b, c)` consist of all triples formed by taking `a` from `x`, `b` from `y`, and `c` from `z`. The resulting points form a grid with `x` in the first dimension, `y` in the second, and `z` in the third. The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than three dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape + y.shape + z.shape.
Parameters: **x, y, z** array_like, compatible objects The three dimensional series is evaluated at the points in the Cartesian product of `x`, `y`, and `z`. If `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j,k is contained in `c[i,j,k]`. If `c` has dimension greater than three the remaining indices enumerate multiple sets of coefficients. Returns: **values** ndarray, compatible object The values of the three dimensional polynomial at points in the Cartesian product of `x`, `y`, and `z`. See also [`legval`](numpy.polynomial.legendre.legval#numpy.polynomial.legendre.legval "numpy.polynomial.legendre.legval"), [`legval2d`](numpy.polynomial.legendre.legval2d#numpy.polynomial.legendre.legval2d "numpy.polynomial.legendre.legval2d"), [`leggrid2d`](numpy.polynomial.legendre.leggrid2d#numpy.polynomial.legendre.leggrid2d "numpy.polynomial.legendre.leggrid2d"), [`legval3d`](numpy.polynomial.legendre.legval3d#numpy.polynomial.legendre.legval3d "numpy.polynomial.legendre.legval3d") # numpy.polynomial.legendre.legint polynomial.legendre.legint(_c_ , _m =1_, _k =[]_, _lbnd =0_, _scl =1_, _axis =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L704-L827) Integrate a Legendre series. Returns the Legendre series coefficients `c` integrated `m` times from `lbnd` along `axis`. At each iteration the resulting series is **multiplied** by `scl` and an integration constant, `k`, is added. The scaling factor is for use in a linear change of variable. (“Buyer beware”: note that, depending on what one is doing, one may want `scl` to be the reciprocal of what one might expect; for more information, see the Notes section below.)
The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series `L_0 + 2*L_1 + 3*L_2` while [[1,2],[1,2]] represents `1*L_0(x)*L_0(y) + 1*L_1(x)*L_0(y) + 2*L_0(x)*L_1(y) + 2*L_1(x)*L_1(y)` if axis=0 is `x` and axis=1 is `y`. Parameters: **c** array_like Array of Legendre series coefficients. If c is multidimensional the different axes correspond to different variables with the degree in each axis given by the corresponding index. **m** int, optional Order of integration, must be positive. (Default: 1) **k**{[], list, scalar}, optional Integration constant(s). The value of the first integral at `lbnd` is the first value in the list, the value of the second integral at `lbnd` is the second value, etc. If `k == []` (the default), all constants are set to zero. If `m == 1`, a single scalar can be given instead of a list. **lbnd** scalar, optional The lower bound of the integral. (Default: 0) **scl** scalar, optional Following each integration the result is _multiplied_ by `scl` before the integration constant is added. (Default: 1) **axis** int, optional Axis over which the integral is taken. (Default: 0). Returns: **S** ndarray Legendre series coefficient array of the integral. Raises: ValueError If `m < 0`, `len(k) > m`, `np.ndim(lbnd) != 0`, or `np.ndim(scl) != 0`. See also [`legder`](numpy.polynomial.legendre.legder#numpy.polynomial.legendre.legder "numpy.polynomial.legendre.legder") #### Notes Note that the result of each integration is _multiplied_ by `scl`. Why is this important to note? Say one is making a linear change of variable \\(u = ax + b\\) in an integral relative to `x`. Then \\(dx = du/a\\), so one will need to set `scl` equal to \\(1/a\\) - perhaps not what one would have first thought. Also note that, in general, the result of integrating a C-series needs to be “reprojected” onto the C-series basis set.
Thus, typically, the result of this function is “unintuitive,” albeit correct; see Examples section below. #### Examples >>> from numpy.polynomial import legendre as L >>> c = (1,2,3) >>> L.legint(c) array([ 0.33333333, 0.4 , 0.66666667, 0.6 ]) # may vary >>> L.legint(c, 3) array([ 1.66666667e-02, -1.78571429e-02, 4.76190476e-02, # may vary -1.73472348e-18, 1.90476190e-02, 9.52380952e-03]) >>> L.legint(c, k=3) array([ 3.33333333, 0.4 , 0.66666667, 0.6 ]) # may vary >>> L.legint(c, lbnd=-2) array([ 7.33333333, 0.4 , 0.66666667, 0.6 ]) # may vary >>> L.legint(c, scl=2) array([ 0.66666667, 0.8 , 1.33333333, 1.2 ]) # may vary # numpy.polynomial.legendre.legline polynomial.legendre.legline(_off_ , _scl_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L229-L266) Legendre series whose graph is a straight line. Parameters: **off, scl** scalars The specified line is given by `off + scl*x`. Returns: **y** ndarray This module’s representation of the Legendre series for `off + scl*x`. 
See also [`numpy.polynomial.polynomial.polyline`](numpy.polynomial.polynomial.polyline#numpy.polynomial.polynomial.polyline "numpy.polynomial.polynomial.polyline") [`numpy.polynomial.chebyshev.chebline`](numpy.polynomial.chebyshev.chebline#numpy.polynomial.chebyshev.chebline "numpy.polynomial.chebyshev.chebline") [`numpy.polynomial.laguerre.lagline`](numpy.polynomial.laguerre.lagline#numpy.polynomial.laguerre.lagline "numpy.polynomial.laguerre.lagline") [`numpy.polynomial.hermite.hermline`](numpy.polynomial.hermite.hermline#numpy.polynomial.hermite.hermline "numpy.polynomial.hermite.hermline") [`numpy.polynomial.hermite_e.hermeline`](numpy.polynomial.hermite_e.hermeline#numpy.polynomial.hermite_e.hermeline "numpy.polynomial.hermite_e.hermeline") #### Examples >>> import numpy.polynomial.legendre as L >>> L.legline(3,2) array([3, 2]) >>> L.legval(-3, L.legline(3,2)) # should be -3 -3.0 # numpy.polynomial.legendre.legmul polynomial.legendre.legmul(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L466-L531) Multiply one Legendre series by another. Returns the product of two Legendre series `c1` * `c2`. The arguments are sequences of coefficients, from lowest order “term” to highest, e.g., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. Parameters: **c1, c2** array_like 1-D arrays of Legendre series coefficients ordered from low to high. Returns: **out** ndarray Of Legendre series coefficients representing their product. 
See also [`legadd`](numpy.polynomial.legendre.legadd#numpy.polynomial.legendre.legadd "numpy.polynomial.legendre.legadd"), [`legsub`](numpy.polynomial.legendre.legsub#numpy.polynomial.legendre.legsub "numpy.polynomial.legendre.legsub"), [`legmulx`](numpy.polynomial.legendre.legmulx#numpy.polynomial.legendre.legmulx "numpy.polynomial.legendre.legmulx"), [`legdiv`](numpy.polynomial.legendre.legdiv#numpy.polynomial.legendre.legdiv "numpy.polynomial.legendre.legdiv"), [`legpow`](numpy.polynomial.legendre.legpow#numpy.polynomial.legendre.legpow "numpy.polynomial.legendre.legpow") #### Notes In general, the (polynomial) product of two C-series results in terms that are not in the Legendre polynomial basis set. Thus, to express the product as a Legendre series, it is necessary to “reproject” the product onto said basis set, which may produce “unintuitive” (but correct) results; see Examples section below. #### Examples >>> from numpy.polynomial import legendre as L >>> c1 = (1,2,3) >>> c2 = (3,2) >>> L.legmul(c1,c2) # multiplication requires "reprojection" array([ 4.33333333, 10.4 , 11.66666667, 3.6 ]) # may vary # numpy.polynomial.legendre.legmulx polynomial.legendre.legmulx(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L410-L463) Multiply a Legendre series by x. Multiply the Legendre series `c` by x, where x is the independent variable. Parameters: **c** array_like 1-D array of Legendre series coefficients ordered from low to high. Returns: **out** ndarray Array representing the result of the multiplication. 
See also [`legadd`](numpy.polynomial.legendre.legadd#numpy.polynomial.legendre.legadd "numpy.polynomial.legendre.legadd"), [`legsub`](numpy.polynomial.legendre.legsub#numpy.polynomial.legendre.legsub "numpy.polynomial.legendre.legsub"), [`legmul`](numpy.polynomial.legendre.legmul#numpy.polynomial.legendre.legmul "numpy.polynomial.legendre.legmul"), [`legdiv`](numpy.polynomial.legendre.legdiv#numpy.polynomial.legendre.legdiv "numpy.polynomial.legendre.legdiv"), [`legpow`](numpy.polynomial.legendre.legpow#numpy.polynomial.legendre.legpow "numpy.polynomial.legendre.legpow") #### Notes The multiplication uses the recursion relationship for Legendre polynomials in the form \\[xP_i(x) = ((i + 1)*P_{i + 1}(x) + i*P_{i - 1}(x))/(2i + 1)\\] #### Examples >>> from numpy.polynomial import legendre as L >>> L.legmulx([1,2,3]) array([ 0.66666667, 2.2, 1.33333333, 1.8]) # may vary # numpy.polynomial.legendre.legone polynomial.legendre.legone _= array([1])_ An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(…)`) for instantiating an array. For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array. Parameters: **(for the __new__ method; see Notes below)** **shape** tuple of ints Shape of created array. **dtype** data-type, optional Any object that can be interpreted as a numpy data type. **buffer** object exposing buffer interface, optional Used to fill the array with data. 
**offset** int, optional Offset of array data in buffer. **strides** tuple of ints, optional Strides of data in memory. **order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also [`array`](numpy.array#numpy.array "numpy.array") Construct an array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Create an array, each element of which is zero. [`empty`](numpy.empty#numpy.empty "numpy.empty") Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Create a data-type. [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray") An ndarray alias [generic](https://docs.python.org/3/glossary.html#term-generic-type "\(in Python v3.13\)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). #### Notes There are two modes of creating an array using `__new__`: 1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used. 2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted. No `__init__` method is needed because the array is fully initialized after the `__new__` method. #### Examples These examples illustrate the low-level [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray. First mode, `buffer` is None: >>> import numpy as np >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. skip first element array([2, 3]) Attributes: **T** ndarray Transpose of the array. **data** buffer The array’s elements, in memory. **dtype** dtype object Describes the format of the elements in the array.
**flags** dict Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc. **flat** numpy.flatiter object Flattened version of the array as an iterator. The iterator allows assignments, e.g., `x.flat = 3` (See [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") for assignment examples; TODO). **imag** ndarray Imaginary part of the array. **real** ndarray Real part of the array. **size** int Number of elements in the array. **itemsize** int The memory use of each array element in bytes. **nbytes** int The total number of bytes required to store the array data, i.e., `itemsize * size`. **ndim** int The array’s number of dimensions. **shape** tuple of ints Shape of the array. **strides** tuple of ints The step-size required to move from one element to the next in memory. For example, a contiguous `(3, 4)` array of type `int16` in C-order has strides `(8, 2)`. This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (`2 * 4`). **ctypes** ctypes object Class containing properties of the array needed for interaction with ctypes. **base** ndarray If the array is a view into another array, that array is its `base` (unless that array is also a view). The `base` array is where the array data is actually stored. # numpy.polynomial.legendre.legpow polynomial.legendre.legpow(_c_ , _pow_ , _maxpower =16_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L583-L611) Raise a Legendre series to a power. Returns the Legendre series `c` raised to the power [`pow`](numpy.pow#numpy.pow "numpy.pow"). The argument `c` is a sequence of coefficients ordered from low to high. i.e., [1,2,3] is the series `P_0 + 2*P_1 + 3*P_2.` Parameters: **c** array_like 1-D array of Legendre series coefficients ordered from low to high. 
**pow** integer Power to which the series will be raised. **maxpower** integer, optional Maximum power allowed. This is mainly to limit growth of the series to unmanageable size. Default is 16.

Returns: **coef** ndarray Legendre series of `c` raised to the power `pow`.

See also [`legadd`](numpy.polynomial.legendre.legadd#numpy.polynomial.legendre.legadd "numpy.polynomial.legendre.legadd"), [`legsub`](numpy.polynomial.legendre.legsub#numpy.polynomial.legendre.legsub "numpy.polynomial.legendre.legsub"), [`legmulx`](numpy.polynomial.legendre.legmulx#numpy.polynomial.legendre.legmulx "numpy.polynomial.legendre.legmulx"), [`legmul`](numpy.polynomial.legendre.legmul#numpy.polynomial.legendre.legmul "numpy.polynomial.legendre.legmul"), [`legdiv`](numpy.polynomial.legendre.legdiv#numpy.polynomial.legendre.legdiv "numpy.polynomial.legendre.legdiv")

# numpy.polynomial.legendre.legroots

polynomial.legendre.legroots(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L1411-L1469)

Compute the roots of a Legendre series.

Return the roots (a.k.a. “zeros”) of the polynomial \\[p(x) = \sum_i c[i] * L_i(x).\\]

Parameters: **c** 1-D array_like 1-D array of coefficients.

Returns: **out** ndarray Array of the roots of the series. If all the roots are real, then `out` is also real, otherwise it is complex.
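The `legpow` entry above has no example, so here is a minimal sketch: raising a series to the power 2 should agree, coefficient by coefficient, with multiplying the series by itself via `legmul`.

```python
import numpy as np
from numpy.polynomial import legendre as leg

c = np.array([1.0, 2.0, 3.0])   # P_0 + 2*P_1 + 3*P_2
sq = leg.legpow(c, 2)           # the series squared
# legpow(c, 2) should match legmul(c, c) coefficient-by-coefficient
assert np.allclose(sq, leg.legmul(c, c))
```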
See also [`numpy.polynomial.polynomial.polyroots`](numpy.polynomial.polynomial.polyroots#numpy.polynomial.polynomial.polyroots "numpy.polynomial.polynomial.polyroots") [`numpy.polynomial.chebyshev.chebroots`](numpy.polynomial.chebyshev.chebroots#numpy.polynomial.chebyshev.chebroots "numpy.polynomial.chebyshev.chebroots") [`numpy.polynomial.laguerre.lagroots`](numpy.polynomial.laguerre.lagroots#numpy.polynomial.laguerre.lagroots "numpy.polynomial.laguerre.lagroots") [`numpy.polynomial.hermite.hermroots`](numpy.polynomial.hermite.hermroots#numpy.polynomial.hermite.hermroots "numpy.polynomial.hermite.hermroots") [`numpy.polynomial.hermite_e.hermeroots`](numpy.polynomial.hermite_e.hermeroots#numpy.polynomial.hermite_e.hermeroots "numpy.polynomial.hermite_e.hermeroots")

#### Notes

The root estimates are obtained as the eigenvalues of the companion matrix. Roots far from the origin of the complex plane may have large errors due to the numerical instability of the series for such values. Roots with multiplicity greater than 1 will also show larger errors as the value of the series near such points is relatively insensitive to errors in the roots. Isolated roots near the origin can be improved by a few iterations of Newton’s method. The Legendre series basis polynomials aren’t powers of `x` so the results of this function may seem unintuitive.

#### Examples

>>> import numpy.polynomial.legendre as leg
>>> leg.legroots((1, 2, 3, 4)) # 4L_3 + 3L_2 + 2L_1 + 1L_0, all real roots
array([-0.85099543, -0.11407192, 0.51506735]) # may vary

# numpy.polynomial.legendre.legsub

polynomial.legendre.legsub(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L366-L407)

Subtract one Legendre series from another.

Returns the difference of two Legendre series `c1` - `c2`. The sequences of coefficients are from lowest order term to highest, i.e., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`.
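The notes for `legroots` above mention that isolated roots can be improved by a few iterations of Newton’s method. A minimal sketch of that refinement, using `legder` for the derivative series (the three-step loop count is an arbitrary choice):

```python
import numpy as np
from numpy.polynomial import legendre as leg

c = np.array([1.0, 2.0, 3.0, 4.0])   # 1*L_0 + 2*L_1 + 3*L_2 + 4*L_3
roots = leg.legroots(c)
dc = leg.legder(c)                   # coefficients of the derivative series
for _ in range(3):                   # a few Newton steps: x <- x - p(x)/p'(x)
    roots = roots - leg.legval(roots, c) / leg.legval(roots, dc)
# the refined roots should make the series (numerically) vanish
assert np.allclose(leg.legval(roots, c), 0.0, atol=1e-10)
```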
Parameters: **c1, c2** array_like 1-D arrays of Legendre series coefficients ordered from low to high.

Returns: **out** ndarray Of Legendre series coefficients representing their difference.

See also [`legadd`](numpy.polynomial.legendre.legadd#numpy.polynomial.legendre.legadd "numpy.polynomial.legendre.legadd"), [`legmulx`](numpy.polynomial.legendre.legmulx#numpy.polynomial.legendre.legmulx "numpy.polynomial.legendre.legmulx"), [`legmul`](numpy.polynomial.legendre.legmul#numpy.polynomial.legendre.legmul "numpy.polynomial.legendre.legmul"), [`legdiv`](numpy.polynomial.legendre.legdiv#numpy.polynomial.legendre.legdiv "numpy.polynomial.legendre.legdiv"), [`legpow`](numpy.polynomial.legendre.legpow#numpy.polynomial.legendre.legpow "numpy.polynomial.legendre.legpow")

#### Notes

Unlike multiplication, division, etc., the difference of two Legendre series is a Legendre series (without having to “reproject” the result onto the basis set) so subtraction, just like that of “standard” polynomials, is simply “component-wise.”

#### Examples

>>> from numpy.polynomial import legendre as L
>>> c1 = (1,2,3)
>>> c2 = (3,2,1)
>>> L.legsub(c1,c2)
array([-2., 0., 2.])
>>> L.legsub(c2,c1) # -L.legsub(c1,c2)
array([ 2., 0., -2.])

# numpy.polynomial.legendre.legtrim

polynomial.legendre.legtrim(_c_ , _tol =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polyutils.py#L144-L192)

Remove “small” “trailing” coefficients from a polynomial.

“Small” means “small in absolute value” and is controlled by the parameter `tol`; “trailing” means highest order coefficient(s), e.g., in `[0, 1, 1, 0, 0]` (which represents `0 + x + x**2 + 0*x**3 + 0*x**4`) both the 3-rd and 4-th order coefficients would be “trimmed.”

Parameters: **c** array_like 1-d array of coefficients, ordered from lowest order to highest. **tol** number, optional Trailing (i.e., highest order) elements with absolute value less than or equal to `tol` (default value is zero) are removed.
Returns: **trimmed** ndarray 1-d array with trailing zeros removed. If the resulting series would be empty, a series containing a single zero is returned.

Raises: ValueError If `tol` < 0

#### Examples

>>> from numpy.polynomial import polyutils as pu
>>> pu.trimcoef((0,0,3,0,5,0,0))
array([0., 0., 3., 0., 5.])
>>> pu.trimcoef((0,0,1e-3,0,1e-5,0,0),1e-3) # item == tol is trimmed
array([0.])
>>> i = complex(0,1) # works for complex
>>> pu.trimcoef((3e-4,1e-3*(1-i),5e-4,2e-5*(1+i)), 1e-3)
array([0.0003+0.j , 0.001 -0.001j])

# numpy.polynomial.legendre.legval

polynomial.legendre.legval(_x_ , _c_ , _tensor =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L830-L910)

Evaluate a Legendre series at points x.

If `c` is of length `n + 1`, this function returns the value: \\[p(x) = c_0 * L_0(x) + c_1 * L_1(x) + ... + c_n * L_n(x)\\]

The parameter `x` is converted to an array only if it is a tuple or a list, otherwise it is treated as a scalar. In either case, either `x` or its elements must support multiplication and addition both with themselves and with the elements of `c`.

If `c` is a 1-D array, then `p(x)` will have the same shape as `x`. If `c` is multidimensional, then the shape of the result depends on the value of `tensor`. If `tensor` is true the shape will be c.shape[1:] + x.shape. If `tensor` is false the shape will be c.shape[1:]. Note that scalars have shape ().

Trailing zeros in the coefficients will be used in the evaluation, so they should be avoided if efficiency is a concern.

Parameters: **x** array_like, compatible object If `x` is a list or tuple, it is converted to an ndarray, otherwise it is left unchanged and treated as a scalar. In either case, `x` or its elements must support addition and multiplication with themselves and with the elements of `c`. **c** array_like Array of coefficients ordered so that the coefficients for terms of degree n are contained in c[n].
If `c` is multidimensional the remaining indices enumerate multiple polynomials. In the two dimensional case the coefficients may be thought of as stored in the columns of `c`. **tensor** boolean, optional If True, the shape of the coefficient array is extended with ones on the right, one for each dimension of `x`. Scalars have dimension 0 for this action. The result is that every column of coefficients in `c` is evaluated for every element of `x`. If False, `x` is broadcast over the columns of `c` for the evaluation. This keyword is useful when `c` is multidimensional. The default value is True.

Returns: **values** ndarray, algebra_like The shape of the return value is described above.

See also [`legval2d`](numpy.polynomial.legendre.legval2d#numpy.polynomial.legendre.legval2d "numpy.polynomial.legendre.legval2d"), [`leggrid2d`](numpy.polynomial.legendre.leggrid2d#numpy.polynomial.legendre.leggrid2d "numpy.polynomial.legendre.leggrid2d"), [`legval3d`](numpy.polynomial.legendre.legval3d#numpy.polynomial.legendre.legval3d "numpy.polynomial.legendre.legval3d"), [`leggrid3d`](numpy.polynomial.legendre.leggrid3d#numpy.polynomial.legendre.leggrid3d "numpy.polynomial.legendre.leggrid3d")

#### Notes

The evaluation uses Clenshaw recursion, aka synthetic division.

# numpy.polynomial.legendre.legval2d

polynomial.legendre.legval2d(_x_ , _y_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L913-L953)

Evaluate a 2-D Legendre series at points (x, y).

This function returns the values: \\[p(x,y) = \sum_{i,j} c_{i,j} * L_i(x) * L_j(y)\\]

The parameters `x` and `y` are converted to arrays only if they are tuples or lists, otherwise they are treated as scalars, and they must have the same shape after conversion. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`.

If `c` is a 1-D array, a one is implicitly appended to its shape to make it 2-D.
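The shape rules for `legval` described above — with the default `tensor=True`, a multidimensional `c` yields a result of shape `c.shape[1:] + x.shape`, with every column of coefficients evaluated at every element of `x` — can be seen in a small sketch:

```python
import numpy as np
from numpy.polynomial import legendre as leg

c = np.arange(6.0).reshape(3, 2)   # two series, three coefficients each
x = np.linspace(-1.0, 1.0, 4)
v = leg.legval(x, c)               # tensor=True is the default
assert v.shape == c.shape[1:] + x.shape          # (2, 4)
# each column of c is an independent series evaluated at every x
assert np.allclose(v[0], leg.legval(x, c[:, 0]))
assert np.allclose(v[1], leg.legval(x, c[:, 1]))
```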
The shape of the result will be c.shape[2:] + x.shape.

Parameters: **x, y** array_like, compatible objects The two dimensional series is evaluated at the points `(x, y)`, where `x` and `y` must have the same shape. If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients.

Returns: **values** ndarray, compatible object The values of the two dimensional Legendre series at points formed from pairs of corresponding values from `x` and `y`.

See also [`legval`](numpy.polynomial.legendre.legval#numpy.polynomial.legendre.legval "numpy.polynomial.legendre.legval"), [`leggrid2d`](numpy.polynomial.legendre.leggrid2d#numpy.polynomial.legendre.leggrid2d "numpy.polynomial.legendre.leggrid2d"), [`legval3d`](numpy.polynomial.legendre.legval3d#numpy.polynomial.legendre.legval3d "numpy.polynomial.legendre.legval3d"), [`leggrid3d`](numpy.polynomial.legendre.leggrid3d#numpy.polynomial.legendre.leggrid3d "numpy.polynomial.legendre.leggrid3d")

# numpy.polynomial.legendre.legval3d

polynomial.legendre.legval3d(_x_ , _y_ , _z_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L1003-L1045)

Evaluate a 3-D Legendre series at points (x, y, z).

This function returns the values: \\[p(x,y,z) = \sum_{i,j,k} c_{i,j,k} * L_i(x) * L_j(y) * L_k(z)\\]

The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists, otherwise they are treated as scalars, and they must have the same shape after conversion. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`.
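The double sum defining `legval2d` above can be checked directly against an explicit loop; the coefficient shape `(2, 3)` here is an arbitrary example:

```python
import numpy as np
from numpy.polynomial import legendre as leg

c = np.array([[1.0, 0.5, 0.25],
              [2.0, 1.0, 0.5]])   # c[i, j] multiplies L_i(x) * L_j(y)
x, y = 0.3, -0.7

# L_i at a point: evaluate a unit coefficient vector with legval
def L(t, i, n):
    return leg.legval(t, np.eye(n)[i])

expected = sum(c[i, j] * L(x, i, 2) * L(y, j, 3)
               for i in range(2) for j in range(3))
assert np.allclose(leg.legval2d(x, y, c), expected)
```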
If `c` has fewer than 3 dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape. Parameters: **x, y, z** array_like, compatible object The three dimensional series is evaluated at the points `(x, y, z)`, where `x`, `y`, and `z` must have the same shape. If any of `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficient of the term of multi- degree i,j,k is contained in `c[i,j,k]`. If `c` has dimension greater than 3 the remaining indices enumerate multiple sets of coefficients. Returns: **values** ndarray, compatible object The values of the multidimensional polynomial on points formed with triples of corresponding values from `x`, `y`, and `z`. See also [`legval`](numpy.polynomial.legendre.legval#numpy.polynomial.legendre.legval "numpy.polynomial.legendre.legval"), [`legval2d`](numpy.polynomial.legendre.legval2d#numpy.polynomial.legendre.legval2d "numpy.polynomial.legendre.legval2d"), [`leggrid2d`](numpy.polynomial.legendre.leggrid2d#numpy.polynomial.legendre.leggrid2d "numpy.polynomial.legendre.leggrid2d"), [`leggrid3d`](numpy.polynomial.legendre.leggrid3d#numpy.polynomial.legendre.leggrid3d "numpy.polynomial.legendre.leggrid3d") # numpy.polynomial.legendre.legvander polynomial.legendre.legvander(_x_ , _deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L1098-L1148) Pseudo-Vandermonde matrix of given degree. Returns the pseudo-Vandermonde matrix of degree `deg` and sample points `x`. The pseudo-Vandermonde matrix is defined by \\[V[..., i] = L_i(x)\\] where `0 <= i <= deg`. The leading indices of `V` index the elements of `x` and the last index is the degree of the Legendre polynomial. 
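Since `V[..., i] = L_i(x)` as defined above, each column of the `legvander` matrix should match evaluating a unit coefficient vector with `legval` — a small sketch:

```python
import numpy as np
from numpy.polynomial import legendre as leg

x = np.linspace(-1.0, 1.0, 5)
V = leg.legvander(x, 3)            # shape (5, 4): columns L_0(x) .. L_3(x)
assert V.shape == (5, 4)
for i in range(4):
    e = np.zeros(4)
    e[i] = 1.0                     # coefficient vector selecting L_i
    assert np.allclose(V[:, i], leg.legval(x, e))
```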
If `c` is a 1-D array of coefficients of length `n + 1` and `V` is the array `V = legvander(x, n)`, then `np.dot(V, c)` and `legval(x, c)` are the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of Legendre series of the same degree and sample points.

Parameters: **x** array_like Array of points. The dtype is converted to float64 or complex128 depending on whether any of the elements are complex. If `x` is scalar it is converted to a 1-D array. **deg** int Degree of the resulting matrix.

Returns: **vander** ndarray The pseudo-Vandermonde matrix. The shape of the returned matrix is `x.shape + (deg + 1,)`, where the last index is the degree of the corresponding Legendre polynomial. The dtype will be the same as the converted `x`.

# numpy.polynomial.legendre.legvander2d

polynomial.legendre.legvander2d(_x_ , _y_ , _deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L1151-L1195)

Pseudo-Vandermonde matrix of given degrees.

Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y)`. The pseudo-Vandermonde matrix is defined by \\[V[..., (deg[1] + 1)*i + j] = L_i(x) * L_j(y),\\] where `0 <= i <= deg[0]` and `0 <= j <= deg[1]`. The leading indices of `V` index the points `(x, y)` and the last index encodes the degrees of the Legendre polynomials.

If `V = legvander2d(x, y, [xdeg, ydeg])`, then the columns of `V` correspond to the elements of a 2-D coefficient array `c` of shape (xdeg + 1, ydeg + 1) in the order \\[c_{00}, c_{01}, c_{02} ... , c_{10}, c_{11}, c_{12} ...\\] and `np.dot(V, c.flat)` and `legval2d(x, y, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 2-D Legendre series of the same degrees and sample points.

Parameters: **x, y** array_like Arrays of point coordinates, all of the same shape.
The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. **deg** list of ints List of maximum degrees of the form [x_deg, y_deg].

Returns: **vander2d** ndarray The shape of the returned matrix is `x.shape + (order,)`, where \\(order = (deg[0]+1)*(deg[1]+1)\\). The dtype will be the same as the converted `x` and `y`.

See also [`legvander`](numpy.polynomial.legendre.legvander#numpy.polynomial.legendre.legvander "numpy.polynomial.legendre.legvander"), [`legvander3d`](numpy.polynomial.legendre.legvander3d#numpy.polynomial.legendre.legvander3d "numpy.polynomial.legendre.legvander3d"), [`legval2d`](numpy.polynomial.legendre.legval2d#numpy.polynomial.legendre.legval2d "numpy.polynomial.legendre.legval2d"), [`legval3d`](numpy.polynomial.legendre.legval3d#numpy.polynomial.legendre.legval3d "numpy.polynomial.legendre.legval3d")

# numpy.polynomial.legendre.legvander3d

polynomial.legendre.legvander3d(_x_ , _y_ , _z_ , _deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L1198-L1243)

Pseudo-Vandermonde matrix of given degrees.

Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y, z)`. If `l`, `m`, `n` are the given degrees in `x`, `y`, `z`, then the pseudo-Vandermonde matrix is defined by \\[V[..., (m+1)(n+1)i + (n+1)j + k] = L_i(x)*L_j(y)*L_k(z),\\] where `0 <= i <= l`, `0 <= j <= m`, and `0 <= k <= n`. The leading indices of `V` index the points `(x, y, z)` and the last index encodes the degrees of the Legendre polynomials.

If `V = legvander3d(x, y, z, [xdeg, ydeg, zdeg])`, then the columns of `V` correspond to the elements of a 3-D coefficient array `c` of shape (xdeg + 1, ydeg + 1, zdeg + 1) in the order \\[c_{000}, c_{001}, c_{002},... , c_{010}, c_{011}, c_{012},...\\] and `np.dot(V, c.flat)` and `legval3d(x, y, z, c)` will be the same up to roundoff.
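The column ordering of the 2-D pseudo-Vandermonde matrix described above implies that `V @ c.ravel()` matches `legval2d(x, y, c)`; a sketch, with shapes chosen arbitrarily:

```python
import numpy as np
from numpy.polynomial import legendre as leg

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 6)
y = rng.uniform(-1.0, 1.0, 6)
c = rng.random((3, 4))                     # xdeg = 2, ydeg = 3
V = leg.legvander2d(x, y, [2, 3])
assert V.shape == (6, 3 * 4)               # x.shape + ((xdeg+1)*(ydeg+1),)
# row-major flattening of c matches the column order of V
assert np.allclose(V @ c.ravel(), leg.legval2d(x, y, c))
```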
This equivalence is useful both for least squares fitting and for the evaluation of a large number of 3-D Legendre series of the same degrees and sample points.

Parameters: **x, y, z** array_like Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. **deg** list of ints List of maximum degrees of the form [x_deg, y_deg, z_deg].

Returns: **vander3d** ndarray The shape of the returned matrix is `x.shape + (order,)`, where \\(order = (deg[0]+1)*(deg[1]+1)*(deg[2]+1)\\). The dtype will be the same as the converted `x`, `y`, and `z`.

See also [`legvander`](numpy.polynomial.legendre.legvander#numpy.polynomial.legendre.legvander "numpy.polynomial.legendre.legvander"), [`legvander2d`](numpy.polynomial.legendre.legvander2d#numpy.polynomial.legendre.legvander2d "numpy.polynomial.legendre.legvander2d"), [`legval2d`](numpy.polynomial.legendre.legval2d#numpy.polynomial.legendre.legval2d "numpy.polynomial.legendre.legval2d"), [`legval3d`](numpy.polynomial.legendre.legval3d#numpy.polynomial.legendre.legval3d "numpy.polynomial.legendre.legval3d")

# numpy.polynomial.legendre.legweight

polynomial.legendre.legweight(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L1537-L1556)

Weight function of the Legendre polynomials.

The weight function is \\(1\\) and the interval of integration is \\([-1, 1]\\). The Legendre polynomials are orthogonal, but not normalized, with respect to this weight function.

Parameters: **x** array_like Values at which the weight function will be computed.

Returns: **w** ndarray The weight function at `x`.

# numpy.polynomial.legendre.legx

polynomial.legendre.legx _= array([0, 1])_ See [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray").
# numpy.polynomial.legendre.legzero

polynomial.legendre.legzero _= array([0])_ See [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray").

# numpy.polynomial.legendre.poly2leg

polynomial.legendre.poly2leg(_pol_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/legendre.py#L100-L146)

Convert a polynomial to a Legendre series.

Convert an array representing the coefficients of a polynomial (relative to the “standard” basis) ordered from lowest degree to highest, to an array of the coefficients of the equivalent Legendre series, ordered from lowest to highest degree.

Parameters: **pol** array_like 1-D array containing the polynomial coefficients.

Returns: **c** ndarray 1-D array containing the coefficients of the equivalent Legendre series.

See also [`leg2poly`](numpy.polynomial.legendre.leg2poly#numpy.polynomial.legendre.leg2poly "numpy.polynomial.legendre.leg2poly")

#### Notes

The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance.

#### Examples

>>> import numpy as np
>>> from numpy import polynomial as P
>>> p = P.Polynomial(np.arange(4))
>>> p
Polynomial([0., 1., 2., 3.], domain=[-1., 1.], window=[-1., 1.], ...
>>> c = P.Legendre(P.legendre.poly2leg(p.coef))
>>> c
Legendre([ 1. , 3.25, 1.
, 0.75], domain=[-1, 1], window=[-1, 1]) # may vary

# numpy.polynomial.polynomial.polyadd

polynomial.polynomial.polyadd(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L216-L249)

Add one polynomial to another.

Returns the sum of two polynomials `c1` + `c2`. The arguments are sequences of coefficients from lowest order term to highest, i.e., [1,2,3] represents the polynomial `1 + 2*x + 3*x**2`.

Parameters: **c1, c2** array_like 1-D arrays of polynomial coefficients ordered from low to high.

Returns: **out** ndarray The coefficient array representing their sum.

See also [`polysub`](numpy.polysub#numpy.polysub "numpy.polysub"), [`polymulx`](numpy.polynomial.polynomial.polymulx#numpy.polynomial.polynomial.polymulx "numpy.polynomial.polynomial.polymulx"), [`polymul`](numpy.polymul#numpy.polymul "numpy.polymul"), [`polydiv`](numpy.polydiv#numpy.polydiv "numpy.polydiv"), [`polypow`](numpy.polynomial.polynomial.polypow#numpy.polynomial.polynomial.polypow "numpy.polynomial.polynomial.polypow")

#### Examples

>>> from numpy.polynomial import polynomial as P
>>> c1 = (1, 2, 3)
>>> c2 = (3, 2, 1)
>>> sum = P.polyadd(c1,c2); sum
array([4., 4., 4.])
>>> P.polyval(2, sum) # 4 + 4(2) + 4(2**2)
28.0

# numpy.polynomial.polynomial.polycompanion

polynomial.polynomial.polycompanion(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L1439-L1479)

Return the companion matrix of c.

The companion matrix for power series cannot be made symmetric by scaling the basis, so this function differs from those for the orthogonal polynomials.

Parameters: **c** array_like 1-D array of polynomial coefficients ordered from low to high degree.

Returns: **mat** ndarray Companion matrix of dimensions (deg, deg).

#### Examples

>>> from numpy.polynomial import polynomial as P
>>> c = (1, 2, 3)
>>> P.polycompanion(c)
array([[ 0. , -0.33333333], [ 1.
, -0.66666667]])

# numpy.polynomial.polynomial.polyder

polynomial.polynomial.polyder(_c_ , _m =1_, _scl =1_, _axis =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L466-L543)

Differentiate a polynomial.

Returns the polynomial coefficients `c` differentiated `m` times along `axis`. At each iteration the result is multiplied by `scl` (the scaling factor is for use in a linear change of variable). The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the polynomial `1 + 2*x + 3*x**2` while [[1,2],[1,2]] represents `1 + 1*x + 2*y + 2*x*y` if axis=0 is `x` and axis=1 is `y`.

Parameters: **c** array_like Array of polynomial coefficients. If c is multidimensional the different axes correspond to different variables with the degree in each axis given by the corresponding index. **m** int, optional Number of derivatives taken, must be non-negative. (Default: 1) **scl** scalar, optional Each differentiation is multiplied by `scl`. The end result is multiplication by `scl**m`. This is for use in a linear change of variable. (Default: 1) **axis** int, optional Axis over which the derivative is taken. (Default: 0).

Returns: **der** ndarray Polynomial coefficients of the derivative.

See also [`polyint`](numpy.polyint#numpy.polyint "numpy.polyint")

#### Examples

>>> from numpy.polynomial import polynomial as P
>>> c = (1, 2, 3, 4)
>>> P.polyder(c) # (d/dx)(c)
array([ 2., 6., 12.])
>>> P.polyder(c, 3) # (d**3/dx**3)(c)
array([24.])
>>> P.polyder(c, scl=-1) # (d/d(-x))(c)
array([ -2., -6., -12.])
>>> P.polyder(c, 2, -1) # (d**2/d(-x)**2)(c)
array([ 6., 24.])

# numpy.polynomial.polynomial.polydiv

polynomial.polynomial.polydiv(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L369-L424)

Divide one polynomial by another.

Returns the quotient-with-remainder of two polynomials `c1` / `c2`.
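The quotient and remainder returned by `polydiv` satisfy `c1 = quo*c2 + rem`, which can be checked with `polymul` and `polyadd` — a small sketch using the example coefficients from the docs:

```python
import numpy as np
from numpy.polynomial import polynomial as P

c1 = np.array([1.0, 2.0, 3.0])
c2 = np.array([3.0, 2.0, 1.0])
quo, rem = P.polydiv(c1, c2)
# reconstruct the dividend: quo*c2 + rem should give back c1
recon = P.polyadd(P.polymul(quo, c2), rem)
assert np.allclose(recon, c1)
```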
The arguments are sequences of coefficients, from lowest order term to highest, e.g., [1,2,3] represents `1 + 2*x + 3*x**2`. Parameters: **c1, c2** array_like 1-D arrays of polynomial coefficients ordered from low to high. Returns: **[quo, rem]** ndarrays Of coefficient series representing the quotient and remainder. See also [`polyadd`](numpy.polyadd#numpy.polyadd "numpy.polyadd"), [`polysub`](numpy.polysub#numpy.polysub "numpy.polysub"), [`polymulx`](numpy.polynomial.polynomial.polymulx#numpy.polynomial.polynomial.polymulx "numpy.polynomial.polynomial.polymulx"), [`polymul`](numpy.polymul#numpy.polymul "numpy.polymul"), [`polypow`](numpy.polynomial.polynomial.polypow#numpy.polynomial.polynomial.polypow "numpy.polynomial.polynomial.polypow") #### Examples >>> from numpy.polynomial import polynomial as P >>> c1 = (1, 2, 3) >>> c2 = (3, 2, 1) >>> P.polydiv(c1, c2) (array([3.]), array([-8., -4.])) >>> P.polydiv(c2, c1) (array([ 0.33333333]), array([ 2.66666667, 1.33333333])) # may vary # numpy.polynomial.polynomial.polydomain polynomial.polynomial.polydomain _= array([-1., 1.])_ An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(…)`) for instantiating an array. For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array. Parameters: **(for the __new__ method; see Notes below)** **shape** tuple of ints Shape of created array. 
**dtype** data-type, optional Any object that can be interpreted as a numpy data type. **buffer** object exposing buffer interface, optional Used to fill the array with data. **offset** int, optional Offset of array data in buffer. **strides** tuple of ints, optional Strides of data in memory. **order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also [`array`](numpy.array#numpy.array "numpy.array") Construct an array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Create an array, each element of which is zero. [`empty`](numpy.empty#numpy.empty "numpy.empty") Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Create a data-type. [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray") An ndarray alias [generic](https://docs.python.org/3/glossary.html#term-generic-type "\(in Python v3.13\)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). #### Notes There are two modes of creating an array using `__new__`: 1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used. 2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted. No `__init__` method is needed because the array is fully initialized after the `__new__` method. #### Examples These examples illustrate the low-level [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray. First mode, `buffer` is None: >>> import numpy as np >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. 
skip first element array([2, 3]) Attributes: **T** ndarray Transpose of the array. **data** buffer The array’s elements, in memory. **dtype** dtype object Describes the format of the elements in the array. **flags** dict Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc. **flat** numpy.flatiter object Flattened version of the array as an iterator. The iterator allows assignments, e.g., `x.flat = 3` (See [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") for assignment examples; TODO). **imag** ndarray Imaginary part of the array. **real** ndarray Real part of the array. **size** int Number of elements in the array. **itemsize** int The memory use of each array element in bytes. **nbytes** int The total number of bytes required to store the array data, i.e., `itemsize * size`. **ndim** int The array’s number of dimensions. **shape** tuple of ints Shape of the array. **strides** tuple of ints The step-size required to move from one element to the next in memory. For example, a contiguous `(3, 4)` array of type `int16` in C-order has strides `(8, 2)`. This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (`2 * 4`). **ctypes** ctypes object Class containing properties of the array needed for interaction with ctypes. **base** ndarray If the array is a view into another array, that array is its `base` (unless that array is also a view). The `base` array is where the array data is actually stored. # numpy.polynomial.polynomial.polyfit polynomial.polynomial.polyfit(_x_ , _y_ , _deg_ , _rcond =None_, _full =False_, _w =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L1285-L1436) Least-squares fit of a polynomial to data. Return the coefficients of a polynomial of degree `deg` that is the least squares fit to the data values `y` given at points `x`. 
If `y` is 1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple fits are done, one for each column of `y`, and the resulting coefficients are stored in the corresponding columns of a 2-D return. The fitted polynomial(s) are in the form \\[p(x) = c_0 + c_1 * x + ... + c_n * x^n,\\] where `n` is `deg`. Parameters: **x** array_like, shape (`M`,) x-coordinates of the `M` sample (data) points `(x[i], y[i])`. **y** array_like, shape (`M`,) or (`M`, `K`) y-coordinates of the sample points. Several sets of sample points sharing the same x-coordinates can be (independently) fit with one call to [`polyfit`](numpy.polyfit#numpy.polyfit "numpy.polyfit") by passing in for `y` a 2-D array that contains one data set per column. **deg** int or 1-D array_like Degree(s) of the fitting polynomials. If `deg` is a single integer all terms up to and including the `deg`’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead. **rcond** float, optional Relative condition number of the fit. Singular values smaller than `rcond`, relative to the largest singular value, will be ignored. The default value is `len(x)*eps`, where `eps` is the relative precision of the platform’s float type, about 2e-16 in most cases. **full** bool, optional Switch determining the nature of the return value. When `False` (the default) just the coefficients are returned; when `True`, diagnostic information from the singular value decomposition (used to solve the fit’s matrix equation) is also returned. **w** array_like, shape (`M`,), optional Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None. 
Returns: **coef** ndarray, shape (`deg` \+ 1,) or (`deg` \+ 1, `K`) Polynomial coefficients ordered from low to high. If `y` was 2-D, the coefficients in column `k` of `coef` represent the polynomial fit to the data in `y`’s `k`-th column. **[residuals, rank, singular_values, rcond]** list These values are only returned if `full == True` * residuals – sum of squared residuals of the least squares fit * rank – the numerical rank of the scaled Vandermonde matrix * singular_values – singular values of the scaled Vandermonde matrix * rcond – value of `rcond`. For more details, see [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq"). Raises: RankWarning Raised if the matrix in the least-squares fit is rank deficient. The warning is only raised if `full == False`. The warnings can be turned off by: >>> import warnings >>> warnings.simplefilter('ignore', np.exceptions.RankWarning) See also [`numpy.polynomial.chebyshev.chebfit`](numpy.polynomial.chebyshev.chebfit#numpy.polynomial.chebyshev.chebfit "numpy.polynomial.chebyshev.chebfit") [`numpy.polynomial.legendre.legfit`](numpy.polynomial.legendre.legfit#numpy.polynomial.legendre.legfit "numpy.polynomial.legendre.legfit") [`numpy.polynomial.laguerre.lagfit`](numpy.polynomial.laguerre.lagfit#numpy.polynomial.laguerre.lagfit "numpy.polynomial.laguerre.lagfit") [`numpy.polynomial.hermite.hermfit`](numpy.polynomial.hermite.hermfit#numpy.polynomial.hermite.hermfit "numpy.polynomial.hermite.hermfit") [`numpy.polynomial.hermite_e.hermefit`](numpy.polynomial.hermite_e.hermefit#numpy.polynomial.hermite_e.hermefit "numpy.polynomial.hermite_e.hermefit") [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval") Evaluates a polynomial. [`polyvander`](numpy.polynomial.polynomial.polyvander#numpy.polynomial.polynomial.polyvander "numpy.polynomial.polynomial.polyvander") Vandermonde matrix for powers. 
[`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq") Computes a least-squares fit from the matrix. [`scipy.interpolate.UnivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html#scipy.interpolate.UnivariateSpline "\(in SciPy v1.14.1\)") Computes spline fits. #### Notes The solution is the coefficients of the polynomial `p` that minimizes the sum of the weighted squared errors \\[E = \sum_j w_j^2 * |y_j - p(x_j)|^2,\\] where the \\(w_j\\) are the weights. This problem is solved by setting up the (typically) over-determined matrix equation: \\[V(x) * c = w * y,\\] where `V` is the weighted pseudo Vandermonde matrix of `x`, `c` are the coefficients to be solved for, `w` are the weights, and `y` are the observed values. This equation is then solved using the singular value decomposition of `V`. If some of the singular values of `V` are so small that they are neglected (and `full` == `False`), a [`RankWarning`](numpy.exceptions.rankwarning#numpy.exceptions.RankWarning "numpy.exceptions.RankWarning") will be raised. This means that the coefficient values may be poorly determined. Fitting to a lower order polynomial will usually get rid of the warning (but may not be what you want, of course; if you have independent reason(s) for choosing the degree which isn’t working, you may have to: a) reconsider those reasons, and/or b) reconsider the quality of your data). The `rcond` parameter can also be set to a value smaller than its default, but the resulting fit may be spurious and have large contributions from roundoff error. Polynomial fits using double precision tend to “fail” at about (polynomial) degree 20. Fits using Chebyshev or Legendre series are generally better conditioned, but much can still depend on the distribution of the sample points and the smoothness of the data. If the quality of the fit is inadequate, splines may be a good alternative. 
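The conditioning notes above can be checked numerically. A small sketch (the sample function, point count, and degree below are chosen purely for illustration) fitting the same data with a power-basis series and a Chebyshev series:

```python
import numpy as np
from numpy.polynomial import polynomial as P
from numpy.polynomial import chebyshev as C

# Sample a smooth function on [-1, 1].
x = np.linspace(-1, 1, 101)
y = np.exp(x)

deg = 15  # comfortably below the ~20 where power-basis fits degrade
c_pow = P.polyfit(x, y, deg)
c_cheb = C.chebfit(x, y, deg)

# At this degree both bases reproduce the data essentially exactly;
# pushing deg much past 20 makes the power-basis fit ill conditioned
# while the Chebyshev fit stays well behaved.
err_pow = np.max(np.abs(P.polyval(x, c_pow) - y))
err_cheb = np.max(np.abs(C.chebval(x, c_cheb) - y))
```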
#### Examples >>> import numpy as np >>> from numpy.polynomial import polynomial as P >>> x = np.linspace(-1,1,51) # x "data": [-1, -0.96, ..., 0.96, 1] >>> rng = np.random.default_rng() >>> err = rng.normal(size=len(x)) >>> y = x**3 - x + err # x^3 - x + Gaussian noise >>> c, stats = P.polyfit(x,y,3,full=True) >>> c # c[0], c[2] should be approx. 0, c[1] approx. -1, c[3] approx. 1 array([ 0.23111996, -1.02785049, -0.2241444 , 1.08405657]) # may vary >>> stats # note the large SSR, explaining the rather poor results [array([48.312088]), # may vary 4, array([1.38446749, 1.32119158, 0.50443316, 0.28853036]), 1.1324274851176597e-14] Same thing without the added noise >>> y = x**3 - x >>> c, stats = P.polyfit(x,y,3,full=True) >>> c # c[0], c[2] should be "very close to 0", c[1] ~= -1, c[3] ~= 1 array([-6.73496154e-17, -1.00000000e+00, 0.00000000e+00, 1.00000000e+00]) >>> stats # note the minuscule SSR [array([8.79579319e-31]), np.int32(4), array([1.38446749, 1.32119158, 0.50443316, 0.28853036]), 1.1324274851176597e-14] # numpy.polynomial.polynomial.polyfromroots polynomial.polynomial.polyfromroots(_roots_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L152-L213) Generate a monic polynomial with given roots. Return the coefficients of the polynomial \\[p(x) = (x - r_0) * (x - r_1) * ... * (x - r_n),\\] where the \\(r_n\\) are the roots specified in `roots`. If a zero has multiplicity n, then it must appear in `roots` n times. For instance, if 2 is a root of multiplicity three and 3 is a root of multiplicity 2, then `roots` looks something like [2, 2, 2, 3, 3]. The roots can appear in any order. If the returned coefficients are `c`, then \\[p(x) = c_0 + c_1 * x + ... + x^n\\] The coefficient of the last term is 1 for monic polynomials in this form. Parameters: **roots** array_like Sequence containing the roots. 
Returns: **out** ndarray 1-D array of the polynomial’s coefficients. If all the roots are real, then `out` is also real, otherwise it is complex. (see Examples below). See also [`numpy.polynomial.chebyshev.chebfromroots`](numpy.polynomial.chebyshev.chebfromroots#numpy.polynomial.chebyshev.chebfromroots "numpy.polynomial.chebyshev.chebfromroots") [`numpy.polynomial.legendre.legfromroots`](numpy.polynomial.legendre.legfromroots#numpy.polynomial.legendre.legfromroots "numpy.polynomial.legendre.legfromroots") [`numpy.polynomial.laguerre.lagfromroots`](numpy.polynomial.laguerre.lagfromroots#numpy.polynomial.laguerre.lagfromroots "numpy.polynomial.laguerre.lagfromroots") [`numpy.polynomial.hermite.hermfromroots`](numpy.polynomial.hermite.hermfromroots#numpy.polynomial.hermite.hermfromroots "numpy.polynomial.hermite.hermfromroots") [`numpy.polynomial.hermite_e.hermefromroots`](numpy.polynomial.hermite_e.hermefromroots#numpy.polynomial.hermite_e.hermefromroots "numpy.polynomial.hermite_e.hermefromroots") #### Notes The coefficients are determined by multiplying together linear factors of the form `(x - r_i)`, i.e. \\[p(x) = (x - r_0) (x - r_1) ... (x - r_n)\\] where `n == len(roots) - 1`; note that this implies that `1` is always returned for \\(a_n\\). #### Examples >>> from numpy.polynomial import polynomial as P >>> P.polyfromroots((-1,0,1)) # x(x - 1)(x + 1) = x^3 - x array([ 0., -1., 0., 1.]) >>> j = complex(0,1) >>> P.polyfromroots((-j,j)) # complex returned, though values are real array([1.+0.j, 0.+0.j, 1.+0.j]) # numpy.polynomial.polynomial.polygrid2d polynomial.polynomial.polygrid2d(_x_ , _y_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L897-L950) Evaluate a 2-D polynomial on the Cartesian product of x and y. This function returns the values: \\[p(a,b) = \sum_{i,j} c_{i,j} * a^i * b^j\\] where the points `(a, b)` consist of all pairs formed by taking `a` from `x` and `b` from `y`. 
The resulting points form a grid with `x` in the first dimension and `y` in the second. The parameters `x` and `y` are converted to arrays only if they are tuples or lists, otherwise they are treated as scalars. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than two dimensions, ones are implicitly appended to its shape to make it 2-D. The shape of the result will be `c.shape[2:] + x.shape + y.shape`. Parameters: **x, y** array_like, compatible objects The two dimensional series is evaluated at the points in the Cartesian product of `x` and `y`. If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficients for terms of degree i,j are contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns: **values** ndarray, compatible object The values of the two dimensional polynomial at points in the Cartesian product of `x` and `y`. 
See also [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval"), [`polyval2d`](numpy.polynomial.polynomial.polyval2d#numpy.polynomial.polynomial.polyval2d "numpy.polynomial.polynomial.polyval2d"), [`polyval3d`](numpy.polynomial.polynomial.polyval3d#numpy.polynomial.polynomial.polyval3d "numpy.polynomial.polynomial.polyval3d"), [`polygrid3d`](numpy.polynomial.polynomial.polygrid3d#numpy.polynomial.polynomial.polygrid3d "numpy.polynomial.polynomial.polygrid3d") #### Examples >>> from numpy.polynomial import polynomial as P >>> c = ((1, 2, 3), (4, 5, 6)) >>> P.polygrid2d([0, 1], [0, 1], c) array([[ 1., 6.], [ 5., 21.]]) # numpy.polynomial.polynomial.polygrid3d polynomial.polynomial.polygrid3d(_x_ , _y_ , _z_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L1006-L1062) Evaluate a 3-D polynomial on the Cartesian product of x, y and z. This function returns the values: \\[p(a,b,c) = \sum_{i,j,k} c_{i,j,k} * a^i * b^j * c^k\\] where the points `(a, b, c)` consist of all triples formed by taking `a` from `x`, `b` from `y`, and `c` from `z`. The resulting points form a grid with `x` in the first dimension, `y` in the second, and `z` in the third. The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists, otherwise they are treated as scalars. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than three dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be `c.shape[3:] + x.shape + y.shape + z.shape`. Parameters: **x, y, z** array_like, compatible objects The three dimensional series is evaluated at the points in the Cartesian product of `x`, `y`, and `z`. If `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. 
**c** array_like Array of coefficients ordered so that the coefficients for terms of degree i,j,k are contained in `c[i,j,k]`. If `c` has dimension greater than three the remaining indices enumerate multiple sets of coefficients. Returns: **values** ndarray, compatible object The values of the three dimensional polynomial at points in the Cartesian product of `x`, `y`, and `z`. See also [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval"), [`polyval2d`](numpy.polynomial.polynomial.polyval2d#numpy.polynomial.polynomial.polyval2d "numpy.polynomial.polynomial.polyval2d"), [`polygrid2d`](numpy.polynomial.polynomial.polygrid2d#numpy.polynomial.polynomial.polygrid2d "numpy.polynomial.polynomial.polygrid2d"), [`polyval3d`](numpy.polynomial.polynomial.polyval3d#numpy.polynomial.polynomial.polyval3d "numpy.polynomial.polynomial.polyval3d") #### Examples >>> from numpy.polynomial import polynomial as P >>> c = ((1, 2, 3), (4, 5, 6), (7, 8, 9)) >>> P.polygrid3d([0, 1], [0, 1], [0, 1], c) array([[ 1., 13.], [ 6., 51.]]) # numpy.polynomial.polynomial.polyint polynomial.polynomial.polyint(_c_ , _m =1_, _k =[]_, _lbnd =0_, _scl =1_, _axis =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L546-L660) Integrate a polynomial. Returns the polynomial coefficients `c` integrated `m` times from `lbnd` along `axis`. At each iteration the resulting series is **multiplied** by `scl` and an integration constant, `k`, is added. The scaling factor is for use in a linear change of variable. (“Buyer beware”: note that, depending on what one is doing, one may want `scl` to be the reciprocal of what one might expect; for more information, see the Notes section below.) The argument `c` is an array of coefficients, from low to high degree along each axis, e.g., [1,2,3] represents the polynomial `1 + 2*x + 3*x**2` while [[1,2],[1,2]] represents `1 + 1*x + 2*y + 2*x*y` if axis=0 is `x` and axis=1 is `y`. 
Parameters: **c** array_like 1-D array of polynomial coefficients, ordered from low to high. **m** int, optional Order of integration, must be positive. (Default: 1) **k**{[], list, scalar}, optional Integration constant(s). The value of the first integral at zero is the first value in the list, the value of the second integral at zero is the second value, etc. If `k == []` (the default), all constants are set to zero. If `m == 1`, a single scalar can be given instead of a list. **lbnd** scalar, optional The lower bound of the integral. (Default: 0) **scl** scalar, optional Following each integration the result is _multiplied_ by `scl` before the integration constant is added. (Default: 1) **axis** int, optional Axis over which the integral is taken. (Default: 0). Returns: **S** ndarray Coefficient array of the integral. Raises: ValueError If `m < 1`, `len(k) > m`, `np.ndim(lbnd) != 0`, or `np.ndim(scl) != 0`. See also [`polyder`](numpy.polyder#numpy.polyder "numpy.polyder") #### Notes Note that the result of each integration is _multiplied_ by `scl`. Why is this important to note? Say one is making a linear change of variable \\(u = ax + b\\) in an integral relative to `x`. Then \\(dx = du/a\\), so one will need to set `scl` equal to \\(1/a\\) \- perhaps not what one would have first thought. #### Examples >>> from numpy.polynomial import polynomial as P >>> c = (1, 2, 3) >>> P.polyint(c) # should return array([0, 1, 1, 1]) array([0., 1., 1., 1.]) >>> P.polyint(c, 3) # should return array([0, 0, 0, 1/6, 1/12, 1/20]) array([ 0. , 0. , 0. 
, 0.16666667, 0.08333333, # may vary 0.05 ]) >>> P.polyint(c, k=3) # should return array([3, 1, 1, 1]) array([3., 1., 1., 1.]) >>> P.polyint(c,lbnd=-2) # should return array([6, 1, 1, 1]) array([6., 1., 1., 1.]) >>> P.polyint(c,scl=-2) # should return array([0, -2, -2, -2]) array([ 0., -2., -2., -2.]) # numpy.polynomial.polynomial.polyline polynomial.polynomial.polyline(_off_ , _scl_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L114-L149) Returns an array representing a linear polynomial. Parameters: **off, scl** scalars The “y-intercept” and “slope” of the line, respectively. Returns: **y** ndarray This module’s representation of the linear polynomial `off + scl*x`. See also [`numpy.polynomial.chebyshev.chebline`](numpy.polynomial.chebyshev.chebline#numpy.polynomial.chebyshev.chebline "numpy.polynomial.chebyshev.chebline") [`numpy.polynomial.legendre.legline`](numpy.polynomial.legendre.legline#numpy.polynomial.legendre.legline "numpy.polynomial.legendre.legline") [`numpy.polynomial.laguerre.lagline`](numpy.polynomial.laguerre.lagline#numpy.polynomial.laguerre.lagline "numpy.polynomial.laguerre.lagline") [`numpy.polynomial.hermite.hermline`](numpy.polynomial.hermite.hermline#numpy.polynomial.hermite.hermline "numpy.polynomial.hermite.hermline") [`numpy.polynomial.hermite_e.hermeline`](numpy.polynomial.hermite_e.hermeline#numpy.polynomial.hermite_e.hermeline "numpy.polynomial.hermite_e.hermeline") #### Examples >>> from numpy.polynomial import polynomial as P >>> P.polyline(1, -1) array([ 1, -1]) >>> P.polyval(1, P.polyline(1, -1)) # should be 0 0.0 # numpy.polynomial.polynomial.polymul polynomial.polynomial.polymul(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L331-L366) Multiply one polynomial by another. Returns the product of two polynomials `c1` * `c2`. 
The arguments are sequences of coefficients, from lowest order term to highest, e.g., [1,2,3] represents the polynomial `1 + 2*x + 3*x**2`. Parameters: **c1, c2** array_like 1-D arrays of coefficients representing a polynomial, relative to the “standard” basis, and ordered from lowest order term to highest. Returns: **out** ndarray Of the coefficients of their product. See also [`polyadd`](numpy.polyadd#numpy.polyadd "numpy.polyadd"), [`polysub`](numpy.polysub#numpy.polysub "numpy.polysub"), [`polymulx`](numpy.polynomial.polynomial.polymulx#numpy.polynomial.polynomial.polymulx "numpy.polynomial.polynomial.polymulx"), [`polydiv`](numpy.polydiv#numpy.polydiv "numpy.polydiv"), [`polypow`](numpy.polynomial.polynomial.polypow#numpy.polynomial.polynomial.polypow "numpy.polynomial.polynomial.polypow") #### Examples >>> from numpy.polynomial import polynomial as P >>> c1 = (1, 2, 3) >>> c2 = (3, 2, 1) >>> P.polymul(c1, c2) array([ 3., 8., 14., 8., 3.]) # numpy.polynomial.polynomial.polymulx polynomial.polynomial.polymulx(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L289-L328) Multiply a polynomial by x. Multiply the polynomial `c` by x, where x is the independent variable. Parameters: **c** array_like 1-D array of polynomial coefficients ordered from low to high. Returns: **out** ndarray Array representing the result of the multiplication. 
See also [`polyadd`](numpy.polyadd#numpy.polyadd "numpy.polyadd"), [`polysub`](numpy.polysub#numpy.polysub "numpy.polysub"), [`polymul`](numpy.polymul#numpy.polymul "numpy.polymul"), [`polydiv`](numpy.polydiv#numpy.polydiv "numpy.polydiv"), [`polypow`](numpy.polynomial.polynomial.polypow#numpy.polynomial.polynomial.polypow "numpy.polynomial.polynomial.polypow") #### Examples >>> from numpy.polynomial import polynomial as P >>> c = (1, 2, 3) >>> P.polymulx(c) array([0., 1., 2., 3.]) # numpy.polynomial.polynomial.Polynomial.__call__ method polynomial.polynomial.Polynomial.__call__(_arg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L513-L515) Call self as a function. # numpy.polynomial.polynomial.Polynomial.basis method _classmethod_ polynomial.polynomial.Polynomial.basis(_deg_ , _domain =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1120-L1157) Series basis polynomial of degree `deg`. Returns the series representing the basis polynomial of degree `deg`. Parameters: **deg** int Degree of the basis polynomial for the series. Must be >= 0. **domain**{None, array_like}, optional If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None. **window**{None, array_like}, optional If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None. **symbol** str, optional Symbol representing the independent variable. Default is ‘x’. Returns: **new_series** series A series with the coefficient of the `deg` term set to one and all others zero. 
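The `basis` docstring above carries no example; a minimal sketch with the default domain and window (both `[-1, 1]`, so no variable mapping occurs):

```python
from numpy.polynomial import Polynomial

# basis(2) is the monomial x**2 when domain and window are left at
# their defaults; its coefficient array is [0., 0., 1.].
b = Polynomial.basis(2)

print(b.coef)
print(b(3.0))   # evaluates x**2 at x = 3
```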
# numpy.polynomial.polynomial.Polynomial.cast method _classmethod_ polynomial.polynomial.Polynomial.cast(_series_ , _domain =None_, _window =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1159-L1197) Convert series to series of this class. The `series` is expected to be an instance of some polynomial series of one of the types supported by the numpy.polynomial module, but could be some other class that supports the convert method. Parameters: **series** series The series instance to be converted. **domain**{None, array_like}, optional If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None. **window**{None, array_like}, optional If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None. Returns: **new_series** series A series of the same kind as the calling class and equal to `series` when evaluated. See also [`convert`](numpy.polynomial.polynomial.polynomial.convert#numpy.polynomial.polynomial.Polynomial.convert "numpy.polynomial.polynomial.Polynomial.convert") similar instance method # numpy.polynomial.polynomial.Polynomial.convert method polynomial.polynomial.Polynomial.convert(_domain =None_, _kind =None_, _window =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L785-L820) Convert series to a different kind and/or domain and/or window. Parameters: **domain** array_like, optional The domain of the converted series. If the value is None, the default domain of `kind` is used. **kind** class, optional The polynomial series type class to which the current instance should be converted. If kind is None, then the class of the current instance is used. **window** array_like, optional The window of the converted series. 
If the value is None, the default window of `kind` is used. Returns: **new_series** series The returned class can be of different type than the current instance and/or have a different domain and/or different window. #### Notes Conversion between domains and class types can result in numerically ill defined series. # numpy.polynomial.polynomial.Polynomial.copy method polynomial.polynomial.Polynomial.copy()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L666-L675) Return a copy. Returns: **new_series** series Copy of self. # numpy.polynomial.polynomial.Polynomial.cutdeg method polynomial.polynomial.Polynomial.cutdeg(_deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L710-L731) Truncate series to the given degree. Reduce the degree of the series to `deg` by discarding the high order terms. If `deg` is greater than the current degree a copy of the current series is returned. This can be useful in least squares where the coefficients of the high degree terms may be very small. Parameters: **deg** non-negative int The series is reduced to degree `deg` by discarding the high order terms. The value of `deg` must be a non-negative integer. Returns: **new_series** series New instance of series with reduced degree. # numpy.polynomial.polynomial.Polynomial.degree method polynomial.polynomial.Polynomial.degree()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L677-L708) The degree of the series. Returns: **degree** int Degree of the series, one less than the number of coefficients. #### Examples Create a polynomial object for `1 + 7*x + 4*x**2`: >>> poly = np.polynomial.Polynomial([1, 7, 4]) >>> print(poly) 1.0 + 7.0·x + 4.0·x² >>> poly.degree() 2 Note that this method does not check for non-zero coefficients. 
You must trim the polynomial to remove any trailing zeroes: >>> poly = np.polynomial.Polynomial([1, 7, 0]) >>> print(poly) 1.0 + 7.0·x + 0.0·x² >>> poly.degree() 2 >>> poly.trim().degree() 1 # numpy.polynomial.polynomial.Polynomial.deriv method polynomial.polynomial.Polynomial.deriv(_m =1_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L884-L904) Differentiate. Return a series instance that is the derivative of the current series. Parameters: **m** non-negative int Find the derivative of order `m`. Returns: **new_series** series A new series representing the derivative. The domain is the same as the domain of the differentiated series. # numpy.polynomial.polynomial.Polynomial.domain attribute polynomial.polynomial.Polynomial.domain _= array([-1., 1.])_ # numpy.polynomial.polynomial.Polynomial.fit method _classmethod_ polynomial.polynomial.Polynomial.fit(_x_ , _y_ , _deg_ , _domain =None_, _rcond =None_, _full =False_, _w =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L951-L1040) Least squares fit to data. Return a series instance that is the least squares fit to the data `y` sampled at `x`. The domain of the returned instance can be specified and this will often result in a superior fit with less chance of ill conditioning. Parameters: **x** array_like, shape (M,) x-coordinates of the M sample points `(x[i], y[i])`. **y** array_like, shape (M,) y-coordinates of the M sample points `(x[i], y[i])`. **deg** int or 1-D array_like Degree(s) of the fitting polynomials. If `deg` is a single integer all terms up to and including the `deg`’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead. **domain**{None, [beg, end], []}, optional Domain to use for the returned series. If `None`, then a minimal domain that covers the points `x` is chosen. 
If `[]` the class domain is used. The default value was the class domain in NumPy 1.4 and `None` in later versions. The `[]` option was added in numpy 1.5.0. **rcond** float, optional Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is `len(x)*eps`, where eps is the relative precision of the float type, about 2e-16 in most cases. **full** bool, optional Switch determining nature of return value. When it is False (the default) just the coefficients are returned, when True diagnostic information from the singular value decomposition is also returned. **w** array_like, shape (M,), optional Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None. **window**{[beg, end]}, optional Window to use for the returned series. The default value is the default class window. **symbol** str, optional Symbol representing the independent variable. Default is ‘x’. Returns: **new_series** series A series that represents the least squares fit to the data and has the domain and window specified in the call. If the coefficients for the unscaled and unshifted basis polynomials are of interest, do `new_series.convert().coef`. **[resid, rank, sv, rcond]** list These values are only returned if `full == True` * resid – sum of squared residuals of the least squares fit * rank – the numerical rank of the scaled Vandermonde matrix * sv – singular values of the scaled Vandermonde matrix * rcond – value of `rcond`. For more details, see [`linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq").
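As a supplementary sketch of typical `fit` usage (the sample data here is illustrative, not from the NumPy docs): a least squares fit to noise-free samples recovers the generating coefficients.

```python
import numpy as np

# Sample y = 1 + 2*x + 3*x**2 and fit a degree-2 Polynomial to it.
x = np.linspace(-1, 1, 50)
y = 1 + 2 * x + 3 * x ** 2
p = np.polynomial.Polynomial.fit(x, y, deg=2)

# The fitted coefficients live in the fit window; convert() maps them
# back to the unscaled basis 1, x, x**2, as described above.
print(np.round(p.convert().coef, 6))
```

With noisy data the same call applies; pass `w` to weight individual samples or `full=True` to inspect the fit diagnostics.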
# numpy.polynomial.polynomial.Polynomial.fromroots method _classmethod_ polynomial.polynomial.Polynomial.fromroots(_roots_ , _domain =[]_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1042-L1083) Return series instance that has the specified roots. Returns a series representing the product `(x - r[0])*(x - r[1])*...*(x - r[n-1])`, where `r` is a list of roots. Parameters: **roots** array_like List of roots. **domain**{[], None, array_like}, optional Domain for the resulting series. If None the domain is the interval from the smallest root to the largest. If [] the domain is the class domain. The default is []. **window**{None, array_like}, optional Window for the returned series. If None the class window is used. The default is None. **symbol** str, optional Symbol representing the independent variable. Default is ‘x’. Returns: **new_series** series Series with the specified roots. # numpy.polynomial.polynomial.Polynomial.has_samecoef method polynomial.polynomial.Polynomial.has_samecoef(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L188-L207) Check if coefficients match. Parameters: **other** class instance The other class must have the `coef` attribute. Returns: **bool** boolean True if the coefficients are the same, False otherwise. # numpy.polynomial.polynomial.Polynomial.has_samedomain method polynomial.polynomial.Polynomial.has_samedomain(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L209-L223) Check if domains match. Parameters: **other** class instance The other class must have the `domain` attribute. Returns: **bool** boolean True if the domains are the same, False otherwise. 
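A minimal sketch of the `has_same*` comparison methods (the particular coefficients and domains are illustrative):

```python
from numpy.polynomial import Polynomial, Chebyshev

p = Polynomial([1, 2, 3])
q = Polynomial([1, 2, 3], domain=[0, 2])
c = Chebyshev([1, 2, 3])

print(p.has_samecoef(q))    # both store the coefficients [1, 2, 3]
print(p.has_samedomain(q))  # domains differ: [-1, 1] vs [0, 2]
print(p.has_samewindow(q))  # both use the default window
print(p.has_sametype(c))    # Polynomial vs Chebyshev
```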
# numpy.polynomial.polynomial.Polynomial.has_sametype method polynomial.polynomial.Polynomial.has_sametype(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L241-L255) Check if types match. Parameters: **other** object Class instance. Returns: **bool** boolean True if other is the same class as self. # numpy.polynomial.polynomial.Polynomial.has_samewindow method polynomial.polynomial.Polynomial.has_samewindow(_other_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L225-L239) Check if windows match. Parameters: **other** class instance The other class must have the `window` attribute. Returns: **bool** boolean True if the windows are the same, False otherwise. # numpy.polynomial.polynomial.Polynomial _class_ numpy.polynomial.polynomial.Polynomial(_coef_ , _domain =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L1549-L1617) A power series class. The Polynomial class provides the standard Python numerical methods ‘+’, ‘-’, ‘*’, ‘//’, ‘%’, ‘divmod’, ‘**’, and ‘()’ as well as the attributes and methods listed below. Parameters: **coef** array_like Polynomial coefficients in order of increasing degree, i.e., `(1, 2, 3)` gives `1 + 2*x + 3*x**2`. **domain**(2,) array_like, optional Domain to use. The interval `[domain[0], domain[1]]` is mapped to the interval `[window[0], window[1]]` by shifting and scaling. The default value is [-1., 1.]. **window**(2,) array_like, optional Window, see [`domain`](numpy.polynomial.polynomial.polynomial.domain#numpy.polynomial.polynomial.Polynomial.domain "numpy.polynomial.polynomial.Polynomial.domain") for its use. The default value is [-1., 1.]. **symbol** str, optional Symbol used to represent the independent variable in string representations of the polynomial expression, e.g. for printing. The symbol must be a valid Python identifier. Default value is ‘x’. New in version 1.24.
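A short illustrative sketch of constructing a `Polynomial` and using the arithmetic and call operators listed above:

```python
import numpy as np

# 1 + 2*x + 3*x**2, with the default domain and window [-1, 1].
p = np.polynomial.Polynomial([1, 2, 3])

print(p(2))              # evaluate: 1 + 2*2 + 3*4 = 17.0
print((p + p).coef)      # '+' returns a new Polynomial
print((p * p).degree())  # product of two quadratics has degree 4
```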
Attributes: **basis_name** **symbol** #### Methods [`__call__`](numpy.polynomial.polynomial.polynomial.__call__#numpy.polynomial.polynomial.Polynomial.__call__ "numpy.polynomial.polynomial.Polynomial.__call__")(arg) | Call self as a function. ---|--- [`basis`](numpy.polynomial.polynomial.polynomial.basis#numpy.polynomial.polynomial.Polynomial.basis "numpy.polynomial.polynomial.Polynomial.basis")(deg[, domain, window, symbol]) | Series basis polynomial of degree `deg`. [`cast`](numpy.polynomial.polynomial.polynomial.cast#numpy.polynomial.polynomial.Polynomial.cast "numpy.polynomial.polynomial.Polynomial.cast")(series[, domain, window]) | Convert series to series of this class. [`convert`](numpy.polynomial.polynomial.polynomial.convert#numpy.polynomial.polynomial.Polynomial.convert "numpy.polynomial.polynomial.Polynomial.convert")([domain, kind, window]) | Convert series to a different kind and/or domain and/or window. [`copy`](numpy.polynomial.polynomial.polynomial.copy#numpy.polynomial.polynomial.Polynomial.copy "numpy.polynomial.polynomial.Polynomial.copy")() | Return a copy. [`cutdeg`](numpy.polynomial.polynomial.polynomial.cutdeg#numpy.polynomial.polynomial.Polynomial.cutdeg "numpy.polynomial.polynomial.Polynomial.cutdeg")(deg) | Truncate series to the given degree. [`degree`](numpy.polynomial.polynomial.polynomial.degree#numpy.polynomial.polynomial.Polynomial.degree "numpy.polynomial.polynomial.Polynomial.degree")() | The degree of the series. [`deriv`](numpy.polynomial.polynomial.polynomial.deriv#numpy.polynomial.polynomial.Polynomial.deriv "numpy.polynomial.polynomial.Polynomial.deriv")([m]) | Differentiate. [`fit`](numpy.polynomial.polynomial.polynomial.fit#numpy.polynomial.polynomial.Polynomial.fit "numpy.polynomial.polynomial.Polynomial.fit")(x, y, deg[, domain, rcond, full, w, ...]) | Least squares fit to data. 
[`fromroots`](numpy.polynomial.polynomial.polynomial.fromroots#numpy.polynomial.polynomial.Polynomial.fromroots "numpy.polynomial.polynomial.Polynomial.fromroots")(roots[, domain, window, symbol]) | Return series instance that has the specified roots. [`has_samecoef`](numpy.polynomial.polynomial.polynomial.has_samecoef#numpy.polynomial.polynomial.Polynomial.has_samecoef "numpy.polynomial.polynomial.Polynomial.has_samecoef")(other) | Check if coefficients match. [`has_samedomain`](numpy.polynomial.polynomial.polynomial.has_samedomain#numpy.polynomial.polynomial.Polynomial.has_samedomain "numpy.polynomial.polynomial.Polynomial.has_samedomain")(other) | Check if domains match. [`has_sametype`](numpy.polynomial.polynomial.polynomial.has_sametype#numpy.polynomial.polynomial.Polynomial.has_sametype "numpy.polynomial.polynomial.Polynomial.has_sametype")(other) | Check if types match. [`has_samewindow`](numpy.polynomial.polynomial.polynomial.has_samewindow#numpy.polynomial.polynomial.Polynomial.has_samewindow "numpy.polynomial.polynomial.Polynomial.has_samewindow")(other) | Check if windows match. [`identity`](numpy.polynomial.polynomial.polynomial.identity#numpy.polynomial.polynomial.Polynomial.identity "numpy.polynomial.polynomial.Polynomial.identity")([domain, window, symbol]) | Identity function. [`integ`](numpy.polynomial.polynomial.polynomial.integ#numpy.polynomial.polynomial.Polynomial.integ "numpy.polynomial.polynomial.Polynomial.integ")([m, k, lbnd]) | Integrate. [`linspace`](numpy.polynomial.polynomial.polynomial.linspace#numpy.polynomial.polynomial.Polynomial.linspace "numpy.polynomial.polynomial.Polynomial.linspace")([n, domain]) | Return x, y values at equally spaced points in domain. [`mapparms`](numpy.polynomial.polynomial.polynomial.mapparms#numpy.polynomial.polynomial.Polynomial.mapparms "numpy.polynomial.polynomial.Polynomial.mapparms")() | Return the mapping parameters. 
[`roots`](numpy.polynomial.polynomial.polynomial.roots#numpy.polynomial.polynomial.Polynomial.roots "numpy.polynomial.polynomial.Polynomial.roots")() | Return the roots of the series polynomial. [`trim`](numpy.polynomial.polynomial.polynomial.trim#numpy.polynomial.polynomial.Polynomial.trim "numpy.polynomial.polynomial.Polynomial.trim")([tol]) | Remove trailing coefficients [`truncate`](numpy.polynomial.polynomial.polynomial.truncate#numpy.polynomial.polynomial.Polynomial.truncate "numpy.polynomial.polynomial.Polynomial.truncate")(size) | Truncate series to length `size`. # numpy.polynomial.polynomial.Polynomial.identity method _classmethod_ polynomial.polynomial.Polynomial.identity(_domain =None_, _window =None_, _symbol ='x'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L1085-L1118) Identity function. If `p` is the returned series, then `p(x) == x` for all values of x. Parameters: **domain**{None, array_like}, optional If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None. **window**{None, array_like}, optional If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None. **symbol** str, optional Symbol representing the independent variable. Default is ‘x’. Returns: **new_series** series Series representing the identity. # numpy.polynomial.polynomial.Polynomial.integ method polynomial.polynomial.Polynomial.integ(_m =1_, _k =[]_, _lbnd =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L851-L882) Integrate. Return a series instance that is the definite integral of the current series. Parameters: **m** non-negative int The number of integrations to perform. **k** array_like Integration constants.
The first constant is applied to the first integration, the second to the second, and so on. The list of values must be less than or equal to `m` in length and any missing values are set to zero. **lbnd** Scalar The lower bound of the definite integral. Returns: **new_series** series A new series representing the integral. The domain is the same as the domain of the integrated series. # numpy.polynomial.polynomial.Polynomial.linspace method polynomial.polynomial.Polynomial.linspace(_n =100_, _domain =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L921-L949) Return x, y values at equally spaced points in domain. Returns the x, y values at `n` linearly spaced points across the domain. Here y is the value of the polynomial at the points x. By default the domain is the same as that of the series instance. This method is intended mostly as a plotting aid. Parameters: **n** int, optional Number of point pairs to return. The default value is 100. **domain**{None, array_like}, optional If not None, the specified domain is used instead of that of the calling instance. It should be of the form `[beg,end]`. The default is None, in which case the domain of the calling instance is used. Returns: **x, y** ndarray x is equal to linspace(self.domain[0], self.domain[1], n) and y is the series evaluated at each element of x. # numpy.polynomial.polynomial.Polynomial.mapparms method polynomial.polynomial.Polynomial.mapparms()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L822-L849) Return the mapping parameters. The returned values define a linear map `off + scl*x` that is applied to the input arguments before the series is evaluated. The map depends on the `domain` and `window`; if the current `domain` is equal to the `window` the resulting map is the identity.
If the coefficients of the series instance are to be used by themselves outside this class, then the linear function must be substituted for the `x` in the standard representation of the base polynomials. Returns: **off, scl** float or complex The mapping function is defined by `off + scl*x`. #### Notes If the current domain is the interval `[l1, r1]` and the window is `[l2, r2]`, then the linear mapping function `L` is defined by the equations: L(l1) = l2 L(r1) = r2 # numpy.polynomial.polynomial.Polynomial.roots method polynomial.polynomial.Polynomial.roots()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L906-L919) Return the roots of the series polynomial. Compute the roots for the series. Note that the accuracy of the roots decreases the further outside the [`domain`](numpy.polynomial.polynomial.polynomial.domain#numpy.polynomial.polynomial.Polynomial.domain "numpy.polynomial.polynomial.Polynomial.domain") they lie. Returns: **roots** ndarray Array containing the roots of the series. # numpy.polynomial.polynomial.Polynomial.trim method polynomial.polynomial.Polynomial.trim(_tol =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L733-L754) Remove trailing coefficients. Remove trailing coefficients until a coefficient is reached whose absolute value is greater than `tol` or the beginning of the series is reached. If all the coefficients would be removed the series is set to `[0]`. A new series instance is returned with the new coefficients. The current instance remains unchanged. Parameters: **tol** non-negative number. All trailing coefficients less than `tol` will be removed. Returns: **new_series** series New instance of series with trimmed coefficients.
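An illustrative sketch of how `tol` controls trimming (the coefficient values are arbitrary):

```python
import numpy as np

p = np.polynomial.Polynomial([1, 1e-4, 1e-9])

# The default tol=0 removes only exact trailing zeros;
# a positive tol also drops small trailing coefficients.
print(p.trim().coef)          # all three coefficients kept
print(p.trim(tol=1e-6).coef)  # the trailing 1e-9 term is dropped
print(p.trim(tol=2).coef)     # every coefficient below tol: reduced to [0.]
```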
# numpy.polynomial.polynomial.Polynomial.truncate method polynomial.polynomial.Polynomial.truncate(_size_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/_polybase.py#L756-L783) Truncate series to length [`size`](numpy.size#numpy.size "numpy.size"). Reduce the series to length [`size`](numpy.size#numpy.size "numpy.size") by discarding the high degree terms. The value of [`size`](numpy.size#numpy.size "numpy.size") must be a positive integer. This can be useful in least squares where the coefficients of the high degree terms may be very small. Parameters: **size** positive int The series is reduced to length [`size`](numpy.size#numpy.size "numpy.size") by discarding the high degree terms. The value of [`size`](numpy.size#numpy.size "numpy.size") must be a positive integer. Returns: **new_series** series New instance of series with truncated coefficients. # numpy.polynomial.polynomial.polyone polynomial.polynomial.polyone _= array([1])_ An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(…)`) for instantiating an array. For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array. Parameters: **(for the __new__ method; see Notes below)** **shape** tuple of ints Shape of created array. **dtype** data-type, optional Any object that can be interpreted as a numpy data type. 
**buffer** object exposing buffer interface, optional Used to fill the array with data. **offset** int, optional Offset of array data in buffer. **strides** tuple of ints, optional Strides of data in memory. **order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also [`array`](numpy.array#numpy.array "numpy.array") Construct an array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Create an array, each element of which is zero. [`empty`](numpy.empty#numpy.empty "numpy.empty") Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Create a data-type. [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray") An ndarray alias [generic](https://docs.python.org/3/glossary.html#term-generic-type "\(in Python v3.13\)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). #### Notes There are two modes of creating an array using `__new__`: 1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used. 2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted. No `__init__` method is needed because the array is fully initialized after the `__new__` method. #### Examples These examples illustrate the low-level [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray. First mode, `buffer` is None: >>> import numpy as np >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. skip first element array([2, 3]) Attributes: **T** ndarray Transpose of the array. **data** buffer The array’s elements, in memory.
**dtype** dtype object Describes the format of the elements in the array. **flags** dict Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc. **flat** numpy.flatiter object Flattened version of the array as an iterator. The iterator allows assignments, e.g., `x.flat = 3` (See [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") for assignment examples; TODO). **imag** ndarray Imaginary part of the array. **real** ndarray Real part of the array. **size** int Number of elements in the array. **itemsize** int The memory use of each array element in bytes. **nbytes** int The total number of bytes required to store the array data, i.e., `itemsize * size`. **ndim** int The array’s number of dimensions. **shape** tuple of ints Shape of the array. **strides** tuple of ints The step-size required to move from one element to the next in memory. For example, a contiguous `(3, 4)` array of type `int16` in C-order has strides `(8, 2)`. This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (`2 * 4`). **ctypes** ctypes object Class containing properties of the array needed for interaction with ctypes. **base** ndarray If the array is a view into another array, that array is its `base` (unless that array is also a view). The `base` array is where the array data is actually stored. # numpy.polynomial.polynomial.polypow polynomial.polynomial.polypow(_c_ , _pow_ , _maxpower =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L427-L463) Raise a polynomial to a power. Returns the polynomial `c` raised to the power [`pow`](numpy.pow#numpy.pow "numpy.pow"). The argument `c` is a sequence of coefficients ordered from low to high, i.e., [1,2,3] is the series `1 + 2*x + 3*x**2`. Parameters: **c** array_like 1-D array of series coefficients ordered from low to high degree.
**pow** integer Power to which the series will be raised. **maxpower** integer, optional Maximum power allowed. This is mainly to limit growth of the series to unmanageable size. Default is 16. Returns: **coef** ndarray Power series of `c` raised to the power `pow`. See also [`polyadd`](numpy.polyadd#numpy.polyadd "numpy.polyadd"), [`polysub`](numpy.polysub#numpy.polysub "numpy.polysub"), [`polymulx`](numpy.polynomial.polynomial.polymulx#numpy.polynomial.polynomial.polymulx "numpy.polynomial.polynomial.polymulx"), [`polymul`](numpy.polymul#numpy.polymul "numpy.polymul"), [`polydiv`](numpy.polydiv#numpy.polydiv "numpy.polydiv") #### Examples >>> from numpy.polynomial import polynomial as P >>> P.polypow([1, 2, 3], 2) array([ 1., 4., 10., 12., 9.]) # numpy.polynomial.polynomial.polyroots polynomial.polynomial.polyroots(_c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L1482-L1542) Compute the roots of a polynomial. Return the roots (a.k.a. “zeros”) of the polynomial \\[p(x) = \sum_i c[i] * x^i.\\] Parameters: **c** 1-D array_like 1-D array of polynomial coefficients. Returns: **out** ndarray Array of the roots of the polynomial. If all the roots are real, then `out` is also real, otherwise it is complex.
See also [`numpy.polynomial.chebyshev.chebroots`](numpy.polynomial.chebyshev.chebroots#numpy.polynomial.chebyshev.chebroots "numpy.polynomial.chebyshev.chebroots") [`numpy.polynomial.legendre.legroots`](numpy.polynomial.legendre.legroots#numpy.polynomial.legendre.legroots "numpy.polynomial.legendre.legroots") [`numpy.polynomial.laguerre.lagroots`](numpy.polynomial.laguerre.lagroots#numpy.polynomial.laguerre.lagroots "numpy.polynomial.laguerre.lagroots") [`numpy.polynomial.hermite.hermroots`](numpy.polynomial.hermite.hermroots#numpy.polynomial.hermite.hermroots "numpy.polynomial.hermite.hermroots") [`numpy.polynomial.hermite_e.hermeroots`](numpy.polynomial.hermite_e.hermeroots#numpy.polynomial.hermite_e.hermeroots "numpy.polynomial.hermite_e.hermeroots") #### Notes The root estimates are obtained as the eigenvalues of the companion matrix. Roots far from the origin of the complex plane may have large errors due to the numerical instability of the power series for such values. Roots with multiplicity greater than 1 will also show larger errors as the value of the series near such points is relatively insensitive to errors in the roots. Isolated roots near the origin can be improved by a few iterations of Newton’s method. #### Examples >>> import numpy.polynomial.polynomial as poly >>> poly.polyroots(poly.polyfromroots((-1,0,1))) array([-1., 0., 1.]) >>> poly.polyroots(poly.polyfromroots((-1,0,1))).dtype dtype('float64') >>> j = complex(0,1) >>> poly.polyroots(poly.polyfromroots((-j,0,j))) array([ 0.00000000e+00+0.j, 0.00000000e+00+1.j, 2.77555756e-17-1.j]) # may vary # numpy.polynomial.polynomial.polysub polynomial.polynomial.polysub(_c1_ , _c2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L252-L286) Subtract one polynomial from another. Returns the difference of two polynomials `c1` \- `c2`.
The arguments are sequences of coefficients from lowest order term to highest, i.e., [1,2,3] represents the polynomial `1 + 2*x + 3*x**2`. Parameters: **c1, c2** array_like 1-D arrays of polynomial coefficients ordered from low to high. Returns: **out** ndarray Of coefficients representing their difference. See also [`polyadd`](numpy.polyadd#numpy.polyadd "numpy.polyadd"), [`polymulx`](numpy.polynomial.polynomial.polymulx#numpy.polynomial.polynomial.polymulx "numpy.polynomial.polynomial.polymulx"), [`polymul`](numpy.polymul#numpy.polymul "numpy.polymul"), [`polydiv`](numpy.polydiv#numpy.polydiv "numpy.polydiv"), [`polypow`](numpy.polynomial.polynomial.polypow#numpy.polynomial.polynomial.polypow "numpy.polynomial.polynomial.polypow") #### Examples >>> from numpy.polynomial import polynomial as P >>> c1 = (1, 2, 3) >>> c2 = (3, 2, 1) >>> P.polysub(c1,c2) array([-2., 0., 2.]) >>> P.polysub(c2, c1) # -P.polysub(c1,c2) array([ 2., 0., -2.]) # numpy.polynomial.polynomial.polytrim polynomial.polynomial.polytrim(_c_ , _tol =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polyutils.py#L144-L192) Remove “small” “trailing” coefficients from a polynomial. “Small” means “small in absolute value” and is controlled by the parameter `tol`; “trailing” means highest order coefficient(s), e.g., in `[0, 1, 1, 0, 0]` (which represents `0 + x + x**2 + 0*x**3 + 0*x**4`) both the 3-rd and 4-th order coefficients would be “trimmed.” Parameters: **c** array_like 1-d array of coefficients, ordered from lowest order to highest. **tol** number, optional Trailing (i.e., highest order) elements with absolute value less than or equal to `tol` (default value is zero) are removed. Returns: **trimmed** ndarray 1-d array with trailing zeros removed. If the resulting series would be empty, a series containing a single zero is returned. 
Raises: ValueError If `tol` < 0 #### Examples >>> from numpy.polynomial import polyutils as pu >>> pu.trimcoef((0,0,3,0,5,0,0)) array([0., 0., 3., 0., 5.]) >>> pu.trimcoef((0,0,1e-3,0,1e-5,0,0),1e-3) # item == tol is trimmed array([0.]) >>> i = complex(0,1) # works for complex >>> pu.trimcoef((3e-4,1e-3*(1-i),5e-4,2e-5*(1+i)), 1e-3) array([0.0003+0.j , 0.001 -0.001j]) # numpy.polynomial.polynomial.polyval polynomial.polynomial.polyval(_x_ , _c_ , _tensor =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L663-L755) Evaluate a polynomial at points x. If `c` is of length `n + 1`, this function returns the value \\[p(x) = c_0 + c_1 * x + ... + c_n * x^n\\] The parameter `x` is converted to an array only if it is a tuple or a list, otherwise it is treated as a scalar. In either case, either `x` or its elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` is a 1-D array, then `p(x)` will have the same shape as `x`. If `c` is multidimensional, then the shape of the result depends on the value of `tensor`. If `tensor` is true the shape will be c.shape[1:] + x.shape. If `tensor` is false the shape will be c.shape[1:]. Note that scalars have shape (). Trailing zeros in the coefficients will be used in the evaluation, so they should be avoided if efficiency is a concern. Parameters: **x** array_like, compatible object If `x` is a list or tuple, it is converted to an ndarray, otherwise it is left unchanged and treated as a scalar. In either case, `x` or its elements must support addition and multiplication with themselves and with the elements of `c`. **c** array_like Array of coefficients ordered so that the coefficients for terms of degree n are contained in c[n]. If `c` is multidimensional the remaining indices enumerate multiple polynomials. In the two dimensional case the coefficients may be thought of as stored in the columns of `c`.
**tensor** boolean, optional If True, the shape of the coefficient array is extended with ones on the right, one for each dimension of `x`. Scalars have dimension 0 for this action. The result is that every column of coefficients in `c` is evaluated for every element of `x`. If False, `x` is broadcast over the columns of `c` for the evaluation. This keyword is useful when `c` is multidimensional. The default value is True. Returns: **values** ndarray, compatible object The shape of the returned array is described above. See also [`polyval2d`](numpy.polynomial.polynomial.polyval2d#numpy.polynomial.polynomial.polyval2d "numpy.polynomial.polynomial.polyval2d"), [`polygrid2d`](numpy.polynomial.polynomial.polygrid2d#numpy.polynomial.polynomial.polygrid2d "numpy.polynomial.polynomial.polygrid2d"), [`polyval3d`](numpy.polynomial.polynomial.polyval3d#numpy.polynomial.polynomial.polyval3d "numpy.polynomial.polynomial.polyval3d"), [`polygrid3d`](numpy.polynomial.polynomial.polygrid3d#numpy.polynomial.polynomial.polygrid3d "numpy.polynomial.polynomial.polygrid3d") #### Notes The evaluation uses Horner’s method. #### Examples >>> import numpy as np >>> from numpy.polynomial.polynomial import polyval >>> polyval(1, [1,2,3]) 6.0 >>> a = np.arange(4).reshape(2,2) >>> a array([[0, 1], [2, 3]]) >>> polyval(a, [1, 2, 3]) array([[ 1., 6.], [17., 34.]]) >>> coef = np.arange(4).reshape(2, 2) # multidimensional coefficients >>> coef array([[0, 1], [2, 3]]) >>> polyval([1, 2], coef, tensor=True) array([[2., 4.], [4., 7.]]) >>> polyval([1, 2], coef, tensor=False) array([2., 7.]) # numpy.polynomial.polynomial.polyval2d polynomial.polynomial.polyval2d(_x_ , _y_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L845-L894) Evaluate a 2-D polynomial at points (x, y). 
This function returns the value \\[p(x,y) = \sum_{i,j} c_{i,j} * x^i * y^j\\] The parameters `x` and `y` are converted to arrays only if they are tuples or lists, otherwise they are treated as scalars and they must have the same shape after conversion. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than two dimensions, ones are implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape. Parameters: **x, y** array_like, compatible objects The two dimensional series is evaluated at the points `(x, y)`, where `x` and `y` must have the same shape. If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns: **values** ndarray, compatible object The values of the two dimensional polynomial at points formed with pairs of corresponding values from `x` and `y`.
See also [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval"), [`polygrid2d`](numpy.polynomial.polynomial.polygrid2d#numpy.polynomial.polynomial.polygrid2d "numpy.polynomial.polynomial.polygrid2d"), [`polyval3d`](numpy.polynomial.polynomial.polyval3d#numpy.polynomial.polynomial.polyval3d "numpy.polynomial.polynomial.polyval3d"), [`polygrid3d`](numpy.polynomial.polynomial.polygrid3d#numpy.polynomial.polynomial.polygrid3d "numpy.polynomial.polynomial.polygrid3d") #### Examples >>> from numpy.polynomial import polynomial as P >>> c = ((1, 2, 3), (4, 5, 6)) >>> P.polyval2d(1, 1, c) 21.0 # numpy.polynomial.polynomial.polyval3d polynomial.polynomial.polyval3d(_x_ , _y_ , _z_ , _c_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L953-L1003) Evaluate a 3-D polynomial at points (x, y, z). This function returns the values: \\[p(x,y,z) = \sum_{i,j,k} c_{i,j,k} * x^i * y^j * z^k\\] The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists, otherwise they are treated as scalars and they must have the same shape after conversion. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than 3 dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape. Parameters: **x, y, z** array_like, compatible object The three dimensional series is evaluated at the points `(x, y, z)`, where `x`, `y`, and `z` must have the same shape. If any of `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar. **c** array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j,k is contained in `c[i,j,k]`. If `c` has dimension greater than 3 the remaining indices enumerate multiple sets of coefficients.
Returns: **values** ndarray, compatible object The values of the multidimensional polynomial on points formed with triples of corresponding values from `x`, `y`, and `z`. See also [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval"), [`polyval2d`](numpy.polynomial.polynomial.polyval2d#numpy.polynomial.polynomial.polyval2d "numpy.polynomial.polynomial.polyval2d"), [`polygrid2d`](numpy.polynomial.polynomial.polygrid2d#numpy.polynomial.polynomial.polygrid2d "numpy.polynomial.polynomial.polygrid2d"), [`polygrid3d`](numpy.polynomial.polynomial.polygrid3d#numpy.polynomial.polynomial.polygrid3d "numpy.polynomial.polynomial.polygrid3d") #### Examples >>> from numpy.polynomial import polynomial as P >>> c = ((1, 2, 3), (4, 5, 6), (7, 8, 9)) >>> P.polyval3d(1, 1, 1, c) 45.0 # numpy.polynomial.polynomial.polyvalfromroots polynomial.polynomial.polyvalfromroots(_x_ , _r_ , _tensor =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L758-L842) Evaluate a polynomial specified by its roots at points x. If `r` is of length `N`, this function returns the value \\[p(x) = \prod_{n=1}^{N} (x - r_n)\\] The parameter `x` is converted to an array only if it is a tuple or a list, otherwise it is treated as a scalar. In either case, either `x` or its elements must support multiplication and addition both with themselves and with the elements of `r`. If `r` is a 1-D array, then `p(x)` will have the same shape as `x`. If `r` is multidimensional, then the shape of the result depends on the value of `tensor`. If `tensor` is `True` the shape will be r.shape[1:] + x.shape; that is, each polynomial is evaluated at every value of `x`. If `tensor` is `False`, the shape will be r.shape[1:]; that is, each polynomial is evaluated only for the corresponding broadcast value of `x`. Note that scalars have shape (,). 
Parameters: **x** array_like, compatible object If `x` is a list or tuple, it is converted to an ndarray, otherwise it is left unchanged and treated as a scalar. In either case, `x` or its elements must support addition and multiplication with themselves and with the elements of `r`. **r** array_like Array of roots. If `r` is multidimensional the first index is the root index, while the remaining indices enumerate multiple polynomials. For instance, in the two dimensional case the roots of each polynomial may be thought of as stored in the columns of `r`. **tensor** boolean, optional If True, the shape of the roots array is extended with ones on the right, one for each dimension of `x`. Scalars have dimension 0 for this action. The result is that every column of coefficients in `r` is evaluated for every element of `x`. If False, `x` is broadcast over the columns of `r` for the evaluation. This keyword is useful when `r` is multidimensional. The default value is True. Returns: **values** ndarray, compatible object The shape of the returned array is described above. 
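For the 1-D case, the definition \\(p(x) = \prod_{n} (x - r_n)\\) can be verified directly against an explicit `np.prod` over the roots; a minimal sketch with arbitrarily chosen points:

```python
import numpy as np
from numpy.polynomial.polynomial import polyvalfromroots

r = np.array([1.0, 2.0, 3.0])      # roots of (x - 1)(x - 2)(x - 3)
x = np.array([0.0, 4.0, 2.5])

vals = polyvalfromroots(x, r)
# Same product, computed explicitly over the root axis
expected = np.prod(x[:, None] - r[None, :], axis=1)
assert np.allclose(vals, expected)
```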
See also [`polyroots`](numpy.polynomial.polynomial.polyroots#numpy.polynomial.polynomial.polyroots "numpy.polynomial.polynomial.polyroots"), [`polyfromroots`](numpy.polynomial.polynomial.polyfromroots#numpy.polynomial.polynomial.polyfromroots "numpy.polynomial.polynomial.polyfromroots"), [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval") #### Examples >>> from numpy.polynomial.polynomial import polyvalfromroots >>> polyvalfromroots(1, [1, 2, 3]) 0.0 >>> a = np.arange(4).reshape(2, 2) >>> a array([[0, 1], [2, 3]]) >>> polyvalfromroots(a, [-1, 0, 1]) array([[-0., 0.], [ 6., 24.]]) >>> r = np.arange(-2, 2).reshape(2,2) # multidimensional coefficients >>> r # each column of r defines one polynomial array([[-2, -1], [ 0, 1]]) >>> b = [-2, 1] >>> polyvalfromroots(b, r, tensor=True) array([[-0., 3.], [ 3., 0.]]) >>> polyvalfromroots(b, r, tensor=False) array([-0., 0.]) # numpy.polynomial.polynomial.polyvander polynomial.polynomial.polyvander(_x_ , _deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L1065-L1129) Vandermonde matrix of given degree. Returns the Vandermonde matrix of degree `deg` and sample points `x`. The Vandermonde matrix is defined by \\[V[..., i] = x^i,\\] where `0 <= i <= deg`. The leading indices of `V` index the elements of `x` and the last index is the power of `x`. If `c` is a 1-D array of coefficients of length `n + 1` and `V` is the matrix `V = polyvander(x, n)`, then `np.dot(V, c)` and `polyval(x, c)` are the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of polynomials of the same degree and sample points. Parameters: **x** array_like Array of points. The dtype is converted to float64 or complex128 depending on whether any of the elements are complex. If `x` is scalar it is converted to a 1-D array. **deg** int Degree of the resulting matrix. Returns: **vander** ndarray. The Vandermonde matrix. 
The shape of the returned matrix is `x.shape + (deg + 1,)`, where the last index is the power of `x`. The dtype will be the same as the converted `x`. See also [`polyvander2d`](numpy.polynomial.polynomial.polyvander2d#numpy.polynomial.polynomial.polyvander2d "numpy.polynomial.polynomial.polyvander2d"), [`polyvander3d`](numpy.polynomial.polynomial.polyvander3d#numpy.polynomial.polynomial.polyvander3d "numpy.polynomial.polynomial.polyvander3d") #### Examples The Vandermonde matrix of degree `deg = 5` and sample points `x = [-1, 2, 3]` contains the element-wise powers of `x` from 0 to 5 as its columns. >>> from numpy.polynomial import polynomial as P >>> x, deg = [-1, 2, 3], 5 >>> P.polyvander(x=x, deg=deg) array([[ 1., -1., 1., -1., 1., -1.], [ 1., 2., 4., 8., 16., 32.], [ 1., 3., 9., 27., 81., 243.]]) # numpy.polynomial.polynomial.polyvander2d polynomial.polynomial.polyvander2d(_x_ , _y_ , _deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L1132-L1208) Pseudo-Vandermonde matrix of given degrees. Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y)`. The pseudo-Vandermonde matrix is defined by \\[V[..., (deg[1] + 1)*i + j] = x^i * y^j,\\] where `0 <= i <= deg[0]` and `0 <= j <= deg[1]`. The leading indices of `V` index the points `(x, y)` and the last index encodes the powers of `x` and `y`. If `V = polyvander2d(x, y, [xdeg, ydeg])`, then the columns of `V` correspond to the elements of a 2-D coefficient array `c` of shape (xdeg + 1, ydeg + 1) in the order \\[c_{00}, c_{01}, c_{02} ... , c_{10}, c_{11}, c_{12} ...\\] and `np.dot(V, c.flat)` and `polyval2d(x, y, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 2-D polynomials of the same degrees and sample points. Parameters: **x, y** array_like Arrays of point coordinates, all of the same shape. 
The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. **deg** list of ints List of maximum degrees of the form [x_deg, y_deg]. Returns: **vander2d** ndarray The shape of the returned matrix is `x.shape + (order,)`, where \\(order = (deg[0]+1)*(deg[1]+1)\\). The dtype will be the same as the converted `x` and `y`. See also [`polyvander`](numpy.polynomial.polynomial.polyvander#numpy.polynomial.polynomial.polyvander "numpy.polynomial.polynomial.polyvander"), [`polyvander3d`](numpy.polynomial.polynomial.polyvander3d#numpy.polynomial.polynomial.polyvander3d "numpy.polynomial.polynomial.polyvander3d"), [`polyval2d`](numpy.polynomial.polynomial.polyval2d#numpy.polynomial.polynomial.polyval2d "numpy.polynomial.polynomial.polyval2d"), [`polyval3d`](numpy.polynomial.polynomial.polyval3d#numpy.polynomial.polynomial.polyval3d "numpy.polynomial.polynomial.polyval3d") #### Examples >>> import numpy as np The 2-D pseudo-Vandermonde matrix of degree `[1, 2]` and sample points `x = [-1, 2]` and `y = [1, 3]` is as follows: >>> from numpy.polynomial import polynomial as P >>> x = np.array([-1, 2]) >>> y = np.array([1, 3]) >>> m, n = 1, 2 >>> deg = np.array([m, n]) >>> V = P.polyvander2d(x=x, y=y, deg=deg) >>> V array([[ 1., 1., 1., -1., -1., -1.], [ 1., 3., 9., 2., 6., 18.]]) We can verify the columns for any `0 <= i <= m` and `0 <= j <= n`: >>> i, j = 0, 1 >>> V[:, (deg[1]+1)*i + j] == x**i * y**j array([ True, True]) The (1D) Vandermonde matrix of sample points `x` and degree `m` is a special case of the (2D) pseudo-Vandermonde matrix with `y` points all zero and degree `[m, 0]`. 
>>> P.polyvander2d(x=x, y=0*x, deg=(m, 0)) == P.polyvander(x=x, deg=m) array([[ True, True], [ True, True]]) # numpy.polynomial.polynomial.polyvander3d polynomial.polynomial.polyvander3d(_x_ , _y_ , _z_ , _deg_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polynomial.py#L1211-L1282) Pseudo-Vandermonde matrix of given degrees. Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y, z)`. If `l`, `m`, `n` are the given degrees in `x`, `y`, `z`, then the pseudo-Vandermonde matrix is defined by \\[V[..., (m+1)(n+1)i + (n+1)j + k] = x^i * y^j * z^k,\\] where `0 <= i <= l`, `0 <= j <= m`, and `0 <= k <= n`. The leading indices of `V` index the points `(x, y, z)` and the last index encodes the powers of `x`, `y`, and `z`. If `V = polyvander3d(x, y, z, [xdeg, ydeg, zdeg])`, then the columns of `V` correspond to the elements of a 3-D coefficient array `c` of shape (xdeg + 1, ydeg + 1, zdeg + 1) in the order \\[c_{000}, c_{001}, c_{002},... , c_{010}, c_{011}, c_{012},...\\] and `np.dot(V, c.flat)` and `polyval3d(x, y, z, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 3-D polynomials of the same degrees and sample points. Parameters: **x, y, z** array_like Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. **deg** list of ints List of maximum degrees of the form [x_deg, y_deg, z_deg]. Returns: **vander3d** ndarray The shape of the returned matrix is `x.shape + (order,)`, where \\(order = (deg[0]+1)*(deg[1]+1)*(deg[2]+1)\\). The dtype will be the same as the converted `x`, `y`, and `z`. 
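The equivalence between `np.dot(V, c.flat)` and `polyval3d(x, y, z, c)` stated above can be spot-checked numerically; the random data and degrees here are arbitrary choices for illustration:

```python
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(0)
x, y, z = rng.random((3, 5))         # five sample points per coordinate
deg = [2, 1, 2]                      # [x_deg, y_deg, z_deg]
c = rng.random((deg[0] + 1, deg[1] + 1, deg[2] + 1))

V = P.polyvander3d(x, y, z, deg)
# One row per point, one column per coefficient of c (in C order)
assert V.shape == (5, (deg[0] + 1) * (deg[1] + 1) * (deg[2] + 1))
# Columns of V pair with c.flat, so a matrix-vector product
# reproduces direct evaluation up to roundoff
assert np.allclose(V @ c.ravel(), P.polyval3d(x, y, z, c))
```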
See also [`polyvander`](numpy.polynomial.polynomial.polyvander#numpy.polynomial.polynomial.polyvander "numpy.polynomial.polynomial.polyvander"), [`polyvander2d`](numpy.polynomial.polynomial.polyvander2d#numpy.polynomial.polynomial.polyvander2d "numpy.polynomial.polynomial.polyvander2d"), [`polyval2d`](numpy.polynomial.polynomial.polyval2d#numpy.polynomial.polynomial.polyval2d "numpy.polynomial.polynomial.polyval2d"), [`polyval3d`](numpy.polynomial.polynomial.polyval3d#numpy.polynomial.polynomial.polyval3d "numpy.polynomial.polynomial.polyval3d") #### Examples >>> import numpy as np >>> from numpy.polynomial import polynomial as P >>> x = np.asarray([-1, 2, 1]) >>> y = np.asarray([1, -2, -3]) >>> z = np.asarray([2, 2, 5]) >>> l, m, n = [2, 2, 1] >>> deg = [l, m, n] >>> V = P.polyvander3d(x=x, y=y, z=z, deg=deg) >>> V array([[ 1., 2., 1., 2., 1., 2., -1., -2., -1., -2., -1., -2., 1., 2., 1., 2., 1., 2.], [ 1., 2., -2., -4., 4., 8., 2., 4., -4., -8., 8., 16., 4., 8., -8., -16., 16., 32.], [ 1., 5., -3., -15., 9., 45., 1., 5., -3., -15., 9., 45., 1., 5., -3., -15., 9., 45.]]) We can verify the columns for any `0 <= i <= l`, `0 <= j <= m`, and `0 <= k <= n`: >>> i, j, k = 2, 1, 0 >>> V[:, (m+1)*(n+1)*i + (n+1)*j + k] == x**i * y**j * z**k array([ True, True, True]) # numpy.polynomial.polynomial.polyx polynomial.polynomial.polyx _= array([0, 1])_ An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(…)`) for instantiating an array. For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array. 
Parameters: **(for the __new__ method; see Notes below)** **shape** tuple of ints Shape of created array. **dtype** data-type, optional Any object that can be interpreted as a numpy data type. **buffer** object exposing buffer interface, optional Used to fill the array with data. **offset** int, optional Offset of array data in buffer. **strides** tuple of ints, optional Strides of data in memory. **order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also [`array`](numpy.array#numpy.array "numpy.array") Construct an array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Create an array, each element of which is zero. [`empty`](numpy.empty#numpy.empty "numpy.empty") Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Create a data-type. [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray") An ndarray alias [generic](https://docs.python.org/3/glossary.html#term-generic-type "\(in Python v3.13\)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). #### Notes There are two modes of creating an array using `__new__`: 1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used. 2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted. No `__init__` method is needed because the array is fully initialized after the `__new__` method. #### Examples These examples illustrate the low-level [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray. First mode, `buffer` is None: >>> import numpy as np >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... 
offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. skip first element array([2, 3]) Attributes: **T** ndarray Transpose of the array. **data** buffer The array’s elements, in memory. **dtype** dtype object Describes the format of the elements in the array. **flags** dict Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc. **flat** numpy.flatiter object Flattened version of the array as an iterator. The iterator allows assignments, e.g., `x.flat = 3` (See [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") for assignment examples; TODO). **imag** ndarray Imaginary part of the array. **real** ndarray Real part of the array. **size** int Number of elements in the array. **itemsize** int The memory use of each array element in bytes. **nbytes** int The total number of bytes required to store the array data, i.e., `itemsize * size`. **ndim** int The array’s number of dimensions. **shape** tuple of ints Shape of the array. **strides** tuple of ints The step-size required to move from one element to the next in memory. For example, a contiguous `(3, 4)` array of type `int16` in C-order has strides `(8, 2)`. This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (`2 * 4`). **ctypes** ctypes object Class containing properties of the array needed for interaction with ctypes. **base** ndarray If the array is a view into another array, that array is its `base` (unless that array is also a view). The `base` array is where the array data is actually stored. # numpy.polynomial.polynomial.polyzero polynomial.polynomial.polyzero _= array([0])_ (The `ndarray` description given above for `polyx` applies here as well.) # numpy.polynomial.polyutils.as_series polynomial.polyutils.as_series(_alist_ , _trim =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polyutils.py#L66-L141) Return argument as a list of 1-d arrays. The returned list contains array(s) of dtype double, complex double, or object. A 1-d argument of shape `(N,)` is parsed into `N` arrays of size one; a 2-d argument of shape `(M,N)` is parsed into `M` arrays of size `N` (i.e., is “parsed by row”); and a higher dimensional array raises a ValueError if it is not first reshaped into either a 1-d or 2-d array. Parameters: **alist** array_like A 1- or 2-d array_like **trim** boolean, optional When True, trailing zeros are removed from the inputs. When False, the inputs are passed through intact. Returns: **[a1, a2,…]** list of 1-D arrays A copy of the input data as a list of 1-d arrays. Raises: ValueError Raised when `as_series` cannot convert its input to 1-d arrays, or at least one of the resulting arrays is empty. #### Examples >>> import numpy as np >>> from numpy.polynomial import polyutils as pu >>> a = np.arange(4) >>> pu.as_series(a) [array([0.]), array([1.]), array([2.]), array([3.])] >>> b = np.arange(6).reshape((2,3)) >>> pu.as_series(b) [array([0., 1., 2.]), array([3., 4., 5.])] >>> pu.as_series((1, np.arange(3), np.arange(2, dtype=np.float16))) [array([1.]), array([0., 1., 2.]), array([0., 1.])] >>> pu.as_series([2, [1.1, 0.]]) [array([2.]), array([1.1])] >>> pu.as_series([2, [1.1, 0.]], trim=False) [array([2.]), array([1.1, 0. 
])] # numpy.polynomial.polyutils.getdomain polynomial.polyutils.getdomain(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polyutils.py#L194-L239) Return a domain suitable for given abscissae. Find a domain suitable for a polynomial or Chebyshev series defined at the values supplied. Parameters: **x** array_like 1-d array of abscissae whose domain will be determined. Returns: **domain** ndarray 1-d array containing two values. If the inputs are complex, then the two returned points are the lower left and upper right corners of the smallest rectangle (aligned with the axes) in the complex plane containing the points `x`. If the inputs are real, then the two points are the ends of the smallest interval containing the points `x`. See also [`mapparms`](numpy.polynomial.polyutils.mapparms#numpy.polynomial.polyutils.mapparms "numpy.polynomial.polyutils.mapparms"), [`mapdomain`](numpy.polynomial.polyutils.mapdomain#numpy.polynomial.polyutils.mapdomain "numpy.polynomial.polyutils.mapdomain") #### Examples >>> import numpy as np >>> from numpy.polynomial import polyutils as pu >>> points = np.arange(4)**2 - 5; points array([-5, -4, -1, 4]) >>> pu.getdomain(points) array([-5., 4.]) >>> c = np.exp(complex(0,1)*np.pi*np.arange(12)/6) # unit circle >>> pu.getdomain(c) array([-1.-1.j, 1.+1.j]) # numpy.polynomial.polyutils.mapdomain polynomial.polyutils.mapdomain(_x_ , _old_ , _new_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polyutils.py#L288-L355) Apply linear map to input points. The linear map `offset + scale*x` that maps the domain `old` to the domain `new` is applied to the points `x`. Parameters: **x** array_like Points to be mapped. If `x` is a subtype of ndarray the subtype will be preserved. **old, new** array_like The two domains that determine the map. Each must (successfully) convert to 1-d arrays containing precisely two values. 
Returns: **x_out** ndarray Array of points of the same shape as `x`, after application of the linear map between the two domains. See also [`getdomain`](numpy.polynomial.polyutils.getdomain#numpy.polynomial.polyutils.getdomain "numpy.polynomial.polyutils.getdomain"), [`mapparms`](numpy.polynomial.polyutils.mapparms#numpy.polynomial.polyutils.mapparms "numpy.polynomial.polyutils.mapparms") #### Notes Effectively, this implements: \\[x\\_out = new[0] + m(x - old[0])\\] where \\[m = \frac{new[1]-new[0]}{old[1]-old[0]}\\] #### Examples >>> import numpy as np >>> from numpy.polynomial import polyutils as pu >>> old_domain = (-1,1) >>> new_domain = (0,2*np.pi) >>> x = np.linspace(-1,1,6); x array([-1. , -0.6, -0.2, 0.2, 0.6, 1. ]) >>> x_out = pu.mapdomain(x, old_domain, new_domain); x_out array([ 0. , 1.25663706, 2.51327412, 3.76991118, 5.02654825, # may vary 6.28318531]) >>> x - pu.mapdomain(x_out, new_domain, old_domain) array([0., 0., 0., 0., 0., 0.]) Also works for complex numbers (and thus can be used to map any line in the complex plane to any other line therein). >>> i = complex(0,1) >>> old = (-1 - i, 1 + i) >>> new = (-1 + i, 1 - i) >>> z = np.linspace(old[0], old[1], 6); z array([-1. -1.j , -0.6-0.6j, -0.2-0.2j, 0.2+0.2j, 0.6+0.6j, 1. +1.j ]) >>> new_z = pu.mapdomain(z, old, new); new_z array([-1.0+1.j , -0.6+0.6j, -0.2+0.2j, 0.2-0.2j, 0.6-0.6j, 1.0-1.j ]) # may vary # numpy.polynomial.polyutils.mapparms polynomial.polyutils.mapparms(_old_ , _new_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polyutils.py#L241-L286) Linear map parameters between domains. Return the parameters of the linear map `offset + scale*x` that maps `old` to `new` such that `old[i] -> new[i]`, `i = 0, 1`. Parameters: **old, new** array_like Domains. Each domain must (successfully) convert to a 1-d array containing precisely two values. Returns: **offset, scale** scalars The map `L(x) = offset + scale*x` maps the first domain to the second. 
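The returned `offset` and `scale` follow directly from solving `offset + scale*old[i] = new[i]` at the two endpoints; a quick sanity check (the domain values here are chosen arbitrarily):

```python
import numpy as np
from numpy.polynomial import polyutils as pu

old, new = (-1.0, 1.0), (0.0, 2 * np.pi)
offset, scale = pu.mapparms(old, new)

# Solving the two linear equations by hand:
expected_scale = (new[1] - new[0]) / (old[1] - old[0])   # pi
expected_offset = new[0] - expected_scale * old[0]       # pi
assert np.isclose(scale, expected_scale)
assert np.isclose(offset, expected_offset)
# The map sends the old endpoints onto the new ones
assert np.isclose(offset + scale * old[0], new[0])
assert np.isclose(offset + scale * old[1], new[1])
```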
See also [`getdomain`](numpy.polynomial.polyutils.getdomain#numpy.polynomial.polyutils.getdomain "numpy.polynomial.polyutils.getdomain"), [`mapdomain`](numpy.polynomial.polyutils.mapdomain#numpy.polynomial.polyutils.mapdomain "numpy.polynomial.polyutils.mapdomain") #### Notes Also works for complex numbers, and thus can be used to calculate the parameters required to map any line in the complex plane to any other line therein. #### Examples >>> from numpy.polynomial import polyutils as pu >>> pu.mapparms((-1,1),(-1,1)) (0.0, 1.0) >>> pu.mapparms((1,-1),(-1,1)) (-0.0, -1.0) >>> i = complex(0,1) >>> pu.mapparms((-i,-1),(1,i)) ((1+1j), (1-0j)) # numpy.polynomial.polyutils.trimcoef polynomial.polyutils.trimcoef(_c_ , _tol =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polyutils.py#L144-L192) Remove “small” “trailing” coefficients from a polynomial. “Small” means “small in absolute value” and is controlled by the parameter `tol`; “trailing” means highest order coefficient(s), e.g., in `[0, 1, 1, 0, 0]` (which represents `0 + x + x**2 + 0*x**3 + 0*x**4`) both the 3rd- and 4th-order coefficients would be “trimmed.” Parameters: **c** array_like 1-d array of coefficients, ordered from lowest order to highest. **tol** number, optional Trailing (i.e., highest order) elements with absolute value less than or equal to `tol` (default value is zero) are removed. Returns: **trimmed** ndarray 1-d array with trailing zeros removed. If the resulting series would be empty, a series containing a single zero is returned. 
Raises: ValueError If `tol` < 0 #### Examples >>> from numpy.polynomial import polyutils as pu >>> pu.trimcoef((0,0,3,0,5,0,0)) array([0., 0., 3., 0., 5.]) >>> pu.trimcoef((0,0,1e-3,0,1e-5,0,0),1e-3) # item == tol is trimmed array([0.]) >>> i = complex(0,1) # works for complex >>> pu.trimcoef((3e-4,1e-3*(1-i),5e-4,2e-5*(1+i)), 1e-3) array([0.0003+0.j , 0.001 -0.001j]) # numpy.polynomial.polyutils.trimseq polynomial.polyutils.trimseq(_seq_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/polyutils.py#L37-L63) Remove small Poly series coefficients. Parameters: **seq** sequence Sequence of Poly series coefficients. Returns: **series** sequence Subsequence with trailing zeros removed. If the resulting sequence would be empty, return the first element. The returned sequence may or may not be a view. #### Notes Do not lose the type info if the sequence contains unknown objects. # numpy.polynomial.set_default_printstyle polynomial.set_default_printstyle(_style_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/polynomial/__init__.py#L136-L182) Set the default format for the string representation of polynomials. Values for `style` must be valid inputs to `__format__`, i.e. ‘ascii’ or ‘unicode’. Parameters: **style** str Format string for default printing style. Must be either ‘ascii’ or ‘unicode’. #### Notes The default format depends on the platform: ‘unicode’ is used on Unix-based systems and ‘ascii’ on Windows. This determination is based on default font support for the unicode superscript and subscript ranges. 
#### Examples >>> p = np.polynomial.Polynomial([1, 2, 3]) >>> c = np.polynomial.Chebyshev([1, 2, 3]) >>> np.polynomial.set_default_printstyle('unicode') >>> print(p) 1.0 + 2.0·x + 3.0·x² >>> print(c) 1.0 + 2.0·T₁(x) + 3.0·T₂(x) >>> np.polynomial.set_default_printstyle('ascii') >>> print(p) 1.0 + 2.0 x + 3.0 x**2 >>> print(c) 1.0 + 2.0 T_1(x) + 3.0 T_2(x) >>> # Formatting supersedes all class/package-level defaults >>> print(f"{p:unicode}") 1.0 + 2.0·x + 3.0·x² # numpy.polysub numpy.polysub(_a1_ , _a2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_polynomial_impl.py#L855-L908) Difference (subtraction) of two polynomials. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials-package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials). Given two polynomials `a1` and `a2`, returns `a1 - a2`. `a1` and `a2` can be either array_like sequences of the polynomials’ coefficients (including coefficients equal to zero), or [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d") objects. Parameters: **a1, a2** array_like or poly1d Minuend and subtrahend polynomials, respectively. Returns: **out** ndarray or poly1d Array or [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d") object of the difference polynomial’s coefficients. 
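Since `a1` and `a2` may also be `poly1d` objects, the output type follows the inputs; a small sketch of the same subtraction using `poly1d`:

```python
import numpy as np

p1 = np.poly1d([2, 10, -2])    # 2x**2 + 10x - 2
p2 = np.poly1d([3, 10, -4])    # 3x**2 + 10x - 4
diff = np.polysub(p1, p2)

# poly1d inputs give a poly1d result, with the same coefficients
# as the array_like form: -x**2 + 2
assert isinstance(diff, np.poly1d)
assert np.array_equal(diff.coeffs, [-1, 0, 2])
```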
See also [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval"), [`polydiv`](numpy.polydiv#numpy.polydiv "numpy.polydiv"), [`polymul`](numpy.polymul#numpy.polymul "numpy.polymul"), [`polyadd`](numpy.polyadd#numpy.polyadd "numpy.polyadd") #### Examples \\[(2 x^2 + 10 x - 2) - (3 x^2 + 10 x - 4) = (-x^2 + 2)\\] >>> import numpy as np >>> np.polysub([2, 10, -2], [3, 10, -4]) array([-1, 0, 2]) # numpy.polyval numpy.polyval(_p_ , _x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_polynomial_impl.py#L702-L779) Evaluate a polynomial at specific values. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials-package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials). If `p` is of length N, this function returns the value: p[0]*x**(N-1) + p[1]*x**(N-2) + ... + p[N-2]*x + p[N-1] If `x` is a sequence, then `p(x)` is returned for each element of `x`. If `x` is another polynomial then the composite polynomial `p(x(t))` is returned. Parameters: **p** array_like or poly1d object 1D array of polynomial coefficients (including coefficients equal to zero) from highest degree to the constant term, or an instance of poly1d. **x** array_like or poly1d object A number, an array of numbers, or an instance of poly1d, at which to evaluate `p`. Returns: **values** ndarray or poly1d If `x` is a poly1d instance, the result is the composition of the two polynomials, i.e., `x` is “substituted” in `p` and the simplified result is returned. In addition, the type of `x` (array_like or poly1d) governs the type of the output: if `x` is array_like, `values` is array_like; if `x` is a poly1d object, so is `values`. See also [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d") A polynomial class. #### Notes Horner’s scheme [1] is used to evaluate the polynomial. 
Even so, for polynomials of high degree the values may be inaccurate due to rounding errors. Use carefully. If `x` is a subtype of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") the return value will be of the same type. #### References [1] I. N. Bronshtein, K. A. Semendyayev, and K. A. Hirsch (Eng. trans. Ed.), _Handbook of Mathematics_ , New York, Van Nostrand Reinhold Co., 1985, pg. 720. #### Examples >>> import numpy as np >>> np.polyval([3,0,1], 5) # 3 * 5**2 + 0 * 5**1 + 1 76 >>> np.polyval([3,0,1], np.poly1d(5)) poly1d([76]) >>> np.polyval(np.poly1d([3,0,1]), 5) 76 >>> np.polyval(np.poly1d([3,0,1]), np.poly1d(5)) poly1d([76]) # numpy.positive numpy.positive(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'positive'>_ Numerical positive, element-wise. Parameters: **x** array_like or scalar Input array. Returns: **y** ndarray or scalar Returned array or scalar: `y = +x`. This is a scalar if `x` is a scalar. #### Notes Equivalent to `x.copy()`, but only defined for types that support arithmetic. #### Examples >>> import numpy as np >>> x1 = np.array(([1., -1.])) >>> np.positive(x1) array([ 1., -1.]) The unary `+` operator can be used as a shorthand for `np.positive` on ndarrays. >>> x1 = np.array(([1., -1.])) >>> +x1 array([ 1., -1.]) # numpy.pow numpy.pow(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'power'>_ First array elements raised to powers from second array, element-wise. Raise each base in `x1` to the positionally-corresponding power in `x2`. `x1` and `x2` must be broadcastable to the same shape. An integer type raised to a negative integer power will raise a `ValueError`. Negative values raised to a non-integral value will return `nan`. To get complex results, cast the input to complex, or specify the `dtype` to be `complex` (see the example below).
Parameters: **x1** array_like The bases. **x2** array_like The exponents. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray The bases in `x1` raised to the exponents in `x2`. This is a scalar if both `x1` and `x2` are scalars. See also [`float_power`](numpy.float_power#numpy.float_power "numpy.float_power") power function that promotes integers to float #### Examples >>> import numpy as np Cube each element in an array. >>> x1 = np.arange(6) >>> x1 [0, 1, 2, 3, 4, 5] >>> np.power(x1, 3) array([ 0, 1, 8, 27, 64, 125]) Raise the bases to different exponents. >>> x2 = [1.0, 2.0, 3.0, 3.0, 2.0, 1.0] >>> np.power(x1, x2) array([ 0., 1., 8., 27., 16., 5.]) The effect of broadcasting. >>> x2 = np.array([[1, 2, 3, 3, 2, 1], [1, 2, 3, 3, 2, 1]]) >>> x2 array([[1, 2, 3, 3, 2, 1], [1, 2, 3, 3, 2, 1]]) >>> np.power(x1, x2) array([[ 0, 1, 8, 27, 16, 5], [ 0, 1, 8, 27, 16, 5]]) The `**` operator can be used as a shorthand for `np.power` on ndarrays. 
>>> x2 = np.array([1, 2, 3, 3, 2, 1]) >>> x1 = np.arange(6) >>> x1 ** x2 array([ 0, 1, 8, 27, 16, 5]) Negative values raised to a non-integral value will result in `nan` (and a warning will be generated). >>> x3 = np.array([-1.0, -4.0]) >>> with np.errstate(invalid='ignore'): ... p = np.power(x3, 1.5) ... >>> p array([nan, nan]) To get complex results, give the argument `dtype=complex`. >>> np.power(x3, 1.5, dtype=complex) array([-1.83697020e-16-1.j, -1.46957616e-15-8.j]) # numpy.power numpy.power(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'power'>_ First array elements raised to powers from second array, element-wise. Raise each base in `x1` to the positionally-corresponding power in `x2`. `x1` and `x2` must be broadcastable to the same shape. An integer type raised to a negative integer power will raise a `ValueError`. Negative values raised to a non-integral value will return `nan`. To get complex results, cast the input to complex, or specify the `dtype` to be `complex` (see the example below). Parameters: **x1** array_like The bases. **x2** array_like The exponents. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.
****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray The bases in `x1` raised to the exponents in `x2`. This is a scalar if both `x1` and `x2` are scalars. See also [`float_power`](numpy.float_power#numpy.float_power "numpy.float_power") power function that promotes integers to float #### Examples >>> import numpy as np Cube each element in an array. >>> x1 = np.arange(6) >>> x1 [0, 1, 2, 3, 4, 5] >>> np.power(x1, 3) array([ 0, 1, 8, 27, 64, 125]) Raise the bases to different exponents. >>> x2 = [1.0, 2.0, 3.0, 3.0, 2.0, 1.0] >>> np.power(x1, x2) array([ 0., 1., 8., 27., 16., 5.]) The effect of broadcasting. >>> x2 = np.array([[1, 2, 3, 3, 2, 1], [1, 2, 3, 3, 2, 1]]) >>> x2 array([[1, 2, 3, 3, 2, 1], [1, 2, 3, 3, 2, 1]]) >>> np.power(x1, x2) array([[ 0, 1, 8, 27, 16, 5], [ 0, 1, 8, 27, 16, 5]]) The `**` operator can be used as a shorthand for `np.power` on ndarrays. >>> x2 = np.array([1, 2, 3, 3, 2, 1]) >>> x1 = np.arange(6) >>> x1 ** x2 array([ 0, 1, 8, 27, 16, 5]) Negative values raised to a non-integral value will result in `nan` (and a warning will be generated). >>> x3 = np.array([-1.0, -4.0]) >>> with np.errstate(invalid='ignore'): ... p = np.power(x3, 1.5) ... >>> p array([nan, nan]) To get complex results, give the argument `dtype=complex`. >>> np.power(x3, 1.5, dtype=complex) array([-1.83697020e-16-1.j, -1.46957616e-15-8.j]) # numpy.printoptions numpy.printoptions(_* args_, _** kwargs_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/arrayprint.py#L372-L405) Context manager for setting print options. Set print options for the scope of the `with` block, and restore the old options at the end. See [`set_printoptions`](numpy.set_printoptions#numpy.set_printoptions "numpy.set_printoptions") for the full description of available options. 
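The save/restore behaviour described above can be checked directly: options set inside the `with` block do not leak out of it. A minimal sketch:

```python
import numpy as np

# Record the current precision, change it inside the context manager,
# and confirm it is restored when the block exits.
before = np.get_printoptions()['precision']
with np.printoptions(precision=2):
    assert np.get_printoptions()['precision'] == 2
assert np.get_printoptions()['precision'] == before
```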
See also [`set_printoptions`](numpy.set_printoptions#numpy.set_printoptions "numpy.set_printoptions"), [`get_printoptions`](numpy.get_printoptions#numpy.get_printoptions "numpy.get_printoptions") #### Examples >>> import numpy as np >>> from numpy.testing import assert_equal >>> with np.printoptions(precision=2): ... np.array([2.0]) / 3 array([0.67]) The `as`-clause of the `with`-statement gives the current print options: >>> with np.printoptions(precision=2) as opts: ... assert_equal(opts, np.get_printoptions()) # numpy.prod numpy.prod(_a_ , _axis=None_ , _dtype=None_ , _out=None_ , _keepdims=<no value>_, _initial=<no value>_, _where=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L3328-L3446) Return the product of array elements over a given axis. Parameters: **a** array_like Input data. **axis** None or int or tuple of ints, optional Axis or axes along which a product is performed. The default, axis=None, will calculate the product of all the elements in the input array. If axis is negative it counts from the last to the first axis. If axis is a tuple of ints, a product is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before. **dtype** dtype, optional The type of the returned array, as well as of the accumulator in which the elements are multiplied. The dtype of `a` is used by default unless `a` has an integer dtype of less precision than the default platform integer. In that case, if `a` is signed then the platform integer is used while if `a` is unsigned then an unsigned integer of the same precision as the platform integer is used. **out** ndarray, optional Alternative output array in which to place the result. It must have the same shape as the expected output, but the type of the output values will be cast if necessary. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one.
With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the `prod` method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non-default value will be. If the sub-class’ method does not implement `keepdims` any exceptions will be raised. **initial** scalar, optional The starting value for this product. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. **where** array_like of bool, optional Elements to include in the product. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. Returns: **product_along_axis** ndarray, see [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") parameter above. An array shaped as `a` but with the specified axis removed. Returns a reference to `out` if specified. See also [`ndarray.prod`](numpy.ndarray.prod#numpy.ndarray.prod "numpy.ndarray.prod") equivalent method [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) #### Notes Arithmetic is modular when using integer types, and no error is raised on overflow. 
That means that, on a 32-bit platform: >>> x = np.array([536870910, 536870910, 536870910, 536870910]) >>> np.prod(x) 16 # may vary The product of an empty array is the neutral element 1: >>> np.prod([]) 1.0 #### Examples By default, calculate the product of all elements: >>> import numpy as np >>> np.prod([1.,2.]) 2.0 Even when the input array is two-dimensional: >>> a = np.array([[1., 2.], [3., 4.]]) >>> np.prod(a) 24.0 But we can also specify the axis over which to multiply: >>> np.prod(a, axis=1) array([ 2., 12.]) >>> np.prod(a, axis=0) array([3., 8.]) Or select specific elements to include: >>> np.prod([1., np.nan, 3.], where=[True, False, True]) 3.0 If the type of `x` is unsigned, then the output type is the unsigned platform integer: >>> x = np.array([1, 2, 3], dtype=np.uint8) >>> np.prod(x).dtype == np.uint True If `x` is of a signed integer type, then the output type is the default platform integer: >>> x = np.array([1, 2, 3], dtype=np.int8) >>> np.prod(x).dtype == int True You can also start the product with a value other than one: >>> np.prod([1, 2], initial=5) 10 # numpy.promote_types numpy.promote_types(_type1_ , _type2_) Returns the data type with the smallest size and smallest scalar kind to which both `type1` and `type2` may be safely cast. The returned data type is always considered “canonical”, this mainly means that the promoted dtype will always be in native byte order. This function is symmetric, but rarely associative. Parameters: **type1** dtype or dtype specifier First data type. **type2** dtype or dtype specifier Second data type. Returns: **out** dtype The promoted data type. See also [`result_type`](numpy.result_type#numpy.result_type "numpy.result_type"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), [`can_cast`](numpy.can_cast#numpy.can_cast "numpy.can_cast") #### Notes Please see [`numpy.result_type`](numpy.result_type#numpy.result_type "numpy.result_type") for additional information about promotion. 
Starting in NumPy 1.9, the promote_types function returns a valid string length when given an integer or float dtype as one argument and a string dtype as another argument. Previously it always returned the input string dtype, even if it wasn’t long enough to store the max integer/float value converted to a string. Changed in version 1.23.0: NumPy now supports promotion for more structured dtypes. It will now remove unnecessary padding from a structure dtype and promote included fields individually. #### Examples >>> import numpy as np >>> np.promote_types('f4', 'f8') dtype('float64') >>> np.promote_types('i8', 'f4') dtype('float64') >>> np.promote_types('>i8', '<c8') dtype('complex128') >>> np.promote_types('i4', 'S8') dtype('S11') An example of a non-associative case: >>> p = np.promote_types >>> p('S', p('i1', 'u1')) dtype('S6') >>> p(p('S', 'i1'), 'u1') dtype('S4') # numpy.ptp numpy.ptp(_a_ , _axis=None_ , _out=None_ , _keepdims=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L2962-L3044) Range of values (maximum - minimum) along an axis. The name of the function comes from the acronym for ‘peak to peak’. Warning `ptp` preserves the data type of the array. This means the return value for an input of signed integers with n bits (e.g. [`numpy.int8`](../arrays.scalars#numpy.int8 "numpy.int8"), [`numpy.int16`](../arrays.scalars#numpy.int16 "numpy.int16"), etc) is also a signed integer with n bits. In that case, peak-to-peak values greater than `2**(n-1)-1` will be returned as negative values. An example with a work-around is shown below. Parameters: **a** array_like Input values. **axis** None or int or tuple of ints, optional Axis along which to find the peaks. By default, flatten the array. `axis` may be negative, in which case it counts from the last to the first axis. If this is a tuple of ints, a reduction is performed on multiple axes, instead of a single axis or all the axes as before.
**out** array_like Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type of the output values will be cast if necessary. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the `ptp` method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non-default value will be. If the sub-class’ method does not implement `keepdims` any exceptions will be raised. Returns: **ptp** ndarray or scalar The range of a given array - `scalar` if array is one-dimensional or a new array holding the result along the given axis #### Examples >>> import numpy as np >>> x = np.array([[4, 9, 2, 10], ... [6, 9, 7, 12]]) >>> np.ptp(x, axis=1) array([8, 6]) >>> np.ptp(x, axis=0) array([2, 0, 5, 2]) >>> np.ptp(x) 10 This example shows that a negative value can be returned when the input is an array of signed integers. >>> y = np.array([[1, 127], ... [0, 127], ... [-1, 127], ... [-2, 127]], dtype=np.int8) >>> np.ptp(y, axis=1) array([ 126, 127, -128, -127], dtype=int8) A work-around is to use the `view()` method to view the result as unsigned integers with the same bit width: >>> np.ptp(y, axis=1).view(np.uint8) array([126, 127, 128, 129], dtype=uint8) # numpy.put numpy.put(_a_ , _ind_ , _v_ , _mode ='raise'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L513-L571) Replaces specified elements of an array with given values. The indexing works on the flattened target array. `put` is roughly equivalent to: a.flat[ind] = v Parameters: **a** ndarray Target array. **ind** array_like Target indices, interpreted as integers. **v** array_like Values to place in `a` at target indices. 
If `v` is shorter than `ind` it will be repeated as necessary. **mode**{‘raise’, ‘wrap’, ‘clip’}, optional Specifies how out-of-bounds indices will behave. * ‘raise’ – raise an error (default) * ‘wrap’ – wrap around * ‘clip’ – clip to the range ‘clip’ mode means that all indices that are too large are replaced by the index that addresses the last element along that axis. Note that this disables indexing with negative numbers. In ‘raise’ mode, if an exception occurs the target array may still be modified. See also [`putmask`](numpy.putmask#numpy.putmask "numpy.putmask"), [`place`](numpy.place#numpy.place "numpy.place") [`put_along_axis`](numpy.put_along_axis#numpy.put_along_axis "numpy.put_along_axis") Put elements by matching the array and the index arrays #### Examples >>> import numpy as np >>> a = np.arange(5) >>> np.put(a, [0, 2], [-44, -55]) >>> a array([-44, 1, -55, 3, 4]) >>> a = np.arange(5) >>> np.put(a, 22, -5, mode='clip') >>> a array([ 0, 1, 2, 3, -5]) # numpy.put_along_axis numpy.put_along_axis(_arr_ , _indices_ , _values_ , _axis_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_shape_base_impl.py#L182-L267) Put values into the destination array by matching 1d index and data slices. This iterates over matching 1d slices oriented along the specified axis in the index and data arrays, and uses the former to place values into the latter. These slices can be different lengths. Functions returning an index along an axis, like [`argsort`](numpy.argsort#numpy.argsort "numpy.argsort") and [`argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition"), produce suitable indices for this function. Parameters: **arr** ndarray (Ni…, M, Nk…) Destination array. **indices** ndarray (Ni…, J, Nk…) Indices to change along each 1d slice of `arr`. This must match the dimension of arr, but dimensions in Ni and Nj may be 1 to broadcast against `arr`. **values** array_like (Ni…, J, Nk…) values to insert at those indices. 
Its shape and dimension are broadcast to match that of [`indices`](numpy.indices#numpy.indices "numpy.indices"). **axis** int The axis to take 1d slices along. If axis is None, the destination array is treated as if a flattened 1d view had been created of it. See also [`take_along_axis`](numpy.take_along_axis#numpy.take_along_axis "numpy.take_along_axis") Take values from the input array by matching 1d index and data slices #### Notes This is equivalent to (but faster than) the following use of [`ndindex`](numpy.ndindex#numpy.ndindex "numpy.ndindex") and [`s_`](numpy.s_#numpy.s_ "numpy.s_"), which sets each of `ii` and `kk` to a tuple of indices: Ni, M, Nk = a.shape[:axis], a.shape[axis], a.shape[axis+1:] J = indices.shape[axis] # Need not equal M for ii in ndindex(Ni): for kk in ndindex(Nk): a_1d = a [ii + s_[:,] + kk] indices_1d = indices[ii + s_[:,] + kk] values_1d = values [ii + s_[:,] + kk] for j in range(J): a_1d[indices_1d[j]] = values_1d[j] Equivalently, eliminating the inner loop, the last two lines would be: a_1d[indices_1d] = values_1d #### Examples >>> import numpy as np For this sample array >>> a = np.array([[10, 30, 20], [60, 40, 50]]) We can replace the maximum values with: >>> ai = np.argmax(a, axis=1, keepdims=True) >>> ai array([[1], [0]]) >>> np.put_along_axis(a, ai, 99, axis=1) >>> a array([[10, 99, 20], [99, 40, 50]]) # numpy.putmask numpy.putmask(_a_ , _mask_ , _values_) Changes elements of an array based on conditional and input values. Sets `a.flat[n] = values[n]` for each n where `mask.flat[n]==True`. If `values` is not the same size as `a` and `mask` then it will repeat. This gives behavior different from `a[mask] = values`. Parameters: **a** ndarray Target array. **mask** array_like Boolean mask array. It has to be the same shape as `a`. **values** array_like Values to put into `a` where `mask` is True. If `values` is smaller than `a` it will be repeated. 
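To make the contrast with `a[mask] = values` described above concrete, a short sketch: `putmask` cycles a short `values` over the masked positions, whereas boolean fancy assignment requires the right-hand side to match the number of selected elements (or be a scalar).

```python
import numpy as np

# putmask repeats [-33, -44] across the whole array by flat position,
# then writes only where the mask is True.
x = np.arange(5)
np.putmask(x, x > 1, [-33, -44])
assert x.tolist() == [0, 1, -33, -44, -33]

# Boolean assignment selects 3 slots, so the right-hand side must
# supply exactly 3 values (no cyclic repetition).
y = np.arange(5)
y[y > 1] = [-33, -44, -33]
assert y.tolist() == x.tolist()
```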
See also [`place`](numpy.place#numpy.place "numpy.place"), [`put`](numpy.put#numpy.put "numpy.put"), [`take`](numpy.take#numpy.take "numpy.take"), [`copyto`](numpy.copyto#numpy.copyto "numpy.copyto") #### Examples >>> import numpy as np >>> x = np.arange(6).reshape(2, 3) >>> np.putmask(x, x>2, x**2) >>> x array([[ 0, 1, 2], [ 9, 16, 25]]) If `values` is smaller than `a` it is repeated: >>> x = np.arange(5) >>> np.putmask(x, x>1, [-33, -44]) >>> x array([ 0, 1, -33, -44, -33]) # numpy.quantile numpy.quantile(_a_ , _q_ , _axis =None_, _out =None_, _overwrite_input =False_, _method ='linear'_, _keepdims =False_, _*_ , _weights =None_, _interpolation =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L4283-L4538) Compute the q-th quantile of the data along the specified axis. Parameters: **a** array_like of real numbers Input array or object that can be converted to an array. **q** array_like of float Probability or sequence of probabilities of the quantiles to compute. Values must be between 0 and 1 inclusive. **axis**{int, tuple of int, None}, optional Axis or axes along which the quantiles are computed. The default is to compute the quantile(s) along a flattened version of the array. **out** ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary. **overwrite_input** bool, optional If True, then allow the input array `a` to be modified by intermediate calculations, to save memory. In this case, the contents of the input `a` after this function completes is undefined. **method** str, optional This parameter specifies the method to use for estimating the quantile. There are many different methods, some unique to NumPy. The recommended options, numbered as they appear in [1], are: 1. ‘inverted_cdf’ 2. ‘averaged_inverted_cdf’ 3. ‘closest_observation’ 4. ‘interpolated_inverted_cdf’ 5. 
‘hazen’ 6. ‘weibull’ 7. ‘linear’ (default) 8. ‘median_unbiased’ 9. ‘normal_unbiased’ The first three methods are discontinuous. For backward compatibility with previous versions of NumPy, the following discontinuous variations of the default ‘linear’ (7.) option are available: * ‘lower’ * ‘higher’, * ‘midpoint’ * ‘nearest’ See Notes for details. Changed in version 1.22.0: This argument was previously called “interpolation” and only offered the “linear” default and last four options. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original array `a`. **weights** array_like, optional An array of weights associated with the values in `a`. Each value in `a` contributes to the quantile according to its associated weight. The weights array can either be 1-D (in which case its length must be the size of `a` along the given axis) or of the same shape as `a`. If `weights=None`, then all data in `a` are assumed to have a weight equal to one. Only `method=”inverted_cdf”` supports weights. See the notes for more details. New in version 2.0.0. **interpolation** str, optional Deprecated name for the method keyword argument. Deprecated since version 1.22.0. Returns: **quantile** scalar or ndarray If `q` is a single probability and `axis=None`, then the result is a scalar. If multiple probability levels are given, first axis of the result corresponds to the quantiles. The other axes are the axes that remain after the reduction of `a`. If the input contains integers or floats smaller than `float64`, the output data-type is `float64`. Otherwise, the output data-type is the same as that of the input. If `out` is specified, that array is returned instead. See also [`mean`](numpy.mean#numpy.mean "numpy.mean") [`percentile`](numpy.percentile#numpy.percentile "numpy.percentile") equivalent to quantile, but with q in the range [0, 100]. 
[`median`](numpy.median#numpy.median "numpy.median") equivalent to `quantile(..., 0.5)` [`nanquantile`](numpy.nanquantile#numpy.nanquantile "numpy.nanquantile") #### Notes Given a sample `a` from an underlying distribution, `quantile` provides a nonparametric estimate of the inverse cumulative distribution function. By default, this is done by interpolating between adjacent elements in `y`, a sorted copy of `a`: (1-g)*y[j] + g*y[j+1] where the index `j` and coefficient `g` are the integral and fractional components of `q * (n-1)`, and `n` is the number of elements in the sample. This is a special case of Equation 1 of H&F [1]. More generally, * `j = (q*n + m - 1) // 1`, and * `g = (q*n + m - 1) % 1`, where `m` may be defined according to several different conventions. The preferred convention may be selected using the `method` parameter: `method` | number in H&F | `m` ---|---|--- `interpolated_inverted_cdf` | 4 | `0` `hazen` | 5 | `1/2` `weibull` | 6 | `q` `linear` (default) | 7 | `1 - q` `median_unbiased` | 8 | `q/3 + 1/3` `normal_unbiased` | 9 | `q/4 + 3/8` Note that indices `j` and `j + 1` are clipped to the range `0` to `n - 1` when the results of the formula would be outside the allowed range of non-negative indices. The `- 1` in the formulas for `j` and `g` accounts for Python’s 0-based indexing. The table above includes only the estimators from H&F that are continuous functions of probability `q` (estimators 4-9). NumPy also provides the three discontinuous estimators from H&F (estimators 1-3), where `j` is defined as above, `m` is defined as follows, and `g` is a function of the real-valued `index = q*n + m - 1` and `j`. 1. `inverted_cdf`: `m = 0` and `g = int(index - j > 0)` 2. `averaged_inverted_cdf`: `m = 0` and `g = (1 + int(index - j > 0)) / 2` 3. `closest_observation`: `m = -1/2` and `g = 1 - int((index == j) & (j%2 == 1))` For backward compatibility with previous versions of NumPy, `quantile` provides four additional discontinuous estimators. 
Like `method='linear'`, all have `m = 1 - q` so that `j = q*(n-1) // 1`, but `g` is defined as follows. * `lower`: `g = 0` * `midpoint`: `g = 0.5` * `higher`: `g = 1` * `nearest`: `g = (q*(n-1) % 1) > 0.5` **Weighted quantiles:** More formally, the quantile at probability level \\(q\\) of a cumulative distribution function \\(F(y)=P(Y \leq y)\\) with probability measure \\(P\\) is defined as any number \\(x\\) that fulfills the _coverage conditions_ \\[P(Y < x) \leq q \quad\text{and}\quad P(Y \leq x) \geq q\\] with random variable \\(Y\sim P\\). Sample quantiles, the result of `quantile`, provide nonparametric estimation of the underlying population counterparts, represented by the unknown \\(F\\), given a data vector `a` of length `n`. Some of the estimators above arise when one considers \\(F\\) as the empirical distribution function of the data, i.e. \\(F(y) = \frac{1}{n} \sum_i 1_{a_i \leq y}\\). Then, different methods correspond to different choices of \\(x\\) that fulfill the above coverage conditions. Methods that follow this approach are `inverted_cdf` and `averaged_inverted_cdf`. For weighted quantiles, the coverage conditions still hold. The empirical cumulative distribution is simply replaced by its weighted version, i.e. \\(P(Y \leq t) = \frac{1}{\sum_i w_i} \sum_i w_i 1_{x_i \leq t}\\). Only `method="inverted_cdf"` supports weights. #### References [1] (1,2) R. J. Hyndman and Y. Fan, “Sample quantiles in statistical packages,” The American Statistician, 50(4), pp. 
361-365, 1996 #### Examples >>> import numpy as np >>> a = np.array([[10, 7, 4], [3, 2, 1]]) >>> a array([[10, 7, 4], [ 3, 2, 1]]) >>> np.quantile(a, 0.5) 3.5 >>> np.quantile(a, 0.5, axis=0) array([6.5, 4.5, 2.5]) >>> np.quantile(a, 0.5, axis=1) array([7., 2.]) >>> np.quantile(a, 0.5, axis=1, keepdims=True) array([[7.], [2.]]) >>> m = np.quantile(a, 0.5, axis=0) >>> out = np.zeros_like(m) >>> np.quantile(a, 0.5, axis=0, out=out) array([6.5, 4.5, 2.5]) >>> m array([6.5, 4.5, 2.5]) >>> b = a.copy() >>> np.quantile(b, 0.5, axis=1, overwrite_input=True) array([7., 2.]) >>> assert not np.all(a == b) See also [`numpy.percentile`](numpy.percentile#numpy.percentile "numpy.percentile") for a visualization of most methods. # numpy.r_ numpy.r__= _ Translates slice objects to concatenation along the first axis. This is a simple way to build up arrays quickly. There are two use cases. 1. If the index expression contains comma separated arrays, then stack them along their first axis. 2. If the index expression contains slice notation or scalars then create a 1-D array with a range indicated by the slice notation. If slice notation is used, the syntax `start:stop:step` is equivalent to `np.arange(start, stop, step)` inside of the brackets. However, if `step` is an imaginary number (i.e. 100j) then its integer portion is interpreted as a number-of-points desired and the start and stop are inclusive. In other words `start:stop:stepj` is interpreted as `np.linspace(start, stop, step, endpoint=1)` inside of the brackets. After expansion of slice notation, all comma separated sequences are concatenated together. Optional character strings placed as the first element of the index expression can be used to change the output. The strings ‘r’ or ‘c’ result in matrix output. If the result is 1-D and ‘r’ is specified a 1 x N (row) matrix is produced. If the result is 1-D and ‘c’ is specified, then a N x 1 (column) matrix is produced. 
If the result is 2-D then both provide the same matrix result. A string integer specifies which axis to stack multiple comma separated arrays along. A string of two comma-separated integers allows indication of the minimum number of dimensions to force each entry into as the second integer (the axis to concatenate along is still the first integer). A string with three comma-separated integers allows specification of the axis to concatenate along, the minimum number of dimensions to force the entries to, and which axis should contain the start of the arrays which are less than the specified number of dimensions. In other words the third integer allows you to specify where the 1’s should be placed in the shape of the arrays that have their shapes upgraded. By default, they are placed in the front of the shape tuple. The third argument allows you to specify where the start of the array should be instead. Thus, a third argument of ‘0’ would place the 1’s at the end of the array shape. Negative integers specify where in the new shape tuple the last dimension of upgraded arrays should be placed, so the default is ‘-1’. Parameters: **Not a function, so takes no parameters** Returns: A concatenated ndarray or matrix. See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`c_`](numpy.c_#numpy.c_ "numpy.c_") Translates slice objects to concatenation along the second axis. #### Examples >>> import numpy as np >>> np.r_[np.array([1,2,3]), 0, 0, np.array([4,5,6])] array([1, 2, 3, ..., 4, 5, 6]) >>> np.r_[-1:1:6j, [0]*3, 5, 6] array([-1. , -0.6, -0.2, 0.2, 0.6, 1. , 0. , 0. , 0. , 5. , 6. ]) String integers specify the axis to concatenate along or the minimum number of dimensions to force entries into. 
>>> a = np.array([[0, 1, 2], [3, 4, 5]]) >>> np.r_['-1', a, a] # concatenate along last axis array([[0, 1, 2, 0, 1, 2], [3, 4, 5, 3, 4, 5]]) >>> np.r_['0,2', [1,2,3], [4,5,6]] # concatenate along first axis, dim>=2 array([[1, 2, 3], [4, 5, 6]]) >>> np.r_['0,2,0', [1,2,3], [4,5,6]] array([[1], [2], [3], [4], [5], [6]]) >>> np.r_['1,2,0', [1,2,3], [4,5,6]] array([[1, 4], [2, 5], [3, 6]]) Using ‘r’ or ‘c’ as a first string argument creates a matrix. >>> np.r_['r',[1,2,3], [4,5,6]] matrix([[1, 2, 3, 4, 5, 6]]) # numpy.rad2deg numpy.rad2deg(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Convert angles from radians to degrees. Parameters: **x** array_like Angle in radians. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray The corresponding angle in degrees. This is a scalar if `x` is a scalar. See also [`deg2rad`](numpy.deg2rad#numpy.deg2rad "numpy.deg2rad") Convert angles from degrees to radians. [`unwrap`](numpy.unwrap#numpy.unwrap "numpy.unwrap") Remove large jumps in angle by wrapping. #### Notes rad2deg(x) is `180 * x / pi`. 
#### Examples

>>> import numpy as np
>>> np.rad2deg(np.pi/2)
90.0

# numpy.radians

numpy.radians(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) _= <ufunc 'radians'>_

Convert angles from degrees to radians.

Parameters:

**x** array_like Input array in degrees.

**out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns:

**y** ndarray The corresponding radian values. This is a scalar if `x` is a scalar.

See also

[`deg2rad`](numpy.deg2rad#numpy.deg2rad "numpy.deg2rad") equivalent function

#### Examples

>>> import numpy as np

Convert a degree array to radians

>>> deg = np.arange(12.) * 30.
>>> np.radians(deg)
array([ 0.        ,  0.52359878,  1.04719755,  1.57079633,  2.0943951 ,
        2.61799388,  3.14159265,  3.66519143,  4.1887902 ,  4.71238898,
        5.23598776,  5.75958653])

>>> out = np.zeros((deg.shape))
>>> ret = np.radians(deg, out)
>>> ret is out
True

# numpy.ravel

numpy.ravel(_a_ , _order ='C'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L1904-L2011)

Return a contiguous flattened array.

A 1-D array, containing the elements of the input, is returned. A copy is made only if needed.
As of NumPy 1.10, the returned array will have the same type as the input array. (for example, a masked array will be returned for a masked array input) Parameters: **a** array_like Input array. The elements in `a` are read in the order specified by `order`, and packed as a 1-D array. **order**{‘C’,’F’, ‘A’, ‘K’}, optional The elements of `a` are read using this index order. ‘C’ means to index the elements in row-major, C-style order, with the last axis index changing fastest, back to the first axis index changing slowest. ‘F’ means to index the elements in column-major, Fortran-style order, with the first index changing fastest, and the last index changing slowest. Note that the ‘C’ and ‘F’ options take no account of the memory layout of the underlying array, and only refer to the order of axis indexing. ‘A’ means to read the elements in Fortran-like index order if `a` is Fortran _contiguous_ in memory, C-like order otherwise. ‘K’ means to read the elements in the order they occur in memory, except for reversing the data when strides are negative. By default, ‘C’ index order is used. Returns: **y** array_like y is a contiguous 1-D array of the same subtype as `a`, with shape `(a.size,)`. Note that matrices are special cased for backward compatibility, if `a` is a matrix, then y is a 1-D ndarray. See also [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") 1-D iterator over an array. [`ndarray.flatten`](numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten") 1-D array copy of the elements of an array in row-major order. [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") Change the shape of an array without changing its data. #### Notes In row-major, C-style order, in two dimensions, the row index varies the slowest, and the column index the quickest. 
This can be generalized to multiple dimensions, where row-major order implies that the index along the first axis varies slowest, and the index along the last quickest. The opposite holds for column-major, Fortran-style index ordering. When a view is desired in as many cases as possible, `arr.reshape(-1)` may be preferable. However, `ravel` supports `K` in the optional `order` argument while `reshape` does not. #### Examples It is equivalent to `reshape(-1, order=order)`. >>> import numpy as np >>> x = np.array([[1, 2, 3], [4, 5, 6]]) >>> np.ravel(x) array([1, 2, 3, 4, 5, 6]) >>> x.reshape(-1) array([1, 2, 3, 4, 5, 6]) >>> np.ravel(x, order='F') array([1, 4, 2, 5, 3, 6]) When `order` is ‘A’, it will preserve the array’s ‘C’ or ‘F’ ordering: >>> np.ravel(x.T) array([1, 4, 2, 5, 3, 6]) >>> np.ravel(x.T, order='A') array([1, 2, 3, 4, 5, 6]) When `order` is ‘K’, it will preserve orderings that are neither ‘C’ nor ‘F’, but won’t reverse axes: >>> a = np.arange(3)[::-1]; a array([2, 1, 0]) >>> a.ravel(order='C') array([2, 1, 0]) >>> a.ravel(order='K') array([2, 1, 0]) >>> a = np.arange(12).reshape(2,3,2).swapaxes(1,2); a array([[[ 0, 2, 4], [ 1, 3, 5]], [[ 6, 8, 10], [ 7, 9, 11]]]) >>> a.ravel(order='C') array([ 0, 2, 4, 1, 3, 5, 6, 8, 10, 7, 9, 11]) >>> a.ravel(order='K') array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) # numpy.ravel_multi_index numpy.ravel_multi_index(_multi_index_ , _dims_ , _mode ='raise'_, _order ='C'_) Converts a tuple of index arrays into an array of flat indices, applying boundary modes to the multi-index. Parameters: **multi_index** tuple of array_like A tuple of integer arrays, one array for each dimension. **dims** tuple of ints The shape of array into which the indices from `multi_index` apply. **mode**{‘raise’, ‘wrap’, ‘clip’}, optional Specifies how out-of-bounds indices are handled. Can specify either one mode or a tuple of modes, one mode per index. 
* ‘raise’ – raise an error (default) * ‘wrap’ – wrap around * ‘clip’ – clip to the range In ‘clip’ mode, a negative index which would normally wrap will clip to 0 instead. **order**{‘C’, ‘F’}, optional Determines whether the multi-index should be viewed as indexing in row-major (C-style) or column-major (Fortran-style) order. Returns: **raveled_indices** ndarray An array of indices into the flattened version of an array of dimensions `dims`. See also [`unravel_index`](numpy.unravel_index#numpy.unravel_index "numpy.unravel_index") #### Examples >>> import numpy as np >>> arr = np.array([[3,6,6],[4,5,1]]) >>> np.ravel_multi_index(arr, (7,6)) array([22, 41, 37]) >>> np.ravel_multi_index(arr, (7,6), order='F') array([31, 41, 13]) >>> np.ravel_multi_index(arr, (4,6), mode='clip') array([22, 23, 19]) >>> np.ravel_multi_index(arr, (4,4), mode=('clip','wrap')) array([12, 13, 13]) >>> np.ravel_multi_index((3,1,4,1), (6,7,8,9)) 1621 # numpy.real numpy.real(_val_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_type_check_impl.py#L84-L124) Return the real part of the complex argument. Parameters: **val** array_like Input array. Returns: **out** ndarray or scalar The real component of the complex argument. If `val` is real, the type of `val` is used for the output. If `val` has complex elements, the returned type is float. 
See also [`real_if_close`](numpy.real_if_close#numpy.real_if_close "numpy.real_if_close"), [`imag`](numpy.imag#numpy.imag "numpy.imag"), [`angle`](numpy.angle#numpy.angle "numpy.angle") #### Examples >>> import numpy as np >>> a = np.array([1+2j, 3+4j, 5+6j]) >>> a.real array([1., 3., 5.]) >>> a.real = 9 >>> a array([9.+2.j, 9.+4.j, 9.+6.j]) >>> a.real = np.array([9, 8, 7]) >>> a array([9.+2.j, 8.+4.j, 7.+6.j]) >>> np.real(1 + 1j) 1.0 # numpy.real_if_close numpy.real_if_close(_a_ , _tol =100_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_type_check_impl.py#L489-L545) If input is complex with all imaginary parts close to zero, return real parts. “Close to zero” is defined as `tol` * (machine epsilon of the type for `a`). Parameters: **a** array_like Input array. **tol** float Tolerance in machine epsilons for the complex part of the elements in the array. If the tolerance is <=1, then the absolute tolerance is used. Returns: **out** ndarray If `a` is real, the type of `a` is used for the output. If `a` has complex elements, the returned type is float. See also [`real`](numpy.real#numpy.real "numpy.real"), [`imag`](numpy.imag#numpy.imag "numpy.imag"), [`angle`](numpy.angle#numpy.angle "numpy.angle") #### Notes Machine epsilon varies from machine to machine and between data types but Python floats on most platforms have a machine epsilon equal to 2.2204460492503131e-16. You can use ‘np.finfo(float).eps’ to print out the machine epsilon for floats. 
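As a quick check of how `tol` interacts with machine epsilon, a small sketch (the threshold shown is the one described above, `tol * np.finfo(float).eps`, with the default `tol=100`):

```python
import numpy as np

eps = np.finfo(float).eps          # ~2.22e-16 for float64
threshold = 100 * eps              # default tol=100, in machine epsilons

# imaginary part below the threshold: the real parts are returned
a = np.real_if_close([2.1 + 1e-15j])
assert a.dtype.kind == 'f'

# imaginary part above the threshold: the input is returned unchanged
b = np.real_if_close([2.1 + 1e-12j])
assert b.dtype.kind == 'c'
```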
#### Examples >>> import numpy as np >>> np.finfo(float).eps 2.2204460492503131e-16 # may vary >>> np.real_if_close([2.1 + 4e-14j, 5.2 + 3e-15j], tol=1000) array([2.1, 5.2]) >>> np.real_if_close([2.1 + 4e-13j, 5.2 + 3e-15j], tol=1000) array([2.1+4.e-13j, 5.2 + 3e-15j]) # numpy.rec.array rec.array(_obj_ , _dtype =None_, _shape =None_, _offset =0_, _strides =None_, _formats =None_, _names =None_, _titles =None_, _aligned =False_, _byteorder =None_, _copy =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/records.py#L944-L1091) Construct a record array from a wide-variety of objects. A general-purpose record array constructor that dispatches to the appropriate [`recarray`](numpy.recarray#numpy.recarray "numpy.recarray") creation function based on the inputs (see Notes). Parameters: **obj** any Input object. See Notes for details on how various input types are treated. **dtype** data-type, optional Valid dtype for array. **shape** int or tuple of ints, optional Shape of each array. **offset** int, optional Position in the file or buffer to start reading from. **strides** tuple of ints, optional Buffer (`buf`) is interpreted according to these strides (strides define how many bytes each array element, row, column, etc. occupy in memory). **formats, names, titles, aligned, byteorder** If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is `None`, these arguments are passed to `numpy.format_parser` to construct a dtype. See that function for detailed documentation. **copy** bool, optional Whether to copy the input object (True), or to use a reference instead. This option only applies when the input is an ndarray or recarray. Defaults to True. Returns: np.recarray Record array created from the specified object. #### Notes If `obj` is `None`, then call the [`recarray`](numpy.recarray#numpy.recarray "numpy.recarray") constructor. If `obj` is a string, then call the [`fromstring`](numpy.fromstring#numpy.fromstring "numpy.fromstring") constructor. 
If `obj` is a list or a tuple, then if the first object is an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), call [`fromarrays`](numpy.rec.fromarrays#numpy.rec.fromarrays "numpy.rec.fromarrays"), otherwise call [`fromrecords`](numpy.rec.fromrecords#numpy.rec.fromrecords "numpy.rec.fromrecords"). If `obj` is a [`recarray`](numpy.recarray#numpy.recarray "numpy.recarray"), then make a copy of the data in the recarray (if `copy=True`) and use the new formats, names, and titles. If `obj` is a file, then call [`fromfile`](numpy.fromfile#numpy.fromfile "numpy.fromfile"). Finally, if obj is an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), then return `obj.view(recarray)`, making a copy of the data if `copy=True`.

#### Examples

>>> a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> a
array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]])

>>> np.rec.array(a)
rec.array([[1, 2, 3],
           [4, 5, 6],
           [7, 8, 9]],
          dtype=int64)

>>> b = [(1, 1), (2, 4), (3, 9)]
>>> c = np.rec.array(b, formats = ['i2', 'f2'], names = ('x', 'y'))
>>> c
rec.array([(1, 1.), (2, 4.), (3, 9.)],
          dtype=[('x', '<i2'), ('y', '<f2')])

>>> c.x
array([1, 2, 3], dtype=int16)

>>> c.y
array([1., 4., 9.], dtype=float16)

>>> r = np.rec.array(['abc','def'], names=['col1','col2'])
>>> print(r.col1)
abc

>>> r.col1
array('abc', dtype='<U3')

>>> r.col2
array('def', dtype='<U3')

# numpy.rec.format_parser

_class_ numpy.rec.format_parser(_formats_ , _names_ , _titles_ , _aligned =False_, _byteorder =None_)

Class to convert formats, names, titles description to a dtype.

#### Examples

>>> import numpy as np
>>> np.rec.format_parser(['<f8', '<i4', '<a5'], ['col1', 'col2', 'col3'],
...                      ['T1', 'T2', 'T3']).dtype
dtype([(('T1', 'col1'), '<f8'), (('T2', 'col2'), '<i4'), (('T3', 'col3'), 'S5')])

>>> np.rec.format_parser(['f8', 'i4', 'a5'], ['col1', 'col2', 'col3'],
...                      []).dtype
dtype([('col1', '<f8'), ('col2', '<i4'), ('col3', 'S5')])
>>> np.rec.format_parser(['<f8', '<i4', '<a5'], [], []).dtype
dtype([('f0', '<f8'), ('f1', '<i4'), ('f2', 'S5')])

# numpy.rec.fromarrays

rec.fromarrays(_arrayList_ , _dtype =None_, _shape =None_, _formats =None_, _names =None_, _titles =None_, _aligned =False_, _byteorder =None_)

Create a record array from a (flat) list of arrays.

#### Examples

>>> x1=np.array([1,2,3,4])
>>> x2=np.array(['a','dd','xyz','12'])
>>> x3=np.array([1.1,2,3,4])
>>> r = np.rec.fromarrays([x1,x2,x3],names='a,b,c')
>>> print(r[1])
(2, 'dd', 2.0) # may vary
>>> x1[1]=34
>>> r.a
array([1, 2, 3, 4])

>>> x1 = np.array([1, 2, 3, 4])
>>> x2 = np.array(['a', 'dd', 'xyz', '12'])
>>> x3 = np.array([1.1, 2, 3,4])
>>> r = np.rec.fromarrays(
...     [x1, x2, x3],
...     dtype=np.dtype([('a', np.int32), ('b', 'S3'), ('c', np.float32)]))
>>> r
rec.array([(1, b'a', 1.1), (2, b'dd', 2. ), (3, b'xyz', 3. ),
           (4, b'12', 4. )],
          dtype=[('a', '<i4'), ('b', 'S3'), ('c', '<f4')])

# numpy.rec.fromfile

rec.fromfile(_fd_ , _dtype =None_, _shape =None_, _offset =0_, _formats =None_, _names =None_, _titles =None_, _aligned =False_, _byteorder =None_)

Create an array from binary file data.

#### Examples

>>> from tempfile import TemporaryFile
>>> a = np.empty(10,dtype='f8,i4,a5')
>>> a[5] = (0.5,10,'abcde')
>>>
>>> fd=TemporaryFile()
>>> a = a.view(a.dtype.newbyteorder('<'))
>>> a.tofile(fd)
>>>
>>> _ = fd.seek(0)
>>> r=np.rec.fromfile(fd, formats='f8,i4,a5', shape=10,
... byteorder='<')
>>> print(r[5])
(0.5, 10, b'abcde')
>>> r.shape
(10,)

# numpy.rec.fromrecords

rec.fromrecords(_recList_ , _dtype =None_, _shape =None_, _formats =None_, _names =None_, _titles =None_, _aligned =False_, _byteorder =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/records.py#L666-L752)

Create a recarray from a list of records in text form.

Parameters:

**recList** sequence data in the same field may be heterogeneous - they will be promoted to the highest data type.

**dtype** data-type, optional valid dtype for all arrays

**shape** int or tuple of ints, optional shape of each array.

**formats, names, titles, aligned, byteorder** If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is `None`, these arguments are passed to `numpy.format_parser` to construct a dtype. See that function for detailed documentation. If both `formats` and [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") are None, then this will auto-detect formats. Use list of tuples rather than list of lists for faster processing.

Returns:

np.recarray record array consisting of given recList rows.

#### Examples

>>> r=np.rec.fromrecords([(456,'dbe',1.2),(2,'de',1.3)],
... names='col1,col2,col3')
>>> print(r[0])
(456, 'dbe', 1.2)
>>> r.col1
array([456,   2])
>>> r.col2
array(['dbe', 'de'], dtype='<U3')
>>> import pickle
>>> pickle.loads(pickle.dumps(r))
rec.array([(456, 'dbe', 1.2), (  2, 'de', 1.3)],
          dtype=[('col1', '<i8'), ('col2', '<U3'), ('col3', '<f8')])

# numpy.rec.fromstring

rec.fromstring(_datastring_ , _dtype =None_, _shape =None_, _offset =0_, _formats =None_, _names =None_, _titles =None_, _aligned =False_, _byteorder =None_)

Create a record array from binary data.

#### Examples

>>> a = b'\x01\x02\x03abc'
>>> np.rec.fromstring(a, dtype='u1,u1,u1,S3')
rec.array([(1, 2, 3, b'abc')],
          dtype=[('f0', 'u1'), ('f1', 'u1'), ('f2', 'u1'), ('f3', 'S3')])

>>> grades_dtype = [('Name', (np.str_, 10)), ('Marks', np.float64),
...
('GradeLevel', np.int32)]
>>> grades_array = np.array([('Sam', 33.3, 3), ('Mike', 44.4, 5),
... ('Aadi', 66.6, 6)], dtype=grades_dtype)
>>> np.rec.fromstring(grades_array.tobytes(), dtype=grades_dtype)
rec.array([('Sam', 33.3, 3), ('Mike', 44.4, 5), ('Aadi', 66.6, 6)],
          dtype=[('Name', '<U10'), ('Marks', '<f8'), ('GradeLevel', '<i4')])

>>> s = '\x01\x02\x03abc'
>>> np.rec.fromstring(s, dtype='u1,u1,u1,S3')
Traceback (most recent call last):
   ...
TypeError: a bytes-like object is required, not 'str'

# numpy.recarray.all

method

recarray.all(_axis =None_, _out =None_, _keepdims =False_, _*_ , _where =True_)

Returns True if all elements evaluate to True.

Refer to [`numpy.all`](numpy.all#numpy.all "numpy.all") for full documentation.

See also

[`numpy.all`](numpy.all#numpy.all "numpy.all") equivalent function

# numpy.recarray.any

method

recarray.any(_axis =None_, _out =None_, _keepdims =False_, _*_ , _where =True_)

Returns True if any of the elements of `a` evaluate to True.

Refer to [`numpy.any`](numpy.any#numpy.any "numpy.any") for full documentation.

See also

[`numpy.any`](numpy.any#numpy.any "numpy.any") equivalent function

# numpy.recarray.argmax

method

recarray.argmax(_axis =None_, _out =None_, _*_ , _keepdims =False_)

Return indices of the maximum values along the given axis.

Refer to [`numpy.argmax`](numpy.argmax#numpy.argmax "numpy.argmax") for full documentation.

See also

[`numpy.argmax`](numpy.argmax#numpy.argmax "numpy.argmax") equivalent function

# numpy.recarray.argmin

method

recarray.argmin(_axis =None_, _out =None_, _*_ , _keepdims =False_)

Return indices of the minimum values along the given axis.

Refer to [`numpy.argmin`](numpy.argmin#numpy.argmin "numpy.argmin") for detailed documentation.

See also

[`numpy.argmin`](numpy.argmin#numpy.argmin "numpy.argmin") equivalent function

# numpy.recarray.argpartition

method

recarray.argpartition(_kth_ , _axis =-1_, _kind ='introselect'_, _order =None_)

Returns the indices that would partition this array.
Refer to [`numpy.argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition") for full documentation. See also [`numpy.argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition") equivalent function # numpy.recarray.argsort method recarray.argsort(_axis =-1_, _kind =None_, _order =None_) Returns the indices that would sort this array. Refer to [`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort") for full documentation. See also [`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort") equivalent function # numpy.recarray.astype method recarray.astype(_dtype_ , _order ='K'_, _casting ='unsafe'_, _subok =True_, _copy =True_) Copy of the array, cast to a specified type. Parameters: **dtype** str or dtype Typecode or data-type to which the array is cast. **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout order of the result. ‘C’ means C order, ‘F’ means Fortran order, ‘A’ means ‘F’ order if all the arrays are Fortran contiguous, ‘C’ order otherwise, and ‘K’ means as close to the order the array elements appear in memory as possible. Default is ‘K’. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur. Defaults to ‘unsafe’ for backwards compatibility. * ‘no’ means the data types should not be cast at all. * ‘equiv’ means only byte-order changes are allowed. * ‘safe’ means only casts which can preserve values are allowed. * ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed. * ‘unsafe’ means any data conversions may be done. **subok** bool, optional If True, then sub-classes will be passed-through (default), otherwise the returned array will be forced to be a base-class array. **copy** bool, optional By default, astype always returns a newly allocated array. 
If this is set to false, and the [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), `order`, and `subok` requirements are satisfied, the input array is returned instead of a copy. Returns: **arr_t** ndarray Unless [`copy`](numpy.copy#numpy.copy "numpy.copy") is False and the other conditions for returning the input array are satisfied (see description for [`copy`](numpy.copy#numpy.copy "numpy.copy") input parameter), `arr_t` is a new array of the same shape as the input array, with dtype, order given by [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), `order`. Raises: ComplexWarning When casting from complex to float or int. To avoid this, one should use `a.real.astype(t)`. #### Examples >>> import numpy as np >>> x = np.array([1, 2, 2.5]) >>> x array([1. , 2. , 2.5]) >>> x.astype(int) array([1, 2, 2]) # numpy.recarray.base attribute recarray.base Base object if memory is from some other object. #### Examples The base of an array that owns its memory is None: >>> import numpy as np >>> x = np.array([1,2,3,4]) >>> x.base is None True Slicing creates a view, whose memory is shared with x: >>> y = x[2:] >>> y.base is x True # numpy.recarray.byteswap method recarray.byteswap(_inplace =False_) Swap the bytes of the array elements Toggle between low-endian and big-endian data representation by returning a byteswapped array, optionally swapped in-place. Arrays of byte-strings are not swapped. The real and imaginary parts of a complex number are swapped individually. Parameters: **inplace** bool, optional If `True`, swap bytes in-place, default is `False`. Returns: **out** ndarray The byteswapped array. If `inplace` is `True`, this is a view to self. 
#### Examples >>> import numpy as np >>> A = np.array([1, 256, 8755], dtype=np.int16) >>> list(map(hex, A)) ['0x1', '0x100', '0x2233'] >>> A.byteswap(inplace=True) array([ 256, 1, 13090], dtype=int16) >>> list(map(hex, A)) ['0x100', '0x1', '0x3322'] Arrays of byte-strings are not swapped >>> A = np.array([b'ceg', b'fac']) >>> A.byteswap() array([b'ceg', b'fac'], dtype='|S3') `A.view(A.dtype.newbyteorder()).byteswap()` produces an array with the same values but different representation in memory >>> A = np.array([1, 2, 3],dtype=np.int64) >>> A.view(np.uint8) array([1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0], dtype=uint8) >>> A.view(A.dtype.newbyteorder()).byteswap(inplace=True) array([1, 2, 3], dtype='>i8') >>> A.view(np.uint8) array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3], dtype=uint8) # numpy.recarray.choose method recarray.choose(_choices_ , _out =None_, _mode ='raise'_) Use an index array to construct a new array from a set of choices. Refer to [`numpy.choose`](numpy.choose#numpy.choose "numpy.choose") for full documentation. See also [`numpy.choose`](numpy.choose#numpy.choose "numpy.choose") equivalent function # numpy.recarray.clip method recarray.clip(_min =None_, _max =None_, _out =None_, _** kwargs_) Return an array whose values are limited to `[min, max]`. One of max or min must be given. Refer to [`numpy.clip`](numpy.clip#numpy.clip "numpy.clip") for full documentation. See also [`numpy.clip`](numpy.clip#numpy.clip "numpy.clip") equivalent function # numpy.recarray.compress method recarray.compress(_condition_ , _axis =None_, _out =None_) Return selected slices of this array along given axis. Refer to [`numpy.compress`](numpy.compress#numpy.compress "numpy.compress") for full documentation. See also [`numpy.compress`](numpy.compress#numpy.compress "numpy.compress") equivalent function # numpy.recarray.conj method recarray.conj() Complex-conjugate all elements. 
Refer to [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") for full documentation. See also [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") equivalent function # numpy.recarray.conjugate method recarray.conjugate() Return the complex conjugate, element-wise. Refer to [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") for full documentation. See also [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") equivalent function # numpy.recarray.copy method recarray.copy(_order ='C'_) Return a copy of the array. Parameters: **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout of the copy. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. (Note that this function and [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") are very similar but have different default values for their order= arguments, and this function always passes sub-classes through.) See also [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") Similar function with different default behavior [`numpy.copyto`](numpy.copyto#numpy.copyto "numpy.copyto") #### Notes This function is the preferred method for creating an array copy. The function [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") is similar, but it defaults to using order ‘K’, and will not pass sub-classes through by default. #### Examples >>> import numpy as np >>> x = np.array([[1,2,3],[4,5,6]], order='F') >>> y = x.copy() >>> x.fill(0) >>> x array([[0, 0, 0], [0, 0, 0]]) >>> y array([[1, 2, 3], [4, 5, 6]]) >>> y.flags['C_CONTIGUOUS'] True For arrays containing Python objects (e.g. dtype=object), the copy is a shallow one. 
The new array will contain the same object which may lead to surprises if that object can be modified (is mutable):

>>> a = np.array([1, 'm', [2, 3, 4]], dtype=object)
>>> b = a.copy()
>>> b[2][0] = 10
>>> a
array([1, 'm', list([10, 3, 4])], dtype=object)

To ensure all elements within an `object` array are copied, use [`copy.deepcopy`](https://docs.python.org/3/library/copy.html#copy.deepcopy "\(in Python v3.13\)"):

>>> import copy
>>> a = np.array([1, 'm', [2, 3, 4]], dtype=object)
>>> c = copy.deepcopy(a)
>>> c[2][0] = 10
>>> c
array([1, 'm', list([10, 3, 4])], dtype=object)
>>> a
array([1, 'm', list([2, 3, 4])], dtype=object)

# numpy.recarray.ctypes

attribute

recarray.ctypes

An object to simplify the interaction of the array with the ctypes module.

This attribute creates an object that makes it easier to use arrays when calling shared libraries with the ctypes module. The returned object has, among others, data, shape, and strides attributes (see Notes below) which themselves return ctypes objects that can be used as arguments to a shared library.

Parameters:

**None**

Returns:

**c** Python object Possessing attributes data, shape, strides, etc.

See also

[`numpy.ctypeslib`](../routines.ctypeslib#module-numpy.ctypeslib "numpy.ctypeslib")

#### Notes

Below are the public attributes of this object which were documented in “Guide to NumPy” (we have omitted undocumented public attributes, as well as documented private attributes):

_ctypes.data

A pointer to the memory area of the array as a Python integer. This memory area may contain data that is not aligned, or not in correct byte-order. The memory area may not even be writeable. The array flags and data-type of this array should be respected when passing this attribute to arbitrary C-code to avoid trouble that can include Python crashing. User Beware! The value of this attribute is exactly the same as: `self.__array_interface__['data'][0]`.
Note that unlike `data_as`, a reference won’t be kept to the array: code like `ctypes.c_void_p((a + b).ctypes.data)` will result in a pointer to a deallocated array, and should be spelt `(a + b).ctypes.data_as(ctypes.c_void_p)` _ctypes.shape (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the C-integer corresponding to `dtype('p')` on this platform (see [`c_intp`](../routines.ctypeslib#numpy.ctypeslib.c_intp "numpy.ctypeslib.c_intp")). This base-type could be [`ctypes.c_int`](https://docs.python.org/3/library/ctypes.html#ctypes.c_int "\(in Python v3.13\)"), [`ctypes.c_long`](https://docs.python.org/3/library/ctypes.html#ctypes.c_long "\(in Python v3.13\)"), or [`ctypes.c_longlong`](https://docs.python.org/3/library/ctypes.html#ctypes.c_longlong "\(in Python v3.13\)") depending on the platform. The ctypes array contains the shape of the underlying array. _ctypes.strides (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the same as for the shape attribute. This ctypes array contains the strides information from the underlying array. This strides information is important for showing how many bytes must be jumped to get to the next element in the array. _ctypes.data_as(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_internal.py#L279-L296) Return the data pointer cast to a particular c-types object. For example, calling `self._as_parameter_` is equivalent to `self.data_as(ctypes.c_void_p)`. Perhaps you want to use the data as a pointer to a ctypes array of floating-point data: `self.data_as(ctypes.POINTER(ctypes.c_double))`. The returned pointer will keep a reference to the array. _ctypes.shape_as(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_internal.py#L298-L305) Return the shape tuple as an array of some other c-types type. For example: `self.shape_as(ctypes.c_short)`. 
_ctypes.strides_as(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_internal.py#L307-L314)

Return the strides tuple as an array of some other c-types type. For example: `self.strides_as(ctypes.c_longlong)`.

If the ctypes module is not available, then the ctypes attribute of array objects still returns something useful, but ctypes objects are not returned and errors may be raised instead. In particular, the object will still have the `_as_parameter_` attribute which will return an integer equal to the data attribute.

#### Examples

>>> import numpy as np
>>> import ctypes
>>> x = np.array([[0, 1], [2, 3]], dtype=np.int32)
>>> x
array([[0, 1],
       [2, 3]], dtype=int32)
>>> x.ctypes.data
31962608 # may vary
>>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32))
<__main__.LP_c_uint object at 0x7ff2fc1fc200> # may vary
>>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)).contents
c_uint(0)
>>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint64)).contents
c_ulong(4294967296)
>>> x.ctypes.shape
<numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1fce60> # may vary
>>> x.ctypes.strides
<numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1ff320> # may vary

# numpy.recarray.cumprod

method

recarray.cumprod(_axis =None_, _dtype =None_, _out =None_)

Return the cumulative product of the elements along the given axis.

Refer to [`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod") for full documentation.

See also

[`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod") equivalent function

# numpy.recarray.cumsum

method

recarray.cumsum(_axis =None_, _dtype =None_, _out =None_)

Return the cumulative sum of the elements along the given axis.

Refer to [`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") for full documentation.

See also

[`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") equivalent function

# numpy.recarray.data

attribute

recarray.data

Python buffer object pointing to the start of the array’s data.

# numpy.recarray.diagonal

method

recarray.diagonal(_offset =0_, _axis1 =0_, _axis2 =1_)

Return specified diagonals.
In NumPy 1.9 the returned array is a read-only view instead of a copy as in previous NumPy versions. In a future version the read-only restriction will be removed. Refer to [`numpy.diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal") for full documentation. See also [`numpy.diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal") equivalent function # numpy.recarray.dump method recarray.dump(_file_) Dump a pickle of the array to the specified file. The array can be read back with pickle.load or numpy.load. Parameters: **file** str or Path A string naming the dump file. # numpy.recarray.dumps method recarray.dumps() Returns the pickle of the array as a string. pickle.loads will convert the string back to an array. Parameters: **None** # numpy.recarray.fill method recarray.fill(_value_) Fill the array with a scalar value. Parameters: **value** scalar All elements of `a` will be assigned this value. #### Examples >>> import numpy as np >>> a = np.array([1, 2]) >>> a.fill(0) >>> a array([0, 0]) >>> a = np.empty(2) >>> a.fill(1) >>> a array([1., 1.]) Fill expects a scalar value and always behaves the same as assigning to a single array element. The following is a rare example where this distinction is important: >>> a = np.array([None, None], dtype=object) >>> a[0] = np.array(3) >>> a array([array(3), None], dtype=object) >>> a.fill(np.array(3)) >>> a array([array(3), array(3)], dtype=object) Where other forms of assignments will unpack the array being assigned: >>> a[...] = np.array(3) >>> a array([3, 3], dtype=object) # numpy.recarray.flags attribute recarray.flags Information about the memory layout of the array. #### Notes The `flags` object can be accessed dictionary-like (as in `a.flags['WRITEABLE']`), or by using lowercased attribute names (as in `a.flags.writeable`). Short flag names are only supported in dictionary access. 
Only the WRITEBACKIFCOPY, WRITEABLE, and ALIGNED flags can be changed by the user, via direct assignment to the attribute or dictionary entry, or by calling [`ndarray.setflags`](numpy.ndarray.setflags#numpy.ndarray.setflags "numpy.ndarray.setflags"). The array flags cannot be set arbitrarily: * WRITEBACKIFCOPY can only be set `False`. * ALIGNED can only be set `True` if the data is truly aligned. * WRITEABLE can only be set `True` if the array owns its own memory or the ultimate owner of the memory exposes a writeable buffer interface or is a string. Arrays can be both C-style and Fortran-style contiguous simultaneously. This is clear for 1-dimensional arrays, but can also be true for higher dimensional arrays. Even for contiguous arrays a stride for a given dimension `arr.strides[dim]` may be _arbitrary_ if `arr.shape[dim] == 1` or the array has no elements. It does _not_ generally hold that `self.strides[-1] == self.itemsize` for C-style contiguous arrays or `self.strides[0] == self.itemsize` for Fortran-style contiguous arrays is true. Attributes: **C_CONTIGUOUS (C)** The data is in a single, C-style contiguous segment. **F_CONTIGUOUS (F)** The data is in a single, Fortran-style contiguous segment. **OWNDATA (O)** The array owns the memory it uses or borrows it from another object. **WRITEABLE (W)** The data area can be written to. Setting this to False locks the data, making it read-only. A view (slice, etc.) inherits WRITEABLE from its base array at creation time, but a view of a writeable array may be subsequently locked while the base array remains writeable. (The opposite is not true, in that a view of a locked array may not be made writeable. However, currently, locking a base object does not lock any views that already reference it, so under that circumstance it is possible to alter the contents of a locked array via a previously created writeable view onto it.) Attempting to change a non- writeable array raises a RuntimeError exception. 
**ALIGNED (A)** The data and all elements are aligned appropriately for the hardware. **WRITEBACKIFCOPY (X)** This array is a copy of some other array. The C-API function PyArray_ResolveWritebackIfCopy must be called before deallocating this array, at which point the base array is updated with the contents of this array. **FNC** F_CONTIGUOUS and not C_CONTIGUOUS. **FORC** F_CONTIGUOUS or C_CONTIGUOUS (one-segment test). **BEHAVED (B)** ALIGNED and WRITEABLE. **CARRAY (CA)** BEHAVED and C_CONTIGUOUS. **FARRAY (FA)** BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS. # numpy.recarray.flat attribute recarray.flat A 1-D iterator over the array. This is a [`numpy.flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") instance, which acts similarly to, but is not a subclass of, Python’s built-in iterator object. See also [`flatten`](numpy.recarray.flatten#numpy.recarray.flatten "numpy.recarray.flatten") Return a copy of the array collapsed into one dimension. [`flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") #### Examples >>> import numpy as np >>> x = np.arange(1, 7).reshape(2, 3) >>> x array([[1, 2, 3], [4, 5, 6]]) >>> x.flat[3] 4 >>> x.T array([[1, 4], [2, 5], [3, 6]]) >>> x.T.flat[3] 5 >>> type(x.flat) <class 'numpy.flatiter'> An assignment example: >>> x.flat = 3; x array([[3, 3, 3], [3, 3, 3]]) >>> x.flat[[1,4]] = 1; x array([[3, 1, 3], [3, 1, 3]]) # numpy.recarray.flatten method recarray.flatten(_order ='C'_) Return a copy of the array collapsed into one dimension. Parameters: **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional ‘C’ means to flatten in row-major (C-style) order. ‘F’ means to flatten in column-major (Fortran-style) order. ‘A’ means to flatten in column-major order if `a` is Fortran _contiguous_ in memory, row-major order otherwise. ‘K’ means to flatten `a` in the order the elements occur in memory. The default is ‘C’. Returns: **y** ndarray A copy of the input array, flattened to one dimension. See also [`ravel`](numpy.ravel#numpy.ravel "numpy.ravel") Return a flattened array. 
[`flat`](numpy.recarray.flat#numpy.recarray.flat "numpy.recarray.flat") A 1-D flat iterator over the array. #### Examples >>> import numpy as np >>> a = np.array([[1,2], [3,4]]) >>> a.flatten() array([1, 2, 3, 4]) >>> a.flatten('F') array([1, 3, 2, 4]) # numpy.recarray.getfield method recarray.getfield(_dtype_ , _offset =0_) Returns a field of the given array as a certain type. A field is a view of the array data with a given data-type. The values in the view are determined by the given type and the offset into the current array in bytes. The offset needs to be such that the view dtype fits in the array dtype; for example an array of dtype complex128 has 16-byte elements. If taking a view with a 32-bit integer (4 bytes), the offset needs to be between 0 and 12 bytes. Parameters: **dtype** str or dtype The data type of the view. The dtype size of the view can not be larger than that of the array itself. **offset** int Number of bytes to skip before beginning the element view. #### Examples >>> import numpy as np >>> x = np.diag([1.+1.j]*2) >>> x[1, 1] = 2 + 4.j >>> x array([[1.+1.j, 0.+0.j], [0.+0.j, 2.+4.j]]) >>> x.getfield(np.float64) array([[1., 0.], [0., 2.]]) By choosing an offset of 8 bytes we can select the imaginary part of the array for our view: >>> x.getfield(np.float64, offset=8) array([[1., 0.], [0., 4.]]) # numpy.recarray _class_ numpy.recarray(_shape_ , _dtype =None_, _buf =None_, _offset =0_, _strides =None_, _formats =None_, _names =None_, _titles =None_, _byteorder =None_, _aligned =False_, _order ='C'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/rec/__init__.py) Construct an ndarray that allows field access using attributes. Arrays may have a data-type containing fields, analogous to columns in a spreadsheet. An example is `[(x, int), (y, float)]`, where each entry in the array is a pair of `(int, float)`. Normally, these attributes are accessed using dictionary lookups such as `arr['x']` and `arr['y']`. 
Record arrays allow the fields to be accessed as members of the array, using `arr.x` and `arr.y`. Parameters: **shape** tuple Shape of output array. **dtype** data-type, optional The desired data-type. By default, the data-type is determined from `formats`, `names`, `titles`, `aligned` and `byteorder`. **formats** list of data-types, optional A list containing the data-types for the different columns, e.g. `['i4', 'f8', 'i4']`. `formats` does _not_ support the new convention of using types directly, i.e. `(int, float, int)`. Note that `formats` must be a list, not a tuple. Given that `formats` is somewhat limited, we recommend specifying [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") instead. **names** tuple of str, optional The name of each column, e.g. `('x', 'y', 'z')`. **buf** buffer, optional By default, a new array is created of the given shape and data-type. If `buf` is specified and is an object exposing the buffer interface, the array will use the memory from the existing buffer. In this case, the `offset` and [`strides`](numpy.recarray.strides#numpy.recarray.strides "numpy.recarray.strides") keywords are available. Returns: **rec** recarray Empty array of the given shape and type. Other Parameters: **titles** tuple of str, optional Aliases for column names. For example, if `names` were `('x', 'y', 'z')` and `titles` is `('x_coordinate', 'y_coordinate', 'z_coordinate')`, then `arr['x']` is equivalent to both `arr.x` and `arr.x_coordinate`. **byteorder**{‘<’, ‘>’, ‘=’}, optional Byte-order for all fields. **aligned** bool, optional Align the fields in memory as the C-compiler would. **strides** tuple of ints, optional Buffer (`buf`) is interpreted according to these strides (strides define how many bytes each array element, row, column, etc. occupy in memory). **offset** int, optional Start reading buffer (`buf`) from this offset onwards. **order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. 
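The `titles` aliasing described above can be shown in a short sketch (the field names, titles, and values here are illustrative, not from the official examples):

```python
import numpy as np

# Build a record array whose columns carry both names and title aliases.
arr = np.rec.array(
    [(1.0, 2), (3.0, 4)],
    formats=["f8", "i4"],
    names=["x", "y"],
    titles=["x_coordinate", "y_coordinate"],
)

# The same column is reachable by field name, by title, and as an attribute.
print(arr["x"])             # by field name
print(arr["x_coordinate"])  # by title alias
print(arr.x)                # attribute access via recarray
```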
See also [`numpy.rec.fromrecords`](numpy.rec.fromrecords#numpy.rec.fromrecords "numpy.rec.fromrecords") Construct a record array from data. [`numpy.record`](numpy.record#numpy.record "numpy.record") fundamental data-type for `recarray`. [`numpy.rec.format_parser`](numpy.rec.format_parser#numpy.rec.format_parser "numpy.rec.format_parser") determine data-type from formats, names, titles. #### Notes This constructor can be compared to `empty`: it creates a new record array but does not fill it with data. To create a record array from data, use one of the following methods: 1. Create a standard ndarray and convert it to a record array, using `arr.view(np.recarray)` 2. Use the `buf` keyword. 3. Use `np.rec.fromrecords`. #### Examples Create an array with two fields, `x` and `y`: >>> import numpy as np >>> x = np.array([(1.0, 2), (3.0, 4)], dtype=[('x', '<f8'), ('y', np.int64)]) >>> x array([(1., 2), (3., 4)], dtype=[('x', '<f8'), ('y', '<i8')]) >>> x['x'] array([1., 3.]) View the array as a record array: >>> x = x.view(np.recarray) >>> x.x array([1., 3.]) >>> x.y array([2, 4]) Create a new, empty record array: >>> np.recarray((2,), ... dtype=[('x', int), ('y', float), ('z', int)]) rec.array([(-1073741821, 1.2249118382103472e-301, 24547520), (3471280, 1.2134086255804012e-316, 0)], dtype=[('x', '<i4'), ('y', '<f8'), ('z', '<i4')]) # numpy.recarray.item method recarray.item(_*args_) Copy an element of an array to a standard Python scalar and return it. #### Examples >>> import numpy as np >>> np.random.seed(123) >>> x = np.random.randint(9, size=(3, 3)) >>> x array([[2, 2, 6], [1, 3, 6], [1, 0, 1]]) >>> x.item(3) 1 >>> x.item(7) 0 >>> x.item((0, 1)) 2 >>> x.item((2, 2)) 1 For an array with object dtype, elements are returned as-is. >>> a = np.array([np.int64(1)], dtype=object) >>> a.item() #return np.int64 np.int64(1) # numpy.recarray.itemsize attribute recarray.itemsize Length of one array element in bytes. 
#### Examples >>> import numpy as np >>> x = np.array([1,2,3], dtype=np.float64) >>> x.itemsize 8 >>> x = np.array([1,2,3], dtype=np.complex128) >>> x.itemsize 16 # numpy.recarray.max method recarray.max(_axis=None_ , _out=None_ , _keepdims=False_ , _initial=<no value>_, _where=True_) Return the maximum along a given axis. Refer to [`numpy.amax`](numpy.amax#numpy.amax "numpy.amax") for full documentation. See also [`numpy.amax`](numpy.amax#numpy.amax "numpy.amax") equivalent function # numpy.recarray.mean method recarray.mean(_axis =None_, _dtype =None_, _out =None_, _keepdims =False_, _*_ , _where =True_) Returns the average of the array elements along given axis. Refer to [`numpy.mean`](numpy.mean#numpy.mean "numpy.mean") for full documentation. See also [`numpy.mean`](numpy.mean#numpy.mean "numpy.mean") equivalent function # numpy.recarray.min method recarray.min(_axis=None_ , _out=None_ , _keepdims=False_ , _initial=<no value>_, _where=True_) Return the minimum along a given axis. Refer to [`numpy.amin`](numpy.amin#numpy.amin "numpy.amin") for full documentation. See also [`numpy.amin`](numpy.amin#numpy.amin "numpy.amin") equivalent function # numpy.recarray.mT attribute recarray.mT View of the matrix transposed array. The matrix transpose is the transpose of the last two dimensions, even if the array is of higher dimension. New in version 2.0. Raises: ValueError If the array is of dimension less than 2. #### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> a.mT array([[1, 3], [2, 4]]) >>> a = np.arange(8).reshape((2, 2, 2)) >>> a array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) >>> a.mT array([[[0, 2], [1, 3]], [[4, 6], [5, 7]]]) # numpy.recarray.nbytes attribute recarray.nbytes Total bytes consumed by the elements of the array. See also [`sys.getsizeof`](https://docs.python.org/3/library/sys.html#sys.getsizeof "\(in Python v3.13\)") Memory consumed by the object itself without parents in case of a view. 
This does include memory consumed by non-element attributes. #### Notes Does not include memory consumed by non-element attributes of the array object. #### Examples >>> import numpy as np >>> x = np.zeros((3,5,2), dtype=np.complex128) >>> x.nbytes 480 >>> np.prod(x.shape) * x.itemsize 480 # numpy.recarray.nonzero method recarray.nonzero() Return the indices of the elements that are non-zero. Refer to [`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") for full documentation. See also [`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") equivalent function # numpy.recarray.partition method recarray.partition(_kth_ , _axis =-1_, _kind ='introselect'_, _order =None_) Partially sorts the elements in the array in such a way that the value of the element in k-th position is in the position it would be in a sorted array. In the output array, all elements smaller than the k-th element are located to the left of this element and all equal or greater are located to its right. The ordering of the elements in the two partitions on either side of the k-th element in the output array is undefined. Parameters: **kth** int or sequence of ints Element index to partition by. The kth element value will be in its final sorted position and all smaller elements will be moved before it and all equal or greater elements behind it. The order of all elements in the partitions is undefined. If provided with a sequence of kth values, it will partition all elements indexed by those values into their sorted positions at once. Deprecated since version 1.22.0: Passing booleans as index is deprecated. **axis** int, optional Axis along which to sort. Default is -1, which means sort along the last axis. **kind**{‘introselect’}, optional Selection algorithm. Default is ‘introselect’. **order** str or list of str, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. 
A single field can be specified as a string, and not all fields need to be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties. See also [`numpy.partition`](numpy.partition#numpy.partition "numpy.partition") Return a partitioned copy of an array. [`argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition") Indirect partition. [`sort`](numpy.sort#numpy.sort "numpy.sort") Full sort. #### Notes See `np.partition` for notes on the different algorithms. #### Examples >>> import numpy as np >>> a = np.array([3, 4, 2, 1]) >>> a.partition(3) >>> a array([2, 1, 3, 4]) # may vary >>> a.partition((1, 3)) >>> a array([1, 2, 3, 4]) # numpy.recarray.prod method recarray.prod(_axis =None_, _dtype =None_, _out =None_, _keepdims =False_, _initial =1_, _where =True_) Return the product of the array elements over the given axis Refer to [`numpy.prod`](numpy.prod#numpy.prod "numpy.prod") for full documentation. See also [`numpy.prod`](numpy.prod#numpy.prod "numpy.prod") equivalent function # numpy.recarray.put method recarray.put(_indices_ , _values_ , _mode ='raise'_) Set `a.flat[n] = values[n]` for all `n` in indices. Refer to [`numpy.put`](numpy.put#numpy.put "numpy.put") for full documentation. See also [`numpy.put`](numpy.put#numpy.put "numpy.put") equivalent function # numpy.recarray.ravel method recarray.ravel([_order_]) Return a flattened array. Refer to [`numpy.ravel`](numpy.ravel#numpy.ravel "numpy.ravel") for full documentation. See also [`numpy.ravel`](numpy.ravel#numpy.ravel "numpy.ravel") equivalent function [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") a flat iterator on the array. # numpy.recarray.repeat method recarray.repeat(_repeats_ , _axis =None_) Repeat elements of an array. Refer to [`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat") for full documentation. 
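The `put` and `repeat` entries above can be sketched together in one short example (the values are illustrative); note that both operate on the flattened array by default:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# put writes into the flattened view: a.flat[n] = values[n] for n in indices.
a.put([0, 5], [10, 20])
print(a)  # first and last elements replaced

# repeat with axis=None flattens first, then repeats each element in turn.
r = a.repeat(2, axis=None)
print(r)
```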
See also [`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat") equivalent function # numpy.recarray.reshape method recarray.reshape(_shape_ , _/_ , _*_ , _order ='C'_, _copy =None_) Returns an array containing the same data with a new shape. Refer to [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") for full documentation. See also [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") equivalent function #### Notes Unlike the free function [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape"), this method on [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") allows the elements of the shape parameter to be passed in as separate arguments. For example, `a.reshape(10, 11)` is equivalent to `a.reshape((10, 11))`. # numpy.recarray.resize method recarray.resize(_new_shape_ , _refcheck =True_) Change shape and size of array in-place. Parameters: **new_shape** tuple of ints, or `n` ints Shape of resized array. **refcheck** bool, optional If False, reference count will not be checked. Default is True. Returns: None Raises: ValueError If `a` does not own its own data or references or views to it exist, and the data memory must be changed. PyPy only: will always raise if the data memory must be changed, since there is no reliable way to determine if references or views to it exist. SystemError If the `order` keyword argument is specified. This behaviour is a bug in NumPy. See also [`resize`](numpy.resize#numpy.resize "numpy.resize") Return a new array with the specified shape. #### Notes This reallocates space for the data area if necessary. Only contiguous arrays (data elements consecutive in memory) can be resized. The purpose of the reference count check is to make sure you do not use this array as a buffer for another Python object and then reallocate the memory. 
However, reference counts can increase in other ways so if you are sure that you have not shared the memory for this array with another Python object, then you may safely set `refcheck` to False. #### Examples Shrinking an array: array is flattened (in the order that the data are stored in memory), resized, and reshaped: >>> import numpy as np >>> a = np.array([[0, 1], [2, 3]], order='C') >>> a.resize((2, 1)) >>> a array([[0], [1]]) >>> a = np.array([[0, 1], [2, 3]], order='F') >>> a.resize((2, 1)) >>> a array([[0], [2]]) Enlarging an array: as above, but missing entries are filled with zeros: >>> b = np.array([[0, 1], [2, 3]]) >>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple >>> b array([[0, 1, 2], [3, 0, 0]]) Referencing an array prevents resizing… >>> c = a >>> a.resize((1, 1)) Traceback (most recent call last): ... ValueError: cannot resize an array that references or is referenced ... Unless `refcheck` is False: >>> a.resize((1, 1), refcheck=False) >>> a array([[0]]) >>> c array([[0]]) # numpy.recarray.round method recarray.round(_decimals =0_, _out =None_) Return `a` with each element rounded to the given number of decimals. Refer to [`numpy.around`](numpy.around#numpy.around "numpy.around") for full documentation. See also [`numpy.around`](numpy.around#numpy.around "numpy.around") equivalent function # numpy.recarray.searchsorted method recarray.searchsorted(_v_ , _side ='left'_, _sorter =None_) Find indices where elements of v should be inserted in a to maintain order. For full documentation, see [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") See also [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") equivalent function # numpy.recarray.setfield method recarray.setfield(_val_ , _dtype_ , _offset =0_) Put a value into a specified place in a field defined by a data-type. 
Place `val` into `a`’s field defined by [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") and beginning `offset` bytes into the field. Parameters: **val** object Value to be placed in field. **dtype** dtype object Data-type of the field in which to place `val`. **offset** int, optional The number of bytes into the field at which to place `val`. Returns: None See also [`getfield`](numpy.recarray.getfield#numpy.recarray.getfield "numpy.recarray.getfield") #### Examples >>> import numpy as np >>> x = np.eye(3) >>> x.getfield(np.float64) array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) >>> x.setfield(3, np.int32) >>> x.getfield(np.int32) array([[3, 3, 3], [3, 3, 3], [3, 3, 3]], dtype=int32) >>> x array([[1.0e+000, 1.5e-323, 1.5e-323], [1.5e-323, 1.0e+000, 1.5e-323], [1.5e-323, 1.5e-323, 1.0e+000]]) >>> x.setfield(np.eye(3), np.int32) >>> x array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) # numpy.recarray.setflags method recarray.setflags(_write =None_, _align =None_, _uic =None_) Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively. These Boolean-valued flags affect how numpy interprets the memory area used by `a` (see Notes below). The ALIGNED flag can only be set to True if the data is actually aligned according to the type. The WRITEBACKIFCOPY flag can never be set to True. The flag WRITEABLE can only be set to True if the array owns its own memory, or the ultimate owner of the memory exposes a writeable buffer interface, or is a string. (The exception for string is made so that unpickling can be done without copying memory.) Parameters: **write** bool, optional Describes whether or not `a` can be written to. **align** bool, optional Describes whether or not `a` is aligned properly for its type. **uic** bool, optional Describes whether or not `a` is a copy of another “base” array. #### Notes Array flags provide information about how the memory area used for the array is to be interpreted. 
There are 7 Boolean flags in use, only three of which can be changed by the user: WRITEBACKIFCOPY, WRITEABLE, and ALIGNED. WRITEABLE (W) the data area can be written to; ALIGNED (A) the data and strides are aligned appropriately for the hardware (as determined by the compiler); WRITEBACKIFCOPY (X) this array is a copy of some other array (referenced by .base). When the C-API function PyArray_ResolveWritebackIfCopy is called, the base array will be updated with the contents of this array. All flags can be accessed using the single (upper case) letter as well as the full name. #### Examples >>> import numpy as np >>> y = np.array([[3, 1, 7], ... [2, 0, 0], ... [8, 5, 9]]) >>> y array([[3, 1, 7], [2, 0, 0], [8, 5, 9]]) >>> y.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : True ALIGNED : True WRITEBACKIFCOPY : False >>> y.setflags(write=0, align=0) >>> y.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : False ALIGNED : False WRITEBACKIFCOPY : False >>> y.setflags(uic=1) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: cannot set WRITEBACKIFCOPY flag to True # numpy.recarray.sort method recarray.sort(_axis =-1_, _kind =None_, _order =None_) Sort an array in-place. Refer to [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") for full documentation. Parameters: **axis** int, optional Axis along which to sort. Default is -1, which means sort along the last axis. **kind**{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional Sorting algorithm. The default is ‘quicksort’. Note that both ‘stable’ and ‘mergesort’ use timsort under the covers and, in general, the actual implementation will vary with datatype. The ‘mergesort’ option is retained for backwards compatibility. **order** str or list of str, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. 
A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties. See also [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") Return a sorted copy of an array. [`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort") Indirect sort. [`numpy.lexsort`](numpy.lexsort#numpy.lexsort "numpy.lexsort") Indirect stable sort on multiple keys. [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") Find elements in sorted array. [`numpy.partition`](numpy.partition#numpy.partition "numpy.partition") Partial sort. #### Notes See [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") for notes on the different sorting algorithms. #### Examples >>> import numpy as np >>> a = np.array([[1,4], [3,1]]) >>> a.sort(axis=1) >>> a array([[1, 4], [1, 3]]) >>> a.sort(axis=0) >>> a array([[1, 3], [1, 4]]) Use the `order` keyword to specify a field to use when sorting a structured array: >>> a = np.array([('a', 2), ('c', 1)], dtype=[('x', 'S1'), ('y', int)]) >>> a.sort(order='y') >>> a array([(b'c', 1), (b'a', 2)], dtype=[('x', 'S1'), ('y', '<i8')]) # numpy.recarray.strides attribute recarray.strides Tuple of bytes to step in each dimension when traversing an array. The byte offset of element `(i[0], i[1], ..., i[n])` in an array `a` is: `offset = sum(np.array(i) * a.strides)` #### Examples >>> import numpy as np >>> y = np.reshape(np.arange(2*3*4), (2,3,4)) >>> y array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]) >>> y.strides (48, 16, 4) >>> y[1,1,1] 17 >>> offset=sum(y.strides * np.array((1,1,1))) >>> offset/y.itemsize 17 >>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0) >>> x.strides (32, 4, 224, 1344) >>> i = np.array([3,5,2,2]) >>> offset = sum(i * x.strides) >>> x[3,5,2,2] 813 >>> offset / x.itemsize 813 # numpy.recarray.sum method recarray.sum(_axis =None_, _dtype =None_, _out =None_, _keepdims =False_, _initial =0_, _where =True_) Return the sum of the array elements over the given axis. Refer to [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") for full documentation. 
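The stride arithmetic in the examples above generalizes: for any index tuple `i`, the byte offset of `a[i]` from the start of the buffer is `sum(i * a.strides)`. A minimal sketch that checks this:

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)
idx = (1, 2, 3)

# Byte offset of a[idx] from the start of the data buffer.
offset = sum(i * s for i, s in zip(idx, a.strides))

# Dividing by itemsize gives the element's position in the flat buffer;
# for this C-contiguous arange, that position equals the element's value.
print(offset // a.itemsize)  # 23
```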
See also [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") equivalent function # numpy.recarray.swapaxes method recarray.swapaxes(_axis1_ , _axis2_) Return a view of the array with `axis1` and `axis2` interchanged. Refer to [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") for full documentation. See also [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") equivalent function # numpy.recarray.T attribute recarray.T View of the transposed array. Same as `self.transpose()`. See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose") #### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> a.T array([[1, 3], [2, 4]]) >>> a = np.array([1, 2, 3, 4]) >>> a array([1, 2, 3, 4]) >>> a.T array([1, 2, 3, 4]) # numpy.recarray.take method recarray.take(_indices_ , _axis =None_, _out =None_, _mode ='raise'_) Return an array formed from the elements of `a` at the given indices. Refer to [`numpy.take`](numpy.take#numpy.take "numpy.take") for full documentation. See also [`numpy.take`](numpy.take#numpy.take "numpy.take") equivalent function # numpy.recarray.tobytes method recarray.tobytes(_order ='C'_) Construct Python bytes containing the raw data bytes in the array. Constructs Python bytes showing a copy of the raw contents of data memory. The bytes object is produced in C-order by default. This behavior is controlled by the `order` parameter. Parameters: **order**{‘C’, ‘F’, ‘A’}, optional Controls the memory layout of the bytes object. ‘C’ means C-order, ‘F’ means F-order, ‘A’ (short for _Any_) means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. Default is ‘C’. Returns: **s** bytes Python bytes exhibiting a copy of `a`’s raw data. See also [`frombuffer`](numpy.frombuffer#numpy.frombuffer "numpy.frombuffer") Inverse of this operation, construct a 1-dimensional array from Python bytes. 
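The inverse relationship with `frombuffer` noted above can be sketched as a round trip (shape and dtype must be supplied separately, since `tobytes` keeps only the raw data):

```python
import numpy as np

a = np.array([[0, 1], [2, 3]], dtype=np.uint16)

raw = a.tobytes()  # C-order copy of the raw bytes
b = np.frombuffer(raw, dtype=a.dtype).reshape(a.shape)

# The round trip reproduces the original values (b is a read-only view
# onto the bytes object, not a new writable buffer).
print((a == b).all())
```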
#### Examples >>> import numpy as np >>> x = np.array([[0, 1], [2, 3]], dtype='<u2') >>> x.tobytes() b'\x00\x00\x01\x00\x02\x00\x03\x00' >>> x.tobytes('C') == x.tobytes() True >>> x.tobytes('F') b'\x00\x00\x02\x00\x01\x00\x03\x00' # numpy.recarray.tofile method recarray.tofile(_fid_ , _sep =''_, _format ='%s'_) Write array to a file as text or binary (default). Data is always written in ‘C’ order, independent of the order of `a`. The data produced by this method can be recovered using the function fromfile(). Parameters: **fid** file or str or Path An open file object, or a string containing a filename. **sep** str Separator between array items for text output. If “” (empty), a binary file is written, equivalent to `file.write(a.tobytes())`. **format** str Format string for text file output. Each entry in the array is formatted to text by first converting it to the closest Python type, and then using “format” % item. #### Notes This is a convenience function for quick storage of array data. Information on endianness and precision is lost, so this method is not a good choice for files intended to archive data or transport data between machines with different endianness. Some of these problems can be overcome by outputting the data as text files, at the expense of speed and file size. When fid is a file object, array contents are directly written to the file, bypassing the file object’s `write` method. As a result, tofile cannot be used with file objects supporting compression (e.g., GzipFile) or file-like objects that do not support `fileno()` (e.g., BytesIO). # numpy.recarray.tolist method recarray.tolist() Return the array as an `a.ndim`-levels deep nested list of Python scalars. Return a copy of the array data as a (nested) Python list. Data items are converted to the nearest compatible builtin Python type, via the [`item`](numpy.ndarray.item#numpy.ndarray.item "numpy.ndarray.item") function. 
If `a.ndim` is 0, then since the depth of the nested list is 0, it will not be a list at all, but a simple Python scalar. Parameters: **none** Returns: **y** object, or list of object, or list of list of object, or … The possibly nested list of array elements. #### Notes The array may be recreated via `a = np.array(a.tolist())`, although this may sometimes lose precision. #### Examples For a 1D array, `a.tolist()` is almost the same as `list(a)`, except that `tolist` changes numpy scalars to Python scalars: >>> import numpy as np >>> a = np.uint32([1, 2]) >>> a_list = list(a) >>> a_list [np.uint32(1), np.uint32(2)] >>> type(a_list[0]) <class 'numpy.uint32'> >>> a_tolist = a.tolist() >>> a_tolist [1, 2] >>> type(a_tolist[0]) <class 'int'> Additionally, for a 2D array, `tolist` applies recursively: >>> a = np.array([[1, 2], [3, 4]]) >>> list(a) [array([1, 2]), array([3, 4])] >>> a.tolist() [[1, 2], [3, 4]] The base case for this recursion is a 0D array: >>> a = np.array(1) >>> list(a) Traceback (most recent call last): ... TypeError: iteration over a 0-d array >>> a.tolist() 1 # numpy.recarray.tostring method recarray.tostring(_order ='C'_) A compatibility alias for [`tobytes`](numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes"), with exactly the same behavior. Despite its name, it returns [`bytes`](https://docs.python.org/3/library/stdtypes.html#bytes "\(in Python v3.13\)") not [`str`](https://docs.python.org/3/library/stdtypes.html#str "\(in Python v3.13\)")s. Deprecated since version 1.19.0. # numpy.recarray.trace method recarray.trace(_offset =0_, _axis1 =0_, _axis2 =1_, _dtype =None_, _out =None_) Return the sum along diagonals of the array. Refer to [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace") for full documentation. See also [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace") equivalent function # numpy.recarray.transpose method recarray.transpose(_* axes_) Returns a view of the array with axes transposed. 
Refer to [`numpy.transpose`](numpy.transpose#numpy.transpose "numpy.transpose") for full documentation. Parameters: **axes** None, tuple of ints, or `n` ints * None or no argument: reverses the order of the axes. * tuple of ints: `i` in the `j`-th place in the tuple means that the array’s `i`-th axis becomes the transposed array’s `j`-th axis. * `n` ints: same as an n-tuple of the same ints (this form is intended simply as a “convenience” alternative to the tuple form). Returns: **p** ndarray View of the array with its axes suitably permuted. See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose") Equivalent function. [`ndarray.T`](numpy.ndarray.t#numpy.ndarray.T "numpy.ndarray.T") Array property returning the array transposed. [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") Give a new shape to an array without changing its data. #### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> a.transpose() array([[1, 3], [2, 4]]) >>> a.transpose((1, 0)) array([[1, 3], [2, 4]]) >>> a.transpose(1, 0) array([[1, 3], [2, 4]]) >>> a = np.array([1, 2, 3, 4]) >>> a array([1, 2, 3, 4]) >>> a.transpose() array([1, 2, 3, 4]) # numpy.recarray.var method recarray.var(_axis =None_, _dtype =None_, _out =None_, _ddof =0_, _keepdims =False_, _*_ , _where =True_) Returns the variance of the array elements, along given axis. Refer to [`numpy.var`](numpy.var#numpy.var "numpy.var") for full documentation. See also [`numpy.var`](numpy.var#numpy.var "numpy.var") equivalent function # numpy.recarray.view method recarray.view(_[dtype][, type]_) New view of array with the same data. Note Passing None for `dtype` is different from omitting the parameter, since the former invokes `dtype(None)` which is an alias for `dtype('float64')`. Parameters: **dtype** data-type or ndarray sub-class, optional Data-type descriptor of the returned view, e.g., float32 or int16. 
Omitting it results in the view having the same data-type as `a`. This argument can also be specified as an ndarray sub-class, which then specifies the type of the returned object (this is equivalent to setting the `type` parameter). **type** Python type, optional Type of the returned view, e.g., ndarray or matrix. Again, omission of the parameter results in type preservation. #### Notes `a.view()` is used two different ways: `a.view(some_dtype)` or `a.view(dtype=some_dtype)` constructs a view of the array’s memory with a different data-type. This can cause a reinterpretation of the bytes of memory. `a.view(ndarray_subclass)` or `a.view(type=ndarray_subclass)` just returns an instance of `ndarray_subclass` that looks at the same array (same shape, dtype, etc.) This does not cause a reinterpretation of the memory. For `a.view(some_dtype)`, if `some_dtype` has a different number of bytes per entry than the previous dtype (for example, converting a regular array to a structured array), then the last axis of `a` must be contiguous. This axis will be resized in the result. Changed in version 1.23.0: Only the last axis needs to be contiguous. Previously, the entire array had to be C-contiguous. 
#### Examples >>> import numpy as np >>> x = np.array([(-1, 2)], dtype=[('a', np.int8), ('b', np.int8)]) Viewing array data using a different type and dtype: >>> nonneg = np.dtype([("a", np.uint8), ("b", np.uint8)]) >>> y = x.view(dtype=nonneg, type=np.recarray) >>> x["a"] array([-1], dtype=int8) >>> y.a array([255], dtype=uint8) Creating a view on a structured array so it can be used in calculations >>> x = np.array([(1, 2),(3,4)], dtype=[('a', np.int8), ('b', np.int8)]) >>> xv = x.view(dtype=np.int8).reshape(-1,2) >>> xv array([[1, 2], [3, 4]], dtype=int8) >>> xv.mean(0) array([2., 3.]) Making changes to the view changes the underlying array >>> xv[0,1] = 20 >>> x array([(1, 20), (3, 4)], dtype=[('a', 'i1'), ('b', 'i1')]) Using a view to convert an array to a recarray: >>> z = x.view(np.recarray) >>> z.a array([1, 3], dtype=int8) Views share data: >>> x[0] = (9, 10) >>> z[0] np.record((9, 10), dtype=[('a', 'i1'), ('b', 'i1')]) Views that change the dtype size (bytes per entry) should normally be avoided on arrays defined by slices, transposes, fortran-ordering, etc.: >>> x = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.int16) >>> y = x[:, ::2] >>> y array([[1, 3], [4, 6]], dtype=int16) >>> y.view(dtype=[('width', np.int16), ('length', np.int16)]) Traceback (most recent call last): ... ValueError: To change to a dtype of a different size, the last axis must be contiguous >>> z = y.copy() >>> z.view(dtype=[('width', np.int16), ('length', np.int16)]) array([[(1, 3)], [(4, 6)]], dtype=[('width', '<i2'), ('length', '<i2')]) However, views that change dtype are totally fine for arrays with a contiguous last axis, even if the rest of the axes are not C-contiguous: >>> x = np.arange(2 * 3 * 4, dtype=np.int8).reshape(2, 3, 4) >>> x.transpose(1, 0, 2).view(np.int16) array([[[ 256, 770], [3340, 3854]], [[1284, 1798], [4368, 4882]], [[2312, 2826], [5396, 5910]]], dtype=int16) # numpy.reciprocal numpy.reciprocal(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'reciprocal'>_ Return the reciprocal of the argument, element-wise. Calculates `1/x`. 
Parameters: **x** array_like Input array. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray Return array. This is a scalar if `x` is a scalar. #### Notes Note This function is not designed to work with integers. For integer arguments with absolute value larger than 1 the result is always zero because of the way Python handles integer division. For integer zero the result is an overflow. #### Examples >>> import numpy as np >>> np.reciprocal(2.) 0.5 >>> np.reciprocal([1, 2., 3.33]) array([ 1. , 0.5 , 0.3003003]) # numpy.record.all method record.all() Scalar method identical to the corresponding array attribute. Please see [`ndarray.all`](numpy.ndarray.all#numpy.ndarray.all "numpy.ndarray.all"). # numpy.record.any method record.any() Scalar method identical to the corresponding array attribute. Please see [`ndarray.any`](numpy.ndarray.any#numpy.ndarray.any "numpy.ndarray.any"). # numpy.record.argmax method record.argmax() Scalar method identical to the corresponding array attribute. Please see [`ndarray.argmax`](numpy.ndarray.argmax#numpy.ndarray.argmax "numpy.ndarray.argmax"). # numpy.record.argmin method record.argmin() Scalar method identical to the corresponding array attribute. 
Please see [`ndarray.argmin`](numpy.ndarray.argmin#numpy.ndarray.argmin "numpy.ndarray.argmin"). # numpy.record.argsort method record.argsort() Scalar method identical to the corresponding array attribute. Please see [`ndarray.argsort`](numpy.ndarray.argsort#numpy.ndarray.argsort "numpy.ndarray.argsort"). # numpy.record.astype method record.astype() Scalar method identical to the corresponding array attribute. Please see [`ndarray.astype`](numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype"). # numpy.record.base attribute record.base base object # numpy.record.byteswap method record.byteswap() Scalar method identical to the corresponding array attribute. Please see [`ndarray.byteswap`](numpy.ndarray.byteswap#numpy.ndarray.byteswap "numpy.ndarray.byteswap"). # numpy.record.choose method record.choose() Scalar method identical to the corresponding array attribute. Please see [`ndarray.choose`](numpy.ndarray.choose#numpy.ndarray.choose "numpy.ndarray.choose"). # numpy.record.clip method record.clip() Scalar method identical to the corresponding array attribute. Please see [`ndarray.clip`](numpy.ndarray.clip#numpy.ndarray.clip "numpy.ndarray.clip"). # numpy.record.compress method record.compress() Scalar method identical to the corresponding array attribute. Please see [`ndarray.compress`](numpy.ndarray.compress#numpy.ndarray.compress "numpy.ndarray.compress"). # numpy.record.conjugate method record.conjugate() Scalar method identical to the corresponding array attribute. Please see [`ndarray.conjugate`](numpy.ndarray.conjugate#numpy.ndarray.conjugate "numpy.ndarray.conjugate"). # numpy.record.copy method record.copy() Scalar method identical to the corresponding array attribute. Please see [`ndarray.copy`](numpy.ndarray.copy#numpy.ndarray.copy "numpy.ndarray.copy"). # numpy.record.cumprod method record.cumprod() Scalar method identical to the corresponding array attribute. 
Please see [`ndarray.cumprod`](numpy.ndarray.cumprod#numpy.ndarray.cumprod "numpy.ndarray.cumprod"). # numpy.record.cumsum method record.cumsum() Scalar method identical to the corresponding array attribute. Please see [`ndarray.cumsum`](numpy.ndarray.cumsum#numpy.ndarray.cumsum "numpy.ndarray.cumsum"). # numpy.record.data attribute record.data Pointer to start of data. # numpy.record.diagonal method record.diagonal() Scalar method identical to the corresponding array attribute. Please see [`ndarray.diagonal`](numpy.ndarray.diagonal#numpy.ndarray.diagonal "numpy.ndarray.diagonal"). # numpy.record.dump method record.dump() Scalar method identical to the corresponding array attribute. Please see [`ndarray.dump`](numpy.ndarray.dump#numpy.ndarray.dump "numpy.ndarray.dump"). # numpy.record.dumps method record.dumps() Scalar method identical to the corresponding array attribute. Please see [`ndarray.dumps`](numpy.ndarray.dumps#numpy.ndarray.dumps "numpy.ndarray.dumps"). # numpy.record.fill method record.fill() Scalar method identical to the corresponding array attribute. Please see [`ndarray.fill`](numpy.ndarray.fill#numpy.ndarray.fill "numpy.ndarray.fill"). # numpy.record.flags attribute record.flags integer value of flags # numpy.record.flat attribute record.flat A 1-D view of the scalar. # numpy.record.flatten method record.flatten() Scalar method identical to the corresponding array attribute. Please see [`ndarray.flatten`](numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten"). # numpy.record.getfield method record.getfield() Scalar method identical to the corresponding array attribute. Please see [`ndarray.getfield`](numpy.ndarray.getfield#numpy.ndarray.getfield "numpy.ndarray.getfield"). # numpy.record _class_ numpy.record[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/src/multiarray/scalartypes.c.src) A data-type scalar that allows field access as attribute lookup. 
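The attribute-style field access that `numpy.record` provides can be sketched briefly. This is an illustrative example, not part of the original reference; the field names `x` and `y` are made up:

```python
import numpy as np

# np.rec.array builds a recarray whose scalar type is numpy.record,
# so indexing out a single row yields a record scalar.
rec = np.rec.array([(1, 2.5)], dtype=[('x', np.int32), ('y', np.float64)])
r = rec[0]

print(isinstance(r, np.record))  # True
print(r.x, r['x'])               # attribute and item access read the same field
```

Attribute access (`r.x`) and item access (`r['x']`) are interchangeable on a record scalar; the attribute form simply forwards to the named field.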
Attributes: [`T`](numpy.record.t#numpy.record.T "numpy.record.T") Scalar attribute identical to the corresponding array attribute. [`base`](numpy.record.base#numpy.record.base "numpy.record.base") base object [`data`](numpy.record.data#numpy.record.data "numpy.record.data") Pointer to start of data. **device** [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") dtype object [`flags`](numpy.record.flags#numpy.record.flags "numpy.record.flags") integer value of flags [`flat`](numpy.record.flat#numpy.record.flat "numpy.record.flat") A 1-D view of the scalar. [`imag`](numpy.imag#numpy.imag "numpy.imag") The imaginary part of the scalar. **itemset** [`itemsize`](numpy.record.itemsize#numpy.record.itemsize "numpy.record.itemsize") The length of one element in bytes. **nbytes** [`ndim`](numpy.ndim#numpy.ndim "numpy.ndim") The number of array dimensions. **newbyteorder** **ptp** [`real`](numpy.real#numpy.real "numpy.real") The real part of the scalar. [`shape`](numpy.shape#numpy.shape "numpy.shape") Tuple of array dimensions. [`size`](numpy.size#numpy.size "numpy.size") The number of elements in the gentype. [`strides`](numpy.record.strides#numpy.record.strides "numpy.record.strides") Tuple of bytes steps in each dimension. #### Methods [`all`](numpy.record.all#numpy.record.all "numpy.record.all") | Scalar method identical to the corresponding array attribute. ---|--- [`any`](numpy.record.any#numpy.record.any "numpy.record.any") | Scalar method identical to the corresponding array attribute. [`argmax`](numpy.record.argmax#numpy.record.argmax "numpy.record.argmax") | Scalar method identical to the corresponding array attribute. [`argmin`](numpy.record.argmin#numpy.record.argmin "numpy.record.argmin") | Scalar method identical to the corresponding array attribute. [`argsort`](numpy.record.argsort#numpy.record.argsort "numpy.record.argsort") | Scalar method identical to the corresponding array attribute. 
[`astype`](numpy.record.astype#numpy.record.astype "numpy.record.astype") | Scalar method identical to the corresponding array attribute. [`byteswap`](numpy.record.byteswap#numpy.record.byteswap "numpy.record.byteswap") | Scalar method identical to the corresponding array attribute. [`choose`](numpy.record.choose#numpy.record.choose "numpy.record.choose") | Scalar method identical to the corresponding array attribute. [`clip`](numpy.record.clip#numpy.record.clip "numpy.record.clip") | Scalar method identical to the corresponding array attribute. [`compress`](numpy.record.compress#numpy.record.compress "numpy.record.compress") | Scalar method identical to the corresponding array attribute. [`conjugate`](numpy.record.conjugate#numpy.record.conjugate "numpy.record.conjugate") | Scalar method identical to the corresponding array attribute. [`copy`](numpy.record.copy#numpy.record.copy "numpy.record.copy") | Scalar method identical to the corresponding array attribute. [`cumprod`](numpy.record.cumprod#numpy.record.cumprod "numpy.record.cumprod") | Scalar method identical to the corresponding array attribute. [`cumsum`](numpy.record.cumsum#numpy.record.cumsum "numpy.record.cumsum") | Scalar method identical to the corresponding array attribute. [`diagonal`](numpy.record.diagonal#numpy.record.diagonal "numpy.record.diagonal") | Scalar method identical to the corresponding array attribute. [`dump`](numpy.record.dump#numpy.record.dump "numpy.record.dump") | Scalar method identical to the corresponding array attribute. [`dumps`](numpy.record.dumps#numpy.record.dumps "numpy.record.dumps") | Scalar method identical to the corresponding array attribute. [`fill`](numpy.record.fill#numpy.record.fill "numpy.record.fill") | Scalar method identical to the corresponding array attribute. [`flatten`](numpy.record.flatten#numpy.record.flatten "numpy.record.flatten") | Scalar method identical to the corresponding array attribute. 
[`getfield`](numpy.record.getfield#numpy.record.getfield "numpy.record.getfield") | Scalar method identical to the corresponding array attribute. [`item`](numpy.record.item#numpy.record.item "numpy.record.item") | Scalar method identical to the corresponding array attribute. [`max`](numpy.record.max#numpy.record.max "numpy.record.max") | Scalar method identical to the corresponding array attribute. [`mean`](numpy.record.mean#numpy.record.mean "numpy.record.mean") | Scalar method identical to the corresponding array attribute. [`min`](numpy.record.min#numpy.record.min "numpy.record.min") | Scalar method identical to the corresponding array attribute. [`nonzero`](numpy.record.nonzero#numpy.record.nonzero "numpy.record.nonzero") | Scalar method identical to the corresponding array attribute. [`pprint`](numpy.record.pprint#numpy.record.pprint "numpy.record.pprint")() | Pretty-print all fields. [`prod`](numpy.record.prod#numpy.record.prod "numpy.record.prod") | Scalar method identical to the corresponding array attribute. [`put`](numpy.record.put#numpy.record.put "numpy.record.put") | Scalar method identical to the corresponding array attribute. [`ravel`](numpy.record.ravel#numpy.record.ravel "numpy.record.ravel") | Scalar method identical to the corresponding array attribute. [`repeat`](numpy.record.repeat#numpy.record.repeat "numpy.record.repeat") | Scalar method identical to the corresponding array attribute. [`reshape`](numpy.record.reshape#numpy.record.reshape "numpy.record.reshape") | Scalar method identical to the corresponding array attribute. [`resize`](numpy.record.resize#numpy.record.resize "numpy.record.resize") | Scalar method identical to the corresponding array attribute. [`round`](numpy.record.round#numpy.record.round "numpy.record.round") | Scalar method identical to the corresponding array attribute. 
[`searchsorted`](numpy.record.searchsorted#numpy.record.searchsorted "numpy.record.searchsorted") | Scalar method identical to the corresponding array attribute. [`setfield`](numpy.record.setfield#numpy.record.setfield "numpy.record.setfield") | Scalar method identical to the corresponding array attribute. [`setflags`](numpy.record.setflags#numpy.record.setflags "numpy.record.setflags") | Scalar method identical to the corresponding array attribute. [`sort`](numpy.record.sort#numpy.record.sort "numpy.record.sort") | Scalar method identical to the corresponding array attribute. [`squeeze`](numpy.record.squeeze#numpy.record.squeeze "numpy.record.squeeze") | Scalar method identical to the corresponding array attribute. [`std`](numpy.record.std#numpy.record.std "numpy.record.std") | Scalar method identical to the corresponding array attribute. [`sum`](numpy.record.sum#numpy.record.sum "numpy.record.sum") | Scalar method identical to the corresponding array attribute. [`swapaxes`](numpy.record.swapaxes#numpy.record.swapaxes "numpy.record.swapaxes") | Scalar method identical to the corresponding array attribute. [`take`](numpy.record.take#numpy.record.take "numpy.record.take") | Scalar method identical to the corresponding array attribute. [`tofile`](numpy.record.tofile#numpy.record.tofile "numpy.record.tofile") | Scalar method identical to the corresponding array attribute. [`tolist`](numpy.record.tolist#numpy.record.tolist "numpy.record.tolist") | Scalar method identical to the corresponding array attribute. [`tostring`](numpy.record.tostring#numpy.record.tostring "numpy.record.tostring") | Scalar method identical to the corresponding array attribute. [`trace`](numpy.record.trace#numpy.record.trace "numpy.record.trace") | Scalar method identical to the corresponding array attribute. [`transpose`](numpy.record.transpose#numpy.record.transpose "numpy.record.transpose") | Scalar method identical to the corresponding array attribute. 
[`var`](numpy.record.var#numpy.record.var "numpy.record.var") | Scalar method identical to the corresponding array attribute. [`view`](numpy.record.view#numpy.record.view "numpy.record.view") | Scalar method identical to the corresponding array attribute. **conj** | ---|--- **to_device** | **tobytes** | # numpy.record.item method record.item() Scalar method identical to the corresponding array attribute. Please see [`ndarray.item`](numpy.ndarray.item#numpy.ndarray.item "numpy.ndarray.item"). # numpy.record.itemsize attribute record.itemsize The length of one element in bytes. # numpy.record.max method record.max() Scalar method identical to the corresponding array attribute. Please see [`ndarray.max`](numpy.ndarray.max#numpy.ndarray.max "numpy.ndarray.max"). # numpy.record.mean method record.mean() Scalar method identical to the corresponding array attribute. Please see [`ndarray.mean`](numpy.ndarray.mean#numpy.ndarray.mean "numpy.ndarray.mean"). # numpy.record.min method record.min() Scalar method identical to the corresponding array attribute. Please see [`ndarray.min`](numpy.ndarray.min#numpy.ndarray.min "numpy.ndarray.min"). # numpy.record.nonzero method record.nonzero() Scalar method identical to the corresponding array attribute. Please see [`ndarray.nonzero`](numpy.ndarray.nonzero#numpy.ndarray.nonzero "numpy.ndarray.nonzero"). # numpy.record.pprint method record.pprint()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/records.py#L264-L271) Pretty-print all fields. # numpy.record.prod method record.prod() Scalar method identical to the corresponding array attribute. Please see [`ndarray.prod`](numpy.ndarray.prod#numpy.ndarray.prod "numpy.ndarray.prod"). # numpy.record.put method record.put() Scalar method identical to the corresponding array attribute. Please see [`ndarray.put`](numpy.ndarray.put#numpy.ndarray.put "numpy.ndarray.put"). # numpy.record.ravel method record.ravel() Scalar method identical to the corresponding array attribute. 
Please see [`ndarray.ravel`](numpy.ndarray.ravel#numpy.ndarray.ravel "numpy.ndarray.ravel"). # numpy.record.repeat method record.repeat() Scalar method identical to the corresponding array attribute. Please see [`ndarray.repeat`](numpy.ndarray.repeat#numpy.ndarray.repeat "numpy.ndarray.repeat"). # numpy.record.reshape method record.reshape() Scalar method identical to the corresponding array attribute. Please see [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape"). # numpy.record.resize method record.resize() Scalar method identical to the corresponding array attribute. Please see [`ndarray.resize`](numpy.ndarray.resize#numpy.ndarray.resize "numpy.ndarray.resize"). # numpy.record.round method record.round() Scalar method identical to the corresponding array attribute. Please see [`ndarray.round`](numpy.ndarray.round#numpy.ndarray.round "numpy.ndarray.round"). # numpy.record.searchsorted method record.searchsorted() Scalar method identical to the corresponding array attribute. Please see [`ndarray.searchsorted`](numpy.ndarray.searchsorted#numpy.ndarray.searchsorted "numpy.ndarray.searchsorted"). # numpy.record.setfield method record.setfield() Scalar method identical to the corresponding array attribute. Please see [`ndarray.setfield`](numpy.ndarray.setfield#numpy.ndarray.setfield "numpy.ndarray.setfield"). # numpy.record.setflags method record.setflags() Scalar method identical to the corresponding array attribute. Please see [`ndarray.setflags`](numpy.ndarray.setflags#numpy.ndarray.setflags "numpy.ndarray.setflags"). # numpy.record.sort method record.sort() Scalar method identical to the corresponding array attribute. Please see [`ndarray.sort`](numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort"). # numpy.record.squeeze method record.squeeze() Scalar method identical to the corresponding array attribute. Please see [`ndarray.squeeze`](numpy.ndarray.squeeze#numpy.ndarray.squeeze "numpy.ndarray.squeeze"). 
# numpy.record.std method record.std() Scalar method identical to the corresponding array attribute. Please see [`ndarray.std`](numpy.ndarray.std#numpy.ndarray.std "numpy.ndarray.std"). # numpy.record.strides attribute record.strides Tuple of bytes steps in each dimension. # numpy.record.sum method record.sum() Scalar method identical to the corresponding array attribute. Please see [`ndarray.sum`](numpy.ndarray.sum#numpy.ndarray.sum "numpy.ndarray.sum"). # numpy.record.swapaxes method record.swapaxes() Scalar method identical to the corresponding array attribute. Please see [`ndarray.swapaxes`](numpy.ndarray.swapaxes#numpy.ndarray.swapaxes "numpy.ndarray.swapaxes"). # numpy.record.T attribute record.T Scalar attribute identical to the corresponding array attribute. Please see [`ndarray.T`](numpy.ndarray.t#numpy.ndarray.T "numpy.ndarray.T"). # numpy.record.take method record.take() Scalar method identical to the corresponding array attribute. Please see [`ndarray.take`](numpy.ndarray.take#numpy.ndarray.take "numpy.ndarray.take"). # numpy.record.tofile method record.tofile() Scalar method identical to the corresponding array attribute. Please see [`ndarray.tofile`](numpy.ndarray.tofile#numpy.ndarray.tofile "numpy.ndarray.tofile"). # numpy.record.tolist method record.tolist() Scalar method identical to the corresponding array attribute. Please see [`ndarray.tolist`](numpy.ndarray.tolist#numpy.ndarray.tolist "numpy.ndarray.tolist"). # numpy.record.tostring method record.tostring() Scalar method identical to the corresponding array attribute. Please see [`ndarray.tostring`](numpy.ndarray.tostring#numpy.ndarray.tostring "numpy.ndarray.tostring"). # numpy.record.trace method record.trace() Scalar method identical to the corresponding array attribute. Please see [`ndarray.trace`](numpy.ndarray.trace#numpy.ndarray.trace "numpy.ndarray.trace"). # numpy.record.transpose method record.transpose() Scalar method identical to the corresponding array attribute. 
Please see [`ndarray.transpose`](numpy.ndarray.transpose#numpy.ndarray.transpose "numpy.ndarray.transpose"). # numpy.record.var method record.var() Scalar method identical to the corresponding array attribute. Please see [`ndarray.var`](numpy.ndarray.var#numpy.ndarray.var "numpy.ndarray.var"). # numpy.record.view method record.view() Scalar method identical to the corresponding array attribute. Please see [`ndarray.view`](numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view"). # numpy.remainder numpy.remainder(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = <ufunc 'remainder'> Returns the element-wise remainder of division. Computes the remainder complementary to the [`floor_divide`](numpy.floor_divide#numpy.floor_divide "numpy.floor_divide") function. It is equivalent to the Python modulus operator `x1 % x2` and has the same sign as the divisor `x2`. The MATLAB function equivalent to `np.remainder` is `mod`. Warning This should not be confused with: * Python 3.7’s [`math.remainder`](https://docs.python.org/3/library/math.html#math.remainder "\(in Python v3.13\)") and C’s `remainder`, which compute the IEEE remainder, the complement to `round(x1 / x2)`. * The MATLAB `rem` function and the C `%` operator, which are the complement to `int(x1 / x2)`. Parameters: **x1** array_like Dividend array. **x2** array_like Divisor array. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input.
At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray The element-wise remainder of the quotient `floor_divide(x1, x2)`. This is a scalar if both `x1` and `x2` are scalars. See also [`floor_divide`](numpy.floor_divide#numpy.floor_divide "numpy.floor_divide") Equivalent of Python `//` operator. [`divmod`](numpy.divmod#numpy.divmod "numpy.divmod") Simultaneous floor division and remainder. [`fmod`](numpy.fmod#numpy.fmod "numpy.fmod") Equivalent of the MATLAB `rem` function. [`divide`](numpy.divide#numpy.divide "numpy.divide"), [`floor`](numpy.floor#numpy.floor "numpy.floor") #### Notes Returns 0 when `x2` is 0 and both `x1` and `x2` are (arrays of) integers. `mod` is an alias of `remainder`. #### Examples >>> import numpy as np >>> np.remainder([4, 7], [2, 3]) array([0, 1]) >>> np.remainder(np.arange(7), 5) array([0, 1, 2, 3, 4, 0, 1]) The `%` operator can be used as a shorthand for `np.remainder` on ndarrays. >>> x1 = np.arange(7) >>> x1 % 5 array([0, 1, 2, 3, 4, 0, 1]) # numpy.repeat numpy.repeat(_a_ , _repeats_ , _axis =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L462-L506) Repeat each element of an array after itself. Parameters: **a** array_like Input array. **repeats** int or array of ints The number of repetitions for each element. `repeats` is broadcasted to fit the shape of the given axis. **axis** int, optional The axis along which to repeat values. By default, use the flattened input array, and return a flat output array. Returns: **repeated_array** ndarray Output array which has the same shape as `a`, except along the given axis.
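The distinction between per-element repetition and whole-array tiling can be sketched as follows (a minimal illustration, not part of the original reference):

```python
import numpy as np

x = np.array([1, 2, 3])

# repeat duplicates each element in place...
print(np.repeat(x, 2))   # [1 1 2 2 3 3]

# ...whereas tile concatenates whole copies of the array.
print(np.tile(x, 2))     # [1 2 3 1 2 3]
```

Reaching for the wrong one of the two is a common source of shape bugs; `repeat` grows an axis element by element, `tile` grows it copy by copy.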
See also [`tile`](numpy.tile#numpy.tile "numpy.tile") Tile an array. [`unique`](numpy.unique#numpy.unique "numpy.unique") Find the unique elements of an array. #### Examples >>> import numpy as np >>> np.repeat(3, 4) array([3, 3, 3, 3]) >>> x = np.array([[1,2],[3,4]]) >>> np.repeat(x, 2) array([1, 1, 2, 2, 3, 3, 4, 4]) >>> np.repeat(x, 3, axis=1) array([[1, 1, 1, 2, 2, 2], [3, 3, 3, 4, 4, 4]]) >>> np.repeat(x, [1, 2], axis=0) array([[1, 2], [3, 4], [3, 4]]) # numpy.require numpy.require(_a_ , _dtype =None_, _requirements =None_, _*_ , _like =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_asarray.py#L27-L132) Return an ndarray of the provided type that satisfies requirements. This function is useful to be sure that an array with the correct flags is returned for passing to compiled code (perhaps through ctypes). Parameters: **a** array_like The object to be converted to a type-and-requirement-satisfying array. **dtype** data-type The required data-type. If None preserve the current dtype. If your application requires the data to be in native byteorder, include a byteorder specification as a part of the dtype specification. **requirements** str or sequence of str The requirements list can be any of the following * ‘F_CONTIGUOUS’ (‘F’) - ensure a Fortran-contiguous array * ‘C_CONTIGUOUS’ (‘C’) - ensure a C-contiguous array * ‘ALIGNED’ (‘A’) - ensure a data-type aligned array * ‘WRITEABLE’ (‘W’) - ensure a writable array * ‘OWNDATA’ (‘O’) - ensure an array that owns its own data * ‘ENSUREARRAY’, (‘E’) - ensure a base array, instead of a subclass **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. 
Returns: **out** ndarray Array with specified requirements and type if given. See also [`asarray`](numpy.asarray#numpy.asarray "numpy.asarray") Convert input to an ndarray. [`asanyarray`](numpy.asanyarray#numpy.asanyarray "numpy.asanyarray") Convert to an ndarray, but pass through ndarray subclasses. [`ascontiguousarray`](numpy.ascontiguousarray#numpy.ascontiguousarray "numpy.ascontiguousarray") Convert input to a contiguous array. [`asfortranarray`](numpy.asfortranarray#numpy.asfortranarray "numpy.asfortranarray") Convert input to an ndarray with column-major memory order. [`ndarray.flags`](numpy.ndarray.flags#numpy.ndarray.flags "numpy.ndarray.flags") Information about the memory layout of the array. #### Notes The returned array will be guaranteed to have the listed requirements by making a copy if needed. #### Examples >>> import numpy as np >>> x = np.arange(6).reshape(2,3) >>> x.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : False WRITEABLE : True ALIGNED : True WRITEBACKIFCOPY : False >>> y = np.require(x, dtype=np.float32, requirements=['A', 'O', 'W', 'F']) >>> y.flags C_CONTIGUOUS : False F_CONTIGUOUS : True OWNDATA : True WRITEABLE : True ALIGNED : True WRITEBACKIFCOPY : False # numpy.reshape numpy.reshape(_a_ , _/_ , _shape =None_, _order ='C'_, _*_ , _newshape =None_, _copy =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L211-L324) Gives a new shape to an array without changing its data. Parameters: **a** array_like Array to be reshaped. **shape** int or tuple of ints The new shape should be compatible with the original shape. If an integer, then the result will be a 1-D array of that length. One shape dimension can be -1. In this case, the value is inferred from the length of the array and remaining dimensions. **order**{‘C’, ‘F’, ‘A’}, optional Read the elements of `a` using this index order, and place the elements into the reshaped array using this index order. 
‘C’ means to read / write the elements using C-like index order, with the last axis index changing fastest, back to the first axis index changing slowest. ‘F’ means to read / write the elements using Fortran-like index order, with the first index changing fastest, and the last index changing slowest. Note that the ‘C’ and ‘F’ options take no account of the memory layout of the underlying array, and only refer to the order of indexing. ‘A’ means to read / write the elements in Fortran-like index order if `a` is Fortran _contiguous_ in memory, C-like order otherwise. **newshape** int or tuple of ints Deprecated since version 2.1: Replaced by `shape` argument. Retained for backward compatibility. **copy** bool, optional If `True`, then the array data is copied. If `None`, a copy will only be made if it’s required by `order`. For `False` it raises a `ValueError` if a copy cannot be avoided. Default: `None`. Returns: **reshaped_array** ndarray This will be a new view object if possible; otherwise, it will be a copy. Note there is no guarantee of the _memory layout_ (C- or Fortran- contiguous) of the returned array. See also [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") Equivalent method. #### Notes It is not always possible to change the shape of an array without copying the data. The `order` keyword gives the index ordering both for _fetching_ the values from `a`, and then _placing_ the values into the output array. For example, let’s say you have an array: >>> a = np.arange(6).reshape((3, 2)) >>> a array([[0, 1], [2, 3], [4, 5]]) You can think of reshaping as first raveling the array (using the given index order), then inserting the elements from the raveled array into the new array using the same kind of index ordering as was used for the raveling. 
>>> np.reshape(a, (2, 3)) # C-like index ordering array([[0, 1, 2], [3, 4, 5]]) >>> np.reshape(np.ravel(a), (2, 3)) # equivalent to C ravel then C reshape array([[0, 1, 2], [3, 4, 5]]) >>> np.reshape(a, (2, 3), order='F') # Fortran-like index ordering array([[0, 4, 3], [2, 1, 5]]) >>> np.reshape(np.ravel(a, order='F'), (2, 3), order='F') array([[0, 4, 3], [2, 1, 5]]) #### Examples >>> import numpy as np >>> a = np.array([[1,2,3], [4,5,6]]) >>> np.reshape(a, 6) array([1, 2, 3, 4, 5, 6]) >>> np.reshape(a, 6, order='F') array([1, 4, 2, 5, 3, 6]) >>> np.reshape(a, (3,-1)) # the unspecified value is inferred to be 2 array([[1, 2], [3, 4], [5, 6]]) # numpy.resize numpy.resize(_a_ , _new_shape_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L1534-L1614) Return a new array with the specified shape. If the new array is larger than the original array, then the new array is filled with repeated copies of `a`. Note that this behavior is different from a.resize(new_shape) which fills with zeros instead of repeated copies of `a`. Parameters: **a** array_like Array to be resized. **new_shape** int or tuple of int Shape of resized array. Returns: **reshaped_array** ndarray The new array is formed from the data in the old array, repeated if necessary to fill out the required number of elements. The data are repeated iterating over the array in C-order. See also [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") Reshape an array without changing the total size. [`numpy.pad`](numpy.pad#numpy.pad "numpy.pad") Enlarge and pad an array. [`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat") Repeat elements of an array. [`ndarray.resize`](numpy.ndarray.resize#numpy.ndarray.resize "numpy.ndarray.resize") resize an array in-place. #### Notes When the total size of the array does not change [`reshape`](numpy.reshape#numpy.reshape "numpy.reshape") should be used. 
In most other cases either indexing (to reduce the size) or padding (to increase the size) may be a more appropriate solution. Warning: This functionality does **not** consider axes separately, i.e. it does not apply interpolation/extrapolation. It fills the return array with the required number of elements, iterating over `a` in C-order, disregarding axes (and cycling back from the start if the new shape is larger). This functionality is therefore not suitable to resize images, or data where each axis represents a separate and distinct entity. #### Examples >>> import numpy as np >>> a = np.array([[0,1],[2,3]]) >>> np.resize(a,(2,3)) array([[0, 1, 2], [3, 0, 1]]) >>> np.resize(a,(1,4)) array([[0, 1, 2, 3]]) >>> np.resize(a,(2,4)) array([[0, 1, 2, 3], [0, 1, 2, 3]]) # numpy.result_type numpy.result_type(_* arrays_and_dtypes_) Returns the type that results from applying the NumPy type promotion rules to the arguments. Type promotion in NumPy works similarly to the rules in languages like C++, with some slight differences. When both scalars and arrays are used, the array’s type takes precedence and the actual value of the scalar is taken into account. For example, calculating 3*a, where a is an array of 32-bit floats, intuitively should result in a 32-bit float output. If the 3 is a 32-bit integer, the NumPy rules indicate it can’t convert losslessly into a 32-bit float, so a 64-bit float should be the result type. By examining the value of the constant, ‘3’, we see that it fits in an 8-bit integer, which can be cast losslessly into the 32-bit float. Parameters: **arrays_and_dtypes** list of arrays and dtypes The operands of some operation whose result type is needed. Returns: **out** dtype The result type. 
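The scalar-versus-array promotion behaviour described for `result_type` can be checked directly; a minimal sketch:

```python
import numpy as np

a = np.zeros(3, dtype=np.float32)

# A Python int scalar adapts to the array's dtype rather than
# forcing a wider result type.
print(np.result_type(3, a))      # float32
print((3 * a).dtype)             # float32

# With two dtypes (no values involved) the ordinary promotion rules apply.
print(np.result_type(np.int64, np.float32))   # float64
```

The first two lines show the intuition from the description above: multiplying a float32 array by the literal `3` stays float32, while mixing the int64 and float32 *dtypes* themselves promotes to float64.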
See also

[`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), [`promote_types`](numpy.promote_types#numpy.promote_types "numpy.promote_types"), [`min_scalar_type`](numpy.min_scalar_type#numpy.min_scalar_type "numpy.min_scalar_type"), [`can_cast`](numpy.can_cast#numpy.can_cast "numpy.can_cast")

#### Notes

The specific algorithm used is as follows. The categories are determined by first checking which of boolean, integer (int/uint), or floating point (float/complex) is the maximum kind among all the arrays and the scalars. If there are only scalars, or the maximum category of the scalars is higher than the maximum category of the arrays, the data types are combined with [`promote_types`](numpy.promote_types#numpy.promote_types "numpy.promote_types") to produce the return value. Otherwise, [`min_scalar_type`](numpy.min_scalar_type#numpy.min_scalar_type "numpy.min_scalar_type") is called on each scalar, and the resulting data types are all combined with [`promote_types`](numpy.promote_types#numpy.promote_types "numpy.promote_types") to produce the return value.

The set of int values is not a subset of the uint values for types with the same number of bits, something not reflected in [`min_scalar_type`](numpy.min_scalar_type#numpy.min_scalar_type "numpy.min_scalar_type"), but handled as a special case in `result_type`.

#### Examples

    >>> import numpy as np
    >>> np.result_type(3, np.arange(7, dtype='i1'))
    dtype('int8')
    >>> np.result_type('i4', 'c8')
    dtype('complex128')
    >>> np.result_type(3.0, -2)
    dtype('float64')

# numpy.right_shift

numpy.right_shift(_x1_, _x2_, _/_, _out=None_, _*_, _where=True_, _casting='same_kind'_, _order='K'_, _dtype=None_, _subok=True_[, _signature_]) _= <ufunc 'right_shift'>_

Shift the bits of an integer to the right. Bits are shifted to the right by `x2` places. Because the internal representation of numbers is in binary format, this operation is equivalent to dividing `x1` by `2**x2`.

Parameters:

**x1** array_like, int Input values.
**x2** array_like, int Number of bits to remove at the right of `x1`. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output).

**out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns:

**out** ndarray, int Return `x1` with bits shifted `x2` times to the right. This is a scalar if both `x1` and `x2` are scalars.

See also

[`left_shift`](numpy.left_shift#numpy.left_shift "numpy.left_shift") Shift the bits of an integer to the left.

[`binary_repr`](numpy.binary_repr#numpy.binary_repr "numpy.binary_repr") Return the binary representation of the input number as a string.

#### Examples

    >>> import numpy as np
    >>> np.binary_repr(10)
    '1010'
    >>> np.right_shift(10, 1)
    5
    >>> np.binary_repr(5)
    '101'
    >>> np.right_shift(10, [1,2,3])
    array([5, 2, 1])

The `>>` operator can be used as a shorthand for `np.right_shift` on ndarrays.

    >>> x1 = 10
    >>> x2 = np.array([1,2,3])
    >>> x1 >> x2
    array([5, 2, 1])

# numpy.rint

numpy.rint(_x_, _/_, _out=None_, _*_, _where=True_, _casting='same_kind'_, _order='K'_, _dtype=None_, _subok=True_[, _signature_]) _= <ufunc 'rint'>_

Round elements of the array to the nearest integer.

Parameters:

**x** array_like Input array.
**out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** ndarray or scalar Output array is same shape and type as `x`. This is a scalar if `x` is a scalar. See also [`fix`](numpy.fix#numpy.fix "numpy.fix"), [`ceil`](numpy.ceil#numpy.ceil "numpy.ceil"), [`floor`](numpy.floor#numpy.floor "numpy.floor"), [`trunc`](numpy.trunc#numpy.trunc "numpy.trunc") #### Notes For values exactly halfway between rounded decimal values, NumPy rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0, -0.5 and 0.5 round to 0.0, etc. #### Examples >>> import numpy as np >>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) >>> np.rint(a) array([-2., -2., -0., 0., 2., 2., 2.]) # numpy.roll numpy.roll(_a_ , _shift_ , _axis =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L1185-L1288) Roll array elements along a given axis. Elements that roll beyond the last position are re-introduced at the first. Parameters: **a** array_like Input array. **shift** int or tuple of ints The number of places by which elements are shifted. If a tuple, then `axis` must be a tuple of the same size, and each of the given axes is shifted by the corresponding number. 
If an int while `axis` is a tuple of ints, then the same value is used for all given axes.

**axis** int or tuple of ints, optional Axis or axes along which elements are shifted. By default, the array is flattened before shifting, after which the original shape is restored.

Returns:

**res** ndarray Output array, with the same shape as `a`.

See also

[`rollaxis`](numpy.rollaxis#numpy.rollaxis "numpy.rollaxis") Roll the specified axis backwards, until it lies in a given position.

#### Notes

Supports rolling over multiple dimensions simultaneously.

#### Examples

    >>> import numpy as np
    >>> x = np.arange(10)
    >>> np.roll(x, 2)
    array([8, 9, 0, 1, 2, 3, 4, 5, 6, 7])
    >>> np.roll(x, -2)
    array([2, 3, 4, 5, 6, 7, 8, 9, 0, 1])

    >>> x2 = np.reshape(x, (2, 5))
    >>> x2
    array([[0, 1, 2, 3, 4],
           [5, 6, 7, 8, 9]])
    >>> np.roll(x2, 1)
    array([[9, 0, 1, 2, 3],
           [4, 5, 6, 7, 8]])
    >>> np.roll(x2, -1)
    array([[1, 2, 3, 4, 5],
           [6, 7, 8, 9, 0]])
    >>> np.roll(x2, 1, axis=0)
    array([[5, 6, 7, 8, 9],
           [0, 1, 2, 3, 4]])
    >>> np.roll(x2, -1, axis=0)
    array([[5, 6, 7, 8, 9],
           [0, 1, 2, 3, 4]])
    >>> np.roll(x2, 1, axis=1)
    array([[4, 0, 1, 2, 3],
           [9, 5, 6, 7, 8]])
    >>> np.roll(x2, -1, axis=1)
    array([[1, 2, 3, 4, 0],
           [6, 7, 8, 9, 5]])
    >>> np.roll(x2, (1, 1), axis=(1, 0))
    array([[9, 5, 6, 7, 8],
           [4, 0, 1, 2, 3]])
    >>> np.roll(x2, (2, 1), axis=(1, 0))
    array([[8, 9, 5, 6, 7],
           [3, 4, 0, 1, 2]])

# numpy.rollaxis

numpy.rollaxis(_a_, _axis_, _start=0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L1295-L1383)

Roll the specified axis backwards, until it lies in a given position.

This function continues to be supported for backward compatibility, but you should prefer [`moveaxis`](numpy.moveaxis#numpy.moveaxis "numpy.moveaxis"). The [`moveaxis`](numpy.moveaxis#numpy.moveaxis "numpy.moveaxis") function was added in NumPy 1.11.

Parameters:

**a** ndarray Input array.

**axis** int The axis to be rolled. The positions of the other axes do not change relative to one another.
**start** int, optional When `start <= axis`, the axis is rolled back until it lies in this position. When `start > axis`, the axis is rolled until it lies before this position. The default, 0, results in a “complete” roll. The following table describes how negative values of `start` are interpreted:

`start` | Normalized `start`
---|---
`-(arr.ndim+1)` | raise `AxisError`
`-arr.ndim` | 0
⋮ | ⋮
`-1` | `arr.ndim-1`
`0` | `0`
⋮ | ⋮
`arr.ndim` | `arr.ndim`
`arr.ndim + 1` | raise `AxisError`

Returns:

**res** ndarray For NumPy >= 1.10.0 a view of `a` is always returned. For earlier NumPy versions a view of `a` is returned only if the order of the axes is changed, otherwise the input array is returned.

See also

[`moveaxis`](numpy.moveaxis#numpy.moveaxis "numpy.moveaxis") Move array axes to new positions.

[`roll`](numpy.roll#numpy.roll "numpy.roll") Roll the elements of an array by a number of positions along a given axis.

#### Examples

    >>> import numpy as np
    >>> a = np.ones((3,4,5,6))
    >>> np.rollaxis(a, 3, 1).shape
    (3, 6, 4, 5)
    >>> np.rollaxis(a, 2).shape
    (5, 3, 4, 6)
    >>> np.rollaxis(a, 1, 4).shape
    (3, 5, 6, 4)

# numpy.roots

numpy.roots(_p_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_polynomial_impl.py#L163-L253)

Return the roots of a polynomial with coefficients given in p.

Note

This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials-package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials).

The values in the rank-1 array `p` are coefficients of a polynomial. If the length of `p` is n+1 then the polynomial is described by:

    p[0] * x**n + p[1] * x**(n-1) + ... + p[n-1]*x + p[n]

Parameters:

**p** array_like Rank-1 array of polynomial coefficients.

Returns:

**out** ndarray An array containing the roots of the polynomial.
Raises: ValueError When `p` cannot be converted to a rank-1 array. See also [`poly`](numpy.poly#numpy.poly "numpy.poly") Find the coefficients of a polynomial with a given sequence of roots. [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval") Compute polynomial values. [`polyfit`](numpy.polyfit#numpy.polyfit "numpy.polyfit") Least squares polynomial fit. [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d") A one-dimensional polynomial class. #### Notes The algorithm relies on computing the eigenvalues of the companion matrix [1]. #### References [1] R. A. Horn & C. R. Johnson, _Matrix Analysis_. Cambridge, UK: Cambridge University Press, 1999, pp. 146-7. #### Examples >>> import numpy as np >>> coeff = [3.2, 2, 1] >>> np.roots(coeff) array([-0.3125+0.46351241j, -0.3125-0.46351241j]) # numpy.rot90 numpy.rot90(_m_ , _k =1_, _axes =(0, 1)_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L153-L241) Rotate an array by 90 degrees in the plane specified by axes. Rotation direction is from the first towards the second axis. This means for a 2D array with the default `k` and `axes`, the rotation will be counterclockwise. Parameters: **m** array_like Array of two or more dimensions. **k** integer Number of times the array is rotated by 90 degrees. **axes**(2,) array_like The array is rotated in the plane defined by the axes. Axes must be different. Returns: **y** ndarray A rotated view of `m`. See also [`flip`](numpy.flip#numpy.flip "numpy.flip") Reverse the order of elements in an array along the given axis. [`fliplr`](numpy.fliplr#numpy.fliplr "numpy.fliplr") Flip an array horizontally. [`flipud`](numpy.flipud#numpy.flipud "numpy.flipud") Flip an array vertically. 
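As a quick sanity check of the rotation-direction convention stated above (rotation runs from the first axis towards the second), a short sketch:

```python
import numpy as np

m = np.arange(6).reshape(2, 3)

# swapping the two axes reverses the rotation direction
assert np.array_equal(np.rot90(m, k=1, axes=(1, 0)),
                      np.rot90(m, k=-1, axes=(0, 1)))

# four quarter turns return the original array
assert np.array_equal(np.rot90(m, k=4), m)
```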
#### Notes

`rot90(m, k=1, axes=(1,0))` is the reverse of `rot90(m, k=1, axes=(0,1))`.

`rot90(m, k=1, axes=(1,0))` is equivalent to `rot90(m, k=-1, axes=(0,1))`.

#### Examples

    >>> import numpy as np
    >>> m = np.array([[1,2],[3,4]], int)
    >>> m
    array([[1, 2],
           [3, 4]])
    >>> np.rot90(m)
    array([[2, 4],
           [1, 3]])
    >>> np.rot90(m, 2)
    array([[4, 3],
           [2, 1]])
    >>> m = np.arange(8).reshape((2,2,2))
    >>> np.rot90(m, 1, (1,2))
    array([[[1, 3],
            [0, 2]],
           [[5, 7],
            [4, 6]]])

# numpy.round

numpy.round(_a_, _decimals=0_, _out=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L3618-L3710)

Evenly round to the given number of decimals.

Parameters:

**a** array_like Input data.

**decimals** int, optional Number of decimal places to round to (default: 0). If decimals is negative, it specifies the number of positions to the left of the decimal point.

**out** ndarray, optional Alternative output array in which to place the result. It must have the same shape as the expected output, but the type of the output values will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details.

Returns:

**rounded_array** ndarray An array of the same type as `a`, containing the rounded values. Unless `out` was specified, a new array is created. A reference to the result is returned. The real and imaginary parts of complex numbers are rounded separately. The result of rounding a float is a float.

See also

[`ndarray.round`](numpy.ndarray.round#numpy.ndarray.round "numpy.ndarray.round") equivalent method

[`around`](numpy.around#numpy.around "numpy.around") an alias for this function

[`ceil`](numpy.ceil#numpy.ceil "numpy.ceil"), [`fix`](numpy.fix#numpy.fix "numpy.fix"), [`floor`](numpy.floor#numpy.floor "numpy.floor"), [`rint`](numpy.rint#numpy.rint "numpy.rint"), [`trunc`](numpy.trunc#numpy.trunc "numpy.trunc")

#### Notes

For values exactly halfway between rounded decimal values, NumPy rounds to the nearest even value.
Thus 1.5 and 2.5 round to 2.0, -0.5 and 0.5 round to 0.0, etc.

`np.round` uses a fast but sometimes inexact algorithm to round floating-point datatypes. For positive `decimals` it is equivalent to `np.true_divide(np.rint(a * 10**decimals), 10**decimals)`, which has error due to the inexact representation of decimal fractions in the IEEE floating point standard [1] and errors introduced when scaling by powers of ten. For instance, note the extra “1” in the following:

    >>> np.round(56294995342131.5, 3)
    56294995342131.51

If your goal is to print such values with a fixed number of decimals, it is preferable to use numpy’s float printing routines to limit the number of printed decimals:

    >>> np.format_float_positional(56294995342131.5, precision=3)
    '56294995342131.5'

The float printing routines use an accurate but much more computationally demanding algorithm to compute the number of digits after the decimal point.

Alternatively, Python’s builtin `round` function uses a more accurate but slower algorithm for 64-bit floating point values:

    >>> round(56294995342131.5, 3)
    56294995342131.5
    >>> np.round(16.055, 2), round(16.055, 2)  # equals 16.0549999999999997
    (16.06, 16.05)

#### References

[1] “Lecture Notes on the Status of IEEE 754”, William Kahan.

#### Examples

    >>> import numpy as np
    >>> np.round([0.37, 1.64])
    array([0., 2.])
    >>> np.round([0.37, 1.64], decimals=1)
    array([0.4, 1.6])
    >>> np.round([.5, 1.5, 2.5, 3.5, 4.5])  # rounds to nearest even value
    array([0., 2., 2., 4., 4.])
    >>> np.round([1,2,3,11], decimals=1)  # ndarray of ints is returned
    array([ 1,  2,  3, 11])
    >>> np.round([1,2,3,11], decimals=-1)
    array([ 0,  0,  0, 10])

# numpy.s_

numpy.s_

A nicer way to build up index tuples for arrays.

Note

Use one of the two predefined instances `index_exp` or `s_` rather than directly using `IndexExpression`.

For any index combination, including slicing and axis insertion, `a[indices]` is the same as `a[np.index_exp[indices]]` for any array `a`.
However, `np.index_exp[indices]` can be used anywhere in Python code and returns a tuple of slice objects that can be used in the construction of complex index expressions.

Parameters:

**maketuple** bool If True, always returns a tuple.

See also

`s_` Predefined instance without tuple conversion: `s_ = IndexExpression(maketuple=False)`. The `index_exp` is another predefined instance that always returns a tuple: `index_exp = IndexExpression(maketuple=True)`.

#### Notes

You can do all this with [`slice`](https://docs.python.org/3/library/functions.html#slice "\(in Python v3.13\)") plus a few special objects, but there’s a lot to remember and this version is simpler because it uses the standard array indexing syntax.

#### Examples

    >>> import numpy as np
    >>> np.s_[2::2]
    slice(2, None, 2)
    >>> np.index_exp[2::2]
    (slice(2, None, 2),)
    >>> np.array([0, 1, 2, 3, 4])[np.s_[2::2]]
    array([2, 4])

# numpy.save

numpy.save(_file_, _arr_, _allow_pickle=True_, _fix_imports=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_npyio_impl.py#L501-L582)

Save an array to a binary file in NumPy `.npy` format.

Parameters:

**file** file, str, or pathlib.Path File or filename to which the data is saved. If file is a file-object, then the filename is unchanged. If file is a string or Path, a `.npy` extension will be appended to the filename if it does not already have one.

**arr** array_like Array data to be saved.

**allow_pickle** bool, optional Allow saving object arrays using Python pickles. Reasons for disallowing pickles include security (loading pickled data can execute arbitrary code) and portability (pickled objects may not be loadable on different Python installations, for example if the stored objects require libraries that are not available, and not all pickled data is compatible between different versions of Python). Default: True

**fix_imports** bool, optional The `fix_imports` flag is deprecated and has no effect.
Deprecated since version 2.1: This flag is ignored since NumPy 1.17 and was only needed to support loading some files in Python 2 written in Python 3. See also [`savez`](numpy.savez#numpy.savez "numpy.savez") Save several arrays into a `.npz` archive [`savetxt`](numpy.savetxt#numpy.savetxt "numpy.savetxt"), [`load`](numpy.load#numpy.load "numpy.load") #### Notes For a description of the `.npy` format, see [`numpy.lib.format`](numpy.lib.format#module-numpy.lib.format "numpy.lib.format"). Any data saved to the file is appended to the end of the file. #### Examples >>> import numpy as np >>> from tempfile import TemporaryFile >>> outfile = TemporaryFile() >>> x = np.arange(10) >>> np.save(outfile, x) >>> _ = outfile.seek(0) # Only needed to simulate closing & reopening file >>> np.load(outfile) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> with open('test.npy', 'wb') as f: ... np.save(f, np.array([1, 2])) ... np.save(f, np.array([1, 3])) >>> with open('test.npy', 'rb') as f: ... a = np.load(f) ... b = np.load(f) >>> print(a, b) # [1 2] [1 3] # numpy.savetxt numpy.savetxt(_fname_ , _X_ , _fmt ='%.18e'_, _delimiter =' '_, _newline ='\n'_, _header =''_, _footer =''_, _comments ='# '_, _encoding =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_npyio_impl.py#L1412-L1642) Save an array to a text file. Parameters: **fname** filename, file handle or pathlib.Path If the filename ends in `.gz`, the file is automatically saved in compressed gzip format. [`loadtxt`](numpy.loadtxt#numpy.loadtxt "numpy.loadtxt") understands gzipped files transparently. **X** 1D or 2D array_like Data to be saved to a text file. **fmt** str or sequence of strs, optional A single format (%10.5f), a sequence of formats, or a multi-format string, e.g. ‘Iteration %d – %10.5f’, in which case `delimiter` is ignored. 
For complex `X`, the legal options for `fmt` are: * a single specifier, `fmt='%.4e'`, resulting in numbers formatted like `' (%s+%sj)' % (fmt, fmt)` * a full string specifying every real and imaginary part, e.g. `' %.4e %+.4ej %.4e %+.4ej %.4e %+.4ej'` for 3 columns * a list of specifiers, one per column - in this case, the real and imaginary part must have separate specifiers, e.g. `['%.3e + %.3ej', '(%.15e%+.15ej)']` for 2 columns **delimiter** str, optional String or character separating columns. **newline** str, optional String or character separating lines. **header** str, optional String that will be written at the beginning of the file. **footer** str, optional String that will be written at the end of the file. **comments** str, optional String that will be prepended to the `header` and `footer` strings, to mark them as comments. Default: ‘# ‘, as expected by e.g. `numpy.loadtxt`. **encoding**{None, str}, optional Encoding used to encode the outputfile. Does not apply to output streams. If the encoding is something other than ‘bytes’ or ‘latin1’ you will not be able to load the file in NumPy versions < 1.14. Default is ‘latin1’. See also [`save`](numpy.save#numpy.save "numpy.save") Save an array to a binary file in NumPy `.npy` format [`savez`](numpy.savez#numpy.savez "numpy.savez") Save several arrays into an uncompressed `.npz` archive [`savez_compressed`](numpy.savez_compressed#numpy.savez_compressed "numpy.savez_compressed") Save several arrays into a compressed `.npz` archive #### Notes Further explanation of the `fmt` parameter (`%[flag]width[.precision]specifier`): flags: `-` : left justify `+` : Forces to precede result with + or -. `0` : Left pad the number with zeros instead of space (see width). width: Minimum number of characters to be printed. The value is not truncated if it has more characters. precision: * For integer specifiers (eg. `d,i,o,x`), the minimum number of digits. 
* For `e, E` and `f` specifiers, the number of digits to print after the decimal point. * For `g` and `G`, the maximum number of significant digits. * For `s`, the maximum number of characters. specifiers: `c` : character `d` or `i` : signed decimal integer `e` or `E` : scientific notation with `e` or `E`. `f` : decimal floating point `g,G` : use the shorter of `e,E` or `f` `o` : signed octal `s` : string of characters `u` : unsigned decimal integer `x,X` : unsigned hexadecimal integer This explanation of `fmt` is not complete, for an exhaustive specification see [1]. #### References [1] [Format Specification Mini- Language](https://docs.python.org/library/string.html#format-specification- mini-language), Python Documentation. #### Examples >>> import numpy as np >>> x = y = z = np.arange(0.0,5.0,1.0) >>> np.savetxt('test.out', x, delimiter=',') # X is an array >>> np.savetxt('test.out', (x,y,z)) # x,y,z equal sized 1D arrays >>> np.savetxt('test.out', x, fmt='%1.4e') # use exponential notation # numpy.savez numpy.savez(_file_ , _* args_, _allow_pickle =True_, _** kwds_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_npyio_impl.py#L590-L683) Save several arrays into a single file in uncompressed `.npz` format. Provide arrays as keyword arguments to store them under the corresponding name in the output file: `savez(fn, x=x, y=y)`. If arrays are specified as positional arguments, i.e., `savez(fn, x, y)`, their names will be `arr_0`, `arr_1`, etc. Parameters: **file** file, str, or pathlib.Path Either the filename (string) or an open file (file-like object) where the data will be saved. If file is a string or a Path, the `.npz` extension will be appended to the filename if it is not already there. **args** Arguments, optional Arrays to save to the file. Please use keyword arguments (see `kwds` below) to assign names to arrays. Arrays specified as args will be named “arr_0”, “arr_1”, and so on. 
**allow_pickle** bool, optional Allow saving object arrays using Python pickles. Reasons for disallowing pickles include security (loading pickled data can execute arbitrary code) and portability (pickled objects may not be loadable on different Python installations, for example if the stored objects require libraries that are not available, and not all pickled data is compatible between different versions of Python). Default: True **kwds** Keyword arguments, optional Arrays to save to the file. Each array will be saved to the output file with its corresponding keyword name. Returns: None See also [`save`](numpy.save#numpy.save "numpy.save") Save a single array to a binary file in NumPy format. [`savetxt`](numpy.savetxt#numpy.savetxt "numpy.savetxt") Save an array to a file as plain text. [`savez_compressed`](numpy.savez_compressed#numpy.savez_compressed "numpy.savez_compressed") Save several arrays into a compressed `.npz` archive #### Notes The `.npz` file format is a zipped archive of files named after the variables they contain. The archive is not compressed and each file in the archive contains one variable in `.npy` format. For a description of the `.npy` format, see [`numpy.lib.format`](numpy.lib.format#module-numpy.lib.format "numpy.lib.format"). When opening the saved `.npz` file with [`load`](numpy.load#numpy.load "numpy.load") a [`NpzFile`](numpy.lib.npyio.npzfile#numpy.lib.npyio.NpzFile "numpy.lib.npyio.NpzFile") object is returned. This is a dictionary-like object which can be queried for its list of arrays (with the `.files` attribute), and for the arrays themselves. Keys passed in `kwds` are used as filenames inside the ZIP archive. Therefore, keys should be valid filenames; e.g., avoid keys that begin with `/` or contain `.`. When naming variables with keyword arguments, it is not possible to name a variable `file`, as this would cause the `file` argument to be defined twice in the call to `savez`. 
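The note above that keys become member filenames inside the archive can be verified with the standard `zipfile` module (the file path and key names here are illustrative):

```python
import os
import tempfile
import zipfile

import numpy as np

# an .npz file is an ordinary zip archive of .npy members named after the keys
path = os.path.join(tempfile.mkdtemp(), "data.npz")
np.savez(path, alpha=np.arange(3), beta=np.ones(2))

with zipfile.ZipFile(path) as zf:
    members = sorted(zf.namelist())
# members == ['alpha.npy', 'beta.npy']
```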
#### Examples >>> import numpy as np >>> from tempfile import TemporaryFile >>> outfile = TemporaryFile() >>> x = np.arange(10) >>> y = np.sin(x) Using `savez` with *args, the arrays are saved with default names. >>> np.savez(outfile, x, y) >>> _ = outfile.seek(0) # Only needed to simulate closing & reopening file >>> npzfile = np.load(outfile) >>> npzfile.files ['arr_0', 'arr_1'] >>> npzfile['arr_0'] array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) Using `savez` with **kwds, the arrays are saved with the keyword names. >>> outfile = TemporaryFile() >>> np.savez(outfile, x=x, y=y) >>> _ = outfile.seek(0) >>> npzfile = np.load(outfile) >>> sorted(npzfile.files) ['x', 'y'] >>> npzfile['x'] array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) # numpy.savez_compressed numpy.savez_compressed(_file_ , _* args_, _allow_pickle =True_, _** kwds_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_npyio_impl.py#L691-L763) Save several arrays into a single file in compressed `.npz` format. Provide arrays as keyword arguments to store them under the corresponding name in the output file: `savez_compressed(fn, x=x, y=y)`. If arrays are specified as positional arguments, i.e., `savez_compressed(fn, x, y)`, their names will be `arr_0`, `arr_1`, etc. Parameters: **file** file, str, or pathlib.Path Either the filename (string) or an open file (file-like object) where the data will be saved. If file is a string or a Path, the `.npz` extension will be appended to the filename if it is not already there. **args** Arguments, optional Arrays to save to the file. Please use keyword arguments (see `kwds` below) to assign names to arrays. Arrays specified as args will be named “arr_0”, “arr_1”, and so on. **allow_pickle** bool, optional Allow saving object arrays using Python pickles. 
Reasons for disallowing pickles include security (loading pickled data can execute arbitrary code) and portability (pickled objects may not be loadable on different Python installations, for example if the stored objects require libraries that are not available, and not all pickled data is compatible between different versions of Python). Default: True **kwds** Keyword arguments, optional Arrays to save to the file. Each array will be saved to the output file with its corresponding keyword name. Returns: None See also [`numpy.save`](numpy.save#numpy.save "numpy.save") Save a single array to a binary file in NumPy format. [`numpy.savetxt`](numpy.savetxt#numpy.savetxt "numpy.savetxt") Save an array to a file as plain text. [`numpy.savez`](numpy.savez#numpy.savez "numpy.savez") Save several arrays into an uncompressed `.npz` file format [`numpy.load`](numpy.load#numpy.load "numpy.load") Load the files created by savez_compressed. #### Notes The `.npz` file format is a zipped archive of files named after the variables they contain. The archive is compressed with `zipfile.ZIP_DEFLATED` and each file in the archive contains one variable in `.npy` format. For a description of the `.npy` format, see [`numpy.lib.format`](numpy.lib.format#module- numpy.lib.format "numpy.lib.format"). When opening the saved `.npz` file with [`load`](numpy.load#numpy.load "numpy.load") a [`NpzFile`](numpy.lib.npyio.npzfile#numpy.lib.npyio.NpzFile "numpy.lib.npyio.NpzFile") object is returned. This is a dictionary-like object which can be queried for its list of arrays (with the `.files` attribute), and for the arrays themselves. 
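As a rough illustration of the difference from the uncompressed `savez` format, highly compressible data shrinks dramatically on disk (exact sizes vary; this is a sketch, not a benchmark):

```python
import os
import tempfile

import numpy as np

x = np.zeros((1000, 1000))  # highly compressible data

tmp = tempfile.mkdtemp()
plain = os.path.join(tmp, "plain.npz")
packed = os.path.join(tmp, "packed.npz")

np.savez(plain, x=x)
np.savez_compressed(packed, x=x)

# the deflated archive is far smaller for data like this
assert os.path.getsize(packed) < os.path.getsize(plain)
```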
#### Examples >>> import numpy as np >>> test_array = np.random.rand(3, 2) >>> test_vector = np.random.rand(4) >>> np.savez_compressed('/tmp/123', a=test_array, b=test_vector) >>> loaded = np.load('/tmp/123.npz') >>> print(np.array_equal(test_array, loaded['a'])) True >>> print(np.array_equal(test_vector, loaded['b'])) True # numpy.searchsorted numpy.searchsorted(_a_ , _v_ , _side ='left'_, _sorter =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L1447-L1527) Find indices where elements should be inserted to maintain order. Find the indices into a sorted array `a` such that, if the corresponding elements in `v` were inserted before the indices, the order of `a` would be preserved. Assuming that `a` is sorted: `side` | returned index `i` satisfies ---|--- left | `a[i-1] < v <= a[i]` right | `a[i-1] <= v < a[i]` Parameters: **a** 1-D array_like Input array. If `sorter` is None, then it must be sorted in ascending order, otherwise `sorter` must be an array of indices that sort it. **v** array_like Values to insert into `a`. **side**{‘left’, ‘right’}, optional If ‘left’, the index of the first suitable location found is given. If ‘right’, return the last such index. If there is no suitable index, return either 0 or N (where N is the length of `a`). **sorter** 1-D array_like, optional Optional array of integer indices that sort array a into ascending order. They are typically the result of argsort. Returns: **indices** int or array of ints Array of insertion points with the same shape as `v`, or an integer if `v` is a scalar. See also [`sort`](numpy.sort#numpy.sort "numpy.sort") Return a sorted copy of an array. [`histogram`](numpy.histogram#numpy.histogram "numpy.histogram") Produce histogram from 1-D data. #### Notes Binary search is used to find the required insertion points. As of NumPy 1.4.0 `searchsorted` works with real/complex arrays containing [`nan`](../constants#numpy.nan "numpy.nan") values. 
The enhanced sort order is documented in [`sort`](numpy.sort#numpy.sort "numpy.sort"). This function uses the same algorithm as the builtin python [`bisect.bisect_left`](https://docs.python.org/3/library/bisect.html#bisect.bisect_left "\(in Python v3.13\)") (`side='left'`) and [`bisect.bisect_right`](https://docs.python.org/3/library/bisect.html#bisect.bisect_right "\(in Python v3.13\)") (`side='right'`) functions, which is also vectorized in the `v` argument. #### Examples >>> import numpy as np >>> np.searchsorted([11,12,13,14,15], 13) 2 >>> np.searchsorted([11,12,13,14,15], 13, side='right') 3 >>> np.searchsorted([11,12,13,14,15], [-10, 20, 12, 13]) array([0, 5, 1, 2]) When `sorter` is used, the returned indices refer to the sorted array of `a` and not `a` itself: >>> a = np.array([40, 10, 20, 30]) >>> sorter = np.argsort(a) >>> sorter array([1, 2, 3, 0]) # Indices that would sort the array 'a' >>> result = np.searchsorted(a, 25, sorter=sorter) >>> result 2 >>> a[sorter[result]] 30 # The element at index 2 of the sorted array is 30. # numpy.select numpy.select(_condlist_ , _choicelist_ , _default =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L786-L891) Return an array drawn from elements in choicelist, depending on conditions. Parameters: **condlist** list of bool ndarrays The list of conditions which determine from which array in `choicelist` the output elements are taken. When multiple conditions are satisfied, the first one encountered in `condlist` is used. **choicelist** list of ndarrays The list of arrays from which the output elements are taken. It has to be of the same length as `condlist`. **default** scalar, optional The element inserted in `output` when all conditions evaluate to False. Returns: **output** ndarray The output at position m is the m-th element of the array in `choicelist` where the m-th element of the corresponding array in `condlist` is True. 
See also [`where`](numpy.where#numpy.where "numpy.where") Return elements from one of two arrays depending on condition. [`take`](numpy.take#numpy.take "numpy.take"), [`choose`](numpy.choose#numpy.choose "numpy.choose"), [`compress`](numpy.compress#numpy.compress "numpy.compress"), [`diag`](numpy.diag#numpy.diag "numpy.diag"), [`diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal") #### Examples >>> import numpy as np Beginning with an array of integers from 0 to 5 (inclusive), elements less than `3` are negated, elements greater than `3` are squared, and elements not meeting either of these conditions (exactly `3`) are replaced with a `default` value of `42`. >>> x = np.arange(6) >>> condlist = [x<3, x>3] >>> choicelist = [x, x**2] >>> np.select(condlist, choicelist, 42) array([ 0, 1, 2, 42, 16, 25]) When multiple conditions are satisfied, the first one encountered in `condlist` is used. >>> condlist = [x<=4, x>3] >>> choicelist = [x, x**2] >>> np.select(condlist, choicelist, 55) array([ 0, 1, 2, 3, 4, 25]) # numpy.set_printoptions numpy.set_printoptions(_precision =None_, _threshold =None_, _edgeitems =None_, _linewidth =None_, _suppress =None_, _nanstr =None_, _infstr =None_, _formatter =None_, _sign =None_, _floatmode =None_, _*_ , _legacy =None_, _override_repr =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/arrayprint.py#L113-L295) Set printing options. These options determine the way floating point numbers, arrays and other NumPy objects are displayed. Parameters: **precision** int or None, optional Number of digits of precision for floating point output (default 8). May be None if `floatmode` is not `fixed`, to print as many digits as necessary to uniquely specify the value. **threshold** int, optional Total number of array elements which trigger summarization rather than full repr (default 1000). 
To always use the full repr without summarization, pass [`sys.maxsize`](https://docs.python.org/3/library/sys.html#sys.maxsize "\(in Python v3.13\)"). **edgeitems** int, optional Number of array items in summary at beginning and end of each dimension (default 3). **linewidth** int, optional The number of characters per line for the purpose of inserting line breaks (default 75). **suppress** bool, optional If True, always print floating point numbers using fixed point notation, in which case numbers equal to zero in the current precision will print as zero. If False, then scientific notation is used when absolute value of the smallest number is < 1e-4 or the ratio of the maximum absolute value to the minimum is > 1e3. The default is False. **nanstr** str, optional String representation of floating point not-a-number (default nan). **infstr** str, optional String representation of floating point infinity (default inf). **sign** string, either ‘-’, ‘+’, or ‘ ‘, optional Controls printing of the sign of floating-point types. If ‘+’, always print the sign of positive values. If ‘ ‘, always prints a space (whitespace character) in the sign position of positive values. If ‘-’, omit the sign character of positive values. (default ‘-‘) Changed in version 2.0: The sign parameter can now be an integer type, previously types were floating-point types. **formatter** dict of callables, optional If not None, the keys should indicate the type(s) that the respective formatting function applies to. Callables should return a string. Types that are not specified (by their corresponding keys) are handled by the default formatters. 
Individual types for which a formatter can be set are: * ‘bool’ * ‘int’ * ‘timedelta’ : a [`numpy.timedelta64`](../arrays.scalars#numpy.timedelta64 "numpy.timedelta64") * ‘datetime’ : a [`numpy.datetime64`](../arrays.scalars#numpy.datetime64 "numpy.datetime64") * ‘float’ * ‘longfloat’ : 128-bit floats * ‘complexfloat’ * ‘longcomplexfloat’ : composed of two 128-bit floats * ‘numpystr’ : types [`numpy.bytes_`](../arrays.scalars#numpy.bytes_ "numpy.bytes_") and [`numpy.str_`](../arrays.scalars#numpy.str_ "numpy.str_") * ‘object’ : `np.object_` arrays Other keys that can be used to set a group of types at once are: * ‘all’ : sets all types * ‘int_kind’ : sets ‘int’ * ‘float_kind’ : sets ‘float’ and ‘longfloat’ * ‘complex_kind’ : sets ‘complexfloat’ and ‘longcomplexfloat’ * ‘str_kind’ : sets ‘numpystr’ **floatmode** str, optional Controls the interpretation of the `precision` option for floating-point types. Can take the following values (default maxprec_equal): * ‘fixed’: Always print exactly `precision` fractional digits, even if this would print more or fewer digits than necessary to specify the value uniquely. * ‘unique’: Print the minimum number of fractional digits necessary to represent each value uniquely. Different elements may have a different number of digits. The value of the `precision` option is ignored. * ‘maxprec’: Print at most `precision` fractional digits, but if an element can be uniquely represented with fewer digits only print it with that many. * ‘maxprec_equal’: Print at most `precision` fractional digits, but if every element in the array can be uniquely represented with an equal number of fewer digits, use that many digits for all elements. **legacy** string or `False`, optional If set to the string `'1.13'` enables 1.13 legacy printing mode. This approximates numpy 1.13 print output by including a space in the sign position of floats and different behavior for 0d arrays. This also enables 1.21 legacy printing mode (described below). 
If set to the string `'1.21'` enables 1.21 legacy printing mode. This approximates numpy 1.21 print output of complex structured dtypes by not inserting spaces after commas that separate fields and after colons. If set to `'1.25'` approximates printing of 1.25 which mainly means that numeric scalars are printed without their type information, e.g. as `3.0` rather than `np.float64(3.0)`. If set to `'2.1'`, shape information is not given when arrays are summarized (i.e., multiple elements replaced with `...`). If set to `False`, disables legacy mode. Unrecognized strings will be ignored with a warning for forward compatibility. Changed in version 1.22.0. Changed in version 2.2. **override_repr: callable, optional** If set a passed function will be used for generating arrays’ repr. Other options will be ignored. See also [`get_printoptions`](numpy.get_printoptions#numpy.get_printoptions "numpy.get_printoptions"), [`printoptions`](numpy.printoptions#numpy.printoptions "numpy.printoptions"), [`array2string`](numpy.array2string#numpy.array2string "numpy.array2string") #### Notes `formatter` is always reset with a call to `set_printoptions`. Use [`printoptions`](numpy.printoptions#numpy.printoptions "numpy.printoptions") as a context manager to set the values temporarily. #### Examples Floating point precision can be set: >>> import numpy as np >>> np.set_printoptions(precision=4) >>> np.array([1.123456789]) [1.1235] Long arrays can be summarised: >>> np.set_printoptions(threshold=5) >>> np.arange(10) array([0, 1, 2, ..., 7, 8, 9], shape=(10,)) Small results can be suppressed: >>> eps = np.finfo(float).eps >>> x = np.arange(4.) 
>>> x**2 - (x + eps)**2 array([-4.9304e-32, -4.4409e-16, 0.0000e+00, 0.0000e+00]) >>> np.set_printoptions(suppress=True) >>> x**2 - (x + eps)**2 array([-0., -0., 0., 0.]) A custom formatter can be used to display array elements as desired: >>> np.set_printoptions(formatter={'all':lambda x: 'int: '+str(-x)}) >>> x = np.arange(3) >>> x array([int: 0, int: -1, int: -2]) >>> np.set_printoptions() # formatter gets reset >>> x array([0, 1, 2]) To put back the default options, you can use: >>> np.set_printoptions(edgeitems=3, infstr='inf', ... linewidth=75, nanstr='nan', precision=8, ... suppress=False, threshold=1000, formatter=None) Also to temporarily override options, use [`printoptions`](numpy.printoptions#numpy.printoptions "numpy.printoptions") as a context manager: >>> with np.printoptions(precision=2, suppress=True, threshold=5): ... np.linspace(0, 10, 10) array([ 0. , 1.11, 2.22, ..., 7.78, 8.89, 10. ], shape=(10,)) # numpy.setbufsize numpy.setbufsize(_size_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_ufunc_config.py#L157-L194) Set the size of the buffer used in ufuncs. Changed in version 2.0: The scope of setting the buffer is tied to the [`numpy.errstate`](numpy.errstate#numpy.errstate "numpy.errstate") context. Exiting a `with errstate():` will also restore the bufsize. Parameters: **size** int Size of buffer. Returns: **bufsize** int Previous size of ufunc buffer in bytes. #### Examples When exiting a [`numpy.errstate`](numpy.errstate#numpy.errstate "numpy.errstate") context manager the bufsize is restored: >>> import numpy as np >>> with np.errstate(): ... np.setbufsize(4096) ... print(np.getbufsize()) ... 8192 4096 >>> np.getbufsize() 8192 # numpy.setdiff1d numpy.setdiff1d(_ar1_ , _ar2_ , _assume_unique =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_arraysetops_impl.py#L1177-L1215) Find the set difference of two arrays. Return the unique values in `ar1` that are not in `ar2`. 
Parameters: **ar1** array_like Input array. **ar2** array_like Input comparison array. **assume_unique** bool If True, the input arrays are both assumed to be unique, which can speed up the calculation. Default is False. Returns: **setdiff1d** ndarray 1D array of values in `ar1` that are not in `ar2`. The result is sorted when `assume_unique=False`, but otherwise only sorted if the input is sorted. #### Examples >>> import numpy as np >>> a = np.array([1, 2, 3, 2, 4, 1]) >>> b = np.array([3, 4, 5, 6]) >>> np.setdiff1d(a, b) array([1, 2]) # numpy.seterr numpy.seterr(_all =None_, _divide =None_, _over =None_, _under =None_, _invalid =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_ufunc_config.py#L20-L106) Set how floating-point errors are handled. Note that operations on integer scalar types (such as [`int16`](../arrays.scalars#numpy.int16 "numpy.int16")) are handled like floating point, and are affected by these settings. Parameters: **all**{‘ignore’, ‘warn’, ‘raise’, ‘call’, ‘print’, ‘log’}, optional Set treatment for all types of floating-point errors at once: * ignore: Take no action when the exception occurs. * warn: Print a [`RuntimeWarning`](https://docs.python.org/3/library/exceptions.html#RuntimeWarning "\(in Python v3.13\)") (via the Python [`warnings`](https://docs.python.org/3/library/warnings.html#module-warnings "\(in Python v3.13\)") module). * raise: Raise a [`FloatingPointError`](https://docs.python.org/3/library/exceptions.html#FloatingPointError "\(in Python v3.13\)"). * call: Call a function specified using the [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall") function. * print: Print a warning directly to `stdout`. * log: Record error in a Log object specified by [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall"). The default is not to change the current behavior. **divide**{‘ignore’, ‘warn’, ‘raise’, ‘call’, ‘print’, ‘log’}, optional Treatment for division by zero. 
**over**{‘ignore’, ‘warn’, ‘raise’, ‘call’, ‘print’, ‘log’}, optional Treatment for floating-point overflow. **under**{‘ignore’, ‘warn’, ‘raise’, ‘call’, ‘print’, ‘log’}, optional Treatment for floating-point underflow. **invalid**{‘ignore’, ‘warn’, ‘raise’, ‘call’, ‘print’, ‘log’}, optional Treatment for invalid floating-point operation. Returns: **old_settings** dict Dictionary containing the old settings. See also [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall") Set a callback function for the ‘call’ mode. [`geterr`](numpy.geterr#numpy.geterr "numpy.geterr"), [`geterrcall`](numpy.geterrcall#numpy.geterrcall "numpy.geterrcall"), [`errstate`](numpy.errstate#numpy.errstate "numpy.errstate") #### Notes The floating-point exceptions are defined in the IEEE 754 standard [1]: * Division by zero: infinite result obtained from finite numbers. * Overflow: result too large to be expressed. * Underflow: result so close to zero that some precision was lost. * Invalid operation: result is not an expressible number, typically indicates that a NaN was produced. 
[1] IEEE 754, “IEEE Standard for Floating-Point Arithmetic.” #### Examples >>> import numpy as np >>> orig_settings = np.seterr(all='ignore') # seterr to known value >>> np.int16(32000) * np.int16(3) np.int16(30464) >>> np.seterr(over='raise') {'divide': 'ignore', 'over': 'ignore', 'under': 'ignore', 'invalid': 'ignore'} >>> old_settings = np.seterr(all='warn', over='raise') >>> np.int16(32000) * np.int16(3) Traceback (most recent call last): File "<stdin>", line 1, in <module> FloatingPointError: overflow encountered in scalar multiply >>> old_settings = np.seterr(all='print') >>> np.geterr() {'divide': 'print', 'over': 'print', 'under': 'print', 'invalid': 'print'} >>> np.int16(32000) * np.int16(3) np.int16(30464) >>> np.seterr(**orig_settings) # restore original {'divide': 'print', 'over': 'print', 'under': 'print', 'invalid': 'print'} # numpy.seterrcall numpy.seterrcall(_func_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/_ufunc_config.py#L217-L304) Set the floating-point error callback function or log object. There are two ways to capture floating-point error messages. The first is to set the error-handler to ‘call’, using [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"). Then, set the function to call using this function. The second is to set the error-handler to ‘log’, using [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"). Floating-point errors then trigger a call to the ‘write’ method of the provided object. Parameters: **func** callable f(err, flag) or object with write method Function to call upon floating-point errors (‘call’-mode) or object whose ‘write’ method is used to log such messages (‘log’-mode). The call function takes two arguments. The first is a string describing the type of error (such as “divide by zero”, “overflow”, “underflow”, or “invalid value”), and the second is the status flag.
The flag is a byte, whose four least-significant bits indicate the type of error, one of “divide”, “over”, “under”, “invalid”: [0 0 0 0 divide over under invalid] In other words, `flags = divide + 2*over + 4*under + 8*invalid`. If an object is provided, its write method should take one argument, a string. Returns: **h** callable, log instance or None The old error handler. See also [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"), [`geterr`](numpy.geterr#numpy.geterr "numpy.geterr"), [`geterrcall`](numpy.geterrcall#numpy.geterrcall "numpy.geterrcall") #### Examples Callback upon error: >>> def err_handler(type, flag): ... print("Floating point error (%s), with flag %s" % (type, flag)) ... >>> import numpy as np >>> orig_handler = np.seterrcall(err_handler) >>> orig_err = np.seterr(all='call') >>> np.array([1, 2, 3]) / 0.0 Floating point error (divide by zero), with flag 1 array([inf, inf, inf]) >>> np.seterrcall(orig_handler) >>> np.seterr(**orig_err) {'divide': 'call', 'over': 'call', 'under': 'call', 'invalid': 'call'} Log error message: >>> class Log: ... def write(self, msg): ... print("LOG: %s" % msg) ... >>> log = Log() >>> saved_handler = np.seterrcall(log) >>> save_err = np.seterr(all='log') >>> np.array([1, 2, 3]) / 0.0 LOG: Warning: divide by zero encountered in divide array([inf, inf, inf]) >>> np.seterrcall(orig_handler) >>> np.seterr(**orig_err) {'divide': 'log', 'over': 'log', 'under': 'log', 'invalid': 'log'} # numpy.setxor1d numpy.setxor1d(_ar1_ , _ar2_ , _assume_unique =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_arraysetops_impl.py#L713-L754) Find the set exclusive-or of two arrays. Return the sorted, unique values that are in only one (not both) of the input arrays. Parameters: **ar1, ar2** array_like Input arrays. **assume_unique** bool If True, the input arrays are both assumed to be unique, which can speed up the calculation. Default is False. 
Returns: **setxor1d** ndarray Sorted 1D array of unique values that are in only one of the input arrays. #### Examples >>> import numpy as np >>> a = np.array([1, 2, 3, 2, 4]) >>> b = np.array([2, 3, 5, 7, 5]) >>> np.setxor1d(a,b) array([1, 4, 5, 7]) # numpy.shape numpy.shape(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L2118-L2164) Return the shape of an array. Parameters: **a** array_like Input array. Returns: **shape** tuple of ints The elements of the shape tuple give the lengths of the corresponding array dimensions. See also [`len`](https://docs.python.org/3/library/functions.html#len "\(in Python v3.13\)") `len(a)` is equivalent to `np.shape(a)[0]` for N-D arrays with `N>=1`. [`ndarray.shape`](numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape") Equivalent array method. #### Examples >>> import numpy as np >>> np.shape(np.eye(3)) (3, 3) >>> np.shape([[1, 3]]) (1, 2) >>> np.shape([0]) (1,) >>> np.shape(0) () >>> a = np.array([(1, 2), (3, 4), (5, 6)], ... dtype=[('x', 'i4'), ('y', 'i4')]) >>> np.shape(a) (3,) >>> a.shape (3,) # numpy.shares_memory numpy.shares_memory(_a_ , _b_ , _/_ , _max_work =None_) Determine if two arrays share memory. Warning This function can be exponentially slow for some inputs, unless `max_work` is set to zero or a positive integer. If in doubt, use [`numpy.may_share_memory`](numpy.may_share_memory#numpy.may_share_memory "numpy.may_share_memory") instead. Parameters: **a, b** ndarray Input arrays **max_work** int, optional Effort to spend on solving the overlap problem (maximum number of candidate solutions to consider). The following special values are recognized: max_work=-1 (default) The problem is solved exactly. In this case, the function returns True only if there is an element shared between the arrays. Finding the exact solution may take extremely long in some cases. max_work=0 Only the memory bounds of a and b are checked. This is equivalent to using `may_share_memory()`. 
Returns: **out** bool Raises: numpy.exceptions.TooHardError Exceeded max_work. See also [`may_share_memory`](numpy.may_share_memory#numpy.may_share_memory "numpy.may_share_memory") #### Examples >>> import numpy as np >>> x = np.array([1, 2, 3, 4]) >>> np.shares_memory(x, np.array([5, 6, 7])) False >>> np.shares_memory(x[::2], x) True >>> np.shares_memory(x[::2], x[1::2]) False Checking whether two arrays share memory is NP-complete, and runtime may increase exponentially in the number of dimensions. Hence, `max_work` should generally be set to a finite number, as it is possible to construct examples that take extremely long to run: >>> from numpy.lib.stride_tricks import as_strided >>> x = np.zeros([192163377], dtype=np.int8) >>> x1 = as_strided( ... x, strides=(36674, 61119, 85569), shape=(1049, 1049, 1049)) >>> x2 = as_strided( ... x[64023025:], strides=(12223, 12224, 1), shape=(1049, 1049, 1)) >>> np.shares_memory(x1, x2, max_work=1000) Traceback (most recent call last): ... numpy.exceptions.TooHardError: Exceeded max_work Running `np.shares_memory(x1, x2)` without `max_work` set takes around 1 minute for this case. It is possible to find problems that take still significantly longer. # numpy.show_config numpy.show_config(_mode ='stdout'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/__config__.py#L165-L166) Show libraries and system information on which NumPy was built and is being used Parameters: **mode**{`‘stdout’`, `‘dicts’`}, optional. Indicates how to display the config information. `‘stdout’` prints to console, `‘dicts’` returns a dictionary of the configuration. Returns: **out**{`dict`, `None`} If mode is `‘dicts’`, a dict is returned, else None See also [`get_include`](numpy.get_include#numpy.get_include "numpy.get_include") Returns the directory containing NumPy C header files. #### Notes 1. 
The `‘stdout’` mode will give more readable output if `pyyaml` is installed # numpy.show_runtime numpy.show_runtime()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_utils_impl.py#L18-L69) Print information about various resources in the system including available intrinsic support and BLAS/LAPACK library in use New in version 1.24.0. See also [`show_config`](numpy.show_config#numpy.show_config "numpy.show_config") Show libraries in the system on which NumPy was built. #### Notes 1. Information is derived with the help of [threadpoolctl](https://pypi.org/project/threadpoolctl/) library if available. 2. SIMD related information is derived from `__cpu_features__`, `__cpu_baseline__` and `__cpu_dispatch__` # numpy.sign numpy.sign(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Returns an element-wise indication of the sign of a number. The `sign` function returns `-1 if x < 0, 0 if x==0, 1 if x > 0`. nan is returned for nan inputs. For complex inputs, the `sign` function returns `x / abs(x)`, the generalization of the above (and `0 if x==0`). Changed in version 2.0.0: Definition of complex sign changed to follow the Array API standard. Parameters: **x** array_like Input values. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. 
Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray The sign of `x`. This is a scalar if `x` is a scalar. #### Notes There is more than one definition of sign in common use for complex numbers. The definition used here, \\(x/|x|\\), is the more common and useful one, but is different from the one used in numpy prior to version 2.0, \\(x/\sqrt{x*x}\\), which is equivalent to `sign(x.real) + 0j if x.real != 0 else sign(x.imag) + 0j`. #### Examples >>> import numpy as np >>> np.sign([-5., 4.5]) array([-1., 1.]) >>> np.sign(0) 0 >>> np.sign([3-4j, 8j]) array([0.6-0.8j, 0. +1.j ]) # numpy.signbit numpy.signbit(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Returns element-wise True where signbit is set (less than zero). Parameters: **x** array_like The input value(s). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **result** ndarray of bool Output array, or reference to `out` if that was supplied. 
This is a scalar if `x` is a scalar. #### Examples >>> import numpy as np >>> np.signbit(-1.2) True >>> np.signbit(np.array([1, -2.3, 2.1])) array([False, True, False]) # numpy.sin numpy.sin(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Trigonometric sine, element-wise. Parameters: **x** array_like Angle, in radians (\\(2 \pi\\) rad equals 360 degrees). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** array_like The sine of each element of x. This is a scalar if `x` is a scalar. See also [`arcsin`](numpy.arcsin#numpy.arcsin "numpy.arcsin"), [`sinh`](numpy.sinh#numpy.sinh "numpy.sinh"), [`cos`](numpy.cos#numpy.cos "numpy.cos") #### Notes The sine is one of the fundamental functions of trigonometry (the mathematical study of triangles). Consider a circle of radius 1 centered on the origin. A ray comes in from the \\(+x\\) axis, makes an angle at the origin (measured counter-clockwise from that axis), and departs from the origin. The \\(y\\) coordinate of the outgoing ray’s intersection with the unit circle is the sine of that angle. 
It ranges from -1 for \\(x=3\pi / 2\\) to +1 for \\(\pi / 2.\\) The function has zeroes where the angle is a multiple of \\(\pi\\). Sines of angles between \\(\pi\\) and \\(2\pi\\) are negative. The numerous properties of the sine and related functions are included in any standard trigonometry text. #### Examples >>> import numpy as np Print sine of one angle: >>> np.sin(np.pi/2.) 1.0 Print sines of an array of angles given in degrees: >>> np.sin(np.array((0., 30., 45., 60., 90.)) * np.pi / 180. ) array([ 0. , 0.5 , 0.70710678, 0.8660254 , 1. ]) Plot the sine function: >>> import matplotlib.pylab as plt >>> x = np.linspace(-np.pi, np.pi, 201) >>> plt.plot(x, np.sin(x)) >>> plt.xlabel('Angle [rad]') >>> plt.ylabel('sin(x)') >>> plt.axis('tight') >>> plt.show() # numpy.sinc numpy.sinc(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L3752-L3831) Return the normalized sinc function. The sinc function is equal to \\(\sin(\pi x)/(\pi x)\\) for any argument \\(x\ne 0\\). `sinc(0)` takes the limit value 1, making `sinc` not only everywhere continuous but also infinitely differentiable. Note Note the normalization factor of `pi` used in the definition. This is the most commonly used definition in signal processing. Use `sinc(x / np.pi)` to obtain the unnormalized sinc function \\(\sin(x)/x\\) that is more common in mathematics. Parameters: **x** ndarray Array (possibly multi-dimensional) of values for which to calculate `sinc(x)`. Returns: **out** ndarray `sinc(x)`, which has the same shape as the input. #### Notes The name sinc is short for “sine cardinal” or “sinus cardinalis”. The sinc function is used in various signal processing applications, including in anti-aliasing, in the construction of a Lanczos resampling filter, and in interpolation. For bandlimited interpolation of discrete-time signals, the ideal interpolation kernel is proportional to the sinc function. #### References [1] Weisstein, Eric W. 
“Sinc Function.” From MathWorld–A Wolfram Web Resource. [2] Wikipedia, “Sinc function”, #### Examples >>> import numpy as np >>> import matplotlib.pyplot as plt >>> x = np.linspace(-4, 4, 41) >>> np.sinc(x) array([-3.89804309e-17, -4.92362781e-02, -8.40918587e-02, # may vary -8.90384387e-02, -5.84680802e-02, 3.89804309e-17, 6.68206631e-02, 1.16434881e-01, 1.26137788e-01, 8.50444803e-02, -3.89804309e-17, -1.03943254e-01, -1.89206682e-01, -2.16236208e-01, -1.55914881e-01, 3.89804309e-17, 2.33872321e-01, 5.04551152e-01, 7.56826729e-01, 9.35489284e-01, 1.00000000e+00, 9.35489284e-01, 7.56826729e-01, 5.04551152e-01, 2.33872321e-01, 3.89804309e-17, -1.55914881e-01, -2.16236208e-01, -1.89206682e-01, -1.03943254e-01, -3.89804309e-17, 8.50444803e-02, 1.26137788e-01, 1.16434881e-01, 6.68206631e-02, 3.89804309e-17, -5.84680802e-02, -8.90384387e-02, -8.40918587e-02, -4.92362781e-02, -3.89804309e-17]) >>> plt.plot(x, np.sinc(x)) [] >>> plt.title("Sinc Function") Text(0.5, 1.0, 'Sinc Function') >>> plt.ylabel("Amplitude") Text(0, 0.5, 'Amplitude') >>> plt.xlabel("X") Text(0.5, 0, 'X') >>> plt.show() # numpy.sinh numpy.sinh(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Hyperbolic sine, element-wise. Equivalent to `1/2 * (np.exp(x) - np.exp(-x))` or `-1j * np.sin(1j*x)`. Parameters: **x** array_like Input array. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. 
Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray The corresponding hyperbolic sine values. This is a scalar if `x` is a scalar. #### Notes If `out` is provided, the function writes the result into it, and returns a reference to `out`. (See Examples) #### References M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions. New York, NY: Dover, 1972, pg. 83. #### Examples >>> import numpy as np >>> np.sinh(0) 0.0 >>> np.sinh(np.pi*1j/2) 1j >>> np.sinh(np.pi*1j) # (exact value is 0) 1.2246063538223773e-016j >>> # Discrepancy due to vagaries of floating point arithmetic. >>> # Example of providing the optional output parameter >>> out1 = np.array([0], dtype='d') >>> out2 = np.sinh([0.1], out1) >>> out2 is out1 True >>> # Example of ValueError due to provision of shape mis-matched `out` >>> np.sinh(np.zeros((3,3)),np.zeros((2,2))) Traceback (most recent call last): File "", line 1, in ValueError: operands could not be broadcast together with shapes (3,3) (2,2) # numpy.size numpy.size(_a_ , _axis =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L3566-L3611) Return the number of elements along a given axis. Parameters: **a** array_like Input data. **axis** int, optional Axis along which the elements are counted. By default, give the total number of elements. Returns: **element_count** int Number of elements along the specified axis. 
See also [`shape`](numpy.shape#numpy.shape "numpy.shape") dimensions of array [`ndarray.shape`](numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape") dimensions of array [`ndarray.size`](numpy.ndarray.size#numpy.ndarray.size "numpy.ndarray.size") number of elements in array #### Examples >>> import numpy as np >>> a = np.array([[1,2,3],[4,5,6]]) >>> np.size(a) 6 >>> np.size(a,1) 3 >>> np.size(a,0) 2 # numpy.sort numpy.sort(_a_ , _axis =-1_, _kind =None_, _order =None_, _*_ , _stable =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L969-L1123) Return a sorted copy of an array. Parameters: **a** array_like Array to be sorted. **axis** int or None, optional Axis along which to sort. If None, the array is flattened before sorting. The default is -1, which sorts along the last axis. **kind**{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional Sorting algorithm. The default is ‘quicksort’. Note that both ‘stable’ and ‘mergesort’ use timsort or radix sort under the covers and, in general, the actual implementation will vary with data type. The ‘mergesort’ option is retained for backwards compatibility. **order** str or list of str, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties. **stable** bool, optional Sort stability. If `True`, the returned array will maintain the relative order of `a` values which compare as equal. If `False` or `None`, this is not guaranteed. Internally, this option selects `kind='stable'`. Default: `None`. New in version 2.0.0. Returns: **sorted_array** ndarray Array of the same type and shape as `a`. See also [`ndarray.sort`](numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort") Method to sort an array in-place. 
[`argsort`](numpy.argsort#numpy.argsort "numpy.argsort") Indirect sort. [`lexsort`](numpy.lexsort#numpy.lexsort "numpy.lexsort") Indirect stable sort on multiple keys. [`searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") Find elements in a sorted array. [`partition`](numpy.partition#numpy.partition "numpy.partition") Partial sort. #### Notes The various sorting algorithms are characterized by their average speed, worst case performance, work space size, and whether they are stable. A stable sort keeps items with the same key in the same relative order. The four algorithms implemented in NumPy have the following properties:

kind | speed | worst case | work space | stable
---|---|---|---|---
‘quicksort’ | 1 | O(n^2) | 0 | no
‘heapsort’ | 3 | O(n*log(n)) | 0 | no
‘mergesort’ | 2 | O(n*log(n)) | ~n/2 | yes
‘timsort’ | 2 | O(n*log(n)) | ~n/2 | yes

Note The datatype determines which of ‘mergesort’ or ‘timsort’ is actually used, even if ‘mergesort’ is specified. User selection at a finer scale is not currently available. For performance, `sort` makes a temporary copy if needed to make the data [contiguous](https://numpy.org/doc/stable/glossary.html#term-contiguous) in memory along the sort axis. For even better performance and reduced memory consumption, ensure that the array is already contiguous along the sort axis. The sort order for complex numbers is lexicographic. If both the real and imaginary parts are non-nan then the order is determined by the real parts except when they are equal, in which case the order is determined by the imaginary parts. Previous to numpy 1.4.0 sorting real and complex arrays containing nan values led to undefined behaviour. In numpy versions >= 1.4.0 nan values are sorted to the end. The extended sort order is: * Real: [R, nan] * Complex: [R + Rj, R + nanj, nan + Rj, nan + nanj] where R is a non-nan real value. Complex values with the same nan placements are sorted according to the non-nan part if it exists. 
Non-nan values are sorted as before. quicksort has been changed to: [introsort](https://en.wikipedia.org/wiki/Introsort). When sorting does not make enough progress it switches to [heapsort](https://en.wikipedia.org/wiki/Heapsort). This implementation makes quicksort O(n*log(n)) in the worst case. ‘stable’ automatically chooses the best stable sorting algorithm for the data type being sorted. It, along with ‘mergesort’ is currently mapped to [timsort](https://en.wikipedia.org/wiki/Timsort) or [radix sort](https://en.wikipedia.org/wiki/Radix_sort) depending on the data type. API forward compatibility currently limits the ability to select the implementation and it is hardwired for the different data types. Timsort is added for better performance on already or nearly sorted data. On random data timsort is almost identical to mergesort. It is now used for stable sort while quicksort is still the default sort if none is chosen. For timsort details, refer to [CPython listsort.txt](https://github.com/python/cpython/blob/3.7/Objects/listsort.txt) ‘mergesort’ and ‘stable’ are mapped to radix sort for integer data types. Radix sort is an O(n) sort instead of O(n log n). NaT now sorts to the end of arrays for consistency with NaN. #### Examples >>> import numpy as np >>> a = np.array([[1,4],[3,1]]) >>> np.sort(a) # sort along the last axis array([[1, 4], [1, 3]]) >>> np.sort(a, axis=None) # sort the flattened array array([1, 1, 3, 4]) >>> np.sort(a, axis=0) # sort along the first axis array([[1, 1], [3, 4]]) Use the `order` keyword to specify a field to use when sorting a structured array: >>> dtype = [('name', 'S10'), ('height', float), ('age', int)] >>> values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38), ... 
('Galahad', 1.7, 38)] >>> a = np.array(values, dtype=dtype) # create a structured array >>> np.sort(a, order='height') array([('Galahad', 1.7, 38), ('Arthur', 1.8, 41), ('Lancelot', 1.8999999999999999, 38)], dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i8')]) Sort by age, then height if ages are equal: >>> np.sort(a, order=['age', 'height']) array([('Galahad', 1.7, 38), ('Lancelot', 1.8999999999999999, 38), ('Arthur', 1.8, 41)], dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i8')]) # numpy.sort_complex numpy.sort_complex(_a_) Sort a complex array using the real part first, then the imaginary part. Parameters: **a** array_like Input array. Returns: **out** complex ndarray Always returns a sorted complex array. #### Examples >>> import numpy as np >>> np.sort_complex([5, 3, 6, 2, 1]) array([1.+0.j, 2.+0.j, 3.+0.j, 5.+0.j, 6.+0.j]) >>> np.sort_complex([1 + 2j, 2 - 1j, 3 - 2j, 3 - 3j, 3 + 5j]) array([1.+2.j, 2.-1.j, 3.-3.j, 3.-2.j, 3.+5.j]) # numpy.spacing numpy.spacing(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'spacing'>_ Return the distance between x and the nearest adjacent number. Parameters: **x** array_like Values to find the spacing of. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** ndarray or scalar The spacing of values of `x`. This is a scalar if `x` is a scalar. 
#### Notes It can be considered as a generalization of EPS: `spacing(np.float64(1)) == np.finfo(np.float64).eps`, and there should not be any representable number between `x + spacing(x)` and x for any finite x. Spacing of +- inf and NaN is NaN. #### Examples >>> import numpy as np >>> np.spacing(1) == np.finfo(np.float64).eps True # numpy.split numpy.split(_ary_ , _indices_or_sections_ , _axis =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_shape_base_impl.py#L803-L879) Split an array into multiple sub-arrays as views into `ary`. Parameters: **ary** ndarray Array to be divided into sub-arrays. **indices_or_sections** int or 1-D array If `indices_or_sections` is an integer, N, the array will be divided into N equal arrays along `axis`. If such a split is not possible, an error is raised. If `indices_or_sections` is a 1-D array of sorted integers, the entries indicate where along `axis` the array is split. For example, `[2, 3]` would, for `axis=0`, result in * ary[:2] * ary[2:3] * ary[3:] If an index exceeds the dimension of the array along `axis`, an empty sub- array is returned correspondingly. **axis** int, optional The axis along which to split, default is 0. Returns: **sub-arrays** list of ndarrays A list of sub-arrays as views into `ary`. Raises: ValueError If `indices_or_sections` is given as an integer, but a split does not result in equal division. See also [`array_split`](numpy.array_split#numpy.array_split "numpy.array_split") Split an array into multiple sub-arrays of equal or near-equal size. Does not raise an exception if an equal division cannot be made. [`hsplit`](numpy.hsplit#numpy.hsplit "numpy.hsplit") Split array into multiple sub-arrays horizontally (column-wise). [`vsplit`](numpy.vsplit#numpy.vsplit "numpy.vsplit") Split array into multiple sub-arrays vertically (row wise). [`dsplit`](numpy.dsplit#numpy.dsplit "numpy.dsplit") Split array into multiple sub-arrays along the 3rd axis (depth). 
[`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`stack`](numpy.stack#numpy.stack "numpy.stack") Join a sequence of arrays along a new axis. [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack") Stack arrays in sequence horizontally (column wise). [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack") Stack arrays in sequence vertically (row wise). [`dstack`](numpy.dstack#numpy.dstack "numpy.dstack") Stack arrays in sequence depth wise (along third dimension). #### Examples >>> import numpy as np >>> x = np.arange(9.0) >>> np.split(x, 3) [array([0., 1., 2.]), array([3., 4., 5.]), array([6., 7., 8.])] >>> x = np.arange(8.0) >>> np.split(x, [3, 5, 6, 10]) [array([0., 1., 2.]), array([3., 4.]), array([5.]), array([6., 7.]), array([], dtype=float64)] # numpy.sqrt numpy.sqrt(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'sqrt'>_ Return the non-negative square-root of an array, element-wise. Parameters: **x** array_like The values whose square-roots are required. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). 
Returns: **y** ndarray An array of the same shape as `x`, containing the positive square-root of each element in `x`. If any element in `x` is complex, a complex array is returned (and the square-roots of negative reals are calculated). If all of the elements in `x` are real, so is `y`, with negative elements returning `nan`. If `out` was provided, `y` is a reference to it. This is a scalar if `x` is a scalar. See also [`emath.sqrt`](numpy.emath.sqrt#numpy.emath.sqrt "numpy.emath.sqrt") A version which returns complex numbers when given negative reals. Note that 0.0 and -0.0 are handled differently for complex inputs. #### Notes _sqrt_ has, consistent with common convention, its branch cut along the real “interval” [`-inf`, 0), and is continuous from above on it. A branch cut is a curve in the complex plane across which a given complex function fails to be continuous. #### Examples >>> import numpy as np >>> np.sqrt([1,4,9]) array([ 1., 2., 3.]) >>> np.sqrt([4, -1, -3+4J]) array([ 2.+0.j, 0.+1.j, 1.+2.j]) >>> np.sqrt([4, -1, np.inf]) array([ 2., nan, inf]) # numpy.square numpy.square(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'square'>_ Return the element-wise square of the input. Parameters: **x** array_like Input data. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. 
Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** ndarray or scalar Element-wise `x*x`, of the same shape and dtype as `x`. This is a scalar if `x` is a scalar. See also [`numpy.linalg.matrix_power`](numpy.linalg.matrix_power#numpy.linalg.matrix_power "numpy.linalg.matrix_power") [`sqrt`](numpy.sqrt#numpy.sqrt "numpy.sqrt") [`power`](numpy.power#numpy.power "numpy.power") #### Examples >>> import numpy as np >>> np.square([-1j, 1]) array([-1.-0.j, 1.+0.j]) # numpy.squeeze numpy.squeeze(_a_ , _axis =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L1621-L1688) Remove axes of length one from `a`. Parameters: **a** array_like Input data. **axis** None or int or tuple of ints, optional Selects a subset of the entries of length one in the shape. If an axis is selected with shape entry greater than one, an error is raised. Returns: **squeezed** ndarray The input array, but with all or a subset of the dimensions of length 1 removed. This is always `a` itself or a view into `a`. Note that if all axes are squeezed, the result is a 0d array and not a scalar. Raises: ValueError If `axis` is not None, and an axis being squeezed is not of length 1 See also [`expand_dims`](numpy.expand_dims#numpy.expand_dims "numpy.expand_dims") The inverse operation, adding entries of length one [`reshape`](numpy.reshape#numpy.reshape "numpy.reshape") Insert, remove, and combine dimensions, and resize existing ones #### Examples >>> import numpy as np >>> x = np.array([[[0], [1], [2]]]) >>> x.shape (1, 3, 1) >>> np.squeeze(x).shape (3,) >>> np.squeeze(x, axis=0).shape (3, 1) >>> np.squeeze(x, axis=1).shape Traceback (most recent call last): ... 
ValueError: cannot select an axis to squeeze out which has size not equal to one >>> np.squeeze(x, axis=2).shape (1, 3) >>> x = np.array([[1234]]) >>> x.shape (1, 1) >>> np.squeeze(x) array(1234) # 0d array >>> np.squeeze(x).shape () >>> np.squeeze(x)[()] 1234 # numpy.stack numpy.stack(_arrays_ , _axis =0_, _out =None_, _*_ , _dtype =None_, _casting ='same_kind'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/shape_base.py#L380-L468) Join a sequence of arrays along a new axis. The `axis` parameter specifies the index of the new axis in the dimensions of the result. For example, if `axis=0` it will be the first dimension and if `axis=-1` it will be the last dimension. Parameters: **arrays** sequence of ndarrays Each array must have the same shape. In the case of a single ndarray array_like input, it will be treated as a sequence of arrays; i.e., each element along the zeroth axis is treated as a separate array. **axis** int, optional The axis in the result array along which the input arrays are stacked. **out** ndarray, optional If provided, the destination to place the result. The shape must be correct, matching that of what stack would have returned if no out argument were specified. **dtype** str or dtype If provided, the destination array will have this dtype. Cannot be provided together with `out`. New in version 1.24. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur. Defaults to ‘same_kind’. New in version 1.24. Returns: **stacked** ndarray The stacked array has one more dimension than the input arrays. See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble an nd-array from nested lists of blocks. [`split`](numpy.split#numpy.split "numpy.split") Split array into a list of multiple sub-arrays of equal size. 
[`unstack`](numpy.unstack#numpy.unstack "numpy.unstack") Split an array into a tuple of sub-arrays along an axis. #### Examples >>> import numpy as np >>> rng = np.random.default_rng() >>> arrays = [rng.normal(size=(3,4)) for _ in range(10)] >>> np.stack(arrays, axis=0).shape (10, 3, 4) >>> np.stack(arrays, axis=1).shape (3, 10, 4) >>> np.stack(arrays, axis=2).shape (3, 4, 10) >>> a = np.array([1, 2, 3]) >>> b = np.array([4, 5, 6]) >>> np.stack((a, b)) array([[1, 2, 3], [4, 5, 6]]) >>> np.stack((a, b), axis=-1) array([[1, 4], [2, 5], [3, 6]]) # numpy.std numpy.std(_a_ , _axis=None_ , _dtype=None_ , _out=None_ , _ddof=0_ , _keepdims= _, _*_ , _where= _, _mean= _, _correction= _)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L3869-L4065) Compute the standard deviation along the specified axis. Returns the standard deviation, a measure of the spread of a distribution, of the array elements. The standard deviation is computed for the flattened array by default, otherwise over the specified axis. Parameters: **a** array_like Calculate the standard deviation of these values. **axis** None or int or tuple of ints, optional Axis or axes along which the standard deviation is computed. The default is to compute the standard deviation of the flattened array. If this is a tuple of ints, a standard deviation is performed over multiple axes, instead of a single axis or all the axes as before. **dtype** dtype, optional Type to use in computing the standard deviation. For arrays of integer type the default is float64, for arrays of float types it is the same as the array type. **out** ndarray, optional Alternative output array in which to place the result. It must have the same shape as the expected output but the type (of the calculated values) will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details. **ddof**{int, float}, optional Means Delta Degrees of Freedom. 
The divisor used in calculations is `N - ddof`, where `N` represents the number of elements. By default `ddof` is zero. See Notes for details about use of `ddof`. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the `std` method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non-default value will be. If the sub-class’ method does not implement `keepdims` any exceptions will be raised. **where** array_like of bool, optional Elements to include in the standard deviation. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.20.0. **mean** array_like, optional Provide the mean to prevent its recalculation. The mean should have a shape as if it was calculated with `keepdims=True`. The axis for the calculation of the mean should be the same as used in the call to this std function. New in version 2.0.0. **correction**{int, float}, optional Array API compatible name for the `ddof` parameter. Only one of them can be provided at the same time. New in version 2.0.0. Returns: **standard_deviation** ndarray, see dtype parameter above. If `out` is None, return a new array containing the standard deviation, otherwise return a reference to the output array. See also [`var`](numpy.var#numpy.var "numpy.var"), [`mean`](numpy.mean#numpy.mean "numpy.mean"), [`nanmean`](numpy.nanmean#numpy.nanmean "numpy.nanmean"), [`nanstd`](numpy.nanstd#numpy.nanstd "numpy.nanstd"), [`nanvar`](numpy.nanvar#numpy.nanvar "numpy.nanvar") [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) #### Notes There are several common variants of the array standard deviation calculation. 
Assuming the input `a` is a one-dimensional NumPy array and `mean` is either provided as an argument or computed as `a.mean()`, NumPy computes the standard deviation of an array as:

    N = len(a)
    d2 = abs(a - mean)**2  # abs is for complex `a`
    var = d2.sum() / (N - ddof)  # note use of `ddof`
    std = var**0.5

Different values of the argument `ddof` are useful in different contexts. NumPy’s default `ddof=0` corresponds with the expression: \\[\sqrt{\frac{\sum_i{|a_i - \bar{a}|^2 }}{N}}\\] which is sometimes called the “population standard deviation” in the field of statistics because it applies the definition of standard deviation to `a` as if `a` were a complete population of possible observations. Many other libraries define the standard deviation of an array differently, e.g.: \\[\sqrt{\frac{\sum_i{|a_i - \bar{a}|^2 }}{N - 1}}\\] In statistics, the resulting quantity is sometimes called the “sample standard deviation” because if `a` is a random sample from a larger population, this calculation provides the square root of an unbiased estimate of the variance of the population. The use of \\(N-1\\) in the denominator is often called “Bessel’s correction” because it corrects for bias (toward lower values) in the variance estimate introduced when the sample mean of `a` is used in place of the true mean of the population. The resulting estimate of the standard deviation is still biased, but less than it would have been without the correction. For this quantity, use `ddof=1`. Note that, for complex numbers, `std` takes the absolute value before squaring, so that the result is always real and nonnegative. For floating-point input, the standard deviation is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for float32 (see example below). Specifying a higher-accuracy accumulator using the [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") keyword can alleviate this issue. 
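As a quick check of the formula above, the following sketch (illustrative, not part of the official docstring) compares a manual evaluation against `np.std` for both `ddof=0` and `ddof=1`:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])

# Manual computation following the formula in the Notes.
N = len(a)
mean = a.mean()
d2 = np.abs(a - mean) ** 2        # abs handles complex input
var0 = d2.sum() / N               # population variance (ddof=0)
var1 = d2.sum() / (N - 1)         # sample variance (ddof=1, Bessel's correction)

assert np.isclose(var0 ** 0.5, np.std(a))
assert np.isclose(var1 ** 0.5, np.std(a, ddof=1))
```

The `ddof=1` result is always at least as large as the default, since the divisor `N - 1` is smaller than `N`.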
#### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> np.std(a) 1.1180339887498949 # may vary >>> np.std(a, axis=0) array([1., 1.]) >>> np.std(a, axis=1) array([0.5, 0.5]) In single precision, std() can be inaccurate: >>> a = np.zeros((2, 512*512), dtype=np.float32) >>> a[0, :] = 1.0 >>> a[1, :] = 0.1 >>> np.std(a) np.float32(0.45000005) Computing the standard deviation in float64 is more accurate: >>> np.std(a, dtype=np.float64) 0.44999999925494177 # may vary Specifying a where argument: >>> a = np.array([[14, 8, 11, 10], [7, 9, 10, 11], [10, 15, 5, 10]]) >>> np.std(a) 2.614064523559687 # may vary >>> np.std(a, where=[[True], [True], [False]]) 2.0 Using the mean keyword to save computation time: >>> import numpy as np >>> from timeit import timeit >>> a = np.array([[14, 8, 11, 10], [7, 9, 10, 11], [10, 15, 5, 10]]) >>> mean = np.mean(a, axis=1, keepdims=True) >>> >>> g = globals() >>> n = 10000 >>> t1 = timeit("std = np.std(a, axis=1, mean=mean)", globals=g, number=n) >>> t2 = timeit("std = np.std(a, axis=1)", globals=g, number=n) >>> print(f'Percentage execution time saved {100*(t2-t1)/t2:.0f}%') Percentage execution time saved 30% # numpy.strings.add strings.add(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'add'>_ Add arguments element-wise. Parameters: **x1, x2** array_like The arrays to be added. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. 
At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **add** ndarray or scalar The sum of `x1` and `x2`, element-wise. This is a scalar if both `x1` and `x2` are scalars. #### Notes Equivalent to `x1` \+ `x2` in terms of array broadcasting. #### Examples >>> import numpy as np >>> np.add(1.0, 4.0) 5.0 >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = np.arange(3.0) >>> np.add(x1, x2) array([[ 0., 2., 4.], [ 3., 5., 7.], [ 6., 8., 10.]]) The `+` operator can be used as a shorthand for `np.add` on ndarrays. >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = np.arange(3.0) >>> x1 + x2 array([[ 0., 2., 4.], [ 3., 5., 7.], [ 6., 8., 10.]]) # numpy.strings.capitalize strings.capitalize(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L1149-L1186) Return a copy of `a` with only the first character of each element capitalized. Calls [`str.capitalize`](https://docs.python.org/3/library/stdtypes.html#str.capitalize "\(in Python v3.13\)") element-wise. For byte strings, this method is locale-dependent. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype Input array of strings to capitalize. 
Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types See also [`str.capitalize`](https://docs.python.org/3/library/stdtypes.html#str.capitalize "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> c = np.array(['a1b2','1b2a','b2a1','2a1b'],'S4'); c array([b'a1b2', b'1b2a', b'b2a1', b'2a1b'], dtype='|S4') >>> np.strings.capitalize(c) array([b'A1b2', b'1b2a', b'B2a1', b'2a1b'], dtype='|S4') # numpy.strings.center strings.center(_a_ , _width_ , _fillchar =' '_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L654-L719) Return a copy of `a` with its elements centered in a string of length `width`. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **width** array_like, with any integer dtype The length of the resulting strings, unless `width < str_len(a)`. **fillchar** array-like, with `StringDType`, `bytes_`, or `str_` dtype Optional padding character to use (default is space). Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types See also [`str.center`](https://docs.python.org/3/library/stdtypes.html#str.center "\(in Python v3.13\)") #### Notes While it is possible for `a` and `fillchar` to have different dtypes, passing a non-ASCII character in `fillchar` when `a` is of dtype “S” is not allowed, and a `ValueError` is raised. 
#### Examples >>> import numpy as np >>> c = np.array(['a1b2','1b2a','b2a1','2a1b']); c array(['a1b2', '1b2a', 'b2a1', '2a1b'], dtype='<U4') >>> np.strings.center(c, width=9) array(['   a1b2  ', '   1b2a  ', '   b2a1  ', '   2a1b  '], dtype='<U9') >>> np.strings.center(c, width=9, fillchar='*') array(['***a1b2**', '***1b2a**', '***b2a1**', '***2a1b**'], dtype='<U9') >>> np.strings.center(c, width=1) array(['a1b2', '1b2a', 'b2a1', '2a1b'], dtype='<U4') # numpy.strings.count strings.count(_a_ , _sub_ , _start =0_, _end =None_) Returns an array with the number of non-overlapping occurrences of substring `sub` in the range [`start`, `end`). Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **sub** array-like, with `StringDType`, `bytes_`, or `str_` dtype The substring to search for. **start, end** array_like, with any integer dtype The range to look in, interpreted as in slice notation. Returns: **out** ndarray Output array of ints See also [`str.count`](https://docs.python.org/3/library/stdtypes.html#str.count "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> c = np.array(['aAaAaA', '  aA  ', 'abBABba']) >>> c array(['aAaAaA', '  aA  ', 'abBABba'], dtype='<U7') >>> np.strings.count(c, 'A') array([3, 1, 1]) >>> np.strings.count(c, 'aA') array([3, 1, 0]) >>> np.strings.count(c, 'A', start=1, end=4) array([2, 1, 1]) >>> np.strings.count(c, 'A', start=1, end=3) array([1, 0, 0]) # numpy.strings.decode strings.decode(_a_ , _encoding =None_, _errors =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L509-L554) Calls [`bytes.decode`](https://docs.python.org/3/library/stdtypes.html#bytes.decode "\(in Python v3.13\)") element-wise. The set of available codecs comes from the Python standard library, and may be extended at runtime. For more information, see the [`codecs`](https://docs.python.org/3/library/codecs.html#module-codecs "\(in Python v3.13\)") module. Parameters: **a** array_like, with `bytes_` dtype **encoding** str, optional The name of an encoding **errors** str, optional Specifies how to handle encoding errors Returns: **out** ndarray See also [`bytes.decode`](https://docs.python.org/3/library/stdtypes.html#bytes.decode "\(in Python v3.13\)") #### Notes The type of the result will depend on the encoding specified. #### Examples >>> import numpy as np >>> c = np.array([b'\x81\xc1\x81\xc1\x81\xc1', b'@@\x81\xc1@@', ... 
b'\x81\x82\xc2\xc1\xc2\x82\x81']) >>> c array([b'\x81\xc1\x81\xc1\x81\xc1', b'@@\x81\xc1@@', b'\x81\x82\xc2\xc1\xc2\x82\x81'], dtype='|S7') >>> np.strings.decode(c, encoding='cp037') array(['aAaAaA', '  aA  ', 'abBABba'], dtype='<U7') # numpy.strings.encode strings.encode(_a_ , _encoding =None_, _errors =None_) Calls [`str.encode`](https://docs.python.org/3/library/stdtypes.html#str.encode "\(in Python v3.13\)") element-wise. The set of available codecs comes from the Python standard library, and may be extended at runtime. For more information, see the [`codecs`](https://docs.python.org/3/library/codecs.html#module-codecs "\(in Python v3.13\)") module. Parameters: **a** array_like, with `StringDType` or `str_` dtype **encoding** str, optional The name of an encoding **errors** str, optional Specifies how to handle encoding errors Returns: **out** ndarray #### Notes The type of the result will depend on the encoding specified. #### Examples >>> import numpy as np >>> a = np.array(['aAaAaA', '  aA  ', 'abBABba']) >>> np.strings.encode(a, encoding='cp037') array([b'\x81\xc1\x81\xc1\x81\xc1', b'@@\x81\xc1@@', b'\x81\x82\xc2\xc1\xc2\x82\x81'], dtype='|S7') # numpy.strings.endswith strings.endswith(_a_ , _suffix_ , _start =0_, _end =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L468-L506) Returns a boolean array which is `True` where the string element in `a` ends with `suffix`, otherwise `False`. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **suffix** array-like, with `StringDType`, `bytes_`, or `str_` dtype **start, end** array_like, with any integer dtype With `start`, test beginning at that position. With `end`, stop comparing at that position. Returns: **out** ndarray Output array of bools See also [`str.endswith`](https://docs.python.org/3/library/stdtypes.html#str.endswith "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> s = np.array(['foo', 'bar']) >>> s array(['foo', 'bar'], dtype='<U3') >>> np.strings.endswith(s, 'ar') array([False, True]) >>> np.strings.endswith(s, 'a', start=1, end=2) array([False, True]) # numpy.strings.equal strings.equal(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'equal'>_ Return (x1 == x2) element-wise. Parameters: **x1, x2** array_like Input arrays. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. 
A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** ndarray or scalar Output array, element-wise comparison of `x1` and `x2`. Typically of type bool, unless `dtype=object` is passed. This is a scalar if both `x1` and `x2` are scalars. See also [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`less`](numpy.less#numpy.less "numpy.less") #### Examples >>> import numpy as np >>> np.equal([0, 1, 3], np.arange(3)) array([ True, True, False]) What is compared are values, not types. So an int (1) and an array of length one can evaluate as True: >>> np.equal(1, np.ones(1)) array([ True]) The `==` operator can be used as a shorthand for `np.equal` on ndarrays. >>> a = np.array([2, 4, 6]) >>> b = np.array([2, 4, 2]) >>> a == b array([ True, True, False]) # numpy.strings.expandtabs strings.expandtabs(_a_ , _tabsize =8_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L602-L651) Return a copy of each string element where all tab characters are replaced by one or more spaces. Calls [`str.expandtabs`](https://docs.python.org/3/library/stdtypes.html#str.expandtabs "\(in Python v3.13\)") element-wise. 
Return a copy of each string element where all tab characters are replaced by one or more spaces, depending on the current column and the given `tabsize`. The column number is reset to zero after each newline occurring in the string. This doesn’t understand other non-printing characters or escape sequences. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype Input array **tabsize** int, optional Replace tabs with `tabsize` number of spaces. If not given defaults to 8 spaces. Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input type See also [`str.expandtabs`](https://docs.python.org/3/library/stdtypes.html#str.expandtabs "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> a = np.array(['\t\tHello\tworld']) >>> np.strings.expandtabs(a, tabsize=4) array(['        Hello   world'], dtype='<U21') # numpy.strings.find strings.find(_a_ , _sub_ , _start =0_, _end =None_) For each element, return the lowest index in the string where substring `sub` is found, such that `sub` is contained in the range [`start`, `end`). Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **sub** array-like, with `StringDType`, `bytes_`, or `str_` dtype The substring to search for. **start, end** array_like, with any integer dtype The range to look in, interpreted as in slice notation. Returns: **y** ndarray Output array of ints #### Examples >>> import numpy as np >>> a = np.array(["NumPy is a Python library"]) >>> np.strings.find(a, "Python") array([11]) # numpy.strings.greater strings.greater(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= <ufunc 'greater'>_ Return the truth value of (x1 > x2) element-wise. Parameters: **x1, x2** array_like Input arrays. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. 
Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **out** ndarray or scalar Output array, element-wise comparison of `x1` and `x2`. Typically of type bool, unless `dtype=object` is passed. This is a scalar if both `x1` and `x2` are scalars. See also [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`less`](numpy.less#numpy.less "numpy.less"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`equal`](numpy.equal#numpy.equal "numpy.equal"), [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal") #### Examples >>> import numpy as np >>> np.greater([4, 2], [2, 2]) array([ True, False]) The `>` operator can be used as a shorthand for `np.greater` on ndarrays. >>> a = np.array([4, 2]) >>> b = np.array([2, 2]) >>> a > b array([ True, False])
# numpy.strings.greater_equal strings.greater_equal(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = `<ufunc 'greater_equal'>` Return the truth value of (x1 >= x2) element-wise. Parameters: **x1, x2** array_like Input arrays. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value.
Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **out** bool or ndarray of bool Output array, element-wise comparison of `x1` and `x2`. Typically of type bool, unless `dtype=object` is passed. This is a scalar if both `x1` and `x2` are scalars. See also [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`less`](numpy.less#numpy.less "numpy.less"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`equal`](numpy.equal#numpy.equal "numpy.equal"), [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal") #### Examples >>> import numpy as np >>> np.greater_equal([4, 2, 1], [2, 2, 2]) array([ True, True, False]) The `>=` operator can be used as a shorthand for `np.greater_equal` on ndarrays. >>> a = np.array([4, 2, 1]) >>> b = np.array([2, 2, 2]) >>> a >= b array([ True, True, False]) # numpy.strings.index strings.index(_a_ , _sub_ , _start =0_, _end =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L314-L345) Like [`find`](numpy.strings.find#numpy.strings.find "numpy.strings.find"), but raises [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError "\(in Python v3.13\)") when the substring is not found. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **sub** array-like, with `StringDType`, `bytes_`, or `str_` dtype **start, end** array_like, with any integer dtype, optional Returns: **out** ndarray Output array of ints. 
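Since `index` mirrors Python’s `str.index`, the contrast with `find` can be sketched with plain Python strings; this stdlib-only sketch shows the scalar rule that the array versions apply element-wise:

```python
# str.find returns -1 when the substring is missing;
# str.index raises ValueError instead -- np.strings.find
# and np.strings.index vectorize exactly this contrast.
s = "Computer Science"

assert s.find("Science") == 9    # found: lowest matching index
assert s.find("Physics") == -1   # missing: sentinel -1, no error

try:
    s.index("Physics")           # same miss raises instead
    raised = False
except ValueError:
    raised = True
assert raised
```

Use `find` when a sentinel is easier to handle in bulk, and `index` when a missing substring should be treated as an error.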
See also [`find`](numpy.strings.find#numpy.strings.find "numpy.strings.find"), [`str.index`](https://docs.python.org/3/library/stdtypes.html#str.index "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> a = np.array(["Computer Science"]) >>> np.strings.index(a, "Science", start=0, end=None) array([9])
# numpy.strings.isalnum strings.isalnum(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = `<ufunc 'isalnum'>` Returns true for each element if all characters in the string are alphanumeric and there is at least one character, false otherwise. Parameters: **x** array_like, with `StringDType`, `bytes_` or `str_` dtype **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **out** ndarray Output array of bool This is a scalar if `x` is a scalar.
See also [`str.isalnum`](https://docs.python.org/3/library/stdtypes.html#str.isalnum "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> a = np.array(['a', '1', 'a1', '(', '']) >>> np.strings.isalnum(a) array([ True, True, True, False, False])
# numpy.strings.isalpha strings.isalpha(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = `<ufunc 'isalpha'>` Returns true for each element if all characters in the data interpreted as a string are alphabetic and there is at least one character, false otherwise. For byte strings (i.e. `bytes`), alphabetic characters are those byte values in the sequence `b'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'`. For Unicode strings, alphabetic characters are those characters defined in the Unicode character database as “Letter”. Parameters: **x** array_like, with `StringDType`, `bytes_`, or `str_` dtype **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray Output array of bools This is a scalar if `x` is a scalar.
See also [`str.isalpha`](https://docs.python.org/3/library/stdtypes.html#str.isalpha "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> a = np.array(['a', 'b', '0']) >>> np.strings.isalpha(a) array([ True, True, False]) >>> a = np.array([['a', 'b', '0'], ['c', '1', '2']]) >>> np.strings.isalpha(a) array([[ True, True, False], [ True, False, False]])
# numpy.strings.isdecimal strings.isdecimal(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = `<ufunc 'isdecimal'>` For each element, return True if there are only decimal characters in the element. Decimal characters include digit characters, and all characters that can be used to form decimal-radix numbers, e.g. `U+0660, ARABIC-INDIC DIGIT ZERO`. Parameters: **x** array_like, with `StringDType` or `str_` dtype **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray Output array of bools This is a scalar if `x` is a scalar.
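The decimal-character rule above matches Python’s `str.isdecimal`, which this ufunc applies element-wise; a stdlib-only sketch of the boundary cases:

```python
# Decimal characters include non-ASCII decimal digits such as
# U+0660 ARABIC-INDIC DIGIT ZERO, but not punctuation or signs.
assert "12345".isdecimal() is True
assert "\u0660\u0661".isdecimal() is True   # Arabic-Indic digits
assert "4.99".isdecimal() is False          # '.' is not a decimal char
assert "".isdecimal() is False              # needs at least one character
```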
See also [`str.isdecimal`](https://docs.python.org/3/library/stdtypes.html#str.isdecimal "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> np.strings.isdecimal(['12345', '4.99', '123ABC', '']) array([ True, False, False, False])
# numpy.strings.isdigit strings.isdigit(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = `<ufunc 'isdigit'>` Returns true for each element if all characters in the string are digits and there is at least one character, false otherwise. For byte strings, digits are the byte values in the sequence `b'0123456789'`. For Unicode strings, digits include decimal characters and digits that need special handling, such as the compatibility superscript digits. This also covers digits which cannot be used to form numbers in base 10, like the Kharosthi numbers. Parameters: **x** array_like, with `StringDType`, `bytes_`, or `str_` dtype **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray Output array of bools This is a scalar if `x` is a scalar.
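The digit rule above matches Python’s `str.isdigit` (and `bytes.isdigit` for byte strings); a stdlib-only sketch of how it differs from `isdecimal`:

```python
# isdigit is broader than isdecimal: it also accepts digits that
# need special handling, e.g. U+00B2 SUPERSCRIPT TWO.
assert "\u00b2".isdigit() is True       # superscript two is a digit
assert "\u00b2".isdecimal() is False    # ...but it is not decimal

# For byte strings only the ASCII digits b'0123456789' count.
assert b"0123456789".isdigit() is True
assert b"12a".isdigit() is False
```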
See also [`str.isdigit`](https://docs.python.org/3/library/stdtypes.html#str.isdigit "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> a = np.array(['a', 'b', '0']) >>> np.strings.isdigit(a) array([False, False, True]) >>> a = np.array([['a', 'b', '0'], ['c', '1', '2']]) >>> np.strings.isdigit(a) array([[False, False, True], [False, True, True]])
# numpy.strings.islower strings.islower(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = `<ufunc 'islower'>` Returns true for each element if all cased characters in the string are lowercase and there is at least one cased character, false otherwise. Parameters: **x** array_like, with `StringDType`, `bytes_` or `str_` dtype **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **out** ndarray Output array of bools This is a scalar if `x` is a scalar.
See also [`str.islower`](https://docs.python.org/3/library/stdtypes.html#str.islower "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> np.strings.islower("GHC") array(False) >>> np.strings.islower("ghc") array(True)
# numpy.strings.isnumeric strings.isnumeric(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = `<ufunc 'isnumeric'>` For each element, return True if there are only numeric characters in the element. Numeric characters include digit characters, and all characters that have the Unicode numeric value property, e.g. `U+2155, VULGAR FRACTION ONE FIFTH`. Parameters: **x** array_like, with `StringDType` or `str_` dtype **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray Output array of bools This is a scalar if `x` is a scalar.
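The numeric rule above matches Python’s `str.isnumeric`; a stdlib-only sketch showing that it is the broadest of the three digit-like predicates:

```python
# Any character with a Unicode numeric value counts, including
# vulgar fractions such as U+2155 (1/5), which are not digits.
assert "\u2155".isnumeric() is True    # VULGAR FRACTION ONE FIFTH
assert "\u2155".isdigit() is False     # numeric, but not a digit
assert "123".isnumeric() is True
assert "9.0".isnumeric() is False      # '.' has no numeric value
```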
See also [`str.isnumeric`](https://docs.python.org/3/library/stdtypes.html#str.isnumeric "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> np.strings.isnumeric(['123', '123abc', '9.0', '1/4', 'VIII']) array([ True, False, False, False, False])
# numpy.strings.isspace strings.isspace(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = `<ufunc 'isspace'>` Returns true for each element if there are only whitespace characters in the string and there is at least one character, false otherwise. For byte strings, whitespace characters are the ones in the sequence `b' \t\n\r\x0b\f'`. For Unicode strings, a character is whitespace, if, in the Unicode character database, its general category is Zs (“Separator, space”), or its bidirectional class is one of WS, B, or S. Parameters: **x** array_like, with `StringDType`, `bytes_`, or `str_` dtype **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray Output array of bools This is a scalar if `x` is a scalar.
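The whitespace sets above match Python’s `bytes.isspace` and `str.isspace`; a stdlib-only sketch:

```python
# For byte strings the whitespace set is exactly b' \t\n\r\x0b\f';
# for str, Unicode separators such as U+2028 LINE SEPARATOR also count.
for ch in b" \t\n\r\x0b\f":
    assert bytes([ch]).isspace()

assert "\u2028".isspace() is True   # LINE SEPARATOR
assert b"".isspace() is False       # needs at least one character
```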
See also [`str.isspace`](https://docs.python.org/3/library/stdtypes.html#str.isspace "\(in Python v3.13\)") #### Examples >>> np.char.isspace(list("a b c")) array([False, True, False, True, False]) >>> np.char.isspace(b'\x0a \x0b \x0c') np.True_ >>> np.char.isspace(b'\x0a \x0b \x0c N') np.False_
# numpy.strings.istitle strings.istitle(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = `<ufunc 'istitle'>` Returns true for each element if the element is a titlecased string and there is at least one character, false otherwise. Parameters: **x** array_like, with `StringDType`, `bytes_` or `str_` dtype **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **out** ndarray Output array of bools This is a scalar if `x` is a scalar.
See also [`str.istitle`](https://docs.python.org/3/library/stdtypes.html#str.istitle "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> np.strings.istitle("Numpy Is Great") array(True) >>> np.strings.istitle("Numpy is great") array(False)
# numpy.strings.isupper strings.isupper(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = `<ufunc 'isupper'>` Return true for each element if all cased characters in the string are uppercase and there is at least one character, false otherwise. Parameters: **x** array_like, with `StringDType`, `bytes_` or `str_` dtype **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **out** ndarray Output array of bools This is a scalar if `x` is a scalar.
See also [`str.isupper`](https://docs.python.org/3/library/stdtypes.html#str.isupper "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> np.strings.isupper("GHC") array(True) >>> a = np.array(["hello", "HELLO", "Hello"]) >>> np.strings.isupper(a) array([False, True, False])
# numpy.strings.less strings.less(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = `<ufunc 'less'>` Return the truth value of (x1 < x2) element-wise. Parameters: **x1, x2** array_like Input arrays. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **out** ndarray or scalar Output array, element-wise comparison of `x1` and `x2`. Typically of type bool, unless `dtype=object` is passed. This is a scalar if both `x1` and `x2` are scalars.
See also [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`equal`](numpy.equal#numpy.equal "numpy.equal"), [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal") #### Examples >>> import numpy as np >>> np.less([1, 2], [2, 2]) array([ True, False]) The `<` operator can be used as a shorthand for `np.less` on ndarrays. >>> a = np.array([1, 2]) >>> b = np.array([2, 2]) >>> a < b array([ True, False])
# numpy.strings.less_equal strings.less_equal(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = `<ufunc 'less_equal'>` Return the truth value of (x1 <= x2) element-wise. Parameters: **x1, x2** array_like Input arrays. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **out** ndarray or scalar Output array, element-wise comparison of `x1` and `x2`. Typically of type bool, unless `dtype=object` is passed. This is a scalar if both `x1` and `x2` are scalars.
See also [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`less`](numpy.less#numpy.less "numpy.less"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`equal`](numpy.equal#numpy.equal "numpy.equal"), [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal") #### Examples >>> import numpy as np >>> np.less_equal([4, 2, 1], [2, 2, 2]) array([False, True, True]) The `<=` operator can be used as a shorthand for `np.less_equal` on ndarrays. >>> a = np.array([4, 2, 1]) >>> b = np.array([2, 2, 2]) >>> a <= b array([False, True, True]) # numpy.strings.ljust strings.ljust(_a_ , _width_ , _fillchar =' '_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L722-L783) Return an array with the elements of `a` left-justified in a string of length `width`. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **width** array_like, with any integer dtype The length of the resulting strings, unless `width < str_len(a)`. **fillchar** array-like, with `StringDType`, `bytes_`, or `str_` dtype Optional character to use for padding (default is space). Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types See also [`str.ljust`](https://docs.python.org/3/library/stdtypes.html#str.ljust "\(in Python v3.13\)") #### Notes While it is possible for `a` and `fillchar` to have different dtypes, passing a non-ASCII character in `fillchar` when `a` is of dtype “S” is not allowed, and a `ValueError` is raised. 
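The type constraint described in the note has a close analogue in Python’s own `str.ljust`/`bytes.ljust`, which the array version vectorizes element-wise; note that at the Python scalar level a mismatched fill character surfaces as a `TypeError` rather than a `ValueError`:

```python
# Pad on the right up to the requested width.
assert "aA".ljust(9, "*") == "aA*******"
assert b"aA".ljust(9, b"*") == b"aA*******"

# A str fill character cannot be combined with bytes data.
try:
    b"aA".ljust(9, "\u2713")
    mismatched = False
except TypeError:
    mismatched = True
assert mismatched
```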
#### Examples >>> import numpy as np >>> c = np.array(['aAaAaA', '  aA  ', 'abBABba']) >>> np.strings.ljust(c, width=3) array(['aAaAaA', '  aA  ', 'abBABba'], dtype='<U7') >>> np.strings.ljust(c, width=9) array(['aAaAaA   ', '  aA     ', 'abBABba  '], dtype='<U9')
# numpy.strings.lower strings.lower(_a_) Return an array with the elements converted to lowercase. Call [`str.lower`](https://docs.python.org/3/library/stdtypes.html#str.lower "\(in Python v3.13\)") element-wise. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype Input array Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types See also [`str.lower`](https://docs.python.org/3/library/stdtypes.html#str.lower "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> c = np.array(['A1B C', '1BCA', 'BCA1']); c array(['A1B C', '1BCA', 'BCA1'], dtype='<U5') >>> np.strings.lower(c) array(['a1b c', '1bca', 'bca1'], dtype='<U5')
# numpy.strings.lstrip strings.lstrip(_a_ , _chars =None_) For each element in `a`, return a copy with the leading characters removed. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **chars** scalar with the same dtype as `a`, optional The `chars` argument is a string specifying the set of characters to be removed. If `None`, the `chars` argument defaults to removing whitespace. The `chars` argument is not a prefix; rather, all combinations of its values are stripped. Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types See also [`str.lstrip`](https://docs.python.org/3/library/stdtypes.html#str.lstrip "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> c = np.array(['aAaAaA', '  aA  ', 'abBABba']) >>> c array(['aAaAaA', '  aA  ', 'abBABba'], dtype='<U7') >>> np.strings.lstrip(c, 'a') array(['AaAaA', '  aA  ', 'bBABba'], dtype='<U7') >>> np.strings.lstrip(c, 'A') # leaves c unchanged array(['aAaAaA', '  aA  ', 'abBABba'], dtype='<U7') >>> (np.strings.lstrip(c, ' ') == np.strings.lstrip(c, '')).all() np.False_ >>> (np.strings.lstrip(c, ' ') == np.strings.lstrip(c)).all() np.True_
# numpy.strings.mod strings.mod(_a_ , _values_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L196-L230) Return (a % i), that is pre-Python 2.6 string formatting (interpolation), element-wise for a pair of array_likes of str or unicode. Parameters: **a** array_like, with `np.bytes_` or `np.str_` dtype **values** array_like of values These values will be element-wise interpolated into the string. Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types #### Examples >>> import numpy as np >>> a = np.array(["NumPy is a %s library"]) >>> np.strings.mod(a, values=["Python"]) array(['NumPy is a Python library'], dtype='<U25') >>> a = np.array([b'%d bytes', b'%d bits']) >>> values = np.array([8, 64]) >>> np.strings.mod(a, values) array([b'8 bytes', b'64 bits'], dtype='|S7')
# numpy.strings.multiply strings.multiply(_a_ , _i_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L132-L193) Return (a * i), that is string multiple concatenation, element-wise. Values in `i` of less than 0 are treated as 0 (which yields an empty string).
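Both `mod` and `multiply` vectorize ordinary Python string operators (`%` and `*`); a stdlib-only sketch of the scalar rules, including the negative-count case just described:

```python
# %-interpolation, the scalar operation strings.mod vectorizes
assert "NumPy is a %s library" % "Python" == "NumPy is a Python library"
assert b"%d bytes" % 8 == b"8 bytes"   # bytes support %-formatting too

# repetition, the scalar operation strings.multiply vectorizes
assert "ab" * 3 == "ababab"
assert "ab" * 0 == ""
assert "ab" * -1 == ""                 # counts below 0 behave like 0
```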
Parameters: **a** array_like, with `StringDType`, `bytes_` or `str_` dtype **i** array_like, with any integer dtype Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types #### Examples >>> import numpy as np >>> a = np.array(["a", "b", "c"]) >>> np.strings.multiply(a, 3) array(['aaa', 'bbb', 'ccc'], dtype='<U3') >>> i = np.array([1, 2, 3]) >>> np.strings.multiply(a, i) array(['a', 'bb', 'ccc'], dtype='<U3') >>> np.strings.multiply(np.array(['a']), i) array(['a', 'aa', 'aaa'], dtype='<U3') >>> a = np.array(['a', 'b', 'c', 'd', 'e', 'f']).reshape((2, 3)) >>> np.strings.multiply(a, 3) array([['aaa', 'bbb', 'ccc'], ['ddd', 'eee', 'fff']], dtype='<U3') >>> np.strings.multiply(a, i) array([['a', 'bb', 'ccc'], ['d', 'ee', 'fff']], dtype='<U3')
# numpy.strings.not_equal strings.not_equal(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = `<ufunc 'not_equal'>` Return (x1 != x2) element-wise. Parameters: **x1, x2** array_like Input arrays. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **out** ndarray or scalar Output array, element-wise comparison of `x1` and `x2`. Typically of type bool, unless `dtype=object` is passed. This is a scalar if both `x1` and `x2` are scalars.
See also [`equal`](numpy.equal#numpy.equal "numpy.equal"), [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`less`](numpy.less#numpy.less "numpy.less"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal") #### Examples >>> import numpy as np >>> np.not_equal([1.,2.], [1., 3.]) array([False, True]) >>> np.not_equal([1, 2], [[1, 3],[1, 4]]) array([[False, True], [False, True]]) The `!=` operator can be used as a shorthand for `np.not_equal` on ndarrays. >>> a = np.array([1., 2.]) >>> b = np.array([1., 3.]) >>> a != b array([False, True]) # numpy.strings.partition strings.partition(_a_ , _sep_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L1457-L1522) Partition each element in `a` around `sep`. For each element in `a`, split the element at the first occurrence of `sep`, and return a 3-tuple containing the part before the separator, the separator itself, and the part after the separator. If the separator is not found, the first item of the tuple will contain the whole string, and the second and third ones will be the empty string. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype Input array **sep** array-like, with `StringDType`, `bytes_`, or `str_` dtype Separator to split each string element in `a`. 
Returns: **out** 3-tuple: * array with `StringDType`, `bytes_` or `str_` dtype with the part before the separator * array with `StringDType`, `bytes_` or `str_` dtype with the separator * array with `StringDType`, `bytes_` or `str_` dtype with the part after the separator See also [`str.partition`](https://docs.python.org/3/library/stdtypes.html#str.partition "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> x = np.array(["Numpy is nice!"]) >>> np.strings.partition(x, " ") (array(['Numpy'], dtype='<U5'), array([' '], dtype='<U1'), array(['is nice!'], dtype='<U8'))
# numpy.strings.replace strings.replace(_a_ , _old_ , _new_ , _count =-1_) For each element in `a`, return a copy of the string with occurrences of substring `old` replaced by `new`. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **old, new** array-like, with `StringDType`, `bytes_`, or `str_` dtype **count** array_like, with `int_` dtype If the optional argument `count` is given, only the first `count` occurrences are replaced. Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types See also [`str.replace`](https://docs.python.org/3/library/stdtypes.html#str.replace "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> a = np.array(["That is a mango", "Monkeys eat mangos"]) >>> np.strings.replace(a, 'mango', 'banana') array(['That is a banana', 'Monkeys eat bananas'], dtype='<U19') >>> a = np.array(["The dish is fresh", "This is it"]) >>> np.strings.replace(a, 'is', 'was') array(['The dwash was fresh', 'Thwas was it'], dtype='<U19')
# numpy.strings.rfind strings.rfind(_a_ , _sub_ , _start =0_, _end =None_) For each element, return the highest index in the string where substring `sub` is found, such that `sub` is contained in the range [`start`, `end`). Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **sub** array-like, with `StringDType`, `bytes_`, or `str_` dtype The substring to search for. **start, end** array_like, with any integer dtype, optional The range to look in, interpreted as in slice notation. Returns: **y** ndarray Output array of ints See also [`str.rfind`](https://docs.python.org/3/library/stdtypes.html#str.rfind "\(in Python v3.13\)") #### Examples >>> import numpy as np >>> a = np.array(["Computer Science"]) >>> np.strings.rfind(a, "Science", start=0, end=None) array([9]) >>> np.strings.rfind(a, "Science", start=0, end=8) array([-1]) >>> b = np.array(["Computer Science", "Science"]) >>> np.strings.rfind(b, "Science", start=0, end=None) array([9, 0])
# numpy.strings.rindex strings.rindex(_a_ , _sub_ , _start =0_, _end =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L348-L379) Like [`rfind`](numpy.strings.rfind#numpy.strings.rfind "numpy.strings.rfind"), but raises [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError "\(in Python v3.13\)") when the substring `sub` is not found. Parameters: **a** array-like, with `np.bytes_` or `np.str_` dtype **sub** array-like, with `np.bytes_` or `np.str_` dtype **start, end** array-like, with any integer dtype, optional Returns: **out** ndarray Output array of ints.
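`rfind` and `rindex` mirror Python’s `str.rfind` and `str.rindex`; a stdlib-only sketch of the right-to-left search and the error behavior:

```python
# rfind/rindex report the highest matching index; a restricted
# [start, end) range that excludes the match yields -1 or ValueError.
s = "Computer Science"

assert s.rfind("Science") == 9
assert s.rfind("Science", 0, 8) == -1   # "Computer" alone: no match

try:
    s.rindex("Science", 0, 8)           # same miss raises instead
    raised = False
except ValueError:
    raised = True
assert raised
```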
See also [`rfind`](numpy.strings.rfind#numpy.strings.rfind "numpy.strings.rfind"), [`str.rindex`](https://docs.python.org/3/library/stdtypes.html#str.rindex "\(in Python v3.13\)") #### Examples >>> a = np.array(["Computer Science"]) >>> np.strings.rindex(a, "Science", start=0, end=None) array([9]) # numpy.strings.rjust strings.rjust(_a_ , _width_ , _fillchar =' '_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L786-L847) Return an array with the elements of `a` right-justified in a string of length `width`. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **width** array_like, with any integer dtype The length of the resulting strings, unless `width < str_len(a)`. **fillchar** array-like, with `StringDType`, `bytes_`, or `str_` dtype Optional padding character to use (default is space). Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types See also [`str.rjust`](https://docs.python.org/3/library/stdtypes.html#str.rjust "\(in Python v3.13\)") #### Notes While it is possible for `a` and `fillchar` to have different dtypes, passing a non-ASCII character in `fillchar` when `a` is of dtype “S” is not allowed, and a `ValueError` is raised. 
#### Examples >>> import numpy as np >>> a = np.array(['aAaAaA', '  aA  ', 'abBABba']) >>> np.strings.rjust(a, width=3) array(['aAaAaA', '  aA  ', 'abBABba'], dtype='<U7') >>> np.strings.rjust(a, width=9) array(['   aAaAaA', '     aA  ', '  abBABba'], dtype='<U9')

# numpy.strings.rpartition strings.rpartition(_a_ , _sep_) Partition (split) each element around the right-most separator. For each element in `a`, split the element at the last occurrence of `sep`, and return a 3-tuple containing the part before the separator, the separator itself, and the part after the separator. If the separator is not found, the third item of the tuple will contain the whole string, and the first and second ones will be the empty string. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype Input array **sep** array-like, with `StringDType`, `bytes_`, or `str_` dtype Separator to split each string element in `a`. Returns: **out** 3-tuple: * array with `StringDType`, `bytes_` or `str_` dtype with the part before the separator * array with `StringDType`, `bytes_` or `str_` dtype with the separator * array with `StringDType`, `bytes_` or `str_` dtype with the part after the separator See also [`str.rpartition`](https://docs.python.org/3/library/stdtypes.html#str.rpartition "\(in Python v3.13\)")

#### Examples >>> import numpy as np >>> a = np.array(['aAaAaA', '  aA  ', 'abBABba']) >>> np.strings.rpartition(a, 'A') (array(['aAaAa', '  a', 'abB'], dtype='<U5'), array(['A', 'A', 'A'], dtype='<U1'), array(['', '  ', 'Bba'], dtype='<U3'))

# numpy.strings.rstrip strings.rstrip(_a_ , _chars =None_) For each element in `a`, return a copy with the trailing characters removed. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **chars** scalar with the same dtype as `a`, optional The `chars` argument is a string specifying the set of characters to be removed. If `None`, the `chars` argument defaults to removing whitespace. The `chars` argument is not a suffix; rather, all combinations of its values are stripped. Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types See also [`str.rstrip`](https://docs.python.org/3/library/stdtypes.html#str.rstrip "\(in Python v3.13\)")

#### Examples >>> import numpy as np >>> c = np.array(['aAaAaA', 'abBABba']) >>> c array(['aAaAaA', 'abBABba'], dtype='<U7') >>> np.strings.rstrip(c, 'a') array(['aAaAaA', 'abBABb'], dtype='<U7') >>> np.strings.rstrip(c, 'A') array(['aAaAa', 'abBABba'], dtype='<U7')

# numpy.strings.startswith strings.startswith(_a_ , _prefix_ , _start =0_, _end =None_) Returns a boolean array which is `True` where the string element in `a` starts with `prefix`, otherwise `False`. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **prefix** array-like, with `StringDType`, `bytes_`, or `str_` dtype **start, end** array-like, with any integer dtype, optional With `start`, test beginning at that position. With `end`, stop comparing at that position. Returns: **out** ndarray Output array of bools See also [`str.startswith`](https://docs.python.org/3/library/stdtypes.html#str.startswith "\(in Python v3.13\)")

#### Examples >>> import numpy as np >>> s = np.array(['foo', 'bar']) >>> s array(['foo', 'bar'], dtype='<U3') >>> np.strings.startswith(s, 'fo') array([True, False]) >>> np.strings.startswith(s, 'o', start=1, end=2) array([True, False])

# numpy.strings.str_len strings.str_len(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = `<ufunc 'str_len'>` Returns the length of each element. For byte strings, this is the number of bytes, while, for Unicode strings, it is the number of Unicode code points. Parameters: **x** array_like, with `StringDType`, `bytes_`, or `str_` dtype **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.
`**kwargs` For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray Output array of ints This is a scalar if `x` is a scalar. See also [`len`](https://docs.python.org/3/library/functions.html#len "\(in Python v3.13\)")

#### Examples >>> import numpy as np >>> a = np.array(['Grace Hopper Conference', 'Open Source Day']) >>> np.strings.str_len(a) array([23, 15]) >>> a = np.array(['Р', 'о']) >>> np.strings.str_len(a) array([1, 1]) >>> a = np.array([['hello', 'world'], ['Р', 'о']]) >>> np.strings.str_len(a) array([[5, 5], [1, 1]])

# numpy.strings.strip strings.strip(_a_ , _chars =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L988-L1032) For each element in `a`, return a copy with the leading and trailing characters removed. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype **chars** scalar with the same dtype as `a`, optional The `chars` argument is a string specifying the set of characters to be removed. If `None`, the `chars` argument defaults to removing whitespace. The `chars` argument is not a prefix or suffix; rather, all combinations of its values are stripped.
Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types See also [`str.strip`](https://docs.python.org/3/library/stdtypes.html#str.strip "\(in Python v3.13\)")

#### Examples >>> import numpy as np >>> c = np.array(['aAaAaA', '  aA  ', 'abBABba']) >>> c array(['aAaAaA', '  aA  ', 'abBABba'], dtype='<U7') >>> np.strings.strip(c) array(['aAaAaA', 'aA', 'abBABba'], dtype='<U7') >>> np.strings.strip(c, 'a') array(['AaAaA', '  aA  ', 'bBABb'], dtype='<U7') >>> np.strings.strip(c, 'A') array(['aAaAa', '  aA  ', 'abBABba'], dtype='<U7')

# numpy.strings.swapcase strings.swapcase(_a_) Return element-wise a copy of the string with uppercase characters converted to lowercase and vice versa. Calls [`str.swapcase`](https://docs.python.org/3/library/stdtypes.html#str.swapcase "\(in Python v3.13\)") element-wise. For 8-bit strings, this method is locale-dependent. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype Input array. Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types See also [`str.swapcase`](https://docs.python.org/3/library/stdtypes.html#str.swapcase "\(in Python v3.13\)")

#### Examples >>> import numpy as np >>> c=np.array(['a1B c','1b Ca','b Ca1','cA1b'],'S5'); c array(['a1B c', '1b Ca', 'b Ca1', 'cA1b'], dtype='|S5') >>> np.strings.swapcase(c) array(['A1b C', '1B cA', 'B cA1', 'Ca1B'], dtype='|S5')

# numpy.strings.title strings.title(_a_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L1189-L1228) Return element-wise title cased version of string or unicode. Title case words start with uppercase characters, all remaining cased characters are lowercase. Calls [`str.title`](https://docs.python.org/3/library/stdtypes.html#str.title "\(in Python v3.13\)") element-wise. For 8-bit strings, this method is locale-dependent. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype Input array.
Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types See also [`str.title`](https://docs.python.org/3/library/stdtypes.html#str.title "\(in Python v3.13\)")

#### Examples >>> import numpy as np >>> c=np.array(['a1b c','1b ca','b ca1','ca1b'],'S5'); c array(['a1b c', '1b ca', 'b ca1', 'ca1b'], dtype='|S5') >>> np.strings.title(c) array(['A1B C', '1B Ca', 'B Ca1', 'Ca1B'], dtype='|S5')

# numpy.strings.translate strings.translate(_a_ , _table_ , _deletechars =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/strings.py#L1594-L1641) For each element in `a`, return a copy of the string where all characters occurring in the optional argument `deletechars` are removed, and the remaining characters have been mapped through the given translation table. Calls [`str.translate`](https://docs.python.org/3/library/stdtypes.html#str.translate "\(in Python v3.13\)") element-wise. Parameters: **a** array-like, with `np.bytes_` or `np.str_` dtype **table** str of length 256 **deletechars** str Returns: **out** ndarray Output array of str or unicode, depending on input type See also [`str.translate`](https://docs.python.org/3/library/stdtypes.html#str.translate "\(in Python v3.13\)")

#### Examples >>> import numpy as np >>> a = np.array(['a1b c', '1bca', 'bca1']) >>> table = a[0].maketrans('abc', '123') >>> deletechars = ' ' >>> np.char.translate(a, table, deletechars) array(['112 3', '1231', '2311'], dtype='<U5')

# numpy.strings.upper strings.upper(_a_) Return an array with the elements converted to uppercase. Calls [`str.upper`](https://docs.python.org/3/library/stdtypes.html#str.upper "\(in Python v3.13\)") element-wise. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype Input array. Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types See also [`str.upper`](https://docs.python.org/3/library/stdtypes.html#str.upper "\(in Python v3.13\)")

#### Examples >>> import numpy as np >>> c = np.array(['a1b c', '1bca', 'bca1']); c array(['a1b c', '1bca', 'bca1'], dtype='<U5') >>> np.strings.upper(c) array(['A1B C', '1BCA', 'BCA1'], dtype='<U5')

# numpy.strings.zfill strings.zfill(_a_ , _width_) Return the numeric string left-filled with zeros. A leading sign prefix (`+`/`-`) is handled by inserting the padding after the sign character. Parameters: **a** array-like, with `StringDType`, `bytes_`, or `str_` dtype Input array. **width** array_like, with any integer dtype Width of string to left-fill elements in `a`. Returns: **out** ndarray Output array of `StringDType`, `bytes_` or `str_` dtype, depending on input types See also [`str.zfill`](https://docs.python.org/3/library/stdtypes.html#str.zfill "\(in Python v3.13\)")

#### Examples >>> import numpy as np >>> np.strings.zfill(['1', '-1', '+1'], 3) array(['001', '-01', '+01'], dtype='<U3')

# numpy.subtract numpy.subtract(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = `<ufunc 'subtract'>` Subtract arguments, element-wise. Parameters: **x1, x2** array_like The arrays to be subtracted from each other. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output).
**out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. `**kwargs` For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray The difference of `x1` and `x2`, element-wise. This is a scalar if both `x1` and `x2` are scalars.

#### Notes Equivalent to `x1 - x2` in terms of array broadcasting.

#### Examples >>> import numpy as np >>> np.subtract(1.0, 4.0) -3.0 >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = np.arange(3.0) >>> np.subtract(x1, x2) array([[ 0., 0., 0.], [ 3., 3., 3.], [ 6., 6., 6.]]) The `-` operator can be used as a shorthand for `np.subtract` on ndarrays. >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = np.arange(3.0) >>> x1 - x2 array([[0., 0., 0.], [3., 3., 3.], [6., 6., 6.]])

# numpy.sum numpy.sum(_a_ , _axis=None_ , _dtype=None_ , _out=None_ , _keepdims=`<no value>`_ , _initial=`<no value>`_ , _where=`<no value>`_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L2338-L2469) Sum of array elements over a given axis. Parameters: **a** array_like Elements to sum. **axis** None or int or tuple of ints, optional Axis or axes along which a sum is performed. The default, `axis=None`, will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis.
If axis is a tuple of ints, a sum is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before. **dtype** dtype, optional The type of the returned array and of the accumulator in which the elements are summed. The dtype of `a` is used by default unless `a` has an integer dtype of less precision than the default platform integer. In that case, if `a` is signed then the platform integer is used while if `a` is unsigned then an unsigned integer of the same precision as the platform integer is used. **out** ndarray, optional Alternative output array in which to place the result. It must have the same shape as the expected output, but the type of the output values will be cast if necessary. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the `sum` method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non-default value will be. If the sub-class’ method does not implement `keepdims` any exceptions will be raised. **initial** scalar, optional Starting value for the sum. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. **where** array_like of bool, optional Elements to include in the sum. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. Returns: **sum_along_axis** ndarray An array with the same shape as `a`, with the specified axis removed. If `a` is a 0-d array, or if `axis` is None, a scalar is returned. If an output array is specified, a reference to `out` is returned. See also [`ndarray.sum`](numpy.ndarray.sum#numpy.ndarray.sum "numpy.ndarray.sum") Equivalent method. [`add`](numpy.add#numpy.add "numpy.add") `numpy.add.reduce` equivalent function. 
[`cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") Cumulative sum of array elements. [`trapezoid`](numpy.trapezoid#numpy.trapezoid "numpy.trapezoid") Integration of array values using composite trapezoidal rule. [`mean`](numpy.mean#numpy.mean "numpy.mean"), [`average`](numpy.average#numpy.average "numpy.average")

#### Notes Arithmetic is modular when using integer types, and no error is raised on overflow. The sum of an empty array is the neutral element 0: >>> np.sum([]) 0.0

For floating point numbers the numerical precision of sum (and `np.add.reduce`) is in general limited by directly adding each number individually to the result, causing rounding errors in every step. However, NumPy will often use a numerically better approach (partial pairwise summation) leading to improved precision in many use-cases. This improved precision is always provided when no `axis` is given. When `axis` is given, it will depend on which axis is summed. Technically, to provide the best speed possible, the improved precision is only used when the summation is along the fast axis in memory. Note that the exact precision may vary depending on other parameters. In contrast to NumPy, Python's `math.fsum` function uses a slower but more precise approach to summation. Especially when summing a large number of lower-precision floating point numbers, such as `float32`, numerical errors can become significant. In such cases it can be advisable to use `dtype="float64"` to use a higher precision for the output.
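The `dtype` advice above can be sketched as follows (variable names are ours); passing `dtype` selects the accumulator and result precision for low-precision inputs:

```python
import numpy as np

x = np.full(1_000_000, 0.1, dtype=np.float32)

# The accumulator (and result) dtype follows the `dtype` argument,
# so float32 data can be summed with a float64 accumulator.
s32 = x.sum()                  # float32 accumulator
s64 = x.sum(dtype=np.float64)  # float64 accumulator
print(s32.dtype, s64.dtype)    # float32 float64
```

How much accuracy this buys depends on the data and on whether pairwise summation applies, so treat the higher-precision accumulator as insurance rather than a guarantee.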
#### Examples >>> import numpy as np >>> np.sum([0.5, 1.5]) 2.0 >>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32) np.int32(1) >>> np.sum([[0, 1], [0, 5]]) 6 >>> np.sum([[0, 1], [0, 5]], axis=0) array([0, 6]) >>> np.sum([[0, 1], [0, 5]], axis=1) array([1, 5]) >>> np.sum([[0, 1], [np.nan, 5]], where=[False, True], axis=1) array([1., 5.]) If the accumulator is too small, overflow occurs: >>> np.ones(128, dtype=np.int8).sum(dtype=np.int8) np.int8(-128) You can also start the sum with a value other than zero: >>> np.sum([10], initial=5) 15 # numpy.swapaxes numpy.swapaxes(_a_ , _axis1_ , _axis2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L578-L623) Interchange two axes of an array. Parameters: **a** array_like Input array. **axis1** int First axis. **axis2** int Second axis. Returns: **a_swapped** ndarray For NumPy >= 1.10.0, if `a` is an ndarray, then a view of `a` is returned; otherwise a new array is created. For earlier NumPy versions a view of `a` is returned only if the order of the axes is changed, otherwise the input array is returned. #### Examples >>> import numpy as np >>> x = np.array([[1,2,3]]) >>> np.swapaxes(x,0,1) array([[1], [2], [3]]) >>> x = np.array([[[0,1],[2,3]],[[4,5],[6,7]]]) >>> x array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) >>> np.swapaxes(x,0,2) array([[[0, 4], [2, 6]], [[1, 5], [3, 7]]]) # numpy.take numpy.take(_a_ , _indices_ , _axis =None_, _out =None_, _mode ='raise'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L109-L203) Take elements from an array along an axis. When axis is not None, this function does the same thing as “fancy” indexing (indexing arrays using arrays); however, it can be easier to use if you need elements along a given axis. A call such as `np.take(arr, indices, axis=3)` is equivalent to `arr[:,:,:,indices,...]`. 
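The `take`/fancy-indexing equivalence stated above can be checked directly; a small sketch of ours:

```python
import numpy as np

arr = np.arange(24).reshape(2, 3, 4)
indices = [0, 2]

# Taking along axis=1 is the same as fancy-indexing that axis.
assert (np.take(arr, indices, axis=1) == arr[:, indices, :]).all()

# With axis=None (the default) the flattened array is indexed.
assert (np.take(arr, [5, 11]) == arr.ravel()[[5, 11]]).all()
```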
Explained without fancy indexing, this is equivalent to the following use of [`ndindex`](numpy.ndindex#numpy.ndindex "numpy.ndindex"), which sets each of `ii`, `jj`, and `kk` to a tuple of indices: Ni, Nk = a.shape[:axis], a.shape[axis+1:] Nj = indices.shape for ii in ndindex(Ni): for jj in ndindex(Nj): for kk in ndindex(Nk): out[ii + jj + kk] = a[ii + (indices[jj],) + kk] Parameters: **a** array_like (Ni…, M, Nk…) The source array. **indices** array_like (Nj…) The indices of the values to extract. Also allow scalars for indices. **axis** int, optional The axis over which to select values. By default, the flattened input array is used. **out** ndarray, optional (Ni…, Nj…, Nk…) If provided, the result will be placed in this array. It should be of the appropriate shape and dtype. Note that `out` is always buffered if `mode=’raise’`; use other modes for better performance. **mode**{‘raise’, ‘wrap’, ‘clip’}, optional Specifies how out-of-bounds indices will behave. * ‘raise’ – raise an error (default) * ‘wrap’ – wrap around * ‘clip’ – clip to the range ‘clip’ mode means that all indices that are too large are replaced by the index that addresses the last element along that axis. Note that this disables indexing with negative numbers. Returns: **out** ndarray (Ni…, Nj…, Nk…) The returned array has the same type as `a`. 
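The three `mode` settings can be compared on an out-of-bounds index; an illustrative sketch of ours:

```python
import numpy as np

a = np.array([4, 3, 5, 7])

print(np.take(a, [0, 1, 5], mode='wrap'))  # index 5 wraps to 5 % 4 = 1 -> [4, 3, 3]
print(np.take(a, [0, 1, 5], mode='clip'))  # index 5 clips to 3        -> [4, 3, 7]

try:
    np.take(a, [0, 1, 5])  # mode='raise' is the default
except IndexError as exc:
    print("raise mode:", exc)
```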
See also [`compress`](numpy.compress#numpy.compress "numpy.compress") Take elements using a boolean mask [`ndarray.take`](numpy.ndarray.take#numpy.ndarray.take "numpy.ndarray.take") equivalent method [`take_along_axis`](numpy.take_along_axis#numpy.take_along_axis "numpy.take_along_axis") Take elements by matching the array and the index arrays

#### Notes By eliminating the inner loop in the description above, and using [`s_`](numpy.s_#numpy.s_ "numpy.s_") to build simple slice objects, `take` can be expressed in terms of applying fancy indexing to each 1-d slice: Ni, Nk = a.shape[:axis], a.shape[axis+1:] for ii in ndindex(Ni): for kk in ndindex(Nk): out[ii + s_[...,] + kk] = a[ii + s_[:,] + kk][indices] For this reason, it is equivalent to (but faster than) the following use of [`apply_along_axis`](numpy.apply_along_axis#numpy.apply_along_axis "numpy.apply_along_axis"): out = np.apply_along_axis(lambda a_1d: a_1d[indices], axis, a)

#### Examples >>> import numpy as np >>> a = [4, 3, 5, 7, 6, 8] >>> indices = [0, 1, 4] >>> np.take(a, indices) array([4, 3, 6]) In this example if `a` is an ndarray, "fancy" indexing can be used. >>> a = np.array(a) >>> a[indices] array([4, 3, 6]) If [`indices`](numpy.indices#numpy.indices "numpy.indices") is not one dimensional, the output also has these dimensions. >>> np.take(a, [[0, 1], [2, 3]]) array([[4, 3], [5, 7]])

# numpy.take_along_axis numpy.take_along_axis(_arr_ , _indices_ , _axis_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_shape_base_impl.py#L57-L175) Take values from the input array by matching 1d index and data slices. This iterates over matching 1d slices oriented along the specified axis in the index and data arrays, and uses the former to look up values in the latter. These slices can be different lengths.
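"These slices can be different lengths" means the index axis (J) need not match the data axis (M); for example, selecting only the two largest entries per row (our sketch):

```python
import numpy as np

a = np.array([[10, 30, 20],
              [60, 40, 50]])

# argsort yields indices of length M=3 per row; keep only the last two (J=2).
ai = np.argsort(a, axis=1)[:, -2:]
print(np.take_along_axis(a, ai, axis=1))  # [[20, 30], [50, 60]]
```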
Functions returning an index along an axis, like [`argsort`](numpy.argsort#numpy.argsort "numpy.argsort") and [`argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition"), produce suitable indices for this function. Parameters: **arr** ndarray (Ni…, M, Nk…) Source array **indices** ndarray (Ni…, J, Nk…) Indices to take along each 1d slice of `arr`. This must match the dimension of arr, but dimensions Ni and Nj only need to broadcast against `arr`. **axis** int The axis to take 1d slices along. If axis is None, the input array is treated as if it had first been flattened to 1d, for consistency with [`sort`](numpy.sort#numpy.sort "numpy.sort") and [`argsort`](numpy.argsort#numpy.argsort "numpy.argsort"). Returns: out: ndarray (Ni…, J, Nk…) The indexed result. See also [`take`](numpy.take#numpy.take "numpy.take") Take along an axis, using the same indices for every 1d slice [`put_along_axis`](numpy.put_along_axis#numpy.put_along_axis "numpy.put_along_axis") Put values into the destination array by matching 1d index and data slices #### Notes This is equivalent to (but faster than) the following use of [`ndindex`](numpy.ndindex#numpy.ndindex "numpy.ndindex") and [`s_`](numpy.s_#numpy.s_ "numpy.s_"), which sets each of `ii` and `kk` to a tuple of indices: Ni, M, Nk = a.shape[:axis], a.shape[axis], a.shape[axis+1:] J = indices.shape[axis] # Need not equal M out = np.empty(Ni + (J,) + Nk) for ii in ndindex(Ni): for kk in ndindex(Nk): a_1d = a [ii + s_[:,] + kk] indices_1d = indices[ii + s_[:,] + kk] out_1d = out [ii + s_[:,] + kk] for j in range(J): out_1d[j] = a_1d[indices_1d[j]] Equivalently, eliminating the inner loop, the last two lines would be: out_1d[:] = a_1d[indices_1d] #### Examples >>> import numpy as np For this sample array >>> a = np.array([[10, 30, 20], [60, 40, 50]]) We can sort either by using sort directly, or argsort and this function >>> np.sort(a, axis=1) array([[10, 20, 30], [40, 50, 60]]) >>> ai = np.argsort(a, axis=1) >>> ai 
array([[0, 2, 1], [1, 2, 0]]) >>> np.take_along_axis(a, ai, axis=1) array([[10, 20, 30], [40, 50, 60]]) The same works for max and min, if you maintain the trivial dimension with `keepdims`: >>> np.max(a, axis=1, keepdims=True) array([[30], [60]]) >>> ai = np.argmax(a, axis=1, keepdims=True) >>> ai array([[1], [0]]) >>> np.take_along_axis(a, ai, axis=1) array([[30], [60]]) If we want to get the max and min at the same time, we can stack the indices first >>> ai_min = np.argmin(a, axis=1, keepdims=True) >>> ai_max = np.argmax(a, axis=1, keepdims=True) >>> ai = np.concatenate([ai_min, ai_max], axis=1) >>> ai array([[0, 1], [1, 0]]) >>> np.take_along_axis(a, ai, axis=1) array([[10, 30], [40, 60]]) # numpy.tan numpy.tan(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_])_= _ Compute tangent element-wise. Equivalent to `np.sin(x)/np.cos(x)` element-wise. Parameters: **x** array_like Input array. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray The corresponding tangent values. This is a scalar if `x` is a scalar. #### Notes If `out` is provided, the function writes the result into it, and returns a reference to `out`. 
(See Examples.)

#### References M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions. New York, NY: Dover, 1972.

#### Examples >>> import numpy as np >>> from math import pi >>> np.tan(np.array([-pi,pi/2,pi])) array([ 1.22460635e-16, 1.63317787e+16, -1.22460635e-16]) >>> >>> # Example of providing the optional output parameter illustrating >>> # that what is returned is a reference to said parameter >>> out1 = np.array([0], dtype='d') >>> out2 = np.cos([0.1], out1) >>> out2 is out1 True >>> >>> # Example of ValueError due to provision of shape mis-matched `out` >>> np.cos(np.zeros((3,3)),np.zeros((2,2))) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: operands could not be broadcast together with shapes (3,3) (2,2)

# numpy.tanh numpy.tanh(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) = `<ufunc 'tanh'>` Compute hyperbolic tangent element-wise. Equivalent to `np.sinh(x)/np.cosh(x)` or `-1j * np.tan(1j*x)`. Parameters: **x** array_like Input array. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. `**kwargs` For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray The corresponding hyperbolic tangent values. This is a scalar if `x` is a scalar.
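The `np.sinh(x)/np.cosh(x)` equivalence quoted above can be spot-checked numerically; a small sketch of ours:

```python
import numpy as np

x = np.linspace(-3, 3, 7)

# tanh agrees with sinh/cosh element-wise ...
assert np.allclose(np.tanh(x), np.sinh(x) / np.cosh(x))

# ... and saturates toward +/-1 for large |x|.
assert abs(np.tanh(20.0) - 1.0) < 1e-12
```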
#### Notes If `out` is provided, the function writes the result into it, and returns a reference to `out`. (See Examples.)

#### References [1] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions. New York, NY: Dover, 1972, pg. 83. [2] Wikipedia, "Hyperbolic function", https://en.wikipedia.org/wiki/Hyperbolic_function

#### Examples >>> import numpy as np >>> np.tanh((0, np.pi*1j, np.pi*1j/2)) array([ 0. +0.00000000e+00j, 0. -1.22460635e-16j, 0. +1.63317787e+16j]) >>> # Example of providing the optional output parameter illustrating >>> # that what is returned is a reference to said parameter >>> out1 = np.array([0], dtype='d') >>> out2 = np.tanh([0.1], out1) >>> out2 is out1 True >>> # Example of ValueError due to provision of shape mis-matched `out` >>> np.tanh(np.zeros((3,3)),np.zeros((2,2))) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: operands could not be broadcast together with shapes (3,3) (2,2)

# numpy.tensordot numpy.tensordot(_a_ , _b_ , _axes =2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/numeric.py#L968-L1178) Compute tensor dot product along specified axes. Given two tensors, `a` and `b`, and an array_like object containing two array_like objects, `(a_axes, b_axes)`, sum the products of `a`'s and `b`'s elements (components) over the axes specified by `a_axes` and `b_axes`. The third argument can be a single non-negative integer_like scalar, `N`; if it is such, then the last `N` dimensions of `a` and the first `N` dimensions of `b` are summed over. Parameters: **a, b** array_like Tensors to "dot". **axes** int or (2,) array_like * integer_like If an int N, sum over the last N axes of `a` and the first N axes of `b` in order. The sizes of the corresponding axes must match. * (2,) array_like Or, a list of axes to be summed over, first sequence applying to `a`, second to `b`. Both elements array_like must be of the same length. Returns: **output** ndarray The tensor dot product of the input.
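For 2-D arrays, `axes=1` reduces to an ordinary matrix product, which makes a handy sanity check on the axes convention (our sketch):

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)

# axes=1: sum over the last axis of A and the first axis of B -> matmul.
assert np.allclose(np.tensordot(A, B, axes=1), A @ B)

# axes=0: outer (tensor) product; shape is the concatenation of both shapes.
assert np.tensordot(A, B, axes=0).shape == (2, 3, 3, 4)
```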
See also [`dot`](numpy.dot#numpy.dot "numpy.dot"), [`einsum`](numpy.einsum#numpy.einsum "numpy.einsum")

#### Notes Three common use cases are: * `axes = 0` : tensor product \\(a\otimes b\\) * `axes = 1` : tensor dot product \\(a\cdot b\\) * `axes = 2` : (default) tensor double contraction \\(a:b\\) When `axes` is integer_like, the sequence of axes for evaluation will be: from the -Nth axis to the -1th axis in `a`, and from the 0th axis to the (N-1)th axis in `b`. For example, `axes = 2` is equal to `axes = [[-2, -1], [0, 1]]`. When there is more than one axis to sum over - and they are not the last (first) axes of `a` (`b`) - the argument `axes` should consist of two sequences of the same length, with the first axis to sum over given first in both sequences, the second axis second, and so forth. The same calculation can also be expressed with `numpy.einsum`. The shape of the result consists of the non-contracted axes of the first tensor, followed by the non-contracted axes of the second.

#### Examples An example on integer_like: >>> a_0 = np.array([[1, 2], [3, 4]]) >>> b_0 = np.array([[5, 6], [7, 8]]) >>> c_0 = np.tensordot(a_0, b_0, axes=0) >>> c_0.shape (2, 2, 2, 2) >>> c_0 array([[[[ 5, 6], [ 7, 8]], [[10, 12], [14, 16]]], [[[15, 18], [21, 24]], [[20, 24], [28, 32]]]]) An example on array_like: >>> a = np.arange(60.).reshape(3,4,5) >>> b = np.arange(24.).reshape(4,3,2) >>> c = np.tensordot(a,b, axes=([1,0],[0,1])) >>> c.shape (5, 2) >>> c array([[4400., 4730.], [4532., 4874.], [4664., 5018.], [4796., 5162.], [4928., 5306.]]) A slower but equivalent way of computing the same... >>> d = np.zeros((5,2)) >>> for i in range(5): ... for j in range(2): ... for k in range(3): ... for n in range(4): ...
d[i,j] += a[k,n,i] * b[n,k,j] >>> c == d array([[ True, True], [ True, True], [ True, True], [ True, True], [ True, True]]) An extended example taking advantage of the overloading of + and *: >>> a = np.array(range(1, 9)) >>> a.shape = (2, 2, 2) >>> A = np.array(('a', 'b', 'c', 'd'), dtype=object) >>> A.shape = (2, 2) >>> a; A array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) array([['a', 'b'], ['c', 'd']], dtype=object) >>> np.tensordot(a, A) # third argument default is 2 for double-contraction array(['abbcccdddd', 'aaaaabbbbbbcccccccdddddddd'], dtype=object) >>> np.tensordot(a, A, 1) array([[['acc', 'bdd'], ['aaacccc', 'bbbdddd']], [['aaaaacccccc', 'bbbbbdddddd'], ['aaaaaaacccccccc', 'bbbbbbbdddddddd']]], dtype=object) >>> np.tensordot(a, A, 0) # tensor product (result too long to incl.) array([[[[['a', 'b'], ['c', 'd']], ... >>> np.tensordot(a, A, (0, 1)) array([[['abbbbb', 'cddddd'], ['aabbbbbb', 'ccdddddd']], [['aaabbbbbbb', 'cccddddddd'], ['aaaabbbbbbbb', 'ccccdddddddd']]], dtype=object) >>> np.tensordot(a, A, (2, 1)) array([[['abb', 'cdd'], ['aaabbbb', 'cccdddd']], [['aaaaabbbbbb', 'cccccdddddd'], ['aaaaaaabbbbbbbb', 'cccccccdddddddd']]], dtype=object) >>> np.tensordot(a, A, ((0, 1), (0, 1))) array(['abbbcccccddddddd', 'aabbbbccccccdddddddd'], dtype=object) >>> np.tensordot(a, A, ((2, 1), (1, 0))) array(['acccbbdddd', 'aaaaacccccccbbbbbbdddddddd'], dtype=object) # numpy.testing.assert_ testing.assert_(_val_ , _msg =''_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L74-L91) Assert that works in release mode. Accepts callable msg to allow deferring evaluation until failure. The Python built-in `assert` does not work when executing code in optimized mode (the `-O` flag) - no byte-code is generated for it. For documentation on usage, refer to the Python documentation. 
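Since `assert_` survives `python -O` and accepts a callable message, a typical use looks like the following; this is our illustrative sketch, not from the docs:

```python
import numpy as np
from numpy.testing import assert_

x = np.arange(5)

# Passes silently for a truthy value, even under `python -O`.
assert_(x.size == 5)

# A callable msg is only evaluated if the assertion fails,
# so building an expensive message costs nothing on success.
try:
    assert_(x.size == 99, msg=lambda: f"unexpected size {x.size}")
except AssertionError as exc:
    print(exc)
```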
# numpy.testing.assert_allclose testing.assert_allclose(_actual_ , _desired_ , _rtol =1e-07_, _atol =0_, _equal_nan =True_, _err_msg =''_, _verbose =True_, _*_ , _strict =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L1574-L1680) Raises an AssertionError if two objects are not equal up to desired tolerance. Given two array_like objects, check that their shapes and all elements are equal (but see the Notes for the special handling of a scalar). An exception is raised if the shapes mismatch or any values conflict. In contrast to the standard usage in numpy, NaNs are compared like numbers, no assertion is raised if both objects have NaNs in the same positions. The test is equivalent to `allclose(actual, desired, rtol, atol)` (note that `allclose` has different default values). It compares the difference between `actual` and `desired` to `atol + rtol * abs(desired)`. Parameters: **actual** array_like Array obtained. **desired** array_like Array desired. **rtol** float, optional Relative tolerance. **atol** float, optional Absolute tolerance. **equal_nan** bool, optional. If True, NaNs will compare equal. **err_msg** str, optional The error message to be printed in case of failure. **verbose** bool, optional If True, the conflicting values are appended to the error message. **strict** bool, optional If True, raise an `AssertionError` when either the shape or the data type of the arguments does not match. The special handling of scalars mentioned in the Notes section is disabled. New in version 2.0.0. Raises: AssertionError If actual and desired are not equal up to specified precision. 
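The tolerance formula above, `atol + rtol * abs(desired)`, can be exercised directly (our sketch):

```python
from numpy.testing import assert_allclose

# |1.0001 - 1.0| = 1e-4 <= 0 + 1e-3 * |1.0|  -> passes
assert_allclose(actual=1.0001, desired=1.0, rtol=1e-3)

# With rtol=1e-5 the allowed band shrinks below the difference -> raises
try:
    assert_allclose(1.0001, 1.0, rtol=1e-5)
except AssertionError:
    print("outside tolerance")
```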
See also [`assert_array_almost_equal_nulp`](numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp"), [`assert_array_max_ulp`](numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp") #### Notes When one of `actual` and `desired` is a scalar and the other is array_like, the function performs the comparison as if the scalar were broadcasted to the shape of the array. This behaviour can be disabled with the `strict` parameter. #### Examples >>> x = [1e-5, 1e-3, 1e-1] >>> y = np.arccos(np.cos(x)) >>> np.testing.assert_allclose(x, y, rtol=1e-5, atol=0) As mentioned in the Notes section, `assert_allclose` has special handling for scalars. Here, the test checks that the value of [`numpy.sin`](numpy.sin#numpy.sin "numpy.sin") is nearly zero at integer multiples of π. >>> x = np.arange(3) * np.pi >>> np.testing.assert_allclose(np.sin(x), 0, atol=1e-15) Use `strict` to raise an `AssertionError` when comparing an array with one or more dimensions against a scalar. >>> np.testing.assert_allclose(np.sin(x), 0, atol=1e-15, strict=True) Traceback (most recent call last): ... AssertionError: Not equal to tolerance rtol=1e-07, atol=1e-15 (shapes (3,), () mismatch) ACTUAL: array([ 0.000000e+00, 1.224647e-16, -2.449294e-16]) DESIRED: array(0) The `strict` parameter also ensures that the array data types match: >>> y = np.zeros(3, dtype=np.float32) >>> np.testing.assert_allclose(np.sin(x), y, atol=1e-15, strict=True) Traceback (most recent call last): ... 
AssertionError: Not equal to tolerance rtol=1e-07, atol=1e-15 (dtypes float64, float32 mismatch) ACTUAL: array([ 0.000000e+00, 1.224647e-16, -2.449294e-16]) DESIRED: array([0., 0., 0.], dtype=float32) # numpy.testing.assert_almost_equal testing.assert_almost_equal(_actual_ , _desired_ , _decimal =7_, _err_msg =''_, _verbose =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L467-L590) Raises an AssertionError if two items are not equal up to desired precision. Note It is recommended to use one of [`assert_allclose`](numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose"), [`assert_array_almost_equal_nulp`](numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp") or [`assert_array_max_ulp`](numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp") instead of this function for more consistent floating point comparisons. The test verifies that the elements of `actual` and `desired` satisfy: abs(desired-actual) < float64(1.5 * 10**(-decimal)) That is a looser test than originally documented, but agrees with what the actual implementation in [`assert_array_almost_equal`](numpy.testing.assert_array_almost_equal#numpy.testing.assert_array_almost_equal "numpy.testing.assert_array_almost_equal") did up to rounding vagaries. An exception is raised at conflicting values. For ndarrays this delegates to assert_array_almost_equal Parameters: **actual** array_like The object to check. **desired** array_like The expected object. **decimal** int, optional Desired precision, default is 7. **err_msg** str, optional The error message to be printed in case of failure. **verbose** bool, optional If True, the conflicting values are appended to the error message. Raises: AssertionError If actual and desired are not equal up to specified precision. 
See also [`assert_allclose`](numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose") Compare two array_like objects for equality with desired relative and/or absolute precision. [`assert_array_almost_equal_nulp`](numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp"), [`assert_array_max_ulp`](numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp"), [`assert_equal`](numpy.testing.assert_equal#numpy.testing.assert_equal "numpy.testing.assert_equal") #### Examples >>> from numpy.testing import assert_almost_equal >>> assert_almost_equal(2.3333333333333, 2.33333334) >>> assert_almost_equal(2.3333333333333, 2.33333334, decimal=10) Traceback (most recent call last): ... AssertionError: Arrays are not almost equal to 10 decimals ACTUAL: 2.3333333333333 DESIRED: 2.33333334 >>> assert_almost_equal(np.array([1.0,2.3333333333333]), ... np.array([1.0,2.33333334]), decimal=9) Traceback (most recent call last): ... AssertionError: Arrays are not almost equal to 9 decimals Mismatched elements: 1 / 2 (50%) Max absolute difference among violations: 6.66669964e-09 Max relative difference among violations: 2.85715698e-09 ACTUAL: array([1. , 2.333333333]) DESIRED: array([1. , 2.33333334]) # numpy.testing.assert_approx_equal testing.assert_approx_equal(_actual_ , _desired_ , _significant =7_, _err_msg =''_, _verbose =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L593-L690) Raises an AssertionError if two items are not equal up to significant digits. 
Note It is recommended to use one of [`assert_allclose`](numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose"), [`assert_array_almost_equal_nulp`](numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp") or [`assert_array_max_ulp`](numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp") instead of this function for more consistent floating point comparisons. Given two numbers, check that they are approximately equal. Approximately equal is defined as the number of significant digits that agree. Parameters: **actual** scalar The object to check. **desired** scalar The expected object. **significant** int, optional Desired precision, default is 7. **err_msg** str, optional The error message to be printed in case of failure. **verbose** bool, optional If True, the conflicting values are appended to the error message. Raises: AssertionError If actual and desired are not equal up to specified precision. See also [`assert_allclose`](numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose") Compare two array_like objects for equality with desired relative and/or absolute precision. [`assert_array_almost_equal_nulp`](numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp"), [`assert_array_max_ulp`](numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp"), [`assert_equal`](numpy.testing.assert_equal#numpy.testing.assert_equal "numpy.testing.assert_equal") #### Examples >>> np.testing.assert_approx_equal(0.12345677777777e-20, 0.1234567e-20) >>> np.testing.assert_approx_equal(0.12345670e-20, 0.12345671e-20, ... significant=8) >>> np.testing.assert_approx_equal(0.12345670e-20, 0.12345672e-20, ... 
significant=8) Traceback (most recent call last): ... AssertionError: Items are not equal to 8 significant digits: ACTUAL: 1.234567e-21 DESIRED: 1.2345672e-21 the evaluated condition that raises the exception is >>> abs(0.12345670e-20/1e-21 - 0.12345672e-20/1e-21) >= 10**-(8-1) True # numpy.testing.assert_array_almost_equal testing.assert_array_almost_equal(_actual_ , _desired_ , _decimal =6_, _err_msg =''_, _verbose =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L1025-L1138) Raises an AssertionError if two objects are not equal up to desired precision. Note It is recommended to use one of [`assert_allclose`](numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose"), [`assert_array_almost_equal_nulp`](numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp") or [`assert_array_max_ulp`](numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp") instead of this function for more consistent floating point comparisons. The test verifies identical shapes and that the elements of `actual` and `desired` satisfy: abs(desired-actual) < 1.5 * 10**(-decimal) That is a looser test than originally documented, but agrees with what the actual implementation did up to rounding vagaries. An exception is raised at shape mismatch or conflicting values. In contrast to the standard usage in numpy, NaNs are compared like numbers, no assertion is raised if both objects have NaNs in the same positions. Parameters: **actual** array_like The actual object to check. **desired** array_like The desired, expected object. **decimal** int, optional Desired precision, default is 6. **err_msg** str, optional The error message to be printed in case of failure. **verbose** bool, optional If True, the conflicting values are appended to the error message. 
Raises: AssertionError If actual and desired are not equal up to specified precision. See also [`assert_allclose`](numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose") Compare two array_like objects for equality with desired relative and/or absolute precision. [`assert_array_almost_equal_nulp`](numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp"), [`assert_array_max_ulp`](numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp"), [`assert_equal`](numpy.testing.assert_equal#numpy.testing.assert_equal "numpy.testing.assert_equal") #### Examples the first assert does not raise an exception >>> np.testing.assert_array_almost_equal([1.0,2.333,np.nan], ... [1.0,2.333,np.nan]) >>> np.testing.assert_array_almost_equal([1.0,2.33333,np.nan], ... [1.0,2.33339,np.nan], decimal=5) Traceback (most recent call last): ... AssertionError: Arrays are not almost equal to 5 decimals Mismatched elements: 1 / 3 (33.3%) Max absolute difference among violations: 6.e-05 Max relative difference among violations: 2.57136612e-05 ACTUAL: array([1. , 2.33333, nan]) DESIRED: array([1. , 2.33339, nan]) >>> np.testing.assert_array_almost_equal([1.0,2.33333,np.nan], ... [1.0,2.33333, 5], decimal=5) Traceback (most recent call last): ... AssertionError: Arrays are not almost equal to 5 decimals nan location mismatch: ACTUAL: array([1. , 2.33333, nan]) DESIRED: array([1. , 2.33333, 5. ]) # numpy.testing.assert_array_almost_equal_nulp testing.assert_array_almost_equal_nulp(_x_ , _y_ , _nulp =1_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L1683-L1743) Compare two arrays relatively to their spacing. This is a relatively robust method to compare two arrays whose amplitude is variable. Parameters: **x, y** array_like Input arrays. 
**nulp** int, optional The maximum number of unit in the last place for tolerance (see Notes). Default is 1. Returns: None Raises: AssertionError If the spacing between `x` and `y` for one or more elements is larger than `nulp`. See also [`assert_array_max_ulp`](numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp") Check that all items of arrays differ in at most N Units in the Last Place. [`spacing`](numpy.spacing#numpy.spacing "numpy.spacing") Return the distance between x and the nearest adjacent number. #### Notes An assertion is raised if the following condition is not met: abs(x - y) <= nulp * spacing(maximum(abs(x), abs(y))) #### Examples >>> x = np.array([1., 1e-10, 1e-20]) >>> eps = np.finfo(x.dtype).eps >>> np.testing.assert_array_almost_equal_nulp(x, x*eps/2 + x) >>> np.testing.assert_array_almost_equal_nulp(x, x*eps + x) Traceback (most recent call last): ... AssertionError: Arrays are not equal to 1 ULP (max is 2) # numpy.testing.assert_array_equal testing.assert_array_equal(_actual_ , _desired_ , _err_msg =''_, _verbose =True_, _*_ , _strict =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L895-L1022) Raises an AssertionError if two array_like objects are not equal. Given two array_like objects, check that the shape is equal and all elements of these objects are equal (but see the Notes for the special handling of a scalar). An exception is raised at shape mismatch or conflicting values. In contrast to the standard usage in numpy, NaNs are compared like numbers, no assertion is raised if both objects have NaNs in the same positions. The usual caution for verifying equality with floating point numbers is advised. 
Note When either `actual` or `desired` is already an instance of [`numpy.ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") and `desired` is not a `dict`, the behavior of `assert_equal(actual, desired)` is identical to the behavior of this function. Otherwise, this function performs `np.asanyarray` on the inputs before comparison, whereas [`assert_equal`](numpy.testing.assert_equal#numpy.testing.assert_equal "numpy.testing.assert_equal") defines special comparison rules for common Python types. For example, only [`assert_equal`](numpy.testing.assert_equal#numpy.testing.assert_equal "numpy.testing.assert_equal") can be used to compare nested Python lists. In new code, consider using only [`assert_equal`](numpy.testing.assert_equal#numpy.testing.assert_equal "numpy.testing.assert_equal"), explicitly converting either `actual` or `desired` to arrays if the behavior of `assert_array_equal` is desired. Parameters: **actual** array_like The actual object to check. **desired** array_like The desired, expected object. **err_msg** str, optional The error message to be printed in case of failure. **verbose** bool, optional If True, the conflicting values are appended to the error message. **strict** bool, optional If True, raise an AssertionError when either the shape or the data type of the array_like objects does not match. The special handling for scalars mentioned in the Notes section is disabled. New in version 1.24.0. Raises: AssertionError If actual and desired objects are not equal. See also [`assert_allclose`](numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose") Compare two array_like objects for equality with desired relative and/or absolute precision. 
[`assert_array_almost_equal_nulp`](numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp"), [`assert_array_max_ulp`](numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp"), [`assert_equal`](numpy.testing.assert_equal#numpy.testing.assert_equal "numpy.testing.assert_equal") #### Notes When one of `actual` and `desired` is a scalar and the other is array_like, the function checks that each element of the array_like object is equal to the scalar. This behaviour can be disabled with the `strict` parameter. #### Examples The first assert does not raise an exception: >>> np.testing.assert_array_equal([1.0,2.33333,np.nan], ... [np.exp(0),2.33333, np.nan]) Assert fails with numerical imprecision with floats: >>> np.testing.assert_array_equal([1.0,np.pi,np.nan], ... [1, np.sqrt(np.pi)**2, np.nan]) Traceback (most recent call last): ... AssertionError: Arrays are not equal Mismatched elements: 1 / 3 (33.3%) Max absolute difference among violations: 4.4408921e-16 Max relative difference among violations: 1.41357986e-16 ACTUAL: array([1. , 3.141593, nan]) DESIRED: array([1. , 3.141593, nan]) Use [`assert_allclose`](numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose") or one of the nulp (number of floating point values) functions for these cases instead: >>> np.testing.assert_allclose([1.0,np.pi,np.nan], ... [1, np.sqrt(np.pi)**2, np.nan], ... rtol=1e-10, atol=0) As mentioned in the Notes section, `assert_array_equal` has special handling for scalars. Here the test checks that each value in `x` is 3: >>> x = np.full((2, 5), fill_value=3) >>> np.testing.assert_array_equal(x, 3) Use `strict` to raise an AssertionError when comparing a scalar with an array: >>> np.testing.assert_array_equal(x, 3, strict=True) Traceback (most recent call last): ... 
AssertionError: Arrays are not equal (shapes (2, 5), () mismatch) ACTUAL: array([[3, 3, 3, 3, 3], [3, 3, 3, 3, 3]]) DESIRED: array(3) The `strict` parameter also ensures that the array data types match: >>> x = np.array([2, 2, 2]) >>> y = np.array([2., 2., 2.], dtype=np.float32) >>> np.testing.assert_array_equal(x, y, strict=True) Traceback (most recent call last): ... AssertionError: Arrays are not equal (dtypes int64, float32 mismatch) ACTUAL: array([2, 2, 2]) DESIRED: array([2., 2., 2.], dtype=float32) # numpy.testing.assert_array_less testing.assert_array_less(_x_ , _y_ , _err_msg =''_, _verbose =True_, _*_ , _strict =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L1141-L1253) Raises an AssertionError if two array_like objects are not ordered by less than. Given two array_like objects `x` and `y`, check that the shape is equal and all elements of `x` are strictly less than the corresponding elements of `y` (but see the Notes for the special handling of a scalar). An exception is raised at shape mismatch or values that are not correctly ordered. In contrast to the standard usage in NumPy, no assertion is raised if both objects have NaNs in the same positions. Parameters: **x** array_like The smaller object to check. **y** array_like The larger object to compare. **err_msg** string The error message to be printed in case of failure. **verbose** bool If True, the conflicting values are appended to the error message. **strict** bool, optional If True, raise an AssertionError when either the shape or the data type of the array_like objects does not match. The special handling for scalars mentioned in the Notes section is disabled. New in version 2.0.0. Raises: AssertionError If x is not strictly smaller than y, element-wise. 
See also [`assert_array_equal`](numpy.testing.assert_array_equal#numpy.testing.assert_array_equal "numpy.testing.assert_array_equal") tests objects for equality [`assert_array_almost_equal`](numpy.testing.assert_array_almost_equal#numpy.testing.assert_array_almost_equal "numpy.testing.assert_array_almost_equal") test objects for equality up to precision #### Notes When one of `x` and `y` is a scalar and the other is array_like, the function performs the comparison as though the scalar were broadcasted to the shape of the array. This behaviour can be disabled with the `strict` parameter. #### Examples The following assertion passes because each finite element of `x` is strictly less than the corresponding element of `y`, and the NaNs are in corresponding locations. >>> x = [1.0, 1.0, np.nan] >>> y = [1.1, 2.0, np.nan] >>> np.testing.assert_array_less(x, y) The following assertion fails because the zeroth element of `x` is no longer strictly less than the zeroth element of `y`. >>> y[0] = 1 >>> np.testing.assert_array_less(x, y) Traceback (most recent call last): ... AssertionError: Arrays are not strictly ordered `x < y` Mismatched elements: 1 / 3 (33.3%) Max absolute difference among violations: 0. Max relative difference among violations: 0. x: array([ 1., 1., nan]) y: array([ 1., 2., nan]) Here, `y` is a scalar, so each element of `x` is compared to `y`, and the assertion passes. >>> x = [1.0, 4.0] >>> y = 5.0 >>> np.testing.assert_array_less(x, y) However, with `strict=True`, the assertion will fail because the shapes do not match. >>> np.testing.assert_array_less(x, y, strict=True) Traceback (most recent call last): ... AssertionError: Arrays are not strictly ordered `x < y` (shapes (2,), () mismatch) x: array([1., 4.]) y: array(5.) With `strict=True`, the assertion also fails if the dtypes of the two arrays do not match. >>> y = [5, 5] >>> np.testing.assert_array_less(x, y, strict=True) Traceback (most recent call last): ... 
AssertionError: Arrays are not strictly ordered `x < y` (dtypes float64, int64 mismatch) x: array([1., 4.]) y: array([5, 5]) # numpy.testing.assert_array_max_ulp testing.assert_array_max_ulp(_a_ , _b_ , _maxulp =1_, _dtype =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L1746-L1795) Check that all items of arrays differ in at most N Units in the Last Place. Parameters: **a, b** array_like Input arrays to be compared. **maxulp** int, optional The maximum number of units in the last place that elements of `a` and `b` can differ. Default is 1. **dtype** dtype, optional Data-type to convert `a` and `b` to if given. Default is None. Returns: **ret** ndarray Array containing number of representable floating point numbers between items in `a` and `b`. Raises: AssertionError If one or more elements differ by more than `maxulp`. See also [`assert_array_almost_equal_nulp`](numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp") Compare two arrays relatively to their spacing. #### Notes For computing the ULP difference, this API does not differentiate between various representations of NAN (ULP difference between 0x7fc00000 and 0xffc00000 is zero). #### Examples >>> a = np.linspace(0., 1., 100) >>> res = np.testing.assert_array_max_ulp(a, np.arcsin(np.sin(a))) # numpy.testing.assert_equal testing.assert_equal(_actual_ , _desired_ , _err_msg =''_, _verbose =True_, _*_ , _strict =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L216-L423) Raises an AssertionError if two objects are not equal. Given two objects (scalars, lists, tuples, dictionaries or numpy arrays), check that all elements of these objects are equal. An exception is raised at the first conflicting values. This function handles NaN comparisons as if NaN was a “normal” number. 
That is, AssertionError is not raised if both objects have NaNs in the same positions. This is in contrast to the IEEE standard on NaNs, which says that NaN compared to anything must return False. Parameters: **actual** array_like The object to check. **desired** array_like The expected object. **err_msg** str, optional The error message to be printed in case of failure. **verbose** bool, optional If True, the conflicting values are appended to the error message. **strict** bool, optional If True and either of the `actual` and `desired` arguments is an array, raise an `AssertionError` when either the shape or the data type of the arguments does not match. If neither argument is an array, this parameter has no effect. New in version 2.0.0. Raises: AssertionError If actual and desired are not equal. See also [`assert_allclose`](numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose") [`assert_array_almost_equal_nulp`](numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp") [`assert_array_max_ulp`](numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp") #### Notes By default, when one of `actual` and `desired` is a scalar and the other is an array, the function checks that each element of the array is equal to the scalar. This behaviour can be disabled by setting `strict==True`. #### Examples >>> np.testing.assert_equal([4, 5], [4, 6]) Traceback (most recent call last): ... AssertionError: Items are not equal: item=1 ACTUAL: 5 DESIRED: 6 The following comparison does not raise an exception. There are NaNs in the inputs, but they are in the same positions. >>> np.testing.assert_equal(np.array([1.0, 2.0, np.nan]), [1, 2, np.nan]) As mentioned in the Notes section, `assert_equal` has special handling for scalars when one of the arguments is an array. 
Here, the test checks that each value in `x` is 3: >>> x = np.full((2, 5), fill_value=3) >>> np.testing.assert_equal(x, 3) Use `strict` to raise an AssertionError when comparing a scalar with an array of a different shape: >>> np.testing.assert_equal(x, 3, strict=True) Traceback (most recent call last): ... AssertionError: Arrays are not equal (shapes (2, 5), () mismatch) ACTUAL: array([[3, 3, 3, 3, 3], [3, 3, 3, 3, 3]]) DESIRED: array(3) The `strict` parameter also ensures that the array data types match: >>> x = np.array([2, 2, 2]) >>> y = np.array([2., 2., 2.], dtype=np.float32) >>> np.testing.assert_equal(x, y, strict=True) Traceback (most recent call last): ... AssertionError: Arrays are not equal (dtypes int64, float32 mismatch) ACTUAL: array([2, 2, 2]) DESIRED: array([2., 2., 2.], dtype=float32) # numpy.testing.assert_no_gc_cycles testing.assert_no_gc_cycles(_* args_, _** kwargs_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L2512-L2542) Fail if the given callable produces any reference cycles. If called with all arguments omitted, may be used as a context manager: with assert_no_gc_cycles(): do_something() Parameters: **func** callable The callable to test. ***args** Arguments Arguments passed to `func`. ****kwargs** Kwargs Keyword arguments passed to `func`. Returns: Nothing. The result is deliberately discarded to ensure that all cycles are found. # numpy.testing.assert_no_warnings testing.assert_no_warnings(_* args_, _** kwargs_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L1971-L2002) Fail if the given callable produces any warnings. If called with all arguments omitted, may be used as a context manager: with assert_no_warnings(): do_something() The ability to be used as a context manager is new in NumPy v1.11.0. Parameters: **func** callable The callable to test. ***args** Arguments Arguments passed to `func`. ****kwargs** Kwargs Keyword arguments passed to `func`. 
Returns: The value returned by `func`. # numpy.testing.assert_raises testing.assert_raises(_exception_class_ , _callable_ , _*args_ , _**kwargs_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L1410-L1436) testing.assert_raises(_exception_class_) → [None](https://docs.python.org/3/library/constants.html#None "\(in Python v3.13\)") Fail unless an exception of class exception_class is thrown by callable when invoked with arguments args and keyword arguments kwargs. If a different type of exception is thrown, it will not be caught, and the test case will be deemed to have suffered an error, exactly as for an unexpected exception. Alternatively, `assert_raises` can be used as a context manager: >>> from numpy.testing import assert_raises >>> with assert_raises(ZeroDivisionError): ... 1 / 0 is equivalent to >>> def div(x, y): ... return x / y >>> assert_raises(ZeroDivisionError, div, 1, 0) # numpy.testing.assert_raises_regex testing.assert_raises_regex(_exception_class_ , _expected_regexp_ , _callable_ , _*args_ , _**kwargs_) assert_raises_regex(_exception_class_ , _expected_regexp_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L1439-L1452) Fail unless an exception of class exception_class and with a message that matches expected_regexp is thrown by callable when invoked with arguments args and keyword arguments kwargs. Alternatively, can be used as a context manager like [`assert_raises`](numpy.testing.assert_raises#numpy.testing.assert_raises "numpy.testing.assert_raises"). # numpy.testing.assert_string_equal testing.assert_string_equal(_actual_ , _desired_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L1260-L1328) Test if two strings are equal. If the given strings are equal, `assert_string_equal` does nothing. If they are not equal, an AssertionError is raised, and the diff between the strings is shown.
Parameters: **actual** str The string to test for equality against the expected string. **desired** str The expected string. #### Examples >>> np.testing.assert_string_equal('abc', 'abc') >>> np.testing.assert_string_equal('abc', 'abcd') Traceback (most recent call last): File "", line 1, in ... AssertionError: Differences in strings: - abc + abcd ? + # numpy.testing.assert_warns testing.assert_warns(_warning_class_ , _* args_, _** kwargs_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L1901-L1957) Fail unless the given callable throws the specified warning. A warning of class warning_class should be thrown by the callable when invoked with arguments args and keyword arguments kwargs. If a different type of warning is thrown, it will not be caught. If called with all arguments other than the warning class omitted, may be used as a context manager: with assert_warns(SomeWarning): do_something() The ability to be used as a context manager is new in NumPy v1.11.0. Parameters: **warning_class** class The class defining the warning that `func` is expected to throw. **func** callable, optional Callable to test. ***args** Arguments Arguments for `func`. ****kwargs** Kwargs Keyword arguments for `func`. Returns: The value returned by `func`. #### Examples >>> import warnings >>> def deprecated_func(num): ... warnings.warn("Please upgrade", DeprecationWarning) ... return num*num >>> with np.testing.assert_warns(DeprecationWarning): ...
assert deprecated_func(4) == 16 >>> # or passing a func >>> ret = np.testing.assert_warns(DeprecationWarning, deprecated_func, 4) >>> assert ret == 16 # numpy.testing.clear_and_catch_warnings _class_ numpy.testing.clear_and_catch_warnings(_record =False_, _modules =()_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L2116-L2179) Context manager that resets warning registry for catching warnings Warnings can be slippery, because, whenever a warning is triggered, Python adds a `__warningregistry__` member to the _calling_ module. This makes it impossible to retrigger the warning in this module, whatever you put in the warnings filters. This context manager accepts a sequence of `modules` as a keyword argument to its constructor and: * stores and removes any `__warningregistry__` entries in given `modules` on entry; * resets `__warningregistry__` to its previous state on exit. This makes it possible to trigger any warning afresh inside the context manager without disturbing the state of warnings outside. For compatibility with Python 3.0, please consider all arguments to be keyword-only. Parameters: **record** bool, optional Specifies whether warnings should be captured by a custom implementation of `warnings.showwarning()` and be appended to a list returned by the context manager. Otherwise None is returned by the context manager. The objects appended to the list are arguments whose attributes mirror the arguments to `showwarning()`. **modules** sequence, optional Sequence of modules for which to reset warnings registry on entry and restore on exit. To work correctly, all ‘ignore’ filters should filter by one of these modules. #### Examples >>> import warnings >>> with np.testing.clear_and_catch_warnings( ... modules=[np._core.fromnumeric]): ... warnings.simplefilter('always') ... warnings.filterwarnings('ignore', module='np._core.fromnumeric') ... # do something that raises a warning but ignore those in ... 
# np._core.fromnumeric # numpy.testing.decorate_methods testing.decorate_methods(_cls_ , _decorator_ , _testmatch =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L1455-L1499) Apply a decorator to all methods in a class matching a regular expression. The given decorator is applied to all public methods of `cls` that are matched by the regular expression `testmatch` (`testmatch.search(methodname)`). Methods that are private, i.e. start with an underscore, are ignored. Parameters: **cls** class Class whose methods to decorate. **decorator** function Decorator to apply to methods **testmatch** compiled regexp or str, optional The regular expression. Default value is None, in which case the nose default (`re.compile(r'(?:^|[\b_\.%s-])[Tt]est' % os.sep)`) is used. If `testmatch` is a string, it is compiled to a regular expression first. # numpy.testing.measure testing.measure(_code_str_ , _times =1_, _label =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L1502-L1545) Return elapsed time for executing code in the namespace of the caller. The supplied code string is compiled with the Python builtin `compile`. The precision of the timing is 10 milli-seconds. If the code will execute fast on this timescale, it can be executed many times to get reasonable timing accuracy. Parameters: **code_str** str The code to be timed. **times** int, optional The number of times the code is executed. Default is 1. The code is only compiled once. **label** str, optional A label to identify `code_str` with. This is passed into `compile` as the second argument (for run-time error messages). Returns: **elapsed** float Total elapsed time in seconds for executing `code_str` `times` times. 
#### Examples >>> times = 10 >>> etime = np.testing.measure('for i in range(1000): np.sqrt(i**2)', times=times) >>> print("Time for a single execution : ", etime / times, "s") Time for a single execution : 0.005 s # numpy.testing.overrides.allows_array_function_override testing.overrides.allows_array_function_override(_func_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/overrides.py#L69-L83) Determine if a Numpy function can be overridden via `__array_function__` Parameters: **func** callable Function that may be overridable via `__array_function__` Returns: bool `True` if `func` is a function in the Numpy API that is overridable via `__array_function__` and `False` otherwise. # numpy.testing.overrides.allows_array_ufunc_override testing.overrides.allows_array_ufunc_override(_func_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/overrides.py#L27-L47) Determine if a function can be overridden via `__array_ufunc__` Parameters: **func** callable Function that may be overridable via `__array_ufunc__` Returns: bool `True` if `func` is overridable via `__array_ufunc__` and `False` otherwise. #### Notes This function is equivalent to `isinstance(func, np.ufunc)` and will work correctly for ufuncs defined outside of Numpy. # numpy.testing.overrides.get_overridable_numpy_array_functions testing.overrides.get_overridable_numpy_array_functions()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/overrides.py#L50-L67) List all numpy functions overridable via `__array_function__` Parameters: **None** Returns: set A set containing all functions in the public numpy API that are overridable via `__array_function__`. 
# numpy.testing.overrides.get_overridable_numpy_ufuncs testing.overrides.get_overridable_numpy_ufuncs()[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/overrides.py#L10-L24) List all numpy ufuncs overridable via `__array_ufunc__` Parameters: **None** Returns: set A set containing all overridable ufuncs in the public numpy API. # numpy.testing.print_assert_equal testing.print_assert_equal(_test_string_ , _actual_ , _desired_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L426-L464) Test if two objects are equal, and print an error message if test fails. The test is performed with `actual == desired`. Parameters: **test_string** str The message supplied to AssertionError. **actual** object The object to test for equality against `desired`. **desired** object The expected result. #### Examples >>> np.testing.print_assert_equal('Test XYZ of func xyz', [0, 1], [0, 1]) >>> np.testing.print_assert_equal('Test XYZ of func xyz', [0, 1], [0, 2]) Traceback (most recent call last): ... AssertionError: Test XYZ of func xyz failed ACTUAL: [0, 1] DESIRED: [0, 2] # numpy.testing.rundocs testing.rundocs(_filename =None_, _raise_on_error =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L1331-L1374) Run doctests found in the given file. By default `rundocs` raises an AssertionError on failure. Parameters: **filename** str The path to the file for which the doctests are run. **raise_on_error** bool Whether to raise an AssertionError when a doctest fails. Default is True. #### Notes The doctests can be run by the user/developer by adding the `doctests` argument to the `test()` call. 
For example, to run all tests (including doctests) for `numpy.lib`: >>> np.lib.test(doctests=True) # numpy.testing.suppress_warnings.__call__ method testing.suppress_warnings.__call__(_func_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L2447-L2457) Function decorator to apply certain suppressions to a whole function. # numpy.testing.suppress_warnings.filter method testing.suppress_warnings.filter(_category=Warning_, _message=''_ , _module=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L2303-L2324) Add a new suppressing filter or apply it if the state is entered. Parameters: **category** class, optional Warning class to filter **message** string, optional Regular expression matching the warning message. **module** module, optional Module to filter for. Note that the module (and its file) must match exactly and cannot be a submodule. This may make it unreliable for external modules. #### Notes When added within a context, filters are only added inside the context and will be forgotten when the context is exited. # numpy.testing.suppress_warnings _class_ numpy.testing.suppress_warnings(_forwarding_rule ='always'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L2182-L2457) Context manager and decorator doing much the same as `warnings.catch_warnings`. However, it also provides a filter mechanism to work around a bug in Python's warnings machinery. This bug causes Python before 3.4 to not reliably show warnings again after they have been ignored once (even within catch_warnings). It means that no “ignore” filter can be used easily, since following tests might need to see the warning. Additionally it allows easier specificity for testing warnings and can be nested. Parameters: **forwarding_rule** str, optional One of “always”, “once”, “module”, or “location”. Analogous to the usual warnings module filter mode, it is useful to reduce noise mostly on the outermost level. 
Unsuppressed and unrecorded warnings will be forwarded based on this rule. Defaults to “always”. “location” is equivalent to the warnings “default”, matching by the exact location the warning originated from. #### Notes Filters added inside the context manager will be discarded again when leaving it. Upon entering, all filters defined outside a context will be applied automatically. When a recording filter is added, matching warnings are stored in the `log` attribute as well as in the list returned by `record`. If filters are added and the `module` keyword is given, the warning registry of this module will additionally be cleared when applying it, entering the context, or exiting it. This could cause warnings to appear a second time after leaving the context if they were configured to be printed once (default) and were already printed before the context was entered. Nesting this context manager will work as expected when the forwarding rule is “always” (default). Unfiltered and unrecorded warnings will be passed out and be matched by the outer level. On the outermost level they will be printed (or caught by another warnings context). The forwarding rule argument can modify this behaviour. Like `catch_warnings` this context manager is not threadsafe. #### Examples With a context manager: with np.testing.suppress_warnings() as sup: sup.filter(DeprecationWarning, "Some text") sup.filter(module=np.ma.core) log = sup.record(FutureWarning, "Does this occur?") command_giving_warnings() # The FutureWarning was given once, the filtered warnings were # ignored. 
# All other warnings abide outside settings (may be printed/error) assert_(len(log) == 1) assert_(len(sup.log) == 1) # also stored in log attribute Or as a decorator: sup = np.testing.suppress_warnings() sup.filter(module=np.ma.core) # module must match exactly @sup def some_function(): # do something which causes a warning in np.ma.core pass #### Methods [`__call__`](numpy.testing.suppress_warnings.__call__#numpy.testing.suppress_warnings.__call__ "numpy.testing.suppress_warnings.__call__")(func) | Function decorator to apply certain suppressions to a whole function. ---|--- [`filter`](numpy.testing.suppress_warnings.filter#numpy.testing.suppress_warnings.filter "numpy.testing.suppress_warnings.filter")([category, message, module]) | Add a new suppressing filter or apply it if the state is entered. [`record`](numpy.testing.suppress_warnings.record#numpy.testing.suppress_warnings.record "numpy.testing.suppress_warnings.record")([category, message, module]) | Append a new recording filter or apply it if the state is entered. # numpy.testing.suppress_warnings.record method testing.suppress_warnings.record(_category=Warning_, _message=''_ , _module=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/testing/_private/utils.py#L2326-L2354) Append a new recording filter or apply it if the state is entered. All warnings matching will be appended to the `log` attribute. Parameters: **category** class, optional Warning class to filter **message** string, optional Regular expression matching the warning message. **module** module, optional Module to filter for. Note that the module (and its file) must match exactly and cannot be a submodule. This may make it unreliable for external modules. Returns: **log** list A list which will be filled with all matched warnings. #### Notes When added within a context, filters are only added inside the context and will be forgotten when the context is exited. 
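A small runnable illustration of `record`: matching warnings are collected rather than printed, and end up both in the returned list and in the `log` attribute. The warning message used here is arbitrary:

```python
import warnings
import numpy as np

sup = np.testing.suppress_warnings()
with sup:
    # Start recording UserWarnings whose message matches the regex
    log = sup.record(UserWarning, "matching message")
    warnings.warn("matching message", UserWarning)
    warnings.warn("matching message", UserWarning)

# Both matching warnings were captured instead of being shown
print(len(log))      # 2
print(len(sup.log))  # 2
```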
# numpy.tile numpy.tile(_A_ , _reps_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_shape_base_impl.py#L1204-L1294) Construct an array by repeating A the number of times given by reps. If `reps` has length `d`, the result will have dimension of `max(d, A.ndim)`. If `A.ndim < d`, `A` is promoted to be d-dimensional by prepending new axes. So a shape (3,) array is promoted to (1, 3) for 2-D replication, or shape (1, 1, 3) for 3-D replication. If this is not the desired behavior, promote `A` to d-dimensions manually before calling this function. If `A.ndim > d`, `reps` is promoted to `A.ndim` by prepending 1’s to it. Thus for an `A` of shape (2, 3, 4, 5), a `reps` of (2, 2) is treated as (1, 1, 2, 2). Note: Although tile may be used for broadcasting, it is strongly recommended to use numpy’s broadcasting operations and functions. Parameters: **A** array_like The input array. **reps** array_like The number of repetitions of `A` along each axis. Returns: **c** ndarray The tiled output array. See also [`repeat`](numpy.repeat#numpy.repeat "numpy.repeat") Repeat elements of an array. [`broadcast_to`](numpy.broadcast_to#numpy.broadcast_to "numpy.broadcast_to") Broadcast an array to a new shape #### Examples >>> import numpy as np >>> a = np.array([0, 1, 2]) >>> np.tile(a, 2) array([0, 1, 2, 0, 1, 2]) >>> np.tile(a, (2, 2)) array([[0, 1, 2, 0, 1, 2], [0, 1, 2, 0, 1, 2]]) >>> np.tile(a, (2, 1, 2)) array([[[0, 1, 2, 0, 1, 2]], [[0, 1, 2, 0, 1, 2]]]) >>> b = np.array([[1, 2], [3, 4]]) >>> np.tile(b, 2) array([[1, 2, 1, 2], [3, 4, 3, 4]]) >>> np.tile(b, (2, 1)) array([[1, 2], [3, 4], [1, 2], [3, 4]]) >>> c = np.array([1,2,3,4]) >>> np.tile(c,(4,1)) array([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]) # numpy.trace numpy.trace(_a_ , _offset =0_, _axis1 =0_, _axis2 =1_, _dtype =None_, _out =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L1831-L1897) Return the sum along diagonals of the array. 
If `a` is 2-D, the sum along its diagonal with the given offset is returned, i.e., the sum of elements `a[i,i+offset]` for all i. If `a` has more than two dimensions, then the axes specified by axis1 and axis2 are used to determine the 2-D sub-arrays whose traces are returned. The shape of the resulting array is the same as that of `a` with `axis1` and `axis2` removed. Parameters: **a** array_like Input array, from which the diagonals are taken. **offset** int, optional Offset of the diagonal from the main diagonal. Can be both positive and negative. Defaults to 0. **axis1, axis2** int, optional Axes to be used as the first and second axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults are the first two axes of `a`. **dtype** dtype, optional Determines the data-type of the returned array and of the accumulator where the elements are summed. If dtype has the value None and `a` is of integer type of precision less than the default integer precision, then the default integer precision is used. Otherwise, the precision is the same as that of `a`. **out** ndarray, optional Array into which the output is placed. Its type is preserved and it must be of the right shape to hold the output. Returns: **sum_along_diagonals** ndarray If `a` is 2-D, the sum along the diagonal is returned. If `a` has larger dimensions, then an array of sums along diagonals is returned. See also [`diag`](numpy.diag#numpy.diag "numpy.diag"), [`diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal"), [`diagflat`](numpy.diagflat#numpy.diagflat "numpy.diagflat") #### Examples >>> import numpy as np >>> np.trace(np.eye(3)) 3.0 >>> a = np.arange(8).reshape((2,2,2)) >>> np.trace(a) array([6, 8]) >>> a = np.arange(24).reshape((2,2,2,3)) >>> np.trace(a).shape (2, 3) # numpy.transpose numpy.transpose(_a_ , _axes =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L630-L703) Returns an array with axes transposed. 
For a 1-D array, this returns an unchanged view of the original array, as a transposed vector is simply the same vector. To convert a 1-D array into a 2-D column vector, an additional dimension must be added, e.g., `np.atleast_2d(a).T` achieves this, as does `a[:, np.newaxis]`. For a 2-D array, this is the standard matrix transpose. For an n-D array, if axes are given, their order indicates how the axes are permuted (see Examples). If axes are not provided, then `transpose(a).shape == a.shape[::-1]`. Parameters: **a** array_like Input array. **axes** tuple or list of ints, optional If specified, it must be a tuple or list which contains a permutation of [0, 1, …, N-1] where N is the number of axes of `a`. Negative indices can also be used to specify axes. The i-th axis of the returned array will correspond to the axis numbered `axes[i]` of the input. If not specified, defaults to `range(a.ndim)[::-1]`, which reverses the order of the axes. Returns: **p** ndarray `a` with its axes permuted. A view is returned whenever possible. See also [`ndarray.transpose`](numpy.ndarray.transpose#numpy.ndarray.transpose "numpy.ndarray.transpose") Equivalent method. [`moveaxis`](numpy.moveaxis#numpy.moveaxis "numpy.moveaxis") Move axes of an array to new positions. [`argsort`](numpy.argsort#numpy.argsort "numpy.argsort") Return the indices that would sort an array. #### Notes Use `transpose(a, argsort(axes))` to invert the transposition of tensors when using the `axes` keyword argument. 
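The inversion trick from the Notes (`transpose(a, argsort(axes))` undoes `transpose(a, axes)`) can be checked directly; a minimal sketch:

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)
axes = (2, 0, 1)

t = np.transpose(a, axes)
# argsort(axes) yields the permutation that maps each axis back to
# its original position, undoing the first transpose
inv = np.transpose(t, np.argsort(axes))

print(t.shape)                  # (4, 2, 3)
print(np.array_equal(a, inv))   # True
```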
#### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> np.transpose(a) array([[1, 3], [2, 4]]) >>> a = np.array([1, 2, 3, 4]) >>> a array([1, 2, 3, 4]) >>> np.transpose(a) array([1, 2, 3, 4]) >>> a = np.ones((1, 2, 3)) >>> np.transpose(a, (1, 0, 2)).shape (2, 1, 3) >>> a = np.ones((2, 3, 4, 5)) >>> np.transpose(a).shape (5, 4, 3, 2) >>> a = np.arange(3*4*5).reshape((3, 4, 5)) >>> np.transpose(a, (-1, 0, -2)).shape (5, 3, 4) # numpy.trapezoid numpy.trapezoid(_y_ , _x =None_, _dx =1.0_, _axis =-1_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L4959-L5091) Integrate along the given axis using the composite trapezoidal rule. If `x` is provided, the integration happens in sequence along its elements - they are not sorted. Integrate `y` (`x`) along each 1d slice on the given axis, compute \\(\int y(x) dx\\). When `x` is specified, this integrates along the parametric curve, computing \\(\int_t y(t) dt = \int_t y(t) \left.\frac{dx}{dt}\right|_{x=x(t)} dt\\). New in version 2.0.0. Parameters: **y** array_like Input array to integrate. **x** array_like, optional The sample points corresponding to the `y` values. If `x` is None, the sample points are assumed to be evenly spaced `dx` apart. The default is None. **dx** scalar, optional The spacing between sample points when `x` is None. The default is 1. **axis** int, optional The axis along which to integrate. Returns: **trapezoid** float or ndarray Definite integral of `y` = n-dimensional array as approximated along a single axis by the trapezoidal rule. If `y` is a 1-dimensional array, then the result is a float. If `n` is greater than 1, then the result is an `n`-1 dimensional array. 
See also [`sum`](numpy.sum#numpy.sum "numpy.sum"), [`cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") #### Notes Image [2] illustrates the trapezoidal rule: the y-axis locations of points are taken from the `y` array, and by default the x-axis distances between points are 1.0; alternatively they can be provided via the `x` array or the `dx` scalar. The return value equals the combined area under the red lines. #### References [1] Wikipedia page: [2] Illustration image: #### Examples >>> import numpy as np Use the trapezoidal rule on evenly spaced points: >>> np.trapezoid([1, 2, 3]) 4.0 The spacing between sample points can be selected by either the `x` or `dx` arguments: >>> np.trapezoid([1, 2, 3], x=[4, 6, 8]) 8.0 >>> np.trapezoid([1, 2, 3], dx=2) 8.0 Using a decreasing `x` corresponds to integrating in reverse: >>> np.trapezoid([1, 2, 3], x=[8, 6, 4]) -8.0 More generally `x` is used to integrate along a parametric curve. We can estimate the integral \\(\int_0^1 x^2 = 1/3\\) using: >>> x = np.linspace(0, 1, num=50) >>> y = x**2 >>> np.trapezoid(y, x) 0.33340274885464394 Or estimate the area of a circle, noting we repeat the sample which closes the curve: >>> theta = np.linspace(0, 2 * np.pi, num=1000, endpoint=True) >>> np.trapezoid(np.cos(theta), x=np.sin(theta)) 3.141571941375841 `np.trapezoid` can be applied along a specified axis to do multiple computations in one call: >>> a = np.arange(6).reshape(2, 3) >>> a array([[0, 1, 2], [3, 4, 5]]) >>> np.trapezoid(a, axis=0) array([1.5, 2.5, 3.5]) >>> np.trapezoid(a, axis=1) array([2., 8.]) # numpy.tri numpy.tri(_N_ , _M=None_ , _k=0_ , _dtype=float_ , _*_ , _like=None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_twodim_base_impl.py#L376-L431) An array with ones at and below the given diagonal and zeros elsewhere. Parameters: **N** int Number of rows in the array. **M** int, optional Number of columns in the array. By default, `M` is taken equal to `N`. 
**k** int, optional The sub-diagonal at and below which the array is filled. `k` = 0 is the main diagonal, while `k` < 0 is below it, and `k` > 0 is above. The default is 0. **dtype** dtype, optional Data type of the returned array. The default is float. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns: **tri** ndarray of shape (N, M) Array with its lower triangle filled with ones and zero elsewhere; in other words `T[i,j] == 1` for `j <= i + k`, 0 otherwise. #### Examples >>> import numpy as np >>> np.tri(3, 5, 2, dtype=int) array([[1, 1, 1, 0, 0], [1, 1, 1, 1, 0], [1, 1, 1, 1, 1]]) >>> np.tri(3, 5, -1) array([[0., 0., 0., 0., 0.], [1., 0., 0., 0., 0.], [1., 1., 0., 0., 0.]]) # numpy.tril numpy.tril(_m_ , _k =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_twodim_base_impl.py#L441-L494) Lower triangle of an array. Return a copy of an array with elements above the `k`-th diagonal zeroed. For arrays with `ndim` exceeding 2, `tril` will apply to the final two axes. Parameters: **m** array_like, shape (…, M, N) Input array. **k** int, optional Diagonal above which to zero elements. `k = 0` (the default) is the main diagonal, `k < 0` is below it and `k > 0` is above. Returns: **tril** ndarray, shape (…, M, N) Lower triangle of `m`, of same shape and data-type as `m`. 
See also [`triu`](numpy.triu#numpy.triu "numpy.triu") same thing, only for the upper triangle #### Examples >>> import numpy as np >>> np.tril([[1,2,3],[4,5,6],[7,8,9],[10,11,12]], -1) array([[ 0, 0, 0], [ 4, 0, 0], [ 7, 8, 0], [10, 11, 12]]) >>> np.tril(np.arange(3*4*5).reshape(3, 4, 5)) array([[[ 0, 0, 0, 0, 0], [ 5, 6, 0, 0, 0], [10, 11, 12, 0, 0], [15, 16, 17, 18, 0]], [[20, 0, 0, 0, 0], [25, 26, 0, 0, 0], [30, 31, 32, 0, 0], [35, 36, 37, 38, 0]], [[40, 0, 0, 0, 0], [45, 46, 0, 0, 0], [50, 51, 52, 0, 0], [55, 56, 57, 58, 0]]]) # numpy.tril_indices numpy.tril_indices(_n_ , _k =0_, _m =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_twodim_base_impl.py#L894-L976) Return the indices for the lower-triangle of an (n, m) array. Parameters: **n** int The row dimension of the arrays for which the returned indices will be valid. **k** int, optional Diagonal offset (see [`tril`](numpy.tril#numpy.tril "numpy.tril") for details). **m** int, optional The column dimension of the arrays for which the returned arrays will be valid. By default `m` is taken equal to `n`. Returns: **inds** tuple of arrays The row and column indices, respectively. The row indices are sorted in non-decreasing order, and the corresponding column indices are strictly increasing for each row. See also [`triu_indices`](numpy.triu_indices#numpy.triu_indices "numpy.triu_indices") similar function, for upper-triangular. [`mask_indices`](numpy.mask_indices#numpy.mask_indices "numpy.mask_indices") generic function accepting an arbitrary mask function. 
[`tril`](numpy.tril#numpy.tril "numpy.tril"), [`triu`](numpy.triu#numpy.triu "numpy.triu") #### Examples >>> import numpy as np Compute two different sets of indices to access 4x4 arrays, one for the lower triangular part starting at the main diagonal, and one starting two diagonals further right: >>> il1 = np.tril_indices(4) >>> il1 (array([0, 1, 1, 2, 2, 2, 3, 3, 3, 3]), array([0, 0, 1, 0, 1, 2, 0, 1, 2, 3])) Note that row indices (first array) are non-decreasing, and the corresponding column indices (second array) are strictly increasing for each row. Here is how they can be used with a sample array: >>> a = np.arange(16).reshape(4, 4) >>> a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11], [12, 13, 14, 15]]) Both for indexing: >>> a[il1] array([ 0, 4, 5, ..., 13, 14, 15]) And for assigning values: >>> a[il1] = -1 >>> a array([[-1, 1, 2, 3], [-1, -1, 6, 7], [-1, -1, -1, 11], [-1, -1, -1, -1]]) These cover almost the whole array (two diagonals right of the main one): >>> il2 = np.tril_indices(4, 2) >>> a[il2] = -10 >>> a array([[-10, -10, -10, 3], [-10, -10, -10, -10], [-10, -10, -10, -10], [-10, -10, -10, -10]]) # numpy.tril_indices_from numpy.tril_indices_from(_arr_ , _k =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_twodim_base_impl.py#L983-L1038) Return the indices for the lower-triangle of arr. See [`tril_indices`](numpy.tril_indices#numpy.tril_indices "numpy.tril_indices") for full details. Parameters: **arr** array_like The indices will be valid for square arrays whose dimensions are the same as arr. **k** int, optional Diagonal offset (see [`tril`](numpy.tril#numpy.tril "numpy.tril") for details). 
See also [`tril_indices`](numpy.tril_indices#numpy.tril_indices "numpy.tril_indices"), [`tril`](numpy.tril#numpy.tril "numpy.tril"), [`triu_indices_from`](numpy.triu_indices_from#numpy.triu_indices_from "numpy.triu_indices_from") #### Examples >>> import numpy as np Create a 4 by 4 array >>> a = np.arange(16).reshape(4, 4) >>> a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11], [12, 13, 14, 15]]) Pass the array to get the indices of the lower triangular elements. >>> trili = np.tril_indices_from(a) >>> trili (array([0, 1, 1, 2, 2, 2, 3, 3, 3, 3]), array([0, 0, 1, 0, 1, 2, 0, 1, 2, 3])) >>> a[trili] array([ 0, 4, 5, 8, 9, 10, 12, 13, 14, 15]) This is syntactic sugar for tril_indices(). >>> np.tril_indices(a.shape[0]) (array([0, 1, 1, 2, 2, 2, 3, 3, 3, 3]), array([0, 0, 1, 0, 1, 2, 0, 1, 2, 3])) Use the `k` parameter to return the indices for the lower triangular array up to the k-th diagonal. >>> trili1 = np.tril_indices_from(a, k=1) >>> a[trili1] array([ 0, 1, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 15]) # numpy.trim_zeros numpy.trim_zeros(_filt_ , _trim ='fb'_, _axis =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L1889-L1982) Remove values along a dimension which are zero along all others. Parameters: **filt** array_like Input array. **trim**{“fb”, “f”, “b”}, optional A string with ‘f’ representing trim from front and ‘b’ to trim from back. By default, zeros are trimmed on both sides. Front and back refer to the edges of a dimension, with “front” referring to the side with the lowest index 0, and “back” referring to the highest index (or index -1). **axis** int or sequence, optional If None, `filt` is cropped such that the smallest bounding box is returned that still contains all values which are not zero. If an axis is specified, `filt` will be sliced in that dimension only on the sides specified by `trim`. The remaining area will be the smallest that still contains all values which are not zero. 
Returns: **trimmed** ndarray or sequence The result of trimming the input. The number of dimensions and the input data type are preserved. #### Notes For all-zero arrays, the first axis is trimmed first. #### Examples >>> import numpy as np >>> a = np.array((0, 0, 0, 1, 2, 3, 0, 2, 1, 0)) >>> np.trim_zeros(a) array([1, 2, 3, 0, 2, 1]) >>> np.trim_zeros(a, trim='b') array([0, 0, 0, ..., 0, 2, 1]) Multiple dimensions are supported. >>> b = np.array([[0, 0, 2, 3, 0, 0], ... [0, 1, 0, 3, 0, 0], ... [0, 0, 0, 0, 0, 0]]) >>> np.trim_zeros(b) array([[0, 2, 3], [1, 0, 3]]) >>> np.trim_zeros(b, axis=-1) array([[0, 2, 3], [1, 0, 3], [0, 0, 0]]) The input data type is preserved, list/tuple in means list/tuple out. >>> np.trim_zeros([0, 1, 2, 0]) [1, 2] # numpy.triu numpy.triu(_m_ , _k =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_twodim_base_impl.py#L497-L539) Upper triangle of an array. Return a copy of an array with the elements below the `k`-th diagonal zeroed. For arrays with `ndim` exceeding 2, `triu` will apply to the final two axes. Please refer to the documentation for [`tril`](numpy.tril#numpy.tril "numpy.tril") for further details. See also [`tril`](numpy.tril#numpy.tril "numpy.tril") lower triangle of an array #### Examples >>> import numpy as np >>> np.triu([[1,2,3],[4,5,6],[7,8,9],[10,11,12]], -1) array([[ 1, 2, 3], [ 4, 5, 6], [ 0, 8, 9], [ 0, 0, 12]]) >>> np.triu(np.arange(3*4*5).reshape(3, 4, 5)) array([[[ 0, 1, 2, 3, 4], [ 0, 6, 7, 8, 9], [ 0, 0, 12, 13, 14], [ 0, 0, 0, 18, 19]], [[20, 21, 22, 23, 24], [ 0, 26, 27, 28, 29], [ 0, 0, 32, 33, 34], [ 0, 0, 0, 38, 39]], [[40, 41, 42, 43, 44], [ 0, 46, 47, 48, 49], [ 0, 0, 52, 53, 54], [ 0, 0, 0, 58, 59]]]) # numpy.triu_indices numpy.triu_indices(_n_ , _k =0_, _m =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_twodim_base_impl.py#L1041-L1125) Return the indices for the upper-triangle of an (n, m) array. 
Parameters: **n** int The size of the arrays for which the returned indices will be valid. **k** int, optional Diagonal offset (see [`triu`](numpy.triu#numpy.triu "numpy.triu") for details). **m** int, optional The column dimension of the arrays for which the returned arrays will be valid. By default `m` is taken equal to `n`. Returns: **inds** tuple, shape(2) of ndarrays, shape(`n`) The row and column indices, respectively. The row indices are sorted in non-decreasing order, and the corresponding column indices are strictly increasing for each row. See also [`tril_indices`](numpy.tril_indices#numpy.tril_indices "numpy.tril_indices") similar function, for lower-triangular. [`mask_indices`](numpy.mask_indices#numpy.mask_indices "numpy.mask_indices") generic function accepting an arbitrary mask function. [`triu`](numpy.triu#numpy.triu "numpy.triu"), [`tril`](numpy.tril#numpy.tril "numpy.tril") #### Examples >>> import numpy as np Compute two different sets of indices to access 4x4 arrays, one for the upper triangular part starting at the main diagonal, and one starting two diagonals further right: >>> iu1 = np.triu_indices(4) >>> iu1 (array([0, 0, 0, 0, 1, 1, 1, 2, 2, 3]), array([0, 1, 2, 3, 1, 2, 3, 2, 3, 3])) Note that row indices (first array) are non-decreasing, and the corresponding column indices (second array) are strictly increasing for each row. 
Here is how they can be used with a sample array: >>> a = np.arange(16).reshape(4, 4) >>> a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11], [12, 13, 14, 15]]) Both for indexing: >>> a[iu1] array([ 0, 1, 2, ..., 10, 11, 15]) And for assigning values: >>> a[iu1] = -1 >>> a array([[-1, -1, -1, -1], [ 4, -1, -1, -1], [ 8, 9, -1, -1], [12, 13, 14, -1]]) These cover only a small part of the whole array (two diagonals right of the main one): >>> iu2 = np.triu_indices(4, 2) >>> a[iu2] = -10 >>> a array([[ -1, -1, -10, -10], [ 4, -1, -1, -10], [ 8, 9, -1, -1], [ 12, 13, 14, -1]]) # numpy.triu_indices_from numpy.triu_indices_from(_arr_ , _k =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_twodim_base_impl.py#L1128-L1188) Return the indices for the upper-triangle of arr. See [`triu_indices`](numpy.triu_indices#numpy.triu_indices "numpy.triu_indices") for full details. Parameters: **arr** ndarray, shape(N, N) The indices will be valid for square arrays. **k** int, optional Diagonal offset (see [`triu`](numpy.triu#numpy.triu "numpy.triu") for details). Returns: **triu_indices_from** tuple, shape(2) of ndarray, shape(N) Indices for the upper-triangle of `arr`. See also [`triu_indices`](numpy.triu_indices#numpy.triu_indices "numpy.triu_indices"), [`triu`](numpy.triu#numpy.triu "numpy.triu"), [`tril_indices_from`](numpy.tril_indices_from#numpy.tril_indices_from "numpy.tril_indices_from") #### Examples >>> import numpy as np Create a 4 by 4 array >>> a = np.arange(16).reshape(4, 4) >>> a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11], [12, 13, 14, 15]]) Pass the array to get the indices of the upper triangular elements. >>> triui = np.triu_indices_from(a) >>> triui (array([0, 0, 0, 0, 1, 1, 1, 2, 2, 3]), array([0, 1, 2, 3, 1, 2, 3, 2, 3, 3])) >>> a[triui] array([ 0, 1, 2, 3, 5, 6, 7, 10, 11, 15]) This is syntactic sugar for triu_indices(). 
>>> np.triu_indices(a.shape[0]) (array([0, 0, 0, 0, 1, 1, 1, 2, 2, 3]), array([0, 1, 2, 3, 1, 2, 3, 2, 3, 3])) Use the `k` parameter to return the indices for the upper triangular array from the k-th diagonal. >>> triuim1 = np.triu_indices_from(a, k=1) >>> a[triuim1] array([ 1, 2, 3, 6, 7, 11]) # numpy.true_divide numpy.true_divide(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) _= <ufunc 'divide'>_ Divide arguments element-wise. Parameters: **x1** array_like Dividend array. **x2** array_like Divisor array. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray or scalar The quotient `x1/x2`, element-wise. This is a scalar if both `x1` and `x2` are scalars. See also [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr") Set whether to raise or warn on overflow, underflow and division by zero. #### Notes Equivalent to `x1` / `x2` in terms of array-broadcasting. The `true_divide(x1, x2)` function is an alias for `divide(x1, x2)`. 
#### Examples >>> import numpy as np >>> np.divide(2.0, 4.0) 0.5 >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = np.arange(3.0) >>> np.divide(x1, x2) array([[nan, 1. , 1. ], [inf, 4. , 2.5], [inf, 7. , 4. ]]) The `/` operator can be used as a shorthand for `np.divide` on ndarrays. >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = 2 * np.ones(3) >>> x1 / x2 array([[0. , 0.5, 1. ], [1.5, 2. , 2.5], [3. , 3.5, 4. ]]) # numpy.trunc numpy.trunc(_x_ , _/_ , _out=None_ , _*_ , _where=True_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_]) _= <ufunc 'trunc'>_ Return the truncated value of the input, element-wise. The truncated value of the scalar `x` is the nearest integer `i` which is closer to zero than `x` is. In short, the fractional part of the signed number `x` is discarded. Parameters: **x** array_like Input data. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray or scalar The truncated value of each element in `x`. This is a scalar if `x` is a scalar. 
See also [`ceil`](numpy.ceil#numpy.ceil "numpy.ceil"), [`floor`](numpy.floor#numpy.floor "numpy.floor"), [`rint`](numpy.rint#numpy.rint "numpy.rint"), [`fix`](numpy.fix#numpy.fix "numpy.fix") #### Examples >>> import numpy as np >>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) >>> np.trunc(a) array([-1., -1., -0., 0., 1., 1., 2.]) # numpy.typename numpy.typename(_char_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_type_check_impl.py#L574-L625) Return a description for the given data type code. Parameters: **char** str Data type code. Returns: **out** str Description of the input data type code. See also [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") #### Examples >>> import numpy as np >>> typechars = ['S1', '?', 'B', 'D', 'G', 'F', 'I', 'H', 'L', 'O', 'Q', ... 'S', 'U', 'V', 'b', 'd', 'g', 'f', 'i', 'h', 'l', 'q'] >>> for typechar in typechars: ... print(typechar, ' : ', np.typename(typechar)) ... S1 : character ? : bool B : unsigned char D : complex double precision G : complex long double precision F : complex single precision I : unsigned integer H : unsigned short L : unsigned long integer O : object Q : unsigned long long integer S : string U : unicode V : void b : signed char d : double precision g : long precision f : single precision i : integer h : short l : long integer q : long long integer # numpy.ufunc.__call__ method ufunc.__call__(_* args_, _** kwargs_) Call self as a function. # numpy.ufunc.accumulate method ufunc.accumulate(_array_ , _axis =0_, _dtype =None_, _out =None_) Accumulate the result of applying the operator to all elements. For a one-dimensional array, accumulate produces results equivalent to: r = np.empty(len(A)) t = op.identity # op = the ufunc being applied to A's elements for i in range(len(A)): t = op(t, A[i]) r[i] = t return r For example, add.accumulate() is equivalent to np.cumsum(). 
For a multi-dimensional array, accumulate is applied along only one axis (axis zero by default; see Examples below) so repeated use is necessary if one wants to accumulate over multiple axes. Parameters: **array** array_like The array to act on. **axis** int, optional The axis along which to apply the accumulation; default is zero. **dtype** data-type code, optional The data-type used to represent the intermediate results. Defaults to the data-type of the output array if such is provided, or the data-type of the input array if no output array is provided. **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If not provided or None, a freshly-allocated array is returned. For consistency with `ufunc.__call__`, if given as a keyword, this may be wrapped in a 1-element tuple. Returns: **r** ndarray The accumulated values. If `out` was supplied, `r` is a reference to `out`. #### Examples 1-D array examples: >>> import numpy as np >>> np.add.accumulate([2, 3, 5]) array([ 2, 5, 10]) >>> np.multiply.accumulate([2, 3, 5]) array([ 2, 6, 30]) 2-D array examples: >>> I = np.eye(2) >>> I array([[1., 0.], [0., 1.]]) Accumulate along axis 0 (rows), down columns: >>> np.add.accumulate(I, 0) array([[1., 0.], [1., 1.]]) >>> np.add.accumulate(I) # no axis specified = axis zero array([[1., 0.], [1., 1.]]) Accumulate along axis 1 (columns), through rows: >>> np.add.accumulate(I, 1) array([[1., 1.], [0., 1.]]) # numpy.ufunc.at method ufunc.at(_a_ , _indices_ , _b =None_, _/_) Performs unbuffered in place operation on operand ‘a’ for elements specified by ‘indices’. For addition ufunc, this method is equivalent to `a[indices] += b`, except that results are accumulated for elements that are indexed more than once. For example, `a[[0,0]] += 1` will only increment the first element once because of buffering, whereas `add.at(a, [0,0], 1)` will increment the first element twice. 
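The buffering difference described above is easy to demonstrate; a short sketch contrasting fancy-indexed `+=` with `ufunc.at`:

```python
import numpy as np

a = np.zeros(3, dtype=int)
a[[0, 0]] += 1           # buffered: index 0 is incremented only once
print(a)                 # [1 0 0]

b = np.zeros(3, dtype=int)
np.add.at(b, [0, 0], 1)  # unbuffered: both occurrences of index 0 accumulate
print(b)                 # [2 0 0]
```

This is why `np.add.at` (or, for the common histogram-style case, `np.bincount`) is the right tool when indices may repeat.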
Parameters: **a** array_like The array to perform in place operation on. **indices** array_like or tuple Array like index object or slice object for indexing into first operand. If first operand has multiple dimensions, indices can be a tuple of array like index objects or slice objects. **b** array_like Second operand for ufuncs requiring two operands. Operand must be broadcastable over first operand after indexing or slicing. #### Examples Set items 0 and 1 to their negative values: >>> import numpy as np >>> a = np.array([1, 2, 3, 4]) >>> np.negative.at(a, [0, 1]) >>> a array([-1, -2, 3, 4]) Increment items 0 and 1, and increment item 2 twice: >>> a = np.array([1, 2, 3, 4]) >>> np.add.at(a, [0, 1, 2, 2], 1) >>> a array([2, 3, 5, 4]) Add items 0 and 1 in first array to second array, and store results in first array: >>> a = np.array([1, 2, 3, 4]) >>> b = np.array([1, 2]) >>> np.add.at(a, [0, 1], b) >>> a array([2, 4, 3, 4]) # numpy.ufunc _class_ numpy.ufunc[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/__init__.py) Functions that operate element by element on whole arrays. To see the documentation for a specific ufunc, use [`info`](numpy.info#numpy.info "numpy.info"). For example, `np.info(np.sin)`. Because ufuncs are written in C (for speed) and linked into Python with NumPy’s ufunc facility, Python’s help() function finds this page whenever help() is called on a ufunc. A detailed explanation of ufuncs can be found in the docs for [Universal functions (ufunc)](../ufuncs#ufuncs). **Calling ufuncs:** `op(*x[, out], where=True, **kwargs)` Apply `op` to the arguments `*x` elementwise, broadcasting the arguments. The broadcasting rules are: * Dimensions of length 1 may be prepended to either array. * Arrays may be repeated along dimensions of length 1. Parameters: ***x** array_like Input arrays. 
**out** ndarray, None, or tuple of ndarray and None, optional Alternate array object(s) in which to put the result; if provided, it must have a shape that the inputs broadcast to. A tuple of arrays (possible only as a keyword argument) must have length equal to the number of outputs; use None for uninitialized outputs to be allocated by the ufunc. **where** array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **r** ndarray or tuple of ndarray `r` will have the shape that the arrays in `x` broadcast to; if `out` is provided, it will be returned. If not, `r` will be allocated and may contain uninitialized values. If the function has more than one output, then the result will be a tuple of arrays. Attributes: [`identity`](numpy.identity#numpy.identity "numpy.identity") The identity value. [`nargs`](numpy.ufunc.nargs#numpy.ufunc.nargs "numpy.ufunc.nargs") The number of arguments. [`nin`](numpy.ufunc.nin#numpy.ufunc.nin "numpy.ufunc.nin") The number of inputs. [`nout`](numpy.ufunc.nout#numpy.ufunc.nout "numpy.ufunc.nout") The number of outputs. [`ntypes`](numpy.ufunc.ntypes#numpy.ufunc.ntypes "numpy.ufunc.ntypes") The number of types. [`signature`](numpy.ufunc.signature#numpy.ufunc.signature "numpy.ufunc.signature") Definition of the core elements a generalized ufunc operates on. [`types`](numpy.ufunc.types#numpy.ufunc.types "numpy.ufunc.types") Returns a list with types grouped input->output. #### Methods [`__call__`](numpy.ufunc.__call__#numpy.ufunc.__call__ "numpy.ufunc.__call__")(*args, **kwargs) | Call self as a function. 
---|--- [`accumulate`](numpy.ufunc.accumulate#numpy.ufunc.accumulate "numpy.ufunc.accumulate")(array[, axis, dtype, out]) | Accumulate the result of applying the operator to all elements. [`at`](numpy.ufunc.at#numpy.ufunc.at "numpy.ufunc.at")(a, indices[, b]) | Performs unbuffered in place operation on operand 'a' for elements specified by 'indices'. [`outer`](numpy.ufunc.outer#numpy.ufunc.outer "numpy.ufunc.outer")(A, B, /, **kwargs) | Apply the ufunc `op` to all pairs (a, b) with a in `A` and b in `B`. [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce")(array[, axis, dtype, out, keepdims, ...]) | Reduces [`array`](numpy.array#numpy.array "numpy.array")'s dimension by one, by applying ufunc along one axis. [`reduceat`](numpy.ufunc.reduceat#numpy.ufunc.reduceat "numpy.ufunc.reduceat")(array, indices[, axis, dtype, out]) | Performs a (local) reduce with specified slices over a single axis. [`resolve_dtypes`](numpy.ufunc.resolve_dtypes#numpy.ufunc.resolve_dtypes "numpy.ufunc.resolve_dtypes")(dtypes, *[, signature, ...]) | Find the dtypes NumPy will use for the operation. # numpy.ufunc.identity attribute ufunc.identity The identity value. Data attribute containing the identity element for the ufunc, if it has one. If it does not, the attribute value is None. #### Examples >>> import numpy as np >>> np.add.identity 0 >>> np.multiply.identity 1 >>> np.power.identity 1 >>> print(np.exp.identity) None # numpy.ufunc.nargs attribute ufunc.nargs The number of arguments. Data attribute containing the number of arguments the ufunc takes, including optional ones. #### Notes Typically this value will be one more than what you might expect because all ufuncs take the optional “out” argument. #### Examples >>> import numpy as np >>> np.add.nargs 3 >>> np.multiply.nargs 3 >>> np.power.nargs 3 >>> np.exp.nargs 2 # numpy.ufunc.nin attribute ufunc.nin The number of inputs. Data attribute containing the number of arguments the ufunc treats as input. 
#### Examples >>> import numpy as np >>> np.add.nin 2 >>> np.multiply.nin 2 >>> np.power.nin 2 >>> np.exp.nin 1 # numpy.ufunc.nout attribute ufunc.nout The number of outputs. Data attribute containing the number of arguments the ufunc treats as output. #### Notes Since all ufuncs can take output arguments, this will always be at least 1. #### Examples >>> import numpy as np >>> np.add.nout 1 >>> np.multiply.nout 1 >>> np.power.nout 1 >>> np.exp.nout 1 # numpy.ufunc.ntypes attribute ufunc.ntypes The number of types. The number of numerical NumPy types - of which there are 18 total - on which the ufunc can operate. See also [`numpy.ufunc.types`](numpy.ufunc.types#numpy.ufunc.types "numpy.ufunc.types") #### Examples >>> import numpy as np >>> np.add.ntypes 18 >>> np.multiply.ntypes 18 >>> np.power.ntypes 17 >>> np.exp.ntypes 7 >>> np.remainder.ntypes 14 # numpy.ufunc.outer method ufunc.outer(_A_ , _B_ , _/_ , _** kwargs_) Apply the ufunc `op` to all pairs (a, b) with a in `A` and b in `B`. Let `M = A.ndim`, `N = B.ndim`. Then the result, `C`, of `op.outer(A, B)` is an array of dimension M + N such that: \\[C[i_0, ..., i_{M-1}, j_0, ..., j_{N-1}] = op(A[i_0, ..., i_{M-1}], B[j_0, ..., j_{N-1}])\\] For `A` and `B` one-dimensional, this is equivalent to: r = empty(len(A),len(B)) for i in range(len(A)): for j in range(len(B)): r[i,j] = op(A[i], B[j]) # op = ufunc in question Parameters: **A** array_like First array **B** array_like Second array **kwargs** any Arguments to pass on to the ufunc. Typically [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") or `out`. See [`ufunc`](numpy.ufunc#numpy.ufunc "numpy.ufunc") for a comprehensive overview of all available arguments. Returns: **r** ndarray Output array See also [`numpy.outer`](numpy.outer#numpy.outer "numpy.outer") A less powerful version of `np.multiply.outer` that [`ravel`](numpy.ravel#numpy.ravel "numpy.ravel")s all inputs to 1D. This exists primarily for compatibility with old code. 
[`tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot") `np.tensordot(a, b, axes=((), ()))` and `np.multiply.outer(a, b)` behave the same for all dimensions of a and b. #### Examples >>> np.multiply.outer([1, 2, 3], [4, 5, 6]) array([[ 4, 5, 6], [ 8, 10, 12], [12, 15, 18]]) A multi-dimensional example: >>> A = np.array([[1, 2, 3], [4, 5, 6]]) >>> A.shape (2, 3) >>> B = np.array([[1, 2, 3, 4]]) >>> B.shape (1, 4) >>> C = np.multiply.outer(A, B) >>> C.shape; C (2, 3, 1, 4) array([[[[ 1, 2, 3, 4]], [[ 2, 4, 6, 8]], [[ 3, 6, 9, 12]]], [[[ 4, 8, 12, 16]], [[ 5, 10, 15, 20]], [[ 6, 12, 18, 24]]]]) # numpy.ufunc.reduce method ufunc.reduce(_array_ , _axis=0_ , _dtype=None_ , _out=None_ , _keepdims=False_ , _initial=<no value>_ , _where=True_) Reduces [`array`](numpy.array#numpy.array "numpy.array")’s dimension by one, by applying ufunc along one axis. Let \\(array.shape = (N_0, ..., N_i, ..., N_{M-1})\\). Then \\(ufunc.reduce(array, axis=i)[k_0, ..,k_{i-1}, k_{i+1}, .., k_{M-1}]\\) = the result of iterating `j` over \\(range(N_i)\\), cumulatively applying ufunc to each \\(array[k_0, ..,k_{i-1}, j, k_{i+1}, .., k_{M-1}]\\). For a one-dimensional array, reduce produces results equivalent to: r = op.identity # op = ufunc for i in range(len(A)): r = op(r, A[i]) return r For example, add.reduce() is equivalent to sum(). Parameters: **array** array_like The array to act on. **axis** None or int or tuple of ints, optional Axis or axes along which a reduction is performed. The default (`axis` = 0) is to perform a reduction over the first dimension of the input array. `axis` may be negative, in which case it counts from the last to the first axis. If this is None, a reduction is performed over all the axes. If this is a tuple of ints, a reduction is performed on multiple axes, instead of a single axis or all the axes as before. For operations which are either not commutative or not associative, doing a reduction over multiple axes is not well-defined. 
The ufuncs do not currently raise an exception in this case, but will likely do so in the future. **dtype** data-type code, optional The data type used to perform the operation. Defaults to that of `out` if given, and the data type of `array` otherwise (though upcast to conserve precision for some cases, such as `numpy.add.reduce` for integer or boolean input). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If not provided or None, a freshly-allocated array is returned. For consistency with `ufunc.__call__`, if given as a keyword, this may be wrapped in a 1-element tuple. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original [`array`](numpy.array#numpy.array "numpy.array"). **initial** scalar, optional The value with which to start the reduction. If the ufunc has no identity or the dtype is object, this defaults to None - otherwise it defaults to ufunc.identity. If `None` is given, the first element of the reduction is used, and an error is thrown if the reduction is empty. **where** array_like of bool, optional A boolean array which is broadcasted to match the dimensions of [`array`](numpy.array#numpy.array "numpy.array"), and selects elements to include in the reduction. Note that for ufuncs like `minimum` that do not have an identity defined, one has to pass in also `initial`. Returns: **r** ndarray The reduced array. If `out` was supplied, `r` is a reference to it. 
#### Examples >>> import numpy as np >>> np.multiply.reduce([2,3,5]) 30 A multi-dimensional array example: >>> X = np.arange(8).reshape((2,2,2)) >>> X array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) >>> np.add.reduce(X, 0) array([[ 4, 6], [ 8, 10]]) >>> np.add.reduce(X) # confirm: default axis value is 0 array([[ 4, 6], [ 8, 10]]) >>> np.add.reduce(X, 1) array([[ 2, 4], [10, 12]]) >>> np.add.reduce(X, 2) array([[ 1, 5], [ 9, 13]]) You can use the `initial` keyword argument to initialize the reduction with a different value, and `where` to select specific elements to include: >>> np.add.reduce([10], initial=5) 15 >>> np.add.reduce(np.ones((2, 2, 2)), axis=(0, 2), initial=10) array([14., 14.]) >>> a = np.array([10., np.nan, 10]) >>> np.add.reduce(a, where=~np.isnan(a)) 20.0 Allows reductions of empty arrays where they would normally fail, i.e. for ufuncs without an identity. >>> np.minimum.reduce([], initial=np.inf) inf >>> np.minimum.reduce([[1., 2.], [3., 4.]], initial=10., where=[True, False]) array([ 1., 10.]) >>> np.minimum.reduce([]) Traceback (most recent call last): ... ValueError: zero-size array to reduction operation minimum which has no identity # numpy.ufunc.reduceat method ufunc.reduceat(_array_ , _indices_ , _axis =0_, _dtype =None_, _out =None_) Performs a (local) reduce with specified slices over a single axis. For i in `range(len(indices))`, `reduceat` computes `ufunc.reduce(array[indices[i]:indices[i+1]])`, which becomes the i-th generalized “row” parallel to `axis` in the final result (i.e., in a 2-D array, for example, if `axis = 0`, it becomes the i-th row, but if `axis = 1`, it becomes the i-th column). There are three exceptions to this: * when `i = len(indices) - 1` (so for the last index), `indices[i+1] = array.shape[axis]`. * if `indices[i] >= indices[i + 1]`, the i-th generalized “row” is simply `array[indices[i]]`. * if `indices[i] >= len(array)` or `indices[i] < 0`, an error is raised. 
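The index-pair rules listed above can be sketched with a small example (illustrative only): each pair of consecutive indices delimits a segment, the last index runs to the end of the axis, and a non-increasing pair yields the single element at the first index of the pair.

```python
import numpy as np

a = np.arange(8)  # [0 1 2 3 4 5 6 7]

# Segments are a[0:4], a[4:6], and a[6:] (the last index runs to the end)
print(np.add.reduceat(a, [0, 4, 6]))   # [ 6  9 13]

# The pair (2, 1) is non-increasing, so that "row" is just a[2]
print(np.add.reduceat(a, [2, 1, 3]))   # [ 2  3 25]  i.e. [a[2], a[1:3].sum(), a[3:].sum()]
```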
The shape of the output depends on the size of [`indices`](numpy.indices#numpy.indices "numpy.indices"), and may be larger than [`array`](numpy.array#numpy.array "numpy.array") (this happens if `len(indices) > array.shape[axis]`). Parameters: **array** array_like The array to act on. **indices** array_like Paired indices, comma separated (not colon), specifying slices to reduce. **axis** int, optional The axis along which to apply the reduceat. **dtype** data-type code, optional The data type used to perform the operation. Defaults to that of `out` if given, and the data type of `array` otherwise (though upcast to conserve precision for some cases, such as `numpy.add.reduce` for integer or boolean input). **out** ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If not provided or None, a freshly-allocated array is returned. For consistency with `ufunc.__call__`, if given as a keyword, this may be wrapped in a 1-element tuple. Returns: **r** ndarray The reduced values. If `out` was supplied, `r` is a reference to `out`. #### Notes A descriptive example: If [`array`](numpy.array#numpy.array "numpy.array") is 1-D, the function `ufunc.accumulate(array)` is the same as `ufunc.reduceat(array, indices)[::2]` where [`indices`](numpy.indices#numpy.indices "numpy.indices") is `range(len(array) - 1)` with a zero placed in every other element: `indices = zeros(2 * len(array) - 1)`, `indices[1::2] = range(1, len(array))`. Don’t be fooled by this attribute’s name: `reduceat(array)` is not necessarily smaller than [`array`](numpy.array#numpy.array "numpy.array"). 
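The accumulate-via-reduceat equivalence stated in the note above can be checked with a short sketch (illustrative only):

```python
import numpy as np

a = np.array([1, 2, 3, 4])

# Interleaved index pattern from the note: zeros with 1..len(a)-1
# in the odd slots -> [0, 1, 0, 2, 0, 3, 0]
indices = np.zeros(2 * len(a) - 1, dtype=int)
indices[1::2] = np.arange(1, len(a))

# Every other "row" of the reduceat result is a prefix reduction
assert np.array_equal(np.add.reduceat(a, indices)[::2],
                      np.add.accumulate(a))
```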
#### Examples To take the running sum of four successive values: >>> import numpy as np >>> np.add.reduceat(np.arange(8),[0,4, 1,5, 2,6, 3,7])[::2] array([ 6, 10, 14, 18]) A 2-D example: >>> x = np.linspace(0, 15, 16).reshape(4,4) >>> x array([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.], [12., 13., 14., 15.]]) # reduce such that the result has the following five rows: # [row1 + row2 + row3] # [row4] # [row2] # [row3] # [row1 + row2 + row3 + row4] >>> np.add.reduceat(x, [0, 3, 1, 2, 0]) array([[12., 15., 18., 21.], [12., 13., 14., 15.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.], [24., 28., 32., 36.]]) # reduce such that result has the following two columns: # [col1 * col2 * col3, col4] >>> np.multiply.reduceat(x, [0, 3], 1) array([[ 0., 3.], [ 120., 7.], [ 720., 11.], [2184., 15.]]) # numpy.ufunc.resolve_dtypes method ufunc.resolve_dtypes(_dtypes_ , _*_ , _signature =None_, _casting =None_, _reduction =False_) Find the dtypes NumPy will use for the operation. Both input and output dtypes are returned and may differ from those provided. Note This function always applies NEP 50 rules since it is not provided any actual values. The Python types `int`, `float`, and `complex` thus behave weak and should be passed for “untyped” Python input. Parameters: **dtypes** tuple of dtypes, None, or literal int, float, complex The input dtypes for each operand. Output operands can be None, indicating that the dtype must be found. **signature** tuple of DTypes or None, optional If given, enforces exact DType (classes) of the specific operand. The ufunc `dtype` argument is equivalent to passing a tuple with only output dtypes set. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional The casting mode when casting is necessary. This is identical to the ufunc call casting modes. **reduction** boolean If given, the resolution assumes a reduce operation is happening which slightly changes the promotion and type resolution rules. 
[`dtypes`](../routines.dtypes#module-numpy.dtypes "numpy.dtypes") is usually something like `(None, np.dtype("i2"), None)` for reductions (first input is also the output). Note The default casting mode is “same_kind”, however, as of NumPy 1.24, NumPy uses “unsafe” for reductions. Returns: **dtypes** tuple of dtypes The dtypes which NumPy would use for the calculation. Note that dtypes may not match the passed in ones (casting is necessary). #### Examples This API requires passing dtypes, define them for convenience: >>> import numpy as np >>> int32 = np.dtype("int32") >>> float32 = np.dtype("float32") The typical ufunc call does not pass an output dtype. [`numpy.add`](numpy.add#numpy.add "numpy.add") has two inputs and one output, so leave the output as `None` (not provided): >>> np.add.resolve_dtypes((int32, float32, None)) (dtype('float64'), dtype('float64'), dtype('float64')) The loop found uses “float64” for all operands (including the output), the first input would be cast. `resolve_dtypes` supports “weak” handling for Python scalars by passing `int`, `float`, or `complex`: >>> np.add.resolve_dtypes((float32, float, None)) (dtype('float32'), dtype('float32'), dtype('float32')) Where the Python `float` behaves similar to a Python value `0.0` in a ufunc call. (See [NEP 50](https://numpy.org/neps/nep-0050-scalar- promotion.html#nep50 "\(in NumPy Enhancement Proposals\)") for details.) # numpy.ufunc.signature attribute ufunc.signature Definition of the core elements a generalized ufunc operates on. The signature determines how the dimensions of each input/output array are split into core and loop dimensions: 1. Each dimension in the signature is matched to a dimension of the corresponding passed-in array, starting from the end of the shape tuple. 2. Core dimensions assigned to the same label in the signature must have exactly matching sizes, no broadcasting is performed. 3. 
The core dimensions are removed from all inputs and the remaining dimensions are broadcast together, defining the loop dimensions. #### Notes Generalized ufuncs are used internally in many linalg functions, and in the testing suite; the examples below are taken from these. For ufuncs that operate on scalars, the signature is None, which is equivalent to ‘()’ for every argument. #### Examples >>> import numpy as np >>> np.linalg._umath_linalg.det.signature '(m,m)->()' >>> np.matmul.signature '(n?,k),(k,m?)->(n?,m?)' >>> np.add.signature is None True # equivalent to '(),()->()' # numpy.ufunc.types attribute ufunc.types Returns a list with types grouped input->output. Data attribute listing the data-type “Domain-Range” groupings the ufunc can deliver. The data-types are given using the character codes. See also [`numpy.ufunc.ntypes`](numpy.ufunc.ntypes#numpy.ufunc.ntypes "numpy.ufunc.ntypes") #### Examples >>> import numpy as np >>> np.add.types ['??->?', 'bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L', 'qq->q', 'QQ->Q', 'ff->f', 'dd->d', 'gg->g', 'FF->F', 'DD->D', 'GG->G', 'OO->O'] >>> np.multiply.types ['??->?', 'bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L', 'qq->q', 'QQ->Q', 'ff->f', 'dd->d', 'gg->g', 'FF->F', 'DD->D', 'GG->G', 'OO->O'] >>> np.power.types ['bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L', 'qq->q', 'QQ->Q', 'ff->f', 'dd->d', 'gg->g', 'FF->F', 'DD->D', 'GG->G', 'OO->O'] >>> np.exp.types ['f->f', 'd->d', 'g->g', 'F->F', 'D->D', 'G->G', 'O->O'] >>> np.remainder.types ['bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L', 'qq->q', 'QQ->Q', 'ff->f', 'dd->d', 'gg->g', 'OO->O'] # numpy.union1d numpy.union1d(_ar1_ , _ar2_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_arraysetops_impl.py#L1140-L1170) Find the union of two arrays. Return the unique, sorted array of values that are in either of the two input arrays. 
Parameters: **ar1, ar2** array_like Input arrays. They are flattened if they are not already 1D. Returns: **union1d** ndarray Unique, sorted union of the input arrays. #### Examples >>> import numpy as np >>> np.union1d([-1, 0, 1], [-2, 0, 2]) array([-2, -1, 0, 1, 2]) To find the union of more than two arrays, use functools.reduce: >>> from functools import reduce >>> reduce(np.union1d, ([1, 3, 4, 3], [3, 1, 2, 1], [6, 3, 4, 2])) array([1, 2, 3, 4, 6]) # numpy.unique numpy.unique(_ar_ , _return_index =False_, _return_inverse =False_, _return_counts =False_, _axis =None_, _*_ , _equal_nan =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_arraysetops_impl.py#L145-L336) Find the unique elements of an array. Returns the sorted unique elements of an array. There are three optional outputs in addition to the unique elements: * the indices of the input array that give the unique values * the indices of the unique array that reconstruct the input array * the number of times each unique value comes up in the input array Parameters: **ar** array_like Input array. Unless `axis` is specified, this will be flattened if it is not already 1-D. **return_index** bool, optional If True, also return the indices of `ar` (along the specified axis, if provided, or in the flattened array) that result in the unique array. **return_inverse** bool, optional If True, also return the indices of the unique array (for the specified axis, if provided) that can be used to reconstruct `ar`. **return_counts** bool, optional If True, also return the number of times each unique item appears in `ar`. **axis** int or None, optional The axis to operate on. If None, `ar` will be flattened. If an integer, the subarrays indexed by the given axis will be flattened and treated as the elements of a 1-D array with the dimension of the given axis, see the notes for more details. Object arrays or structured arrays that contain objects are not supported if the `axis` kwarg is used. 
The default is None. **equal_nan** bool, optional If True, collapses multiple NaN values in the return array into one. New in version 1.24. Returns: **unique** ndarray The sorted unique values. **unique_indices** ndarray, optional The indices of the first occurrences of the unique values in the original array. Only provided if `return_index` is True. **unique_inverse** ndarray, optional The indices to reconstruct the original array from the unique array. Only provided if `return_inverse` is True. **unique_counts** ndarray, optional The number of times each of the unique values comes up in the original array. Only provided if `return_counts` is True. See also [`repeat`](numpy.repeat#numpy.repeat "numpy.repeat") Repeat elements of an array. [`sort`](numpy.sort#numpy.sort "numpy.sort") Return a sorted copy of an array. #### Notes When an axis is specified the subarrays indexed by the axis are sorted. This is done by making the specified axis the first dimension of the array (move the axis to the first dimension to keep the order of the other axes) and then flattening the subarrays in C order. The flattened subarrays are then viewed as a structured type with each element given a label, with the effect that we end up with a 1-D array of structured types that can be treated in the same way as any other 1-D array. The result is that the flattened subarrays are sorted in lexicographic order starting with the first element. Changed in version 1.21: Like np.sort, NaN will sort to the end of the values. For complex arrays all NaN values are considered equivalent (no matter whether the NaN is in the real or imaginary part). As the representant for the returned array the smallest one in the lexicographical order is chosen - see np.sort for how the lexicographical order is defined for complex arrays. Changed in version 2.0: For multi-dimensional inputs, `unique_inverse` is reshaped such that the input can be reconstructed using `np.take(unique, unique_inverse, axis=axis)`. 
The result is now not 1-dimensional when `axis=None`. Note that in NumPy 2.0.0 a higher dimensional array was returned also when `axis` was not `None`. This was reverted, but `inverse.reshape(-1)` can be used to ensure compatibility with both versions. #### Examples >>> import numpy as np >>> np.unique([1, 1, 2, 2, 3, 3]) array([1, 2, 3]) >>> a = np.array([[1, 1], [2, 3]]) >>> np.unique(a) array([1, 2, 3]) Return the unique rows of a 2D array >>> a = np.array([[1, 0, 0], [1, 0, 0], [2, 3, 4]]) >>> np.unique(a, axis=0) array([[1, 0, 0], [2, 3, 4]]) Return the indices of the original array that give the unique values: >>> a = np.array(['a', 'b', 'b', 'c', 'a']) >>> u, indices = np.unique(a, return_index=True) >>> u array(['a', 'b', 'c'], dtype='<U1') >>> indices array([0, 1, 3]) >>> a[indices] array(['a', 'b', 'c'], dtype='<U1') Reconstruct the input array from the unique values and inverse: >>> a = np.array([1, 2, 6, 4, 2, 3, 2]) >>> u, indices = np.unique(a, return_inverse=True) >>> u array([1, 2, 3, 4, 6]) >>> indices array([0, 1, 4, 3, 1, 2, 1]) >>> u[indices] array([1, 2, 6, 4, 2, 3, 2]) Reconstruct the input values from the unique values and counts: >>> a = np.array([1, 2, 6, 4, 2, 3, 2]) >>> values, counts = np.unique(a, return_counts=True) >>> values array([1, 2, 3, 4, 6]) >>> counts array([1, 3, 1, 1, 1]) >>> np.repeat(values, counts) array([1, 2, 2, 2, 3, 4, 6]) # original order not preserved # numpy.unique_all numpy.unique_all(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_arraysetops_impl.py#L408-L461) Find the unique elements of an array, and counts, inverse, and indices. This function is an Array API compatible alternative to: np.unique(x, return_index=True, return_inverse=True, return_counts=True, equal_nan=False) but returns a namedtuple for easier access to each output. Parameters: **x** array_like Input array. It will be flattened if it is not already 1-D. Returns: **out** namedtuple The result containing: * values - The unique elements of an input array. 
* indices - The first occurring indices for each unique element. * inverse_indices - The indices from the set of unique elements that reconstruct `x`. * counts - The corresponding counts for each unique element. See also [`unique`](numpy.unique#numpy.unique "numpy.unique") Find the unique elements of an array. #### Examples >>> import numpy as np >>> x = [1, 1, 2] >>> uniq = np.unique_all(x) >>> uniq.values array([1, 2]) >>> uniq.indices array([0, 2]) >>> uniq.inverse_indices array([0, 0, 1]) >>> uniq.counts array([2, 1]) # numpy.unique_counts numpy.unique_counts(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_arraysetops_impl.py#L468-L513) Find the unique elements and counts of an input array `x`. This function is an Array API compatible alternative to: np.unique(x, return_counts=True, equal_nan=False) but returns a namedtuple for easier access to each output. Parameters: **x** array_like Input array. It will be flattened if it is not already 1-D. Returns: **out** namedtuple The result containing: * values - The unique elements of an input array. * counts - The corresponding counts for each unique element. See also [`unique`](numpy.unique#numpy.unique "numpy.unique") Find the unique elements of an array. #### Examples >>> import numpy as np >>> x = [1, 1, 2] >>> uniq = np.unique_counts(x) >>> uniq.values array([1, 2]) >>> uniq.counts array([2, 1]) # numpy.unique_inverse numpy.unique_inverse(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_arraysetops_impl.py#L520-L566) Find the unique elements of `x` and indices to reconstruct `x`. This function is an Array API compatible alternative to: np.unique(x, return_inverse=True, equal_nan=False) but returns a namedtuple for easier access to each output. Parameters: **x** array_like Input array. It will be flattened if it is not already 1-D. Returns: **out** namedtuple The result containing: * values - The unique elements of an input array. 
* inverse_indices - The indices from the set of unique elements that reconstruct `x`. See also [`unique`](numpy.unique#numpy.unique "numpy.unique") Find the unique elements of an array. #### Examples >>> import numpy as np >>> x = [1, 1, 2] >>> uniq = np.unique_inverse(x) >>> uniq.values array([1, 2]) >>> uniq.inverse_indices array([0, 0, 1]) # numpy.unique_values numpy.unique_values(_x_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_arraysetops_impl.py#L573-L609) Returns the unique elements of an input array `x`. This function is an Array API compatible alternative to: np.unique(x, equal_nan=False) Parameters: **x** array_like Input array. It will be flattened if it is not already 1-D. Returns: **out** ndarray The unique elements of an input array. See also [`unique`](numpy.unique#numpy.unique "numpy.unique") Find the unique elements of an array. #### Examples >>> import numpy as np >>> np.unique_values([1, 1, 2]) array([1, 2]) # numpy.unpackbits numpy.unpackbits(_a_ , _/_ , _axis =None_, _count =None_, _bitorder ='big'_) Unpacks elements of a uint8 array into a binary-valued output array. Each element of `a` represents a bit-field that should be unpacked into a binary-valued output array. The shape of the output array is either 1-D (if `axis` is `None`) or the same shape as the input array with unpacking done along the axis specified. Parameters: **a** ndarray, uint8 type Input array. **axis** int, optional The dimension over which bit-unpacking is done. `None` implies unpacking the flattened array. **count** int or None, optional The number of elements to unpack along `axis`, provided as a way of undoing the effect of packing a size that is not a multiple of eight. A non-negative number means to only unpack `count` bits. A negative number means to trim off that many bits from the end. `None` means to unpack the entire array (the default). Counts larger than the available number of bits will add zero padding to the output. 
Negative counts must not exceed the available number of bits. **bitorder**{‘big’, ‘little’}, optional The order of the returned bits. ‘big’ will mimic bin(val), `3 = 0b00000011 => [0, 0, 0, 0, 0, 0, 1, 1]`, ‘little’ will reverse the order to `[1, 1, 0, 0, 0, 0, 0, 0]`. Defaults to ‘big’. Returns: **unpacked** ndarray, uint8 type The elements are binary-valued (0 or 1). See also [`packbits`](numpy.packbits#numpy.packbits "numpy.packbits") Packs the elements of a binary-valued array into bits in a uint8 array. #### Examples >>> import numpy as np >>> a = np.array([[2], [7], [23]], dtype=np.uint8) >>> a array([[ 2], [ 7], [23]], dtype=uint8) >>> b = np.unpackbits(a, axis=1) >>> b array([[0, 0, 0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1, 1, 1], [0, 0, 0, 1, 0, 1, 1, 1]], dtype=uint8) >>> c = np.unpackbits(a, axis=1, count=-3) >>> c array([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 1, 0]], dtype=uint8) >>> p = np.packbits(b, axis=0) >>> np.unpackbits(p, axis=0) array([[0, 0, 0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1, 1, 1], [0, 0, 0, 1, 0, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8) >>> np.array_equal(b, np.unpackbits(p, axis=0, count=b.shape[0])) True # numpy.unravel_index numpy.unravel_index(_indices_ , _shape_ , _order ='C'_) Converts a flat index or array of flat indices into a tuple of coordinate arrays. Parameters: **indices** array_like An integer array whose elements are indices into the flattened version of an array of dimensions `shape`. Before version 1.6.0, this function accepted just one index value. **shape** tuple of ints The shape of the array to use for unraveling `indices`. **order**{‘C’, ‘F’}, optional Determines whether the indices should be viewed as indexing in row-major (C-style) or column-major (Fortran-style) order. Returns: **unraveled_coords** tuple of ndarray Each array in the tuple has the same shape as the `indices` array. 
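The mapping performed by `unravel_index` can be inverted with `np.ravel_multi_index`; a minimal round-trip sketch (using C order, the default):

```python
import numpy as np

flat = np.array([22, 41, 37])
shape = (7, 6)

# Flat indices -> tuple of coordinate arrays
coords = np.unravel_index(flat, shape)
print(coords)   # (array([3, 6, 6]), array([4, 5, 1]))

# Coordinate arrays -> flat indices (the inverse operation)
back = np.ravel_multi_index(coords, shape)
print(back)     # [22 41 37]
```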
See also [`ravel_multi_index`](numpy.ravel_multi_index#numpy.ravel_multi_index "numpy.ravel_multi_index") #### Examples >>> import numpy as np >>> np.unravel_index([22, 41, 37], (7,6)) (array([3, 6, 6]), array([4, 5, 1])) >>> np.unravel_index([31, 41, 13], (7,6), order='F') (array([3, 6, 6]), array([4, 5, 1])) >>> np.unravel_index(1621, (6,7,8,9)) (3, 1, 4, 1) # numpy.unstack numpy.unstack(_x_ , _/_ , _*_ , _axis =0_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/shape_base.py#L473-L539) Split an array into a sequence of arrays along the given axis. The `axis` parameter specifies the dimension along which the array will be split. For example, if `axis=0` (the default) it will be the first dimension and if `axis=-1` it will be the last dimension. The result is a tuple of arrays split along `axis`. New in version 2.1.0. Parameters: **x** ndarray The array to be unstacked. **axis** int, optional Axis along which the array will be split. Default: `0`. Returns: **unstacked** tuple of ndarrays The unstacked arrays. See also [`stack`](numpy.stack#numpy.stack "numpy.stack") Join a sequence of arrays along a new axis. [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble an nd-array from nested lists of blocks. [`split`](numpy.split#numpy.split "numpy.split") Split array into a list of multiple sub-arrays of equal size. #### Notes `unstack` serves as the reverse operation of [`stack`](numpy.stack#numpy.stack "numpy.stack"), i.e., `stack(unstack(x, axis=axis), axis=axis) == x`. This function is equivalent to `tuple(np.moveaxis(x, axis, 0))`, since iterating on an array iterates along the first axis. 
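The `moveaxis` equivalence stated in the note can be checked directly; a small sketch (`np.unstack` itself assumes NumPy >= 2.1, while the `moveaxis` form works on older versions too):

```python
import numpy as np

x = np.arange(24).reshape(2, 3, 4)

# Equivalent to np.unstack(x, axis=1): move axis 1 to the front,
# then iterate (iteration is always over the first axis).
parts = tuple(np.moveaxis(x, 1, 0))

print(len(parts))        # 3 sub-arrays, one per index along axis 1
print(parts[0].shape)    # (2, 4)
print(np.array_equal(parts[0], x[:, 0, :]))  # True
```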
#### Examples >>> arr = np.arange(24).reshape((2, 3, 4)) >>> np.unstack(arr) (array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]), array([[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]])) >>> np.unstack(arr, axis=1) (array([[ 0, 1, 2, 3], [12, 13, 14, 15]]), array([[ 4, 5, 6, 7], [16, 17, 18, 19]]), array([[ 8, 9, 10, 11], [20, 21, 22, 23]])) >>> arr2 = np.stack(np.unstack(arr, axis=1), axis=1) >>> arr2.shape (2, 3, 4) >>> np.all(arr == arr2) np.True_ # numpy.unwrap numpy.unwrap(_p_ , _discont =None_, _axis =-1_, _*_ , _period =6.283185307179586_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L1707-L1801) Unwrap by taking the complement of large deltas with respect to the period. This unwraps a signal `p` by changing elements which have an absolute difference from their predecessor of more than `max(discont, period/2)` to their `period`-complementary values. For the default case where `period` is \\(2\pi\\) and `discont` is \\(\pi\\), this unwraps a radian phase `p` such that adjacent differences are never greater than \\(\pi\\) by adding \\(2k\pi\\) for some integer \\(k\\). Parameters: **p** array_like Input array. **discont** float, optional Maximum discontinuity between values, default is `period/2`. Values below `period/2` are treated as if they were `period/2`. To have an effect different from the default, `discont` should be larger than `period/2`. **axis** int, optional Axis along which unwrap will operate, default is the last axis. **period** float, optional Size of the range over which the input wraps. By default, it is `2 pi`. New in version 1.21.0. Returns: **out** ndarray Output array. See also [`rad2deg`](numpy.rad2deg#numpy.rad2deg "numpy.rad2deg"), [`deg2rad`](numpy.deg2rad#numpy.deg2rad "numpy.deg2rad") #### Notes If the discontinuity in `p` is smaller than `period/2`, but larger than `discont`, no unwrapping is done because taking the complement would only make the discontinuity larger. 
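Because unwrapping only ever adds or subtracts whole periods, the result always maps back to the input under `np.mod`. A small sketch with an integer period:

```python
import numpy as np

p = np.array([0, 1, 2, -1, 0])
u = np.unwrap(p, period=4)
print(u)             # [0 1 2 3 4]

# The correction applied at each element is a whole number of periods:
print((u - p) / 4)   # [0. 0. 0. 1. 1.]

# so taking values modulo the period recovers the wrapped signal:
print(np.array_equal(np.mod(u, 4), np.mod(p, 4)))  # True
```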
#### Examples >>> import numpy as np >>> phase = np.linspace(0, np.pi, num=5) >>> phase[3:] += np.pi >>> phase array([ 0. , 0.78539816, 1.57079633, 5.49778714, 6.28318531]) # may vary >>> np.unwrap(phase) array([ 0. , 0.78539816, 1.57079633, -0.78539816, 0. ]) # may vary >>> np.unwrap([0, 1, 2, -1, 0], period=4) array([0, 1, 2, 3, 4]) >>> np.unwrap([ 1, 2, 3, 4, 5, 6, 1, 2, 3], period=6) array([1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.unwrap([2, 3, 4, 5, 2, 3, 4, 5], period=4) array([2, 3, 4, 5, 6, 7, 8, 9]) >>> phase_deg = np.mod(np.linspace(0, 720, 19), 360) - 180 >>> np.unwrap(phase_deg, period=360) array([-180., -140., -100., -60., -20., 20., 60., 100., 140., 180., 220., 260., 300., 340., 380., 420., 460., 500., 540.]) # numpy.vander numpy.vander(_x_ , _N =None_, _increasing =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_twodim_base_impl.py#L547-L634) Generate a Vandermonde matrix. The columns of the output matrix are powers of the input vector. The order of the powers is determined by the `increasing` boolean argument. Specifically, when `increasing` is False, the `i`-th output column is the input vector raised element-wise to the power of `N - i - 1`. Such a matrix with a geometric progression in each row is named for Alexandre-Theophile Vandermonde. Parameters: **x** array_like 1-D input array. **N** int, optional Number of columns in the output. If `N` is not specified, a square array is returned (`N = len(x)`). **increasing** bool, optional Order of the powers of the columns. If True, the powers increase from left to right, if False (the default) they are reversed. Returns: **out** ndarray Vandermonde matrix. If `increasing` is False, the first column is `x^(N-1)`, the second `x^(N-2)` and so forth. If `increasing` is True, the columns are `x^0, x^1, ..., x^(N-1)`.
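A classic application of the Vandermonde matrix is least-squares polynomial fitting, which is essentially what `np.polyfit` does internally. A sketch with illustrative sample points:

```python
import numpy as np

# Fit a degree-2 polynomial through four points by solving the
# least-squares problem vander(x, 3) @ coeffs ~= y.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x**2 + 1.0          # data drawn from the exact polynomial 2*x**2 + 1

A = np.vander(x, 3)           # columns are x**2, x**1, x**0
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

print(np.allclose(coeffs, [2.0, 0.0, 1.0]))  # True: highest power first
```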
See also [`polynomial.polynomial.polyvander`](numpy.polynomial.polynomial.polyvander#numpy.polynomial.polynomial.polyvander "numpy.polynomial.polynomial.polyvander") #### Examples >>> import numpy as np >>> x = np.array([1, 2, 3, 5]) >>> N = 3 >>> np.vander(x, N) array([[ 1, 1, 1], [ 4, 2, 1], [ 9, 3, 1], [25, 5, 1]]) >>> np.column_stack([x**(N-1-i) for i in range(N)]) array([[ 1, 1, 1], [ 4, 2, 1], [ 9, 3, 1], [25, 5, 1]]) >>> x = np.array([1, 2, 3, 5]) >>> np.vander(x) array([[ 1, 1, 1, 1], [ 8, 4, 2, 1], [ 27, 9, 3, 1], [125, 25, 5, 1]]) >>> np.vander(x, increasing=True) array([[ 1, 1, 1, 1], [ 1, 2, 4, 8], [ 1, 3, 9, 27], [ 1, 5, 25, 125]]) The determinant of a square Vandermonde matrix is the product of the differences between the values of the input vector: >>> np.linalg.det(np.vander(x)) 48.000000000000043 # may vary >>> (5-3)*(5-2)*(5-1)*(3-2)*(3-1)*(2-1) 48 # numpy.var numpy.var(_a_ , _axis=None_ , _dtype=None_ , _out=None_ , _ddof=0_ , _keepdims=<no value>_ , _*_ , _where=<no value>_ , _mean=<no value>_ , _correction=<no value>_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/fromnumeric.py#L4073-L4269) Compute the variance along the specified axis. Returns the variance of the array elements, a measure of the spread of a distribution. The variance is computed for the flattened array by default, otherwise over the specified axis. Parameters: **a** array_like Array containing numbers whose variance is desired. If `a` is not an array, a conversion is attempted. **axis** None or int or tuple of ints, optional Axis or axes along which the variance is computed. The default is to compute the variance of the flattened array. If this is a tuple of ints, a variance is performed over multiple axes, instead of a single axis or all the axes as before. **dtype** data-type, optional Type to use in computing the variance. For arrays of integer type the default is [`float64`](../arrays.scalars#numpy.float64 "numpy.float64"); for arrays of float types it is the same as the array type.
**out** ndarray, optional Alternate output array in which to place the result. It must have the same shape as the expected output, but the type is cast if necessary. **ddof**{int, float}, optional “Delta Degrees of Freedom”: the divisor used in the calculation is `N - ddof`, where `N` represents the number of elements. By default `ddof` is zero. See notes for details about use of `ddof`. **keepdims** bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the `var` method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non-default value will be. If the sub-class’ method does not implement `keepdims` any exceptions will be raised. **where** array_like of bool, optional Elements to include in the variance. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.20.0. **mean** array like, optional Provide the mean to prevent its recalculation. The mean should have a shape as if it was calculated with `keepdims=True`. The axis for the calculation of the mean should be the same as used in the call to this var function. New in version 2.0.0. **correction**{int, float}, optional Array API compatible name for the `ddof` parameter. Only one of them can be provided at the same time. New in version 2.0.0. Returns: **variance** ndarray, see dtype parameter above If `out=None`, returns a new array containing the variance; otherwise, a reference to the output array is returned. 
See also [`std`](numpy.std#numpy.std "numpy.std"), [`mean`](numpy.mean#numpy.mean "numpy.mean"), [`nanmean`](numpy.nanmean#numpy.nanmean "numpy.nanmean"), [`nanstd`](numpy.nanstd#numpy.nanstd "numpy.nanstd"), [`nanvar`](numpy.nanvar#numpy.nanvar "numpy.nanvar") [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) #### Notes There are several common variants of the array variance calculation. Assuming the input `a` is a one-dimensional NumPy array and `mean` is either provided as an argument or computed as `a.mean()`, NumPy computes the variance of an array as: N = len(a) d2 = abs(a - mean)**2 # abs is for complex `a` var = d2.sum() / (N - ddof) # note use of `ddof` Different values of the argument `ddof` are useful in different contexts. NumPy’s default `ddof=0` corresponds with the expression: \\[\frac{\sum_i{|a_i - \bar{a}|^2 }}{N}\\] which is sometimes called the “population variance” in the field of statistics because it applies the definition of variance to `a` as if `a` were a complete population of possible observations. Many other libraries define the variance of an array differently, e.g.: \\[\frac{\sum_i{|a_i - \bar{a}|^2}}{N - 1}\\] In statistics, the resulting quantity is sometimes called the “sample variance” because if `a` is a random sample from a larger population, this calculation provides an unbiased estimate of the variance of the population. The use of \\(N-1\\) in the denominator is often called “Bessel’s correction” because it corrects for bias (toward lower values) in the variance estimate introduced when the sample mean of `a` is used in place of the true mean of the population. For this quantity, use `ddof=1`. Note that for complex numbers, the absolute value is taken before squaring, so that the result is always real and nonnegative. For floating-point input, the variance is computed using the same precision the input has. 
Depending on the input data, this can cause the results to be inaccurate, especially for [`float32`](../arrays.scalars#numpy.float32 "numpy.float32") (see example below). Specifying a higher-accuracy accumulator using the `dtype` keyword can alleviate this issue. #### Examples >>> import numpy as np >>> a = np.array([[1, 2], [3, 4]]) >>> np.var(a) 1.25 >>> np.var(a, axis=0) array([1., 1.]) >>> np.var(a, axis=1) array([0.25, 0.25]) In single precision, var() can be inaccurate: >>> a = np.zeros((2, 512*512), dtype=np.float32) >>> a[0, :] = 1.0 >>> a[1, :] = 0.1 >>> np.var(a) np.float32(0.20250003) Computing the variance in float64 is more accurate: >>> np.var(a, dtype=np.float64) 0.20249999932944759 # may vary >>> ((1-0.55)**2 + (0.1-0.55)**2)/2 0.2025 Specifying a where argument: >>> a = np.array([[14, 8, 11, 10], [7, 9, 10, 11], [10, 15, 5, 10]]) >>> np.var(a) 6.833333333333333 # may vary >>> np.var(a, where=[[True], [True], [False]]) 4.0 Using the mean keyword to save computation time: >>> import numpy as np >>> from timeit import timeit >>> >>> a = np.array([[14, 8, 11, 10], [7, 9, 10, 11], [10, 15, 5, 10]]) >>> mean = np.mean(a, axis=1, keepdims=True) >>> >>> g = globals() >>> n = 10000 >>> t1 = timeit("var = np.var(a, axis=1, mean=mean)", globals=g, number=n) >>> t2 = timeit("var = np.var(a, axis=1)", globals=g, number=n) >>> print(f'Percentage execution time saved {100*(t2-t1)/t2:.0f}%') Percentage execution time saved 32% # numpy.vdot numpy.vdot(_a_ , _b_ , _/_) Return the dot product of two vectors. The `vdot` function handles complex numbers differently than [`dot`](numpy.dot#numpy.dot "numpy.dot"): if the first argument is complex, it is replaced by its complex conjugate in the dot product calculation. `vdot` also handles multidimensional arrays differently than [`dot`](numpy.dot#numpy.dot "numpy.dot"): it does not perform a matrix product, but flattens the arguments to 1-D arrays before taking a vector dot product. 
Consequently, when the arguments are 2-D arrays of the same shape, this function effectively returns their [Frobenius inner product](https://en.wikipedia.org/wiki/Frobenius_inner_product) (also known as the _trace inner product_ or the _standard inner product_ on a vector space of matrices). Parameters: **a** array_like If `a` is complex the complex conjugate is taken before calculation of the dot product. **b** array_like Second argument to the dot product. Returns: **output** ndarray Dot product of `a` and `b`. Can be an int, float, or complex depending on the types of `a` and `b`. See also [`dot`](numpy.dot#numpy.dot "numpy.dot") Return the dot product without using the complex conjugate of the first argument. #### Examples >>> import numpy as np >>> a = np.array([1+2j,3+4j]) >>> b = np.array([5+6j,7+8j]) >>> np.vdot(a, b) (70-8j) >>> np.vdot(b, a) (70+8j) Note that higher-dimensional arrays are flattened! >>> a = np.array([[1, 4], [5, 6]]) >>> b = np.array([[4, 1], [2, 2]]) >>> np.vdot(a, b) 30 >>> np.vdot(b, a) 30 >>> 1*4 + 4*1 + 5*2 + 6*2 30 # numpy.vecdot numpy.vecdot(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_ , _axes_ , _axis_]) _= <ufunc 'vecdot'>_ Vector dot product of two arrays. Let \\(\mathbf{a}\\) be a vector in `x1` and \\(\mathbf{b}\\) be a corresponding vector in `x2`. The dot product is defined as: \\[\mathbf{a} \cdot \mathbf{b} = \sum_{i=0}^{n-1} \overline{a_i}b_i\\] where the sum is over the last dimension (unless `axis` is specified) and where \\(\overline{a_i}\\) denotes the complex conjugate if \\(a_i\\) is complex and the identity otherwise. New in version 2.0.0. Parameters: **x1, x2** array_like Input arrays, scalars not allowed. **out** ndarray, optional A location into which the result is stored. If provided, it must have the broadcasted shape of `x1` and `x2` with the last axis removed. If not provided or None, a freshly-allocated array is used.
****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns: **y** ndarray The vector dot product of the inputs. This is a scalar only when both x1, x2 are 1-d vectors. Raises: ValueError If the last dimension of `x1` is not the same size as the last dimension of `x2`. If a scalar value is passed in. See also [`vdot`](numpy.vdot#numpy.vdot "numpy.vdot") same but flattens arguments first [`matmul`](numpy.matmul#numpy.matmul "numpy.matmul") Matrix-matrix product. [`vecmat`](numpy.vecmat#numpy.vecmat "numpy.vecmat") Vector-matrix product. [`matvec`](numpy.matvec#numpy.matvec "numpy.matvec") Matrix-vector product. [`einsum`](numpy.einsum#numpy.einsum "numpy.einsum") Einstein summation convention. #### Examples >>> import numpy as np Get the projected size along a given normal for an array of vectors. >>> v = np.array([[0., 5., 0.], [0., 0., 10.], [0., 6., 8.]]) >>> n = np.array([0., 0.6, 0.8]) >>> np.vecdot(v, n) array([ 3., 8., 10.]) # numpy.vecmat numpy.vecmat(_x1_ , _x2_ , _/_ , _out=None_ , _*_ , _casting='same_kind'_ , _order='K'_ , _dtype=None_ , _subok=True_[, _signature_ , _axes_ , _axis_]) _= <ufunc 'vecmat'>_ Vector-matrix dot product of two arrays. Given a vector (or stack of vectors) \\(\mathbf{v}\\) in `x1` and a matrix (or stack of matrices) \\(\mathbf{A}\\) in `x2`, the vector-matrix product is defined as: \\[(\mathbf{v} \cdot \mathbf{A})_j = \sum_{i=0}^{n-1} \overline{v_i}A_{ij}\\] where the sum is over the last dimension of `x1` and the one-but-last dimension of `x2` (unless `axes` is specified) and where \\(\overline{v_i}\\) denotes the complex conjugate if \\(v\\) is complex and the identity otherwise. (For a non-conjugated vector-matrix product, use `np.matvec(x2.mT, x1)`.) New in version 2.2.0. Parameters: **x1, x2** array_like Input arrays, scalars not allowed. **out** ndarray, optional A location into which the result is stored. If provided, it must have the broadcasted shape of `x1` and `x2` with the summation axis removed.
If not provided or None, a freshly-allocated array is used. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs- kwargs). Returns: **y** ndarray The vector-matrix product of the inputs. Raises: ValueError If the last dimensions of `x1` and the one-but-last dimension of `x2` are not the same size. If a scalar value is passed in. See also [`vecdot`](numpy.vecdot#numpy.vecdot "numpy.vecdot") Vector-vector product. [`matvec`](numpy.matvec#numpy.matvec "numpy.matvec") Matrix-vector product. [`matmul`](numpy.matmul#numpy.matmul "numpy.matmul") Matrix-matrix product. [`einsum`](numpy.einsum#numpy.einsum "numpy.einsum") Einstein summation convention. #### Examples Project a vector along X and Y. >>> v = np.array([0., 4., 2.]) >>> a = np.array([[1., 0., 0.], ... [0., 1., 0.], ... [0., 0., 0.]]) >>> np.vecmat(v, a) array([ 0., 4., 0.]) # numpy.vectorize.__call__ method vectorize.__call__(_* args_, _** kwargs_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_function_base_impl.py#L2517-L2522) Call self as a function. # numpy.vectorize _class_ numpy.vectorize(_pyfunc =np._NoValue_, _otypes =None_, _doc =None_, _excluded =None_, _cache =False_, _signature =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/__init__.py) Returns an object that acts like pyfunc, but takes arrays as input. Define a vectorized function which takes a nested sequence of objects or numpy arrays as inputs and returns a single numpy array or a tuple of numpy arrays. The vectorized function evaluates `pyfunc` over successive tuples of the input arrays like the python map function, except it uses the broadcasting rules of numpy. The data type of the output of `vectorized` is determined by calling the function with the first element of the input. This can be avoided by specifying the `otypes` argument. Parameters: **pyfunc** callable, optional A python function or method. Can be omitted to produce a decorator with keyword arguments. 
**otypes** str or list of dtypes, optional The output data type. It must be specified as either a string of typecode characters or a list of data type specifiers. There should be one data type specifier for each output. **doc** str, optional The docstring for the function. If None, the docstring will be the `pyfunc.__doc__`. **excluded** set, optional Set of strings or integers representing the positional or keyword arguments for which the function will not be vectorized. These will be passed directly to `pyfunc` unmodified. **cache** bool, optional If `True`, then cache the first function call that determines the number of outputs if `otypes` is not provided. **signature** string, optional Generalized universal function signature, e.g., `(m,n),(n)->(m)` for vectorized matrix-vector multiplication. If provided, `pyfunc` will be called with (and expected to return) arrays with shapes given by the size of corresponding core dimensions. By default, `pyfunc` is assumed to take scalars as input and output. Returns: **out** callable A vectorized function if `pyfunc` was provided, a decorator otherwise. See also [`frompyfunc`](numpy.frompyfunc#numpy.frompyfunc "numpy.frompyfunc") Takes an arbitrary Python function and returns a ufunc #### Notes The `vectorize` function is provided primarily for convenience, not for performance. The implementation is essentially a for loop. If `otypes` is not specified, then a call to the function with the first argument will be used to determine the number of outputs. The results of this call will be cached if `cache` is `True` to prevent calling the function twice. However, to implement the cache, the original function must be wrapped which will slow down subsequent calls, so only do this if your function is expensive. The new keyword argument interface and `excluded` argument support further degrades performance. 
#### References [1] [Generalized universal function API](../c-api/generalized-ufuncs) #### Examples >>> import numpy as np >>> def myfunc(a, b): ... "Return a-b if a>b, otherwise return a+b" ... if a > b: ... return a - b ... else: ... return a + b >>> vfunc = np.vectorize(myfunc) >>> vfunc([1, 2, 3, 4], 2) array([3, 4, 1, 2]) The docstring is taken from the input function to `vectorize` unless it is specified: >>> vfunc.__doc__ 'Return a-b if a>b, otherwise return a+b' >>> vfunc = np.vectorize(myfunc, doc='Vectorized `myfunc`') >>> vfunc.__doc__ 'Vectorized `myfunc`' The output type is determined by evaluating the first element of the input, unless it is specified: >>> out = vfunc([1, 2, 3, 4], 2) >>> type(out[0]) <class 'numpy.int64'> >>> vfunc = np.vectorize(myfunc, otypes=[float]) >>> out = vfunc([1, 2, 3, 4], 2) >>> type(out[0]) <class 'numpy.float64'> The `excluded` argument can be used to prevent vectorizing over certain arguments. This can be useful for array-like arguments of a fixed length such as the coefficients for a polynomial as in [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval"): >>> def mypolyval(p, x): ... _p = list(p) ... res = _p.pop(0) ... while _p: ... res = res*x + _p.pop(0) ... return res Here, we exclude the zeroth argument from vectorization whether it is passed by position or keyword. >>> vpolyval = np.vectorize(mypolyval, excluded={0, 'p'}) >>> vpolyval([1, 2, 3], x=[0, 1]) array([3, 6]) >>> vpolyval(p=[1, 2, 3], x=[0, 1]) array([3, 6]) The `signature` argument allows for vectorizing functions that act on non-scalar arrays of fixed length. For example, you can use it for a vectorized calculation of Pearson correlation coefficient and its p-value: >>> import scipy.stats >>> pearsonr = np.vectorize(scipy.stats.pearsonr, ...
signature='(n),(n)->(),()') >>> pearsonr([[0, 1, 2, 3]], [[1, 2, 3, 4], [4, 3, 2, 1]]) (array([ 1., -1.]), array([ 0., 0.])) Or for a vectorized convolution: >>> convolve = np.vectorize(np.convolve, signature='(n),(m)->(k)') >>> convolve(np.eye(4), [1, 2, 1]) array([[1., 2., 1., 0., 0., 0.], [0., 1., 2., 1., 0., 0.], [0., 0., 1., 2., 1., 0.], [0., 0., 0., 1., 2., 1.]]) Decorator syntax is supported. The decorator can be called as a function to provide keyword arguments: >>> @np.vectorize ... def identity(x): ... return x ... >>> identity([0, 1, 2]) array([0, 1, 2]) >>> @np.vectorize(otypes=[float]) ... def as_float(x): ... return x ... >>> as_float([0, 1, 2]) array([0., 1., 2.]) #### Methods [`__call__`](numpy.vectorize.__call__#numpy.vectorize.__call__ "numpy.vectorize.__call__")(*args, **kwargs) | Call self as a function. ---|--- # numpy.vsplit numpy.vsplit(_ary_ , _indices_or_sections_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/_shape_base_impl.py#L957-L1008) Split an array into multiple sub-arrays vertically (row-wise). Please refer to the `split` documentation. `vsplit` is equivalent to `split` with `axis=0` (default), the array is always split along the first axis regardless of the array dimension. See also [`split`](numpy.split#numpy.split "numpy.split") Split an array into multiple sub-arrays of equal size. #### Examples >>> import numpy as np >>> x = np.arange(16.0).reshape(4, 4) >>> x array([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.], [12., 13., 14., 15.]]) >>> np.vsplit(x, 2) [array([[0., 1., 2., 3.], [4., 5., 6., 7.]]), array([[ 8., 9., 10., 11.], [12., 13., 14., 15.]])] >>> np.vsplit(x, np.array([3, 6])) [array([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.]]), array([[12., 13., 14., 15.]]), array([], shape=(0, 4), dtype=float64)] With a higher dimensional array the split is still along the first axis. 
>>> x = np.arange(8.0).reshape(2, 2, 2) >>> x array([[[0., 1.], [2., 3.]], [[4., 5.], [6., 7.]]]) >>> np.vsplit(x, 2) [array([[[0., 1.], [2., 3.]]]), array([[[4., 5.], [6., 7.]]])] # numpy.vstack numpy.vstack(_tup_ , _*_ , _dtype =None_, _casting ='same_kind'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/_core/shape_base.py#L220-L292) Stack arrays in sequence vertically (row wise). This is equivalent to concatenation along the first axis after 1-D arrays of shape `(N,)` have been reshaped to `(1,N)`. Rebuilds arrays divided by [`vsplit`](numpy.vsplit#numpy.vsplit "numpy.vsplit"). This function makes most sense for arrays with up to 3 dimensions. For instance, for pixel-data with a height (first axis), width (second axis), and r/g/b channels (third axis). The functions [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate"), [`stack`](numpy.stack#numpy.stack "numpy.stack") and [`block`](numpy.block#numpy.block "numpy.block") provide more general stacking and concatenation operations. Parameters: **tup** sequence of ndarrays The arrays must have the same shape along all but the first axis. 1-D arrays must have the same length. In the case of a single array_like input, it will be treated as a sequence of arrays; i.e., each element along the zeroth axis is treated as a separate array. **dtype** str or dtype If provided, the destination array will have this dtype. Cannot be provided together with `out`. New in version 1.24. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur. Defaults to ‘same_kind’. New in version 1.24. Returns: **stacked** ndarray The array formed by stacking the given arrays, will be at least 2-D. See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`stack`](numpy.stack#numpy.stack "numpy.stack") Join a sequence of arrays along a new axis. 
[`block`](numpy.block#numpy.block "numpy.block") Assemble an nd-array from nested lists of blocks. [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack") Stack arrays in sequence horizontally (column wise). [`dstack`](numpy.dstack#numpy.dstack "numpy.dstack") Stack arrays in sequence depth wise (along third axis). [`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. [`vsplit`](numpy.vsplit#numpy.vsplit "numpy.vsplit") Split an array into multiple sub-arrays vertically (row-wise). [`unstack`](numpy.unstack#numpy.unstack "numpy.unstack") Split an array into a tuple of sub-arrays along an axis. #### Examples >>> import numpy as np >>> a = np.array([1, 2, 3]) >>> b = np.array([4, 5, 6]) >>> np.vstack((a,b)) array([[1, 2, 3], [4, 5, 6]]) >>> a = np.array([[1], [2], [3]]) >>> b = np.array([[4], [5], [6]]) >>> np.vstack((a,b)) array([[1], [2], [3], [4], [5], [6]]) # numpy.where numpy.where(_condition_ , [_x_ , _y_ , ]_/_) Return elements chosen from `x` or `y` depending on `condition`. Note When only `condition` is provided, this function is a shorthand for `np.asarray(condition).nonzero()`. Using [`nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") directly should be preferred, as it behaves correctly for subclasses. The rest of this documentation covers only the case where all three arguments are provided. Parameters: **condition** array_like, bool Where True, yield `x`, otherwise yield `y`. **x, y** array_like Values from which to choose. `x`, `y` and `condition` need to be broadcastable to some shape. Returns: **out** ndarray An array with elements from `x` where `condition` is True, and elements from `y` elsewhere. 
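The single-argument shorthand mentioned in the note above can be compared with `nonzero` directly; a small sketch:

```python
import numpy as np

cond = np.array([[True, False],
                 [False, True]])

# One argument: shorthand for np.asarray(cond).nonzero()
print(np.where(cond))              # (array([0, 1]), array([0, 1]))
print(np.asarray(cond).nonzero())  # same tuple of index arrays

# Three arguments: elementwise selection between x and y
print(np.where(cond, 1, -1))       # [[ 1 -1]
                                   #  [-1  1]]
```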
See also [`choose`](numpy.choose#numpy.choose "numpy.choose") [`nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") The function that is called when x and y are omitted #### Notes If all the arrays are 1-D, `where` is equivalent to: [xv if c else yv for c, xv, yv in zip(condition, x, y)] #### Examples >>> import numpy as np >>> a = np.arange(10) >>> a array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.where(a < 5, a, 10*a) array([ 0, 1, 2, 3, 4, 50, 60, 70, 80, 90]) This can be used on multidimensional arrays too: >>> np.where([[True, False], [True, True]], ... [[1, 2], [3, 4]], ... [[9, 8], [7, 6]]) array([[1, 8], [3, 4]]) The shapes of x, y, and the condition are broadcast together: >>> x, y = np.ogrid[:3, :4] >>> np.where(x < y, x, 10 + y) # both x and 10+y are broadcast array([[10, 0, 0, 0], [10, 11, 1, 1], [10, 11, 12, 2]]) >>> a = np.array([[0, 1, 2], ... [0, 2, 4], ... [0, 3, 6]]) >>> np.where(a < 4, a, -1) # -1 is broadcast array([[ 0, 1, 2], [ 0, 2, -1], [ 0, 3, -1]]) # numpy.zeros numpy.zeros(_shape_ , _dtype =float_, _order ='C'_, _*_ , _like =None_) Return a new array of given shape and type, filled with zeros. Parameters: **shape** int or tuple of ints Shape of the new array, e.g., `(2, 3)` or `2`. **dtype** data-type, optional The desired data-type for the array, e.g., [`numpy.int8`](../arrays.scalars#numpy.int8 "numpy.int8"). Default is [`numpy.float64`](../arrays.scalars#numpy.float64 "numpy.float64"). **order**{‘C’, ‘F’}, optional, default: ‘C’ Whether to store multi-dimensional data in row-major (C-style) or column-major (Fortran-style) order in memory. **like** array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. 
Returns: **out** ndarray Array of zeros with the given shape, dtype, and order. See also [`zeros_like`](numpy.zeros_like#numpy.zeros_like "numpy.zeros_like") Return an array of zeros with shape and type of input. [`empty`](numpy.empty#numpy.empty "numpy.empty") Return a new uninitialized array. [`ones`](numpy.ones#numpy.ones "numpy.ones") Return a new array setting values to one. [`full`](numpy.full#numpy.full "numpy.full") Return a new array of given shape filled with value. #### Examples >>> import numpy as np >>> np.zeros(5) array([ 0., 0., 0., 0., 0.]) >>> np.zeros((5,), dtype=int) array([0, 0, 0, 0, 0]) >>> np.zeros((2, 1)) array([[ 0.], [ 0.]]) >>> s = (2,2) >>> np.zeros(s) array([[ 0., 0.], [ 0., 0.]]) >>> np.zeros((2,), dtype=[('x', 'i4'), ('y', 'i4')]) # custom dtype array([(0, 0), (0, 0)], dtype=[('x', '<i4'), ('y', '<i4')]) # numpy.zeros_like numpy.zeros_like(_a_ , _dtype =None_, _order ='K'_, _subok =True_, _shape =None_, _*_ , _device =None_) Return an array of zeros with the same shape and type as a given array. #### Examples >>> import numpy as np >>> x = np.arange(6) >>> x = x.reshape((2, 3)) >>> x array([[0, 1, 2], [3, 4, 5]]) >>> np.zeros_like(x) array([[0, 0, 0], [0, 0, 0]]) >>> y = np.arange(3, dtype=float) >>> y array([0., 1., 2.]) >>> np.zeros_like(y) array([0., 0., 0.]) # Global Configuration Options NumPy has a few import-time, compile-time, or runtime configuration options which change the global behaviour. Most of these are related to performance or debugging and will not be interesting to the vast majority of users. ## Performance-related options ### Number of threads used for linear algebra NumPy itself is normally intentionally limited to a single thread during function calls; however, it does support multiple Python threads running at the same time. Note that for performant linear algebra NumPy uses a BLAS backend such as OpenBLAS or MKL, which may use multiple threads that may be controlled by environment variables such as `OMP_NUM_THREADS`, depending on which backend is used.
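As a minimal sketch (which variable applies depends on the BLAS backend in use; `OMP_NUM_THREADS` is only one of the common ones), the BLAS thread count can be capped by setting the environment variable before NumPy, and hence the BLAS library, is first imported:

```python
import os

# The BLAS backend reads these variables when it is loaded, so set them
# before the first `import numpy`. Which variable applies depends on the
# backend (OMP_NUM_THREADS, OPENBLAS_NUM_THREADS, MKL_NUM_THREADS, ...).
os.environ.setdefault("OMP_NUM_THREADS", "1")

import numpy as np

a = np.random.rand(200, 200)
b = a @ a  # this matrix product now uses at most one BLAS thread
```

Changing these variables after NumPy has been imported generally has no effect, which is why runtime control is usually done with threadpoolctl instead.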
One way to control the number of threads at runtime is the [threadpoolctl](https://pypi.org/project/threadpoolctl/) package. ### madvise hugepage on Linux When working with very large arrays on modern Linux kernels, you can experience a significant speedup when [transparent hugepage](https://www.kernel.org/doc/html/latest/admin-guide/mm/transhuge.html) is used. The current system policy for transparent hugepages can be seen by: cat /sys/kernel/mm/transparent_hugepage/enabled When set to `madvise`, NumPy will typically use hugepages for a performance boost. This behaviour can be modified by setting the environment variable: NUMPY_MADVISE_HUGEPAGE=0 or setting it to `1` to always enable it. When not set, the default is to use madvise on kernels 4.6 and newer, which presumably experience a large speedup with hugepage support. This flag is checked at import time. ### SIMD feature selection Setting `NPY_DISABLE_CPU_FEATURES` will exclude the listed SIMD features at runtime. See [Runtime dispatch](simd/build-options#runtime-simd-dispatch). ## Debugging-related options ### Warn if no memory allocation policy when deallocating data Some users might pass ownership of the data pointer to the `ndarray` by setting the `OWNDATA` flag. If they do this without manually setting a memory allocation policy, the default will be to call `free`. If `NUMPY_WARN_IF_NO_MEM_POLICY` is set to `"1"`, a `RuntimeWarning` will be emitted. A better alternative is to use a `PyCapsule` with a deallocator and set it as the `ndarray.base`. Changed in version 1.25.2: This variable is only checked on the first import. # NumPy reference Release: 2.2 Date: December 14, 2024 This reference manual details functions, modules, and objects included in NumPy, describing what they are and what they do. For learning how to use NumPy, see the [complete documentation](../index#numpy-docs-mainpage).
## Python API * [NumPy’s module structure](module_structure) * [Array objects](arrays) * [The N-dimensional array (`ndarray`)](arrays.ndarray) * [Scalars](arrays.scalars) * [Data type objects (`dtype`)](arrays.dtypes) * [Data type promotion in NumPy](arrays.promotion) * [Iterating over arrays](arrays.nditer) * [Standard array subclasses](arrays.classes) * [Masked arrays](maskedarray) * [The array interface protocol](arrays.interface) * [Datetimes and timedeltas](arrays.datetime) * [Universal functions (`ufunc`)](ufuncs) * [Routines and objects by topic](routines) * [Constants](constants) * [Array creation routines](routines.array-creation) * [Array manipulation routines](routines.array-manipulation) * [Bit-wise operations](routines.bitwise) * [String functionality](routines.strings) * [Datetime support functions](routines.datetime) * [Data type routines](routines.dtype) * [Mathematical functions with automatic domain](routines.emath) * [Floating point error handling](routines.err) * [Exceptions and Warnings (`numpy.exceptions`)](routines.exceptions) * [Discrete Fourier Transform (`numpy.fft`)](routines.fft) * [Functional programming](routines.functional) * [Input and output](routines.io) * [Indexing routines](routines.indexing) * [Linear algebra (`numpy.linalg`)](routines.linalg) * [Logic functions](routines.logic) * [Masked array operations](routines.ma) * [Mathematical functions](routines.math) * [Miscellaneous routines](routines.other) * [Polynomials](routines.polynomials) * [Random sampling (`numpy.random`)](random/index) * [Set routines](routines.set) * [Sorting, searching, and counting](routines.sort) * [Statistics](routines.statistics) * [Test support (`numpy.testing`)](routines.testing) * [Window functions](routines.window) * [Typing (`numpy.typing`)](typing) * [Packaging (`numpy.distutils`)](distutils) ## C API * [NumPy C-API](c-api/index) * [Python types and C-structures](c-api/types-and-structures) * [System configuration](c-api/config) * [Data type 
API](c-api/dtype) * [Array API](c-api/array) * [Array iterator API](c-api/iterator) * [ufunc API](c-api/ufunc) * [Generalized universal function API](c-api/generalized-ufuncs) * [NpyString API](c-api/strings) * [NumPy core math library](c-api/coremath) * [Datetime API](c-api/datetimes) * [C API deprecations](c-api/deprecations) * [Memory management in NumPy](c-api/data_memory) ## Other topics * [Array API standard compatibility](array_api) * [CPU/SIMD optimizations](simd/index) * [Thread Safety](thread_safety) * [Global Configuration Options](global_state) * [NumPy security](security) * [Status of `numpy.distutils` and migration advice](distutils_status_migration) * [`numpy.distutils` user guide](distutils_guide) * [NumPy and SWIG](swig) ## Acknowledgements Large parts of this manual originate from Travis E. Oliphant’s book [Guide to NumPy](https://archive.org/details/NumPyBook) (which generously entered the public domain in August 2008). The reference documentation for many of the functions is written by numerous contributors and developers of NumPy. # Constants of the numpy.ma module In addition to the `MaskedArray` class, the [`numpy.ma`](maskedarray.generic#module-numpy.ma "numpy.ma") module defines several constants. numpy.ma.masked The `masked` constant is a special case of `MaskedArray`, with a float datatype and a null shape. It is used to test whether a specific entry of a masked array is masked, or to mask one or several entries of a masked array: >>> import numpy as np >>> x = np.ma.array([1, 2, 3], mask=[0, 1, 0]) >>> x[1] is np.ma.masked True >>> x[-1] = np.ma.masked >>> x masked_array(data=[1, --, --], mask=[False, True, True], fill_value=999999) numpy.ma.nomask Value indicating that a masked array has no invalid entry. `nomask` is used internally to speed up computations when the mask is not needed. It is represented internally as `np.False_`. numpy.ma.masked_print_option String used in lieu of missing data when a masked array is printed.
By default, this string is `'--'`. Use `set_display()` to change the default string. Example usage: `numpy.ma.masked_print_option.set_display('X')` replaces missing data with `'X'`. # The MaskedArray class _class_ numpy.ma.MaskedArray[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ma/__init__.py) A subclass of [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") designed to manipulate numerical arrays with missing data. An instance of `MaskedArray` can be thought of as the combination of several elements: * The `data`, as a regular [`numpy.ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") of any shape or datatype. * A boolean `mask` with the same shape as the data, where a `True` value indicates that the corresponding element of the data is invalid. The special value `nomask` is also acceptable for arrays without named fields, and indicates that no data is invalid. * A `fill_value`, a value that may be used to replace the invalid entries in order to return a standard [`numpy.ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"). ## Attributes and properties of masked arrays See also [Array Attributes](arrays.ndarray#arrays-ndarray-attributes) ma.MaskedArray.data Returns the underlying data, as a view of the masked array. If the underlying data is a subclass of [`numpy.ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"), it is returned as such. >>> x = np.ma.array(np.matrix([[1, 2], [3, 4]]), mask=[[0, 1], [1, 0]]) >>> x.data matrix([[1, 2], [3, 4]]) The type of the data can be accessed through the `baseclass` attribute. ma.MaskedArray.mask Current mask. ma.MaskedArray.recordmask Get or set the mask of the array if it has no named fields. For structured arrays, returns an ndarray of booleans where entries are `True` if **all** the fields are masked, `False` otherwise: >>> x = np.ma.array([(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)], ... mask=[(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)], ... 
dtype=[('a', int), ('b', int)]) >>> x.recordmask array([False, False, True, False, False]) ma.MaskedArray.fill_value The filling value of the masked array is a scalar. When set to None, the fill value is reset to a default based on the data type. #### Examples >>> import numpy as np >>> for dt in [np.int32, np.int64, np.float64, np.complex128]: ... np.ma.array([0, 1], dtype=dt).get_fill_value() ... np.int64(999999) np.int64(999999) np.float64(1e+20) np.complex128(1e+20+0j) >>> x = np.ma.array([0, 1.], fill_value=-np.inf) >>> x.fill_value np.float64(-inf) >>> x.fill_value = np.pi >>> x.fill_value np.float64(3.1415926535897931) Reset to default: >>> x.fill_value = None >>> x.fill_value np.float64(1e+20) ma.MaskedArray.baseclass Class of the underlying data (read-only). ma.MaskedArray.sharedmask Share status of the mask (read-only). ma.MaskedArray.hardmask Specifies whether values can be unmasked through assignments. By default, assigning definite values to masked array entries will unmask them. When `hardmask` is `True`, the mask will not change through assignments.
See also [`ma.MaskedArray.harden_mask`](generated/numpy.ma.maskedarray.harden_mask#numpy.ma.MaskedArray.harden_mask "numpy.ma.MaskedArray.harden_mask") [`ma.MaskedArray.soften_mask`](generated/numpy.ma.maskedarray.soften_mask#numpy.ma.MaskedArray.soften_mask "numpy.ma.MaskedArray.soften_mask") #### Examples >>> import numpy as np >>> x = np.arange(10) >>> m = np.ma.masked_array(x, x>5) >>> assert not m.hardmask Since `m` has a soft mask, assigning an element value unmasks that element: >>> m[8] = 42 >>> m masked_array(data=[0, 1, 2, 3, 4, 5, --, --, 42, --], mask=[False, False, False, False, False, False, True, True, False, True], fill_value=999999) After hardening, the mask is not affected by assignments: >>> hardened = np.ma.harden_mask(m) >>> assert m.hardmask and hardened is m >>> m[:] = 23 >>> m masked_array(data=[23, 23, 23, 23, 23, 23, --, --, 23, --], mask=[False, False, False, False, False, False, True, True, False, True], fill_value=999999) As `MaskedArray` is a subclass of [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"), a masked array also inherits all the attributes and properties of a [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") instance. [`MaskedArray.base`](generated/numpy.ma.maskedarray.base#numpy.ma.MaskedArray.base "numpy.ma.MaskedArray.base") | Base object if memory is from some other object. ---|--- [`MaskedArray.ctypes`](generated/numpy.ma.maskedarray.ctypes#numpy.ma.MaskedArray.ctypes "numpy.ma.MaskedArray.ctypes") | An object to simplify the interaction of the array with the ctypes module. [`MaskedArray.dtype`](generated/numpy.ma.maskedarray.dtype#numpy.ma.MaskedArray.dtype "numpy.ma.MaskedArray.dtype") | Data-type of the array's elements. [`MaskedArray.flags`](generated/numpy.ma.maskedarray.flags#numpy.ma.MaskedArray.flags "numpy.ma.MaskedArray.flags") | Information about the memory layout of the array. 
[`MaskedArray.itemsize`](generated/numpy.ma.maskedarray.itemsize#numpy.ma.MaskedArray.itemsize "numpy.ma.MaskedArray.itemsize") | Length of one array element in bytes. [`MaskedArray.nbytes`](generated/numpy.ma.maskedarray.nbytes#numpy.ma.MaskedArray.nbytes "numpy.ma.MaskedArray.nbytes") | Total bytes consumed by the elements of the array. [`MaskedArray.ndim`](generated/numpy.ma.maskedarray.ndim#numpy.ma.MaskedArray.ndim "numpy.ma.MaskedArray.ndim") | Number of array dimensions. [`MaskedArray.shape`](generated/numpy.ma.maskedarray.shape#numpy.ma.MaskedArray.shape "numpy.ma.MaskedArray.shape") | Tuple of array dimensions. [`MaskedArray.size`](generated/numpy.ma.maskedarray.size#numpy.ma.MaskedArray.size "numpy.ma.MaskedArray.size") | Number of elements in the array. [`MaskedArray.strides`](generated/numpy.ma.maskedarray.strides#numpy.ma.MaskedArray.strides "numpy.ma.MaskedArray.strides") | Tuple of bytes to step in each dimension when traversing an array. [`MaskedArray.imag`](generated/numpy.ma.maskedarray.imag#numpy.ma.MaskedArray.imag "numpy.ma.MaskedArray.imag") | The imaginary part of the masked array. [`MaskedArray.real`](generated/numpy.ma.maskedarray.real#numpy.ma.MaskedArray.real "numpy.ma.MaskedArray.real") | The real part of the masked array. [`MaskedArray.flat`](generated/numpy.ma.maskedarray.flat#numpy.ma.MaskedArray.flat "numpy.ma.MaskedArray.flat") | Return a flat iterator, or set a flattened version of self to value. [`MaskedArray.__array_priority__`](generated/numpy.ma.maskedarray.__array_priority__#numpy.ma.MaskedArray.__array_priority__ "numpy.ma.MaskedArray.__array_priority__") | # MaskedArray methods See also [Array methods](arrays.ndarray#array-ndarray-methods) ## Conversion [`MaskedArray.__float__`](generated/numpy.ma.maskedarray.__float__#numpy.ma.MaskedArray.__float__ "numpy.ma.MaskedArray.__float__")() | Convert to float. 
---|--- [`MaskedArray.__int__`](generated/numpy.ma.maskedarray.__int__#numpy.ma.MaskedArray.__int__ "numpy.ma.MaskedArray.__int__")() | Convert to int. [`MaskedArray.view`](generated/numpy.ma.maskedarray.view#numpy.ma.MaskedArray.view "numpy.ma.MaskedArray.view")([dtype, type, fill_value]) | Return a view of the MaskedArray data. [`MaskedArray.astype`](generated/numpy.ma.maskedarray.astype#numpy.ma.MaskedArray.astype "numpy.ma.MaskedArray.astype")(dtype[, order, casting, ...]) | Copy of the array, cast to a specified type. [`MaskedArray.byteswap`](generated/numpy.ma.maskedarray.byteswap#numpy.ma.MaskedArray.byteswap "numpy.ma.MaskedArray.byteswap")([inplace]) | Swap the bytes of the array elements [`MaskedArray.compressed`](generated/numpy.ma.maskedarray.compressed#numpy.ma.MaskedArray.compressed "numpy.ma.MaskedArray.compressed")() | Return all the non-masked data as a 1-D array. [`MaskedArray.filled`](generated/numpy.ma.maskedarray.filled#numpy.ma.MaskedArray.filled "numpy.ma.MaskedArray.filled")([fill_value]) | Return a copy of self, with masked values filled with a given value. [`MaskedArray.tofile`](generated/numpy.ma.maskedarray.tofile#numpy.ma.MaskedArray.tofile "numpy.ma.MaskedArray.tofile")(fid[, sep, format]) | Save a masked array to a file in binary format. [`MaskedArray.toflex`](generated/numpy.ma.maskedarray.toflex#numpy.ma.MaskedArray.toflex "numpy.ma.MaskedArray.toflex")() | Transforms a masked array into a flexible-type array. [`MaskedArray.tolist`](generated/numpy.ma.maskedarray.tolist#numpy.ma.MaskedArray.tolist "numpy.ma.MaskedArray.tolist")([fill_value]) | Return the data portion of the masked array as a hierarchical Python list. [`MaskedArray.torecords`](generated/numpy.ma.maskedarray.torecords#numpy.ma.MaskedArray.torecords "numpy.ma.MaskedArray.torecords")() | Transforms a masked array into a flexible-type array. 
[`MaskedArray.tostring`](generated/numpy.ma.maskedarray.tostring#numpy.ma.MaskedArray.tostring "numpy.ma.MaskedArray.tostring")([fill_value, order]) | A compatibility alias for `tobytes`, with exactly the same behavior. [`MaskedArray.tobytes`](generated/numpy.ma.maskedarray.tobytes#numpy.ma.MaskedArray.tobytes "numpy.ma.MaskedArray.tobytes")([fill_value, order]) | Return the array data as a string containing the raw bytes in the array. ## Shape manipulation For reshape, resize, and transpose, the single tuple argument may be replaced with `n` integers which will be interpreted as an n-tuple. [`MaskedArray.flatten`](generated/numpy.ma.maskedarray.flatten#numpy.ma.MaskedArray.flatten "numpy.ma.MaskedArray.flatten")([order]) | Return a copy of the array collapsed into one dimension. ---|--- [`MaskedArray.ravel`](generated/numpy.ma.maskedarray.ravel#numpy.ma.MaskedArray.ravel "numpy.ma.MaskedArray.ravel")([order]) | Returns a 1D version of self, as a view. [`MaskedArray.reshape`](generated/numpy.ma.maskedarray.reshape#numpy.ma.MaskedArray.reshape "numpy.ma.MaskedArray.reshape")(*s, **kwargs) | Give a new shape to the array without changing its data. [`MaskedArray.resize`](generated/numpy.ma.maskedarray.resize#numpy.ma.MaskedArray.resize "numpy.ma.MaskedArray.resize")(newshape[, refcheck, order]) | [`MaskedArray.squeeze`](generated/numpy.ma.maskedarray.squeeze#numpy.ma.MaskedArray.squeeze "numpy.ma.MaskedArray.squeeze")([axis]) | Remove axes of length one from `a`. [`MaskedArray.swapaxes`](generated/numpy.ma.maskedarray.swapaxes#numpy.ma.MaskedArray.swapaxes "numpy.ma.MaskedArray.swapaxes")(axis1, axis2) | Return a view of the array with `axis1` and `axis2` interchanged. [`MaskedArray.transpose`](generated/numpy.ma.maskedarray.transpose#numpy.ma.MaskedArray.transpose "numpy.ma.MaskedArray.transpose")(*axes) | Returns a view of the array with axes transposed. 
[`MaskedArray.T`](generated/numpy.ma.maskedarray.t#numpy.ma.MaskedArray.T "numpy.ma.MaskedArray.T") | View of the transposed array. ## Item selection and manipulation For array methods that take an `axis` keyword, it defaults to None. If axis is None, then the array is treated as a 1-D array. Any other value for `axis` represents the dimension along which the operation should proceed. [`MaskedArray.argmax`](generated/numpy.ma.maskedarray.argmax#numpy.ma.MaskedArray.argmax "numpy.ma.MaskedArray.argmax")([axis, fill_value, out, ...]) | Returns array of indices of the maximum values along the given axis. ---|--- [`MaskedArray.argmin`](generated/numpy.ma.maskedarray.argmin#numpy.ma.MaskedArray.argmin "numpy.ma.MaskedArray.argmin")([axis, fill_value, out, ...]) | Return array of indices to the minimum values along the given axis. [`MaskedArray.argsort`](generated/numpy.ma.maskedarray.argsort#numpy.ma.MaskedArray.argsort "numpy.ma.MaskedArray.argsort")([axis, kind, order, ...]) | Return an ndarray of indices that sort the array along the specified axis. [`MaskedArray.choose`](generated/numpy.ma.maskedarray.choose#numpy.ma.MaskedArray.choose "numpy.ma.MaskedArray.choose")(choices[, out, mode]) | Use an index array to construct a new array from a set of choices. [`MaskedArray.compress`](generated/numpy.ma.maskedarray.compress#numpy.ma.MaskedArray.compress "numpy.ma.MaskedArray.compress")(condition[, axis, out]) | Return `a` where condition is `True`. [`MaskedArray.diagonal`](generated/numpy.ma.maskedarray.diagonal#numpy.ma.MaskedArray.diagonal "numpy.ma.MaskedArray.diagonal")([offset, axis1, axis2]) | Return specified diagonals. [`MaskedArray.fill`](generated/numpy.ma.maskedarray.fill#numpy.ma.MaskedArray.fill "numpy.ma.MaskedArray.fill")(value) | Fill the array with a scalar value. [`MaskedArray.item`](generated/numpy.ma.maskedarray.item#numpy.ma.MaskedArray.item "numpy.ma.MaskedArray.item")(*args) | Copy an element of an array to a standard Python scalar and return it. 
[`MaskedArray.nonzero`](generated/numpy.ma.maskedarray.nonzero#numpy.ma.MaskedArray.nonzero "numpy.ma.MaskedArray.nonzero")() | Return the indices of unmasked elements that are not zero. [`MaskedArray.put`](generated/numpy.ma.maskedarray.put#numpy.ma.MaskedArray.put "numpy.ma.MaskedArray.put")(indices, values[, mode]) | Set storage-indexed locations to corresponding values. [`MaskedArray.repeat`](generated/numpy.ma.maskedarray.repeat#numpy.ma.MaskedArray.repeat "numpy.ma.MaskedArray.repeat")(repeats[, axis]) | Repeat elements of an array. [`MaskedArray.searchsorted`](generated/numpy.ma.maskedarray.searchsorted#numpy.ma.MaskedArray.searchsorted "numpy.ma.MaskedArray.searchsorted")(v[, side, sorter]) | Find indices where elements of v should be inserted in a to maintain order. [`MaskedArray.sort`](generated/numpy.ma.maskedarray.sort#numpy.ma.MaskedArray.sort "numpy.ma.MaskedArray.sort")([axis, kind, order, ...]) | Sort the array, in-place [`MaskedArray.take`](generated/numpy.ma.maskedarray.take#numpy.ma.MaskedArray.take "numpy.ma.MaskedArray.take")(indices[, axis, out, mode]) | Take elements from a masked array along an axis. ## Pickling and copy [`MaskedArray.copy`](generated/numpy.ma.maskedarray.copy#numpy.ma.MaskedArray.copy "numpy.ma.MaskedArray.copy")([order]) | Return a copy of the array. ---|--- [`MaskedArray.dump`](generated/numpy.ma.maskedarray.dump#numpy.ma.MaskedArray.dump "numpy.ma.MaskedArray.dump")(file) | Dump a pickle of the array to the specified file. [`MaskedArray.dumps`](generated/numpy.ma.maskedarray.dumps#numpy.ma.MaskedArray.dumps "numpy.ma.MaskedArray.dumps")() | Returns the pickle of the array as a string. ## Calculations [`MaskedArray.all`](generated/numpy.ma.maskedarray.all#numpy.ma.MaskedArray.all "numpy.ma.MaskedArray.all")([axis, out, keepdims]) | Returns True if all elements evaluate to True. 
---|--- [`MaskedArray.anom`](generated/numpy.ma.maskedarray.anom#numpy.ma.MaskedArray.anom "numpy.ma.MaskedArray.anom")([axis, dtype]) | Compute the anomalies (deviations from the arithmetic mean) along the given axis. [`MaskedArray.any`](generated/numpy.ma.maskedarray.any#numpy.ma.MaskedArray.any "numpy.ma.MaskedArray.any")([axis, out, keepdims]) | Returns True if any of the elements of `a` evaluate to True. [`MaskedArray.clip`](generated/numpy.ma.maskedarray.clip#numpy.ma.MaskedArray.clip "numpy.ma.MaskedArray.clip")([min, max, out]) | Return an array whose values are limited to `[min, max]`. [`MaskedArray.conj`](generated/numpy.ma.maskedarray.conj#numpy.ma.MaskedArray.conj "numpy.ma.MaskedArray.conj")() | Complex-conjugate all elements. [`MaskedArray.conjugate`](generated/numpy.ma.maskedarray.conjugate#numpy.ma.MaskedArray.conjugate "numpy.ma.MaskedArray.conjugate")() | Return the complex conjugate, element-wise. [`MaskedArray.cumprod`](generated/numpy.ma.maskedarray.cumprod#numpy.ma.MaskedArray.cumprod "numpy.ma.MaskedArray.cumprod")([axis, dtype, out]) | Return the cumulative product of the array elements over the given axis. [`MaskedArray.cumsum`](generated/numpy.ma.maskedarray.cumsum#numpy.ma.MaskedArray.cumsum "numpy.ma.MaskedArray.cumsum")([axis, dtype, out]) | Return the cumulative sum of the array elements over the given axis. [`MaskedArray.max`](generated/numpy.ma.maskedarray.max#numpy.ma.MaskedArray.max "numpy.ma.MaskedArray.max")([axis, out, fill_value, ...]) | Return the maximum along a given axis. [`MaskedArray.mean`](generated/numpy.ma.maskedarray.mean#numpy.ma.MaskedArray.mean "numpy.ma.MaskedArray.mean")([axis, dtype, out, keepdims]) | Returns the average of the array elements along given axis. [`MaskedArray.min`](generated/numpy.ma.maskedarray.min#numpy.ma.MaskedArray.min "numpy.ma.MaskedArray.min")([axis, out, fill_value, ...]) | Return the minimum along a given axis. 
[`MaskedArray.prod`](generated/numpy.ma.maskedarray.prod#numpy.ma.MaskedArray.prod "numpy.ma.MaskedArray.prod")([axis, dtype, out, keepdims]) | Return the product of the array elements over the given axis. [`MaskedArray.product`](generated/numpy.ma.maskedarray.product#numpy.ma.MaskedArray.product "numpy.ma.MaskedArray.product")([axis, dtype, out, keepdims]) | Return the product of the array elements over the given axis. [`MaskedArray.ptp`](generated/numpy.ma.maskedarray.ptp#numpy.ma.MaskedArray.ptp "numpy.ma.MaskedArray.ptp")([axis, out, fill_value, ...]) | Return (maximum - minimum) along the given dimension (i.e. peak-to-peak value). [`MaskedArray.round`](generated/numpy.ma.maskedarray.round#numpy.ma.MaskedArray.round "numpy.ma.MaskedArray.round")([decimals, out]) | Return each element rounded to the given number of decimals. [`MaskedArray.std`](generated/numpy.ma.maskedarray.std#numpy.ma.MaskedArray.std "numpy.ma.MaskedArray.std")([axis, dtype, out, ddof, ...]) | Returns the standard deviation of the array elements along given axis. [`MaskedArray.sum`](generated/numpy.ma.maskedarray.sum#numpy.ma.MaskedArray.sum "numpy.ma.MaskedArray.sum")([axis, dtype, out, keepdims]) | Return the sum of the array elements over the given axis. [`MaskedArray.trace`](generated/numpy.ma.maskedarray.trace#numpy.ma.MaskedArray.trace "numpy.ma.MaskedArray.trace")([offset, axis1, axis2, ...]) | Return the sum along diagonals of the array. [`MaskedArray.var`](generated/numpy.ma.maskedarray.var#numpy.ma.MaskedArray.var "numpy.ma.MaskedArray.var")([axis, dtype, out, ddof, ...]) | Compute the variance along the specified axis. ## Arithmetic and comparison operations ### Comparison operators: [`MaskedArray.__lt__`](generated/numpy.ma.maskedarray.__lt__#numpy.ma.MaskedArray.__lt__ "numpy.ma.MaskedArray.__lt__")(other) | Return self<value. ---|--- [`MaskedArray.__le__`](generated/numpy.ma.maskedarray.__le__#numpy.ma.MaskedArray.__le__ "numpy.ma.MaskedArray.__le__")(other) | Return self<=value. [`MaskedArray.__gt__`](generated/numpy.ma.maskedarray.__gt__#numpy.ma.MaskedArray.__gt__ "numpy.ma.MaskedArray.__gt__")(other) | Return self>value. [`MaskedArray.__ge__`](generated/numpy.ma.maskedarray.__ge__#numpy.ma.MaskedArray.__ge__ "numpy.ma.MaskedArray.__ge__")(other) | Return self>=value.
[`MaskedArray.__eq__`](generated/numpy.ma.maskedarray.__eq__#numpy.ma.MaskedArray.__eq__ "numpy.ma.MaskedArray.__eq__")(other) | Check whether other equals self elementwise. [`MaskedArray.__ne__`](generated/numpy.ma.maskedarray.__ne__#numpy.ma.MaskedArray.__ne__ "numpy.ma.MaskedArray.__ne__")(other) | Check whether other does not equal self elementwise. ### Truth value of an array ([`bool()`](https://docs.python.org/3/library/functions.html#bool "\(in Python v3.13\)")): [`MaskedArray.__bool__`](generated/numpy.ma.maskedarray.__bool__#numpy.ma.MaskedArray.__bool__ "numpy.ma.MaskedArray.__bool__")(/) | True if self else False ---|--- ### Arithmetic: [`MaskedArray.__abs__`](generated/numpy.ma.maskedarray.__abs__#numpy.ma.MaskedArray.__abs__ "numpy.ma.MaskedArray.__abs__")(self) | ---|--- [`MaskedArray.__add__`](generated/numpy.ma.maskedarray.__add__#numpy.ma.MaskedArray.__add__ "numpy.ma.MaskedArray.__add__")(other) | Add self to other, and return a new masked array. [`MaskedArray.__radd__`](generated/numpy.ma.maskedarray.__radd__#numpy.ma.MaskedArray.__radd__ "numpy.ma.MaskedArray.__radd__")(other) | Add other to self, and return a new masked array. [`MaskedArray.__sub__`](generated/numpy.ma.maskedarray.__sub__#numpy.ma.MaskedArray.__sub__ "numpy.ma.MaskedArray.__sub__")(other) | Subtract other from self, and return a new masked array. [`MaskedArray.__rsub__`](generated/numpy.ma.maskedarray.__rsub__#numpy.ma.MaskedArray.__rsub__ "numpy.ma.MaskedArray.__rsub__")(other) | Subtract self from other, and return a new masked array. [`MaskedArray.__mul__`](generated/numpy.ma.maskedarray.__mul__#numpy.ma.MaskedArray.__mul__ "numpy.ma.MaskedArray.__mul__")(other) | Multiply self by other, and return a new masked array. [`MaskedArray.__rmul__`](generated/numpy.ma.maskedarray.__rmul__#numpy.ma.MaskedArray.__rmul__ "numpy.ma.MaskedArray.__rmul__")(other) | Multiply other by self, and return a new masked array. 
[`MaskedArray.__div__`](generated/numpy.ma.maskedarray.__div__#numpy.ma.MaskedArray.__div__ "numpy.ma.MaskedArray.__div__")(other) | Divide other into self, and return a new masked array. [`MaskedArray.__truediv__`](generated/numpy.ma.maskedarray.__truediv__#numpy.ma.MaskedArray.__truediv__ "numpy.ma.MaskedArray.__truediv__")(other) | Divide other into self, and return a new masked array. [`MaskedArray.__rtruediv__`](generated/numpy.ma.maskedarray.__rtruediv__#numpy.ma.MaskedArray.__rtruediv__ "numpy.ma.MaskedArray.__rtruediv__")(other) | Divide self into other, and return a new masked array. [`MaskedArray.__floordiv__`](generated/numpy.ma.maskedarray.__floordiv__#numpy.ma.MaskedArray.__floordiv__ "numpy.ma.MaskedArray.__floordiv__")(other) | Divide other into self, and return a new masked array. [`MaskedArray.__rfloordiv__`](generated/numpy.ma.maskedarray.__rfloordiv__#numpy.ma.MaskedArray.__rfloordiv__ "numpy.ma.MaskedArray.__rfloordiv__")(other) | Divide self into other, and return a new masked array. [`MaskedArray.__mod__`](generated/numpy.ma.maskedarray.__mod__#numpy.ma.MaskedArray.__mod__ "numpy.ma.MaskedArray.__mod__")(value, /) | Return self%value. [`MaskedArray.__rmod__`](generated/numpy.ma.maskedarray.__rmod__#numpy.ma.MaskedArray.__rmod__ "numpy.ma.MaskedArray.__rmod__")(value, /) | Return value%self. [`MaskedArray.__divmod__`](generated/numpy.ma.maskedarray.__divmod__#numpy.ma.MaskedArray.__divmod__ "numpy.ma.MaskedArray.__divmod__")(value, /) | Return divmod(self, value). [`MaskedArray.__rdivmod__`](generated/numpy.ma.maskedarray.__rdivmod__#numpy.ma.MaskedArray.__rdivmod__ "numpy.ma.MaskedArray.__rdivmod__")(value, /) | Return divmod(value, self). 
[`MaskedArray.__pow__`](generated/numpy.ma.maskedarray.__pow__#numpy.ma.MaskedArray.__pow__ "numpy.ma.MaskedArray.__pow__")(other) | Raise self to the power other, masking the potential NaNs/Infs. [`MaskedArray.__rpow__`](generated/numpy.ma.maskedarray.__rpow__#numpy.ma.MaskedArray.__rpow__ "numpy.ma.MaskedArray.__rpow__")(other) | Raise other to the power self, masking the potential NaNs/Infs. [`MaskedArray.__lshift__`](generated/numpy.ma.maskedarray.__lshift__#numpy.ma.MaskedArray.__lshift__ "numpy.ma.MaskedArray.__lshift__")(value, /) | Return self<<value. [`MaskedArray.__rlshift__`](generated/numpy.ma.maskedarray.__rlshift__#numpy.ma.MaskedArray.__rlshift__ "numpy.ma.MaskedArray.__rlshift__")(value, /) | Return value<<self. [`MaskedArray.__rshift__`](generated/numpy.ma.maskedarray.__rshift__#numpy.ma.MaskedArray.__rshift__ "numpy.ma.MaskedArray.__rshift__")(value, /) | Return self>>value. [`MaskedArray.__rrshift__`](generated/numpy.ma.maskedarray.__rrshift__#numpy.ma.MaskedArray.__rrshift__ "numpy.ma.MaskedArray.__rrshift__")(value, /) | Return value>>self. [`MaskedArray.__and__`](generated/numpy.ma.maskedarray.__and__#numpy.ma.MaskedArray.__and__ "numpy.ma.MaskedArray.__and__")(value, /) | Return self&value. [`MaskedArray.__rand__`](generated/numpy.ma.maskedarray.__rand__#numpy.ma.MaskedArray.__rand__ "numpy.ma.MaskedArray.__rand__")(value, /) | Return value&self. [`MaskedArray.__or__`](generated/numpy.ma.maskedarray.__or__#numpy.ma.MaskedArray.__or__ "numpy.ma.MaskedArray.__or__")(value, /) | Return self|value. [`MaskedArray.__ror__`](generated/numpy.ma.maskedarray.__ror__#numpy.ma.MaskedArray.__ror__ "numpy.ma.MaskedArray.__ror__")(value, /) | Return value|self. [`MaskedArray.__xor__`](generated/numpy.ma.maskedarray.__xor__#numpy.ma.MaskedArray.__xor__ "numpy.ma.MaskedArray.__xor__")(value, /) | Return self^value. [`MaskedArray.__rxor__`](generated/numpy.ma.maskedarray.__rxor__#numpy.ma.MaskedArray.__rxor__ "numpy.ma.MaskedArray.__rxor__")(value, /) | Return value^self. ### Arithmetic, in-place: [`MaskedArray.__iadd__`](generated/numpy.ma.maskedarray.__iadd__#numpy.ma.MaskedArray.__iadd__ "numpy.ma.MaskedArray.__iadd__")(other) | Add other to self in-place.
---|--- [`MaskedArray.__isub__`](generated/numpy.ma.maskedarray.__isub__#numpy.ma.MaskedArray.__isub__ "numpy.ma.MaskedArray.__isub__")(other) | Subtract other from self in-place. [`MaskedArray.__imul__`](generated/numpy.ma.maskedarray.__imul__#numpy.ma.MaskedArray.__imul__ "numpy.ma.MaskedArray.__imul__")(other) | Multiply self by other in-place. [`MaskedArray.__idiv__`](generated/numpy.ma.maskedarray.__idiv__#numpy.ma.MaskedArray.__idiv__ "numpy.ma.MaskedArray.__idiv__")(other) | Divide self by other in-place. [`MaskedArray.__itruediv__`](generated/numpy.ma.maskedarray.__itruediv__#numpy.ma.MaskedArray.__itruediv__ "numpy.ma.MaskedArray.__itruediv__")(other) | True divide self by other in-place. [`MaskedArray.__ifloordiv__`](generated/numpy.ma.maskedarray.__ifloordiv__#numpy.ma.MaskedArray.__ifloordiv__ "numpy.ma.MaskedArray.__ifloordiv__")(other) | Floor divide self by other in-place. [`MaskedArray.__imod__`](generated/numpy.ma.maskedarray.__imod__#numpy.ma.MaskedArray.__imod__ "numpy.ma.MaskedArray.__imod__")(value, /) | Return self%=value. [`MaskedArray.__ipow__`](generated/numpy.ma.maskedarray.__ipow__#numpy.ma.MaskedArray.__ipow__ "numpy.ma.MaskedArray.__ipow__")(other) | Raise self to the power other, in place. [`MaskedArray.__ilshift__`](generated/numpy.ma.maskedarray.__ilshift__#numpy.ma.MaskedArray.__ilshift__ "numpy.ma.MaskedArray.__ilshift__")(value, /) | Return self<<=value. [`MaskedArray.__irshift__`](generated/numpy.ma.maskedarray.__irshift__#numpy.ma.MaskedArray.__irshift__ "numpy.ma.MaskedArray.__irshift__")(value, /) | Return self>>=value. [`MaskedArray.__iand__`](generated/numpy.ma.maskedarray.__iand__#numpy.ma.MaskedArray.__iand__ "numpy.ma.MaskedArray.__iand__")(value, /) | Return self&=value. [`MaskedArray.__ior__`](generated/numpy.ma.maskedarray.__ior__#numpy.ma.MaskedArray.__ior__ "numpy.ma.MaskedArray.__ior__")(value, /) | Return self|=value. 
[`MaskedArray.__ixor__`](generated/numpy.ma.maskedarray.__ixor__#numpy.ma.MaskedArray.__ixor__ "numpy.ma.MaskedArray.__ixor__")(value, /) | Return self^=value. ## Representation [`MaskedArray.__repr__`](generated/numpy.ma.maskedarray.__repr__#numpy.ma.MaskedArray.__repr__ "numpy.ma.MaskedArray.__repr__")() | Literal string representation. ---|--- [`MaskedArray.__str__`](generated/numpy.ma.maskedarray.__str__#numpy.ma.MaskedArray.__str__ "numpy.ma.MaskedArray.__str__")() | Return str(self). [`MaskedArray.ids`](generated/numpy.ma.maskedarray.ids#numpy.ma.MaskedArray.ids "numpy.ma.MaskedArray.ids")() | Return the addresses of the data and mask areas. [`MaskedArray.iscontiguous`](generated/numpy.ma.maskedarray.iscontiguous#numpy.ma.MaskedArray.iscontiguous "numpy.ma.MaskedArray.iscontiguous")() | Return a boolean indicating whether the data is contiguous. ## Special methods For standard library functions: [`MaskedArray.__copy__`](generated/numpy.ma.maskedarray.__copy__#numpy.ma.MaskedArray.__copy__ "numpy.ma.MaskedArray.__copy__")() | Used if [`copy.copy`](https://docs.python.org/3/library/copy.html#copy.copy "\(in Python v3.13\)") is called on an array. ---|--- [`MaskedArray.__deepcopy__`](generated/numpy.ma.maskedarray.__deepcopy__#numpy.ma.MaskedArray.__deepcopy__ "numpy.ma.MaskedArray.__deepcopy__")(memo, /) | Used if [`copy.deepcopy`](https://docs.python.org/3/library/copy.html#copy.deepcopy "\(in Python v3.13\)") is called on an array. [`MaskedArray.__getstate__`](generated/numpy.ma.maskedarray.__getstate__#numpy.ma.MaskedArray.__getstate__ "numpy.ma.MaskedArray.__getstate__")() | Return the internal state of the masked array, for pickling purposes. [`MaskedArray.__reduce__`](generated/numpy.ma.maskedarray.__reduce__#numpy.ma.MaskedArray.__reduce__ "numpy.ma.MaskedArray.__reduce__")() | Return a 3-tuple for pickling a MaskedArray. 
[`MaskedArray.__setstate__`](generated/numpy.ma.maskedarray.__setstate__#numpy.ma.MaskedArray.__setstate__ "numpy.ma.MaskedArray.__setstate__")(state) | Restore the internal state of the masked array, for pickling purposes. Basic customization: [`MaskedArray.__new__`](generated/numpy.ma.maskedarray.__new__#numpy.ma.MaskedArray.__new__ "numpy.ma.MaskedArray.__new__")(cls[, data, mask, ...]) | Create a new masked array from scratch. ---|--- [`MaskedArray.__array__`](generated/numpy.ma.maskedarray.__array__#numpy.ma.MaskedArray.__array__ "numpy.ma.MaskedArray.__array__")([dtype], *[, copy]) | Return a new reference to self if `dtype` is not given or matches the array's data type. [`MaskedArray.__array_wrap__`](generated/numpy.ma.maskedarray.__array_wrap__#numpy.ma.MaskedArray.__array_wrap__ "numpy.ma.MaskedArray.__array_wrap__")(obj[, context, ...]) | Special hook for ufuncs. Container customization: (see [Indexing](routines.indexing#arrays-indexing)) [`MaskedArray.__len__`](generated/numpy.ma.maskedarray.__len__#numpy.ma.MaskedArray.__len__ "numpy.ma.MaskedArray.__len__")(/) | Return len(self). ---|--- [`MaskedArray.__getitem__`](generated/numpy.ma.maskedarray.__getitem__#numpy.ma.MaskedArray.__getitem__ "numpy.ma.MaskedArray.__getitem__")(indx) | x.__getitem__(y) <==> x[y] [`MaskedArray.__setitem__`](generated/numpy.ma.maskedarray.__setitem__#numpy.ma.MaskedArray.__setitem__ "numpy.ma.MaskedArray.__setitem__")(indx, value) | x.__setitem__(i, y) <==> x[i]=y [`MaskedArray.__delitem__`](generated/numpy.ma.maskedarray.__delitem__#numpy.ma.MaskedArray.__delitem__ "numpy.ma.MaskedArray.__delitem__")(key, /) | Delete self[key]. [`MaskedArray.__contains__`](generated/numpy.ma.maskedarray.__contains__#numpy.ma.MaskedArray.__contains__ "numpy.ma.MaskedArray.__contains__")(key, /) | Return key in self. ## Specific methods ### Handling the mask The following methods can be used to access information about the mask or to manipulate the mask. 
[`MaskedArray.__setmask__`](generated/numpy.ma.maskedarray.__setmask__#numpy.ma.MaskedArray.__setmask__ "numpy.ma.MaskedArray.__setmask__")(mask[, copy]) | Set the mask. ---|--- [`MaskedArray.harden_mask`](generated/numpy.ma.maskedarray.harden_mask#numpy.ma.MaskedArray.harden_mask "numpy.ma.MaskedArray.harden_mask")() | Force the mask to hard, preventing unmasking by assignment. [`MaskedArray.soften_mask`](generated/numpy.ma.maskedarray.soften_mask#numpy.ma.MaskedArray.soften_mask "numpy.ma.MaskedArray.soften_mask")() | Force the mask to soft (default), allowing unmasking by assignment. [`MaskedArray.unshare_mask`](generated/numpy.ma.maskedarray.unshare_mask#numpy.ma.MaskedArray.unshare_mask "numpy.ma.MaskedArray.unshare_mask")() | Copy the mask and set the `sharedmask` flag to `False`. [`MaskedArray.shrink_mask`](generated/numpy.ma.maskedarray.shrink_mask#numpy.ma.MaskedArray.shrink_mask "numpy.ma.MaskedArray.shrink_mask")() | Reduce a mask to nomask when possible. ### Handling the `fill_value` [`MaskedArray.get_fill_value`](generated/numpy.ma.maskedarray.get_fill_value#numpy.ma.MaskedArray.get_fill_value "numpy.ma.MaskedArray.get_fill_value")() | The filling value of the masked array is a scalar. ---|--- [`MaskedArray.set_fill_value`](generated/numpy.ma.maskedarray.set_fill_value#numpy.ma.MaskedArray.set_fill_value "numpy.ma.MaskedArray.set_fill_value")([value]) | ### Counting the missing elements [`MaskedArray.count`](generated/numpy.ma.maskedarray.count#numpy.ma.MaskedArray.count "numpy.ma.MaskedArray.count")([axis, keepdims]) | Count the non-masked elements of the array along the given axis. ---|--- # The numpy.ma module ## Rationale Masked arrays are arrays that may have missing or invalid entries. The `numpy.ma` module provides a nearly work-alike replacement for numpy that supports data arrays with masks. ## What is a masked array? In many circumstances, datasets can be incomplete or tainted by the presence of invalid data. 
For example, a sensor may have failed to record a value, or recorded an invalid one. The `numpy.ma` module provides a convenient way to address this issue, by introducing masked arrays. A masked array is the combination of a standard [`numpy.ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") and a mask. A mask is either [`nomask`](maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask"), indicating that no value of the associated array is invalid, or an array of booleans that determines for each element of the associated array whether the value is valid or not. When an element of the mask is `False`, the corresponding element of the associated array is valid and is said to be unmasked. When an element of the mask is `True`, the corresponding element of the associated array is said to be masked (invalid). The package ensures that masked entries are not used in computations. As an illustration, let’s consider the following dataset: >>> import numpy as np >>> import numpy.ma as ma >>> x = np.array([1, 2, 3, -1, 5]) We wish to mark the fourth entry as invalid. The easiest way is to create a masked array: >>> mx = ma.masked_array(x, mask=[0, 0, 0, 1, 0]) We can now compute the mean of the dataset, without taking the invalid data into account: >>> mx.mean() 2.75 ## The numpy.ma module The main feature of the `numpy.ma` module is the [`MaskedArray`](maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") class, which is a subclass of [`numpy.ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"). The class, its attributes and methods are described in more detail in the [MaskedArray class](maskedarray.baseclass#maskedarray-baseclass) section. 
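Masked entries are skipped by other reductions as well, not just `mean`. A brief sketch, reusing the dataset from the illustration above:

```python
import numpy as np
import numpy.ma as ma

x = np.array([1, 2, 3, -1, 5])
mx = ma.masked_array(x, mask=[0, 0, 0, 1, 0])

# the masked fourth entry (-1) is excluded from all reductions
print(mx.sum())   # 11
print(mx.min())   # 1
print(mx.max())   # 5
```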
The `numpy.ma` module can be used as an addition to [`numpy`](index#module-numpy "numpy"): >>> import numpy as np >>> import numpy.ma as ma To create an array with the second element invalid, we would do: >>> y = ma.array([1, 2, 3], mask = [0, 1, 0]) To create a masked array where all values close to 1.e20 are invalid, we would do: >>> z = ma.masked_values([1.0, 1.e20, 3.0, 4.0], 1.e20) For a complete discussion of creation methods for masked arrays please see the section Constructing masked arrays. # Using numpy.ma ## Constructing masked arrays There are several ways to construct a masked array. * A first possibility is to directly invoke the [`MaskedArray`](maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") class. * A second possibility is to use the two masked array constructors, [`array`](generated/numpy.ma.array#numpy.ma.array "numpy.ma.array") and [`masked_array`](generated/numpy.ma.masked_array#numpy.ma.masked_array "numpy.ma.masked_array"). [`array`](generated/numpy.ma.array#numpy.ma.array "numpy.ma.array")(data[, dtype, copy, order, mask, ...]) | An array class with possibly masked values. ---|--- [`masked_array`](generated/numpy.ma.masked_array#numpy.ma.masked_array "numpy.ma.masked_array") | alias of [`MaskedArray`](maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") * A third option is to take the view of an existing array. In that case, the mask of the view is set to [`nomask`](maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask") if the array has no named fields, or an array of booleans with the same structure as the array otherwise. 
>>> import numpy as np >>> import numpy.ma as ma >>> x = np.array([1, 2, 3]) >>> x.view(ma.MaskedArray) masked_array(data=[1, 2, 3], mask=False, fill_value=999999) >>> x = np.array([(1, 1.), (2, 2.)], dtype=[('a',int), ('b', float)]) >>> x.view(ma.MaskedArray) masked_array(data=[(1, 1.0), (2, 2.0)], mask=[(False, False), (False, False)], fill_value=(999999, 1e+20), dtype=[('a', '<i8'), ('b', '<f8')]) ## Accessing only the valid entries To retrieve only the valid entries, we can use the inverse of the mask as an index: >>> import numpy.ma as ma >>> x = ma.array([[1, 2], [3, 4]], mask=[[0, 1], [1, 0]]) >>> x[~x.mask] masked_array(data=[1, 4], mask=[False, False], fill_value=999999) Another way to retrieve the valid data is to use the [`compressed`](generated/numpy.ma.compressed#numpy.ma.compressed "numpy.ma.compressed") method, which returns a one-dimensional [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") (or one of its subclasses, depending on the value of the [`baseclass`](maskedarray.baseclass#numpy.ma.MaskedArray.baseclass "numpy.ma.MaskedArray.baseclass") attribute): >>> x.compressed() array([1, 4]) Note that the output of [`compressed`](generated/numpy.ma.compressed#numpy.ma.compressed "numpy.ma.compressed") is always 1D. 
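The underlying data and the mask can also be inspected directly, via the `data` and `mask` attributes or the `ma.getdata`/`ma.getmaskarray` functions. A brief sketch:

```python
import numpy.ma as ma

x = ma.array([[1, 2], [3, 4]], mask=[[0, 1], [1, 0]])

print(x.data)              # the underlying ndarray, masked values included
print(x.mask)              # the boolean mask
print(ma.getdata(x))       # function form of the data accessor
print(ma.getmaskarray(x))  # always returns a full boolean array, never nomask
print(x.filled(0))         # the data with masked entries replaced by 0
```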
## Modifying the mask ### Masking an entry The recommended way to mark one or several specific entries of a masked array as invalid is to assign the special value [`masked`](maskedarray.baseclass#numpy.ma.masked "numpy.ma.masked") to them: >>> x = ma.array([1, 2, 3]) >>> x[0] = ma.masked >>> x masked_array(data=[--, 2, 3], mask=[ True, False, False], fill_value=999999) >>> y = ma.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> y[(0, 1, 2), (1, 2, 0)] = ma.masked >>> y masked_array( data=[[1, --, 3], [4, 5, --], [--, 8, 9]], mask=[[False, True, False], [False, False, True], [ True, False, False]], fill_value=999999) >>> z = ma.array([1, 2, 3, 4]) >>> z[:-2] = ma.masked >>> z masked_array(data=[--, --, 3, 4], mask=[ True, True, False, False], fill_value=999999) A second possibility is to modify the [`mask`](maskedarray.baseclass#numpy.ma.MaskedArray.mask "numpy.ma.MaskedArray.mask") directly, but this usage is discouraged. Note When creating a new masked array with a simple, non-structured datatype, the mask is initially set to the special value [`nomask`](maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask"), that corresponds roughly to the boolean `False`. Trying to set an element of [`nomask`](maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask") will fail with a [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "\(in Python v3.13\)") exception, as a boolean does not support item assignment. 
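The behaviour described in the note can be demonstrated directly. A small sketch:

```python
import numpy.ma as ma

x = ma.array([1, 2, 3])   # simple dtype: the mask starts out as nomask
print(x.mask)             # False -- nomask is a boolean scalar, not an array

try:
    x.mask[0] = True      # item assignment on a boolean scalar fails
except TypeError as exc:
    print("TypeError:", exc)
```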
All the entries of an array can be masked at once by assigning `True` to the mask: >>> import numpy.ma as ma >>> x = ma.array([1, 2, 3], mask=[0, 0, 1]) >>> x.mask = True >>> x masked_array(data=[--, --, --], mask=[ True, True, True], fill_value=999999, dtype=int64) Finally, specific entries can be masked and/or unmasked by assigning to the mask a sequence of booleans: >>> x = ma.array([1, 2, 3]) >>> x.mask = [0, 1, 0] >>> x masked_array(data=[1, --, 3], mask=[False, True, False], fill_value=999999) ### Unmasking an entry To unmask one or several specific entries, we can just assign one or several new valid values to them: >>> import numpy.ma as ma >>> x = ma.array([1, 2, 3], mask=[0, 0, 1]) >>> x masked_array(data=[1, 2, --], mask=[False, False, True], fill_value=999999) >>> x[-1] = 5 >>> x masked_array(data=[1, 2, 5], mask=[False, False, False], fill_value=999999) Note Unmasking an entry by direct assignment will silently fail if the masked array has a _hard_ mask, as shown by the [`hardmask`](maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") attribute. This feature was introduced to prevent overwriting the mask. To force the unmasking of an entry where the array has a hard mask, the mask must first be softened using the [`soften_mask`](generated/numpy.ma.soften_mask#numpy.ma.soften_mask "numpy.ma.soften_mask") method before the assignment. 
It can be re-hardened with [`harden_mask`](generated/numpy.ma.harden_mask#numpy.ma.harden_mask "numpy.ma.harden_mask") as follows: >>> import numpy.ma as ma >>> x = ma.array([1, 2, 3], mask=[0, 0, 1], hard_mask=True) >>> x masked_array(data=[1, 2, --], mask=[False, False, True], fill_value=999999) >>> x[-1] = 5 >>> x masked_array(data=[1, 2, --], mask=[False, False, True], fill_value=999999) >>> x.soften_mask() masked_array(data=[1, 2, --], mask=[False, False, True], fill_value=999999) >>> x[-1] = 5 >>> x masked_array(data=[1, 2, 5], mask=[False, False, False], fill_value=999999) >>> x.harden_mask() masked_array(data=[1, 2, 5], mask=[False, False, False], fill_value=999999) To unmask all masked entries of a masked array (provided the mask isn’t a hard mask), the simplest solution is to assign the constant [`nomask`](maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask") to the mask: >>> import numpy.ma as ma >>> x = ma.array([1, 2, 3], mask=[0, 0, 1]) >>> x masked_array(data=[1, 2, --], mask=[False, False, True], fill_value=999999) >>> x.mask = ma.nomask >>> x masked_array(data=[1, 2, 3], mask=[False, False, False], fill_value=999999) ## Indexing and slicing As a [`MaskedArray`](maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") is a subclass of [`numpy.ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"), it inherits its mechanisms for indexing and slicing. 
When accessing a single entry of a masked array with no named fields, the output is either a scalar (if the corresponding entry of the mask is `False`) or the special value [`masked`](maskedarray.baseclass#numpy.ma.masked "numpy.ma.masked") (if the corresponding entry of the mask is `True`): >>> import numpy.ma as ma >>> x = ma.array([1, 2, 3], mask=[0, 0, 1]) >>> x[0] 1 >>> x[-1] masked >>> x[-1] is ma.masked True If the masked array has named fields, accessing a single entry returns a [`numpy.void`](arrays.scalars#numpy.void "numpy.void") object if none of the fields are masked, or a 0d masked array with the same dtype as the initial array if at least one of the fields is masked. >>> import numpy.ma as ma >>> y = ma.masked_array([(1,2), (3, 4)], ... mask=[(0, 0), (0, 1)], ... dtype=[('a', int), ('b', int)]) >>> y[0] (1, 2) >>> y[-1] (3, --) When accessing a slice, the output is a masked array whose [`data`](maskedarray.baseclass#numpy.ma.MaskedArray.data "numpy.ma.MaskedArray.data") attribute is a view of the original data, and whose mask is either [`nomask`](maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask") (if there were no invalid entries in the original array) or a view of the corresponding slice of the original mask. The view is required to ensure propagation of any modification of the mask to the original. >>> import numpy.ma as ma >>> x = ma.array([1, 2, 3, 4, 5], mask=[0, 1, 0, 0, 1]) >>> mx = x[:3] >>> mx masked_array(data=[1, --, 3], mask=[False, True, False], fill_value=999999) >>> mx[1] = -1 >>> mx masked_array(data=[1, -1, 3], mask=[False, False, False], fill_value=999999) >>> x.mask array([False, False, False, False, True]) >>> x.data array([ 1, -1, 3, 4, 5]) Accessing a field of a masked array with a structured datatype returns a [`MaskedArray`](maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray"). ## Operations on masked arrays Arithmetic and comparison operations are supported by masked arrays. 
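As a quick sketch of how masks propagate: the result of a binary operation is masked wherever either operand is masked, and comparisons behave the same way:

```python
import numpy.ma as ma

x = ma.array([1, 2, 3], mask=[0, 0, 1])
y = ma.array([10, 20, 30], mask=[1, 0, 0])

print(x + y)   # masked wherever either input is masked
print(x > 1)   # comparisons carry the mask of the input
```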
As much as possible, invalid entries of a masked array are not processed, meaning that the corresponding [`data`](maskedarray.baseclass#numpy.ma.MaskedArray.data "numpy.ma.MaskedArray.data") entries _should_ be the same before and after the operation. Warning We need to stress that this behavior may not be systematic: masked data may be affected by the operation in some cases, and users should therefore not rely on this data remaining unchanged. The `numpy.ma` module comes with a specific implementation of most ufuncs. Unary and binary functions that have a validity domain (such as [`log`](generated/numpy.log#numpy.log "numpy.log") or [`divide`](generated/numpy.divide#numpy.divide "numpy.divide")) return the [`masked`](maskedarray.baseclass#numpy.ma.masked "numpy.ma.masked") constant whenever the input is masked or falls outside the validity domain: >>> import numpy.ma as ma >>> ma.log([-1, 0, 1, 2]) masked_array(data=[--, --, 0.0, 0.6931471805599453], mask=[ True, True, False, False], fill_value=1e+20) Masked arrays also support standard numpy ufuncs. The output is then a masked array. The result of a unary ufunc is masked wherever the input is masked. The result of a binary ufunc is masked wherever any of the inputs is masked. If the ufunc also returns the optional context output (a 3-element tuple containing the name of the ufunc, its arguments and its domain), the context is processed and entries of the output masked array are masked wherever the corresponding input falls outside the validity domain: >>> import numpy as np >>> import numpy.ma as ma >>> x = ma.array([-1, 1, 0, 2, 3], mask=[0, 0, 0, 0, 1]) >>> np.log(x) masked_array(data=[--, 0.0, --, 0.6931471805599453, --], mask=[ True, False, True, False, True], fill_value=1e+20) # Examples ## Data with a given value representing missing data Let’s consider a list of elements, `x`, where values of -9999. represent missing data. 
We wish to compute the average value of the data and the vector of anomalies (deviations from the average): >>> import numpy.ma as ma >>> x = [0.,1.,-9999.,3.,4.] >>> mx = ma.masked_values(x, -9999.) >>> print(mx.mean()) 2.0 >>> print(mx - mx.mean()) [-2.0 -1.0 -- 1.0 2.0] >>> print(mx.anom()) [-2.0 -1.0 -- 1.0 2.0] ## Filling in the missing data Suppose now that we wish to print that same data, but with the missing values replaced by the average value. >>> import numpy.ma as ma >>> mx = ma.masked_values(x, -9999.) >>> print(mx.filled(mx.mean())) [0. 1. 2. 3. 4.] ## Numerical operations Numerical operations can be easily performed without worrying about missing values, dividing by zero, square roots of negative numbers, etc.: >>> import numpy.ma as ma >>> x = ma.array([1., -1., 3., 4., 5., 6.], mask=[0,0,0,0,1,0]) >>> y = ma.array([1., 2., 0., 4., 5., 6.], mask=[0,0,0,0,0,1]) >>> print(ma.sqrt(x/y)) [1.0 -- -- 1.0 -- --] Four values of the output are invalid: the first one comes from taking the square root of a negative number, the second from the division by zero, and the last two where the inputs were masked. ## Ignoring extreme values Let’s consider an array `d` of floats between 0 and 1. We wish to compute the average of the values of `d` while ignoring any data outside the range `[0.2, 0.9]`: >>> import numpy as np >>> import numpy.ma as ma >>> d = np.linspace(0, 1, 20) >>> print(d.mean() - ma.masked_outside(d, 0.2, 0.9).mean()) -0.05263157894736836 # Masked arrays Masked arrays are arrays that may have missing or invalid entries. The [`numpy.ma`](maskedarray.generic#module-numpy.ma "numpy.ma") module provides a nearly work-alike replacement for numpy that supports data arrays with masks. 
* [The `numpy.ma` module](maskedarray.generic) * [Rationale](maskedarray.generic#rationale) * [What is a masked array?](maskedarray.generic#what-is-a-masked-array) * [The `numpy.ma` module](maskedarray.generic#id1) * [Using numpy.ma](maskedarray.generic#using-numpy-ma) * [Constructing masked arrays](maskedarray.generic#constructing-masked-arrays) * [Accessing the data](maskedarray.generic#accessing-the-data) * [Accessing the mask](maskedarray.generic#accessing-the-mask) * [Accessing only the valid entries](maskedarray.generic#accessing-only-the-valid-entries) * [Modifying the mask](maskedarray.generic#modifying-the-mask) * [Indexing and slicing](maskedarray.generic#indexing-and-slicing) * [Operations on masked arrays](maskedarray.generic#operations-on-masked-arrays) * [Examples](maskedarray.generic#examples) * [Data with a given value representing missing data](maskedarray.generic#data-with-a-given-value-representing-missing-data) * [Filling in the missing data](maskedarray.generic#filling-in-the-missing-data) * [Numerical operations](maskedarray.generic#numerical-operations) * [Ignoring extreme values](maskedarray.generic#ignoring-extreme-values) * [Constants of the `numpy.ma` module](maskedarray.baseclass) * [`masked`](maskedarray.baseclass#numpy.ma.masked) * [`nomask`](maskedarray.baseclass#numpy.ma.nomask) * [`masked_print_option`](maskedarray.baseclass#numpy.ma.masked_print_option) * [The `MaskedArray` class](maskedarray.baseclass#the-maskedarray-class) * [`MaskedArray`](maskedarray.baseclass#numpy.ma.MaskedArray) * [Attributes and properties of masked arrays](maskedarray.baseclass#attributes-and-properties-of-masked-arrays) * [`MaskedArray` methods](maskedarray.baseclass#maskedarray-methods) * [Conversion](maskedarray.baseclass#conversion) * [Shape manipulation](maskedarray.baseclass#shape-manipulation) * [Item selection and manipulation](maskedarray.baseclass#item-selection-and-manipulation) * [Pickling and copy](maskedarray.baseclass#pickling-and-copy) * 
[Calculations](maskedarray.baseclass#calculations) * [Arithmetic and comparison operations](maskedarray.baseclass#arithmetic-and-comparison-operations) * [Representation](maskedarray.baseclass#representation) * [Special methods](maskedarray.baseclass#special-methods) * [Specific methods](maskedarray.baseclass#specific-methods) * [Masked array operations](routines.ma) * [Constants](routines.ma#constants) * [Creation](routines.ma#creation) * [Inspecting the array](routines.ma#inspecting-the-array) * [Manipulating a MaskedArray](routines.ma#manipulating-a-maskedarray) * [Operations on masks](routines.ma#operations-on-masks) * [Conversion operations](routines.ma#conversion-operations) * [Masked arrays arithmetic](routines.ma#masked-arrays-arithmetic) # NumPy’s module structure NumPy has a large number of submodules. Most regular usage of NumPy requires only the main namespace and a smaller set of submodules. The rest are either special-purpose or niche namespaces. ## Main namespaces Regular/recommended user-facing namespaces for general use: * [numpy](routines#routines) * [numpy.exceptions](routines.exceptions#routines-exceptions) * [numpy.fft](routines.fft#routines-fft) * [numpy.linalg](routines.linalg#routines-linalg) * [numpy.polynomial](routines.polynomials-package#numpy-polynomial) * [numpy.random](random/index#numpyrandom) * [numpy.strings](routines.strings#routines-strings) * [numpy.testing](routines.testing#routines-testing) * [numpy.typing](typing#typing) ## Special-purpose namespaces * [numpy.ctypeslib](routines.ctypeslib#routines-ctypeslib) \- interacting with NumPy objects with [`ctypes`](https://docs.python.org/3/library/ctypes.html#module-ctypes "\(in Python v3.13\)") * [numpy.dtypes](routines.dtypes#routines-dtypes) \- dtype classes (typically not used directly by end users) * [numpy.emath](routines.emath#routines-emath) \- mathematical functions with automatic domain * [numpy.lib](routines.lib#routines-lib) \- utilities & functionality which do not 
fit the main namespace * [numpy.rec](routines.rec#routines-rec) \- record arrays (largely superseded by dataframe libraries) * [numpy.version](routines.version#routines-version) \- small module with more detailed version info ## Legacy namespaces Prefer not to use these namespaces for new code. There are better alternatives and/or this code is deprecated or isn’t reliable. * [numpy.char](routines.char#routines-char) \- legacy string functionality, only for fixed-width strings * [numpy.distutils](distutils#numpy-distutils-refguide) (deprecated) - build system support * [numpy.f2py](../f2py/usage#python-module-numpy-f2py) \- Fortran binding generation (usually used from the command line only) * [numpy.ma](routines.ma#routines-ma) \- masked arrays (not very reliable, needs an overhaul) * [numpy.matlib](routines.matlib#routines-matlib) (pending deprecation) - functions supporting `matrix` instances # numpy.random.BitGenerator.cffi attribute random.BitGenerator.cffi CFFI interface Returns: **interface** namedtuple Named tuple containing CFFI wrapper * state_address - Memory address of the state struct * state - pointer to the state struct * next_uint64 - function pointer to produce 64 bit integers * next_uint32 - function pointer to produce 32 bit integers * next_double - function pointer to produce doubles * bitgen - pointer to the bit generator struct # numpy.random.BitGenerator _class_ numpy.random.BitGenerator(_seed =None_) Base Class for generic BitGenerators, which provide a stream of random bits based on different algorithms. Must be overridden. Parameters: **seed**{None, int, array_like[ints], SeedSequence}, optional A seed to initialize the `BitGenerator`. If None, then fresh, unpredictable entropy will be pulled from the OS. If an `int` or `array_like[ints]` is passed, then it will be passed to [`SeedSequence`](numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to derive the initial `BitGenerator` state. 
One may also pass in a [`SeedSequence`](numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") instance. All integer values must be non-negative. See also [`SeedSequence`](numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") Attributes: **lock** threading.Lock Lock instance that is shared so that the same BitGenerator can be used in multiple Generators without corrupting the state. Code that generates values from a bit generator should hold the bit generator’s lock. #### Methods [`random_raw`](numpy.random.bitgenerator.random_raw#numpy.random.BitGenerator.random_raw "numpy.random.BitGenerator.random_raw")(self[, size]) | Return randoms as generated by the underlying BitGenerator ---|--- [`spawn`](numpy.random.bitgenerator.spawn#numpy.random.BitGenerator.spawn "numpy.random.BitGenerator.spawn")(n_children) | Create new independent child bit generators. # numpy.random.BitGenerator.random_raw method random.BitGenerator.random_raw(_self_ , _size =None_) Return randoms as generated by the underlying BitGenerator Parameters: **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. **output** bool, optional Output values. Used for performance testing since the generated values are not returned. Returns: **out** uint or ndarray Drawn samples. #### Notes This method directly exposes the raw underlying pseudo-random number generator. All values are returned as unsigned 64-bit values irrespective of the number of bits produced by the PRNG. See the class docstring for the number of bits returned. # numpy.random.BitGenerator.spawn method random.BitGenerator.spawn(_n_children_) Create new independent child bit generators. See [SeedSequence spawning](../../parallel#seedsequence-spawn) for additional notes on spawning children. 
Some bit generators also implement `jumped` as a different approach for creating independent streams. New in version 1.25.0. Parameters: **n_children** int Returns: **child_bit_generators** list of BitGenerators Raises: TypeError When the underlying SeedSequence does not implement spawning. See also [`random.Generator.spawn`](../../generated/numpy.random.generator.spawn#numpy.random.Generator.spawn "numpy.random.Generator.spawn"), [`random.SeedSequence.spawn`](numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn") Equivalent method on the generator and seed sequence. # numpy.random.MT19937.cffi attribute random.MT19937.cffi CFFI interface Returns: **interface** namedtuple Named tuple containing CFFI wrapper * state_address - Memory address of the state struct * state - pointer to the state struct * next_uint64 - function pointer to produce 64 bit integers * next_uint32 - function pointer to produce 32 bit integers * next_double - function pointer to produce doubles * bitgen - pointer to the bit generator struct # numpy.random.MT19937.ctypes attribute random.MT19937.ctypes ctypes interface Returns: **interface** namedtuple Named tuple containing ctypes wrapper * state_address - Memory address of the state struct * state - pointer to the state struct * next_uint64 - function pointer to produce 64 bit integers * next_uint32 - function pointer to produce 32 bit integers * next_double - function pointer to produce doubles * bitgen - pointer to the bit generator struct # numpy.random.MT19937.jumped method random.MT19937.jumped(_jumps =1_) Returns a new bit generator with the state jumped. The state of the returned bit generator is jumped as-if 2**(128 * jumps) random numbers have been generated. 
Parameters: **jumps** integer, positive Number of times to jump the state of the bit generator returned Returns: **bit_generator** MT19937 New instance of the generator jumped `jumps` times #### Notes The jump step is computed using a modified version of Matsumoto’s implementation of Horner’s method. The step polynomial is precomputed to perform 2**128 steps. The jumped state has been verified to match the state produced using Matsumoto’s original code. #### References [1] Matsumoto, M, Generating multiple disjoint streams of pseudorandom number sequences. Accessed on: May 6, 2020. [2] Hiroshi Haramoto, Makoto Matsumoto, Takuji Nishimura, François Panneton, Pierre L’Ecuyer, “Efficient Jump Ahead for F2-Linear Random Number Generators”, INFORMS JOURNAL ON COMPUTING, Vol. 20, No. 3, Summer 2008, pp. 385-390. # numpy.random.MT19937.state attribute random.MT19937.state Get or set the PRNG state Returns: **state** dict Dictionary containing the information required to describe the state of the PRNG # numpy.random.PCG64.advance method random.PCG64.advance(_delta_) Advance the underlying RNG as-if delta draws have occurred. Parameters: **delta** integer, positive Number of draws to advance the RNG. Must be less than the size state variable in the underlying RNG. Returns: **self** PCG64 RNG advanced delta steps #### Notes Advancing a RNG updates the underlying RNG state as-if a given number of calls to the underlying RNG have been made. In general there is not a one-to-one relationship between the number of output random values from a particular distribution and the number of draws from the core RNG. This occurs for two reasons: * The random values are simulated using a rejection-based method and so, on average, more than one value from the underlying RNG is required to generate a single draw. * The number of bits required to generate a simulated value differs from the number of bits generated by the underlying RNG. 
For example, two 16-bit integer values can be simulated from a single draw of a 32-bit RNG. Advancing the RNG state resets any pre-computed random numbers. This is required to ensure exact reproducibility. # numpy.random.PCG64.cffi attribute random.PCG64.cffi CFFI interface Returns: **interface** namedtuple Named tuple containing CFFI wrapper * state_address - Memory address of the state struct * state - pointer to the state struct * next_uint64 - function pointer to produce 64 bit integers * next_uint32 - function pointer to produce 32 bit integers * next_double - function pointer to produce doubles * bitgen - pointer to the bit generator struct # numpy.random.PCG64.ctypes attribute random.PCG64.ctypes ctypes interface Returns: **interface** namedtuple Named tuple containing ctypes wrapper * state_address - Memory address of the state struct * state - pointer to the state struct * next_uint64 - function pointer to produce 64 bit integers * next_uint32 - function pointer to produce 32 bit integers * next_double - function pointer to produce doubles * bitgen - pointer to the bit generator struct # numpy.random.PCG64.jumped method random.PCG64.jumped(_jumps =1_) Returns a new bit generator with the state jumped. Jumps the state as-if jumps * 210306068529402873165736369884012333109 random numbers have been generated. Parameters: **jumps** integer, positive Number of times to jump the state of the bit generator returned Returns: **bit_generator** PCG64 New instance of the generator jumped `jumps` times #### Notes The step size is phi-1 when multiplied by 2**128 where phi is the golden ratio. # numpy.random.PCG64.state attribute random.PCG64.state Get or set the PRNG state Returns: **state** dict Dictionary containing the information required to describe the state of the PRNG # numpy.random.PCG64DXSM.advance method random.PCG64DXSM.advance(_delta_) Advance the underlying RNG as-if delta draws have occurred. 
Parameters: **delta** integer, positive Number of draws to advance the RNG. Must be less than the size of the state variable in the underlying RNG. Returns: **self** PCG64DXSM RNG advanced delta steps #### Notes Advancing an RNG updates the underlying RNG state as-if a given number of calls to the underlying RNG have been made. In general there is not a one-to-one relationship between the number of output random values from a particular distribution and the number of draws from the core RNG. This occurs for two reasons: * The random values are simulated using a rejection-based method and so, on average, more than one value from the underlying RNG is required to generate a single draw. * The number of bits required to generate a simulated value differs from the number of bits generated by the underlying RNG. For example, two 16-bit integer values can be simulated from a single draw of a 32-bit RNG. Advancing the RNG state resets any pre-computed random numbers. This is required to ensure exact reproducibility. 
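The equivalence described in these notes can be sketched with `random_raw`, which performs one draw of the underlying RNG per value. This is an illustrative example (the seed and step count are arbitrary, and `PCG64DXSM` requires NumPy 1.21+), not part of the reference itself:

```python
from numpy.random import PCG64DXSM

bg_draw = PCG64DXSM(1234)   # consume draws one at a time
bg_adv = PCG64DXSM(1234)    # advance the state directly

bg_draw.random_raw(5)       # five raw 64-bit draws, values discarded
bg_adv.advance(5)           # as-if five draws had occurred

# Both bit generators should now be in the same state and
# produce identical output.
same = bool((bg_draw.random_raw(3) == bg_adv.random_raw(3)).all())
print(same)
```

This one-to-one correspondence holds for raw draws; as noted above, values from a distribution may consume more than one raw draw each.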
# numpy.random.PCG64DXSM.cffi attribute random.PCG64DXSM.cffi CFFI interface Returns: **interface** namedtuple Named tuple containing CFFI wrapper * state_address - Memory address of the state struct * state - pointer to the state struct * next_uint64 - function pointer to produce 64 bit integers * next_uint32 - function pointer to produce 32 bit integers * next_double - function pointer to produce doubles * bitgen - pointer to the bit generator struct # numpy.random.PCG64DXSM.ctypes attribute random.PCG64DXSM.ctypes ctypes interface Returns: **interface** namedtuple Named tuple containing ctypes wrapper * state_address - Memory address of the state struct * state - pointer to the state struct * next_uint64 - function pointer to produce 64 bit integers * next_uint32 - function pointer to produce 32 bit integers * next_double - function pointer to produce doubles * bitgen - pointer to the bit generator struct # numpy.random.PCG64DXSM.jumped method random.PCG64DXSM.jumped(_jumps =1_) Returns a new bit generator with the state jumped. Jumps the state as-if jumps * 210306068529402873165736369884012333109 random numbers have been generated. Parameters: **jumps** integer, positive Number of times to jump the state of the bit generator returned Returns: **bit_generator** PCG64DXSM New instance of generator jumped `jumps` times #### Notes The step size is phi-1 when multiplied by 2**128 where phi is the golden ratio. # numpy.random.PCG64DXSM.state attribute random.PCG64DXSM.state Get or set the PRNG state Returns: **state** dict Dictionary containing the information required to describe the state of the PRNG # numpy.random.Philox.advance method random.Philox.advance(_delta_) Advance the underlying RNG as-if delta draws have occurred. Parameters: **delta** integer, positive Number of draws to advance the RNG. Must be less than the size of the state variable in the underlying RNG. 
Returns: **self** Philox RNG advanced delta steps #### Notes Advancing an RNG updates the underlying RNG state as-if a given number of calls to the underlying RNG have been made. In general there is not a one-to-one relationship between the number of output random values from a particular distribution and the number of draws from the core RNG. This occurs for two reasons: * The random values are simulated using a rejection-based method and so, on average, more than one value from the underlying RNG is required to generate a single draw. * The number of bits required to generate a simulated value differs from the number of bits generated by the underlying RNG. For example, two 16-bit integer values can be simulated from a single draw of a 32-bit RNG. Advancing the RNG state resets any pre-computed random numbers. This is required to ensure exact reproducibility. # numpy.random.Philox.cffi attribute random.Philox.cffi CFFI interface Returns: **interface** namedtuple Named tuple containing CFFI wrapper * state_address - Memory address of the state struct * state - pointer to the state struct * next_uint64 - function pointer to produce 64 bit integers * next_uint32 - function pointer to produce 32 bit integers * next_double - function pointer to produce doubles * bitgen - pointer to the bit generator struct # numpy.random.Philox.ctypes attribute random.Philox.ctypes ctypes interface Returns: **interface** namedtuple Named tuple containing ctypes wrapper * state_address - Memory address of the state struct * state - pointer to the state struct * next_uint64 - function pointer to produce 64 bit integers * next_uint32 - function pointer to produce 32 bit integers * next_double - function pointer to produce doubles * bitgen - pointer to the bit generator struct # numpy.random.Philox.jumped method random.Philox.jumped(_jumps =1_) Returns a new bit generator with the state jumped. The state of the returned bit generator is jumped as-if (2**128) * jumps random numbers have been 
generated. Parameters: **jumps** integer, positive Number of times to jump the state of the bit generator returned Returns: **bit_generator** Philox New instance of generator jumped `jumps` times # numpy.random.Philox.state attribute random.Philox.state Get or set the PRNG state Returns: **state** dict Dictionary containing the information required to describe the state of the PRNG # numpy.random.SeedSequence.entropy attribute random.SeedSequence.entropy # numpy.random.SeedSequence.generate_state method random.SeedSequence.generate_state(_n_words_ , _dtype =np.uint32_) Return the requested number of words for PRNG seeding. A BitGenerator should call this method in its constructor with an appropriate `n_words` parameter to properly seed itself. Parameters: **n_words** int **dtype** np.uint32 or np.uint64, optional The size of each word. This should only be either [`uint32`](../../../arrays.scalars#numpy.uint32 "numpy.uint32") or [`uint64`](../../../arrays.scalars#numpy.uint64 "numpy.uint64"). Strings (`'uint32'`, `'uint64'`) are fine. Note that requesting [`uint64`](../../../arrays.scalars#numpy.uint64 "numpy.uint64") will draw twice as many bits as [`uint32`](../../../arrays.scalars#numpy.uint32 "numpy.uint32") for the same `n_words`. This is a convenience for `BitGenerator`s that express their states as [`uint64`](../../../arrays.scalars#numpy.uint64 "numpy.uint64") arrays. Returns: **state** uint32 or uint64 array, shape=(n_words,) # numpy.random.SeedSequence _class_ numpy.random.SeedSequence(_entropy =None_, _*_ , _spawn_key =()_, _pool_size =4_) SeedSequence mixes sources of entropy in a reproducible way to set the initial state for independent and very probably non-overlapping BitGenerators. Once the SeedSequence is instantiated, you can call the [`generate_state`](numpy.random.seedsequence.generate_state#numpy.random.SeedSequence.generate_state "numpy.random.SeedSequence.generate_state") method to get an appropriately sized seed. 
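A brief sketch of the `generate_state` behavior described above (the seed and `n_words` values are arbitrary, chosen only for illustration):

```python
import numpy as np
from numpy.random import SeedSequence

ss = SeedSequence(12345)

words32 = ss.generate_state(4)                   # four 32-bit words
words64 = ss.generate_state(4, dtype=np.uint64)  # same n_words, twice the bits

# The result is a deterministic function of the SeedSequence.
repeatable = bool((ss.generate_state(4) == words32).all())
print(words32.dtype, words64.dtype, repeatable)
```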
Calling [`spawn(n)`](numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn") will create `n` SeedSequences that can be used to seed independent BitGenerators, i.e. for different threads. Parameters: **entropy**{None, int, sequence[int]}, optional The entropy for creating a `SeedSequence`. All integer values must be non-negative. **spawn_key**{(), sequence[int]}, optional An additional source of entropy based on the position of this `SeedSequence` in the tree of such objects created with the [`SeedSequence.spawn`](numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn") method. Typically, only [`SeedSequence.spawn`](numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn") will set this, and users will not. **pool_size**{int}, optional Size of the pooled entropy to store. Default is 4 to give a 128-bit entropy pool. 8 (for 256 bits) is another reasonable choice if working with larger PRNGs, but there is very little to be gained by selecting another value. **n_children_spawned**{int}, optional The number of children already spawned. Only pass this if reconstructing a `SeedSequence` from a serialized form. 
#### Notes Best practice for achieving reproducible bit streams is to use the default `None` for the initial entropy, and then use [`SeedSequence.entropy`](numpy.random.seedsequence.entropy#numpy.random.SeedSequence.entropy "numpy.random.SeedSequence.entropy") to log/pickle the [`entropy`](numpy.random.seedsequence.entropy#numpy.random.SeedSequence.entropy "numpy.random.SeedSequence.entropy") for reproducibility: >>> sq1 = np.random.SeedSequence() >>> sq1.entropy 243799254704924441050048792905230269161 # random >>> sq2 = np.random.SeedSequence(sq1.entropy) >>> np.all(sq1.generate_state(10) == sq2.generate_state(10)) True Attributes: **entropy** **n_children_spawned** **pool** **pool_size** **spawn_key** **state** #### Methods [`generate_state`](numpy.random.seedsequence.generate_state#numpy.random.SeedSequence.generate_state "numpy.random.SeedSequence.generate_state")(n_words[, dtype]) | Return the requested number of words for PRNG seeding. ---|--- [`spawn`](numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn")(n_children) | Spawn a number of child `SeedSequence`s by extending the [`spawn_key`](numpy.random.seedsequence.spawn_key#numpy.random.SeedSequence.spawn_key "numpy.random.SeedSequence.spawn_key"). # numpy.random.SeedSequence.spawn method random.SeedSequence.spawn(_n_children_) Spawn a number of child `SeedSequence`s by extending the [`spawn_key`](numpy.random.seedsequence.spawn_key#numpy.random.SeedSequence.spawn_key "numpy.random.SeedSequence.spawn_key"). See [SeedSequence spawning](../../parallel#seedsequence-spawn) for additional notes on spawning children. 
Parameters: **n_children** int Returns: **seqs** list of `SeedSequence`s See also [`random.Generator.spawn`](../../generated/numpy.random.generator.spawn#numpy.random.Generator.spawn "numpy.random.Generator.spawn"), [`random.BitGenerator.spawn`](numpy.random.bitgenerator.spawn#numpy.random.BitGenerator.spawn "numpy.random.BitGenerator.spawn") Equivalent method on the generator and bit generator. # numpy.random.SeedSequence.spawn_key attribute random.SeedSequence.spawn_key # numpy.random.SFC64.cffi attribute random.SFC64.cffi CFFI interface Returns: **interface** namedtuple Named tuple containing CFFI wrapper * state_address - Memory address of the state struct * state - pointer to the state struct * next_uint64 - function pointer to produce 64 bit integers * next_uint32 - function pointer to produce 32 bit integers * next_double - function pointer to produce doubles * bitgen - pointer to the bit generator struct # numpy.random.SFC64.ctypes attribute random.SFC64.ctypes ctypes interface Returns: **interface** namedtuple Named tuple containing ctypes wrapper * state_address - Memory address of the state struct * state - pointer to the state struct * next_uint64 - function pointer to produce 64 bit integers * next_uint32 - function pointer to produce 32 bit integers * next_double - function pointer to produce doubles * bitgen - pointer to the bit generator struct # numpy.random.SFC64.state attribute random.SFC64.state Get or set the PRNG state Returns: **state** dict Dictionary containing the information required to describe the state of the PRNG # Bit generators The random values produced by [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") originate in a BitGenerator. 
The BitGenerators do not directly provide random numbers and only contain methods used for seeding, getting or setting the state, jumping or advancing the state, and for accessing low-level wrappers for consumption by code that can efficiently access the functions provided, e.g., [numba](https://numba.pydata.org). ## Supported BitGenerators The included BitGenerators are: * PCG-64 - The default. A fast generator that can be advanced by an arbitrary amount. See the documentation for [`advance`](generated/numpy.random.pcg64.advance#numpy.random.PCG64.advance "numpy.random.PCG64.advance"). PCG-64 has a period of \\(2^{128}\\). See the [PCG author’s page](https://www.pcg-random.org/) for more details about this class of PRNG. * PCG-64 DXSM - An upgraded version of PCG-64 with better statistical properties in parallel contexts. See [Upgrading PCG64 with PCG64DXSM](../upgrading-pcg64#upgrading-pcg64) for more information on these improvements. * MT19937 - The standard Python BitGenerator. Adds a [`MT19937.jumped`](generated/numpy.random.mt19937.jumped#numpy.random.MT19937.jumped "numpy.random.MT19937.jumped") function that returns a new generator with state as-if \\(2^{128}\\) draws have been made. * Philox - A counter-based generator capable of being advanced an arbitrary number of steps or generating independent streams. See the [Random123](https://www.deshawresearch.com/resources_random123.html) page for more details about this class of bit generators. * SFC64 - A fast generator based on random invertible mappings. Usually the fastest generator of the four. See the [SFC author’s page](https://pracrand.sourceforge.net/RNG_engines.txt) for (a little) more detail. [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator")([seed]) | Base Class for generic BitGenerators, which provide a stream of random bits based on different algorithms. 
---|--- * [MT19937](mt19937) * [PCG64](pcg64) * [PCG64DXSM](pcg64dxsm) * [Philox](philox) * [SFC64](sfc64) # Seeding and entropy A BitGenerator provides a stream of random values. In order to generate reproducible streams, BitGenerators support setting their initial state via a seed. All of the provided BitGenerators will take an arbitrary-sized non-negative integer, or a list of such integers, as a seed. BitGenerators need to take those inputs and process them into a high-quality internal state for the BitGenerator. All of the BitGenerators in numpy delegate that task to [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence"), which uses hashing techniques to ensure that even low-quality seeds generate high-quality initial states. from numpy.random import PCG64 bg = PCG64(12345678903141592653589793) [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") is designed to be convenient for implementing best practices. We recommend that a stochastic program default to using entropy from the OS so that each run is different. The program should print out or log that entropy. In order to reproduce a past value, the program should allow the user to provide that value through some mechanism (a command-line argument is common) so that the user can then re-enter that entropy to reproduce the result. [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") can take care of everything except for communicating with the user, which is up to you. from numpy.random import PCG64, SeedSequence # Get the user's seed somehow, maybe through `argparse`. # If the user did not provide a seed, it should return `None`. seed = get_user_seed() ss = SeedSequence(seed) print('seed = {}'.format(ss.entropy)) bg = PCG64(ss) We default to using a 128-bit integer using entropy gathered from the OS. 
This is a good amount of entropy to initialize all of the generators that we have in numpy. We do not recommend using small seeds below 32 bits for general use. Using just a small set of seeds to instantiate larger state spaces means that there are some initial states that are impossible to reach. This creates some biases if everyone uses such values. There will not be anything _wrong_ with the results, per se; even a seed of 0 is perfectly fine thanks to the processing that [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") does. If you just need _some_ fixed value for unit tests or debugging, feel free to use whatever seed you like. But if you want to make inferences from the results or publish them, drawing from a larger set of seeds is good practice. If you need to generate a good seed “offline”, then `SeedSequence().entropy` and `secrets.randbits(128)` from the standard library are both convenient options. If you need to run several stochastic simulations in parallel, best practice is to construct a random generator instance for each simulation. To make sure that the random streams have distinct initial states, you can use the `spawn` method of [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence"). 
For instance, here we construct a list of 12 instances: from numpy.random import PCG64, SeedSequence # High quality initial entropy entropy = 0x87351080e25cb0fad77a44a3be03b491 base_seq = SeedSequence(entropy) child_seqs = base_seq.spawn(12) # a list of 12 SeedSequences generators = [PCG64(seq) for seq in child_seqs] If you already have an initial random generator instance, you can shorten the above by using the [`spawn`](generated/numpy.random.bitgenerator.spawn#numpy.random.BitGenerator.spawn "numpy.random.BitGenerator.spawn") method: from numpy.random import PCG64, SeedSequence # High quality initial entropy entropy = 0x87351080e25cb0fad77a44a3be03b491 base_bitgen = PCG64(entropy) generators = base_bitgen.spawn(12) An alternative way is to use the fact that a [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") can be initialized by a tuple of elements. Here we use a base entropy value and an integer `worker_id`: from numpy.random import PCG64, SeedSequence # High quality initial entropy entropy = 0x87351080e25cb0fad77a44a3be03b491 sequences = [SeedSequence((entropy, worker_id)) for worker_id in range(12)] generators = [PCG64(seq) for seq in sequences] Note that the sequences produced by the latter method will be distinct from those constructed via [`spawn`](generated/numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn"). [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence")([entropy, spawn_key, pool_size]) | SeedSequence mixes sources of entropy in a reproducible way to set the initial state for independent and very probably non-overlapping BitGenerators. ---|--- # Mersenne Twister (MT19937) _class_ numpy.random.MT19937(_seed =None_) Container for the Mersenne Twister pseudo-random number generator. 
Parameters: **seed**{None, int, array_like[ints], SeedSequence}, optional A seed to initialize the [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"). If None, then fresh, unpredictable entropy will be pulled from the OS. If an `int` or `array_like[ints]` is passed, then it will be passed to [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to derive the initial [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") state. One may also pass in a [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") instance. #### Notes `MT19937` provides a capsule containing function pointers that produce doubles, and unsigned 32 and 64-bit integers [1]. These are not directly consumable in Python and must be consumed by a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") or similar object that supports low-level access. The Python stdlib module “random” also contains a Mersenne Twister pseudo-random number generator. **State and Seeding** The `MT19937` state vector consists of a 624-element array of 32-bit unsigned integers plus a single integer value between 0 and 624 that indexes the current position within the main array. The input seed is processed by [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to fill the whole state. The first element is reset such that only its most significant bit is set. 
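The state layout described above can be inspected directly through the `state` property; a minimal sketch (the seed value is arbitrary):

```python
from numpy.random import MT19937

bg = MT19937(1234)
state = bg.state                 # dict describing the full PRNG state

key = state['state']['key']      # the 624-element uint32 main array
pos = state['state']['pos']      # index of the current position
print(state['bit_generator'], key.shape, key.dtype, pos)

# Setting the state on a fresh instance restores the exact stream.
bg2 = MT19937()
bg2.state = state
same = bool((bg.random_raw(3) == bg2.random_raw(3)).all())
print(same)
```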
**Parallel Features** The preferred way to use a BitGenerator in parallel applications is to use the [`SeedSequence.spawn`](generated/numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn") method to obtain entropy values, and to use these to generate new BitGenerators: >>> from numpy.random import Generator, MT19937, SeedSequence >>> sg = SeedSequence(1234) >>> rg = [Generator(MT19937(s)) for s in sg.spawn(10)] Another method is to use [`MT19937.jumped`](generated/numpy.random.mt19937.jumped#numpy.random.MT19937.jumped "numpy.random.MT19937.jumped") which advances the state as-if \\(2^{128}\\) random numbers have been generated ([1], [2]). This allows the original sequence to be split so that distinct segments can be used in each worker process. All generators should be chained to ensure that the segments come from the same sequence. >>> from numpy.random import Generator, MT19937, SeedSequence >>> sg = SeedSequence(1234) >>> bit_generator = MT19937(sg) >>> rg = [] >>> for _ in range(10): ... rg.append(Generator(bit_generator)) ... # Chain the BitGenerators ... bit_generator = bit_generator.jumped() **Compatibility Guarantee** `MT19937` makes a guarantee that a fixed seed will always produce the same random integer stream. #### References [1] (1,2) Hiroshi Haramoto, Makoto Matsumoto, and Pierre L’Ecuyer, “A Fast Jump Ahead Algorithm for Linear Recurrences in a Polynomial Space”, Sequences and Their Applications - SETA, 290–298, 2008. [2] Hiroshi Haramoto, Makoto Matsumoto, Takuji Nishimura, François Panneton, Pierre L’Ecuyer, “Efficient Jump Ahead for F2-Linear Random Number Generators”, INFORMS JOURNAL ON COMPUTING, Vol. 20, No. 3, Summer 2008, pp. 385-390. Attributes: **lock: threading.Lock** Lock instance that is shared so that the same bit generator can be used in multiple Generators without corrupting the state. Code that generates values from a bit generator should hold the bit generator’s lock. 
## State [`state`](generated/numpy.random.mt19937.state#numpy.random.MT19937.state "numpy.random.MT19937.state") | Get or set the PRNG state ---|--- ## Parallel generation [`jumped`](generated/numpy.random.mt19937.jumped#numpy.random.MT19937.jumped "numpy.random.MT19937.jumped")([jumps]) | Returns a new bit generator with the state jumped ---|--- ## Extending [`cffi`](generated/numpy.random.mt19937.cffi#numpy.random.MT19937.cffi "numpy.random.MT19937.cffi") | CFFI interface ---|--- [`ctypes`](generated/numpy.random.mt19937.ctypes#numpy.random.MT19937.ctypes "numpy.random.MT19937.ctypes") | ctypes interface # Permuted congruential generator (64-bit, PCG64) _class_ numpy.random.PCG64(_seed =None_) BitGenerator for the PCG-64 pseudo-random number generator. Parameters: **seed**{None, int, array_like[ints], SeedSequence}, optional A seed to initialize the [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"). If None, then fresh, unpredictable entropy will be pulled from the OS. If an `int` or `array_like[ints]` is passed, then it will be passed to [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to derive the initial [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") state. One may also pass in a [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") instance. #### Notes PCG-64 is a 128-bit implementation of O’Neill’s permutation congruential generator ([1], [2]). PCG-64 has a period of \\(2^{128}\\) and supports advancing an arbitrary number of steps as well as \\(2^{127}\\) streams. The specific member of the PCG family that we use is PCG XSL RR 128/64 as described in the paper ([2]). `PCG64` provides a capsule containing function pointers that produce doubles, and unsigned 32 and 64-bit integers. 
These are not directly consumable in Python and must be consumed by a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") or similar object that supports low-level access. Supports the method [`advance`](generated/numpy.random.pcg64.advance#numpy.random.PCG64.advance "numpy.random.PCG64.advance") to advance the RNG an arbitrary number of steps. The state of the PCG-64 RNG is represented by two 128-bit unsigned integers. **State and Seeding** The `PCG64` state vector consists of two unsigned 128-bit values, which are represented externally as Python ints. One is the state of the PRNG, which is advanced by a linear congruential generator (LCG). The second is a fixed odd increment used in the LCG. The input seed is processed by [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to generate both values. The increment is not independently settable. **Parallel Features** The preferred way to use a BitGenerator in parallel applications is to use the [`SeedSequence.spawn`](generated/numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn") method to obtain entropy values, and to use these to generate new BitGenerators: >>> from numpy.random import Generator, PCG64, SeedSequence >>> sg = SeedSequence(1234) >>> rg = [Generator(PCG64(s)) for s in sg.spawn(10)] **Compatibility Guarantee** `PCG64` makes a guarantee that a fixed seed will always produce the same random integer stream. #### References [1] [“PCG, A Family of Better Random Number Generators”](https://www.pcg-random.org/) [2] (1,2) O’Neill, Melissa E. 
[“PCG: A Family of Simple Fast Space-Efficient Statistically Good Algorithms for Random Number Generation”](https://www.cs.hmc.edu/tr/hmc-cs-2014-0905.pdf) ## State [`state`](generated/numpy.random.pcg64.state#numpy.random.PCG64.state "numpy.random.PCG64.state") | Get or set the PRNG state ---|--- ## Parallel generation [`advance`](generated/numpy.random.pcg64.advance#numpy.random.PCG64.advance "numpy.random.PCG64.advance")(delta) | Advance the underlying RNG as-if delta draws have occurred. ---|--- [`jumped`](generated/numpy.random.pcg64.jumped#numpy.random.PCG64.jumped "numpy.random.PCG64.jumped")([jumps]) | Returns a new bit generator with the state jumped. ## Extending [`cffi`](generated/numpy.random.pcg64.cffi#numpy.random.PCG64.cffi "numpy.random.PCG64.cffi") | CFFI interface ---|--- [`ctypes`](generated/numpy.random.pcg64.ctypes#numpy.random.PCG64.ctypes "numpy.random.PCG64.ctypes") | ctypes interface # Permuted congruential generator (64-bit, PCG64 DXSM) _class_ numpy.random.PCG64DXSM(_seed =None_) BitGenerator for the PCG-64 DXSM pseudo-random number generator. Parameters: **seed**{None, int, array_like[ints], SeedSequence}, optional A seed to initialize the [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"). If None, then fresh, unpredictable entropy will be pulled from the OS. If an `int` or `array_like[ints]` is passed, then it will be passed to [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to derive the initial [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") state. One may also pass in a [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") instance. #### Notes PCG-64 DXSM is a 128-bit implementation of O’Neill’s permutation congruential generator ([1], [2]). 
PCG-64 DXSM has a period of \\(2^{128}\\) and supports advancing an arbitrary number of steps as well as \\(2^{127}\\) streams. The specific member of the PCG family that we use is PCG CM DXSM 128/64. It differs from [`PCG64`](pcg64#numpy.random.PCG64 "numpy.random.PCG64") in that it uses the stronger DXSM output function, a 64-bit “cheap multiplier” in the LCG, and outputs from the state before advancing it rather than advance-then-output. `PCG64DXSM` provides a capsule containing function pointers that produce doubles, and unsigned 32 and 64-bit integers. These are not directly consumable in Python and must be consumed by a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") or similar object that supports low-level access. Supports the method [`advance`](generated/numpy.random.pcg64dxsm.advance#numpy.random.PCG64DXSM.advance "numpy.random.PCG64DXSM.advance") to advance the RNG an arbitrary number of steps. The state of the PCG-64 DXSM RNG is represented by two 128-bit unsigned integers. **State and Seeding** The `PCG64DXSM` state vector consists of two unsigned 128-bit values, which are represented externally as Python ints. One is the state of the PRNG, which is advanced by a linear congruential generator (LCG). The second is a fixed odd increment used in the LCG. The input seed is processed by [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to generate both values. The increment is not independently settable. 
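The two 128-bit values described above are visible in the `state` dict as Python ints; a minimal sketch (the seed is arbitrary, and `PCG64DXSM` requires NumPy 1.21+):

```python
from numpy.random import PCG64DXSM

bg = PCG64DXSM(1234)
st = bg.state

lcg_state = st['state']['state']   # 128-bit LCG state as a Python int
inc = st['state']['inc']           # fixed increment, also a Python int
print(st['bit_generator'], lcg_state.bit_length(), inc.bit_length())

# Round-tripping the state dict reproduces the stream exactly.
bg2 = PCG64DXSM()
bg2.state = st
same = bool((bg.random_raw(3) == bg2.random_raw(3)).all())
print(same)
```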
**Parallel Features** The preferred way to use a BitGenerator in parallel applications is to use the [`SeedSequence.spawn`](generated/numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn") method to obtain entropy values, and to use these to generate new BitGenerators: >>> from numpy.random import Generator, PCG64DXSM, SeedSequence >>> sg = SeedSequence(1234) >>> rg = [Generator(PCG64DXSM(s)) for s in sg.spawn(10)] **Compatibility Guarantee** `PCG64DXSM` makes a guarantee that a fixed seed will always produce the same random integer stream. #### References [1] [“PCG, A Family of Better Random Number Generators”](http://www.pcg-random.org/) [2] O’Neill, Melissa E. [“PCG: A Family of Simple Fast Space-Efficient Statistically Good Algorithms for Random Number Generation”](https://www.cs.hmc.edu/tr/hmc-cs-2014-0905.pdf) ## State [`state`](generated/numpy.random.pcg64dxsm.state#numpy.random.PCG64DXSM.state "numpy.random.PCG64DXSM.state") | Get or set the PRNG state ---|--- ## Parallel generation [`advance`](generated/numpy.random.pcg64dxsm.advance#numpy.random.PCG64DXSM.advance "numpy.random.PCG64DXSM.advance")(delta) | Advance the underlying RNG as-if delta draws have occurred. ---|--- [`jumped`](generated/numpy.random.pcg64dxsm.jumped#numpy.random.PCG64DXSM.jumped "numpy.random.PCG64DXSM.jumped")([jumps]) | Returns a new bit generator with the state jumped. ## Extending [`cffi`](generated/numpy.random.pcg64dxsm.cffi#numpy.random.PCG64DXSM.cffi "numpy.random.PCG64DXSM.cffi") | CFFI interface ---|--- [`ctypes`](generated/numpy.random.pcg64dxsm.ctypes#numpy.random.PCG64DXSM.ctypes "numpy.random.PCG64DXSM.ctypes") | ctypes interface # Philox counter-based RNG _class_ numpy.random.Philox(_seed =None_, _counter =None_, _key =None_) Container for the Philox (4x64) pseudo-random number generator. 
Parameters: **seed**{None, int, array_like[ints], SeedSequence}, optional A seed to initialize the [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"). If None, then fresh, unpredictable entropy will be pulled from the OS. If an `int` or `array_like[ints]` is passed, then it will be passed to [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to derive the initial [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") state. One may also pass in a [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") instance. **counter**{None, int, array_like}, optional Counter to use in the Philox state. Can be either a Python int in [0, 2**256) or a 4-element uint64 array. If not provided, the RNG is initialized at 0. **key**{None, int, array_like}, optional Key to use in the Philox state. Unlike `seed`, the value in key is directly set. Can be either a Python int in [0, 2**128) or a 2-element uint64 array. `key` and `seed` cannot both be used. #### Notes Philox is a 64-bit PRNG that uses a counter-based design based on weaker (and faster) versions of cryptographic functions [1]. Instances using different values of the key produce independent sequences. Philox has a period of \\(2^{256} - 1\\) and supports arbitrary advancing and jumping the sequence in increments of \\(2^{128}\\). These features allow multiple non-overlapping sequences to be generated. `Philox` provides a capsule containing function pointers that produce doubles, and unsigned 32 and 64-bit integers. These are not directly consumable in Python and must be consumed by a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") or similar object that supports low-level access. 
**State and Seeding**

The `Philox` state vector consists of a 256-bit value encoded as a 4-element uint64 array and a 128-bit value encoded as a 2-element uint64 array. The first is a counter which is incremented by 1 for every 4 64-bit randoms produced. The second is a key which determines the sequence produced. Using different keys produces independent sequences.

The input `seed` is processed by [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to generate the key. The counter is set to 0. Alternatively, one can omit the `seed` parameter and set the `key` and `counter` directly.

**Parallel Features**

The preferred way to use a BitGenerator in parallel applications is to use the [`SeedSequence.spawn`](generated/numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn") method to obtain entropy values, and to use these to generate new BitGenerators:

    >>> from numpy.random import Generator, Philox, SeedSequence
    >>> sg = SeedSequence(1234)
    >>> rg = [Generator(Philox(s)) for s in sg.spawn(10)]

`Philox` can be used in parallel applications by calling the [`jumped`](generated/numpy.random.philox.jumped#numpy.random.Philox.jumped "numpy.random.Philox.jumped") method to advance the state as-if \\(2^{128}\\) random numbers have been generated. Alternatively, [`advance`](generated/numpy.random.philox.advance#numpy.random.Philox.advance "numpy.random.Philox.advance") can be used to advance the counter for any positive step in [0, 2**256). When using [`jumped`](generated/numpy.random.philox.jumped#numpy.random.Philox.jumped "numpy.random.Philox.jumped"), all generators should be chained to ensure that the segments come from the same sequence.

    >>> from numpy.random import Generator, Philox
    >>> bit_generator = Philox(1234)
    >>> rg = []
    >>> for _ in range(10):
    ...     rg.append(Generator(bit_generator))
    ...     bit_generator = bit_generator.jumped()

Alternatively, `Philox` can be used in parallel applications by using a sequence of distinct keys where each instance uses a different key.

    >>> key = 2**96 + 2**33 + 2**17 + 2**9
    >>> rg = [Generator(Philox(key=key+i)) for i in range(10)]

**Compatibility Guarantee**

`Philox` makes a guarantee that a fixed `seed` will always produce the same random integer stream.

#### References

[1] John K. Salmon, Mark A. Moraes, Ron O. Dror, and David E. Shaw, “Parallel Random Numbers: As Easy as 1, 2, 3,” Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC11), New York, NY: ACM, 2011.

#### Examples

    >>> from numpy.random import Generator, Philox
    >>> rg = Generator(Philox(1234))
    >>> rg.standard_normal()
    0.123  # random

Attributes:

**lock: threading.Lock**

Lock instance that is shared so that the same bit generator can be used in multiple Generators without corrupting the state. Code that generates values from a bit generator should hold the bit generator’s lock.

## State

[`state`](generated/numpy.random.philox.state#numpy.random.Philox.state "numpy.random.Philox.state") | Get or set the PRNG state
---|---

## Parallel generation

[`advance`](generated/numpy.random.philox.advance#numpy.random.Philox.advance "numpy.random.Philox.advance")(delta) | Advance the underlying RNG as-if delta draws have occurred.
---|---
[`jumped`](generated/numpy.random.philox.jumped#numpy.random.Philox.jumped "numpy.random.Philox.jumped")([jumps]) | Returns a new bit generator with the state jumped.

## Extending

[`cffi`](generated/numpy.random.philox.cffi#numpy.random.Philox.cffi "numpy.random.Philox.cffi") | CFFI interface
---|---
[`ctypes`](generated/numpy.random.philox.ctypes#numpy.random.Philox.ctypes "numpy.random.Philox.ctypes") | ctypes interface

# SFC64 Small Fast Chaotic PRNG

_class_ numpy.random.SFC64(_seed=None_)

BitGenerator for Chris Doty-Humphrey’s Small Fast Chaotic PRNG.
Parameters:

**seed**{None, int, array_like[ints], SeedSequence}, optional

A seed to initialize the [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"). If None, then fresh, unpredictable entropy will be pulled from the OS. If an `int` or `array_like[ints]` is passed, then it will be passed to [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to derive the initial [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") state. One may also pass in a [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") instance.

#### Notes

`SFC64` is a 256-bit implementation of Chris Doty-Humphrey’s Small Fast Chaotic PRNG ([1]). `SFC64` has a few different cycles that one might be on, depending on the seed; the expected period will be about \\(2^{255}\\) ([2]). `SFC64` incorporates a 64-bit counter which means that the absolute minimum cycle length is \\(2^{64}\\) and that distinct seeds will not run into each other for at least \\(2^{64}\\) iterations.

`SFC64` provides a capsule containing function pointers that produce doubles, and unsigned 32- and 64-bit integers. These are not directly consumable in Python and must be consumed by a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") or similar object that supports low-level access.

**State and Seeding**

The `SFC64` state vector consists of 4 unsigned 64-bit values. The last is a 64-bit counter that increments by 1 each iteration. The input seed is processed by [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to generate the first 3 values, then the `SFC64` algorithm is iterated a small number of times to mix.

**Compatibility Guarantee**

`SFC64` makes a guarantee that a fixed seed will always produce the same random integer stream.
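As with the other bit generators, the state vector described above is exposed through the `state` property, which can be saved and restored to replay a stream; a minimal sketch:

```python
import numpy as np
from numpy.random import Generator, SFC64

bg = SFC64(1234)
gen = Generator(bg)

saved = bg.state          # dict snapshot of the SFC64 state vector
first = gen.random(3)

bg.state = saved          # restore -> the wrapped Generator replays the draws
replay = gen.random(3)
```

Round-tripping the state dict this way is the supported mechanism for checkpointing a stream mid-run.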
#### References

[1] [“PractRand”](https://pracrand.sourceforge.net/RNG_engines.txt)

[2] [“Random Invertible Mapping Statistics”](https://www.pcg-random.org/posts/random-invertible-mapping-statistics.html)

## State

[`state`](generated/numpy.random.sfc64.state#numpy.random.SFC64.state "numpy.random.SFC64.state") | Get or set the PRNG state
---|---

## Extending

[`cffi`](generated/numpy.random.sfc64.cffi#numpy.random.SFC64.cffi "numpy.random.SFC64.cffi") | CFFI interface
---|---
[`ctypes`](generated/numpy.random.sfc64.ctypes#numpy.random.SFC64.ctypes "numpy.random.SFC64.ctypes") | ctypes interface

# C API for random

Access to various distributions below is available via Cython or C-wrapper libraries like CFFI. All the functions accept a `bitgen_t` as their first argument. To access these from Cython or C, you must link with the `npyrandom` static library which is part of the NumPy distribution, located in `numpy/random/lib`. Note that you must _also_ link with `npymath`, see [Linking against the core math library in an extension](../c-api/coremath#linking-npymath).

type bitgen_t

The `bitgen_t` holds the current state of the BitGenerator and pointers to functions that return standard C types while advancing the state.

    struct bitgen:
        void *state
        npy_uint64 (*next_uint64)(void *st) nogil
        uint32_t (*next_uint32)(void *st) nogil
        double (*next_double)(void *st) nogil
        npy_uint64 (*next_raw)(void *st) nogil

    ctypedef bitgen bitgen_t

See [Extending](extending) for examples of using these functions.

The functions are named with the following conventions:

* “standard” refers to the reference values for any parameters. For instance “standard_uniform” means a uniform distribution on the interval `0.0` to `1.0`.

* “fill” functions will fill the provided `out` with `cnt` values.

* The functions without “standard” in their name require additional parameters to describe the distributions.
* Functions with `inv` in their name are based on the slower inverse method rather than the significantly faster ziggurat lookup algorithm. The non-ziggurat variants are used in corner cases and for legacy compatibility.

double random_standard_uniform(bitgen_t *bitgen_state)

void random_standard_uniform_fill(bitgen_t *bitgen_state, [npy_intp](../c-api/dtype#c.npy_intp "npy_intp") cnt, double *out)

double random_standard_exponential(bitgen_t *bitgen_state)

void random_standard_exponential_fill(bitgen_t *bitgen_state, [npy_intp](../c-api/dtype#c.npy_intp "npy_intp") cnt, double *out)

void random_standard_exponential_inv_fill(bitgen_t *bitgen_state, [npy_intp](../c-api/dtype#c.npy_intp "npy_intp") cnt, double *out)

double random_standard_normal(bitgen_t *bitgen_state)

void random_standard_normal_fill(bitgen_t *bitgen_state, [npy_intp](../c-api/dtype#c.npy_intp "npy_intp") count, double *out)

void random_standard_normal_fill_f(bitgen_t *bitgen_state, [npy_intp](../c-api/dtype#c.npy_intp "npy_intp") count, float *out)

double random_standard_gamma(bitgen_t *bitgen_state, double shape)

float random_standard_uniform_f(bitgen_t *bitgen_state)

void random_standard_uniform_fill_f(bitgen_t *bitgen_state, [npy_intp](../c-api/dtype#c.npy_intp "npy_intp") cnt, float *out)

float random_standard_exponential_f(bitgen_t *bitgen_state)

void random_standard_exponential_fill_f(bitgen_t *bitgen_state, [npy_intp](../c-api/dtype#c.npy_intp "npy_intp") cnt, float *out)

void random_standard_exponential_inv_fill_f(bitgen_t *bitgen_state, [npy_intp](../c-api/dtype#c.npy_intp "npy_intp") cnt, float *out)

float random_standard_normal_f(bitgen_t *bitgen_state)

float random_standard_gamma_f(bitgen_t *bitgen_state, float shape)

double random_normal(bitgen_t *bitgen_state, double loc, double scale)

double random_gamma(bitgen_t *bitgen_state, double shape, double scale)

float random_gamma_f(bitgen_t *bitgen_state, float shape, float scale)

double random_exponential(bitgen_t *bitgen_state, double scale)

double random_uniform(bitgen_t *bitgen_state, double lower, double range)

double random_beta(bitgen_t *bitgen_state, double a, double b)

double random_chisquare(bitgen_t *bitgen_state, double df)

double random_f(bitgen_t *bitgen_state, double dfnum, double dfden)

double random_standard_cauchy(bitgen_t *bitgen_state)

double random_pareto(bitgen_t *bitgen_state, double a)

double random_weibull(bitgen_t *bitgen_state, double a)

double random_power(bitgen_t *bitgen_state, double a)

double random_laplace(bitgen_t *bitgen_state, double loc, double scale)

double random_gumbel(bitgen_t *bitgen_state, double loc, double scale)

double random_logistic(bitgen_t *bitgen_state, double loc, double scale)

double random_lognormal(bitgen_t *bitgen_state, double mean, double sigma)

double random_rayleigh(bitgen_t *bitgen_state, double mode)

double random_standard_t(bitgen_t *bitgen_state, double df)

double random_noncentral_chisquare(bitgen_t *bitgen_state, double df, double nonc)

double random_noncentral_f(bitgen_t *bitgen_state, double dfnum, double dfden, double nonc)

double random_wald(bitgen_t *bitgen_state, double mean, double scale)

double random_vonmises(bitgen_t *bitgen_state, double mu, double kappa)

double random_triangular(bitgen_t *bitgen_state, double left, double mode, double right)

[npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") random_poisson(bitgen_t *bitgen_state, double lam)

[npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") random_negative_binomial(bitgen_t *bitgen_state, double n, double p)

type binomial_t

    typedef struct s_binomial_t {
      int has_binomial; /* !=0: following parameters initialized for binomial */
      double psave;
      RAND_INT_TYPE nsave;
      double r;
      double q;
      double fm;
      RAND_INT_TYPE m;
      double p1;
      double xm;
      double xl;
      double xr;
      double c;
      double laml;
      double lamr;
      double p2;
      double p3;
      double p4;
    } binomial_t;

[npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") random_binomial(bitgen_t *bitgen_state, double p, [npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") n, binomial_t *binomial)

[npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") random_logseries(bitgen_t *bitgen_state, double p)

[npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") random_geometric_search(bitgen_t *bitgen_state, double p)

[npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") random_geometric_inversion(bitgen_t *bitgen_state, double p)

[npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") random_geometric(bitgen_t *bitgen_state, double p)

[npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") random_zipf(bitgen_t *bitgen_state, double a)

[npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") random_hypergeometric(bitgen_t *bitgen_state, [npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") good, [npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") bad, [npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") sample)

[npy_uint64](../c-api/dtype#c.npy_uint64 "npy_uint64") random_interval(bitgen_t *bitgen_state, [npy_uint64](../c-api/dtype#c.npy_uint64 "npy_uint64") max)

void random_multinomial(bitgen_t *bitgen_state, [npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") n, [npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") *mnix, double *pix, [npy_intp](../c-api/dtype#c.npy_intp "npy_intp") d, binomial_t *binomial)

int random_multivariate_hypergeometric_count(bitgen_t *bitgen_state, [npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") total, size_t num_colors, [npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") *colors, [npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") nsample, size_t num_variates, [npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") *variates)

void random_multivariate_hypergeometric_marginals(bitgen_t *bitgen_state, [npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") total, size_t num_colors, [npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") *colors, [npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") nsample, size_t num_variates, [npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") *variates)

Generate a single integer

[npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") random_positive_int64(bitgen_t *bitgen_state)

[npy_int32](../c-api/dtype#c.npy_int32 "npy_int32") random_positive_int32(bitgen_t *bitgen_state)

[npy_int64](../c-api/dtype#c.npy_int64 "npy_int64") random_positive_int(bitgen_t *bitgen_state)

[npy_uint64](../c-api/dtype#c.npy_uint64 "npy_uint64") random_uint(bitgen_t *bitgen_state)

Generate random uint64 numbers in closed interval [off, off + rng].

[npy_uint64](../c-api/dtype#c.npy_uint64 "npy_uint64") random_bounded_uint64(bitgen_t *bitgen_state, [npy_uint64](../c-api/dtype#c.npy_uint64 "npy_uint64") off, [npy_uint64](../c-api/dtype#c.npy_uint64 "npy_uint64") rng, [npy_uint64](../c-api/dtype#c.npy_uint64 "npy_uint64") mask, bool use_masked)

# Compatibility policy

[`numpy.random`](index#module-numpy.random "numpy.random") has a somewhat stricter compatibility policy than the rest of NumPy. Users of pseudorandomness often have use cases for being able to reproduce runs in fine detail given the same seed (so-called “stream compatibility”), and so we try to balance those needs with the flexibility to enhance our algorithms. [NEP 19](https://numpy.org/neps/nep-0019-rng-policy.html#nep19 "\(in NumPy Enhancement Proposals\)") describes the evolution of this policy.

The main kind of compatibility that we enforce is stream-compatibility from run to run under certain conditions. If you create a [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") with the same [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"), with the same seed, perform the same sequence of method calls with the same arguments, on the same build of `numpy`, in the same environment, on the same machine, you should get the same stream of numbers. Note that these conditions are very strict. There are a number of factors outside of NumPy’s control that limit our ability to guarantee much more than this. For example, different CPUs implement floating point arithmetic differently, and this can cause differences in certain edge cases that cascade to the rest of the stream.
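The run-to-run guarantee above can be checked directly: under the stated conditions (same seed, same build, same machine), two independently constructed generators produce bit-identical streams.

```python
import numpy as np
from numpy.random import Generator, PCG64

# same BitGenerator, same seed, same sequence of calls -> same stream
r1 = Generator(PCG64(12345)).standard_normal(4)
r2 = Generator(PCG64(12345)).standard_normal(4)
```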
[`Generator.multivariate_normal`](generated/numpy.random.generator.multivariate_normal#numpy.random.Generator.multivariate_normal "numpy.random.Generator.multivariate_normal"), for another example, uses a matrix decomposition from [`numpy.linalg`](../routines.linalg#module-numpy.linalg "numpy.linalg"). Even on the same platform, a different build of `numpy` may use a different version of this matrix decomposition algorithm from the LAPACK that it links to, causing [`Generator.multivariate_normal`](generated/numpy.random.generator.multivariate_normal#numpy.random.Generator.multivariate_normal "numpy.random.Generator.multivariate_normal") to return completely different (but equally valid!) results. We strive to prefer algorithms that are more resistant to these effects, but this is always imperfect.

Note

Most of the [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") methods allow you to draw multiple values from a distribution as arrays. The requested size of this array is a parameter, for the purposes of the above policy. Calling `rng.random()` 5 times is not _guaranteed_ to give the same numbers as `rng.random(5)`. We reserve the ability to decide to use different algorithms for different-sized blocks. In practice, this happens rarely.

Like the rest of NumPy, we generally maintain API source compatibility from version to version. If we _must_ make an API-breaking change, then we will only do so with an appropriate deprecation period and warnings, according to [general NumPy policy](https://numpy.org/neps/nep-0023-backwards-compatibility.html#nep23 "\(in NumPy Enhancement Proposals\)").

Breaking stream-compatibility in order to introduce new features or improve performance in [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") or [`default_rng`](generator#numpy.random.default_rng "numpy.random.default_rng") will be _allowed_ with _caution_.
Such changes will be considered features, and as such will be no faster than the standard release cadence of features (i.e. on `X.Y` releases, never `X.Y.Z`). Slowness will not be considered a bug for this purpose. Correctness bug fixes that break stream-compatibility can happen on bugfix releases, per usual, but developers should consider if they can wait until the next feature release. We encourage developers to weigh users’ pain from the break in stream-compatibility heavily against the improvements. One example of a worthwhile improvement would be to change algorithms for a significant increase in performance, for example, moving from the [Box-Muller transform](https://en.wikipedia.org/wiki/Box%E2%80%93Muller_transform) method of Gaussian variate generation to the faster [Ziggurat algorithm](https://en.wikipedia.org/wiki/Ziggurat_algorithm). An example of a discouraged improvement would be tweaking the Ziggurat tables just a little bit for a small performance improvement.

Note

In particular, [`default_rng`](generator#numpy.random.default_rng "numpy.random.default_rng") is allowed to change the default [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") that it uses (again, with _caution_ and plenty of advance warning).

In general, [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") classes have stronger guarantees of version-to-version stream compatibility. This allows them to be a firmer building block for downstream users that need it. Their limited API surface makes it easier for them to maintain this compatibility from version to version. See the docstrings of each [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") class for their individual compatibility guarantees.
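A practical consequence of the stronger BitGenerator-level guarantee: code that needs long-term stream stability can construct its BitGenerator explicitly instead of relying on `default_rng`, whose default BitGenerator is allowed to change; a minimal sketch:

```python
from numpy.random import Generator, PCG64, default_rng

# default_rng may switch its underlying BitGenerator in a future release...
rng_default = default_rng(42)

# ...so pin the BitGenerator explicitly when stream stability matters
rng_pinned = Generator(PCG64(42))
```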
The legacy [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") and the [associated convenience functions](legacy#functions-in-numpy-random) have a stricter version-to-version compatibility guarantee. For reasons outlined in [NEP 19](https://numpy.org/neps/nep-0019-rng-policy.html#nep19 "\(in NumPy Enhancement Proposals\)"), we had made stronger promises about their version-to-version stability early in NumPy’s development. There are still some limited use cases for this kind of compatibility (like generating data for tests), so we maintain as much compatibility as we can. There will be no more modifications to [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState"), not even to fix correctness bugs. There are a few gray areas where we can make minor fixes to keep [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") working without segfaulting as NumPy’s internals change, and some docstring fixes. However, the previously-mentioned caveats about the variability from machine to machine and build to build apply to [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") just as much as they do to [`Generator`](generator#numpy.random.Generator "numpy.random.Generator").
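The test-data use case mentioned above relies on the legacy stream being reproducible in the usual way; a minimal check:

```python
import numpy as np

# two RandomState instances with the same seed produce the same legacy stream
a = np.random.RandomState(1234).standard_normal(3)
b = np.random.RandomState(1234).standard_normal(3)
```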
# Extending via CFFI

    """
    Use cffi to access any of the underlying C functions from distributions.h
    """
    import os
    import numpy as np
    import cffi
    from .parse import parse_distributions_h
    ffi = cffi.FFI()

    inc_dir = os.path.join(np.get_include(), 'numpy')

    # Basic numpy types
    ffi.cdef('''
        typedef intptr_t npy_intp;
        typedef unsigned char npy_bool;
    ''')

    parse_distributions_h(ffi, inc_dir)

    lib = ffi.dlopen(np.random._generator.__file__)

    # Compare the distributions.h random_standard_normal_fill to
    # Generator.standard_random
    bit_gen = np.random.PCG64()
    rng = np.random.Generator(bit_gen)
    state = bit_gen.state

    interface = rng.bit_generator.cffi
    n = 100
    vals_cffi = ffi.new('double[%d]' % n)
    lib.random_standard_normal_fill(interface.bit_generator, n, vals_cffi)

    # reset the state
    bit_gen.state = state

    vals = rng.standard_normal(n)

    for i in range(n):
        assert vals[i] == vals_cffi[i]

    # extending.pyx
    #cython: language_level=3
    from libc.stdint cimport uint32_t
    from cpython.pycapsule cimport PyCapsule_IsValid, PyCapsule_GetPointer

    import numpy as np
    cimport numpy as np
    cimport cython

    from numpy.random cimport bitgen_t
    from numpy.random import PCG64

    np.import_array()


    @cython.boundscheck(False)
    @cython.wraparound(False)
    def uniform_mean(Py_ssize_t n):
        cdef Py_ssize_t i
        cdef bitgen_t *rng
        cdef const char *capsule_name = "BitGenerator"
        cdef double[::1] random_values
        cdef np.ndarray randoms

        x = PCG64()
        capsule = x.capsule
        if not PyCapsule_IsValid(capsule, capsule_name):
            raise ValueError("Invalid pointer to anon_func_state")
        rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)
        random_values = np.empty(n)
        # Best practice is to acquire the lock whenever generating random values.
        # This prevents other threads from modifying the state. Acquiring the lock
        # is only necessary if the GIL is also released, as in this example.
        with x.lock, nogil:
            for i in range(n):
                random_values[i] = rng.next_double(rng.state)
        randoms = np.asarray(random_values)
        return randoms.mean()


    # This function is declared nogil so it can be used without the GIL below
    cdef uint32_t bounded_uint(uint32_t lb, uint32_t ub, bitgen_t *rng) nogil:
        cdef uint32_t mask, delta, val
        mask = delta = ub - lb
        mask |= mask >> 1
        mask |= mask >> 2
        mask |= mask >> 4
        mask |= mask >> 8
        mask |= mask >> 16

        val = rng.next_uint32(rng.state) & mask
        while val > delta:
            val = rng.next_uint32(rng.state) & mask

        return lb + val


    @cython.boundscheck(False)
    @cython.wraparound(False)
    def bounded_uints(uint32_t lb, uint32_t ub, Py_ssize_t n):
        cdef Py_ssize_t i
        cdef bitgen_t *rng
        cdef uint32_t[::1] out
        cdef const char *capsule_name = "BitGenerator"

        x = PCG64()
        out = np.empty(n, dtype=np.uint32)
        capsule = x.capsule

        if not PyCapsule_IsValid(capsule, capsule_name):
            raise ValueError("Invalid pointer to anon_func_state")
        rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)

        with x.lock, nogil:
            for i in range(n):
                out[i] = bounded_uint(lb, ub, rng)
        return np.asarray(out)

    # extending_distributions.pyx
    #cython: language_level=3
    """
    This file shows how to use a BitGenerator to create a distribution.
    """
    import numpy as np
    cimport numpy as np
    cimport cython
    from cpython.pycapsule cimport PyCapsule_IsValid, PyCapsule_GetPointer
    from libc.stdint cimport uint16_t, uint64_t
    from numpy.random cimport bitgen_t
    from numpy.random import PCG64
    from numpy.random.c_distributions cimport (
        random_standard_uniform_fill, random_standard_uniform_fill_f)


    @cython.boundscheck(False)
    @cython.wraparound(False)
    def uniforms(Py_ssize_t n):
        """
        Create an array of `n` uniformly distributed doubles.
        A 'real' distribution would want to process the values into
        some non-uniform distribution
        """
        cdef Py_ssize_t i
        cdef bitgen_t *rng
        cdef const char *capsule_name = "BitGenerator"
        cdef double[::1] random_values

        x = PCG64()
        capsule = x.capsule
        # Optional check that the capsule is from a BitGenerator
        if not PyCapsule_IsValid(capsule, capsule_name):
            raise ValueError("Invalid pointer to anon_func_state")
        # Cast the pointer
        rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)
        random_values = np.empty(n, dtype='float64')
        with x.lock, nogil:
            for i in range(n):
                # Call the function
                random_values[i] = rng.next_double(rng.state)
        randoms = np.asarray(random_values)

        return randoms


    # cython example 2
    @cython.boundscheck(False)
    @cython.wraparound(False)
    def uint10_uniforms(Py_ssize_t n):
        """Uniform 10 bit integers stored as 16-bit unsigned integers"""
        cdef Py_ssize_t i
        cdef bitgen_t *rng
        cdef const char *capsule_name = "BitGenerator"
        cdef uint16_t[::1] random_values
        cdef int bits_remaining
        cdef int width = 10
        cdef uint64_t buff, mask = 0x3FF

        x = PCG64()
        capsule = x.capsule
        if not PyCapsule_IsValid(capsule, capsule_name):
            raise ValueError("Invalid pointer to anon_func_state")
        rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)
        random_values = np.empty(n, dtype='uint16')

        # Best practice is to release GIL and acquire the lock
        bits_remaining = 0
        with x.lock, nogil:
            for i in range(n):
                if bits_remaining < width:
                    buff = rng.next_uint64(rng.state)
                random_values[i] = buff & mask
                buff >>= width

        randoms = np.asarray(random_values)
        return randoms


    # cython example 3
    def uniforms_ex(bit_generator, Py_ssize_t n, dtype=np.float64):
        """
        Create an array of `n` uniformly distributed doubles via a "fill" function.

        A 'real' distribution would want to process the values into
        some non-uniform distribution

        Parameters
        ----------
        bit_generator: BitGenerator instance
        n: int
            Output vector length
        dtype: {str, dtype}, optional
            Desired dtype, either 'd' (or 'float64') or 'f' (or 'float32'). The
            default dtype value is 'd'
        """
        cdef Py_ssize_t i
        cdef bitgen_t *rng
        cdef const char *capsule_name = "BitGenerator"
        cdef np.ndarray randoms

        capsule = bit_generator.capsule
        # Optional check that the capsule is from a BitGenerator
        if not PyCapsule_IsValid(capsule, capsule_name):
            raise ValueError("Invalid pointer to anon_func_state")
        # Cast the pointer
        rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)

        _dtype = np.dtype(dtype)
        randoms = np.empty(n, dtype=_dtype)
        if _dtype == np.float32:
            with bit_generator.lock:
                random_standard_uniform_fill_f(rng, n, <float *> np.PyArray_DATA(randoms))
        elif _dtype == np.float64:
            with bit_generator.lock:
                random_standard_uniform_fill(rng, n, <double *> np.PyArray_DATA(randoms))
        else:
            raise TypeError('Unsupported dtype %r for random' % _dtype)
        return randoms

# Extending numpy.random via Cython

Starting with NumPy 1.26.0, Meson is the default build system for NumPy. See [Status of numpy.distutils and migration advice](../../../distutils_status_migration#distutils-status-migration).
* [meson.build](meson.build)
* [extending.pyx](extending.pyx)
* [extending_distributions.pyx](extending_distributions.pyx)

    # meson.build
    project('random-build-examples', 'c', 'cpp', 'cython')

    py_mod = import('python')
    py3 = py_mod.find_installation(pure: false)

    cc = meson.get_compiler('c')
    cy = meson.get_compiler('cython')

    # Keep synced with pyproject.toml
    if not cy.version().version_compare('>=3.0.6')
      error('tests requires Cython >= 3.0.6')
    endif

    base_cython_args = []
    if cy.version().version_compare('>=3.1.0')
      base_cython_args += ['-Xfreethreading_compatible=True']
    endif

    _numpy_abs = run_command(py3, ['-c',
                 'import os; os.chdir(".."); import numpy; print(os.path.abspath(numpy.get_include() + "../../.."))'],
                 check: true).stdout().strip()

    npymath_path = _numpy_abs / '_core' / 'lib'
    npy_include_path = _numpy_abs / '_core' / 'include'
    npyrandom_path = _numpy_abs / 'random' / 'lib'
    npymath_lib = cc.find_library('npymath', dirs: npymath_path)
    npyrandom_lib = cc.find_library('npyrandom', dirs: npyrandom_path)

    py3.extension_module(
        'extending_distributions',
        'extending_distributions.pyx',
        install: false,
        include_directories: [npy_include_path],
        dependencies: [npyrandom_lib, npymath_lib],
        cython_args: base_cython_args,
    )
    py3.extension_module(
        'extending',
        'extending.pyx',
        install: false,
        include_directories: [npy_include_path],
        dependencies: [npyrandom_lib, npymath_lib],
        cython_args: base_cython_args,
    )
    py3.extension_module(
        'extending_cpp',
        'extending_distributions.pyx',
        install: false,
        override_options: ['cython_language=cpp'],
        cython_args: base_cython_args + ['--module-name', 'extending_cpp'],
        include_directories: [npy_include_path],
        dependencies: [npyrandom_lib, npymath_lib],
    )

# Extending via Numba

    import numpy as np
    import numba as nb

    from numpy.random import PCG64
    from timeit import timeit

    bit_gen = PCG64()
    next_d = bit_gen.cffi.next_double
    state_addr = bit_gen.cffi.state_address

    def normals(n, state):
        out = np.empty(n)
        for i in range((n + 1) // 2):
            x1 = 2.0 * next_d(state) - 1.0
            x2 = 2.0 * next_d(state) - 1.0
            r2 = x1 * x1 + x2 * x2
            while r2 >= 1.0 or r2 == 0.0:
                x1 = 2.0 * next_d(state) - 1.0
                x2 = 2.0 * next_d(state) - 1.0
                r2 = x1 * x1 + x2 * x2
            f = np.sqrt(-2.0 * np.log(r2) / r2)
            out[2 * i] = f * x1
            if 2 * i + 1 < n:
                out[2 * i + 1] = f * x2
        return out

    # Compile using Numba
    normalsj = nb.jit(normals, nopython=True)
    # Must use state address not state with numba
    n = 10000

    def numbacall():
        return normalsj(n, state_addr)

    rg = np.random.Generator(PCG64())

    def numpycall():
        return rg.normal(size=n)

    # Check that the functions work
    r1 = numbacall()
    r2 = numpycall()
    assert r1.shape == (n,)
    assert r1.shape == r2.shape

    t1 = timeit(numbacall, number=1000)
    print(f'{t1:.2f} secs for {n} PCG64 (Numba/PCG64) gaussian randoms')
    t2 = timeit(numpycall, number=1000)
    print(f'{t2:.2f} secs for {n} PCG64 (NumPy/PCG64) gaussian randoms')

    # example 2
    next_u32 = bit_gen.ctypes.next_uint32
    ctypes_state = bit_gen.ctypes.state

    @nb.jit(nopython=True)
    def bounded_uint(lb, ub, state):
        mask = delta = ub - lb
        mask |= mask >> 1
        mask |= mask >> 2
        mask |= mask >> 4
        mask |= mask >> 8
        mask |= mask >> 16

        val = next_u32(state) & mask
        while val > delta:
            val = next_u32(state) & mask

        return lb + val

    print(bounded_uint(323, 2394691, ctypes_state.value))

    @nb.jit(nopython=True)
    def bounded_uints(lb, ub, n, state):
        out = np.empty(n, dtype=np.uint32)
        for i in range(n):
            out[i] = bounded_uint(lb, ub, state)

    bounded_uints(323, 2394691, 10000000, ctypes_state.value)

# Extending via Numba and CFFI

    r"""
    Building the required library in this example requires a source distribution
    of NumPy or clone of the NumPy git repository since distributions.c is not
    included in binary distributions.

    On *nix, execute in numpy/random/src/distributions

    export PYTHON_VERSION=3.8 # Python version
    export PYTHON_INCLUDE=#path to Python's include folder, usually \
        ${PYTHON_HOME}/include/python${PYTHON_VERSION}m
    export NUMPY_INCLUDE=#path to numpy's include folder, usually \
        ${PYTHON_HOME}/lib/python${PYTHON_VERSION}/site-packages/numpy/_core/include
    gcc -shared -o libdistributions.so -fPIC distributions.c \
        -I${NUMPY_INCLUDE} -I${PYTHON_INCLUDE}
    mv libdistributions.so ../../_examples/numba/

    On Windows

    rem PYTHON_HOME and PYTHON_VERSION are setup dependent, this is an example
    set PYTHON_HOME=c:\Anaconda
    set PYTHON_VERSION=38
    cl.exe /LD .\distributions.c -DDLL_EXPORT \
        -I%PYTHON_HOME%\lib\site-packages\numpy\_core\include \
        -I%PYTHON_HOME%\include %PYTHON_HOME%\libs\python%PYTHON_VERSION%.lib
    move distributions.dll ../../_examples/numba/
    """
    import os

    import numba as nb
    import numpy as np
    from cffi import FFI

    from numpy.random import PCG64

    ffi = FFI()
    if os.path.exists('./distributions.dll'):
        lib = ffi.dlopen('./distributions.dll')
    elif os.path.exists('./libdistributions.so'):
        lib = ffi.dlopen('./libdistributions.so')
    else:
        raise RuntimeError('Required DLL/so file was not found.')

    ffi.cdef("""
    double random_standard_normal(void *bitgen_state);
    """)
    x = PCG64()
    xffi = x.cffi
    bit_generator = xffi.bit_generator

    random_standard_normal = lib.random_standard_normal

    def normals(n, bit_generator):
        out = np.empty(n)
        for i in range(n):
            out[i] = random_standard_normal(bit_generator)
        return out

    normalsj = nb.jit(normals, nopython=True)

    # Numba requires a memory address for void *
    # Can also get address from x.ctypes.bit_generator.value
    bit_generator_address = int(ffi.cast('uintptr_t', bit_generator))

    norm = normalsj(1000, bit_generator_address)
    print(norm[:12])

# Extending

The [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator")s have been designed to be extendable using standard tools for high-performance
Python – numba and Cython. The [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") object can also be used with user-provided [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator")s as long as these export a small set of required functions.

## Numba

Numba can be used with either CTypes or CFFI. The current iteration of the [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator")s all export a small set of functions through both interfaces. This example shows how Numba can be used to produce Gaussian samples using a pure Python implementation which is then compiled. The random numbers are provided by `cffi.next_double`.

import numpy as np
import numba as nb
from numpy.random import PCG64
from timeit import timeit

bit_gen = PCG64()
next_d = bit_gen.cffi.next_double
state_addr = bit_gen.cffi.state_address

def normals(n, state):
    out = np.empty(n)
    for i in range((n + 1) // 2):
        x1 = 2.0 * next_d(state) - 1.0
        x2 = 2.0 * next_d(state) - 1.0
        r2 = x1 * x1 + x2 * x2
        while r2 >= 1.0 or r2 == 0.0:
            x1 = 2.0 * next_d(state) - 1.0
            x2 = 2.0 * next_d(state) - 1.0
            r2 = x1 * x1 + x2 * x2
        f = np.sqrt(-2.0 * np.log(r2) / r2)
        out[2 * i] = f * x1
        if 2 * i + 1 < n:
            out[2 * i + 1] = f * x2
    return out

# Compile using Numba
normalsj = nb.jit(normals, nopython=True)
# Must use the state address, not the state, with Numba
n = 10000

def numbacall():
    return normalsj(n, state_addr)

rg = np.random.Generator(PCG64())

def numpycall():
    return rg.normal(size=n)

# Check that the functions work
r1 = numbacall()
r2 = numpycall()
assert r1.shape == (n,)
assert r1.shape == r2.shape

t1 = timeit(numbacall, number=1000)
print(f'{t1:.2f} secs for {n} PCG64 (Numba/PCG64) gaussian randoms')
t2 = timeit(numpycall, number=1000)
print(f'{t2:.2f} secs for {n} PCG64 (NumPy/PCG64) gaussian randoms')

Both CTypes and CFFI allow the more complicated distributions to be used
directly in Numba after compiling the file distributions.c into a `DLL` or `so`. An example showing the use of a more complicated distribution is in the Examples section below.

## Cython

Cython can be used to unpack the `PyCapsule` provided by a [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"). This example uses [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") and the example from above. The usual caveats for writing high-performance code using Cython – removing bounds checks and wraparound, providing array alignment information – still apply.

#cython: language_level=3
"""
This file shows how to use a BitGenerator to create a distribution.
"""
import numpy as np
cimport numpy as np
cimport cython
from cpython.pycapsule cimport PyCapsule_IsValid, PyCapsule_GetPointer
from libc.stdint cimport uint16_t, uint64_t
from numpy.random cimport bitgen_t
from numpy.random import PCG64
from numpy.random.c_distributions cimport (
    random_standard_uniform_fill, random_standard_uniform_fill_f)


@cython.boundscheck(False)
@cython.wraparound(False)
def uniforms(Py_ssize_t n):
    """
    Create an array of `n` uniformly distributed doubles.
A 'real' distribution would want to process the values into
    some non-uniform distribution
    """
    cdef Py_ssize_t i
    cdef bitgen_t *rng
    cdef const char *capsule_name = "BitGenerator"
    cdef double[::1] random_values

    x = PCG64()
    capsule = x.capsule
    # Optional check that the capsule is from a BitGenerator
    if not PyCapsule_IsValid(capsule, capsule_name):
        raise ValueError("Invalid pointer to anon_func_state")
    # Cast the pointer
    rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)
    random_values = np.empty(n, dtype='float64')
    with x.lock, nogil:
        for i in range(n):
            # Call the function
            random_values[i] = rng.next_double(rng.state)
    randoms = np.asarray(random_values)

    return randoms

The [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") can also be directly accessed using the members of the `bitgen_t` struct.

@cython.boundscheck(False)
@cython.wraparound(False)
def uint10_uniforms(Py_ssize_t n):
    """Uniform 10 bit integers stored as 16-bit unsigned integers"""
    cdef Py_ssize_t i
    cdef bitgen_t *rng
    cdef const char *capsule_name = "BitGenerator"
    cdef uint16_t[::1] random_values
    cdef int bits_remaining
    cdef int width = 10
    cdef uint64_t buff, mask = 0x3FF

    x = PCG64()
    capsule = x.capsule
    if not PyCapsule_IsValid(capsule, capsule_name):
        raise ValueError("Invalid pointer to anon_func_state")
    rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)

    random_values = np.empty(n, dtype='uint16')
    # Best practice is to release the GIL and acquire the lock
    bits_remaining = 0
    with x.lock, nogil:
        for i in range(n):
            if bits_remaining < width:
                buff = rng.next_uint64(rng.state)
                bits_remaining = 64  # refill the 64-bit buffer
            random_values[i] = buff & mask
            buff >>= width
            bits_remaining -= width

    randoms = np.asarray(random_values)
    return randoms

Cython can be used to directly access the functions in `numpy/random/c_distributions.pxd`. This requires linking with the `npyrandom` library located in `numpy/random/lib`.
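The buffered bit-extraction used in `uint10_uniforms` can be prototyped in pure Python before writing the Cython version. In this sketch the stdlib `random.getrandbits(64)` stands in for `rng.next_uint64` (an assumption for illustration; it is not NumPy's bit generator):

```python
import random

def uint10_stream(n, rng=random):
    """Extract n 10-bit integers, refilling a 64-bit buffer only as needed."""
    width, mask = 10, 0x3FF
    out = []
    buff, bits_remaining = 0, 0
    for _ in range(n):
        if bits_remaining < width:
            buff = rng.getrandbits(64)  # refill the 64-bit buffer
            bits_remaining = 64
        out.append(buff & mask)
        buff >>= width
        bits_remaining -= width
    return out

random.seed(7)
vals = uint10_stream(1000)
assert len(vals) == 1000
assert all(0 <= v < 1024 for v in vals)
```

Each 64-bit draw yields six 10-bit values before a refill, which is the point of tracking `bits_remaining` at all.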
def uniforms_ex(bit_generator, Py_ssize_t n, dtype=np.float64):
    """
    Create an array of `n` uniformly distributed doubles via a "fill" function.

    A 'real' distribution would want to process the values into
    some non-uniform distribution

    Parameters
    ----------
    bit_generator : BitGenerator instance
    n : int
        Output vector length
    dtype : {str, dtype}, optional
        Desired dtype, either 'd' (or 'float64') or 'f' (or 'float32'). The
        default dtype value is 'd'
    """
    cdef Py_ssize_t i
    cdef bitgen_t *rng
    cdef const char *capsule_name = "BitGenerator"
    cdef np.ndarray randoms

    capsule = bit_generator.capsule
    # Optional check that the capsule is from a BitGenerator
    if not PyCapsule_IsValid(capsule, capsule_name):
        raise ValueError("Invalid pointer to anon_func_state")
    # Cast the pointer
    rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)

    _dtype = np.dtype(dtype)
    randoms = np.empty(n, dtype=_dtype)
    if _dtype == np.float32:
        with bit_generator.lock:
            random_standard_uniform_fill_f(rng, n, <float *> np.PyArray_DATA(randoms))
    elif _dtype == np.float64:
        with bit_generator.lock:
            random_standard_uniform_fill(rng, n, <double *> np.PyArray_DATA(randoms))
    else:
        raise TypeError('Unsupported dtype %r for random' % _dtype)
    return randoms

See [Extending numpy.random via Cython](examples/cython/index#extending-cython-example) for the complete listings of these examples and a minimal `setup.py` to build the c-extension modules.

## CFFI

CFFI can be used to directly access the functions in `include/numpy/random/distributions.h`.
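The float32/float64 dispatch that `uniforms_ex` performs in C is also exposed at the Python level through `Generator.random`, which accepts a `dtype` argument; a quick check:

```python
import numpy as np

rng = np.random.default_rng(0)
a64 = rng.random(8)                     # default: double-precision fill
a32 = rng.random(8, dtype=np.float32)   # single-precision fill path
assert a64.dtype == np.float64
assert a32.dtype == np.float32
assert ((a32 >= 0) & (a32 < 1)).all()   # uniforms lie in [0, 1)
```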
Some “massaging” of the header file is required:

"""
Use cffi to access any of the underlying C functions from distributions.h
"""
import os

import numpy as np
import cffi

from .parse import parse_distributions_h

ffi = cffi.FFI()

inc_dir = os.path.join(np.get_include(), 'numpy')

# Basic numpy types
ffi.cdef('''
    typedef intptr_t npy_intp;
    typedef unsigned char npy_bool;
''')

parse_distributions_h(ffi, inc_dir)

Once the header is parsed by `ffi.cdef`, the functions can be accessed directly from the `_generator` shared object, using the [`BitGenerator.cffi`](bit_generators/generated/numpy.random.bitgenerator.cffi#numpy.random.BitGenerator.cffi "numpy.random.BitGenerator.cffi") interface.

# Compare the distributions.h random_standard_normal_fill to
# Generator.standard_normal
bit_gen = np.random.PCG64()
rng = np.random.Generator(bit_gen)
state = bit_gen.state

interface = rng.bit_generator.cffi
n = 100
vals_cffi = ffi.new('double[%d]' % n)
lib.random_standard_normal_fill(interface.bit_generator, n, vals_cffi)

# reset the state
bit_gen.state = state

vals = rng.standard_normal(n)

for i in range(n):
    assert vals[i] == vals_cffi[i]

## New BitGenerators

[`Generator`](generator#numpy.random.Generator "numpy.random.Generator") can be used with user-provided [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator")s. The simplest way to write a new [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") is to examine the pyx file of one of the existing [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator")s.
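The save-and-restore of `bit_gen.state` that the cffi comparison relies on to replay the stream is ordinary NumPy and can be verified on its own:

```python
import numpy as np

bit_gen = np.random.PCG64(12345)
rng = np.random.Generator(bit_gen)

saved = bit_gen.state            # snapshot the underlying state dict
first = rng.standard_normal(5)

bit_gen.state = saved            # rewind: restore the snapshot
replay = rng.standard_normal(5)

# The restored state reproduces the stream exactly.
assert np.array_equal(first, replay)
```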
The key structure that must be provided is the `capsule` which contains a `PyCapsule` to a struct pointer of type `bitgen_t`,

typedef struct bitgen {
    void *state;
    uint64_t (*next_uint64)(void *st);
    uint32_t (*next_uint32)(void *st);
    double (*next_double)(void *st);
    uint64_t (*next_raw)(void *st);
} bitgen_t;

which provides 5 pointers. The first is an opaque pointer to the data structure used by the [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator")s. The next three are function pointers which return the next 64-bit unsigned integer, the next 32-bit unsigned integer, and the next random double. The final function pointer returns the next raw value; it is used for testing, and so can be set to the next 64-bit unsigned integer function if not needed. Functions inside [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") use this structure as in

bitgen_state->next_uint64(bitgen_state->state)

## Examples

* [Numba](examples/numba)
* [CFFI + Numba](examples/numba_cffi)
* [Cython](examples/cython/index)
* [meson.build](examples/cython/meson.build)
* [extending.pyx](examples/cython/extending.pyx)
* [extending_distributions.pyx](examples/cython/extending_distributions.pyx)
* [CFFI](examples/cffi)

# numpy.random.beta

random.beta(_a_ , _b_ , _size =None_)

Draw samples from a Beta distribution. The Beta distribution is a special case of the Dirichlet distribution, and is related to the Gamma distribution. It has the probability distribution function

\\[f(x; a,b) = \frac{1}{B(\alpha, \beta)} x^{\alpha - 1} (1 - x)^{\beta - 1},\\]

where the normalization, B, is the beta function,

\\[B(\alpha, \beta) = \int_0^1 t^{\alpha - 1} (1 - t)^{\beta - 1} dt.\\]

It is often seen in Bayesian inference and order statistics.
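As a quick numerical check of the density above, the Beta(a, b) distribution has mean a/(a+b); a seeded (hence reproducible) sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
a, b = 2.0, 5.0
sample = rng.beta(a, b, size=100_000)
# Beta(a, b) has mean a / (a + b) = 2/7 (about 0.286).
assert abs(sample.mean() - a / (a + b)) < 0.01
```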
Note New code should use the [`beta`](numpy.random.generator.beta#numpy.random.Generator.beta "numpy.random.Generator.beta") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **a** float or array_like of floats Alpha, positive (>0). **b** float or array_like of floats Beta, positive (>0). **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` and `b` are both scalars. Otherwise, `np.broadcast(a, b).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized beta distribution. See also [`random.Generator.beta`](numpy.random.generator.beta#numpy.random.Generator.beta "numpy.random.Generator.beta") which should be used for new code. # numpy.random.binomial random.binomial(_n_ , _p_ , _size =None_) Draw samples from a binomial distribution. Samples are drawn from a binomial distribution with specified parameters, n trials and p probability of success where n an integer >= 0 and p is in the interval [0,1]. (n may be input as a float, but it is truncated to an integer in use) Note New code should use the [`binomial`](numpy.random.generator.binomial#numpy.random.Generator.binomial "numpy.random.Generator.binomial") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **n** int or array_like of ints Parameter of the distribution, >= 0. Floats are also accepted, but they will be truncated to integers. **p** float or array_like of floats Parameter of the distribution, >= 0 and <=1. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. 
If size is `None` (default), a single value is returned if `n` and `p` are both scalars. Otherwise, `np.broadcast(n, p).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized binomial distribution, where each sample is equal to the number of successes over the n trials. See also [`scipy.stats.binom`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.binom.html#scipy.stats.binom "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. [`random.Generator.binomial`](numpy.random.generator.binomial#numpy.random.Generator.binomial "numpy.random.Generator.binomial") which should be used for new code. #### Notes The probability mass function (PMF) for the binomial distribution is \\[P(N) = \binom{n}{N}p^N(1-p)^{n-N},\\] where \\(n\\) is the number of trials, \\(p\\) is the probability of success, and \\(N\\) is the number of successes. When estimating the standard error of a proportion in a population by using a random sample, the normal distribution works well unless the product p*n <=5, where p = population proportion estimate, and n = number of samples, in which case the binomial distribution is used instead. For example, a sample of 15 people shows 4 who are left handed, and 11 who are right handed. Then p = 4/15 = 27%. 0.27*15 = 4, so the binomial distribution should be used in this case. #### References [1] Dalgaard, Peter, “Introductory Statistics with R”, Springer-Verlag, 2002. [2] Glantz, Stanton A. “Primer of Biostatistics.”, McGraw-Hill, Fifth Edition, 2002. [3] Lentner, Marvin, “Elementary Applied Statistics”, Bogden and Quigley, 1972. [4] Weisstein, Eric W. “Binomial Distribution.” From MathWorld–A Wolfram Web Resource. 
[5] Wikipedia, “Binomial distribution”, #### Examples Draw samples from the distribution: >>> n, p = 10, .5 # number of trials, probability of each trial >>> s = np.random.binomial(n, p, 1000) # result of flipping a coin 10 times, tested 1000 times. A real world example. A company drills 9 wild-cat oil exploration wells, each with an estimated probability of success of 0.1. All nine wells fail. What is the probability of that happening? Let’s do 20,000 trials of the model, and count the number that generate zero positive results. >>> sum(np.random.binomial(9, 0.1, 20000) == 0)/20000. # answer = 0.38885, or about 39%. # numpy.random.bytes random.bytes(_length_) Return random bytes. Note New code should use the [`bytes`](numpy.random.generator.bytes#numpy.random.Generator.bytes "numpy.random.Generator.bytes") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **length** int Number of random bytes. Returns: **out** bytes String of length `length`. See also [`random.Generator.bytes`](numpy.random.generator.bytes#numpy.random.Generator.bytes "numpy.random.Generator.bytes") which should be used for new code. #### Examples >>> np.random.bytes(10) b' eh\x85\x022SZ\xbf\xa4' #random # numpy.random.chisquare random.chisquare(_df_ , _size =None_) Draw samples from a chi-square distribution. When `df` independent random variables, each with standard normal distributions (mean 0, variance 1), are squared and summed, the resulting distribution is chi-square (see Notes). This distribution is often used in hypothesis testing. Note New code should use the [`chisquare`](numpy.random.generator.chisquare#numpy.random.Generator.chisquare "numpy.random.Generator.chisquare") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start).
Parameters: **df** float or array_like of floats Number of degrees of freedom, must be > 0. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `df` is a scalar. Otherwise, `np.array(df).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized chi-square distribution. Raises: ValueError When `df` <= 0 or when an inappropriate [`size`](../../generated/numpy.size#numpy.size "numpy.size") (e.g. `size=-1`) is given. See also [`random.Generator.chisquare`](numpy.random.generator.chisquare#numpy.random.Generator.chisquare "numpy.random.Generator.chisquare") which should be used for new code. #### Notes The variable obtained by summing the squares of `df` independent, standard normally distributed random variables: \\[Q = \sum_{i=1}^{\mathtt{df}} X^2_i\\] is chi-square distributed, denoted \\[Q \sim \chi^2_k.\\] The probability density function of the chi-squared distribution is \\[p(x) = \frac{(1/2)^{k/2}}{\Gamma(k/2)} x^{k/2 - 1} e^{-x/2},\\] where \\(\Gamma\\) is the gamma function, \\[\Gamma(x) = \int_0^{\infty} t^{x - 1} e^{-t} dt.\\] #### References [1] NIST “Engineering Statistics Handbook” #### Examples >>> np.random.chisquare(2,4) array([ 1.89920014, 9.00867716, 3.13710533, 5.62318272]) # random # numpy.random.choice random.choice(_a_ , _size =None_, _replace =True_, _p =None_) Generates a random sample from a given 1-D array Note New code should use the [`choice`](numpy.random.generator.choice#numpy.random.Generator.choice "numpy.random.Generator.choice") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Warning This function uses the C-long dtype, which is 32bit on windows and otherwise 64bit on 64bit platforms (and 32bit on 32bit ones).
Since NumPy 2.0, NumPy’s default integer is 32bit on 32bit platforms and 64bit on 64bit platforms. Parameters: **a** 1-D array-like or int If an ndarray, a random sample is generated from its elements. If an int, the random sample is generated as if it were `np.arange(a)` **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. **replace** boolean, optional Whether the sample is with or without replacement. Default is True, meaning that a value of `a` can be selected multiple times. **p** 1-D array-like, optional The probabilities associated with each entry in a. If not given, the sample assumes a uniform distribution over all entries in `a`. Returns: **samples** single item or ndarray The generated random samples Raises: ValueError If a is an int and less than zero, if a or p are not 1-dimensional, if a is an array-like of size 0, if p is not a vector of probabilities, if a and p have different lengths, or if replace=False and the sample size is greater than the population size See also [`randint`](numpy.random.randint#numpy.random.randint "numpy.random.randint"), [`shuffle`](numpy.random.shuffle#numpy.random.shuffle "numpy.random.shuffle"), [`permutation`](numpy.random.permutation#numpy.random.permutation "numpy.random.permutation") [`random.Generator.choice`](numpy.random.generator.choice#numpy.random.Generator.choice "numpy.random.Generator.choice") which should be used in new code #### Notes Setting user-specified probabilities through `p` uses a more general but less efficient sampler than the default. The general sampler produces a different sample than the optimized sampler even if each element of `p` is 1 / len(a). 
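Whichever sampler is selected, the requested probabilities are honored; a seeded sketch with `Generator.choice` (the interface recommended for new code):

```python
import numpy as np

rng = np.random.default_rng(0)
p = [0.1, 0.0, 0.3, 0.6, 0.0]
draws = rng.choice(5, size=100_000, p=p)
freq = np.bincount(draws, minlength=5) / draws.size
# Zero-probability entries essentially never appear; the others
# track p closely at this sample size.
assert freq[1] < 1e-4 and freq[4] < 1e-4
assert np.allclose(freq, p, atol=0.01)
```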
Sampling random rows from a 2-D array is not possible with this function, but is possible with [`Generator.choice`](numpy.random.generator.choice#numpy.random.Generator.choice "numpy.random.Generator.choice") through its `axis` keyword. #### Examples Generate a uniform random sample from np.arange(5) of size 3: >>> np.random.choice(5, 3) array([0, 3, 4]) # random >>> #This is equivalent to np.random.randint(0,5,3) Generate a non-uniform random sample from np.arange(5) of size 3: >>> np.random.choice(5, 3, p=[0.1, 0, 0.3, 0.6, 0]) array([3, 3, 0]) # random Generate a uniform random sample from np.arange(5) of size 3 without replacement: >>> np.random.choice(5, 3, replace=False) array([3,1,0]) # random >>> #This is equivalent to np.random.permutation(np.arange(5))[:3] Generate a non-uniform random sample from np.arange(5) of size 3 without replacement: >>> np.random.choice(5, 3, replace=False, p=[0.1, 0, 0.3, 0.6, 0]) array([2, 3, 0]) # random Any of the above can be repeated with an arbitrary array-like instead of just integers. For instance: >>> aa_milne_arr = ['pooh', 'rabbit', 'piglet', 'Christopher'] >>> np.random.choice(aa_milne_arr, 5, p=[0.5, 0.1, 0.1, 0.3]) array(['pooh', 'pooh', 'pooh', 'Christopher', 'piglet'], # random dtype='<U11') # numpy.random.dirichlet random.dirichlet(_alpha_ , _size =None_) Draw samples from the Dirichlet distribution. Draw `size` samples of dimension k from a Dirichlet distribution. A Dirichlet-distributed random variable can be seen as a multivariate generalization of a Beta distribution, supported on vectors \\(x\\) where all \\(x_i > 0\\) and \\(\sum_{i=1}^k x_i = 1\\). The probability density function \\(p\\) of a Dirichlet-distributed random vector \\(X\\) is proportional to \\[p(x) \propto \prod_{i=1}^{k}{x^{\alpha_i-1}_i},\\] where \\(\alpha\\) is a vector containing the positive concentration parameters.
The method uses the following property for computation: let \\(Y\\) be a random vector which has components that follow a standard gamma distribution, then \\(X = \frac{1}{\sum_{i=1}^k{Y_i}} Y\\) is Dirichlet-distributed. #### References [1] David J.C. MacKay, “Information Theory, Inference and Learning Algorithms,” chapter 23, [2] Wikipedia, “Dirichlet distribution”, #### Examples Taking an example cited in Wikipedia, this distribution can be used if one wanted to cut strings (each of initial length 1.0) into K pieces with different lengths, where each piece had, on average, a designated average length, but allowing some variation in the relative sizes of the pieces. >>> s = np.random.dirichlet((10, 5, 3), 20).transpose() >>> import matplotlib.pyplot as plt >>> plt.barh(range(20), s[0]) >>> plt.barh(range(20), s[1], left=s[0], color='g') >>> plt.barh(range(20), s[2], left=s[0]+s[1], color='r') >>> plt.title("Lengths of Strings") # numpy.random.exponential random.exponential(_scale =1.0_, _size =None_) Draw samples from an exponential distribution. Its probability density function is \\[f(x; \frac{1}{\beta}) = \frac{1}{\beta} \exp(-\frac{x}{\beta}),\\] for `x > 0` and 0 elsewhere. \\(\beta\\) is the scale parameter, which is the inverse of the rate parameter \\(\lambda = 1/\beta\\). The rate parameter is an alternative, widely used parameterization of the exponential distribution [3]. The exponential distribution is a continuous analogue of the geometric distribution. It describes many common situations, such as the size of raindrops measured over many rainstorms [1], or the time between page requests to Wikipedia [2]. Note New code should use the [`exponential`](numpy.random.generator.exponential#numpy.random.Generator.exponential "numpy.random.Generator.exponential") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start).
Parameters: **scale** float or array_like of floats The scale parameter, \\(\beta = 1/\lambda\\). Must be non-negative. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `scale` is a scalar. Otherwise, `np.array(scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized exponential distribution. See also [`random.Generator.exponential`](numpy.random.generator.exponential#numpy.random.Generator.exponential "numpy.random.Generator.exponential") which should be used for new code. #### References [1] Peyton Z. Peebles Jr., “Probability, Random Variables and Random Signal Principles”, 4th ed, 2001, p. 57. [2] Wikipedia, “Poisson process”, [3] Wikipedia, “Exponential distribution”, #### Examples A real world example: Assume a company has 10000 customer support agents and the average time between customer calls is 4 minutes. >>> n = 10000 >>> time_between_calls = np.random.default_rng().exponential(scale=4, size=n) What is the probability that a customer will call in the next 4 to 5 minutes? >>> x = ((time_between_calls < 5).sum())/n >>> y = ((time_between_calls < 4).sum())/n >>> x-y 0.08 # may vary # numpy.random.f random.f(_dfnum_ , _dfden_ , _size =None_) Draw samples from an F distribution. Samples are drawn from an F distribution with specified parameters, `dfnum` (degrees of freedom in numerator) and `dfden` (degrees of freedom in denominator), where both parameters must be greater than zero. The random variate of the F distribution (also known as the Fisher distribution) is a continuous probability distribution that arises in ANOVA tests, and is the ratio of two chi-square variates. 
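For dfden > 2 the F distribution has mean dfden/(dfden - 2), which a seeded simulation reproduces (an illustrative sanity check, not part of the reference API):

```python
import numpy as np

rng = np.random.default_rng(1)
dfnum, dfden = 5.0, 48.0
s = rng.f(dfnum, dfden, size=200_000)
# E[F] = dfden / (dfden - 2) for dfden > 2, here 48/46.
assert abs(s.mean() - dfden / (dfden - 2.0)) < 0.02
assert (s > 0).all()  # F variates are strictly positive
```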
Note New code should use the [`f`](numpy.random.generator.f#numpy.random.Generator.f "numpy.random.Generator.f") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **dfnum** float or array_like of floats Degrees of freedom in numerator, must be > 0. **dfden** float or array_like of float Degrees of freedom in denominator, must be > 0. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `dfnum` and `dfden` are both scalars. Otherwise, `np.broadcast(dfnum, dfden).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Fisher distribution. See also [`scipy.stats.f`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.f.html#scipy.stats.f "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. [`random.Generator.f`](numpy.random.generator.f#numpy.random.Generator.f "numpy.random.Generator.f") which should be used for new code. #### Notes The F statistic is used to compare in-group variances to between-group variances. Calculating the distribution depends on the sampling, and so it is a function of the respective degrees of freedom in the problem. The variable `dfnum` is the number of samples minus one, the between-groups degrees of freedom, while `dfden` is the within-groups degrees of freedom, the sum of the number of samples in each group minus the number of groups. #### References [1] Glantz, Stanton A. “Primer of Biostatistics.”, McGraw-Hill, Fifth Edition, 2002. [2] Wikipedia, “F-distribution”, #### Examples An example from Glantz[1], pp 47-40: Two groups, children of diabetics (25 people) and children from people without diabetes (25 controls). 
Fasting blood glucose was measured, case group had a mean value of 86.1, controls had a mean value of 82.2. Standard deviations were 2.09 and 2.49 respectively. Are these data consistent with the null hypothesis that the parents diabetic status does not affect their children’s blood glucose levels? Calculating the F statistic from the data gives a value of 36.01. Draw samples from the distribution: >>> dfnum = 1. # between group degrees of freedom >>> dfden = 48. # within groups degrees of freedom >>> s = np.random.f(dfnum, dfden, 1000) The lower bound for the top 1% of the samples is : >>> np.sort(s)[-10] 7.61988120985 # random So there is about a 1% chance that the F statistic will exceed 7.62, the measured value is 36, so the null hypothesis is rejected at the 1% level. # numpy.random.gamma random.gamma(_shape_ , _scale =1.0_, _size =None_) Draw samples from a Gamma distribution. Samples are drawn from a Gamma distribution with specified parameters, [`shape`](../../generated/numpy.shape#numpy.shape "numpy.shape") (sometimes designated “k”) and `scale` (sometimes designated “theta”), where both parameters are > 0. Note New code should use the [`gamma`](numpy.random.generator.gamma#numpy.random.Generator.gamma "numpy.random.Generator.gamma") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **shape** float or array_like of floats The shape of the gamma distribution. Must be non-negative. **scale** float or array_like of floats, optional The scale of the gamma distribution. Must be non-negative. Default is equal to 1. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `shape` and `scale` are both scalars. Otherwise, `np.broadcast(shape, scale).size` samples are drawn. 
Returns: **out** ndarray or scalar Drawn samples from the parameterized gamma distribution. See also [`scipy.stats.gamma`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. [`random.Generator.gamma`](numpy.random.generator.gamma#numpy.random.Generator.gamma "numpy.random.Generator.gamma") which should be used for new code. #### Notes The probability density for the Gamma distribution is \\[p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},\\] where \\(k\\) is the shape and \\(\theta\\) the scale, and \\(\Gamma\\) is the Gamma function. The Gamma distribution is often used to model the times to failure of electronic components, and arises naturally in processes for which the waiting times between Poisson distributed events are relevant. #### References [1] Weisstein, Eric W. “Gamma Distribution.” From MathWorld–A Wolfram Web Resource. [2] Wikipedia, “Gamma distribution”, #### Examples Draw samples from the distribution: >>> shape, scale = 2., 2. # mean=4, std=2*sqrt(2) >>> s = np.random.gamma(shape, scale, 1000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> import scipy.special as sps >>> count, bins, ignored = plt.hist(s, 50, density=True) >>> y = bins**(shape-1)*(np.exp(-bins/scale) / ... (sps.gamma(shape)*scale**shape)) >>> plt.plot(bins, y, linewidth=2, color='r') >>> plt.show() # numpy.random.Generator.beta method random.Generator.beta(_a_ , _b_ , _size =None_) Draw samples from a Beta distribution. The Beta distribution is a special case of the Dirichlet distribution, and is related to the Gamma distribution. 
It has the probability distribution function \\[f(x; a,b) = \frac{1}{B(\alpha, \beta)} x^{\alpha - 1} (1 - x)^{\beta - 1},\\] where the normalization, B, is the beta function, \\[B(\alpha, \beta) = \int_0^1 t^{\alpha - 1} (1 - t)^{\beta - 1} dt.\\] It is often seen in Bayesian inference and order statistics. Parameters: **a** float or array_like of floats Alpha, positive (>0). **b** float or array_like of floats Beta, positive (>0). **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` and `b` are both scalars. Otherwise, `np.broadcast(a, b).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized beta distribution. #### References [1] Wikipedia, “Beta distribution”, #### Examples The beta distribution has mean a/(a+b). If `a == b` and both are > 1, the distribution is symmetric with mean 0.5. >>> rng = np.random.default_rng() >>> a, b, size = 2.0, 2.0, 10000 >>> sample = rng.beta(a=a, b=b, size=size) >>> np.mean(sample) 0.5047328775385895 # may vary Otherwise the distribution is skewed left or right according to whether `a` or `b` is greater. The distribution is mirror symmetric. See for example: >>> a, b, size = 2, 7, 10000 >>> sample_left = rng.beta(a=a, b=b, size=size) >>> sample_right = rng.beta(a=b, b=a, size=size) >>> m_left, m_right = np.mean(sample_left), np.mean(sample_right) >>> print(m_left, m_right) 0.2238596793678923 0.7774613834041182 # may vary >>> print(m_left - a/(a+b)) 0.001637457145670096 # may vary >>> print(m_right - b/(a+b)) -0.0003163943736596009 # may vary Display the histogram of the two samples: >>> import matplotlib.pyplot as plt >>> plt.hist([sample_left, sample_right], ... 50, density=True, histtype='bar') >>> plt.show() # numpy.random.Generator.binomial method random.Generator.binomial(_n_ , _p_ , _size =None_) Draw samples from a binomial distribution. 
Samples are drawn from a binomial distribution with specified parameters, n trials and p probability of success, where n is an integer >= 0 and p is in the interval [0, 1]. (n may be input as a float, but it is truncated to an integer in use.) Parameters: **n** int or array_like of ints Parameter of the distribution, >= 0. Floats are also accepted, but they will be truncated to integers. **p** float or array_like of floats Parameter of the distribution, >= 0 and <= 1. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `n` and `p` are both scalars. Otherwise, `np.broadcast(n, p).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized binomial distribution, where each sample is equal to the number of successes over the n trials. See also [`scipy.stats.binom`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.binom.html#scipy.stats.binom "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. #### Notes The probability mass function (PMF) for the binomial distribution is \\[P(N) = \binom{n}{N}p^N(1-p)^{n-N},\\] where \\(n\\) is the number of trials, \\(p\\) is the probability of success, and \\(N\\) is the number of successes. When estimating the standard error of a proportion in a population by using a random sample, the normal distribution works well unless the product p*n <= 5, where p = population proportion estimate, and n = number of samples, in which case the binomial distribution is used instead. For example, a sample of 15 people shows 4 who are left-handed and 11 who are right-handed. Then p = 4/15 = 27%. 0.27*15 = 4, so the binomial distribution should be used in this case. #### References [1] Dalgaard, Peter, “Introductory Statistics with R”, Springer-Verlag, 2002. [2] Glantz, Stanton A. 
“Primer of Biostatistics.”, McGraw-Hill, Fifth Edition, 2002. [3] Lentner, Marvin, “Elementary Applied Statistics”, Bogden and Quigley, 1972. [4] Weisstein, Eric W. “Binomial Distribution.” From MathWorld–A Wolfram Web Resource. [5] Wikipedia, “Binomial distribution”, #### Examples Draw samples from the distribution: >>> rng = np.random.default_rng() >>> n, p, size = 10, .5, 10000 >>> s = rng.binomial(n, p, size) Assume a company drills 9 wild-cat oil exploration wells, each with an estimated probability of success of `p=0.1`. All nine wells fail. What is the probability of that happening? Over `size = 20,000` trials the probability of this happening is on average: >>> n, p, size = 9, 0.1, 20000 >>> np.sum(rng.binomial(n=n, p=p, size=size) == 0)/size 0.39015 # may vary The following can be used to visualize a sample with `n=100`, `p=0.4` and the corresponding probability density function: >>> import matplotlib.pyplot as plt >>> from scipy.stats import binom >>> n, p, size = 100, 0.4, 10000 >>> sample = rng.binomial(n, p, size=size) >>> count, bins, _ = plt.hist(sample, 30, density=True) >>> x = np.arange(n) >>> y = binom.pmf(x, n, p) >>> plt.plot(x, y, linewidth=2, color='r') # numpy.random.Generator.bit_generator attribute random.Generator.bit_generator Gets the bit generator instance used by the generator. Returns: **bit_generator** BitGenerator The bit generator instance used by the generator. # numpy.random.Generator.bytes method random.Generator.bytes(_length_) Return random bytes. Parameters: **length** int Number of random bytes. Returns: **out** bytes String of length `length`. #### Notes This function generates random bytes from a discrete uniform distribution. The generated bytes are independent of the CPU’s native endianness. 
#### Examples >>> rng = np.random.default_rng() >>> rng.bytes(10) b'\xfeC\x9b\x86\x17\xf2\xa1\xafcp' # random # numpy.random.Generator.chisquare method random.Generator.chisquare(_df_ , _size =None_) Draw samples from a chi-square distribution. When `df` independent random variables, each with standard normal distributions (mean 0, variance 1), are squared and summed, the resulting distribution is chi-square (see Notes). This distribution is often used in hypothesis testing. Parameters: **df** float or array_like of floats Number of degrees of freedom, must be > 0. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `df` is a scalar. Otherwise, `np.array(df).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized chi-square distribution. Raises: ValueError When `df` <= 0 or when an inappropriate [`size`](../../generated/numpy.size#numpy.size "numpy.size") (e.g. `size=-1`) is given. 
#### Notes The variable obtained by summing the squares of `df` independent, standard normally distributed random variables: \\[Q = \sum_{i=1}^{\mathtt{df}} X^2_i\\] is chi-square distributed, denoted \\[Q \sim \chi^2_k.\\] The probability density function of the chi-squared distribution is \\[p(x) = \frac{(1/2)^{k/2}}{\Gamma(k/2)} x^{k/2 - 1} e^{-x/2},\\] where \\(\Gamma\\) is the gamma function, \\[\Gamma(x) = \int_0^{\infty} t^{x - 1} e^{-t} dt.\\] #### References [1] NIST “Engineering Statistics Handbook” #### Examples >>> rng = np.random.default_rng() >>> rng.chisquare(2, 4) array([ 1.89920014, 9.00867716, 3.13710533, 5.62318272]) # random The distribution of a chi-square random variable with 20 degrees of freedom looks as follows: >>> import matplotlib.pyplot as plt >>> import scipy.stats as stats >>> s = rng.chisquare(20, 10000) >>> count, bins, _ = plt.hist(s, 30, density=True) >>> x = np.linspace(0, 60, 1000) >>> plt.plot(x, stats.chi2.pdf(x, df=20)) >>> plt.xlim([0, 60]) >>> plt.show() # numpy.random.Generator.choice method random.Generator.choice(_a_ , _size =None_, _replace =True_, _p =None_, _axis =0_, _shuffle =True_) Generates a random sample from a given array. Parameters: **a**{array_like, int} If an ndarray, a random sample is generated from its elements. If an int, the random sample is generated from np.arange(a). **size**{int, tuple[int]}, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn from the 1-d `a`. If `a` has more than one dimension, the [`size`](../../generated/numpy.size#numpy.size "numpy.size") shape will be inserted into the `axis` dimension, so the output `ndim` will be `a.ndim - 1 + len(size)`. Default is None, in which case a single value is returned. **replace** bool, optional Whether the sample is with or without replacement. Default is True, meaning that a value of `a` can be selected multiple times. 
**p** 1-D array_like, optional The probabilities associated with each entry in `a`. If not given, the sample assumes a uniform distribution over all entries in `a`. **axis** int, optional The axis along which the selection is performed. The default, 0, selects by row. **shuffle** bool, optional Whether the sample is shuffled when sampling without replacement. Default is True; False provides a speedup. Returns: **samples** single item or ndarray The generated random samples. Raises: ValueError If a is an int and less than zero, if p is not 1-dimensional, if a is array-like with a size 0, if p is not a vector of probabilities, if a and p have different lengths, or if replace=False and the sample size is greater than the population size. See also [`integers`](numpy.random.generator.integers#numpy.random.Generator.integers "numpy.random.Generator.integers"), [`shuffle`](numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle"), [`permutation`](numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation") #### Notes Setting user-specified probabilities through `p` uses a more general but less efficient sampler than the default. The general sampler produces a different sample than the optimized sampler even if each element of `p` is 1 / len(a). `p` must sum to 1 when cast to `float64`. To ensure this, you may wish to normalize using `p = p / np.sum(p, dtype=float)`. When passing `a` as an integer type and `size` is not specified, the return type is a native Python `int`. 
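The normalization suggested in the Notes can be sketched as follows (the weight values here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng()

# Hypothetical raw weights that need not sum exactly to 1.
w = np.array([1.0, 2.0, 3.0, 2.0, 2.0])
p = w / np.sum(w, dtype=float)  # normalize so p sums to 1 in float64

# Draw 3 values from np.arange(5) with the normalized probabilities.
sample = rng.choice(5, 3, p=p)
```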
#### Examples Generate a uniform random sample from np.arange(5) of size 3: >>> rng = np.random.default_rng() >>> rng.choice(5, 3) array([0, 3, 4]) # random >>> # This is equivalent to rng.integers(0, 5, 3) Generate a non-uniform random sample from np.arange(5) of size 3: >>> rng.choice(5, 3, p=[0.1, 0, 0.3, 0.6, 0]) array([3, 3, 0]) # random Generate a uniform random sample from np.arange(5) of size 3 without replacement: >>> rng.choice(5, 3, replace=False) array([3, 1, 0]) # random >>> # This is equivalent to rng.permutation(np.arange(5))[:3] Generate a uniform random sample from a 2-D array along the first axis (the default), without replacement: >>> rng.choice([[0, 1, 2], [3, 4, 5], [6, 7, 8]], 2, replace=False) array([[3, 4, 5], # random [0, 1, 2]]) Generate a non-uniform random sample from np.arange(5) of size 3 without replacement: >>> rng.choice(5, 3, replace=False, p=[0.1, 0, 0.3, 0.6, 0]) array([2, 3, 0]) # random Any of the above can be repeated with an arbitrary array-like instead of just integers. For instance: >>> aa_milne_arr = ['pooh', 'rabbit', 'piglet', 'Christopher'] >>> rng.choice(aa_milne_arr, 5, p=[0.5, 0.1, 0.1, 0.3]) array(['pooh', 'pooh', 'pooh', 'Christopher', 'piglet'], dtype='<U11') # random # numpy.random.Generator.dirichlet method random.Generator.dirichlet(_alpha_ , _size =None_) Draw samples from the Dirichlet distribution. Draw `size` samples of dimension k from a Dirichlet distribution. A Dirichlet-distributed random variable can be seen as a multivariate generalization of a Beta distribution. The Dirichlet distribution is a conjugate prior of a multinomial distribution in Bayesian inference. Parameters: **alpha** sequence of floats, length k Parameter of the distribution (length `k` for sample of length `k`). **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n)`, then `m * n * k` samples are drawn. Default is None, in which case a vector of length `k` is returned. Returns: **samples** ndarray The drawn samples, of shape `(size, k)`. Raises: ValueError If any value in `alpha` is less than zero. #### Notes The Dirichlet distribution is a distribution over vectors \\(x\\) that fulfil the conditions \\(x_i > 0\\) and \\(\sum_{i=1}^k x_i = 1\\). The probability density function \\(p\\) of a Dirichlet-distributed random vector \\(X\\) is proportional to \\[p(x) \propto \prod_{i=1}^{k}{x^{\alpha_i-1}_i},\\] where \\(\alpha\\) is a vector containing the positive concentration parameters. 
The method uses the following property for computation: let \\(Y\\) be a random vector which has components that follow a standard gamma distribution, then \\(X = \frac{1}{\sum_{i=1}^k{Y_i}} Y\\) is Dirichlet-distributed. #### References [1] David J. C. MacKay, “Information Theory, Inference and Learning Algorithms,” chapter 23, [2] Wikipedia, “Dirichlet distribution”, #### Examples Taking an example cited in Wikipedia, this distribution can be used if one wanted to cut strings (each of initial length 1.0) into K pieces with different lengths, where each piece had a designated average length, while allowing some variation in the relative sizes of the pieces. >>> rng = np.random.default_rng() >>> s = rng.dirichlet((10, 5, 3), 20).transpose() >>> import matplotlib.pyplot as plt >>> plt.barh(range(20), s[0]) >>> plt.barh(range(20), s[1], left=s[0], color='g') >>> plt.barh(range(20), s[2], left=s[0]+s[1], color='r') >>> plt.title("Lengths of Strings") # numpy.random.Generator.exponential method random.Generator.exponential(_scale =1.0_, _size =None_) Draw samples from an exponential distribution. Its probability density function is \\[f(x; \frac{1}{\beta}) = \frac{1}{\beta} \exp(-\frac{x}{\beta}),\\] for `x > 0` and 0 elsewhere. \\(\beta\\) is the scale parameter, which is the inverse of the rate parameter \\(\lambda = 1/\beta\\). The rate parameter is an alternative, widely used parameterization of the exponential distribution [3]. The exponential distribution is a continuous analogue of the geometric distribution. It describes many common situations, such as the size of raindrops measured over many rainstorms [1], or the time between page requests to Wikipedia [2]. Parameters: **scale** float or array_like of floats The scale parameter, \\(\beta = 1/\lambda\\). Must be non-negative. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. 
If size is `None` (default), a single value is returned if `scale` is a scalar. Otherwise, `np.array(scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized exponential distribution. #### References [1] Peyton Z. Peebles Jr., “Probability, Random Variables and Random Signal Principles”, 4th ed, 2001, p. 57. [2] Wikipedia, “Poisson process”, [3] Wikipedia, “Exponential distribution”, #### Examples Assume a company has 10000 customer support agents, that the time between customer calls is exponentially distributed, and that the average time between calls is 4 minutes. >>> scale, size = 4, 10000 >>> rng = np.random.default_rng() >>> time_between_calls = rng.exponential(scale=scale, size=size) What is the probability that a customer will call in the next 4 to 5 minutes? >>> x = ((time_between_calls < 5).sum())/size >>> y = ((time_between_calls < 4).sum())/size >>> x - y 0.08 # may vary The corresponding distribution can be visualized as follows: >>> import matplotlib.pyplot as plt >>> scale, size = 4, 10000 >>> rng = np.random.default_rng() >>> sample = rng.exponential(scale=scale, size=size) >>> count, bins, _ = plt.hist(sample, 30, density=True) >>> plt.plot(bins, scale**(-1)*np.exp(-scale**-1*bins), linewidth=2, color='r') >>> plt.show() # numpy.random.Generator.f method random.Generator.f(_dfnum_ , _dfden_ , _size =None_) Draw samples from an F distribution. Samples are drawn from an F distribution with specified parameters, `dfnum` (degrees of freedom in numerator) and `dfden` (degrees of freedom in denominator), where both parameters must be greater than zero. The random variate of the F distribution (also known as the Fisher distribution) is a continuous probability distribution that arises in ANOVA tests, and is the ratio of two chi-square variates. Parameters: **dfnum** float or array_like of floats Degrees of freedom in numerator, must be > 0. 
**dfden** float or array_like of floats Degrees of freedom in denominator, must be > 0. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `dfnum` and `dfden` are both scalars. Otherwise, `np.broadcast(dfnum, dfden).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Fisher distribution. See also [`scipy.stats.f`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.f.html#scipy.stats.f "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. #### Notes The F statistic is used to compare in-group variances to between-group variances. Calculating the distribution depends on the sampling, and so it is a function of the respective degrees of freedom in the problem. The variable `dfnum` is the number of samples minus one, the between-groups degrees of freedom, while `dfden` is the within-groups degrees of freedom, the sum of the number of samples in each group minus the number of groups. #### References [1] Glantz, Stanton A. “Primer of Biostatistics.”, McGraw-Hill, Fifth Edition, 2002. [2] Wikipedia, “F-distribution”, #### Examples An example from Glantz [1], pp 47-40: Two groups, children of diabetics (25 people) and children from people without diabetes (25 controls). Fasting blood glucose was measured; the case group had a mean value of 86.1, the controls a mean value of 82.2. Standard deviations were 2.09 and 2.49, respectively. Are these data consistent with the null hypothesis that the parents’ diabetic status does not affect their children’s blood glucose levels? Calculating the F statistic from the data gives a value of 36.01. Draw samples from the distribution: >>> dfnum = 1. # between group degrees of freedom >>> dfden = 48. 
# within groups degrees of freedom >>> rng = np.random.default_rng() >>> s = rng.f(dfnum, dfden, 1000) The lower bound for the top 1% of the samples is: >>> np.sort(s)[-10] 7.61988120985 # random So there is about a 1% chance that the F statistic will exceed 7.62; the measured value is 36, so the null hypothesis is rejected at the 1% level. The corresponding probability density function for `dfnum = 20` and `dfden = 20` is: >>> import matplotlib.pyplot as plt >>> from scipy import stats >>> dfnum, dfden, size = 20, 20, 10000 >>> s = rng.f(dfnum=dfnum, dfden=dfden, size=size) >>> count, bins, _ = plt.hist(s, 30, density=True) >>> x = np.linspace(0, 5, 1000) >>> plt.plot(x, stats.f.pdf(x, dfnum, dfden)) >>> plt.xlim([0, 5]) >>> plt.show() # numpy.random.Generator.gamma method random.Generator.gamma(_shape_ , _scale =1.0_, _size =None_) Draw samples from a Gamma distribution. Samples are drawn from a Gamma distribution with specified parameters, [`shape`](../../generated/numpy.shape#numpy.shape "numpy.shape") (sometimes designated “k”) and `scale` (sometimes designated “theta”), where both parameters are > 0. Parameters: **shape** float or array_like of floats The shape of the gamma distribution. Must be non-negative. **scale** float or array_like of floats, optional The scale of the gamma distribution. Must be non-negative. Default is equal to 1. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `shape` and `scale` are both scalars. Otherwise, `np.broadcast(shape, scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized gamma distribution. See also [`scipy.stats.gamma`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. 
#### Notes The probability density for the Gamma distribution is \\[p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},\\] where \\(k\\) is the shape and \\(\theta\\) the scale, and \\(\Gamma\\) is the Gamma function. The Gamma distribution is often used to model the times to failure of electronic components, and arises naturally in processes for which the waiting times between Poisson distributed events are relevant. #### References [1] Weisstein, Eric W. “Gamma Distribution.” From MathWorld–A Wolfram Web Resource. [2] Wikipedia, “Gamma distribution”, #### Examples Draw samples from the distribution: >>> shape, scale = 2., 2. # mean=4, std=2*sqrt(2) >>> rng = np.random.default_rng() >>> s = rng.gamma(shape, scale, 1000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> import scipy.special as sps >>> count, bins, _ = plt.hist(s, 50, density=True) >>> y = bins**(shape-1)*(np.exp(-bins/scale) / ... (sps.gamma(shape)*scale**shape)) >>> plt.plot(bins, y, linewidth=2, color='r') >>> plt.show() # numpy.random.Generator.geometric method random.Generator.geometric(_p_ , _size =None_) Draw samples from the geometric distribution. Bernoulli trials are experiments with one of two outcomes: success or failure (an example of such an experiment is flipping a coin). The geometric distribution models the number of trials that must be run in order to achieve success. It is therefore supported on the positive integers, `k = 1, 2, ...`. The probability mass function of the geometric distribution is \\[f(k) = (1 - p)^{k - 1} p\\] where `p` is the probability of success of an individual trial. Parameters: **p** float or array_like of floats The probability of success of an individual trial. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `p` is a scalar. 
Otherwise, `np.array(p).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized geometric distribution. #### References [1] Wikipedia, “Geometric distribution”, #### Examples Draw 10,000 values from the geometric distribution, with the probability of an individual success equal to `p = 0.35`: >>> p, size = 0.35, 10000 >>> rng = np.random.default_rng() >>> sample = rng.geometric(p=p, size=size) What proportion of trials succeeded after a single run? >>> (sample == 1).sum()/size 0.34889999999999999 # may vary The geometric distribution with `p=0.35` looks as follows: >>> import matplotlib.pyplot as plt >>> count, bins, _ = plt.hist(sample, bins=30, density=True) >>> plt.plot(bins, (1-p)**(bins-1)*p) >>> plt.xlim([0, 25]) >>> plt.show() # numpy.random.Generator.gumbel method random.Generator.gumbel(_loc =0.0_, _scale =1.0_, _size =None_) Draw samples from a Gumbel distribution. Draw samples from a Gumbel distribution with specified location and scale. For more information on the Gumbel distribution, see Notes and References below. Parameters: **loc** float or array_like of floats, optional The location of the mode of the distribution. Default is 0. **scale** float or array_like of floats, optional The scale parameter of the distribution. Default is 1. Must be non- negative. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars. Otherwise, `np.broadcast(loc, scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Gumbel distribution. 
See also [`scipy.stats.gumbel_l`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gumbel_l.html#scipy.stats.gumbel_l "\(in SciPy v1.14.1\)") [`scipy.stats.gumbel_r`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gumbel_r.html#scipy.stats.gumbel_r "\(in SciPy v1.14.1\)") [`scipy.stats.genextreme`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genextreme.html#scipy.stats.genextreme "\(in SciPy v1.14.1\)") [`weibull`](numpy.random.generator.weibull#numpy.random.Generator.weibull "numpy.random.Generator.weibull") #### Notes The Gumbel (or Smallest Extreme Value (SEV) or the Smallest Extreme Value Type I) distribution is one of a class of Generalized Extreme Value (GEV) distributions used in modeling extreme value problems. The Gumbel is a special case of the Extreme Value Type I distribution for maximums from distributions with “exponential-like” tails. The probability density for the Gumbel distribution is \\[p(x) = \frac{e^{-(x - \mu)/ \beta}}{\beta} e^{ -e^{-(x - \mu)/ \beta}},\\] where \\(\mu\\) is the mode, a location parameter, and \\(\beta\\) is the scale parameter. The Gumbel (named for German mathematician Emil Julius Gumbel) was used very early in the hydrology literature, for modeling the occurrence of flood events. It is also used for modeling maximum wind speed and rainfall rates. It is a “fat-tailed” distribution - the probability of an event in the tail of the distribution is larger than if one used a Gaussian, hence the surprisingly frequent occurrence of 100-year floods. Floods were initially modeled as a Gaussian process, which underestimated the frequency of extreme events. It is one of a class of extreme value distributions, the Generalized Extreme Value (GEV) distributions, which also includes the Weibull and Frechet. The function has a mean of \\(\mu + 0.57721\beta\\) and a variance of \\(\frac{\pi^2}{6}\beta^2\\). #### References [1] Gumbel, E. 
J., “Statistics of Extremes,” New York: Columbia University Press, 1958. [2] Reiss, R.-D. and Thomas, M., “Statistical Analysis of Extreme Values from Insurance, Finance, Hydrology and Other Fields,” Basel: Birkhauser Verlag, 2001. #### Examples Draw samples from the distribution: >>> rng = np.random.default_rng() >>> mu, beta = 0, 0.1 # location and scale >>> s = rng.gumbel(mu, beta, 1000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, _ = plt.hist(s, 30, density=True) >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta) ... * np.exp(-np.exp(-(bins - mu)/beta)), ... linewidth=2, color='r') >>> plt.show() Show how an extreme value distribution can arise from a Gaussian process and compare to a Gaussian: >>> means = [] >>> maxima = [] >>> for i in range(0, 1000): ... a = rng.normal(mu, beta, 1000) ... means.append(a.mean()) ... maxima.append(a.max()) >>> count, bins, _ = plt.hist(maxima, 30, density=True) >>> beta = np.std(maxima) * np.sqrt(6) / np.pi >>> mu = np.mean(maxima) - 0.57721*beta >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta) ... * np.exp(-np.exp(-(bins - mu)/beta)), ... linewidth=2, color='r') >>> plt.plot(bins, 1/(beta * np.sqrt(2 * np.pi)) ... * np.exp(-(bins - mu)**2 / (2 * beta**2)), ... linewidth=2, color='g') >>> plt.show() # numpy.random.Generator.hypergeometric method random.Generator.hypergeometric(_ngood_ , _nbad_ , _nsample_ , _size =None_) Draw samples from a Hypergeometric distribution. Samples are drawn from a hypergeometric distribution with specified parameters, `ngood` (ways to make a good selection), `nbad` (ways to make a bad selection), and `nsample` (number of items sampled, which is less than or equal to the sum `ngood + nbad`). Parameters: **ngood** int or array_like of ints Number of ways to make a good selection. Must be nonnegative and less than 10**9. **nbad** int or array_like of ints Number of ways to make a bad selection. 
Must be nonnegative and less than 10**9. **nsample** int or array_like of ints Number of items sampled. Must be nonnegative and less than `ngood + nbad`. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `ngood`, `nbad`, and `nsample` are all scalars. Otherwise, `np.broadcast(ngood, nbad, nsample).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized hypergeometric distribution. Each sample is the number of good items within a randomly selected subset of size `nsample` taken from a set of `ngood` good items and `nbad` bad items. See also [`multivariate_hypergeometric`](numpy.random.generator.multivariate_hypergeometric#numpy.random.Generator.multivariate_hypergeometric "numpy.random.Generator.multivariate_hypergeometric") Draw samples from the multivariate hypergeometric distribution. [`scipy.stats.hypergeom`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.hypergeom.html#scipy.stats.hypergeom "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. #### Notes The probability mass function (PMF) for the Hypergeometric distribution is \\[P(x) = \frac{\binom{g}{x}\binom{b}{n-x}}{\binom{g+b}{n}},\\] where \\(0 \le x \le n\\) and \\(n-b \le x \le g\\) for P(x) the probability of `x` good results in the drawn sample, g = `ngood`, b = `nbad`, and n = `nsample`. Consider an urn with black and white marbles in it, `ngood` of them are black and `nbad` are white. If you draw `nsample` balls without replacement, then the hypergeometric distribution describes the distribution of black balls in the drawn sample. 
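The urn description above can be checked by direct simulation (a sketch; the marble counts here are arbitrary): drawing `nsample` marbles without replacement and counting the black ones reproduces the hypergeometric distribution:

```python
import numpy as np

rng = np.random.default_rng()
ngood, nbad, nsample = 10, 20, 7  # arbitrary urn: 10 black, 20 white marbles

# Direct simulation: shuffle the urn, draw nsample marbles, count black ones.
urn = np.array([1] * ngood + [0] * nbad)  # 1 = black/good, 0 = white/bad
simulated = np.array([rng.permutation(urn)[:nsample].sum()
                      for _ in range(10_000)])

# The hypergeometric sampler models the same process in a single call.
direct = rng.hypergeometric(ngood, nbad, nsample, 10_000)

# Both sample means approach nsample * ngood / (ngood + nbad).
```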
Note that this distribution is very similar to the binomial distribution, except that in this case, samples are drawn without replacement, whereas in the Binomial case samples are drawn with replacement (or the sample space is infinite). As the sample space becomes large, this distribution approaches the binomial. The arguments `ngood` and `nbad` each must be less than `10**9`. For extremely large arguments, the algorithm that is used to compute the samples [4] breaks down because of loss of precision in floating point calculations. For such large values, if `nsample` is not also large, the distribution can be approximated with the binomial distribution, `binomial(n=nsample, p=ngood/(ngood + nbad))`. #### References [1] Lentner, Marvin, “Elementary Applied Statistics”, Bogden and Quigley, 1972. [2] Weisstein, Eric W. “Hypergeometric Distribution.” From MathWorld–A Wolfram Web Resource. [3] Wikipedia, “Hypergeometric distribution”, [4] Stadlober, Ernst, “The ratio of uniforms approach for generating discrete random variates”, Journal of Computational and Applied Mathematics, 31, pp. 181-189 (1990). #### Examples Draw samples from the distribution: >>> rng = np.random.default_rng() >>> ngood, nbad, nsamp = 100, 2, 10 # number of good, number of bad, and number of samples >>> s = rng.hypergeometric(ngood, nbad, nsamp, 1000) >>> from matplotlib.pyplot import hist >>> hist(s) # note that it is very unlikely to grab both bad items Suppose you have an urn with 15 white and 15 black marbles. If you pull 15 marbles at random, how likely is it that 12 or more of them are one color? >>> s = rng.hypergeometric(15, 15, 15, 100000) >>> sum(s>=12)/100000. + sum(s<=3)/100000. # answer = 0.003 ... pretty unlikely! # numpy.random.Generator.integers method random.Generator.integers(_low_ , _high =None_, _size =None_, _dtype =np.int64_, _endpoint =False_) Return random integers from `low` (inclusive) to `high` (exclusive), or if endpoint=True, `low` (inclusive) to `high` (inclusive). 
Replaces `RandomState.randint` (with endpoint=False) and `RandomState.random_integers` (with endpoint=True). Return random integers from the “discrete uniform” distribution of the specified dtype. If `high` is None (the default), then results are from 0 to `low`. Parameters: **low** int or array-like of ints Lowest (signed) integers to be drawn from the distribution (unless `high=None`, in which case this parameter is 0 and this value is used for `high`). **high** int or array-like of ints, optional If provided, one above the largest (signed) integer to be drawn from the distribution (see above for behavior if `high=None`). If array-like, must contain integer values. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. **dtype** dtype, optional Desired dtype of the result. Byteorder must be native. The default value is np.int64. **endpoint** bool, optional If true, sample from the interval [low, high] instead of the default [low, high). Defaults to False. Returns: **out** int or ndarray of ints [`size`](../../generated/numpy.size#numpy.size "numpy.size")-shaped array of random integers from the appropriate distribution, or a single such random int if [`size`](../../generated/numpy.size#numpy.size "numpy.size") not provided. #### Notes When using broadcasting with uint64 dtypes, the maximum value (2**64) cannot be represented as a standard integer type. The high array (or low if high is None) must have object dtype, e.g., array([2**64]). #### References [1] Daniel Lemire, “Fast Random Integer Generation in an Interval”, ACM Transactions on Modeling and Computer Simulation 29 (1), 2019. 
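The Notes entry on uint64 broadcasting can be illustrated with a small sketch: the exclusive upper bound 2**64 is supplied as an object-dtype array so that the full uint64 range is available:

```python
import numpy as np

rng = np.random.default_rng()

# 2**64 does not fit in any fixed-width NumPy integer, so the upper
# bound must be passed as an object-dtype array.
high = np.array([2**64], dtype=object)
sample = rng.integers(0, high, dtype=np.uint64)
```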
#### Examples >>> rng = np.random.default_rng() >>> rng.integers(2, size=10) array([1, 0, 0, 0, 1, 1, 0, 0, 1, 0]) # random >>> rng.integers(1, size=10) array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) Generate a 2 x 4 array of ints between 0 and 4, inclusive: >>> rng.integers(5, size=(2, 4)) array([[4, 0, 2, 1], [3, 2, 2, 0]]) # random Generate a 1 x 3 array with 3 different upper bounds >>> rng.integers(1, [3, 5, 10]) array([2, 2, 9]) # random Generate a 1 by 3 array with 3 different lower bounds >>> rng.integers([1, 5, 7], 10) array([9, 8, 7]) # random Generate a 2 by 4 array using broadcasting with dtype of uint8 >>> rng.integers([1, 3, 5, 7], [[10], [20]], dtype=np.uint8) array([[ 8, 6, 9, 7], [ 1, 16, 9, 12]], dtype=uint8) # random # numpy.random.Generator.laplace method random.Generator.laplace(_loc =0.0_, _scale =1.0_, _size =None_) Draw samples from the Laplace or double exponential distribution with specified location (or mean) and scale (decay). The Laplace distribution is similar to the Gaussian/normal distribution, but is sharper at the peak and has fatter tails. It represents the difference between two independent, identically distributed exponential random variables. Parameters: **loc** float or array_like of floats, optional The position, \\(\mu\\), of the distribution peak. Default is 0. **scale** float or array_like of floats, optional \\(\lambda\\), the exponential decay. Default is 1. Must be non- negative. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars. Otherwise, `np.broadcast(loc, scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Laplace distribution. 
#### Notes It has the probability density function \\[f(x; \mu, \lambda) = \frac{1}{2\lambda} \exp\left(-\frac{|x - \mu|}{\lambda}\right).\\] The first law of Laplace, from 1774, states that the frequency of an error can be expressed as an exponential function of the absolute magnitude of the error, which leads to the Laplace distribution. For many problems in economics and health sciences, this distribution seems to model the data better than the standard Gaussian distribution. #### References [1] Abramowitz, M. and Stegun, I. A. (Eds.). “Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing,” New York: Dover, 1972. [2] Kotz, Samuel, et al. “The Laplace Distribution and Generalizations,” Birkhauser, 2001. [3] Weisstein, Eric W. “Laplace Distribution.” From MathWorld–A Wolfram Web Resource. [4] Wikipedia, “Laplace distribution”. #### Examples Draw samples from the distribution: >>> loc, scale = 0., 1. >>> rng = np.random.default_rng() >>> s = rng.laplace(loc, scale, 1000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, _ = plt.hist(s, 30, density=True) >>> x = np.arange(-8., 8., .01) >>> pdf = np.exp(-abs(x-loc)/scale)/(2.*scale) >>> plt.plot(x, pdf) Plot Gaussian for comparison: >>> g = (1/(scale * np.sqrt(2 * np.pi)) * ... np.exp(-(x - loc)**2 / (2 * scale**2))) >>> plt.plot(x, g) # numpy.random.Generator.logistic method random.Generator.logistic(_loc =0.0_, _scale =1.0_, _size =None_) Draw samples from a logistic distribution. Samples are drawn from a logistic distribution with specified parameters, loc (location or mean, also median), and scale (>0). Parameters: **loc** float or array_like of floats, optional Parameter of the distribution. Default is 0. **scale** float or array_like of floats, optional Parameter of the distribution. Must be non-negative. Default is 1. **size** int or tuple of ints, optional Output shape.
If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars. Otherwise, `np.broadcast(loc, scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized logistic distribution. See also [`scipy.stats.logistic`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.logistic.html#scipy.stats.logistic "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. #### Notes The probability density for the Logistic distribution is \\[P(x) = \frac{e^{-(x-\mu)/s}}{s(1+e^{-(x-\mu)/s})^2},\\] where \\(\mu\\) = location and \\(s\\) = scale. The Logistic distribution is used in Extreme Value problems where it can act as a mixture of Gumbel distributions, in Epidemiology, and by the World Chess Federation (FIDE) where it is used in the Elo ranking system, assuming the performance of each player is a logistically distributed random variable. #### References [1] Reiss, R.-D. and Thomas M. (2001), “Statistical Analysis of Extreme Values, from Insurance, Finance, Hydrology and Other Fields,” Birkhauser Verlag, Basel, pp 132-133. [2] Weisstein, Eric W. “Logistic Distribution.” From MathWorld–A Wolfram Web Resource. [3] Wikipedia, “Logistic-distribution”, #### Examples Draw samples from the distribution: >>> loc, scale = 10, 1 >>> rng = np.random.default_rng() >>> s = rng.logistic(loc, scale, 10000) >>> import matplotlib.pyplot as plt >>> count, bins, _ = plt.hist(s, bins=50, label='Sampled data') # plot sampled data against the exact distribution >>> def logistic(x, loc, scale): ... 
return np.exp((loc-x)/scale)/(scale*(1+np.exp((loc-x)/scale))**2) >>> logistic_values = logistic(bins, loc, scale) >>> bin_spacing = np.mean(np.diff(bins)) >>> plt.plot(bins, logistic_values * bin_spacing * s.size, label='Logistic PDF') >>> plt.legend() >>> plt.show() # numpy.random.Generator.lognormal method random.Generator.lognormal(_mean =0.0_, _sigma =1.0_, _size =None_) Draw samples from a log-normal distribution. Draw samples from a log-normal distribution with specified mean, standard deviation, and array shape. Note that the mean and standard deviation are not the values for the distribution itself, but of the underlying normal distribution it is derived from. Parameters: **mean** float or array_like of floats, optional Mean value of the underlying normal distribution. Default is 0. **sigma** float or array_like of floats, optional Standard deviation of the underlying normal distribution. Must be non-negative. Default is 1. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `mean` and `sigma` are both scalars. Otherwise, `np.broadcast(mean, sigma).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized log-normal distribution. See also [`scipy.stats.lognorm`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lognorm.html#scipy.stats.lognorm "\(in SciPy v1.14.1\)") probability density function, distribution, cumulative density function, etc. #### Notes A variable `x` has a log-normal distribution if `log(x)` is normally distributed. The probability density function for the log-normal distribution is: \\[p(x) = \frac{1}{\sigma x \sqrt{2\pi}} e^{(-\frac{(\ln(x)-\mu)^2}{2\sigma^2})}\\] where \\(\mu\\) is the mean and \\(\sigma\\) is the standard deviation of the normally distributed logarithm of the variable.
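The defining property stated in the Notes (that `log(x)` is normally distributed) is straightforward to check numerically; the parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(12345)
mu, sigma = 3.0, 1.0

# Log-normal samples...
s = rng.lognormal(mu, sigma, 100000)

# ...whose logarithms recover the parameters of the underlying normal.
logs = np.log(s)
assert abs(logs.mean() - mu) < 0.05
assert abs(logs.std() - sigma) < 0.05
```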
A log-normal distribution results if a random variable is the _product_ of a large number of independent, identically-distributed variables in the same way that a normal distribution results if the variable is the _sum_ of a large number of independent, identically-distributed variables. #### References [1] Limpert, E., Stahel, W. A., and Abbt, M., “Log-normal Distributions across the Sciences: Keys and Clues,” BioScience, Vol. 51, No. 5, May, 2001. [2] Reiss, R.D. and Thomas, M., “Statistical Analysis of Extreme Values,” Basel: Birkhauser Verlag, 2001, pp. 31-32. #### Examples Draw samples from the distribution: >>> rng = np.random.default_rng() >>> mu, sigma = 3., 1. # mean and standard deviation >>> s = rng.lognormal(mu, sigma, 1000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, _ = plt.hist(s, 100, density=True, align='mid') >>> x = np.linspace(min(bins), max(bins), 10000) >>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2)) ... / (x * sigma * np.sqrt(2 * np.pi))) >>> plt.plot(x, pdf, linewidth=2, color='r') >>> plt.axis('tight') >>> plt.show() Demonstrate that taking the products of random samples from a normal distribution can be fit well by a log-normal probability density function. >>> # Generate a thousand samples: each is the product of 100 random >>> # values, drawn from a normal distribution. >>> b = [] >>> for i in range(1000): ... a = 10. + rng.standard_normal(100) ... b.append(np.prod(a)) >>> b = np.array(b) / np.min(b) # scale values to be positive >>> count, bins, _ = plt.hist(b, 100, density=True, align='mid') >>> sigma = np.std(np.log(b)) >>> mu = np.mean(np.log(b)) >>> x = np.linspace(min(bins), max(bins), 10000) >>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2)) ...
/ (x * sigma * np.sqrt(2 * np.pi))) >>> plt.plot(x, pdf, color='r', linewidth=2) >>> plt.show() # numpy.random.Generator.logseries method random.Generator.logseries(_p_ , _size =None_) Draw samples from a logarithmic series distribution. Samples are drawn from a log series distribution with specified shape parameter, 0 <= `p` < 1. Parameters: **p** float or array_like of floats Shape parameter for the distribution. Must be in the range [0, 1). **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `p` is a scalar. Otherwise, `np.array(p).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized logarithmic series distribution. See also [`scipy.stats.logser`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.logser.html#scipy.stats.logser "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. #### Notes The probability mass function for the Log Series distribution is \\[P(k) = \frac{-p^k}{k \ln(1-p)},\\] where p = probability. The log series distribution is frequently used to represent species richness and occurrence, first proposed by Fisher, Corbet, and Williams in 1943 [2]. It may also be used to model the numbers of occupants seen in cars [3]. #### References [1] Buzas, Martin A.; Culver, Stephen J., Understanding regional species diversity through the log series distribution of occurrences: BIODIVERSITY RESEARCH Diversity & Distributions, Volume 5, Number 5, September 1999, pp. 187-195(9). [2] Fisher, R.A., A.S. Corbet, and C.B. Williams. 1943. The relation between the number of species and the number of individuals in a random sample of an animal population. Journal of Animal Ecology, 12:42-58. [3] D. J. Hand, F. Daly, D. Lunn, E. Ostrowski, A Handbook of Small Data Sets, CRC Press, 1994.
[4] Wikipedia, “Logarithmic distribution”. #### Examples Draw samples from the distribution: >>> a = .6 >>> rng = np.random.default_rng() >>> s = rng.logseries(a, 10000) >>> import matplotlib.pyplot as plt >>> bins = np.arange(-.5, max(s) + .5) >>> count, bins, _ = plt.hist(s, bins=bins, label='Sample count') # plot against distribution >>> def logseries(k, p): ... return -p**k/(k*np.log(1-p)) >>> centres = np.arange(1, max(s) + 1) >>> plt.plot(centres, logseries(centres, a) * s.size, 'r', label='logseries PMF') >>> plt.legend() >>> plt.show() # numpy.random.Generator.multinomial method random.Generator.multinomial(_n_ , _pvals_ , _size =None_) Draw samples from a multinomial distribution. The multinomial distribution is a multivariate generalization of the binomial distribution. Take an experiment with one of `p` possible outcomes. An example of such an experiment is throwing a die, where the outcome can be 1 through 6. Each sample drawn from the distribution represents `n` such experiments. Its values, `X_i = [X_0, X_1, ..., X_p]`, represent the number of times the outcome was `i`. Parameters: **n** int or array-like of ints Number of experiments. **pvals** array-like of floats Probabilities of each of the `p` different outcomes with shape `(k0, k1, ..., kn, p)`. Each element `pvals[i,j,...,:]` must sum to 1 (however, the last element is always assumed to account for the remaining probability, as long as `sum(pvals[..., :-1], axis=-1) <= 1.0`). Must have at least 1 dimension where pvals.shape[-1] > 0. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn each with `p` elements. Default is None where the output size is determined by the broadcast shape of `n` and all but the final dimension of `pvals`, which is denoted as `b=(b0, b1, ..., bq)`. If size is not None, then it must be compatible with the broadcast shape `b`.
Specifically, size must have `q` or more elements and size[-(q-j):] must equal `bj`. Returns: **out** ndarray The drawn samples, of shape size, if provided. When size is provided, the output shape is size + (p,). If not specified, the shape is determined by the broadcast shape of `n` and `pvals`, `(b0, b1, ..., bq)` augmented with the dimension of the multinomial, `p`, so that the output shape is `(b0, b1, ..., bq, p)`. Each entry `out[i,j,...,:]` is a `p`-dimensional value drawn from the distribution. #### Examples Throw a die 20 times: >>> rng = np.random.default_rng() >>> rng.multinomial(20, [1/6.]*6, size=1) array([[4, 1, 7, 5, 2, 1]]) # random It landed 4 times on 1, once on 2, etc. Now, throw the die 20 times, and 20 times again: >>> rng.multinomial(20, [1/6.]*6, size=2) array([[3, 4, 3, 3, 4, 3], [2, 4, 3, 4, 0, 7]]) # random For the first run, we threw 3 times 1, 4 times 2, etc. For the second, we threw 2 times 1, 4 times 2, etc. Now, do one experiment throwing the die 10 times, and 10 times again, and another throwing the die 20 times, and 20 times again: >>> rng.multinomial([[10], [20]], [1/6.]*6, size=(2, 2)) array([[[2, 4, 0, 1, 2, 1], [1, 3, 0, 3, 1, 2]], [[1, 4, 4, 4, 4, 3], [3, 3, 2, 5, 5, 2]]]) # random The first array shows the outcomes of throwing the die 10 times, and the second shows the outcomes from throwing the die 20 times. A loaded die is more likely to land on number 6: >>> rng.multinomial(100, [1/7.]*5 + [2/7.]) array([11, 16, 14, 17, 16, 26]) # random Simulate 10 throws of a 4-sided die and 20 throws of a 6-sided die: >>> rng.multinomial([10, 20], [[1/4]*4 + [0]*2, [1/6]*6]) array([[2, 1, 4, 3, 0, 0], [3, 3, 3, 6, 1, 4]], dtype=int64) # random Generate categorical random variates from two categories where the first has 3 outcomes and the second has 2. >>> rng.multinomial(1, [[.1, .5, .4], [.3, .7, .0]]) array([[0, 0, 1], [0, 1, 0]], dtype=int64) # random `argmax(axis=-1)` is then used to return the categories.
>>> pvals = [[.1, .5, .4 ], [.3, .7, .0]] >>> rvs = rng.multinomial(1, pvals, size=(4,2)) >>> rvs.argmax(axis=-1) array([[0, 1], [2, 0], [2, 1], [2, 0]], dtype=int64) # random The same output dimension can be produced using broadcasting. >>> rvs = rng.multinomial([[1]] * 4, pvals) >>> rvs.argmax(axis=-1) array([[0, 1], [2, 0], [2, 1], [2, 0]], dtype=int64) # random The probability inputs should be normalized. As an implementation detail, the value of the last entry is ignored and assumed to take up any leftover probability mass, but this should not be relied on. A biased coin which has twice as much weight on one side as on the other should be sampled like so: >>> rng.multinomial(100, [1.0 / 3, 2.0 / 3]) # RIGHT array([38, 62]) # random not like: >>> rng.multinomial(100, [1.0, 2.0]) # WRONG Traceback (most recent call last): ValueError: pvals < 0, pvals > 1 or pvals contains NaNs # numpy.random.Generator.multivariate_hypergeometric method random.Generator.multivariate_hypergeometric(_colors_ , _nsample_ , _size =None_, _method ='marginals'_) Generate variates from a multivariate hypergeometric distribution. The multivariate hypergeometric distribution is a generalization of the hypergeometric distribution. Choose `nsample` items at random without replacement from a collection with `N` distinct types. `N` is the length of `colors`, and the values in `colors` are the number of occurrences of that type in the collection. The total number of items in the collection is `sum(colors)`. Each random variate generated by this function is a vector of length `N` holding the counts of the different types that occurred in the `nsample` items. The name `colors` comes from a common description of the distribution: it is the probability distribution of the number of marbles of each color selected without replacement from an urn containing marbles of different colors; `colors[i]` is the number of marbles in the urn with color `i`. 
Parameters: **colors** sequence of integers The number of each type of item in the collection from which a sample is drawn. The values in `colors` must be nonnegative. To avoid loss of precision in the algorithm, `sum(colors)` must be less than `10**9` when `method` is “marginals”. **nsample** int The number of items selected. `nsample` must not be greater than `sum(colors)`. **size** int or tuple of ints, optional The number of variates to generate, either an integer or a tuple holding the shape of the array of variates. If the given size is, e.g., `(k, m)`, then `k * m` variates are drawn, where one variate is a vector of length `len(colors)`, and the return value has shape `(k, m, len(colors))`. If [`size`](../../generated/numpy.size#numpy.size "numpy.size") is an integer, the output has shape `(size, len(colors))`. Default is None, in which case a single variate is returned as an array with shape `(len(colors),)`. **method** string, optional Specify the algorithm that is used to generate the variates. Must be ‘count’ or ‘marginals’ (the default). See the Notes for a description of the methods. Returns: **variates** ndarray Array of variates drawn from the multivariate hypergeometric distribution. See also [`hypergeometric`](numpy.random.generator.hypergeometric#numpy.random.Generator.hypergeometric "numpy.random.Generator.hypergeometric") Draw samples from the (univariate) hypergeometric distribution. #### Notes The two methods do not return the same sequence of variates. The “count” algorithm is roughly equivalent to the following numpy code: choices = np.repeat(np.arange(len(colors)), colors) selection = np.random.choice(choices, nsample, replace=False) variate = np.bincount(selection, minlength=len(colors)) The “count” algorithm uses a temporary array of integers with length `sum(colors)`. The “marginals” algorithm generates a variate by using repeated calls to the univariate hypergeometric sampler. 
It is roughly equivalent to: variate = np.zeros(len(colors), dtype=np.int64) # `remaining` is the cumulative sum of `colors` from the last # element to the first; e.g. if `colors` is [3, 1, 5], then # `remaining` is [9, 6, 5]. remaining = np.cumsum(colors[::-1])[::-1] for i in range(len(colors)-1): if nsample < 1: break variate[i] = hypergeometric(colors[i], remaining[i+1], nsample) nsample -= variate[i] variate[-1] = nsample The default method is “marginals”. For some cases (e.g. when `colors` contains relatively small integers), the “count” method can be significantly faster than the “marginals” method. If performance of the algorithm is important, test the two methods with typical inputs to decide which works best. #### Examples >>> colors = [16, 8, 4] >>> seed = 4861946401452 >>> gen = np.random.Generator(np.random.PCG64(seed)) >>> gen.multivariate_hypergeometric(colors, 6) array([5, 0, 1]) >>> gen.multivariate_hypergeometric(colors, 6, size=3) array([[5, 0, 1], [2, 2, 2], [3, 3, 0]]) >>> gen.multivariate_hypergeometric(colors, 6, size=(2, 2)) array([[[3, 2, 1], [3, 2, 1]], [[4, 1, 1], [3, 2, 1]]]) # numpy.random.Generator.multivariate_normal method random.Generator.multivariate_normal(_mean_ , _cov_ , _size =None_, _check_valid ='warn'_, _tol =1e-8_, _*_ , _method ='svd'_) Draw random samples from a multivariate normal distribution. The multivariate normal, multinormal or Gaussian distribution is a generalization of the one-dimensional normal distribution to higher dimensions. Such a distribution is specified by its mean and covariance matrix. These parameters are analogous to the mean (average or “center”) and variance (the squared standard deviation, or “width”) of the one-dimensional normal distribution. Parameters: **mean** 1-D array_like, of length N Mean of the N-dimensional distribution. **cov** 2-D array_like, of shape (N, N) Covariance matrix of the distribution. It must be symmetric and positive- semidefinite for proper sampling. 
**size** int or tuple of ints, optional Given a shape of, for example, `(m,n,k)`, `m*n*k` samples are generated, and packed in an `m`-by-`n`-by-`k` arrangement. Because each sample is `N`-dimensional, the output shape is `(m,n,k,N)`. If no shape is specified, a single (`N`-D) sample is returned. **check_valid**{ ‘warn’, ‘raise’, ‘ignore’ }, optional Behavior when the covariance matrix is not positive semidefinite. **tol** float, optional Tolerance when checking the singular values in covariance matrix. cov is cast to double before the check. **method**{ ‘svd’, ‘eigh’, ‘cholesky’}, optional The cov input is used to compute a factor matrix A such that `A @ A.T = cov`. This argument selects the method used to compute the factor matrix A. The default method ‘svd’ is the slowest but the most robust; ‘cholesky’ is the fastest but the least robust. The method ‘eigh’ uses eigendecomposition to compute A and falls between the other two in both speed and robustness. Returns: **out** ndarray The drawn samples, of shape _size_, if that was provided. If not, the shape is `(N,)`. In other words, each entry `out[i,j,...,:]` is an N-dimensional value drawn from the distribution. #### Notes The mean is a coordinate in N-dimensional space, which represents the location where samples are most likely to be generated. This is analogous to the peak of the bell curve for the one-dimensional or univariate normal distribution. Covariance indicates the level to which two variables vary together. From the multivariate normal distribution, we draw N-dimensional samples, \\(X = [x_1, x_2, ... x_N]\\). The covariance matrix element \\(C_{ij}\\) is the covariance of \\(x_i\\) and \\(x_j\\). The element \\(C_{ii}\\) is the variance of \\(x_i\\) (i.e. its “spread”).
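The `method` argument described above only selects how the factor matrix A with `A @ A.T == cov` is computed; the identity itself can be checked with a Cholesky factor. A minimal sketch using an illustrative covariance matrix:

```python
import numpy as np

# An arbitrary symmetric positive-definite covariance matrix.
cov = np.array([[2.0, 0.6],
                [0.6, 1.0]])

# A Cholesky factor is one valid choice of A with A @ A.T == cov,
# corresponding to method='cholesky'.
A = np.linalg.cholesky(cov)
assert np.allclose(A @ A.T, cov)
```

Samples are then generated as `mean + A @ z` for standard-normal `z`, which is why a less accurate factorization degrades the covariance of the samples.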
Instead of specifying the full covariance matrix, popular approximations include: * Spherical covariance ([`cov`](../../generated/numpy.cov#numpy.cov "numpy.cov") is a multiple of the identity matrix) * Diagonal covariance ([`cov`](../../generated/numpy.cov#numpy.cov "numpy.cov") has non-negative elements, and only on the diagonal) This geometrical property can be seen in two dimensions by plotting generated data-points: >>> mean = [0, 0] >>> cov = [[1, 0], [0, 100]] # diagonal covariance Diagonal covariance means that points are oriented along the x- or y-axis: >>> import matplotlib.pyplot as plt >>> rng = np.random.default_rng() >>> x, y = rng.multivariate_normal(mean, cov, 5000).T >>> plt.plot(x, y, 'x') >>> plt.axis('equal') >>> plt.show() Note that the covariance matrix must be positive semidefinite (a.k.a. nonnegative-definite). Otherwise, the behavior of this method is undefined and backwards compatibility is not guaranteed. This function internally uses linear algebra routines, and thus results may not be identical (even up to precision) across architectures, OSes, or even builds. For example, this is likely if `cov` has multiple equal singular values and `method` is `'svd'` (default). In this case, `method='cholesky'` may be more robust. #### References [1] Papoulis, A., “Probability, Random Variables, and Stochastic Processes,” 3rd ed., New York: McGraw-Hill, 1991. [2] Duda, R. O., Hart, P. E., and Stork, D. G., “Pattern Classification,” 2nd ed., New York: Wiley, 2001. #### Examples >>> mean = (1, 2) >>> cov = [[1, 0], [0, 1]] >>> rng = np.random.default_rng() >>> x = rng.multivariate_normal(mean, cov, (3, 3)) >>> x.shape (3, 3, 2) We can use a method other than the default to factorize cov: >>> y = rng.multivariate_normal(mean, cov, (3, 3), method='cholesky') >>> y.shape (3, 3, 2) Here we generate 800 samples from the bivariate normal distribution with mean [0, 0] and covariance matrix [[6, -3], [-3, 3.5]].
The expected variances of the first and second components of the sample are 6 and 3.5, respectively, and the expected correlation coefficient is -3/sqrt(6*3.5) ≈ -0.65465. >>> cov = np.array([[6, -3], [-3, 3.5]]) >>> pts = rng.multivariate_normal([0, 0], cov, size=800) Check that the mean, covariance, and correlation coefficient of the sample are close to the expected values: >>> pts.mean(axis=0) array([ 0.0326911 , -0.01280782]) # may vary >>> np.cov(pts.T) array([[ 5.96202397, -2.85602287], [-2.85602287, 3.47613949]]) # may vary >>> np.corrcoef(pts.T)[0, 1] -0.6273591314603949 # may vary We can visualize this data with a scatter plot. The orientation of the point cloud illustrates the negative correlation of the components of this sample. >>> import matplotlib.pyplot as plt >>> plt.plot(pts[:, 0], pts[:, 1], '.', alpha=0.5) >>> plt.axis('equal') >>> plt.grid() >>> plt.show() # numpy.random.Generator.negative_binomial method random.Generator.negative_binomial(_n_ , _p_ , _size =None_) Draw samples from a negative binomial distribution. Samples are drawn from a negative binomial distribution with specified parameters, `n` successes and `p` probability of success where `n` is > 0 and `p` is in the interval (0, 1]. Parameters: **n** float or array_like of floats Parameter of the distribution, > 0. **p** float or array_like of floats Parameter of the distribution. Must satisfy 0 < p <= 1. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `n` and `p` are both scalars. Otherwise, `np.broadcast(n, p).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized negative binomial distribution, where each sample is equal to N, the number of failures that occurred before a total of n successes was reached. 
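The Returns description above implies a sample mean of roughly n(1-p)/p failures before the n-th success; a quick seeded check with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, 0.5
s = rng.negative_binomial(n, p, 100000)

# Expected number of failures before the n-th success is n * (1 - p) / p.
expected = n * (1 - p) / p
assert abs(s.mean() - expected) < 0.2
```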
#### Notes The probability mass function of the negative binomial distribution is \\[P(N;n,p) = \frac{\Gamma(N+n)}{N!\Gamma(n)}p^{n}(1-p)^{N},\\] where \\(n\\) is the number of successes, \\(p\\) is the probability of success, \\(N+n\\) is the number of trials, and \\(\Gamma\\) is the gamma function. When \\(n\\) is an integer, \\(\frac{\Gamma(N+n)}{N!\Gamma(n)} = \binom{N+n-1}{N}\\), which is the more common form of this term in the pmf. The negative binomial distribution gives the probability of N failures given n successes, with a success on the last trial. If one throws a die repeatedly until the third time a “1” appears, then the probability distribution of the number of non-“1”s that appear before the third “1” is a negative binomial distribution. Because this method internally calls `Generator.poisson` with an intermediate random value, a ValueError is raised when the choice of \\(n\\) and \\(p\\) would result in the mean + 10 sigma of the sampled intermediate distribution exceeding the max acceptable value of the `Generator.poisson` method. This happens when \\(p\\) is too low (a lot of failures happen for every success) and \\(n\\) is too big (a lot of successes are allowed). Therefore, the \\(n\\) and \\(p\\) values must satisfy the constraint: \\[n\frac{1-p}{p}+10n\sqrt{n}\frac{1-p}{p}<2^{63}-1-10\sqrt{2^{63}-1},\\] where the left side of the equation is the derived mean + 10 sigma of a sample from the gamma distribution internally used as the \\(lam\\) parameter of a poisson sample, and the right side of the equation is the constraint for the maximum value of \\(lam\\) in `Generator.poisson`. #### References [1] Weisstein, Eric W. “Negative Binomial Distribution.” From MathWorld–A Wolfram Web Resource. [2] Wikipedia, “Negative binomial distribution”. #### Examples Draw samples from the distribution: A real-world example: a company drills wild-cat oil exploration wells, each with an estimated probability of success of 0.1.
What is the probability of having one success for each successive well, that is, what is the probability of a single success after drilling 5 wells, after 6 wells, etc.? >>> rng = np.random.default_rng() >>> s = rng.negative_binomial(1, 0.1, 100000) >>> for i in range(1, 11): ... probability = sum(s < i) / 100000. ... print(i, "wells drilled, probability of one success =", probability) # numpy.random.Generator.normal method random.Generator.normal(_loc =0.0_, _scale =1.0_, _size =None_) Draw random samples from a normal (Gaussian) distribution. #### References [1] Wikipedia, “Normal distribution”. [2] P. R. Peebles Jr., “Central Limit Theorem” in “Probability, Random Variables and Random Signal Principles”, 4th ed., 2001, pp. 51, 51, 125. #### Examples Draw samples from the distribution: >>> mu, sigma = 0, 0.1 # mean and standard deviation >>> rng = np.random.default_rng() >>> s = rng.normal(mu, sigma, 1000) Verify the mean and the standard deviation: >>> abs(mu - np.mean(s)) 0.0 # may vary >>> abs(sigma - np.std(s, ddof=1)) 0.0 # may vary Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, _ = plt.hist(s, 30, density=True) >>> plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * ... np.exp( - (bins - mu)**2 / (2 * sigma**2) ), ... linewidth=2, color='r') >>> plt.show() Two-by-four array of samples from the normal distribution with mean 3 and standard deviation 2.5: >>> rng = np.random.default_rng() >>> rng.normal(3, 2.5, size=(2, 4)) array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random # numpy.random.Generator.pareto method random.Generator.pareto(_a_ , _size =None_) Draw samples from a Pareto II (AKA Lomax) distribution with specified shape. Parameters: **a** float or array_like of floats Shape of the distribution. Must be positive. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the Pareto II distribution.
See also [`scipy.stats.pareto`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pareto.html#scipy.stats.pareto "\(in SciPy v1.14.1\)") Pareto I distribution [`scipy.stats.lomax`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lomax.html#scipy.stats.lomax "\(in SciPy v1.14.1\)") Lomax (Pareto II) distribution [`scipy.stats.genpareto`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genpareto.html#scipy.stats.genpareto "\(in SciPy v1.14.1\)") Generalized Pareto distribution #### Notes The probability density for the Pareto II distribution is \\[p(x) = \frac{a}{(x+1)^{a+1}}, \quad x \ge 0\\] where \\(a > 0\\) is the shape. The Pareto II distribution is a shifted and scaled version of the Pareto I distribution, which can be found in [`scipy.stats.pareto`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pareto.html#scipy.stats.pareto "\(in SciPy v1.14.1\)"). #### References [1] Francis Hunt and Paul Johnson, On the Pareto Distribution of Sourceforge projects. [2] Pareto, V. (1896). Course of Political Economy. Lausanne. [3] Reiss, R.D., Thomas, M.(2001), Statistical Analysis of Extreme Values, Birkhauser Verlag, Basel, pp 23-30. [4] Wikipedia, “Pareto distribution”. #### Examples Draw samples from the distribution: >>> a = 3. >>> rng = np.random.default_rng() >>> s = rng.pareto(a, 10000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> x = np.linspace(0, 3, 50) >>> pdf = a / (x+1)**(a+1) >>> plt.hist(s, bins=x, density=True, label='histogram') >>> plt.plot(x, pdf, linewidth=2, color='r', label='pdf') >>> plt.xlim(x.min(), x.max()) >>> plt.legend() >>> plt.show() # numpy.random.Generator.permutation method random.Generator.permutation(_x_ , _axis =0_) Randomly permute a sequence, or return a permuted range. Parameters: **x** int or array_like If `x` is an integer, randomly permute `np.arange(x)`.
If `x` is an array, make a copy and shuffle the elements randomly. **axis** int, optional The axis which `x` is shuffled along. Default is 0. Returns: **out** ndarray Permuted sequence or array range. #### Examples >>> rng = np.random.default_rng() >>> rng.permutation(10) array([1, 7, 4, 3, 0, 9, 2, 5, 8, 6]) # random >>> rng.permutation([1, 4, 9, 12, 15]) array([15, 1, 9, 4, 12]) # random >>> arr = np.arange(9).reshape((3, 3)) >>> rng.permutation(arr) array([[6, 7, 8], # random [0, 1, 2], [3, 4, 5]]) >>> rng.permutation("abc") Traceback (most recent call last): ... numpy.exceptions.AxisError: axis 0 is out of bounds for array of dimension 0 >>> arr = np.arange(9).reshape((3, 3)) >>> rng.permutation(arr, axis=1) array([[0, 2, 1], # random [3, 5, 4], [6, 8, 7]]) # numpy.random.Generator.permuted method random.Generator.permuted(_x_ , _axis =None_, _out =None_) Randomly permute `x` along axis `axis`. Unlike [`shuffle`](numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle"), each slice along the given axis is shuffled independently of the others. Parameters: **x** array_like, at least one-dimensional Array to be shuffled. **axis** int, optional Slices of `x` in this axis are shuffled. Each slice is shuffled independently of the others. If `axis` is None, the flattened array is shuffled. **out** ndarray, optional If given, this is the destination of the shuffled array. If `out` is None, a shuffled copy of the array is returned. Returns: ndarray If `out` is None, a shuffled copy of `x` is returned. 
Otherwise, the shuffled array is stored in `out`, and `out` is returned. See also [`shuffle`](numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle") [`permutation`](numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation") #### Notes An important distinction between the methods `shuffle` and `permuted` is how they treat the `axis` parameter; see [Handling the axis parameter](../generator#generator-handling-axis-parameter). #### Examples Create a [`numpy.random.Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance: >>> rng = np.random.default_rng() Create a test array: >>> x = np.arange(24).reshape(3, 8) >>> x array([[ 0, 1, 2, 3, 4, 5, 6, 7], [ 8, 9, 10, 11, 12, 13, 14, 15], [16, 17, 18, 19, 20, 21, 22, 23]]) Shuffle the elements within each row of `x`: >>> y = rng.permuted(x, axis=1) >>> y array([[ 4, 3, 6, 7, 1, 2, 5, 0], # random [15, 10, 14, 9, 12, 11, 8, 13], [17, 16, 20, 21, 18, 22, 23, 19]]) `x` has not been modified: >>> x array([[ 0, 1, 2, 3, 4, 5, 6, 7], [ 8, 9, 10, 11, 12, 13, 14, 15], [16, 17, 18, 19, 20, 21, 22, 23]]) To shuffle `x` in-place, pass `x` as the `out` parameter: >>> y = rng.permuted(x, axis=1, out=x) >>> x array([[ 3, 0, 4, 7, 1, 6, 2, 5], # random [ 8, 14, 13, 9, 12, 11, 15, 10], [17, 18, 16, 22, 19, 23, 20, 21]]) Note that when the `out` parameter is given, the return value is `out`: >>> y is x True # numpy.random.Generator.poisson method random.Generator.poisson(_lam =1.0_, _size =None_) Draw samples from a Poisson distribution. The Poisson distribution is the limit of the binomial distribution for large N. Parameters: **lam** float or array_like of floats Expected number of events occurring in a fixed-time interval, must be >= 0. A sequence must be broadcastable over the requested size. **size** int or tuple of ints, optional Output shape.
If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `lam` is a scalar. Otherwise, `np.array(lam).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Poisson distribution. #### Notes The probability mass function (PMF) of the Poisson distribution is \\[f(k; \lambda)=\frac{\lambda^k e^{-\lambda}}{k!}\\] For events with an expected separation \\(\lambda\\) the Poisson distribution \\(f(k; \lambda)\\) describes the probability of \\(k\\) events occurring within the observed interval \\(\lambda\\). Because the output is limited to the range of the C int64 type, a ValueError is raised when `lam` is within 10 sigma of the maximum representable value. #### References [1] Weisstein, Eric W. “Poisson Distribution.” From MathWorld–A Wolfram Web Resource. [2] Wikipedia, “Poisson distribution”, #### Examples Draw samples from the distribution: >>> rng = np.random.default_rng() >>> lam, size = 5, 10000 >>> s = rng.poisson(lam=lam, size=size) Verify the mean and variance, which should be approximately `lam`: >>> s.mean(), s.var() (4.9917, 5.1088311) # may vary Display the histogram and probability mass function: >>> import matplotlib.pyplot as plt >>> from scipy import stats >>> x = np.arange(0, 21) >>> pmf = stats.poisson.pmf(x, mu=lam) >>> plt.hist(s, bins=x, density=True, width=0.5) >>> plt.stem(x, pmf, 'C1-') >>> plt.show() Draw 100 values each for lambda 100 and 500: >>> s = rng.poisson(lam=(100., 500.), size=(100, 2)) # numpy.random.Generator.power method random.Generator.power(_a_ , _size =None_) Draws samples in [0, 1] from a power distribution with positive exponent a - 1. Also known as the power function distribution. Parameters: **a** float or array_like of floats Parameter of the distribution. Must be positive. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn.
If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized power distribution. Raises: ValueError If a <= 0. #### Notes The probability density function is \\[P(x; a) = ax^{a-1}, 0 \le x \le 1, a>0.\\] The power function distribution is just the inverse of the Pareto distribution. It may also be seen as a special case of the Beta distribution. It is used, for example, in modeling the over-reporting of insurance claims. #### References [1] Christian Kleiber, Samuel Kotz, “Statistical size distributions in economics and actuarial sciences”, Wiley, 2003. [2] Heckert, N. A. and Filliben, James J. “NIST Handbook 148: Dataplot Reference Manual, Volume 2: Let Subcommands and Library Functions”, National Institute of Standards and Technology Handbook Series, June 2003. #### Examples Draw samples from the distribution: >>> rng = np.random.default_rng() >>> a = 5. # shape >>> samples = 1000 >>> s = rng.power(a, samples) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, _ = plt.hist(s, bins=30) >>> x = np.linspace(0, 1, 100) >>> y = a*x**(a-1.) >>> normed_y = samples*np.diff(bins)[0]*y >>> plt.plot(x, normed_y) >>> plt.show() Compare the power function distribution to the inverse of the Pareto. 
>>> from scipy import stats >>> rvs = rng.power(5, 1000000) >>> rvsp = rng.pareto(5, 1000000) >>> xx = np.linspace(0,1,100) >>> powpdf = stats.powerlaw.pdf(xx,5) >>> plt.figure() >>> plt.hist(rvs, bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('power(5)') >>> plt.figure() >>> plt.hist(1./(1.+rvsp), bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('inverse of 1 + Generator.pareto(5)') >>> plt.figure() >>> plt.hist(1./(1.+rvsp), bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('inverse of stats.pareto(5)') # numpy.random.Generator.random method random.Generator.random(_size =None_, _dtype =np.float64_, _out =None_) Return random floats in the half-open interval [0.0, 1.0). Results are from the “continuous uniform” distribution over the stated interval. To sample \\(Unif[a, b), b > a\\) use [`uniform`](numpy.random.generator.uniform#numpy.random.Generator.uniform "numpy.random.Generator.uniform") or multiply the output of [`random`](../index#module-numpy.random "numpy.random") by `(b - a)` and add `a`: (b - a) * random() + a Parameters: **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. **dtype** dtype, optional Desired dtype of the result, only [`float64`](../../arrays.scalars#numpy.float64 "numpy.float64") and [`float32`](../../arrays.scalars#numpy.float32 "numpy.float32") are supported. Byteorder must be native. The default value is np.float64. **out** ndarray, optional Alternative output array in which to place the result. If size is not None, it must have the same shape as the provided size and must match the type of the output values. Returns: **out** float or ndarray of floats Array of random floats of shape [`size`](../../generated/numpy.size#numpy.size "numpy.size") (unless `size=None`, in which case a single float is returned). 
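The `(b - a) * random() + a` scaling recipe described for `random` can be checked numerically. The sketch below assumes only NumPy; the bounds `a = -5`, `b = 0` and the sample count are arbitrary illustration values:

```python
import numpy as np

rng = np.random.default_rng(12345)

# Sample Unif[-5, 0) two equivalent ways (a and b are illustrative bounds).
a, b = -5.0, 0.0
scaled = (b - a) * rng.random(100_000) + a   # scale-and-shift recipe
direct = rng.uniform(a, b, 100_000)          # built-in equivalent

# Both stay inside [a, b) and agree on summary statistics.
assert scaled.min() >= a and scaled.max() < b
assert direct.min() >= a and direct.max() < b
print(scaled.mean(), direct.mean())  # both near (a + b) / 2 = -2.5
```

Either form is fine in practice; `uniform` simply packages the scale-and-shift for you.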
See also [`uniform`](numpy.random.generator.uniform#numpy.random.Generator.uniform "numpy.random.Generator.uniform") Draw samples from the parameterized uniform distribution. #### Examples >>> rng = np.random.default_rng() >>> rng.random() 0.47108547995356098 # random >>> type(rng.random()) <class 'float'> >>> rng.random((5,)) array([ 0.30220482, 0.86820401, 0.1654503 , 0.11659149, 0.54323428]) # random Three-by-two array of random numbers from [-5, 0): >>> 5 * rng.random((3, 2)) - 5 array([[-3.99149989, -0.52338984], # random [-2.99091858, -0.79479508], [-1.23204345, -1.75224494]]) # numpy.random.Generator.rayleigh method random.Generator.rayleigh(_scale =1.0_, _size =None_) Draw samples from a Rayleigh distribution. The \\(\chi\\) and Weibull distributions are generalizations of the Rayleigh. Parameters: **scale** float or array_like of floats, optional Scale, also equals the mode. Must be non-negative. Default is 1. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `scale` is a scalar. Otherwise, `np.array(scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Rayleigh distribution. #### Notes The probability density function for the Rayleigh distribution is \\[P(x;scale) = \frac{x}{scale^2}e^{\frac{-x^2}{2 \cdotp scale^2}}\\] The Rayleigh distribution would arise, for example, if the East and North components of the wind velocity had identical zero-mean Gaussian distributions. Then the wind speed would have a Rayleigh distribution.
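The wind-speed remark in the Notes can be demonstrated with a short sketch. The component standard deviation `sigma = 2.0` and the sample count are arbitrary illustration values:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0  # common standard deviation of both components (illustrative)
n = 200_000

# East and North wind components: identical zero-mean Gaussians.
east = rng.normal(0.0, sigma, n)
north = rng.normal(0.0, sigma, n)
speed = np.hypot(east, north)  # wind speed = vector magnitude

# The magnitudes should match rayleigh(scale=sigma); compare the sample
# means with the theoretical Rayleigh mean, sigma * sqrt(pi / 2).
direct = rng.rayleigh(sigma, n)
print(speed.mean(), direct.mean(), sigma * np.sqrt(np.pi / 2))
```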
#### References [1] Brighton Webs Ltd., “Rayleigh Distribution,” [2] Wikipedia, “Rayleigh distribution” #### Examples Draw values from the distribution and plot the histogram: >>> from matplotlib.pyplot import hist >>> rng = np.random.default_rng() >>> values = hist(rng.rayleigh(3, 100000), bins=200, density=True) Wave heights tend to follow a Rayleigh distribution. If the mean wave height is 1 meter, what fraction of waves are likely to be larger than 3 meters? >>> meanvalue = 1 >>> modevalue = np.sqrt(2 / np.pi) * meanvalue >>> s = rng.rayleigh(modevalue, 1000000) The percentage of waves larger than 3 meters is: >>> 100.*sum(s>3)/1000000. 0.087300000000000003 # random # numpy.random.Generator.shuffle method random.Generator.shuffle(_x_ , _axis =0_) Modify an array or sequence in-place by shuffling its contents. The order of sub-arrays is changed but their contents remain the same. Parameters: **x** ndarray or MutableSequence The array, list or mutable sequence to be shuffled. **axis** int, optional The axis along which `x` is shuffled. Default is 0. It is only supported on [`ndarray`](../../generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") objects. Returns: None See also [`permuted`](numpy.random.generator.permuted#numpy.random.Generator.permuted "numpy.random.Generator.permuted") [`permutation`](numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation") #### Notes An important distinction between the methods `shuffle` and `permuted` is how they treat the `axis` parameter; see [Handling the axis parameter](../generator#generator-handling-axis-parameter).
#### Examples >>> rng = np.random.default_rng() >>> arr = np.arange(10) >>> arr array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> rng.shuffle(arr) >>> arr array([2, 0, 7, 5, 1, 4, 8, 9, 3, 6]) # random >>> arr = np.arange(9).reshape((3, 3)) >>> arr array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> rng.shuffle(arr) >>> arr array([[3, 4, 5], # random [6, 7, 8], [0, 1, 2]]) >>> arr = np.arange(9).reshape((3, 3)) >>> arr array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> rng.shuffle(arr, axis=1) >>> arr array([[2, 0, 1], # random [5, 3, 4], [8, 6, 7]]) # numpy.random.Generator.spawn method random.Generator.spawn(_n_children_) Create new independent child generators. See [SeedSequence spawning](../parallel#seedsequence-spawn) for additional notes on spawning children. New in version 1.25.0. Parameters: **n_children** int Returns: **child_generators** list of Generators Raises: TypeError When the underlying SeedSequence does not implement spawning. See also [`random.BitGenerator.spawn`](../bit_generators/generated/numpy.random.bitgenerator.spawn#numpy.random.BitGenerator.spawn "numpy.random.BitGenerator.spawn"), [`random.SeedSequence.spawn`](../bit_generators/generated/numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn") Equivalent method on the bit generator and seed sequence. [`bit_generator`](numpy.random.generator.bit_generator#numpy.random.Generator.bit_generator "numpy.random.Generator.bit_generator") The bit generator instance used by the generator. 
#### Examples Starting from a seeded default generator: >>> # High quality entropy created with: f"0x{secrets.randbits(128):x}" >>> entropy = 0x3034c61a9ae04ff8cb62ab8ec2c4b501 >>> rng = np.random.default_rng(entropy) Create two new generators for example for parallel execution: >>> child_rng1, child_rng2 = rng.spawn(2) Drawn numbers from each are independent but derived from the initial seeding entropy: >>> rng.uniform(), child_rng1.uniform(), child_rng2.uniform() (0.19029263503854454, 0.9475673279178444, 0.4702687338396767) It is safe to spawn additional children from the original `rng` or the children: >>> more_child_rngs = rng.spawn(20) >>> nested_spawn = child_rng1.spawn(20) # numpy.random.Generator.standard_cauchy method random.Generator.standard_cauchy(_size =None_) Draw samples from a standard Cauchy distribution with mode = 0. Also known as the Lorentz distribution. Parameters: **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns: **samples** ndarray or scalar The drawn samples. #### Notes The probability density function for the full Cauchy distribution is \\[P(x; x_0, \gamma) = \frac{1}{\pi \gamma \bigl[ 1+ (\frac{x-x_0}{\gamma})^2 \bigr] }\\] and the Standard Cauchy distribution just sets \\(x_0=0\\) and \\(\gamma=1\\) The Cauchy distribution arises in the solution to the driven harmonic oscillator problem, and also describes spectral line broadening. It also describes the distribution of values at which a line tilted at a random angle will cut the x axis. When studying hypothesis tests that assume normality, seeing how the tests perform on data from a Cauchy distribution is a good indicator of their sensitivity to a heavy-tailed distribution, since the Cauchy looks very much like a Gaussian distribution, but with heavier tails. 
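The heavy-tail comparison in the Notes can be quantified directly; the sample size below is an arbitrary illustration value:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

cauchy = rng.standard_cauchy(n)
normal = rng.standard_normal(n)

# For a standard Cauchy, P(|X| > 5) = (2 / pi) * arctan(1 / 5), about 0.126,
# while for a standard normal it is below 1e-6: far heavier tails.
frac_cauchy = np.mean(np.abs(cauchy) > 5)
frac_normal = np.mean(np.abs(normal) > 5)
print(frac_cauchy, frac_normal)  # roughly 0.126 vs. nearly 0
```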
#### References [1] NIST/SEMATECH e-Handbook of Statistical Methods, “Cauchy Distribution”, [2] Weisstein, Eric W. “Cauchy Distribution.” From MathWorld–A Wolfram Web Resource. [3] Wikipedia, “Cauchy distribution” #### Examples Draw samples and plot the distribution: >>> import matplotlib.pyplot as plt >>> rng = np.random.default_rng() >>> s = rng.standard_cauchy(1000000) >>> s = s[(s>-25) & (s<25)] # truncate distribution so it plots well >>> plt.hist(s, bins=100) >>> plt.show() # numpy.random.Generator.standard_exponential method random.Generator.standard_exponential(_size =None_, _dtype =np.float64_, _method ='zig'_, _out =None_) Draw samples from the standard exponential distribution. `standard_exponential` is identical to the exponential distribution with a scale parameter of 1. Parameters: **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. **dtype** dtype, optional Desired dtype of the result, only [`float64`](../../arrays.scalars#numpy.float64 "numpy.float64") and [`float32`](../../arrays.scalars#numpy.float32 "numpy.float32") are supported. Byteorder must be native. The default value is np.float64. **method** str, optional Either ‘inv’ or ‘zig’. ‘inv’ uses the default inverse CDF method. ‘zig’ uses the much faster Ziggurat method of Marsaglia and Tsang. **out** ndarray, optional Alternative output array in which to place the result. If size is not None, it must have the same shape as the provided size and must match the type of the output values. Returns: **out** float or ndarray Drawn samples. #### Examples Output a 3x8000 array: >>> rng = np.random.default_rng() >>> n = rng.standard_exponential((3, 8000)) # numpy.random.Generator.standard_gamma method random.Generator.standard_gamma(_shape_ , _size =None_, _dtype =np.float64_, _out =None_) Draw samples from a standard Gamma distribution. 
Samples are drawn from a Gamma distribution with specified parameters, shape (sometimes designated “k”) and scale=1. Parameters: **shape** float or array_like of floats Parameter, must be non-negative. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `shape` is a scalar. Otherwise, `np.array(shape).size` samples are drawn. **dtype** dtype, optional Desired dtype of the result, only [`float64`](../../arrays.scalars#numpy.float64 "numpy.float64") and [`float32`](../../arrays.scalars#numpy.float32 "numpy.float32") are supported. Byteorder must be native. The default value is np.float64. **out** ndarray, optional Alternative output array in which to place the result. If size is not None, it must have the same shape as the provided size and must match the type of the output values. Returns: **out** ndarray or scalar Drawn samples from the parameterized standard gamma distribution. See also [`scipy.stats.gamma`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. #### Notes The probability density for the Gamma distribution is \\[p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},\\] where \\(k\\) is the shape and \\(\theta\\) the scale, and \\(\Gamma\\) is the Gamma function. The Gamma distribution is often used to model the times to failure of electronic components, and arises naturally in processes for which the waiting times between Poisson distributed events are relevant. #### References [1] Weisstein, Eric W. “Gamma Distribution.” From MathWorld–A Wolfram Web Resource. [2] Wikipedia, “Gamma distribution”, #### Examples Draw samples from the distribution: >>> shape, scale = 2., 1. 
# mean and width >>> rng = np.random.default_rng() >>> s = rng.standard_gamma(shape, 1000000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> import scipy.special as sps >>> count, bins, _ = plt.hist(s, 50, density=True) >>> y = bins**(shape-1) * ((np.exp(-bins/scale))/ ... (sps.gamma(shape) * scale**shape)) >>> plt.plot(bins, y, linewidth=2, color='r') >>> plt.show() # numpy.random.Generator.standard_normal method random.Generator.standard_normal(_size =None_, _dtype =np.float64_, _out =None_) Draw samples from a standard Normal distribution (mean=0, stdev=1). Parameters: **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. **dtype** dtype, optional Desired dtype of the result, only [`float64`](../../arrays.scalars#numpy.float64 "numpy.float64") and [`float32`](../../arrays.scalars#numpy.float32 "numpy.float32") are supported. Byteorder must be native. The default value is np.float64. **out** ndarray, optional Alternative output array in which to place the result. If size is not None, it must have the same shape as the provided size and must match the type of the output values. Returns: **out** float or ndarray A floating-point array of shape `size` of drawn samples, or a single sample if `size` was not specified. See also [`normal`](numpy.random.generator.normal#numpy.random.Generator.normal "numpy.random.Generator.normal") Equivalent function with additional `loc` and `scale` arguments for setting the mean and standard deviation. #### Notes For random samples from the normal distribution with mean `mu` and standard deviation `sigma`, use one of: mu + sigma * rng.standard_normal(size=...) rng.normal(mu, sigma, size=...) 
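The equivalence of the two recipes in the Notes can be checked statistically; `mu = 3.0` and `sigma = 2.5` are arbitrary illustration values:

```python
import numpy as np

mu, sigma = 3.0, 2.5  # illustrative mean and standard deviation
rng = np.random.default_rng(7)
n = 500_000

# The two recipes from the Notes draw from the same distribution.
a = mu + sigma * rng.standard_normal(size=n)
b = rng.normal(mu, sigma, size=n)

print(a.mean(), b.mean())  # both near mu = 3.0
print(a.std(), b.std())    # both near sigma = 2.5
```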
#### Examples >>> rng = np.random.default_rng() >>> rng.standard_normal() 2.1923875335537315 # random >>> s = rng.standard_normal(8000) >>> s array([ 0.6888893 , 0.78096262, -0.89086505, ..., 0.49876311, # random -0.38672696, -0.4685006 ]) # random >>> s.shape (8000,) >>> s = rng.standard_normal(size=(3, 4, 2)) >>> s.shape (3, 4, 2) Two-by-four array of samples from the normal distribution with mean 3 and standard deviation 2.5: >>> 3 + 2.5 * rng.standard_normal(size=(2, 4)) array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random # numpy.random.Generator.standard_t method random.Generator.standard_t(_df_ , _size =None_) Draw samples from a standard Student’s t distribution with `df` degrees of freedom. A special case of the hyperbolic distribution. As `df` gets large, the result resembles that of the standard normal distribution ([`standard_normal`](numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal")). Parameters: **df** float or array_like of floats Degrees of freedom, must be > 0. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `df` is a scalar. Otherwise, `np.array(df).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized standard Student’s t distribution. #### Notes The probability density function for the t distribution is \\[P(x, df) = \frac{\Gamma(\frac{df+1}{2})}{\sqrt{\pi df} \Gamma(\frac{df}{2})}\Bigl( 1+\frac{x^2}{df} \Bigr)^{-(df+1)/2}\\] The t test is based on an assumption that the data come from a Normal distribution. The t test provides a way to test whether the sample mean (that is the mean calculated from the data) is a good estimate of the true mean. 
The derivation of the t-distribution was first published in 1908 by William Gosset while working for the Guinness Brewery in Dublin. Due to proprietary issues, he had to publish under a pseudonym, and so he used the name Student. #### References [1] Dalgaard, Peter, “Introductory Statistics With R”, Springer, 2002. [2] Wikipedia, “Student’s t-distribution” [https://en.wikipedia.org/wiki/Student's_t-distribution](https://en.wikipedia.org/wiki/Student's_t-distribution) #### Examples From Dalgaard page 83 [1], suppose the daily energy intake for 11 women in kilojoules (kJ) is: >>> intake = np.array([5260., 5470, 5640, 6180, 6390, 6515, 6805, 7515, \ ... 7515, 8230, 8770]) Does their energy intake deviate systematically from the recommended value of 7725 kJ? Our null hypothesis will be the absence of deviation, and the alternative hypothesis will be the presence of an effect that could be either positive or negative, hence making our test 2-tailed. Because we are estimating the mean and we have N=11 values in our sample, we have N-1=10 degrees of freedom. We set our confidence level to 95% and compute the t statistic using the empirical mean and empirical standard deviation of our intake. We use a ddof of 1 to base the computation of our empirical standard deviation on an unbiased estimate of the variance (note: the final estimate is not unbiased due to the concave nature of the square root). >>> np.mean(intake) 6753.636363636364 >>> intake.std(ddof=1) 1142.1232221373727 >>> t = (np.mean(intake)-7725)/(intake.std(ddof=1)/np.sqrt(len(intake))) >>> t -2.8207540608310198 We draw 1000000 samples from Student’s t distribution with the appropriate degrees of freedom. >>> import matplotlib.pyplot as plt >>> rng = np.random.default_rng() >>> s = rng.standard_t(10, size=1000000) >>> h = plt.hist(s, bins=100, density=True) Does our t statistic land in one of the two critical regions found at both tails of the distribution?
>>> np.sum(np.abs(t) < np.abs(s)) / float(len(s)) 0.018318 #random < 0.05, statistic is in critical region The probability value for this 2-tailed test is about 1.83%, which is lower than the 5% pre-determined significance threshold. Therefore, the probability of observing values as extreme as our intake conditionally on the null hypothesis being true is too low, and we reject the null hypothesis of no deviation. # numpy.random.Generator.triangular method random.Generator.triangular(_left_ , _mode_ , _right_ , _size =None_) Draw samples from the triangular distribution over the interval `[left, right]`. The triangular distribution is a continuous probability distribution with lower limit left, peak at mode, and upper limit right. Unlike the other distributions, these parameters directly define the shape of the pdf. Parameters: **left** float or array_like of floats Lower limit. **mode** float or array_like of floats The value where the peak of the distribution occurs. The value must fulfill the condition `left <= mode <= right`. **right** float or array_like of floats Upper limit, must be larger than `left`. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `left`, `mode`, and `right` are all scalars. Otherwise, `np.broadcast(left, mode, right).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized triangular distribution. #### Notes The probability density function for the triangular distribution is \\[\begin{split}P(x;l, m, r) = \begin{cases} \frac{2(x-l)}{(r-l)(m-l)}& \text{for $l \leq x \leq m$},\\\ \frac{2(r-x)}{(r-l)(r-m)}& \text{for $m \leq x \leq r$},\\\ 0& \text{otherwise}. \end{cases}\end{split}\\] The triangular distribution is often used in ill-defined problems where the underlying distribution is not known, but some knowledge of the limits and mode exists. 
Often it is used in simulations. #### References [1] Wikipedia, “Triangular distribution” #### Examples Draw values from the distribution and plot the histogram: >>> import matplotlib.pyplot as plt >>> rng = np.random.default_rng() >>> h = plt.hist(rng.triangular(-3, 0, 8, 100000), bins=200, ... density=True) >>> plt.show() # numpy.random.Generator.uniform method random.Generator.uniform(_low =0.0_, _high =1.0_, _size =None_) Draw samples from a uniform distribution. Samples are uniformly distributed over the half-open interval `[low, high)` (includes low, but excludes high). In other words, any value within the given interval is equally likely to be drawn by `uniform`. Parameters: **low** float or array_like of floats, optional Lower boundary of the output interval. All values generated will be greater than or equal to low. The default value is 0. **high** float or array_like of floats Upper boundary of the output interval. All values generated will be less than high. The high limit may be included in the returned array of floats due to floating-point rounding in the equation `low + (high-low) * random_sample()`. high - low must be non-negative. The default value is 1.0. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `low` and `high` are both scalars. Otherwise, `np.broadcast(low, high).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized uniform distribution. See also [`integers`](numpy.random.generator.integers#numpy.random.Generator.integers "numpy.random.Generator.integers") Discrete uniform distribution, yielding integers. [`random`](../index#module-numpy.random "numpy.random") Floats uniformly distributed over `[0, 1)`. 
#### Notes The probability density function of the uniform distribution is \\[p(x) = \frac{1}{b - a}\\] anywhere within the interval `[a, b)`, and zero elsewhere. When `high` == `low`, values of `low` will be returned. #### Examples Draw samples from the distribution: >>> rng = np.random.default_rng() >>> s = rng.uniform(-1,0,1000) All values are within the given interval: >>> np.all(s >= -1) True >>> np.all(s < 0) True Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, _ = plt.hist(s, 15, density=True) >>> plt.plot(bins, np.ones_like(bins), linewidth=2, color='r') >>> plt.show() # numpy.random.Generator.vonmises method random.Generator.vonmises(_mu_ , _kappa_ , _size =None_) Draw samples from a von Mises distribution. Samples are drawn from a von Mises distribution with specified mode (mu) and concentration (kappa), on the interval [-pi, pi]. The von Mises distribution (also known as the circular normal distribution) is a continuous probability distribution on the unit circle. It may be thought of as the circular analogue of the normal distribution. Parameters: **mu** float or array_like of floats Mode (“center”) of the distribution. **kappa** float or array_like of floats Concentration of the distribution, has to be >=0. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `mu` and `kappa` are both scalars. Otherwise, `np.broadcast(mu, kappa).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized von Mises distribution. See also [`scipy.stats.vonmises`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.vonmises.html#scipy.stats.vonmises "\(in SciPy v1.14.1\)") probability density function, distribution, or cumulative density function, etc. 
#### Notes The probability density for the von Mises distribution is \\[p(x) = \frac{e^{\kappa cos(x-\mu)}}{2\pi I_0(\kappa)},\\] where \\(\mu\\) is the mode and \\(\kappa\\) the concentration, and \\(I_0(\kappa)\\) is the modified Bessel function of order 0. The von Mises is named for Richard Edler von Mises, who was born in Austria- Hungary, in what is now the Ukraine. He fled to the United States in 1939 and became a professor at Harvard. He worked in probability theory, aerodynamics, fluid mechanics, and philosophy of science. #### References [1] Abramowitz, M. and Stegun, I. A. (Eds.). “Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing,” New York: Dover, 1972. [2] von Mises, R., “Mathematical Theory of Probability and Statistics”, New York: Academic Press, 1964. #### Examples Draw samples from the distribution: >>> mu, kappa = 0.0, 4.0 # mean and concentration >>> rng = np.random.default_rng() >>> s = rng.vonmises(mu, kappa, 1000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> from scipy.special import i0 >>> plt.hist(s, 50, density=True) >>> x = np.linspace(-np.pi, np.pi, num=51) >>> y = np.exp(kappa*np.cos(x-mu))/(2*np.pi*i0(kappa)) >>> plt.plot(x, y, linewidth=2, color='r') >>> plt.show() # numpy.random.Generator.wald method random.Generator.wald(_mean_ , _scale_ , _size =None_) Draw samples from a Wald, or inverse Gaussian, distribution. As the scale approaches infinity, the distribution becomes more like a Gaussian. Some references claim that the Wald is an inverse Gaussian with mean equal to 1, but this is by no means universal. The inverse Gaussian distribution was first studied in relationship to Brownian motion. In 1956 M.C.K. Tweedie used the name inverse Gaussian because there is an inverse relationship between the time to cover a unit distance and distance covered in unit time. 
Parameters: **mean** float or array_like of floats Distribution mean, must be > 0. **scale** float or array_like of floats Scale parameter, must be > 0. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `mean` and `scale` are both scalars. Otherwise, `np.broadcast(mean, scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Wald distribution. #### Notes The probability density function for the Wald distribution is \\[P(x;mean,scale) = \sqrt{\frac{scale}{2\pi x^3}}e^{\frac{-scale(x-mean)^2}{2\cdotp mean^2x}}\\] As noted above, the inverse Gaussian distribution first arose from attempts to model Brownian motion. It is also a competitor to the Weibull for use in reliability modeling and modeling stock returns and interest rate processes. #### References [1] Brighton Webs Ltd., Wald Distribution, [2] Chhikara, Raj S., and Folks, J. Leroy, “The Inverse Gaussian Distribution: Theory, Methodology, and Applications”, CRC Press, 1988. [3] Wikipedia, “Inverse Gaussian distribution” #### Examples Draw values from the distribution and plot the histogram: >>> import matplotlib.pyplot as plt >>> rng = np.random.default_rng() >>> h = plt.hist(rng.wald(3, 2, 100000), bins=200, density=True) >>> plt.show() # numpy.random.Generator.weibull method random.Generator.weibull(_a_ , _size =None_) Draw samples from a Weibull distribution. Draw samples from a 1-parameter Weibull distribution with the given shape parameter `a`. \\[X = (-\ln(U))^{1/a}\\] Here, U is drawn from the uniform distribution over (0,1]. The more common 2-parameter Weibull, including a scale parameter \\(\lambda\\) is just \\(X = \lambda(-\ln(U))^{1/a}\\). Parameters: **a** float or array_like of floats Shape parameter of the distribution. Must be nonnegative. **size** int or tuple of ints, optional Output shape.
If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Weibull distribution. See also [`scipy.stats.weibull_max`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.weibull_max.html#scipy.stats.weibull_max "\(in SciPy v1.14.1\)") [`scipy.stats.weibull_min`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.weibull_min.html#scipy.stats.weibull_min "\(in SciPy v1.14.1\)") [`scipy.stats.genextreme`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genextreme.html#scipy.stats.genextreme "\(in SciPy v1.14.1\)") [`gumbel`](numpy.random.generator.gumbel#numpy.random.Generator.gumbel "numpy.random.Generator.gumbel") #### Notes The Weibull (or Type III asymptotic extreme value distribution for smallest values, SEV Type III, or Rosin-Rammler distribution) is one of a class of Generalized Extreme Value (GEV) distributions used in modeling extreme value problems. This class includes the Gumbel and Frechet distributions. The probability density for the Weibull distribution is \\[p(x) = \frac{a} {\lambda}(\frac{x}{\lambda})^{a-1}e^{-(x/\lambda)^a},\\] where \\(a\\) is the shape and \\(\lambda\\) the scale. The function has its peak (the mode) at \\(\lambda(\frac{a-1}{a})^{1/a}\\). When `a = 1`, the Weibull distribution reduces to the exponential distribution. #### References [1] Waloddi Weibull, Royal Technical University, Stockholm, 1939 “A Statistical Theory Of The Strength Of Materials”, Ingeniorsvetenskapsakademiens Handlingar Nr 151, 1939, Generalstabens Litografiska Anstalts Forlag, Stockholm. [2] Waloddi Weibull, “A Statistical Distribution Function of Wide Applicability”, Journal Of Applied Mechanics ASME Paper 1951. 
[3] Wikipedia, “Weibull distribution”, #### Examples Draw samples from the distribution: >>> rng = np.random.default_rng() >>> a = 5. # shape >>> s = rng.weibull(a, 1000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> def weibull(x, n, a): ... return (a / n) * (x / n)**(a - 1) * np.exp(-(x / n)**a) >>> count, bins, _ = plt.hist(rng.weibull(5., 1000)) >>> x = np.linspace(0, 2, 1000) >>> bin_spacing = np.mean(np.diff(bins)) >>> plt.plot(x, weibull(x, 1., 5.) * bin_spacing * s.size, label='Weibull PDF') >>> plt.legend() >>> plt.show() # numpy.random.Generator.zipf method random.Generator.zipf(_a_ , _size =None_) Draw samples from a Zipf distribution. Samples are drawn from a Zipf distribution with specified parameter `a` > 1. The Zipf distribution (also known as the zeta distribution) is a discrete probability distribution that satisfies Zipf’s law: the frequency of an item is inversely proportional to its rank in a frequency table. Parameters: **a** float or array_like of floats Distribution parameter. Must be greater than 1. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Zipf distribution. See also [`scipy.stats.zipf`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.zipf.html#scipy.stats.zipf "\(in SciPy v1.14.1\)") probability density function, distribution, or cumulative density function, etc. #### Notes The probability mass function (PMF) for the Zipf distribution is \\[p(k) = \frac{k^{-a}}{\zeta(a)},\\] for integers \\(k \geq 1\\), where \\(\zeta\\) is the Riemann Zeta function. 
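The PMF above can be sanity-checked empirically; the sketch below uses the closed form \\(\zeta(4) = \pi^4/90\\) so that no SciPy import is needed.

```python
import numpy as np

# Sketch: for a = 4, P(k=1) = 1/zeta(4) = 90/pi**4, roughly 0.924.
rng = np.random.default_rng(0)
a = 4.0
s = rng.zipf(a, size=200_000)

p1 = (s == 1).mean()      # empirical P(k=1)
print(p1, 90 / np.pi**4)  # should be close
```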
It is named for the American linguist George Kingsley Zipf, who noted that the frequency of any word in a sample of a language is inversely proportional to its rank in the frequency table. #### References [1] Zipf, G. K., “Selected Studies of the Principle of Relative Frequency in Language,” Cambridge, MA: Harvard Univ. Press, 1932. #### Examples Draw samples from the distribution: >>> a = 4.0 >>> n = 20000 >>> rng = np.random.default_rng() >>> s = rng.zipf(a, size=n) Display the histogram of the samples, along with the expected histogram based on the probability density function: >>> import matplotlib.pyplot as plt >>> from scipy.special import zeta [`bincount`](../../generated/numpy.bincount#numpy.bincount "numpy.bincount") provides a fast histogram for small integers. >>> count = np.bincount(s) >>> k = np.arange(1, s.max() + 1) >>> plt.bar(k, count[1:], alpha=0.5, label='sample count') >>> plt.plot(k, n*(k**-a)/zeta(a), 'k.-', alpha=0.5, ... label='expected count') >>> plt.semilogy() >>> plt.grid(alpha=0.4) >>> plt.legend() >>> plt.title(f'Zipf sample, a={a}, size={n}') >>> plt.show() # numpy.random.geometric random.geometric(_p_ , _size =None_) Draw samples from the geometric distribution. Bernoulli trials are experiments with one of two outcomes: success or failure (an example of such an experiment is flipping a coin). The geometric distribution models the number of trials that must be run in order to achieve success. It is therefore supported on the positive integers, `k = 1, 2, ...`. The probability mass function of the geometric distribution is \\[f(k) = (1 - p)^{k - 1} p\\] where `p` is the probability of success of an individual trial. Note New code should use the [`geometric`](numpy.random.generator.geometric#numpy.random.Generator.geometric "numpy.random.Generator.geometric") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). 
Parameters: **p** float or array_like of floats The probability of success of an individual trial. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `p` is a scalar. Otherwise, `np.array(p).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized geometric distribution. See also [`random.Generator.geometric`](numpy.random.generator.geometric#numpy.random.Generator.geometric "numpy.random.Generator.geometric") which should be used for new code. #### Examples Draw ten thousand values from the geometric distribution, with the probability of an individual success equal to 0.35: >>> z = np.random.geometric(p=0.35, size=10000) How many trials succeeded after a single run? >>> (z == 1).sum() / 10000. 0.34889999999999999 #random # numpy.random.get_state random.get_state(_legacy =True_) Return a tuple representing the internal state of the generator. For more details, see [`set_state`](numpy.random.set_state#numpy.random.set_state "numpy.random.set_state"). Parameters: **legacy** bool, optional Flag indicating to return a legacy tuple state when the BitGenerator is MT19937, instead of a dict. Raises ValueError if the underlying bit generator is not an instance of MT19937. Returns: **out**{tuple(str, ndarray of 624 uints, int, int, float), dict} If legacy is True, the returned tuple has the following items: 1. the string ‘MT19937’. 2. a 1-D array of 624 unsigned integer keys. 3. an integer `pos`. 4. an integer `has_gauss`. 5. a float `cached_gaussian`. If `legacy` is False, or the BitGenerator is not MT19937, then state is returned as a dictionary. 
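A short sketch of the round trip with `set_state` (this uses the legacy `RandomState` interface described above, where MT19937 is the default bit generator):

```python
import numpy as np

# Sketch: save the legacy MT19937 state, draw, rewind, and draw again.
np.random.seed(42)
state = np.random.get_state()   # legacy=True: the 5-item tuple
print(state[0])                 # 'MT19937'
print(state[1].shape)           # (624,)

first = np.random.random(3)
np.random.set_state(state)      # restore the saved state
again = np.random.random(3)
print(np.array_equal(first, again))  # True
```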
See also [`set_state`](numpy.random.set_state#numpy.random.set_state "numpy.random.set_state") #### Notes [`set_state`](numpy.random.set_state#numpy.random.set_state "numpy.random.set_state") and `get_state` are not needed to work with any of the random distributions in NumPy. If the internal state is manually altered, the user should know exactly what they are doing. # numpy.random.gumbel random.gumbel(_loc =0.0_, _scale =1.0_, _size =None_) Draw samples from a Gumbel distribution. Draw samples from a Gumbel distribution with specified location and scale. For more information on the Gumbel distribution, see Notes and References below. Note New code should use the [`gumbel`](numpy.random.generator.gumbel#numpy.random.Generator.gumbel "numpy.random.Generator.gumbel") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **loc** float or array_like of floats, optional The location of the mode of the distribution. Default is 0. **scale** float or array_like of floats, optional The scale parameter of the distribution. Default is 1. Must be non-negative. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars. Otherwise, `np.broadcast(loc, scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Gumbel distribution.
See also [`scipy.stats.gumbel_l`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gumbel_l.html#scipy.stats.gumbel_l "\(in SciPy v1.14.1\)") [`scipy.stats.gumbel_r`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gumbel_r.html#scipy.stats.gumbel_r "\(in SciPy v1.14.1\)") [`scipy.stats.genextreme`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genextreme.html#scipy.stats.genextreme "\(in SciPy v1.14.1\)") [`weibull`](numpy.random.weibull#numpy.random.weibull "numpy.random.weibull") [`random.Generator.gumbel`](numpy.random.generator.gumbel#numpy.random.Generator.gumbel "numpy.random.Generator.gumbel") which should be used for new code. #### Notes The Gumbel (or Smallest Extreme Value (SEV) or the Smallest Extreme Value Type I) distribution is one of a class of Generalized Extreme Value (GEV) distributions used in modeling extreme value problems. The Gumbel is a special case of the Extreme Value Type I distribution for maximums from distributions with “exponential-like” tails. The probability density for the Gumbel distribution is \\[p(x) = \frac{e^{-(x - \mu)/ \beta}}{\beta} e^{ -e^{-(x - \mu)/ \beta}},\\] where \\(\mu\\) is the mode, a location parameter, and \\(\beta\\) is the scale parameter. The Gumbel (named for German mathematician Emil Julius Gumbel) was used very early in the hydrology literature, for modeling the occurrence of flood events. It is also used for modeling maximum wind speed and rainfall rates. It is a “fat-tailed” distribution - the probability of an event in the tail of the distribution is larger than if one used a Gaussian, hence the surprisingly frequent occurrence of 100-year floods. Floods were initially modeled as a Gaussian process, which underestimated the frequency of extreme events. It is one of a class of extreme value distributions, the Generalized Extreme Value (GEV) distributions, which also includes the Weibull and Frechet. 
The function has a mean of \\(\mu + 0.57721\beta\\) and a variance of \\(\frac{\pi^2}{6}\beta^2\\). #### References [1] Gumbel, E. J., “Statistics of Extremes,” New York: Columbia University Press, 1958. [2] Reiss, R.-D. and Thomas, M., “Statistical Analysis of Extreme Values from Insurance, Finance, Hydrology and Other Fields,” Basel: Birkhauser Verlag, 2001. #### Examples Draw samples from the distribution: >>> mu, beta = 0, 0.1 # location and scale >>> s = np.random.gumbel(mu, beta, 1000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 30, density=True) >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta) ... * np.exp( -np.exp( -(bins - mu) /beta) ), ... linewidth=2, color='r') >>> plt.show() Show how an extreme value distribution can arise from a Gaussian process and compare to a Gaussian: >>> means = [] >>> maxima = [] >>> for i in range(0,1000) : ... a = np.random.normal(mu, beta, 1000) ... means.append(a.mean()) ... maxima.append(a.max()) >>> count, bins, ignored = plt.hist(maxima, 30, density=True) >>> beta = np.std(maxima) * np.sqrt(6) / np.pi >>> mu = np.mean(maxima) - 0.57721*beta >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta) ... * np.exp(-np.exp(-(bins - mu)/beta)), ... linewidth=2, color='r') >>> plt.plot(bins, 1/(beta * np.sqrt(2 * np.pi)) ... * np.exp(-(bins - mu)**2 / (2 * beta**2)), ... linewidth=2, color='g') >>> plt.show() # numpy.random.hypergeometric random.hypergeometric(_ngood_ , _nbad_ , _nsample_ , _size =None_) Draw samples from a Hypergeometric distribution. Samples are drawn from a hypergeometric distribution with specified parameters, `ngood` (ways to make a good selection), `nbad` (ways to make a bad selection), and `nsample` (number of items sampled, which is less than or equal to the sum `ngood + nbad`). 
Note New code should use the [`hypergeometric`](numpy.random.generator.hypergeometric#numpy.random.Generator.hypergeometric "numpy.random.Generator.hypergeometric") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **ngood** int or array_like of ints Number of ways to make a good selection. Must be nonnegative. **nbad** int or array_like of ints Number of ways to make a bad selection. Must be nonnegative. **nsample** int or array_like of ints Number of items sampled. Must be at least 1 and at most `ngood + nbad`. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `ngood`, `nbad`, and `nsample` are all scalars. Otherwise, `np.broadcast(ngood, nbad, nsample).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized hypergeometric distribution. Each sample is the number of good items within a randomly selected subset of size `nsample` taken from a set of `ngood` good items and `nbad` bad items. See also [`scipy.stats.hypergeom`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.hypergeom.html#scipy.stats.hypergeom "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. [`random.Generator.hypergeometric`](numpy.random.generator.hypergeometric#numpy.random.Generator.hypergeometric "numpy.random.Generator.hypergeometric") which should be used for new code. #### Notes The probability mass function (PMF) for the Hypergeometric distribution is \\[P(x) = \frac{\binom{g}{x}\binom{b}{n-x}}{\binom{g+b}{n}},\\] where \\(0 \le x \le n\\) and \\(n-b \le x \le g\\) for P(x) the probability of `x` good results in the drawn sample, g = `ngood`, b = `nbad`, and n = `nsample`. 
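The mean of this PMF is \\(n g/(g+b)\\); as a quick sketch (using the `Generator` method recommended above), the sample mean can be checked against it:

```python
import numpy as np

# Sketch: sample mean of hypergeometric draws vs. n*g/(g+b).
rng = np.random.default_rng(5)
ngood, nbad, nsample = 100, 2, 10
s = rng.hypergeometric(ngood, nbad, nsample, 100_000)

expected = nsample * ngood / (ngood + nbad)  # about 9.804
print(s.mean(), expected)
```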
Consider an urn with black and white marbles in it, `ngood` of them are black and `nbad` are white. If you draw `nsample` balls without replacement, then the hypergeometric distribution describes the distribution of black balls in the drawn sample. Note that this distribution is very similar to the binomial distribution, except that in this case, samples are drawn without replacement, whereas in the Binomial case samples are drawn with replacement (or the sample space is infinite). As the sample space becomes large, this distribution approaches the binomial. #### References [1] Lentner, Marvin, “Elementary Applied Statistics”, Bogden and Quigley, 1972. [2] Weisstein, Eric W. “Hypergeometric Distribution.” From MathWorld–A Wolfram Web Resource. [3] Wikipedia, “Hypergeometric distribution”, #### Examples Draw samples from the distribution: >>> ngood, nbad, nsamp = 100, 2, 10 # number of good, number of bad, and number of samples >>> s = np.random.hypergeometric(ngood, nbad, nsamp, 1000) >>> from matplotlib.pyplot import hist >>> hist(s) # note that it is very unlikely to grab both bad items Suppose you have an urn with 15 white and 15 black marbles. If you pull 15 marbles at random, how likely is it that 12 or more of them are one color? >>> s = np.random.hypergeometric(15, 15, 15, 100000) >>> sum(s>=12)/100000. + sum(s<=3)/100000. # answer = 0.003 ... pretty unlikely! # numpy.random.laplace random.laplace(_loc =0.0_, _scale =1.0_, _size =None_) Draw samples from the Laplace or double exponential distribution with specified location (or mean) and scale (decay). The Laplace distribution is similar to the Gaussian/normal distribution, but is sharper at the peak and has fatter tails. It represents the difference between two independent, identically distributed exponential random variables. 
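The last claim can be illustrated directly. The sketch below (using the `Generator` interface) compares the variance of a difference of two exponentials with direct Laplace draws; both should be near \\(2\lambda^2\\):

```python
import numpy as np

# Sketch: Laplace as the difference of two i.i.d. exponentials.
rng = np.random.default_rng(7)
scale = 1.5
diff = rng.exponential(scale, 500_000) - rng.exponential(scale, 500_000)
direct = rng.laplace(0.0, scale, 500_000)

# Both variances should be close to 2 * scale**2 = 4.5.
print(diff.var(), direct.var())
```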
Note New code should use the [`laplace`](numpy.random.generator.laplace#numpy.random.Generator.laplace "numpy.random.Generator.laplace") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **loc** float or array_like of floats, optional The position, \\(\mu\\), of the distribution peak. Default is 0. **scale** float or array_like of floats, optional \\(\lambda\\), the exponential decay. Default is 1. Must be non-negative. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars. Otherwise, `np.broadcast(loc, scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Laplace distribution. See also [`random.Generator.laplace`](numpy.random.generator.laplace#numpy.random.Generator.laplace "numpy.random.Generator.laplace") which should be used for new code. #### Notes It has the probability density function \\[f(x; \mu, \lambda) = \frac{1}{2\lambda} \exp\left(-\frac{|x - \mu|}{\lambda}\right).\\] The first law of Laplace, from 1774, states that the frequency of an error can be expressed as an exponential function of the absolute magnitude of the error, which leads to the Laplace distribution. For many problems in economics and health sciences, this distribution seems to model the data better than the standard Gaussian distribution. #### References [1] Abramowitz, M. and Stegun, I. A. (Eds.). “Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing,” New York: Dover, 1972. [2] Kotz, Samuel, et al., “The Laplace Distribution and Generalizations,” Birkhauser, 2001. [3] Weisstein, Eric W. “Laplace Distribution.” From MathWorld–A Wolfram Web Resource.
[4] Wikipedia, “Laplace distribution”, #### Examples Draw samples from the distribution >>> loc, scale = 0., 1. >>> s = np.random.laplace(loc, scale, 1000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 30, density=True) >>> x = np.arange(-8., 8., .01) >>> pdf = np.exp(-abs(x-loc)/scale)/(2.*scale) >>> plt.plot(x, pdf) Plot Gaussian for comparison: >>> g = (1/(scale * np.sqrt(2 * np.pi)) * ... np.exp(-(x - loc)**2 / (2 * scale**2))) >>> plt.plot(x,g) # numpy.random.logistic random.logistic(_loc =0.0_, _scale =1.0_, _size =None_) Draw samples from a logistic distribution. Samples are drawn from a logistic distribution with specified parameters, loc (location or mean, also median), and scale (>0). Note New code should use the [`logistic`](numpy.random.generator.logistic#numpy.random.Generator.logistic "numpy.random.Generator.logistic") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **loc** float or array_like of floats, optional Parameter of the distribution. Default is 0. **scale** float or array_like of floats, optional Parameter of the distribution. Must be non-negative. Default is 1. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars. Otherwise, `np.broadcast(loc, scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized logistic distribution. See also [`scipy.stats.logistic`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.logistic.html#scipy.stats.logistic "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. 
[`random.Generator.logistic`](numpy.random.generator.logistic#numpy.random.Generator.logistic "numpy.random.Generator.logistic") which should be used for new code. #### Notes The probability density for the Logistic distribution is \\[P(x) = \frac{e^{-(x-\mu)/s}}{s(1+e^{-(x-\mu)/s})^2},\\] where \\(\mu\\) = location and \\(s\\) = scale. The Logistic distribution is used in Extreme Value problems where it can act as a mixture of Gumbel distributions, in Epidemiology, and by the World Chess Federation (FIDE) where it is used in the Elo ranking system, assuming the performance of each player is a logistically distributed random variable. #### References [1] Reiss, R.-D. and Thomas M. (2001), “Statistical Analysis of Extreme Values, from Insurance, Finance, Hydrology and Other Fields,” Birkhauser Verlag, Basel, pp 132-133. [2] Weisstein, Eric W. “Logistic Distribution.” From MathWorld–A Wolfram Web Resource. [3] Wikipedia, “Logistic distribution”, #### Examples Draw samples from the distribution: >>> loc, scale = 10, 1 >>> s = np.random.logistic(loc, scale, 10000) >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, bins=50) # plot against distribution >>> def logist(x, loc, scale): ... return np.exp((loc-x)/scale)/(scale*(1+np.exp((loc-x)/scale))**2) >>> lgst_val = logist(bins, loc, scale) >>> plt.plot(bins, lgst_val * count.max() / lgst_val.max()) >>> plt.show() # numpy.random.lognormal random.lognormal(_mean =0.0_, _sigma =1.0_, _size =None_) Draw samples from a log-normal distribution. Draw samples from a log-normal distribution with specified mean, standard deviation, and array shape. Note that the mean and standard deviation are not the values for the distribution itself, but of the underlying normal distribution it is derived from.
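This relationship between the parameters and the underlying normal can be sketched directly (using the `Generator` interface): exponentiating normal draws gives the same distribution as lognormal draws, and the log of the samples recovers the underlying normal parameters.

```python
import numpy as np

# Sketch: lognormal(mu, sigma) draws behave like exp(normal(mu, sigma)),
# so log of the samples recovers the underlying normal parameters.
rng = np.random.default_rng(3)
mu, sigma = 3.0, 1.0
direct = rng.lognormal(mu, sigma, 200_000)
via_exp = np.exp(rng.normal(mu, sigma, 200_000))

print(np.log(direct).mean(), np.log(via_exp).mean())  # both close to mu
print(np.log(direct).std(), np.log(via_exp).std())    # both close to sigma
```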
Note New code should use the [`lognormal`](numpy.random.generator.lognormal#numpy.random.Generator.lognormal "numpy.random.Generator.lognormal") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **mean** float or array_like of floats, optional Mean value of the underlying normal distribution. Default is 0. **sigma** float or array_like of floats, optional Standard deviation of the underlying normal distribution. Must be non-negative. Default is 1. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `mean` and `sigma` are both scalars. Otherwise, `np.broadcast(mean, sigma).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized log-normal distribution. See also [`scipy.stats.lognorm`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lognorm.html#scipy.stats.lognorm "\(in SciPy v1.14.1\)") probability density function, distribution, cumulative density function, etc. [`random.Generator.lognormal`](numpy.random.generator.lognormal#numpy.random.Generator.lognormal "numpy.random.Generator.lognormal") which should be used for new code. #### Notes A variable `x` has a log-normal distribution if `log(x)` is normally distributed. The probability density function for the log-normal distribution is: \\[p(x) = \frac{1}{\sigma x \sqrt{2\pi}} e^{-\frac{(\ln(x)-\mu)^2}{2\sigma^2}}\\] where \\(\mu\\) is the mean and \\(\sigma\\) is the standard deviation of the normally distributed logarithm of the variable.
A log-normal distribution results if a random variable is the _product_ of a large number of independent, identically-distributed variables in the same way that a normal distribution results if the variable is the _sum_ of a large number of independent, identically-distributed variables. #### References [1] Limpert, E., Stahel, W. A., and Abbt, M., “Log-normal Distributions across the Sciences: Keys and Clues,” BioScience, Vol. 51, No. 5, May, 2001. [2] Reiss, R.D. and Thomas, M., “Statistical Analysis of Extreme Values,” Basel: Birkhauser Verlag, 2001, pp. 31-32. #### Examples Draw samples from the distribution: >>> mu, sigma = 3., 1. # mean and standard deviation >>> s = np.random.lognormal(mu, sigma, 1000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 100, density=True, align='mid') >>> x = np.linspace(min(bins), max(bins), 10000) >>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2)) ... / (x * sigma * np.sqrt(2 * np.pi))) >>> plt.plot(x, pdf, linewidth=2, color='r') >>> plt.axis('tight') >>> plt.show() Demonstrate that taking the products of random samples from a normal distribution can be fit well by a log-normal probability density function. >>> # Generate a thousand samples: each is the product of 100 random >>> # values, drawn from a normal distribution. >>> b = [] >>> for i in range(1000): ... a = 10. + np.random.standard_normal(100) ... b.append(np.prod(a)) >>> b = np.array(b) / np.min(b) # scale values to be positive >>> count, bins, ignored = plt.hist(b, 100, density=True, align='mid') >>> sigma = np.std(np.log(b)) >>> mu = np.mean(np.log(b)) >>> x = np.linspace(min(bins), max(bins), 10000) >>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2)) ... 
/ (x * sigma * np.sqrt(2 * np.pi))) >>> plt.plot(x, pdf, color='r', linewidth=2) >>> plt.show() # numpy.random.logseries random.logseries(_p_ , _size =None_) Draw samples from a logarithmic series distribution. Samples are drawn from a log series distribution with specified shape parameter, 0 <= `p` < 1. Note New code should use the [`logseries`](numpy.random.generator.logseries#numpy.random.Generator.logseries "numpy.random.Generator.logseries") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **p** float or array_like of floats Shape parameter for the distribution. Must be in the range [0, 1). **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `p` is a scalar. Otherwise, `np.array(p).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized logarithmic series distribution. See also [`scipy.stats.logser`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.logser.html#scipy.stats.logser "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. [`random.Generator.logseries`](numpy.random.generator.logseries#numpy.random.Generator.logseries "numpy.random.Generator.logseries") which should be used for new code. #### Notes The probability density for the Log Series distribution is \\[P(k) = \frac{-p^k}{k \ln(1-p)},\\] where p = probability. The log series distribution is frequently used to represent species richness and occurrence, first proposed by Fisher, Corbet, and Williams in 1943 [2]. It may also be used to model the numbers of occupants seen in cars [3]. 
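As a quick empirical check of the PMF above at \\(k = 1\\), where \\(P(1) = -p/\ln(1-p)\\) (a sketch using the `Generator` method):

```python
import numpy as np

# Sketch: empirical P(k=1) vs. the PMF value -p / ln(1 - p).
rng = np.random.default_rng(11)
p = 0.6
s = rng.logseries(p, 200_000)

expected = -p / np.log(1 - p)   # about 0.655
print((s == 1).mean(), expected)
```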
#### References [1] Buzas, Martin A.; Culver, Stephen J., Understanding regional species diversity through the log series distribution of occurrences: BIODIVERSITY RESEARCH Diversity & Distributions, Volume 5, Number 5, September 1999, pp. 187-195(9). [2] Fisher, R.A., A.S. Corbet, and C.B. Williams. 1943. The relation between the number of species and the number of individuals in a random sample of an animal population. Journal of Animal Ecology, 12:42-58. [3] D. J. Hand, F. Daly, D. Lunn, E. Ostrowski, A Handbook of Small Data Sets, CRC Press, 1994. [4] Wikipedia, “Logarithmic distribution”, #### Examples Draw samples from the distribution: >>> a = .6 >>> s = np.random.logseries(a, 10000) >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s) # plot against distribution >>> def logseries(k, p): ... return -p**k/(k*np.log(1-p)) >>> plt.plot(bins, logseries(bins, a)*count.max()/ ... logseries(bins, a).max(), 'r') >>> plt.show() # numpy.random.multinomial random.multinomial(_n_ , _pvals_ , _size =None_) Draw samples from a multinomial distribution. The multinomial distribution is a multivariate generalization of the binomial distribution. Take an experiment with one of `p` possible outcomes. An example of such an experiment is throwing a die, where the outcome can be 1 through 6. Each sample drawn from the distribution represents `n` such experiments. Its values, `X_i = [X_0, X_1, ..., X_p]`, represent the number of times the outcome was `i`. Note New code should use the [`multinomial`](numpy.random.generator.multinomial#numpy.random.Generator.multinomial "numpy.random.Generator.multinomial") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Warning This function defaults to the C-long dtype, which is 32bit on Windows and otherwise 64bit on 64bit platforms (and 32bit on 32bit ones).
Since NumPy 2.0, NumPy’s default integer is 32bit on 32bit platforms and 64bit on 64bit platforms. Parameters: **n** int Number of experiments. **pvals** sequence of floats, length p Probabilities of each of the `p` different outcomes. These must sum to 1 (however, the last element is always assumed to account for the remaining probability, as long as `sum(pvals[:-1]) <= 1`). **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns: **out** ndarray The drawn samples, of shape _size_ , if that was provided. If not, the shape is `(N,)`. In other words, each entry `out[i,j,...,:]` is an N-dimensional value drawn from the distribution. See also [`random.Generator.multinomial`](numpy.random.generator.multinomial#numpy.random.Generator.multinomial "numpy.random.Generator.multinomial") which should be used for new code. #### Examples Throw a die 20 times: >>> np.random.multinomial(20, [1/6.]*6, size=1) array([[4, 1, 7, 5, 2, 1]]) # random It landed 4 times on 1, once on 2, etc. Now, throw the die 20 times, and 20 times again: >>> np.random.multinomial(20, [1/6.]*6, size=2) array([[3, 4, 3, 3, 4, 3], # random [2, 4, 3, 4, 0, 7]]) For the first run, we threw 3 times 1, 4 times 2, etc. For the second, we threw 2 times 1, 4 times 2, etc. A loaded die is more likely to land on number 6: >>> np.random.multinomial(100, [1/7.]*5 + [2/7.]) array([11, 16, 14, 17, 16, 26]) # random The probability inputs should be normalized. As an implementation detail, the value of the last entry is ignored and assumed to take up any leftover probability mass, but this should not be relied on.
A biased coin which has twice as much weight on one side as on the other should be sampled like so: >>> np.random.multinomial(100, [1.0 / 3, 2.0 / 3]) # RIGHT array([38, 62]) # random not like: >>> np.random.multinomial(100, [1.0, 2.0]) # WRONG Traceback (most recent call last): ValueError: pvals < 0, pvals > 1 or pvals contains NaNs # numpy.random.multivariate_normal random.multivariate_normal(_mean_ , _cov_ , _size =None_, _check_valid ='warn'_, _tol =1e-8_) Draw random samples from a multivariate normal distribution. The multivariate normal, multinormal or Gaussian distribution is a generalization of the one-dimensional normal distribution to higher dimensions. Such a distribution is specified by its mean and covariance matrix. These parameters are analogous to the mean (average or “center”) and variance (standard deviation, or “width,” squared) of the one-dimensional normal distribution. Note New code should use the [`multivariate_normal`](numpy.random.generator.multivariate_normal#numpy.random.Generator.multivariate_normal "numpy.random.Generator.multivariate_normal") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **mean** 1-D array_like, of length N Mean of the N-dimensional distribution. **cov** 2-D array_like, of shape (N, N) Covariance matrix of the distribution. It must be symmetric and positive- semidefinite for proper sampling. **size** int or tuple of ints, optional Given a shape of, for example, `(m,n,k)`, `m*n*k` samples are generated, and packed in an `m`-by-`n`-by-`k` arrangement. Because each sample is `N`-dimensional, the output shape is `(m,n,k,N)`. If no shape is specified, a single (`N`-D) sample is returned. **check_valid**{ ‘warn’, ‘raise’, ‘ignore’ }, optional Behavior when the covariance matrix is not positive semidefinite. **tol** float, optional Tolerance when checking the singular values in covariance matrix. 
cov is cast to double before the check. Returns: **out** ndarray The drawn samples, of shape _size_ , if that was provided. If not, the shape is `(N,)`. In other words, each entry `out[i,j,...,:]` is an N-dimensional value drawn from the distribution. See also [`random.Generator.multivariate_normal`](numpy.random.generator.multivariate_normal#numpy.random.Generator.multivariate_normal "numpy.random.Generator.multivariate_normal") which should be used for new code. #### Notes The mean is a coordinate in N-dimensional space, which represents the location where samples are most likely to be generated. This is analogous to the peak of the bell curve for the one-dimensional or univariate normal distribution. Covariance indicates the level to which two variables vary together. From the multivariate normal distribution, we draw N-dimensional samples, \\(X = [x_1, x_2, ... x_N]\\). The covariance matrix element \\(C_{ij}\\) is the covariance of \\(x_i\\) and \\(x_j\\). The element \\(C_{ii}\\) is the variance of \\(x_i\\) (i.e. its “spread”). Instead of specifying the full covariance matrix, popular approximations include: * Spherical covariance ([`cov`](../../generated/numpy.cov#numpy.cov "numpy.cov") is a multiple of the identity matrix) * Diagonal covariance ([`cov`](../../generated/numpy.cov#numpy.cov "numpy.cov") has non-negative elements, and only on the diagonal) This geometrical property can be seen in two dimensions by plotting generated data-points: >>> mean = [0, 0] >>> cov = [[1, 0], [0, 100]] # diagonal covariance Diagonal covariance means that points are oriented along x or y-axis: >>> import matplotlib.pyplot as plt >>> x, y = np.random.multivariate_normal(mean, cov, 5000).T >>> plt.plot(x, y, 'x') >>> plt.axis('equal') >>> plt.show() Note that the covariance matrix must be positive semidefinite (a.k.a. nonnegative-definite). Otherwise, the behavior of this method is undefined and backwards compatibility is not guaranteed. 
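A minimal sketch (not part of the original docs) of how the `check_valid` parameter controls this behavior for a non-positive-semidefinite covariance matrix:

```python
import warnings
import numpy as np

mean = [0, 0]
bad_cov = [[1, 2], [2, 1]]  # symmetric, but eigenvalues are 3 and -1: not PSD

# 'warn' (the default) emits a RuntimeWarning but still returns samples
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    samples = np.random.multivariate_normal(mean, bad_cov, size=10,
                                            check_valid='warn')
print(samples.shape)  # (10, 2)

# 'raise' turns the same condition into a ValueError
raised = False
try:
    np.random.multivariate_normal(mean, bad_cov, check_valid='raise')
except ValueError:
    raised = True
print(raised)  # True
```

With `check_valid='ignore'` the same call would return samples silently, but their distribution is then undefined, as noted above.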
#### References [1] Papoulis, A., “Probability, Random Variables, and Stochastic Processes,” 3rd ed., New York: McGraw-Hill, 1991. [2] Duda, R. O., Hart, P. E., and Stork, D. G., “Pattern Classification,” 2nd ed., New York: Wiley, 2001. #### Examples >>> mean = (1, 2) >>> cov = [[1, 0], [0, 1]] >>> x = np.random.multivariate_normal(mean, cov, (3, 3)) >>> x.shape (3, 3, 2) Here we generate 800 samples from the bivariate normal distribution with mean [0, 0] and covariance matrix [[6, -3], [-3, 3.5]]. The expected variances of the first and second components of the sample are 6 and 3.5, respectively, and the expected correlation coefficient is -3/sqrt(6*3.5) ≈ -0.65465. >>> cov = np.array([[6, -3], [-3, 3.5]]) >>> pts = np.random.multivariate_normal([0, 0], cov, size=800) Check that the mean, covariance, and correlation coefficient of the sample are close to the expected values: >>> pts.mean(axis=0) array([ 0.0326911 , -0.01280782]) # may vary >>> np.cov(pts.T) array([[ 5.96202397, -2.85602287], [-2.85602287, 3.47613949]]) # may vary >>> np.corrcoef(pts.T)[0, 1] -0.6273591314603949 # may vary We can visualize this data with a scatter plot. The orientation of the point cloud illustrates the negative correlation of the components of this sample. >>> import matplotlib.pyplot as plt >>> plt.plot(pts[:, 0], pts[:, 1], '.', alpha=0.5) >>> plt.axis('equal') >>> plt.grid() >>> plt.show() # numpy.random.negative_binomial random.negative_binomial(_n_ , _p_ , _size =None_) Draw samples from a negative binomial distribution. Samples are drawn from a negative binomial distribution with specified parameters, `n` successes and `p` probability of success where `n` is > 0 and `p` is in the interval [0, 1]. 
Note New code should use the [`negative_binomial`](numpy.random.generator.negative_binomial#numpy.random.Generator.negative_binomial "numpy.random.Generator.negative_binomial") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **n** float or array_like of floats Parameter of the distribution, > 0. **p** float or array_like of floats Parameter of the distribution, >= 0 and <=1. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `n` and `p` are both scalars. Otherwise, `np.broadcast(n, p).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized negative binomial distribution, where each sample is equal to N, the number of failures that occurred before a total of n successes was reached. Warning This function returns the C-long dtype, which is 32bit on windows and otherwise 64bit on 64bit platforms (and 32bit on 32bit ones). Since NumPy 2.0, NumPy’s default integer is 32bit on 32bit platforms and 64bit on 64bit platforms. See also [`random.Generator.negative_binomial`](numpy.random.generator.negative_binomial#numpy.random.Generator.negative_binomial "numpy.random.Generator.negative_binomial") which should be used for new code. #### Notes The probability mass function of the negative binomial distribution is \\[P(N;n,p) = \frac{\Gamma(N+n)}{N!\Gamma(n)}p^{n}(1-p)^{N},\\] where \\(n\\) is the number of successes, \\(p\\) is the probability of success, \\(N+n\\) is the number of trials, and \\(\Gamma\\) is the gamma function. When \\(n\\) is an integer, \\(\frac{\Gamma(N+n)}{N!\Gamma(n)} = \binom{N+n-1}{N}\\), which is the more common form of this term in the pmf. 
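As a numerical sanity check on the pmf above (an illustrative aside, not part of the original docs): the expected number of failures before the n-th success is n(1-p)/p, which a large sample should approximate.

```python
import numpy as np

np.random.seed(42)  # arbitrary seed, for reproducibility
n, p = 10, 0.5
s = np.random.negative_binomial(n, p, 200000)

# theoretical mean number of failures: n * (1 - p) / p = 10.0
print(round(s.mean(), 2))
```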
The negative binomial distribution gives the probability of N failures given n successes, with a success on the last trial. If one throws a die repeatedly until the third time a “1” appears, then the probability distribution of the number of non-“1”s that appear before the third “1” is a negative binomial distribution. #### References [1] Weisstein, Eric W. “Negative Binomial Distribution.” From MathWorld–A Wolfram Web Resource. [2] Wikipedia, “Negative binomial distribution”, #### Examples Draw samples from the distribution: A real world example. A company drills wild-cat oil exploration wells, each with an estimated probability of success of 0.1. What is the probability of having one success for each successive well, that is what is the probability of a single success after drilling 5 wells, after 6 wells, etc.? >>> s = np.random.negative_binomial(1, 0.1, 100000) >>> for i in range(1, 11): ... probability = sum(s < i) / 100000. ... print(i, "wells drilled, probability of one success =", probability) # numpy.random.noncentral_chisquare random.noncentral_chisquare(_df_ , _nonc_ , _size =None_) Draw samples from a noncentral chi-square distribution. The noncentral \\(\chi^2\\) distribution is a generalization of the \\(\chi^2\\) distribution. Note New code should use the [`noncentral_chisquare`](numpy.random.generator.noncentral_chisquare#numpy.random.Generator.noncentral_chisquare "numpy.random.Generator.noncentral_chisquare") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **df** float or array_like of floats Degrees of freedom, must be > 0. **nonc** float or array_like of floats Non-centrality, must be non-negative. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `df` and `nonc` are both scalars. Otherwise, `np.broadcast(df, nonc).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized noncentral chi-square distribution. See also [`random.Generator.noncentral_chisquare`](numpy.random.generator.noncentral_chisquare#numpy.random.Generator.noncentral_chisquare "numpy.random.Generator.noncentral_chisquare") which should be used for new code. #### Notes The probability density function for the noncentral Chi-square distribution is \\[P(x;df,nonc) = \sum^{\infty}_{i=0} \frac{e^{-nonc/2}(nonc/2)^{i}}{i!} P_{Y_{df+2i}}(x),\\] where \\(Y_{q}\\) is the Chi-square with q degrees of freedom.
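This density implies a mean of df + nonc; a quick numerical check (an illustrative aside, not part of the original docs):

```python
import numpy as np

np.random.seed(0)  # arbitrary seed, for reproducibility
df, nonc = 3, 20
s = np.random.noncentral_chisquare(df, nonc, 100000)

# the distribution's mean is df + nonc = 23; the sample mean should be close
print(round(s.mean(), 1))
```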
#### References [1] Wikipedia, “Noncentral chi-squared distribution” #### Examples Draw values from the distribution and plot the histogram: >>> import matplotlib.pyplot as plt >>> values = plt.hist(np.random.noncentral_chisquare(3, 20, 100000), ... bins=200, density=True) >>> plt.show() Draw values from a noncentral chisquare with very small noncentrality, and compare to a chisquare. >>> plt.figure() >>> values = plt.hist(np.random.noncentral_chisquare(3, .0000001, 100000), ... bins=np.arange(0., 25, .1), density=True) >>> values2 = plt.hist(np.random.chisquare(3, 100000), ... bins=np.arange(0., 25, .1), density=True) >>> plt.plot(values[1][0:-1], values[0]-values2[0], 'ob') >>> plt.show() Demonstrate how large values of non-centrality lead to a more symmetric distribution. >>> plt.figure() >>> values = plt.hist(np.random.noncentral_chisquare(3, 20, 100000), ... bins=200, density=True) >>> plt.show() # numpy.random.noncentral_f random.noncentral_f(_dfnum_ , _dfden_ , _nonc_ , _size =None_) Draw samples from the noncentral F distribution. Samples are drawn from an F distribution with specified parameters, `dfnum` (degrees of freedom in numerator) and `dfden` (degrees of freedom in denominator), where both parameters > 1. `nonc` is the non-centrality parameter. Note New code should use the [`noncentral_f`](numpy.random.generator.noncentral_f#numpy.random.Generator.noncentral_f "numpy.random.Generator.noncentral_f") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **dfnum** float or array_like of floats Numerator degrees of freedom, must be > 0. **dfden** float or array_like of floats Denominator degrees of freedom, must be > 0. **nonc** float or array_like of floats Non-centrality parameter, the sum of the squares of the numerator means, must be >= 0. **size** int or tuple of ints, optional Output shape.
If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `dfnum`, `dfden`, and `nonc` are all scalars. Otherwise, `np.broadcast(dfnum, dfden, nonc).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized noncentral Fisher distribution. See also [`random.Generator.noncentral_f`](numpy.random.generator.noncentral_f#numpy.random.Generator.noncentral_f "numpy.random.Generator.noncentral_f") which should be used for new code. #### Notes When calculating the power of an experiment (power = probability of rejecting the null hypothesis when a specific alternative is true) the non-central F statistic becomes important. When the null hypothesis is true, the F statistic follows a central F distribution. When the null hypothesis is not true, then it follows a non-central F statistic. #### References [1] Weisstein, Eric W. “Noncentral F-Distribution.” From MathWorld–A Wolfram Web Resource. [2] Wikipedia, “Noncentral F-distribution”, #### Examples In a study, testing for a specific alternative to the null hypothesis requires use of the Noncentral F distribution. We need to calculate the area in the tail of the distribution that exceeds the value of the F distribution for the null hypothesis. We’ll plot the two probability distributions for comparison. >>> dfnum = 3 # between group deg of freedom >>> dfden = 20 # within groups degrees of freedom >>> nonc = 3.0 >>> nc_vals = np.random.noncentral_f(dfnum, dfden, nonc, 1000000) >>> NF = np.histogram(nc_vals, bins=50, density=True) >>> c_vals = np.random.f(dfnum, dfden, 1000000) >>> F = np.histogram(c_vals, bins=50, density=True) >>> import matplotlib.pyplot as plt >>> plt.plot(F[1][1:], F[0]) >>> plt.plot(NF[1][1:], NF[0]) >>> plt.show() # numpy.random.normal random.normal(_loc =0.0_, _scale =1.0_, _size =None_) Draw random samples from a normal (Gaussian) distribution. 
The probability density function of the normal distribution, first derived by De Moivre and 200 years later by both Gauss and Laplace independently [2], is often called the bell curve because of its characteristic shape (see the example below). The normal distribution occurs often in nature. For example, it describes the commonly occurring distribution of samples influenced by a large number of tiny, random disturbances, each with its own unique distribution [2]. Note New code should use the [`normal`](numpy.random.generator.normal#numpy.random.Generator.normal "numpy.random.Generator.normal") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **loc** float or array_like of floats Mean (“centre”) of the distribution. **scale** float or array_like of floats Standard deviation (spread or “width”) of the distribution. Must be non-negative. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars. Otherwise, `np.broadcast(loc, scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized normal distribution. See also [`scipy.stats.norm`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. [`random.Generator.normal`](numpy.random.generator.normal#numpy.random.Generator.normal "numpy.random.Generator.normal") which should be used for new code. #### Notes The probability density for the Gaussian distribution is \\[p(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }} e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },\\] where \\(\mu\\) is the mean and \\(\sigma\\) the standard deviation.
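As a numerical illustration of this density (an aside, not part of the original docs): roughly 68.27% of draws should fall within one standard deviation of the mean.

```python
import numpy as np

np.random.seed(0)  # arbitrary seed, for reproducibility
mu, sigma = 0.0, 0.1
s = np.random.normal(mu, sigma, 100000)

# fraction of samples within one standard deviation of the mean (~0.6827)
frac = np.mean(np.abs(s - mu) < sigma)
print(round(frac, 3))
```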
The square of the standard deviation, \\(\sigma^2\\), is called the variance. The function has its peak at the mean, and its “spread” increases with the standard deviation (the function reaches 0.607 times its maximum at \\(x + \sigma\\) and \\(x - \sigma\\) [2]). This implies that normal is more likely to return samples lying close to the mean, rather than those far away. #### References [1] Wikipedia, “Normal distribution”, [2] (1,2,3) P. R. Peebles Jr., “Central Limit Theorem” in “Probability, Random Variables and Random Signal Principles”, 4th ed., 2001, pp. 51, 51, 125. #### Examples Draw samples from the distribution: >>> mu, sigma = 0, 0.1 # mean and standard deviation >>> s = np.random.normal(mu, sigma, 1000) Verify the mean and the standard deviation: >>> abs(mu - np.mean(s)) 0.0 # may vary >>> abs(sigma - np.std(s, ddof=1)) 0.1 # may vary Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 30, density=True) >>> plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * ... np.exp( - (bins - mu)**2 / (2 * sigma**2) ), ... linewidth=2, color='r') >>> plt.show() Two-by-four array of samples from the normal distribution with mean 3 and standard deviation 2.5: >>> np.random.normal(3, 2.5, size=(2, 4)) array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random # numpy.random.pareto random.pareto(_a_ , _size =None_) Draw samples from a Pareto II or Lomax distribution with specified shape. The Lomax or Pareto II distribution is a shifted Pareto distribution. The classical Pareto distribution can be obtained from the Lomax distribution by adding 1 and multiplying by the scale parameter `m` (see Notes). The smallest value of the Lomax distribution is zero while for the classical Pareto distribution it is `mu`, where the standard Pareto distribution has location `mu = 1`. 
Lomax can also be considered as a simplified version of the Generalized Pareto distribution (available in SciPy), with the scale set to one and the location set to zero. The Pareto distribution must be greater than zero, and is unbounded above. It is also known as the “80-20 rule”. In this distribution, 80 percent of the weights are in the lowest 20 percent of the range, while the other 20 percent fill the remaining 80 percent of the range. Note New code should use the [`pareto`](numpy.random.generator.pareto#numpy.random.Generator.pareto "numpy.random.Generator.pareto") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **a** float or array_like of floats Shape of the distribution. Must be positive. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Pareto distribution. See also [`scipy.stats.lomax`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lomax.html#scipy.stats.lomax "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. [`scipy.stats.genpareto`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genpareto.html#scipy.stats.genpareto "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. [`random.Generator.pareto`](numpy.random.generator.pareto#numpy.random.Generator.pareto "numpy.random.Generator.pareto") which should be used for new code. #### Notes The probability density for the Pareto distribution is \\[p(x) = \frac{am^a}{x^{a+1}}\\] where \\(a\\) is the shape and \\(m\\) the scale. 
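The relationship to the classical Pareto described above (add 1, multiply by the scale m) can be sketched numerically; this is an illustrative aside, not part of the original docs:

```python
import numpy as np

np.random.seed(0)  # arbitrary seed, for reproducibility
a, m = 3.0, 2.0
# shifting Lomax draws by 1 and scaling by m gives classical Pareto draws
s = (np.random.pareto(a, 100000) + 1) * m

# classical Pareto support is [m, inf); its mean is a*m/(a-1) = 3.0 for a > 1
print(s.min() >= m, round(s.mean(), 1))
```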
The Pareto distribution, named after the Italian economist Vilfredo Pareto, is a power law probability distribution useful in many real world problems. Outside the field of economics it is generally referred to as the Bradford distribution. Pareto developed the distribution to describe the distribution of wealth in an economy. It has also found use in insurance, web page access statistics, oil field sizes, and many other problems, including the download frequency for projects in Sourceforge [1]. It is one of the so-called “fat-tailed” distributions. #### References [1] Francis Hunt and Paul Johnson, On the Pareto Distribution of Sourceforge projects. [2] Pareto, V. (1896). Course of Political Economy. Lausanne. [3] Reiss, R.D., Thomas, M. (2001), Statistical Analysis of Extreme Values, Birkhauser Verlag, Basel, pp 23-30. [4] Wikipedia, “Pareto distribution”, #### Examples Draw samples from the distribution: >>> a, m = 3., 2. # shape and mode >>> s = (np.random.pareto(a, 1000) + 1) * m Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, _ = plt.hist(s, 100, density=True) >>> fit = a*m**a / bins**(a+1) >>> plt.plot(bins, max(count)*fit/max(fit), linewidth=2, color='r') >>> plt.show() # numpy.random.permutation random.permutation(_x_) Randomly permute a sequence, or return a permuted range. If `x` is a multi-dimensional array, it is only shuffled along its first index. Note New code should use the [`permutation`](numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **x** int or array_like If `x` is an integer, randomly permute `np.arange(x)`. If `x` is an array, make a copy and shuffle the elements randomly.
Returns: **out** ndarray Permuted sequence or array range. See also [`random.Generator.permutation`](numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation") which should be used for new code. #### Examples >>> np.random.permutation(10) array([1, 7, 4, 3, 0, 9, 2, 5, 8, 6]) # random >>> np.random.permutation([1, 4, 9, 12, 15]) array([15, 1, 9, 4, 12]) # random >>> arr = np.arange(9).reshape((3, 3)) >>> np.random.permutation(arr) array([[6, 7, 8], # random [0, 1, 2], [3, 4, 5]]) # numpy.random.poisson random.poisson(_lam =1.0_, _size =None_) Draw samples from a Poisson distribution. The Poisson distribution is the limit of the binomial distribution for large N. Note New code should use the [`poisson`](numpy.random.generator.poisson#numpy.random.Generator.poisson "numpy.random.Generator.poisson") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **lam** float or array_like of floats Expected number of events occurring in a fixed-time interval, must be >= 0. A sequence must be broadcastable over the requested size. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `lam` is a scalar. Otherwise, `np.array(lam).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Poisson distribution. See also [`random.Generator.poisson`](numpy.random.generator.poisson#numpy.random.Generator.poisson "numpy.random.Generator.poisson") which should be used for new code. 
#### Notes The probability mass function (PMF) of the Poisson distribution is \\[f(k; \lambda)=\frac{\lambda^k e^{-\lambda}}{k!}\\] For events with an expected separation \\(\lambda\\) the Poisson distribution \\(f(k; \lambda)\\) describes the probability of \\(k\\) events occurring within the observed interval \\(\lambda\\). Because the output is limited to the range of the C int64 type, a ValueError is raised when `lam` is within 10 sigma of the maximum representable value. #### References [1] Weisstein, Eric W. “Poisson Distribution.” From MathWorld–A Wolfram Web Resource. [2] Wikipedia, “Poisson distribution”, #### Examples Draw samples from the distribution: >>> import numpy as np >>> s = np.random.poisson(5, 10000) Display histogram of the sample: >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 14, density=True) >>> plt.show() Draw 100 values each for lambda 100 and 500: >>> s = np.random.poisson(lam=(100., 500.), size=(100, 2)) # numpy.random.power random.power(_a_ , _size =None_) Draws samples in [0, 1] from a power distribution with positive exponent a - 1. Also known as the power function distribution. Note New code should use the [`power`](numpy.random.generator.power#numpy.random.Generator.power "numpy.random.Generator.power") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **a** float or array_like of floats Parameter of the distribution. Must be non-negative. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized power distribution. Raises: ValueError If a <= 0.
See also [`random.Generator.power`](numpy.random.generator.power#numpy.random.Generator.power "numpy.random.Generator.power") which should be used for new code. #### Notes The probability density function is \\[P(x; a) = ax^{a-1}, 0 \le x \le 1, a>0.\\] The power function distribution is just the inverse of the Pareto distribution. It may also be seen as a special case of the Beta distribution. It is used, for example, in modeling the over-reporting of insurance claims. #### References [1] Christian Kleiber, Samuel Kotz, “Statistical size distributions in economics and actuarial sciences”, Wiley, 2003. [2] Heckert, N. A. and Filliben, James J. “NIST Handbook 148: Dataplot Reference Manual, Volume 2: Let Subcommands and Library Functions”, National Institute of Standards and Technology Handbook Series, June 2003. #### Examples Draw samples from the distribution: >>> a = 5. # shape >>> samples = 1000 >>> s = np.random.power(a, samples) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, bins=30) >>> x = np.linspace(0, 1, 100) >>> y = a*x**(a-1.) >>> normed_y = samples*np.diff(bins)[0]*y >>> plt.plot(x, normed_y) >>> plt.show() Compare the power function distribution to the inverse of the Pareto. >>> from scipy import stats >>> rvs = np.random.power(5, 1000000) >>> rvsp = np.random.pareto(5, 1000000) >>> xx = np.linspace(0,1,100) >>> powpdf = stats.powerlaw.pdf(xx,5) >>> plt.figure() >>> plt.hist(rvs, bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('np.random.power(5)') >>> plt.figure() >>> plt.hist(1./(1.+rvsp), bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('inverse of 1 + np.random.pareto(5)') >>> plt.figure() >>> plt.hist(1./(1.+rvsp), bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('inverse of stats.pareto(5)') # numpy.random.rand random.rand(_d0_ , _d1_ , _..._ , _dn_) Random values in a given shape. 
Note This is a convenience function for users porting code from Matlab, and wraps [`random_sample`](numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample"). That function takes a tuple to specify the size of the output, which is consistent with other NumPy functions like [`numpy.zeros`](../../generated/numpy.zeros#numpy.zeros "numpy.zeros") and [`numpy.ones`](../../generated/numpy.ones#numpy.ones "numpy.ones"). Create an array of the given shape and populate it with random samples from a uniform distribution over `[0, 1)`. Parameters: **d0, d1, …, dn** int, optional The dimensions of the returned array, must be non-negative. If no argument is given a single Python float is returned. Returns: **out** ndarray, shape `(d0, d1, ..., dn)` Random values. See also [`random`](../index#module-numpy.random "numpy.random") #### Examples >>> np.random.rand(3,2) array([[ 0.14022471, 0.96360618], #random [ 0.37601032, 0.25528411], #random [ 0.49313049, 0.94909878]]) #random # numpy.random.randint random.randint(_low_ , _high =None_, _size =None_, _dtype =int_) Return random integers from `low` (inclusive) to `high` (exclusive). Return random integers from the “discrete uniform” distribution of the specified dtype in the “half-open” interval [`low`, `high`). If `high` is None (the default), then results are from [0, `low`). Note New code should use the [`integers`](numpy.random.generator.integers#numpy.random.Generator.integers "numpy.random.Generator.integers") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **low** int or array-like of ints Lowest (signed) integers to be drawn from the distribution (unless `high=None`, in which case this parameter is one above the _highest_ such integer). 
**high** int or array-like of ints, optional If provided, one above the largest (signed) integer to be drawn from the distribution (see above for behavior if `high=None`). If array-like, must contain integer values. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. **dtype** dtype, optional Desired dtype of the result. Byteorder must be native. The default value is long. Warning This function defaults to the C-long dtype, which is 32bit on windows and otherwise 64bit on 64bit platforms (and 32bit on 32bit ones). Since NumPy 2.0, NumPy’s default integer is 32bit on 32bit platforms and 64bit on 64bit platforms, which corresponds to `np.intp`; as a result, `dtype=int` here is not the same as in most NumPy functions. Returns: **out** int or ndarray of ints [`size`](../../generated/numpy.size#numpy.size "numpy.size")-shaped array of random integers from the appropriate distribution, or a single such random int if [`size`](../../generated/numpy.size#numpy.size "numpy.size") not provided. See also [`random_integers`](numpy.random.random_integers#numpy.random.random_integers "numpy.random.random_integers") similar to `randint`, only for the closed interval [`low`, `high`], and 1 is the lowest value if `high` is omitted. [`random.Generator.integers`](numpy.random.generator.integers#numpy.random.Generator.integers "numpy.random.Generator.integers") which should be used for new code.
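Because the interval is half-open, `high` itself is never drawn; a small seeded sketch (not part of the original docs) confirms this:

```python
import numpy as np

np.random.seed(0)  # arbitrary seed, for reproducibility
vals = np.random.randint(0, 5, size=10000)

# the interval [0, 5) is half-open: 0 can appear, 5 never does
print(vals.min(), vals.max())  # 0 4
```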
#### Examples >>> np.random.randint(2, size=10) array([1, 0, 0, 0, 1, 1, 0, 0, 1, 0]) # random >>> np.random.randint(1, size=10) array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) Generate a 2 x 4 array of ints between 0 and 4, inclusive: >>> np.random.randint(5, size=(2, 4)) array([[4, 0, 2, 1], # random [3, 2, 2, 0]]) Generate a 1 x 3 array with 3 different upper bounds >>> np.random.randint(1, [3, 5, 10]) array([2, 2, 9]) # random Generate a 1 by 3 array with 3 different lower bounds >>> np.random.randint([1, 5, 7], 10) array([9, 8, 7]) # random Generate a 2 by 4 array using broadcasting with dtype of uint8 >>> np.random.randint([1, 3, 5, 7], [[10], [20]], dtype=np.uint8) array([[ 8, 6, 9, 7], # random [ 1, 16, 9, 12]], dtype=uint8) # numpy.random.randn random.randn(_d0_ , _d1_ , _..._ , _dn_) Return a sample (or samples) from the “standard normal” distribution. Note This is a convenience function for users porting code from Matlab, and wraps [`standard_normal`](numpy.random.standard_normal#numpy.random.standard_normal "numpy.random.standard_normal"). That function takes a tuple to specify the size of the output, which is consistent with other NumPy functions like [`numpy.zeros`](../../generated/numpy.zeros#numpy.zeros "numpy.zeros") and [`numpy.ones`](../../generated/numpy.ones#numpy.ones "numpy.ones"). Note New code should use the [`standard_normal`](numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). If positive int_like arguments are provided, `randn` generates an array of shape `(d0, d1, ..., dn)`, filled with random floats sampled from a univariate “normal” (Gaussian) distribution of mean 0 and variance 1. A single float randomly sampled from the distribution is returned if no argument is provided. 
Parameters: **d0, d1, …, dn** int, optional The dimensions of the returned array, must be non-negative. If no argument is given a single Python float is returned. Returns: **Z** ndarray or float A `(d0, d1, ..., dn)`-shaped array of floating-point samples from the standard normal distribution, or a single such float if no parameters were supplied. See also [`standard_normal`](numpy.random.standard_normal#numpy.random.standard_normal "numpy.random.standard_normal") Similar, but takes a tuple as its argument. [`normal`](numpy.random.normal#numpy.random.normal "numpy.random.normal") Also accepts mu and sigma arguments. [`random.Generator.standard_normal`](numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal") which should be used for new code. #### Notes For random samples from the normal distribution with mean `mu` and standard deviation `sigma`, use: sigma * np.random.randn(...) + mu #### Examples >>> np.random.randn() 2.1923875335537315 # random Two-by-four array of samples from the normal distribution with mean 3 and standard deviation 2.5: >>> 3 + 2.5 * np.random.randn(2, 4) array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random # numpy.random.random random.random(_size =None_) Return random floats in the half-open interval [0.0, 1.0). Alias for [`random_sample`](numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample") to ease forward-porting to the new random API. # numpy.random.random_integers random.random_integers(_low_ , _high =None_, _size =None_) Random integers of type [`numpy.int_`](../../arrays.scalars#numpy.int_ "numpy.int_") between `low` and `high`, inclusive. Return random integers of type [`numpy.int_`](../../arrays.scalars#numpy.int_ "numpy.int_") from the “discrete uniform” distribution in the closed interval [`low`, `high`]. 
If `high` is None (the default), then results are from [1, `low`]. The [`numpy.int_`](../../arrays.scalars#numpy.int_ "numpy.int_") type translates to the C long integer type and its precision is platform dependent. This function has been deprecated. Use randint instead. Deprecated since version 1.11.0. Parameters: **low** int Lowest (signed) integer to be drawn from the distribution (unless `high=None`, in which case this parameter is the _highest_ such integer). **high** int, optional If provided, the largest (signed) integer to be drawn from the distribution (see above for behavior if `high=None`). **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns: **out** int or ndarray of ints [`size`](../../generated/numpy.size#numpy.size "numpy.size")-shaped array of random integers from the appropriate distribution, or a single such random int if [`size`](../../generated/numpy.size#numpy.size "numpy.size") not provided. See also [`randint`](numpy.random.randint#numpy.random.randint "numpy.random.randint") Similar to `random_integers`, only for the half-open interval [`low`, `high`), and 0 is the lowest value if `high` is omitted. #### Notes To sample from N evenly spaced floating-point numbers between a and b, use: a + (b - a) * (np.random.random_integers(N) - 1) / (N - 1.) #### Examples >>> np.random.random_integers(5) 4 # random >>> type(np.random.random_integers(5)) <class 'numpy.int64'> >>> np.random.random_integers(5, size=(3,2)) array([[5, 4], # random [3, 3], [4, 5]]) Choose five random numbers from the set of five evenly-spaced numbers between 0 and 2.5, inclusive (_i.e._ , from the set \\({0, 5/8, 10/8, 15/8, 20/8}\\)): >>> 2.5 * (np.random.random_integers(5, size=(5,)) - 1) / 4.
array([ 0.625, 1.25 , 0.625, 0.625, 2.5 ]) # random Roll two six-sided dice 1000 times and sum the results: >>> d1 = np.random.random_integers(1, 6, 1000) >>> d2 = np.random.random_integers(1, 6, 1000) >>> dsums = d1 + d2 Display results as a histogram: >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(dsums, 11, density=True) >>> plt.show() # numpy.random.random_sample random.random_sample(_size =None_) Return random floats in the half-open interval [0.0, 1.0). Results are from the “continuous uniform” distribution over the stated interval. To sample \\(Unif[a, b), b > a\\), multiply the output of `random_sample` by `(b-a)` and add `a`: (b - a) * random_sample() + a Note New code should use the [`random`](numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns: **out** float or ndarray of floats Array of random floats of shape [`size`](../../generated/numpy.size#numpy.size "numpy.size") (unless `size=None`, in which case a single float is returned). See also [`random.Generator.random`](numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random") which should be used for new code.
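The `(b - a) * random_sample() + a` recipe above can be checked directly. A minimal sketch (the interval [-5, 0) and the sample count are arbitrary choices):

```python
import numpy as np

np.random.seed(0)
a, b = -5.0, 0.0
samples = (b - a) * np.random.random_sample(1000) + a

# Every sample lies in the half-open interval [a, b).
print(samples.min() >= a, samples.max() < b)  # True True
```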
#### Examples >>> np.random.random_sample() 0.47108547995356098 # random >>> type(np.random.random_sample()) <class 'float'> >>> np.random.random_sample((5,)) array([ 0.30220482, 0.86820401, 0.1654503 , 0.11659149, 0.54323428]) # random Three-by-two array of random numbers from [-5, 0): >>> 5 * np.random.random_sample((3, 2)) - 5 array([[-3.99149989, -0.52338984], # random [-2.99091858, -0.79479508], [-1.23204345, -1.75224494]]) # numpy.random.RandomState.beta method random.RandomState.beta(_a_ , _b_ , _size =None_) Draw samples from a Beta distribution. The Beta distribution is a special case of the Dirichlet distribution, and is related to the Gamma distribution. It has the probability distribution function \\[f(x; a,b) = \frac{1}{B(\alpha, \beta)} x^{\alpha - 1} (1 - x)^{\beta - 1},\\] where the normalization, B, is the beta function, \\[B(\alpha, \beta) = \int_0^1 t^{\alpha - 1} (1 - t)^{\beta - 1} dt.\\] It is often seen in Bayesian inference and order statistics. Note New code should use the [`beta`](numpy.random.generator.beta#numpy.random.Generator.beta "numpy.random.Generator.beta") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **a** float or array_like of floats Alpha, positive (>0). **b** float or array_like of floats Beta, positive (>0). **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` and `b` are both scalars. Otherwise, `np.broadcast(a, b).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized beta distribution. See also [`random.Generator.beta`](numpy.random.generator.beta#numpy.random.Generator.beta "numpy.random.Generator.beta") which should be used for new code.
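As a quick numerical check of the distribution described above (a hedged sketch; the parameter values, seed, and sample count are arbitrary), the sample mean of Beta(a, b) draws should approach a/(a+b):

```python
import numpy as np

a, b = 2.0, 5.0
rs = np.random.RandomState(42)
samples = rs.beta(a, b, size=100_000)

# The Beta(a, b) distribution has mean a / (a + b) = 2/7.
print(abs(samples.mean() - a / (a + b)) < 0.01)  # True
```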
# numpy.random.RandomState.binomial method random.RandomState.binomial(_n_ , _p_ , _size =None_) Draw samples from a binomial distribution. Samples are drawn from a binomial distribution with specified parameters, n trials and p probability of success where n is an integer >= 0 and p is in the interval [0,1]. (n may be input as a float, but it is truncated to an integer in use) Note New code should use the [`binomial`](numpy.random.generator.binomial#numpy.random.Generator.binomial "numpy.random.Generator.binomial") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **n** int or array_like of ints Parameter of the distribution, >= 0. Floats are also accepted, but they will be truncated to integers. **p** float or array_like of floats Parameter of the distribution, >= 0 and <= 1. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `n` and `p` are both scalars. Otherwise, `np.broadcast(n, p).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized binomial distribution, where each sample is equal to the number of successes over the n trials. See also [`scipy.stats.binom`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.binom.html#scipy.stats.binom "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. [`random.Generator.binomial`](numpy.random.generator.binomial#numpy.random.Generator.binomial "numpy.random.Generator.binomial") which should be used for new code. #### Notes The probability mass function (PMF) for the binomial distribution is \\[P(N) = \binom{n}{N}p^N(1-p)^{n-N},\\] where \\(n\\) is the number of trials, \\(p\\) is the probability of success, and \\(N\\) is the number of successes.
When estimating the standard error of a proportion in a population by using a random sample, the normal distribution works well unless the product p*n <= 5, where p = population proportion estimate, and n = number of samples, in which case the binomial distribution is used instead. For example, a sample of 15 people shows 4 who are left-handed, and 11 who are right-handed. Then p = 4/15 = 27%. 0.27*15 = 4, so the binomial distribution should be used in this case. #### References [1] Dalgaard, Peter, “Introductory Statistics with R”, Springer-Verlag, 2002. [2] Glantz, Stanton A. “Primer of Biostatistics.”, McGraw-Hill, Fifth Edition, 2002. [3] Lentner, Marvin, “Elementary Applied Statistics”, Bogden and Quigley, 1972. [4] Weisstein, Eric W. “Binomial Distribution.” From MathWorld–A Wolfram Web Resource. [5] Wikipedia, “Binomial distribution”, #### Examples Draw samples from the distribution: >>> n, p = 10, .5 # number of trials, probability of each trial >>> s = np.random.binomial(n, p, 1000) # result of flipping a coin 10 times, tested 1000 times. A real world example. A company drills 9 wild-cat oil exploration wells, each with an estimated probability of success of 0.1. All nine wells fail. What is the probability of that happening? Let’s do 20,000 trials of the model, and count the number that generate zero positive results. >>> sum(np.random.binomial(9, 0.1, 20000) == 0)/20000. # answer = 0.38885, or 38%. # numpy.random.RandomState.bytes method random.RandomState.bytes(_length_) Return random bytes. Note New code should use the [`bytes`](numpy.random.generator.bytes#numpy.random.Generator.bytes "numpy.random.Generator.bytes") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **length** int Number of random bytes. Returns: **out** bytes String of length `length`.
See also [`random.Generator.bytes`](numpy.random.generator.bytes#numpy.random.Generator.bytes "numpy.random.Generator.bytes") which should be used for new code. #### Examples >>> np.random.bytes(10) b' eh\x85\x022SZ\xbf\xa4' #random # numpy.random.RandomState.chisquare method random.RandomState.chisquare(_df_ , _size =None_) Draw samples from a chi-square distribution. When `df` independent random variables, each with standard normal distributions (mean 0, variance 1), are squared and summed, the resulting distribution is chi-square (see Notes). This distribution is often used in hypothesis testing. Note New code should use the [`chisquare`](numpy.random.generator.chisquare#numpy.random.Generator.chisquare "numpy.random.Generator.chisquare") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **df** float or array_like of floats Number of degrees of freedom, must be > 0. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `df` is a scalar. Otherwise, `np.array(df).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized chi-square distribution. Raises: ValueError When `df` <= 0 or when an inappropriate [`size`](../../generated/numpy.size#numpy.size "numpy.size") (e.g. `size=-1`) is given. See also [`random.Generator.chisquare`](numpy.random.generator.chisquare#numpy.random.Generator.chisquare "numpy.random.Generator.chisquare") which should be used for new code. 
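The construction described above (squaring and summing `df` independent standard normals) can be checked against a direct draw. A hedged sketch, with arbitrary seed, degrees of freedom, and sample count:

```python
import numpy as np

df = 4
n = 50_000
rs = np.random.RandomState(0)

# Direct draw from the chi-square distribution.
direct = rs.chisquare(df, size=n)

# Equivalent construction: sum of `df` squared standard normals.
built = (rs.standard_normal((n, df)) ** 2).sum(axis=1)

# Both sample means should be close to df.
print(direct.mean(), built.mean())
```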
#### Notes The variable obtained by summing the squares of `df` independent, standard normally distributed random variables: \\[Q = \sum_{i=1}^{\mathtt{df}} X^2_i\\] is chi-square distributed, denoted \\[Q \sim \chi^2_k.\\] The probability density function of the chi-squared distribution is \\[p(x) = \frac{(1/2)^{k/2}}{\Gamma(k/2)} x^{k/2 - 1} e^{-x/2},\\] where \\(\Gamma\\) is the gamma function, \\[\Gamma(x) = \int_0^{\infty} t^{x - 1} e^{-t} dt.\\] #### References [1] NIST “Engineering Statistics Handbook” #### Examples >>> np.random.chisquare(2,4) array([ 1.89920014, 9.00867716, 3.13710533, 5.62318272]) # random # numpy.random.RandomState.choice method random.RandomState.choice(_a_ , _size =None_, _replace =True_, _p =None_) Generates a random sample from a given 1-D array. Note New code should use the [`choice`](numpy.random.generator.choice#numpy.random.Generator.choice "numpy.random.Generator.choice") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Warning This function uses the C-long dtype, which is 32bit on Windows and otherwise 64bit on 64bit platforms (and 32bit on 32bit ones). Since NumPy 2.0, NumPy’s default integer is 32bit on 32bit platforms and 64bit on 64bit platforms. Parameters: **a** 1-D array-like or int If an ndarray, a random sample is generated from its elements. If an int, the random sample is generated as if it were `np.arange(a)` **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. **replace** boolean, optional Whether the sample is with or without replacement. Default is True, meaning that a value of `a` can be selected multiple times. **p** 1-D array-like, optional The probabilities associated with each entry in a.
If not given, the sample assumes a uniform distribution over all entries in `a`. Returns: **samples** single item or ndarray The generated random samples Raises: ValueError If a is an int and less than zero, if a or p are not 1-dimensional, if a is an array-like of size 0, if p is not a vector of probabilities, if a and p have different lengths, or if replace=False and the sample size is greater than the population size See also [`randint`](numpy.random.randomstate.randint#numpy.random.RandomState.randint "numpy.random.RandomState.randint"), [`shuffle`](numpy.random.randomstate.shuffle#numpy.random.RandomState.shuffle "numpy.random.RandomState.shuffle"), [`permutation`](numpy.random.randomstate.permutation#numpy.random.RandomState.permutation "numpy.random.RandomState.permutation") [`random.Generator.choice`](numpy.random.generator.choice#numpy.random.Generator.choice "numpy.random.Generator.choice") which should be used in new code #### Notes Setting user-specified probabilities through `p` uses a more general but less efficient sampler than the default. The general sampler produces a different sample than the optimized sampler even if each element of `p` is 1 / len(a). Sampling random rows from a 2-D array is not possible with this function, but is possible with `Generator.choice` through its `axis` keyword. 
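The `axis` keyword mentioned above is available on `Generator.choice` (not on this legacy function). A minimal sketch of sampling rows from a 2-D array (the seed and array contents are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
table = np.arange(12).reshape(4, 3)  # four rows, three columns

# Sample two distinct rows from the 2-D array.
rows = rng.choice(table, size=2, replace=False, axis=0)
print(rows.shape)  # (2, 3)
```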
#### Examples Generate a uniform random sample from np.arange(5) of size 3: >>> np.random.choice(5, 3) array([0, 3, 4]) # random >>> #This is equivalent to np.random.randint(0,5,3) Generate a non-uniform random sample from np.arange(5) of size 3: >>> np.random.choice(5, 3, p=[0.1, 0, 0.3, 0.6, 0]) array([3, 3, 0]) # random Generate a uniform random sample from np.arange(5) of size 3 without replacement: >>> np.random.choice(5, 3, replace=False) array([3,1,0]) # random >>> #This is equivalent to np.random.permutation(np.arange(5))[:3] Generate a non-uniform random sample from np.arange(5) of size 3 without replacement: >>> np.random.choice(5, 3, replace=False, p=[0.1, 0, 0.3, 0.6, 0]) array([2, 3, 0]) # random Any of the above can be repeated with an arbitrary array-like instead of just integers. For instance: >>> aa_milne_arr = ['pooh', 'rabbit', 'piglet', 'Christopher'] >>> np.random.choice(aa_milne_arr, 5, p=[0.5, 0.1, 0.1, 0.3]) array(['pooh', 'pooh', 'pooh', 'Christopher', 'piglet'], # random dtype='<U11') # numpy.random.RandomState.dirichlet method random.RandomState.dirichlet(_alpha_ , _size =None_) Draw samples from the Dirichlet distribution. Draw [`size`](../../generated/numpy.size#numpy.size "numpy.size") samples of dimension k from a Dirichlet distribution. A Dirichlet-distributed random variable can be seen as a multivariate generalization of a Beta distribution. The Dirichlet distribution is a conjugate prior of a multinomial distribution in Bayesian inference. Note New code should use the [`dirichlet`](numpy.random.generator.dirichlet#numpy.random.Generator.dirichlet "numpy.random.Generator.dirichlet") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **alpha** sequence of floats, length k Parameter of the distribution (length `k` for sample of length `k`). **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n)`, then `m * n * k` samples are drawn. Default is None, in which case a vector of length `k` is returned. Returns: **samples** ndarray, The drawn samples, of shape `(size, k)`. Raises: ValueError If any value in `alpha` is less than or equal to zero. See also [`random.Generator.dirichlet`](numpy.random.generator.dirichlet#numpy.random.Generator.dirichlet "numpy.random.Generator.dirichlet") which should be used for new code. #### Notes The Dirichlet distribution is a distribution over vectors \\(x\\) that fulfil the conditions \\(x_i>0\\) and \\(\sum_{i=1}^k x_i = 1\\). The probability density function \\(p\\) of a Dirichlet-distributed random vector \\(X\\) is proportional to \\[p(x) \propto \prod_{i=1}^{k}{x^{\alpha_i-1}_i},\\] where \\(\alpha\\) is a vector containing the positive concentration parameters. The method uses the following property for computation: let \\(Y\\) be a random vector which has components that follow a standard gamma distribution, then \\(X = \frac{1}{\sum_{i=1}^k{Y_i}} Y\\) is Dirichlet-distributed. #### References [1] David MacKay, “Information Theory, Inference and Learning Algorithms,” chapter 23, [2] Wikipedia, “Dirichlet distribution”, #### Examples Taking an example cited in Wikipedia, this distribution can be used if one wanted to cut strings (each of initial length 1.0) into K pieces with different lengths, where each piece had, on average, a designated average length, but allowing some variation in the relative sizes of the pieces.
>>> s = np.random.dirichlet((10, 5, 3), 20).transpose() >>> import matplotlib.pyplot as plt >>> plt.barh(range(20), s[0]) >>> plt.barh(range(20), s[1], left=s[0], color='g') >>> plt.barh(range(20), s[2], left=s[0]+s[1], color='r') >>> plt.title("Lengths of Strings") # numpy.random.RandomState.exponential method random.RandomState.exponential(_scale =1.0_, _size =None_) Draw samples from an exponential distribution. Its probability density function is \\[f(x; \frac{1}{\beta}) = \frac{1}{\beta} \exp(-\frac{x}{\beta}),\\] for `x > 0` and 0 elsewhere. \\(\beta\\) is the scale parameter, which is the inverse of the rate parameter \\(\lambda = 1/\beta\\). The rate parameter is an alternative, widely used parameterization of the exponential distribution [3]. The exponential distribution is a continuous analogue of the geometric distribution. It describes many common situations, such as the size of raindrops measured over many rainstorms [1], or the time between page requests to Wikipedia [2]. Note New code should use the [`exponential`](numpy.random.generator.exponential#numpy.random.Generator.exponential "numpy.random.Generator.exponential") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **scale** float or array_like of floats The scale parameter, \\(\beta = 1/\lambda\\). Must be non-negative. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `scale` is a scalar. Otherwise, `np.array(scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized exponential distribution. See also [`random.Generator.exponential`](numpy.random.generator.exponential#numpy.random.Generator.exponential "numpy.random.Generator.exponential") which should be used for new code. 
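As a quick check of the parameterization above (a hedged sketch; the scale, seed, and sample count are arbitrary), the sample mean should approach the scale parameter \\(\beta\\):

```python
import numpy as np

scale = 4.0  # beta = 1 / lambda
rs = np.random.RandomState(1)
waits = rs.exponential(scale=scale, size=200_000)

# The exponential distribution has mean equal to its scale parameter.
print(waits.mean())  # close to 4.0
```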
#### References [1] Peyton Z. Peebles Jr., “Probability, Random Variables and Random Signal Principles”, 4th ed, 2001, p. 57. [2] Wikipedia, “Poisson process”, [3] Wikipedia, “Exponential distribution”, #### Examples A real world example: Assume a company has 10000 customer support agents and the average time between customer calls is 4 minutes. >>> n = 10000 >>> time_between_calls = np.random.default_rng().exponential(scale=4, size=n) What is the probability that a customer will call in the next 4 to 5 minutes? >>> x = ((time_between_calls < 5).sum())/n >>> y = ((time_between_calls < 4).sum())/n >>> x-y 0.08 # may vary # numpy.random.RandomState.f method random.RandomState.f(_dfnum_ , _dfden_ , _size =None_) Draw samples from an F distribution. Samples are drawn from an F distribution with specified parameters, `dfnum` (degrees of freedom in numerator) and `dfden` (degrees of freedom in denominator), where both parameters must be greater than zero. The random variate of the F distribution (also known as the Fisher distribution) is a continuous probability distribution that arises in ANOVA tests, and is the ratio of two chi-square variates. Note New code should use the [`f`](numpy.random.generator.f#numpy.random.Generator.f "numpy.random.Generator.f") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **dfnum** float or array_like of floats Degrees of freedom in numerator, must be > 0. **dfden** float or array_like of float Degrees of freedom in denominator, must be > 0. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `dfnum` and `dfden` are both scalars. Otherwise, `np.broadcast(dfnum, dfden).size` samples are drawn. 
Returns: **out** ndarray or scalar Drawn samples from the parameterized Fisher distribution. See also [`scipy.stats.f`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.f.html#scipy.stats.f "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. [`random.Generator.f`](numpy.random.generator.f#numpy.random.Generator.f "numpy.random.Generator.f") which should be used for new code. #### Notes The F statistic is used to compare in-group variances to between-group variances. Calculating the distribution depends on the sampling, and so it is a function of the respective degrees of freedom in the problem. The variable `dfnum` is the number of samples minus one, the between-groups degrees of freedom, while `dfden` is the within-groups degrees of freedom, the sum of the number of samples in each group minus the number of groups. #### References [1] Glantz, Stanton A. “Primer of Biostatistics.”, McGraw-Hill, Fifth Edition, 2002. [2] Wikipedia, “F-distribution”, #### Examples An example from Glantz [1], pp. 47-40: Two groups, children of diabetics (25 people) and children from people without diabetes (25 controls). Fasting blood glucose was measured, case group had a mean value of 86.1, controls had a mean value of 82.2. Standard deviations were 2.09 and 2.49 respectively. Are these data consistent with the null hypothesis that the parents’ diabetic status does not affect their children’s blood glucose levels? Calculating the F statistic from the data gives a value of 36.01. Draw samples from the distribution: >>> dfnum = 1. # between group degrees of freedom >>> dfden = 48. # within groups degrees of freedom >>> s = np.random.f(dfnum, dfden, 1000) The lower bound for the top 1% of the samples is: >>> np.sort(s)[-10] 7.61988120985 # random So there is about a 1% chance that the F statistic will exceed 7.62, the measured value is 36, so the null hypothesis is rejected at the 1% level.
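The rejection argument above can also be approached by simulation. A hedged sketch (the seed and sample count are arbitrary): draw many F statistics under the null hypothesis and see how often they reach the observed value.

```python
import numpy as np

rs = np.random.RandomState(123)
s = rs.f(dfnum=1.0, dfden=48.0, size=100_000)

# Fraction of simulated F statistics at least as extreme as the
# observed value 36.01; a vanishingly small fraction is consistent
# with rejecting the null hypothesis well below the 1% level.
p_value = (s > 36.01).mean()
print(p_value < 0.01)  # True
```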
# numpy.random.RandomState.gamma method random.RandomState.gamma(_shape_ , _scale =1.0_, _size =None_) Draw samples from a Gamma distribution. Samples are drawn from a Gamma distribution with specified parameters, [`shape`](../../generated/numpy.shape#numpy.shape "numpy.shape") (sometimes designated “k”) and `scale` (sometimes designated “theta”), where both parameters are > 0. Note New code should use the [`gamma`](numpy.random.generator.gamma#numpy.random.Generator.gamma "numpy.random.Generator.gamma") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **shape** float or array_like of floats The shape of the gamma distribution. Must be non-negative. **scale** float or array_like of floats, optional The scale of the gamma distribution. Must be non-negative. Default is equal to 1. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `shape` and `scale` are both scalars. Otherwise, `np.broadcast(shape, scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized gamma distribution. See also [`scipy.stats.gamma`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. [`random.Generator.gamma`](numpy.random.generator.gamma#numpy.random.Generator.gamma "numpy.random.Generator.gamma") which should be used for new code. #### Notes The probability density for the Gamma distribution is \\[p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},\\] where \\(k\\) is the shape and \\(\theta\\) the scale, and \\(\Gamma\\) is the Gamma function. 
The Gamma distribution is often used to model the times to failure of electronic components, and arises naturally in processes for which the waiting times between Poisson distributed events are relevant. #### References [1] Weisstein, Eric W. “Gamma Distribution.” From MathWorld–A Wolfram Web Resource. [2] Wikipedia, “Gamma distribution”, #### Examples Draw samples from the distribution: >>> shape, scale = 2., 2. # mean=4, std=2*sqrt(2) >>> s = np.random.gamma(shape, scale, 1000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> import scipy.special as sps >>> count, bins, ignored = plt.hist(s, 50, density=True) >>> y = bins**(shape-1)*(np.exp(-bins/scale) / ... (sps.gamma(shape)*scale**shape)) >>> plt.plot(bins, y, linewidth=2, color='r') >>> plt.show() # numpy.random.RandomState.geometric method random.RandomState.geometric(_p_ , _size =None_) Draw samples from the geometric distribution. Bernoulli trials are experiments with one of two outcomes: success or failure (an example of such an experiment is flipping a coin). The geometric distribution models the number of trials that must be run in order to achieve success. It is therefore supported on the positive integers, `k = 1, 2, ...`. The probability mass function of the geometric distribution is \\[f(k) = (1 - p)^{k - 1} p\\] where `p` is the probability of success of an individual trial. Note New code should use the [`geometric`](numpy.random.generator.geometric#numpy.random.Generator.geometric "numpy.random.Generator.geometric") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **p** float or array_like of floats The probability of success of an individual trial. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. 
If size is `None` (default), a single value is returned if `p` is a scalar. Otherwise, `np.array(p).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized geometric distribution. See also [`random.Generator.geometric`](numpy.random.generator.geometric#numpy.random.Generator.geometric "numpy.random.Generator.geometric") which should be used for new code. #### Examples Draw ten thousand values from the geometric distribution, with the probability of an individual success equal to 0.35: >>> z = np.random.geometric(p=0.35, size=10000) How many trials succeeded after a single run? >>> (z == 1).sum() / 10000. 0.34889999999999999 #random # numpy.random.RandomState.get_state method random.RandomState.get_state(_legacy =True_) Return a tuple representing the internal state of the generator. For more details, see [`set_state`](numpy.random.randomstate.set_state#numpy.random.RandomState.set_state "numpy.random.RandomState.set_state"). Parameters: **legacy** bool, optional Flag indicating to return a legacy tuple state when the BitGenerator is MT19937, instead of a dict. Raises ValueError if the underlying bit generator is not an instance of MT19937. Returns: **out**{tuple(str, ndarray of 624 uints, int, int, float), dict} If legacy is True, the returned tuple has the following items: 1. the string ‘MT19937’. 2. a 1-D array of 624 unsigned integer keys. 3. an integer `pos`. 4. an integer `has_gauss`. 5. a float `cached_gaussian`. If `legacy` is False, or the BitGenerator is not MT19937, then state is returned as a dictionary. See also [`set_state`](numpy.random.randomstate.set_state#numpy.random.RandomState.set_state "numpy.random.RandomState.set_state") #### Notes [`set_state`](numpy.random.randomstate.set_state#numpy.random.RandomState.set_state "numpy.random.RandomState.set_state") and `get_state` are not needed to work with any of the random distributions in NumPy. 
If the internal state is manually altered, the user should know exactly what he/she is doing. # numpy.random.RandomState.gumbel method random.RandomState.gumbel(_loc =0.0_, _scale =1.0_, _size =None_) Draw samples from a Gumbel distribution. Draw samples from a Gumbel distribution with specified location and scale. For more information on the Gumbel distribution, see Notes and References below. Note New code should use the [`gumbel`](numpy.random.generator.gumbel#numpy.random.Generator.gumbel "numpy.random.Generator.gumbel") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **loc** float or array_like of floats, optional The location of the mode of the distribution. Default is 0. **scale** float or array_like of floats, optional The scale parameter of the distribution. Default is 1. Must be non- negative. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars. Otherwise, `np.broadcast(loc, scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Gumbel distribution. 
See also [`scipy.stats.gumbel_l`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gumbel_l.html#scipy.stats.gumbel_l "\(in SciPy v1.14.1\)") [`scipy.stats.gumbel_r`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gumbel_r.html#scipy.stats.gumbel_r "\(in SciPy v1.14.1\)") [`scipy.stats.genextreme`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genextreme.html#scipy.stats.genextreme "\(in SciPy v1.14.1\)") [`weibull`](numpy.random.randomstate.weibull#numpy.random.RandomState.weibull "numpy.random.RandomState.weibull") [`random.Generator.gumbel`](numpy.random.generator.gumbel#numpy.random.Generator.gumbel "numpy.random.Generator.gumbel") which should be used for new code. #### Notes The Gumbel (or Smallest Extreme Value (SEV) or the Smallest Extreme Value Type I) distribution is one of a class of Generalized Extreme Value (GEV) distributions used in modeling extreme value problems. The Gumbel is a special case of the Extreme Value Type I distribution for maximums from distributions with “exponential-like” tails. The probability density for the Gumbel distribution is \\[p(x) = \frac{e^{-(x - \mu)/ \beta}}{\beta} e^{ -e^{-(x - \mu)/ \beta}},\\] where \\(\mu\\) is the mode, a location parameter, and \\(\beta\\) is the scale parameter. The Gumbel (named for German mathematician Emil Julius Gumbel) was used very early in the hydrology literature, for modeling the occurrence of flood events. It is also used for modeling maximum wind speed and rainfall rates. It is a “fat-tailed” distribution - the probability of an event in the tail of the distribution is larger than if one used a Gaussian, hence the surprisingly frequent occurrence of 100-year floods. Floods were initially modeled as a Gaussian process, which underestimated the frequency of extreme events. It is one of a class of extreme value distributions, the Generalized Extreme Value (GEV) distributions, which also includes the Weibull and Frechet. 
The function has a mean of \\(\mu + 0.57721\beta\\) and a variance of \\(\frac{\pi^2}{6}\beta^2\\). #### References [1] Gumbel, E. J., “Statistics of Extremes,” New York: Columbia University Press, 1958. [2] Reiss, R.-D. and Thomas, M., “Statistical Analysis of Extreme Values from Insurance, Finance, Hydrology and Other Fields,” Basel: Birkhauser Verlag, 2001. #### Examples Draw samples from the distribution: >>> mu, beta = 0, 0.1 # location and scale >>> s = np.random.gumbel(mu, beta, 1000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 30, density=True) >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta) ... * np.exp( -np.exp( -(bins - mu) /beta) ), ... linewidth=2, color='r') >>> plt.show() Show how an extreme value distribution can arise from a Gaussian process and compare to a Gaussian: >>> means = [] >>> maxima = [] >>> for i in range(0,1000) : ... a = np.random.normal(mu, beta, 1000) ... means.append(a.mean()) ... maxima.append(a.max()) >>> count, bins, ignored = plt.hist(maxima, 30, density=True) >>> beta = np.std(maxima) * np.sqrt(6) / np.pi >>> mu = np.mean(maxima) - 0.57721*beta >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta) ... * np.exp(-np.exp(-(bins - mu)/beta)), ... linewidth=2, color='r') >>> plt.plot(bins, 1/(beta * np.sqrt(2 * np.pi)) ... * np.exp(-(bins - mu)**2 / (2 * beta**2)), ... linewidth=2, color='g') >>> plt.show() # numpy.random.RandomState.hypergeometric method random.RandomState.hypergeometric(_ngood_ , _nbad_ , _nsample_ , _size =None_) Draw samples from a Hypergeometric distribution. Samples are drawn from a hypergeometric distribution with specified parameters, `ngood` (ways to make a good selection), `nbad` (ways to make a bad selection), and `nsample` (number of items sampled, which is less than or equal to the sum `ngood + nbad`). 
Note New code should use the [`hypergeometric`](numpy.random.generator.hypergeometric#numpy.random.Generator.hypergeometric "numpy.random.Generator.hypergeometric") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **ngood** int or array_like of ints Number of ways to make a good selection. Must be nonnegative. **nbad** int or array_like of ints Number of ways to make a bad selection. Must be nonnegative. **nsample** int or array_like of ints Number of items sampled. Must be at least 1 and at most `ngood + nbad`. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `ngood`, `nbad`, and `nsample` are all scalars. Otherwise, `np.broadcast(ngood, nbad, nsample).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized hypergeometric distribution. Each sample is the number of good items within a randomly selected subset of size `nsample` taken from a set of `ngood` good items and `nbad` bad items. See also [`scipy.stats.hypergeom`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.hypergeom.html#scipy.stats.hypergeom "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. [`random.Generator.hypergeometric`](numpy.random.generator.hypergeometric#numpy.random.Generator.hypergeometric "numpy.random.Generator.hypergeometric") which should be used for new code. #### Notes The probability mass function (PMF) for the Hypergeometric distribution is \\[P(x) = \frac{\binom{g}{x}\binom{b}{n-x}}{\binom{g+b}{n}},\\] where \\(0 \le x \le n\\) and \\(n-b \le x \le g\\) for P(x) the probability of `x` good results in the drawn sample, g = `ngood`, b = `nbad`, and n = `nsample`. 
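The PMF above can be cross-checked against sampled frequencies. A minimal sketch (parameter values are illustrative) that evaluates the binomial coefficients with `math.comb` and compares the analytic probability with the empirical frequency of one outcome:

```python
import numpy as np
from math import comb

np.random.seed(0)
ngood, nbad, nsample = 10, 5, 6

def hypergeom_pmf(x, g, b, n):
    # P(x) = C(g, x) * C(b, n - x) / C(g + b, n)
    return comb(g, x) * comb(b, n - x) / comb(g + b, n)

s = np.random.hypergeometric(ngood, nbad, nsample, 100000)

# Empirical frequency of drawing exactly x good items vs. the analytic PMF.
x = 4
empirical = np.mean(s == x)
analytic = hypergeom_pmf(x, ngood, nbad, nsample)
```

Summing the PMF over the support (here x = 1 through 6, since n - b = 1 and min(g, n) = 6) should give 1, and `empirical` should agree with `analytic` up to sampling noise.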
Consider an urn with black and white marbles in it, `ngood` of them are black and `nbad` are white. If you draw `nsample` balls without replacement, then the hypergeometric distribution describes the distribution of black balls in the drawn sample. Note that this distribution is very similar to the binomial distribution, except that in this case, samples are drawn without replacement, whereas in the Binomial case samples are drawn with replacement (or the sample space is infinite). As the sample space becomes large, this distribution approaches the binomial. #### References [1] Lentner, Marvin, “Elementary Applied Statistics”, Bogden and Quigley, 1972. [2] Weisstein, Eric W. “Hypergeometric Distribution.” From MathWorld–A Wolfram Web Resource. [3] Wikipedia, “Hypergeometric distribution”, #### Examples Draw samples from the distribution: >>> ngood, nbad, nsamp = 100, 2, 10 # number of good, number of bad, and number of samples >>> s = np.random.hypergeometric(ngood, nbad, nsamp, 1000) >>> from matplotlib.pyplot import hist >>> hist(s) # note that it is very unlikely to grab both bad items Suppose you have an urn with 15 white and 15 black marbles. If you pull 15 marbles at random, how likely is it that 12 or more of them are one color? >>> s = np.random.hypergeometric(15, 15, 15, 100000) >>> sum(s>=12)/100000. + sum(s<=3)/100000. # answer = 0.003 ... pretty unlikely! # numpy.random.RandomState.laplace method random.RandomState.laplace(_loc =0.0_, _scale =1.0_, _size =None_) Draw samples from the Laplace or double exponential distribution with specified location (or mean) and scale (decay). The Laplace distribution is similar to the Gaussian/normal distribution, but is sharper at the peak and has fatter tails. It represents the difference between two independent, identically distributed exponential random variables. 
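The characterization above, as the difference of two independent exponential variables, is easy to verify empirically. A minimal sketch (illustrative parameters) comparing the two constructions by their first two moments:

```python
import numpy as np

np.random.seed(0)
loc, scale = 0.0, 1.0
n = 200000

# The difference of two independent, identically distributed
# exponential variables follows a Laplace distribution.
diff = np.random.exponential(scale, n) - np.random.exponential(scale, n)
laplace = np.random.laplace(loc, scale, n)

# Both samples should agree in their first two moments:
# mean 0 and variance 2 * scale**2.
```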
Note New code should use the [`laplace`](numpy.random.generator.laplace#numpy.random.Generator.laplace "numpy.random.Generator.laplace") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **loc** float or array_like of floats, optional The position, \\(\mu\\), of the distribution peak. Default is 0. **scale** float or array_like of floats, optional \\(\lambda\\), the exponential decay. Default is 1. Must be non-negative. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars. Otherwise, `np.broadcast(loc, scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Laplace distribution. See also [`random.Generator.laplace`](numpy.random.generator.laplace#numpy.random.Generator.laplace "numpy.random.Generator.laplace") which should be used for new code. #### Notes It has the probability density function \\[f(x; \mu, \lambda) = \frac{1}{2\lambda} \exp\left(-\frac{|x - \mu|}{\lambda}\right).\\] The first law of Laplace, from 1774, states that the frequency of an error can be expressed as an exponential function of the absolute magnitude of the error, which leads to the Laplace distribution. For many problems in economics and health sciences, this distribution seems to model the data better than the standard Gaussian distribution. #### References [1] Abramowitz, M. and Stegun, I. A. (Eds.). “Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing,” New York: Dover, 1972. [2] Kotz, Samuel, et al. “The Laplace Distribution and Generalizations,” Birkhauser, 2001. [3] Weisstein, Eric W. “Laplace Distribution.” From MathWorld–A Wolfram Web Resource. 
[4] Wikipedia, “Laplace distribution”, #### Examples Draw samples from the distribution >>> loc, scale = 0., 1. >>> s = np.random.laplace(loc, scale, 1000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 30, density=True) >>> x = np.arange(-8., 8., .01) >>> pdf = np.exp(-abs(x-loc)/scale)/(2.*scale) >>> plt.plot(x, pdf) Plot Gaussian for comparison: >>> g = (1/(scale * np.sqrt(2 * np.pi)) * ... np.exp(-(x - loc)**2 / (2 * scale**2))) >>> plt.plot(x,g) # numpy.random.RandomState.logistic method random.RandomState.logistic(_loc =0.0_, _scale =1.0_, _size =None_) Draw samples from a logistic distribution. Samples are drawn from a logistic distribution with specified parameters, loc (location or mean, also median), and scale (>0). Note New code should use the [`logistic`](numpy.random.generator.logistic#numpy.random.Generator.logistic "numpy.random.Generator.logistic") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **loc** float or array_like of floats, optional Parameter of the distribution. Default is 0. **scale** float or array_like of floats, optional Parameter of the distribution. Must be non-negative. Default is 1. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars. Otherwise, `np.broadcast(loc, scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized logistic distribution. See also [`scipy.stats.logistic`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.logistic.html#scipy.stats.logistic "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. 
[`random.Generator.logistic`](numpy.random.generator.logistic#numpy.random.Generator.logistic "numpy.random.Generator.logistic") which should be used for new code. #### Notes The probability density for the Logistic distribution is \\[P(x) = \frac{e^{-(x-\mu)/s}}{s(1+e^{-(x-\mu)/s})^2},\\] where \\(\mu\\) = location and \\(s\\) = scale. The Logistic distribution is used in Extreme Value problems where it can act as a mixture of Gumbel distributions, in Epidemiology, and by the World Chess Federation (FIDE) where it is used in the Elo ranking system, assuming the performance of each player is a logistically distributed random variable. #### References [1] Reiss, R.-D. and Thomas M. (2001), “Statistical Analysis of Extreme Values, from Insurance, Finance, Hydrology and Other Fields,” Birkhauser Verlag, Basel, pp 132-133. [2] Weisstein, Eric W. “Logistic Distribution.” From MathWorld–A Wolfram Web Resource. [3] Wikipedia, “Logistic-distribution”, #### Examples Draw samples from the distribution: >>> loc, scale = 10, 1 >>> s = np.random.logistic(loc, scale, 10000) >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, bins=50) # plot against distribution >>> def logist(x, loc, scale): ... return np.exp((loc-x)/scale)/(scale*(1+np.exp((loc-x)/scale))**2) >>> lgst_val = logist(bins, loc, scale) >>> plt.plot(bins, lgst_val * count.max() / lgst_val.max()) >>> plt.show() # numpy.random.RandomState.lognormal method random.RandomState.lognormal(_mean =0.0_, _sigma =1.0_, _size =None_) Draw samples from a log-normal distribution. Draw samples from a log-normal distribution with specified mean, standard deviation, and array shape. Note that the mean and standard deviation are not the values for the distribution itself, but of the underlying normal distribution it is derived from. 
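Because `mean` and `sigma` parameterize the underlying normal distribution, taking the logarithm of the samples should recover them. A minimal sketch (illustrative values):

```python
import numpy as np

np.random.seed(0)
mean, sigma = 3.0, 1.0
s = np.random.lognormal(mean, sigma, 200000)

# The parameters describe the underlying normal distribution, so the
# *logarithm* of the samples should have that mean and standard deviation.
log_s = np.log(s)
```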
Note New code should use the [`lognormal`](numpy.random.generator.lognormal#numpy.random.Generator.lognormal "numpy.random.Generator.lognormal") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **mean** float or array_like of floats, optional Mean value of the underlying normal distribution. Default is 0. **sigma** float or array_like of floats, optional Standard deviation of the underlying normal distribution. Must be non-negative. Default is 1. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `mean` and `sigma` are both scalars. Otherwise, `np.broadcast(mean, sigma).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized log-normal distribution. See also [`scipy.stats.lognorm`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lognorm.html#scipy.stats.lognorm "\(in SciPy v1.14.1\)") probability density function, distribution, cumulative density function, etc. [`random.Generator.lognormal`](numpy.random.generator.lognormal#numpy.random.Generator.lognormal "numpy.random.Generator.lognormal") which should be used for new code. #### Notes A variable `x` has a log-normal distribution if `log(x)` is normally distributed. The probability density function for the log-normal distribution is: \\[p(x) = \frac{1}{\sigma x \sqrt{2\pi}} e^{(-\frac{(ln(x)-\mu)^2}{2\sigma^2})}\\] where \\(\mu\\) is the mean and \\(\sigma\\) is the standard deviation of the normally distributed logarithm of the variable. 
A log-normal distribution results if a random variable is the _product_ of a large number of independent, identically-distributed variables in the same way that a normal distribution results if the variable is the _sum_ of a large number of independent, identically-distributed variables. #### References [1] Limpert, E., Stahel, W. A., and Abbt, M., “Log-normal Distributions across the Sciences: Keys and Clues,” BioScience, Vol. 51, No. 5, May, 2001. [2] Reiss, R.D. and Thomas, M., “Statistical Analysis of Extreme Values,” Basel: Birkhauser Verlag, 2001, pp. 31-32. #### Examples Draw samples from the distribution: >>> mu, sigma = 3., 1. # mean and standard deviation >>> s = np.random.lognormal(mu, sigma, 1000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 100, density=True, align='mid') >>> x = np.linspace(min(bins), max(bins), 10000) >>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2)) ... / (x * sigma * np.sqrt(2 * np.pi))) >>> plt.plot(x, pdf, linewidth=2, color='r') >>> plt.axis('tight') >>> plt.show() Demonstrate that taking the products of random samples from a normal distribution can be fit well by a log-normal probability density function. >>> # Generate a thousand samples: each is the product of 100 random >>> # values, drawn from a normal distribution. >>> b = [] >>> for i in range(1000): ... a = 10. + np.random.standard_normal(100) ... b.append(np.prod(a)) >>> b = np.array(b) / np.min(b) # scale values to be positive >>> count, bins, ignored = plt.hist(b, 100, density=True, align='mid') >>> sigma = np.std(np.log(b)) >>> mu = np.mean(np.log(b)) >>> x = np.linspace(min(bins), max(bins), 10000) >>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2)) ... 
/ (x * sigma * np.sqrt(2 * np.pi))) >>> plt.plot(x, pdf, color='r', linewidth=2) >>> plt.show() # numpy.random.RandomState.logseries method random.RandomState.logseries(_p_ , _size =None_) Draw samples from a logarithmic series distribution. Samples are drawn from a log series distribution with specified shape parameter, 0 <= `p` < 1. Note New code should use the [`logseries`](numpy.random.generator.logseries#numpy.random.Generator.logseries "numpy.random.Generator.logseries") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **p** float or array_like of floats Shape parameter for the distribution. Must be in the range [0, 1). **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `p` is a scalar. Otherwise, `np.array(p).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized logarithmic series distribution. See also [`scipy.stats.logser`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.logser.html#scipy.stats.logser "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. [`random.Generator.logseries`](numpy.random.generator.logseries#numpy.random.Generator.logseries "numpy.random.Generator.logseries") which should be used for new code. #### Notes The probability density for the Log Series distribution is \\[P(k) = \frac{-p^k}{k \ln(1-p)},\\] where p = probability. The log series distribution is frequently used to represent species richness and occurrence, first proposed by Fisher, Corbet, and Williams in 1943 [2]. It may also be used to model the numbers of occupants seen in cars [3]. 
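The PMF above is easy to check numerically: it should sum to one over the support k = 1, 2, 3, ..., and match sampled frequencies. A minimal sketch (the value of `p` is illustrative):

```python
import numpy as np

p = 0.6
k = np.arange(1, 200)
# P(k) = -p**k / (k * log(1 - p)); the support is k = 1, 2, 3, ...
# Truncating the sum at k = 199 loses only a negligible tail mass.
pmf = -p**k / (k * np.log(1 - p))

np.random.seed(0)
s = np.random.logseries(p, 100000)
# Compare the empirical frequency of k = 1 with the analytic value.
empirical_p1 = np.mean(s == 1)
```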
#### References [1] Buzas, Martin A.; Culver, Stephen J., Understanding regional species diversity through the log series distribution of occurrences: BIODIVERSITY RESEARCH Diversity & Distributions, Volume 5, Number 5, September 1999, pp. 187-195(9). [2] Fisher, R.A., A.S. Corbet, and C.B. Williams. 1943. The relation between the number of species and the number of individuals in a random sample of an animal population. Journal of Animal Ecology, 12:42-58. [3] D. J. Hand, F. Daly, D. Lunn, E. Ostrowski, A Handbook of Small Data Sets, CRC Press, 1994. [4] Wikipedia, “Logarithmic distribution”, #### Examples Draw samples from the distribution: >>> a = .6 >>> s = np.random.logseries(a, 10000) >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s) # plot against distribution >>> def logseries(k, p): ... return -p**k/(k*np.log(1-p)) >>> plt.plot(bins, logseries(bins, a)*count.max()/ ... logseries(bins, a).max(), 'r') >>> plt.show() # numpy.random.RandomState.multinomial method random.RandomState.multinomial(_n_ , _pvals_ , _size =None_) Draw samples from a multinomial distribution. The multinomial distribution is a multivariate generalization of the binomial distribution. Take an experiment with one of `p` possible outcomes. An example of such an experiment is throwing a die, where the outcome can be 1 through 6. Each sample drawn from the distribution represents `n` such experiments. Its values, `X_i = [X_0, X_1, ..., X_p]`, represent the number of times the outcome was `i`. Note New code should use the [`multinomial`](numpy.random.generator.multinomial#numpy.random.Generator.multinomial "numpy.random.Generator.multinomial") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Warning This function defaults to the C-long dtype, which is 32bit on windows and otherwise 64bit on 64bit platforms (and 32bit on 32bit ones). 
Since NumPy 2.0, NumPy’s default integer is 32bit on 32bit platforms and 64bit on 64bit platforms. Parameters: **n** int Number of experiments. **pvals** sequence of floats, length p Probabilities of each of the `p` different outcomes. These must sum to 1 (however, the last element is always assumed to account for the remaining probability, as long as `sum(pvals[:-1]) <= 1`). **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns: **out** ndarray The drawn samples, of shape _size_ , if that was provided. If not, the shape is `(N,)`. In other words, each entry `out[i,j,...,:]` is an N-dimensional value drawn from the distribution. See also [`random.Generator.multinomial`](numpy.random.generator.multinomial#numpy.random.Generator.multinomial "numpy.random.Generator.multinomial") which should be used for new code. #### Examples Throw a die 20 times: >>> np.random.multinomial(20, [1/6.]*6, size=1) array([[4, 1, 7, 5, 2, 1]]) # random It landed 4 times on 1, once on 2, etc. Now, throw the die 20 times, and 20 times again: >>> np.random.multinomial(20, [1/6.]*6, size=2) array([[3, 4, 3, 3, 4, 3], # random [2, 4, 3, 4, 0, 7]]) For the first run, we threw 3 times 1, 4 times 2, etc. For the second, we threw 2 times 1, 4 times 2, etc. A loaded die is more likely to land on number 6: >>> np.random.multinomial(100, [1/7.]*5 + [2/7.]) array([11, 16, 14, 17, 16, 26]) # random The probability inputs should be normalized. As an implementation detail, the value of the last entry is ignored and assumed to take up any leftover probability mass, but this should not be relied on. 
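Two consequences of the description above can be verified directly: each drawn row of counts sums to `n`, and, averaged over many experiments, the per-outcome proportions approach `pvals`. A minimal sketch (a fair six-sided die, as in the examples):

```python
import numpy as np

np.random.seed(0)
n, pvals = 100, [1/6.] * 6
draws = np.random.multinomial(n, pvals, size=10000)

# Each row is one experiment: the six counts always sum to n ...
row_sums = draws.sum(axis=1)

# ... and averaged over many experiments the proportions approach pvals.
proportions = draws.mean(axis=0) / n
```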
A biased coin which has twice as much weight on one side as on the other should be sampled like so: >>> np.random.multinomial(100, [1.0 / 3, 2.0 / 3]) # RIGHT array([38, 62]) # random not like: >>> np.random.multinomial(100, [1.0, 2.0]) # WRONG Traceback (most recent call last): ValueError: pvals < 0, pvals > 1 or pvals contains NaNs # numpy.random.RandomState.multivariate_normal method random.RandomState.multivariate_normal(_mean_ , _cov_ , _size =None_, _check_valid ='warn'_, _tol =1e-8_) Draw random samples from a multivariate normal distribution. The multivariate normal, multinormal or Gaussian distribution is a generalization of the one-dimensional normal distribution to higher dimensions. Such a distribution is specified by its mean and covariance matrix. These parameters are analogous to the mean (average or “center”) and variance (standard deviation, or “width,” squared) of the one-dimensional normal distribution. Note New code should use the [`multivariate_normal`](numpy.random.generator.multivariate_normal#numpy.random.Generator.multivariate_normal "numpy.random.Generator.multivariate_normal") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **mean** 1-D array_like, of length N Mean of the N-dimensional distribution. **cov** 2-D array_like, of shape (N, N) Covariance matrix of the distribution. It must be symmetric and positive-semidefinite for proper sampling. **size** int or tuple of ints, optional Given a shape of, for example, `(m,n,k)`, `m*n*k` samples are generated, and packed in an `m`-by-`n`-by-`k` arrangement. Because each sample is `N`-dimensional, the output shape is `(m,n,k,N)`. If no shape is specified, a single (`N`-D) sample is returned. **check_valid**{ ‘warn’, ‘raise’, ‘ignore’ }, optional Behavior when the covariance matrix is not positive semidefinite. 
**tol** float, optional Tolerance when checking the singular values in covariance matrix. cov is cast to double before the check. Returns: **out** ndarray The drawn samples, of shape _size_ , if that was provided. If not, the shape is `(N,)`. In other words, each entry `out[i,j,...,:]` is an N-dimensional value drawn from the distribution. See also [`random.Generator.multivariate_normal`](numpy.random.generator.multivariate_normal#numpy.random.Generator.multivariate_normal "numpy.random.Generator.multivariate_normal") which should be used for new code. #### Notes The mean is a coordinate in N-dimensional space, which represents the location where samples are most likely to be generated. This is analogous to the peak of the bell curve for the one-dimensional or univariate normal distribution. Covariance indicates the level to which two variables vary together. From the multivariate normal distribution, we draw N-dimensional samples, \\(X = [x_1, x_2, ... x_N]\\). The covariance matrix element \\(C_{ij}\\) is the covariance of \\(x_i\\) and \\(x_j\\). The element \\(C_{ii}\\) is the variance of \\(x_i\\) (i.e. its “spread”). Instead of specifying the full covariance matrix, popular approximations include: * Spherical covariance ([`cov`](../../generated/numpy.cov#numpy.cov "numpy.cov") is a multiple of the identity matrix) * Diagonal covariance ([`cov`](../../generated/numpy.cov#numpy.cov "numpy.cov") has non-negative elements, and only on the diagonal) This geometrical property can be seen in two dimensions by plotting generated data-points: >>> mean = [0, 0] >>> cov = [[1, 0], [0, 100]] # diagonal covariance Diagonal covariance means that points are oriented along x or y-axis: >>> import matplotlib.pyplot as plt >>> x, y = np.random.multivariate_normal(mean, cov, 5000).T >>> plt.plot(x, y, 'x') >>> plt.axis('equal') >>> plt.show() Note that the covariance matrix must be positive semidefinite (a.k.a. nonnegative-definite). 
Otherwise, the behavior of this method is undefined and backwards compatibility is not guaranteed. #### References [1] Papoulis, A., “Probability, Random Variables, and Stochastic Processes,” 3rd ed., New York: McGraw-Hill, 1991. [2] Duda, R. O., Hart, P. E., and Stork, D. G., “Pattern Classification,” 2nd ed., New York: Wiley, 2001. #### Examples >>> mean = (1, 2) >>> cov = [[1, 0], [0, 1]] >>> x = np.random.multivariate_normal(mean, cov, (3, 3)) >>> x.shape (3, 3, 2) Here we generate 800 samples from the bivariate normal distribution with mean [0, 0] and covariance matrix [[6, -3], [-3, 3.5]]. The expected variances of the first and second components of the sample are 6 and 3.5, respectively, and the expected correlation coefficient is -3/sqrt(6*3.5) ≈ -0.65465. >>> cov = np.array([[6, -3], [-3, 3.5]]) >>> pts = np.random.multivariate_normal([0, 0], cov, size=800) Check that the mean, covariance, and correlation coefficient of the sample are close to the expected values: >>> pts.mean(axis=0) array([ 0.0326911 , -0.01280782]) # may vary >>> np.cov(pts.T) array([[ 5.96202397, -2.85602287], [-2.85602287, 3.47613949]]) # may vary >>> np.corrcoef(pts.T)[0, 1] -0.6273591314603949 # may vary We can visualize this data with a scatter plot. The orientation of the point cloud illustrates the negative correlation of the components of this sample. >>> import matplotlib.pyplot as plt >>> plt.plot(pts[:, 0], pts[:, 1], '.', alpha=0.5) >>> plt.axis('equal') >>> plt.grid() >>> plt.show() # numpy.random.RandomState.negative_binomial method random.RandomState.negative_binomial(_n_ , _p_ , _size =None_) Draw samples from a negative binomial distribution. Samples are drawn from a negative binomial distribution with specified parameters, `n` successes and `p` probability of success where `n` is > 0 and `p` is in the interval [0, 1]. 
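As a quick sanity check on the description above, the sample mean can be compared with the textbook mean of this parameterization, `n * (1 - p) / p` (a standard property of the negative binomial, stated here as an assumption rather than taken from this page). A minimal sketch using the parameters from the oil-well example below:

```python
import numpy as np

np.random.seed(0)
n, p = 1, 0.1
s = np.random.negative_binomial(n, p, 200000)

# Each sample counts failures before the n-th success, so all samples
# are non-negative integers; the mean of this parameterization is
# n * (1 - p) / p (here 9.0).
expected_mean = n * (1 - p) / p
```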
Note New code should use the [`negative_binomial`](numpy.random.generator.negative_binomial#numpy.random.Generator.negative_binomial "numpy.random.Generator.negative_binomial") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **n** float or array_like of floats Parameter of the distribution, > 0. **p** float or array_like of floats Parameter of the distribution, >= 0 and <=1. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `n` and `p` are both scalars. Otherwise, `np.broadcast(n, p).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized negative binomial distribution, where each sample is equal to N, the number of failures that occurred before a total of n successes was reached. Warning This function returns the C-long dtype, which is 32bit on windows and otherwise 64bit on 64bit platforms (and 32bit on 32bit ones). Since NumPy 2.0, NumPy’s default integer is 32bit on 32bit platforms and 64bit on 64bit platforms. See also [`random.Generator.negative_binomial`](numpy.random.generator.negative_binomial#numpy.random.Generator.negative_binomial "numpy.random.Generator.negative_binomial") which should be used for new code. #### Notes The probability mass function of the negative binomial distribution is \\[P(N;n,p) = \frac{\Gamma(N+n)}{N!\Gamma(n)}p^{n}(1-p)^{N},\\] where \\(n\\) is the number of successes, \\(p\\) is the probability of success, \\(N+n\\) is the number of trials, and \\(\Gamma\\) is the gamma function. When \\(n\\) is an integer, \\(\frac{\Gamma(N+n)}{N!\Gamma(n)} = \binom{N+n-1}{N}\\), which is the more common form of this term in the pmf. 
The negative binomial distribution gives the probability of N failures given n successes, with a success on the last trial. If one throws a die repeatedly until the third time a “1” appears, then the probability distribution of the number of non-“1”s that appear before the third “1” is a negative binomial distribution. #### References [1] Weisstein, Eric W. “Negative Binomial Distribution.” From MathWorld–A Wolfram Web Resource. [2] Wikipedia, “Negative binomial distribution”, #### Examples Draw samples from the distribution: A real world example. A company drills wild-cat oil exploration wells, each with an estimated probability of success of 0.1. What is the probability of having one success for each successive well, that is what is the probability of a single success after drilling 5 wells, after 6 wells, etc.? >>> s = np.random.negative_binomial(1, 0.1, 100000) >>> for i in range(1, 11): ... probability = sum(s < i) / 100000. ... print(i, "wells drilled, probability of one success =", probability) # numpy.random.RandomState.noncentral_chisquare method random.RandomState.noncentral_chisquare(_df_ , _nonc_ , _size =None_) Draw samples from a noncentral chi-square distribution. The noncentral chi-square distribution is a generalization of the chi-square distribution. Note New code should use the [`noncentral_chisquare`](numpy.random.generator.noncentral_chisquare#numpy.random.Generator.noncentral_chisquare "numpy.random.Generator.noncentral_chisquare") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **df** float or array_like of floats Degrees of freedom, must be > 0. **nonc** float or array_like of floats Non-centrality, must be non-negative. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `df` and `nonc` are both scalars. Otherwise, `np.broadcast(df, nonc).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized noncentral chi-square distribution. See also [`random.Generator.noncentral_chisquare`](numpy.random.generator.noncentral_chisquare#numpy.random.Generator.noncentral_chisquare "numpy.random.Generator.noncentral_chisquare") which should be used for new code. #### Notes The probability density function for the noncentral Chi-square distribution is \\[P(x;df,nonc) = \sum^{\infty}_{i=0} \frac{e^{-nonc/2}(nonc/2)^{i}}{i!} P_{Y_{df+2i}}(x),\\] where \\(Y_{q}\\) is the Chi-square with q degrees of freedom. 
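A quick numerical check of the noncentral chi-square samples: the distribution's mean is `df + nonc` and its variance is `2 * (df + 2 * nonc)` (standard properties, stated here as assumptions rather than taken from this page). A minimal sketch using the parameters from the histogram example:

```python
import numpy as np

np.random.seed(0)
df, nonc = 3, 20
s = np.random.noncentral_chisquare(df, nonc, 200000)

# Known moments of the noncentral chi-square:
# mean = df + nonc, variance = 2 * (df + 2 * nonc).
expected_mean = df + nonc
expected_var = 2 * (df + 2 * nonc)
```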
#### References [1] Wikipedia, “Noncentral chi-squared distribution” #### Examples Draw values from the distribution and plot the histogram >>> import matplotlib.pyplot as plt >>> values = plt.hist(np.random.noncentral_chisquare(3, 20, 100000), ... bins=200, density=True) >>> plt.show() Draw values from a noncentral chisquare with very small noncentrality, and compare to a chisquare. >>> plt.figure() >>> values = plt.hist(np.random.noncentral_chisquare(3, .0000001, 100000), ... bins=np.arange(0., 25, .1), density=True) >>> values2 = plt.hist(np.random.chisquare(3, 100000), ... bins=np.arange(0., 25, .1), density=True) >>> plt.plot(values[1][0:-1], values[0]-values2[0], 'ob') >>> plt.show() Demonstrate how large values of non-centrality lead to a more symmetric distribution. >>> plt.figure() >>> values = plt.hist(np.random.noncentral_chisquare(3, 20, 100000), ... bins=200, density=True) >>> plt.show() # numpy.random.RandomState.noncentral_f method random.RandomState.noncentral_f(_dfnum_ , _dfden_ , _nonc_ , _size =None_) Draw samples from the noncentral F distribution. Samples are drawn from an F distribution with specified parameters, `dfnum` (degrees of freedom in numerator) and `dfden` (degrees of freedom in denominator), where both parameters > 1. `nonc` is the non-centrality parameter. Note New code should use the [`noncentral_f`](numpy.random.generator.noncentral_f#numpy.random.Generator.noncentral_f "numpy.random.Generator.noncentral_f") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **dfnum** float or array_like of floats Numerator degrees of freedom, must be > 0. **dfden** float or array_like of floats Denominator degrees of freedom, must be > 0. **nonc** float or array_like of floats Non-centrality parameter, the sum of the squares of the numerator means, must be >= 0. **size** int or tuple of ints, optional Output shape. 
If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `dfnum`, `dfden`, and `nonc` are all scalars. Otherwise, `np.broadcast(dfnum, dfden, nonc).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized noncentral Fisher distribution. See also [`random.Generator.noncentral_f`](numpy.random.generator.noncentral_f#numpy.random.Generator.noncentral_f "numpy.random.Generator.noncentral_f") which should be used for new code. #### Notes When calculating the power of an experiment (power = probability of rejecting the null hypothesis when a specific alternative is true) the non-central F statistic becomes important. When the null hypothesis is true, the F statistic follows a central F distribution. When the null hypothesis is not true, then it follows a non-central F statistic. #### References [1] Weisstein, Eric W. “Noncentral F-Distribution.” From MathWorld–A Wolfram Web Resource. [2] Wikipedia, “Noncentral F-distribution”, #### Examples In a study, testing for a specific alternative to the null hypothesis requires use of the Noncentral F distribution. We need to calculate the area in the tail of the distribution that exceeds the value of the F distribution for the null hypothesis. We’ll plot the two probability distributions for comparison. >>> dfnum = 3 # between group deg of freedom >>> dfden = 20 # within groups degrees of freedom >>> nonc = 3.0 >>> nc_vals = np.random.noncentral_f(dfnum, dfden, nonc, 1000000) >>> NF = np.histogram(nc_vals, bins=50, density=True) >>> c_vals = np.random.f(dfnum, dfden, 1000000) >>> F = np.histogram(c_vals, bins=50, density=True) >>> import matplotlib.pyplot as plt >>> plt.plot(F[1][1:], F[0]) >>> plt.plot(NF[1][1:], NF[0]) >>> plt.show() # numpy.random.RandomState.normal method random.RandomState.normal(_loc =0.0_, _scale =1.0_, _size =None_) Draw random samples from a normal (Gaussian) distribution. 
The probability density function of the normal distribution, first derived by De Moivre and 200 years later by both Gauss and Laplace independently [2], is often called the bell curve because of its characteristic shape (see the example below). The normal distribution occurs often in nature. For example, it describes the commonly occurring distribution of samples influenced by a large number of tiny, random disturbances, each with its own unique distribution [2]. Note New code should use the [`normal`](numpy.random.generator.normal#numpy.random.Generator.normal "numpy.random.Generator.normal") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **loc** float or array_like of floats Mean (“centre”) of the distribution. **scale** float or array_like of floats Standard deviation (spread or “width”) of the distribution. Must be non-negative. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars. Otherwise, `np.broadcast(loc, scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized normal distribution. See also [`scipy.stats.norm`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. [`random.Generator.normal`](numpy.random.generator.normal#numpy.random.Generator.normal "numpy.random.Generator.normal") which should be used for new code. #### Notes The probability density for the Gaussian distribution is \\[p(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }} e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },\\] where \\(\mu\\) is the mean and \\(\sigma\\) the standard deviation. 
The square of the standard deviation, \\(\sigma^2\\), is called the variance. The function has its peak at the mean, and its “spread” increases with the standard deviation (the function reaches 0.607 times its maximum at \\(x + \sigma\\) and \\(x - \sigma\\) [2]). This implies that normal is more likely to return samples lying close to the mean, rather than those far away. #### References [1] Wikipedia, “Normal distribution”, [2] (1,2,3) P. R. Peebles Jr., “Central Limit Theorem” in “Probability, Random Variables and Random Signal Principles”, 4th ed., 2001, pp. 51, 51, 125. #### Examples Draw samples from the distribution: >>> mu, sigma = 0, 0.1 # mean and standard deviation >>> s = np.random.normal(mu, sigma, 1000) Verify the mean and the standard deviation: >>> abs(mu - np.mean(s)) 0.0 # may vary >>> abs(sigma - np.std(s, ddof=1)) 0.1 # may vary Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 30, density=True) >>> plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * ... np.exp( - (bins - mu)**2 / (2 * sigma**2) ), ... linewidth=2, color='r') >>> plt.show() Two-by-four array of samples from the normal distribution with mean 3 and standard deviation 2.5: >>> np.random.normal(3, 2.5, size=(2, 4)) array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random # numpy.random.RandomState.pareto method random.RandomState.pareto(_a_ , _size =None_) Draw samples from a Pareto II or Lomax distribution with specified shape. The Lomax or Pareto II distribution is a shifted Pareto distribution. The classical Pareto distribution can be obtained from the Lomax distribution by adding 1 and multiplying by the scale parameter `m` (see Notes). The smallest value of the Lomax distribution is zero while for the classical Pareto distribution it is `mu`, where the standard Pareto distribution has location `mu = 1`. 
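The Lomax-to-classical-Pareto shift described above can be checked numerically. A minimal sketch, assuming illustrative values `a = 3.0` (shape) and `m = 2.0` (scale) and an arbitrary seed:

```python
import numpy as np

rng = np.random.RandomState(12345)  # arbitrary seed for reproducibility
a, m = 3.0, 2.0                     # illustrative shape and scale

# np.random.pareto draws from the Lomax (Pareto II) distribution,
# whose smallest value is zero.
lomax = rng.pareto(a, 100000)

# Adding 1 and multiplying by the scale m recovers the classical
# Pareto distribution, whose smallest value is m.
classical = (lomax + 1) * m

print(lomax.min())      # close to 0
print(classical.min())  # close to m = 2.0
```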
Lomax can also be considered as a simplified version of the Generalized Pareto distribution (available in SciPy), with the scale set to one and the location set to zero. The Pareto distribution must be greater than zero, and is unbounded above. It is also known as the “80-20 rule”. In this distribution, 80 percent of the weights are in the lowest 20 percent of the range, while the other 20 percent fill the remaining 80 percent of the range. Note New code should use the [`pareto`](numpy.random.generator.pareto#numpy.random.Generator.pareto "numpy.random.Generator.pareto") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **a** float or array_like of floats Shape of the distribution. Must be positive. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Pareto distribution. See also [`scipy.stats.lomax`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lomax.html#scipy.stats.lomax "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. [`scipy.stats.genpareto`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genpareto.html#scipy.stats.genpareto "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. [`random.Generator.pareto`](numpy.random.generator.pareto#numpy.random.Generator.pareto "numpy.random.Generator.pareto") which should be used for new code. #### Notes The probability density for the Pareto distribution is \\[p(x) = \frac{am^a}{x^{a+1}}\\] where \\(a\\) is the shape and \\(m\\) the scale. 
The Pareto distribution, named after the Italian economist Vilfredo Pareto, is a power law probability distribution useful in many real world problems. Outside the field of economics it is generally referred to as the Bradford distribution. Pareto developed the distribution to describe the distribution of wealth in an economy. It has also found use in insurance, web page access statistics, oil field sizes, and many other problems, including the download frequency for projects in Sourceforge [1]. It is one of the so-called “fat-tailed” distributions. #### References [1] Francis Hunt and Paul Johnson, On the Pareto Distribution of Sourceforge projects. [2] Pareto, V. (1896). Course of Political Economy. Lausanne. [3] Reiss, R.D., Thomas, M. (2001), Statistical Analysis of Extreme Values, Birkhauser Verlag, Basel, pp. 23-30. [4] Wikipedia, “Pareto distribution”, #### Examples Draw samples from the distribution: >>> a, m = 3., 2. # shape and mode >>> s = (np.random.pareto(a, 1000) + 1) * m Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, _ = plt.hist(s, 100, density=True) >>> fit = a*m**a / bins**(a+1) >>> plt.plot(bins, max(count)*fit/max(fit), linewidth=2, color='r') >>> plt.show() # numpy.random.RandomState.permutation method random.RandomState.permutation(_x_) Randomly permute a sequence, or return a permuted range. If `x` is a multi-dimensional array, it is only shuffled along its first index. Note New code should use the [`permutation`](numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **x** int or array_like If `x` is an integer, randomly permute `np.arange(x)`. If `x` is an array, make a copy and shuffle the elements randomly. 
Returns: **out** ndarray Permuted sequence or array range. See also [`random.Generator.permutation`](numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation") which should be used for new code. #### Examples >>> np.random.permutation(10) array([1, 7, 4, 3, 0, 9, 2, 5, 8, 6]) # random >>> np.random.permutation([1, 4, 9, 12, 15]) array([15, 1, 9, 4, 12]) # random >>> arr = np.arange(9).reshape((3, 3)) >>> np.random.permutation(arr) array([[6, 7, 8], # random [0, 1, 2], [3, 4, 5]]) # numpy.random.RandomState.poisson method random.RandomState.poisson(_lam =1.0_, _size =None_) Draw samples from a Poisson distribution. The Poisson distribution is the limit of the binomial distribution for large N. Note New code should use the [`poisson`](numpy.random.generator.poisson#numpy.random.Generator.poisson "numpy.random.Generator.poisson") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **lam** float or array_like of floats Expected number of events occurring in a fixed-time interval, must be >= 0. A sequence must be broadcastable over the requested size. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `lam` is a scalar. Otherwise, `np.array(lam).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Poisson distribution. See also [`random.Generator.poisson`](numpy.random.generator.poisson#numpy.random.Generator.poisson "numpy.random.Generator.poisson") which should be used for new code. 
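As a quick sanity check on the parameterization, the sample mean and the sample variance of Poisson draws should both approximate `lam`. A minimal sketch, assuming an arbitrary seed and the illustrative value `lam = 10`:

```python
import numpy as np

rng = np.random.RandomState(0)  # arbitrary seed
lam = 10.0
s = rng.poisson(lam, 100000)

# For a Poisson distribution, the mean and the variance both equal lam.
print(s.mean())  # approximately 10
print(s.var())   # approximately 10
```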
#### Notes The probability mass function (PMF) of the Poisson distribution is \\[f(k; \lambda)=\frac{\lambda^k e^{-\lambda}}{k!}\\] For events with an expected separation \\(\lambda\\) the Poisson distribution \\(f(k; \lambda)\\) describes the probability of \\(k\\) events occurring within the observed interval \\(\lambda\\). Because the output is limited to the range of the C int64 type, a ValueError is raised when `lam` is within 10 sigma of the maximum representable value. #### References [1] Weisstein, Eric W. “Poisson Distribution.” From MathWorld–A Wolfram Web Resource. [2] Wikipedia, “Poisson distribution”, #### Examples Draw samples from the distribution: >>> import numpy as np >>> s = np.random.poisson(5, 10000) Display the histogram of the sample: >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 14, density=True) >>> plt.show() Draw 100 values each for lambda 100 and 500: >>> s = np.random.poisson(lam=(100., 500.), size=(100, 2)) # numpy.random.RandomState.power method random.RandomState.power(_a_ , _size =None_) Draws samples in [0, 1] from a power distribution with positive exponent a - 1. Also known as the power function distribution. Note New code should use the [`power`](numpy.random.generator.power#numpy.random.Generator.power "numpy.random.Generator.power") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **a** float or array_like of floats Parameter of the distribution. Must be non-negative. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized power distribution. Raises: ValueError If a <= 0. 
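Both the [0, 1] support and the `ValueError` for non-positive `a` can be demonstrated directly. A minimal sketch; the values `5.0` and `-1.0` are arbitrary:

```python
import numpy as np

# Valid draws lie in the closed interval [0, 1].
s = np.random.power(5.0, 1000)
in_range = bool(((s >= 0) & (s <= 1)).all())

# A non-positive exponent parameter is rejected with ValueError.
try:
    np.random.power(-1.0)
    raised = False
except ValueError:
    raised = True

print(in_range, raised)  # True True
```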
See also [`random.Generator.power`](numpy.random.generator.power#numpy.random.Generator.power "numpy.random.Generator.power") which should be used for new code. #### Notes The probability density function is \\[P(x; a) = ax^{a-1}, 0 \le x \le 1, a>0.\\] The power function distribution is just the inverse of the Pareto distribution. It may also be seen as a special case of the Beta distribution. It is used, for example, in modeling the over-reporting of insurance claims. #### References [1] Christian Kleiber, Samuel Kotz, “Statistical size distributions in economics and actuarial sciences”, Wiley, 2003. [2] Heckert, N. A. and Filliben, James J. “NIST Handbook 148: Dataplot Reference Manual, Volume 2: Let Subcommands and Library Functions”, National Institute of Standards and Technology Handbook Series, June 2003. #### Examples Draw samples from the distribution: >>> a = 5. # shape >>> samples = 1000 >>> s = np.random.power(a, samples) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, bins=30) >>> x = np.linspace(0, 1, 100) >>> y = a*x**(a-1.) >>> normed_y = samples*np.diff(bins)[0]*y >>> plt.plot(x, normed_y) >>> plt.show() Compare the power function distribution to the inverse of the Pareto. 
>>> from scipy import stats >>> rvs = np.random.power(5, 1000000) >>> rvsp = np.random.pareto(5, 1000000) >>> xx = np.linspace(0,1,100) >>> powpdf = stats.powerlaw.pdf(xx,5) >>> plt.figure() >>> plt.hist(rvs, bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('np.random.power(5)') >>> plt.figure() >>> plt.hist(1./(1.+rvsp), bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('inverse of 1 + np.random.pareto(5)') >>> plt.figure() >>> plt.hist(1./(1.+rvsp), bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('inverse of stats.pareto(5)') # numpy.random.RandomState.rand method random.RandomState.rand(_d0_ , _d1_ , _..._ , _dn_) Random values in a given shape. Note This is a convenience function for users porting code from Matlab, and wraps [`random_sample`](numpy.random.randomstate.random_sample#numpy.random.RandomState.random_sample "numpy.random.RandomState.random_sample"). That function takes a tuple to specify the size of the output, which is consistent with other NumPy functions like [`numpy.zeros`](../../generated/numpy.zeros#numpy.zeros "numpy.zeros") and [`numpy.ones`](../../generated/numpy.ones#numpy.ones "numpy.ones"). Create an array of the given shape and populate it with random samples from a uniform distribution over `[0, 1)`. Parameters: **d0, d1, …, dn** int, optional The dimensions of the returned array, must be non-negative. If no argument is given a single Python float is returned. Returns: **out** ndarray, shape `(d0, d1, ..., dn)` Random values. See also [`random`](../index#module-numpy.random "numpy.random") #### Examples >>> np.random.rand(3,2) array([[ 0.14022471, 0.96360618], #random [ 0.37601032, 0.25528411], #random [ 0.49313049, 0.94909878]]) #random # numpy.random.RandomState.randint method random.RandomState.randint(_low_ , _high =None_, _size =None_, _dtype =int_) Return random integers from `low` (inclusive) to `high` (exclusive). 
Return random integers from the “discrete uniform” distribution of the specified dtype in the “half-open” interval [`low`, `high`). If `high` is None (the default), then results are from [0, `low`). Note New code should use the [`integers`](numpy.random.generator.integers#numpy.random.Generator.integers "numpy.random.Generator.integers") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **low** int or array-like of ints Lowest (signed) integers to be drawn from the distribution (unless `high=None`, in which case this parameter is one above the _highest_ such integer). **high** int or array-like of ints, optional If provided, one above the largest (signed) integer to be drawn from the distribution (see above for behavior if `high=None`). If array-like, must contain integer values. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. **dtype** dtype, optional Desired dtype of the result. Byteorder must be native. The default value is long. Warning This function defaults to the C-long dtype, which is 32bit on Windows and otherwise 64bit on 64bit platforms (and 32bit on 32bit ones). Since NumPy 2.0, NumPy’s default integer is 32bit on 32bit platforms and 64bit on 64bit platforms, which corresponds to `np.intp`. (`dtype=int` is not the same as in most NumPy functions.) Returns: **out** int or ndarray of ints [`size`](../../generated/numpy.size#numpy.size "numpy.size")-shaped array of random integers from the appropriate distribution, or a single such random int if [`size`](../../generated/numpy.size#numpy.size "numpy.size") not provided. 
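The dtype behaviour described in the warning can be pinned down by passing an explicit fixed-width type. A minimal sketch with arbitrary bounds:

```python
import numpy as np

# Without dtype, the result uses the platform C long, whose width
# varies across platforms; request a fixed-width dtype to be explicit.
b = np.random.randint(0, 10, size=5, dtype=np.int64)
c = np.random.randint(0, 10, size=5, dtype=np.uint8)

print(b.dtype, c.dtype)  # int64 uint8
```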
See also [`random_integers`](numpy.random.randomstate.random_integers#numpy.random.RandomState.random_integers "numpy.random.RandomState.random_integers") similar to `randint`, only for the closed interval [`low`, `high`], and 1 is the lowest value if `high` is omitted. [`random.Generator.integers`](numpy.random.generator.integers#numpy.random.Generator.integers "numpy.random.Generator.integers") which should be used for new code. #### Examples >>> np.random.randint(2, size=10) array([1, 0, 0, 0, 1, 1, 0, 0, 1, 0]) # random >>> np.random.randint(1, size=10) array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) Generate a 2 x 4 array of ints between 0 and 4, inclusive: >>> np.random.randint(5, size=(2, 4)) array([[4, 0, 2, 1], # random [3, 2, 2, 0]]) Generate a 1 x 3 array with 3 different upper bounds >>> np.random.randint(1, [3, 5, 10]) array([2, 2, 9]) # random Generate a 1 by 3 array with 3 different lower bounds >>> np.random.randint([1, 5, 7], 10) array([9, 8, 7]) # random Generate a 2 by 4 array using broadcasting with dtype of uint8 >>> np.random.randint([1, 3, 5, 7], [[10], [20]], dtype=np.uint8) array([[ 8, 6, 9, 7], # random [ 1, 16, 9, 12]], dtype=uint8) # numpy.random.RandomState.randn method random.RandomState.randn(_d0_ , _d1_ , _..._ , _dn_) Return a sample (or samples) from the “standard normal” distribution. Note This is a convenience function for users porting code from Matlab, and wraps [`standard_normal`](numpy.random.randomstate.standard_normal#numpy.random.RandomState.standard_normal "numpy.random.RandomState.standard_normal"). That function takes a tuple to specify the size of the output, which is consistent with other NumPy functions like [`numpy.zeros`](../../generated/numpy.zeros#numpy.zeros "numpy.zeros") and [`numpy.ones`](../../generated/numpy.ones#numpy.ones "numpy.ones"). 
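The note above amounts to a simple shape equivalence between the two calling conventions. A minimal sketch with arbitrary dimensions:

```python
import numpy as np

# randn takes the dimensions as separate arguments ...
x = np.random.randn(2, 4)
# ... while standard_normal takes one shape tuple, like np.zeros/np.ones.
y = np.random.standard_normal((2, 4))

print(x.shape == y.shape)  # True
```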
Note New code should use the [`standard_normal`](numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). If positive int_like arguments are provided, `randn` generates an array of shape `(d0, d1, ..., dn)`, filled with random floats sampled from a univariate “normal” (Gaussian) distribution of mean 0 and variance 1. A single float randomly sampled from the distribution is returned if no argument is provided. Parameters: **d0, d1, …, dn** int, optional The dimensions of the returned array, must be non-negative. If no argument is given a single Python float is returned. Returns: **Z** ndarray or float A `(d0, d1, ..., dn)`-shaped array of floating-point samples from the standard normal distribution, or a single such float if no parameters were supplied. See also [`standard_normal`](numpy.random.randomstate.standard_normal#numpy.random.RandomState.standard_normal "numpy.random.RandomState.standard_normal") Similar, but takes a tuple as its argument. [`normal`](numpy.random.randomstate.normal#numpy.random.RandomState.normal "numpy.random.RandomState.normal") Also accepts mu and sigma arguments. [`random.Generator.standard_normal`](numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal") which should be used for new code. #### Notes For random samples from the normal distribution with mean `mu` and standard deviation `sigma`, use: sigma * np.random.randn(...) 
+ mu #### Examples >>> np.random.randn() 2.1923875335537315 # random Two-by-four array of samples from the normal distribution with mean 3 and standard deviation 2.5: >>> 3 + 2.5 * np.random.randn(2, 4) array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random # numpy.random.RandomState.random_integers method random.RandomState.random_integers(_low_ , _high =None_, _size =None_) Random integers of type [`numpy.int_`](../../arrays.scalars#numpy.int_ "numpy.int_") between `low` and `high`, inclusive. Return random integers of type [`numpy.int_`](../../arrays.scalars#numpy.int_ "numpy.int_") from the “discrete uniform” distribution in the closed interval [`low`, `high`]. If `high` is None (the default), then results are from [1, `low`]. The [`numpy.int_`](../../arrays.scalars#numpy.int_ "numpy.int_") type translates to the C long integer type and its precision is platform dependent. This function has been deprecated. Use randint instead. Deprecated since version 1.11.0. Parameters: **low** int Lowest (signed) integer to be drawn from the distribution (unless `high=None`, in which case this parameter is the _highest_ such integer). **high** int, optional If provided, the largest (signed) integer to be drawn from the distribution (see above for behavior if `high=None`). **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns: **out** int or ndarray of ints [`size`](../../generated/numpy.size#numpy.size "numpy.size")-shaped array of random integers from the appropriate distribution, or a single such random int if [`size`](../../generated/numpy.size#numpy.size "numpy.size") not provided. 
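Since `random_integers` is deprecated, the closed interval [`low`, `high`] maps onto `randint` by adding one to the upper bound. A minimal sketch, using a six-sided die as an illustrative case:

```python
import numpy as np

# random_integers(1, 6, n) sampled the closed interval [1, 6];
# the replacement is randint with an exclusive upper bound of 7.
rolls = np.random.randint(1, 7, size=1000)

print(rolls.min() >= 1, rolls.max() <= 6)  # True True
```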
See also [`randint`](numpy.random.randomstate.randint#numpy.random.RandomState.randint "numpy.random.RandomState.randint") Similar to `random_integers`, only for the half-open interval [`low`, `high`), and 0 is the lowest value if `high` is omitted. #### Notes To sample from N evenly spaced floating-point numbers between a and b, use: a + (b - a) * (np.random.random_integers(N) - 1) / (N - 1.) #### Examples >>> np.random.random_integers(5) 4 # random >>> type(np.random.random_integers(5)) >>> np.random.random_integers(5, size=(3,2)) array([[5, 4], # random [3, 3], [4, 5]]) Choose five random numbers from the set of five evenly-spaced numbers between 0 and 2.5, inclusive (_i.e._ , from the set \\({0, 5/8, 10/8, 15/8, 20/8}\\)): >>> 2.5 * (np.random.random_integers(5, size=(5,)) - 1) / 4. array([ 0.625, 1.25 , 0.625, 0.625, 2.5 ]) # random Roll two six sided dice 1000 times and sum the results: >>> d1 = np.random.random_integers(1, 6, 1000) >>> d2 = np.random.random_integers(1, 6, 1000) >>> dsums = d1 + d2 Display results as a histogram: >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(dsums, 11, density=True) >>> plt.show() # numpy.random.RandomState.random_sample method random.RandomState.random_sample(_size =None_) Return random floats in the half-open interval [0.0, 1.0). Results are from the “continuous uniform” distribution over the stated interval. To sample \\(Unif[a, b), b > a\\) multiply the output of `random_sample` by `(b-a)` and add `a`: (b - a) * random_sample() + a Note New code should use the [`random`](numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. 
Default is None, in which case a single value is returned. Returns: **out** float or ndarray of floats Array of random floats of shape [`size`](../../generated/numpy.size#numpy.size "numpy.size") (unless `size=None`, in which case a single float is returned). See also [`random.Generator.random`](numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random") which should be used for new code. #### Examples >>> np.random.random_sample() 0.47108547995356098 # random >>> type(np.random.random_sample()) >>> np.random.random_sample((5,)) array([ 0.30220482, 0.86820401, 0.1654503 , 0.11659149, 0.54323428]) # random Three-by-two array of random numbers from [-5, 0): >>> 5 * np.random.random_sample((3, 2)) - 5 array([[-3.99149989, -0.52338984], # random [-2.99091858, -0.79479508], [-1.23204345, -1.75224494]]) # numpy.random.RandomState.rayleigh method random.RandomState.rayleigh(_scale =1.0_, _size =None_) Draw samples from a Rayleigh distribution. The \\(\chi\\) and Weibull distributions are generalizations of the Rayleigh. Note New code should use the [`rayleigh`](numpy.random.generator.rayleigh#numpy.random.Generator.rayleigh "numpy.random.Generator.rayleigh") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **scale** float or array_like of floats, optional Scale, also equals the mode. Must be non-negative. Default is 1. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `scale` is a scalar. Otherwise, `np.array(scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Rayleigh distribution. 
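For a Rayleigh distribution with a given `scale`, the mean is `scale * sqrt(pi / 2)`. A minimal numerical check, assuming an arbitrary seed and an illustrative `scale = 2.0`:

```python
import numpy as np

rng = np.random.RandomState(42)  # arbitrary seed
scale = 2.0
s = rng.rayleigh(scale, 200000)

# The Rayleigh mean is scale * sqrt(pi / 2), about 2.5066 for scale = 2.
expected_mean = scale * np.sqrt(np.pi / 2)
print(abs(s.mean() - expected_mean) < 0.05)  # True
```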
See also [`random.Generator.rayleigh`](numpy.random.generator.rayleigh#numpy.random.Generator.rayleigh "numpy.random.Generator.rayleigh") which should be used for new code. #### Notes The probability density function for the Rayleigh distribution is \\[P(x;scale) = \frac{x}{scale^2}e^{\frac{-x^2}{2 \cdotp scale^2}}\\] The Rayleigh distribution would arise, for example, if the East and North components of the wind velocity had identical zero-mean Gaussian distributions. Then the wind speed would have a Rayleigh distribution. #### References [1] Brighton Webs Ltd., “Rayleigh Distribution,” [2] Wikipedia, “Rayleigh distribution” #### Examples Draw values from the distribution and plot the histogram >>> from matplotlib.pyplot import hist >>> values = hist(np.random.rayleigh(3, 100000), bins=200, density=True) Wave heights tend to follow a Rayleigh distribution. If the mean wave height is 1 meter, what fraction of waves are likely to be larger than 3 meters? >>> meanvalue = 1 >>> modevalue = np.sqrt(2 / np.pi) * meanvalue >>> s = np.random.rayleigh(modevalue, 1000000) The percentage of waves larger than 3 meters is: >>> 100.*sum(s>3)/1000000. 0.087300000000000003 # random # numpy.random.RandomState.seed method random.RandomState.seed(_seed =None_) Reseed a legacy MT19937 BitGenerator #### Notes This is a convenience, legacy function. The best practice is to **not** reseed a BitGenerator, rather to recreate a new one. This method is here for legacy reasons. This example demonstrates best practice. >>> from numpy.random import MT19937 >>> from numpy.random import RandomState, SeedSequence >>> rs = RandomState(MT19937(SeedSequence(123456789))) # Later, you want to restart the stream >>> rs = RandomState(MT19937(SeedSequence(987654321))) # numpy.random.RandomState.set_state method random.RandomState.set_state(_state_) Set the internal state of the generator from a tuple. 
For use if one has reason to manually (re-)set the internal state of the bit generator used by the RandomState instance. By default, RandomState uses the “Mersenne Twister” [1] pseudo-random number generating algorithm. Parameters: **state**{tuple(str, ndarray of 624 uints, int, int, float), dict} The `state` tuple has the following items: 1. the string ‘MT19937’, specifying the Mersenne Twister algorithm. 2. a 1-D array of 624 unsigned integers `keys`. 3. an integer `pos`. 4. an integer `has_gauss`. 5. a float `cached_gaussian`. If state is a dictionary, it is directly set using the BitGenerator’s `state` property. Returns: **out** None Returns ‘None’ on success. See also [`get_state`](numpy.random.randomstate.get_state#numpy.random.RandomState.get_state "numpy.random.RandomState.get_state") #### Notes `set_state` and [`get_state`](numpy.random.randomstate.get_state#numpy.random.RandomState.get_state "numpy.random.RandomState.get_state") are not needed to work with any of the random distributions in NumPy. If the internal state is manually altered, the user should know exactly what they are doing. For backwards compatibility, the form (str, array of 624 uints, int) is also accepted although it is missing some information about the cached Gaussian value: `state = ('MT19937', keys, pos)`. #### References [1] M. Matsumoto and T. Nishimura, “Mersenne Twister: A 623-dimensionally equidistributed uniform pseudorandom number generator,” _ACM Trans. on Modeling and Computer Simulation_ , Vol. 8, No. 1, pp. 3-30, Jan. 1998. # numpy.random.RandomState.shuffle method random.RandomState.shuffle(_x_) Modify a sequence in-place by shuffling its contents. This function only shuffles the array along the first axis of a multi-dimensional array. The order of sub-arrays is changed but their contents remain the same. 
Note New code should use the [`shuffle`](numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **x** ndarray or MutableSequence The array, list or mutable sequence to be shuffled. Returns: None See also [`random.Generator.shuffle`](numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle") which should be used for new code. #### Examples >>> arr = np.arange(10) >>> np.random.shuffle(arr) >>> arr [1 7 5 2 9 4 3 6 0 8] # random Multi-dimensional arrays are only shuffled along the first axis: >>> arr = np.arange(9).reshape((3, 3)) >>> np.random.shuffle(arr) >>> arr array([[3, 4, 5], # random [6, 7, 8], [0, 1, 2]]) # numpy.random.RandomState.standard_cauchy method random.RandomState.standard_cauchy(_size =None_) Draw samples from a standard Cauchy distribution with mode = 0. Also known as the Lorentz distribution. Note New code should use the [`standard_cauchy`](numpy.random.generator.standard_cauchy#numpy.random.Generator.standard_cauchy "numpy.random.Generator.standard_cauchy") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns: **samples** ndarray or scalar The drawn samples. See also [`random.Generator.standard_cauchy`](numpy.random.generator.standard_cauchy#numpy.random.Generator.standard_cauchy "numpy.random.Generator.standard_cauchy") which should be used for new code. 
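The heavy tails of the Cauchy distribution show up in simple summary statistics: the sample median is stable near the mode, while the sample mean is dominated by rare extreme values. A minimal sketch, assuming an arbitrary seed:

```python
import numpy as np

rng = np.random.RandomState(7)  # arbitrary seed
s = rng.standard_cauchy(100000)

# The Cauchy distribution has no defined mean, but its median is 0,
# so the sample median is a stable summary while the sample mean is not.
print(abs(np.median(s)) < 0.05)  # True
```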
#### Notes The probability density function for the full Cauchy distribution is \\[P(x; x_0, \gamma) = \frac{1}{\pi \gamma \bigl[ 1+ (\frac{x-x_0}{\gamma})^2 \bigr] }\\] and the Standard Cauchy distribution just sets \\(x_0=0\\) and \\(\gamma=1\\) The Cauchy distribution arises in the solution to the driven harmonic oscillator problem, and also describes spectral line broadening. It also describes the distribution of values at which a line tilted at a random angle will cut the x axis. When studying hypothesis tests that assume normality, seeing how the tests perform on data from a Cauchy distribution is a good indicator of their sensitivity to a heavy-tailed distribution, since the Cauchy looks very much like a Gaussian distribution, but with heavier tails. #### References [1] NIST/SEMATECH e-Handbook of Statistical Methods, “Cauchy Distribution”, [2] Weisstein, Eric W. “Cauchy Distribution.” From MathWorld–A Wolfram Web Resource. [3] Wikipedia, “Cauchy distribution” #### Examples Draw samples and plot the distribution: >>> import matplotlib.pyplot as plt >>> s = np.random.standard_cauchy(1000000) >>> s = s[(s>-25) & (s<25)] # truncate distribution so it plots well >>> plt.hist(s, bins=100) >>> plt.show() # numpy.random.RandomState.standard_exponential method random.RandomState.standard_exponential(_size =None_) Draw samples from the standard exponential distribution. `standard_exponential` is identical to the exponential distribution with a scale parameter of 1. Note New code should use the [`standard_exponential`](numpy.random.generator.standard_exponential#numpy.random.Generator.standard_exponential "numpy.random.Generator.standard_exponential") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. 
Default is None, in which case a single value is returned. Returns: **out** float or ndarray Drawn samples. See also [`random.Generator.standard_exponential`](numpy.random.generator.standard_exponential#numpy.random.Generator.standard_exponential "numpy.random.Generator.standard_exponential") which should be used for new code. #### Examples Output a 3x8000 array: >>> n = np.random.standard_exponential((3, 8000)) # numpy.random.RandomState.standard_gamma method random.RandomState.standard_gamma(_shape_ , _size =None_) Draw samples from a standard Gamma distribution. Samples are drawn from a Gamma distribution with specified parameters, shape (sometimes designated “k”) and scale=1. Note New code should use the [`standard_gamma`](numpy.random.generator.standard_gamma#numpy.random.Generator.standard_gamma "numpy.random.Generator.standard_gamma") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **shape** float or array_like of floats Parameter, must be non-negative. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `shape` is a scalar. Otherwise, `np.array(shape).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized standard gamma distribution. See also [`scipy.stats.gamma`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. [`random.Generator.standard_gamma`](numpy.random.generator.standard_gamma#numpy.random.Generator.standard_gamma "numpy.random.Generator.standard_gamma") which should be used for new code. 
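As a quick supplementary sketch (the seed and sample size here are illustrative, not from the original docs): because `standard_gamma` fixes the scale at 1, a general Gamma variate can be obtained simply by rescaling its draws.

```python
import numpy as np

rs = np.random.RandomState(12345)  # legacy generator, seeded only for reproducibility
k, theta = 2.0, 3.0                # shape and desired scale

# standard_gamma draws from Gamma(k, scale=1); multiplying by theta
# yields draws from Gamma(k, scale=theta)
samples = theta * rs.standard_gamma(k, size=100_000)

# sanity check: the mean of Gamma(k, theta) is k * theta
print(samples.mean())  # close to 6.0
```

This is equivalent to calling `rs.gamma(k, theta, size=...)` directly.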
#### Notes The probability density for the Gamma distribution is \\[p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},\\] where \\(k\\) is the shape and \\(\theta\\) the scale, and \\(\Gamma\\) is the Gamma function. The Gamma distribution is often used to model the times to failure of electronic components, and arises naturally in processes for which the waiting times between Poisson distributed events are relevant. #### References [1] Weisstein, Eric W. “Gamma Distribution.” From MathWorld–A Wolfram Web Resource. [2] Wikipedia, “Gamma distribution”, #### Examples Draw samples from the distribution: >>> shape, scale = 2., 1. # mean and width >>> s = np.random.standard_gamma(shape, 1000000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> import scipy.special as sps >>> count, bins, ignored = plt.hist(s, 50, density=True) >>> y = bins**(shape-1) * ((np.exp(-bins/scale))/ ... (sps.gamma(shape) * scale**shape)) >>> plt.plot(bins, y, linewidth=2, color='r') >>> plt.show() # numpy.random.RandomState.standard_normal method random.RandomState.standard_normal(_size =None_) Draw samples from a standard Normal distribution (mean=0, stdev=1). Note New code should use the [`standard_normal`](numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns: **out** float or ndarray A floating-point array of shape `size` of drawn samples, or a single sample if `size` was not specified. 
See also [`normal`](numpy.random.randomstate.normal#numpy.random.RandomState.normal "numpy.random.RandomState.normal") Equivalent function with additional `loc` and `scale` arguments for setting the mean and standard deviation. [`random.Generator.standard_normal`](numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal") which should be used for new code. #### Notes For random samples from the normal distribution with mean `mu` and standard deviation `sigma`, use one of: mu + sigma * np.random.standard_normal(size=...) np.random.normal(mu, sigma, size=...) #### Examples >>> np.random.standard_normal() 2.1923875335537315 #random >>> s = np.random.standard_normal(8000) >>> s array([ 0.6888893 , 0.78096262, -0.89086505, ..., 0.49876311, # random -0.38672696, -0.4685006 ]) # random >>> s.shape (8000,) >>> s = np.random.standard_normal(size=(3, 4, 2)) >>> s.shape (3, 4, 2) Two-by-four array of samples from the normal distribution with mean 3 and standard deviation 2.5: >>> 3 + 2.5 * np.random.standard_normal(size=(2, 4)) array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random # numpy.random.RandomState.standard_t method random.RandomState.standard_t(_df_ , _size =None_) Draw samples from a standard Student’s t distribution with `df` degrees of freedom. A special case of the hyperbolic distribution. As `df` gets large, the result resembles that of the standard normal distribution ([`standard_normal`](numpy.random.randomstate.standard_normal#numpy.random.RandomState.standard_normal "numpy.random.RandomState.standard_normal")). Note New code should use the [`standard_t`](numpy.random.generator.standard_t#numpy.random.Generator.standard_t "numpy.random.Generator.standard_t") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). 
Parameters: **df** float or array_like of floats Degrees of freedom, must be > 0. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `df` is a scalar. Otherwise, `np.array(df).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized standard Student’s t distribution. See also [`random.Generator.standard_t`](numpy.random.generator.standard_t#numpy.random.Generator.standard_t "numpy.random.Generator.standard_t") which should be used for new code. #### Notes The probability density function for the t distribution is \\[P(x, df) = \frac{\Gamma(\frac{df+1}{2})}{\sqrt{\pi df} \Gamma(\frac{df}{2})}\Bigl( 1+\frac{x^2}{df} \Bigr)^{-(df+1)/2}\\] The t test is based on an assumption that the data come from a Normal distribution. The t test provides a way to test whether the sample mean (that is the mean calculated from the data) is a good estimate of the true mean. The derivation of the t-distribution was first published in 1908 by William Gosset while working for the Guinness Brewery in Dublin. Due to proprietary issues, he had to publish under a pseudonym, and so he used the name Student. #### References [1] Dalgaard, Peter, “Introductory Statistics With R”, Springer, 2002. [2] Wikipedia, “Student’s t-distribution” [https://en.wikipedia.org/wiki/Student’s_t- distribution](https://en.wikipedia.org/wiki/Student's_t-distribution) #### Examples From Dalgaard page 83 [1], suppose the daily energy intake for 11 women in kilojoules (kJ) is: >>> intake = np.array([5260., 5470, 5640, 6180, 6390, 6515, 6805, 7515, \ ... 7515, 8230, 8770]) Does their energy intake deviate systematically from the recommended value of 7725 kJ? 
Our null hypothesis will be the absence of deviation, and the alternate hypothesis will be the presence of an effect that could be either positive or negative, hence making our test 2-tailed. Because we are estimating the mean and we have N=11 values in our sample, we have N-1=10 degrees of freedom. We set our significance level at 5% and compute the t statistic using the empirical mean and empirical standard deviation of our intake. We use a ddof of 1 to base the computation of our empirical standard deviation on an unbiased estimate of the variance (note: the final estimate is not unbiased due to the concave nature of the square root). >>> np.mean(intake) 6753.636363636364 >>> intake.std(ddof=1) 1142.1232221373727 >>> t = (np.mean(intake)-7725)/(intake.std(ddof=1)/np.sqrt(len(intake))) >>> t -2.8207540608310198 We draw 1000000 samples from Student’s t distribution with the appropriate degrees of freedom. >>> import matplotlib.pyplot as plt >>> s = np.random.standard_t(10, size=1000000) >>> h = plt.hist(s, bins=100, density=True) Does our t statistic land in one of the two critical regions found at both tails of the distribution? >>> np.sum(np.abs(t) < np.abs(s)) / float(len(s)) 0.018318 #random < 0.05, statistic is in critical region The probability value for this 2-tailed test is about 1.83%, which is lower than the 5% pre-determined significance threshold. Therefore, the probability of observing values as extreme as our intake conditionally on the null hypothesis being true is too low, and we reject the null hypothesis of no deviation. # numpy.random.RandomState.triangular method random.RandomState.triangular(_left_ , _mode_ , _right_ , _size =None_) Draw samples from the triangular distribution over the interval `[left, right]`. The triangular distribution is a continuous probability distribution with lower limit left, peak at mode, and upper limit right. Unlike the other distributions, these parameters directly define the shape of the pdf. 
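As a quick sketch (the seed and sample size are illustrative), the three parameters pin down simple moments directly; for instance, the mean of a triangular distribution is `(left + mode + right) / 3`.

```python
import numpy as np

rs = np.random.RandomState(3)  # seeded only for reproducibility
left, mode, right = -3.0, 0.0, 8.0
s = rs.triangular(left, mode, right, size=300_000)

# the mean of a triangular distribution is (left + mode + right) / 3
print(s.mean())  # close to (-3 + 0 + 8) / 3 ≈ 1.667
```

All draws fall inside `[left, right]` by construction.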
Note New code should use the [`triangular`](numpy.random.generator.triangular#numpy.random.Generator.triangular "numpy.random.Generator.triangular") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **left** float or array_like of floats Lower limit. **mode** float or array_like of floats The value where the peak of the distribution occurs. The value must fulfill the condition `left <= mode <= right`. **right** float or array_like of floats Upper limit, must be larger than `left`. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `left`, `mode`, and `right` are all scalars. Otherwise, `np.broadcast(left, mode, right).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized triangular distribution. See also [`random.Generator.triangular`](numpy.random.generator.triangular#numpy.random.Generator.triangular "numpy.random.Generator.triangular") which should be used for new code. #### Notes The probability density function for the triangular distribution is \\[\begin{split}P(x;l, m, r) = \begin{cases} \frac{2(x-l)}{(r-l)(m-l)}& \text{for $l \leq x \leq m$},\\\ \frac{2(r-x)}{(r-l)(r-m)}& \text{for $m \leq x \leq r$},\\\ 0& \text{otherwise}. \end{cases}\end{split}\\] The triangular distribution is often used in ill-defined problems where the underlying distribution is not known, but some knowledge of the limits and mode exists. Often it is used in simulations. #### References [1] Wikipedia, “Triangular distribution” #### Examples Draw values from the distribution and plot the histogram: >>> import matplotlib.pyplot as plt >>> h = plt.hist(np.random.triangular(-3, 0, 8, 100000), bins=200, ... 
density=True) >>> plt.show() # numpy.random.RandomState.uniform method random.RandomState.uniform(_low =0.0_, _high =1.0_, _size =None_) Draw samples from a uniform distribution. Samples are uniformly distributed over the half-open interval `[low, high)` (includes low, but excludes high). In other words, any value within the given interval is equally likely to be drawn by `uniform`. Note New code should use the [`uniform`](numpy.random.generator.uniform#numpy.random.Generator.uniform "numpy.random.Generator.uniform") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **low** float or array_like of floats, optional Lower boundary of the output interval. All values generated will be greater than or equal to low. The default value is 0. **high** float or array_like of floats Upper boundary of the output interval. All values generated will be less than or equal to high. The high limit may be included in the returned array of floats due to floating-point rounding in the equation `low + (high-low) * random_sample()`. The default value is 1.0. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `low` and `high` are both scalars. Otherwise, `np.broadcast(low, high).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized uniform distribution. See also [`randint`](numpy.random.randomstate.randint#numpy.random.RandomState.randint "numpy.random.RandomState.randint") Discrete uniform distribution, yielding integers. [`random_integers`](numpy.random.randomstate.random_integers#numpy.random.RandomState.random_integers "numpy.random.RandomState.random_integers") Discrete uniform distribution over the closed interval `[low, high]`. 
[`random_sample`](numpy.random.randomstate.random_sample#numpy.random.RandomState.random_sample "numpy.random.RandomState.random_sample") Floats uniformly distributed over `[0, 1)`. [`random`](../index#module-numpy.random "numpy.random") Alias for [`random_sample`](numpy.random.randomstate.random_sample#numpy.random.RandomState.random_sample "numpy.random.RandomState.random_sample"). [`rand`](numpy.random.randomstate.rand#numpy.random.RandomState.rand "numpy.random.RandomState.rand") Convenience function that accepts dimensions as input, e.g., `rand(2,2)` would generate a 2-by-2 array of floats, uniformly distributed over `[0, 1)`. [`random.Generator.uniform`](numpy.random.generator.uniform#numpy.random.Generator.uniform "numpy.random.Generator.uniform") which should be used for new code. #### Notes The probability density function of the uniform distribution is \\[p(x) = \frac{1}{b - a}\\] anywhere within the interval `[a, b)`, and zero elsewhere. When `high` == `low`, values of `low` will be returned. If `high` < `low`, the results are officially undefined and may eventually raise an error, i.e. do not rely on this function to behave when passed arguments satisfying that inequality condition. The `high` limit may be included in the returned array of floats due to floating-point rounding in the equation `low + (high-low) * random_sample()`. 
For example: >>> x = np.float32(5*0.99999999) >>> x np.float32(5.0) #### Examples Draw samples from the distribution: >>> s = np.random.uniform(-1,0,1000) All values are within the given interval: >>> np.all(s >= -1) True >>> np.all(s < 0) True Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 15, density=True) >>> plt.plot(bins, np.ones_like(bins), linewidth=2, color='r') >>> plt.show() # numpy.random.RandomState.vonmises method random.RandomState.vonmises(_mu_ , _kappa_ , _size =None_) Draw samples from a von Mises distribution. Samples are drawn from a von Mises distribution with specified mode (mu) and concentration (kappa), on the interval [-pi, pi]. The von Mises distribution (also known as the circular normal distribution) is a continuous probability distribution on the unit circle. It may be thought of as the circular analogue of the normal distribution. Note New code should use the [`vonmises`](numpy.random.generator.vonmises#numpy.random.Generator.vonmises "numpy.random.Generator.vonmises") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **mu** float or array_like of floats Mode (“center”) of the distribution. **kappa** float or array_like of floats Concentration of the distribution, has to be >=0. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `mu` and `kappa` are both scalars. Otherwise, `np.broadcast(mu, kappa).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized von Mises distribution. 
See also [`scipy.stats.vonmises`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.vonmises.html#scipy.stats.vonmises "\(in SciPy v1.14.1\)") probability density function, distribution, or cumulative density function, etc. [`random.Generator.vonmises`](numpy.random.generator.vonmises#numpy.random.Generator.vonmises "numpy.random.Generator.vonmises") which should be used for new code. #### Notes The probability density for the von Mises distribution is \\[p(x) = \frac{e^{\kappa \cos(x-\mu)}}{2\pi I_0(\kappa)},\\] where \\(\mu\\) is the mode and \\(\kappa\\) the concentration, and \\(I_0(\kappa)\\) is the modified Bessel function of order 0. The von Mises distribution is named for Richard Edler von Mises, who was born in Austria-Hungary, in what is now Ukraine. He fled to the United States in 1939 and became a professor at Harvard. He worked in probability theory, aerodynamics, fluid mechanics, and philosophy of science. #### References [1] Abramowitz, M. and Stegun, I. A. (Eds.). “Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing,” New York: Dover, 1972. [2] von Mises, R., “Mathematical Theory of Probability and Statistics”, New York: Academic Press, 1964. #### Examples Draw samples from the distribution: >>> mu, kappa = 0.0, 4.0 # mean and concentration >>> s = np.random.vonmises(mu, kappa, 1000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> from scipy.special import i0 >>> plt.hist(s, 50, density=True) >>> x = np.linspace(-np.pi, np.pi, num=51) >>> y = np.exp(kappa*np.cos(x-mu))/(2*np.pi*i0(kappa)) >>> plt.plot(x, y, linewidth=2, color='r') >>> plt.show() # numpy.random.RandomState.wald method random.RandomState.wald(_mean_ , _scale_ , _size =None_) Draw samples from a Wald, or inverse Gaussian, distribution. As the scale approaches infinity, the distribution becomes more like a Gaussian. 
Some references claim that the Wald is an inverse Gaussian with mean equal to 1, but this is by no means universal. The inverse Gaussian distribution was first studied in relationship to Brownian motion. In 1956 M.C.K. Tweedie used the name inverse Gaussian because there is an inverse relationship between the time to cover a unit distance and distance covered in unit time. Note New code should use the [`wald`](numpy.random.generator.wald#numpy.random.Generator.wald "numpy.random.Generator.wald") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **mean** float or array_like of floats Distribution mean, must be > 0. **scale** float or array_like of floats Scale parameter, must be > 0. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `mean` and `scale` are both scalars. Otherwise, `np.broadcast(mean, scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Wald distribution. See also [`random.Generator.wald`](numpy.random.generator.wald#numpy.random.Generator.wald "numpy.random.Generator.wald") which should be used for new code. #### Notes The probability density function for the Wald distribution is \\[P(x;mean,scale) = \sqrt{\frac{scale}{2\pi x^3}}e^{\frac{-scale(x-mean)^2}{2\cdotp mean^2 x}}\\] As noted above, the inverse Gaussian distribution first arose from attempts to model Brownian motion. It is also a competitor to the Weibull for use in reliability modeling and modeling stock returns and interest rate processes. #### References [1] Brighton Webs Ltd., Wald Distribution, [2] Chhikara, Raj S., and Folks, J. Leroy, “The Inverse Gaussian Distribution: Theory, Methodology, and Applications”, CRC Press, 1988. 
[3] Wikipedia, “Inverse Gaussian distribution” #### Examples Draw values from the distribution and plot the histogram: >>> import matplotlib.pyplot as plt >>> h = plt.hist(np.random.wald(3, 2, 100000), bins=200, density=True) >>> plt.show() # numpy.random.RandomState.weibull method random.RandomState.weibull(_a_ , _size =None_) Draw samples from a Weibull distribution. Draw samples from a 1-parameter Weibull distribution with the given shape parameter `a`. \\[X = (-\ln(U))^{1/a}\\] Here, U is drawn from the uniform distribution over (0,1]. The more common 2-parameter Weibull, including a scale parameter \\(\lambda\\), is just \\(X = \lambda(-\ln(U))^{1/a}\\). Note New code should use the [`weibull`](numpy.random.generator.weibull#numpy.random.Generator.weibull "numpy.random.Generator.weibull") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **a** float or array_like of floats Shape parameter of the distribution. Must be nonnegative. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Weibull distribution. 
See also [`scipy.stats.weibull_max`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.weibull_max.html#scipy.stats.weibull_max "\(in SciPy v1.14.1\)") [`scipy.stats.weibull_min`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.weibull_min.html#scipy.stats.weibull_min "\(in SciPy v1.14.1\)") [`scipy.stats.genextreme`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genextreme.html#scipy.stats.genextreme "\(in SciPy v1.14.1\)") [`gumbel`](numpy.random.randomstate.gumbel#numpy.random.RandomState.gumbel "numpy.random.RandomState.gumbel") [`random.Generator.weibull`](numpy.random.generator.weibull#numpy.random.Generator.weibull "numpy.random.Generator.weibull") which should be used for new code. #### Notes The Weibull (or Type III asymptotic extreme value distribution for smallest values, SEV Type III, or Rosin-Rammler distribution) is one of a class of Generalized Extreme Value (GEV) distributions used in modeling extreme value problems. This class includes the Gumbel and Frechet distributions. The probability density for the Weibull distribution is \\[p(x) = \frac{a} {\lambda}(\frac{x}{\lambda})^{a-1}e^{-(x/\lambda)^a},\\] where \\(a\\) is the shape and \\(\lambda\\) the scale. The function has its peak (the mode) at \\(\lambda(\frac{a-1}{a})^{1/a}\\). When `a = 1`, the Weibull distribution reduces to the exponential distribution. #### References [1] Waloddi Weibull, Royal Technical University, Stockholm, 1939 “A Statistical Theory Of The Strength Of Materials”, Ingeniorsvetenskapsakademiens Handlingar Nr 151, 1939, Generalstabens Litografiska Anstalts Forlag, Stockholm. [2] Waloddi Weibull, “A Statistical Distribution Function of Wide Applicability”, Journal Of Applied Mechanics ASME Paper 1951. [3] Wikipedia, “Weibull distribution”, #### Examples Draw samples from the distribution: >>> a = 5. 
>>> s = np.random.weibull(a, 1000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> x = np.arange(1,100.)/50. >>> def weib(x,n,a): ... return (a / n) * (x / n)**(a - 1) * np.exp(-(x / n)**a) >>> count, bins, ignored = plt.hist(np.random.weibull(5.,1000)) >>> scale = count.max()/weib(x, 1., 5.).max() >>> plt.plot(x, weib(x, 1., 5.)*scale) >>> plt.show() # numpy.random.RandomState.zipf method random.RandomState.zipf(_a_ , _size =None_) Draw samples from a Zipf distribution. Samples are drawn from a Zipf distribution with specified parameter `a` > 1. The Zipf distribution (also known as the zeta distribution) is a discrete probability distribution that satisfies Zipf’s law: the frequency of an item is inversely proportional to its rank in a frequency table. Note New code should use the [`zipf`](numpy.random.generator.zipf#numpy.random.Generator.zipf "numpy.random.Generator.zipf") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **a** float or array_like of floats Distribution parameter. Must be greater than 1. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Zipf distribution. See also [`scipy.stats.zipf`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.zipf.html#scipy.stats.zipf "\(in SciPy v1.14.1\)") probability density function, distribution, or cumulative density function, etc. [`random.Generator.zipf`](numpy.random.generator.zipf#numpy.random.Generator.zipf "numpy.random.Generator.zipf") which should be used for new code. 
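As a quick empirical sketch (the seed and sample size are illustrative), Zipf's inverse-rank law can be checked directly: since the probability of drawing rank `k` is proportional to `k**-a`, draws of rank 1 should be about `2**a` times as frequent as draws of rank 2.

```python
import numpy as np

rs = np.random.RandomState(7)  # seeded only for reproducibility
a = 4.0
s = rs.zipf(a, size=500_000)

# p(k) is proportional to k**-a, so p(1) / p(2) == 2**a
counts = np.bincount(s)
print(counts[1] / counts[2])  # close to 2**a == 16
```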
#### Notes The probability mass function (PMF) for the Zipf distribution is \\[p(k) = \frac{k^{-a}}{\zeta(a)},\\] for integers \\(k \geq 1\\), where \\(\zeta\\) is the Riemann Zeta function. It is named for the American linguist George Kingsley Zipf, who noted that the frequency of any word in a sample of a language is inversely proportional to its rank in the frequency table. #### References [1] Zipf, G. K., “Selected Studies of the Principle of Relative Frequency in Language,” Cambridge, MA: Harvard Univ. Press, 1932. #### Examples Draw samples from the distribution: >>> a = 4.0 >>> n = 20000 >>> s = np.random.zipf(a, n) Display the histogram of the samples, along with the expected histogram based on the probability mass function: >>> import matplotlib.pyplot as plt >>> from scipy.special import zeta [`bincount`](../../generated/numpy.bincount#numpy.bincount "numpy.bincount") provides a fast histogram for small integers. >>> count = np.bincount(s) >>> k = np.arange(1, s.max() + 1) >>> plt.bar(k, count[1:], alpha=0.5, label='sample count') >>> plt.plot(k, n*(k**-a)/zeta(a), 'k.-', alpha=0.5, ... label='expected count') >>> plt.semilogy() >>> plt.grid(alpha=0.4) >>> plt.legend() >>> plt.title(f'Zipf sample, a={a}, size={n}') >>> plt.show() # numpy.random.ranf random.ranf(_* args_, _** kwargs_) This is an alias of [`random_sample`](numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample"). See [`random_sample`](numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample") for the complete documentation. # numpy.random.rayleigh random.rayleigh(_scale =1.0_, _size =None_) Draw samples from a Rayleigh distribution. The \\(\chi\\) and Weibull distributions are generalizations of the Rayleigh. 
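The Rayleigh distribution's connection to two independent Gaussian components (the wind-speed picture in this function's Notes) can be sketched directly; the seed and sample sizes below are illustrative.

```python
import numpy as np

rs = np.random.RandomState(0)  # seeded only for reproducibility
sigma = 3.0

# two independent zero-mean Gaussian components (e.g. east/north wind velocity)
east = rs.normal(0.0, sigma, size=200_000)
north = rs.normal(0.0, sigma, size=200_000)
speed = np.hypot(east, north)  # distributed as Rayleigh(scale=sigma)

# the Rayleigh mean is scale * sqrt(pi / 2)
print(speed.mean())  # close to 3.0 * sqrt(pi / 2) ≈ 3.76
```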
Note New code should use the [`rayleigh`](numpy.random.generator.rayleigh#numpy.random.Generator.rayleigh "numpy.random.Generator.rayleigh") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **scale** float or array_like of floats, optional Scale, also equals the mode. Must be non-negative. Default is 1. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `scale` is a scalar. Otherwise, `np.array(scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Rayleigh distribution. See also [`random.Generator.rayleigh`](numpy.random.generator.rayleigh#numpy.random.Generator.rayleigh "numpy.random.Generator.rayleigh") which should be used for new code. #### Notes The probability density function for the Rayleigh distribution is \\[P(x;scale) = \frac{x}{scale^2}e^{\frac{-x^2}{2 \cdotp scale^2}}\\] The Rayleigh distribution would arise, for example, if the East and North components of the wind velocity had identical zero-mean Gaussian distributions. Then the wind speed would have a Rayleigh distribution. #### References [1] Brighton Webs Ltd., “Rayleigh Distribution,” [2] Wikipedia, “Rayleigh distribution” #### Examples Draw values from the distribution and plot the histogram >>> from matplotlib.pyplot import hist >>> values = hist(np.random.rayleigh(3, 100000), bins=200, density=True) Wave heights tend to follow a Rayleigh distribution. If the mean wave height is 1 meter, what fraction of waves are likely to be larger than 3 meters? >>> meanvalue = 1 >>> modevalue = np.sqrt(2 / np.pi) * meanvalue >>> s = np.random.rayleigh(modevalue, 1000000) The percentage of waves larger than 3 meters is: >>> 100.*sum(s>3)/1000000. 
0.087300000000000003 # random # numpy.random.sample random.sample(_* args_, _** kwargs_) This is an alias of [`random_sample`](numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample"). See [`random_sample`](numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample") for the complete documentation. # numpy.random.seed random.seed(_seed =None_) Reseed the singleton RandomState instance. See also [`numpy.random.Generator`](../generator#numpy.random.Generator "numpy.random.Generator") #### Notes This is a convenience, legacy function that exists to support older code that uses the singleton RandomState. Best practice is to use a dedicated `Generator` instance rather than the random variate generation methods exposed directly in the random module. # numpy.random.set_state random.set_state(_state_) Set the internal state of the generator from a tuple. For use if one has reason to manually (re-)set the internal state of the bit generator used by the RandomState instance. By default, RandomState uses the “Mersenne Twister”[1] pseudo-random number generating algorithm. Parameters: **state**{tuple(str, ndarray of 624 uints, int, int, float), dict} The `state` tuple has the following items: 1. the string ‘MT19937’, specifying the Mersenne Twister algorithm. 2. a 1-D array of 624 unsigned integers `keys`. 3. an integer `pos`. 4. an integer `has_gauss`. 5. a float `cached_gaussian`. If state is a dictionary, it is directly set using the BitGenerators `state` property. Returns: **out** None Returns ‘None’ on success. See also [`get_state`](numpy.random.get_state#numpy.random.get_state "numpy.random.get_state") #### Notes `set_state` and [`get_state`](numpy.random.get_state#numpy.random.get_state "numpy.random.get_state") are not needed to work with any of the random distributions in NumPy. If the internal state is manually altered, the user should know exactly what he/she is doing. 
For backwards compatibility, the form (str, array of 624 uints, int) is also accepted although it is missing some information about the cached Gaussian value: `state = ('MT19937', keys, pos)`. #### References [1] M. Matsumoto and T. Nishimura, “Mersenne Twister: A 623-dimensionally equidistributed uniform pseudorandom number generator,” _ACM Trans. on Modeling and Computer Simulation_ , Vol. 8, No. 1, pp. 3-30, Jan. 1998. # numpy.random.shuffle random.shuffle(_x_) Modify a sequence in-place by shuffling its contents. This function only shuffles the array along the first axis of a multi- dimensional array. The order of sub-arrays is changed but their contents remains the same. Note New code should use the [`shuffle`](numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **x** ndarray or MutableSequence The array, list or mutable sequence to be shuffled. Returns: None See also [`random.Generator.shuffle`](numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle") which should be used for new code. #### Examples >>> arr = np.arange(10) >>> np.random.shuffle(arr) >>> arr [1 7 5 2 9 4 3 6 0 8] # random Multi-dimensional arrays are only shuffled along the first axis: >>> arr = np.arange(9).reshape((3, 3)) >>> np.random.shuffle(arr) >>> arr array([[3, 4, 5], # random [6, 7, 8], [0, 1, 2]]) # numpy.random.standard_cauchy random.standard_cauchy(_size =None_) Draw samples from a standard Cauchy distribution with mode = 0. Also known as the Lorentz distribution. 
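As a quick sketch of the heavy-tail behaviour described in the Notes (the seed and sample size are illustrative): the Cauchy distribution has no finite mean or variance, so the sample median, not the sample mean, is the stable location estimate.

```python
import numpy as np

rs = np.random.RandomState(42)  # seeded only for reproducibility
s = rs.standard_cauchy(100_000)

# the Cauchy distribution has no finite mean or variance, but its
# median sits at the mode (0) and is stable across samples
print(np.median(s))             # close to 0
print(np.mean(np.abs(s) > 25))  # heavy tails: a visible fraction beyond |x| = 25
```

This is also why the plotting example below truncates the sample before drawing a histogram.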
Note New code should use the [`standard_cauchy`](numpy.random.generator.standard_cauchy#numpy.random.Generator.standard_cauchy "numpy.random.Generator.standard_cauchy") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns: **samples** ndarray or scalar The drawn samples. See also [`random.Generator.standard_cauchy`](numpy.random.generator.standard_cauchy#numpy.random.Generator.standard_cauchy "numpy.random.Generator.standard_cauchy") which should be used for new code. #### Notes The probability density function for the full Cauchy distribution is \\[P(x; x_0, \gamma) = \frac{1}{\pi \gamma \bigl[ 1+ (\frac{x-x_0}{\gamma})^2 \bigr] }\\] and the Standard Cauchy distribution just sets \\(x_0=0\\) and \\(\gamma=1\\) The Cauchy distribution arises in the solution to the driven harmonic oscillator problem, and also describes spectral line broadening. It also describes the distribution of values at which a line tilted at a random angle will cut the x axis. When studying hypothesis tests that assume normality, seeing how the tests perform on data from a Cauchy distribution is a good indicator of their sensitivity to a heavy-tailed distribution, since the Cauchy looks very much like a Gaussian distribution, but with heavier tails. #### References [1] NIST/SEMATECH e-Handbook of Statistical Methods, “Cauchy Distribution”, [2] Weisstein, Eric W. “Cauchy Distribution.” From MathWorld–A Wolfram Web Resource. 
[3] Wikipedia, “Cauchy distribution” #### Examples Draw samples and plot the distribution: >>> import matplotlib.pyplot as plt >>> s = np.random.standard_cauchy(1000000) >>> s = s[(s>-25) & (s<25)] # truncate distribution so it plots well >>> plt.hist(s, bins=100) >>> plt.show() # numpy.random.standard_exponential random.standard_exponential(_size =None_) Draw samples from the standard exponential distribution. `standard_exponential` is identical to the exponential distribution with a scale parameter of 1. Note New code should use the [`standard_exponential`](numpy.random.generator.standard_exponential#numpy.random.Generator.standard_exponential "numpy.random.Generator.standard_exponential") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns: **out** float or ndarray Drawn samples. See also [`random.Generator.standard_exponential`](numpy.random.generator.standard_exponential#numpy.random.Generator.standard_exponential "numpy.random.Generator.standard_exponential") which should be used for new code. #### Examples Output a 3x8000 array: >>> n = np.random.standard_exponential((3, 8000)) # numpy.random.standard_gamma random.standard_gamma(_shape_ , _size =None_) Draw samples from a standard Gamma distribution. Samples are drawn from a Gamma distribution with specified parameters, shape (sometimes designated “k”) and scale=1. Note New code should use the [`standard_gamma`](numpy.random.generator.standard_gamma#numpy.random.Generator.standard_gamma "numpy.random.Generator.standard_gamma") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). 
Parameters: **shape** float or array_like of floats Parameter, must be non-negative. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `shape` is a scalar. Otherwise, `np.array(shape).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized standard gamma distribution. See also [`scipy.stats.gamma`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma "\(in SciPy v1.14.1\)") probability density function, distribution or cumulative density function, etc. [`random.Generator.standard_gamma`](numpy.random.generator.standard_gamma#numpy.random.Generator.standard_gamma "numpy.random.Generator.standard_gamma") which should be used for new code. #### Notes The probability density for the Gamma distribution is \\[p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},\\] where \\(k\\) is the shape and \\(\theta\\) the scale, and \\(\Gamma\\) is the Gamma function. The Gamma distribution is often used to model the times to failure of electronic components, and arises naturally in processes for which the waiting times between Poisson distributed events are relevant. #### References [1] Weisstein, Eric W. “Gamma Distribution.” From MathWorld–A Wolfram Web Resource. [2] Wikipedia, “Gamma distribution”, #### Examples Draw samples from the distribution: >>> shape, scale = 2., 1. # mean and width >>> s = np.random.standard_gamma(shape, 1000000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> import scipy.special as sps >>> count, bins, ignored = plt.hist(s, 50, density=True) >>> y = bins**(shape-1) * ((np.exp(-bins/scale))/ ... 
(sps.gamma(shape) * scale**shape)) >>> plt.plot(bins, y, linewidth=2, color='r') >>> plt.show() # numpy.random.standard_normal random.standard_normal(_size =None_) Draw samples from a standard Normal distribution (mean=0, stdev=1). Note New code should use the [`standard_normal`](numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns: **out** float or ndarray A floating-point array of shape `size` of drawn samples, or a single sample if `size` was not specified. See also [`normal`](numpy.random.normal#numpy.random.normal "numpy.random.normal") Equivalent function with additional `loc` and `scale` arguments for setting the mean and standard deviation. [`random.Generator.standard_normal`](numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal") which should be used for new code. #### Notes For random samples from the normal distribution with mean `mu` and standard deviation `sigma`, use one of: mu + sigma * np.random.standard_normal(size=...) np.random.normal(mu, sigma, size=...) 
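In the legacy `RandomState` stream, `normal` is computed as `loc + scale * gauss` from the same underlying Gaussian sequence, so with a fixed seed the two forms given in the Notes should produce matching values (a sketch; treat the exact agreement as an implementation detail rather than a documented guarantee):

```python
import numpy as np

mu, sigma = 3.0, 2.5

np.random.seed(12345)
a = mu + sigma * np.random.standard_normal(size=5)

np.random.seed(12345)
b = np.random.normal(mu, sigma, size=5)

assert np.allclose(a, b)  # both forms draw from the same Gaussian stream
```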
#### Examples >>> np.random.standard_normal() 2.1923875335537315 #random >>> s = np.random.standard_normal(8000) >>> s array([ 0.6888893 , 0.78096262, -0.89086505, ..., 0.49876311, # random -0.38672696, -0.4685006 ]) # random >>> s.shape (8000,) >>> s = np.random.standard_normal(size=(3, 4, 2)) >>> s.shape (3, 4, 2) Two-by-four array of samples from the normal distribution with mean 3 and standard deviation 2.5: >>> 3 + 2.5 * np.random.standard_normal(size=(2, 4)) array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random # numpy.random.standard_t random.standard_t(_df_ , _size =None_) Draw samples from a standard Student’s t distribution with `df` degrees of freedom. A special case of the hyperbolic distribution. As `df` gets large, the result resembles that of the standard normal distribution ([`standard_normal`](numpy.random.standard_normal#numpy.random.standard_normal "numpy.random.standard_normal")). Note New code should use the [`standard_t`](numpy.random.generator.standard_t#numpy.random.Generator.standard_t "numpy.random.Generator.standard_t") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **df** float or array_like of floats Degrees of freedom, must be > 0. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `df` is a scalar. Otherwise, `np.array(df).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized standard Student’s t distribution. See also [`random.Generator.standard_t`](numpy.random.generator.standard_t#numpy.random.Generator.standard_t "numpy.random.Generator.standard_t") which should be used for new code. 
#### Notes The probability density function for the t distribution is \\[P(x, df) = \frac{\Gamma(\frac{df+1}{2})}{\sqrt{\pi df} \Gamma(\frac{df}{2})}\Bigl( 1+\frac{x^2}{df} \Bigr)^{-(df+1)/2}\\] The t test is based on an assumption that the data come from a Normal distribution. The t test provides a way to test whether the sample mean (that is the mean calculated from the data) is a good estimate of the true mean. The derivation of the t-distribution was first published in 1908 by William Gosset while working for the Guinness Brewery in Dublin. Due to proprietary issues, he had to publish under a pseudonym, and so he used the name Student. #### References [1] Dalgaard, Peter, “Introductory Statistics With R”, Springer, 2002. [2] Wikipedia, “Student’s t-distribution” [https://en.wikipedia.org/wiki/Student's_t-distribution](https://en.wikipedia.org/wiki/Student's_t-distribution) #### Examples From Dalgaard page 83 [1], suppose the daily energy intake for 11 women in kilojoules (kJ) is: >>> intake = np.array([5260., 5470, 5640, 6180, 6390, 6515, 6805, 7515, \ ... 7515, 8230, 8770]) Does their energy intake deviate systematically from the recommended value of 7725 kJ? Our null hypothesis will be the absence of deviation, and the alternate hypothesis will be the presence of an effect that could be either positive or negative, hence making our test 2-tailed. Because we are estimating the mean and we have N=11 values in our sample, we have N-1=10 degrees of freedom. We set our significance level to 95% and compute the t statistic using the empirical mean and empirical standard deviation of our intake. We use a ddof of 1 to base the computation of our empirical standard deviation on an unbiased estimate of the variance (note: the final estimate is not unbiased due to the concave nature of the square root). 
>>> np.mean(intake) 6753.636363636364 >>> intake.std(ddof=1) 1142.1232221373727 >>> t = (np.mean(intake)-7725)/(intake.std(ddof=1)/np.sqrt(len(intake))) >>> t -2.8207540608310198 We draw 1000000 samples from Student’s t distribution with the adequate degrees of freedom. >>> import matplotlib.pyplot as plt >>> s = np.random.standard_t(10, size=1000000) >>> h = plt.hist(s, bins=100, density=True) Does our t statistic land in one of the two critical regions found at both tails of the distribution? >>> np.sum(np.abs(t) < np.abs(s)) / float(len(s)) 0.018318 #random < 0.05, statistic is in critical region The probability value for this 2-tailed test is about 1.83%, which is lower than the 5% pre-determined significance threshold. Therefore, the probability of observing values as extreme as our intake conditionally on the null hypothesis being true is too low, and we reject the null hypothesis of no deviation. # numpy.random.triangular random.triangular(_left_ , _mode_ , _right_ , _size =None_) Draw samples from the triangular distribution over the interval `[left, right]`. The triangular distribution is a continuous probability distribution with lower limit left, peak at mode, and upper limit right. Unlike the other distributions, these parameters directly define the shape of the pdf. Note New code should use the [`triangular`](numpy.random.generator.triangular#numpy.random.Generator.triangular "numpy.random.Generator.triangular") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **left** float or array_like of floats Lower limit. **mode** float or array_like of floats The value where the peak of the distribution occurs. The value must fulfill the condition `left <= mode <= right`. **right** float or array_like of floats Upper limit, must be larger than `left`. **size** int or tuple of ints, optional Output shape. 
If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `left`, `mode`, and `right` are all scalars. Otherwise, `np.broadcast(left, mode, right).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized triangular distribution. See also [`random.Generator.triangular`](numpy.random.generator.triangular#numpy.random.Generator.triangular "numpy.random.Generator.triangular") which should be used for new code. #### Notes The probability density function for the triangular distribution is \\[\begin{split}P(x;l, m, r) = \begin{cases} \frac{2(x-l)}{(r-l)(m-l)}& \text{for $l \leq x \leq m$},\\\ \frac{2(r-x)}{(r-l)(r-m)}& \text{for $m \leq x \leq r$},\\\ 0& \text{otherwise}. \end{cases}\end{split}\\] The triangular distribution is often used in ill-defined problems where the underlying distribution is not known, but some knowledge of the limits and mode exists. Often it is used in simulations. #### References [1] Wikipedia, “Triangular distribution” #### Examples Draw values from the distribution and plot the histogram: >>> import matplotlib.pyplot as plt >>> h = plt.hist(np.random.triangular(-3, 0, 8, 100000), bins=200, ... density=True) >>> plt.show() # numpy.random.uniform random.uniform(_low =0.0_, _high =1.0_, _size =None_) Draw samples from a uniform distribution. Samples are uniformly distributed over the half-open interval `[low, high)` (includes low, but excludes high). In other words, any value within the given interval is equally likely to be drawn by `uniform`. Note New code should use the [`uniform`](numpy.random.generator.uniform#numpy.random.Generator.uniform "numpy.random.Generator.uniform") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). 
Parameters: **low** float or array_like of floats, optional Lower boundary of the output interval. All values generated will be greater than or equal to low. The default value is 0. **high** float or array_like of floats Upper boundary of the output interval. All values generated will be less than or equal to high. The high limit may be included in the returned array of floats due to floating-point rounding in the equation `low + (high-low) * random_sample()`. The default value is 1.0. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `low` and `high` are both scalars. Otherwise, `np.broadcast(low, high).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized uniform distribution. See also [`randint`](numpy.random.randint#numpy.random.randint "numpy.random.randint") Discrete uniform distribution, yielding integers. [`random_integers`](numpy.random.random_integers#numpy.random.random_integers "numpy.random.random_integers") Discrete uniform distribution over the closed interval `[low, high]`. [`random_sample`](numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample") Floats uniformly distributed over `[0, 1)`. [`random`](../index#module-numpy.random "numpy.random") Alias for [`random_sample`](numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample"). [`rand`](numpy.random.rand#numpy.random.rand "numpy.random.rand") Convenience function that accepts dimensions as input, e.g., `rand(2,2)` would generate a 2-by-2 array of floats, uniformly distributed over `[0, 1)`. [`random.Generator.uniform`](numpy.random.generator.uniform#numpy.random.Generator.uniform "numpy.random.Generator.uniform") which should be used for new code. 
#### Notes The probability density function of the uniform distribution is \\[p(x) = \frac{1}{b - a}\\] anywhere within the interval `[a, b)`, and zero elsewhere. When `high` == `low`, values of `low` will be returned. If `high` < `low`, the results are officially undefined and may eventually raise an error, i.e. do not rely on this function to behave when passed arguments satisfying that inequality condition. The `high` limit may be included in the returned array of floats due to floating-point rounding in the equation `low + (high-low) * random_sample()`. For example: >>> x = np.float32(5*0.99999999) >>> x np.float32(5.0) #### Examples Draw samples from the distribution: >>> s = np.random.uniform(-1,0,1000) All values are within the given interval: >>> np.all(s >= -1) True >>> np.all(s < 0) True Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 15, density=True) >>> plt.plot(bins, np.ones_like(bins), linewidth=2, color='r') >>> plt.show() # numpy.random.vonmises random.vonmises(_mu_ , _kappa_ , _size =None_) Draw samples from a von Mises distribution. Samples are drawn from a von Mises distribution with specified mode (mu) and concentration (kappa), on the interval [-pi, pi]. The von Mises distribution (also known as the circular normal distribution) is a continuous probability distribution on the unit circle. It may be thought of as the circular analogue of the normal distribution. Note New code should use the [`vonmises`](numpy.random.generator.vonmises#numpy.random.Generator.vonmises "numpy.random.Generator.vonmises") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **mu** float or array_like of floats Mode (“center”) of the distribution. **kappa** float or array_like of floats Concentration of the distribution, has to be >=0. 
**size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `mu` and `kappa` are both scalars. Otherwise, `np.broadcast(mu, kappa).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized von Mises distribution. See also [`scipy.stats.vonmises`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.vonmises.html#scipy.stats.vonmises "\(in SciPy v1.14.1\)") probability density function, distribution, or cumulative density function, etc. [`random.Generator.vonmises`](numpy.random.generator.vonmises#numpy.random.Generator.vonmises "numpy.random.Generator.vonmises") which should be used for new code. #### Notes The probability density for the von Mises distribution is \\[p(x) = \frac{e^{\kappa \cos(x-\mu)}}{2\pi I_0(\kappa)},\\] where \\(\mu\\) is the mode and \\(\kappa\\) the concentration, and \\(I_0(\kappa)\\) is the modified Bessel function of order 0. The von Mises is named for Richard Edler von Mises, who was born in Austria-Hungary, in what is now the Ukraine. He fled to the United States in 1939 and became a professor at Harvard. He worked in probability theory, aerodynamics, fluid mechanics, and philosophy of science. #### References [1] Abramowitz, M. and Stegun, I. A. (Eds.). “Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing,” New York: Dover, 1972. [2] von Mises, R., “Mathematical Theory of Probability and Statistics”, New York: Academic Press, 1964. 
#### Examples Draw samples from the distribution: >>> mu, kappa = 0.0, 4.0 # mean and concentration >>> s = np.random.vonmises(mu, kappa, 1000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> from scipy.special import i0 >>> plt.hist(s, 50, density=True) >>> x = np.linspace(-np.pi, np.pi, num=51) >>> y = np.exp(kappa*np.cos(x-mu))/(2*np.pi*i0(kappa)) >>> plt.plot(x, y, linewidth=2, color='r') >>> plt.show() # numpy.random.wald random.wald(_mean_ , _scale_ , _size =None_) Draw samples from a Wald, or inverse Gaussian, distribution. As the scale approaches infinity, the distribution becomes more like a Gaussian. Some references claim that the Wald is an inverse Gaussian with mean equal to 1, but this is by no means universal. The inverse Gaussian distribution was first studied in relationship to Brownian motion. In 1956 M.C.K. Tweedie used the name inverse Gaussian because there is an inverse relationship between the time to cover a unit distance and distance covered in unit time. Note New code should use the [`wald`](numpy.random.generator.wald#numpy.random.Generator.wald "numpy.random.Generator.wald") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **mean** float or array_like of floats Distribution mean, must be > 0. **scale** float or array_like of floats Scale parameter, must be > 0. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `mean` and `scale` are both scalars. Otherwise, `np.broadcast(mean, scale).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Wald distribution. 
See also [`random.Generator.wald`](numpy.random.generator.wald#numpy.random.Generator.wald "numpy.random.Generator.wald") which should be used for new code. #### Notes The probability density function for the Wald distribution is \\[P(x;mean,scale) = \sqrt{\frac{scale}{2\pi x^3}}e^{\frac{-scale(x-mean)^2}{2\cdot mean^2 x}}\\] As noted above, the inverse Gaussian distribution first arose from attempts to model Brownian motion. It is also a competitor to the Weibull for use in reliability modeling and modeling stock returns and interest rate processes. #### References [1] Brighton Webs Ltd., Wald Distribution, [2] Chhikara, Raj S., and Folks, J. Leroy, “The Inverse Gaussian Distribution: Theory : Methodology, and Applications”, CRC Press, 1988. [3] Wikipedia, “Inverse Gaussian distribution” #### Examples Draw values from the distribution and plot the histogram: >>> import matplotlib.pyplot as plt >>> h = plt.hist(np.random.wald(3, 2, 100000), bins=200, density=True) >>> plt.show() # numpy.random.weibull random.weibull(_a_ , _size =None_) Draw samples from a Weibull distribution. Draw samples from a 1-parameter Weibull distribution with the given shape parameter `a`. \\[X = (-\ln(U))^{1/a}\\] Here, U is drawn from the uniform distribution over (0,1]. The more common 2-parameter Weibull, including a scale parameter \\(\lambda\\) is just \\(X = \lambda(-\ln(U))^{1/a}\\). Note New code should use the [`weibull`](numpy.random.generator.weibull#numpy.random.Generator.weibull "numpy.random.Generator.weibull") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **a** float or array_like of floats Shape parameter of the distribution. Must be nonnegative. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. 
If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Weibull distribution. See also [`scipy.stats.weibull_max`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.weibull_max.html#scipy.stats.weibull_max "\(in SciPy v1.14.1\)") [`scipy.stats.weibull_min`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.weibull_min.html#scipy.stats.weibull_min "\(in SciPy v1.14.1\)") [`scipy.stats.genextreme`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genextreme.html#scipy.stats.genextreme "\(in SciPy v1.14.1\)") [`gumbel`](numpy.random.gumbel#numpy.random.gumbel "numpy.random.gumbel") [`random.Generator.weibull`](numpy.random.generator.weibull#numpy.random.Generator.weibull "numpy.random.Generator.weibull") which should be used for new code. #### Notes The Weibull (or Type III asymptotic extreme value distribution for smallest values, SEV Type III, or Rosin-Rammler distribution) is one of a class of Generalized Extreme Value (GEV) distributions used in modeling extreme value problems. This class includes the Gumbel and Frechet distributions. The probability density for the Weibull distribution is \\[p(x) = \frac{a} {\lambda}(\frac{x}{\lambda})^{a-1}e^{-(x/\lambda)^a},\\] where \\(a\\) is the shape and \\(\lambda\\) the scale. The function has its peak (the mode) at \\(\lambda(\frac{a-1}{a})^{1/a}\\). When `a = 1`, the Weibull distribution reduces to the exponential distribution. #### References [1] Waloddi Weibull, Royal Technical University, Stockholm, 1939 “A Statistical Theory Of The Strength Of Materials”, Ingeniorsvetenskapsakademiens Handlingar Nr 151, 1939, Generalstabens Litografiska Anstalts Forlag, Stockholm. [2] Waloddi Weibull, “A Statistical Distribution Function of Wide Applicability”, Journal Of Applied Mechanics ASME Paper 1951. 
[3] Wikipedia, “Weibull distribution”, #### Examples Draw samples from the distribution: >>> a = 5. # shape >>> s = np.random.weibull(a, 1000) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> x = np.arange(1,100.)/50. >>> def weib(x,n,a): ... return (a / n) * (x / n)**(a - 1) * np.exp(-(x / n)**a) >>> count, bins, ignored = plt.hist(np.random.weibull(5.,1000)) >>> x = np.arange(1,100.)/50. >>> scale = count.max()/weib(x, 1., 5.).max() >>> plt.plot(x, weib(x, 1., 5.)*scale) >>> plt.show() # numpy.random.zipf random.zipf(_a_ , _size =None_) Draw samples from a Zipf distribution. Samples are drawn from a Zipf distribution with specified parameter `a` > 1. The Zipf distribution (also known as the zeta distribution) is a discrete probability distribution that satisfies Zipf’s law: the frequency of an item is inversely proportional to its rank in a frequency table. Note New code should use the [`zipf`](numpy.random.generator.zipf#numpy.random.Generator.zipf "numpy.random.Generator.zipf") method of a [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance instead; please see the [Quick start](../index#random-quick-start). Parameters: **a** float or array_like of floats Distribution parameter. Must be greater than 1. **size** int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns: **out** ndarray or scalar Drawn samples from the parameterized Zipf distribution. See also [`scipy.stats.zipf`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.zipf.html#scipy.stats.zipf "\(in SciPy v1.14.1\)") probability density function, distribution, or cumulative density function, etc. 
[`random.Generator.zipf`](numpy.random.generator.zipf#numpy.random.Generator.zipf "numpy.random.Generator.zipf") which should be used for new code. #### Notes The probability mass function (PMF) for the Zipf distribution is \\[p(k) = \frac{k^{-a}}{\zeta(a)},\\] for integers \\(k \geq 1\\), where \\(\zeta\\) is the Riemann Zeta function. It is named for the American linguist George Kingsley Zipf, who noted that the frequency of any word in a sample of a language is inversely proportional to its rank in the frequency table. #### References [1] Zipf, G. K., “Selected Studies of the Principle of Relative Frequency in Language,” Cambridge, MA: Harvard Univ. Press, 1932. #### Examples Draw samples from the distribution: >>> a = 4.0 >>> n = 20000 >>> s = np.random.zipf(a, n) Display the histogram of the samples, along with the expected histogram based on the probability density function: >>> import matplotlib.pyplot as plt >>> from scipy.special import zeta [`bincount`](../../generated/numpy.bincount#numpy.bincount "numpy.bincount") provides a fast histogram for small integers. >>> count = np.bincount(s) >>> k = np.arange(1, s.max() + 1) >>> plt.bar(k, count[1:], alpha=0.5, label='sample count') >>> plt.plot(k, n*(k**-a)/zeta(a), 'k.-', alpha=0.5, ... label='expected count') >>> plt.semilogy() >>> plt.grid(alpha=0.4) >>> plt.legend() >>> plt.title(f'Zipf sample, a={a}, size={n}') >>> plt.show() # Random Generator The `Generator` provides access to a wide range of distributions, and served as a replacement for [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState"). The main difference between the two is that `Generator` relies on an additional BitGenerator to manage state and generate the random bits, which are then transformed into random values from useful distributions. The default BitGenerator used by `Generator` is [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64"). 
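The relationship between `Generator` and its underlying BitGenerator can be sketched directly (using the documented `PCG64` default and the `MT19937` alternative):

```python
import numpy as np
from numpy.random import Generator, PCG64, MT19937

# default_rng builds a Generator backed by PCG64.
rng_default = np.random.default_rng()
assert isinstance(rng_default.bit_generator, PCG64)

# A different BitGenerator can be used by instantiating it explicitly
# and passing it to Generator.
rng_mt = Generator(MT19937(seed=1234))
assert isinstance(rng_mt.bit_generator, MT19937)
```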
The BitGenerator can be changed by passing an instantiated BitGenerator to `Generator`. numpy.random.default_rng(_seed =None_) Construct a new Generator with the default BitGenerator (PCG64). Parameters: **seed**{None, int, array_like[ints], SeedSequence, BitGenerator, Generator, RandomState}, optional A seed to initialize the [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"). If None, then fresh, unpredictable entropy will be pulled from the OS. If an `int` or `array_like[ints]` is passed, then all values must be non-negative and will be passed to [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to derive the initial [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") state. One may also pass in a [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") instance. Additionally, when passed a [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"), it will be wrapped by `Generator`. If passed a `Generator`, it will be returned unaltered. When passed a legacy [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") instance it will be coerced to a `Generator`. Returns: Generator The initialized generator object. #### Notes If `seed` is not a [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") or a `Generator`, a new [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") is instantiated. This function does not manage a default global instance. See [Seeding and entropy](bit_generators/index#seeding-and-entropy) for more information about seeding. 
#### Examples `default_rng` is the recommended constructor for the random number class `Generator`. Here are several ways we can construct a random number generator using `default_rng` and the `Generator` class. Here we use `default_rng` to generate a random float: >>> import numpy as np >>> rng = np.random.default_rng(12345) >>> print(rng) Generator(PCG64) >>> rfloat = rng.random() >>> rfloat 0.22733602246716966 >>> type(rfloat) <class 'float'> Here we use `default_rng` to generate 3 random integers between 0 (inclusive) and 10 (exclusive): >>> import numpy as np >>> rng = np.random.default_rng(12345) >>> rints = rng.integers(low=0, high=10, size=3) >>> rints array([6, 2, 7]) >>> type(rints[0]) <class 'numpy.int64'> Here we specify a seed so that we have reproducible results: >>> import numpy as np >>> rng = np.random.default_rng(seed=42) >>> print(rng) Generator(PCG64) >>> arr1 = rng.random((3, 3)) >>> arr1 array([[0.77395605, 0.43887844, 0.85859792], [0.69736803, 0.09417735, 0.97562235], [0.7611397 , 0.78606431, 0.12811363]]) If we exit and restart our Python interpreter, we’ll see that we generate the same random numbers again: >>> import numpy as np >>> rng = np.random.default_rng(seed=42) >>> arr2 = rng.random((3, 3)) >>> arr2 array([[0.77395605, 0.43887844, 0.85859792], [0.69736803, 0.09417735, 0.97562235], [0.7611397 , 0.78606431, 0.12811363]]) _class_ numpy.random.Generator(_bit_generator_) Container for the BitGenerators. `Generator` exposes a number of methods for generating random numbers drawn from a variety of probability distributions. In addition to the distribution-specific arguments, each method takes a keyword argument `size` that defaults to `None`. If `size` is `None`, then a single value is generated and returned. If `size` is an integer, then a 1-D array filled with generated values is returned. If `size` is a tuple, then an array with that shape is filled and returned. 
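The `size` semantics just described can be checked directly (a small sketch using `default_rng`):

```python
import numpy as np

rng = np.random.default_rng(2024)

single = rng.random()        # size=None -> a single Python float
assert isinstance(single, float)

vec = rng.random(5)          # integer size -> 1-D array of that length
assert vec.shape == (5,)

grid = rng.random((2, 3))    # tuple size -> array of that shape
assert grid.shape == (2, 3)
```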
The function `numpy.random.default_rng` will instantiate a `Generator` with numpy’s default [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"). **No Compatibility Guarantee** `Generator` does not provide a version compatibility guarantee. In particular, as better algorithms evolve the bit stream may change. Parameters: **bit_generator** BitGenerator BitGenerator to use as the core generator. See also `default_rng` Recommended constructor for `Generator`. #### Notes The Python stdlib module [`random`](https://docs.python.org/3/library/random.html#module-random "\(in Python v3.13\)") contains a pseudo-random number generator with a number of methods that are similar to the ones available in `Generator`. It uses the Mersenne Twister, and this bit generator can be accessed using [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937"). `Generator`, besides being NumPy-aware, has the advantage that it provides a much larger number of probability distributions to choose from. #### Examples >>> from numpy.random import Generator, PCG64 >>> rng = Generator(PCG64()) >>> rng.standard_normal() -0.203 # random ## Accessing the BitGenerator and spawning [`bit_generator`](generated/numpy.random.generator.bit_generator#numpy.random.Generator.bit_generator "numpy.random.Generator.bit_generator") | Gets the bit generator instance used by the generator ---|--- [`spawn`](generated/numpy.random.generator.spawn#numpy.random.Generator.spawn "numpy.random.Generator.spawn")(n_children) | Create new independent child generators. ## Simple random data [`integers`](generated/numpy.random.generator.integers#numpy.random.Generator.integers "numpy.random.Generator.integers")(low[, high, size, dtype, endpoint]) | Return random integers from `low` (inclusive) to `high` (exclusive), or if endpoint=True, `low` (inclusive) to `high` (inclusive). 
---|--- [`random`](generated/numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random")([size, dtype, out]) | Return random floats in the half-open interval [0.0, 1.0). [`choice`](generated/numpy.random.generator.choice#numpy.random.Generator.choice "numpy.random.Generator.choice")(a[, size, replace, p, axis, shuffle]) | Generates a random sample from a given array. [`bytes`](generated/numpy.random.generator.bytes#numpy.random.Generator.bytes "numpy.random.Generator.bytes")(length) | Return random bytes. ## Permutations The methods for randomly permuting a sequence are [`shuffle`](generated/numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle")(x[, axis]) | Modify an array or sequence in-place by shuffling its contents. ---|--- [`permutation`](generated/numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation")(x[, axis]) | Randomly permute a sequence, or return a permuted range. [`permuted`](generated/numpy.random.generator.permuted#numpy.random.Generator.permuted "numpy.random.Generator.permuted")(x[, axis, out]) | Randomly permute `x` along axis `axis`. The following table summarizes the behaviors of the methods.

method | copy/in-place | axis handling
---|---|---
shuffle | in-place | as if 1d
permutation | copy | as if 1d
permuted | either (use `out` for in-place) | axis independent

The following subsections provide more details about the differences. ### In-place vs. copy The main difference between [`Generator.shuffle`](generated/numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle") and [`Generator.permutation`](generated/numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation") is that [`Generator.shuffle`](generated/numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle") operates in-place, while [`Generator.permutation`](generated/numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation") returns a copy. By default, [`Generator.permuted`](generated/numpy.random.generator.permuted#numpy.random.Generator.permuted "numpy.random.Generator.permuted") returns a copy. To operate in-place with [`Generator.permuted`](generated/numpy.random.generator.permuted#numpy.random.Generator.permuted "numpy.random.Generator.permuted"), pass the same array as the first argument _and_ as the value of the `out` parameter. For example, >>> import numpy as np >>> rng = np.random.default_rng() >>> x = np.arange(0, 15).reshape(3, 5) >>> x array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]]) >>> y = rng.permuted(x, axis=1, out=x) >>> x array([[ 1, 0, 2, 4, 3], # random [ 6, 7, 8, 9, 5], [10, 14, 11, 13, 12]]) Note that when `out` is given, the return value is `out`: >>> y is x True ### Handling the `axis` parameter An important distinction for these methods is how they handle the `axis` parameter. Both [`Generator.shuffle`](generated/numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle") and [`Generator.permutation`](generated/numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation") treat the input as a one-dimensional sequence, and the `axis` parameter determines which dimension of the input array to use as the sequence. 
In the case of a two-dimensional array, `axis=0` will, in effect, rearrange the rows of the array, and `axis=1` will rearrange the columns. For example >>> import numpy as np >>> rng = np.random.default_rng() >>> x = np.arange(0, 15).reshape(3, 5) >>> x array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]]) >>> rng.permutation(x, axis=1) array([[ 1, 3, 2, 0, 4], # random [ 6, 8, 7, 5, 9], [11, 13, 12, 10, 14]]) Note that the columns have been rearranged “in bulk”: the values within each column have not changed. The method [`Generator.permuted`](generated/numpy.random.generator.permuted#numpy.random.Generator.permuted "numpy.random.Generator.permuted") treats the `axis` parameter similarly to how [`numpy.sort`](../generated/numpy.sort#numpy.sort "numpy.sort") treats it. Each slice along the given axis is shuffled independently of the others. Compare the following example of the use of [`Generator.permuted`](generated/numpy.random.generator.permuted#numpy.random.Generator.permuted "numpy.random.Generator.permuted") to the above example of [`Generator.permutation`](generated/numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation"): >>> import numpy as np >>> rng = np.random.default_rng() >>> rng.permuted(x, axis=1) array([[ 1, 0, 2, 4, 3], # random [ 5, 7, 6, 9, 8], [10, 14, 12, 13, 11]]) In this example, the values within each row (i.e. the values along `axis=1`) have been shuffled independently. This is not a “bulk” shuffle of the columns. ### Shuffling non-NumPy sequences [`Generator.shuffle`](generated/numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle") works on non-NumPy sequences. That is, if it is given a sequence that is not a NumPy array, it shuffles that sequence in-place. 
>>> import numpy as np >>> rng = np.random.default_rng() >>> a = ['A', 'B', 'C', 'D', 'E'] >>> rng.shuffle(a) # shuffle the list in-place >>> a ['B', 'D', 'A', 'E', 'C'] # random ## Distributions [`beta`](generated/numpy.random.generator.beta#numpy.random.Generator.beta "numpy.random.Generator.beta")(a, b[, size]) | Draw samples from a Beta distribution. ---|--- [`binomial`](generated/numpy.random.generator.binomial#numpy.random.Generator.binomial "numpy.random.Generator.binomial")(n, p[, size]) | Draw samples from a binomial distribution. [`chisquare`](generated/numpy.random.generator.chisquare#numpy.random.Generator.chisquare "numpy.random.Generator.chisquare")(df[, size]) | Draw samples from a chi-square distribution. [`dirichlet`](generated/numpy.random.generator.dirichlet#numpy.random.Generator.dirichlet "numpy.random.Generator.dirichlet")(alpha[, size]) | Draw samples from the Dirichlet distribution. [`exponential`](generated/numpy.random.generator.exponential#numpy.random.Generator.exponential "numpy.random.Generator.exponential")([scale, size]) | Draw samples from an exponential distribution. [`f`](generated/numpy.random.generator.f#numpy.random.Generator.f "numpy.random.Generator.f")(dfnum, dfden[, size]) | Draw samples from an F distribution. [`gamma`](generated/numpy.random.generator.gamma#numpy.random.Generator.gamma "numpy.random.Generator.gamma")(shape[, scale, size]) | Draw samples from a Gamma distribution. [`geometric`](generated/numpy.random.generator.geometric#numpy.random.Generator.geometric "numpy.random.Generator.geometric")(p[, size]) | Draw samples from the geometric distribution. [`gumbel`](generated/numpy.random.generator.gumbel#numpy.random.Generator.gumbel "numpy.random.Generator.gumbel")([loc, scale, size]) | Draw samples from a Gumbel distribution. 
[`hypergeometric`](generated/numpy.random.generator.hypergeometric#numpy.random.Generator.hypergeometric "numpy.random.Generator.hypergeometric")(ngood, nbad, nsample[, size]) | Draw samples from a Hypergeometric distribution. [`laplace`](generated/numpy.random.generator.laplace#numpy.random.Generator.laplace "numpy.random.Generator.laplace")([loc, scale, size]) | Draw samples from the Laplace or double exponential distribution with specified location (or mean) and scale (decay). [`logistic`](generated/numpy.random.generator.logistic#numpy.random.Generator.logistic "numpy.random.Generator.logistic")([loc, scale, size]) | Draw samples from a logistic distribution. [`lognormal`](generated/numpy.random.generator.lognormal#numpy.random.Generator.lognormal "numpy.random.Generator.lognormal")([mean, sigma, size]) | Draw samples from a log-normal distribution. [`logseries`](generated/numpy.random.generator.logseries#numpy.random.Generator.logseries "numpy.random.Generator.logseries")(p[, size]) | Draw samples from a logarithmic series distribution. [`multinomial`](generated/numpy.random.generator.multinomial#numpy.random.Generator.multinomial "numpy.random.Generator.multinomial")(n, pvals[, size]) | Draw samples from a multinomial distribution. [`multivariate_hypergeometric`](generated/numpy.random.generator.multivariate_hypergeometric#numpy.random.Generator.multivariate_hypergeometric "numpy.random.Generator.multivariate_hypergeometric")(colors, nsample) | Generate variates from a multivariate hypergeometric distribution. [`multivariate_normal`](generated/numpy.random.generator.multivariate_normal#numpy.random.Generator.multivariate_normal "numpy.random.Generator.multivariate_normal")(mean, cov[, size, ...]) | Draw random samples from a multivariate normal distribution. 
[`negative_binomial`](generated/numpy.random.generator.negative_binomial#numpy.random.Generator.negative_binomial "numpy.random.Generator.negative_binomial")(n, p[, size]) | Draw samples from a negative binomial distribution. [`noncentral_chisquare`](generated/numpy.random.generator.noncentral_chisquare#numpy.random.Generator.noncentral_chisquare "numpy.random.Generator.noncentral_chisquare")(df, nonc[, size]) | Draw samples from a noncentral chi-square distribution. [`noncentral_f`](generated/numpy.random.generator.noncentral_f#numpy.random.Generator.noncentral_f "numpy.random.Generator.noncentral_f")(dfnum, dfden, nonc[, size]) | Draw samples from the noncentral F distribution. [`normal`](generated/numpy.random.generator.normal#numpy.random.Generator.normal "numpy.random.Generator.normal")([loc, scale, size]) | Draw random samples from a normal (Gaussian) distribution. [`pareto`](generated/numpy.random.generator.pareto#numpy.random.Generator.pareto "numpy.random.Generator.pareto")(a[, size]) | Draw samples from a Pareto II (AKA Lomax) distribution with specified shape. [`poisson`](generated/numpy.random.generator.poisson#numpy.random.Generator.poisson "numpy.random.Generator.poisson")([lam, size]) | Draw samples from a Poisson distribution. [`power`](generated/numpy.random.generator.power#numpy.random.Generator.power "numpy.random.Generator.power")(a[, size]) | Draws samples in [0, 1] from a power distribution with positive exponent a - 1. [`rayleigh`](generated/numpy.random.generator.rayleigh#numpy.random.Generator.rayleigh "numpy.random.Generator.rayleigh")([scale, size]) | Draw samples from a Rayleigh distribution. [`standard_cauchy`](generated/numpy.random.generator.standard_cauchy#numpy.random.Generator.standard_cauchy "numpy.random.Generator.standard_cauchy")([size]) | Draw samples from a standard Cauchy distribution with mode = 0. 
[`standard_exponential`](generated/numpy.random.generator.standard_exponential#numpy.random.Generator.standard_exponential "numpy.random.Generator.standard_exponential")([size, dtype, method, out]) | Draw samples from the standard exponential distribution. [`standard_gamma`](generated/numpy.random.generator.standard_gamma#numpy.random.Generator.standard_gamma "numpy.random.Generator.standard_gamma")(shape[, size, dtype, out]) | Draw samples from a standard Gamma distribution. [`standard_normal`](generated/numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal")([size, dtype, out]) | Draw samples from a standard Normal distribution (mean=0, stdev=1). [`standard_t`](generated/numpy.random.generator.standard_t#numpy.random.Generator.standard_t "numpy.random.Generator.standard_t")(df[, size]) | Draw samples from a standard Student's t distribution with `df` degrees of freedom. [`triangular`](generated/numpy.random.generator.triangular#numpy.random.Generator.triangular "numpy.random.Generator.triangular")(left, mode, right[, size]) | Draw samples from the triangular distribution over the interval `[left, right]`. [`uniform`](generated/numpy.random.generator.uniform#numpy.random.Generator.uniform "numpy.random.Generator.uniform")([low, high, size]) | Draw samples from a uniform distribution. [`vonmises`](generated/numpy.random.generator.vonmises#numpy.random.Generator.vonmises "numpy.random.Generator.vonmises")(mu, kappa[, size]) | Draw samples from a von Mises distribution. [`wald`](generated/numpy.random.generator.wald#numpy.random.Generator.wald "numpy.random.Generator.wald")(mean, scale[, size]) | Draw samples from a Wald, or inverse Gaussian, distribution. [`weibull`](generated/numpy.random.generator.weibull#numpy.random.Generator.weibull "numpy.random.Generator.weibull")(a[, size]) | Draw samples from a Weibull distribution. 
[`zipf`](generated/numpy.random.generator.zipf#numpy.random.Generator.zipf "numpy.random.Generator.zipf")(a[, size]) | Draw samples from a Zipf distribution. # Random sampling (numpy.random) ## Quick start The `numpy.random` module implements pseudo-random number generators (PRNGs or RNGs, for short) with the ability to draw samples from a variety of probability distributions. In general, users will create a [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") instance with [`default_rng`](generator#numpy.random.default_rng "numpy.random.default_rng") and call the various methods on it to obtain samples from different distributions. >>> import numpy as np >>> rng = np.random.default_rng() # Generate one random float uniformly distributed over the range [0, 1) >>> rng.random() 0.06369197489564249 # may vary # Generate an array of 10 numbers according to a unit Gaussian distribution >>> rng.standard_normal(10) array([-0.31018314, -1.8922078 , -0.3628523 , -0.63526532, 0.43181166, # may vary 0.51640373, 1.25693945, 0.07779185, 0.84090247, -2.13406828]) # Generate an array of 5 integers uniformly over the range [0, 10) >>> rng.integers(low=0, high=10, size=5) array([8, 7, 6, 2, 0]) # may vary Our RNGs are deterministic sequences that can be reproduced by specifying a seed integer used to derive the initial state. By default, with no seed provided, [`default_rng`](generator#numpy.random.default_rng "numpy.random.default_rng") will seed the RNG from nondeterministic data from the operating system and therefore generate different numbers each time. The pseudo-random sequences will be independent for all practical purposes, at least for those purposes for which our pseudo-randomness is suitable in the first place. 
>>> import numpy as np >>> rng1 = np.random.default_rng() >>> rng1.random() 0.6596288841243357 # may vary >>> rng2 = np.random.default_rng() >>> rng2.random() 0.11885628817151628 # may vary Warning The pseudo-random number generators implemented in this module are designed for statistical modeling and simulation. They are not suitable for security or cryptographic purposes. See the [`secrets`](https://docs.python.org/3/library/secrets.html#module-secrets "\(in Python v3.13\)") module from the standard library for such use cases. Seeds should be large positive integers. [`default_rng`](generator#numpy.random.default_rng "numpy.random.default_rng") can take positive integers of any size. We recommend using very large, unique numbers to ensure that your seed is different from anyone else’s. This is good practice to ensure that your results are statistically independent from theirs unless you are intentionally _trying_ to reproduce their result. A convenient way to get such a seed number is to use [`secrets.randbits`](https://docs.python.org/3/library/secrets.html#secrets.randbits "\(in Python v3.13\)") to get an arbitrary 128-bit integer. >>> import numpy as np >>> import secrets >>> secrets.randbits(128) 122807528840384100672342137672332424406 # may vary >>> rng1 = np.random.default_rng(122807528840384100672342137672332424406) >>> rng1.random() 0.5363922081269535 >>> rng2 = np.random.default_rng(122807528840384100672342137672332424406) >>> rng2.random() 0.5363922081269535 See the documentation on [`default_rng`](generator#numpy.random.default_rng "numpy.random.default_rng") and [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") for more advanced options for controlling the seed in specialized scenarios. [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") and its associated infrastructure was introduced in NumPy version 1.17.0. 
There is still a lot of code that uses the older [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") and the functions in `numpy.random`. While there are no plans to remove them at this time, we do recommend transitioning to [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") when you can. The algorithms are faster, more flexible, and will receive more improvements in the future. For the most part, [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") can be used as a replacement for [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState"). See [Legacy random generation](legacy#legacy) for information on the legacy infrastructure, [What’s new or different](new-or-different#new-or-different) for information on transitioning, and [NEP 19](https://numpy.org/neps/nep-0019-rng-policy.html#nep19 "\(in NumPy Enhancement Proposals\)") for some of the reasoning for the transition. ## Design Users primarily interact with [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") instances. Each [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") instance owns a [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") instance that implements the core RNG algorithm. The [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") has a limited set of responsibilities. It manages state and provides functions to produce random doubles and random unsigned 32- and 64-bit values. The [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") takes the bit generator-provided stream and transforms it into more useful distributions, e.g., simulated normal random values. This structure allows alternative bit generators to be used with little code duplication. 
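A brief sketch of that structure: the same `Generator` front end can be paired with different BitGenerator back ends (here `PCG64` and `PCG64DXSM`, both shipped with NumPy):

```python
from numpy.random import Generator, PCG64, PCG64DXSM

# Identical Generator API on top of two different core bit streams.
g1 = Generator(PCG64(1234))
g2 = Generator(PCG64DXSM(1234))
s1 = g1.standard_normal(3)
s2 = g2.standard_normal(3)
assert s1.shape == s2.shape == (3,)
```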
NumPy provides several [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") classes that implement different RNG algorithms. [`default_rng`](generator#numpy.random.default_rng "numpy.random.default_rng") currently uses [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") as the default [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"). It has better statistical properties and performance than the [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") algorithm used in the legacy [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState"). See [Bit generators](bit_generators/index#random-bit-generators) for more details on the supported BitGenerators. [`default_rng`](generator#numpy.random.default_rng "numpy.random.default_rng") and BitGenerators delegate the conversion of seeds into RNG states to [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") internally. [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") implements a sophisticated algorithm that intermediates between the user’s input and the internal implementation details of each [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") algorithm, each of which can require a different number of bits for its state. Importantly, it lets you use arbitrary-sized integers, and arbitrary sequences of such integers, to mix into the RNG state. This is a useful primitive for constructing a [flexible pattern for parallel RNG streams](parallel#seedsequence-spawn). For backward compatibility, we still maintain the legacy [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") class. 
It continues to use the [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") algorithm by default, and old seeds continue to reproduce the same results. The convenience [Functions in numpy.random](legacy#functions-in-numpy-random) are still aliases to the methods on a single global [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") instance. See [Legacy random generation](legacy#legacy) for the complete details. See [What’s new or different](new-or-different#new-or-different) for a detailed comparison between [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") and [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState"). ### Parallel Generation The included generators can be used in parallel, distributed applications in a number of ways: * [SeedSequence spawning](parallel#seedsequence-spawn) * [Sequence of integer seeds](parallel#sequence-of-seeds) * [Independent streams](parallel#independent-streams) * [Jumping the BitGenerator state](parallel#parallel-jumped) Users with a very large amount of parallelism will want to consult [Upgrading PCG64 with PCG64DXSM](upgrading-pcg64#upgrading-pcg64). 
## Concepts * [Random `Generator`](generator) * [Legacy Generator (RandomState)](legacy) * [Bit generators](bit_generators/index) * [Seeding and entropy](bit_generators/index#seeding-and-entropy) * [Upgrading PCG64 with PCG64DXSM](upgrading-pcg64) * [Compatibility policy](compatibility) ## Features * [Parallel Applications](parallel) * [`SeedSequence` spawning](parallel#seedsequence-spawning) * [Sequence of integer seeds](parallel#sequence-of-integer-seeds) * [Independent streams](parallel#independent-streams) * [Jumping the BitGenerator state](parallel#jumping-the-bitgenerator-state) * [Multithreaded Generation](multithreading) * [What’s new or different](new-or-different) * [Comparing Performance](performance) * [Recommendation](performance#recommendation) * [Timings](performance#timings) * [Performance on different operating systems](performance#performance-on-different-operating-systems) * [C API for random](c-api) * [Examples of using Numba, Cython, CFFI](extending) * [Numba](extending#numba) * [Cython](extending#cython) * [CFFI](extending#cffi) * [New BitGenerators](extending#new-bitgenerators) * [Examples](extending#examples) ### Original Source of the Generator and BitGenerators This package was developed independently of NumPy and was integrated in version 1.17.0. The original repo is at [bashtage/randomgen](https://github.com/bashtage/randomgen). # Legacy random generation The `RandomState` provides access to legacy generators. This generator is considered frozen and will have no further improvements. It is guaranteed to produce the same values as the final point release of NumPy v1.16. These all depend on Box-Muller normals or inverse CDF exponentials or gammas. This class should only be used if it is essential to have randoms that are identical to what would have been produced by previous versions of NumPy. `RandomState` adds additional information to the state which is required when using Box-Muller normals since these are produced in pairs. 
It is important to use [`RandomState.get_state`](generated/numpy.random.randomstate.get_state#numpy.random.RandomState.get_state "numpy.random.RandomState.get_state"), and not the underlying bit generator’s `state`, when accessing the state so that these extra values are saved. Although we provide the [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") BitGenerator for use independent of `RandomState`, note that its default seeding uses [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") rather than the legacy seeding algorithm. `RandomState` will use the legacy seeding algorithm. The methods to use the legacy seeding algorithm are currently private, as the main reason to use them is just to implement `RandomState`. However, one can reset the state of [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") using the state of the `RandomState`:

    from numpy.random import MT19937, RandomState

    rs = RandomState(12345)
    mt19937 = MT19937()
    mt19937.state = rs.get_state()
    rs2 = RandomState(mt19937)

    # Same output
    rs.standard_normal()
    rs2.standard_normal()

    rs.random()
    rs2.random()

    rs.standard_exponential()
    rs2.standard_exponential()

class numpy.random.RandomState(seed=None) Container for the slow Mersenne Twister pseudo-random number generator. Consider using a different BitGenerator with the Generator container instead. `RandomState` and [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") expose a number of methods for generating random numbers drawn from a variety of probability distributions. In addition to the distribution-specific arguments, each method takes a keyword argument `size` that defaults to `None`. If `size` is `None`, then a single value is generated and returned. If `size` is an integer, then a 1-D array filled with generated values is returned. 
If `size` is a tuple, then an array with that shape is filled and returned. **Compatibility Guarantee** A fixed bit generator using a fixed seed and a fixed series of calls to `RandomState` methods using the same parameters will always produce the same results up to roundoff error except when the values were incorrect. `RandomState` is effectively frozen and will only receive updates that are required by changes in the internals of NumPy. More substantial changes, including algorithmic improvements, are reserved for [`Generator`](generator#numpy.random.Generator "numpy.random.Generator"). Parameters: **seed**{None, int, array_like, BitGenerator}, optional Random seed used to initialize the pseudo-random number generator or an instantiated BitGenerator. If an integer or array, used as a seed for the MT19937 BitGenerator. Values can be any integer between 0 and 2**32 - 1 inclusive, an array (or other sequence) of such integers, or `None` (the default). If [`seed`](generated/numpy.random.seed#numpy.random.seed "numpy.random.seed") is `None`, then the [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") BitGenerator is initialized by reading data from `/dev/urandom` (or the Windows analogue) if available, or seeded from the clock otherwise. See also [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") [`numpy.random.BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") #### Notes The Python stdlib module `random` also contains a Mersenne Twister pseudo-random number generator with a number of methods that are similar to the ones available in `RandomState`. `RandomState`, besides being NumPy-aware, has the advantage that it provides a much larger number of probability distributions to choose from. 
## Seeding and state [`get_state`](generated/numpy.random.randomstate.get_state#numpy.random.RandomState.get_state "numpy.random.RandomState.get_state")([legacy]) | Return a tuple representing the internal state of the generator. ---|--- [`set_state`](generated/numpy.random.randomstate.set_state#numpy.random.RandomState.set_state "numpy.random.RandomState.set_state")(state) | Set the internal state of the generator from a tuple. [`seed`](generated/numpy.random.randomstate.seed#numpy.random.RandomState.seed "numpy.random.RandomState.seed")([seed]) | Reseed a legacy MT19937 BitGenerator ## Simple random data [`rand`](generated/numpy.random.randomstate.rand#numpy.random.RandomState.rand "numpy.random.RandomState.rand")(d0, d1, ..., dn) | Random values in a given shape. ---|--- [`randn`](generated/numpy.random.randomstate.randn#numpy.random.RandomState.randn "numpy.random.RandomState.randn")(d0, d1, ..., dn) | Return a sample (or samples) from the "standard normal" distribution. [`randint`](generated/numpy.random.randomstate.randint#numpy.random.RandomState.randint "numpy.random.RandomState.randint")(low[, high, size, dtype]) | Return random integers from `low` (inclusive) to `high` (exclusive). [`random_integers`](generated/numpy.random.randomstate.random_integers#numpy.random.RandomState.random_integers "numpy.random.RandomState.random_integers")(low[, high, size]) | Random integers of type [`numpy.int_`](../arrays.scalars#numpy.int_ "numpy.int_") between `low` and `high`, inclusive. [`random_sample`](generated/numpy.random.randomstate.random_sample#numpy.random.RandomState.random_sample "numpy.random.RandomState.random_sample")([size]) | Return random floats in the half-open interval [0.0, 1.0). 
[`choice`](generated/numpy.random.randomstate.choice#numpy.random.RandomState.choice "numpy.random.RandomState.choice")(a[, size, replace, p]) | Generates a random sample from a given 1-D array [`bytes`](generated/numpy.random.randomstate.bytes#numpy.random.RandomState.bytes "numpy.random.RandomState.bytes")(length) | Return random bytes. ## Permutations [`shuffle`](generated/numpy.random.randomstate.shuffle#numpy.random.RandomState.shuffle "numpy.random.RandomState.shuffle")(x) | Modify a sequence in-place by shuffling its contents. ---|--- [`permutation`](generated/numpy.random.randomstate.permutation#numpy.random.RandomState.permutation "numpy.random.RandomState.permutation")(x) | Randomly permute a sequence, or return a permuted range. ## Distributions [`beta`](generated/numpy.random.randomstate.beta#numpy.random.RandomState.beta "numpy.random.RandomState.beta")(a, b[, size]) | Draw samples from a Beta distribution. ---|--- [`binomial`](generated/numpy.random.randomstate.binomial#numpy.random.RandomState.binomial "numpy.random.RandomState.binomial")(n, p[, size]) | Draw samples from a binomial distribution. [`chisquare`](generated/numpy.random.randomstate.chisquare#numpy.random.RandomState.chisquare "numpy.random.RandomState.chisquare")(df[, size]) | Draw samples from a chi-square distribution. [`dirichlet`](generated/numpy.random.randomstate.dirichlet#numpy.random.RandomState.dirichlet "numpy.random.RandomState.dirichlet")(alpha[, size]) | Draw samples from the Dirichlet distribution. [`exponential`](generated/numpy.random.randomstate.exponential#numpy.random.RandomState.exponential "numpy.random.RandomState.exponential")([scale, size]) | Draw samples from an exponential distribution. [`f`](generated/numpy.random.randomstate.f#numpy.random.RandomState.f "numpy.random.RandomState.f")(dfnum, dfden[, size]) | Draw samples from an F distribution. 
[`gamma`](generated/numpy.random.randomstate.gamma#numpy.random.RandomState.gamma "numpy.random.RandomState.gamma")(shape[, scale, size]) | Draw samples from a Gamma distribution.
[`geometric`](generated/numpy.random.randomstate.geometric#numpy.random.RandomState.geometric "numpy.random.RandomState.geometric")(p[, size]) | Draw samples from the geometric distribution.
[`gumbel`](generated/numpy.random.randomstate.gumbel#numpy.random.RandomState.gumbel "numpy.random.RandomState.gumbel")([loc, scale, size]) | Draw samples from a Gumbel distribution.
[`hypergeometric`](generated/numpy.random.randomstate.hypergeometric#numpy.random.RandomState.hypergeometric "numpy.random.RandomState.hypergeometric")(ngood, nbad, nsample[, size]) | Draw samples from a Hypergeometric distribution.
[`laplace`](generated/numpy.random.randomstate.laplace#numpy.random.RandomState.laplace "numpy.random.RandomState.laplace")([loc, scale, size]) | Draw samples from the Laplace or double exponential distribution with specified location (or mean) and scale (decay).
[`logistic`](generated/numpy.random.randomstate.logistic#numpy.random.RandomState.logistic "numpy.random.RandomState.logistic")([loc, scale, size]) | Draw samples from a logistic distribution.
[`lognormal`](generated/numpy.random.randomstate.lognormal#numpy.random.RandomState.lognormal "numpy.random.RandomState.lognormal")([mean, sigma, size]) | Draw samples from a log-normal distribution.
[`logseries`](generated/numpy.random.randomstate.logseries#numpy.random.RandomState.logseries "numpy.random.RandomState.logseries")(p[, size]) | Draw samples from a logarithmic series distribution.
[`multinomial`](generated/numpy.random.randomstate.multinomial#numpy.random.RandomState.multinomial "numpy.random.RandomState.multinomial")(n, pvals[, size]) | Draw samples from a multinomial distribution.
[`multivariate_normal`](generated/numpy.random.randomstate.multivariate_normal#numpy.random.RandomState.multivariate_normal "numpy.random.RandomState.multivariate_normal")(mean, cov[, size, ...]) | Draw random samples from a multivariate normal distribution.
[`negative_binomial`](generated/numpy.random.randomstate.negative_binomial#numpy.random.RandomState.negative_binomial "numpy.random.RandomState.negative_binomial")(n, p[, size]) | Draw samples from a negative binomial distribution.
[`noncentral_chisquare`](generated/numpy.random.randomstate.noncentral_chisquare#numpy.random.RandomState.noncentral_chisquare "numpy.random.RandomState.noncentral_chisquare")(df, nonc[, size]) | Draw samples from a noncentral chi-square distribution.
[`noncentral_f`](generated/numpy.random.randomstate.noncentral_f#numpy.random.RandomState.noncentral_f "numpy.random.RandomState.noncentral_f")(dfnum, dfden, nonc[, size]) | Draw samples from the noncentral F distribution.
[`normal`](generated/numpy.random.randomstate.normal#numpy.random.RandomState.normal "numpy.random.RandomState.normal")([loc, scale, size]) | Draw random samples from a normal (Gaussian) distribution.
[`pareto`](generated/numpy.random.randomstate.pareto#numpy.random.RandomState.pareto "numpy.random.RandomState.pareto")(a[, size]) | Draw samples from a Pareto II or Lomax distribution with specified shape.
[`poisson`](generated/numpy.random.randomstate.poisson#numpy.random.RandomState.poisson "numpy.random.RandomState.poisson")([lam, size]) | Draw samples from a Poisson distribution.
[`power`](generated/numpy.random.randomstate.power#numpy.random.RandomState.power "numpy.random.RandomState.power")(a[, size]) | Draws samples in [0, 1] from a power distribution with positive exponent a - 1.
[`rayleigh`](generated/numpy.random.randomstate.rayleigh#numpy.random.RandomState.rayleigh "numpy.random.RandomState.rayleigh")([scale, size]) | Draw samples from a Rayleigh distribution.
[`standard_cauchy`](generated/numpy.random.randomstate.standard_cauchy#numpy.random.RandomState.standard_cauchy "numpy.random.RandomState.standard_cauchy")([size]) | Draw samples from a standard Cauchy distribution with mode = 0.
[`standard_exponential`](generated/numpy.random.randomstate.standard_exponential#numpy.random.RandomState.standard_exponential "numpy.random.RandomState.standard_exponential")([size]) | Draw samples from the standard exponential distribution.
[`standard_gamma`](generated/numpy.random.randomstate.standard_gamma#numpy.random.RandomState.standard_gamma "numpy.random.RandomState.standard_gamma")(shape[, size]) | Draw samples from a standard Gamma distribution.
[`standard_normal`](generated/numpy.random.randomstate.standard_normal#numpy.random.RandomState.standard_normal "numpy.random.RandomState.standard_normal")([size]) | Draw samples from a standard Normal distribution (mean=0, stdev=1).
[`standard_t`](generated/numpy.random.randomstate.standard_t#numpy.random.RandomState.standard_t "numpy.random.RandomState.standard_t")(df[, size]) | Draw samples from a standard Student's t distribution with `df` degrees of freedom.
[`triangular`](generated/numpy.random.randomstate.triangular#numpy.random.RandomState.triangular "numpy.random.RandomState.triangular")(left, mode, right[, size]) | Draw samples from the triangular distribution over the interval `[left, right]`.
[`uniform`](generated/numpy.random.randomstate.uniform#numpy.random.RandomState.uniform "numpy.random.RandomState.uniform")([low, high, size]) | Draw samples from a uniform distribution.
[`vonmises`](generated/numpy.random.randomstate.vonmises#numpy.random.RandomState.vonmises "numpy.random.RandomState.vonmises")(mu, kappa[, size]) | Draw samples from a von Mises distribution.
[`wald`](generated/numpy.random.randomstate.wald#numpy.random.RandomState.wald "numpy.random.RandomState.wald")(mean, scale[, size]) | Draw samples from a Wald, or inverse Gaussian, distribution.
[`weibull`](generated/numpy.random.randomstate.weibull#numpy.random.RandomState.weibull "numpy.random.RandomState.weibull")(a[, size]) | Draw samples from a Weibull distribution.
[`zipf`](generated/numpy.random.randomstate.zipf#numpy.random.RandomState.zipf "numpy.random.RandomState.zipf")(a[, size]) | Draw samples from a Zipf distribution.

## Functions in numpy.random

Many of the RandomState methods above are exported as functions in [`numpy.random`](index#module-numpy.random "numpy.random"). This usage is discouraged, as it is implemented via a global `RandomState` instance, which is not advised on two counts:

* It uses global state, which means results will change as the code changes.
* It uses a `RandomState` rather than the more modern [`Generator`](generator#numpy.random.Generator "numpy.random.Generator").

For backward-compatibility reasons, we will not change this.

[`beta`](generated/numpy.random.beta#numpy.random.beta "numpy.random.beta")(a, b[, size]) | Draw samples from a Beta distribution.
---|---
[`binomial`](generated/numpy.random.binomial#numpy.random.binomial "numpy.random.binomial")(n, p[, size]) | Draw samples from a binomial distribution.
[`bytes`](generated/numpy.random.bytes#numpy.random.bytes "numpy.random.bytes")(length) | Return random bytes.
[`chisquare`](generated/numpy.random.chisquare#numpy.random.chisquare "numpy.random.chisquare")(df[, size]) | Draw samples from a chi-square distribution.
[`choice`](generated/numpy.random.choice#numpy.random.choice "numpy.random.choice")(a[, size, replace, p]) | Generates a random sample from a given 1-D array.
[`dirichlet`](generated/numpy.random.dirichlet#numpy.random.dirichlet "numpy.random.dirichlet")(alpha[, size]) | Draw samples from the Dirichlet distribution.
[`exponential`](generated/numpy.random.exponential#numpy.random.exponential "numpy.random.exponential")([scale, size]) | Draw samples from an exponential distribution.
[`f`](generated/numpy.random.f#numpy.random.f "numpy.random.f")(dfnum, dfden[, size]) | Draw samples from an F distribution.
[`gamma`](generated/numpy.random.gamma#numpy.random.gamma "numpy.random.gamma")(shape[, scale, size]) | Draw samples from a Gamma distribution.
[`geometric`](generated/numpy.random.geometric#numpy.random.geometric "numpy.random.geometric")(p[, size]) | Draw samples from the geometric distribution.
[`get_state`](generated/numpy.random.get_state#numpy.random.get_state "numpy.random.get_state")([legacy]) | Return a tuple representing the internal state of the generator.
[`gumbel`](generated/numpy.random.gumbel#numpy.random.gumbel "numpy.random.gumbel")([loc, scale, size]) | Draw samples from a Gumbel distribution.
[`hypergeometric`](generated/numpy.random.hypergeometric#numpy.random.hypergeometric "numpy.random.hypergeometric")(ngood, nbad, nsample[, size]) | Draw samples from a Hypergeometric distribution.
[`laplace`](generated/numpy.random.laplace#numpy.random.laplace "numpy.random.laplace")([loc, scale, size]) | Draw samples from the Laplace or double exponential distribution with specified location (or mean) and scale (decay).
[`logistic`](generated/numpy.random.logistic#numpy.random.logistic "numpy.random.logistic")([loc, scale, size]) | Draw samples from a logistic distribution.
[`lognormal`](generated/numpy.random.lognormal#numpy.random.lognormal "numpy.random.lognormal")([mean, sigma, size]) | Draw samples from a log-normal distribution.
[`logseries`](generated/numpy.random.logseries#numpy.random.logseries "numpy.random.logseries")(p[, size]) | Draw samples from a logarithmic series distribution.
[`multinomial`](generated/numpy.random.multinomial#numpy.random.multinomial "numpy.random.multinomial")(n, pvals[, size]) | Draw samples from a multinomial distribution.
[`multivariate_normal`](generated/numpy.random.multivariate_normal#numpy.random.multivariate_normal "numpy.random.multivariate_normal")(mean, cov[, size, ...]) | Draw random samples from a multivariate normal distribution.
[`negative_binomial`](generated/numpy.random.negative_binomial#numpy.random.negative_binomial "numpy.random.negative_binomial")(n, p[, size]) | Draw samples from a negative binomial distribution.
[`noncentral_chisquare`](generated/numpy.random.noncentral_chisquare#numpy.random.noncentral_chisquare "numpy.random.noncentral_chisquare")(df, nonc[, size]) | Draw samples from a noncentral chi-square distribution.
[`noncentral_f`](generated/numpy.random.noncentral_f#numpy.random.noncentral_f "numpy.random.noncentral_f")(dfnum, dfden, nonc[, size]) | Draw samples from the noncentral F distribution.
[`normal`](generated/numpy.random.normal#numpy.random.normal "numpy.random.normal")([loc, scale, size]) | Draw random samples from a normal (Gaussian) distribution.
[`pareto`](generated/numpy.random.pareto#numpy.random.pareto "numpy.random.pareto")(a[, size]) | Draw samples from a Pareto II or Lomax distribution with specified shape.
[`permutation`](generated/numpy.random.permutation#numpy.random.permutation "numpy.random.permutation")(x) | Randomly permute a sequence, or return a permuted range.
[`poisson`](generated/numpy.random.poisson#numpy.random.poisson "numpy.random.poisson")([lam, size]) | Draw samples from a Poisson distribution.
[`power`](generated/numpy.random.power#numpy.random.power "numpy.random.power")(a[, size]) | Draws samples in [0, 1] from a power distribution with positive exponent a - 1.
[`rand`](generated/numpy.random.rand#numpy.random.rand "numpy.random.rand")(d0, d1, ..., dn) | Random values in a given shape.
[`randint`](generated/numpy.random.randint#numpy.random.randint "numpy.random.randint")(low[, high, size, dtype]) | Return random integers from `low` (inclusive) to `high` (exclusive).
[`randn`](generated/numpy.random.randn#numpy.random.randn "numpy.random.randn")(d0, d1, ..., dn) | Return a sample (or samples) from the "standard normal" distribution.
[`random`](generated/numpy.random.random#numpy.random.random "numpy.random.random")([size]) | Return random floats in the half-open interval [0.0, 1.0).
[`random_integers`](generated/numpy.random.random_integers#numpy.random.random_integers "numpy.random.random_integers")(low[, high, size]) | Random integers of type [`numpy.int_`](../arrays.scalars#numpy.int_ "numpy.int_") between `low` and `high`, inclusive.
[`random_sample`](generated/numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample")([size]) | Return random floats in the half-open interval [0.0, 1.0).
[`ranf`](generated/numpy.random.ranf#numpy.random.ranf "numpy.random.ranf")(*args, **kwargs) | This is an alias of [`random_sample`](generated/numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample").
[`rayleigh`](generated/numpy.random.rayleigh#numpy.random.rayleigh "numpy.random.rayleigh")([scale, size]) | Draw samples from a Rayleigh distribution.
[`sample`](generated/numpy.random.sample#numpy.random.sample "numpy.random.sample")(*args, **kwargs) | This is an alias of [`random_sample`](generated/numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample").
[`seed`](generated/numpy.random.seed#numpy.random.seed "numpy.random.seed")([seed]) | Reseed the singleton RandomState instance.
[`set_state`](generated/numpy.random.set_state#numpy.random.set_state "numpy.random.set_state")(state) | Set the internal state of the generator from a tuple.
[`shuffle`](generated/numpy.random.shuffle#numpy.random.shuffle "numpy.random.shuffle")(x) | Modify a sequence in-place by shuffling its contents.
[`standard_cauchy`](generated/numpy.random.standard_cauchy#numpy.random.standard_cauchy "numpy.random.standard_cauchy")([size]) | Draw samples from a standard Cauchy distribution with mode = 0.
[`standard_exponential`](generated/numpy.random.standard_exponential#numpy.random.standard_exponential "numpy.random.standard_exponential")([size]) | Draw samples from the standard exponential distribution.
[`standard_gamma`](generated/numpy.random.standard_gamma#numpy.random.standard_gamma "numpy.random.standard_gamma")(shape[, size]) | Draw samples from a standard Gamma distribution.
[`standard_normal`](generated/numpy.random.standard_normal#numpy.random.standard_normal "numpy.random.standard_normal")([size]) | Draw samples from a standard Normal distribution (mean=0, stdev=1).
[`standard_t`](generated/numpy.random.standard_t#numpy.random.standard_t "numpy.random.standard_t")(df[, size]) | Draw samples from a standard Student's t distribution with `df` degrees of freedom.
[`triangular`](generated/numpy.random.triangular#numpy.random.triangular "numpy.random.triangular")(left, mode, right[, size]) | Draw samples from the triangular distribution over the interval `[left, right]`.
[`uniform`](generated/numpy.random.uniform#numpy.random.uniform "numpy.random.uniform")([low, high, size]) | Draw samples from a uniform distribution.
[`vonmises`](generated/numpy.random.vonmises#numpy.random.vonmises "numpy.random.vonmises")(mu, kappa[, size]) | Draw samples from a von Mises distribution.
[`wald`](generated/numpy.random.wald#numpy.random.wald "numpy.random.wald")(mean, scale[, size]) | Draw samples from a Wald, or inverse Gaussian, distribution.
[`weibull`](generated/numpy.random.weibull#numpy.random.weibull "numpy.random.weibull")(a[, size]) | Draw samples from a Weibull distribution.
[`zipf`](generated/numpy.random.zipf#numpy.random.zipf "numpy.random.zipf")(a[, size]) | Draw samples from a Zipf distribution.
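As a short, hedged illustration of the two points above (the seed value `42` is arbitrary), the module-level functions draw from a hidden, shared `RandomState`, while a `Generator` created with `default_rng` carries its own local state:

```python
import numpy as np

# Discouraged: module-level functions draw from a hidden, shared RandomState.
np.random.seed(42)
legacy = np.random.random(3)

# Preferred: an explicit Generator with its own, local state.
rng = np.random.default_rng(42)
modern = rng.random(3)

# The two draw from different underlying algorithms, so the values
# differ even though the same seed value was used.
```

Because the legacy stream is global, any library call that also touches `np.random` can silently change the `legacy` results; the `rng` results are immune to that.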
# Multithreaded generation

The four core distributions ([`random`](generated/numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random"), [`standard_normal`](generated/numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal"), [`standard_exponential`](generated/numpy.random.generator.standard_exponential#numpy.random.Generator.standard_exponential "numpy.random.Generator.standard_exponential"), and [`standard_gamma`](generated/numpy.random.generator.standard_gamma#numpy.random.Generator.standard_gamma "numpy.random.Generator.standard_gamma")) all allow existing arrays to be filled using the `out` keyword argument. Existing arrays need to be contiguous and well-behaved (writable and aligned). Under normal circumstances, arrays created using the common constructors such as [`numpy.empty`](../generated/numpy.empty#numpy.empty "numpy.empty") will satisfy these requirements.

This example makes use of Python 3 [`concurrent.futures`](https://docs.python.org/3/library/concurrent.futures.html#module-concurrent.futures "\(in Python v3.13\)") to fill an array using multiple threads. Threads are long-lived, so repeated calls do not incur additional thread-creation overhead. The random numbers generated are reproducible in the sense that the same seed will produce the same outputs, provided that the number of threads does not change.
```python
from numpy.random import default_rng, SeedSequence
import multiprocessing
import concurrent.futures
import numpy as np

class MultithreadedRNG:
    def __init__(self, n, seed=None, threads=None):
        if threads is None:
            threads = multiprocessing.cpu_count()
        self.threads = threads

        seq = SeedSequence(seed)
        self._random_generators = [default_rng(s)
                                   for s in seq.spawn(threads)]

        self.n = n
        self.executor = concurrent.futures.ThreadPoolExecutor(threads)
        self.values = np.empty(n)
        self.step = np.ceil(n / threads).astype(np.int_)

    def fill(self):
        def _fill(random_state, out, first, last):
            random_state.standard_normal(out=out[first:last])

        futures = {}
        for i in range(self.threads):
            args = (_fill,
                    self._random_generators[i],
                    self.values,
                    i * self.step,
                    (i + 1) * self.step)
            futures[self.executor.submit(*args)] = i
        concurrent.futures.wait(futures)

    def __del__(self):
        self.executor.shutdown(False)
```

The multithreaded random number generator can be used to fill an array. The `values` attribute shows the zero value before the fill and the random value after.

```python
In [2]: mrng = MultithreadedRNG(10000000, seed=12345)
   ...: print(mrng.values[-1])
0.0

In [3]: mrng.fill()
   ...: print(mrng.values[-1])
2.4545724517479104
```

The time required to produce an array using multiple threads can be compared to the time required to generate it using a single thread.

```python
In [4]: print(mrng.threads)
   ...: %timeit mrng.fill()
4
32.8 ms ± 2.71 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
```

The single-threaded call directly uses the BitGenerator.

```python
In [5]: values = np.empty(10000000)
   ...: rg = default_rng()
   ...: %timeit rg.standard_normal(out=values)
99.6 ms ± 222 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
```

The gains are substantial and the scaling is reasonable even for arrays that are only moderately large. The gains are even larger when compared to a call that does not use an existing array, due to array-creation overhead.
```python
In [6]: rg = default_rng()
   ...: %timeit rg.standard_normal(10000000)
125 ms ± 309 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
```

Note that if `threads` is not set by the user, it will be determined by `multiprocessing.cpu_count()`.

```python
In [7]: # simulate the behavior for `threads=None`, if the machine had only one thread
   ...: mrng = MultithreadedRNG(10000000, seed=12345, threads=1)
   ...: print(mrng.values[-1])
1.1800150052158556
```

# What’s new or different

NumPy 1.17.0 introduced [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") as an improved replacement for the [legacy](legacy#legacy) [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState"). Here is a quick comparison of the two implementations.

Feature | Older Equivalent | Notes
---|---|---
[`Generator`](generator#numpy.random.Generator "numpy.random.Generator") | [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") | [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") requires a stream source, called a [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"); a number of these are provided. [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") uses the Mersenne Twister [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") by default, but can also be instantiated with any BitGenerator.
[`random`](generated/numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random") | [`random_sample`](generated/numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample"), [`rand`](generated/numpy.random.rand#numpy.random.rand "numpy.random.rand") | Access the values in a BitGenerator and convert them to `float64` in the interval `[0.0, 1.0)`. In addition to the `size` kwarg, now supports `dtype='d'` or `dtype='f'`, and an `out` kwarg to fill a user-supplied array. Many other distributions are also supported.
[`integers`](generated/numpy.random.generator.integers#numpy.random.Generator.integers "numpy.random.Generator.integers") | [`randint`](generated/numpy.random.randint#numpy.random.randint "numpy.random.randint"), [`random_integers`](generated/numpy.random.random_integers#numpy.random.random_integers "numpy.random.random_integers") | Use the `endpoint` kwarg to adjust the inclusion or exclusion of the `high` interval endpoint.

* The normal, exponential and gamma generators use 256-step Ziggurat methods which are 2-10 times faster than NumPy’s default implementation in [`standard_normal`](generated/numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal"), [`standard_exponential`](generated/numpy.random.generator.standard_exponential#numpy.random.Generator.standard_exponential "numpy.random.Generator.standard_exponential") or [`standard_gamma`](generated/numpy.random.generator.standard_gamma#numpy.random.Generator.standard_gamma "numpy.random.Generator.standard_gamma"). Because of the change in algorithms, it is not possible to reproduce the exact random values using [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") for these distributions or any distribution method that relies on them.

```python
In [1]: import numpy as np
   ...: import numpy.random

In [2]: rng = np.random.default_rng()

In [3]: %timeit -n 1 rng.standard_normal(100000)
   ...: %timeit -n 1 numpy.random.standard_normal(100000)
936 us +- 4.33 us per loop (mean +- std. dev. of 7 runs, 1 loop each)
1.72 ms +- 11.5 us per loop (mean +- std. dev. of 7 runs, 1 loop each)

In [4]: %timeit -n 1 rng.standard_exponential(100000)
   ...: %timeit -n 1 numpy.random.standard_exponential(100000)
464 us +- 3.84 us per loop (mean +- std. dev. of 7 runs, 1 loop each)
1.23 ms +- 7.66 us per loop (mean +- std. dev. of 7 runs, 1 loop each)

In [5]: %timeit -n 1 rng.standard_gamma(3.0, 100000)
   ...: %timeit -n 1 numpy.random.standard_gamma(3.0, 100000)
1.75 ms +- 9.05 us per loop (mean +- std. dev. of 7 runs, 1 loop each)
3.48 ms +- 4.1 us per loop (mean +- std. dev. of 7 runs, 1 loop each)
```

* [`integers`](generated/numpy.random.generator.integers#numpy.random.Generator.integers "numpy.random.Generator.integers") is now the canonical way to generate integer random numbers from a discrete uniform distribution. This replaces both [`randint`](generated/numpy.random.randint#numpy.random.randint "numpy.random.randint") and the deprecated [`random_integers`](generated/numpy.random.random_integers#numpy.random.random_integers "numpy.random.random_integers").
* The [`rand`](generated/numpy.random.rand#numpy.random.rand "numpy.random.rand") and [`randn`](generated/numpy.random.randn#numpy.random.randn "numpy.random.randn") methods are only available through the legacy [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState").
* [`Generator.random`](generated/numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random") is now the canonical way to generate floating-point random numbers, which replaces [`RandomState.random_sample`](generated/numpy.random.randomstate.random_sample#numpy.random.RandomState.random_sample "numpy.random.RandomState.random_sample"), [`sample`](generated/numpy.random.sample#numpy.random.sample "numpy.random.sample"), and [`ranf`](generated/numpy.random.ranf#numpy.random.ranf "numpy.random.ranf"), all of which were aliases. This is consistent with Python’s [`random.random`](https://docs.python.org/3/library/random.html#random.random "\(in Python v3.13\)").
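As a small, hedged sketch of the `endpoint` keyword mentioned above (the seed value is arbitrary), `integers` draws with an exclusive upper bound by default, like `randint`, and an inclusive one when `endpoint=True`:

```python
import numpy as np

rng = np.random.default_rng(12345)

# Default: high is exclusive, so this never draws a 6.
exclusive = rng.integers(1, 6, size=1000)

# endpoint=True makes high inclusive: simulated die rolls in 1..6.
inclusive = rng.integers(1, 6, size=1000, endpoint=True)
```

With `endpoint=True` there is no need for the off-by-one `high + 1` idiom that `randint` required.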
* All bit generators can produce doubles, uint64s and uint32s via CTypes ([`ctypes`](bit_generators/generated/numpy.random.pcg64.ctypes#numpy.random.PCG64.ctypes "numpy.random.PCG64.ctypes")) and CFFI ([`cffi`](bit_generators/generated/numpy.random.pcg64.cffi#numpy.random.PCG64.cffi "numpy.random.PCG64.cffi")). This allows these bit generators to be used in Numba.
* The bit generators can be used in downstream projects via Cython.
* All bit generators use [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to [convert seed integers to initialized states](bit_generators/index#seeding-and-entropy).
* Optional `dtype` argument that accepts `np.float32` or `np.float64` to produce either single or double precision uniform random variables for select distributions. [`integers`](generated/numpy.random.generator.integers#numpy.random.Generator.integers "numpy.random.Generator.integers") accepts a `dtype` argument with any signed or unsigned integer dtype:
  * Uniforms ([`random`](generated/numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random") and [`integers`](generated/numpy.random.generator.integers#numpy.random.Generator.integers "numpy.random.Generator.integers"))
  * Normals ([`standard_normal`](generated/numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal"))
  * Standard Gammas ([`standard_gamma`](generated/numpy.random.generator.standard_gamma#numpy.random.Generator.standard_gamma "numpy.random.Generator.standard_gamma"))
  * Standard Exponentials ([`standard_exponential`](generated/numpy.random.generator.standard_exponential#numpy.random.Generator.standard_exponential "numpy.random.Generator.standard_exponential"))

```python
In [6]: rng = np.random.default_rng()

In [7]: rng.random(3, dtype=np.float64)
Out[7]: array([0.09583911, 0.93160588, 0.71947891])

In [8]: rng.random(3, dtype=np.float32)
Out[8]: array([0.50844425, 0.20221537, 0.7923881 ], dtype=float32)

In [9]: rng.integers(0, 256, size=3, dtype=np.uint8)
Out[9]: array([  4, 201, 126], dtype=uint8)
```

* Optional `out` argument that allows existing arrays to be filled for select distributions:
  * Uniforms ([`random`](generated/numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random"))
  * Normals ([`standard_normal`](generated/numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal"))
  * Standard Gammas ([`standard_gamma`](generated/numpy.random.generator.standard_gamma#numpy.random.Generator.standard_gamma "numpy.random.Generator.standard_gamma"))
  * Standard Exponentials ([`standard_exponential`](generated/numpy.random.generator.standard_exponential#numpy.random.Generator.standard_exponential "numpy.random.Generator.standard_exponential"))

This allows multithreading to fill large arrays in chunks using suitable BitGenerators in parallel.
```python
In [10]: rng = np.random.default_rng()

In [11]: existing = np.zeros(4)

In [12]: rng.random(out=existing[:2])
Out[12]: array([0.42493599, 0.03707944])

In [13]: print(existing)
[0.42493599 0.03707944 0.         0.        ]
```

* Optional `axis` argument for methods like [`choice`](generated/numpy.random.generator.choice#numpy.random.Generator.choice "numpy.random.Generator.choice"), [`permutation`](generated/numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation") and [`shuffle`](generated/numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle") that controls which axis an operation is performed over for multi-dimensional arrays.

```python
In [14]: rng = np.random.default_rng()

In [15]: a = np.arange(12).reshape((3, 4))

In [16]: a
Out[16]:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])

In [17]: rng.choice(a, axis=1, size=5)
Out[17]:
array([[ 1,  2,  2,  2,  0],
       [ 5,  6,  6,  6,  4],
       [ 9, 10, 10, 10,  8]])

In [18]: rng.shuffle(a, axis=1)  # Shuffle in-place

In [19]: a
Out[19]:
array([[ 1,  3,  2,  0],
       [ 5,  7,  6,  4],
       [ 9, 11, 10,  8]])
```

* Added a method to sample from the complex normal distribution (`complex_normal`).

# Parallel random number generation

There are four main strategies implemented that can be used to produce repeatable pseudo-random numbers across multiple processes (local or distributed).

## SeedSequence spawning

NumPy allows you to spawn new (with very high probability) independent [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") and [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") instances via their `spawn()` method. This spawning is implemented by the [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") used for initializing the bit generator's random stream.
[`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") [implements an algorithm](https://www.pcg-random.org/posts/developing-a-seed_seq-alternative.html) to process a user-provided seed, typically as an integer of some size, and to convert it into an initial state for a [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"). It uses hashing techniques to ensure that low-quality seeds are turned into high-quality initial states (at least, with very high probability).

For example, [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") has a state consisting of 624 `uint32` integers. A naive way to take a 32-bit integer seed would be to just set the last element of the state to the 32-bit seed and leave the rest 0s. This is a valid state for [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937"), but not a good one. The Mersenne Twister algorithm [suffers if there are too many 0s](http://www.math.sci.hiroshima-u.ac.jp/m-mat/MT/MT2002/emt19937ar.html). Similarly, two adjacent 32-bit integer seeds (i.e. `12345` and `12346`) would produce very similar streams.

[`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") avoids these problems by using successions of integer hashes with good [avalanche properties](https://en.wikipedia.org/wiki/Avalanche_effect) to ensure that flipping any bit in the input has about a 50% chance of flipping any bit in the output. Two input seeds that are very close to each other will produce initial states that are very far from each other (with very high probability). It is also constructed in such a way that you can provide arbitrary-sized integers or lists of integers.
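The avalanche behaviour described above can be observed directly. This is a small sketch using `SeedSequence.generate_state` (the seed values and word count are arbitrary):

```python
import numpy as np
from numpy.random import SeedSequence

# Adjacent integer seeds hash to unrelated initial-state words.
a = SeedSequence(12345).generate_state(4)  # four uint32 state words
b = SeedSequence(12346).generate_state(4)

# Count how many of the 128 state bits differ; good avalanche
# behaviour means roughly half of them should flip.
diff_bits = sum(bin(int(x) ^ int(y)).count("1") for x, y in zip(a, b))
```

Despite the inputs differing in a single low bit, the two states differ in a large fraction of their bits.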
[`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") will take all of the bits that you provide and mix them together to produce however many bits the consuming [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") needs to initialize itself.

These properties together mean that we can safely mix together the usual user-provided seed with simple incrementing counters to get [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") states that are (to very high probability) independent of each other. We can wrap this together into an API that is easy to use and difficult to misuse.

Note that while [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") attempts to solve many of the issues related to user-provided small seeds, we still [recommend](index#recommend-secrets-randbits) using [`secrets.randbits`](https://docs.python.org/3/library/secrets.html#secrets.randbits "\(in Python v3.13\)") to generate seeds with 128 bits of entropy to avoid the remaining biases introduced by human-chosen seeds.

```python
from numpy.random import SeedSequence, default_rng

ss = SeedSequence(12345)

# Spawn off 10 child SeedSequences to pass to child processes.
child_seeds = ss.spawn(10)
streams = [default_rng(s) for s in child_seeds]
```

For convenience, direct use of [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") is not necessary. The above `streams` can be spawned directly from a parent generator via [`spawn`](generated/numpy.random.generator.spawn#numpy.random.Generator.spawn "numpy.random.Generator.spawn"):

```python
parent_rng = default_rng(12345)
streams = parent_rng.spawn(10)
```

Child objects can also spawn to make grandchildren, and so on.
Each child has a [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") with its position in the tree of spawned child objects mixed in with the user-provided seed to generate independent (with very high probability) streams.

    grandchildren = streams[0].spawn(4)

This feature lets you make local decisions about when and how to split up streams without coordination between processes. You do not have to preallocate space to avoid overlapping or request streams from a common global service. This general “tree-hashing” scheme is [not unique to numpy](https://www.iro.umontreal.ca/~lecuyer/myftp/papers/parallel-rng-imacs.pdf) but not yet widespread. Python has increasingly-flexible mechanisms for parallelization available, and this scheme fits in very well with that kind of use. Using this scheme, an upper bound on the probability of a collision can be estimated if one knows the number of streams that you derive. [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") hashes its inputs, both the seed and the spawn-tree-path, down to a 128-bit pool by default. The probability that there is a collision in that pool, pessimistically-estimated ([1]), will be about \\(n^2*2^{-128}\\) where `n` is the number of streams spawned. If a program uses an aggressive million streams, about \\(2^{20}\\), then the probability that at least one pair of them are identical is about \\(2^{-88}\\), which is in solidly-ignorable territory ([2]).

[1] The algorithm is carefully designed to eliminate a number of possible ways to collide. For example, if one only does one level of spawning, it is guaranteed that all states will be unique. But it’s easier to estimate the naive upper bound on a napkin and take comfort knowing that the probability is actually lower.

[2] In this calculation, we can mostly ignore the amount of numbers drawn from each stream.
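The back-of-the-napkin bound above is simple arithmetic; a sketch of the same calculation:

```python
from fractions import Fraction

n = 2**20  # about a million spawned streams
# Pessimistic birthday bound from the text: n^2 * 2**-128.
p_collision = Fraction(n * n, 2**128)
print(float(p_collision))  # about 2**-88, i.e. ~3e-27
```
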
See [Upgrading PCG64 with PCG64DXSM](upgrading-pcg64#upgrading-pcg64) for the technical details about [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64"). The other PRNGs we provide have some extra protection built in that avoids overlaps if the [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") pools differ in the slightest bit. [`PCG64DXSM`](bit_generators/pcg64dxsm#numpy.random.PCG64DXSM "numpy.random.PCG64DXSM") has \\(2^{127}\\) separate cycles determined by the seed in addition to the position in the \\(2^{128}\\) long period for each cycle, so one has to both get on or near the same cycle _and_ seed a nearby position in the cycle. [`Philox`](bit_generators/philox#numpy.random.Philox "numpy.random.Philox") has completely independent cycles determined by the seed. [`SFC64`](bit_generators/sfc64#numpy.random.SFC64 "numpy.random.SFC64") incorporates a 64-bit counter so every unique seed is at least \\(2^{64}\\) iterations away from any other seed. And finally, [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") has just an unimaginably huge period. Getting a collision internal to [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") is the way a failure would be observed.

## Sequence of integer seeds

As discussed in the previous section, [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") can not only take an integer seed, it can also take an arbitrary-length sequence of (non-negative) integers. If one exercises a little care, one can use this feature to design _ad hoc_ schemes for getting safe parallel PRNG streams with similar safety guarantees as spawning.
For example, one common use case is that a worker process is passed one root seed integer for the whole calculation and also an integer worker ID (or something more granular like a job ID, batch ID, or something similar). If these IDs are created deterministically and uniquely, then one can derive reproducible parallel PRNG streams by combining the ID and the root seed integer in a list.

    # default_rng() and each of the BitGenerators use SeedSequence underneath, so
    # they all accept sequences of integers as seeds the same way.
    from numpy.random import default_rng

    def worker(root_seed, worker_id):
        rng = default_rng([worker_id, root_seed])
        # Do work ...

    root_seed = 0x8c3c010cb4754c905776bdac5ee7501
    results = [worker(root_seed, worker_id) for worker_id in range(10)]

This can be used to replace a number of unsafe strategies that have been used in the past which try to combine the root seed and the ID back into a single integer seed value. For example, it is common to see users add the worker ID to the root seed, especially with the legacy [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") code.

    # UNSAFE! Do not do this!
    worker_seed = root_seed + worker_id
    rng = np.random.RandomState(worker_seed)

It is true that for any one run of a parallel program constructed this way, each worker will have distinct streams. However, it is quite likely that multiple invocations of the program with different seeds will get overlapping sets of worker seeds. It is not uncommon (in the author’s own experience) to change the root seed merely by an increment or two when doing these repeat runs. If the worker seeds are also derived by small increments of the worker ID, then subsets of the workers will return identical results, causing a bias in the overall ensemble of results. Combining the worker ID and the root seed as a list of integers eliminates this risk; even these somewhat lazy seeding practices remain fairly safe.
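The overlap described above can be made concrete with plain set arithmetic. A sketch (the root seed values are arbitrary): two repeat runs whose root seeds differ by one share almost all of their naively-derived worker seeds, while the list form keeps every pair distinct.

```python
root_a, root_b = 1000, 1001  # two repeat runs, root seed bumped by one

# UNSAFE addition scheme: the runs' worker-seed sets overlap heavily.
seeds_a = {root_a + wid for wid in range(10)}
seeds_b = {root_b + wid for wid in range(10)}
print(len(seeds_a & seeds_b))  # 9 of the 10 worker seeds are reused

# List scheme: (worker_id, root_seed) pairs never collide across runs.
lists_a = {(wid, root_a) for wid in range(10)}
lists_b = {(wid, root_b) for wid in range(10)}
print(len(lists_a & lists_b))  # 0
```
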
This scheme does require that the extra IDs be unique and deterministically created. This may require coordination between the worker processes. It is recommended to place the varying IDs _before_ the unvarying root seed. [`spawn`](bit_generators/generated/numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn") _appends_ integers after the user-provided seed, so if you might be mixing both this _ad hoc_ mechanism and spawning, or passing your objects down to library code that might be spawning, then it is a little bit safer to prepend your worker IDs rather than append them to avoid a collision.

    # Good.
    worker_seed = [worker_id, root_seed]

    # Less good. It will *work*, but it's less flexible.
    worker_seed = [root_seed, worker_id]

With those caveats in mind, the safety guarantees against collision are about the same as with spawning, discussed in the previous section. The algorithmic mechanisms are the same.

## Independent streams

[`Philox`](bit_generators/philox#numpy.random.Philox "numpy.random.Philox") is a counter-based RNG which generates values by encrypting an incrementing counter using weak cryptographic primitives. The seed determines the key that is used for the encryption. Unique keys create unique, independent streams. [`Philox`](bit_generators/philox#numpy.random.Philox "numpy.random.Philox") lets you bypass the seeding algorithm to directly set the 128-bit key. Similar, but different, keys will still create independent streams.

    import secrets
    from numpy.random import Philox

    # 128-bit number as a seed
    root_seed = secrets.getrandbits(128)
    streams = [Philox(key=root_seed + stream_id) for stream_id in range(10)]

This scheme does require that you avoid reusing stream IDs. This may require coordination between the parallel processes.
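As a quick sanity check of the key-based scheme (the fixed key below is a hypothetical stand-in for `secrets.getrandbits(128)`, used only to keep the example reproducible), adjacent keys produce streams with no visible relationship:

```python
import numpy as np
from numpy.random import Generator, Philox

root_key = 0x1D2B4F6A  # hypothetical fixed key for illustration only
stream_0 = Generator(Philox(key=root_key + 0))
stream_1 = Generator(Philox(key=root_key + 1))

draws_0 = stream_0.random(4)
draws_1 = stream_1.random(4)
print(np.array_equal(draws_0, draws_1))  # False: the streams are independent
```
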
## Jumping the BitGenerator state

`jumped` advances the state of the BitGenerator _as-if_ a large number of random numbers have been drawn, and returns a new instance with this state. The specific number of draws varies by BitGenerator, and ranges from \\(2^{64}\\) to \\(2^{128}\\). Additionally, the _as-if_ draws also depend on the size of the default random number produced by the specific BitGenerator. The BitGenerators that support `jumped`, along with the period of the BitGenerator, the size of the jump and the bits in the default unsigned random are listed below.

BitGenerator | Period | Jump Size | Bits per Draw
---|---|---|---
[`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") | \\(2^{19937}-1\\) | \\(2^{128}\\) | 32
[`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") | \\(2^{128}\\) | \\(\sim2^{127}\\) ([3]) | 64
[`PCG64DXSM`](bit_generators/pcg64dxsm#numpy.random.PCG64DXSM "numpy.random.PCG64DXSM") | \\(2^{128}\\) | \\(\sim2^{127}\\) ([3]) | 64
[`Philox`](bit_generators/philox#numpy.random.Philox "numpy.random.Philox") | \\(2^{256}\\) | \\(2^{128}\\) | 64

[3] (1,2) The jump size is \\((\phi-1)*2^{128}\\) where \\(\phi\\) is the golden ratio. As the jumps wrap around the period, the actual distances between neighboring streams will slowly grow smaller than the jump size, but using the golden ratio this way is a classic method of constructing a low-discrepancy sequence that spreads out the states around the period optimally. You will not be able to jump enough to make those distances small enough to overlap in your lifetime.

`jumped` can be used to produce long blocks which should be long enough to not overlap.

    import secrets
    from numpy.random import PCG64

    seed = secrets.getrandbits(128)
    blocked_rng = []
    rng = PCG64(seed)
    for i in range(10):
        blocked_rng.append(rng.jumped(i))

When using `jumped`, one does have to take care not to jump to a stream that was already used.
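A small sketch of the blocking idea (the seed is fixed here only so the example is reproducible): two different jump counts give blocks whose outputs are unrelated.

```python
import numpy as np
from numpy.random import Generator, PCG64

rng = PCG64(11111)  # arbitrary fixed seed; prefer secrets.getrandbits(128) in practice
block_1 = Generator(rng.jumped(1))  # starts ~2**127 draws into the sequence
block_2 = Generator(rng.jumped(2))  # starts another ~2**127 draws further on

draws_1 = block_1.random(4)
draws_2 = block_2.random(4)
print(np.array_equal(draws_1, draws_2))  # False: the blocks do not overlap here
```
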
In the above example, one could not later use `blocked_rng[0].jumped()` as it would overlap with `blocked_rng[1]`. Like with the independent streams, if the main process here wants to split off 10 more streams by jumping, then it needs to start with `range(10, 20)`, otherwise it would recreate the same streams. On the other hand, if you carefully construct the streams, then you are guaranteed to have streams that do not overlap.

# Performance

## Recommendation

The recommended generator for general use is [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") or its upgraded variant [`PCG64DXSM`](bit_generators/pcg64dxsm#numpy.random.PCG64DXSM "numpy.random.PCG64DXSM") for heavily-parallel use cases. They are statistically high quality, full-featured, and fast on most platforms, but somewhat slow when compiled for 32-bit processes. See [Upgrading PCG64 with PCG64DXSM](upgrading-pcg64#upgrading-pcg64) for details on when heavy parallelism would indicate using [`PCG64DXSM`](bit_generators/pcg64dxsm#numpy.random.PCG64DXSM "numpy.random.PCG64DXSM").

[`Philox`](bit_generators/philox#numpy.random.Philox "numpy.random.Philox") is fairly slow, but its statistical properties are of very high quality, and it is easy to get an assuredly-independent stream by using unique keys. If that is the style you wish to use for parallel streams, or you are porting from another system that uses that style, then [`Philox`](bit_generators/philox#numpy.random.Philox "numpy.random.Philox") is your choice.

[`SFC64`](bit_generators/sfc64#numpy.random.SFC64 "numpy.random.SFC64") is statistically high quality and very fast. However, it lacks jumpability. If you are not using that capability and want lots of speed, even on 32-bit processes, this is your choice.
[`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") [fails some statistical tests](https://www.iro.umontreal.ca/~lecuyer/myftp/papers/testu01.pdf) and is not especially fast compared to modern PRNGs. For these reasons, we mostly do not recommend using it on its own, only through the legacy [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") for reproducing old results. That said, it has a very long history as a default in many systems.

## Timings

The timings below are the time in ns to produce 1 random value from a specific distribution. The original [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") generator is much slower since it requires two 32-bit values to equal the output of the faster 64-bit generators. Integer performance has a similar ordering. The pattern is similar for other, more complex generators. The normal performance of the legacy [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") generator is much lower than the others since it uses the Box-Muller transform rather than the Ziggurat method. The performance gap for Exponentials is also large due to the cost of computing the log function to invert the CDF. The column labeled MT19937 uses the same 32-bit generator as [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") but produces random variates using [`Generator`](generator#numpy.random.Generator "numpy.random.Generator").
| MT19937 | PCG64 | PCG64DXSM | Philox | SFC64 | RandomState
---|---|---|---|---|---|---
32-bit Unsigned Ints | 3.3 | 1.9 | 2.0 | 3.3 | 1.8 | 3.1
64-bit Unsigned Ints | 5.6 | 3.2 | 2.9 | 4.9 | 2.5 | 5.5
Uniforms | 5.9 | 3.1 | 2.9 | 5.0 | 2.6 | 6.0
Normals | 13.9 | 10.8 | 10.5 | 12.0 | 8.3 | 56.8
Exponentials | 9.1 | 6.0 | 5.8 | 8.1 | 5.4 | 63.9
Gammas | 37.2 | 30.8 | 28.9 | 34.0 | 27.5 | 77.0
Binomials | 21.3 | 17.4 | 17.6 | 19.3 | 15.6 | 21.4
Laplaces | 73.2 | 72.3 | 76.1 | 73.0 | 72.3 | 82.5
Poissons | 111.7 | 103.4 | 100.5 | 109.4 | 90.7 | 115.2

The next table presents the performance in percentage relative to values generated by the legacy generator, `RandomState(MT19937())`. The overall performance was computed using a geometric mean.

| MT19937 | PCG64 | PCG64DXSM | Philox | SFC64
---|---|---|---|---|---
32-bit Unsigned Ints | 96 | 162 | 160 | 96 | 175
64-bit Unsigned Ints | 97 | 171 | 188 | 113 | 218
Uniforms | 102 | 192 | 206 | 121 | 233
Normals | 409 | 526 | 541 | 471 | 684
Exponentials | 701 | 1071 | 1101 | 784 | 1179
Gammas | 207 | 250 | 266 | 227 | 281
Binomials | 100 | 123 | 122 | 111 | 138
Laplaces | 113 | 114 | 108 | 113 | 114
Poissons | 103 | 111 | 115 | 105 | 127
Overall | 159 | 219 | 225 | 174 | 251

Note: All timings were taken using Linux on an AMD Ryzen 9 3900X processor.

## Performance on different operating systems

Performance differs across platforms due to compiler and hardware availability (e.g., register width) differences. The default bit generator has been chosen to perform well on 64-bit platforms. Performance on 32-bit operating systems is very different. The values reported are normalized relative to the speed of MT19937 in each table. A value of 100 indicates that the performance matches the MT19937. Higher values indicate improved performance. These values cannot be compared across tables.
### 64-bit Linux

Distribution | MT19937 | PCG64 | PCG64DXSM | Philox | SFC64
---|---|---|---|---|---
32-bit Unsigned Ints | 100 | 168 | 166 | 100 | 182
64-bit Unsigned Ints | 100 | 176 | 193 | 116 | 224
Uniforms | 100 | 188 | 202 | 118 | 228
Normals | 100 | 128 | 132 | 115 | 167
Exponentials | 100 | 152 | 157 | 111 | 168
Overall | 100 | 161 | 168 | 112 | 192

### 64-bit Windows

The relative performance on 64-bit Linux and 64-bit Windows is broadly similar with the notable exception of the Philox generator.

Distribution | MT19937 | PCG64 | PCG64DXSM | Philox | SFC64
---|---|---|---|---|---
32-bit Unsigned Ints | 100 | 155 | 131 | 29 | 150
64-bit Unsigned Ints | 100 | 157 | 143 | 25 | 154
Uniforms | 100 | 151 | 144 | 24 | 155
Normals | 100 | 129 | 128 | 37 | 150
Exponentials | 100 | 150 | 145 | 28 | 159
**Overall** | 100 | 148 | 138 | 28 | 154

### 32-bit Windows

The performance of 64-bit generators on 32-bit Windows is much lower than on 64-bit operating systems due to register width. MT19937, the generator that has been in NumPy since 2005, operates on 32-bit integers.

Distribution | MT19937 | PCG64 | PCG64DXSM | Philox | SFC64
---|---|---|---|---|---
32-bit Unsigned Ints | 100 | 24 | 34 | 14 | 57
64-bit Unsigned Ints | 100 | 21 | 32 | 14 | 74
Uniforms | 100 | 21 | 34 | 16 | 73
Normals | 100 | 36 | 57 | 28 | 101
Exponentials | 100 | 28 | 44 | 20 | 88
**Overall** | 100 | 25 | 39 | 18 | 77

Note: Linux timings used Ubuntu 20.04 and GCC 9.3.0. Windows timings were made on Windows 10 using Microsoft C/C++ Optimizing Compiler Version 19 (Visual Studio 2019). All timings were produced on an AMD Ryzen 9 3900X processor.
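Absolute numbers like those above are machine-dependent, but one cell can be roughly reproduced locally. A sketch (the generator choices and draw sizes here are illustrative, and the printed timings will vary by machine):

```python
import timeit
import numpy as np

n = 50_000  # draws per call; small enough to run quickly
for label, rng in [("PCG64", np.random.Generator(np.random.PCG64())),
                   ("RandomState", np.random.RandomState())]:
    # Average several calls and convert to nanoseconds per single draw.
    seconds = timeit.timeit(lambda: rng.standard_normal(n), number=5)
    print(f"{label}: {seconds / (5 * n) * 1e9:.1f} ns per normal draw")
```
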
# Upgrading PCG64 with PCG64DXSM

Uses of the [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") in a massively-parallel context have been shown to have statistical weaknesses that were not apparent at the first release in numpy 1.17. Most users will never observe this weakness and are safe to continue to use [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64"). We have introduced a new [`PCG64DXSM`](bit_generators/pcg64dxsm#numpy.random.PCG64DXSM "numpy.random.PCG64DXSM") [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") that will eventually become the new default [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") implementation used by [`default_rng`](generator#numpy.random.default_rng "numpy.random.default_rng") in future releases. [`PCG64DXSM`](bit_generators/pcg64dxsm#numpy.random.PCG64DXSM "numpy.random.PCG64DXSM") solves the statistical weakness while preserving the performance and the features of [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64").

## Does this affect me?

If you

1. only use a single [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") instance,
2. only use [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") or the functions in [`numpy.random`](index#module-numpy.random "numpy.random"),
3. only use the [`PCG64.jumped`](bit_generators/generated/numpy.random.pcg64.jumped#numpy.random.PCG64.jumped "numpy.random.PCG64.jumped") method to generate parallel streams, or
4.
explicitly use a [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") other than [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64"), then this weakness does not affect you at all. Carry on. If you use moderate numbers of parallel streams created with [`default_rng`](generator#numpy.random.default_rng "numpy.random.default_rng") or [`SeedSequence.spawn`](bit_generators/generated/numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn"), in the 1000s, then the chance of observing this weakness is negligibly small. You can continue to use [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") comfortably. If you use very large numbers of parallel streams, in the millions, and draw large amounts of numbers from each, then the chance of observing this weakness can become non-negligible, if still small. An example of such a use case would be a very large distributed reinforcement learning problem with millions of long Monte Carlo playouts each generating billions of random number draws. Such use cases should consider using [`PCG64DXSM`](bit_generators/pcg64dxsm#numpy.random.PCG64DXSM "numpy.random.PCG64DXSM") explicitly or another modern [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") like [`SFC64`](bit_generators/sfc64#numpy.random.SFC64 "numpy.random.SFC64") or [`Philox`](bit_generators/philox#numpy.random.Philox "numpy.random.Philox"), but it is unlikely that any old results you may have calculated are invalid. In any case, the weakness is a kind of [Birthday Paradox](https://en.wikipedia.org/wiki/Birthday_problem) collision. That is, a single pair of parallel streams out of the millions, considered together, might fail a stringent set of statistical tests of randomness. 
The remaining millions of streams would all be perfectly fine, and the effect of the bad pair in the whole calculation is very likely to be swamped by the remaining streams in most applications. ## Technical details Like many PRNG algorithms, [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") is constructed from a transition function, which advances a 128-bit state, and an output function, that mixes the 128-bit state into a 64-bit integer to be output. One of the guiding design principles of the PCG family of PRNGs is to balance the computational cost (and pseudorandomness strength) between the transition function and the output function. The transition function is a 128-bit linear congruential generator (LCG), which consists of multiplying the 128-bit state with a fixed multiplication constant and then adding a user-chosen increment, in 128-bit modular arithmetic. LCGs are well-analyzed PRNGs with known weaknesses, though 128-bit LCGs are large enough to pass stringent statistical tests on their own, with only the trivial output function. The output function of [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") is intended to patch up some of those known weaknesses by doing “just enough” scrambling of the bits to assist in the statistical properties without adding too much computational cost. One of these known weaknesses is that advancing the state of the LCG by steps numbering a power of two (`bg.advance(2**N)`) will leave the lower `N` bits identical to the state that was just left. For a single stream drawn from sequentially, this is of little consequence. The remaining \\(128-N\\) bits provide plenty of pseudorandomness that will be mixed in for any practical `N` that can be observed in a single stream, which is why one does not need to worry about this if you only use a single stream in your application. 
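The power-of-two stepping property described above is directly observable through the bit generator's `state` dict. A minimal sketch (the seed and `N` below are arbitrary), assuming the documented `{'state': {'state': ..., 'inc': ...}}` layout of `PCG64.state`:

```python
from numpy.random import PCG64

bg = PCG64(2024)  # arbitrary seed
N = 40

before = bg.state["state"]["state"]  # the raw 128-bit LCG state
bg.advance(2**N)                     # step the LCG forward 2**N times
after = bg.state["state"]["state"]

print(before % 2**N == after % 2**N)  # True: the low N bits came back around
print(before == after)                # False: the high bits moved on
```

This works because the low `N` bits of a power-of-two-modulus LCG cycle with period \\(2^N\\), exactly the weakness the text describes.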
Similarly, the [`PCG64.jumped`](bit_generators/generated/numpy.random.pcg64.jumped#numpy.random.PCG64.jumped "numpy.random.PCG64.jumped") method uses a carefully chosen number of steps to avoid creating these collisions. However, once you start creating “randomly-initialized” parallel streams, either using OS entropy by calling [`default_rng`](generator#numpy.random.default_rng "numpy.random.default_rng") repeatedly or using [`SeedSequence.spawn`](bit_generators/generated/numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn"), then we need to consider how many lower bits need to “collide” in order to create a bad pair of streams, and then evaluate the probability of creating such a collision. [Empirically](https://github.com/numpy/numpy/issues/16313), it has been determined that if one shares the lower 58 bits of state and shares an increment, then the pair of streams, when interleaved, will fail [PractRand](https://pracrand.sourceforge.net/) in a reasonable amount of time, after drawing a few gigabytes of data. Following the standard Birthday Paradox calculations for a collision of 58 bits, we can see that we can create \\(2^{29}\\), or about half a billion, streams, which is when the probability of such a collision becomes high. Half a billion streams is quite high, and the amount of data each stream needs to draw before the statistical correlations become apparent to even the strict `PractRand` tests is in the gigabytes. But this is on the horizon for very large applications like distributed reinforcement learning. There are reasons to expect that even in these applications a collision probably will not have a practical effect in the total result, since the statistical problem is constrained to just the colliding pair. Now, let us consider the case when the increment is not constrained to be the same.
Our implementation of [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") seeds both the state and the increment; that is, two calls to [`default_rng`](generator#numpy.random.default_rng "numpy.random.default_rng") (almost certainly) have different states and increments. Upon our first release, we believed that having the seeded increment would provide a certain amount of extra protection, that one would have to be “close” in both the state space and increment space in order to observe correlations (`PractRand` failures) in a pair of streams. If that were true, then the “bottleneck” for collisions would be the 128-bit entropy pool size inside of [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") (and 128-bit collisions are in the “preposterously unlikely” category). Unfortunately, this is not true. One of the known properties of an LCG is that different increments create _distinct_ streams, but with a known relationship. Each LCG has an orbit that traverses all \\(2^{128}\\) different 128-bit states. Two LCGs with different increments are related in that one can “rotate” the orbit of the first LCG (advance it by a number of steps that we can compute from the two increments) such that then both LCGs will always then have the same state, up to an additive constant and maybe an inversion of the bits. If you then iterate both streams in lockstep, then the states will _always_ remain related by that same additive constant (and the inversion, if present). Recall that [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") is constructed from both a transition function (the LCG) and an output function. It was expected that the scrambling effect of the output function would have been strong enough to make the distinct streams practically independent (i.e. “passing the `PractRand` tests”) unless the two increments were pathologically related to each other (e.g. 
1 and 3). The output function XSL-RR of the then-standard PCG algorithm that we implemented in [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") turns out to be too weak to cover up for the 58-bit collision of the underlying LCG that we described above. For any given pair of increments, the size of the “colliding” space of states is the same, so for this weakness, the extra distinctness provided by the increments does not translate into extra protection from statistical correlations that `PractRand` can detect. Fortunately, strengthening the output function is able to correct this weakness and _does_ turn the extra distinctness provided by differing increments into additional protection from these low-bit collisions. To the [PCG author’s credit](https://github.com/numpy/numpy/issues/13635#issuecomment-506088698), she had developed a stronger output function in response to related discussions during the long birth of the new [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") system. We NumPy developers chose to be “conservative” and use the XSL-RR variant that had undergone a longer period of testing at that time. The DXSM output function adopts a “xorshift-multiply” construction used in strong integer hashes that has much better avalanche properties than the XSL-RR output function. While there are “pathological” pairs of increments that induce “bad” additive constants that relate the two streams, the vast majority of pairs induce “good” additive constants that make the merely-distinct streams of LCG states into practically-independent output streams.
Indeed, now the claim we once made about [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") is actually true of [`PCG64DXSM`](bit_generators/pcg64dxsm#numpy.random.PCG64DXSM "numpy.random.PCG64DXSM"): collisions are possible, but both streams have to simultaneously be “close” in the 128-bit state space _and_ “close” in the 127-bit increment space, so that would be less likely than the negligible chance of colliding in the 128-bit internal [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") pool. The DXSM output function is more computationally intensive than XSL-RR, but some optimizations in the LCG more than make up for the performance hit on most machines, so [`PCG64DXSM`](bit_generators/pcg64dxsm#numpy.random.PCG64DXSM "numpy.random.PCG64DXSM") is a good, safe upgrade. There are, of course, an infinite number of stronger output functions that one could consider, but most will have a greater computational cost, and the DXSM output function has now received many CPU cycles of testing via `PractRand` at this time.

# Array creation routines

See also [Array creation](../user/basics.creation#arrays-creation)

## From shape or value

[`empty`](generated/numpy.empty#numpy.empty "numpy.empty")(shape[, dtype, order, device, like]) | Return a new array of given shape and type, without initializing entries.
---|---
[`empty_like`](generated/numpy.empty_like#numpy.empty_like "numpy.empty_like")(prototype[, dtype, order, subok, ...]) | Return a new array with the same shape and type as a given array.
[`eye`](generated/numpy.eye#numpy.eye "numpy.eye")(N[, M, k, dtype, order, device, like]) | Return a 2-D array with ones on the diagonal and zeros elsewhere.
[`identity`](generated/numpy.identity#numpy.identity "numpy.identity")(n[, dtype, like]) | Return the identity array.
[`ones`](generated/numpy.ones#numpy.ones "numpy.ones")(shape[, dtype, order, device, like]) | Return a new array of given shape and type, filled with ones.
[`ones_like`](generated/numpy.ones_like#numpy.ones_like "numpy.ones_like")(a[, dtype, order, subok, shape, ...]) | Return an array of ones with the same shape and type as a given array.
[`zeros`](generated/numpy.zeros#numpy.zeros "numpy.zeros")(shape[, dtype, order, like]) | Return a new array of given shape and type, filled with zeros.
[`zeros_like`](generated/numpy.zeros_like#numpy.zeros_like "numpy.zeros_like")(a[, dtype, order, subok, shape, ...]) | Return an array of zeros with the same shape and type as a given array.
[`full`](generated/numpy.full#numpy.full "numpy.full")(shape, fill_value[, dtype, order, ...]) | Return a new array of given shape and type, filled with `fill_value`.
[`full_like`](generated/numpy.full_like#numpy.full_like "numpy.full_like")(a, fill_value[, dtype, order, ...]) | Return a full array with the same shape and type as a given array.

## From existing data

[`array`](generated/numpy.array#numpy.array "numpy.array")(object[, dtype, copy, order, subok, ...]) | Create an array.
---|---
[`asarray`](generated/numpy.asarray#numpy.asarray "numpy.asarray")(a[, dtype, order, device, copy, like]) | Convert the input to an array.
[`asanyarray`](generated/numpy.asanyarray#numpy.asanyarray "numpy.asanyarray")(a[, dtype, order, device, copy, like]) | Convert the input to an ndarray, but pass ndarray subclasses through.
[`ascontiguousarray`](generated/numpy.ascontiguousarray#numpy.ascontiguousarray "numpy.ascontiguousarray")(a[, dtype, like]) | Return a contiguous array (ndim >= 1) in memory (C order).
[`asmatrix`](generated/numpy.asmatrix#numpy.asmatrix "numpy.asmatrix")(data[, dtype]) | Interpret the input as a matrix.
[`astype`](generated/numpy.astype#numpy.astype "numpy.astype")(x, dtype, /, *[, copy, device]) | Copies an array to a specified data type.
[`copy`](generated/numpy.copy#numpy.copy "numpy.copy")(a[, order, subok]) | Return an array copy of the given object.
[`frombuffer`](generated/numpy.frombuffer#numpy.frombuffer "numpy.frombuffer")(buffer[, dtype, count, offset, like]) | Interpret a buffer as a 1-dimensional array.
[`from_dlpack`](generated/numpy.from_dlpack#numpy.from_dlpack "numpy.from_dlpack")(x, /, *[, device, copy]) | Create a NumPy array from an object implementing the `__dlpack__` protocol.
[`fromfile`](generated/numpy.fromfile#numpy.fromfile "numpy.fromfile")(file[, dtype, count, sep, offset, like]) | Construct an array from data in a text or binary file.
[`fromfunction`](generated/numpy.fromfunction#numpy.fromfunction "numpy.fromfunction")(function, shape, *[, dtype, like]) | Construct an array by executing a function over each coordinate.
[`fromiter`](generated/numpy.fromiter#numpy.fromiter "numpy.fromiter")(iter, dtype[, count, like]) | Create a new 1-dimensional array from an iterable object.
[`fromstring`](generated/numpy.fromstring#numpy.fromstring "numpy.fromstring")(string[, dtype, count, like]) | A new 1-D array initialized from text data in a string.
[`loadtxt`](generated/numpy.loadtxt#numpy.loadtxt "numpy.loadtxt")(fname[, dtype, comments, delimiter, ...]) | Load data from a text file.

## Creating record arrays

Note: Please refer to [Record arrays](arrays.classes#arrays-classes-rec) for record arrays.

[`rec.array`](generated/numpy.rec.array#numpy.rec.array "numpy.rec.array")(obj[, dtype, shape, offset, ...]) | Construct a record array from a wide variety of objects.
---|---
[`rec.fromarrays`](generated/numpy.rec.fromarrays#numpy.rec.fromarrays "numpy.rec.fromarrays")(arrayList[, dtype, shape, ...]) | Create a record array from a (flat) list of arrays.
[`rec.fromrecords`](generated/numpy.rec.fromrecords#numpy.rec.fromrecords "numpy.rec.fromrecords")(recList[, dtype, shape, ...]) | Create a recarray from a list of records in text form.
[`rec.fromstring`](generated/numpy.rec.fromstring#numpy.rec.fromstring "numpy.rec.fromstring")(datastring[, dtype, shape, ...]) | Create a record array from binary data [`rec.fromfile`](generated/numpy.rec.fromfile#numpy.rec.fromfile "numpy.rec.fromfile")(fd[, dtype, shape, offset, ...]) | Create an array from binary file data ## Creating character arrays (numpy.char) Note [`numpy.char`](routines.char#module-numpy.char "numpy.char") is used to create character arrays. [`char.array`](generated/numpy.char.array#numpy.char.array "numpy.char.array")(obj[, itemsize, copy, unicode, order]) | Create a [`chararray`](generated/numpy.char.chararray#numpy.char.chararray "numpy.char.chararray"). ---|--- [`char.asarray`](generated/numpy.char.asarray#numpy.char.asarray "numpy.char.asarray")(obj[, itemsize, unicode, order]) | Convert the input to a [`chararray`](generated/numpy.char.chararray#numpy.char.chararray "numpy.char.chararray"), copying the data only if necessary. ## Numerical ranges [`arange`](generated/numpy.arange#numpy.arange "numpy.arange")([start,] stop[, step,][, dtype, ...]) | Return evenly spaced values within a given interval. ---|--- [`linspace`](generated/numpy.linspace#numpy.linspace "numpy.linspace")(start, stop[, num, endpoint, ...]) | Return evenly spaced numbers over a specified interval. [`logspace`](generated/numpy.logspace#numpy.logspace "numpy.logspace")(start, stop[, num, endpoint, base, ...]) | Return numbers spaced evenly on a log scale. [`geomspace`](generated/numpy.geomspace#numpy.geomspace "numpy.geomspace")(start, stop[, num, endpoint, ...]) | Return numbers spaced evenly on a log scale (a geometric progression). [`meshgrid`](generated/numpy.meshgrid#numpy.meshgrid "numpy.meshgrid")(*xi[, copy, sparse, indexing]) | Return a tuple of coordinate matrices from coordinate vectors. [`mgrid`](generated/numpy.mgrid#numpy.mgrid "numpy.mgrid") | An instance which returns a dense multi-dimensional "meshgrid". 
[`ogrid`](generated/numpy.ogrid#numpy.ogrid "numpy.ogrid") | An instance which returns an open multi-dimensional "meshgrid". ## Building matrices [`diag`](generated/numpy.diag#numpy.diag "numpy.diag")(v[, k]) | Extract a diagonal or construct a diagonal array. ---|--- [`diagflat`](generated/numpy.diagflat#numpy.diagflat "numpy.diagflat")(v[, k]) | Create a two-dimensional array with the flattened input as a diagonal. [`tri`](generated/numpy.tri#numpy.tri "numpy.tri")(N[, M, k, dtype, like]) | An array with ones at and below the given diagonal and zeros elsewhere. [`tril`](generated/numpy.tril#numpy.tril "numpy.tril")(m[, k]) | Lower triangle of an array. [`triu`](generated/numpy.triu#numpy.triu "numpy.triu")(m[, k]) | Upper triangle of an array. [`vander`](generated/numpy.vander#numpy.vander "numpy.vander")(x[, N, increasing]) | Generate a Vandermonde matrix. ## The matrix class [`bmat`](generated/numpy.bmat#numpy.bmat "numpy.bmat")(obj[, ldict, gdict]) | Build a matrix object from a string, nested sequence, or array. ---|--- # Array manipulation routines ## Basic operations [`copyto`](generated/numpy.copyto#numpy.copyto "numpy.copyto")(dst, src[, casting, where]) | Copies values from one array to another, broadcasting as necessary. ---|--- [`ndim`](generated/numpy.ndim#numpy.ndim "numpy.ndim")(a) | Return the number of dimensions of an array. [`shape`](generated/numpy.shape#numpy.shape "numpy.shape")(a) | Return the shape of an array. [`size`](generated/numpy.size#numpy.size "numpy.size")(a[, axis]) | Return the number of elements along a given axis. ## Changing array shape [`reshape`](generated/numpy.reshape#numpy.reshape "numpy.reshape")(a, /[, shape, order, newshape, copy]) | Gives a new shape to an array without changing its data. ---|--- [`ravel`](generated/numpy.ravel#numpy.ravel "numpy.ravel")(a[, order]) | Return a contiguous flattened array. 
[`ndarray.flat`](generated/numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") | A 1-D iterator over the array. [`ndarray.flatten`](generated/numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten")([order]) | Return a copy of the array collapsed into one dimension. ## Transpose-like operations [`moveaxis`](generated/numpy.moveaxis#numpy.moveaxis "numpy.moveaxis")(a, source, destination) | Move axes of an array to new positions. ---|--- [`rollaxis`](generated/numpy.rollaxis#numpy.rollaxis "numpy.rollaxis")(a, axis[, start]) | Roll the specified axis backwards, until it lies in a given position. [`swapaxes`](generated/numpy.swapaxes#numpy.swapaxes "numpy.swapaxes")(a, axis1, axis2) | Interchange two axes of an array. [`ndarray.T`](generated/numpy.ndarray.t#numpy.ndarray.T "numpy.ndarray.T") | View of the transposed array. [`transpose`](generated/numpy.transpose#numpy.transpose "numpy.transpose")(a[, axes]) | Returns an array with axes transposed. [`permute_dims`](generated/numpy.permute_dims#numpy.permute_dims "numpy.permute_dims")(a[, axes]) | Returns an array with axes transposed. [`matrix_transpose`](generated/numpy.matrix_transpose#numpy.matrix_transpose "numpy.matrix_transpose")(x, /) | Transposes a matrix (or a stack of matrices) `x`. ## Changing number of dimensions [`atleast_1d`](generated/numpy.atleast_1d#numpy.atleast_1d "numpy.atleast_1d")(*arys) | Convert inputs to arrays with at least one dimension. ---|--- [`atleast_2d`](generated/numpy.atleast_2d#numpy.atleast_2d "numpy.atleast_2d")(*arys) | View inputs as arrays with at least two dimensions. [`atleast_3d`](generated/numpy.atleast_3d#numpy.atleast_3d "numpy.atleast_3d")(*arys) | View inputs as arrays with at least three dimensions. [`broadcast`](generated/numpy.broadcast#numpy.broadcast "numpy.broadcast") | Produce an object that mimics broadcasting. 
[`broadcast_to`](generated/numpy.broadcast_to#numpy.broadcast_to "numpy.broadcast_to")(array, shape[, subok]) | Broadcast an array to a new shape. [`broadcast_arrays`](generated/numpy.broadcast_arrays#numpy.broadcast_arrays "numpy.broadcast_arrays")(*args[, subok]) | Broadcast any number of arrays against each other. [`expand_dims`](generated/numpy.expand_dims#numpy.expand_dims "numpy.expand_dims")(a, axis) | Expand the shape of an array. [`squeeze`](generated/numpy.squeeze#numpy.squeeze "numpy.squeeze")(a[, axis]) | Remove axes of length one from `a`. ## Changing kind of array [`asarray`](generated/numpy.asarray#numpy.asarray "numpy.asarray")(a[, dtype, order, device, copy, like]) | Convert the input to an array. ---|--- [`asanyarray`](generated/numpy.asanyarray#numpy.asanyarray "numpy.asanyarray")(a[, dtype, order, device, copy, like]) | Convert the input to an ndarray, but pass ndarray subclasses through. [`asmatrix`](generated/numpy.asmatrix#numpy.asmatrix "numpy.asmatrix")(data[, dtype]) | Interpret the input as a matrix. [`asfortranarray`](generated/numpy.asfortranarray#numpy.asfortranarray "numpy.asfortranarray")(a[, dtype, like]) | Return an array (ndim >= 1) laid out in Fortran order in memory. [`ascontiguousarray`](generated/numpy.ascontiguousarray#numpy.ascontiguousarray "numpy.ascontiguousarray")(a[, dtype, like]) | Return a contiguous array (ndim >= 1) in memory (C order). [`asarray_chkfinite`](generated/numpy.asarray_chkfinite#numpy.asarray_chkfinite "numpy.asarray_chkfinite")(a[, dtype, order]) | Convert the input to an array, checking for NaNs or Infs. [`require`](generated/numpy.require#numpy.require "numpy.require")(a[, dtype, requirements, like]) | Return an ndarray of the provided type that satisfies requirements. ## Joining arrays [`concatenate`](generated/numpy.concatenate#numpy.concatenate "numpy.concatenate")([axis, out, dtype, casting]) | Join a sequence of arrays along an existing axis. 
---|--- [`concat`](generated/numpy.concat#numpy.concat "numpy.concat")([axis, out, dtype, casting]) | Join a sequence of arrays along an existing axis. [`stack`](generated/numpy.stack#numpy.stack "numpy.stack")(arrays[, axis, out, dtype, casting]) | Join a sequence of arrays along a new axis. [`block`](generated/numpy.block#numpy.block "numpy.block")(arrays) | Assemble an nd-array from nested lists of blocks. [`vstack`](generated/numpy.vstack#numpy.vstack "numpy.vstack")(tup, *[, dtype, casting]) | Stack arrays in sequence vertically (row wise). [`hstack`](generated/numpy.hstack#numpy.hstack "numpy.hstack")(tup, *[, dtype, casting]) | Stack arrays in sequence horizontally (column wise). [`dstack`](generated/numpy.dstack#numpy.dstack "numpy.dstack")(tup) | Stack arrays in sequence depth wise (along third axis). [`column_stack`](generated/numpy.column_stack#numpy.column_stack "numpy.column_stack")(tup) | Stack 1-D arrays as columns into a 2-D array. ## Splitting arrays [`split`](generated/numpy.split#numpy.split "numpy.split")(ary, indices_or_sections[, axis]) | Split an array into multiple sub-arrays as views into `ary`. ---|--- [`array_split`](generated/numpy.array_split#numpy.array_split "numpy.array_split")(ary, indices_or_sections[, axis]) | Split an array into multiple sub-arrays. [`dsplit`](generated/numpy.dsplit#numpy.dsplit "numpy.dsplit")(ary, indices_or_sections) | Split array into multiple sub-arrays along the 3rd axis (depth). [`hsplit`](generated/numpy.hsplit#numpy.hsplit "numpy.hsplit")(ary, indices_or_sections) | Split an array into multiple sub-arrays horizontally (column-wise). [`vsplit`](generated/numpy.vsplit#numpy.vsplit "numpy.vsplit")(ary, indices_or_sections) | Split an array into multiple sub-arrays vertically (row-wise). [`unstack`](generated/numpy.unstack#numpy.unstack "numpy.unstack")(x, /, *[, axis]) | Split an array into a sequence of arrays along the given axis. 
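The distinction between `split` and `array_split` in the table above can be seen in a minimal sketch: `split` requires the axis to divide evenly, while `array_split` distributes any remainder across the leading sub-arrays.

```python
import numpy as np

a = np.arange(9)

# split demands equal-sized pieces (here 3 views of length 3)...
halves = np.split(a, 3)

# ...while array_split tolerates a remainder: lengths 3, 2, 2, 2.
uneven = np.array_split(a, 4)
```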
## Tiling arrays [`tile`](generated/numpy.tile#numpy.tile "numpy.tile")(A, reps) | Construct an array by repeating A the number of times given by reps. ---|--- [`repeat`](generated/numpy.repeat#numpy.repeat "numpy.repeat")(a, repeats[, axis]) | Repeat each element of an array after itself. ## Adding and removing elements [`delete`](generated/numpy.delete#numpy.delete "numpy.delete")(arr, obj[, axis]) | Return a new array with sub-arrays along an axis deleted. ---|--- [`insert`](generated/numpy.insert#numpy.insert "numpy.insert")(arr, obj, values[, axis]) | Insert values along the given axis before the given indices. [`append`](generated/numpy.append#numpy.append "numpy.append")(arr, values[, axis]) | Append values to the end of an array. [`resize`](generated/numpy.resize#numpy.resize "numpy.resize")(a, new_shape) | Return a new array with the specified shape. [`trim_zeros`](generated/numpy.trim_zeros#numpy.trim_zeros "numpy.trim_zeros")(filt[, trim, axis]) | Remove values along a dimension which are zero along all other dimensions. [`unique`](generated/numpy.unique#numpy.unique "numpy.unique")(ar[, return_index, return_inverse, ...]) | Find the unique elements of an array. [`pad`](generated/numpy.pad#numpy.pad "numpy.pad")(array, pad_width[, mode]) | Pad an array. ## Rearranging elements [`flip`](generated/numpy.flip#numpy.flip "numpy.flip")(m[, axis]) | Reverse the order of elements in an array along the given axis. ---|--- [`fliplr`](generated/numpy.fliplr#numpy.fliplr "numpy.fliplr")(m) | Reverse the order of elements along axis 1 (left/right). [`flipud`](generated/numpy.flipud#numpy.flipud "numpy.flipud")(m) | Reverse the order of elements along axis 0 (up/down). [`roll`](generated/numpy.roll#numpy.roll "numpy.roll")(a, shift[, axis]) | Roll array elements along a given axis. [`rot90`](generated/numpy.rot90#numpy.rot90 "numpy.rot90")(m[, k, axes]) | Rotate an array by 90 degrees in the plane specified by axes.
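As a minimal sketch of how a few of these rearranging routines behave:

```python
import numpy as np

m = np.arange(4).reshape(2, 2)      # [[0, 1], [2, 3]]

flipped = np.flip(m, axis=0)        # reverse rows: [[2, 3], [0, 1]]
rolled = np.roll(np.arange(5), 2)   # cyclic shift: [3, 4, 0, 1, 2]
rotated = np.rot90(m)               # 90 degrees counterclockwise: [[1, 3], [0, 2]]
```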
# Bit-wise operations ## Elementwise bit operations [`bitwise_and`](generated/numpy.bitwise_and#numpy.bitwise_and "numpy.bitwise_and")(x1, x2, /[, out, where, ...]) | Compute the bit-wise AND of two arrays element-wise. ---|--- [`bitwise_or`](generated/numpy.bitwise_or#numpy.bitwise_or "numpy.bitwise_or")(x1, x2, /[, out, where, casting, ...]) | Compute the bit-wise OR of two arrays element-wise. [`bitwise_xor`](generated/numpy.bitwise_xor#numpy.bitwise_xor "numpy.bitwise_xor")(x1, x2, /[, out, where, ...]) | Compute the bit-wise XOR of two arrays element-wise. [`invert`](generated/numpy.invert#numpy.invert "numpy.invert")(x, /[, out, where, casting, order, ...]) | Compute bit-wise inversion, or bit-wise NOT, element-wise. [`bitwise_invert`](generated/numpy.bitwise_invert#numpy.bitwise_invert "numpy.bitwise_invert")(x, /[, out, where, casting, ...]) | Compute bit-wise inversion, or bit-wise NOT, element-wise. [`left_shift`](generated/numpy.left_shift#numpy.left_shift "numpy.left_shift")(x1, x2, /[, out, where, casting, ...]) | Shift the bits of an integer to the left. [`bitwise_left_shift`](generated/numpy.bitwise_left_shift#numpy.bitwise_left_shift "numpy.bitwise_left_shift")(x1, x2, /[, out, where, ...]) | Shift the bits of an integer to the left. [`right_shift`](generated/numpy.right_shift#numpy.right_shift "numpy.right_shift")(x1, x2, /[, out, where, ...]) | Shift the bits of an integer to the right. [`bitwise_right_shift`](generated/numpy.bitwise_right_shift#numpy.bitwise_right_shift "numpy.bitwise_right_shift")(x1, x2, /[, out, where, ...]) | Shift the bits of an integer to the right. ## Bit packing [`packbits`](generated/numpy.packbits#numpy.packbits "numpy.packbits")(a, /[, axis, bitorder]) | Packs the elements of a binary-valued array into bits in a uint8 array. ---|--- [`unpackbits`](generated/numpy.unpackbits#numpy.unpackbits "numpy.unpackbits")(a, /[, axis, count, bitorder]) | Unpacks elements of a uint8 array into a binary-valued output array. 
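As a minimal sketch of the round trip between the two routines, packing eight binary values into a single `uint8` byte:

```python
import numpy as np

bits = np.array([1, 0, 1, 1, 0, 0, 0, 0], dtype=np.uint8)

packed = np.packbits(bits)      # one byte: 0b10110000 == 176
restored = np.unpackbits(packed)  # recovers the original eight bits
```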
## Output formatting [`binary_repr`](generated/numpy.binary_repr#numpy.binary_repr "numpy.binary_repr")(num[, width]) | Return the binary representation of the input number as a string. ---|--- # Legacy fixed-width string functionality Legacy This submodule is considered legacy and will no longer receive updates. This could also mean it will be removed in future NumPy versions. The string operations in this module, as well as the [`numpy.char.chararray`](generated/numpy.char.chararray#numpy.char.chararray "numpy.char.chararray") class, are planned to be deprecated in the future. Use [`numpy.strings`](routines.strings#module-numpy.strings "numpy.strings") instead. The `numpy.char` module provides a set of vectorized string operations for arrays of type [`numpy.str_`](arrays.scalars#numpy.str_ "numpy.str_") or [`numpy.bytes_`](arrays.scalars#numpy.bytes_ "numpy.bytes_"). For example >>> import numpy as np >>> np.char.capitalize(["python", "numpy"]) array(['Python', 'Numpy'], dtype='<U6') >>> np.char.add(["num", "doc"], ["py", "umentation"]) array(['numpy', 'documentation'], dtype='<U13') ## Comparison Unlike the standard numpy comparison operators, the ones in the char module strip trailing whitespace characters before performing the comparison. [`equal`](generated/numpy.char.equal#numpy.char.equal "numpy.char.equal")(x1, x2) | Return (x1 == x2) element-wise. ---|--- [`not_equal`](generated/numpy.char.not_equal#numpy.char.not_equal "numpy.char.not_equal")(x1, x2) | Return (x1 != x2) element-wise. [`greater_equal`](generated/numpy.char.greater_equal#numpy.char.greater_equal "numpy.char.greater_equal")(x1, x2) | Return (x1 >= x2) element-wise. [`less_equal`](generated/numpy.char.less_equal#numpy.char.less_equal "numpy.char.less_equal")(x1, x2) | Return (x1 <= x2) element-wise. [`greater`](generated/numpy.char.greater#numpy.char.greater "numpy.char.greater")(x1, x2) | Return (x1 > x2) element-wise. [`less`](generated/numpy.char.less#numpy.char.less "numpy.char.less")(x1, x2) | Return (x1 < x2) element-wise. [`compare_chararrays`](generated/numpy.char.compare_chararrays#numpy.char.compare_chararrays "numpy.char.compare_chararrays")(a1, a2, cmp, rstrip) | Performs element-wise comparison of two string arrays using the comparison operator specified by `cmp`. ## String information [`count`](generated/numpy.char.count#numpy.char.count "numpy.char.count")(a, sub[, start, end]) | Returns an array with the number of non-overlapping occurrences of substring `sub` in the range [`start`, `end`).
---|--- [`endswith`](generated/numpy.char.endswith#numpy.char.endswith "numpy.char.endswith")(a, suffix[, start, end]) | Returns a boolean array which is `True` where the string element in `a` ends with `suffix`, otherwise `False`. [`find`](generated/numpy.char.find#numpy.char.find "numpy.char.find")(a, sub[, start, end]) | For each element, return the lowest index in the string where substring `sub` is found, such that `sub` is contained in the range [`start`, `end`). [`index`](generated/numpy.char.index#numpy.char.index "numpy.char.index")(a, sub[, start, end]) | Like [`find`](generated/numpy.char.find#numpy.char.find "numpy.char.find"), but raises [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError "\(in Python v3.13\)") when the substring is not found. [`isalpha`](generated/numpy.char.isalpha#numpy.char.isalpha "numpy.char.isalpha")(x, /[, out, where, casting, order, ...]) | Returns true for each element if all characters in the data interpreted as a string are alphabetic and there is at least one character, false otherwise. [`isalnum`](generated/numpy.char.isalnum#numpy.char.isalnum "numpy.char.isalnum")(x, /[, out, where, casting, order, ...]) | Returns true for each element if all characters in the string are alphanumeric and there is at least one character, false otherwise. [`isdecimal`](generated/numpy.char.isdecimal#numpy.char.isdecimal "numpy.char.isdecimal")(x, /[, out, where, casting, ...]) | For each element, return True if there are only decimal characters in the element. [`isdigit`](generated/numpy.char.isdigit#numpy.char.isdigit "numpy.char.isdigit")(x, /[, out, where, casting, order, ...]) | Returns true for each element if all characters in the string are digits and there is at least one character, false otherwise. 
[`islower`](generated/numpy.char.islower#numpy.char.islower "numpy.char.islower")(x, /[, out, where, casting, order, ...]) | Returns true for each element if all cased characters in the string are lowercase and there is at least one cased character, false otherwise. [`isnumeric`](generated/numpy.char.isnumeric#numpy.char.isnumeric "numpy.char.isnumeric")(x, /[, out, where, casting, ...]) | For each element, return True if there are only numeric characters in the element. [`isspace`](generated/numpy.char.isspace#numpy.char.isspace "numpy.char.isspace")(x, /[, out, where, casting, order, ...]) | Returns true for each element if there are only whitespace characters in the string and there is at least one character, false otherwise. [`istitle`](generated/numpy.char.istitle#numpy.char.istitle "numpy.char.istitle")(x, /[, out, where, casting, order, ...]) | Returns true for each element if the element is a titlecased string and there is at least one character, false otherwise. [`isupper`](generated/numpy.char.isupper#numpy.char.isupper "numpy.char.isupper")(x, /[, out, where, casting, order, ...]) | Return true for each element if all cased characters in the string are uppercase and there is at least one character, false otherwise. [`rfind`](generated/numpy.char.rfind#numpy.char.rfind "numpy.char.rfind")(a, sub[, start, end]) | For each element, return the highest index in the string where substring `sub` is found, such that `sub` is contained in the range [`start`, `end`). [`rindex`](generated/numpy.char.rindex#numpy.char.rindex "numpy.char.rindex")(a, sub[, start, end]) | Like [`rfind`](generated/numpy.char.rfind#numpy.char.rfind "numpy.char.rfind"), but raises [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError "\(in Python v3.13\)") when the substring `sub` is not found. 
[`startswith`](generated/numpy.char.startswith#numpy.char.startswith "numpy.char.startswith")(a, prefix[, start, end]) | Returns a boolean array which is `True` where the string element in `a` starts with `prefix`, otherwise `False`. [`str_len`](generated/numpy.char.str_len#numpy.char.str_len "numpy.char.str_len")(x, /[, out, where, casting, order, ...]) | Returns the length of each element. ## Convenience class [`array`](generated/numpy.char.array#numpy.char.array "numpy.char.array")(obj[, itemsize, copy, unicode, order]) | Create a [`chararray`](generated/numpy.char.chararray#numpy.char.chararray "numpy.char.chararray"). ---|--- [`asarray`](generated/numpy.char.asarray#numpy.char.asarray "numpy.char.asarray")(obj[, itemsize, unicode, order]) | Convert the input to a [`chararray`](generated/numpy.char.chararray#numpy.char.chararray "numpy.char.chararray"), copying the data only if necessary. [`chararray`](generated/numpy.char.chararray#numpy.char.chararray "numpy.char.chararray")(shape[, itemsize, unicode, ...]) | Provides a convenient view on arrays of string and unicode values. # ctypes foreign function interface (numpy.ctypeslib) numpy.ctypeslib.as_array(_obj_ , _shape =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ctypeslib.py#L520-L558) Create a numpy array from a ctypes array or POINTER. The numpy array shares the memory with the ctypes object. The shape parameter must be given if converting from a ctypes POINTER. 
The shape parameter is ignored if converting from a ctypes array. #### Examples Converting a ctypes integer array: >>> import ctypes >>> ctypes_array = (ctypes.c_int * 5)(0, 1, 2, 3, 4) >>> np_array = np.ctypeslib.as_array(ctypes_array) >>> np_array array([0, 1, 2, 3, 4], dtype=int32) Converting a ctypes POINTER: >>> import ctypes >>> buffer = (ctypes.c_int * 5)(0, 1, 2, 3, 4) >>> pointer = ctypes.cast(buffer, ctypes.POINTER(ctypes.c_int)) >>> np_array = np.ctypeslib.as_array(pointer, (5,)) >>> np_array array([0, 1, 2, 3, 4], dtype=int32) numpy.ctypeslib.as_ctypes(_obj_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ctypeslib.py#L561-L602) Create and return a ctypes object from a numpy array. Actually anything that exposes the __array_interface__ is accepted. #### Examples Create ctypes object from inferred int `np.array`: >>> inferred_int_array = np.array([1, 2, 3]) >>> c_int_array = np.ctypeslib.as_ctypes(inferred_int_array) >>> type(c_int_array) <class 'c_long_Array_3'> >>> c_int_array[:] [1, 2, 3] Create ctypes object from explicit 8 bit unsigned int `np.array`: >>> exp_int_array = np.array([1, 2, 3], dtype=np.uint8) >>> c_int_array = np.ctypeslib.as_ctypes(exp_int_array) >>> type(c_int_array) <class 'c_ubyte_Array_3'> >>> c_int_array[:] [1, 2, 3] numpy.ctypeslib.as_ctypes_type(_dtype_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ctypeslib.py#L463-L517) Convert a dtype into a ctypes type. Parameters: **dtype** dtype The dtype to convert Returns: ctype A ctype scalar, union, array, or struct Raises: NotImplementedError If the conversion is not possible #### Notes This function does not losslessly round-trip in either direction.
`np.dtype(as_ctypes_type(dt))` will: * insert padding fields * reorder fields to be sorted by offset * discard field titles `as_ctypes_type(np.dtype(ctype))` will: * discard the class names of [`ctypes.Structure`](https://docs.python.org/3/library/ctypes.html#ctypes.Structure "\(in Python v3.13\)")s and [`ctypes.Union`](https://docs.python.org/3/library/ctypes.html#ctypes.Union "\(in Python v3.13\)")s * convert single-element [`ctypes.Union`](https://docs.python.org/3/library/ctypes.html#ctypes.Union "\(in Python v3.13\)")s into single-element [`ctypes.Structure`](https://docs.python.org/3/library/ctypes.html#ctypes.Structure "\(in Python v3.13\)")s * insert padding fields #### Examples Converting a simple dtype: >>> dt = np.dtype('int8') >>> ctype = np.ctypeslib.as_ctypes_type(dt) >>> ctype <class 'ctypes.c_byte'> Converting a structured dtype: >>> dt = np.dtype([('x', 'i4'), ('y', 'f4')]) >>> ctype = np.ctypeslib.as_ctypes_type(dt) >>> ctype <class 'struct'> numpy.ctypeslib.load_library(_libname_ , _loader_path_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ctypeslib.py#L88-L159) It is possible to load a library using >>> lib = ctypes.cdll[<full_path_name>] But there are cross-platform considerations, such as library file extensions, plus the fact Windows will just load the first library it finds with that name. NumPy supplies the load_library function as a convenience. Changed in version 1.20.0: Allow libname and loader_path to take any [path-like object](https://docs.python.org/3/glossary.html#term-path-like-object "\(in Python v3.13\)"). Parameters: **libname** path-like Name of the library, which can have ‘lib’ as a prefix, but without an extension. **loader_path** path-like Where the library can be found. Returns: **ctypes.cdll[libpath]** library object A ctypes library object Raises: OSError If there is no library with the expected extension, or the library is defective and cannot be loaded.
numpy.ctypeslib.ndpointer(_dtype =None_, _ndim =None_, _shape =None_, _flags =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/ctypeslib.py#L231-L345) Array-checking restype/argtypes. An ndpointer instance is used to describe an ndarray in restypes and argtypes specifications. This approach is more flexible than using, for example, `POINTER(c_double)`, since several restrictions can be specified, which are verified upon calling the ctypes function. These include data type, number of dimensions, shape and flags. If a given array does not satisfy the specified restrictions, a `TypeError` is raised. Parameters: **dtype** data-type, optional Array data-type. **ndim** int, optional Number of array dimensions. **shape** tuple of ints, optional Array shape. **flags** str or tuple of str Array flags; may be one or more of: * C_CONTIGUOUS / C / CONTIGUOUS * F_CONTIGUOUS / F / FORTRAN * OWNDATA / O * WRITEABLE / W * ALIGNED / A * WRITEBACKIFCOPY / X Returns: **klass** ndpointer type object A type object, which is an `_ndptr` instance containing dtype, ndim, shape and flags information. Raises: TypeError If a given array does not satisfy the specified restrictions. #### Examples >>> clib.somefunc.argtypes = [np.ctypeslib.ndpointer(dtype=np.float64, ... ndim=1, ... flags='C_CONTIGUOUS')] ... >>> clib.somefunc(np.array([1, 2, 3], dtype=np.float64)) ... _class_ numpy.ctypeslib.c_intp A [`ctypes`](https://docs.python.org/3/library/ctypes.html#module-ctypes "\(in Python v3.13\)") signed integer type of the same size as [`numpy.intp`](arrays.scalars#numpy.intp "numpy.intp"). Depending on the platform, it can be an alias for either [`c_int`](https://docs.python.org/3/library/ctypes.html#ctypes.c_int "\(in Python v3.13\)"), [`c_long`](https://docs.python.org/3/library/ctypes.html#ctypes.c_long "\(in Python v3.13\)") or [`c_longlong`](https://docs.python.org/3/library/ctypes.html#ctypes.c_longlong "\(in Python v3.13\)").
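The restriction checking described above can be exercised without an actual C library: `from_param` is the hook that `ctypes` calls on each argument, so invoking it directly shows the same validation (a minimal sketch; the `clib` library from the example above remains hypothetical and is not needed here):

```python
import numpy as np

# An argtype that accepts only 1-D, C-contiguous float64 arrays.
ptr_t = np.ctypeslib.ndpointer(dtype=np.float64, ndim=1, flags='C_CONTIGUOUS')

ptr_t.from_param(np.array([1.0, 2.0, 3.0]))   # accepted: matches all restrictions

try:
    ptr_t.from_param(np.array([1, 2, 3]))     # wrong dtype (integer)
    rejected = False
except TypeError:
    rejected = True                           # rejected, as documented
```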
# Datetime support functions [`datetime_as_string`](generated/numpy.datetime_as_string#numpy.datetime_as_string "numpy.datetime_as_string")(arr[, unit, timezone, ...]) | Convert an array of datetimes into an array of strings. ---|--- [`datetime_data`](generated/numpy.datetime_data#numpy.datetime_data "numpy.datetime_data")(dtype, /) | Get information about the step size of a date or time type. ## Business day functions [`busdaycalendar`](generated/numpy.busdaycalendar#numpy.busdaycalendar "numpy.busdaycalendar")([weekmask, holidays]) | A business day calendar object that efficiently stores information defining valid days for the busday family of functions. ---|--- [`is_busday`](generated/numpy.is_busday#numpy.is_busday "numpy.is_busday")(dates[, weekmask, holidays, ...]) | Calculates which of the given dates are valid days, and which are not. [`busday_offset`](generated/numpy.busday_offset#numpy.busday_offset "numpy.busday_offset")(dates, offsets[, roll, ...]) | First adjusts the date to fall on a valid day according to the `roll` rule, then applies offsets to the given dates counted in valid days. [`busday_count`](generated/numpy.busday_count#numpy.busday_count "numpy.busday_count")(begindates, enddates[, ...]) | Counts the number of valid days between `begindates` and `enddates`, not including the day of `enddates`. # Data type routines [`can_cast`](generated/numpy.can_cast#numpy.can_cast "numpy.can_cast")(from_, to[, casting]) | Returns True if cast between data types can occur according to the casting rule. ---|--- [`promote_types`](generated/numpy.promote_types#numpy.promote_types "numpy.promote_types")(type1, type2) | Returns the data type with the smallest size and smallest scalar kind to which both `type1` and `type2` may be safely cast. [`min_scalar_type`](generated/numpy.min_scalar_type#numpy.min_scalar_type "numpy.min_scalar_type")(a, /) | For scalar `a`, returns the data type with the smallest size and smallest scalar kind which can hold its value. 
[`result_type`](generated/numpy.result_type#numpy.result_type "numpy.result_type")(*arrays_and_dtypes) | Returns the type that results from applying the NumPy type promotion rules to the arguments. [`common_type`](generated/numpy.common_type#numpy.common_type "numpy.common_type")(*arrays) | Return a scalar type which is common to the input arrays. ## Creating data types [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype")(dtype[, align, copy]) | Create a data type object. ---|--- [`rec.format_parser`](generated/numpy.rec.format_parser#numpy.rec.format_parser "numpy.rec.format_parser")(formats, names, titles[, ...]) | Class to convert formats, names, titles description to a dtype. ## Data type information [`finfo`](generated/numpy.finfo#numpy.finfo "numpy.finfo")(dtype) | Machine limits for floating point types. ---|--- [`iinfo`](generated/numpy.iinfo#numpy.iinfo "numpy.iinfo")(type) | Machine limits for integer types. ## Data type testing [`isdtype`](generated/numpy.isdtype#numpy.isdtype "numpy.isdtype")(dtype, kind) | Determine if a provided dtype is of a specified data type `kind`. ---|--- [`issubdtype`](generated/numpy.issubdtype#numpy.issubdtype "numpy.issubdtype")(arg1, arg2) | Returns True if first argument is a typecode lower/equal in type hierarchy. ## Miscellaneous [`typename`](generated/numpy.typename#numpy.typename "numpy.typename")(char) | Return a description for the given data type code. ---|--- [`mintypecode`](generated/numpy.mintypecode#numpy.mintypecode "numpy.mintypecode")(typechars[, typeset, default]) | Return the character for the minimum-size type to which given types can be safely cast. # Data type classes (numpy.dtypes) This module is home to specific dtypes related functionality and their classes. For more general information about dtypes, also see [`numpy.dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") and [Data type objects (dtype)](arrays.dtypes#arrays-dtypes). 
Similar to the builtin `types` module, this submodule defines types (classes) that are not widely used directly. New in version 1.25: The dtypes module is new in NumPy 1.25. Previously DType classes were only accessible indirectly. ## DType classes The following are the classes of the corresponding NumPy dtype instances and NumPy scalar types. The classes can be used in `isinstance` checks and can also be instantiated or used directly. Direct use of these classes is not typical, since their scalar counterparts (e.g. `np.float64`) or strings like `"float64"` can be used. ## Boolean numpy.dtypes.BoolDType[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/dtypes.py) ## Bit-sized integers numpy.dtypes.Int8DType[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/dtypes.py) numpy.dtypes.UInt8DType numpy.dtypes.Int16DType numpy.dtypes.UInt16DType numpy.dtypes.Int32DType numpy.dtypes.UInt32DType numpy.dtypes.Int64DType numpy.dtypes.UInt64DType ## C-named integers (may be aliases) numpy.dtypes.ByteDType[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/dtypes.py) numpy.dtypes.UByteDType numpy.dtypes.ShortDType numpy.dtypes.UShortDType numpy.dtypes.IntDType numpy.dtypes.UIntDType numpy.dtypes.LongDType numpy.dtypes.ULongDType numpy.dtypes.LongLongDType numpy.dtypes.ULongLongDType ## Floating point numpy.dtypes.Float16DType[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/dtypes.py) numpy.dtypes.Float32DType numpy.dtypes.Float64DType numpy.dtypes.LongDoubleDType ## Complex numpy.dtypes.Complex64DType[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/dtypes.py) numpy.dtypes.Complex128DType numpy.dtypes.CLongDoubleDType ## Strings and Bytestrings numpy.dtypes.StrDType[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/dtypes.py) numpy.dtypes.BytesDType numpy.dtypes.StringDType ## Times numpy.dtypes.DateTime64DType[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/dtypes.py) numpy.dtypes.TimeDelta64DType ## Others numpy.dtypes.ObjectDType[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/dtypes.py) numpy.dtypes.VoidDType
# Mathematical functions with automatic domain Note `numpy.emath` is a preferred alias for `numpy.lib.scimath`, available after [`numpy`](index#module-numpy "numpy") is imported. Wrapper functions providing more user-friendly calling of certain math functions whose output data type differs from the input data type in certain domains of the input. For example, for functions like [`log`](generated/numpy.emath.log#numpy.emath.log "numpy.emath.log") with branch cuts, the versions in this module provide the mathematically valid answers in the complex plane: >>> import numpy as np >>> import math >>> np.emath.log(-math.exp(1)) == (1+1j*math.pi) True Similarly, [`sqrt`](generated/numpy.emath.sqrt#numpy.emath.sqrt "numpy.emath.sqrt"), other base logarithms, [`power`](generated/numpy.emath.power#numpy.emath.power "numpy.emath.power") and trig functions are correctly handled. See their respective docstrings for specific examples. ## Functions [`arccos`](generated/numpy.emath.arccos#numpy.emath.arccos "numpy.emath.arccos")(x) | Compute the inverse cosine of x. ---|--- [`arcsin`](generated/numpy.emath.arcsin#numpy.emath.arcsin "numpy.emath.arcsin")(x) | Compute the inverse sine of x. [`arctanh`](generated/numpy.emath.arctanh#numpy.emath.arctanh "numpy.emath.arctanh")(x) | Compute the inverse hyperbolic tangent of `x`. [`log`](generated/numpy.emath.log#numpy.emath.log "numpy.emath.log")(x) | Compute the natural logarithm of `x`. [`log2`](generated/numpy.emath.log2#numpy.emath.log2 "numpy.emath.log2")(x) | Compute the logarithm base 2 of `x`. [`logn`](generated/numpy.emath.logn#numpy.emath.logn "numpy.emath.logn")(n, x) | Take log base n of x. [`log10`](generated/numpy.emath.log10#numpy.emath.log10 "numpy.emath.log10")(x) | Compute the logarithm base 10 of `x`. [`power`](generated/numpy.emath.power#numpy.emath.power "numpy.emath.power")(x, p) | Return x to the power p, (x**p).
[`sqrt`](generated/numpy.emath.sqrt#numpy.emath.sqrt "numpy.emath.sqrt")(x) | Compute the square root of x. # Floating point error handling ## Setting and getting error handling [`seterr`](generated/numpy.seterr#numpy.seterr "numpy.seterr")([all, divide, over, under, invalid]) | Set how floating-point errors are handled. ---|--- [`geterr`](generated/numpy.geterr#numpy.geterr "numpy.geterr")() | Get the current way of handling floating-point errors. [`seterrcall`](generated/numpy.seterrcall#numpy.seterrcall "numpy.seterrcall")(func) | Set the floating-point error callback function or log object. [`geterrcall`](generated/numpy.geterrcall#numpy.geterrcall "numpy.geterrcall")() | Return the current callback function used on floating-point errors. [`errstate`](generated/numpy.errstate#numpy.errstate "numpy.errstate")(**kwargs) | Context manager for floating-point error handling. # Exceptions and Warnings (numpy.exceptions) General exceptions used by NumPy. Note that some exceptions may be module specific, such as linear algebra errors. New in version 1.25: The exceptions module is new in NumPy 1.25. Older exceptions remain available through the main NumPy namespace for compatibility. ## Warnings [`ComplexWarning`](generated/numpy.exceptions.complexwarning#numpy.exceptions.ComplexWarning "numpy.exceptions.ComplexWarning") | The warning raised when casting a complex dtype to a real dtype. ---|--- [`VisibleDeprecationWarning`](generated/numpy.exceptions.visibledeprecationwarning#numpy.exceptions.VisibleDeprecationWarning "numpy.exceptions.VisibleDeprecationWarning") | Visible deprecation warning. [`RankWarning`](generated/numpy.exceptions.rankwarning#numpy.exceptions.RankWarning "numpy.exceptions.RankWarning") | Matrix rank warning. ## Exceptions [`AxisError`](generated/numpy.exceptions.axiserror#numpy.exceptions.AxisError "numpy.exceptions.AxisError")(axis[, ndim, msg_prefix]) | Axis supplied was invalid. 
---|--- [`DTypePromotionError`](generated/numpy.exceptions.dtypepromotionerror#numpy.exceptions.DTypePromotionError "numpy.exceptions.DTypePromotionError") | Multiple DTypes could not be converted to a common one. [`TooHardError`](generated/numpy.exceptions.tooharderror#numpy.exceptions.TooHardError "numpy.exceptions.TooHardError") | max_work was exceeded. # Discrete Fourier Transform (numpy.fft) The SciPy module [`scipy.fft`](https://docs.scipy.org/doc/scipy/reference/fft.html#module- scipy.fft "\(in SciPy v1.14.1\)") is a more comprehensive superset of `numpy.fft`, which includes only a basic set of routines. ## Standard FFTs [`fft`](generated/numpy.fft.fft#numpy.fft.fft "numpy.fft.fft")(a[, n, axis, norm, out]) | Compute the one-dimensional discrete Fourier Transform. ---|--- [`ifft`](generated/numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft")(a[, n, axis, norm, out]) | Compute the one-dimensional inverse discrete Fourier Transform. [`fft2`](generated/numpy.fft.fft2#numpy.fft.fft2 "numpy.fft.fft2")(a[, s, axes, norm, out]) | Compute the 2-dimensional discrete Fourier Transform. [`ifft2`](generated/numpy.fft.ifft2#numpy.fft.ifft2 "numpy.fft.ifft2")(a[, s, axes, norm, out]) | Compute the 2-dimensional inverse discrete Fourier Transform. [`fftn`](generated/numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn")(a[, s, axes, norm, out]) | Compute the N-dimensional discrete Fourier Transform. [`ifftn`](generated/numpy.fft.ifftn#numpy.fft.ifftn "numpy.fft.ifftn")(a[, s, axes, norm, out]) | Compute the N-dimensional inverse discrete Fourier Transform. ## Real FFTs [`rfft`](generated/numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft")(a[, n, axis, norm, out]) | Compute the one-dimensional discrete Fourier Transform for real input. ---|--- [`irfft`](generated/numpy.fft.irfft#numpy.fft.irfft "numpy.fft.irfft")(a[, n, axis, norm, out]) | Computes the inverse of [`rfft`](generated/numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft"). 
[`rfft2`](generated/numpy.fft.rfft2#numpy.fft.rfft2 "numpy.fft.rfft2")(a[, s, axes, norm, out]) | Compute the 2-dimensional FFT of a real array. [`irfft2`](generated/numpy.fft.irfft2#numpy.fft.irfft2 "numpy.fft.irfft2")(a[, s, axes, norm, out]) | Computes the inverse of [`rfft2`](generated/numpy.fft.rfft2#numpy.fft.rfft2 "numpy.fft.rfft2"). [`rfftn`](generated/numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn")(a[, s, axes, norm, out]) | Compute the N-dimensional discrete Fourier Transform for real input. [`irfftn`](generated/numpy.fft.irfftn#numpy.fft.irfftn "numpy.fft.irfftn")(a[, s, axes, norm, out]) | Computes the inverse of [`rfftn`](generated/numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn"). ## Hermitian FFTs [`hfft`](generated/numpy.fft.hfft#numpy.fft.hfft "numpy.fft.hfft")(a[, n, axis, norm, out]) | Compute the FFT of a signal that has Hermitian symmetry, i.e., a real spectrum. ---|--- [`ihfft`](generated/numpy.fft.ihfft#numpy.fft.ihfft "numpy.fft.ihfft")(a[, n, axis, norm, out]) | Compute the inverse FFT of a signal that has Hermitian symmetry. ## Helper routines [`fftfreq`](generated/numpy.fft.fftfreq#numpy.fft.fftfreq "numpy.fft.fftfreq")(n[, d, device]) | Return the Discrete Fourier Transform sample frequencies. ---|--- [`rfftfreq`](generated/numpy.fft.rfftfreq#numpy.fft.rfftfreq "numpy.fft.rfftfreq")(n[, d, device]) | Return the Discrete Fourier Transform sample frequencies (for usage with rfft, irfft). [`fftshift`](generated/numpy.fft.fftshift#numpy.fft.fftshift "numpy.fft.fftshift")(x[, axes]) | Shift the zero-frequency component to the center of the spectrum. [`ifftshift`](generated/numpy.fft.ifftshift#numpy.fft.ifftshift "numpy.fft.ifftshift")(x[, axes]) | The inverse of [`fftshift`](generated/numpy.fft.fftshift#numpy.fft.fftshift "numpy.fft.fftshift"). ## Background information Fourier analysis is fundamentally a method for expressing a function as a sum of periodic components, and for recovering the function from those components. 
When both the function and its Fourier transform are replaced with discretized counterparts, it is called the discrete Fourier transform (DFT). The DFT has become a mainstay of numerical computing in part because of a very fast algorithm for computing it, called the Fast Fourier Transform (FFT), which was known to Gauss (1805) and was brought to light in its current form by Cooley and Tukey [CT]. Press et al. [NR] provide an accessible introduction to Fourier analysis and its applications. Because the discrete Fourier transform separates its input into components that contribute at discrete frequencies, it has a great number of applications in digital signal processing, e.g., for filtering, and in this context the discretized input to the transform is customarily referred to as a _signal_ , which exists in the _time domain_. The output is called a _spectrum_ or _transform_ and exists in the _frequency domain_. ## Implementation details There are many ways to define the DFT, varying in the sign of the exponent, normalization, etc. In this implementation, the DFT is defined as \\[A_k = \sum_{m=0}^{n-1} a_m \exp\left\\{-2\pi i{mk \over n}\right\\} \qquad k = 0,\ldots,n-1.\\] The DFT is in general defined for complex inputs and outputs, and a single- frequency component at linear frequency \\(f\\) is represented by a complex exponential \\(a_m = \exp\\{2\pi i\,f m\Delta t\\}\\), where \\(\Delta t\\) is the sampling interval. The values in the result follow so-called “standard” order: If `A = fft(a, n)`, then `A[0]` contains the zero-frequency term (the sum of the signal), which is always purely real for real inputs. Then `A[1:n/2]` contains the positive-frequency terms, and `A[n/2+1:]` contains the negative-frequency terms, in order of decreasingly negative frequency. For an even number of input points, `A[n/2]` represents both positive and negative Nyquist frequency, and is also purely real for real input. 
For an odd number of input points, `A[(n-1)/2]` contains the largest positive frequency, while `A[(n+1)/2]` contains the largest negative frequency. The routine `np.fft.fftfreq(n)` returns an array giving the frequencies of corresponding elements in the output. The routine `np.fft.fftshift(A)` shifts transforms and their frequencies to put the zero-frequency components in the middle, and `np.fft.ifftshift(A)` undoes that shift. When the input `a` is a time-domain signal and `A = fft(a)`, `np.abs(A)` is its amplitude spectrum and `np.abs(A)**2` is its power spectrum. The phase spectrum is obtained by `np.angle(A)`. The inverse DFT is defined as \\[a_m = \frac{1}{n}\sum_{k=0}^{n-1}A_k\exp\left\\{2\pi i{mk\over n}\right\\} \qquad m = 0,\ldots,n-1.\\] It differs from the forward transform by the sign of the exponential argument and the default normalization by \\(1/n\\). ## Type Promotion `numpy.fft` promotes `float32` and `complex64` arrays to `float64` and `complex128` arrays respectively. For an FFT implementation that does not promote input arrays, see [`scipy.fftpack`](https://docs.scipy.org/doc/scipy/reference/fftpack.html#module- scipy.fftpack "\(in SciPy v1.14.1\)"). ## Normalization The argument `norm` indicates which direction of the pair of direct/inverse transforms is scaled and with what normalization factor. The default normalization (`"backward"`) has the direct (forward) transforms unscaled and the inverse (backward) transforms scaled by \\(1/n\\). It is possible to obtain unitary transforms by setting the keyword argument `norm` to `"ortho"` so that both direct and inverse transforms are scaled by \\(1/\sqrt{n}\\). Finally, setting the keyword argument `norm` to `"forward"` has the direct transforms scaled by \\(1/n\\) and the inverse transforms unscaled (i.e. exactly opposite to the default `"backward"`). `None` is an alias of the default option `"backward"` for backward compatibility. 
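The "standard" ordering and the normalization options described above can be checked directly. A minimal sketch (the sample values are arbitrary):

```python
import numpy as np

# Eight samples of an arbitrary real signal.
a = np.array([1.0, 2.0, 1.0, -1.0, 1.5, 0.5, -0.5, 0.0])
A = np.fft.fft(a)

# A[0] is the zero-frequency term: the plain sum of the signal,
# purely real for real input.
assert np.allclose(A[0], a.sum())

# fftfreq(n) labels each output element with its frequency in standard order:
# zero first, then positive frequencies, then negative ones.
freqs = np.fft.fftfreq(8)
# -> [ 0.     0.125  0.25   0.375 -0.5   -0.375 -0.25  -0.125]

# fftshift centers the zero-frequency component; ifftshift undoes the shift.
assert np.allclose(np.fft.ifftshift(np.fft.fftshift(freqs)), freqs)

# norm="ortho" scales the forward transform by 1/sqrt(n) instead of leaving
# it unscaled, making the direct/inverse pair unitary.
assert np.allclose(np.fft.fft(a, norm="ortho"), A / np.sqrt(8))

# The inverse transform applies the default 1/n scaling and recovers the signal.
assert np.allclose(np.fft.ifft(A), a)
```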
## Real and Hermitian transforms When the input is purely real, its transform is Hermitian, i.e., the component at frequency \\(f_k\\) is the complex conjugate of the component at frequency \\(-f_k\\), which means that for real inputs there is no information in the negative frequency components that is not already available from the positive frequency components. The family of [`rfft`](generated/numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft") functions is designed to operate on real inputs, and exploits this symmetry by computing only the positive frequency components, up to and including the Nyquist frequency. Thus, `n` input points produce `n/2+1` complex output points. The inverses of this family assume the same symmetry of their input, and for an output of `n` points use `n/2+1` input points. Correspondingly, when the spectrum is purely real, the signal is Hermitian. The [`hfft`](generated/numpy.fft.hfft#numpy.fft.hfft "numpy.fft.hfft") family of functions exploits this symmetry by using `n/2+1` complex points in the input (time) domain for `n` real points in the frequency domain. In higher dimensions, FFTs are used, e.g., for image analysis and filtering. The computational efficiency of the FFT means that it can also be a faster way to compute large convolutions, using the property that a convolution in the time domain is equivalent to a point-by-point multiplication in the frequency domain. ## Higher dimensions In two dimensions, the DFT is defined as \\[A_{kl} = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} a_{mn}\exp\left\\{-2\pi i \left({mk\over M}+{nl\over N}\right)\right\\} \qquad k = 0, \ldots, M-1;\quad l = 0, \ldots, N-1,\\] which extends in the obvious way to higher dimensions, and the inverses in higher dimensions also extend in the same way. ## References [CT] Cooley, James W., and John W. Tukey, 1965, “An algorithm for the machine calculation of complex Fourier series,” _Math. Comput._ 19: 297-301. 
[NR] Press, W., Teukolsky, S., Vetterling, W.T., and Flannery, B.P., 2007, _Numerical Recipes: The Art of Scientific Computing_ , ch. 12-13. Cambridge Univ. Press, Cambridge, UK. ## Examples For examples, see the various functions. # Functional programming [`apply_along_axis`](generated/numpy.apply_along_axis#numpy.apply_along_axis "numpy.apply_along_axis")(func1d, axis, arr, *args, ...) | Apply a function to 1-D slices along the given axis. ---|--- [`apply_over_axes`](generated/numpy.apply_over_axes#numpy.apply_over_axes "numpy.apply_over_axes")(func, a, axes) | Apply a function repeatedly over multiple axes. [`vectorize`](generated/numpy.vectorize#numpy.vectorize "numpy.vectorize")([pyfunc, otypes, doc, excluded, ...]) | Returns an object that acts like pyfunc, but takes arrays as input. [`frompyfunc`](generated/numpy.frompyfunc#numpy.frompyfunc "numpy.frompyfunc")(func, /, nin, nout, *[, identity]) | Takes an arbitrary Python function and returns a NumPy ufunc. [`piecewise`](generated/numpy.piecewise#numpy.piecewise "numpy.piecewise")(x, condlist, funclist, *args, **kw) | Evaluate a piecewise-defined function. # Routines and objects by topic In this chapter routine docstrings are presented, grouped by functionality. Many docstrings contain example code, which demonstrates basic usage of the routine. The examples assume that NumPy is imported with: >>> import numpy as np A convenient way to execute examples is the `%doctest_mode` mode of IPython, which allows for pasting of multi-line examples and preserves indentation. 
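A minimal sketch of the functional-programming helpers listed above (the arrays and lambdas are arbitrary illustrations):

```python
import numpy as np

x = np.array([[1, 2, 3],
              [4, 5, 6]])

# apply_along_axis maps a 1-D function over slices taken along the given axis;
# here it computes the range (max - min) of each column.
print(np.apply_along_axis(lambda v: v.max() - v.min(), 0, x))  # [3 3 3]

# vectorize wraps a scalar Python function so it broadcasts over arrays
# (a convenience for elementwise application, not a performance tool).
halve_if_even = np.vectorize(lambda v: v // 2 if v % 2 == 0 else v)
print(halve_if_even(x))
# [[1 1 3]
#  [2 5 3]]

# piecewise evaluates a different function on each region of the input.
t = np.array([-2.0, 0.0, 3.0])
print(np.piecewise(t, [t < 0, t >= 0], [lambda v: -v, lambda v: v]))  # [2. 0. 3.]
```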
* [Constants](constants) * [Array creation routines](routines.array-creation) * [Array manipulation routines](routines.array-manipulation) * [Bit-wise operations](routines.bitwise) * [String functionality](routines.strings) * [Datetime support functions](routines.datetime) * [Data type routines](routines.dtype) * [Mathematical functions with automatic domain](routines.emath) * [Floating point error handling](routines.err) * [Exceptions and Warnings (`numpy.exceptions`)](routines.exceptions) * [Discrete Fourier Transform (`numpy.fft`)](routines.fft) * [Functional programming](routines.functional) * [Input and output](routines.io) * [Indexing routines](routines.indexing) * [Linear algebra (`numpy.linalg`)](routines.linalg) * [Logic functions](routines.logic) * [Masked array operations](routines.ma) * [Mathematical functions](routines.math) * [Miscellaneous routines](routines.other) * [Polynomials](routines.polynomials) * [Random sampling (`numpy.random`)](random/index) * [Set routines](routines.set) * [Sorting, searching, and counting](routines.sort) * [Statistics](routines.statistics) * [Test support (`numpy.testing`)](routines.testing) * [Window functions](routines.window) # Indexing routines See also [Indexing on ndarrays](../user/basics.indexing#basics-indexing) ## Generating index arrays [`c_`](generated/numpy.c_#numpy.c_ "numpy.c_") | Translates slice objects to concatenation along the second axis. ---|--- [`r_`](generated/numpy.r_#numpy.r_ "numpy.r_") | Translates slice objects to concatenation along the first axis. [`s_`](generated/numpy.s_#numpy.s_ "numpy.s_") | A nicer way to build up index tuples for arrays. [`nonzero`](generated/numpy.nonzero#numpy.nonzero "numpy.nonzero")(a) | Return the indices of the elements that are non-zero. [`where`](generated/numpy.where#numpy.where "numpy.where")(condition, [x, y], /) | Return elements chosen from `x` or `y` depending on `condition`. 
[`indices`](generated/numpy.indices#numpy.indices "numpy.indices")(dimensions[, dtype, sparse]) | Return an array representing the indices of a grid. [`ix_`](generated/numpy.ix_#numpy.ix_ "numpy.ix_")(*args) | Construct an open mesh from multiple sequences. [`ogrid`](generated/numpy.ogrid#numpy.ogrid "numpy.ogrid") | An instance which returns an open multi-dimensional "meshgrid". [`ravel_multi_index`](generated/numpy.ravel_multi_index#numpy.ravel_multi_index "numpy.ravel_multi_index")(multi_index, dims[, mode, ...]) | Converts a tuple of index arrays into an array of flat indices, applying boundary modes to the multi-index. [`unravel_index`](generated/numpy.unravel_index#numpy.unravel_index "numpy.unravel_index")(indices, shape[, order]) | Converts a flat index or array of flat indices into a tuple of coordinate arrays. [`diag_indices`](generated/numpy.diag_indices#numpy.diag_indices "numpy.diag_indices")(n[, ndim]) | Return the indices to access the main diagonal of an array. [`diag_indices_from`](generated/numpy.diag_indices_from#numpy.diag_indices_from "numpy.diag_indices_from")(arr) | Return the indices to access the main diagonal of an n-dimensional array. [`mask_indices`](generated/numpy.mask_indices#numpy.mask_indices "numpy.mask_indices")(n, mask_func[, k]) | Return the indices to access (n, n) arrays, given a masking function. [`tril_indices`](generated/numpy.tril_indices#numpy.tril_indices "numpy.tril_indices")(n[, k, m]) | Return the indices for the lower-triangle of an (n, m) array. [`tril_indices_from`](generated/numpy.tril_indices_from#numpy.tril_indices_from "numpy.tril_indices_from")(arr[, k]) | Return the indices for the lower-triangle of arr. [`triu_indices`](generated/numpy.triu_indices#numpy.triu_indices "numpy.triu_indices")(n[, k, m]) | Return the indices for the upper-triangle of an (n, m) array. 
[`triu_indices_from`](generated/numpy.triu_indices_from#numpy.triu_indices_from "numpy.triu_indices_from")(arr[, k]) | Return the indices for the upper-triangle of arr. ## Indexing-like operations [`take`](generated/numpy.take#numpy.take "numpy.take")(a, indices[, axis, out, mode]) | Take elements from an array along an axis. ---|--- [`take_along_axis`](generated/numpy.take_along_axis#numpy.take_along_axis "numpy.take_along_axis")(arr, indices, axis) | Take values from the input array by matching 1d index and data slices. [`choose`](generated/numpy.choose#numpy.choose "numpy.choose")(a, choices[, out, mode]) | Construct an array from an index array and a list of arrays to choose from. [`compress`](generated/numpy.compress#numpy.compress "numpy.compress")(condition, a[, axis, out]) | Return selected slices of an array along given axis. [`diag`](generated/numpy.diag#numpy.diag "numpy.diag")(v[, k]) | Extract a diagonal or construct a diagonal array. [`diagonal`](generated/numpy.diagonal#numpy.diagonal "numpy.diagonal")(a[, offset, axis1, axis2]) | Return specified diagonals. [`select`](generated/numpy.select#numpy.select "numpy.select")(condlist, choicelist[, default]) | Return an array drawn from elements in choicelist, depending on conditions. ## Inserting data into arrays [`place`](generated/numpy.place#numpy.place "numpy.place")(arr, mask, vals) | Change elements of an array based on conditional and input values. ---|--- [`put`](generated/numpy.put#numpy.put "numpy.put")(a, ind, v[, mode]) | Replaces specified elements of an array with given values. [`put_along_axis`](generated/numpy.put_along_axis#numpy.put_along_axis "numpy.put_along_axis")(arr, indices, values, axis) | Put values into the destination array by matching 1d index and data slices. [`putmask`](generated/numpy.putmask#numpy.putmask "numpy.putmask")(a, mask, values) | Changes elements of an array based on conditional and input values. 
[`fill_diagonal`](generated/numpy.fill_diagonal#numpy.fill_diagonal "numpy.fill_diagonal")(a, val[, wrap]) | Fill the main diagonal of the given array of any dimensionality. ## Iterating over arrays [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer")(op[, flags, op_flags, op_dtypes, ...]) | Efficient multi-dimensional iterator object to iterate over arrays. ---|--- [`ndenumerate`](generated/numpy.ndenumerate#numpy.ndenumerate "numpy.ndenumerate")(arr) | Multidimensional index iterator. [`ndindex`](generated/numpy.ndindex#numpy.ndindex "numpy.ndindex")(*shape) | An N-dimensional iterator object to index arrays. [`nested_iters`](generated/numpy.nested_iters#numpy.nested_iters "numpy.nested_iters")(op, axes[, flags, op_flags, ...]) | Create nditers for use in nested loops [`flatiter`](generated/numpy.flatiter#numpy.flatiter "numpy.flatiter")() | Flat iterator object to iterate over arrays. [`iterable`](generated/numpy.iterable#numpy.iterable "numpy.iterable")(y) | Check whether or not an object can be iterated over. # Input and output ## NumPy binary files (npy, npz) [`load`](generated/numpy.load#numpy.load "numpy.load")(file[, mmap_mode, allow_pickle, ...]) | Load arrays or pickled objects from `.npy`, `.npz` or pickled files. ---|--- [`save`](generated/numpy.save#numpy.save "numpy.save")(file, arr[, allow_pickle, fix_imports]) | Save an array to a binary file in NumPy `.npy` format. [`savez`](generated/numpy.savez#numpy.savez "numpy.savez")(file, *args[, allow_pickle]) | Save several arrays into a single file in uncompressed `.npz` format. [`savez_compressed`](generated/numpy.savez_compressed#numpy.savez_compressed "numpy.savez_compressed")(file, *args[, allow_pickle]) | Save several arrays into a single file in compressed `.npz` format. [`lib.npyio.NpzFile`](generated/numpy.lib.npyio.npzfile#numpy.lib.npyio.NpzFile "numpy.lib.npyio.NpzFile")(fid) | A dictionary-like object with lazy-loading of files in the zipped archive provided on construction. 
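The round trip through the binary formats above can be sketched as follows (the file names are arbitrary):

```python
import os
import tempfile

import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.linspace(0.0, 1.0, 5)

with tempfile.TemporaryDirectory() as d:
    # A .npy file holds a single array; load() restores shape and dtype exactly.
    path = os.path.join(d, "a.npy")
    np.save(path, a)
    restored = np.load(path)
    assert restored.shape == (2, 3) and restored.dtype == a.dtype
    assert (restored == a).all()

    # A .npz archive bundles several arrays; keyword names become the keys,
    # and the returned NpzFile object loads each member lazily on access.
    npz_path = os.path.join(d, "arrays.npz")
    np.savez(npz_path, a=a, b=b)
    with np.load(npz_path) as npz:
        print(sorted(npz.files))  # ['a', 'b']
        assert np.allclose(npz["b"], b)
```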
The format of these binary file types is documented in [`numpy.lib.format`](generated/numpy.lib.format#module-numpy.lib.format "numpy.lib.format") ## Text files [`loadtxt`](generated/numpy.loadtxt#numpy.loadtxt "numpy.loadtxt")(fname[, dtype, comments, delimiter, ...]) | Load data from a text file. ---|--- [`savetxt`](generated/numpy.savetxt#numpy.savetxt "numpy.savetxt")(fname, X[, fmt, delimiter, newline, ...]) | Save an array to a text file. [`genfromtxt`](generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt")(fname[, dtype, comments, ...]) | Load data from a text file, with missing values handled as specified. [`fromregex`](generated/numpy.fromregex#numpy.fromregex "numpy.fromregex")(file, regexp, dtype[, encoding]) | Construct an array from a text file, using regular expression parsing. [`fromstring`](generated/numpy.fromstring#numpy.fromstring "numpy.fromstring")(string[, dtype, count, like]) | A new 1-D array initialized from text data in a string. [`ndarray.tofile`](generated/numpy.ndarray.tofile#numpy.ndarray.tofile "numpy.ndarray.tofile")(fid[, sep, format]) | Write array to a file as text or binary (default). [`ndarray.tolist`](generated/numpy.ndarray.tolist#numpy.ndarray.tolist "numpy.ndarray.tolist")() | Return the array as an `a.ndim`-levels deep nested list of Python scalars. ## Raw binary files [`fromfile`](generated/numpy.fromfile#numpy.fromfile "numpy.fromfile")(file[, dtype, count, sep, offset, like]) | Construct an array from data in a text or binary file. ---|--- [`ndarray.tofile`](generated/numpy.ndarray.tofile#numpy.ndarray.tofile "numpy.ndarray.tofile")(fid[, sep, format]) | Write array to a file as text or binary (default). ## String formatting [`array2string`](generated/numpy.array2string#numpy.array2string "numpy.array2string")(a[, max_line_width, precision, ...]) | Return a string representation of an array. 
---|--- [`array_repr`](generated/numpy.array_repr#numpy.array_repr "numpy.array_repr")(arr[, max_line_width, precision, ...]) | Return the string representation of an array. [`array_str`](generated/numpy.array_str#numpy.array_str "numpy.array_str")(a[, max_line_width, precision, ...]) | Return a string representation of the data in an array. [`format_float_positional`](generated/numpy.format_float_positional#numpy.format_float_positional "numpy.format_float_positional")(x[, precision, ...]) | Format a floating-point scalar as a decimal string in positional notation. [`format_float_scientific`](generated/numpy.format_float_scientific#numpy.format_float_scientific "numpy.format_float_scientific")(x[, precision, ...]) | Format a floating-point scalar as a decimal string in scientific notation. ## Memory mapping files [`memmap`](generated/numpy.memmap#numpy.memmap "numpy.memmap")(filename[, dtype, mode, offset, ...]) | Create a memory-map to an array stored in a _binary_ file on disk. ---|--- [`lib.format.open_memmap`](generated/numpy.lib.format.open_memmap#numpy.lib.format.open_memmap "numpy.lib.format.open_memmap")(filename[, mode, ...]) | Open a .npy file as a memory-mapped array. ## Text formatting options [`set_printoptions`](generated/numpy.set_printoptions#numpy.set_printoptions "numpy.set_printoptions")([precision, threshold, ...]) | Set printing options. ---|--- [`get_printoptions`](generated/numpy.get_printoptions#numpy.get_printoptions "numpy.get_printoptions")() | Return the current print options. [`printoptions`](generated/numpy.printoptions#numpy.printoptions "numpy.printoptions")(*args, **kwargs) | Context manager for setting print options. ## Base-n representations [`binary_repr`](generated/numpy.binary_repr#numpy.binary_repr "numpy.binary_repr")(num[, width]) | Return the binary representation of the input number as a string. 
---|--- [`base_repr`](generated/numpy.base_repr#numpy.base_repr "numpy.base_repr")(number[, base, padding]) | Return a string representation of a number in the given base system. ## Data sources [`lib.npyio.DataSource`](generated/numpy.lib.npyio.datasource#numpy.lib.npyio.DataSource "numpy.lib.npyio.DataSource")([destpath]) | A generic data source file (file, http, ftp, ...). ---|--- ## Binary format description [`lib.format`](generated/numpy.lib.format#module-numpy.lib.format "numpy.lib.format") | Binary serialization ---|--- # Lib module (numpy.lib) ## Functions & other objects [`add_docstring`](generated/numpy.lib.add_docstring#numpy.lib.add_docstring "numpy.lib.add_docstring")(obj, docstring) | Add a docstring to a built-in obj if possible. ---|--- [`add_newdoc`](generated/numpy.lib.add_newdoc#numpy.lib.add_newdoc "numpy.lib.add_newdoc")(place, obj, doc[, warn_on_python]) | Add documentation to an existing object, typically one defined in C [`Arrayterator`](generated/numpy.lib.arrayterator#numpy.lib.Arrayterator "numpy.lib.Arrayterator")(var[, buf_size]) | Buffered iterator for big arrays. [`NumpyVersion`](generated/numpy.lib.numpyversion#numpy.lib.NumpyVersion "numpy.lib.NumpyVersion")(vstring) | Parse and compare numpy version strings. ## Submodules [`array_utils`](generated/numpy.lib.array_utils#module-numpy.lib.array_utils "numpy.lib.array_utils") | Miscellaneous utils. ---|--- [`format`](generated/numpy.lib.format#module-numpy.lib.format "numpy.lib.format") | Binary serialization [`introspect`](generated/numpy.lib.introspect#module-numpy.lib.introspect "numpy.lib.introspect") | Introspection helper functions. [`mixins`](generated/numpy.lib.mixins#module-numpy.lib.mixins "numpy.lib.mixins") | Mixin classes for custom array types that don't inherit from ndarray. [`npyio`](generated/numpy.lib.npyio#module-numpy.lib.npyio "numpy.lib.npyio") | IO related functions. 
[`scimath`](generated/numpy.lib.scimath#module-numpy.lib.scimath "numpy.lib.scimath") | Wrapper functions to more user-friendly calling of certain math functions whose output data-type is different than the input data-type in certain domains of the input. [`stride_tricks`](generated/numpy.lib.stride_tricks#module-numpy.lib.stride_tricks "numpy.lib.stride_tricks") | Utilities that manipulate strides to achieve desirable effects. # Linear algebra (numpy.linalg) The NumPy linear algebra functions rely on BLAS and LAPACK to provide efficient low level implementations of standard linear algebra algorithms. Those libraries may be provided by NumPy itself using C versions of a subset of their reference implementations but, when possible, highly optimized libraries that take advantage of specialized processor functionality are preferred. Examples of such libraries are [OpenBLAS](https://www.openblas.net/), MKL (TM), and ATLAS. Because those libraries are multithreaded and processor dependent, environmental variables and external packages such as [threadpoolctl](https://github.com/joblib/threadpoolctl) may be needed to control the number of threads or specify the processor architecture. The SciPy library also contains a [`linalg`](https://docs.scipy.org/doc/scipy/reference/linalg.html#module- scipy.linalg "\(in SciPy v1.14.1\)") submodule, and there is overlap in the functionality provided by the SciPy and NumPy submodules. SciPy contains functions not found in `numpy.linalg`, such as functions related to LU decomposition and the Schur decomposition, multiple ways of calculating the pseudoinverse, and matrix transcendentals such as the matrix logarithm. Some functions that exist in both have augmented functionality in [`scipy.linalg`](https://docs.scipy.org/doc/scipy/reference/linalg.html#module- scipy.linalg "\(in SciPy v1.14.1\)"). 
For example, [`scipy.linalg.eig`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eig.html#scipy.linalg.eig "\(in SciPy v1.14.1\)") can take a second matrix argument for solving generalized eigenvalue problems. Some functions in NumPy, however, have more flexible broadcasting options. For example, [`numpy.linalg.solve`](generated/numpy.linalg.solve#numpy.linalg.solve "numpy.linalg.solve") can handle “stacked” arrays, while [`scipy.linalg.solve`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solve.html#scipy.linalg.solve "\(in SciPy v1.14.1\)") accepts only a single square array as its first argument. Note The term _matrix_ as it is used on this page indicates a 2d [`numpy.array`](generated/numpy.array#numpy.array "numpy.array") object, and _not_ a [`numpy.matrix`](generated/numpy.matrix#numpy.matrix "numpy.matrix") object. The latter is no longer recommended, even for linear algebra. See [the matrix object documentation](arrays.classes#matrix-objects) for more information. ## The `@` operator Introduced in NumPy 1.10.0, the `@` operator is preferable to other methods when computing the matrix product between 2d arrays. The [`numpy.matmul`](generated/numpy.matmul#numpy.matmul "numpy.matmul") function implements the `@` operator. ## Matrix and vector products [`dot`](generated/numpy.dot#numpy.dot "numpy.dot")(a, b[, out]) | Dot product of two arrays. ---|--- [`linalg.multi_dot`](generated/numpy.linalg.multi_dot#numpy.linalg.multi_dot "numpy.linalg.multi_dot")(arrays, *[, out]) | Compute the dot product of two or more arrays in a single function call, while automatically selecting the fastest evaluation order. [`vdot`](generated/numpy.vdot#numpy.vdot "numpy.vdot")(a, b, /) | Return the dot product of two vectors. [`vecdot`](generated/numpy.vecdot#numpy.vecdot "numpy.vecdot")(x1, x2, /[, out, casting, order, ...]) | Vector dot product of two arrays. 
[`linalg.vecdot`](generated/numpy.linalg.vecdot#numpy.linalg.vecdot "numpy.linalg.vecdot")(x1, x2, /, *[, axis]) | Computes the vector dot product. [`inner`](generated/numpy.inner#numpy.inner "numpy.inner")(a, b, /) | Inner product of two arrays. [`outer`](generated/numpy.outer#numpy.outer "numpy.outer")(a, b[, out]) | Compute the outer product of two vectors. [`matmul`](generated/numpy.matmul#numpy.matmul "numpy.matmul")(x1, x2, /[, out, casting, order, ...]) | Matrix product of two arrays. [`linalg.matmul`](generated/numpy.linalg.matmul#numpy.linalg.matmul "numpy.linalg.matmul")(x1, x2, /) | Computes the matrix product. [`matvec`](generated/numpy.matvec#numpy.matvec "numpy.matvec")(x1, x2, /[, out, casting, order, ...]) | Matrix-vector dot product of two arrays. [`vecmat`](generated/numpy.vecmat#numpy.vecmat "numpy.vecmat")(x1, x2, /[, out, casting, order, ...]) | Vector-matrix dot product of two arrays. [`tensordot`](generated/numpy.tensordot#numpy.tensordot "numpy.tensordot")(a, b[, axes]) | Compute tensor dot product along specified axes. [`linalg.tensordot`](generated/numpy.linalg.tensordot#numpy.linalg.tensordot "numpy.linalg.tensordot")(x1, x2, /, *[, axes]) | Compute tensor dot product along specified axes. [`einsum`](generated/numpy.einsum#numpy.einsum "numpy.einsum")(subscripts, *operands[, out, dtype, ...]) | Evaluates the Einstein summation convention on the operands. [`einsum_path`](generated/numpy.einsum_path#numpy.einsum_path "numpy.einsum_path")(subscripts, *operands[, optimize]) | Evaluates the lowest cost contraction order for an einsum expression by considering the creation of intermediate arrays. [`linalg.matrix_power`](generated/numpy.linalg.matrix_power#numpy.linalg.matrix_power "numpy.linalg.matrix_power")(a, n) | Raise a square matrix to the (integer) power `n`. [`kron`](generated/numpy.kron#numpy.kron "numpy.kron")(a, b) | Kronecker product of two arrays. 
[`linalg.cross`](generated/numpy.linalg.cross#numpy.linalg.cross "numpy.linalg.cross")(x1, x2, /, *[, axis]) | Returns the cross product of 3-element vectors.

## Decompositions

[`linalg.cholesky`](generated/numpy.linalg.cholesky#numpy.linalg.cholesky "numpy.linalg.cholesky")(a, /, *[, upper]) | Cholesky decomposition.
---|---
[`linalg.outer`](generated/numpy.linalg.outer#numpy.linalg.outer "numpy.linalg.outer")(x1, x2, /) | Compute the outer product of two vectors.
[`linalg.qr`](generated/numpy.linalg.qr#numpy.linalg.qr "numpy.linalg.qr")(a[, mode]) | Compute the qr factorization of a matrix.
[`linalg.svd`](generated/numpy.linalg.svd#numpy.linalg.svd "numpy.linalg.svd")(a[, full_matrices, compute_uv, ...]) | Singular Value Decomposition.
[`linalg.svdvals`](generated/numpy.linalg.svdvals#numpy.linalg.svdvals "numpy.linalg.svdvals")(x, /) | Returns the singular values of a matrix (or a stack of matrices) `x`.

## Matrix eigenvalues

[`linalg.eig`](generated/numpy.linalg.eig#numpy.linalg.eig "numpy.linalg.eig")(a) | Compute the eigenvalues and right eigenvectors of a square array.
---|---
[`linalg.eigh`](generated/numpy.linalg.eigh#numpy.linalg.eigh "numpy.linalg.eigh")(a[, UPLO]) | Return the eigenvalues and eigenvectors of a complex Hermitian (conjugate symmetric) or a real symmetric matrix.
[`linalg.eigvals`](generated/numpy.linalg.eigvals#numpy.linalg.eigvals "numpy.linalg.eigvals")(a) | Compute the eigenvalues of a general matrix.
[`linalg.eigvalsh`](generated/numpy.linalg.eigvalsh#numpy.linalg.eigvalsh "numpy.linalg.eigvalsh")(a[, UPLO]) | Compute the eigenvalues of a complex Hermitian or real symmetric matrix.

## Norms and other numbers

[`linalg.norm`](generated/numpy.linalg.norm#numpy.linalg.norm "numpy.linalg.norm")(x[, ord, axis, keepdims]) | Matrix or vector norm.
---|---
[`linalg.matrix_norm`](generated/numpy.linalg.matrix_norm#numpy.linalg.matrix_norm "numpy.linalg.matrix_norm")(x, /, *[, keepdims, ord]) | Computes the matrix norm of a matrix (or a stack of matrices) `x`.
[`linalg.vector_norm`](generated/numpy.linalg.vector_norm#numpy.linalg.vector_norm "numpy.linalg.vector_norm")(x, /, *[, axis, ...]) | Computes the vector norm of a vector (or batch of vectors) `x`.
[`linalg.cond`](generated/numpy.linalg.cond#numpy.linalg.cond "numpy.linalg.cond")(x[, p]) | Compute the condition number of a matrix.
[`linalg.det`](generated/numpy.linalg.det#numpy.linalg.det "numpy.linalg.det")(a) | Compute the determinant of an array.
[`linalg.matrix_rank`](generated/numpy.linalg.matrix_rank#numpy.linalg.matrix_rank "numpy.linalg.matrix_rank")(A[, tol, hermitian, rtol]) | Return matrix rank of array using the SVD method.
[`linalg.slogdet`](generated/numpy.linalg.slogdet#numpy.linalg.slogdet "numpy.linalg.slogdet")(a) | Compute the sign and (natural) logarithm of the determinant of an array.
[`trace`](generated/numpy.trace#numpy.trace "numpy.trace")(a[, offset, axis1, axis2, dtype, out]) | Return the sum along diagonals of the array.
[`linalg.trace`](generated/numpy.linalg.trace#numpy.linalg.trace "numpy.linalg.trace")(x, /, *[, offset, dtype]) | Returns the sum along the specified diagonals of a matrix (or a stack of matrices) `x`.

## Solving equations and inverting matrices

[`linalg.solve`](generated/numpy.linalg.solve#numpy.linalg.solve "numpy.linalg.solve")(a, b) | Solve a linear matrix equation, or system of linear scalar equations.
---|---
[`linalg.tensorsolve`](generated/numpy.linalg.tensorsolve#numpy.linalg.tensorsolve "numpy.linalg.tensorsolve")(a, b[, axes]) | Solve the tensor equation `a x = b` for x.
[`linalg.lstsq`](generated/numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq")(a, b[, rcond]) | Return the least-squares solution to a linear matrix equation.
[`linalg.inv`](generated/numpy.linalg.inv#numpy.linalg.inv "numpy.linalg.inv")(a) | Compute the inverse of a matrix.
[`linalg.pinv`](generated/numpy.linalg.pinv#numpy.linalg.pinv "numpy.linalg.pinv")(a[, rcond, hermitian, rtol]) | Compute the (Moore-Penrose) pseudo-inverse of a matrix.
[`linalg.tensorinv`](generated/numpy.linalg.tensorinv#numpy.linalg.tensorinv "numpy.linalg.tensorinv")(a[, ind]) | Compute the 'inverse' of an N-dimensional array.

## Other matrix operations

[`diagonal`](generated/numpy.diagonal#numpy.diagonal "numpy.diagonal")(a[, offset, axis1, axis2]) | Return specified diagonals.
---|---
[`linalg.diagonal`](generated/numpy.linalg.diagonal#numpy.linalg.diagonal "numpy.linalg.diagonal")(x, /, *[, offset]) | Returns specified diagonals of a matrix (or a stack of matrices) `x`.
[`linalg.matrix_transpose`](generated/numpy.linalg.matrix_transpose#numpy.linalg.matrix_transpose "numpy.linalg.matrix_transpose")(x, /) | Transposes a matrix (or a stack of matrices) `x`.

## Exceptions

[`linalg.LinAlgError`](generated/numpy.linalg.linalgerror#numpy.linalg.LinAlgError "numpy.linalg.LinAlgError") | Generic Python-exception-derived object raised by linalg functions.
---|---

## Linear algebra on several matrices at once

Several of the linear algebra routines listed above are able to compute results for several matrices at once, if they are stacked into the same array. This is indicated in the documentation via input parameter specifications such as `a : (..., M, M) array_like`. This means that, for instance, an input array with `a.shape == (N, M, M)` is interpreted as a “stack” of N matrices, each of size M-by-M. A similar specification applies to return values; for instance, the determinant is specified as `det : (...)`, and in this case returns an array of shape `det(a).shape == (N,)`.
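As a short illustration of this stacking behaviour, the sketch below builds an array of shape `(2, 2, 2)` holding two 2-by-2 matrices and applies `det` and `solve` to the whole stack at once:

```python
import numpy as np

# Two 2x2 matrices stacked into one array of shape (2, 2, 2).
a = np.array([[[1.0, 2.0],
               [3.0, 4.0]],
              [[2.0, 0.0],
               [0.0, 2.0]]])

# det operates on each trailing M-by-M matrix: one value per matrix.
d = np.linalg.det(a)
print(d.shape)  # (2,)
print(d)        # determinants -2 and 4

# solve broadcasts too: a stack of two 2x1 right-hand sides
# yields a stack of two 2x1 solutions.
b = np.ones((2, 2, 1))
x = np.linalg.solve(a, b)
print(x.shape)  # (2, 2, 1)
```

Looping over the leading dimension in Python would give the same results, but the stacked call dispatches the whole batch to compiled code in one go.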
This generalizes to linear algebra operations on higher-dimensional arrays: the last 1 or 2 dimensions of a multidimensional array are interpreted as vectors or matrices, as appropriate for each operation.

# Logic functions

## Truth value testing

[`all`](generated/numpy.all#numpy.all "numpy.all")(a[, axis, out, keepdims, where]) | Test whether all array elements along a given axis evaluate to True.
---|---
[`any`](generated/numpy.any#numpy.any "numpy.any")(a[, axis, out, keepdims, where]) | Test whether any array element along a given axis evaluates to True.

## Array contents

[`isfinite`](generated/numpy.isfinite#numpy.isfinite "numpy.isfinite")(x, /[, out, where, casting, order, ...]) | Test element-wise for finiteness (not infinity and not Not a Number).
---|---
[`isinf`](generated/numpy.isinf#numpy.isinf "numpy.isinf")(x, /[, out, where, casting, order, ...]) | Test element-wise for positive or negative infinity.
[`isnan`](generated/numpy.isnan#numpy.isnan "numpy.isnan")(x, /[, out, where, casting, order, ...]) | Test element-wise for NaN and return result as a boolean array.
[`isnat`](generated/numpy.isnat#numpy.isnat "numpy.isnat")(x, /[, out, where, casting, order, ...]) | Test element-wise for NaT (not a time) and return result as a boolean array.
[`isneginf`](generated/numpy.isneginf#numpy.isneginf "numpy.isneginf")(x[, out]) | Test element-wise for negative infinity, return result as bool array.
[`isposinf`](generated/numpy.isposinf#numpy.isposinf "numpy.isposinf")(x[, out]) | Test element-wise for positive infinity, return result as bool array.

## Array type testing

[`iscomplex`](generated/numpy.iscomplex#numpy.iscomplex "numpy.iscomplex")(x) | Returns a bool array, where True if input element is complex.
---|---
[`iscomplexobj`](generated/numpy.iscomplexobj#numpy.iscomplexobj "numpy.iscomplexobj")(x) | Check for a complex type or an array of complex numbers.
[`isfortran`](generated/numpy.isfortran#numpy.isfortran "numpy.isfortran")(a) | Check if the array is Fortran contiguous but _not_ C contiguous.
[`isreal`](generated/numpy.isreal#numpy.isreal "numpy.isreal")(x) | Returns a bool array, where True if input element is real.
[`isrealobj`](generated/numpy.isrealobj#numpy.isrealobj "numpy.isrealobj")(x) | Return True if x is neither a complex type nor an array of complex numbers.
[`isscalar`](generated/numpy.isscalar#numpy.isscalar "numpy.isscalar")(element) | Returns True if the type of `element` is a scalar type.

## Logical operations

[`logical_and`](generated/numpy.logical_and#numpy.logical_and "numpy.logical_and")(x1, x2, /[, out, where, ...]) | Compute the truth value of x1 AND x2 element-wise.
---|---
[`logical_or`](generated/numpy.logical_or#numpy.logical_or "numpy.logical_or")(x1, x2, /[, out, where, casting, ...]) | Compute the truth value of x1 OR x2 element-wise.
[`logical_not`](generated/numpy.logical_not#numpy.logical_not "numpy.logical_not")(x, /[, out, where, casting, ...]) | Compute the truth value of NOT x element-wise.
[`logical_xor`](generated/numpy.logical_xor#numpy.logical_xor "numpy.logical_xor")(x1, x2, /[, out, where, ...]) | Compute the truth value of x1 XOR x2, element-wise.

## Comparison

[`allclose`](generated/numpy.allclose#numpy.allclose "numpy.allclose")(a, b[, rtol, atol, equal_nan]) | Returns True if two arrays are element-wise equal within a tolerance.
---|---
[`isclose`](generated/numpy.isclose#numpy.isclose "numpy.isclose")(a, b[, rtol, atol, equal_nan]) | Returns a boolean array where two arrays are element-wise equal within a tolerance.
[`array_equal`](generated/numpy.array_equal#numpy.array_equal "numpy.array_equal")(a1, a2[, equal_nan]) | True if two arrays have the same shape and elements, False otherwise.
[`array_equiv`](generated/numpy.array_equiv#numpy.array_equiv "numpy.array_equiv")(a1, a2) | Returns True if input arrays are shape consistent and all elements equal.
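The distinction between the tolerance-based and exact comparison functions above matters most around floating-point noise and NaNs. A small sketch:

```python
import numpy as np

a = np.array([1.0, 2.0, np.nan])
b = a + 1e-9   # tiny floating-point perturbation

# allclose collapses to a single bool; NaNs compare unequal
# unless equal_nan=True.
print(np.allclose(a, b))                  # False (because of the NaN)
print(np.allclose(a, b, equal_nan=True))  # True

# isclose keeps the element-wise boolean array.
print(np.isclose(a, b, equal_nan=True))   # [ True  True  True]

# array_equal demands exact equality of shape and elements.
print(np.array_equal(a, a, equal_nan=True))  # True
print(np.array_equal(a, b))                  # False
```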
[`greater`](generated/numpy.greater#numpy.greater "numpy.greater")(x1, x2, /[, out, where, casting, ...]) | Return the truth value of (x1 > x2) element-wise.
---|---
[`greater_equal`](generated/numpy.greater_equal#numpy.greater_equal "numpy.greater_equal")(x1, x2, /[, out, where, ...]) | Return the truth value of (x1 >= x2) element-wise.
[`less`](generated/numpy.less#numpy.less "numpy.less")(x1, x2, /[, out, where, casting, ...]) | Return the truth value of (x1 < x2) element-wise.
[`less_equal`](generated/numpy.less_equal#numpy.less_equal "numpy.less_equal")(x1, x2, /[, out, where, casting, ...]) | Return the truth value of (x1 <= x2) element-wise.
[`equal`](generated/numpy.equal#numpy.equal "numpy.equal")(x1, x2, /[, out, where, casting, ...]) | Return (x1 == x2) element-wise.
[`not_equal`](generated/numpy.not_equal#numpy.not_equal "numpy.not_equal")(x1, x2, /[, out, where, casting, ...]) | Return (x1 != x2) element-wise.

# Masked array operations

## Constants

[`ma.MaskType`](generated/numpy.ma.masktype#numpy.ma.MaskType "numpy.ma.MaskType") | alias of [`bool`](arrays.scalars#numpy.bool "numpy.bool")
---|---

## Creation

### From existing data

[`ma.masked_array`](generated/numpy.ma.masked_array#numpy.ma.masked_array "numpy.ma.masked_array") | alias of [`MaskedArray`](maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray")
---|---
[`ma.array`](generated/numpy.ma.array#numpy.ma.array "numpy.ma.array")(data[, dtype, copy, order, mask, ...]) | An array class with possibly masked values.
[`ma.copy`](generated/numpy.ma.copy#numpy.ma.copy "numpy.ma.copy")(self, *args, **params) a.copy(order=) | Return a copy of the array.
[`ma.frombuffer`](generated/numpy.ma.frombuffer#numpy.ma.frombuffer "numpy.ma.frombuffer")(buffer[, dtype, count, ...]) | Interpret a buffer as a 1-dimensional array.
[`ma.fromfunction`](generated/numpy.ma.fromfunction#numpy.ma.fromfunction "numpy.ma.fromfunction")(function, shape, **dtype) | Construct an array by executing a function over each coordinate.
[`ma.MaskedArray.copy`](generated/numpy.ma.maskedarray.copy#numpy.ma.MaskedArray.copy "numpy.ma.MaskedArray.copy")([order]) | Return a copy of the array.
[`ma.diagflat`](generated/numpy.ma.diagflat#numpy.ma.diagflat "numpy.ma.diagflat") | Create a two-dimensional array with the flattened input as a diagonal.

### Ones and zeros

[`ma.empty`](generated/numpy.ma.empty#numpy.ma.empty "numpy.ma.empty")(shape[, dtype, order, device, like]) | Return a new array of given shape and type, without initializing entries.
---|---
[`ma.empty_like`](generated/numpy.ma.empty_like#numpy.ma.empty_like "numpy.ma.empty_like")(prototype[, dtype, order, ...]) | Return a new array with the same shape and type as a given array.
[`ma.masked_all`](generated/numpy.ma.masked_all#numpy.ma.masked_all "numpy.ma.masked_all")(shape[, dtype]) | Empty masked array with all elements masked.
[`ma.masked_all_like`](generated/numpy.ma.masked_all_like#numpy.ma.masked_all_like "numpy.ma.masked_all_like")(arr) | Empty masked array with the properties of an existing array.
[`ma.ones`](generated/numpy.ma.ones#numpy.ma.ones "numpy.ma.ones")(shape[, dtype, order]) | Return a new array of given shape and type, filled with ones.
[`ma.ones_like`](generated/numpy.ma.ones_like#numpy.ma.ones_like "numpy.ma.ones_like") | Return an array of ones with the same shape and type as a given array.
[`ma.zeros`](generated/numpy.ma.zeros#numpy.ma.zeros "numpy.ma.zeros")(shape[, dtype, order, like]) | Return a new array of given shape and type, filled with zeros.
[`ma.zeros_like`](generated/numpy.ma.zeros_like#numpy.ma.zeros_like "numpy.ma.zeros_like") | Return an array of zeros with the same shape and type as a given array.
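A brief sketch of the creation routines above, showing how a mask travels with the data (`True` in the mask means the element is masked):

```python
import numpy.ma as ma

# ma.array couples data with a boolean mask (True means "masked").
x = ma.array([1.0, 2.0, 3.0, 4.0], mask=[0, 1, 0, 1])
print(x)         # [1.0 -- 3.0 --]
print(x.mean())  # 2.0 (masked entries are skipped)

# masked_all gives a new array with every element masked.
y = ma.masked_all((2, 3))
print(y.mask.all())  # True

# filled() replaces masked entries with a plain value,
# returning an ordinary ndarray.
print(x.filled(-1.0))  # [ 1. -1.  3. -1.]
```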
## Inspecting the array

[`ma.all`](generated/numpy.ma.all#numpy.ma.all "numpy.ma.all")(self[, axis, out, keepdims]) | Returns True if all elements evaluate to True.
---|---
[`ma.any`](generated/numpy.ma.any#numpy.ma.any "numpy.ma.any")(self[, axis, out, keepdims]) | Returns True if any of the elements of `a` evaluate to True.
[`ma.count`](generated/numpy.ma.count#numpy.ma.count "numpy.ma.count")(self[, axis, keepdims]) | Count the non-masked elements of the array along the given axis.
[`ma.count_masked`](generated/numpy.ma.count_masked#numpy.ma.count_masked "numpy.ma.count_masked")(arr[, axis]) | Count the number of masked elements along the given axis.
[`ma.getmask`](generated/numpy.ma.getmask#numpy.ma.getmask "numpy.ma.getmask")(a) | Return the mask of a masked array, or nomask.
[`ma.getmaskarray`](generated/numpy.ma.getmaskarray#numpy.ma.getmaskarray "numpy.ma.getmaskarray")(arr) | Return the mask of a masked array, or full boolean array of False.
[`ma.getdata`](generated/numpy.ma.getdata#numpy.ma.getdata "numpy.ma.getdata")(a[, subok]) | Return the data of a masked array as an ndarray.
[`ma.nonzero`](generated/numpy.ma.nonzero#numpy.ma.nonzero "numpy.ma.nonzero")(self) | Return the indices of unmasked elements that are not zero.
[`ma.shape`](generated/numpy.ma.shape#numpy.ma.shape "numpy.ma.shape")(obj) | Return the shape of an array.
[`ma.size`](generated/numpy.ma.size#numpy.ma.size "numpy.ma.size")(obj[, axis]) | Return the number of elements along a given axis.
[`ma.is_masked`](generated/numpy.ma.is_masked#numpy.ma.is_masked "numpy.ma.is_masked")(x) | Determine whether input has masked values.
[`ma.is_mask`](generated/numpy.ma.is_mask#numpy.ma.is_mask "numpy.ma.is_mask")(m) | Return True if m is a valid, standard mask.
[`ma.isMaskedArray`](generated/numpy.ma.ismaskedarray#numpy.ma.isMaskedArray "numpy.ma.isMaskedArray")(x) | Test whether input is an instance of MaskedArray.
[`ma.isMA`](generated/numpy.ma.isma#numpy.ma.isMA "numpy.ma.isMA")(x) | Test whether input is an instance of MaskedArray.
[`ma.isarray`](generated/numpy.ma.isarray#numpy.ma.isarray "numpy.ma.isarray")(x) | Test whether input is an instance of MaskedArray.
[`ma.isin`](generated/numpy.ma.isin#numpy.ma.isin "numpy.ma.isin")(element, test_elements[, ...]) | Calculates `element in test_elements`, broadcasting over `element` only.
[`ma.in1d`](generated/numpy.ma.in1d#numpy.ma.in1d "numpy.ma.in1d")(ar1, ar2[, assume_unique, invert]) | Test whether each element of an array is also present in a second array.
[`ma.unique`](generated/numpy.ma.unique#numpy.ma.unique "numpy.ma.unique")(ar1[, return_index, return_inverse]) | Finds the unique elements of an array.
[`ma.MaskedArray.all`](generated/numpy.ma.maskedarray.all#numpy.ma.MaskedArray.all "numpy.ma.MaskedArray.all")([axis, out, keepdims]) | Returns True if all elements evaluate to True.
[`ma.MaskedArray.any`](generated/numpy.ma.maskedarray.any#numpy.ma.MaskedArray.any "numpy.ma.MaskedArray.any")([axis, out, keepdims]) | Returns True if any of the elements of `a` evaluate to True.
[`ma.MaskedArray.count`](generated/numpy.ma.maskedarray.count#numpy.ma.MaskedArray.count "numpy.ma.MaskedArray.count")([axis, keepdims]) | Count the non-masked elements of the array along the given axis.
[`ma.MaskedArray.nonzero`](generated/numpy.ma.maskedarray.nonzero#numpy.ma.MaskedArray.nonzero "numpy.ma.MaskedArray.nonzero")() | Return the indices of unmasked elements that are not zero.
[`ma.shape`](generated/numpy.ma.shape#numpy.ma.shape "numpy.ma.shape")(obj) | Return the shape of an array.
[`ma.size`](generated/numpy.ma.size#numpy.ma.size "numpy.ma.size")(obj[, axis]) | Return the number of elements along a given axis.

[`ma.MaskedArray.data`](maskedarray.baseclass#numpy.ma.MaskedArray.data "numpy.ma.MaskedArray.data") | Returns the underlying data, as a view of the masked array.
---|---
[`ma.MaskedArray.mask`](maskedarray.baseclass#numpy.ma.MaskedArray.mask "numpy.ma.MaskedArray.mask") | Current mask.
[`ma.MaskedArray.recordmask`](maskedarray.baseclass#numpy.ma.MaskedArray.recordmask "numpy.ma.MaskedArray.recordmask") | Get or set the mask of the array if it has no named fields.

## Manipulating a MaskedArray

### Changing the shape

[`ma.ravel`](generated/numpy.ma.ravel#numpy.ma.ravel "numpy.ma.ravel")(self[, order]) | Returns a 1D version of self, as a view.
---|---
[`ma.reshape`](generated/numpy.ma.reshape#numpy.ma.reshape "numpy.ma.reshape")(a, new_shape[, order]) | Returns an array containing the same data with a new shape.
[`ma.resize`](generated/numpy.ma.resize#numpy.ma.resize "numpy.ma.resize")(x, new_shape) | Return a new masked array with the specified size and shape.
[`ma.MaskedArray.flatten`](generated/numpy.ma.maskedarray.flatten#numpy.ma.MaskedArray.flatten "numpy.ma.MaskedArray.flatten")([order]) | Return a copy of the array collapsed into one dimension.
[`ma.MaskedArray.ravel`](generated/numpy.ma.maskedarray.ravel#numpy.ma.MaskedArray.ravel "numpy.ma.MaskedArray.ravel")([order]) | Returns a 1D version of self, as a view.
[`ma.MaskedArray.reshape`](generated/numpy.ma.maskedarray.reshape#numpy.ma.MaskedArray.reshape "numpy.ma.MaskedArray.reshape")(*s, **kwargs) | Give a new shape to the array without changing its data.
[`ma.MaskedArray.resize`](generated/numpy.ma.maskedarray.resize#numpy.ma.MaskedArray.resize "numpy.ma.MaskedArray.resize")(newshape[, refcheck, ...]) |

### Modifying axes

[`ma.swapaxes`](generated/numpy.ma.swapaxes#numpy.ma.swapaxes "numpy.ma.swapaxes")(self, *args, ...) | Return a view of the array with `axis1` and `axis2` interchanged.
---|---
[`ma.transpose`](generated/numpy.ma.transpose#numpy.ma.transpose "numpy.ma.transpose")(a[, axes]) | Permute the dimensions of an array.
[`ma.MaskedArray.swapaxes`](generated/numpy.ma.maskedarray.swapaxes#numpy.ma.MaskedArray.swapaxes "numpy.ma.MaskedArray.swapaxes")(axis1, axis2) | Return a view of the array with `axis1` and `axis2` interchanged.
[`ma.MaskedArray.transpose`](generated/numpy.ma.maskedarray.transpose#numpy.ma.MaskedArray.transpose "numpy.ma.MaskedArray.transpose")(*axes) | Returns a view of the array with axes transposed.

### Changing the number of dimensions

[`ma.atleast_1d`](generated/numpy.ma.atleast_1d#numpy.ma.atleast_1d "numpy.ma.atleast_1d") | Convert inputs to arrays with at least one dimension.
---|---
[`ma.atleast_2d`](generated/numpy.ma.atleast_2d#numpy.ma.atleast_2d "numpy.ma.atleast_2d") | View inputs as arrays with at least two dimensions.
[`ma.atleast_3d`](generated/numpy.ma.atleast_3d#numpy.ma.atleast_3d "numpy.ma.atleast_3d") | View inputs as arrays with at least three dimensions.
[`ma.expand_dims`](generated/numpy.ma.expand_dims#numpy.ma.expand_dims "numpy.ma.expand_dims")(a, axis) | Expand the shape of an array.
[`ma.squeeze`](generated/numpy.ma.squeeze#numpy.ma.squeeze "numpy.ma.squeeze") | Remove axes of length one from `a`.
[`ma.MaskedArray.squeeze`](generated/numpy.ma.maskedarray.squeeze#numpy.ma.MaskedArray.squeeze "numpy.ma.MaskedArray.squeeze")([axis]) | Remove axes of length one from `a`.
[`ma.stack`](generated/numpy.ma.stack#numpy.ma.stack "numpy.ma.stack") | Join a sequence of arrays along a new axis.
[`ma.column_stack`](generated/numpy.ma.column_stack#numpy.ma.column_stack "numpy.ma.column_stack") | Stack 1-D arrays as columns into a 2-D array.
[`ma.concatenate`](generated/numpy.ma.concatenate#numpy.ma.concatenate "numpy.ma.concatenate")(arrays[, axis]) | Concatenate a sequence of arrays along the given axis.
[`ma.dstack`](generated/numpy.ma.dstack#numpy.ma.dstack "numpy.ma.dstack") | Stack arrays in sequence depth wise (along third axis).
[`ma.hstack`](generated/numpy.ma.hstack#numpy.ma.hstack "numpy.ma.hstack") | Stack arrays in sequence horizontally (column wise).
[`ma.hsplit`](generated/numpy.ma.hsplit#numpy.ma.hsplit "numpy.ma.hsplit") | Split an array into multiple sub-arrays horizontally (column-wise).
[`ma.mr_`](generated/numpy.ma.mr_#numpy.ma.mr_ "numpy.ma.mr_") | Translate slice objects to concatenation along the first axis.
[`ma.vstack`](generated/numpy.ma.vstack#numpy.ma.vstack "numpy.ma.vstack") | Stack arrays in sequence vertically (row wise).

### Joining arrays

[`ma.concatenate`](generated/numpy.ma.concatenate#numpy.ma.concatenate "numpy.ma.concatenate")(arrays[, axis]) | Concatenate a sequence of arrays along the given axis.
---|---
[`ma.stack`](generated/numpy.ma.stack#numpy.ma.stack "numpy.ma.stack") | Join a sequence of arrays along a new axis.
[`ma.vstack`](generated/numpy.ma.vstack#numpy.ma.vstack "numpy.ma.vstack") | Stack arrays in sequence vertically (row wise).
[`ma.hstack`](generated/numpy.ma.hstack#numpy.ma.hstack "numpy.ma.hstack") | Stack arrays in sequence horizontally (column wise).
[`ma.dstack`](generated/numpy.ma.dstack#numpy.ma.dstack "numpy.ma.dstack") | Stack arrays in sequence depth wise (along third axis).
[`ma.column_stack`](generated/numpy.ma.column_stack#numpy.ma.column_stack "numpy.ma.column_stack") | Stack 1-D arrays as columns into a 2-D array.
[`ma.append`](generated/numpy.ma.append#numpy.ma.append "numpy.ma.append")(a, b[, axis]) | Append values to the end of an array.

## Operations on masks

### Creating a mask

[`ma.make_mask`](generated/numpy.ma.make_mask#numpy.ma.make_mask "numpy.ma.make_mask")(m[, copy, shrink, dtype]) | Create a boolean mask from an array.
---|---
[`ma.make_mask_none`](generated/numpy.ma.make_mask_none#numpy.ma.make_mask_none "numpy.ma.make_mask_none")(newshape[, dtype]) | Return a boolean mask of the given shape, filled with False.
[`ma.mask_or`](generated/numpy.ma.mask_or#numpy.ma.mask_or "numpy.ma.mask_or")(m1, m2[, copy, shrink]) | Combine two masks with the `logical_or` operator.
[`ma.make_mask_descr`](generated/numpy.ma.make_mask_descr#numpy.ma.make_mask_descr "numpy.ma.make_mask_descr")(ndtype) | Construct a dtype description list from a given dtype.

### Accessing a mask

[`ma.getmask`](generated/numpy.ma.getmask#numpy.ma.getmask "numpy.ma.getmask")(a) | Return the mask of a masked array, or nomask.
---|---
[`ma.getmaskarray`](generated/numpy.ma.getmaskarray#numpy.ma.getmaskarray "numpy.ma.getmaskarray")(arr) | Return the mask of a masked array, or full boolean array of False.
[`ma.masked_array.mask`](generated/numpy.ma.masked_array.mask#numpy.ma.masked_array.mask "numpy.ma.masked_array.mask") | Current mask.

### Finding masked data

[`ma.ndenumerate`](generated/numpy.ma.ndenumerate#numpy.ma.ndenumerate "numpy.ma.ndenumerate")(a[, compressed]) | Multidimensional index iterator.
---|---
[`ma.flatnotmasked_contiguous`](generated/numpy.ma.flatnotmasked_contiguous#numpy.ma.flatnotmasked_contiguous "numpy.ma.flatnotmasked_contiguous")(a) | Find contiguous unmasked data in a masked array.
[`ma.flatnotmasked_edges`](generated/numpy.ma.flatnotmasked_edges#numpy.ma.flatnotmasked_edges "numpy.ma.flatnotmasked_edges")(a) | Find the indices of the first and last unmasked values.
[`ma.notmasked_contiguous`](generated/numpy.ma.notmasked_contiguous#numpy.ma.notmasked_contiguous "numpy.ma.notmasked_contiguous")(a[, axis]) | Find contiguous unmasked data in a masked array along the given axis.
[`ma.notmasked_edges`](generated/numpy.ma.notmasked_edges#numpy.ma.notmasked_edges "numpy.ma.notmasked_edges")(a[, axis]) | Find the indices of the first and last unmasked values along an axis.
[`ma.clump_masked`](generated/numpy.ma.clump_masked#numpy.ma.clump_masked "numpy.ma.clump_masked")(a) | Returns a list of slices corresponding to the masked clumps of a 1-D array.
[`ma.clump_unmasked`](generated/numpy.ma.clump_unmasked#numpy.ma.clump_unmasked "numpy.ma.clump_unmasked")(a) | Return list of slices corresponding to the unmasked clumps of a 1-D array.

### Modifying a mask

[`ma.mask_cols`](generated/numpy.ma.mask_cols#numpy.ma.mask_cols "numpy.ma.mask_cols")(a[, axis]) | Mask columns of a 2D array that contain masked values.
---|---
[`ma.mask_or`](generated/numpy.ma.mask_or#numpy.ma.mask_or "numpy.ma.mask_or")(m1, m2[, copy, shrink]) | Combine two masks with the `logical_or` operator.
[`ma.mask_rowcols`](generated/numpy.ma.mask_rowcols#numpy.ma.mask_rowcols "numpy.ma.mask_rowcols")(a[, axis]) | Mask rows and/or columns of a 2D array that contain masked values.
[`ma.mask_rows`](generated/numpy.ma.mask_rows#numpy.ma.mask_rows "numpy.ma.mask_rows")(a[, axis]) | Mask rows of a 2D array that contain masked values.
[`ma.harden_mask`](generated/numpy.ma.harden_mask#numpy.ma.harden_mask "numpy.ma.harden_mask")(self) | Force the mask to hard, preventing unmasking by assignment.
[`ma.soften_mask`](generated/numpy.ma.soften_mask#numpy.ma.soften_mask "numpy.ma.soften_mask")(self) | Force the mask to soft (default), allowing unmasking by assignment.
[`ma.MaskedArray.harden_mask`](generated/numpy.ma.maskedarray.harden_mask#numpy.ma.MaskedArray.harden_mask "numpy.ma.MaskedArray.harden_mask")() | Force the mask to hard, preventing unmasking by assignment.
[`ma.MaskedArray.soften_mask`](generated/numpy.ma.maskedarray.soften_mask#numpy.ma.MaskedArray.soften_mask "numpy.ma.MaskedArray.soften_mask")() | Force the mask to soft (default), allowing unmasking by assignment.
[`ma.MaskedArray.shrink_mask`](generated/numpy.ma.maskedarray.shrink_mask#numpy.ma.MaskedArray.shrink_mask "numpy.ma.MaskedArray.shrink_mask")() | Reduce a mask to nomask when possible.
[`ma.MaskedArray.unshare_mask`](generated/numpy.ma.maskedarray.unshare_mask#numpy.ma.MaskedArray.unshare_mask "numpy.ma.MaskedArray.unshare_mask")() | Copy the mask and set the `sharedmask` flag to `False`.

## Conversion operations

### > to a masked array

[`ma.asarray`](generated/numpy.ma.asarray#numpy.ma.asarray "numpy.ma.asarray")(a[, dtype, order]) | Convert the input to a masked array of the given data-type.
---|---
[`ma.asanyarray`](generated/numpy.ma.asanyarray#numpy.ma.asanyarray "numpy.ma.asanyarray")(a[, dtype]) | Convert the input to a masked array, conserving subclasses.
[`ma.fix_invalid`](generated/numpy.ma.fix_invalid#numpy.ma.fix_invalid "numpy.ma.fix_invalid")(a[, mask, copy, fill_value]) | Return input with invalid data masked and replaced by a fill value.
[`ma.masked_equal`](generated/numpy.ma.masked_equal#numpy.ma.masked_equal "numpy.ma.masked_equal")(x, value[, copy]) | Mask an array where equal to a given value.
[`ma.masked_greater`](generated/numpy.ma.masked_greater#numpy.ma.masked_greater "numpy.ma.masked_greater")(x, value[, copy]) | Mask an array where greater than a given value.
[`ma.masked_greater_equal`](generated/numpy.ma.masked_greater_equal#numpy.ma.masked_greater_equal "numpy.ma.masked_greater_equal")(x, value[, copy]) | Mask an array where greater than or equal to a given value.
[`ma.masked_inside`](generated/numpy.ma.masked_inside#numpy.ma.masked_inside "numpy.ma.masked_inside")(x, v1, v2[, copy]) | Mask an array inside a given interval.
[`ma.masked_invalid`](generated/numpy.ma.masked_invalid#numpy.ma.masked_invalid "numpy.ma.masked_invalid")(a[, copy]) | Mask an array where invalid values occur (NaNs or infs).
[`ma.masked_less`](generated/numpy.ma.masked_less#numpy.ma.masked_less "numpy.ma.masked_less")(x, value[, copy]) | Mask an array where less than a given value.
[`ma.masked_less_equal`](generated/numpy.ma.masked_less_equal#numpy.ma.masked_less_equal "numpy.ma.masked_less_equal")(x, value[, copy]) | Mask an array where less than or equal to a given value.
[`ma.masked_not_equal`](generated/numpy.ma.masked_not_equal#numpy.ma.masked_not_equal "numpy.ma.masked_not_equal")(x, value[, copy]) | Mask an array where _not_ equal to a given value.
[`ma.masked_object`](generated/numpy.ma.masked_object#numpy.ma.masked_object "numpy.ma.masked_object")(x, value[, copy, shrink]) | Mask the array `x` where the data are exactly equal to value.
[`ma.masked_outside`](generated/numpy.ma.masked_outside#numpy.ma.masked_outside "numpy.ma.masked_outside")(x, v1, v2[, copy]) | Mask an array outside a given interval.
[`ma.masked_values`](generated/numpy.ma.masked_values#numpy.ma.masked_values "numpy.ma.masked_values")(x, value[, rtol, atol, ...]) | Mask using floating point equality.
[`ma.masked_where`](generated/numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where")(condition, a[, copy]) | Mask an array where a condition is met.

### > to a ndarray

[`ma.compress_cols`](generated/numpy.ma.compress_cols#numpy.ma.compress_cols "numpy.ma.compress_cols")(a) | Suppress whole columns of a 2-D array that contain masked values.
---|---
[`ma.compress_rowcols`](generated/numpy.ma.compress_rowcols#numpy.ma.compress_rowcols "numpy.ma.compress_rowcols")(x[, axis]) | Suppress the rows and/or columns of a 2-D array that contain masked values.
[`ma.compress_rows`](generated/numpy.ma.compress_rows#numpy.ma.compress_rows "numpy.ma.compress_rows")(a) | Suppress whole rows of a 2-D array that contain masked values.
[`ma.compressed`](generated/numpy.ma.compressed#numpy.ma.compressed "numpy.ma.compressed")(x) | Return all the non-masked data as a 1-D array.
[`ma.filled`](generated/numpy.ma.filled#numpy.ma.filled "numpy.ma.filled")(a[, fill_value]) | Return input as an [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"), with masked values replaced by `fill_value`.
[`ma.MaskedArray.compressed`](generated/numpy.ma.maskedarray.compressed#numpy.ma.MaskedArray.compressed "numpy.ma.MaskedArray.compressed")() | Return all the non-masked data as a 1-D array.
[`ma.MaskedArray.filled`](generated/numpy.ma.maskedarray.filled#numpy.ma.MaskedArray.filled "numpy.ma.MaskedArray.filled")([fill_value]) | Return a copy of self, with masked values filled with a given value.

### > to another object

[`ma.MaskedArray.tofile`](generated/numpy.ma.maskedarray.tofile#numpy.ma.MaskedArray.tofile "numpy.ma.MaskedArray.tofile")(fid[, sep, format]) | Save a masked array to a file in binary format.
---|---
[`ma.MaskedArray.tolist`](generated/numpy.ma.maskedarray.tolist#numpy.ma.MaskedArray.tolist "numpy.ma.MaskedArray.tolist")([fill_value]) | Return the data portion of the masked array as a hierarchical Python list.
[`ma.MaskedArray.torecords`](generated/numpy.ma.maskedarray.torecords#numpy.ma.MaskedArray.torecords "numpy.ma.MaskedArray.torecords")() | Transforms a masked array into a flexible-type array.
[`ma.MaskedArray.tobytes`](generated/numpy.ma.maskedarray.tobytes#numpy.ma.MaskedArray.tobytes "numpy.ma.MaskedArray.tobytes")([fill_value, order]) | Return the array data as a string containing the raw bytes in the array.

### Filling a masked array

[`ma.common_fill_value`](generated/numpy.ma.common_fill_value#numpy.ma.common_fill_value "numpy.ma.common_fill_value")(a, b) | Return the common filling value of two masked arrays, if any.
---|---
[`ma.default_fill_value`](generated/numpy.ma.default_fill_value#numpy.ma.default_fill_value "numpy.ma.default_fill_value")(obj) | Return the default fill value for the argument object.
[`ma.maximum_fill_value`](generated/numpy.ma.maximum_fill_value#numpy.ma.maximum_fill_value "numpy.ma.maximum_fill_value")(obj) | Return the minimum value that can be represented by the dtype of an object.
[`ma.minimum_fill_value`](generated/numpy.ma.minimum_fill_value#numpy.ma.minimum_fill_value "numpy.ma.minimum_fill_value")(obj) | Return the maximum value that can be represented by the dtype of an object.
[`ma.set_fill_value`](generated/numpy.ma.set_fill_value#numpy.ma.set_fill_value "numpy.ma.set_fill_value")(a, fill_value) | Set the filling value of a, if a is a masked array.
[`ma.MaskedArray.get_fill_value`](generated/numpy.ma.maskedarray.get_fill_value#numpy.ma.MaskedArray.get_fill_value "numpy.ma.MaskedArray.get_fill_value")() | The filling value of the masked array is a scalar.
[`ma.MaskedArray.set_fill_value`](generated/numpy.ma.maskedarray.set_fill_value#numpy.ma.MaskedArray.set_fill_value "numpy.ma.MaskedArray.set_fill_value")([value]) |

[`ma.MaskedArray.fill_value`](maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value") | The filling value of the masked array is a scalar.
---|---

## Masked arrays arithmetic

### Arithmetic

[`ma.anom`](generated/numpy.ma.anom#numpy.ma.anom "numpy.ma.anom")(self[, axis, dtype]) | Compute the anomalies (deviations from the arithmetic mean) along the given axis.
---|---
[`ma.anomalies`](generated/numpy.ma.anomalies#numpy.ma.anomalies "numpy.ma.anomalies")(self[, axis, dtype]) | Compute the anomalies (deviations from the arithmetic mean) along the given axis.
[`ma.average`](generated/numpy.ma.average#numpy.ma.average "numpy.ma.average")(a[, axis, weights, returned, ...]) | Return the weighted average of array over the given axis.
[`ma.conjugate`](generated/numpy.ma.conjugate#numpy.ma.conjugate "numpy.ma.conjugate")(x, /[, out, where, casting, ...]) | Return the complex conjugate, element-wise.
[`ma.corrcoef`](generated/numpy.ma.corrcoef#numpy.ma.corrcoef "numpy.ma.corrcoef")(x[, y, rowvar, bias, ...]) | Return Pearson product-moment correlation coefficients. [`ma.cov`](generated/numpy.ma.cov#numpy.ma.cov "numpy.ma.cov")(x[, y, rowvar, bias, allow_masked, ddof]) | Estimate the covariance matrix. [`ma.cumsum`](generated/numpy.ma.cumsum#numpy.ma.cumsum "numpy.ma.cumsum")(self[, axis, dtype, out]) | Return the cumulative sum of the array elements over the given axis. [`ma.cumprod`](generated/numpy.ma.cumprod#numpy.ma.cumprod "numpy.ma.cumprod")(self[, axis, dtype, out]) | Return the cumulative product of the array elements over the given axis. [`ma.mean`](generated/numpy.ma.mean#numpy.ma.mean "numpy.ma.mean")(self[, axis, dtype, out, keepdims]) | Returns the average of the array elements along given axis. [`ma.median`](generated/numpy.ma.median#numpy.ma.median "numpy.ma.median")(a[, axis, out, overwrite_input, ...]) | Compute the median along the specified axis. [`ma.power`](generated/numpy.ma.power#numpy.ma.power "numpy.ma.power")(a, b[, third]) | Returns element-wise base array raised to power from second array. [`ma.prod`](generated/numpy.ma.prod#numpy.ma.prod "numpy.ma.prod")(self[, axis, dtype, out, keepdims]) | Return the product of the array elements over the given axis. [`ma.std`](generated/numpy.ma.std#numpy.ma.std "numpy.ma.std")(self[, axis, dtype, out, ddof, ...]) | Returns the standard deviation of the array elements along given axis. [`ma.sum`](generated/numpy.ma.sum#numpy.ma.sum "numpy.ma.sum")(self[, axis, dtype, out, keepdims]) | Return the sum of the array elements over the given axis. [`ma.var`](generated/numpy.ma.var#numpy.ma.var "numpy.ma.var")(self[, axis, dtype, out, ddof, ...]) | Compute the variance along the specified axis. 
[`ma.MaskedArray.anom`](generated/numpy.ma.maskedarray.anom#numpy.ma.MaskedArray.anom "numpy.ma.MaskedArray.anom")([axis, dtype]) | Compute the anomalies (deviations from the arithmetic mean) along the given axis. [`ma.MaskedArray.cumprod`](generated/numpy.ma.maskedarray.cumprod#numpy.ma.MaskedArray.cumprod "numpy.ma.MaskedArray.cumprod")([axis, dtype, out]) | Return the cumulative product of the array elements over the given axis. [`ma.MaskedArray.cumsum`](generated/numpy.ma.maskedarray.cumsum#numpy.ma.MaskedArray.cumsum "numpy.ma.MaskedArray.cumsum")([axis, dtype, out]) | Return the cumulative sum of the array elements over the given axis. [`ma.MaskedArray.mean`](generated/numpy.ma.maskedarray.mean#numpy.ma.MaskedArray.mean "numpy.ma.MaskedArray.mean")([axis, dtype, out, keepdims]) | Returns the average of the array elements along given axis. [`ma.MaskedArray.prod`](generated/numpy.ma.maskedarray.prod#numpy.ma.MaskedArray.prod "numpy.ma.MaskedArray.prod")([axis, dtype, out, keepdims]) | Return the product of the array elements over the given axis. [`ma.MaskedArray.std`](generated/numpy.ma.maskedarray.std#numpy.ma.MaskedArray.std "numpy.ma.MaskedArray.std")([axis, dtype, out, ddof, ...]) | Returns the standard deviation of the array elements along given axis. [`ma.MaskedArray.sum`](generated/numpy.ma.maskedarray.sum#numpy.ma.MaskedArray.sum "numpy.ma.MaskedArray.sum")([axis, dtype, out, keepdims]) | Return the sum of the array elements over the given axis. [`ma.MaskedArray.var`](generated/numpy.ma.maskedarray.var#numpy.ma.MaskedArray.var "numpy.ma.MaskedArray.var")([axis, dtype, out, ddof, ...]) | Compute the variance along the specified axis. ### Minimum/maximum [`ma.argmax`](generated/numpy.ma.argmax#numpy.ma.argmax "numpy.ma.argmax")(self[, axis, fill_value, out]) | Returns array of indices of the maximum values along the given axis. 
---|--- [`ma.argmin`](generated/numpy.ma.argmin#numpy.ma.argmin "numpy.ma.argmin")(self[, axis, fill_value, out]) | Return array of indices to the minimum values along the given axis. [`ma.max`](generated/numpy.ma.max#numpy.ma.max "numpy.ma.max")(obj[, axis, out, fill_value, keepdims]) | Return the maximum along a given axis. [`ma.min`](generated/numpy.ma.min#numpy.ma.min "numpy.ma.min")(obj[, axis, out, fill_value, keepdims]) | Return the minimum along a given axis. [`ma.ptp`](generated/numpy.ma.ptp#numpy.ma.ptp "numpy.ma.ptp")(obj[, axis, out, fill_value, keepdims]) | Return (maximum - minimum) along the given dimension (i.e. peak-to-peak value). [`ma.diff`](generated/numpy.ma.diff#numpy.ma.diff "numpy.ma.diff")(a, /[, n, axis, prepend, append]) | Calculate the n-th discrete difference along the given axis. [`ma.MaskedArray.argmax`](generated/numpy.ma.maskedarray.argmax#numpy.ma.MaskedArray.argmax "numpy.ma.MaskedArray.argmax")([axis, fill_value, ...]) | Returns array of indices of the maximum values along the given axis. [`ma.MaskedArray.argmin`](generated/numpy.ma.maskedarray.argmin#numpy.ma.MaskedArray.argmin "numpy.ma.MaskedArray.argmin")([axis, fill_value, ...]) | Return array of indices to the minimum values along the given axis. [`ma.MaskedArray.max`](generated/numpy.ma.maskedarray.max#numpy.ma.MaskedArray.max "numpy.ma.MaskedArray.max")([axis, out, fill_value, ...]) | Return the maximum along a given axis. [`ma.MaskedArray.min`](generated/numpy.ma.maskedarray.min#numpy.ma.MaskedArray.min "numpy.ma.MaskedArray.min")([axis, out, fill_value, ...]) | Return the minimum along a given axis. [`ma.MaskedArray.ptp`](generated/numpy.ma.maskedarray.ptp#numpy.ma.MaskedArray.ptp "numpy.ma.MaskedArray.ptp")([axis, out, fill_value, ...]) | Return (maximum - minimum) along the given dimension (i.e. peak-to-peak value). 
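As a minimal illustration of the extrema routines above: reductions on a masked array skip masked entries, so the result reflects only the valid data. (The array and mask here are made up for the example.)

```python
import numpy as np
import numpy.ma as ma

# Mask out the invalid entry (9) so reductions ignore it.
a = ma.masked_array([1, 9, 3, 7], mask=[False, True, False, False])

print(ma.min(a))     # minimum over unmasked entries -> 1
print(ma.max(a))     # maximum over unmasked entries -> 7
print(ma.ptp(a))     # peak-to-peak (max - min) of unmasked entries -> 6
print(ma.argmax(a))  # index of the largest unmasked value -> 3
```

Note that the plain `np.max(a.data)` would return 9; the masked variants are what make the invalid entry invisible to the reduction.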
### Sorting [`ma.argsort`](generated/numpy.ma.argsort#numpy.ma.argsort "numpy.ma.argsort")(a[, axis, kind, order, endwith, ...]) | Return an ndarray of indices that sort the array along the specified axis. ---|--- [`ma.sort`](generated/numpy.ma.sort#numpy.ma.sort "numpy.ma.sort")(a[, axis, kind, order, endwith, ...]) | Return a sorted copy of the masked array. [`ma.MaskedArray.argsort`](generated/numpy.ma.maskedarray.argsort#numpy.ma.MaskedArray.argsort "numpy.ma.MaskedArray.argsort")([axis, kind, order, ...]) | Return an ndarray of indices that sort the array along the specified axis. [`ma.MaskedArray.sort`](generated/numpy.ma.maskedarray.sort#numpy.ma.MaskedArray.sort "numpy.ma.MaskedArray.sort")([axis, kind, order, ...]) | Sort the array, in-place ### Algebra [`ma.diag`](generated/numpy.ma.diag#numpy.ma.diag "numpy.ma.diag")(v[, k]) | Extract a diagonal or construct a diagonal array. ---|--- [`ma.dot`](generated/numpy.ma.dot#numpy.ma.dot "numpy.ma.dot")(a, b[, strict, out]) | Return the dot product of two arrays. [`ma.identity`](generated/numpy.ma.identity#numpy.ma.identity "numpy.ma.identity")(n[, dtype]) | Return the identity array. [`ma.inner`](generated/numpy.ma.inner#numpy.ma.inner "numpy.ma.inner")(a, b, /) | Inner product of two arrays. [`ma.innerproduct`](generated/numpy.ma.innerproduct#numpy.ma.innerproduct "numpy.ma.innerproduct")(a, b, /) | Inner product of two arrays. [`ma.outer`](generated/numpy.ma.outer#numpy.ma.outer "numpy.ma.outer")(a, b) | Compute the outer product of two vectors. [`ma.outerproduct`](generated/numpy.ma.outerproduct#numpy.ma.outerproduct "numpy.ma.outerproduct")(a, b) | Compute the outer product of two vectors. [`ma.trace`](generated/numpy.ma.trace#numpy.ma.trace "numpy.ma.trace")(self[, offset, axis1, axis2, ...]) | Return the sum along diagonals of the array. [`ma.transpose`](generated/numpy.ma.transpose#numpy.ma.transpose "numpy.ma.transpose")(a[, axes]) | Permute the dimensions of an array. 
[`ma.MaskedArray.trace`](generated/numpy.ma.maskedarray.trace#numpy.ma.MaskedArray.trace "numpy.ma.MaskedArray.trace")([offset, axis1, axis2, ...]) | Return the sum along diagonals of the array. [`ma.MaskedArray.transpose`](generated/numpy.ma.maskedarray.transpose#numpy.ma.MaskedArray.transpose "numpy.ma.MaskedArray.transpose")(*axes) | Returns a view of the array with axes transposed. ### Polynomial fit [`ma.vander`](generated/numpy.ma.vander#numpy.ma.vander "numpy.ma.vander")(x[, n]) | Generate a Vandermonde matrix. ---|--- [`ma.polyfit`](generated/numpy.ma.polyfit#numpy.ma.polyfit "numpy.ma.polyfit")(x, y, deg[, rcond, full, w, cov]) | Least squares polynomial fit. ### Clipping and rounding [`ma.around`](generated/numpy.ma.around#numpy.ma.around "numpy.ma.around") | Round an array to the given number of decimals. ---|--- [`ma.clip`](generated/numpy.ma.clip#numpy.ma.clip "numpy.ma.clip") | Clip (limit) the values in an array. [`ma.round`](generated/numpy.ma.round#numpy.ma.round "numpy.ma.round")(a[, decimals, out]) | Return a copy of a, rounded to 'decimals' places. [`ma.MaskedArray.clip`](generated/numpy.ma.maskedarray.clip#numpy.ma.MaskedArray.clip "numpy.ma.MaskedArray.clip")([min, max, out]) | Return an array whose values are limited to `[min, max]`. [`ma.MaskedArray.round`](generated/numpy.ma.maskedarray.round#numpy.ma.MaskedArray.round "numpy.ma.MaskedArray.round")([decimals, out]) | Return each element rounded to the given number of decimals. ### Set operations [`ma.intersect1d`](generated/numpy.ma.intersect1d#numpy.ma.intersect1d "numpy.ma.intersect1d")(ar1, ar2[, assume_unique]) | Returns the unique elements common to both arrays. ---|--- [`ma.setdiff1d`](generated/numpy.ma.setdiff1d#numpy.ma.setdiff1d "numpy.ma.setdiff1d")(ar1, ar2[, assume_unique]) | Set difference of 1D arrays with unique elements. 
[`ma.setxor1d`](generated/numpy.ma.setxor1d#numpy.ma.setxor1d "numpy.ma.setxor1d")(ar1, ar2[, assume_unique]) | Set exclusive-or of 1-D arrays with unique elements. [`ma.union1d`](generated/numpy.ma.union1d#numpy.ma.union1d "numpy.ma.union1d")(ar1, ar2) | Union of two arrays. ### Miscellanea [`ma.allequal`](generated/numpy.ma.allequal#numpy.ma.allequal "numpy.ma.allequal")(a, b[, fill_value]) | Return True if all entries of a and b are equal, using fill_value as a truth value where either or both are masked. ---|--- [`ma.allclose`](generated/numpy.ma.allclose#numpy.ma.allclose "numpy.ma.allclose")(a, b[, masked_equal, rtol, atol]) | Returns True if two arrays are element-wise equal within a tolerance. [`ma.amax`](generated/numpy.ma.amax#numpy.ma.amax "numpy.ma.amax")(a[, axis, out, keepdims, initial, where]) | Return the maximum of an array or maximum along an axis. [`ma.amin`](generated/numpy.ma.amin#numpy.ma.amin "numpy.ma.amin")(a[, axis, out, keepdims, initial, where]) | Return the minimum of an array or minimum along an axis. [`ma.apply_along_axis`](generated/numpy.ma.apply_along_axis#numpy.ma.apply_along_axis "numpy.ma.apply_along_axis")(func1d, axis, arr, ...) | Apply a function to 1-D slices along the given axis. [`ma.apply_over_axes`](generated/numpy.ma.apply_over_axes#numpy.ma.apply_over_axes "numpy.ma.apply_over_axes")(func, a, axes) | Apply a function repeatedly over multiple axes. [`ma.arange`](generated/numpy.ma.arange#numpy.ma.arange "numpy.ma.arange")([start,] stop[, step,][, dtype, ...]) | Return evenly spaced values within a given interval. [`ma.choose`](generated/numpy.ma.choose#numpy.ma.choose "numpy.ma.choose")(indices, choices[, out, mode]) | Use an index array to construct a new array from a list of choices. [`ma.compress_nd`](generated/numpy.ma.compress_nd#numpy.ma.compress_nd "numpy.ma.compress_nd")(x[, axis]) | Suppress slices from multiple dimensions which contain masked values. 
[`ma.convolve`](generated/numpy.ma.convolve#numpy.ma.convolve "numpy.ma.convolve")(a, v[, mode, propagate_mask]) | Returns the discrete, linear convolution of two one-dimensional sequences. [`ma.correlate`](generated/numpy.ma.correlate#numpy.ma.correlate "numpy.ma.correlate")(a, v[, mode, propagate_mask]) | Cross-correlation of two 1-dimensional sequences. [`ma.ediff1d`](generated/numpy.ma.ediff1d#numpy.ma.ediff1d "numpy.ma.ediff1d")(arr[, to_end, to_begin]) | Compute the differences between consecutive elements of an array. [`ma.flatten_mask`](generated/numpy.ma.flatten_mask#numpy.ma.flatten_mask "numpy.ma.flatten_mask")(mask) | Returns a completely flattened version of the mask, where nested fields are collapsed. [`ma.flatten_structured_array`](generated/numpy.ma.flatten_structured_array#numpy.ma.flatten_structured_array "numpy.ma.flatten_structured_array")(a) | Flatten a structured array. [`ma.fromflex`](generated/numpy.ma.fromflex#numpy.ma.fromflex "numpy.ma.fromflex")(fxarray) | Build a masked array from a suitable flexible-type array. [`ma.indices`](generated/numpy.ma.indices#numpy.ma.indices "numpy.ma.indices")(dimensions[, dtype, sparse]) | Return an array representing the indices of a grid. [`ma.left_shift`](generated/numpy.ma.left_shift#numpy.ma.left_shift "numpy.ma.left_shift")(a, n) | Shift the bits of an integer to the left. [`ma.ndim`](generated/numpy.ma.ndim#numpy.ma.ndim "numpy.ma.ndim")(obj) | Return the number of dimensions of an array. [`ma.put`](generated/numpy.ma.put#numpy.ma.put "numpy.ma.put")(a, indices, values[, mode]) | Set storage-indexed locations to corresponding values. [`ma.putmask`](generated/numpy.ma.putmask#numpy.ma.putmask "numpy.ma.putmask")(a, mask, values) | Changes elements of an array based on conditional and input values. [`ma.right_shift`](generated/numpy.ma.right_shift#numpy.ma.right_shift "numpy.ma.right_shift")(a, n) | Shift the bits of an integer to the right. 
[`ma.round_`](generated/numpy.ma.round_#numpy.ma.round_ "numpy.ma.round_")(a[, decimals, out]) | Return a copy of a, rounded to 'decimals' places. [`ma.take`](generated/numpy.ma.take#numpy.ma.take "numpy.ma.take")(a, indices[, axis, out, mode]) | [`ma.where`](generated/numpy.ma.where#numpy.ma.where "numpy.ma.where")(condition[, x, y]) | Return a masked array with elements from `x` or `y`, depending on condition. # Mathematical functions ## Trigonometric functions [`sin`](generated/numpy.sin#numpy.sin "numpy.sin")(x, /[, out, where, casting, order, ...]) | Trigonometric sine, element-wise. ---|--- [`cos`](generated/numpy.cos#numpy.cos "numpy.cos")(x, /[, out, where, casting, order, ...]) | Cosine element-wise. [`tan`](generated/numpy.tan#numpy.tan "numpy.tan")(x, /[, out, where, casting, order, ...]) | Compute tangent element-wise. [`arcsin`](generated/numpy.arcsin#numpy.arcsin "numpy.arcsin")(x, /[, out, where, casting, order, ...]) | Inverse sine, element-wise. [`asin`](generated/numpy.asin#numpy.asin "numpy.asin")(x, /[, out, where, casting, order, ...]) | Inverse sine, element-wise. [`arccos`](generated/numpy.arccos#numpy.arccos "numpy.arccos")(x, /[, out, where, casting, order, ...]) | Trigonometric inverse cosine, element-wise. [`acos`](generated/numpy.acos#numpy.acos "numpy.acos")(x, /[, out, where, casting, order, ...]) | Trigonometric inverse cosine, element-wise. [`arctan`](generated/numpy.arctan#numpy.arctan "numpy.arctan")(x, /[, out, where, casting, order, ...]) | Trigonometric inverse tangent, element-wise. [`atan`](generated/numpy.atan#numpy.atan "numpy.atan")(x, /[, out, where, casting, order, ...]) | Trigonometric inverse tangent, element-wise. [`hypot`](generated/numpy.hypot#numpy.hypot "numpy.hypot")(x1, x2, /[, out, where, casting, ...]) | Given the "legs" of a right triangle, return its hypotenuse. 
[`arctan2`](generated/numpy.arctan2#numpy.arctan2 "numpy.arctan2")(x1, x2, /[, out, where, casting, ...]) | Element-wise arc tangent of `x1/x2` choosing the quadrant correctly. [`atan2`](generated/numpy.atan2#numpy.atan2 "numpy.atan2")(x1, x2, /[, out, where, casting, ...]) | Element-wise arc tangent of `x1/x2` choosing the quadrant correctly. [`degrees`](generated/numpy.degrees#numpy.degrees "numpy.degrees")(x, /[, out, where, casting, order, ...]) | Convert angles from radians to degrees. [`radians`](generated/numpy.radians#numpy.radians "numpy.radians")(x, /[, out, where, casting, order, ...]) | Convert angles from degrees to radians. [`unwrap`](generated/numpy.unwrap#numpy.unwrap "numpy.unwrap")(p[, discont, axis, period]) | Unwrap by taking the complement of large deltas with respect to the period. [`deg2rad`](generated/numpy.deg2rad#numpy.deg2rad "numpy.deg2rad")(x, /[, out, where, casting, order, ...]) | Convert angles from degrees to radians. [`rad2deg`](generated/numpy.rad2deg#numpy.rad2deg "numpy.rad2deg")(x, /[, out, where, casting, order, ...]) | Convert angles from radians to degrees. ## Hyperbolic functions [`sinh`](generated/numpy.sinh#numpy.sinh "numpy.sinh")(x, /[, out, where, casting, order, ...]) | Hyperbolic sine, element-wise. ---|--- [`cosh`](generated/numpy.cosh#numpy.cosh "numpy.cosh")(x, /[, out, where, casting, order, ...]) | Hyperbolic cosine, element-wise. [`tanh`](generated/numpy.tanh#numpy.tanh "numpy.tanh")(x, /[, out, where, casting, order, ...]) | Compute hyperbolic tangent element-wise. [`arcsinh`](generated/numpy.arcsinh#numpy.arcsinh "numpy.arcsinh")(x, /[, out, where, casting, order, ...]) | Inverse hyperbolic sine element-wise. [`asinh`](generated/numpy.asinh#numpy.asinh "numpy.asinh")(x, /[, out, where, casting, order, ...]) | Inverse hyperbolic sine element-wise. [`arccosh`](generated/numpy.arccosh#numpy.arccosh "numpy.arccosh")(x, /[, out, where, casting, order, ...]) | Inverse hyperbolic cosine, element-wise. 
[`acosh`](generated/numpy.acosh#numpy.acosh "numpy.acosh")(x, /[, out, where, casting, order, ...]) | Inverse hyperbolic cosine, element-wise. [`arctanh`](generated/numpy.arctanh#numpy.arctanh "numpy.arctanh")(x, /[, out, where, casting, order, ...]) | Inverse hyperbolic tangent element-wise. [`atanh`](generated/numpy.atanh#numpy.atanh "numpy.atanh")(x, /[, out, where, casting, order, ...]) | Inverse hyperbolic tangent element-wise. ## Rounding [`round`](generated/numpy.round#numpy.round "numpy.round")(a[, decimals, out]) | Evenly round to the given number of decimals. ---|--- [`around`](generated/numpy.around#numpy.around "numpy.around")(a[, decimals, out]) | Round an array to the given number of decimals. [`rint`](generated/numpy.rint#numpy.rint "numpy.rint")(x, /[, out, where, casting, order, ...]) | Round elements of the array to the nearest integer. [`fix`](generated/numpy.fix#numpy.fix "numpy.fix")(x[, out]) | Round to nearest integer towards zero. [`floor`](generated/numpy.floor#numpy.floor "numpy.floor")(x, /[, out, where, casting, order, ...]) | Return the floor of the input, element-wise. [`ceil`](generated/numpy.ceil#numpy.ceil "numpy.ceil")(x, /[, out, where, casting, order, ...]) | Return the ceiling of the input, element-wise. [`trunc`](generated/numpy.trunc#numpy.trunc "numpy.trunc")(x, /[, out, where, casting, order, ...]) | Return the truncated value of the input, element-wise. ## Sums, products, differences [`prod`](generated/numpy.prod#numpy.prod "numpy.prod")(a[, axis, dtype, out, keepdims, ...]) | Return the product of array elements over a given axis. ---|--- [`sum`](generated/numpy.sum#numpy.sum "numpy.sum")(a[, axis, dtype, out, keepdims, ...]) | Sum of array elements over a given axis. [`nanprod`](generated/numpy.nanprod#numpy.nanprod "numpy.nanprod")(a[, axis, dtype, out, keepdims, ...]) | Return the product of array elements over a given axis treating Not a Numbers (NaNs) as ones. 
[`nansum`](generated/numpy.nansum#numpy.nansum "numpy.nansum")(a[, axis, dtype, out, keepdims, ...]) | Return the sum of array elements over a given axis treating Not a Numbers (NaNs) as zero. [`cumulative_sum`](generated/numpy.cumulative_sum#numpy.cumulative_sum "numpy.cumulative_sum")(x, /, *[, axis, dtype, out, ...]) | Return the cumulative sum of the elements along a given axis. [`cumulative_prod`](generated/numpy.cumulative_prod#numpy.cumulative_prod "numpy.cumulative_prod")(x, /, *[, axis, dtype, out, ...]) | Return the cumulative product of elements along a given axis. [`cumprod`](generated/numpy.cumprod#numpy.cumprod "numpy.cumprod")(a[, axis, dtype, out]) | Return the cumulative product of elements along a given axis. [`cumsum`](generated/numpy.cumsum#numpy.cumsum "numpy.cumsum")(a[, axis, dtype, out]) | Return the cumulative sum of the elements along a given axis. [`nancumprod`](generated/numpy.nancumprod#numpy.nancumprod "numpy.nancumprod")(a[, axis, dtype, out]) | Return the cumulative product of array elements over a given axis treating Not a Numbers (NaNs) as one. [`nancumsum`](generated/numpy.nancumsum#numpy.nancumsum "numpy.nancumsum")(a[, axis, dtype, out]) | Return the cumulative sum of array elements over a given axis treating Not a Numbers (NaNs) as zero. [`diff`](generated/numpy.diff#numpy.diff "numpy.diff")(a[, n, axis, prepend, append]) | Calculate the n-th discrete difference along the given axis. [`ediff1d`](generated/numpy.ediff1d#numpy.ediff1d "numpy.ediff1d")(ary[, to_end, to_begin]) | The differences between consecutive elements of an array. [`gradient`](generated/numpy.gradient#numpy.gradient "numpy.gradient")(f, *varargs[, axis, edge_order]) | Return the gradient of an N-dimensional array. [`cross`](generated/numpy.cross#numpy.cross "numpy.cross")(a, b[, axisa, axisb, axisc, axis]) | Return the cross product of two (arrays of) vectors. 
[`trapezoid`](generated/numpy.trapezoid#numpy.trapezoid "numpy.trapezoid")(y[, x, dx, axis]) | Integrate along the given axis using the composite trapezoidal rule. ## Exponents and logarithms [`exp`](generated/numpy.exp#numpy.exp "numpy.exp")(x, /[, out, where, casting, order, ...]) | Calculate the exponential of all elements in the input array. ---|--- [`expm1`](generated/numpy.expm1#numpy.expm1 "numpy.expm1")(x, /[, out, where, casting, order, ...]) | Calculate `exp(x) - 1` for all elements in the array. [`exp2`](generated/numpy.exp2#numpy.exp2 "numpy.exp2")(x, /[, out, where, casting, order, ...]) | Calculate `2**p` for all `p` in the input array. [`log`](generated/numpy.log#numpy.log "numpy.log")(x, /[, out, where, casting, order, ...]) | Natural logarithm, element-wise. [`log10`](generated/numpy.log10#numpy.log10 "numpy.log10")(x, /[, out, where, casting, order, ...]) | Return the base 10 logarithm of the input array, element-wise. [`log2`](generated/numpy.log2#numpy.log2 "numpy.log2")(x, /[, out, where, casting, order, ...]) | Base-2 logarithm of `x`. [`log1p`](generated/numpy.log1p#numpy.log1p "numpy.log1p")(x, /[, out, where, casting, order, ...]) | Return the natural logarithm of one plus the input array, element-wise. [`logaddexp`](generated/numpy.logaddexp#numpy.logaddexp "numpy.logaddexp")(x1, x2, /[, out, where, casting, ...]) | Logarithm of the sum of exponentiations of the inputs. [`logaddexp2`](generated/numpy.logaddexp2#numpy.logaddexp2 "numpy.logaddexp2")(x1, x2, /[, out, where, casting, ...]) | Logarithm of the sum of exponentiations of the inputs in base-2. ## Other special functions [`i0`](generated/numpy.i0#numpy.i0 "numpy.i0")(x) | Modified Bessel function of the first kind, order 0. ---|--- [`sinc`](generated/numpy.sinc#numpy.sinc "numpy.sinc")(x) | Return the normalized sinc function. 
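A short sketch of why the compensated variants in the table above exist: `expm1` and `log1p` stay accurate where the naive expressions `exp(x) - 1` and `log(1 + x)` lose precision for small `x`, and `logaddexp` evaluates `log(exp(a) + exp(b))` without overflowing.

```python
import numpy as np

x = 1e-10
# Naive form: 1 + x rounds away most of x's digits before the subtraction.
naive = np.exp(x) - 1
# Compensated ufunc: accurate to full precision for small arguments.
accurate = np.expm1(x)
print(naive, accurate)

print(np.log1p(x))  # accurate log(1 + x) for small x

# exp(1000.0) overflows float64, but logaddexp works in log space:
print(np.logaddexp(1000.0, 1000.0))  # = 1000 + log(2), about 1000.693
```

The same pattern applies to `logaddexp2` in base 2; these are common building blocks for numerically stable log-probability arithmetic.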
## Floating point routines [`signbit`](generated/numpy.signbit#numpy.signbit "numpy.signbit")(x, /[, out, where, casting, order, ...]) | Returns element-wise True where signbit is set (less than zero). ---|--- [`copysign`](generated/numpy.copysign#numpy.copysign "numpy.copysign")(x1, x2, /[, out, where, casting, ...]) | Change the sign of x1 to that of x2, element-wise. [`frexp`](generated/numpy.frexp#numpy.frexp "numpy.frexp")(x[, out1, out2], / [[, out, where, ...]) | Decompose the elements of x into mantissa and twos exponent. [`ldexp`](generated/numpy.ldexp#numpy.ldexp "numpy.ldexp")(x1, x2, /[, out, where, casting, ...]) | Returns x1 * 2**x2, element-wise. [`nextafter`](generated/numpy.nextafter#numpy.nextafter "numpy.nextafter")(x1, x2, /[, out, where, casting, ...]) | Return the next floating-point value after x1 towards x2, element-wise. [`spacing`](generated/numpy.spacing#numpy.spacing "numpy.spacing")(x, /[, out, where, casting, order, ...]) | Return the distance between x and the nearest adjacent number. ## Rational routines [`lcm`](generated/numpy.lcm#numpy.lcm "numpy.lcm")(x1, x2, /[, out, where, casting, order, ...]) | Returns the lowest common multiple of `|x1|` and `|x2|` ---|--- [`gcd`](generated/numpy.gcd#numpy.gcd "numpy.gcd")(x1, x2, /[, out, where, casting, order, ...]) | Returns the greatest common divisor of `|x1|` and `|x2|` ## Arithmetic operations [`add`](generated/numpy.add#numpy.add "numpy.add")(x1, x2, /[, out, where, casting, order, ...]) | Add arguments element-wise. ---|--- [`reciprocal`](generated/numpy.reciprocal#numpy.reciprocal "numpy.reciprocal")(x, /[, out, where, casting, ...]) | Return the reciprocal of the argument, element-wise. [`positive`](generated/numpy.positive#numpy.positive "numpy.positive")(x, /[, out, where, casting, order, ...]) | Numerical positive, element-wise. [`negative`](generated/numpy.negative#numpy.negative "numpy.negative")(x, /[, out, where, casting, order, ...]) | Numerical negative, element-wise. 
[`multiply`](generated/numpy.multiply#numpy.multiply "numpy.multiply")(x1, x2, /[, out, where, casting, ...]) | Multiply arguments element-wise. [`divide`](generated/numpy.divide#numpy.divide "numpy.divide")(x1, x2, /[, out, where, casting, ...]) | Divide arguments element-wise. [`power`](generated/numpy.power#numpy.power "numpy.power")(x1, x2, /[, out, where, casting, ...]) | First array elements raised to powers from second array, element-wise. [`pow`](generated/numpy.pow#numpy.pow "numpy.pow")(x1, x2, /[, out, where, casting, order, ...]) | First array elements raised to powers from second array, element-wise. [`subtract`](generated/numpy.subtract#numpy.subtract "numpy.subtract")(x1, x2, /[, out, where, casting, ...]) | Subtract arguments, element-wise. [`true_divide`](generated/numpy.true_divide#numpy.true_divide "numpy.true_divide")(x1, x2, /[, out, where, ...]) | Divide arguments element-wise. [`floor_divide`](generated/numpy.floor_divide#numpy.floor_divide "numpy.floor_divide")(x1, x2, /[, out, where, ...]) | Return the largest integer smaller or equal to the division of the inputs. [`float_power`](generated/numpy.float_power#numpy.float_power "numpy.float_power")(x1, x2, /[, out, where, ...]) | First array elements raised to powers from second array, element-wise. [`fmod`](generated/numpy.fmod#numpy.fmod "numpy.fmod")(x1, x2, /[, out, where, casting, ...]) | Returns the element-wise remainder of division. [`mod`](generated/numpy.mod#numpy.mod "numpy.mod")(x1, x2, /[, out, where, casting, order, ...]) | Returns the element-wise remainder of division. [`modf`](generated/numpy.modf#numpy.modf "numpy.modf")(x[, out1, out2], / [[, out, where, ...]) | Return the fractional and integral parts of an array, element-wise. [`remainder`](generated/numpy.remainder#numpy.remainder "numpy.remainder")(x1, x2, /[, out, where, casting, ...]) | Returns the element-wise remainder of division. 
[`divmod`](generated/numpy.divmod#numpy.divmod "numpy.divmod")(x1, x2[, out1, out2], / [[, out, ...]) | Return element-wise quotient and remainder simultaneously. ## Handling complex numbers [`angle`](generated/numpy.angle#numpy.angle "numpy.angle")(z[, deg]) | Return the angle of the complex argument. ---|--- [`real`](generated/numpy.real#numpy.real "numpy.real")(val) | Return the real part of the complex argument. [`imag`](generated/numpy.imag#numpy.imag "numpy.imag")(val) | Return the imaginary part of the complex argument. [`conj`](generated/numpy.conj#numpy.conj "numpy.conj")(x, /[, out, where, casting, order, ...]) | Return the complex conjugate, element-wise. [`conjugate`](generated/numpy.conjugate#numpy.conjugate "numpy.conjugate")(x, /[, out, where, casting, ...]) | Return the complex conjugate, element-wise. ## Extrema finding [`maximum`](generated/numpy.maximum#numpy.maximum "numpy.maximum")(x1, x2, /[, out, where, casting, ...]) | Element-wise maximum of array elements. ---|--- [`max`](generated/numpy.max#numpy.max "numpy.max")(a[, axis, out, keepdims, initial, where]) | Return the maximum of an array or maximum along an axis. [`amax`](generated/numpy.amax#numpy.amax "numpy.amax")(a[, axis, out, keepdims, initial, where]) | Return the maximum of an array or maximum along an axis. [`fmax`](generated/numpy.fmax#numpy.fmax "numpy.fmax")(x1, x2, /[, out, where, casting, ...]) | Element-wise maximum of array elements. [`nanmax`](generated/numpy.nanmax#numpy.nanmax "numpy.nanmax")(a[, axis, out, keepdims, initial, where]) | Return the maximum of an array or maximum along an axis, ignoring any NaNs. [`minimum`](generated/numpy.minimum#numpy.minimum "numpy.minimum")(x1, x2, /[, out, where, casting, ...]) | Element-wise minimum of array elements. [`min`](generated/numpy.min#numpy.min "numpy.min")(a[, axis, out, keepdims, initial, where]) | Return the minimum of an array or minimum along an axis. 
[`amin`](generated/numpy.amin#numpy.amin "numpy.amin")(a[, axis, out, keepdims, initial, where]) | Return the minimum of an array or minimum along an axis. [`fmin`](generated/numpy.fmin#numpy.fmin "numpy.fmin")(x1, x2, /[, out, where, casting, ...]) | Element-wise minimum of array elements. [`nanmin`](generated/numpy.nanmin#numpy.nanmin "numpy.nanmin")(a[, axis, out, keepdims, initial, where]) | Return minimum of an array or minimum along an axis, ignoring any NaNs. ## Miscellaneous [`convolve`](generated/numpy.convolve#numpy.convolve "numpy.convolve")(a, v[, mode]) | Returns the discrete, linear convolution of two one-dimensional sequences. ---|--- [`clip`](generated/numpy.clip#numpy.clip "numpy.clip")(a[, a_min, a_max, out, min, max]) | Clip (limit) the values in an array. [`sqrt`](generated/numpy.sqrt#numpy.sqrt "numpy.sqrt")(x, /[, out, where, casting, order, ...]) | Return the non-negative square-root of an array, element-wise. [`cbrt`](generated/numpy.cbrt#numpy.cbrt "numpy.cbrt")(x, /[, out, where, casting, order, ...]) | Return the cube-root of an array, element-wise. [`square`](generated/numpy.square#numpy.square "numpy.square")(x, /[, out, where, casting, order, ...]) | Return the element-wise square of the input. [`absolute`](generated/numpy.absolute#numpy.absolute "numpy.absolute")(x, /[, out, where, casting, order, ...]) | Calculate the absolute value element-wise. [`fabs`](generated/numpy.fabs#numpy.fabs "numpy.fabs")(x, /[, out, where, casting, order, ...]) | Compute the absolute values element-wise. [`sign`](generated/numpy.sign#numpy.sign "numpy.sign")(x, /[, out, where, casting, order, ...]) | Returns an element-wise indication of the sign of a number. [`heaviside`](generated/numpy.heaviside#numpy.heaviside "numpy.heaviside")(x1, x2, /[, out, where, casting, ...]) | Compute the Heaviside step function. 
[`nan_to_num`](generated/numpy.nan_to_num#numpy.nan_to_num "numpy.nan_to_num")(x[, copy, nan, posinf, neginf]) | Replace NaN with zero and infinity with large finite numbers (default behaviour) or with the numbers defined by the user using the [`nan`](constants#numpy.nan "numpy.nan"), `posinf` and/or `neginf` keywords. [`real_if_close`](generated/numpy.real_if_close#numpy.real_if_close "numpy.real_if_close")(a[, tol]) | If input is complex with all imaginary parts close to zero, return real parts. [`interp`](generated/numpy.interp#numpy.interp "numpy.interp")(x, xp, fp[, left, right, period]) | One-dimensional linear interpolation for monotonically increasing sample points. [`bitwise_count`](generated/numpy.bitwise_count#numpy.bitwise_count "numpy.bitwise_count")(x, /[, out, where, casting, ...]) | Computes the number of 1-bits in the absolute value of `x`. # Matrix library (numpy.matlib) This module contains all functions in the [`numpy`](index#module-numpy "numpy") namespace, with the following replacement functions that return [`matrices`](generated/numpy.matrix#numpy.matrix "numpy.matrix") instead of [`ndarrays`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"). Functions that are also in the numpy namespace and return matrices [`matrix`](generated/numpy.matrix#numpy.matrix "numpy.matrix")(data[, dtype, copy]) | Returns a matrix from an array-like object, or from a string of data. ---|--- [`asmatrix`](generated/numpy.asmatrix#numpy.asmatrix "numpy.asmatrix")(data[, dtype]) | Interpret the input as a matrix. [`bmat`](generated/numpy.bmat#numpy.bmat "numpy.bmat")(obj[, ldict, gdict]) | Build a matrix object from a string, nested sequence, or array. Replacement functions in `matlib` [`empty`](generated/numpy.matlib.empty#numpy.matlib.empty "numpy.matlib.empty")(shape[, dtype, order]) | Return a new matrix of given shape and type, without initializing entries. 
---|--- [`zeros`](generated/numpy.matlib.zeros#numpy.matlib.zeros "numpy.matlib.zeros")(shape[, dtype, order]) | Return a matrix of given shape and type, filled with zeros. [`ones`](generated/numpy.matlib.ones#numpy.matlib.ones "numpy.matlib.ones")(shape[, dtype, order]) | Matrix of ones. [`eye`](generated/numpy.matlib.eye#numpy.matlib.eye "numpy.matlib.eye")(n[, M, k, dtype, order]) | Return a matrix with ones on the diagonal and zeros elsewhere. [`identity`](generated/numpy.matlib.identity#numpy.matlib.identity "numpy.matlib.identity")(n[, dtype]) | Returns the square identity matrix of given size. [`repmat`](generated/numpy.matlib.repmat#numpy.matlib.repmat "numpy.matlib.repmat")(a, m, n) | Repeat a 0-D to 2-D array or matrix MxN times. [`rand`](generated/numpy.matlib.rand#numpy.matlib.rand "numpy.matlib.rand")(*args) | Return a matrix of random values with given shape. [`randn`](generated/numpy.matlib.randn#numpy.matlib.randn "numpy.matlib.randn")(*args) | Return a random matrix with data from the "standard normal" distribution. # Miscellaneous routines ## Performance tuning [`setbufsize`](generated/numpy.setbufsize#numpy.setbufsize "numpy.setbufsize")(size) | Set the size of the buffer used in ufuncs. ---|--- [`getbufsize`](generated/numpy.getbufsize#numpy.getbufsize "numpy.getbufsize")() | Return the size of the buffer used in ufuncs. ## Memory ranges [`shares_memory`](generated/numpy.shares_memory#numpy.shares_memory "numpy.shares_memory")(a, b, /[, max_work]) | Determine if two arrays share memory. ---|--- [`may_share_memory`](generated/numpy.may_share_memory#numpy.may_share_memory "numpy.may_share_memory")(a, b, /[, max_work]) | Determine if two arrays might share memory ## Utility [`get_include`](generated/numpy.get_include#numpy.get_include "numpy.get_include")() | Return the directory that contains the NumPy *.h header files. 
---|--- [`show_config`](generated/numpy.show_config#numpy.show_config "numpy.show_config")([mode]) | Show libraries and system information on which NumPy was built and is being used [`show_runtime`](generated/numpy.show_runtime#numpy.show_runtime "numpy.show_runtime")() | Print information about various resources in the system including available intrinsic support and BLAS/LAPACK library in use [`broadcast_shapes`](generated/numpy.broadcast_shapes#numpy.broadcast_shapes "numpy.broadcast_shapes")(*args) | Broadcast the input shapes into a single shape. ## NumPy-specific help function [`info`](generated/numpy.info#numpy.info "numpy.info")([object, maxwidth, output, toplevel]) | Get help information for an array, function, class, or module. ---|--- # Chebyshev Series (numpy.polynomial.chebyshev) This module provides a number of objects (mostly functions) useful for dealing with Chebyshev series, including a [`Chebyshev`](generated/numpy.polynomial.chebyshev.chebyshev#numpy.polynomial.chebyshev.Chebyshev "numpy.polynomial.chebyshev.Chebyshev") class that encapsulates the usual arithmetic operations. (General information on how this module represents and works with such polynomials is in the docstring for its “parent” sub-package, [`numpy.polynomial`](routines.polynomials-package#module-numpy.polynomial "numpy.polynomial")). ## Classes [`Chebyshev`](generated/numpy.polynomial.chebyshev.chebyshev#numpy.polynomial.chebyshev.Chebyshev "numpy.polynomial.chebyshev.Chebyshev")(coef[, domain, window, symbol]) | A Chebyshev series class. ---|--- ## Constants [`chebdomain`](generated/numpy.polynomial.chebyshev.chebdomain#numpy.polynomial.chebyshev.chebdomain "numpy.polynomial.chebyshev.chebdomain") | An array object represents a multidimensional, homogeneous array of fixed-size items. 
---|--- [`chebzero`](generated/numpy.polynomial.chebyshev.chebzero#numpy.polynomial.chebyshev.chebzero "numpy.polynomial.chebyshev.chebzero") | An array object represents a multidimensional, homogeneous array of fixed-size items. [`chebone`](generated/numpy.polynomial.chebyshev.chebone#numpy.polynomial.chebyshev.chebone "numpy.polynomial.chebyshev.chebone") | An array object represents a multidimensional, homogeneous array of fixed-size items. [`chebx`](generated/numpy.polynomial.chebyshev.chebx#numpy.polynomial.chebyshev.chebx "numpy.polynomial.chebyshev.chebx") | An array object represents a multidimensional, homogeneous array of fixed-size items. ## Arithmetic [`chebadd`](generated/numpy.polynomial.chebyshev.chebadd#numpy.polynomial.chebyshev.chebadd "numpy.polynomial.chebyshev.chebadd")(c1, c2) | Add one Chebyshev series to another. ---|--- [`chebsub`](generated/numpy.polynomial.chebyshev.chebsub#numpy.polynomial.chebyshev.chebsub "numpy.polynomial.chebyshev.chebsub")(c1, c2) | Subtract one Chebyshev series from another. [`chebmulx`](generated/numpy.polynomial.chebyshev.chebmulx#numpy.polynomial.chebyshev.chebmulx "numpy.polynomial.chebyshev.chebmulx")(c) | Multiply a Chebyshev series by x. [`chebmul`](generated/numpy.polynomial.chebyshev.chebmul#numpy.polynomial.chebyshev.chebmul "numpy.polynomial.chebyshev.chebmul")(c1, c2) | Multiply one Chebyshev series by another. [`chebdiv`](generated/numpy.polynomial.chebyshev.chebdiv#numpy.polynomial.chebyshev.chebdiv "numpy.polynomial.chebyshev.chebdiv")(c1, c2) | Divide one Chebyshev series by another. [`chebpow`](generated/numpy.polynomial.chebyshev.chebpow#numpy.polynomial.chebyshev.chebpow "numpy.polynomial.chebyshev.chebpow")(c, pow[, maxpower]) | Raise a Chebyshev series to a power. [`chebval`](generated/numpy.polynomial.chebyshev.chebval#numpy.polynomial.chebyshev.chebval "numpy.polynomial.chebyshev.chebval")(x, c[, tensor]) | Evaluate a Chebyshev series at points x. 
[`chebval2d`](generated/numpy.polynomial.chebyshev.chebval2d#numpy.polynomial.chebyshev.chebval2d "numpy.polynomial.chebyshev.chebval2d")(x, y, c) | Evaluate a 2-D Chebyshev series at points (x, y). [`chebval3d`](generated/numpy.polynomial.chebyshev.chebval3d#numpy.polynomial.chebyshev.chebval3d "numpy.polynomial.chebyshev.chebval3d")(x, y, z, c) | Evaluate a 3-D Chebyshev series at points (x, y, z). [`chebgrid2d`](generated/numpy.polynomial.chebyshev.chebgrid2d#numpy.polynomial.chebyshev.chebgrid2d "numpy.polynomial.chebyshev.chebgrid2d")(x, y, c) | Evaluate a 2-D Chebyshev series on the Cartesian product of x and y. [`chebgrid3d`](generated/numpy.polynomial.chebyshev.chebgrid3d#numpy.polynomial.chebyshev.chebgrid3d "numpy.polynomial.chebyshev.chebgrid3d")(x, y, z, c) | Evaluate a 3-D Chebyshev series on the Cartesian product of x, y, and z. ## Calculus [`chebder`](generated/numpy.polynomial.chebyshev.chebder#numpy.polynomial.chebyshev.chebder "numpy.polynomial.chebyshev.chebder")(c[, m, scl, axis]) | Differentiate a Chebyshev series. ---|--- [`chebint`](generated/numpy.polynomial.chebyshev.chebint#numpy.polynomial.chebyshev.chebint "numpy.polynomial.chebyshev.chebint")(c[, m, k, lbnd, scl, axis]) | Integrate a Chebyshev series. ## Misc Functions [`chebfromroots`](generated/numpy.polynomial.chebyshev.chebfromroots#numpy.polynomial.chebyshev.chebfromroots "numpy.polynomial.chebyshev.chebfromroots")(roots) | Generate a Chebyshev series with given roots. ---|--- [`chebroots`](generated/numpy.polynomial.chebyshev.chebroots#numpy.polynomial.chebyshev.chebroots "numpy.polynomial.chebyshev.chebroots")(c) | Compute the roots of a Chebyshev series. [`chebvander`](generated/numpy.polynomial.chebyshev.chebvander#numpy.polynomial.chebyshev.chebvander "numpy.polynomial.chebyshev.chebvander")(x, deg) | Pseudo-Vandermonde matrix of given degree. 
[`chebvander2d`](generated/numpy.polynomial.chebyshev.chebvander2d#numpy.polynomial.chebyshev.chebvander2d "numpy.polynomial.chebyshev.chebvander2d")(x, y, deg) | Pseudo-Vandermonde matrix of given degrees. [`chebvander3d`](generated/numpy.polynomial.chebyshev.chebvander3d#numpy.polynomial.chebyshev.chebvander3d "numpy.polynomial.chebyshev.chebvander3d")(x, y, z, deg) | Pseudo-Vandermonde matrix of given degrees. [`chebgauss`](generated/numpy.polynomial.chebyshev.chebgauss#numpy.polynomial.chebyshev.chebgauss "numpy.polynomial.chebyshev.chebgauss")(deg) | Gauss-Chebyshev quadrature. [`chebweight`](generated/numpy.polynomial.chebyshev.chebweight#numpy.polynomial.chebyshev.chebweight "numpy.polynomial.chebyshev.chebweight")(x) | The weight function of the Chebyshev polynomials. [`chebcompanion`](generated/numpy.polynomial.chebyshev.chebcompanion#numpy.polynomial.chebyshev.chebcompanion "numpy.polynomial.chebyshev.chebcompanion")(c) | Return the scaled companion matrix of c. [`chebfit`](generated/numpy.polynomial.chebyshev.chebfit#numpy.polynomial.chebyshev.chebfit "numpy.polynomial.chebyshev.chebfit")(x, y, deg[, rcond, full, w]) | Least squares fit of Chebyshev series to data. [`chebpts1`](generated/numpy.polynomial.chebyshev.chebpts1#numpy.polynomial.chebyshev.chebpts1 "numpy.polynomial.chebyshev.chebpts1")(npts) | Chebyshev points of the first kind. [`chebpts2`](generated/numpy.polynomial.chebyshev.chebpts2#numpy.polynomial.chebyshev.chebpts2 "numpy.polynomial.chebyshev.chebpts2")(npts) | Chebyshev points of the second kind. [`chebtrim`](generated/numpy.polynomial.chebyshev.chebtrim#numpy.polynomial.chebyshev.chebtrim "numpy.polynomial.chebyshev.chebtrim")(c[, tol]) | Remove "small" "trailing" coefficients from a polynomial. [`chebline`](generated/numpy.polynomial.chebyshev.chebline#numpy.polynomial.chebyshev.chebline "numpy.polynomial.chebyshev.chebline")(off, scl) | Chebyshev series whose graph is a straight line. 
[`cheb2poly`](generated/numpy.polynomial.chebyshev.cheb2poly#numpy.polynomial.chebyshev.cheb2poly "numpy.polynomial.chebyshev.cheb2poly")(c) | Convert a Chebyshev series to a polynomial. [`poly2cheb`](generated/numpy.polynomial.chebyshev.poly2cheb#numpy.polynomial.chebyshev.poly2cheb "numpy.polynomial.chebyshev.poly2cheb")(pol) | Convert a polynomial to a Chebyshev series. [`chebinterpolate`](generated/numpy.polynomial.chebyshev.chebinterpolate#numpy.polynomial.chebyshev.chebinterpolate "numpy.polynomial.chebyshev.chebinterpolate")(func, deg[, args]) | Interpolate a function at the Chebyshev points of the first kind. ## See also [`numpy.polynomial`](routines.polynomials-package#module-numpy.polynomial "numpy.polynomial") ## Notes The implementations of multiplication, division, integration, and differentiation use the algebraic identities [1]: \\[\begin{split}T_n(x) = \frac{z^n + z^{-n}}{2} \\\ z\frac{dx}{dz} = \frac{z - z^{-1}}{2}.\end{split}\\] where \\[x = \frac{z + z^{-1}}{2}.\\] These identities allow a Chebyshev series to be expressed as a finite, symmetric Laurent series. In this module, this sort of Laurent series is referred to as a “z-series.” ## References [1] A. T. Benjamin, et al., “Combinatorial Trigonometry with Chebyshev Polynomials,” _Journal of Statistical Planning and Inference 14_, 2008, pg. 4. # Using the convenience classes The convenience classes provided by the polynomial package are: Name | Provides ---|--- [`Polynomial`](generated/numpy.polynomial.polynomial.polynomial#numpy.polynomial.polynomial.Polynomial "numpy.polynomial.polynomial.Polynomial") | Power series [`Chebyshev`](generated/numpy.polynomial.chebyshev.chebyshev#numpy.polynomial.chebyshev.Chebyshev "numpy.polynomial.chebyshev.Chebyshev") | Chebyshev series [`Legendre`](generated/numpy.polynomial.legendre.legendre#numpy.polynomial.legendre.Legendre "numpy.polynomial.legendre.Legendre") | Legendre series [`Laguerre`](generated/numpy.polynomial.laguerre.laguerre#numpy.polynomial.laguerre.Laguerre "numpy.polynomial.laguerre.Laguerre") | Laguerre series [`Hermite`](generated/numpy.polynomial.hermite.hermite#numpy.polynomial.hermite.Hermite "numpy.polynomial.hermite.Hermite") | Hermite series [`HermiteE`](generated/numpy.polynomial.hermite_e.hermitee#numpy.polynomial.hermite_e.HermiteE "numpy.polynomial.hermite_e.HermiteE") | HermiteE series The series in this context are finite sums of the corresponding polynomial basis functions multiplied by coefficients. For instance, a power series looks like \\[p(x) = 1 + 2x + 3x^2\\] and has coefficients \\([1, 2, 3]\\). The Chebyshev series with the same coefficients looks like \\[p(x) = 1 T_0(x) + 2 T_1(x) + 3 T_2(x)\\] and more generally \\[p(x) = \sum_{i=0}^n c_i T_i(x)\\] where in this case the \\(T_n\\) are the Chebyshev functions of degree \\(n\\), but could just as easily be the basis functions of any of the other classes. The convention for all the classes is that the coefficient \\(c[i]\\) goes with the basis function of degree i. All of the classes are immutable and have the same methods; in particular, they implement the Python numeric operators +, -, *, //, %, divmod, **, ==, and !=. The last two can be a bit problematic due to floating point roundoff errors.
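A quick check of this convention, as a minimal sketch using only the classes named above: the Chebyshev series with coefficients \\([1, 2, 3]\\) evaluates to the same values as the explicit sum over basis functions of increasing degree.

```python
import numpy as np
from numpy.polynomial import Chebyshev as T

# c[i] multiplies the basis function of degree i.
coef = [1, 2, 3]
p = T(coef)

x = np.linspace(-1, 1, 5)
# Explicit sum 1*T_0(x) + 2*T_1(x) + 3*T_2(x), built from the basis polynomials.
explicit = sum(c * T.basis(i)(x) for i, c in enumerate(coef))

assert np.allclose(p(x), explicit)
```

The same check works for any of the other classes by swapping the import.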
We now give a quick demonstration of the various operations using NumPy version 1.7.0. ## Basics First we need a polynomial class and a polynomial instance to play with. The classes can be imported directly from the polynomial package or from the module of the relevant type. Here we import from the package and use the conventional Polynomial class because of its familiarity: >>> from numpy.polynomial import Polynomial as P >>> p = P([1,2,3]) >>> p Polynomial([1., 2., 3.], domain=[-1., 1.], window=[-1., 1.], symbol='x') Note that there are three parts to the long version of the printout. The first is the coefficients, the second is the domain, and the third is the window: >>> p.coef array([1., 2., 3.]) >>> p.domain array([-1., 1.]) >>> p.window array([-1., 1.]) Printing a polynomial yields the polynomial expression in a more familiar format: >>> print(p) 1.0 + 2.0·x + 3.0·x² Note that the string representation of polynomials uses Unicode characters by default (except on Windows) to express powers and subscripts. An ASCII-based representation is also available (default on Windows). The polynomial string format can be toggled at the package level with the [`set_default_printstyle`](generated/numpy.polynomial.set_default_printstyle#numpy.polynomial.set_default_printstyle "numpy.polynomial.set_default_printstyle") function: >>> np.polynomial.set_default_printstyle('ascii') >>> print(p) 1.0 + 2.0 x + 3.0 x**2 or controlled for individual polynomial instances with string formatting: >>> print(f"{p:unicode}") 1.0 + 2.0·x + 3.0·x² We will deal with the domain and window when we get to fitting; for the moment we ignore them and run through the basic algebraic and arithmetic operations.
Addition and Subtraction: >>> p + p Polynomial([2., 4., 6.], domain=[-1., 1.], window=[-1., 1.], symbol='x') >>> p - p Polynomial([0.], domain=[-1., 1.], window=[-1., 1.], symbol='x') Multiplication: >>> p * p Polynomial([ 1., 4., 10., 12., 9.], domain=[-1., 1.], window=[-1., 1.], symbol='x') Powers: >>> p**2 Polynomial([ 1., 4., 10., 12., 9.], domain=[-1., 1.], window=[-1., 1.], symbol='x') Division: Floor division, ‘//’, is the division operator for the polynomial classes; polynomials are treated like integers in this regard. For Python versions < 3.x the ‘/’ operator maps to ‘//’, as it does for Python itself; for later versions ‘/’ works only for division by scalars. At some point it will be deprecated: >>> p // P([-1, 1]) Polynomial([5., 3.], domain=[-1., 1.], window=[-1., 1.], symbol='x') Remainder: >>> p % P([-1, 1]) Polynomial([6.], domain=[-1., 1.], window=[-1., 1.], symbol='x') Divmod: >>> quo, rem = divmod(p, P([-1, 1])) >>> quo Polynomial([5., 3.], domain=[-1., 1.], window=[-1., 1.], symbol='x') >>> rem Polynomial([6.], domain=[-1., 1.], window=[-1., 1.], symbol='x') Evaluation: >>> x = np.arange(5) >>> p(x) array([ 1., 6., 17., 34., 57.]) >>> x = np.arange(6).reshape(3,2) >>> p(x) array([[ 1., 6.], [17., 34.], [57., 86.]]) Substitution: Substitute a polynomial for x and expand the result. Here we substitute p into itself, leading to a new polynomial of degree 4 after expansion.
If the polynomials are regarded as functions this is composition of functions: >>> p(p) Polynomial([ 6., 16., 36., 36., 27.], domain=[-1., 1.], window=[-1., 1.], symbol='x') Roots: >>> p.roots() array([-0.33333333-0.47140452j, -0.33333333+0.47140452j]) It isn’t always convenient to explicitly use Polynomial instances, so tuples, lists, arrays, and scalars are automatically cast in the arithmetic operations: >>> p + [1, 2, 3] Polynomial([2., 4., 6.], domain=[-1., 1.], window=[-1., 1.], symbol='x') >>> [1, 2, 3] * p Polynomial([ 1., 4., 10., 12., 9.], domain=[-1., 1.], window=[-1., 1.], symbol='x') >>> p / 2 Polynomial([0.5, 1. , 1.5], domain=[-1., 1.], window=[-1., 1.], symbol='x') Polynomials that differ in domain, window, or class can’t be mixed in arithmetic: >>> from numpy.polynomial import Chebyshev as T >>> p + P([1], domain=[0,1]) Traceback (most recent call last): File "", line 1, in File "", line 213, in __add__ TypeError: Domains differ >>> p + P([1], window=[0,1]) Traceback (most recent call last): File "", line 1, in File "", line 215, in __add__ TypeError: Windows differ >>> p + T([1]) Traceback (most recent call last): File "", line 1, in File "", line 211, in __add__ TypeError: Polynomial types differ But different types can be used for substitution. In fact, this is how conversion of Polynomial classes among themselves is done for type, domain, and window casting: >>> p(T([0, 1])) Chebyshev([2.5, 2. , 1.5], domain=[-1., 1.], window=[-1., 1.], symbol='x') Which gives the polynomial `p` in Chebyshev form. This works because \\(T_1(x) = x\\) and substituting \\(x\\) for \\(x\\) doesn’t change the original polynomial. However, all the multiplications and divisions will be done using Chebyshev series, hence the type of the result. It is intended that all polynomial instances are immutable, therefore augmented operations (`+=`, `-=`, etc.) 
and any other functionality that would violate the immutability of a polynomial instance are intentionally unimplemented. ## Calculus Polynomial instances can be integrated and differentiated: >>> from numpy.polynomial import Polynomial as P >>> p = P([2, 6]) >>> p.integ() Polynomial([0., 2., 3.], domain=[-1., 1.], window=[-1., 1.], symbol='x') >>> p.integ(2) Polynomial([0., 0., 1., 1.], domain=[-1., 1.], window=[-1., 1.], symbol='x') The first example integrates `p` once; the second integrates it twice. By default, the lower bound of the integration and the integration constant are 0, but both can be specified: >>> p.integ(lbnd=-1) Polynomial([-1., 2., 3.], domain=[-1., 1.], window=[-1., 1.], symbol='x') >>> p.integ(lbnd=-1, k=1) Polynomial([0., 2., 3.], domain=[-1., 1.], window=[-1., 1.], symbol='x') In the first case the lower bound of the integration is set to -1 and the integration constant is 0. In the second the constant of integration is set to 1 as well. Differentiation is simpler since the only option is the number of times the polynomial is differentiated: >>> p = P([1, 2, 3]) >>> p.deriv(1) Polynomial([2., 6.], domain=[-1., 1.], window=[-1., 1.], symbol='x') >>> p.deriv(2) Polynomial([6.], domain=[-1., 1.], window=[-1., 1.], symbol='x') ## Other polynomial constructors Constructing polynomials by specifying coefficients is just one way of obtaining a polynomial instance; they may also be created by specifying their roots, by conversion from other polynomial types, and by least squares fits. Fitting is discussed in its own section; the other methods are demonstrated below: >>> from numpy.polynomial import Polynomial as P >>> from numpy.polynomial import Chebyshev as T >>> p = P.fromroots([1, 2, 3]) >>> p Polynomial([-6., 11., -6., 1.], domain=[-1., 1.], window=[-1., 1.], symbol='x') >>> p.convert(kind=T) Chebyshev([-9. , 11.75, -3.
, 0.25], domain=[-1., 1.], window=[-1., 1.], symbol='x') The convert method can also convert domain and window: >>> p.convert(kind=T, domain=[0, 1]) Chebyshev([-2.4375 , 2.96875, -0.5625 , 0.03125], domain=[0., 1.], window=[-1., 1.], symbol='x') >>> p.convert(kind=P, domain=[0, 1]) Polynomial([-1.875, 2.875, -1.125, 0.125], domain=[0., 1.], window=[-1., 1.], symbol='x') In numpy versions >= 1.7.0 the `basis` and `cast` class methods are also available. The cast method works like the convert method while the basis method returns the basis polynomial of given degree: >>> P.basis(3) Polynomial([0., 0., 0., 1.], domain=[-1., 1.], window=[-1., 1.], symbol='x') >>> T.cast(p) Chebyshev([-9. , 11.75, -3. , 0.25], domain=[-1., 1.], window=[-1., 1.], symbol='x') Conversions between types can be useful, but they are _not_ recommended for routine use. The loss of numerical precision in passing from a Chebyshev series of degree 50 to a Polynomial series of the same degree can make the results of numerical evaluation essentially random. ## Fitting Fitting is the reason that the `domain` and `window` attributes are part of the convenience classes. To illustrate the problem, the values of the Chebyshev polynomials up to degree 5 are plotted below. >>> import matplotlib.pyplot as plt >>> from numpy.polynomial import Chebyshev as T >>> x = np.linspace(-1, 1, 100) >>> for i in range(6): ... ax = plt.plot(x, T.basis(i)(x), lw=2, label=f"$T_{i}$") ... >>> plt.legend(loc="upper left") >>> plt.show() In the range -1 <= `x` <= 1 they are nice, equiripple functions lying between +/- 1. The same plots over the range -2 <= `x` <= 2 look very different: >>> import matplotlib.pyplot as plt >>> from numpy.polynomial import Chebyshev as T >>> x = np.linspace(-2, 2, 100) >>> for i in range(6): ... ax = plt.plot(x, T.basis(i)(x), lw=2, label=f"$T_{i}$") ... >>> plt.legend(loc="lower right") >>> plt.show() As can be seen, the “good” parts have shrunk to insignificance.
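The remedy, as the fitting discussion explains, is a linear map between the `domain` and the `window`; the `mapparms` method exposes the offset and scale of that map. A minimal sketch:

```python
import numpy as np
from numpy.polynomial import Chebyshev as T

# A Chebyshev series whose domain differs from the default window [-1, 1].
p = T([1, 2, 3], domain=[0, 2 * np.pi])
off, scl = p.mapparms()  # the basis is evaluated at off + scl*x

# The linear map carries the domain endpoints onto the window endpoints.
assert np.isclose(off + scl * p.domain[0], p.window[0])
assert np.isclose(off + scl * p.domain[1], p.window[1])
```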
In using Chebyshev polynomials for fitting we want to use the region where `x` is between -1 and 1 and that is what the `window` specifies. However, it is unlikely that the data to be fit has all its data points in that interval, so we use `domain` to specify the interval where the data points lie. When the fit is done, the domain is first mapped to the window by a linear transformation and the usual least squares fit is done using the mapped data points. The window and domain of the fit are part of the returned series and are automatically used when computing values, derivatives, and such. If they aren’t specified in the call the fitting routine will use the default window and the smallest domain that holds all the data points. This is illustrated below for a fit to a noisy sine curve. >>> import numpy as np >>> import matplotlib.pyplot as plt >>> from numpy.polynomial import Chebyshev as T >>> np.random.seed(11) >>> x = np.linspace(0, 2*np.pi, 20) >>> y = np.sin(x) + np.random.normal(scale=.1, size=x.shape) >>> p = T.fit(x, y, 5) >>> plt.plot(x, y, 'o') >>> xx, yy = p.linspace() >>> plt.plot(xx, yy, lw=2) >>> p.domain array([0. , 6.28318531]) >>> p.window array([-1., 1.]) >>> plt.show() # Hermite Series, “Physicists” (numpy.polynomial.hermite) This module provides a number of objects (mostly functions) useful for dealing with Hermite series, including a [`Hermite`](generated/numpy.polynomial.hermite.hermite#numpy.polynomial.hermite.Hermite "numpy.polynomial.hermite.Hermite") class that encapsulates the usual arithmetic operations. (General information on how this module represents and works with such polynomials is in the docstring for its “parent” sub-package, [`numpy.polynomial`](routines.polynomials-package#module-numpy.polynomial "numpy.polynomial")). ## Classes [`Hermite`](generated/numpy.polynomial.hermite.hermite#numpy.polynomial.hermite.Hermite "numpy.polynomial.hermite.Hermite")(coef[, domain, window, symbol]) | An Hermite series class. 
---|--- ## Constants [`hermdomain`](generated/numpy.polynomial.hermite.hermdomain#numpy.polynomial.hermite.hermdomain "numpy.polynomial.hermite.hermdomain") | An array object represents a multidimensional, homogeneous array of fixed-size items. ---|--- [`hermzero`](generated/numpy.polynomial.hermite.hermzero#numpy.polynomial.hermite.hermzero "numpy.polynomial.hermite.hermzero") | An array object represents a multidimensional, homogeneous array of fixed-size items. [`hermone`](generated/numpy.polynomial.hermite.hermone#numpy.polynomial.hermite.hermone "numpy.polynomial.hermite.hermone") | An array object represents a multidimensional, homogeneous array of fixed-size items. [`hermx`](generated/numpy.polynomial.hermite.hermx#numpy.polynomial.hermite.hermx "numpy.polynomial.hermite.hermx") | An array object represents a multidimensional, homogeneous array of fixed-size items. ## Arithmetic [`hermadd`](generated/numpy.polynomial.hermite.hermadd#numpy.polynomial.hermite.hermadd "numpy.polynomial.hermite.hermadd")(c1, c2) | Add one Hermite series to another. ---|--- [`hermsub`](generated/numpy.polynomial.hermite.hermsub#numpy.polynomial.hermite.hermsub "numpy.polynomial.hermite.hermsub")(c1, c2) | Subtract one Hermite series from another. [`hermmulx`](generated/numpy.polynomial.hermite.hermmulx#numpy.polynomial.hermite.hermmulx "numpy.polynomial.hermite.hermmulx")(c) | Multiply a Hermite series by x. [`hermmul`](generated/numpy.polynomial.hermite.hermmul#numpy.polynomial.hermite.hermmul "numpy.polynomial.hermite.hermmul")(c1, c2) | Multiply one Hermite series by another. [`hermdiv`](generated/numpy.polynomial.hermite.hermdiv#numpy.polynomial.hermite.hermdiv "numpy.polynomial.hermite.hermdiv")(c1, c2) | Divide one Hermite series by another. [`hermpow`](generated/numpy.polynomial.hermite.hermpow#numpy.polynomial.hermite.hermpow "numpy.polynomial.hermite.hermpow")(c, pow[, maxpower]) | Raise a Hermite series to a power. 
[`hermval`](generated/numpy.polynomial.hermite.hermval#numpy.polynomial.hermite.hermval "numpy.polynomial.hermite.hermval")(x, c[, tensor]) | Evaluate an Hermite series at points x. [`hermval2d`](generated/numpy.polynomial.hermite.hermval2d#numpy.polynomial.hermite.hermval2d "numpy.polynomial.hermite.hermval2d")(x, y, c) | Evaluate a 2-D Hermite series at points (x, y). [`hermval3d`](generated/numpy.polynomial.hermite.hermval3d#numpy.polynomial.hermite.hermval3d "numpy.polynomial.hermite.hermval3d")(x, y, z, c) | Evaluate a 3-D Hermite series at points (x, y, z). [`hermgrid2d`](generated/numpy.polynomial.hermite.hermgrid2d#numpy.polynomial.hermite.hermgrid2d "numpy.polynomial.hermite.hermgrid2d")(x, y, c) | Evaluate a 2-D Hermite series on the Cartesian product of x and y. [`hermgrid3d`](generated/numpy.polynomial.hermite.hermgrid3d#numpy.polynomial.hermite.hermgrid3d "numpy.polynomial.hermite.hermgrid3d")(x, y, z, c) | Evaluate a 3-D Hermite series on the Cartesian product of x, y, and z. ## Calculus [`hermder`](generated/numpy.polynomial.hermite.hermder#numpy.polynomial.hermite.hermder "numpy.polynomial.hermite.hermder")(c[, m, scl, axis]) | Differentiate a Hermite series. ---|--- [`hermint`](generated/numpy.polynomial.hermite.hermint#numpy.polynomial.hermite.hermint "numpy.polynomial.hermite.hermint")(c[, m, k, lbnd, scl, axis]) | Integrate a Hermite series. ## Misc Functions [`hermfromroots`](generated/numpy.polynomial.hermite.hermfromroots#numpy.polynomial.hermite.hermfromroots "numpy.polynomial.hermite.hermfromroots")(roots) | Generate a Hermite series with given roots. ---|--- [`hermroots`](generated/numpy.polynomial.hermite.hermroots#numpy.polynomial.hermite.hermroots "numpy.polynomial.hermite.hermroots")(c) | Compute the roots of a Hermite series. [`hermvander`](generated/numpy.polynomial.hermite.hermvander#numpy.polynomial.hermite.hermvander "numpy.polynomial.hermite.hermvander")(x, deg) | Pseudo-Vandermonde matrix of given degree. 
[`hermvander2d`](generated/numpy.polynomial.hermite.hermvander2d#numpy.polynomial.hermite.hermvander2d "numpy.polynomial.hermite.hermvander2d")(x, y, deg) | Pseudo-Vandermonde matrix of given degrees. [`hermvander3d`](generated/numpy.polynomial.hermite.hermvander3d#numpy.polynomial.hermite.hermvander3d "numpy.polynomial.hermite.hermvander3d")(x, y, z, deg) | Pseudo-Vandermonde matrix of given degrees. [`hermgauss`](generated/numpy.polynomial.hermite.hermgauss#numpy.polynomial.hermite.hermgauss "numpy.polynomial.hermite.hermgauss")(deg) | Gauss-Hermite quadrature. [`hermweight`](generated/numpy.polynomial.hermite.hermweight#numpy.polynomial.hermite.hermweight "numpy.polynomial.hermite.hermweight")(x) | Weight function of the Hermite polynomials. [`hermcompanion`](generated/numpy.polynomial.hermite.hermcompanion#numpy.polynomial.hermite.hermcompanion "numpy.polynomial.hermite.hermcompanion")(c) | Return the scaled companion matrix of c. [`hermfit`](generated/numpy.polynomial.hermite.hermfit#numpy.polynomial.hermite.hermfit "numpy.polynomial.hermite.hermfit")(x, y, deg[, rcond, full, w]) | Least squares fit of Hermite series to data. [`hermtrim`](generated/numpy.polynomial.hermite.hermtrim#numpy.polynomial.hermite.hermtrim "numpy.polynomial.hermite.hermtrim")(c[, tol]) | Remove "small" "trailing" coefficients from a polynomial. [`hermline`](generated/numpy.polynomial.hermite.hermline#numpy.polynomial.hermite.hermline "numpy.polynomial.hermite.hermline")(off, scl) | Hermite series whose graph is a straight line. [`herm2poly`](generated/numpy.polynomial.hermite.herm2poly#numpy.polynomial.hermite.herm2poly "numpy.polynomial.hermite.herm2poly")(c) | Convert a Hermite series to a polynomial. [`poly2herm`](generated/numpy.polynomial.hermite.poly2herm#numpy.polynomial.hermite.poly2herm "numpy.polynomial.hermite.poly2herm")(pol) | Convert a polynomial to a Hermite series. 
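As a small illustration of the quadrature routine listed above (a minimal sketch): `hermgauss` returns sample points and weights for the physicists' weight function \\(e^{-x^2}\\), so the weights alone integrate the weight function to \\(\sqrt{\pi}\\), and polynomial integrands up to degree 2·deg − 1 are handled exactly.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

deg = 10
x, w = hermgauss(deg)  # points and weights for the weight function exp(-x**2)

# Integral of exp(-x**2) over the real line is sqrt(pi): just the sum of weights.
assert np.isclose(w.sum(), np.sqrt(np.pi))

# Exact for polynomials up to degree 2*deg - 1,
# e.g. integral of x**2 * exp(-x**2) is sqrt(pi)/2.
assert np.isclose((w * x**2).sum(), np.sqrt(np.pi) / 2)
```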
## See also [`numpy.polynomial`](routines.polynomials-package#module-numpy.polynomial "numpy.polynomial") # HermiteE Series, “Probabilists” (numpy.polynomial.hermite_e) This module provides a number of objects (mostly functions) useful for dealing with Hermite_e series, including a [`HermiteE`](generated/numpy.polynomial.hermite_e.hermitee#numpy.polynomial.hermite_e.HermiteE "numpy.polynomial.hermite_e.HermiteE") class that encapsulates the usual arithmetic operations. (General information on how this module represents and works with such polynomials is in the docstring for its “parent” sub-package, [`numpy.polynomial`](routines.polynomials-package#module-numpy.polynomial "numpy.polynomial")). ## Classes [`HermiteE`](generated/numpy.polynomial.hermite_e.hermitee#numpy.polynomial.hermite_e.HermiteE "numpy.polynomial.hermite_e.HermiteE")(coef[, domain, window, symbol]) | An HermiteE series class. ---|--- ## Constants [`hermedomain`](generated/numpy.polynomial.hermite_e.hermedomain#numpy.polynomial.hermite_e.hermedomain "numpy.polynomial.hermite_e.hermedomain") | An array object represents a multidimensional, homogeneous array of fixed-size items. ---|--- [`hermezero`](generated/numpy.polynomial.hermite_e.hermezero#numpy.polynomial.hermite_e.hermezero "numpy.polynomial.hermite_e.hermezero") | An array object represents a multidimensional, homogeneous array of fixed-size items. [`hermeone`](generated/numpy.polynomial.hermite_e.hermeone#numpy.polynomial.hermite_e.hermeone "numpy.polynomial.hermite_e.hermeone") | An array object represents a multidimensional, homogeneous array of fixed-size items. [`hermex`](generated/numpy.polynomial.hermite_e.hermex#numpy.polynomial.hermite_e.hermex "numpy.polynomial.hermite_e.hermex") | An array object represents a multidimensional, homogeneous array of fixed-size items. 
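The descriptions above come from the generic `ndarray` docstring; concretely, these constants are just small coefficient arrays. A quick sketch:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Each constant is the HermiteE coefficient array for the named object.
assert np.array_equal(He.hermedomain, [-1, 1])  # default domain
assert np.array_equal(He.hermezero, [0])        # the series for 0
assert np.array_equal(He.hermeone, [1])         # the series for 1
assert np.array_equal(He.hermex, [0, 1])        # the series for x
```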
## Arithmetic [`hermeadd`](generated/numpy.polynomial.hermite_e.hermeadd#numpy.polynomial.hermite_e.hermeadd "numpy.polynomial.hermite_e.hermeadd")(c1, c2) | Add one Hermite series to another. ---|--- [`hermesub`](generated/numpy.polynomial.hermite_e.hermesub#numpy.polynomial.hermite_e.hermesub "numpy.polynomial.hermite_e.hermesub")(c1, c2) | Subtract one Hermite series from another. [`hermemulx`](generated/numpy.polynomial.hermite_e.hermemulx#numpy.polynomial.hermite_e.hermemulx "numpy.polynomial.hermite_e.hermemulx")(c) | Multiply a Hermite series by x. [`hermemul`](generated/numpy.polynomial.hermite_e.hermemul#numpy.polynomial.hermite_e.hermemul "numpy.polynomial.hermite_e.hermemul")(c1, c2) | Multiply one Hermite series by another. [`hermediv`](generated/numpy.polynomial.hermite_e.hermediv#numpy.polynomial.hermite_e.hermediv "numpy.polynomial.hermite_e.hermediv")(c1, c2) | Divide one Hermite series by another. [`hermepow`](generated/numpy.polynomial.hermite_e.hermepow#numpy.polynomial.hermite_e.hermepow "numpy.polynomial.hermite_e.hermepow")(c, pow[, maxpower]) | Raise a Hermite series to a power. [`hermeval`](generated/numpy.polynomial.hermite_e.hermeval#numpy.polynomial.hermite_e.hermeval "numpy.polynomial.hermite_e.hermeval")(x, c[, tensor]) | Evaluate an HermiteE series at points x. [`hermeval2d`](generated/numpy.polynomial.hermite_e.hermeval2d#numpy.polynomial.hermite_e.hermeval2d "numpy.polynomial.hermite_e.hermeval2d")(x, y, c) | Evaluate a 2-D HermiteE series at points (x, y). [`hermeval3d`](generated/numpy.polynomial.hermite_e.hermeval3d#numpy.polynomial.hermite_e.hermeval3d "numpy.polynomial.hermite_e.hermeval3d")(x, y, z, c) | Evaluate a 3-D Hermite_e series at points (x, y, z). [`hermegrid2d`](generated/numpy.polynomial.hermite_e.hermegrid2d#numpy.polynomial.hermite_e.hermegrid2d "numpy.polynomial.hermite_e.hermegrid2d")(x, y, c) | Evaluate a 2-D HermiteE series on the Cartesian product of x and y. 
[`hermegrid3d`](generated/numpy.polynomial.hermite_e.hermegrid3d#numpy.polynomial.hermite_e.hermegrid3d "numpy.polynomial.hermite_e.hermegrid3d")(x, y, z, c) | Evaluate a 3-D HermiteE series on the Cartesian product of x, y, and z. ## Calculus [`hermeder`](generated/numpy.polynomial.hermite_e.hermeder#numpy.polynomial.hermite_e.hermeder "numpy.polynomial.hermite_e.hermeder")(c[, m, scl, axis]) | Differentiate a Hermite_e series. ---|--- [`hermeint`](generated/numpy.polynomial.hermite_e.hermeint#numpy.polynomial.hermite_e.hermeint "numpy.polynomial.hermite_e.hermeint")(c[, m, k, lbnd, scl, axis]) | Integrate a Hermite_e series. ## Misc Functions [`hermefromroots`](generated/numpy.polynomial.hermite_e.hermefromroots#numpy.polynomial.hermite_e.hermefromroots "numpy.polynomial.hermite_e.hermefromroots")(roots) | Generate a HermiteE series with given roots. ---|--- [`hermeroots`](generated/numpy.polynomial.hermite_e.hermeroots#numpy.polynomial.hermite_e.hermeroots "numpy.polynomial.hermite_e.hermeroots")(c) | Compute the roots of a HermiteE series. [`hermevander`](generated/numpy.polynomial.hermite_e.hermevander#numpy.polynomial.hermite_e.hermevander "numpy.polynomial.hermite_e.hermevander")(x, deg) | Pseudo-Vandermonde matrix of given degree. [`hermevander2d`](generated/numpy.polynomial.hermite_e.hermevander2d#numpy.polynomial.hermite_e.hermevander2d "numpy.polynomial.hermite_e.hermevander2d")(x, y, deg) | Pseudo-Vandermonde matrix of given degrees. [`hermevander3d`](generated/numpy.polynomial.hermite_e.hermevander3d#numpy.polynomial.hermite_e.hermevander3d "numpy.polynomial.hermite_e.hermevander3d")(x, y, z, deg) | Pseudo-Vandermonde matrix of given degrees. [`hermegauss`](generated/numpy.polynomial.hermite_e.hermegauss#numpy.polynomial.hermite_e.hermegauss "numpy.polynomial.hermite_e.hermegauss")(deg) | Gauss-HermiteE quadrature. 
[`hermeweight`](generated/numpy.polynomial.hermite_e.hermeweight#numpy.polynomial.hermite_e.hermeweight "numpy.polynomial.hermite_e.hermeweight")(x) | Weight function of the Hermite_e polynomials. [`hermecompanion`](generated/numpy.polynomial.hermite_e.hermecompanion#numpy.polynomial.hermite_e.hermecompanion "numpy.polynomial.hermite_e.hermecompanion")(c) | Return the scaled companion matrix of c. [`hermefit`](generated/numpy.polynomial.hermite_e.hermefit#numpy.polynomial.hermite_e.hermefit "numpy.polynomial.hermite_e.hermefit")(x, y, deg[, rcond, full, w]) | Least squares fit of Hermite series to data. [`hermetrim`](generated/numpy.polynomial.hermite_e.hermetrim#numpy.polynomial.hermite_e.hermetrim "numpy.polynomial.hermite_e.hermetrim")(c[, tol]) | Remove "small" "trailing" coefficients from a polynomial. [`hermeline`](generated/numpy.polynomial.hermite_e.hermeline#numpy.polynomial.hermite_e.hermeline "numpy.polynomial.hermite_e.hermeline")(off, scl) | Hermite series whose graph is a straight line. [`herme2poly`](generated/numpy.polynomial.hermite_e.herme2poly#numpy.polynomial.hermite_e.herme2poly "numpy.polynomial.hermite_e.herme2poly")(c) | Convert a Hermite series to a polynomial. [`poly2herme`](generated/numpy.polynomial.hermite_e.poly2herme#numpy.polynomial.hermite_e.poly2herme "numpy.polynomial.hermite_e.poly2herme")(pol) | Convert a polynomial to a Hermite series. ## See also [`numpy.polynomial`](routines.polynomials-package#module-numpy.polynomial "numpy.polynomial") # Polynomials Polynomials in NumPy can be _created_ , _manipulated_ , and even _fitted_ using the [convenience classes](routines.polynomials.classes) of the [`numpy.polynomial`](routines.polynomials-package#module-numpy.polynomial "numpy.polynomial") package, introduced in NumPy 1.4. Prior to NumPy 1.4, [`numpy.poly1d`](generated/numpy.poly1d#numpy.poly1d "numpy.poly1d") was the class of choice and it is still available in order to maintain backward compatibility. 
However, the newer [`polynomial package`](routines.polynomials-package#module-numpy.polynomial "numpy.polynomial") is more complete and its `convenience classes` provide a more consistent, better-behaved interface for working with polynomial expressions. Therefore [`numpy.polynomial`](routines.polynomials-package#module-numpy.polynomial "numpy.polynomial") is recommended for new coding.

Note: **Terminology**

The term _polynomial module_ refers to the old API defined in `numpy.lib.polynomial`, which includes the [`numpy.poly1d`](generated/numpy.poly1d#numpy.poly1d "numpy.poly1d") class and the polynomial functions prefixed with _poly_ accessible from the [`numpy`](index#module-numpy "numpy") namespace (e.g. [`numpy.polyadd`](generated/numpy.polyadd#numpy.polyadd "numpy.polyadd"), [`numpy.polyval`](generated/numpy.polyval#numpy.polyval "numpy.polyval"), [`numpy.polyfit`](generated/numpy.polyfit#numpy.polyfit "numpy.polyfit"), etc.).

The term _polynomial package_ refers to the new API defined in [`numpy.polynomial`](routines.polynomials-package#module-numpy.polynomial "numpy.polynomial"), which includes the convenience classes for the different kinds of polynomials ([`Polynomial`](generated/numpy.polynomial.polynomial.polynomial#numpy.polynomial.polynomial.Polynomial "numpy.polynomial.polynomial.Polynomial"), [`Chebyshev`](generated/numpy.polynomial.chebyshev.chebyshev#numpy.polynomial.chebyshev.Chebyshev "numpy.polynomial.chebyshev.Chebyshev"), etc.).

## Transitioning from numpy.poly1d to numpy.polynomial

As noted above, the [`poly1d class`](generated/numpy.poly1d#numpy.poly1d "numpy.poly1d") and associated functions defined in `numpy.lib.polynomial`, such as [`numpy.polyfit`](generated/numpy.polyfit#numpy.polyfit "numpy.polyfit") and [`numpy.poly`](generated/numpy.poly#numpy.poly "numpy.poly"), are considered legacy and should **not** be used in new code.
Since NumPy version 1.4, the [`numpy.polynomial`](routines.polynomials-package#module-numpy.polynomial "numpy.polynomial") package is preferred for working with polynomials.

### Quick Reference

The following table highlights some of the main differences between the legacy polynomial module and the polynomial package for common tasks. The [`Polynomial`](generated/numpy.polynomial.polynomial.polynomial#numpy.polynomial.polynomial.Polynomial "numpy.polynomial.polynomial.Polynomial") class is imported for brevity:

    from numpy.polynomial import Polynomial

**How to…** | Legacy ([`numpy.poly1d`](generated/numpy.poly1d#numpy.poly1d "numpy.poly1d")) | [`numpy.polynomial`](routines.polynomials-package#module-numpy.polynomial "numpy.polynomial")
---|---|---
Create a polynomial object from coefficients [1] | `p = np.poly1d([1, 2, 3])` | `p = Polynomial([3, 2, 1])`
Create a polynomial object from roots | `r = np.poly([-1, 1])` `p = np.poly1d(r)` | `p = Polynomial.fromroots([-1, 1])`
Fit a polynomial of degree `deg` to data | `np.polyfit(x, y, deg)` | `Polynomial.fit(x, y, deg)`

[1] Note the reversed ordering of the coefficients.

### Transition Guide

There are significant differences between `numpy.lib.polynomial` and [`numpy.polynomial`](routines.polynomials-package#module-numpy.polynomial "numpy.polynomial"). The most significant difference is the ordering of the coefficients for the polynomial expressions. The various routines in [`numpy.polynomial`](routines.polynomials-package#module-numpy.polynomial "numpy.polynomial") all deal with series whose coefficients go from degree zero upward, which is the _reverse order_ of the poly1d convention. The easy way to remember this is that indices correspond to degree, i.e., `coef[i]` is the coefficient of the term of degree _i_. Though the difference in convention may be confusing, it is straightforward to convert from the legacy polynomial API to the new.
For example, the following demonstrates how you would convert a [`numpy.poly1d`](generated/numpy.poly1d#numpy.poly1d "numpy.poly1d") instance representing the expression \\(x^{2} + 2x + 3\\) to a [`Polynomial`](generated/numpy.polynomial.polynomial.polynomial#numpy.polynomial.polynomial.Polynomial "numpy.polynomial.polynomial.Polynomial") instance representing the same expression:

    >>> import numpy as np
    >>> p1d = np.poly1d([1, 2, 3])
    >>> p = np.polynomial.Polynomial(p1d.coef[::-1])

In addition to the `coef` attribute, polynomials from the polynomial package also have `domain` and `window` attributes. These attributes are most relevant when fitting polynomials to data, though it should be noted that polynomials with different `domain` and `window` attributes are not considered equal, and can’t be mixed in arithmetic:

    >>> p1 = np.polynomial.Polynomial([1, 2, 3])
    >>> p1
    Polynomial([1., 2., 3.], domain=[-1., 1.], window=[-1., 1.], symbol='x')
    >>> p2 = np.polynomial.Polynomial([1, 2, 3], domain=[-2, 2])
    >>> p1 == p2
    False
    >>> p1 + p2
    Traceback (most recent call last):
        ...
    TypeError: Domains differ

See the documentation for the [convenience classes](https://numpy.org/doc/2.2/reference/routines.polynomials.classes) for further details on the `domain` and `window` attributes.

Another major difference between the legacy polynomial module and the polynomial package is polynomial fitting. In the old module, fitting was done via the [`polyfit`](generated/numpy.polyfit#numpy.polyfit "numpy.polyfit") function. In the polynomial package, the [`fit`](generated/numpy.polynomial.polynomial.polynomial.fit#numpy.polynomial.polynomial.Polynomial.fit "numpy.polynomial.polynomial.Polynomial.fit") class method is preferred. For example, consider a simple linear fit to the following data:

    In [1]: rng = np.random.default_rng()
    In [2]: x = np.arange(10)
    In [3]: y = np.arange(10) + rng.standard_normal(10)

With the legacy polynomial module, a linear fit (i.e. polynomial of degree 1) could be applied to these data with [`polyfit`](generated/numpy.polyfit#numpy.polyfit "numpy.polyfit"):

    In [4]: np.polyfit(x, y, deg=1)
    Out[4]: array([ 1.05733523, -0.04871142])

With the new polynomial API, the [`fit`](generated/numpy.polynomial.polynomial.polynomial.fit#numpy.polynomial.polynomial.Polynomial.fit "numpy.polynomial.polynomial.Polynomial.fit") class method is preferred:

    In [5]: p_fitted = np.polynomial.Polynomial.fit(x, y, deg=1)
    In [6]: p_fitted
    Out[6]: Polynomial([4.70929711, 4.75800853], domain=[0., 9.], window=[-1., 1.], symbol='x')

Note that the coefficients are given _in the scaled domain_ defined by the linear mapping between the `window` and `domain`. [`convert`](generated/numpy.polynomial.polynomial.polynomial.convert#numpy.polynomial.polynomial.Polynomial.convert "numpy.polynomial.polynomial.Polynomial.convert") can be used to get the coefficients in the unscaled data domain.

    In [7]: p_fitted.convert()
    Out[7]: Polynomial([-0.04871142, 1.05733523], domain=[-1., 1.], window=[-1., 1.], symbol='x')

## Documentation for the polynomial package

In addition to standard power series polynomials, the polynomial package provides several additional kinds of polynomials including Chebyshev, Hermite (two subtypes), Laguerre, and Legendre polynomials. Each of these has an associated `convenience class` available from the [`numpy.polynomial`](routines.polynomials-package#module-numpy.polynomial "numpy.polynomial") namespace that provides a consistent interface for working with polynomials regardless of their type.
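That consistent interface can be sketched briefly; this example relies only on the `fromroots` constructor, call evaluation, and the `roots` method shared by the convenience classes:

```python
import numpy as np
from numpy.polynomial import Polynomial, Chebyshev

# The same expression x**2 - 1, built from its roots in two different bases.
p = Polynomial.fromroots([-1, 1])   # power-series coefficients: [-1, 0, 1]
c = Chebyshev.fromroots([-1, 1])    # Chebyshev coefficients: [-0.5, 0, 0.5]

# Both objects expose the same interface and describe the same function.
x = np.linspace(-2, 2, 5)
assert np.allclose(p(x), c(x))
assert np.allclose(np.sort(c.roots()), [-1, 1])
```

Switching the basis of a computation is thus usually a one-line change of class name.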
* [Using the convenience classes](routines.polynomials.classes) Documentation pertaining to specific functions defined for each kind of polynomial individually can be found in the corresponding module documentation: * [Power Series (`numpy.polynomial.polynomial`)](routines.polynomials.polynomial) * [Chebyshev Series (`numpy.polynomial.chebyshev`)](routines.polynomials.chebyshev) * [Hermite Series, “Physicists” (`numpy.polynomial.hermite`)](routines.polynomials.hermite) * [HermiteE Series, “Probabilists” (`numpy.polynomial.hermite_e`)](routines.polynomials.hermite_e) * [Laguerre Series (`numpy.polynomial.laguerre`)](routines.polynomials.laguerre) * [Legendre Series (`numpy.polynomial.legendre`)](routines.polynomials.legendre) * [Polyutils](routines.polynomials.polyutils) ## Documentation for legacy polynomials * [Poly1d](routines.polynomials.poly1d) * [Basics](routines.polynomials.poly1d#basics) * [Fitting](routines.polynomials.poly1d#fitting) * [Calculus](routines.polynomials.poly1d#calculus) * [Arithmetic](routines.polynomials.poly1d#arithmetic) # Laguerre Series (numpy.polynomial.laguerre) This module provides a number of objects (mostly functions) useful for dealing with Laguerre series, including a [`Laguerre`](generated/numpy.polynomial.laguerre.laguerre#numpy.polynomial.laguerre.Laguerre "numpy.polynomial.laguerre.Laguerre") class that encapsulates the usual arithmetic operations. (General information on how this module represents and works with such polynomials is in the docstring for its “parent” sub-package, [`numpy.polynomial`](routines.polynomials-package#module-numpy.polynomial "numpy.polynomial")). ## Classes [`Laguerre`](generated/numpy.polynomial.laguerre.laguerre#numpy.polynomial.laguerre.Laguerre "numpy.polynomial.laguerre.Laguerre")(coef[, domain, window, symbol]) | A Laguerre series class. 
---|--- ## Constants [`lagdomain`](generated/numpy.polynomial.laguerre.lagdomain#numpy.polynomial.laguerre.lagdomain "numpy.polynomial.laguerre.lagdomain") | An array object represents a multidimensional, homogeneous array of fixed-size items. ---|--- [`lagzero`](generated/numpy.polynomial.laguerre.lagzero#numpy.polynomial.laguerre.lagzero "numpy.polynomial.laguerre.lagzero") | An array object represents a multidimensional, homogeneous array of fixed-size items. [`lagone`](generated/numpy.polynomial.laguerre.lagone#numpy.polynomial.laguerre.lagone "numpy.polynomial.laguerre.lagone") | An array object represents a multidimensional, homogeneous array of fixed-size items. [`lagx`](generated/numpy.polynomial.laguerre.lagx#numpy.polynomial.laguerre.lagx "numpy.polynomial.laguerre.lagx") | An array object represents a multidimensional, homogeneous array of fixed-size items. ## Arithmetic [`lagadd`](generated/numpy.polynomial.laguerre.lagadd#numpy.polynomial.laguerre.lagadd "numpy.polynomial.laguerre.lagadd")(c1, c2) | Add one Laguerre series to another. ---|--- [`lagsub`](generated/numpy.polynomial.laguerre.lagsub#numpy.polynomial.laguerre.lagsub "numpy.polynomial.laguerre.lagsub")(c1, c2) | Subtract one Laguerre series from another. [`lagmulx`](generated/numpy.polynomial.laguerre.lagmulx#numpy.polynomial.laguerre.lagmulx "numpy.polynomial.laguerre.lagmulx")(c) | Multiply a Laguerre series by x. [`lagmul`](generated/numpy.polynomial.laguerre.lagmul#numpy.polynomial.laguerre.lagmul "numpy.polynomial.laguerre.lagmul")(c1, c2) | Multiply one Laguerre series by another. [`lagdiv`](generated/numpy.polynomial.laguerre.lagdiv#numpy.polynomial.laguerre.lagdiv "numpy.polynomial.laguerre.lagdiv")(c1, c2) | Divide one Laguerre series by another. [`lagpow`](generated/numpy.polynomial.laguerre.lagpow#numpy.polynomial.laguerre.lagpow "numpy.polynomial.laguerre.lagpow")(c, pow[, maxpower]) | Raise a Laguerre series to a power. 
[`lagval`](generated/numpy.polynomial.laguerre.lagval#numpy.polynomial.laguerre.lagval "numpy.polynomial.laguerre.lagval")(x, c[, tensor]) | Evaluate a Laguerre series at points x. [`lagval2d`](generated/numpy.polynomial.laguerre.lagval2d#numpy.polynomial.laguerre.lagval2d "numpy.polynomial.laguerre.lagval2d")(x, y, c) | Evaluate a 2-D Laguerre series at points (x, y). [`lagval3d`](generated/numpy.polynomial.laguerre.lagval3d#numpy.polynomial.laguerre.lagval3d "numpy.polynomial.laguerre.lagval3d")(x, y, z, c) | Evaluate a 3-D Laguerre series at points (x, y, z). [`laggrid2d`](generated/numpy.polynomial.laguerre.laggrid2d#numpy.polynomial.laguerre.laggrid2d "numpy.polynomial.laguerre.laggrid2d")(x, y, c) | Evaluate a 2-D Laguerre series on the Cartesian product of x and y. [`laggrid3d`](generated/numpy.polynomial.laguerre.laggrid3d#numpy.polynomial.laguerre.laggrid3d "numpy.polynomial.laguerre.laggrid3d")(x, y, z, c) | Evaluate a 3-D Laguerre series on the Cartesian product of x, y, and z. ## Calculus [`lagder`](generated/numpy.polynomial.laguerre.lagder#numpy.polynomial.laguerre.lagder "numpy.polynomial.laguerre.lagder")(c[, m, scl, axis]) | Differentiate a Laguerre series. ---|--- [`lagint`](generated/numpy.polynomial.laguerre.lagint#numpy.polynomial.laguerre.lagint "numpy.polynomial.laguerre.lagint")(c[, m, k, lbnd, scl, axis]) | Integrate a Laguerre series. ## Misc Functions [`lagfromroots`](generated/numpy.polynomial.laguerre.lagfromroots#numpy.polynomial.laguerre.lagfromroots "numpy.polynomial.laguerre.lagfromroots")(roots) | Generate a Laguerre series with given roots. ---|--- [`lagroots`](generated/numpy.polynomial.laguerre.lagroots#numpy.polynomial.laguerre.lagroots "numpy.polynomial.laguerre.lagroots")(c) | Compute the roots of a Laguerre series. [`lagvander`](generated/numpy.polynomial.laguerre.lagvander#numpy.polynomial.laguerre.lagvander "numpy.polynomial.laguerre.lagvander")(x, deg) | Pseudo-Vandermonde matrix of given degree. 
[`lagvander2d`](generated/numpy.polynomial.laguerre.lagvander2d#numpy.polynomial.laguerre.lagvander2d "numpy.polynomial.laguerre.lagvander2d")(x, y, deg) | Pseudo-Vandermonde matrix of given degrees. [`lagvander3d`](generated/numpy.polynomial.laguerre.lagvander3d#numpy.polynomial.laguerre.lagvander3d "numpy.polynomial.laguerre.lagvander3d")(x, y, z, deg) | Pseudo-Vandermonde matrix of given degrees. [`laggauss`](generated/numpy.polynomial.laguerre.laggauss#numpy.polynomial.laguerre.laggauss "numpy.polynomial.laguerre.laggauss")(deg) | Gauss-Laguerre quadrature. [`lagweight`](generated/numpy.polynomial.laguerre.lagweight#numpy.polynomial.laguerre.lagweight "numpy.polynomial.laguerre.lagweight")(x) | Weight function of the Laguerre polynomials. [`lagcompanion`](generated/numpy.polynomial.laguerre.lagcompanion#numpy.polynomial.laguerre.lagcompanion "numpy.polynomial.laguerre.lagcompanion")(c) | Return the companion matrix of c. [`lagfit`](generated/numpy.polynomial.laguerre.lagfit#numpy.polynomial.laguerre.lagfit "numpy.polynomial.laguerre.lagfit")(x, y, deg[, rcond, full, w]) | Least squares fit of Laguerre series to data. [`lagtrim`](generated/numpy.polynomial.laguerre.lagtrim#numpy.polynomial.laguerre.lagtrim "numpy.polynomial.laguerre.lagtrim")(c[, tol]) | Remove "small" "trailing" coefficients from a polynomial. [`lagline`](generated/numpy.polynomial.laguerre.lagline#numpy.polynomial.laguerre.lagline "numpy.polynomial.laguerre.lagline")(off, scl) | Laguerre series whose graph is a straight line. [`lag2poly`](generated/numpy.polynomial.laguerre.lag2poly#numpy.polynomial.laguerre.lag2poly "numpy.polynomial.laguerre.lag2poly")(c) | Convert a Laguerre series to a polynomial. [`poly2lag`](generated/numpy.polynomial.laguerre.poly2lag#numpy.polynomial.laguerre.poly2lag "numpy.polynomial.laguerre.poly2lag")(pol) | Convert a polynomial to a Laguerre series. 
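As a quick illustration of the module-level functions listed above, a minimal sketch (the values follow from L_1(x) = 1 - x and the exp(-x) weight of Gauss-Laguerre quadrature):

```python
import numpy as np
from numpy.polynomial import laguerre as L

# Coefficients [0, 1] select L_1(x) = 1 - x.
print(L.lagval(0.0, [0, 1]))    # L_1(0) = 1.0
print(L.lagval(1.0, [0, 1]))    # L_1(1) = 0.0

# Gauss-Laguerre nodes and weights; the weights sum to the integral of
# exp(-x) over [0, inf), i.e. 1.
x, w = L.laggauss(5)
print(w.sum())
```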
## See also [`numpy.polynomial`](routines.polynomials-package#module-numpy.polynomial "numpy.polynomial") # Legendre Series (numpy.polynomial.legendre) This module provides a number of objects (mostly functions) useful for dealing with Legendre series, including a [`Legendre`](generated/numpy.polynomial.legendre.legendre#numpy.polynomial.legendre.Legendre "numpy.polynomial.legendre.Legendre") class that encapsulates the usual arithmetic operations. (General information on how this module represents and works with such polynomials is in the docstring for its “parent” sub-package, [`numpy.polynomial`](routines.polynomials-package#module-numpy.polynomial "numpy.polynomial")). ## Classes [`Legendre`](generated/numpy.polynomial.legendre.legendre#numpy.polynomial.legendre.Legendre "numpy.polynomial.legendre.Legendre")(coef[, domain, window, symbol]) | A Legendre series class. ---|--- ## Constants [`legdomain`](generated/numpy.polynomial.legendre.legdomain#numpy.polynomial.legendre.legdomain "numpy.polynomial.legendre.legdomain") | An array object represents a multidimensional, homogeneous array of fixed-size items. ---|--- [`legzero`](generated/numpy.polynomial.legendre.legzero#numpy.polynomial.legendre.legzero "numpy.polynomial.legendre.legzero") | An array object represents a multidimensional, homogeneous array of fixed-size items. [`legone`](generated/numpy.polynomial.legendre.legone#numpy.polynomial.legendre.legone "numpy.polynomial.legendre.legone") | An array object represents a multidimensional, homogeneous array of fixed-size items. [`legx`](generated/numpy.polynomial.legendre.legx#numpy.polynomial.legendre.legx "numpy.polynomial.legendre.legx") | An array object represents a multidimensional, homogeneous array of fixed-size items. ## Arithmetic [`legadd`](generated/numpy.polynomial.legendre.legadd#numpy.polynomial.legendre.legadd "numpy.polynomial.legendre.legadd")(c1, c2) | Add one Legendre series to another. 
---|--- [`legsub`](generated/numpy.polynomial.legendre.legsub#numpy.polynomial.legendre.legsub "numpy.polynomial.legendre.legsub")(c1, c2) | Subtract one Legendre series from another. [`legmulx`](generated/numpy.polynomial.legendre.legmulx#numpy.polynomial.legendre.legmulx "numpy.polynomial.legendre.legmulx")(c) | Multiply a Legendre series by x. [`legmul`](generated/numpy.polynomial.legendre.legmul#numpy.polynomial.legendre.legmul "numpy.polynomial.legendre.legmul")(c1, c2) | Multiply one Legendre series by another. [`legdiv`](generated/numpy.polynomial.legendre.legdiv#numpy.polynomial.legendre.legdiv "numpy.polynomial.legendre.legdiv")(c1, c2) | Divide one Legendre series by another. [`legpow`](generated/numpy.polynomial.legendre.legpow#numpy.polynomial.legendre.legpow "numpy.polynomial.legendre.legpow")(c, pow[, maxpower]) | Raise a Legendre series to a power. [`legval`](generated/numpy.polynomial.legendre.legval#numpy.polynomial.legendre.legval "numpy.polynomial.legendre.legval")(x, c[, tensor]) | Evaluate a Legendre series at points x. [`legval2d`](generated/numpy.polynomial.legendre.legval2d#numpy.polynomial.legendre.legval2d "numpy.polynomial.legendre.legval2d")(x, y, c) | Evaluate a 2-D Legendre series at points (x, y). [`legval3d`](generated/numpy.polynomial.legendre.legval3d#numpy.polynomial.legendre.legval3d "numpy.polynomial.legendre.legval3d")(x, y, z, c) | Evaluate a 3-D Legendre series at points (x, y, z). [`leggrid2d`](generated/numpy.polynomial.legendre.leggrid2d#numpy.polynomial.legendre.leggrid2d "numpy.polynomial.legendre.leggrid2d")(x, y, c) | Evaluate a 2-D Legendre series on the Cartesian product of x and y. [`leggrid3d`](generated/numpy.polynomial.legendre.leggrid3d#numpy.polynomial.legendre.leggrid3d "numpy.polynomial.legendre.leggrid3d")(x, y, z, c) | Evaluate a 3-D Legendre series on the Cartesian product of x, y, and z. 
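A brief sketch of evaluating Legendre series with the module-level functions (using P_2(x) = (3x**2 - 1)/2, and Gauss-Legendre quadrature, which integrates polynomials up to degree 2n - 1 exactly on [-1, 1]):

```python
import numpy as np
from numpy.polynomial import legendre as Leg

# Coefficients [0, 0, 1] select P_2(x) = (3x**2 - 1)/2.
print(Leg.legval(1.0, [0, 0, 1]))   # P_2(1) = 1.0

# 3-point Gauss-Legendre rule applied to x**2: integral over [-1, 1] is 2/3.
x, w = Leg.leggauss(3)
print((w * x**2).sum())
```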
## Calculus [`legder`](generated/numpy.polynomial.legendre.legder#numpy.polynomial.legendre.legder "numpy.polynomial.legendre.legder")(c[, m, scl, axis]) | Differentiate a Legendre series. ---|--- [`legint`](generated/numpy.polynomial.legendre.legint#numpy.polynomial.legendre.legint "numpy.polynomial.legendre.legint")(c[, m, k, lbnd, scl, axis]) | Integrate a Legendre series. ## Misc Functions [`legfromroots`](generated/numpy.polynomial.legendre.legfromroots#numpy.polynomial.legendre.legfromroots "numpy.polynomial.legendre.legfromroots")(roots) | Generate a Legendre series with given roots. ---|--- [`legroots`](generated/numpy.polynomial.legendre.legroots#numpy.polynomial.legendre.legroots "numpy.polynomial.legendre.legroots")(c) | Compute the roots of a Legendre series. [`legvander`](generated/numpy.polynomial.legendre.legvander#numpy.polynomial.legendre.legvander "numpy.polynomial.legendre.legvander")(x, deg) | Pseudo-Vandermonde matrix of given degree. [`legvander2d`](generated/numpy.polynomial.legendre.legvander2d#numpy.polynomial.legendre.legvander2d "numpy.polynomial.legendre.legvander2d")(x, y, deg) | Pseudo-Vandermonde matrix of given degrees. [`legvander3d`](generated/numpy.polynomial.legendre.legvander3d#numpy.polynomial.legendre.legvander3d "numpy.polynomial.legendre.legvander3d")(x, y, z, deg) | Pseudo-Vandermonde matrix of given degrees. [`leggauss`](generated/numpy.polynomial.legendre.leggauss#numpy.polynomial.legendre.leggauss "numpy.polynomial.legendre.leggauss")(deg) | Gauss-Legendre quadrature. [`legweight`](generated/numpy.polynomial.legendre.legweight#numpy.polynomial.legendre.legweight "numpy.polynomial.legendre.legweight")(x) | Weight function of the Legendre polynomials. [`legcompanion`](generated/numpy.polynomial.legendre.legcompanion#numpy.polynomial.legendre.legcompanion "numpy.polynomial.legendre.legcompanion")(c) | Return the scaled companion matrix of c. 
[`legfit`](generated/numpy.polynomial.legendre.legfit#numpy.polynomial.legendre.legfit "numpy.polynomial.legendre.legfit")(x, y, deg[, rcond, full, w]) | Least squares fit of Legendre series to data. [`legtrim`](generated/numpy.polynomial.legendre.legtrim#numpy.polynomial.legendre.legtrim "numpy.polynomial.legendre.legtrim")(c[, tol]) | Remove "small" "trailing" coefficients from a polynomial. [`legline`](generated/numpy.polynomial.legendre.legline#numpy.polynomial.legendre.legline "numpy.polynomial.legendre.legline")(off, scl) | Legendre series whose graph is a straight line. [`leg2poly`](generated/numpy.polynomial.legendre.leg2poly#numpy.polynomial.legendre.leg2poly "numpy.polynomial.legendre.leg2poly")(c) | Convert a Legendre series to a polynomial. [`poly2leg`](generated/numpy.polynomial.legendre.poly2leg#numpy.polynomial.legendre.poly2leg "numpy.polynomial.legendre.poly2leg")(pol) | Convert a polynomial to a Legendre series. ## See also numpy.polynomial # Poly1d ## Basics [`poly1d`](generated/numpy.poly1d#numpy.poly1d "numpy.poly1d")(c_or_r[, r, variable]) | A one-dimensional polynomial class. ---|--- [`polyval`](generated/numpy.polyval#numpy.polyval "numpy.polyval")(p, x) | Evaluate a polynomial at specific values. [`poly`](generated/numpy.poly#numpy.poly "numpy.poly")(seq_of_zeros) | Find the coefficients of a polynomial with the given sequence of roots. [`roots`](generated/numpy.roots#numpy.roots "numpy.roots")(p) | Return the roots of a polynomial with coefficients given in p. ## Fitting [`polyfit`](generated/numpy.polyfit#numpy.polyfit "numpy.polyfit")(x, y, deg[, rcond, full, w, cov]) | Least squares polynomial fit. ---|--- ## Calculus [`polyder`](generated/numpy.polyder#numpy.polyder "numpy.polyder")(p[, m]) | Return the derivative of the specified order of a polynomial. ---|--- [`polyint`](generated/numpy.polyint#numpy.polyint "numpy.polyint")(p[, m, k]) | Return an antiderivative (indefinite integral) of a polynomial. 
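The legacy calculus helpers above can be sketched briefly (legacy API, shown here only because it is still widely encountered; note the highest-degree-first coefficient order):

```python
import numpy as np

p = np.poly1d([1, 0, 0])   # x**2 in the legacy highest-degree-first order
dp = np.polyder(p)         # derivative: 2x
P = np.polyint(p)          # antiderivative: x**3 / 3 (integration constant 0)

print(dp.coefficients)     # coefficients of the derivative
print(P(3) - P(0))         # definite integral of x**2 from 0 to 3 -> 9.0
```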
## Arithmetic [`polyadd`](generated/numpy.polyadd#numpy.polyadd "numpy.polyadd")(a1, a2) | Find the sum of two polynomials. ---|--- [`polydiv`](generated/numpy.polydiv#numpy.polydiv "numpy.polydiv")(u, v) | Returns the quotient and remainder of polynomial division. [`polymul`](generated/numpy.polymul#numpy.polymul "numpy.polymul")(a1, a2) | Find the product of two polynomials. [`polysub`](generated/numpy.polysub#numpy.polysub "numpy.polysub")(a1, a2) | Difference (subtraction) of two polynomials. # Power Series (numpy.polynomial.polynomial) This module provides a number of objects (mostly functions) useful for dealing with polynomials, including a [`Polynomial`](generated/numpy.polynomial.polynomial.polynomial#numpy.polynomial.polynomial.Polynomial "numpy.polynomial.polynomial.Polynomial") class that encapsulates the usual arithmetic operations. (General information on how this module represents and works with polynomial objects is in the docstring for its “parent” sub-package, [`numpy.polynomial`](routines.polynomials-package#module-numpy.polynomial "numpy.polynomial")). ## Classes [`Polynomial`](generated/numpy.polynomial.polynomial.polynomial#numpy.polynomial.polynomial.Polynomial "numpy.polynomial.polynomial.Polynomial")(coef[, domain, window, symbol]) | A power series class. ---|--- ## Constants [`polydomain`](generated/numpy.polynomial.polynomial.polydomain#numpy.polynomial.polynomial.polydomain "numpy.polynomial.polynomial.polydomain") | An array object represents a multidimensional, homogeneous array of fixed-size items. ---|--- [`polyzero`](generated/numpy.polynomial.polynomial.polyzero#numpy.polynomial.polynomial.polyzero "numpy.polynomial.polynomial.polyzero") | An array object represents a multidimensional, homogeneous array of fixed-size items.
[`polyone`](generated/numpy.polynomial.polynomial.polyone#numpy.polynomial.polynomial.polyone "numpy.polynomial.polynomial.polyone") | An array object represents a multidimensional, homogeneous array of fixed-size items. [`polyx`](generated/numpy.polynomial.polynomial.polyx#numpy.polynomial.polynomial.polyx "numpy.polynomial.polynomial.polyx") | An array object represents a multidimensional, homogeneous array of fixed-size items. ## Arithmetic [`polyadd`](generated/numpy.polynomial.polynomial.polyadd#numpy.polynomial.polynomial.polyadd "numpy.polynomial.polynomial.polyadd")(c1, c2) | Add one polynomial to another. ---|--- [`polysub`](generated/numpy.polynomial.polynomial.polysub#numpy.polynomial.polynomial.polysub "numpy.polynomial.polynomial.polysub")(c1, c2) | Subtract one polynomial from another. [`polymulx`](generated/numpy.polynomial.polynomial.polymulx#numpy.polynomial.polynomial.polymulx "numpy.polynomial.polynomial.polymulx")(c) | Multiply a polynomial by x. [`polymul`](generated/numpy.polynomial.polynomial.polymul#numpy.polynomial.polynomial.polymul "numpy.polynomial.polynomial.polymul")(c1, c2) | Multiply one polynomial by another. [`polydiv`](generated/numpy.polynomial.polynomial.polydiv#numpy.polynomial.polynomial.polydiv "numpy.polynomial.polynomial.polydiv")(c1, c2) | Divide one polynomial by another. [`polypow`](generated/numpy.polynomial.polynomial.polypow#numpy.polynomial.polynomial.polypow "numpy.polynomial.polynomial.polypow")(c, pow[, maxpower]) | Raise a polynomial to a power. [`polyval`](generated/numpy.polynomial.polynomial.polyval#numpy.polynomial.polynomial.polyval "numpy.polynomial.polynomial.polyval")(x, c[, tensor]) | Evaluate a polynomial at points x. [`polyval2d`](generated/numpy.polynomial.polynomial.polyval2d#numpy.polynomial.polynomial.polyval2d "numpy.polynomial.polynomial.polyval2d")(x, y, c) | Evaluate a 2-D polynomial at points (x, y). 
[`polyval3d`](generated/numpy.polynomial.polynomial.polyval3d#numpy.polynomial.polynomial.polyval3d "numpy.polynomial.polynomial.polyval3d")(x, y, z, c) | Evaluate a 3-D polynomial at points (x, y, z). [`polygrid2d`](generated/numpy.polynomial.polynomial.polygrid2d#numpy.polynomial.polynomial.polygrid2d "numpy.polynomial.polynomial.polygrid2d")(x, y, c) | Evaluate a 2-D polynomial on the Cartesian product of x and y. [`polygrid3d`](generated/numpy.polynomial.polynomial.polygrid3d#numpy.polynomial.polynomial.polygrid3d "numpy.polynomial.polynomial.polygrid3d")(x, y, z, c) | Evaluate a 3-D polynomial on the Cartesian product of x, y and z. ## Calculus [`polyder`](generated/numpy.polynomial.polynomial.polyder#numpy.polynomial.polynomial.polyder "numpy.polynomial.polynomial.polyder")(c[, m, scl, axis]) | Differentiate a polynomial. ---|--- [`polyint`](generated/numpy.polynomial.polynomial.polyint#numpy.polynomial.polynomial.polyint "numpy.polynomial.polynomial.polyint")(c[, m, k, lbnd, scl, axis]) | Integrate a polynomial. ## Misc Functions [`polyfromroots`](generated/numpy.polynomial.polynomial.polyfromroots#numpy.polynomial.polynomial.polyfromroots "numpy.polynomial.polynomial.polyfromroots")(roots) | Generate a monic polynomial with given roots. ---|--- [`polyroots`](generated/numpy.polynomial.polynomial.polyroots#numpy.polynomial.polynomial.polyroots "numpy.polynomial.polynomial.polyroots")(c) | Compute the roots of a polynomial. [`polyvalfromroots`](generated/numpy.polynomial.polynomial.polyvalfromroots#numpy.polynomial.polynomial.polyvalfromroots "numpy.polynomial.polynomial.polyvalfromroots")(x, r[, tensor]) | Evaluate a polynomial specified by its roots at points x. [`polyvander`](generated/numpy.polynomial.polynomial.polyvander#numpy.polynomial.polynomial.polyvander "numpy.polynomial.polynomial.polyvander")(x, deg) | Vandermonde matrix of given degree. 
[`polyvander2d`](generated/numpy.polynomial.polynomial.polyvander2d#numpy.polynomial.polynomial.polyvander2d "numpy.polynomial.polynomial.polyvander2d")(x, y, deg) | Pseudo-Vandermonde matrix of given degrees. [`polyvander3d`](generated/numpy.polynomial.polynomial.polyvander3d#numpy.polynomial.polynomial.polyvander3d "numpy.polynomial.polynomial.polyvander3d")(x, y, z, deg) | Pseudo-Vandermonde matrix of given degrees. [`polycompanion`](generated/numpy.polynomial.polynomial.polycompanion#numpy.polynomial.polynomial.polycompanion "numpy.polynomial.polynomial.polycompanion")(c) | Return the companion matrix of c. [`polyfit`](generated/numpy.polynomial.polynomial.polyfit#numpy.polynomial.polynomial.polyfit "numpy.polynomial.polynomial.polyfit")(x, y, deg[, rcond, full, w]) | Least-squares fit of a polynomial to data. [`polytrim`](generated/numpy.polynomial.polynomial.polytrim#numpy.polynomial.polynomial.polytrim "numpy.polynomial.polynomial.polytrim")(c[, tol]) | Remove "small" "trailing" coefficients from a polynomial. [`polyline`](generated/numpy.polynomial.polynomial.polyline#numpy.polynomial.polynomial.polyline "numpy.polynomial.polynomial.polyline")(off, scl) | Returns an array representing a linear polynomial. ## See Also [`numpy.polynomial`](routines.polynomials-package#module-numpy.polynomial "numpy.polynomial") # Polyutils Utility classes and functions for the polynomial modules. This module provides: error and warning objects; a polynomial base class; and some routines used in both the `polynomial` and `chebyshev` modules. ## Functions [`as_series`](generated/numpy.polynomial.polyutils.as_series#numpy.polynomial.polyutils.as_series "numpy.polynomial.polyutils.as_series")(alist[, trim]) | Return argument as a list of 1-d arrays. ---|--- [`trimseq`](generated/numpy.polynomial.polyutils.trimseq#numpy.polynomial.polyutils.trimseq "numpy.polynomial.polyutils.trimseq")(seq) | Remove small Poly series coefficients. 
[`trimcoef`](generated/numpy.polynomial.polyutils.trimcoef#numpy.polynomial.polyutils.trimcoef "numpy.polynomial.polyutils.trimcoef")(c[, tol]) | Remove "small" "trailing" coefficients from a polynomial.
[`getdomain`](generated/numpy.polynomial.polyutils.getdomain#numpy.polynomial.polyutils.getdomain "numpy.polynomial.polyutils.getdomain")(x) | Return a domain suitable for given abscissae.
[`mapdomain`](generated/numpy.polynomial.polyutils.mapdomain#numpy.polynomial.polyutils.mapdomain "numpy.polynomial.polyutils.mapdomain")(x, old, new) | Apply linear map to input points.
[`mapparms`](generated/numpy.polynomial.polyutils.mapparms#numpy.polynomial.polyutils.mapparms "numpy.polynomial.polyutils.mapparms")(old, new) | Linear map parameters between domains.

# Record Arrays (numpy.rec)

Record arrays expose the fields of structured arrays as properties. Most commonly, ndarrays contain elements of a single type, e.g. floats, integers, bools etc. However, it is possible for elements to be combinations of these using structured types, such as:

    >>> import numpy as np
    >>> a = np.array([(1, 2.0), (1, 2.0)],
    ...              dtype=[('x', np.int64), ('y', np.float64)])
    >>> a
    array([(1, 2.), (1, 2.)], dtype=[('x', '<i8'), ('y', '<f8')])
    >>> a['x']
    array([1, 1])
    >>> a['y']
    array([2., 2.])

Record arrays allow us to access fields as properties:

    >>> ar = np.rec.array(a)
    >>> ar.x
    array([1, 1])
    >>> ar.y
    array([2., 2.])

## Functions

[`array`](generated/numpy.rec.array#numpy.rec.array "numpy.rec.array")(obj[, dtype, shape, offset, strides, ...]) | Construct a record array from a wide variety of objects.
---|---
[`find_duplicate`](generated/numpy.rec.find_duplicate#numpy.rec.find_duplicate "numpy.rec.find_duplicate")(list) | Find duplication in a list, return a list of duplicated elements.
[`format_parser`](generated/numpy.rec.format_parser#numpy.rec.format_parser "numpy.rec.format_parser")(formats, names, titles[, ...]) | Class to convert formats, names, titles description to a dtype.
[`fromarrays`](generated/numpy.rec.fromarrays#numpy.rec.fromarrays "numpy.rec.fromarrays")(arrayList[, dtype, shape, ...]) | Create a record array from a (flat) list of arrays [`fromfile`](generated/numpy.rec.fromfile#numpy.rec.fromfile "numpy.rec.fromfile")(fd[, dtype, shape, offset, ...]) | Create an array from binary file data [`fromrecords`](generated/numpy.rec.fromrecords#numpy.rec.fromrecords "numpy.rec.fromrecords")(recList[, dtype, shape, ...]) | Create a recarray from a list of records in text form. [`fromstring`](generated/numpy.rec.fromstring#numpy.rec.fromstring "numpy.rec.fromstring")(datastring[, dtype, shape, ...]) | Create a record array from binary data Also, the [`numpy.recarray`](generated/numpy.recarray#numpy.recarray "numpy.recarray") class and the [`numpy.record`](generated/numpy.record#numpy.record "numpy.record") scalar dtype are present in this namespace. # Set routines ## Making proper sets [`unique`](generated/numpy.unique#numpy.unique "numpy.unique")(ar[, return_index, return_inverse, ...]) | Find the unique elements of an array. ---|--- [`unique_all`](generated/numpy.unique_all#numpy.unique_all "numpy.unique_all")(x) | Find the unique elements of an array, and counts, inverse, and indices. [`unique_counts`](generated/numpy.unique_counts#numpy.unique_counts "numpy.unique_counts")(x) | Find the unique elements and counts of an input array `x`. [`unique_inverse`](generated/numpy.unique_inverse#numpy.unique_inverse "numpy.unique_inverse")(x) | Find the unique elements of `x` and indices to reconstruct `x`. [`unique_values`](generated/numpy.unique_values#numpy.unique_values "numpy.unique_values")(x) | Returns the unique elements of an input array `x`. ## Boolean operations [`in1d`](generated/numpy.in1d#numpy.in1d "numpy.in1d")(ar1, ar2[, assume_unique, invert, kind]) | Test whether each element of a 1-D array is also present in a second array. 
---|--- [`intersect1d`](generated/numpy.intersect1d#numpy.intersect1d "numpy.intersect1d")(ar1, ar2[, assume_unique, ...]) | Find the intersection of two arrays. [`isin`](generated/numpy.isin#numpy.isin "numpy.isin")(element, test_elements[, ...]) | Calculates `element in test_elements`, broadcasting over `element` only. [`setdiff1d`](generated/numpy.setdiff1d#numpy.setdiff1d "numpy.setdiff1d")(ar1, ar2[, assume_unique]) | Find the set difference of two arrays. [`setxor1d`](generated/numpy.setxor1d#numpy.setxor1d "numpy.setxor1d")(ar1, ar2[, assume_unique]) | Find the set exclusive-or of two arrays. [`union1d`](generated/numpy.union1d#numpy.union1d "numpy.union1d")(ar1, ar2) | Find the union of two arrays. # Sorting, searching, and counting ## Sorting [`sort`](generated/numpy.sort#numpy.sort "numpy.sort")(a[, axis, kind, order, stable]) | Return a sorted copy of an array. ---|--- [`lexsort`](generated/numpy.lexsort#numpy.lexsort "numpy.lexsort")(keys[, axis]) | Perform an indirect stable sort using a sequence of keys. [`argsort`](generated/numpy.argsort#numpy.argsort "numpy.argsort")(a[, axis, kind, order, stable]) | Returns the indices that would sort an array. [`ndarray.sort`](generated/numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort")([axis, kind, order]) | Sort an array in-place. [`sort_complex`](generated/numpy.sort_complex#numpy.sort_complex "numpy.sort_complex")(a) | Sort a complex array using the real part first, then the imaginary part. [`partition`](generated/numpy.partition#numpy.partition "numpy.partition")(a, kth[, axis, kind, order]) | Return a partitioned copy of an array. [`argpartition`](generated/numpy.argpartition#numpy.argpartition "numpy.argpartition")(a, kth[, axis, kind, order]) | Perform an indirect partition along the given axis using the algorithm specified by the `kind` keyword. 
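As a small illustration of the sorting routines listed above (a minimal sketch; the array contents are arbitrary):

```python
import numpy as np

a = np.array([3, 1, 2])

# sort returns a sorted copy; argsort returns the indices that would sort `a`.
order = np.argsort(a)
assert list(a[order]) == [1, 2, 3]
assert list(np.sort(a)) == [1, 2, 3]

# partition places the k smallest elements (in arbitrary order) before index k.
p = np.partition(np.array([7, 0, 5, 3]), 2)
assert set(p[:2]) == {0, 3}
```

`argpartition` relates to `partition` the same way `argsort` relates to `sort`: it returns indices rather than values.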
## Searching [`argmax`](generated/numpy.argmax#numpy.argmax "numpy.argmax")(a[, axis, out, keepdims]) | Returns the indices of the maximum values along an axis. ---|--- [`nanargmax`](generated/numpy.nanargmax#numpy.nanargmax "numpy.nanargmax")(a[, axis, out, keepdims]) | Return the indices of the maximum values in the specified axis ignoring NaNs. [`argmin`](generated/numpy.argmin#numpy.argmin "numpy.argmin")(a[, axis, out, keepdims]) | Returns the indices of the minimum values along an axis. [`nanargmin`](generated/numpy.nanargmin#numpy.nanargmin "numpy.nanargmin")(a[, axis, out, keepdims]) | Return the indices of the minimum values in the specified axis ignoring NaNs. [`argwhere`](generated/numpy.argwhere#numpy.argwhere "numpy.argwhere")(a) | Find the indices of array elements that are non-zero, grouped by element. [`nonzero`](generated/numpy.nonzero#numpy.nonzero "numpy.nonzero")(a) | Return the indices of the elements that are non-zero. [`flatnonzero`](generated/numpy.flatnonzero#numpy.flatnonzero "numpy.flatnonzero")(a) | Return indices that are non-zero in the flattened version of a. [`where`](generated/numpy.where#numpy.where "numpy.where")(condition, [x, y], /) | Return elements chosen from `x` or `y` depending on `condition`. [`searchsorted`](generated/numpy.searchsorted#numpy.searchsorted "numpy.searchsorted")(a, v[, side, sorter]) | Find indices where elements should be inserted to maintain order. [`extract`](generated/numpy.extract#numpy.extract "numpy.extract")(condition, arr) | Return the elements of an array that satisfy some condition. ## Counting [`count_nonzero`](generated/numpy.count_nonzero#numpy.count_nonzero "numpy.count_nonzero")(a[, axis, keepdims]) | Counts the number of non-zero values in the array `a`. ---|--- # Statistics ## Order statistics [`ptp`](generated/numpy.ptp#numpy.ptp "numpy.ptp")(a[, axis, out, keepdims]) | Range of values (maximum - minimum) along an axis. 
---|--- [`percentile`](generated/numpy.percentile#numpy.percentile "numpy.percentile")(a, q[, axis, out, ...]) | Compute the q-th percentile of the data along the specified axis. [`nanpercentile`](generated/numpy.nanpercentile#numpy.nanpercentile "numpy.nanpercentile")(a, q[, axis, out, ...]) | Compute the qth percentile of the data along the specified axis, while ignoring nan values. [`quantile`](generated/numpy.quantile#numpy.quantile "numpy.quantile")(a, q[, axis, out, overwrite_input, ...]) | Compute the q-th quantile of the data along the specified axis. [`nanquantile`](generated/numpy.nanquantile#numpy.nanquantile "numpy.nanquantile")(a, q[, axis, out, ...]) | Compute the qth quantile of the data along the specified axis, while ignoring nan values. ## Averages and variances [`median`](generated/numpy.median#numpy.median "numpy.median")(a[, axis, out, overwrite_input, keepdims]) | Compute the median along the specified axis. ---|--- [`average`](generated/numpy.average#numpy.average "numpy.average")(a[, axis, weights, returned, keepdims]) | Compute the weighted average along the specified axis. [`mean`](generated/numpy.mean#numpy.mean "numpy.mean")(a[, axis, dtype, out, keepdims, where]) | Compute the arithmetic mean along the specified axis. [`std`](generated/numpy.std#numpy.std "numpy.std")(a[, axis, dtype, out, ddof, keepdims, ...]) | Compute the standard deviation along the specified axis. [`var`](generated/numpy.var#numpy.var "numpy.var")(a[, axis, dtype, out, ddof, keepdims, ...]) | Compute the variance along the specified axis. [`nanmedian`](generated/numpy.nanmedian#numpy.nanmedian "numpy.nanmedian")(a[, axis, out, overwrite_input, ...]) | Compute the median along the specified axis, while ignoring NaNs. [`nanmean`](generated/numpy.nanmean#numpy.nanmean "numpy.nanmean")(a[, axis, dtype, out, keepdims, where]) | Compute the arithmetic mean along the specified axis, ignoring NaNs. 
[`nanstd`](generated/numpy.nanstd#numpy.nanstd "numpy.nanstd")(a[, axis, dtype, out, ddof, ...]) | Compute the standard deviation along the specified axis, while ignoring NaNs. [`nanvar`](generated/numpy.nanvar#numpy.nanvar "numpy.nanvar")(a[, axis, dtype, out, ddof, ...]) | Compute the variance along the specified axis, while ignoring NaNs. ## Correlating [`corrcoef`](generated/numpy.corrcoef#numpy.corrcoef "numpy.corrcoef")(x[, y, rowvar, bias, ddof, dtype]) | Return Pearson product-moment correlation coefficients. ---|--- [`correlate`](generated/numpy.correlate#numpy.correlate "numpy.correlate")(a, v[, mode]) | Cross-correlation of two 1-dimensional sequences. [`cov`](generated/numpy.cov#numpy.cov "numpy.cov")(m[, y, rowvar, bias, ddof, fweights, ...]) | Estimate a covariance matrix, given data and weights. ## Histograms [`histogram`](generated/numpy.histogram#numpy.histogram "numpy.histogram")(a[, bins, range, density, weights]) | Compute the histogram of a dataset. ---|--- [`histogram2d`](generated/numpy.histogram2d#numpy.histogram2d "numpy.histogram2d")(x, y[, bins, range, density, ...]) | Compute the bi-dimensional histogram of two data samples. [`histogramdd`](generated/numpy.histogramdd#numpy.histogramdd "numpy.histogramdd")(sample[, bins, range, density, ...]) | Compute the multidimensional histogram of some data. [`bincount`](generated/numpy.bincount#numpy.bincount "numpy.bincount")(x, /[, weights, minlength]) | Count number of occurrences of each value in array of non-negative ints. [`histogram_bin_edges`](generated/numpy.histogram_bin_edges#numpy.histogram_bin_edges "numpy.histogram_bin_edges")(a[, bins, range, weights]) | Function to calculate only the edges of the bins used by the [`histogram`](generated/numpy.histogram#numpy.histogram "numpy.histogram") function. [`digitize`](generated/numpy.digitize#numpy.digitize "numpy.digitize")(x, bins[, right]) | Return the indices of the bins to which each value in input array belongs. 
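The histogram functions above can be combined as in this minimal sketch (the sample data is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=1000)

# histogram returns bin counts and the bin edges (one more edge than bins).
counts, edges = np.histogram(data, bins=10, range=(-4.0, 4.0))
assert len(edges) == len(counts) + 1
assert counts.sum() == ((data >= -4.0) & (data <= 4.0)).sum()

# digitize maps each value to the index of the bin it falls into.
idx = np.digitize(data, edges)
assert idx.shape == data.shape
```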
# String functionality

The `numpy.strings` module provides a set of universal functions operating on arrays of type [`numpy.str_`](arrays.scalars#numpy.str_ "numpy.str_") or [`numpy.bytes_`](arrays.scalars#numpy.bytes_ "numpy.bytes_"). For example:

    >>> np.strings.add(["num", "doc"], ["py", "umentation"])
    array(['numpy', 'documentation'], dtype='<U13')

## Comparison

[`equal`](generated/numpy.strings.equal#numpy.strings.equal "numpy.strings.equal")(x1, x2, /[, out, where, casting, ...]) | Return (x1 == x2) element-wise.
---|---
[`not_equal`](generated/numpy.strings.not_equal#numpy.strings.not_equal "numpy.strings.not_equal")(x1, x2, /[, out, where, casting, ...]) | Return (x1 != x2) element-wise.
[`greater_equal`](generated/numpy.strings.greater_equal#numpy.strings.greater_equal "numpy.strings.greater_equal")(x1, x2, /[, out, where, casting, ...]) | Return the truth value of (x1 >= x2) element-wise.
[`less_equal`](generated/numpy.strings.less_equal#numpy.strings.less_equal "numpy.strings.less_equal")(x1, x2, /[, out, where, casting, ...]) | Return the truth value of (x1 <= x2) element-wise.
[`greater`](generated/numpy.strings.greater#numpy.strings.greater "numpy.strings.greater")(x1, x2, /[, out, where, casting, ...]) | Return the truth value of (x1 > x2) element-wise.
[`less`](generated/numpy.strings.less#numpy.strings.less "numpy.strings.less")(x1, x2, /[, out, where, casting, ...]) | Return the truth value of (x1 < x2) element-wise.

## String information

[`count`](generated/numpy.strings.count#numpy.strings.count "numpy.strings.count")(a, sub[, start, end]) | Returns an array with the number of non-overlapping occurrences of substring `sub` in the range [`start`, `end`).
---|---
[`endswith`](generated/numpy.strings.endswith#numpy.strings.endswith "numpy.strings.endswith")(a, suffix[, start, end]) | Returns a boolean array which is `True` where the string element in `a` ends with `suffix`, otherwise `False`.
[`find`](generated/numpy.strings.find#numpy.strings.find "numpy.strings.find")(a, sub[, start, end]) | For each element, return the lowest index in the string where substring `sub` is found, such that `sub` is contained in the range [`start`, `end`).
[`index`](generated/numpy.strings.index#numpy.strings.index "numpy.strings.index")(a, sub[, start, end]) | Like [`find`](generated/numpy.strings.find#numpy.strings.find "numpy.strings.find"), but raises [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError "\(in Python v3.13\)") when the substring is not found.
[`isalnum`](generated/numpy.strings.isalnum#numpy.strings.isalnum "numpy.strings.isalnum")(x, /[, out, where, casting, order, ...]) | Returns true for each element if all characters in the string are alphanumeric and there is at least one character, false otherwise. [`isalpha`](generated/numpy.strings.isalpha#numpy.strings.isalpha "numpy.strings.isalpha")(x, /[, out, where, casting, order, ...]) | Returns true for each element if all characters in the data interpreted as a string are alphabetic and there is at least one character, false otherwise. [`isdecimal`](generated/numpy.strings.isdecimal#numpy.strings.isdecimal "numpy.strings.isdecimal")(x, /[, out, where, casting, ...]) | For each element, return True if there are only decimal characters in the element. [`isdigit`](generated/numpy.strings.isdigit#numpy.strings.isdigit "numpy.strings.isdigit")(x, /[, out, where, casting, order, ...]) | Returns true for each element if all characters in the string are digits and there is at least one character, false otherwise. [`islower`](generated/numpy.strings.islower#numpy.strings.islower "numpy.strings.islower")(x, /[, out, where, casting, order, ...]) | Returns true for each element if all cased characters in the string are lowercase and there is at least one cased character, false otherwise. [`isnumeric`](generated/numpy.strings.isnumeric#numpy.strings.isnumeric "numpy.strings.isnumeric")(x, /[, out, where, casting, ...]) | For each element, return True if there are only numeric characters in the element. [`isspace`](generated/numpy.strings.isspace#numpy.strings.isspace "numpy.strings.isspace")(x, /[, out, where, casting, order, ...]) | Returns true for each element if there are only whitespace characters in the string and there is at least one character, false otherwise. 
[`istitle`](generated/numpy.strings.istitle#numpy.strings.istitle "numpy.strings.istitle")(x, /[, out, where, casting, order, ...]) | Returns true for each element if the element is a titlecased string and there is at least one character, false otherwise. [`isupper`](generated/numpy.strings.isupper#numpy.strings.isupper "numpy.strings.isupper")(x, /[, out, where, casting, order, ...]) | Return true for each element if all cased characters in the string are uppercase and there is at least one character, false otherwise. [`rfind`](generated/numpy.strings.rfind#numpy.strings.rfind "numpy.strings.rfind")(a, sub[, start, end]) | For each element, return the highest index in the string where substring `sub` is found, such that `sub` is contained in the range [`start`, `end`). [`rindex`](generated/numpy.strings.rindex#numpy.strings.rindex "numpy.strings.rindex")(a, sub[, start, end]) | Like [`rfind`](generated/numpy.strings.rfind#numpy.strings.rfind "numpy.strings.rfind"), but raises [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError "\(in Python v3.13\)") when the substring `sub` is not found. [`startswith`](generated/numpy.strings.startswith#numpy.strings.startswith "numpy.strings.startswith")(a, prefix[, start, end]) | Returns a boolean array which is `True` where the string element in `a` starts with `prefix`, otherwise `False`. [`str_len`](generated/numpy.strings.str_len#numpy.strings.str_len "numpy.strings.str_len")(x, /[, out, where, casting, order, ...]) | Returns the length of each element. # Test support (numpy.testing) Common test support for all numpy test scripts. This single module should provide all the common functionality for numpy tests in a single location, so that [test scripts](../dev/development_environment#development-environment) can just import it and work right away. 
For background, see the [Testing guidelines](testing#testing-guidelines) ## Asserts [`assert_allclose`](generated/numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose")(actual, desired[, rtol, ...]) | Raises an AssertionError if two objects are not equal up to desired tolerance. ---|--- [`assert_array_almost_equal_nulp`](generated/numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp")(x, y[, nulp]) | Compare two arrays relatively to their spacing. [`assert_array_max_ulp`](generated/numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp")(a, b[, maxulp, dtype]) | Check that all items of arrays differ in at most N Units in the Last Place. [`assert_array_equal`](generated/numpy.testing.assert_array_equal#numpy.testing.assert_array_equal "numpy.testing.assert_array_equal")(actual, desired[, ...]) | Raises an AssertionError if two array_like objects are not equal. [`assert_array_less`](generated/numpy.testing.assert_array_less#numpy.testing.assert_array_less "numpy.testing.assert_array_less")(x, y[, err_msg, verbose, ...]) | Raises an AssertionError if two array_like objects are not ordered by less than. [`assert_equal`](generated/numpy.testing.assert_equal#numpy.testing.assert_equal "numpy.testing.assert_equal")(actual, desired[, err_msg, ...]) | Raises an AssertionError if two objects are not equal. [`assert_raises`](generated/numpy.testing.assert_raises#numpy.testing.assert_raises "numpy.testing.assert_raises")(assert_raises) | Fail unless an exception of class exception_class is thrown by callable when invoked with arguments args and keyword arguments kwargs. [`assert_raises_regex`](generated/numpy.testing.assert_raises_regex#numpy.testing.assert_raises_regex "numpy.testing.assert_raises_regex")(exception_class, ...) 
| Fail unless an exception of class exception_class and with message that matches expected_regexp is thrown by callable when invoked with arguments args and keyword arguments kwargs. [`assert_warns`](generated/numpy.testing.assert_warns#numpy.testing.assert_warns "numpy.testing.assert_warns")(warning_class, *args, **kwargs) | Fail unless the given callable throws the specified warning. [`assert_no_warnings`](generated/numpy.testing.assert_no_warnings#numpy.testing.assert_no_warnings "numpy.testing.assert_no_warnings")(*args, **kwargs) | Fail if the given callable produces any warnings. [`assert_no_gc_cycles`](generated/numpy.testing.assert_no_gc_cycles#numpy.testing.assert_no_gc_cycles "numpy.testing.assert_no_gc_cycles")(*args, **kwargs) | Fail if the given callable produces any reference cycles. [`assert_string_equal`](generated/numpy.testing.assert_string_equal#numpy.testing.assert_string_equal "numpy.testing.assert_string_equal")(actual, desired) | Test if two strings are equal. ## Asserts (not recommended) It is recommended to use one of [`assert_allclose`](generated/numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose"), [`assert_array_almost_equal_nulp`](generated/numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp") or [`assert_array_max_ulp`](generated/numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp") instead of these functions for more consistent floating point comparisons. [`assert_`](generated/numpy.testing.assert_#numpy.testing.assert_ "numpy.testing.assert_")(val[, msg]) | Assert that works in release mode. ---|--- [`assert_almost_equal`](generated/numpy.testing.assert_almost_equal#numpy.testing.assert_almost_equal "numpy.testing.assert_almost_equal")(actual, desired[, ...]) | Raises an AssertionError if two items are not equal up to desired precision. 
[`assert_approx_equal`](generated/numpy.testing.assert_approx_equal#numpy.testing.assert_approx_equal "numpy.testing.assert_approx_equal")(actual, desired[, ...]) | Raises an AssertionError if two items are not equal up to significant digits. [`assert_array_almost_equal`](generated/numpy.testing.assert_array_almost_equal#numpy.testing.assert_array_almost_equal "numpy.testing.assert_array_almost_equal")(actual, desired[, ...]) | Raises an AssertionError if two objects are not equal up to desired precision. [`print_assert_equal`](generated/numpy.testing.print_assert_equal#numpy.testing.print_assert_equal "numpy.testing.print_assert_equal")(test_string, actual, desired) | Test if two objects are equal, and print an error message if test fails. ## Decorators [`decorate_methods`](generated/numpy.testing.decorate_methods#numpy.testing.decorate_methods "numpy.testing.decorate_methods")(cls, decorator[, testmatch]) | Apply a decorator to all methods in a class matching a regular expression. ---|--- ## Test running [`clear_and_catch_warnings`](generated/numpy.testing.clear_and_catch_warnings#numpy.testing.clear_and_catch_warnings "numpy.testing.clear_and_catch_warnings")([record, modules]) | Context manager that resets warning registry for catching warnings ---|--- [`measure`](generated/numpy.testing.measure#numpy.testing.measure "numpy.testing.measure")(code_str[, times, label]) | Return elapsed time for executing code in the namespace of the caller. [`rundocs`](generated/numpy.testing.rundocs#numpy.testing.rundocs "numpy.testing.rundocs")([filename, raise_on_error]) | Run doctests found in the given file. [`suppress_warnings`](generated/numpy.testing.suppress_warnings#numpy.testing.suppress_warnings "numpy.testing.suppress_warnings")([forwarding_rule]) | Context manager and decorator doing much the same as `warnings.catch_warnings`. 
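A brief sketch of how a couple of these testing helpers are typically used (the values are arbitrary):

```python
import numpy as np
from numpy.testing import assert_allclose, assert_raises

# assert_allclose compares floating-point results up to a tolerance.
assert_allclose(np.ones(3) / 3, [0.333333333] * 3, rtol=1e-6)

# assert_raises checks that an operation fails as expected,
# here a reshape to an incompatible size.
with assert_raises(ValueError):
    np.arange(4).reshape(3, 3)
```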
## Testing custom array containers (numpy.testing.overrides) These functions can be useful when testing custom array container implementations which make use of `__array_ufunc__`/`__array_function__`. [`allows_array_function_override`](generated/numpy.testing.overrides.allows_array_function_override#numpy.testing.overrides.allows_array_function_override "numpy.testing.overrides.allows_array_function_override")(func) | Determine if a Numpy function can be overridden via `__array_function__` ---|--- [`allows_array_ufunc_override`](generated/numpy.testing.overrides.allows_array_ufunc_override#numpy.testing.overrides.allows_array_ufunc_override "numpy.testing.overrides.allows_array_ufunc_override")(func) | Determine if a function can be overridden via `__array_ufunc__` [`get_overridable_numpy_ufuncs`](generated/numpy.testing.overrides.get_overridable_numpy_ufuncs#numpy.testing.overrides.get_overridable_numpy_ufuncs "numpy.testing.overrides.get_overridable_numpy_ufuncs")() | List all numpy ufuncs overridable via `__array_ufunc__` [`get_overridable_numpy_array_functions`](generated/numpy.testing.overrides.get_overridable_numpy_array_functions#numpy.testing.overrides.get_overridable_numpy_array_functions "numpy.testing.overrides.get_overridable_numpy_array_functions")() | List all numpy functions overridable via `__array_function__` ## Guidelines * [Testing guidelines](testing) * [Introduction](testing#introduction) * [Testing NumPy](testing#testing-numpy) * [Running tests from inside Python](testing#running-tests-from-inside-python) * [Running tests from the command line](testing#running-tests-from-the-command-line) * [Running doctests](testing#running-doctests) * [Other methods of running tests](testing#other-methods-of-running-tests) * [Writing your own tests](testing#writing-your-own-tests) * [Using C code in tests](testing#using-c-code-in-tests) * [`build_and_import_extension`](testing#numpy.testing.extbuild.build_and_import_extension) * [Labeling 
tests](testing#labeling-tests) * [Easier setup and teardown functions / methods](testing#easier-setup-and-teardown-functions-methods) * [Parametric tests](testing#parametric-tests) * [Doctests](testing#doctests) * [`tests/`](testing#tests) * [`__init__.py` and `setup.py`](testing#init-py-and-setup-py) * [Tips & Tricks](testing#tips-tricks) * [Known failures & skipping tests](testing#known-failures-skipping-tests) * [Tests on random data](testing#tests-on-random-data) * [Documentation for `numpy.test`](testing#documentation-for-numpy-test) * [`test`](testing#numpy.test)

# Version information

The `numpy.version` submodule includes several constants that expose more detailed information about the exact version of the installed `numpy` package:

numpy.version.version

Version string for the installed package - matches `numpy.__version__`.

numpy.version.full_version

Version string - the same as `numpy.version.version`.

numpy.version.short_version

Version string without any local build identifiers.

#### Examples

    >>> np.__version__
    '2.1.0.dev0+git20240319.2ea7ce0'  # may vary
    >>> np.version.short_version
    '2.1.0.dev0'  # may vary

numpy.version.git_revision

String containing the git hash of the commit from which `numpy` was built.

numpy.version.release

`True` if this version is a `numpy` release, `False` if a dev version.

# Window functions

## Various windows

[`bartlett`](generated/numpy.bartlett#numpy.bartlett "numpy.bartlett")(M) | Return the Bartlett window.
---|---
[`blackman`](generated/numpy.blackman#numpy.blackman "numpy.blackman")(M) | Return the Blackman window.
[`hamming`](generated/numpy.hamming#numpy.hamming "numpy.hamming")(M) | Return the Hamming window.
[`hanning`](generated/numpy.hanning#numpy.hanning "numpy.hanning")(M) | Return the Hanning window.
[`kaiser`](generated/numpy.kaiser#numpy.kaiser "numpy.kaiser")(M, beta) | Return the Kaiser window.
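As a short illustration of the window functions listed above (a minimal sketch; the signal is arbitrary):

```python
import numpy as np

# A 51-point Hanning window tapers from 0 up to 1 at the center and back to 0.
w = np.hanning(51)
assert abs(w[0]) < 1e-12 and abs(w[-1]) < 1e-12
assert abs(w[25] - 1.0) < 1e-12

# Windows are typically applied to a signal before an FFT
# to reduce spectral leakage.
t = np.arange(51)
signal = np.sin(2 * np.pi * 5 * t / 51)
spectrum = np.fft.rfft(signal * w)
assert spectrum.shape == (26,)
```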
# NumPy security

Security issues can be reported privately as described in the project README and when opening a [new issue on the issue tracker](https://github.com/numpy/numpy/issues/new/choose). The [Python security reporting guidelines](https://www.python.org/dev/security/) are a good resource, and their notes also apply to NumPy.

NumPy's maintainers are not security experts. However, we are conscientious about security, and we are experts on both the NumPy codebase and how it is used. Please notify us before creating security advisories against NumPy, as we are happy to prioritize issues or help with assessing the severity of a bug. A security advisory we are not aware of beforehand can lead to a lot of work for all involved parties.

## Advice for using NumPy on untrusted data

A user who can freely execute NumPy (or Python) functions must be considered to have the same privileges as the process/Python interpreter.

That said, NumPy should generally be safe to use on _data_ provided by unprivileged users and read through safe API functions (e.g. loaded from a text file or `.npy` file without pickle support). Malicious _values_ or _data sizes_ should never lead to privilege escalation.

Note that the above refers to array data. We do not currently consider, for example, `f2py` to be safe: it is typically used to compile a program that is then run. Any `f2py` invocation must thus use the same privileges as the later execution.

The following points may be useful or should be noted when working with untrusted data:

* Exhausting memory can result in an out-of-memory kill, which is a possible denial-of-service attack. Possible causes include:
  * Functions reading text files, which may require much more memory than the original input file size.
  * If users can create arbitrarily shaped arrays, NumPy's broadcasting means that intermediate or result arrays can be much larger than the inputs.
* NumPy structured dtypes allow for a large amount of complexity.
Fortunately, most code fails gracefully when a structured dtype is provided unexpectedly. However, code should either disallow untrusted users from providing these (e.g. via `.npy` files) or carefully check the fields included for nested structured/subarray dtypes.

* Passing on user input should generally be considered unsafe (except for the data being read). An example would be `np.dtype(user_string)` or `dtype=user_string`.
* The speed of operations can depend on values, and memory order can lead to larger temporary memory use and slower execution. This means that operations may be significantly slower or use more memory than simple test cases suggest.
* When reading data, consider enforcing a specific shape (e.g. one-dimensional) or dtype such as `float64`, `float32`, or `int64` to reduce complexity.

When working with non-trivial untrusted data, it is advisable to sandbox the analysis to guard against potential privilege escalation. This is especially advisable if further libraries based on NumPy are used, since these add additional complexity and potential security issues.

# CPU build options

## Description

The following options are mainly used to change the default behavior of optimizations that target certain CPU features:

* `cpu-baseline`: minimal set of required CPU features. The default value is `min`, which provides the minimum CPU features that can safely run on a wide range of platforms within the processor family.

  Note: At runtime, NumPy modules will fail to load if any of the specified features are not supported by the target CPU (a Python runtime error is raised).

* `cpu-dispatch`: dispatched set of additional CPU features. The default value is `max -xop -fma4`, which enables all CPU features except for AMD legacy features (in the case of x86).

  Note: At runtime, NumPy modules will skip any specified features that are not available in the target CPU.
These options are accessible at build time by passing setup arguments to meson-python via the build frontend (e.g. `pip` or `build`). They accept a set of CPU features, groups of features that gather several features, or special options that perform a series of procedures. To customize the CPU/build options:

```shell
pip install . -Csetup-args=-Dcpu-baseline="avx2 fma3" -Csetup-args=-Dcpu-dispatch="max"
```

## Quick start

In general, the default settings avoid imposing CPU features that may not be available on some older processors. Raising the ceiling of the baseline features will often improve performance and may also reduce binary size. The following are the most common scenarios that may require changing the default settings:

### I am building NumPy for my local use

And I do not intend to export the build to other users or target a different CPU than the host has. Set `native` for the baseline, or manually specify the CPU features in case `native` isn't supported by your platform:

```shell
python -m build --wheel -Csetup-args=-Dcpu-baseline="native"
```

Building NumPy with extra CPU features isn't necessary for this case, since all supported features are already defined within the baseline features:

```shell
python -m build --wheel -Csetup-args=-Dcpu-baseline="native" \
    -Csetup-args=-Dcpu-dispatch="none"
```

Note: a fatal error will be raised if `native` isn't supported by the host platform.

### I do not want to support the old processors of the x86 architecture

Since most CPUs nowadays support at least the `AVX` and `F16C` features, you can use:

```shell
python -m build --wheel -Csetup-args=-Dcpu-baseline="avx f16c"
```

Note: `cpu-baseline` forcibly combines all implied features, so there's no need to add the SSE features.

### I'm facing the same case above but with ppc64 architecture

Then raise the ceiling of the baseline features to Power8:

```shell
python -m build --wheel -Csetup-args=-Dcpu-baseline="vsx2"
```

### Having issues with AVX512 features?
You may have reservations about including `AVX512` or any other CPU feature, and want to exclude it from the dispatched features:

```shell
python -m build --wheel -Csetup-args=-Dcpu-dispatch="max -avx512f -avx512cd \
    -avx512_knl -avx512_knm -avx512_skx -avx512_clx -avx512_cnl -avx512_icl"
```

## Supported features

A feature name can express a single feature or a group of features. The tables below list, for each name, the features it implies and any extra features it gathers.

Note: the following features may not be supported by all compilers, and some compilers may produce a different set of implied features when it comes to features like `AVX512`, `AVX2`, and `FMA3`. See Platform differences for more details.

### On x86

Name | Implies | Gathers
---|---|---
`SSE` | `SSE2` |
`SSE2` | `SSE` |
`SSE3` | `SSE` `SSE2` |
`SSSE3` | `SSE` `SSE2` `SSE3` |
`SSE41` | `SSE` `SSE2` `SSE3` `SSSE3` |
`POPCNT` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` |
`SSE42` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` |
`AVX` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` |
`XOP` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` |
`FMA4` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` |
`F16C` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` |
`FMA3` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` |
`AVX2` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` |
`AVX512F` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` |
`AVX512CD` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512F` |
`AVX512_KNL` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512F` `AVX512CD` | `AVX512ER` `AVX512PF`
`AVX512_KNM` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512F` `AVX512CD` `AVX512_KNL` | `AVX5124FMAPS` `AVX5124VNNIW` `AVX512VPOPCNTDQ`
`AVX512_SKX` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512F` `AVX512CD` | `AVX512VL` `AVX512BW` `AVX512DQ`
`AVX512_CLX` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512F` `AVX512CD` `AVX512_SKX` | `AVX512VNNI`
`AVX512_CNL` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512F` `AVX512CD` `AVX512_SKX` | `AVX512IFMA` `AVX512VBMI`
`AVX512_ICL` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512F` `AVX512CD` `AVX512_SKX` `AVX512_CLX` `AVX512_CNL` | `AVX512VBMI2` `AVX512BITALG` `AVX512VPOPCNTDQ`
`AVX512_SPR` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512F` `AVX512CD` `AVX512_SKX` `AVX512_CLX` `AVX512_CNL` `AVX512_ICL` | `AVX512FP16`

### On IBM/POWER big-endian

Name | Implies
---|---
`VSX` |
`VSX2` | `VSX`
`VSX3` | `VSX` `VSX2`
`VSX4` | `VSX` `VSX2` `VSX3`

### On IBM/POWER little-endian

Name | Implies
---|---
`VSX` | `VSX2`
`VSX2` | `VSX`
`VSX3` | `VSX` `VSX2`
`VSX4` | `VSX` `VSX2` `VSX3`

### On ARMv7/A32

Name | Implies
---|---
`NEON` |
`NEON_FP16` | `NEON`
`NEON_VFPV4` | `NEON` `NEON_FP16`
`ASIMD` | `NEON` `NEON_FP16` `NEON_VFPV4`
`ASIMDHP` | `NEON` `NEON_FP16` `NEON_VFPV4` `ASIMD`
`ASIMDDP` | `NEON` `NEON_FP16` `NEON_VFPV4` `ASIMD`
`ASIMDFHM` | `NEON` `NEON_FP16` `NEON_VFPV4` `ASIMD` `ASIMDHP`

### On ARMv8/A64

Name | Implies
---|---
`NEON` | `NEON_FP16` `NEON_VFPV4` `ASIMD`
`NEON_FP16` | `NEON` `NEON_VFPV4` `ASIMD`
`NEON_VFPV4` | `NEON` `NEON_FP16` `ASIMD`
`ASIMD` | `NEON` `NEON_FP16` `NEON_VFPV4`
`ASIMDHP` | `NEON` `NEON_FP16` `NEON_VFPV4` `ASIMD`
`ASIMDDP` | `NEON` `NEON_FP16` `NEON_VFPV4` `ASIMD`
`ASIMDFHM` | `NEON` `NEON_FP16` `NEON_VFPV4` `ASIMD` `ASIMDHP`

### On IBM/ZSYSTEM(S390X)

Name | Implies
---|---
`VX` |
`VXE` | `VX`
`VXE2` | `VX` `VXE`

## Special options

* `NONE`: enables no features.
* `NATIVE`: enables all CPU features supported by the host CPU; this operation is based on the compiler flags (`-march=native`, `-xHost`, `/QxHost`).
* `MIN`: enables the minimum CPU features that can safely run on a wide range of platforms:

For Arch | Implies
---|---
x86 (32-bit mode) | `SSE` `SSE2`
x86_64 | `SSE` `SSE2` `SSE3`
IBM/POWER (big-endian mode) | `NONE`
IBM/POWER (little-endian mode) | `VSX` `VSX2`
ARMHF | `NONE`
ARM64, a.k.a. AArch64 | `NEON` `NEON_FP16` `NEON_VFPV4` `ASIMD`
IBM/ZSYSTEM(S390X) | `NONE`

* `MAX`: enables all CPU features supported by the compiler and platform.
* Operators `-`/`+`: remove or add features; useful with the options `MAX`, `MIN` and `NATIVE`.

## Behaviors

* CPU features and other options are case-insensitive, for example:

```shell
python -m build --wheel -Csetup-args=-Dcpu-dispatch="SSE41 avx2 FMA3"
```

* The order of the requested optimizations doesn't matter:

```shell
python -m build --wheel -Csetup-args=-Dcpu-dispatch="SSE41 AVX2 FMA3"
# equivalent to
python -m build --wheel -Csetup-args=-Dcpu-dispatch="FMA3 AVX2 SSE41"
```

* Commas, spaces, or `+` can all be used as separators, for example:

```shell
python -m build --wheel -Csetup-args=-Dcpu-dispatch="avx2 avx512f"
# or
python -m build --wheel -Csetup-args=-Dcpu-dispatch=avx2,avx512f
# or
python -m build --wheel -Csetup-args=-Dcpu-dispatch="avx2+avx512f"
```

  All of the above work, but arguments must be enclosed in quotes or escaped with a backslash if they contain spaces.

* `cpu-baseline` combines all implied CPU features, for example:

```shell
python -m build --wheel -Csetup-args=-Dcpu-baseline=sse42
# equivalent to
python -m build --wheel -Csetup-args=-Dcpu-baseline="sse sse2 sse3 ssse3 sse41 popcnt sse42"
```

* `cpu-baseline` will be treated as `native` if a compiler native flag (`-march=native`, `-xHost`, or `/QxHost`) is enabled through the environment variable `CFLAGS`:

```shell
export CFLAGS="-march=native"
pip install .
# is equivalent to
pip install . -Csetup-args=-Dcpu-baseline=native
```

* `cpu-baseline` skips any specified features that aren't supported by the target platform or compiler, rather than raising fatal errors.

  Note: since `cpu-baseline` combines all implied features, the maximum supported subset of the implied features will be enabled, rather than all of them being skipped. For example:

```shell
# Requesting `AVX2,FMA3`, but the compiler only supports **SSE** features
python -m build --wheel -Csetup-args=-Dcpu-baseline="avx2 fma3"
# is equivalent to
python -m build --wheel -Csetup-args=-Dcpu-baseline="sse sse2 sse3 ssse3 sse41 popcnt sse42"
```

* `cpu-dispatch` does not combine any implied CPU features, so you must add them yourself unless you want to disable one or all of them:

```shell
# Only dispatches AVX2 and FMA3
python -m build --wheel -Csetup-args=-Dcpu-dispatch=avx2,fma3
# Dispatches AVX and SSE features as well
python -m build --wheel -Csetup-args=-Dcpu-dispatch=ssse3,sse41,sse42,avx,avx2,fma3
```

* `cpu-dispatch` skips any specified baseline features, and also skips any features not supported by the target platform or compiler, without raising fatal errors.

In any case, you should always check the final report in the build log to verify the enabled features. See Build report for more details.

## Platform differences

Some exceptional conditions force us to link certain features together when it comes to certain compilers or architectures, making it impossible to build them separately. These conditions can be divided into two parts, as follows:

**Architectural compatibility**

The need to align certain CPU features that are assured to be supported by successive generations of the same architecture; some cases:

* On ppc64le, `VSX (ISA 2.06)` and `VSX2 (ISA 2.07)` imply one another, since the first generation that supports little-endian mode is Power-8 `(ISA 2.07)`.
* On AArch64, `NEON NEON_FP16 NEON_VFPV4 ASIMD` imply each other, since they are part of the hardware baseline.
For example:

```shell
# On ARMv8/A64, specifying NEON enables Advanced SIMD
# and all predecessor extensions
python -m build --wheel -Csetup-args=-Dcpu-baseline=neon
# which is equivalent to
python -m build --wheel -Csetup-args=-Dcpu-baseline="neon neon_fp16 neon_vfpv4 asimd"
```

Note: take a close look at Supported features to determine which features imply one another.

**Compilation compatibility**

Some compilers don't provide independent support for all CPU features. For instance, **Intel**'s compiler doesn't provide separate flags for `AVX2` and `FMA3`. This makes sense, since all Intel CPUs that come with `AVX2` also support `FMA3`, but this approach is incompatible with other **x86** CPUs from **AMD** or **VIA**. For example:

```shell
# Specifying AVX2 force-enables FMA3 on Intel compilers
python -m build --wheel -Csetup-args=-Dcpu-baseline=avx2
# which is equivalent to
python -m build --wheel -Csetup-args=-Dcpu-baseline="avx2 fma3"
```

The following tables only show the differences imposed by some compilers relative to the general picture shown in the Supported features tables:

Note: feature names with strikeout represent unsupported CPU features.
### On x86::Intel Compiler

Name | Implies | Gathers
---|---|---
`FMA3` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `AVX2` |
`AVX2` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` |
`AVX512F` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512CD` |
~~`XOP`~~ | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` |
~~`FMA4`~~ | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` |
`AVX512_SPR` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512F` `AVX512CD` `AVX512_SKX` `AVX512_CLX` `AVX512_CNL` `AVX512_ICL` | `AVX512FP16`

### On x86::Microsoft Visual C/C++

Name | Implies | Gathers
---|---|---
`FMA3` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `AVX2` |
`AVX2` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` |
`AVX512F` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512CD` `AVX512_SKX` |
`AVX512CD` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512F` `AVX512_SKX` |
~~`AVX512_KNL`~~ | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512F` `AVX512CD` | `AVX512ER` `AVX512PF`
~~`AVX512_KNM`~~ | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512F` `AVX512CD` `AVX512_KNL` | `AVX5124FMAPS` `AVX5124VNNIW` `AVX512VPOPCNTDQ`
`AVX512_SPR` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512F` `AVX512CD` `AVX512_SKX` `AVX512_CLX` `AVX512_CNL` `AVX512_ICL` | `AVX512FP16`

## Build report

In most cases, the CPU build options do not produce fatal errors that stop the build. Most of the errors that may appear in the build log serve as heavy warnings, caused by some expected CPU features being unsupported by the compiler. We therefore strongly recommend checking the final report log, to be aware of which CPU features are enabled and which are not.
You can find the final report of CPU optimizations at the end of the build log; here is how it looks on x86_64/gcc:

```
########### EXT COMPILER OPTIMIZATION ###########
Platform      :
  Architecture: x64
  Compiler    : gcc

CPU baseline  :
  Requested   : 'min'
  Enabled     : SSE SSE2 SSE3
  Flags       : -msse -msse2 -msse3
  Extra checks: none

CPU dispatch  :
  Requested   : 'max -xop -fma4'
  Enabled     : SSSE3 SSE41 POPCNT SSE42 AVX F16C FMA3 AVX2 AVX512F AVX512CD
                AVX512_KNL AVX512_KNM AVX512_SKX AVX512_CLX AVX512_CNL AVX512_ICL
  Generated   :
              :
  SSE41       : SSE SSE2 SSE3 SSSE3
  Flags       : -msse -msse2 -msse3 -mssse3 -msse4.1
  Extra checks: none
  Detect      : SSE SSE2 SSE3 SSSE3 SSE41
              : build/src.linux-x86_64-3.9/numpy/_core/src/umath/loops_arithmetic.dispatch.c
              : numpy/_core/src/umath/_umath_tests.dispatch.c

  SSE42       : SSE SSE2 SSE3 SSSE3 SSE41 POPCNT
  Flags       : -msse -msse2 -msse3 -mssse3 -msse4.1 -mpopcnt -msse4.2
  Extra checks: none
  Detect      : SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42
              : build/src.linux-x86_64-3.9/numpy/_core/src/_simd/_simd.dispatch.c

  AVX2        : SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42 AVX F16C
  Flags       : -msse -msse2 -msse3 -mssse3 -msse4.1 -mpopcnt -msse4.2 -mavx -mf16c -mavx2
  Extra checks: none
  Detect      : AVX F16C AVX2
              : build/src.linux-x86_64-3.9/numpy/_core/src/umath/loops_arithm_fp.dispatch.c
              : build/src.linux-x86_64-3.9/numpy/_core/src/umath/loops_arithmetic.dispatch.c
              : numpy/_core/src/umath/_umath_tests.dispatch.c

  (FMA3 AVX2) : SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42 AVX F16C
  Flags       : -msse -msse2 -msse3 -mssse3 -msse4.1 -mpopcnt -msse4.2 -mavx -mf16c -mfma -mavx2
  Extra checks: none
  Detect      : AVX F16C FMA3 AVX2
              : build/src.linux-x86_64-3.9/numpy/_core/src/_simd/_simd.dispatch.c
              : build/src.linux-x86_64-3.9/numpy/_core/src/umath/loops_exponent_log.dispatch.c
              : build/src.linux-x86_64-3.9/numpy/_core/src/umath/loops_trigonometric.dispatch.c

  AVX512F     : SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42 AVX F16C FMA3 AVX2
  Flags       : -msse -msse2 -msse3 -mssse3 -msse4.1 -mpopcnt -msse4.2 -mavx -mf16c -mfma -mavx2 -mavx512f
  Extra checks: AVX512F_REDUCE
  Detect      : AVX512F
              : build/src.linux-x86_64-3.9/numpy/_core/src/_simd/_simd.dispatch.c
              : build/src.linux-x86_64-3.9/numpy/_core/src/umath/loops_arithm_fp.dispatch.c
              : build/src.linux-x86_64-3.9/numpy/_core/src/umath/loops_arithmetic.dispatch.c
              : build/src.linux-x86_64-3.9/numpy/_core/src/umath/loops_exponent_log.dispatch.c
              : build/src.linux-x86_64-3.9/numpy/_core/src/umath/loops_trigonometric.dispatch.c

  AVX512_SKX  : SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42 AVX F16C FMA3 AVX2 AVX512F AVX512CD
  Flags       : -msse -msse2 -msse3 -mssse3 -msse4.1 -mpopcnt -msse4.2 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq
  Extra checks: AVX512BW_MASK AVX512DQ_MASK
  Detect      : AVX512_SKX
              : build/src.linux-x86_64-3.9/numpy/_core/src/_simd/_simd.dispatch.c
              : build/src.linux-x86_64-3.9/numpy/_core/src/umath/loops_arithmetic.dispatch.c
              : build/src.linux-x86_64-3.9/numpy/_core/src/umath/loops_exponent_log.dispatch.c
CCompilerOpt.cache_flush[804] : write cache to path -> /home/seiko/work/repos/numpy/build/temp.linux-x86_64-3.9/ccompiler_opt_cache_ext.py

########### CLIB COMPILER OPTIMIZATION ###########
Platform      :
  Architecture: x64
  Compiler    : gcc

CPU baseline  :
  Requested   : 'min'
  Enabled     : SSE SSE2 SSE3
  Flags       : -msse -msse2 -msse3
  Extra checks: none

CPU dispatch  :
  Requested   : 'max -xop -fma4'
  Enabled     : SSSE3 SSE41 POPCNT SSE42 AVX F16C FMA3 AVX2 AVX512F AVX512CD
                AVX512_KNL AVX512_KNM AVX512_SKX AVX512_CLX AVX512_CNL AVX512_ICL
  Generated   : none
```

There is a separate report for each of `build_ext` and `build_clib`. Each includes several sections, and each section has several values, representing the following:

**Platform**:

* Architecture: the architecture name of the target CPU. It should be one of `x86`, `x64`, `ppc64`, `ppc64le`, `armhf`, `aarch64`, `s390x` or `unknown`.
* Compiler: the compiler name. It should be one of gcc, clang, msvc, icc, iccw or unix-like.
**CPU baseline**:

* Requested: the specific features and options passed to `cpu-baseline`, as-is.
* Enabled: the final set of enabled CPU features.
* Flags: the compiler flags that were applied to all NumPy C/C++ sources during compilation, except for temporary sources used to generate the binary objects of dispatched features.
* Extra checks: a list of internal checks that activate certain functionality or intrinsics related to the enabled features; useful for debugging when developing SIMD kernels.

**CPU dispatch**:

* Requested: the specific features and options passed to `cpu-dispatch`, as-is.
* Enabled: the final set of enabled CPU features.
* Generated: starting on the next row of this property, the features for which optimizations have been generated are shown as several sections with similar properties, explained as follows:
  * One or multiple dispatched features: the implied CPU features.
  * Flags: the compiler flags that were used for these features.
  * Extra checks: similar to the baseline, but for these dispatched features.
  * Detect: the set of CPU features that need to be detected at runtime in order to execute the generated optimizations.
  * The lines that come after the above properties, prefixed with a ':' on separate lines, represent the paths of the C/C++ sources that define the generated optimizations.

## Runtime dispatch

Importing NumPy triggers a scan of the available CPU features from the set of dispatchable features. This can be further restricted by setting the environment variable `NPY_DISABLE_CPU_FEATURES` to a comma-, tab-, or space-separated list of features to disable. This will raise an error if parsing fails or if the feature was not enabled. For instance, on `x86_64` this will disable `AVX2` and `FMA3`:

```shell
NPY_DISABLE_CPU_FEATURES="AVX2,FMA3"
```

If the feature is not available, a warning will be emitted.
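To verify the effect of disabling features, the variable can be set for a single invocation. The sketch below assumes NumPy >= 1.24, where `numpy.show_runtime()` reports the SIMD extensions that were found, not found, and built as baseline; the exact output depends on the local build and CPU:

```shell
# Disable AVX2 and FMA3 for this run only; the variable is read once,
# when NumPy is imported.
NPY_DISABLE_CPU_FEATURES="AVX2,FMA3" python -c "import numpy; numpy.show_runtime()"
```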
## Tracking dispatched functions

Discovering which CPU targets are enabled for the different optimized functions is possible through the Python function `numpy.lib.introspect.opt_func_info`. This function offers the flexibility of applying filters via two optional arguments: one for refining function names and the other for specifying data types in the signatures. For example:

```python
>>> import json
>>> import numpy
>>> func_info = numpy.lib.introspect.opt_func_info(
...     func_name='add|abs', signature='float64|complex64'
... )
>>> print(json.dumps(func_info, indent=2))
{
  "absolute": {
    "dd": {
      "current": "SSE41",
      "available": "SSE41 baseline(SSE SSE2 SSE3)"
    },
    "Ff": {
      "current": "FMA3__AVX2",
      "available": "AVX512F FMA3__AVX2 baseline(SSE SSE2 SSE3)"
    },
    "Dd": {
      "current": "FMA3__AVX2",
      "available": "AVX512F FMA3__AVX2 baseline(SSE SSE2 SSE3)"
    }
  },
  "add": {
    "ddd": {
      "current": "FMA3__AVX2",
      "available": "FMA3__AVX2 baseline(SSE SSE2 SSE3)"
    },
    "FFF": {
      "current": "FMA3__AVX2",
      "available": "FMA3__AVX2 baseline(SSE SSE2 SSE3)"
    }
  }
}
```

# How does the CPU dispatcher work?

The NumPy dispatcher is based on multi-source compilation: a certain source is compiled multiple times, with different compiler flags and different **C** definitions that affect its code paths. This enables certain instruction sets for each compiled object, depending on the required optimizations, and the resulting objects are finally linked together. This mechanism should support all compilers, and it doesn't require any compiler-specific extensions, but it adds a few steps to normal compilation, explained as follows.

## 1- Configuration

The user configures the required optimizations before starting to build the source files, via the two command arguments explained above:

* `--cpu-baseline`: minimal set of required optimizations.
* `--cpu-dispatch`: dispatched set of additional optimizations.
## 2- Discovering the environment

In this part, we check the compiler and the platform architecture, and cache some of the intermediary results to speed up rebuilding.

## 3- Validating the requested optimizations

The requested optimizations are tested against the compiler, to see what the compiler can support.

## 4- Generating the main configuration header

The generated header `_cpu_dispatch.h` contains all the definitions and headers of the instruction sets for the required optimizations that were validated during the previous step. It also contains extra C definitions used for defining NumPy's Python-level module attributes `__cpu_baseline__` and `__cpu_dispatch__`.

**What is in this header?**

The example header below was dynamically generated by gcc on an x86 machine. The compiler supports `--cpu-baseline="sse sse2 sse3"` and `--cpu-dispatch="ssse3 sse41"`, and the result is:

```c
// The header should be located at numpy/numpy/_core/src/common/_cpu_dispatch.h
/**NOTE
 ** C definitions prefixed with "NPY_HAVE_" represent
 ** the required optimizations.
 **
 ** C definitions prefixed with 'NPY__CPU_TARGET_' are protected and
 ** shouldn't be used by any NumPy C sources.
 */
/******* baseline features *******/
/** SSE **/
#define NPY_HAVE_SSE 1
#include <xmmintrin.h>
/** SSE2 **/
#define NPY_HAVE_SSE2 1
#include <emmintrin.h>
/** SSE3 **/
#define NPY_HAVE_SSE3 1
#include <pmmintrin.h>

/******* dispatch-able features *******/
#ifdef NPY__CPU_TARGET_SSSE3
  /** SSSE3 **/
  #define NPY_HAVE_SSSE3 1
  #include <tmmintrin.h>
#endif
#ifdef NPY__CPU_TARGET_SSE41
  /** SSE41 **/
  #define NPY_HAVE_SSE41 1
  #include <smmintrin.h>
#endif
```

**Baseline features** are the minimal set of required optimizations configured via `--cpu-baseline`. They have no preprocessor guards and they're always on, which means they can be used in any source.

Does this mean NumPy's infrastructure passes the compiler flags of the baseline features to all sources? Definitely, yes. But the dispatch-able sources are treated differently.
What if the user specifies certain **baseline features** during the build, but at runtime the machine doesn't support even these features? Will the compiled code be called via one of these definitions, or might the compiler itself have auto-generated/vectorized certain pieces of code based on the provided command-line compiler flags?

During the loading of the NumPy module, there's a validation step that detects this situation. It raises a Python runtime error to inform the user. This prevents the CPU from reaching an illegal-instruction error that would cause a segfault.

**Dispatch-able features** are our dispatched set of additional optimizations, configured via `--cpu-dispatch`. They are not activated by default and are always guarded by other C definitions prefixed with `NPY__CPU_TARGET_`. The `NPY__CPU_TARGET_` definitions are only enabled within **dispatch-able sources**.

## 5- Dispatch-able sources and configuration statements

Dispatch-able sources are special **C** files that can be compiled multiple times with different compiler flags and different **C** definitions. These affect the code paths, enabling certain instruction sets for each compiled object according to "**the configuration statements**", which must be declared inside a **C** comment (`/* */`) starting with the special mark **@targets** at the top of each dispatch-able source. Dispatch-able sources are treated as normal **C** sources if optimization is disabled with the command argument `--disable-optimization`.

**What are configuration statements?**

Configuration statements are a kind of keywords combined together to determine the required optimizations for the dispatch-able source.
Example:

```c
/*@targets avx2 avx512f vsx2 vsx3 asimd asimdhp */
// C code
```

The keywords mainly represent the additional optimizations configured through `--cpu-dispatch`, but they can also represent other options, such as:

* Target groups: pre-configured configuration statements used for managing the required optimizations from outside the dispatch-able source.
* Policies: collections of options used for changing the default behaviors or forcing the compilers to perform certain things.
* "baseline": a unique keyword representing the minimal optimizations configured through `--cpu-baseline`.

**NumPy's infrastructure handles dispatch-able sources in four steps**:

* **(A) Recognition**: Just like source templates and F2PY, dispatch-able sources require a special extension, `*.dispatch.c`, to mark C dispatch-able source files, and for C++ `*.dispatch.cpp` or `*.dispatch.cxx`. **NOTE**: C++ is not supported yet.
* **(B) Parsing and validating**: The dispatch-able sources filtered by the previous step are parsed, and their configuration statements are validated one by one, in order to determine the required optimizations.
* **(C) Wrapping**: This is the approach taken by NumPy's infrastructure, which has proved to be sufficiently flexible to compile a single source multiple times with different **C** definitions and flags that affect the code paths. The process is achieved by creating a temporary **C** source for each required additional optimization, containing the declarations of the **C** definitions and including the involved source via the **C** directive **#include**.
For more clarification take a look at the following code for AVX512F : /* * this definition is used by NumPy utilities as suffixes for the * exported symbols */ #define NPY__CPU_TARGET_CURRENT AVX512F /* * The following definitions enable * definitions of the dispatch-able features that are defined within the main * configuration header. These are definitions for the implied features. */ #define NPY__CPU_TARGET_SSE #define NPY__CPU_TARGET_SSE2 #define NPY__CPU_TARGET_SSE3 #define NPY__CPU_TARGET_SSSE3 #define NPY__CPU_TARGET_SSE41 #define NPY__CPU_TARGET_POPCNT #define NPY__CPU_TARGET_SSE42 #define NPY__CPU_TARGET_AVX #define NPY__CPU_TARGET_F16C #define NPY__CPU_TARGET_FMA3 #define NPY__CPU_TARGET_AVX2 #define NPY__CPU_TARGET_AVX512F // our dispatch-able source #include "/the/absolute/path/of/hello.dispatch.c" * **(D) Dispatch-able configuration header** : The infrastructure generates a config header for each dispatch-able source, this header mainly contains two abstract **C** macros used for identifying the generated objects, so they can be used for runtime dispatching certain symbols from the generated objects by any **C** source. It is also used for forward declarations. The generated header takes the name of the dispatch-able source after excluding the extension and replace it with `.h`, for example assume we have a dispatch-able source called `hello.dispatch.c` and contains the following: // hello.dispatch.c /*@targets baseline sse42 avx512f */ #include #include "numpy/utils.h" // NPY_CAT, NPY_TOSTR #ifndef NPY__CPU_TARGET_CURRENT // wrapping the dispatch-able source only happens to the additional optimizations // but if the keyword 'baseline' provided within the configuration statements, // the infrastructure will add extra compiling for the dispatch-able source by // passing it as-is to the compiler without any changes. 
#define CURRENT_TARGET(X) X #define NPY__CPU_TARGET_CURRENT baseline // for printing only #else // since we reach to this point, that's mean we're dealing with // the additional optimizations, so it could be SSE42 or AVX512F #define CURRENT_TARGET(X) NPY_CAT(NPY_CAT(X, _), NPY__CPU_TARGET_CURRENT) #endif // Macro 'CURRENT_TARGET' adding the current target as suffix to the exported symbols, // to avoid linking duplications, NumPy already has a macro called // 'NPY_CPU_DISPATCH_CURFX' similar to it, located at // numpy/numpy/_core/src/common/npy_cpu_dispatch.h // NOTE: we tend to not adding suffixes to the baseline exported symbols void CURRENT_TARGET(simd_whoami)(const char *extra_info) { printf("I'm " NPY_TOSTR(NPY__CPU_TARGET_CURRENT) ", %s\n", extra_info); } Now assume you attached **hello.dispatch.c** to the source tree, then the infrastructure should generate a temporary config header called **hello.dispatch.h** that can be reached by any source in the source tree, and it should contain the following code : #ifndef NPY__CPU_DISPATCH_EXPAND_ // To expand the macro calls in this header #define NPY__CPU_DISPATCH_EXPAND_(X) X #endif // Undefining the following macros, due to the possibility of including config headers // multiple times within the same source and since each config header represents // different required optimizations according to the specified configuration // statements in the dispatch-able source that derived from it. #undef NPY__CPU_DISPATCH_BASELINE_CALL #undef NPY__CPU_DISPATCH_CALL // nothing strange here, just a normal preprocessor callback // enabled only if 'baseline' specified within the configuration statements #define NPY__CPU_DISPATCH_BASELINE_CALL(CB, ...) \ NPY__CPU_DISPATCH_EXPAND_(CB(__VA_ARGS__)) // 'NPY__CPU_DISPATCH_CALL' is an abstract macro is used for dispatching // the required optimizations that specified within the configuration statements. 
// // @param CHK, Expected a macro that can be used to detect CPU features // in runtime, which takes a CPU feature name without string quotes and // returns the testing result in a shape of boolean value. // NumPy already has macro called "NPY_CPU_HAVE", which fits this requirement. // // @param CB, a callback macro that expected to be called multiple times depending // on the required optimizations, the callback should receive the following arguments: // 1- The pending calls of @param CHK filled up with the required CPU features, // that need to be tested first in runtime before executing call belong to // the compiled object. // 2- The required optimization name, same as in 'NPY__CPU_TARGET_CURRENT' // 3- Extra arguments in the macro itself // // By default the callback calls are sorted depending on the highest interest // unless the policy "$keep_sort" was in place within the configuration statements // see "Dive into the CPU dispatcher" for more clarification. #define NPY__CPU_DISPATCH_CALL(CHK, CB, ...) \ NPY__CPU_DISPATCH_EXPAND_(CB((CHK(AVX512F)), AVX512F, __VA_ARGS__)) \ NPY__CPU_DISPATCH_EXPAND_(CB((CHK(SSE)&&CHK(SSE2)&&CHK(SSE3)&&CHK(SSSE3)&&CHK(SSE41)), SSE41, __VA_ARGS__)) An example of using the config header in light of the above: // NOTE: The following macros are only defined for demonstration purposes only. // NumPy already has a collections of macros located at // numpy/numpy/_core/src/common/npy_cpu_dispatch.h, that covers all dispatching // and declarations scenarios. #include "numpy/npy_cpu_features.h" // NPY_CPU_HAVE #include "numpy/utils.h" // NPY_CAT, NPY_EXPAND // An example for setting a macro that calls all the exported symbols at once // after checking if they're supported by the running machine. #define DISPATCH_CALL_ALL(FN, ARGS) \ NPY__CPU_DISPATCH_CALL(NPY_CPU_HAVE, DISPATCH_CALL_ALL_CB, FN, ARGS) \ NPY__CPU_DISPATCH_BASELINE_CALL(DISPATCH_CALL_BASELINE_ALL_CB, FN, ARGS) // The preprocessor callbacks. 
// The same suffixes as we define it in the dispatch-able source. #define DISPATCH_CALL_ALL_CB(CHECK, TARGET_NAME, FN, ARGS) \ if (CHECK) { NPY_CAT(NPY_CAT(FN, _), TARGET_NAME) ARGS; } #define DISPATCH_CALL_BASELINE_ALL_CB(FN, ARGS) \ FN NPY_EXPAND(ARGS); // An example for setting a macro that calls the exported symbols of highest // interest optimization, after checking if they're supported by the running machine. #define DISPATCH_CALL_HIGH(FN, ARGS) \ if (0) {} \ NPY__CPU_DISPATCH_CALL(NPY_CPU_HAVE, DISPATCH_CALL_HIGH_CB, FN, ARGS) \ NPY__CPU_DISPATCH_BASELINE_CALL(DISPATCH_CALL_BASELINE_HIGH_CB, FN, ARGS) // The preprocessor callbacks // The same suffixes as we define it in the dispatch-able source. #define DISPATCH_CALL_HIGH_CB(CHECK, TARGET_NAME, FN, ARGS) \ else if (CHECK) { NPY_CAT(NPY_CAT(FN, _), TARGET_NAME) ARGS; } #define DISPATCH_CALL_BASELINE_HIGH_CB(FN, ARGS) \ else { FN NPY_EXPAND(ARGS); } // NumPy has a macro called 'NPY_CPU_DISPATCH_DECLARE' can be used // for forward declarations any kind of prototypes based on // 'NPY__CPU_DISPATCH_CALL' and 'NPY__CPU_DISPATCH_BASELINE_CALL'. // However in this example, we just handle it manually. void simd_whoami(const char *extra_info); void simd_whoami_AVX512F(const char *extra_info); void simd_whoami_SSE41(const char *extra_info); void trigger_me(void) { // bring the auto-generated config header // which contains config macros 'NPY__CPU_DISPATCH_CALL' and // 'NPY__CPU_DISPATCH_BASELINE_CALL'. // it is highly recommended to include the config header before executing // the dispatching macros in case if there's another header in the scope. 
        #include "hello.dispatch.h"
        DISPATCH_CALL_ALL(simd_whoami, ("all"))
        DISPATCH_CALL_HIGH(simd_whoami, ("the highest interest"))
        // An example of including multiple config headers in the same source:
        // #include "hello2.dispatch.h"
        // DISPATCH_CALL_HIGH(another_function, ("the highest interest"))
    }

# CPU/SIMD optimizations

NumPy comes with a flexible working mechanism that allows it to harness the SIMD features that CPUs own, in order to provide faster and more stable performance on all popular platforms. Currently, NumPy supports the X86, IBM/Power, ARMv7 and ARMv8 architectures.

The optimization process in NumPy is carried out in three layers:

* Code is _written_ using the universal intrinsics, a set of types, macros and functions that are mapped to each supported instruction set by using guards that enable their use only when the compiler recognizes them. This allows us to generate multiple kernels for the same functionality, in which each generated kernel represents a set of instructions related to one or more certain CPU features. The first kernel represents the minimum (baseline) CPU features, and the other kernels represent the additional (dispatched) CPU features.
* At _compile_ time, CPU build options are used to define the minimum and additional features to support, based on user choice and compiler support. The appropriate intrinsics are overlaid with the platform / architecture intrinsics, and multiple kernels are compiled.
* At _runtime import_, the CPU is probed for the set of supported CPU features. A mechanism is used to grab the pointer to the most appropriate kernel, and this will be the one called for the function.

Note: The NumPy community had a deep discussion before implementing this work; please check [NEP-38](https://numpy.org/neps/nep-0038-SIMD-optimizations.html) for more clarification.
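The runtime selection step can be sketched in Python. This is an illustrative model only: NumPy performs the probing and selection in C at import time, and the `select_kernel` helper and the `supported` sets below are hypothetical stand-ins for the dispatch macros shown above.

```python
def select_kernel(supported, dispatched, baseline):
    """Return the first kernel whose required CPU features are all
    supported by the running machine; fall back to the baseline kernel."""
    for required, kernel in dispatched:  # sorted by highest interest first
        if required <= supported:
            return kernel
    return baseline

# These lambdas stand in for the compiled objects simd_whoami_AVX512F,
# simd_whoami_SSE41 and the baseline simd_whoami.
dispatched = [
    ({"AVX512F"}, lambda: "AVX512F"),
    ({"SSE", "SSE2", "SSE3", "SSSE3", "SSE41"}, lambda: "SSE41"),
]
baseline = lambda: "baseline"

cpu = {"SSE", "SSE2", "SSE3", "SSSE3", "SSE41"}
print(select_kernel(cpu, dispatched, baseline)())  # -> SSE41
```

The ordering of `dispatched` mirrors the "highest interest first" sorting of the callback calls: the first fully supported feature set wins.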
* [CPU build options](build-options)
  * [Description](build-options#description)
  * [Quick start](build-options#quick-start)
    * [I am building NumPy for my local use](build-options#i-am-building-numpy-for-my-local-use)
    * [I do not want to support the old processors of the x86 architecture](build-options#i-do-not-want-to-support-the-old-processors-of-the-x86-architecture)
    * [I'm facing the same case above but with ppc64 architecture](build-options#i-m-facing-the-same-case-above-but-with-ppc64-architecture)
    * [Having issues with AVX512 features?](build-options#having-issues-with-avx512-features)
  * [Supported features](build-options#supported-features)
    * [On x86](build-options#on-x86)
    * [On IBM/POWER big-endian](build-options#on-ibm-power-big-endian)
    * [On IBM/POWER little-endian](build-options#on-ibm-power-little-endian)
    * [On ARMv7/A32](build-options#on-armv7-a32)
    * [On ARMv8/A64](build-options#on-armv8-a64)
    * [On IBM/ZSYSTEM(S390X)](build-options#on-ibm-zsystem-s390x)
  * [Special options](build-options#special-options)
  * [Behaviors](build-options#behaviors)
  * [Platform differences](build-options#platform-differences)
    * [On x86::Intel Compiler](build-options#on-x86-intel-compiler)
    * [On x86::Microsoft Visual C/C++](build-options#on-x86-microsoft-visual-c-c)
  * [Build report](build-options#build-report)
  * [Runtime dispatch](build-options#runtime-dispatch)
    * [Tracking dispatched functions](build-options#tracking-dispatched-functions)
* [How does the CPU dispatcher work?](how-it-works)
  * [1- Configuration](how-it-works#configuration)
  * [2- Discovering the environment](how-it-works#discovering-the-environment)
  * [3- Validating the requested optimizations](how-it-works#validating-the-requested-optimizations)
  * [4- Generating the main configuration header](how-it-works#generating-the-main-configuration-header)
  * [5- Dispatch-able sources and configuration statements](how-it-works#dispatch-able-sources-and-configuration-statements)

# NumPy and SWIG

* [numpy.i: a SWIG interface
file for NumPy](swig.interface-file)
  * [Introduction](swig.interface-file#introduction)
  * [Using numpy.i](swig.interface-file#using-numpy-i)
  * [Available typemaps](swig.interface-file#available-typemaps)
  * [NumPy array scalars and SWIG](swig.interface-file#numpy-array-scalars-and-swig)
  * [Helper functions](swig.interface-file#helper-functions)
  * [Beyond the provided typemaps](swig.interface-file#beyond-the-provided-typemaps)
  * [Summary](swig.interface-file#summary)
* [Testing the numpy.i typemaps](swig.testing)
  * [Introduction](swig.testing#introduction)
  * [Testing organization](swig.testing#testing-organization)
  * [Testing header files](swig.testing#testing-header-files)
  * [Testing source files](swig.testing#testing-source-files)
  * [Testing SWIG interface files](swig.testing#testing-swig-interface-files)
  * [Testing Python scripts](swig.testing#testing-python-scripts)

# numpy.i: a SWIG interface file for NumPy

## Introduction

The Simplified Wrapper and Interface Generator (or [SWIG](https://www.swig.org)) is a powerful tool for generating wrapper code for interfacing to a wide variety of scripting languages. [SWIG](https://www.swig.org) can parse header files and, using only the code prototypes, create an interface to the target language. But [SWIG](https://www.swig.org) is not omnipotent. For example, it cannot know from the prototype:

    double rms(double* seq, int n);

what exactly `seq` is. Is it a single value to be altered in-place? Is it an array, and if so what is its length? Is it input-only? Output-only? Input-output? [SWIG](https://www.swig.org) cannot determine these details, and does not attempt to do so. If we designed `rms`, we probably made it a routine that takes an input-only array of length `n` of `double` values called `seq` and returns the root mean square.
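For reference, the intended behavior of `rms` can be sketched in pure Python. This is a hypothetical stand-in for the C routine and its wrapper, not part of `numpy.i` itself; the real conversion and arithmetic happen in C.

```python
import math

def rms(seq):
    """Return the root mean square of a sequence.

    Accepts any sequence whose elements can be converted to float,
    mirroring the behavior the generated wrapper should have.
    """
    values = [float(x) for x in seq]
    return math.sqrt(sum(v * v for v in values) / len(values))

print(rms([3.0, 4.0]))  # sqrt((9 + 16) / 2)
```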
The default behavior of [SWIG](https://www.swig.org), however, will be to create a wrapper function that compiles, but is nearly impossible to use from the scripting language in the way the C routine was intended. For Python, the preferred way of handling contiguous (or technically, _strided_) blocks of homogeneous data is with NumPy, which provides full object-oriented access to multidimensional arrays of data. Therefore, the most logical Python interface for the `rms` function would be (including doc string):

    def rms(seq):
        """
        rms: return the root mean square of a sequence
        rms(numpy.ndarray) -> double
        rms(list) -> double
        rms(tuple) -> double
        """

where `seq` would be a NumPy array of `double` values, and its length `n` would be extracted from `seq` internally before being passed to the C routine. Even better, since NumPy supports construction of arrays from arbitrary Python sequences, `seq` itself could be a nearly arbitrary sequence (so long as each element can be converted to a `double`) and the wrapper code would internally convert it to a NumPy array before extracting its data and length.

[SWIG](https://www.swig.org) allows these types of conversions to be defined via a mechanism called _typemaps_. This document provides information on how to use `numpy.i`, a [SWIG](https://www.swig.org) interface file that defines a series of typemaps intended to make the type of array-related conversions described above relatively simple to implement. For example, suppose that the `rms` function prototype defined above was in a header file named `rms.h`. To obtain the Python interface discussed above, your [SWIG](https://www.swig.org) interface file would need the following:

    %{
    #define SWIG_FILE_WITH_INIT
    #include "rms.h"
    %}

    %include "numpy.i"

    %init %{
    import_array();
    %}

    %apply (double* IN_ARRAY1, int DIM1) {(double* seq, int n)};
    %include "rms.h"

Typemaps are keyed off a list of one or more function arguments, either by type or by type and name.
We will refer to such lists as _signatures_. One of the many typemaps defined by `numpy.i` is used above and has the signature `(double* IN_ARRAY1, int DIM1)`. The argument names are intended to suggest that the `double*` argument is an input array of one dimension and that the `int` represents the size of that dimension. This is precisely the pattern in the `rms` prototype.

Most likely, no actual prototypes to be wrapped will have the argument names `IN_ARRAY1` and `DIM1`. We use the [SWIG](https://www.swig.org) `%apply` directive to apply the typemap for one-dimensional input arrays of type `double` to the actual prototype used by `rms`. Using `numpy.i` effectively, therefore, requires knowing what typemaps are available and what they do.

A [SWIG](https://www.swig.org) interface file that includes the [SWIG](https://www.swig.org) directives given above will produce wrapper code that looks something like:

     1 PyObject *_wrap_rms(PyObject *args) {
     2   PyObject *resultobj = 0;
     3   double *arg1 = (double *) 0 ;
     4   int arg2 ;
     5   double result;
     6   PyArrayObject *array1 = NULL ;
     7   int is_new_object1 = 0 ;
     8   PyObject * obj0 = 0 ;
     9
    10   if (!PyArg_ParseTuple(args,(char *)"O:rms",&obj0)) SWIG_fail;
    11   {
    12     array1 = obj_to_array_contiguous_allow_conversion(
    13                  obj0, NPY_DOUBLE, &is_new_object1);
    14     npy_intp size[1] = {
    15       -1
    16     };
    17     if (!array1 || !require_dimensions(array1, 1) ||
    18         !require_size(array1, size, 1)) SWIG_fail;
    19     arg1 = (double*) array1->data;
    20     arg2 = (int) array1->dimensions[0];
    21   }
    22   result = (double)rms(arg1,arg2);
    23   resultobj = SWIG_From_double((double)(result));
    24   {
    25     if (is_new_object1 && array1) Py_DECREF(array1);
    26   }
    27   return resultobj;
    28 fail:
    29   {
    30     if (is_new_object1 && array1) Py_DECREF(array1);
    31   }
    32   return NULL;
    33 }

The typemaps from `numpy.i` are responsible for the following lines of code: 12–20, 25 and 30. Line 10 parses the input to the `rms` function.
From the format string `"O:rms"`, we can see that the argument list is expected to be a single Python object (specified by the `O` before the colon) whose pointer is stored in `obj0`. A number of functions, supplied by `numpy.i`, are called to make and check the (possible) conversion from a generic Python object to a NumPy array. These functions are explained in the section Helper Functions, but hopefully their names are self-explanatory. At line 12 we use `obj0` to construct a NumPy array. At line 17, we check the validity of the result: that it is non-null and that it has a single dimension of arbitrary length. Once these states are verified, we extract the data buffer and length in lines 19 and 20 so that we can call the underlying C function at line 22. Line 25 performs memory management for the case where we have created a new array that is no longer needed.

This code has a significant amount of error handling. Note that `SWIG_fail` is a macro for `goto fail`, referring to the label at line 28. If the user provides the wrong number of arguments, this will be caught at line 10. If construction of the NumPy array fails or produces an array with the wrong number of dimensions, these errors are caught at line 17. And finally, if an error is detected, memory is still managed correctly at line 30.

Note that if the C function signature was in a different order:

    double rms(int n, double* seq);

then [SWIG](https://www.swig.org) would not match the typemap signature given above with the argument list for `rms`. Fortunately, `numpy.i` has a set of typemaps with the data pointer given last:

    %apply (int DIM1, double* IN_ARRAY1) {(int n, double* seq)};

This simply has the effect of switching the definitions of `arg1` and `arg2` in lines 3 and 4 of the generated code above, and their assignments in lines 19 and 20.

## Using numpy.i

The `numpy.i` file is currently located in the `tools/swig` sub-directory under the `numpy` installation directory.
Typically, you will want to copy it to the directory where you are developing your wrappers. A simple module that only uses a single [SWIG](https://www.swig.org) interface file should include the following:

    %{
    #define SWIG_FILE_WITH_INIT
    %}
    %include "numpy.i"
    %init %{
    import_array();
    %}

Within a compiled Python module, `import_array()` should only get called once. This could be in a C/C++ file that you have written and is linked to the module. If this is the case, then none of your interface files should `#define SWIG_FILE_WITH_INIT` or call `import_array()`. Or, this initialization call could be in a wrapper file generated by [SWIG](https://www.swig.org) from an interface file that has the `%init` block as above. If this is the case, and you have more than one [SWIG](https://www.swig.org) interface file, then only one interface file should `#define SWIG_FILE_WITH_INIT` and call `import_array()`.

## Available typemaps

The typemap directives provided by `numpy.i` for arrays of different data types, say `double` and `int`, and dimensions of different types, say `int` or `long`, are identical to one another except for the C and NumPy type specifications. The typemaps are therefore implemented (typically behind the scenes) via a macro:

    %numpy_typemaps(DATA_TYPE, DATA_TYPECODE, DIM_TYPE)

that can be invoked for appropriate `(DATA_TYPE, DATA_TYPECODE, DIM_TYPE)` triplets. For example:

    %numpy_typemaps(double, NPY_DOUBLE, int)
    %numpy_typemaps(int,    NPY_INT,    int)

The `numpy.i` interface file uses the `%numpy_typemaps` macro to implement typemaps for the following C data types and `int` dimension types:

* `signed char`
* `unsigned char`
* `short`
* `unsigned short`
* `int`
* `unsigned int`
* `long`
* `unsigned long`
* `long long`
* `unsigned long long`
* `float`
* `double`

In the following descriptions, we reference a generic `DATA_TYPE`, which could be any of the C data types listed above, and `DIM_TYPE`, which should be one of the many types of integers.
The typemap signatures are largely differentiated on the name given to the buffer pointer. Names with `FARRAY` are for Fortran-ordered arrays, and names with `ARRAY` are for C-ordered (or 1D) arrays.

### Input Arrays

Input arrays are defined as arrays of data that are passed into a routine but are not altered in-place or returned to the user. The Python input array is therefore allowed to be almost any Python sequence (such as a list) that can be converted to the requested type of array. The input array signatures are

1D:

* `( DATA_TYPE IN_ARRAY1[ANY] )`
* `( DATA_TYPE* IN_ARRAY1, int DIM1 )`
* `( int DIM1, DATA_TYPE* IN_ARRAY1 )`

2D:

* `( DATA_TYPE IN_ARRAY2[ANY][ANY] )`
* `( DATA_TYPE* IN_ARRAY2, int DIM1, int DIM2 )`
* `( int DIM1, int DIM2, DATA_TYPE* IN_ARRAY2 )`
* `( DATA_TYPE* IN_FARRAY2, int DIM1, int DIM2 )`
* `( int DIM1, int DIM2, DATA_TYPE* IN_FARRAY2 )`

3D:

* `( DATA_TYPE IN_ARRAY3[ANY][ANY][ANY] )`
* `( DATA_TYPE* IN_ARRAY3, int DIM1, int DIM2, int DIM3 )`
* `( int DIM1, int DIM2, int DIM3, DATA_TYPE* IN_ARRAY3 )`
* `( DATA_TYPE* IN_FARRAY3, int DIM1, int DIM2, int DIM3 )`
* `( int DIM1, int DIM2, int DIM3, DATA_TYPE* IN_FARRAY3 )`

4D:

* `(DATA_TYPE IN_ARRAY4[ANY][ANY][ANY][ANY])`
* `(DATA_TYPE* IN_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4)`
* `(DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* IN_ARRAY4)`
* `(DATA_TYPE* IN_FARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4)`
* `(DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* IN_FARRAY4)`

The first signature listed, `( DATA_TYPE IN_ARRAY1[ANY] )`, is for one-dimensional arrays with hard-coded dimensions. Likewise, `( DATA_TYPE IN_ARRAY2[ANY][ANY] )` is for two-dimensional arrays with hard-coded dimensions, and similarly for three-dimensional.

### In-Place Arrays

In-place arrays are defined as arrays that are modified in-place.
The input values may or may not be used, but the values at the time the function returns are significant. The provided Python argument must therefore be a NumPy array of the required type. The in-place signatures are

1D:

* `( DATA_TYPE INPLACE_ARRAY1[ANY] )`
* `( DATA_TYPE* INPLACE_ARRAY1, int DIM1 )`
* `( int DIM1, DATA_TYPE* INPLACE_ARRAY1 )`

2D:

* `( DATA_TYPE INPLACE_ARRAY2[ANY][ANY] )`
* `( DATA_TYPE* INPLACE_ARRAY2, int DIM1, int DIM2 )`
* `( int DIM1, int DIM2, DATA_TYPE* INPLACE_ARRAY2 )`
* `( DATA_TYPE* INPLACE_FARRAY2, int DIM1, int DIM2 )`
* `( int DIM1, int DIM2, DATA_TYPE* INPLACE_FARRAY2 )`

3D:

* `( DATA_TYPE INPLACE_ARRAY3[ANY][ANY][ANY] )`
* `( DATA_TYPE* INPLACE_ARRAY3, int DIM1, int DIM2, int DIM3 )`
* `( int DIM1, int DIM2, int DIM3, DATA_TYPE* INPLACE_ARRAY3 )`
* `( DATA_TYPE* INPLACE_FARRAY3, int DIM1, int DIM2, int DIM3 )`
* `( int DIM1, int DIM2, int DIM3, DATA_TYPE* INPLACE_FARRAY3 )`

4D:

* `(DATA_TYPE INPLACE_ARRAY4[ANY][ANY][ANY][ANY])`
* `(DATA_TYPE* INPLACE_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4)`
* `(DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* INPLACE_ARRAY4)`
* `(DATA_TYPE* INPLACE_FARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4)`
* `(DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* INPLACE_FARRAY4)`

These typemaps now check to make sure that the `INPLACE_ARRAY` arguments use native byte ordering. If not, an exception is raised.

There is also a "flat" in-place array for situations in which you would like to modify or process each element, regardless of the number of dimensions. One example is a "quantization" function that quantizes each element of an array in-place, be it 1D, 2D or whatever. This form checks for contiguity but allows either C or Fortran ordering.

ND:

* `(DATA_TYPE* INPLACE_ARRAY_FLAT, DIM_TYPE DIM_FLAT)`

### Argout Arrays

Argout arrays are arrays that appear in the input arguments in C, but are in fact output arrays.
This pattern occurs often when there is more than one output variable and the single return argument is therefore not sufficient. In Python, the conventional way to return multiple arguments is to pack them into a sequence (tuple, list, etc.) and return the sequence. This is what the argout typemaps do. If a wrapped function that uses these argout typemaps has more than one return argument, they are packed into a tuple or list, depending on the version of Python. The Python user does not pass these arrays in; they simply get returned. For the case where a dimension is specified, the Python user must provide that dimension as an argument. The argout signatures are

1D:

* `( DATA_TYPE ARGOUT_ARRAY1[ANY] )`
* `( DATA_TYPE* ARGOUT_ARRAY1, int DIM1 )`
* `( int DIM1, DATA_TYPE* ARGOUT_ARRAY1 )`

2D:

* `( DATA_TYPE ARGOUT_ARRAY2[ANY][ANY] )`

3D:

* `( DATA_TYPE ARGOUT_ARRAY3[ANY][ANY][ANY] )`

4D:

* `( DATA_TYPE ARGOUT_ARRAY4[ANY][ANY][ANY][ANY] )`

These are typically used in situations where in C/C++ you would allocate an array (or arrays) on the heap and call the function to fill in the array values. In Python, the arrays are allocated for you and returned as new array objects.

Note that we support `DATA_TYPE*` argout typemaps in 1D, but not 2D or 3D. This is because of a quirk with the [SWIG](https://www.swig.org) typemap syntax and cannot be avoided. Note that for these types of 1D typemaps, the Python function will take a single argument representing `DIM1`.

### Argout View Arrays

Argoutview arrays are for when your C code provides you with a view of its internal data and does not require any memory to be allocated by the user. This can be dangerous. There is almost no way to guarantee that the internal data from the C code will remain in existence for the entire lifetime of the NumPy array that encapsulates it.
If the user destroys the object that provides the view of the data before destroying the NumPy array, then using that array may result in bad memory references or segmentation faults. Nevertheless, there are situations, working with large data sets, where you simply have no other choice.

The C code to be wrapped for argoutview arrays is characterized by pointers: pointers to the dimensions and double pointers to the data, so that these values can be passed back to the user. The argoutview typemap signatures are therefore

1D:

* `( DATA_TYPE** ARGOUTVIEW_ARRAY1, DIM_TYPE* DIM1 )`
* `( DIM_TYPE* DIM1, DATA_TYPE** ARGOUTVIEW_ARRAY1 )`

2D:

* `( DATA_TYPE** ARGOUTVIEW_ARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2 )`
* `( DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_ARRAY2 )`
* `( DATA_TYPE** ARGOUTVIEW_FARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2 )`
* `( DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_FARRAY2 )`

3D:

* `( DATA_TYPE** ARGOUTVIEW_ARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3)`
* `( DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_ARRAY3)`
* `( DATA_TYPE** ARGOUTVIEW_FARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3)`
* `( DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_FARRAY3)`

4D:

* `(DATA_TYPE** ARGOUTVIEW_ARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4)`
* `(DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEW_ARRAY4)`
* `(DATA_TYPE** ARGOUTVIEW_FARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4)`
* `(DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEW_FARRAY4)`

Note that arrays with hard-coded dimensions are not supported. These cannot follow the double pointer signatures of these typemaps.

### Memory Managed Argout View Arrays

A recent addition to `numpy.i` is a set of typemaps that permit argout arrays with views into memory that is managed.
1D:

* `(DATA_TYPE** ARGOUTVIEWM_ARRAY1, DIM_TYPE* DIM1)`
* `(DIM_TYPE* DIM1, DATA_TYPE** ARGOUTVIEWM_ARRAY1)`

2D:

* `(DATA_TYPE** ARGOUTVIEWM_ARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2)`
* `(DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEWM_ARRAY2)`
* `(DATA_TYPE** ARGOUTVIEWM_FARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2)`
* `(DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEWM_FARRAY2)`

3D:

* `(DATA_TYPE** ARGOUTVIEWM_ARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3)`
* `(DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEWM_ARRAY3)`
* `(DATA_TYPE** ARGOUTVIEWM_FARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3)`
* `(DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEWM_FARRAY3)`

4D:

* `(DATA_TYPE** ARGOUTVIEWM_ARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4)`
* `(DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEWM_ARRAY4)`
* `(DATA_TYPE** ARGOUTVIEWM_FARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4)`
* `(DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEWM_FARRAY4)`

### Output Arrays

The `numpy.i` interface file does not support typemaps for output arrays, for several reasons. First, C/C++ return arguments are limited to a single value. This prevents obtaining dimension information in a general way. Second, arrays with hard-coded lengths are not permitted as return arguments. In other words:

    double[3] newVector(double x, double y, double z);

is not legal C/C++ syntax. Therefore, we cannot provide typemaps of the form:

    %typemap(out) (TYPE[ANY]);

If you run into a situation where a function or method is returning a pointer to an array, your best bet is to write your own version of the function to be wrapped, either with `%extend` for the case of class methods or `%ignore` and `%rename` for the case of functions.
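In practice, the workaround for a pointer-returning function is to wrap a thin variant that fills a caller-visible buffer instead, which the argout typemaps can then handle. A Python sketch of that idea (`new_vector` and `new_vector_fill` are hypothetical names, not part of `numpy.i`):

```python
# Stand-in for the unwrappable C function: double* newVector(double, double, double)
def new_vector(x, y, z):
    return [x, y, z]  # the list stands in for the returned pointer

# The wrappable variant: void newVectorFill(double* out, double x, double y, double z)
# An argout typemap can allocate `out` and return it to the Python caller.
def new_vector_fill(out, x, y, z):
    src = new_vector(x, y, z)
    for i in range(3):
        out[i] = src[i]

buf = [0.0] * 3
new_vector_fill(buf, 1.0, 2.0, 3.0)
print(buf)  # [1.0, 2.0, 3.0]
```

With `%extend` (for class methods) or `%ignore`/`%rename` (for free functions), the filling variant replaces the original in the generated interface.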
### Other Common Types: bool

Note that C++ type `bool` is not supported in the list in the Available Typemaps section. NumPy bools are a single byte, while the C++ `bool` is four bytes (at least on my system). Therefore:

    %numpy_typemaps(bool, NPY_BOOL, int)

will result in typemaps that will produce code that references improper data lengths. You can implement the following macro expansion:

    %numpy_typemaps(bool, NPY_UINT, int)

to fix the data length problem, and Input Arrays will work fine, but In-Place Arrays might fail type-checking.

### Other Common Types: complex

Typemap conversions for complex floating-point types are also not supported automatically. This is because Python and NumPy are written in C, which does not have native complex types. Both Python and NumPy implement their own (essentially equivalent) `struct` definitions for complex variables:

    /* Python */
    typedef struct {double real; double imag;} Py_complex;

    /* NumPy */
    typedef struct {float real, imag;} npy_cfloat;
    typedef struct {double real, imag;} npy_cdouble;

We could have implemented:

    %numpy_typemaps(Py_complex , NPY_CDOUBLE, int)
    %numpy_typemaps(npy_cfloat , NPY_CFLOAT , int)
    %numpy_typemaps(npy_cdouble, NPY_CDOUBLE, int)

which would have provided automatic type conversions for arrays of type `Py_complex`, `npy_cfloat` and `npy_cdouble`. However, it seemed unlikely that there would be any independent (non-Python, non-NumPy) application code that people would be using [SWIG](https://www.swig.org) to generate a Python interface to, that also used these definitions for complex types. More likely, these application codes will define their own complex types, or in the case of C++, use `std::complex`. Assuming these data structures are compatible with Python and NumPy complex types, `%numpy_typemap` expansions as above (with the user's complex type substituted for the first argument) should work.
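The layout compatibility assumed above can be checked directly: both `Py_complex` and `npy_cdouble` are just two adjacent C doubles, so a complex value round-trips losslessly through a packed (real, imag) pair. A small demonstration using Python's `struct` module:

```python
import struct

# Pack a complex value using the byte layout of the C structs shown
# above (two adjacent little-or-native-endian doubles), then recover it.
z = 3.0 - 4.0j
packed = struct.pack("dd", z.real, z.imag)   # {double real; double imag;}
real, imag = struct.unpack("dd", packed)
print(complex(real, imag) == z)  # True
```

Any user-defined struct with the same two-double layout is therefore interchangeable with these types at the byte level.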
## NumPy array scalars and SWIG

[SWIG](https://www.swig.org) has sophisticated type checking for numerical types. For example, if your C/C++ routine expects an integer as input, the code generated by [SWIG](https://www.swig.org) will check for both Python integers and Python long integers, and raise an overflow error if the provided Python integer is too big to cast down to a C integer. With the introduction of NumPy array scalars into your Python code, you might conceivably extract an integer from a NumPy array and attempt to pass this to a [SWIG](https://www.swig.org)-wrapped C/C++ function that expects an `int`, but the [SWIG](https://www.swig.org) type checking will not recognize the NumPy array scalar as an integer. (Often, this does in fact work – it depends on whether NumPy recognizes the integer type you are using as inheriting from the Python integer type on the platform you are using. Sometimes, this means that code that works on a 32-bit machine will fail on a 64-bit machine.)

If you get a Python error that looks like the following:

    TypeError: in method 'MyClass_MyMethod', argument 2 of type 'int'

and the argument you are passing is an integer extracted from a NumPy array, then you have stumbled upon this problem. The solution is to modify the [SWIG](https://www.swig.org) type conversion system to accept NumPy array scalars in addition to the standard integer types. Fortunately, this capability has been provided for you. Simply copy the file:

    pyfragments.swg

to the working build directory for your project, and this problem will be fixed. It is suggested that you do this anyway, as it only increases the capabilities of your Python interface.

### Why is There a Second File?

The [SWIG](https://www.swig.org) type checking and conversion system is a complicated combination of C macros, [SWIG](https://www.swig.org) macros, [SWIG](https://www.swig.org) typemaps and [SWIG](https://www.swig.org) fragments.
Fragments are a way to conditionally insert code into your wrapper file if it is needed, and not insert it if not needed. If multiple typemaps require the same fragment, the fragment only gets inserted into your wrapper code once.

There is a fragment for converting a Python integer to a C `long`. There is a different fragment that converts a Python integer to a C `int`, that calls the routine defined in the `long` fragment. We can make the changes we want here by changing the definition for the `long` fragment. [SWIG](https://www.swig.org) determines the active definition for a fragment using a “first come, first served” system. That is, we need to define the fragment for `long` conversions prior to [SWIG](https://www.swig.org) doing it internally. [SWIG](https://www.swig.org) allows us to do this by putting our fragment definitions in the file `pyfragments.swg`. If we were to put the new fragment definitions in `numpy.i`, they would be ignored.

## Helper functions

The `numpy.i` file contains several macros and routines that it uses internally to build its typemaps. However, these functions may be useful elsewhere in your interface file. These macros and routines are implemented as fragments, which are described briefly in the previous section. If you try to use one or more of the following macros or functions, but your compiler complains that it does not recognize the symbol, then you need to force these fragments to appear in your code using:

    %fragment("NumPy_Fragments");

in your [SWIG](https://www.swig.org) interface file.

### Macros

**is_array(a)**
Evaluates as true if `a` is non-`NULL` and can be cast to a `PyArrayObject*`.

**array_type(a)**
Evaluates to the integer data type code of `a`, assuming `a` can be cast to a `PyArrayObject*`.

**array_numdims(a)**
Evaluates to the integer number of dimensions of `a`, assuming `a` can be cast to a `PyArrayObject*`.
**array_dimensions(a)**
Evaluates to an array of type `npy_intp` and length `array_numdims(a)`, giving the lengths of all of the dimensions of `a`, assuming `a` can be cast to a `PyArrayObject*`.

**array_size(a,i)**
Evaluates to the `i`-th dimension size of `a`, assuming `a` can be cast to a `PyArrayObject*`.

**array_strides(a)**
Evaluates to an array of type `npy_intp` and length `array_numdims(a)`, giving the strides of all of the dimensions of `a`, assuming `a` can be cast to a `PyArrayObject*`. A stride is the distance in bytes between an element and its immediate neighbor along the same axis.

**array_stride(a,i)**
Evaluates to the `i`-th stride of `a`, assuming `a` can be cast to a `PyArrayObject*`.

**array_data(a)**
Evaluates to a pointer of type `void*` that points to the data buffer of `a`, assuming `a` can be cast to a `PyArrayObject*`.

**array_descr(a)**
Returns a borrowed reference to the dtype property (`PyArray_Descr*`) of `a`, assuming `a` can be cast to a `PyArrayObject*`.

**array_flags(a)**
Returns an integer representing the flags of `a`, assuming `a` can be cast to a `PyArrayObject*`.

**array_enableflags(a,f)**
Sets the flag represented by `f` of `a`, assuming `a` can be cast to a `PyArrayObject*`.

**array_is_contiguous(a)**
Evaluates as true if `a` is a contiguous array. Equivalent to `(PyArray_ISCONTIGUOUS(a))`.

**array_is_native(a)**
Evaluates as true if the data buffer of `a` uses native byte order. Equivalent to `(PyArray_ISNOTSWAPPED(a))`.

**array_is_fortran(a)**
Evaluates as true if `a` is FORTRAN ordered.

### Routines

**pytype_string()**

Return type: `const char*`

Arguments:

* `PyObject* py_obj`, a general Python object.

Return a string describing the type of `py_obj`.

**typecode_string()**

Return type: `const char*`

Arguments:

* `int typecode`, a NumPy integer typecode.

Return a string describing the type corresponding to the NumPy `typecode`.
**type_match()**

Return type: `int`

Arguments:

* `int actual_type`, the NumPy typecode of a NumPy array.
* `int desired_type`, the desired NumPy typecode.

Make sure that `actual_type` is compatible with `desired_type`. For example, this allows character and byte types, or int and long types, to match. This is now equivalent to `PyArray_EquivTypenums()`.

**obj_to_array_no_conversion()**

Return type: `PyArrayObject*`

Arguments:

* `PyObject* input`, a general Python object.
* `int typecode`, the desired NumPy typecode.

Cast `input` to a `PyArrayObject*` if legal, and ensure that it is of type `typecode`. If `input` cannot be cast, or the `typecode` is wrong, set a Python error and return `NULL`.

**obj_to_array_allow_conversion()**

Return type: `PyArrayObject*`

Arguments:

* `PyObject* input`, a general Python object.
* `int typecode`, the desired NumPy typecode of the resulting array.
* `int* is_new_object`, returns a value of 0 if no conversion performed, else 1.

Convert `input` to a NumPy array with the given `typecode`. On success, return a valid `PyArrayObject*` with the correct type. On failure, the Python error string will be set and the routine returns `NULL`.

**make_contiguous()**

Return type: `PyArrayObject*`

Arguments:

* `PyArrayObject* ary`, a NumPy array.
* `int* is_new_object`, returns a value of 0 if no conversion performed, else 1.
* `int min_dims`, minimum allowable dimensions.
* `int max_dims`, maximum allowable dimensions.

Check to see if `ary` is contiguous. If so, return the input pointer and flag it as not a new object. If it is not contiguous, create a new `PyArrayObject*` using the original data, flag it as a new object and return the pointer.

**make_fortran()**

Return type: `PyArrayObject*`

Arguments:

* `PyArrayObject* ary`, a NumPy array.
* `int* is_new_object`, returns a value of 0 if no conversion performed, else 1.

Check to see if `ary` is Fortran contiguous. If so, return the input pointer and flag it as not a new object.
If it is not Fortran contiguous, create a new `PyArrayObject*` using the original data, flag it as a new object and return the pointer. **obj_to_array_contiguous_allow_conversion()** Return type: `PyArrayObject*` Arguments: * `PyObject* input`, a general Python object. * `int typecode`, the desired NumPy typecode of the resulting array. * `int* is_new_object`, returns a value of 0 if no conversion performed, else 1. Convert `input` to a contiguous `PyArrayObject*` of the specified type. If the input object is not a contiguous `PyArrayObject*`, a new one will be created and the new object flag will be set. **obj_to_array_fortran_allow_conversion()** Return type: `PyArrayObject*` Arguments: * `PyObject* input`, a general Python object. * `int typecode`, the desired NumPy typecode of the resulting array. * `int* is_new_object`, returns a value of 0 if no conversion performed, else 1. Convert `input` to a Fortran contiguous `PyArrayObject*` of the specified type. If the input object is not a Fortran contiguous `PyArrayObject*`, a new one will be created and the new object flag will be set. **require_contiguous()** Return type: `int` Arguments: * `PyArrayObject* ary`, a NumPy array. Test whether `ary` is contiguous. If so, return 1. Otherwise, set a Python error and return 0. **require_native()** Return type: `int` Arguments: * `PyArrayObject* ary`, a NumPy array. Require that `ary` is not byte-swapped. If the array is not byte-swapped, return 1. Otherwise, set a Python error and return 0. **require_dimensions()** Return type: `int` Arguments: * `PyArrayObject* ary`, a NumPy array. * `int exact_dimensions`, the desired number of dimensions. Require `ary` to have a specified number of dimensions. If the array has the specified number of dimensions, return 1. Otherwise, set a Python error and return 0. **require_dimensions_n()** Return type: `int` Arguments: * `PyArrayObject* ary`, a NumPy array. 
* `int* exact_dimensions`, an array of integers representing acceptable numbers of dimensions. * `int n`, the length of `exact_dimensions`. Require `ary` to have one of a list of specified numbers of dimensions. If the array has one of the specified numbers of dimensions, return 1. Otherwise, set the Python error string and return 0. **require_size()** Return type: `int` Arguments: * `PyArrayObject* ary`, a NumPy array. * `npy_intp* size`, an array representing the desired lengths of each dimension. * `int n`, the length of `size`. Require `ary` to have a specified shape. If the array has the specified shape, return 1. Otherwise, set the Python error string and return 0. **require_fortran()** Return type: `int` Arguments: * `PyArrayObject* ary`, a NumPy array. Require the given `PyArrayObject` to be Fortran ordered. If the `PyArrayObject` is already Fortran ordered, do nothing. Else, set the Fortran ordering flag and recompute the strides. ## Beyond the provided typemaps There are many C or C++ array/NumPy array situations not covered by a simple `%include "numpy.i"` and subsequent `%apply` directives. ### A Common Example Consider a reasonable prototype for a dot product function: double dot(int len, double* vec1, double* vec2); The Python interface that we want is: def dot(vec1, vec2): """ dot(PyObject,PyObject) -> double """ The problem here is that there is one dimension argument and two array arguments, and our typemaps are set up for dimensions that apply to a single array (in fact, [SWIG](https://www.swig.org) does not provide a mechanism for associating `len` with `vec2` that takes two Python input arguments). 
The recommended solution is the following: %apply (int DIM1, double* IN_ARRAY1) {(int len1, double* vec1), (int len2, double* vec2)} %rename (dot) my_dot; %exception my_dot { $action if (PyErr_Occurred()) SWIG_fail; } %inline %{ double my_dot(int len1, double* vec1, int len2, double* vec2) { if (len1 != len2) { PyErr_Format(PyExc_ValueError, "Arrays of lengths (%d,%d) given", len1, len2); return 0.0; } return dot(len1, vec1, vec2); } %} If the header file that contains the prototype for `double dot()` also contains other prototypes that you want to wrap, so that you need to `%include` this header file, then you will also need a `%ignore dot;` directive, placed after the `%rename` and before the `%include` directives. Or, if the function in question is a class method, you will want to use `%extend` rather than `%inline` in addition to `%ignore`. **A note on error handling:** Note that `my_dot` returns a `double` but that it can also raise a Python error. The resulting wrapper function will return a Python float representation of 0.0 when the vector lengths do not match. Since this is not `NULL`, the Python interpreter will not know to check for an error. For this reason, we add the `%exception` directive above for `my_dot` to get the behavior we want (note that `$action` is a macro that gets expanded to a valid call to `my_dot`). In general, you will probably want to write a [SWIG](https://www.swig.org) macro to perform this task. ### Other Situations There are other wrapping situations in which `numpy.i` may be helpful. * In some situations, you may be able to use the `%numpy_typemaps` macro to implement typemaps for your own types. See the Other Common Types: bool or Other Common Types: complex sections for examples. Another situation is if your dimensions are of a type other than `int` (say `long` for example): %numpy_typemaps(double, NPY_DOUBLE, long) * You can use the code in `numpy.i` to write your own typemaps. 
For example, if you had a five-dimensional array as a function argument, you could cut-and-paste the appropriate four-dimensional typemaps into your interface file. The modifications for the fifth dimension would be trivial. * Sometimes, the best approach is to use the `%extend` directive to define new methods for your classes (or overload existing ones) that take a `PyObject*` (that either is or can be converted to a `PyArrayObject*`) instead of a pointer to a buffer. In this case, the helper routines in `numpy.i` can be very useful. * Writing typemaps can be a bit nonintuitive. If you have specific questions about writing [SWIG](https://www.swig.org) typemaps for NumPy, the developers of `numpy.i` do monitor the Numpy-discussion and Swig-user mail lists. ### A Final Note When you use the `%apply` directive, as is usually necessary to use `numpy.i`, it will remain in effect until you tell [SWIG](https://www.swig.org) that it shouldn’t be. If the arguments to the functions or methods that you are wrapping have common names, such as `length` or `vector`, these typemaps may get applied in situations you do not expect or want. Therefore, it is always a good idea to add a `%clear` directive after you are done with a specific typemap: %apply (double* IN_ARRAY1, int DIM1) {(double* vector, int length)} %include "my_header.h" %clear (double* vector, int length); In general, you should target these typemap signatures specifically where you want them, and then clear them after you are done. 
## Summary Out of the box, `numpy.i` provides typemaps that support conversion between NumPy arrays and C arrays: * That can be one of 12 different scalar types: `signed char`, `unsigned char`, `short`, `unsigned short`, `int`, `unsigned int`, `long`, `unsigned long`, `long long`, `unsigned long long`, `float` and `double`. * That support 74 different argument signatures for each data type, including: * One-dimensional, two-dimensional, three-dimensional and four-dimensional arrays. * Input-only, in-place, argout, argoutview, and memory managed argoutview behavior. * Hard-coded dimensions, data-buffer-then-dimensions specification, and dimensions-then-data-buffer specification. * Both C-ordering (“last dimension fastest”) or Fortran-ordering (“first dimension fastest”) support for 2D, 3D and 4D arrays. The `numpy.i` interface file also provides additional tools for wrapper developers, including: * A [SWIG](https://www.swig.org) macro (`%numpy_typemaps`) with three arguments for implementing the 74 argument signatures for the user’s choice of (1) C data type, (2) NumPy data type (assuming they match), and (3) dimension type. * Fourteen C macros and fifteen C functions that can be used to write specialized typemaps, extensions, or inlined functions that handle cases not covered by the provided typemaps. Note that the macros and functions are coded specifically to work with the NumPy C/API regardless of NumPy version number, both before and after the deprecation of some aspects of the API after version 1.6. # Testing the numpy.i typemaps ## Introduction Writing tests for the `numpy.i` [SWIG](https://www.swig.org/) interface file is a combinatorial headache. At present, 12 different data types are supported, each with 74 different argument signatures, for a total of 888 typemaps supported “out of the box”. Each of these typemaps, in turn, might require several unit tests in order to verify expected behavior for both proper and improper inputs. 
Currently, this results in more than 1,000 individual unit tests executed when `make test` is run in the `numpy/tools/swig` subdirectory. To facilitate this many similar unit tests, some high-level programming techniques are employed, including C and [SWIG](https://www.swig.org/) macros, as well as Python inheritance. The purpose of this document is to describe the testing infrastructure employed to verify that the `numpy.i` typemaps are working as expected. ## Testing organization There are three independent testing frameworks supported, for one-, two-, and three-dimensional arrays respectively. For one-dimensional arrays, there are two C++ files, a header and a source, named: Vector.h Vector.cxx that contain prototypes and code for a variety of functions that have one-dimensional arrays as function arguments. The file: Vector.i is a [SWIG](https://www.swig.org/) interface file that defines a Python module `Vector` that wraps the functions in `Vector.h` while utilizing the typemaps in `numpy.i` to correctly handle the C arrays. The `Makefile` calls `swig` to generate `Vector.py` and `Vector_wrap.cxx`, and also executes the `setup.py` script that compiles `Vector_wrap.cxx` and links together the extension module `_Vector.so` or `_Vector.dylib`, depending on the platform. This extension module and the proxy file `Vector.py` are both placed in a subdirectory under the `build` directory. The actual testing takes place with a Python script named: testVector.py that uses the standard Python library module `unittest`, which performs several tests of each function defined in `Vector.h` for each data type supported. Two-dimensional arrays are tested in exactly the same manner. The above description applies, but with `Matrix` substituted for `Vector`. For three-dimensional tests, substitute `Tensor` for `Vector`. For four-dimensional tests, substitute `SuperTensor` for `Vector`. For flat in-place array tests, substitute `Flat` for `Vector`. 
For the descriptions that follow, we will reference the `Vector` tests, but the same information applies to `Matrix`, `Tensor` and `SuperTensor` tests. The command `make test` will ensure that all of the test software is built and then run all of the test scripts. ## Testing header files `Vector.h` is a C++ header file that defines a C macro called `TEST_FUNC_PROTOS` that takes two arguments: `TYPE`, which is a data type name such as `unsigned int`; and `SNAME`, which is a short name for the same data type with no spaces, e.g. `uint`. This macro defines several function prototypes that have the prefix `SNAME` and have at least one argument that is an array of type `TYPE`. Those functions that have return arguments return a `TYPE` value. `TEST_FUNC_PROTOS` is then implemented for all of the data types supported by `numpy.i`: * `signed char` * `unsigned char` * `short` * `unsigned short` * `int` * `unsigned int` * `long` * `unsigned long` * `long long` * `unsigned long long` * `float` * `double` ## Testing source files `Vector.cxx` is a C++ source file that implements compilable code for each of the function prototypes specified in `Vector.h`. It defines a C macro `TEST_FUNCS` that has the same arguments and works in the same way as `TEST_FUNC_PROTOS` does in `Vector.h`. `TEST_FUNCS` is implemented for each of the 12 data types as above. ## Testing SWIG interface files `Vector.i` is a [SWIG](https://www.swig.org/) interface file that defines the Python module `Vector`. It follows the conventions for using `numpy.i` as described in this chapter. It defines a [SWIG](https://www.swig.org/) macro `%apply_numpy_typemaps` that has a single argument `TYPE`. It uses the [SWIG](https://www.swig.org/) directive `%apply` to apply the provided typemaps to the argument signatures found in `Vector.h`. This macro is then implemented for all of the data types supported by `numpy.i`. 
It then does a `%include "Vector.h"` to wrap all of the function prototypes in `Vector.h` using the typemaps in `numpy.i`. ## Testing Python scripts After `make` is used to build the testing extension modules, `testVector.py` can be run to execute the tests. As with other scripts that use `unittest` to facilitate unit testing, `testVector.py` defines a class that inherits from `unittest.TestCase`: class VectorTestCase(unittest.TestCase): However, this class is not run directly. Rather, it serves as a base class to several other python classes, each one specific to a particular data type. The `VectorTestCase` class stores two strings for typing information: **self.typeStr** A string that matches one of the `SNAME` prefixes used in `Vector.h` and `Vector.cxx`. For example, `"double"`. **self.typeCode** A short (typically single-character) string that represents a data type in numpy and corresponds to `self.typeStr`. For example, if `self.typeStr` is `"double"`, then `self.typeCode` should be `"d"`. Each test defined by the `VectorTestCase` class extracts the python function it is trying to test by accessing the `Vector` module’s dictionary: length = Vector.__dict__[self.typeStr + "Length"] In the case of double precision tests, this will return the python function `Vector.doubleLength`. We then define a new test case class for each supported data type with a short definition such as: class doubleTestCase(VectorTestCase): def __init__(self, methodName="runTest"): VectorTestCase.__init__(self, methodName) self.typeStr = "double" self.typeCode = "d" Each of these 12 classes is collected into a `unittest.TestSuite`, which is then executed. Errors and failures are summed together and returned as the exit argument. Any non-zero result indicates that at least one test did not pass. 
# Testing guidelines ## Introduction Until the 1.15 release, NumPy used the [nose](https://nose.readthedocs.io/en/latest/) testing framework; it now uses the [pytest](https://pytest.readthedocs.io) framework. The older framework is still maintained in order to support downstream projects that use it, but all tests for NumPy should use pytest. Our goal is that every module and package in NumPy should have a thorough set of unit tests. These tests should exercise the full functionality of a given routine as well as its robustness to erroneous or unexpected input arguments. Well-designed tests with good coverage make an enormous difference to the ease of refactoring. Whenever a new bug is found in a routine, you should write a new test for that specific case and add it to the test suite to prevent that bug from creeping back in unnoticed. Note SciPy uses the testing framework from [`numpy.testing`](routines.testing#module-numpy.testing "numpy.testing"), so all of the NumPy examples shown below are also applicable to SciPy. ## Testing NumPy NumPy can be tested in a number of ways; choose whichever way you feel comfortable with. ### Running tests from inside Python You can test an installed NumPy with `numpy.test`. For example, to run NumPy’s full test suite, use the following: >>> import numpy >>> numpy.test(label='full') The test method may take two or more arguments; the first, `label`, is a string specifying what should be tested, and the second, `verbose`, is an integer giving the level of output verbosity. See the docstring of `numpy.test` for details. The default value for `label` is ‘fast’ - which will run the standard tests. The string ‘full’ will run the full battery of tests, including those identified as being slow to run. If `verbose` is 1 or less, the tests will just show information messages about the tests that are run; but if it is greater than 1, then the tests will also provide warnings on missing tests. 
So if you want to run every test and get messages about which modules don’t have tests: >>> numpy.test(label='full', verbose=2) # or numpy.test('full', 2) Finally, if you are only interested in testing a subset of NumPy, for example, the `_core` module, use the following: >>> numpy._core.test() ### Running tests from the command line If you want to build NumPy in order to work on NumPy itself, use the `spin` utility. To run NumPy’s full test suite: $ spin test -m full Testing a subset of NumPy: $ spin test -t numpy/_core/tests For detailed info on testing, see [Testing builds](../dev/development_environment#testing-builds) ### Running doctests NumPy documentation contains code examples, “doctests”. To check that the examples are correct, install the `scipy-doctest` package: $ pip install scipy-doctest and run one of: $ spin check-docs -v $ spin check-docs numpy/linalg $ spin check-docs -- -k 'det and not slogdet' Note that the doctests are not run when you use `spin test`. ### Other methods of running tests Run tests using your favourite IDE, such as [vscode](https://code.visualstudio.com/docs/python/testing#_enable-a-test-framework) or [pycharm](https://www.jetbrains.com/help/pycharm/testing-your-first-python-application.html). ## Writing your own tests If you are writing code that you’d like to become part of NumPy, please write the tests as you develop your code. Every Python module, extension module, or subpackage in the NumPy package directory should have a corresponding `test_<module>.py` file. Pytest examines these files for test methods (named `test*`) and test classes (named `Test*`). Suppose you have a NumPy module `numpy/xxx/yyy.py` containing a function `zzz()`. To test this function you would create a test module called `test_yyy.py`. 
If you only need to test one aspect of `zzz`, you can simply add a test function: def test_zzz(): assert zzz() == 'Hello from zzz' More often, we need to group a number of tests together, so we create a test class: import pytest # import xxx symbols from numpy.xxx.yyy import zzz class TestZzz: def test_simple(self): assert zzz() == 'Hello from zzz' def test_invalid_parameter(self): with pytest.raises(ValueError, match='.*some matching regex.*'): ... Within these test methods, the `assert` statement or a specialized assertion function is used to test whether a certain assumption is valid. If the assertion fails, the test fails. Common assertion functions include: * [`numpy.testing.assert_equal`](generated/numpy.testing.assert_equal#numpy.testing.assert_equal "numpy.testing.assert_equal") for testing exact elementwise equality between a result array and a reference, * [`numpy.testing.assert_allclose`](generated/numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose") for testing near elementwise equality between a result array and a reference (i.e. with specified relative and absolute tolerances), and * [`numpy.testing.assert_array_less`](generated/numpy.testing.assert_array_less#numpy.testing.assert_array_less "numpy.testing.assert_array_less") for testing (strict) elementwise ordering between a result array and a reference. By default, these assertion functions only compare the numerical values in the arrays. Consider using the `strict=True` option to check the array dtype and shape, too. When you need custom assertions, use the Python `assert` statement. Note that `pytest` internally rewrites `assert` statements to give informative output when a test fails, so plain `assert` should be preferred over the legacy variant `numpy.testing.assert_`. Whereas plain `assert` statements are ignored when running Python in optimized mode with `-O`, this is not an issue when running tests with pytest. 
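As a quick, self-contained illustration of these assertion helpers (the arrays here are arbitrary examples, not tied to any particular module under test):

```python
import numpy as np
from numpy.testing import assert_allclose, assert_array_less, assert_equal

a = np.array([1.0, 2.0, 3.0])

assert_equal(a, a.copy())                # exact elementwise equality
assert_allclose(a + 1e-9, a, rtol=1e-7)  # near equality within tolerances
assert_array_less(a, a + 1)              # strict elementwise ordering

# A failing comparison raises AssertionError with an informative report
try:
    assert_allclose(a + 1e-9, a, rtol=0, atol=1e-12)
except AssertionError as exc:
    message = str(exc)
assert "Mismatched elements" in message
```

The informative failure report (which elements mismatched, maximum absolute and relative differences) is the main reason to prefer these helpers over hand-rolled comparisons on arrays.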
Similarly, the pytest functions [`pytest.raises`](https://docs.pytest.org/en/stable/reference/reference.html#pytest.raises "\(in pytest v8.3.4\)") and [`pytest.warns`](https://docs.pytest.org/en/stable/reference/reference.html#pytest.warns "\(in pytest v8.3.4\)") should be preferred over their legacy counterparts [`numpy.testing.assert_raises`](generated/numpy.testing.assert_raises#numpy.testing.assert_raises "numpy.testing.assert_raises") and [`numpy.testing.assert_warns`](generated/numpy.testing.assert_warns#numpy.testing.assert_warns "numpy.testing.assert_warns"), since the pytest versions are more broadly used. They also accept a `match` parameter, which should always be used to precisely target the intended warning or error. Note that `test_` functions or methods should not have a docstring, because that makes it hard to identify the test from the output of running the test suite with `verbose=2` (or a similar verbosity setting). Use plain comments (`#`) to describe the intent of the test and help the unfamiliar reader to interpret the code. Also, since much of NumPy is legacy code that was originally written without unit tests, there are still several modules that don’t have tests yet. Please feel free to choose one of these modules and develop tests for it. ### Using C code in tests NumPy exposes a rich [C-API](c-api/index#c-api). These are tested using c-extension modules written “as-if” they know nothing about the internals of NumPy, rather using the official C-API interfaces only. Examples of such modules are tests for a user-defined `rational` dtype in `_rational_tests` or the ufunc machinery tests in `_umath_tests`, which are part of the binary distribution. Starting from version 1.21, you can also write snippets of C code in tests that will be compiled locally into c-extension modules and loaded into Python. 
numpy.testing.extbuild.build_and_import_extension(modname, functions, *, prologue='', build_dir=None, include_dirs=[], more_init='') Builds and imports a c-extension module `modname` from a list of function fragments `functions`. Parameters: **functions** list of fragments Each fragment is a sequence of func_name, calling convention, snippet. **prologue** string Code to precede the rest, usually extra `#include` or `#define` macros. **build_dir** pathlib.Path Where to build the module, usually a temporary directory. **include_dirs** list Extra directories to find include files when compiling. **more_init** string Code to appear in the module PyMODINIT_FUNC. Returns: out: module The module will have been loaded and is ready for use. #### Examples >>> functions = [("test_bytes", "METH_O", """ if ( !PyBytes_Check(args)) { Py_RETURN_FALSE; } Py_RETURN_TRUE; """)] >>> mod = build_and_import_extension("testme", functions) >>> assert not mod.test_bytes('abc') >>> assert mod.test_bytes(b'abc') ### Labeling tests Unlabeled tests like the ones above are run in the default `numpy.test()` run. 
If you want to label your test as slow, and therefore reserved for a full `numpy.test(label='full')` run, you can label it with `pytest.mark.slow`: import pytest @pytest.mark.slow def test_big(self): print('Big, slow test') Similarly for methods: class TestZzz: @pytest.mark.slow def test_simple(self): assert zzz() == 'Hello from zzz' ### Easier setup and teardown functions / methods Testing looks for module-level or class method-level setup and teardown functions by name; thus: def setup_module(): """Module-level setup""" print('doing setup') def teardown_module(): """Module-level teardown""" print('doing teardown') class TestMe: def setup_method(self): """Class-level setup""" print('doing setup') def teardown_method(self): """Class-level teardown""" print('doing teardown') Setup and teardown functions applied to functions and methods are known as “fixtures”, and they should be used sparingly. `pytest` supports more general fixtures at various scopes, which may be used automatically via special arguments. For example, the special argument name `tmpdir` is used in tests to create a temporary directory. ### Parametric tests One very nice feature of `pytest` is the ease of testing across a range of parameter values using the `pytest.mark.parametrize` decorator. For example, suppose you wish to test `linalg.solve` for all combinations of three array sizes and two data types: @pytest.mark.parametrize('dimensionality', [3, 10, 25]) @pytest.mark.parametrize('dtype', [np.float32, np.float64]) def test_solve(dimensionality, dtype): np.random.seed(842523) A = np.random.random(size=(dimensionality, dimensionality)).astype(dtype) b = np.random.random(size=dimensionality).astype(dtype) x = np.linalg.solve(A, b) eps = np.finfo(dtype).eps assert_allclose(A @ x, b, rtol=eps*1e2, atol=0) assert x.dtype == np.dtype(dtype) ### Doctests Doctests are a convenient way of documenting the behavior of a function and allowing that behavior to be tested at the same time. 
The output of an interactive Python session can be included in the docstring of a function, and the test framework can run the example and compare the actual output to the expected output. The doctests can be run by adding the `doctests` argument to the `test()` call; for example, to run all tests (including doctests) for numpy.lib: >>> import numpy as np >>> np.lib.test(doctests=True) The doctests are run as if they are in a fresh Python instance which has executed `import numpy as np`. Tests that are part of a NumPy subpackage will have that subpackage already imported. E.g. for a test in `numpy/linalg/tests/`, the namespace will be created such that `from numpy import linalg` has already executed. ### `tests/` Rather than keeping the code and the tests in the same directory, we put all the tests for a given subpackage in a `tests/` subdirectory. For our example, if it doesn’t already exist you will need to create a `tests/` directory in `numpy/xxx/`. So the path for `test_yyy.py` is `numpy/xxx/tests/test_yyy.py`. Once `numpy/xxx/tests/test_yyy.py` is written, it’s possible to run the tests by going to the `tests/` directory and typing: python test_yyy.py Or if you add `numpy/xxx/tests/` to the Python path, you could run the tests interactively in the interpreter like this: >>> import test_yyy >>> test_yyy.test() ### `__init__.py` and `setup.py` Usually, however, adding the `tests/` directory to the Python path isn’t desirable. Instead it would be better to invoke the test straight from the module `xxx`. To this end, simply place the following lines at the end of your package’s `__init__.py` file: ... def test(level=1, verbosity=1): from numpy.testing import Tester return Tester().test(level, verbosity) You will also need to add the tests directory in the configuration section of your setup.py: ... def configuration(parent_package='', top_path=None): ... config.add_subpackage('tests') return config ... 
Now you can do the following to test your module: >>> import numpy >>> numpy.xxx.test() Also, when invoking the entire NumPy test suite, your tests will be found and run: >>> import numpy >>> numpy.test() # your tests are included and run automatically! ## Tips & Tricks ### Known failures & skipping tests Sometimes you might want to skip a test or mark it as a known failure, such as when the test suite is being written before the code it’s meant to test, or if a test only fails on a particular architecture. To skip a test, simply use `skipif`: import pytest @pytest.mark.skipif(SkipMyTest, reason="Skipping this test because...") def test_something(foo): ... The test is marked as skipped if `SkipMyTest` evaluates to nonzero, and the message in verbose test output is the second argument given to `skipif`. Similarly, a test can be marked as a known failure by using `xfail`: import pytest @pytest.mark.xfail(MyTestFails, reason="This test is known to fail because...") def test_something_else(foo): ... Of course, a test can be unconditionally skipped or marked as a known failure by using `skip` or `xfail` without arguments, respectively. The total number of skipped and known failing tests is displayed at the end of the test run. Skipped tests are marked as `'S'` in the test results (or `'SKIPPED'` for `verbose > 1`), and known failing tests are marked as `'x'` (or `'XFAIL'` if `verbose > 1`). ### Tests on random data Tests on random data are good, but since test failures are meant to expose new bugs or regressions, a test that passes most of the time but fails occasionally with no code changes is not helpful. Make the random data deterministic by setting the random number seed before generating it. Use either Python’s `random.seed(some_number)` or NumPy’s `numpy.random.seed(some_number)`, depending on the source of random numbers. Alternatively, you can use [Hypothesis](https://hypothesis.readthedocs.io/en/latest/) to generate arbitrary data. 
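The seed-based approach just described can be sketched in a few lines, here using the modern `numpy.random.default_rng` generator (the seed value and array shape are arbitrary illustration choices):

```python
import numpy as np

def make_test_data():
    # A fixed seed makes every test run see identical "random" data
    rng = np.random.default_rng(12345)
    return rng.random(size=(3, 3))

# Two independent calls produce bit-identical arrays, so a test that
# consumes this data behaves the same on every run.
np.testing.assert_array_equal(make_test_data(), make_test_data())
```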
Hypothesis manages both Python’s and NumPy’s random seeds for you, and provides a very concise and powerful way to describe data (including `hypothesis.extra.numpy`, e.g. for a set of mutually-broadcastable shapes). The advantages over random generation include tools to replay and share failures without requiring a fixed seed, reporting _minimal_ examples for each failure, and better-than-naive-random techniques for triggering bugs. ### Documentation for `numpy.test` numpy.test(label='fast', verbose=1, extra_argv=None, doctests=False, coverage=False, durations=-1, tests=None) Pytest test runner. A test function is typically added to a package’s __init__.py like so: from numpy._pytesttester import PytestTester test = PytestTester(__name__).test del PytestTester Calling this test function finds and runs all tests associated with the module and all its sub-modules. Parameters: **module_name** module name The name of the module to test. #### Notes Unlike the previous `nose`-based implementation, this class is not publicly exposed as it performs some `numpy`-specific warning suppression. Attributes: **module_name** str Full path to the package to test. # Thread Safety NumPy supports use in a multithreaded context via the [`threading`](https://docs.python.org/3/library/threading.html#module-threading "\(in Python v3.13\)") module in the standard library. Many NumPy operations release the GIL, so unlike many situations in Python, it is possible to improve parallel performance by exploiting multithreaded parallelism in Python. The easiest performance gains happen when each worker thread owns its own array or set of array objects, with no data directly shared between threads. Because NumPy releases the GIL for many low-level operations, threads that spend most of the time in low-level code will run in parallel. 
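A minimal sketch of this thread-per-array pattern (array size and thread count are arbitrary illustration choices):

```python
import threading

import numpy as np

results = [None] * 4  # one slot per worker; no slot is shared between threads

def worker(i):
    # Each thread owns its own array, so no locking is needed, and
    # GIL-releasing operations like np.sqrt can overlap across threads.
    a = np.arange(100_000, dtype=np.float64) + i
    results[i] = float(np.sqrt(a).sum())

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert all(r is not None for r in results)
```

Because each worker writes only to its own slot and its own array, the threads never contend for shared data.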
It is possible to share NumPy arrays between threads, but extreme care must be taken to avoid creating thread safety issues when mutating arrays that are shared between multiple threads. If two threads simultaneously read from and write to the same array, they will at best produce inconsistent, racy results that are not reproducible, let alone correct. It is also possible to crash the Python interpreter by, for example, resizing an array while another thread is reading from it to compute a ufunc operation. In the future, we may add locking to ndarray to make writing multithreaded algorithms using NumPy arrays safer, but for now we suggest focusing on read-only access of arrays that are shared between threads, or adding your own locking if you need both mutation and multithreading. Note that operations that _do not_ release the GIL will see no performance gains from use of the [`threading`](https://docs.python.org/3/library/threading.html#module-threading "\(in Python v3.13\)") module, and instead might be better served with [`multiprocessing`](https://docs.python.org/3/library/multiprocessing.html#module-multiprocessing "\(in Python v3.13\)"). In particular, operations on arrays with `dtype=object` do not release the GIL. ## Free-threaded Python New in version 2.1. Starting with NumPy 2.1 and CPython 3.13, NumPy also has experimental support for Python runtimes with the GIL disabled. See the free-threaded Python guide for more information about installing and using free-threaded Python, as well as information about supporting it in libraries that depend on NumPy. Because free-threaded Python does not have a global interpreter lock to serialize access to Python objects, there are more opportunities for threads to mutate shared state and create thread safety issues. 
In addition to the limitations about locking of the ndarray object noted above, this also means that arrays with `dtype=object` are not protected by the GIL, creating data races for Python objects that are not possible outside free-threaded Python.

# Typing (numpy.typing)

New in version 1.20.

Large parts of the NumPy API have [**PEP 484**](https://peps.python.org/pep-0484/)-style type annotations. In addition, a number of type aliases are available to users, most prominently the two below:

* `ArrayLike`: objects that can be converted to arrays
* `DTypeLike`: objects that can be converted to dtypes

## Mypy plugin

New in version 1.21.

A [mypy](https://mypy-lang.org/) plugin for managing a number of platform-specific annotations. Its functionality can be split into three distinct parts:

* Assigning the (platform-dependent) precisions of certain [`number`](arrays.scalars#numpy.number "numpy.number") subclasses, including the likes of [`int_`](arrays.scalars#numpy.int_ "numpy.int_"), [`intp`](arrays.scalars#numpy.intp "numpy.intp") and [`longlong`](arrays.scalars#numpy.longlong "numpy.longlong"). See the documentation on [scalar types](arrays.scalars#arrays-scalars-built-in) for a comprehensive overview of the affected classes. Without the plugin the precision of all relevant classes will be inferred as [`Any`](https://docs.python.org/3/library/typing.html#typing.Any "\(in Python v3.13\)").
* Removing all extended-precision [`number`](arrays.scalars#numpy.number "numpy.number") subclasses that are unavailable for the platform in question. Most notably this includes the likes of [`float128`](arrays.scalars#numpy.float128 "numpy.float128") and [`complex256`](arrays.scalars#numpy.complex256 "numpy.complex256"). Without the plugin _all_ extended-precision types will, as far as mypy is concerned, be available to all platforms.
* Assigning the (platform-dependent) precision of [`c_intp`](routines.ctypeslib#numpy.ctypeslib.c_intp "numpy.ctypeslib.c_intp").
Without the plugin the type will default to [`ctypes.c_int64`](https://docs.python.org/3/library/ctypes.html#ctypes.c_int64 "\(in Python v3.13\)"). New in version 1.22.

### Examples

To enable the plugin, one must add it to their mypy [configuration file](https://mypy.readthedocs.io/en/stable/config_file.html):

    [mypy]
    plugins = numpy.typing.mypy_plugin

## Differences from the runtime NumPy API

NumPy is very flexible. Trying to describe the full range of possibilities statically would result in types that are not very helpful. For that reason, the typed NumPy API is often stricter than the runtime NumPy API. This section describes some notable differences.

### ArrayLike

The `ArrayLike` type tries to avoid creating object arrays. For example,

    >>> np.array(x**2 for x in range(10))
    array(<generator object <genexpr> at ...>, dtype=object)

is valid NumPy code which will create a 0-dimensional object array. Type checkers will complain about the above example when using the NumPy types however. If you really intended to do the above, then you can either use a `# type: ignore` comment:

    >>> np.array(x**2 for x in range(10))  # type: ignore

or explicitly type the array-like object as [`Any`](https://docs.python.org/3/library/typing.html#typing.Any "\(in Python v3.13\)"):

    >>> from typing import Any
    >>> array_like: Any = (x**2 for x in range(10))
    >>> np.array(array_like)
    array(<generator object <genexpr> at ...>, dtype=object)

### ndarray

It's possible to mutate the dtype of an array at runtime. For example, the following code is valid:

    >>> x = np.array([1, 2])
    >>> x.dtype = np.bool

This sort of mutation is not allowed by the types. Users who want to write statically typed code should instead use the [`numpy.ndarray.view`](generated/numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") method to create a view of the array with a different dtype.
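As a minimal sketch of the statically typed alternative, a view reinterprets the same buffer under another dtype of equal itemsize, instead of mutating `dtype` in place:

```python
import numpy as np

x = np.array([1, 2], dtype=np.int64)
# Instead of assigning to x.dtype (rejected by the static types),
# create a typed view that reinterprets the same buffer. Using a dtype
# with the same itemsize (int64 -> uint64) keeps the shape unchanged.
y = x.view(np.uint64)
```

`y` shares memory with `x`; writing through either is visible in the other, which is the runtime behavior dtype mutation also had.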
### DTypeLike

The `DTypeLike` type tries to avoid creation of dtype objects using a dictionary of fields like the one below:

    >>> x = np.dtype({"field1": (float, 1), "field2": (int, 3)})

Although this is valid NumPy code, the type checker will complain about it, since its usage is discouraged. Please see: [Data type objects](arrays.dtypes#arrays-dtypes)

### Number precision

The precision of [`numpy.number`](arrays.scalars#numpy.number "numpy.number") subclasses is treated as an invariant generic parameter (see `NBitBase`), simplifying the annotating of processes involving precision-based casting.

    >>> from typing import TypeVar
    >>> import numpy as np
    >>> import numpy.typing as npt

    >>> T = TypeVar("T", bound=npt.NBitBase)
    >>> def func(a: "np.floating[T]", b: "np.floating[T]") -> "np.floating[T]": ...

Consequently, the likes of [`float16`](arrays.scalars#numpy.float16 "numpy.float16"), [`float32`](arrays.scalars#numpy.float32 "numpy.float32") and [`float64`](arrays.scalars#numpy.float64 "numpy.float64") are still sub-types of [`floating`](arrays.scalars#numpy.floating "numpy.floating"), but, contrary to runtime, they're not necessarily considered as sub-classes.

### Timedelta64

The [`timedelta64`](arrays.scalars#numpy.timedelta64 "numpy.timedelta64") class is not considered a subclass of [`signedinteger`](arrays.scalars#numpy.signedinteger "numpy.signedinteger"), the former only inheriting from [`generic`](arrays.scalars#numpy.generic "numpy.generic") during static type checking.

### 0D arrays

During runtime NumPy aggressively casts any passed 0D arrays into their corresponding [`generic`](arrays.scalars#numpy.generic "numpy.generic") instance. Until the introduction of shape typing (see [**PEP 646**](https://peps.python.org/pep-0646/)) it is unfortunately not possible to make the necessary distinction between 0D and >0D arrays.
While thus not strictly correct, all operations that can potentially perform a 0D-array -> scalar cast are currently annotated as exclusively returning an [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"). If it is known in advance that an operation _will_ perform a 0D-array -> scalar cast, then one can consider manually remedying the situation with either [`typing.cast`](https://docs.python.org/3/library/typing.html#typing.cast "\(in Python v3.13\)") or a `# type: ignore` comment.

### Record array dtypes

The dtype of [`numpy.recarray`](generated/numpy.recarray#numpy.recarray "numpy.recarray"), and the [Creating record arrays](routines.array-creation#routines-array-creation-rec) functions in general, can be specified in one of two ways:

* Directly via the `dtype` argument.
* With up to five helper arguments that operate via [`numpy.rec.format_parser`](generated/numpy.rec.format_parser#numpy.rec.format_parser "numpy.rec.format_parser"): `formats`, `names`, `titles`, `aligned` and `byteorder`.

These two approaches are currently typed as being mutually exclusive, _i.e._ if `dtype` is specified then one may not specify `formats`. While this mutual exclusivity is not (strictly) enforced during runtime, combining both dtype specifiers can lead to unexpected or even downright buggy behavior.

## API

numpy.typing.ArrayLike _= typing.Union[...]_

A [`Union`](https://docs.python.org/3/library/typing.html#typing.Union "\(in Python v3.13\)") representing objects that can be coerced into an [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray").

Among others this includes the likes of:

* Scalars.
* (Nested) sequences.
* Objects implementing the `__array__` protocol.

New in version 1.20.

See Also

[array_like](../glossary#term-array_like): Any scalar or sequence that can be interpreted as an ndarray.

#### Examples

    >>> import numpy as np
    >>> import numpy.typing as npt

    >>> def as_array(a: npt.ArrayLike) -> np.ndarray:
    ...
return np.array(a)

numpy.typing.DTypeLike _= typing.Union[...]_

A [`Union`](https://docs.python.org/3/library/typing.html#typing.Union "\(in Python v3.13\)") representing objects that can be coerced into a [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype").

Among others this includes the likes of:

* [`type`](https://docs.python.org/3/library/functions.html#type "\(in Python v3.13\)") objects.
* Character codes or the names of [`type`](https://docs.python.org/3/library/functions.html#type "\(in Python v3.13\)") objects.
* Objects with the `.dtype` attribute.

New in version 1.20.

See Also

[Specifying and constructing data types](arrays.dtypes#arrays-dtypes-constructing)

A comprehensive overview of all objects that can be coerced into data types.

#### Examples

    >>> import numpy as np
    >>> import numpy.typing as npt

    >>> def as_dtype(d: npt.DTypeLike) -> np.dtype:
    ...     return np.dtype(d)

numpy.typing.NDArray _= numpy.ndarray[tuple[int, ...], numpy.dtype[+_ScalarType_co]]_[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/__init__.py)

A [`np.ndarray[tuple[int, ...], np.dtype[+ScalarType]]`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") type alias [generic](https://docs.python.org/3/glossary.html#term-generic-type "\(in Python v3.13\)") w.r.t. its [`dtype.type`](generated/numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). Can be used during runtime for typing arrays with a given dtype and unspecified shape.

New in version 1.21.

#### Examples

    >>> import numpy as np
    >>> import numpy.typing as npt

    >>> print(npt.NDArray)
    numpy.ndarray[tuple[int, ...], numpy.dtype[+_ScalarType_co]]

    >>> print(npt.NDArray[np.float64])
    numpy.ndarray[tuple[int, ...], numpy.dtype[numpy.float64]]

    >>> NDArrayInt = npt.NDArray[np.int_]
    >>> a: NDArrayInt = np.arange(10)

    >>> def func(a: npt.ArrayLike) -> npt.NDArray[Any]:
    ...
return np.array(a)

_class_ numpy.typing.NBitBase[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/typing/__init__.py)

A type representing [`numpy.number`](arrays.scalars#numpy.number "numpy.number") precision during static type checking.

Used exclusively for the purpose of static type checking, `NBitBase` represents the base of a hierarchical set of subclasses. Each subsequent subclass is herein used for representing a lower level of precision, _e.g._ `64Bit > 32Bit > 16Bit`.

New in version 1.20.

#### Examples

Below is a typical usage example: `NBitBase` is herein used for annotating a function that takes a float and integer of arbitrary precision as arguments and returns a new float of whichever precision is largest (_e.g._ `np.float16 + np.int64 -> np.float64`).

    >>> from __future__ import annotations
    >>> from typing import TypeVar, TYPE_CHECKING
    >>> import numpy as np
    >>> import numpy.typing as npt

    >>> S = TypeVar("S", bound=npt.NBitBase)
    >>> T = TypeVar("T", bound=npt.NBitBase)

    >>> def add(a: np.floating[S], b: np.integer[T]) -> np.floating[S | T]:
    ...     return a + b

    >>> a = np.float16()
    >>> b = np.int64()
    >>> out = add(a, b)

    >>> if TYPE_CHECKING:
    ...     reveal_locals()
    ...     # note: Revealed local types are:
    ...     # note:     a: numpy.floating[numpy.typing._16Bit*]
    ...     # note:     b: numpy.signedinteger[numpy.typing._64Bit*]
    ...     # note:     out: numpy.floating[numpy.typing._64Bit*]

# Universal functions (ufunc)

See also

[Universal functions (ufunc) basics](../user/basics.ufuncs#ufuncs-basics)

A universal function (or [ufunc](../glossary#term-ufunc) for short) is a function that operates on [`ndarrays`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") in an element-by-element fashion, supporting [array broadcasting](../user/basics.ufuncs#ufuncs-broadcasting), [type casting](../user/basics.ufuncs#ufuncs-casting), and several other standard features.
That is, a ufunc is a "[vectorized](../glossary#term-vectorization)" wrapper for a function that takes a fixed number of specific inputs and produces a fixed number of specific outputs. For detailed information on universal functions, see [Universal functions (ufunc) basics](../user/basics.ufuncs#ufuncs-basics).

## ufunc

[`numpy.ufunc`](generated/numpy.ufunc#numpy.ufunc "numpy.ufunc")() | Functions that operate element by element on whole arrays.
---|---

### Optional keyword arguments

All ufuncs take optional keyword arguments. Most of these represent advanced usage and will not typically be used.

#### _out_

The first output can be provided as either a positional or a keyword parameter. Keyword 'out' arguments are incompatible with positional ones. The 'out' keyword argument is expected to be a tuple with one entry per output (which can be None for arrays to be allocated by the ufunc). For ufuncs with a single output, passing a single array (instead of a tuple holding a single array) is also valid. Passing a single array in the 'out' keyword argument to a ufunc with multiple outputs is deprecated, and will raise a warning in numpy 1.10, and an error in a future release.

If 'out' is None (the default), an uninitialized return array is created. The output array is then filled with the results of the ufunc in the places that the broadcast 'where' is True. If 'where' is the scalar True (the default), then this corresponds to the entire output being filled. Note that outputs not explicitly filled are left with their uninitialized values.

Operations where ufunc input and output operands have memory overlap are defined to be the same as for equivalent operations where there is no memory overlap. Operations affected make temporary copies as needed to eliminate data dependency. As detecting these cases is computationally expensive, a heuristic is used, which may in rare cases result in needless temporary copies.
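A minimal sketch of the `out` and `where` keywords described above; pre-filling the output makes the positions not selected by `where` hold a known value instead of uninitialized memory:

```python
import numpy as np

a = np.arange(5, dtype=np.float64)      # [0., 1., 2., 3., 4.]

# Pre-fill the output so unselected positions are well-defined.
out = np.full_like(a, -1.0)

# Only positions where the condition holds are written.
np.add(a, 10.0, out=out, where=a > 2)
# out is now [-1., -1., -1., 13., 14.]
```

Passing an uninitialized array (e.g. from `np.empty`) as `out` with a partial `where` mask would leave the unselected slots holding arbitrary values, which is the pitfall the note above warns about.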
For operations where the data dependency is simple enough for the heuristic to analyze, temporary copies will not be made even if the arrays overlap, if it can be deduced copies are not necessary. As an example, `np.add(a, b, out=a)` will not involve copies.

#### _where_

Accepts a boolean array which is broadcast together with the operands. Values of True indicate to calculate the ufunc at that position, values of False indicate to leave the value in the output alone. This argument cannot be used for generalized ufuncs as those take non-scalar input. Note that if an uninitialized return array is created, values of False will leave those values **uninitialized**.

#### _axes_

A list of tuples with indices of axes a generalized ufunc should operate on. For instance, for a signature of `(i,j),(j,k)->(i,k)` appropriate for matrix multiplication, the base elements are two-dimensional matrices and these are taken to be stored in the two last axes of each argument. The corresponding axes keyword would be `[(-2, -1), (-2, -1), (-2, -1)]`. For simplicity, for generalized ufuncs that operate on 1-dimensional arrays (vectors), a single integer is accepted instead of a single-element tuple, and for generalized ufuncs for which all outputs are scalars, the output tuples can be omitted.

#### _axis_

A single axis over which a generalized ufunc should operate. This is a shortcut for ufuncs that operate over a single, shared core dimension, equivalent to passing in `axes` with entries of `(axis,)` for each single-core-dimension argument and `()` for all others. For instance, for a signature `(i),(i)->()`, it is equivalent to passing in `axes=[(axis,), (axis,), ()]`.

#### _keepdims_

If this is set to `True`, axes which are reduced over will be left in the result as a dimension with size one, so that the result will broadcast correctly against the inputs.
This option can only be used for generalized ufuncs that operate on inputs that all have the same number of core dimensions and with outputs that have no core dimensions, i.e., with signatures like `(i),(i)->()` or `(m,m)->()`. If used, the location of the dimensions in the output can be controlled with `axes` and `axis`.

#### _casting_

May be 'no', 'equiv', 'safe', 'same_kind', or 'unsafe'. See [`can_cast`](generated/numpy.can_cast#numpy.can_cast "numpy.can_cast") for explanations of the parameter values. Provides a policy for what kind of casting is permitted. For compatibility with previous versions of NumPy, this defaults to 'unsafe' for numpy < 1.7. In numpy 1.7 a transition to 'same_kind' was begun where ufuncs produce a DeprecationWarning for calls which are allowed under the 'unsafe' rules, but not under the 'same_kind' rules. From numpy 1.10 and onwards, the default is 'same_kind'.

#### _order_

Specifies the calculation iteration order/memory layout of the output array. Defaults to 'K'. 'C' means the output should be C-contiguous, 'F' means F-contiguous, 'A' means F-contiguous if the inputs are F-contiguous and also not C-contiguous, C-contiguous otherwise, and 'K' means to match the element ordering of the inputs as closely as possible.

#### _dtype_

Overrides the DType of the output arrays the same way as the _signature_. This should ensure a matching precision of the calculation. The exact calculation DTypes chosen may depend on the ufunc and the inputs may be cast to this DType to perform the calculation.

#### _subok_

Defaults to True. If set to False, the output will always be a strict array, not a subtype.

#### _signature_

Either a DType, a tuple of DTypes, or a special signature string indicating the input and output types of a ufunc. This argument allows the user to specify exact DTypes to be used for the calculation. Casting will be used as necessary.
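The `dtype` and `casting` keywords described above can be sketched as follows (a minimal illustration with integer inputs):

```python
import numpy as np

a = np.array([1, 2, 3])                      # int64 on most platforms

# Request a float64 calculation for integer inputs via the dtype keyword;
# the inputs are cast to float64 before the loop runs.
r = np.add(a, a, dtype=np.float64)           # array([2., 4., 6.])

# Under the stricter 'no' policy the required int -> float cast is
# rejected (numpy raises a TypeError subclass).
raised = False
try:
    np.add(a, a, dtype=np.float64, casting="no")
except TypeError:
    raised = True
```

The default `casting='same_kind'` permits the int-to-float cast, which is why the first call succeeds without an explicit policy.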
The actual DType of the input arrays is not considered unless `signature` is `None` for that array. When all DTypes are fixed, a specific loop is chosen or an error raised if no matching loop exists. If some DTypes are not specified and left `None`, the behaviour may depend on the ufunc. At this time, a list of available signatures is provided by the **types** attribute of the ufunc. (This list may be missing DTypes not defined by NumPy.)

The `signature` only specifies the DType class/type. For example, it can specify that the operation should be a `datetime64` or `float64` operation. It does not specify the `datetime64` time-unit or the `float64` byte-order.

For backwards compatibility this argument can also be provided as _sig_, although the long form is preferred. Note that this should not be confused with the generalized ufunc [signature](c-api/generalized-ufuncs#details-of-signature) that is stored in the **signature** attribute of the ufunc object.

### Attributes

There are some informational attributes that universal functions possess. None of the attributes can be set.

**__doc__** | A docstring for each ufunc. The first part of the docstring is dynamically generated from the number of outputs, the name, and the number of inputs. The second part of the docstring is provided at creation time and stored with the ufunc.
---|---
**__name__** | The name of the ufunc.

[`ufunc.nin`](generated/numpy.ufunc.nin#numpy.ufunc.nin "numpy.ufunc.nin") | The number of inputs.
---|---
[`ufunc.nout`](generated/numpy.ufunc.nout#numpy.ufunc.nout "numpy.ufunc.nout") | The number of outputs.
[`ufunc.nargs`](generated/numpy.ufunc.nargs#numpy.ufunc.nargs "numpy.ufunc.nargs") | The number of arguments.
[`ufunc.ntypes`](generated/numpy.ufunc.ntypes#numpy.ufunc.ntypes "numpy.ufunc.ntypes") | The number of types.
[`ufunc.types`](generated/numpy.ufunc.types#numpy.ufunc.types "numpy.ufunc.types") | Returns a list with types grouped input->output.
[`ufunc.identity`](generated/numpy.ufunc.identity#numpy.ufunc.identity "numpy.ufunc.identity") | The identity value.
[`ufunc.signature`](generated/numpy.ufunc.signature#numpy.ufunc.signature "numpy.ufunc.signature") | Definition of the core elements a generalized ufunc operates on.

### Methods

[`ufunc.reduce`](generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce")(array[, axis, dtype, out, ...]) | Reduces [`array`](generated/numpy.array#numpy.array "numpy.array")'s dimension by one, by applying ufunc along one axis.
---|---
[`ufunc.accumulate`](generated/numpy.ufunc.accumulate#numpy.ufunc.accumulate "numpy.ufunc.accumulate")(array[, axis, dtype, out]) | Accumulate the result of applying the operator to all elements.
[`ufunc.reduceat`](generated/numpy.ufunc.reduceat#numpy.ufunc.reduceat "numpy.ufunc.reduceat")(array, indices[, axis, ...]) | Performs a (local) reduce with specified slices over a single axis.
[`ufunc.outer`](generated/numpy.ufunc.outer#numpy.ufunc.outer "numpy.ufunc.outer")(A, B, /, **kwargs) | Apply the ufunc `op` to all pairs (a, b) with a in `A` and b in `B`.
[`ufunc.at`](generated/numpy.ufunc.at#numpy.ufunc.at "numpy.ufunc.at")(a, indices[, b]) | Performs unbuffered in place operation on operand 'a' for elements specified by 'indices'.

Warning

A reduce-like operation on an array with a data-type that has a range "too small" to handle the result will silently wrap. One should use [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") to increase the size of the data-type over which reduction takes place.

## Available ufuncs

There are currently more than 60 universal functions defined in [`numpy`](index#module-numpy "numpy") on one or more types, covering a wide variety of operations.
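The warning above about silent wrapping in reduce-like operations can be illustrated with int64 addition, where passing `dtype` widens the accumulator:

```python
import numpy as np

# 2**62 + 2**62 = 2**63, one past the largest int64 value, so the
# int64 accumulation wraps around silently.
a = np.array([2**62, 2**62], dtype=np.int64)

wrapped = np.add.reduce(a)                     # wraps to -2**63
widened = np.add.reduce(a, dtype=np.float64)   # accumulate in a wider type
```

Here `widened` holds `2.0**63` exactly (a power of two is representable in float64), while `wrapped` is the wrapped negative value.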
Some of these ufuncs are called automatically on arrays when the relevant infix notation is used (_e.g._ , [`add(a, b)`](generated/numpy.add#numpy.add "numpy.add") is called internally when `a + b` is written and _a_ or _b_ is an [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray")). Nevertheless, you may still want to use the ufunc call in order to use the optional output argument(s) to place the output(s) in an object (or objects) of your choice. Recall that each ufunc operates element-by-element. Therefore, each scalar ufunc will be described as if acting on a set of scalar inputs to return a set of scalar outputs. Note The ufunc still returns its output(s) even if you use the optional output argument(s). ### Math operations [`add`](generated/numpy.add#numpy.add "numpy.add")(x1, x2, /[, out, where, casting, order, ...]) | Add arguments element-wise. ---|--- [`subtract`](generated/numpy.subtract#numpy.subtract "numpy.subtract")(x1, x2, /[, out, where, casting, ...]) | Subtract arguments, element-wise. [`multiply`](generated/numpy.multiply#numpy.multiply "numpy.multiply")(x1, x2, /[, out, where, casting, ...]) | Multiply arguments element-wise. [`matmul`](generated/numpy.matmul#numpy.matmul "numpy.matmul")(x1, x2, /[, out, casting, order, ...]) | Matrix product of two arrays. [`divide`](generated/numpy.divide#numpy.divide "numpy.divide")(x1, x2, /[, out, where, casting, ...]) | Divide arguments element-wise. [`logaddexp`](generated/numpy.logaddexp#numpy.logaddexp "numpy.logaddexp")(x1, x2, /[, out, where, casting, ...]) | Logarithm of the sum of exponentiations of the inputs. [`logaddexp2`](generated/numpy.logaddexp2#numpy.logaddexp2 "numpy.logaddexp2")(x1, x2, /[, out, where, casting, ...]) | Logarithm of the sum of exponentiations of the inputs in base-2. [`true_divide`](generated/numpy.true_divide#numpy.true_divide "numpy.true_divide")(x1, x2, /[, out, where, ...]) | Divide arguments element-wise. 
[`floor_divide`](generated/numpy.floor_divide#numpy.floor_divide "numpy.floor_divide")(x1, x2, /[, out, where, ...]) | Return the largest integer smaller or equal to the division of the inputs. [`negative`](generated/numpy.negative#numpy.negative "numpy.negative")(x, /[, out, where, casting, order, ...]) | Numerical negative, element-wise. [`positive`](generated/numpy.positive#numpy.positive "numpy.positive")(x, /[, out, where, casting, order, ...]) | Numerical positive, element-wise. [`power`](generated/numpy.power#numpy.power "numpy.power")(x1, x2, /[, out, where, casting, ...]) | First array elements raised to powers from second array, element-wise. [`float_power`](generated/numpy.float_power#numpy.float_power "numpy.float_power")(x1, x2, /[, out, where, ...]) | First array elements raised to powers from second array, element-wise. [`remainder`](generated/numpy.remainder#numpy.remainder "numpy.remainder")(x1, x2, /[, out, where, casting, ...]) | Returns the element-wise remainder of division. [`mod`](generated/numpy.mod#numpy.mod "numpy.mod")(x1, x2, /[, out, where, casting, order, ...]) | Returns the element-wise remainder of division. [`fmod`](generated/numpy.fmod#numpy.fmod "numpy.fmod")(x1, x2, /[, out, where, casting, ...]) | Returns the element-wise remainder of division. [`divmod`](generated/numpy.divmod#numpy.divmod "numpy.divmod")(x1, x2[, out1, out2], / [[, out, ...]) | Return element-wise quotient and remainder simultaneously. [`absolute`](generated/numpy.absolute#numpy.absolute "numpy.absolute")(x, /[, out, where, casting, order, ...]) | Calculate the absolute value element-wise. [`fabs`](generated/numpy.fabs#numpy.fabs "numpy.fabs")(x, /[, out, where, casting, order, ...]) | Compute the absolute values element-wise. [`rint`](generated/numpy.rint#numpy.rint "numpy.rint")(x, /[, out, where, casting, order, ...]) | Round elements of the array to the nearest integer. 
[`sign`](generated/numpy.sign#numpy.sign "numpy.sign")(x, /[, out, where, casting, order, ...]) | Returns an element-wise indication of the sign of a number. [`heaviside`](generated/numpy.heaviside#numpy.heaviside "numpy.heaviside")(x1, x2, /[, out, where, casting, ...]) | Compute the Heaviside step function. [`conj`](generated/numpy.conj#numpy.conj "numpy.conj")(x, /[, out, where, casting, order, ...]) | Return the complex conjugate, element-wise. [`conjugate`](generated/numpy.conjugate#numpy.conjugate "numpy.conjugate")(x, /[, out, where, casting, ...]) | Return the complex conjugate, element-wise. [`exp`](generated/numpy.exp#numpy.exp "numpy.exp")(x, /[, out, where, casting, order, ...]) | Calculate the exponential of all elements in the input array. [`exp2`](generated/numpy.exp2#numpy.exp2 "numpy.exp2")(x, /[, out, where, casting, order, ...]) | Calculate `2**p` for all `p` in the input array. [`log`](generated/numpy.log#numpy.log "numpy.log")(x, /[, out, where, casting, order, ...]) | Natural logarithm, element-wise. [`log2`](generated/numpy.log2#numpy.log2 "numpy.log2")(x, /[, out, where, casting, order, ...]) | Base-2 logarithm of `x`. [`log10`](generated/numpy.log10#numpy.log10 "numpy.log10")(x, /[, out, where, casting, order, ...]) | Return the base 10 logarithm of the input array, element-wise. [`expm1`](generated/numpy.expm1#numpy.expm1 "numpy.expm1")(x, /[, out, where, casting, order, ...]) | Calculate `exp(x) - 1` for all elements in the array. [`log1p`](generated/numpy.log1p#numpy.log1p "numpy.log1p")(x, /[, out, where, casting, order, ...]) | Return the natural logarithm of one plus the input array, element-wise. [`sqrt`](generated/numpy.sqrt#numpy.sqrt "numpy.sqrt")(x, /[, out, where, casting, order, ...]) | Return the non-negative square-root of an array, element-wise. [`square`](generated/numpy.square#numpy.square "numpy.square")(x, /[, out, where, casting, order, ...]) | Return the element-wise square of the input. 
[`cbrt`](generated/numpy.cbrt#numpy.cbrt "numpy.cbrt")(x, /[, out, where, casting, order, ...]) | Return the cube-root of an array, element-wise. [`reciprocal`](generated/numpy.reciprocal#numpy.reciprocal "numpy.reciprocal")(x, /[, out, where, casting, ...]) | Return the reciprocal of the argument, element-wise. [`gcd`](generated/numpy.gcd#numpy.gcd "numpy.gcd")(x1, x2, /[, out, where, casting, order, ...]) | Returns the greatest common divisor of `|x1|` and `|x2|` [`lcm`](generated/numpy.lcm#numpy.lcm "numpy.lcm")(x1, x2, /[, out, where, casting, order, ...]) | Returns the lowest common multiple of `|x1|` and `|x2|` Tip The optional output arguments can be used to help you save memory for large calculations. If your arrays are large, complicated expressions can take longer than absolutely necessary due to the creation and (later) destruction of temporary calculation spaces. For example, the expression `G = A * B + C` is equivalent to `T1 = A * B; G = T1 + C; del T1`. It will be more quickly executed as `G = A * B; add(G, C, G)` which is the same as `G = A * B; G += C`. ### Trigonometric functions All trigonometric functions use radians when an angle is called for. The ratio of degrees to radians is \\(180^{\circ}/\pi.\\) [`sin`](generated/numpy.sin#numpy.sin "numpy.sin")(x, /[, out, where, casting, order, ...]) | Trigonometric sine, element-wise. ---|--- [`cos`](generated/numpy.cos#numpy.cos "numpy.cos")(x, /[, out, where, casting, order, ...]) | Cosine element-wise. [`tan`](generated/numpy.tan#numpy.tan "numpy.tan")(x, /[, out, where, casting, order, ...]) | Compute tangent element-wise. [`arcsin`](generated/numpy.arcsin#numpy.arcsin "numpy.arcsin")(x, /[, out, where, casting, order, ...]) | Inverse sine, element-wise. [`arccos`](generated/numpy.arccos#numpy.arccos "numpy.arccos")(x, /[, out, where, casting, order, ...]) | Trigonometric inverse cosine, element-wise. 
[`arctan`](generated/numpy.arctan#numpy.arctan "numpy.arctan")(x, /[, out, where, casting, order, ...]) | Trigonometric inverse tangent, element-wise. [`arctan2`](generated/numpy.arctan2#numpy.arctan2 "numpy.arctan2")(x1, x2, /[, out, where, casting, ...]) | Element-wise arc tangent of `x1/x2` choosing the quadrant correctly. [`hypot`](generated/numpy.hypot#numpy.hypot "numpy.hypot")(x1, x2, /[, out, where, casting, ...]) | Given the "legs" of a right triangle, return its hypotenuse. [`sinh`](generated/numpy.sinh#numpy.sinh "numpy.sinh")(x, /[, out, where, casting, order, ...]) | Hyperbolic sine, element-wise. [`cosh`](generated/numpy.cosh#numpy.cosh "numpy.cosh")(x, /[, out, where, casting, order, ...]) | Hyperbolic cosine, element-wise. [`tanh`](generated/numpy.tanh#numpy.tanh "numpy.tanh")(x, /[, out, where, casting, order, ...]) | Compute hyperbolic tangent element-wise. [`arcsinh`](generated/numpy.arcsinh#numpy.arcsinh "numpy.arcsinh")(x, /[, out, where, casting, order, ...]) | Inverse hyperbolic sine element-wise. [`arccosh`](generated/numpy.arccosh#numpy.arccosh "numpy.arccosh")(x, /[, out, where, casting, order, ...]) | Inverse hyperbolic cosine, element-wise. [`arctanh`](generated/numpy.arctanh#numpy.arctanh "numpy.arctanh")(x, /[, out, where, casting, order, ...]) | Inverse hyperbolic tangent element-wise. [`degrees`](generated/numpy.degrees#numpy.degrees "numpy.degrees")(x, /[, out, where, casting, order, ...]) | Convert angles from radians to degrees. [`radians`](generated/numpy.radians#numpy.radians "numpy.radians")(x, /[, out, where, casting, order, ...]) | Convert angles from degrees to radians. [`deg2rad`](generated/numpy.deg2rad#numpy.deg2rad "numpy.deg2rad")(x, /[, out, where, casting, order, ...]) | Convert angles from degrees to radians. [`rad2deg`](generated/numpy.rad2deg#numpy.rad2deg "numpy.rad2deg")(x, /[, out, where, casting, order, ...]) | Convert angles from radians to degrees. 
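The earlier tip about reusing output buffers to avoid temporaries can be sketched as:

```python
import numpy as np

A = np.ones(1000)
B = np.full(1000, 2.0)
C = np.full(1000, 3.0)

# G = A * B + C would allocate a temporary for A * B and then another
# array for the sum. Reusing G as the ufunc output avoids the second
# allocation entirely.
G = A * B
np.add(G, C, out=G)   # same effect as G += C; writes in place
```

For large arrays this saves one full-size allocation and the associated memory traffic, which is exactly the `T1 = A * B; G = T1 + C; del T1` cost the tip describes.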
### Bit-twiddling functions

These functions all require integer arguments and they manipulate the bit-pattern of those arguments.

[`bitwise_and`](generated/numpy.bitwise_and#numpy.bitwise_and "numpy.bitwise_and")(x1, x2, /[, out, where, ...]) | Compute the bit-wise AND of two arrays element-wise.
---|---
[`bitwise_or`](generated/numpy.bitwise_or#numpy.bitwise_or "numpy.bitwise_or")(x1, x2, /[, out, where, casting, ...]) | Compute the bit-wise OR of two arrays element-wise.
[`bitwise_xor`](generated/numpy.bitwise_xor#numpy.bitwise_xor "numpy.bitwise_xor")(x1, x2, /[, out, where, ...]) | Compute the bit-wise XOR of two arrays element-wise.
[`invert`](generated/numpy.invert#numpy.invert "numpy.invert")(x, /[, out, where, casting, order, ...]) | Compute bit-wise inversion, or bit-wise NOT, element-wise.
[`left_shift`](generated/numpy.left_shift#numpy.left_shift "numpy.left_shift")(x1, x2, /[, out, where, casting, ...]) | Shift the bits of an integer to the left.
[`right_shift`](generated/numpy.right_shift#numpy.right_shift "numpy.right_shift")(x1, x2, /[, out, where, ...]) | Shift the bits of an integer to the right.

### Comparison functions

[`greater`](generated/numpy.greater#numpy.greater "numpy.greater")(x1, x2, /[, out, where, casting, ...]) | Return the truth value of (x1 > x2) element-wise.
---|---
[`greater_equal`](generated/numpy.greater_equal#numpy.greater_equal "numpy.greater_equal")(x1, x2, /[, out, where, ...]) | Return the truth value of (x1 >= x2) element-wise.
[`less`](generated/numpy.less#numpy.less "numpy.less")(x1, x2, /[, out, where, casting, ...]) | Return the truth value of (x1 < x2) element-wise.
[`less_equal`](generated/numpy.less_equal#numpy.less_equal "numpy.less_equal")(x1, x2, /[, out, where, casting, ...]) | Return the truth value of (x1 <= x2) element-wise.
[`not_equal`](generated/numpy.not_equal#numpy.not_equal "numpy.not_equal")(x1, x2, /[, out, where, casting, ...]) | Return (x1 != x2) element-wise.
[`equal`](generated/numpy.equal#numpy.equal "numpy.equal")(x1, x2, /[, out, where, casting, ...]) | Return (x1 == x2) element-wise.

Warning

Do not use the Python keywords `and` and `or` to combine logical array expressions. These keywords test the truth value of the entire array (not element-by-element, as you might expect). Use the bitwise operators `&` and `|` instead.

[`logical_and`](generated/numpy.logical_and#numpy.logical_and "numpy.logical_and")(x1, x2, /[, out, where, ...]) | Compute the truth value of x1 AND x2 element-wise.
---|---
[`logical_or`](generated/numpy.logical_or#numpy.logical_or "numpy.logical_or")(x1, x2, /[, out, where, casting, ...]) | Compute the truth value of x1 OR x2 element-wise.
[`logical_xor`](generated/numpy.logical_xor#numpy.logical_xor "numpy.logical_xor")(x1, x2, /[, out, where, ...]) | Compute the truth value of x1 XOR x2, element-wise.
[`logical_not`](generated/numpy.logical_not#numpy.logical_not "numpy.logical_not")(x, /[, out, where, casting, ...]) | Compute the truth value of NOT x element-wise.

Warning

The bit-wise operators `&` and `|` are the proper way to perform element-by-element array comparisons. Be sure you understand the operator precedence: `(a > 2) & (a < 5)` is the proper syntax, because `a > 2 & a < 5` results in an error, as `2 & a` is evaluated first.

[`maximum`](generated/numpy.maximum#numpy.maximum "numpy.maximum")(x1, x2, /[, out, where, casting, ...]) | Element-wise maximum of array elements.
---|---

Tip

The Python function `max()` will find the maximum over a one-dimensional array, but it does so using a slower sequence interface. The reduce method of the `maximum` ufunc is much faster. Also, `max()` will not give the answers you might expect for arrays with more than one dimension. The reduce method of `minimum` likewise allows you to compute a total minimum over an array.
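The warnings and the tip above can be illustrated with a short sketch (the array values are arbitrary):

```python
import numpy as np

a = np.array([1, 3, 4, 6])

# Element-wise combination: parenthesize the comparisons, then use &
mask = (a > 2) & (a < 5)       # [False, True, True, False]

# `a > 2 & a < 5` raises instead: `2 & a` is evaluated first, and the
# resulting chained comparison tries to take the truth value of an array.

# maximum.reduce computes the total maximum over an array via the
# fast ufunc machinery, rather than Python's sequence interface.
m = np.maximum.reduce(a)       # 6
```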
[`minimum`](generated/numpy.minimum#numpy.minimum "numpy.minimum")(x1, x2, /[, out, where, casting, ...]) | Element-wise minimum of array elements.
---|---

Warning

The behavior of `maximum(a, b)` is different from that of `max(a, b)`. As a ufunc, `maximum(a, b)` performs an element-by-element comparison of `a` and `b` and chooses each element of the result according to which element in the two arrays is larger. In contrast, `max(a, b)` treats the objects `a` and `b` as a whole, looks at the (total) truth value of `a > b` and uses it to return either `a` or `b` (as a whole). A similar difference exists between `minimum(a, b)` and `min(a, b)`.

[`fmax`](generated/numpy.fmax#numpy.fmax "numpy.fmax")(x1, x2, /[, out, where, casting, ...]) | Element-wise maximum of array elements.
---|---
[`fmin`](generated/numpy.fmin#numpy.fmin "numpy.fmin")(x1, x2, /[, out, where, casting, ...]) | Element-wise minimum of array elements.

### Floating functions

Recall that all of these functions work element-by-element over an array, returning an array output. The description details only a single operation.

[`isfinite`](generated/numpy.isfinite#numpy.isfinite "numpy.isfinite")(x, /[, out, where, casting, order, ...]) | Test element-wise for finiteness (not infinity and not Not a Number).
---|---
[`isinf`](generated/numpy.isinf#numpy.isinf "numpy.isinf")(x, /[, out, where, casting, order, ...]) | Test element-wise for positive or negative infinity.
[`isnan`](generated/numpy.isnan#numpy.isnan "numpy.isnan")(x, /[, out, where, casting, order, ...]) | Test element-wise for NaN and return result as a boolean array.
[`isnat`](generated/numpy.isnat#numpy.isnat "numpy.isnat")(x, /[, out, where, casting, order, ...]) | Test element-wise for NaT (not a time) and return result as a boolean array.
[`fabs`](generated/numpy.fabs#numpy.fabs "numpy.fabs")(x, /[, out, where, casting, order, ...]) | Compute the absolute values element-wise.
[`signbit`](generated/numpy.signbit#numpy.signbit "numpy.signbit")(x, /[, out, where, casting, order, ...]) | Returns element-wise True where signbit is set (less than zero).
[`copysign`](generated/numpy.copysign#numpy.copysign "numpy.copysign")(x1, x2, /[, out, where, casting, ...]) | Change the sign of x1 to that of x2, element-wise.
[`nextafter`](generated/numpy.nextafter#numpy.nextafter "numpy.nextafter")(x1, x2, /[, out, where, casting, ...]) | Return the next floating-point value after x1 towards x2, element-wise.
[`spacing`](generated/numpy.spacing#numpy.spacing "numpy.spacing")(x, /[, out, where, casting, order, ...]) | Return the distance between x and the nearest adjacent number.
[`modf`](generated/numpy.modf#numpy.modf "numpy.modf")(x[, out1, out2], / [[, out, where, ...]) | Return the fractional and integral parts of an array, element-wise.
[`ldexp`](generated/numpy.ldexp#numpy.ldexp "numpy.ldexp")(x1, x2, /[, out, where, casting, ...]) | Returns x1 * 2**x2, element-wise.
[`frexp`](generated/numpy.frexp#numpy.frexp "numpy.frexp")(x[, out1, out2], / [[, out, where, ...]) | Decompose the elements of x into mantissa and twos exponent.
[`fmod`](generated/numpy.fmod#numpy.fmod "numpy.fmod")(x1, x2, /[, out, where, casting, ...]) | Returns the element-wise remainder of division.
[`floor`](generated/numpy.floor#numpy.floor "numpy.floor")(x, /[, out, where, casting, order, ...]) | Return the floor of the input, element-wise.
[`ceil`](generated/numpy.ceil#numpy.ceil "numpy.ceil")(x, /[, out, where, casting, order, ...]) | Return the ceiling of the input, element-wise.
[`trunc`](generated/numpy.trunc#numpy.trunc "numpy.trunc")(x, /[, out, where, casting, order, ...]) | Return the truncated value of the input, element-wise.
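A few of the floating-point inspection and manipulation functions above in action (a sketch; the sample values are arbitrary):

```python
import numpy as np

x = np.array([1.0, -np.inf, np.nan, 2.5])

np.isfinite(x)           # [True, False, False, True]
np.isnan(x)              # [False, False, True, False]

# copysign transfers only the sign bit of x2 onto x1
np.copysign(3.0, -0.5)   # -3.0

# frexp/ldexp round-trip: x == mantissa * 2**exponent
m, e = np.frexp(8.0)     # m = 0.5, e = 4
np.ldexp(m, e)           # 8.0
```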
# Release notes

* [2.2.0](https://numpy.org/doc/2.2/release/2.2.0-notes.html)
  * [Deprecations](https://numpy.org/doc/2.2/release/2.2.0-notes.html#deprecations)
  * [Expired deprecations](https://numpy.org/doc/2.2/release/2.2.0-notes.html#expired-deprecations)
  * [Compatibility notes](https://numpy.org/doc/2.2/release/2.2.0-notes.html#compatibility-notes)
  * [New Features](https://numpy.org/doc/2.2/release/2.2.0-notes.html#new-features)
  * [Improvements](https://numpy.org/doc/2.2/release/2.2.0-notes.html#improvements)
  * [Performance improvements and changes](https://numpy.org/doc/2.2/release/2.2.0-notes.html#performance-improvements-and-changes)
  * [Changes](https://numpy.org/doc/2.2/release/2.2.0-notes.html#changes)
* [2.1.3](https://numpy.org/doc/2.2/release/2.1.3-notes.html)
  * [Improvements](https://numpy.org/doc/2.2/release/2.1.3-notes.html#improvements)
  * [Changes](https://numpy.org/doc/2.2/release/2.1.3-notes.html#changes)
  * [Contributors](https://numpy.org/doc/2.2/release/2.1.3-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/2.1.3-notes.html#pull-requests-merged)
* [2.1.2](https://numpy.org/doc/2.2/release/2.1.2-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/2.1.2-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/2.1.2-notes.html#pull-requests-merged)
* [2.1.1](https://numpy.org/doc/2.2/release/2.1.1-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/2.1.1-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/2.1.1-notes.html#pull-requests-merged)
* [2.1.0](https://numpy.org/doc/2.2/release/2.1.0-notes.html)
  * [New functions](https://numpy.org/doc/2.2/release/2.1.0-notes.html#new-functions)
  * [Deprecations](https://numpy.org/doc/2.2/release/2.1.0-notes.html#deprecations)
  * [Expired deprecations](https://numpy.org/doc/2.2/release/2.1.0-notes.html#expired-deprecations)
  * [C API changes](https://numpy.org/doc/2.2/release/2.1.0-notes.html#c-api-changes)
  * [New Features](https://numpy.org/doc/2.2/release/2.1.0-notes.html#new-features)
  * [Improvements](https://numpy.org/doc/2.2/release/2.1.0-notes.html#improvements)
  * [Performance improvements and changes](https://numpy.org/doc/2.2/release/2.1.0-notes.html#performance-improvements-and-changes)
  * [Changes](https://numpy.org/doc/2.2/release/2.1.0-notes.html#changes)
* [2.0.2](https://numpy.org/doc/2.2/release/2.0.2-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/2.0.2-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/2.0.2-notes.html#pull-requests-merged)
* [2.0.1](https://numpy.org/doc/2.2/release/2.0.1-notes.html)
  * [Improvements](https://numpy.org/doc/2.2/release/2.0.1-notes.html#improvements)
  * [Contributors](https://numpy.org/doc/2.2/release/2.0.1-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/2.0.1-notes.html#pull-requests-merged)
* [2.0.0](https://numpy.org/doc/2.2/release/2.0.0-notes.html)
  * [Highlights](https://numpy.org/doc/2.2/release/2.0.0-notes.html#highlights)
  * [NumPy 2.0 Python API removals](https://numpy.org/doc/2.2/release/2.0.0-notes.html#numpy-2-0-python-api-removals)
  * [Deprecations](https://numpy.org/doc/2.2/release/2.0.0-notes.html#deprecations)
  * [Expired deprecations](https://numpy.org/doc/2.2/release/2.0.0-notes.html#expired-deprecations)
  * [Compatibility notes](https://numpy.org/doc/2.2/release/2.0.0-notes.html#compatibility-notes)
  * [C API changes](https://numpy.org/doc/2.2/release/2.0.0-notes.html#c-api-changes)
  * [NumPy 2.0 C API removals](https://numpy.org/doc/2.2/release/2.0.0-notes.html#numpy-2-0-c-api-removals)
  * [New Features](https://numpy.org/doc/2.2/release/2.0.0-notes.html#new-features)
  * [Improvements](https://numpy.org/doc/2.2/release/2.0.0-notes.html#improvements)
  * [Changes](https://numpy.org/doc/2.2/release/2.0.0-notes.html#changes)
* [1.26.4](https://numpy.org/doc/2.2/release/1.26.4-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.26.4-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.26.4-notes.html#pull-requests-merged)
* [1.26.3](https://numpy.org/doc/2.2/release/1.26.3-notes.html)
  * [Compatibility](https://numpy.org/doc/2.2/release/1.26.3-notes.html#compatibility)
  * [Improvements](https://numpy.org/doc/2.2/release/1.26.3-notes.html#improvements)
  * [Contributors](https://numpy.org/doc/2.2/release/1.26.3-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.26.3-notes.html#pull-requests-merged)
* [1.26.2](https://numpy.org/doc/2.2/release/1.26.2-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.26.2-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.26.2-notes.html#pull-requests-merged)
* [1.26.1](https://numpy.org/doc/2.2/release/1.26.1-notes.html)
  * [Build system changes](https://numpy.org/doc/2.2/release/1.26.1-notes.html#build-system-changes)
  * [New features](https://numpy.org/doc/2.2/release/1.26.1-notes.html#new-features)
  * [Contributors](https://numpy.org/doc/2.2/release/1.26.1-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.26.1-notes.html#pull-requests-merged)
* [1.26.0](https://numpy.org/doc/2.2/release/1.26.0-notes.html)
  * [New Features](https://numpy.org/doc/2.2/release/1.26.0-notes.html#new-features)
  * [Improvements](https://numpy.org/doc/2.2/release/1.26.0-notes.html#improvements)
  * [Build system changes](https://numpy.org/doc/2.2/release/1.26.0-notes.html#build-system-changes)
  * [Contributors](https://numpy.org/doc/2.2/release/1.26.0-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.26.0-notes.html#pull-requests-merged)
* [1.25.2](https://numpy.org/doc/2.2/release/1.25.2-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.25.2-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.25.2-notes.html#pull-requests-merged)
* [1.25.1](https://numpy.org/doc/2.2/release/1.25.1-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.25.1-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.25.1-notes.html#pull-requests-merged)
* [1.25.0](https://numpy.org/doc/2.2/release/1.25.0-notes.html)
  * [Deprecations](https://numpy.org/doc/2.2/release/1.25.0-notes.html#deprecations)
  * [Expired deprecations](https://numpy.org/doc/2.2/release/1.25.0-notes.html#expired-deprecations)
  * [Compatibility notes](https://numpy.org/doc/2.2/release/1.25.0-notes.html#compatibility-notes)
  * [New Features](https://numpy.org/doc/2.2/release/1.25.0-notes.html#new-features)
  * [Improvements](https://numpy.org/doc/2.2/release/1.25.0-notes.html#improvements)
  * [Performance improvements and changes](https://numpy.org/doc/2.2/release/1.25.0-notes.html#performance-improvements-and-changes)
  * [Changes](https://numpy.org/doc/2.2/release/1.25.0-notes.html#changes)
* [1.24.4](https://numpy.org/doc/2.2/release/1.24.4-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.24.4-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.24.4-notes.html#pull-requests-merged)
* [1.24.3](https://numpy.org/doc/2.2/release/1.24.3-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.24.3-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.24.3-notes.html#pull-requests-merged)
* [1.24.2](https://numpy.org/doc/2.2/release/1.24.2-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.24.2-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.24.2-notes.html#pull-requests-merged)
* [1.24.1](https://numpy.org/doc/2.2/release/1.24.1-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.24.1-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.24.1-notes.html#pull-requests-merged)
* [1.24.0](https://numpy.org/doc/2.2/release/1.24.0-notes.html)
  * [Deprecations](https://numpy.org/doc/2.2/release/1.24.0-notes.html#deprecations)
  * [Expired deprecations](https://numpy.org/doc/2.2/release/1.24.0-notes.html#expired-deprecations)
  * [Compatibility notes](https://numpy.org/doc/2.2/release/1.24.0-notes.html#compatibility-notes)
  * [New Features](https://numpy.org/doc/2.2/release/1.24.0-notes.html#new-features)
  * [Improvements](https://numpy.org/doc/2.2/release/1.24.0-notes.html#improvements)
  * [Performance improvements and changes](https://numpy.org/doc/2.2/release/1.24.0-notes.html#performance-improvements-and-changes)
  * [Changes](https://numpy.org/doc/2.2/release/1.24.0-notes.html#changes)
* [1.23.5](https://numpy.org/doc/2.2/release/1.23.5-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.23.5-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.23.5-notes.html#pull-requests-merged)
* [1.23.4](https://numpy.org/doc/2.2/release/1.23.4-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.23.4-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.23.4-notes.html#pull-requests-merged)
* [1.23.3](https://numpy.org/doc/2.2/release/1.23.3-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.23.3-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.23.3-notes.html#pull-requests-merged)
* [1.23.2](https://numpy.org/doc/2.2/release/1.23.2-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.23.2-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.23.2-notes.html#pull-requests-merged)
* [1.23.1](https://numpy.org/doc/2.2/release/1.23.1-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.23.1-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.23.1-notes.html#pull-requests-merged)
* [1.23.0](https://numpy.org/doc/2.2/release/1.23.0-notes.html)
  * [New functions](https://numpy.org/doc/2.2/release/1.23.0-notes.html#new-functions)
  * [Deprecations](https://numpy.org/doc/2.2/release/1.23.0-notes.html#deprecations)
  * [Expired deprecations](https://numpy.org/doc/2.2/release/1.23.0-notes.html#expired-deprecations)
  * [New Features](https://numpy.org/doc/2.2/release/1.23.0-notes.html#new-features)
  * [Compatibility notes](https://numpy.org/doc/2.2/release/1.23.0-notes.html#compatibility-notes)
  * [Improvements](https://numpy.org/doc/2.2/release/1.23.0-notes.html#improvements)
  * [Performance improvements and changes](https://numpy.org/doc/2.2/release/1.23.0-notes.html#performance-improvements-and-changes)
* [1.22.4](https://numpy.org/doc/2.2/release/1.22.4-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.22.4-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.22.4-notes.html#pull-requests-merged)
* [1.22.3](https://numpy.org/doc/2.2/release/1.22.3-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.22.3-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.22.3-notes.html#pull-requests-merged)
* [1.22.2](https://numpy.org/doc/2.2/release/1.22.2-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.22.2-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.22.2-notes.html#pull-requests-merged)
* [1.22.1](https://numpy.org/doc/2.2/release/1.22.1-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.22.1-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.22.1-notes.html#pull-requests-merged)
* [1.22.0](https://numpy.org/doc/2.2/release/1.22.0-notes.html)
  * [Expired deprecations](https://numpy.org/doc/2.2/release/1.22.0-notes.html#expired-deprecations)
  * [Deprecations](https://numpy.org/doc/2.2/release/1.22.0-notes.html#deprecations)
  * [Compatibility notes](https://numpy.org/doc/2.2/release/1.22.0-notes.html#compatibility-notes)
  * [C API changes](https://numpy.org/doc/2.2/release/1.22.0-notes.html#c-api-changes)
  * [New Features](https://numpy.org/doc/2.2/release/1.22.0-notes.html#new-features)
  * [Improvements](https://numpy.org/doc/2.2/release/1.22.0-notes.html#improvements)
* [1.21.6](https://numpy.org/doc/2.2/release/1.21.6-notes.html)
* [1.21.5](https://numpy.org/doc/2.2/release/1.21.5-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.21.5-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.21.5-notes.html#pull-requests-merged)
* [1.21.4](https://numpy.org/doc/2.2/release/1.21.4-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.21.4-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.21.4-notes.html#pull-requests-merged)
* [1.21.3](https://numpy.org/doc/2.2/release/1.21.3-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.21.3-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.21.3-notes.html#pull-requests-merged)
* [1.21.2](https://numpy.org/doc/2.2/release/1.21.2-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.21.2-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.21.2-notes.html#pull-requests-merged)
* [1.21.1](https://numpy.org/doc/2.2/release/1.21.1-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.21.1-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.21.1-notes.html#pull-requests-merged)
* [1.21.0](https://numpy.org/doc/2.2/release/1.21.0-notes.html)
  * [New functions](https://numpy.org/doc/2.2/release/1.21.0-notes.html#new-functions)
  * [Expired deprecations](https://numpy.org/doc/2.2/release/1.21.0-notes.html#expired-deprecations)
  * [Deprecations](https://numpy.org/doc/2.2/release/1.21.0-notes.html#deprecations)
  * [Expired deprecations](https://numpy.org/doc/2.2/release/1.21.0-notes.html#id2)
  * [Compatibility notes](https://numpy.org/doc/2.2/release/1.21.0-notes.html#compatibility-notes)
  * [C API changes](https://numpy.org/doc/2.2/release/1.21.0-notes.html#c-api-changes)
  * [New Features](https://numpy.org/doc/2.2/release/1.21.0-notes.html#new-features)
  * [Improvements](https://numpy.org/doc/2.2/release/1.21.0-notes.html#improvements)
  * [Performance improvements](https://numpy.org/doc/2.2/release/1.21.0-notes.html#performance-improvements)
  * [Changes](https://numpy.org/doc/2.2/release/1.21.0-notes.html#changes)
* [1.20.3](https://numpy.org/doc/2.2/release/1.20.3-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.20.3-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.20.3-notes.html#pull-requests-merged)
* [1.20.2](https://numpy.org/doc/2.2/release/1.20.2-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.20.2-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.20.2-notes.html#pull-requests-merged)
* [1.20.1](https://numpy.org/doc/2.2/release/1.20.1-notes.html)
  * [Highlights](https://numpy.org/doc/2.2/release/1.20.1-notes.html#highlights)
  * [Contributors](https://numpy.org/doc/2.2/release/1.20.1-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.20.1-notes.html#pull-requests-merged)
* [1.20.0](https://numpy.org/doc/2.2/release/1.20.0-notes.html)
  * [New functions](https://numpy.org/doc/2.2/release/1.20.0-notes.html#new-functions)
  * [Deprecations](https://numpy.org/doc/2.2/release/1.20.0-notes.html#deprecations)
  * [Future Changes](https://numpy.org/doc/2.2/release/1.20.0-notes.html#future-changes)
  * [Expired deprecations](https://numpy.org/doc/2.2/release/1.20.0-notes.html#expired-deprecations)
  * [Compatibility notes](https://numpy.org/doc/2.2/release/1.20.0-notes.html#compatibility-notes)
  * [C API changes](https://numpy.org/doc/2.2/release/1.20.0-notes.html#c-api-changes)
  * [New Features](https://numpy.org/doc/2.2/release/1.20.0-notes.html#new-features)
  * [Improvements](https://numpy.org/doc/2.2/release/1.20.0-notes.html#improvements)
  * [Performance improvements and changes](https://numpy.org/doc/2.2/release/1.20.0-notes.html#performance-improvements-and-changes)
  * [Changes](https://numpy.org/doc/2.2/release/1.20.0-notes.html#changes)
* [1.19.5](https://numpy.org/doc/2.2/release/1.19.5-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.19.5-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.19.5-notes.html#pull-requests-merged)
* [1.19.4](https://numpy.org/doc/2.2/release/1.19.4-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.19.4-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.19.4-notes.html#pull-requests-merged)
* [1.19.3](https://numpy.org/doc/2.2/release/1.19.3-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.19.3-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.19.3-notes.html#pull-requests-merged)
* [1.19.2](https://numpy.org/doc/2.2/release/1.19.2-notes.html)
  * [Improvements](https://numpy.org/doc/2.2/release/1.19.2-notes.html#improvements)
  * [Contributors](https://numpy.org/doc/2.2/release/1.19.2-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.19.2-notes.html#pull-requests-merged)
* [1.19.1](https://numpy.org/doc/2.2/release/1.19.1-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.19.1-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.19.1-notes.html#pull-requests-merged)
* [1.19.0](https://numpy.org/doc/2.2/release/1.19.0-notes.html)
  * [Highlights](https://numpy.org/doc/2.2/release/1.19.0-notes.html#highlights)
  * [Expired deprecations](https://numpy.org/doc/2.2/release/1.19.0-notes.html#expired-deprecations)
  * [Compatibility notes](https://numpy.org/doc/2.2/release/1.19.0-notes.html#compatibility-notes)
  * [Deprecations](https://numpy.org/doc/2.2/release/1.19.0-notes.html#deprecations)
  * [C API changes](https://numpy.org/doc/2.2/release/1.19.0-notes.html#c-api-changes)
  * [New Features](https://numpy.org/doc/2.2/release/1.19.0-notes.html#new-features)
  * [Improvements](https://numpy.org/doc/2.2/release/1.19.0-notes.html#improvements)
  * [Improve detection of CPU features](https://numpy.org/doc/2.2/release/1.19.0-notes.html#improve-detection-of-cpu-features)
  * [Changes](https://numpy.org/doc/2.2/release/1.19.0-notes.html#changes)
* [1.18.5](https://numpy.org/doc/2.2/release/1.18.5-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.18.5-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.18.5-notes.html#pull-requests-merged)
* [1.18.4](https://numpy.org/doc/2.2/release/1.18.4-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.18.4-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.18.4-notes.html#pull-requests-merged)
* [1.18.3](https://numpy.org/doc/2.2/release/1.18.3-notes.html)
  * [Highlights](https://numpy.org/doc/2.2/release/1.18.3-notes.html#highlights)
  * [Contributors](https://numpy.org/doc/2.2/release/1.18.3-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.18.3-notes.html#pull-requests-merged)
* [1.18.2](https://numpy.org/doc/2.2/release/1.18.2-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.18.2-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.18.2-notes.html#pull-requests-merged)
* [1.18.1](https://numpy.org/doc/2.2/release/1.18.1-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.18.1-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.18.1-notes.html#pull-requests-merged)
* [1.18.0](https://numpy.org/doc/2.2/release/1.18.0-notes.html)
  * [Highlights](https://numpy.org/doc/2.2/release/1.18.0-notes.html#highlights)
  * [New functions](https://numpy.org/doc/2.2/release/1.18.0-notes.html#new-functions)
  * [Deprecations](https://numpy.org/doc/2.2/release/1.18.0-notes.html#deprecations)
  * [Expired deprecations](https://numpy.org/doc/2.2/release/1.18.0-notes.html#expired-deprecations)
  * [Compatibility notes](https://numpy.org/doc/2.2/release/1.18.0-notes.html#compatibility-notes)
  * [C API changes](https://numpy.org/doc/2.2/release/1.18.0-notes.html#c-api-changes)
  * [New Features](https://numpy.org/doc/2.2/release/1.18.0-notes.html#new-features)
  * [Improvements](https://numpy.org/doc/2.2/release/1.18.0-notes.html#improvements)
  * [Changes](https://numpy.org/doc/2.2/release/1.18.0-notes.html#changes)
* [1.17.5](https://numpy.org/doc/2.2/release/1.17.5-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.17.5-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.17.5-notes.html#pull-requests-merged)
* [1.17.4](https://numpy.org/doc/2.2/release/1.17.4-notes.html)
  * [Highlights](https://numpy.org/doc/2.2/release/1.17.4-notes.html#highlights)
  * [Contributors](https://numpy.org/doc/2.2/release/1.17.4-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.17.4-notes.html#pull-requests-merged)
* [1.17.3](https://numpy.org/doc/2.2/release/1.17.3-notes.html)
  * [Highlights](https://numpy.org/doc/2.2/release/1.17.3-notes.html#highlights)
  * [Compatibility notes](https://numpy.org/doc/2.2/release/1.17.3-notes.html#compatibility-notes)
  * [Contributors](https://numpy.org/doc/2.2/release/1.17.3-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.17.3-notes.html#pull-requests-merged)
* [1.17.2](https://numpy.org/doc/2.2/release/1.17.2-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.17.2-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.17.2-notes.html#pull-requests-merged)
* [1.17.1](https://numpy.org/doc/2.2/release/1.17.1-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.17.1-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.17.1-notes.html#pull-requests-merged)
* [1.17.0](https://numpy.org/doc/2.2/release/1.17.0-notes.html)
  * [Highlights](https://numpy.org/doc/2.2/release/1.17.0-notes.html#highlights)
  * [New functions](https://numpy.org/doc/2.2/release/1.17.0-notes.html#new-functions)
  * [Deprecations](https://numpy.org/doc/2.2/release/1.17.0-notes.html#deprecations)
  * [Future Changes](https://numpy.org/doc/2.2/release/1.17.0-notes.html#future-changes)
  * [Compatibility notes](https://numpy.org/doc/2.2/release/1.17.0-notes.html#compatibility-notes)
  * [C API changes](https://numpy.org/doc/2.2/release/1.17.0-notes.html#c-api-changes)
  * [New Features](https://numpy.org/doc/2.2/release/1.17.0-notes.html#new-features)
  * [Improvements](https://numpy.org/doc/2.2/release/1.17.0-notes.html#improvements)
  * [Changes](https://numpy.org/doc/2.2/release/1.17.0-notes.html#changes)
* [1.16.6](https://numpy.org/doc/2.2/release/1.16.6-notes.html)
  * [Highlights](https://numpy.org/doc/2.2/release/1.16.6-notes.html#highlights)
  * [New functions](https://numpy.org/doc/2.2/release/1.16.6-notes.html#new-functions)
  * [Compatibility notes](https://numpy.org/doc/2.2/release/1.16.6-notes.html#compatibility-notes)
  * [Improvements](https://numpy.org/doc/2.2/release/1.16.6-notes.html#improvements)
  * [Contributors](https://numpy.org/doc/2.2/release/1.16.6-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.16.6-notes.html#pull-requests-merged)
* [1.16.5](https://numpy.org/doc/2.2/release/1.16.5-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.16.5-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.16.5-notes.html#pull-requests-merged)
* [1.16.4](https://numpy.org/doc/2.2/release/1.16.4-notes.html)
  * [New deprecations](https://numpy.org/doc/2.2/release/1.16.4-notes.html#new-deprecations)
  * [Compatibility notes](https://numpy.org/doc/2.2/release/1.16.4-notes.html#compatibility-notes)
  * [Changes](https://numpy.org/doc/2.2/release/1.16.4-notes.html#changes)
  * [Contributors](https://numpy.org/doc/2.2/release/1.16.4-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.16.4-notes.html#pull-requests-merged)
* [1.16.3](https://numpy.org/doc/2.2/release/1.16.3-notes.html)
  * [Compatibility notes](https://numpy.org/doc/2.2/release/1.16.3-notes.html#compatibility-notes)
  * [Improvements](https://numpy.org/doc/2.2/release/1.16.3-notes.html#improvements)
  * [Changes](https://numpy.org/doc/2.2/release/1.16.3-notes.html#changes)
* [1.16.2](https://numpy.org/doc/2.2/release/1.16.2-notes.html)
  * [Compatibility notes](https://numpy.org/doc/2.2/release/1.16.2-notes.html#compatibility-notes)
  * [Contributors](https://numpy.org/doc/2.2/release/1.16.2-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.16.2-notes.html#pull-requests-merged)
* [1.16.1](https://numpy.org/doc/2.2/release/1.16.1-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.16.1-notes.html#contributors)
  * [Enhancements](https://numpy.org/doc/2.2/release/1.16.1-notes.html#enhancements)
  * [Compatibility notes](https://numpy.org/doc/2.2/release/1.16.1-notes.html#compatibility-notes)
  * [New Features](https://numpy.org/doc/2.2/release/1.16.1-notes.html#new-features)
  * [Improvements](https://numpy.org/doc/2.2/release/1.16.1-notes.html#improvements)
  * [Changes](https://numpy.org/doc/2.2/release/1.16.1-notes.html#changes)
* [1.16.0](https://numpy.org/doc/2.2/release/1.16.0-notes.html)
  * [Highlights](https://numpy.org/doc/2.2/release/1.16.0-notes.html#highlights)
  * [New functions](https://numpy.org/doc/2.2/release/1.16.0-notes.html#new-functions)
  * [New deprecations](https://numpy.org/doc/2.2/release/1.16.0-notes.html#new-deprecations)
  * [Expired deprecations](https://numpy.org/doc/2.2/release/1.16.0-notes.html#expired-deprecations)
  * [Future changes](https://numpy.org/doc/2.2/release/1.16.0-notes.html#future-changes)
  * [Compatibility notes](https://numpy.org/doc/2.2/release/1.16.0-notes.html#compatibility-notes)
  * [C API changes](https://numpy.org/doc/2.2/release/1.16.0-notes.html#c-api-changes)
  * [New Features](https://numpy.org/doc/2.2/release/1.16.0-notes.html#new-features)
  * [Improvements](https://numpy.org/doc/2.2/release/1.16.0-notes.html#improvements)
  * [Changes](https://numpy.org/doc/2.2/release/1.16.0-notes.html#changes)
* [1.15.4](https://numpy.org/doc/2.2/release/1.15.4-notes.html)
  * [Compatibility Note](https://numpy.org/doc/2.2/release/1.15.4-notes.html#compatibility-note)
  * [Contributors](https://numpy.org/doc/2.2/release/1.15.4-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.15.4-notes.html#pull-requests-merged)
* [1.15.3](https://numpy.org/doc/2.2/release/1.15.3-notes.html)
  * [Compatibility Note](https://numpy.org/doc/2.2/release/1.15.3-notes.html#compatibility-note)
  * [Contributors](https://numpy.org/doc/2.2/release/1.15.3-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.15.3-notes.html#pull-requests-merged)
* [1.15.2](https://numpy.org/doc/2.2/release/1.15.2-notes.html)
  * [Compatibility Note](https://numpy.org/doc/2.2/release/1.15.2-notes.html#compatibility-note)
  * [Contributors](https://numpy.org/doc/2.2/release/1.15.2-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.15.2-notes.html#pull-requests-merged)
* [1.15.1](https://numpy.org/doc/2.2/release/1.15.1-notes.html)
  * [Compatibility Note](https://numpy.org/doc/2.2/release/1.15.1-notes.html#compatibility-note)
  * [Contributors](https://numpy.org/doc/2.2/release/1.15.1-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.15.1-notes.html#pull-requests-merged)
* [1.15.0](https://numpy.org/doc/2.2/release/1.15.0-notes.html)
  * [Highlights](https://numpy.org/doc/2.2/release/1.15.0-notes.html#highlights)
  * [New functions](https://numpy.org/doc/2.2/release/1.15.0-notes.html#new-functions)
  * [Deprecations](https://numpy.org/doc/2.2/release/1.15.0-notes.html#deprecations)
  * [Future Changes](https://numpy.org/doc/2.2/release/1.15.0-notes.html#future-changes)
  * [Compatibility notes](https://numpy.org/doc/2.2/release/1.15.0-notes.html#compatibility-notes)
  * [C API changes](https://numpy.org/doc/2.2/release/1.15.0-notes.html#c-api-changes)
  * [New Features](https://numpy.org/doc/2.2/release/1.15.0-notes.html#new-features)
  * [Improvements](https://numpy.org/doc/2.2/release/1.15.0-notes.html#improvements)
* [1.14.6](https://numpy.org/doc/2.2/release/1.14.6-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.14.6-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.14.6-notes.html#pull-requests-merged)
* [1.14.5](https://numpy.org/doc/2.2/release/1.14.5-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.14.5-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.14.5-notes.html#pull-requests-merged)
* [1.14.4](https://numpy.org/doc/2.2/release/1.14.4-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.14.4-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.14.4-notes.html#pull-requests-merged)
* [1.14.3](https://numpy.org/doc/2.2/release/1.14.3-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.14.3-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.14.3-notes.html#pull-requests-merged)
* [1.14.2](https://numpy.org/doc/2.2/release/1.14.2-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.14.2-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.14.2-notes.html#pull-requests-merged)
* [1.14.1](https://numpy.org/doc/2.2/release/1.14.1-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.14.1-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.14.1-notes.html#pull-requests-merged)
* [1.14.0](https://numpy.org/doc/2.2/release/1.14.0-notes.html)
  * [Highlights](https://numpy.org/doc/2.2/release/1.14.0-notes.html#highlights)
  * [New functions](https://numpy.org/doc/2.2/release/1.14.0-notes.html#new-functions)
  * [Deprecations](https://numpy.org/doc/2.2/release/1.14.0-notes.html#deprecations)
  * [Future Changes](https://numpy.org/doc/2.2/release/1.14.0-notes.html#future-changes)
  * [Compatibility notes](https://numpy.org/doc/2.2/release/1.14.0-notes.html#compatibility-notes)
  * [C API changes](https://numpy.org/doc/2.2/release/1.14.0-notes.html#c-api-changes)
  * [New Features](https://numpy.org/doc/2.2/release/1.14.0-notes.html#new-features)
  * [Improvements](https://numpy.org/doc/2.2/release/1.14.0-notes.html#improvements)
  * [Changes](https://numpy.org/doc/2.2/release/1.14.0-notes.html#changes)
* [1.13.3](https://numpy.org/doc/2.2/release/1.13.3-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.13.3-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.13.3-notes.html#pull-requests-merged)
* [1.13.2](https://numpy.org/doc/2.2/release/1.13.2-notes.html)
  * [Contributors](https://numpy.org/doc/2.2/release/1.13.2-notes.html#contributors)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.13.2-notes.html#pull-requests-merged)
* [1.13.1](https://numpy.org/doc/2.2/release/1.13.1-notes.html)
  * [Pull requests merged](https://numpy.org/doc/2.2/release/1.13.1-notes.html#pull-requests-merged)
  * [Contributors](https://numpy.org/doc/2.2/release/1.13.1-notes.html#contributors)
* 
[1.13.0](https://numpy.org/doc/2.2/release/1.13.0-notes.html) * [Highlights](https://numpy.org/doc/2.2/release/1.13.0-notes.html#highlights) * [New functions](https://numpy.org/doc/2.2/release/1.13.0-notes.html#new-functions) * [Deprecations](https://numpy.org/doc/2.2/release/1.13.0-notes.html#deprecations) * [Future Changes](https://numpy.org/doc/2.2/release/1.13.0-notes.html#future-changes) * [Build System Changes](https://numpy.org/doc/2.2/release/1.13.0-notes.html#build-system-changes) * [Compatibility notes](https://numpy.org/doc/2.2/release/1.13.0-notes.html#compatibility-notes) * [C API changes](https://numpy.org/doc/2.2/release/1.13.0-notes.html#c-api-changes) * [New Features](https://numpy.org/doc/2.2/release/1.13.0-notes.html#new-features) * [Improvements](https://numpy.org/doc/2.2/release/1.13.0-notes.html#improvements) * [Changes](https://numpy.org/doc/2.2/release/1.13.0-notes.html#changes) * [1.12.1](https://numpy.org/doc/2.2/release/1.12.1-notes.html) * [Bugs Fixed](https://numpy.org/doc/2.2/release/1.12.1-notes.html#bugs-fixed) * [1.12.0](https://numpy.org/doc/2.2/release/1.12.0-notes.html) * [Highlights](https://numpy.org/doc/2.2/release/1.12.0-notes.html#highlights) * [Dropped Support](https://numpy.org/doc/2.2/release/1.12.0-notes.html#dropped-support) * [Added Support](https://numpy.org/doc/2.2/release/1.12.0-notes.html#added-support) * [Build System Changes](https://numpy.org/doc/2.2/release/1.12.0-notes.html#build-system-changes) * [Deprecations](https://numpy.org/doc/2.2/release/1.12.0-notes.html#deprecations) * [Future Changes](https://numpy.org/doc/2.2/release/1.12.0-notes.html#future-changes) * [Compatibility notes](https://numpy.org/doc/2.2/release/1.12.0-notes.html#compatibility-notes) * [New Features](https://numpy.org/doc/2.2/release/1.12.0-notes.html#new-features) * [Improvements](https://numpy.org/doc/2.2/release/1.12.0-notes.html#improvements) * [Changes](https://numpy.org/doc/2.2/release/1.12.0-notes.html#changes) * 
[1.11.3](https://numpy.org/doc/2.2/release/1.11.3-notes.html) * [Contributors to maintenance/1.11.3](https://numpy.org/doc/2.2/release/1.11.3-notes.html#contributors-to-maintenance-1-11-3) * [Pull Requests Merged](https://numpy.org/doc/2.2/release/1.11.3-notes.html#pull-requests-merged) * [1.11.2](https://numpy.org/doc/2.2/release/1.11.2-notes.html) * [Pull Requests Merged](https://numpy.org/doc/2.2/release/1.11.2-notes.html#pull-requests-merged) * [1.11.1](https://numpy.org/doc/2.2/release/1.11.1-notes.html) * [Fixes Merged](https://numpy.org/doc/2.2/release/1.11.1-notes.html#fixes-merged) * [1.11.0](https://numpy.org/doc/2.2/release/1.11.0-notes.html) * [Highlights](https://numpy.org/doc/2.2/release/1.11.0-notes.html#highlights) * [Build System Changes](https://numpy.org/doc/2.2/release/1.11.0-notes.html#build-system-changes) * [Future Changes](https://numpy.org/doc/2.2/release/1.11.0-notes.html#future-changes) * [Compatibility notes](https://numpy.org/doc/2.2/release/1.11.0-notes.html#compatibility-notes) * [New Features](https://numpy.org/doc/2.2/release/1.11.0-notes.html#new-features) * [Improvements](https://numpy.org/doc/2.2/release/1.11.0-notes.html#improvements) * [Changes](https://numpy.org/doc/2.2/release/1.11.0-notes.html#changes) * [Deprecations](https://numpy.org/doc/2.2/release/1.11.0-notes.html#deprecations) * [FutureWarnings](https://numpy.org/doc/2.2/release/1.11.0-notes.html#futurewarnings) * [1.10.4](https://numpy.org/doc/2.2/release/1.10.4-notes.html) * [Compatibility notes](https://numpy.org/doc/2.2/release/1.10.4-notes.html#compatibility-notes) * [Issues Fixed](https://numpy.org/doc/2.2/release/1.10.4-notes.html#issues-fixed) * [Merged PRs](https://numpy.org/doc/2.2/release/1.10.4-notes.html#merged-prs) * [1.10.3](https://numpy.org/doc/2.2/release/1.10.3-notes.html) * [1.10.2](https://numpy.org/doc/2.2/release/1.10.2-notes.html) * [Compatibility notes](https://numpy.org/doc/2.2/release/1.10.2-notes.html#compatibility-notes) * [Issues 
Fixed](https://numpy.org/doc/2.2/release/1.10.2-notes.html#issues-fixed) * [Merged PRs](https://numpy.org/doc/2.2/release/1.10.2-notes.html#merged-prs) * [Notes](https://numpy.org/doc/2.2/release/1.10.2-notes.html#notes) * [1.10.1](https://numpy.org/doc/2.2/release/1.10.1-notes.html) * [1.10.0](https://numpy.org/doc/2.2/release/1.10.0-notes.html) * [Highlights](https://numpy.org/doc/2.2/release/1.10.0-notes.html#highlights) * [Dropped Support](https://numpy.org/doc/2.2/release/1.10.0-notes.html#dropped-support) * [Future Changes](https://numpy.org/doc/2.2/release/1.10.0-notes.html#future-changes) * [Compatibility notes](https://numpy.org/doc/2.2/release/1.10.0-notes.html#compatibility-notes) * [New Features](https://numpy.org/doc/2.2/release/1.10.0-notes.html#new-features) * [Improvements](https://numpy.org/doc/2.2/release/1.10.0-notes.html#improvements) * [Changes](https://numpy.org/doc/2.2/release/1.10.0-notes.html#changes) * [Deprecations](https://numpy.org/doc/2.2/release/1.10.0-notes.html#deprecations) * [1.9.2](https://numpy.org/doc/2.2/release/1.9.2-notes.html) * [Issues fixed](https://numpy.org/doc/2.2/release/1.9.2-notes.html#issues-fixed) * [1.9.1](https://numpy.org/doc/2.2/release/1.9.1-notes.html) * [Issues fixed](https://numpy.org/doc/2.2/release/1.9.1-notes.html#issues-fixed) * [1.9.0](https://numpy.org/doc/2.2/release/1.9.0-notes.html) * [Highlights](https://numpy.org/doc/2.2/release/1.9.0-notes.html#highlights) * [Dropped Support](https://numpy.org/doc/2.2/release/1.9.0-notes.html#dropped-support) * [Future Changes](https://numpy.org/doc/2.2/release/1.9.0-notes.html#future-changes) * [Compatibility notes](https://numpy.org/doc/2.2/release/1.9.0-notes.html#compatibility-notes) * [New Features](https://numpy.org/doc/2.2/release/1.9.0-notes.html#new-features) * [Improvements](https://numpy.org/doc/2.2/release/1.9.0-notes.html#improvements) * [Deprecations](https://numpy.org/doc/2.2/release/1.9.0-notes.html#deprecations) * 
[1.8.2](https://numpy.org/doc/2.2/release/1.8.2-notes.html) * [Issues fixed](https://numpy.org/doc/2.2/release/1.8.2-notes.html#issues-fixed) * [1.8.1](https://numpy.org/doc/2.2/release/1.8.1-notes.html) * [Issues fixed](https://numpy.org/doc/2.2/release/1.8.1-notes.html#issues-fixed) * [Changes](https://numpy.org/doc/2.2/release/1.8.1-notes.html#changes) * [Deprecations](https://numpy.org/doc/2.2/release/1.8.1-notes.html#deprecations) * [1.8.0](https://numpy.org/doc/2.2/release/1.8.0-notes.html) * [Highlights](https://numpy.org/doc/2.2/release/1.8.0-notes.html#highlights) * [Dropped Support](https://numpy.org/doc/2.2/release/1.8.0-notes.html#dropped-support) * [Future Changes](https://numpy.org/doc/2.2/release/1.8.0-notes.html#future-changes) * [Compatibility notes](https://numpy.org/doc/2.2/release/1.8.0-notes.html#compatibility-notes) * [New Features](https://numpy.org/doc/2.2/release/1.8.0-notes.html#new-features) * [Improvements](https://numpy.org/doc/2.2/release/1.8.0-notes.html#improvements) * [Changes](https://numpy.org/doc/2.2/release/1.8.0-notes.html#changes) * [Deprecations](https://numpy.org/doc/2.2/release/1.8.0-notes.html#deprecations) * [Authors](https://numpy.org/doc/2.2/release/1.8.0-notes.html#authors) * [1.7.2](https://numpy.org/doc/2.2/release/1.7.2-notes.html) * [Issues fixed](https://numpy.org/doc/2.2/release/1.7.2-notes.html#issues-fixed) * [1.7.1](https://numpy.org/doc/2.2/release/1.7.1-notes.html) * [Issues fixed](https://numpy.org/doc/2.2/release/1.7.1-notes.html#issues-fixed) * [1.7.0](https://numpy.org/doc/2.2/release/1.7.0-notes.html) * [Highlights](https://numpy.org/doc/2.2/release/1.7.0-notes.html#highlights) * [Compatibility notes](https://numpy.org/doc/2.2/release/1.7.0-notes.html#compatibility-notes) * [New features](https://numpy.org/doc/2.2/release/1.7.0-notes.html#new-features) * [Changes](https://numpy.org/doc/2.2/release/1.7.0-notes.html#changes) * [Deprecations](https://numpy.org/doc/2.2/release/1.7.0-notes.html#deprecations) 
* [1.6.2](https://numpy.org/doc/2.2/release/1.6.2-notes.html) * [Issues fixed](https://numpy.org/doc/2.2/release/1.6.2-notes.html#issues-fixed) * [Changes](https://numpy.org/doc/2.2/release/1.6.2-notes.html#changes) * [1.6.1](https://numpy.org/doc/2.2/release/1.6.1-notes.html) * [Issues Fixed](https://numpy.org/doc/2.2/release/1.6.1-notes.html#issues-fixed) * [1.6.0](https://numpy.org/doc/2.2/release/1.6.0-notes.html) * [Highlights](https://numpy.org/doc/2.2/release/1.6.0-notes.html#highlights) * [New features](https://numpy.org/doc/2.2/release/1.6.0-notes.html#new-features) * [Changes](https://numpy.org/doc/2.2/release/1.6.0-notes.html#changes) * [Deprecated features](https://numpy.org/doc/2.2/release/1.6.0-notes.html#deprecated-features) * [Removed features](https://numpy.org/doc/2.2/release/1.6.0-notes.html#removed-features) * [1.5.0](https://numpy.org/doc/2.2/release/1.5.0-notes.html) * [Highlights](https://numpy.org/doc/2.2/release/1.5.0-notes.html#highlights) * [New features](https://numpy.org/doc/2.2/release/1.5.0-notes.html#new-features) * [Changes](https://numpy.org/doc/2.2/release/1.5.0-notes.html#changes) * [1.4.0](https://numpy.org/doc/2.2/release/1.4.0-notes.html) * [Highlights](https://numpy.org/doc/2.2/release/1.4.0-notes.html#highlights) * [New features](https://numpy.org/doc/2.2/release/1.4.0-notes.html#new-features) * [Improvements](https://numpy.org/doc/2.2/release/1.4.0-notes.html#improvements) * [Deprecations](https://numpy.org/doc/2.2/release/1.4.0-notes.html#deprecations) * [Internal changes](https://numpy.org/doc/2.2/release/1.4.0-notes.html#internal-changes) * [1.3.0](https://numpy.org/doc/2.2/release/1.3.0-notes.html) * [Highlights](https://numpy.org/doc/2.2/release/1.3.0-notes.html#highlights) * [New features](https://numpy.org/doc/2.2/release/1.3.0-notes.html#new-features) * [Deprecated features](https://numpy.org/doc/2.2/release/1.3.0-notes.html#deprecated-features) * [Documentation 
# NumPy: the absolute basics for beginners Welcome to the absolute beginner’s guide to NumPy! NumPy (**Num**erical **Py**thon) is an open source Python library that’s widely used in science and engineering. The NumPy library contains multidimensional array data structures, such as the homogeneous, N-dimensional `ndarray`, and a large library of functions that operate efficiently on these data structures. Learn more about NumPy at [What is NumPy](whatisnumpy#whatisnumpy), and if you have comments or suggestions, please [reach out](https://numpy.org/community/)! ## How to import NumPy After [installing NumPy](https://numpy.org/install/), it may be imported into Python code like: import numpy as np This widespread convention allows access to NumPy features with a short, recognizable prefix (`np.`) while distinguishing NumPy features from others that have the same name. ## Reading the example code Throughout the NumPy documentation, you will find blocks that look like: >>> a = np.array([[1, 2, 3], ... [4, 5, 6]]) >>> a.shape (2, 3) Text preceded by `>>>` or `...` is **input**, the code that you would enter in a script or at a Python prompt. Everything else is **output**, the results of running your code. Note that `>>>` and `...` are not part of the code and may cause an error if entered at a Python prompt. ## Why use NumPy? Python lists are excellent, general-purpose containers. They can be “heterogeneous”, meaning that they can contain elements of a variety of types, and they are quite fast when used to perform individual operations on a handful of elements. 
Depending on the characteristics of the data and the types of operations that need to be performed, other containers may be more appropriate; by exploiting these characteristics, we can improve speed, reduce memory consumption, and offer a high-level syntax for performing a variety of common processing tasks. NumPy shines when there are large quantities of “homogeneous” (same-type) data to be processed on the CPU. ## What is an “array”? In computer programming, an array is a structure for storing and retrieving data. We often talk about an array as if it were a grid in space, with each cell storing one element of the data. For instance, if each element of the data were a number, we might visualize a “one-dimensional” array like a list: \\[\begin{split}\begin{array}{|c|c|c|c|} \hline 1 & 5 & 2 & 0 \\ \hline \end{array}\end{split}\\] A two-dimensional array would be like a table: \\[\begin{split}\begin{array}{|c|c|c|c|} \hline 1 & 5 & 2 & 0 \\ \hline 8 & 3 & 6 & 1 \\ \hline 1 & 7 & 2 & 9 \\ \hline \end{array}\end{split}\\] A three-dimensional array would be like a set of tables, perhaps stacked as though they were printed on separate pages. In NumPy, this idea is generalized to an arbitrary number of dimensions, and so the fundamental array class is called `ndarray`: it represents an “N-dimensional array”. Most NumPy arrays have some restrictions. For instance: * All elements of the array must be of the same type of data. * Once created, the total size of the array can’t change. * The shape must be “rectangular”, not “jagged”; e.g., each row of a two-dimensional array must have the same number of columns. When these conditions are met, NumPy exploits these characteristics to make the array faster, more memory efficient, and more convenient to use than less restrictive data structures. For the remainder of this document, we will use the word “array” to refer to an instance of `ndarray`. 
## Array fundamentals One way to initialize an array is using a Python sequence, such as a list. For example: >>> a = np.array([1, 2, 3, 4, 5, 6]) >>> a array([1, 2, 3, 4, 5, 6]) Elements of an array can be accessed in [various ways](quickstart#quickstart-indexing-slicing-and-iterating). For instance, we can access an individual element of this array as we would access an element in the original list: using the integer index of the element within square brackets. >>> a[0] 1 Note As with built-in Python sequences, NumPy arrays are “0-indexed”: the first element of the array is accessed using index `0`, not `1`. Like the original list, the array is mutable. >>> a[0] = 10 >>> a array([10, 2, 3, 4, 5, 6]) Also like the original list, Python slice notation can be used for indexing. >>> a[:3] array([10, 2, 3]) One major difference is that slice indexing of a list copies the elements into a new list, but slicing an array returns a _view_: an object that refers to the data in the original array. The original array can be mutated using the view. >>> b = a[3:] >>> b array([4, 5, 6]) >>> b[0] = 40 >>> a array([ 10, 2, 3, 40, 5, 6]) See [Copies and views](basics.copies#basics-copies-and-views) for a more comprehensive explanation of when array operations return views rather than copies. Two- and higher-dimensional arrays can be initialized from nested Python sequences: >>> a = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]) >>> a array([[ 1, 2, 3, 4], [ 5, 6, 7, 8], [ 9, 10, 11, 12]]) In NumPy, a dimension of an array is sometimes referred to as an “axis”. This terminology may be useful to disambiguate between the dimensionality of an array and the dimensionality of the data represented by the array. For instance, the array `a` could represent three points, each lying within a four-dimensional space, but `a` has only two “axes”. 
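Whether an operation handed you a view or a copy can be checked programmatically with `np.shares_memory`; a small sketch:

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5, 6])

# A slice is a view: it shares memory with the original array,
# so writing through it mutates `a`.
b = a[3:]
print(np.shares_memory(a, b))   # True

# Integer-array ("fancy") indexing returns a copy instead,
# so writes to `c` leave the original array unaffected.
c = a[[3, 4, 5]]
print(np.shares_memory(a, c))   # False
```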
Another difference between an array and a list of lists is that an element of the array can be accessed by specifying the index along each axis within a _single_ set of square brackets, separated by commas. For instance, the element `8` is in row `1` and column `3`: >>> a[1, 3] 8 Note It is familiar practice in mathematics to refer to elements of a matrix by the row index first and the column index second. This happens to be true for two-dimensional arrays, but a better mental model is to think of the column index as coming _last_ and the row index as _second to last_. This generalizes to arrays with _any_ number of dimensions. Note You might hear of a 0-D (zero-dimensional) array referred to as a “scalar”, a 1-D (one-dimensional) array as a “vector”, a 2-D (two-dimensional) array as a “matrix”, or an N-D (N-dimensional, where “N” is typically an integer greater than 2) array as a “tensor”. For clarity, it is best to avoid the mathematical terms when referring to an array because the mathematical objects with these names behave differently than arrays (e.g. “matrix” multiplication is fundamentally different from “array” multiplication), and there are other objects in the scientific Python ecosystem that have these names (e.g. the fundamental data structure of PyTorch is the “tensor”). ## Array attributes _This section covers the_ `ndim`, `shape`, `size`, _and_ `dtype` _attributes of an array_. The number of dimensions of an array is contained in the `ndim` attribute. >>> a.ndim 2 The shape of an array is a tuple of non-negative integers that specify the number of elements along each dimension. >>> a.shape (3, 4) >>> len(a.shape) == a.ndim True The fixed, total number of elements in an array is contained in the `size` attribute. >>> a.size 12 >>> import math >>> a.size == math.prod(a.shape) True Arrays are typically “homogeneous”, meaning that they contain elements of only one “data type”. The data type is recorded in the `dtype` attribute. 
>>> a.dtype dtype('int64') # "int" for integer, "64" for 64-bit [Read more about array attributes here](../reference/arrays.ndarray#arrays-ndarray) and learn about [array objects here](../reference/arrays#arrays). ## How to create a basic array _This section covers_ `np.zeros()`, `np.ones()`, `np.empty()`, `np.arange()`, `np.linspace()` Besides creating an array from a sequence of elements, you can easily create an array filled with `0`’s: >>> np.zeros(2) array([0., 0.]) Or an array filled with `1`’s: >>> np.ones(2) array([1., 1.]) Or even an empty array! The function `empty` creates an array whose initial content is random and depends on the state of the memory. The reason to use `empty` over `zeros` (or something similar) is speed; just make sure to fill every element afterwards! >>> # Create an empty array with 2 elements >>> np.empty(2) array([3.14, 42. ]) # may vary You can create an array with a range of elements: >>> np.arange(4) array([0, 1, 2, 3]) And even an array that contains a range of evenly spaced intervals. To do this, you will specify the **first number**, **last number**, and the **step size**. >>> np.arange(2, 9, 2) array([2, 4, 6, 8]) You can also use `np.linspace()` to create an array with values that are spaced linearly in a specified interval: >>> np.linspace(0, 10, num=5) array([ 0. , 2.5, 5. , 7.5, 10. ]) **Specifying your data type** While the default data type is floating point (`np.float64`), you can explicitly specify which data type you want using the `dtype` keyword. >>> x = np.ones(2, dtype=np.int64) >>> x array([1, 1]) [Learn more about creating arrays here](quickstart#quickstart-array-creation) ## Adding, removing, and sorting elements _This section covers_ `np.sort()`, `np.concatenate()` Sorting an array is simple with `np.sort()`. You can specify the axis, kind, and order when you call the function. 
If you start with this array: >>> arr = np.array([2, 1, 5, 3, 7, 4, 6, 8]) You can quickly sort the numbers in ascending order with: >>> np.sort(arr) array([1, 2, 3, 4, 5, 6, 7, 8]) In addition to sort, which returns a sorted copy of an array, you can use: * [`argsort`](../reference/generated/numpy.argsort#numpy.argsort "numpy.argsort"), which is an indirect sort along a specified axis, * [`lexsort`](../reference/generated/numpy.lexsort#numpy.lexsort "numpy.lexsort"), which is an indirect stable sort on multiple keys, * [`searchsorted`](../reference/generated/numpy.searchsorted#numpy.searchsorted "numpy.searchsorted"), which will find elements in a sorted array, and * [`partition`](../reference/generated/numpy.partition#numpy.partition "numpy.partition"), which is a partial sort. To read more about sorting an array, see: [`sort`](../reference/generated/numpy.sort#numpy.sort "numpy.sort"). If you start with these arrays: >>> a = np.array([1, 2, 3, 4]) >>> b = np.array([5, 6, 7, 8]) You can concatenate them with `np.concatenate()`. >>> np.concatenate((a, b)) array([1, 2, 3, 4, 5, 6, 7, 8]) Or, if you start with these arrays: >>> x = np.array([[1, 2], [3, 4]]) >>> y = np.array([[5, 6]]) You can concatenate them with: >>> np.concatenate((x, y), axis=0) array([[1, 2], [3, 4], [5, 6]]) In order to remove elements from an array, it’s simple to use indexing to select the elements that you want to keep. To read more about concatenate, see: [`concatenate`](../reference/generated/numpy.concatenate#numpy.concatenate "numpy.concatenate"). ## How do you know the shape and size of an array? _This section covers_ `ndarray.ndim`, `ndarray.size`, `ndarray.shape` `ndarray.ndim` will tell you the number of axes, or dimensions, of the array. `ndarray.size` will tell you the total number of elements of the array. This is the _product_ of the elements of the array’s shape. 
`ndarray.shape` will display a tuple of integers that indicate the number of elements stored along each dimension of the array. If, for example, you have a 2-D array with 2 rows and 3 columns, the shape of your array is `(2, 3)`. For example, if you create this array: >>> array_example = np.array([[[0, 1, 2, 3], ... [4, 5, 6, 7]], ... ... [[0, 1, 2, 3], ... [4, 5, 6, 7]], ... ... [[0 ,1 ,2, 3], ... [4, 5, 6, 7]]]) To find the number of dimensions of the array, run: >>> array_example.ndim 3 To find the total number of elements in the array, run: >>> array_example.size 24 And to find the shape of your array, run: >>> array_example.shape (3, 2, 4) ## Can you reshape an array? _This section covers_ `arr.reshape()` **Yes!** Using `arr.reshape()` will give a new shape to an array without changing the data. Just remember that when you use the reshape method, the array you want to produce needs to have the same number of elements as the original array. If you start with an array with 12 elements, you’ll need to make sure that your new array also has a total of 12 elements. If you start with this array: >>> a = np.arange(6) >>> print(a) [0 1 2 3 4 5] You can use `reshape()` to reshape your array. For example, you can reshape this array to an array with three rows and two columns: >>> b = a.reshape(3, 2) >>> print(b) [[0 1] [2 3] [4 5]] With `np.reshape`, you can specify a few optional parameters: >>> np.reshape(a, shape=(1, 6), order='C') array([[0, 1, 2, 3, 4, 5]]) `a` is the array to be reshaped. `shape` is the new shape you want. You can specify an integer or a tuple of integers. If you specify an integer, the result will be an array of that length. The shape should be compatible with the original shape. `order:` `C` means to read/write the elements using C-like index order, `F` means to read/write the elements using Fortran-like index order, `A` means to read/write the elements in Fortran-like index order if a is Fortran contiguous in memory, C-like order otherwise. 
(This is an optional parameter and doesn’t need to be specified.) If you want to learn more about C and Fortran order, you can [read more about the internal organization of NumPy arrays here](../dev/internals#numpy-internals). Essentially, C and Fortran orders have to do with how indices correspond to the order the array is stored in memory. In Fortran, when moving through the elements of a two-dimensional array as it is stored in memory, the **first** index is the most rapidly varying index. Since the first index is the row index, the elements are traversed down one column before moving to the next, so the matrix is stored one column at a time. This is why Fortran is thought of as a **Column-major language**. In C on the other hand, the **last** index changes the most rapidly. The matrix is stored by rows, making it a **Row-major language**. What you do for C or Fortran depends on whether it’s more important to preserve the indexing convention or not reorder the data. [Learn more about shape manipulation here](quickstart#quickstart-shape-manipulation). ## How to convert a 1D array into a 2D array (how to add a new axis to an array) _This section covers_ `np.newaxis`, `np.expand_dims` You can use `np.newaxis` and `np.expand_dims` to increase the dimensions of your existing array. Using `np.newaxis` will increase the dimensions of your array by one dimension when used once. This means that a **1D** array will become a **2D** array, a **2D** array will become a **3D** array, and so on. For example, if you start with this array: >>> a = np.array([1, 2, 3, 4, 5, 6]) >>> a.shape (6,) You can use `np.newaxis` to add a new axis: >>> a2 = a[np.newaxis, :] >>> a2.shape (1, 6) You can explicitly convert a 1D array to either a row vector or a column vector using `np.newaxis`. 
For example, you can convert a 1D array to a row vector by inserting an axis along the first dimension: >>> row_vector = a[np.newaxis, :] >>> row_vector.shape (1, 6) Or, for a column vector, you can insert an axis along the second dimension: >>> col_vector = a[:, np.newaxis] >>> col_vector.shape (6, 1) You can also expand an array by inserting a new axis at a specified position with `np.expand_dims`. For example, if you start with this array: >>> a = np.array([1, 2, 3, 4, 5, 6]) >>> a.shape (6,) You can use `np.expand_dims` to add an axis at index position 1 with: >>> b = np.expand_dims(a, axis=1) >>> b.shape (6, 1) You can add an axis at index position 0 with: >>> c = np.expand_dims(a, axis=0) >>> c.shape (1, 6) Find more information about [newaxis here](../reference/routines.indexing#arrays-indexing) and `expand_dims` at [`expand_dims`](../reference/generated/numpy.expand_dims#numpy.expand_dims "numpy.expand_dims"). ## Indexing and slicing You can index and slice NumPy arrays in the same ways you can slice Python lists. >>> data = np.array([1, 2, 3]) >>> data[1] 2 >>> data[0:2] array([1, 2]) >>> data[1:] array([2, 3]) >>> data[-2:] array([2, 3]) You may want to take a section of your array or specific array elements to use in further analysis or additional operations. To do that, you’ll need to subset, slice, and/or index your arrays. If you want to select values from your array that fulfill certain conditions, it’s straightforward with NumPy. For example, if you start with this array: >>> a = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]) You can easily print all of the values in the array that are less than 5. >>> print(a[a < 5]) [1 2 3 4] You can also select, for example, numbers that are equal to or greater than 5, and use that condition to index an array. 
>>> five_up = (a >= 5) >>> print(a[five_up]) [ 5 6 7 8 9 10 11 12] You can select elements that are divisible by 2: >>> divisible_by_2 = a[a % 2 == 0] >>> print(divisible_by_2) [ 2 4 6 8 10 12] Or you can select elements that satisfy two conditions using the `&` and `|` operators: >>> c = a[(a > 2) & (a < 11)] >>> print(c) [ 3 4 5 6 7 8 9 10] You can also make use of the logical operators **&** and **|** in order to return boolean values that specify whether or not the values in an array fulfill a certain condition. This can be useful with arrays that contain names or other categorical values. >>> five_up = (a > 5) | (a == 5) >>> print(five_up) [[False False False False] [ True True True True] [ True True True True]] You can also use `np.nonzero()` to select elements or indices from an array. Starting with this array: >>> a = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]) You can use `np.nonzero()` to print the indices of elements that are, for example, less than 5: >>> b = np.nonzero(a < 5) >>> print(b) (array([0, 0, 0, 0]), array([0, 1, 2, 3])) In this example, a tuple of arrays was returned: one for each dimension. The first array represents the row indices where these values are found, and the second array represents the column indices where the values are found. If you want to generate a list of coordinates where the elements exist, you can zip the arrays, iterate over the list of coordinates, and print them. For example: >>> list_of_coordinates = list(zip(b[0], b[1])) >>> for coord in list_of_coordinates: ... print(coord) (np.int64(0), np.int64(0)) (np.int64(0), np.int64(1)) (np.int64(0), np.int64(2)) (np.int64(0), np.int64(3)) You can also use `np.nonzero()` to print the elements in an array that are less than 5 with: >>> print(a[b]) [1 2 3 4] If the element you’re looking for doesn’t exist in the array, then the returned array of indices will be empty. 
For example:

>>> not_there = np.nonzero(a == 42)
>>> print(not_there)
(array([], dtype=int64), array([], dtype=int64))

Learn more about [indexing and slicing here](quickstart#quickstart-indexing-slicing-and-iterating) and [here](basics.indexing#basics-indexing). Read more about using the nonzero function at: [`nonzero`](../reference/generated/numpy.nonzero#numpy.nonzero "numpy.nonzero").

## How to create an array from existing data

_This section covers_ `slicing and indexing`, `np.vstack()`, `np.hstack()`, `np.hsplit()`, `.view()`, `copy()`

You can easily create a new array from a section of an existing array. Let’s say you have this array:

>>> a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

You can create a new array from a section of your array any time by specifying where you want to slice your array.

>>> arr1 = a[3:8]
>>> arr1
array([4, 5, 6, 7, 8])

Here, you grabbed a section of your array from index position 3 up to, but not including, index position 8.

_Reminder: Array indexes begin at 0. This means the first element of the array is at index 0, the second element is at index 1, and so on._

You can also stack two existing arrays, both vertically and horizontally. Let’s say you have two arrays, `a1` and `a2`:

>>> a1 = np.array([[1, 1],
...                [2, 2]])
>>> a2 = np.array([[3, 3],
...                [4, 4]])

You can stack them vertically with `vstack`:

>>> np.vstack((a1, a2))
array([[1, 1],
       [2, 2],
       [3, 3],
       [4, 4]])

Or stack them horizontally with `hstack`:

>>> np.hstack((a1, a2))
array([[1, 1, 3, 3],
       [2, 2, 4, 4]])

You can split an array into several smaller arrays using `hsplit`. You can specify either the number of equally shaped arrays to return or the columns _after_ which the division should occur.
Let’s say you have this array:

>>> x = np.arange(1, 25).reshape(2, 12)
>>> x
array([[ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12],
       [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]])

If you wanted to split this array into three equally shaped arrays, you would run:

>>> np.hsplit(x, 3)
[array([[ 1,  2,  3,  4],
       [13, 14, 15, 16]]), array([[ 5,  6,  7,  8],
       [17, 18, 19, 20]]), array([[ 9, 10, 11, 12],
       [21, 22, 23, 24]])]

If you wanted to split your array after the third and fourth column, you’d run:

>>> np.hsplit(x, (3, 4))
[array([[ 1,  2,  3],
       [13, 14, 15]]), array([[ 4],
       [16]]), array([[ 5,  6,  7,  8,  9, 10, 11, 12],
       [17, 18, 19, 20, 21, 22, 23, 24]])]

[Learn more about stacking and splitting arrays here](quickstart#quickstart-stacking-arrays).

You can use the `view` method to create a new array object that looks at the same data as the original array (a _shallow copy_).

Views are an important NumPy concept! NumPy functions, as well as operations like indexing and slicing, will return views whenever possible. This saves memory and is faster (no copy of the data has to be made). However, it’s important to be aware of this: modifying data in a view also modifies the original array!

Let’s say you create this array:

>>> a = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])

Now we create an array `b1` by slicing `a` and modify the first element of `b1`. This will modify the corresponding element in `a` as well!

>>> b1 = a[0, :]
>>> b1
array([1, 2, 3, 4])
>>> b1[0] = 99
>>> b1
array([99,  2,  3,  4])
>>> a
array([[99,  2,  3,  4],
       [ 5,  6,  7,  8],
       [ 9, 10, 11, 12]])

Using the `copy` method will make a complete copy of the array and its data (a _deep copy_). To use this on your array, you could run:

>>> b2 = a.copy()

[Learn more about copies and views here](quickstart#quickstart-copies-and-views).

## Basic array operations

_This section covers addition, subtraction, multiplication, division, and more_

Once you’ve created your arrays, you can start to work with them.
Let’s say, for example, that you’ve created two arrays, one called “data” and one called “ones”.

You can add the arrays together with the plus sign.

>>> data = np.array([1, 2])
>>> ones = np.ones(2, dtype=int)
>>> data + ones
array([2, 3])

You can, of course, do more than just addition!

>>> data - ones
array([0, 1])
>>> data * data
array([1, 4])
>>> data / data
array([1., 1.])

Basic operations are simple with NumPy. If you want to find the sum of the elements in an array, you’d use `sum()`. This works for 1D arrays, 2D arrays, and arrays in higher dimensions.

>>> a = np.array([1, 2, 3, 4])
>>> a.sum()
10

To add the rows or the columns in a 2D array, you would specify the axis. If you start with this array:

>>> b = np.array([[1, 1], [2, 2]])

You can sum over the axis of rows with:

>>> b.sum(axis=0)
array([3, 3])

You can sum over the axis of columns with:

>>> b.sum(axis=1)
array([2, 4])

[Learn more about basic operations here](quickstart#quickstart-basic-operations).

## Broadcasting

There are times when you might want to carry out an operation between an array and a single number (also called _an operation between a vector and a scalar_) or between arrays of two different sizes. For example, your array (we’ll call it “data”) might contain information about distance in miles but you want to convert the information to kilometers. You can perform this operation with:

>>> data = np.array([1.0, 2.0])
>>> data * 1.6
array([1.6, 3.2])

NumPy understands that the multiplication should happen with each cell. That concept is called **broadcasting**. Broadcasting is a mechanism that allows NumPy to perform operations on arrays of different shapes. The dimensions of your array must be compatible, for example, when the dimensions of both arrays are equal or when one of them is 1. If the dimensions are not compatible, you will get a `ValueError`.

[Learn more about broadcasting here](basics.broadcasting#basics-broadcasting).
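To see what that `ValueError` looks like in practice, here is a minimal sketch; the array values are made up for illustration:

```python
import numpy as np

miles = np.array([1.0, 2.0])           # shape (2,)
factors = np.array([1.6, 1.6, 1.6])    # shape (3,) -- incompatible with (2,)

try:
    miles * factors
except ValueError as e:
    # NumPy reports the mismatched shapes in the error message
    print(e)
```

The error message names both shapes, which makes this kind of mistake easy to track down.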
## More useful array operations

_This section covers maximum, minimum, sum, mean, product, standard deviation, and more_

NumPy also performs aggregation functions. In addition to `min`, `max`, and `sum`, you can easily run `mean` to get the average, `prod` to get the result of multiplying the elements together, `std` to get the standard deviation, and more.

>>> data.max()
2.0
>>> data.min()
1.0
>>> data.sum()
3.0

Let’s start with this array, called “a”:

>>> a = np.array([[0.45053314, 0.17296777, 0.34376245, 0.5510652],
...               [0.54627315, 0.05093587, 0.40067661, 0.55645993],
...               [0.12697628, 0.82485143, 0.26590556, 0.56917101]])

It’s very common to want to aggregate along a row or column. By default, every NumPy aggregation function will return the aggregate of the entire array. To find the sum or the minimum of the elements in your array, run:

>>> a.sum()
4.8595784

Or:

>>> a.min()
0.05093587

You can specify on which axis you want the aggregation function to be computed. For example, you can find the minimum value within each column by specifying `axis=0`.

>>> a.min(axis=0)
array([0.12697628, 0.05093587, 0.26590556, 0.5510652 ])

The four values listed above correspond to the number of columns in your array. With a four-column array, you will get four values as your result.

Read more about [array methods here](../reference/arrays.ndarray#array-ndarray-methods).

## Creating matrices

You can pass Python lists of lists to create a 2-D array (or “matrix”) to represent them in NumPy.

>>> data = np.array([[1, 2], [3, 4], [5, 6]])
>>> data
array([[1, 2],
       [3, 4],
       [5, 6]])

Indexing and slicing operations are useful when you’re manipulating matrices:

>>> data[0, 1]
2
>>> data[1:3]
array([[3, 4],
       [5, 6]])
>>> data[0:2, 0]
array([1, 3])

You can aggregate matrices the same way you aggregated vectors:

>>> data.max()
6
>>> data.min()
1
>>> data.sum()
21

You can aggregate all the values in a matrix and you can aggregate them across columns or rows using the `axis` parameter.
To illustrate this point, let’s look at a slightly modified dataset:

>>> data = np.array([[1, 2], [5, 3], [4, 6]])
>>> data
array([[1, 2],
       [5, 3],
       [4, 6]])
>>> data.max(axis=0)
array([5, 6])
>>> data.max(axis=1)
array([2, 5, 6])

Once you’ve created your matrices, you can add and multiply them using arithmetic operators if you have two matrices that are the same size.

>>> data = np.array([[1, 2], [3, 4]])
>>> ones = np.array([[1, 1], [1, 1]])
>>> data + ones
array([[2, 3],
       [4, 5]])

You can do these arithmetic operations on matrices of different sizes, but only if one matrix has only one column or one row. In this case, NumPy will use its broadcast rules for the operation.

>>> data = np.array([[1, 2], [3, 4], [5, 6]])
>>> ones_row = np.array([[1, 1]])
>>> data + ones_row
array([[2, 3],
       [4, 5],
       [6, 7]])

Be aware that when NumPy prints N-dimensional arrays, the last axis is looped over the fastest while the first axis is the slowest. For instance:

>>> np.ones((4, 3, 2))
array([[[1., 1.],
        [1., 1.],
        [1., 1.]],

       [[1., 1.],
        [1., 1.],
        [1., 1.]],

       [[1., 1.],
        [1., 1.],
        [1., 1.]],

       [[1., 1.],
        [1., 1.],
        [1., 1.]]])

There are often instances where we want NumPy to initialize the values of an array. NumPy offers functions like `ones()` and `zeros()`, and the `random.Generator` class for random number generation for that.
All you need to do is pass in the number of elements you want it to generate:

>>> np.ones(3)
array([1., 1., 1.])
>>> np.zeros(3)
array([0., 0., 0.])
>>> rng = np.random.default_rng()  # the simplest way to generate random numbers
>>> rng.random(3)
array([0.63696169, 0.26978671, 0.04097352])

You can also use `ones()`, `zeros()`, and `random()` to create a 2D array if you give them a tuple describing the dimensions of the matrix:

>>> np.ones((3, 2))
array([[1., 1.],
       [1., 1.],
       [1., 1.]])
>>> np.zeros((3, 2))
array([[0., 0.],
       [0., 0.],
       [0., 0.]])
>>> rng.random((3, 2))
array([[0.01652764, 0.81327024],
       [0.91275558, 0.60663578],
       [0.72949656, 0.54362499]])  # may vary

Read more about creating arrays, filled with `0`’s, `1`’s, other values or uninitialized, at [array creation routines](../reference/routines.array-creation#routines-array-creation).

## Generating random numbers

The use of random number generation is an important part of the configuration and evaluation of many numerical and machine learning algorithms. Whether you need to randomly initialize weights in an artificial neural network, split data into random sets, or randomly shuffle your dataset, being able to generate random numbers (actually, repeatable pseudo-random numbers) is essential.

With `Generator.integers`, you can generate random integers from low (remember that this is inclusive with NumPy) to high (exclusive). You can set `endpoint=True` to make the high number inclusive.

You can generate a 2 x 4 array of random integers between 0 and 4 with:

>>> rng.integers(5, size=(2, 4))
array([[2, 1, 1, 0],
       [0, 0, 0, 4]])  # may vary

[Read more about random number generation here](../reference/random/index#numpyrandom).

## How to get unique items and counts

_This section covers_ `np.unique()`

You can find the unique elements in an array easily with `np.unique`.
For example, if you start with this array:

>>> a = np.array([11, 11, 12, 13, 14, 15, 16, 17, 12, 13, 11, 14, 18, 19, 20])

you can use `np.unique` to print the unique values in your array:

>>> unique_values = np.unique(a)
>>> print(unique_values)
[11 12 13 14 15 16 17 18 19 20]

To get the indices of unique values in a NumPy array (an array of first index positions of unique values in the array), just pass the `return_index` argument in `np.unique()` as well as your array.

>>> unique_values, indices_list = np.unique(a, return_index=True)
>>> print(indices_list)
[ 0  2  3  4  5  6  7 12 13 14]

You can pass the `return_counts` argument in `np.unique()` along with your array to get the frequency count of unique values in a NumPy array.

>>> unique_values, occurrence_count = np.unique(a, return_counts=True)
>>> print(occurrence_count)
[3 2 2 2 1 1 1 1 1 1]

This also works with 2D arrays! If you start with this array:

>>> a_2d = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [1, 2, 3, 4]])

You can find unique values with:

>>> unique_values = np.unique(a_2d)
>>> print(unique_values)
[ 1  2  3  4  5  6  7  8  9 10 11 12]

If the axis argument isn’t passed, your 2D array will be flattened.

If you want to get the unique rows or columns, make sure to pass the `axis` argument. To find the unique rows, specify `axis=0` and for columns, specify `axis=1`.

>>> unique_rows = np.unique(a_2d, axis=0)
>>> print(unique_rows)
[[ 1  2  3  4]
 [ 5  6  7  8]
 [ 9 10 11 12]]

To get the unique rows, index position, and occurrence count, you can use:

>>> unique_rows, indices, occurrence_count = np.unique(
...     a_2d, axis=0, return_counts=True, return_index=True)
>>> print(unique_rows)
[[ 1  2  3  4]
 [ 5  6  7  8]
 [ 9 10 11 12]]
>>> print(indices)
[0 1 2]
>>> print(occurrence_count)
[2 1 1]

To learn more about finding the unique elements in an array, see [`unique`](../reference/generated/numpy.unique#numpy.unique "numpy.unique").
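The `axis=1` case works the same way for columns. A minimal sketch, using a hypothetical array with a duplicated column:

```python
import numpy as np

a_dup = np.array([[1,  2,  2,  4],
                  [5,  6,  6,  8],
                  [9, 10, 10, 12]])

# With axis=1, duplicate columns (rather than duplicate rows) are collapsed
unique_cols = np.unique(a_dup, axis=1)
print(unique_cols)
```

The repeated middle column appears only once in the result, so the output has three columns instead of four.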
## Transposing and reshaping a matrix

_This section covers_ `arr.reshape()`, `arr.transpose()`, `arr.T`

It’s common to need to transpose your matrices. NumPy arrays have the property `T` that allows you to transpose a matrix.

You may also need to switch the dimensions of a matrix. This can happen when, for example, you have a model that expects a certain input shape that is different from your dataset. This is where the `reshape` method can be useful. You simply need to pass in the new dimensions that you want for the matrix.

>>> data.reshape(2, 3)
array([[1, 2, 3],
       [4, 5, 6]])
>>> data.reshape(3, 2)
array([[1, 2],
       [3, 4],
       [5, 6]])

You can also use `.transpose()` to reverse or change the axes of an array according to the values you specify. If you start with this array:

>>> arr = np.arange(6).reshape((2, 3))
>>> arr
array([[0, 1, 2],
       [3, 4, 5]])

You can transpose your array with `arr.transpose()`.

>>> arr.transpose()
array([[0, 3],
       [1, 4],
       [2, 5]])

You can also use `arr.T`:

>>> arr.T
array([[0, 3],
       [1, 4],
       [2, 5]])

To learn more about transposing and reshaping arrays, see [`transpose`](../reference/generated/numpy.transpose#numpy.transpose "numpy.transpose") and [`reshape`](../reference/generated/numpy.reshape#numpy.reshape "numpy.reshape").

## How to reverse an array

_This section covers_ `np.flip()`

NumPy’s `np.flip()` function allows you to flip, or reverse, the contents of an array along an axis. When using `np.flip()`, specify the array you would like to reverse and the axis. If you don’t specify the axis, NumPy will reverse the contents along all of the axes of your input array.

**Reversing a 1D array**

If you begin with a 1D array like this one:

>>> arr = np.array([1, 2, 3, 4, 5, 6, 7, 8])

You can reverse it with:

>>> reversed_arr = np.flip(arr)

If you want to print your reversed array, you can run:

>>> print('Reversed Array: ', reversed_arr)
Reversed Array:  [8 7 6 5 4 3 2 1]

**Reversing a 2D array**

A 2D array works much the same way.
If you start with this array:

>>> arr_2d = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])

You can reverse the content in all of the rows and all of the columns with:

>>> reversed_arr = np.flip(arr_2d)
>>> print(reversed_arr)
[[12 11 10  9]
 [ 8  7  6  5]
 [ 4  3  2  1]]

You can easily reverse only the _rows_ with:

>>> reversed_arr_rows = np.flip(arr_2d, axis=0)
>>> print(reversed_arr_rows)
[[ 9 10 11 12]
 [ 5  6  7  8]
 [ 1  2  3  4]]

Or reverse only the _columns_ with:

>>> reversed_arr_columns = np.flip(arr_2d, axis=1)
>>> print(reversed_arr_columns)
[[ 4  3  2  1]
 [ 8  7  6  5]
 [12 11 10  9]]

You can also reverse the contents of only one column or row. For example, you can reverse the contents of the row at index position 1 (the second row):

>>> arr_2d[1] = np.flip(arr_2d[1])
>>> print(arr_2d)
[[ 1  2  3  4]
 [ 8  7  6  5]
 [ 9 10 11 12]]

You can also reverse the column at index position 1 (the second column):

>>> arr_2d[:,1] = np.flip(arr_2d[:,1])
>>> print(arr_2d)
[[ 1 10  3  4]
 [ 8  7  6  5]
 [ 9  2 11 12]]

Read more about reversing arrays at [`flip`](../reference/generated/numpy.flip#numpy.flip "numpy.flip").

## Reshaping and flattening multidimensional arrays

_This section covers_ `.flatten()`, `ravel()`

There are two popular ways to flatten an array: `.flatten()` and `.ravel()`. The primary difference between the two is that the new array created using `ravel()` is actually a reference to the parent array (i.e., a “view”). This means that any changes to the new array will affect the parent array as well. Since `ravel` does not create a copy, it’s memory efficient.

If you start with this array:

>>> x = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])

You can use `flatten` to flatten your array into a 1D array.

>>> x.flatten()
array([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12])

When you use `flatten`, changes to your new array won’t change the parent array.
For example:

>>> a1 = x.flatten()
>>> a1[0] = 99
>>> print(x)  # Original array
[[ 1  2  3  4]
 [ 5  6  7  8]
 [ 9 10 11 12]]
>>> print(a1)  # New array
[99  2  3  4  5  6  7  8  9 10 11 12]

But when you use `ravel`, the changes you make to the new array will affect the parent array. For example:

>>> a2 = x.ravel()
>>> a2[0] = 98
>>> print(x)  # Original array
[[98  2  3  4]
 [ 5  6  7  8]
 [ 9 10 11 12]]
>>> print(a2)  # New array
[98  2  3  4  5  6  7  8  9 10 11 12]

Read more about `flatten` at [`ndarray.flatten`](../reference/generated/numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten") and `ravel` at [`ravel`](../reference/generated/numpy.ravel#numpy.ravel "numpy.ravel").

## How to access the docstring for more information

_This section covers_ `help()`, `?`, `??`

When it comes to the data science ecosystem, Python and NumPy are built with the user in mind. One of the best examples of this is the built-in access to documentation. Every object contains the reference to a string, which is known as the **docstring**. In most cases, this docstring contains a quick and concise summary of the object and how to use it. Python has a built-in `help()` function that can help you access this information. This means that nearly any time you need more information, you can use `help()` to quickly find the information that you need.

For example:

>>> help(max)
Help on built-in function max in module builtins:

max(...)
    max(iterable, *[, default=obj, key=func]) -> value
    max(arg1, arg2, *args, *[, key=func]) -> value

    With a single iterable argument, return its biggest item. The
    default keyword-only argument specifies an object to return if
    the provided iterable is empty.
    With two or more arguments, return the largest argument.

Because access to additional information is so useful, IPython uses the `?` character as a shorthand for accessing this documentation along with other relevant information. IPython is a command shell for interactive computing in multiple languages.
[You can find more information about IPython here](https://ipython.org/).

For example:

In [0]: max?
max(iterable, *[, default=obj, key=func]) -> value
max(arg1, arg2, *args, *[, key=func]) -> value

With a single iterable argument, return its biggest item. The
default keyword-only argument specifies an object to return if
the provided iterable is empty.
With two or more arguments, return the largest argument.
Type:      builtin_function_or_method

You can even use this notation for object methods and objects themselves. Let’s say you create this array:

>>> a = np.array([1, 2, 3, 4, 5, 6])

Then you can obtain a lot of useful information (first details about `a` itself, followed by the docstring of `ndarray` of which `a` is an instance):

In [1]: a?
Type:            ndarray
String form:     [1 2 3 4 5 6]
Length:          6
File:            ~/anaconda3/lib/python3.9/site-packages/numpy/__init__.py
Docstring:
Class docstring:
ndarray(shape, dtype=float, buffer=None, offset=0,
        strides=None, order=None)

An array object represents a multidimensional, homogeneous array
of fixed-size items. An associated data-type object describes the
format of each element in the array (its byte-order, how many bytes it
occupies in memory, whether it is an integer, a floating point number,
or something else, etc.)

Arrays should be constructed using `array`, `zeros` or `empty` (refer
to the See Also section below). The parameters given here refer to
a low-level method (`ndarray(...)`) for instantiating an array.

For more information, refer to the `numpy` module and examine the
methods and attributes of an array.

Parameters
----------
(for the __new__ method; see Notes below)

shape : tuple of ints
    Shape of created array.
...

This also works for functions and other objects that **you** create. Just remember to include a docstring with your function using a string literal (`""" """` or `''' '''` around your documentation).

For example, if you create this function:

>>> def double(a):
...     '''Return a * 2'''
...     return a * 2

You can obtain information about the function:

In [2]: double?
Signature: double(a)
Docstring: Return a * 2
File:      ~/Desktop/
Type:      function

You can reach another level of information by reading the source code of the object you’re interested in. Using a double question mark (`??`) allows you to access the source code.

For example:

In [3]: double??
Signature: double(a)
Source:
def double(a):
    '''Return a * 2'''
    return a * 2
File:      ~/Desktop/
Type:      function

If the object in question is compiled in a language other than Python, using `??` will return the same information as `?`. You’ll find this with a lot of built-in objects and types, for example:

In [4]: len?
Signature: len(obj, /)
Docstring: Return the number of items in a container.
Type:      builtin_function_or_method

and:

In [5]: len??
Signature: len(obj, /)
Docstring: Return the number of items in a container.
Type:      builtin_function_or_method

have the same output because they were compiled in a programming language other than Python.

## Working with mathematical formulas

The ease of implementing mathematical formulas that work on arrays is one of the things that make NumPy so widely used in the scientific Python community.

For example, this is the mean square error formula (a central formula used in supervised machine learning models that deal with regression): the average of the squared differences between predictions and labels,

    error = (1/n) * sum((prediction_i - label_i)**2)

Implementing this formula is simple and straightforward in NumPy:

    error = (1/n) * np.sum(np.square(predictions - labels))

What makes this work so well is that `predictions` and `labels` can contain one or a thousand values. They only need to be the same size.

In this example, both the predictions and labels vectors contain three values, meaning `n` has a value of three. After we carry out subtractions the values in the vector are squared. Then NumPy sums the values, and your result is the error value for that prediction and a score for the quality of the model.
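The mean squared error calculation described above can be sketched end to end; the prediction and label values here are made up for illustration:

```python
import numpy as np

predictions = np.array([1.0, 1.0, 1.0])
labels = np.array([1.0, 2.0, 3.0])

# mean squared error: (1/n) * sum((predictions - labels) ** 2)
n = predictions.size
error = (1 / n) * np.sum(np.square(predictions - labels))
print(error)  # (0 + 1 + 4) / 3
```

Because every step is vectorized, the same three lines work unchanged whether the vectors hold three values or three million.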
## How to save and load NumPy objects

_This section covers_ `np.save`, `np.savez`, `np.savetxt`, `np.load`, `np.loadtxt`

You will, at some point, want to save your arrays to disk and load them back without having to re-run the code. Fortunately, there are several ways to save and load objects with NumPy. The ndarray objects can be saved to and loaded from the disk files with `loadtxt` and `savetxt` functions that handle normal text files, `load` and `save` functions that handle NumPy binary files with a **.npy** file extension, and a `savez` function that handles NumPy files with a **.npz** file extension.

The **.npy** and **.npz** files store data, shape, dtype, and other information required to reconstruct the ndarray in a way that allows the array to be correctly retrieved, even when the file is on another machine with different architecture.

If you want to store a single ndarray object, store it as a .npy file using `np.save`. If you want to store more than one ndarray object in a single file, save it as a .npz file using `np.savez`. You can also save several arrays into a single file in compressed npz format with [`savez_compressed`](../reference/generated/numpy.savez_compressed#numpy.savez_compressed "numpy.savez_compressed").

It’s easy to save and load an array with `np.save()`. Just make sure to specify the array you want to save and a file name. For example, if you create this array:

>>> a = np.array([1, 2, 3, 4, 5, 6])

You can save it as “filename.npy” with:

>>> np.save('filename', a)

You can use `np.load()` to reconstruct your array.

>>> b = np.load('filename.npy')

If you want to check your array, you can run:

>>> print(b)
[1 2 3 4 5 6]

You can save a NumPy array as a plain text file like a **.csv** or **.txt** file with `np.savetxt`.
For example, if you create this array:

>>> csv_arr = np.array([1, 2, 3, 4, 5, 6, 7, 8])

You can easily save it as a .csv file with the name “new_file.csv” like this:

>>> np.savetxt('new_file.csv', csv_arr)

You can quickly and easily load your saved text file using `loadtxt()`:

>>> np.loadtxt('new_file.csv')
array([1., 2., 3., 4., 5., 6., 7., 8.])

The `savetxt()` and `loadtxt()` functions accept additional optional parameters such as header, footer, and delimiter. While text files can be easier for sharing, .npy and .npz files are smaller and faster to read. If you need more sophisticated handling of your text file (for example, if you need to work with lines that contain missing values), you will want to use the [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") function.

With [`savetxt`](../reference/generated/numpy.savetxt#numpy.savetxt "numpy.savetxt"), you can specify headers, footers, comments, and more.

Learn more about [input and output routines here](../reference/routines.io#routines-io).

## Importing and exporting a CSV

It’s simple to read in a CSV that contains existing information. The best and easiest way to do this is to use [Pandas](https://pandas.pydata.org).

>>> import pandas as pd

>>> # If all of your columns are the same type:
>>> x = pd.read_csv('music.csv', header=0).values
>>> print(x)
[['Billie Holiday' 'Jazz' 1300000 27000000]
 ['Jimmie Hendrix' 'Rock' 2700000 70000000]
 ['Miles Davis' 'Jazz' 1500000 48000000]
 ['SIA' 'Pop' 2000000 74000000]]

>>> # You can also simply select the columns you need:
>>> x = pd.read_csv('music.csv', usecols=['Artist', 'Plays']).values
>>> print(x)
[['Billie Holiday' 27000000]
 ['Jimmie Hendrix' 70000000]
 ['Miles Davis' 48000000]
 ['SIA' 74000000]]

It’s simple to use Pandas in order to export your array as well. If you are new to NumPy, you may want to create a Pandas dataframe from the values in your array and then write the data frame to a CSV file with Pandas.
If you created this array “a”:

>>> a = np.array([[-2.58289208,  0.43014843, -1.24082018,  1.59572603],
...               [ 0.99027828,  1.17150989,  0.94125714, -0.14692469],
...               [ 0.76989341,  0.81299683, -0.95068423,  0.11769564],
...               [ 0.20484034,  0.34784527,  1.96979195,  0.51992837]])

You could create a Pandas dataframe:

>>> df = pd.DataFrame(a)
>>> print(df)
          0         1         2         3
0 -2.582892  0.430148 -1.240820  1.595726
1  0.990278  1.171510  0.941257 -0.146925
2  0.769893  0.812997 -0.950684  0.117696
3  0.204840  0.347845  1.969792  0.519928

You can easily save your dataframe with:

>>> df.to_csv('pd.csv')

And read your CSV with:

>>> data = pd.read_csv('pd.csv')

You can also save your array with the NumPy `savetxt` method.

>>> np.savetxt('np.csv', a, fmt='%.2f', delimiter=',', header='1, 2, 3, 4')

If you’re using the command line, you can read your saved CSV any time with a command such as:

$ cat np.csv
# 1, 2, 3, 4
-2.58,0.43,-1.24,1.60
0.99,1.17,0.94,-0.15
0.77,0.81,-0.95,0.12
0.20,0.35,1.97,0.52

Or you can open the file any time with a text editor!

If you’re interested in learning more about Pandas, take a look at the [official Pandas documentation](https://pandas.pydata.org/index.html). Learn how to install Pandas with the [official Pandas installation information](https://pandas.pydata.org/pandas-docs/stable/install.html).

## Plotting arrays with Matplotlib

If you need to generate a plot for your values, it’s very simple with [Matplotlib](https://matplotlib.org/).
For example, you may have an array like this one:

>>> a = np.array([2, 1, 5, 7, 4, 6, 8, 14, 10, 9, 18, 20, 22])

If you already have Matplotlib installed, you can import it with:

>>> import matplotlib.pyplot as plt
# If you're using Jupyter Notebook, you may also want to run the following
# line of code to display your code in the notebook:
%matplotlib inline

All you need to do to plot your values is run:

>>> plt.plot(a)
# If you are running from a command line, you may need to do this:
# >>> plt.show()

For example, you can plot a 1D array like this:

>>> x = np.linspace(0, 5, 20)
>>> y = np.linspace(0, 10, 20)
>>> plt.plot(x, y, 'purple')  # line
>>> plt.plot(x, y, 'o')       # dots

With Matplotlib, you have access to an enormous number of visualization options.

>>> fig = plt.figure()
>>> ax = fig.add_subplot(projection='3d')
>>> X = np.arange(-5, 5, 0.15)
>>> Y = np.arange(-5, 5, 0.15)
>>> X, Y = np.meshgrid(X, Y)
>>> R = np.sqrt(X**2 + Y**2)
>>> Z = np.sin(R)
>>> ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='viridis')

To read more about Matplotlib and what it can do, take a look at [the official documentation](https://matplotlib.org/). For directions regarding installing Matplotlib, see the official [installation section](https://matplotlib.org/users/installing.html).

_Image credits: Jay Alammar https://jalammar.github.io/_

# Broadcasting

See also [`numpy.broadcast`](../reference/generated/numpy.broadcast#numpy.broadcast "numpy.broadcast")

The term broadcasting describes how NumPy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is “broadcast” across the larger array so that they have compatible shapes. Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python. It does this without making needless copies of data and usually leads to efficient algorithm implementations.
There are, however, cases where broadcasting is a bad idea because it leads to inefficient use of memory that slows computation.

NumPy operations are usually done on pairs of arrays on an element-by-element basis. In the simplest case, the two arrays must have exactly the same shape, as in the following example:

>>> import numpy as np
>>> a = np.array([1.0, 2.0, 3.0])
>>> b = np.array([2.0, 2.0, 2.0])
>>> a * b
array([2., 4., 6.])

NumPy’s broadcasting rule relaxes this constraint when the arrays’ shapes meet certain constraints. The simplest broadcasting example occurs when an array and a scalar value are combined in an operation:

>>> import numpy as np
>>> a = np.array([1.0, 2.0, 3.0])
>>> b = 2.0
>>> a * b
array([2., 4., 6.])

The result is equivalent to the previous example where `b` was an array. We can think of the scalar `b` being _stretched_ during the arithmetic operation into an array with the same shape as `a`. The new elements in `b`, as shown in Figure 1, are simply copies of the original scalar. The stretching analogy is only conceptual. NumPy is smart enough to use the original scalar value without actually making copies so that broadcasting operations are as memory and computationally efficient as possible.

_Figure 1: In the simplest example of broadcasting, the scalar_ `b` _is stretched to become an array of same shape as_ `a` _so the shapes are compatible for element-by-element multiplication._

The code in the second example is more efficient than that in the first because broadcasting moves less memory around during the multiplication (`b` is a scalar rather than an array).

## General broadcasting rules

When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing (i.e. rightmost) dimension and works its way left. Two dimensions are compatible when

1. they are equal, or
2. one of them is 1.
If these conditions are not met, a `ValueError: operands could not be broadcast together` exception is thrown, indicating that the arrays have incompatible shapes.

Input arrays do not need to have the same _number_ of dimensions. The resulting array will have the same number of dimensions as the input array with the greatest number of dimensions, where the _size_ of each dimension is the largest size of the corresponding dimension among the input arrays. Note that missing dimensions are assumed to have size one.

For example, if you have a `256x256x3` array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with 3 values. Lining up the sizes of the trailing axes of these arrays according to the broadcast rules shows that they are compatible:

Image  (3d array): 256 x 256 x 3
Scale  (1d array):             3
Result (3d array): 256 x 256 x 3

When either of the dimensions compared is one, the other is used. In other words, dimensions with size 1 are stretched or “copied” to match the other.

In the following example, both the `A` and `B` arrays have axes with length one that are expanded to a larger size during the broadcast operation:

A      (4d array):  8 x 1 x 6 x 1
B      (3d array):      7 x 1 x 5
Result (4d array):  8 x 7 x 6 x 5

## Broadcastable arrays

A set of arrays is called “broadcastable” to the same shape if the above rules produce a valid result.

For example, if `a.shape` is (5,1), `b.shape` is (1,6), `c.shape` is (6,) and `d.shape` is () so that _d_ is a scalar, then _a_, _b_, _c_, and _d_ are all broadcastable to dimension (5,6); and

* _a_ acts like a (5,6) array where `a[:,0]` is broadcast to the other columns,
* _b_ acts like a (5,6) array where `b[0,:]` is broadcast to the other rows,
* _c_ acts like a (1,6) array and therefore like a (5,6) array where `c[:]` is broadcast to every row, and finally,
* _d_ acts like a (5,6) array where the single value is repeated.
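These shape rules can be checked programmatically with `np.broadcast_shapes` (available in NumPy 1.20 and later), which applies them to shape tuples without creating any arrays:

```python
import numpy as np

# The four shapes (5,1), (1,6), (6,) and () broadcast to (5, 6)
print(np.broadcast_shapes((5, 1), (1, 6), (6,), ()))  # (5, 6)

# Incompatible shapes raise a ValueError instead
try:
    np.broadcast_shapes((2, 1), (8, 4, 3))
except ValueError as e:
    print(e)
```

This is a cheap way to sanity-check shapes before committing to an expensive array operation.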
Here are some more examples: A (2d array): 5 x 4 B (1d array): 1 Result (2d array): 5 x 4 A (2d array): 5 x 4 B (1d array): 4 Result (2d array): 5 x 4 A (3d array): 15 x 3 x 5 B (3d array): 15 x 1 x 5 Result (3d array): 15 x 3 x 5 A (3d array): 15 x 3 x 5 B (2d array): 3 x 5 Result (3d array): 15 x 3 x 5 A (3d array): 15 x 3 x 5 B (2d array): 3 x 1 Result (3d array): 15 x 3 x 5 Here are examples of shapes that do not broadcast: A (1d array): 3 B (1d array): 4 # trailing dimensions do not match A (2d array): 2 x 1 B (3d array): 8 x 4 x 3 # second from last dimensions mismatched An example of broadcasting when a 1-d array is added to a 2-d array: >>> import numpy as np >>> a = np.array([[ 0.0, 0.0, 0.0], ... [10.0, 10.0, 10.0], ... [20.0, 20.0, 20.0], ... [30.0, 30.0, 30.0]]) >>> b = np.array([1.0, 2.0, 3.0]) >>> a + b array([[ 1., 2., 3.], [11., 12., 13.], [21., 22., 23.], [31., 32., 33.]]) >>> b = np.array([1.0, 2.0, 3.0, 4.0]) >>> a + b Traceback (most recent call last): ValueError: operands could not be broadcast together with shapes (4,3) (4,) As shown in Figure 2, `b` is added to each row of `a`. In Figure 3, an exception is raised because of the incompatible shapes. _Figure 2_ _A one-dimensional array added to a two-dimensional array results in broadcasting if the number of 1-d array elements matches the number of 2-d array columns._ _Figure 3_ _When the trailing dimensions of the arrays are unequal, broadcasting fails because it is impossible to align the values in the rows of the 1st array with the elements of the 2nd array for element-by-element addition._ Broadcasting provides a convenient way of taking the outer product (or any other outer operation) of two arrays. 
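One direct way to express such an outer operation is a binary ufunc's `outer` method, which is equivalent to inserting a new axis by hand; a quick sketch:

```python
import numpy as np

a = np.array([0.0, 10.0, 20.0, 30.0])
b = np.array([1.0, 2.0, 3.0])

# np.add.outer(a, b) produces the same (4, 3) result as a[:, np.newaxis] + b
outer_sum = np.add.outer(a, b)
print(outer_sum.shape)  # (4, 3)
```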
The following example shows an outer addition operation of two 1-d arrays: >>> import numpy as np >>> a = np.array([0.0, 10.0, 20.0, 30.0]) >>> b = np.array([1.0, 2.0, 3.0]) >>> a[:, np.newaxis] + b array([[ 1., 2., 3.], [11., 12., 13.], [21., 22., 23.], [31., 32., 33.]]) _Figure 4_ _In some cases, broadcasting stretches both arrays to form an output array larger than either of the initial arrays._ Here the `newaxis` index operator inserts a new axis into `a`, making it a two-dimensional `4x1` array. Combining the `4x1` array with `b`, which has shape `(3,)`, yields a `4x3` array. ## A practical example: vector quantization Broadcasting comes up quite often in real world problems. A typical example occurs in the vector quantization (VQ) algorithm used in information theory, classification, and other related areas. The basic operation in VQ finds the closest point in a set of points, called `codes` in VQ jargon, to a given point, called the `observation`. In the very simple, two-dimensional case shown below, the values in `observation` describe the weight and height of an athlete to be classified. The `codes` represent different classes of athletes. [1] Finding the closest point requires calculating the distance between observation and each of the codes. The shortest distance provides the best match. In this example, `codes[0]` is the closest class indicating that the athlete is likely a basketball player. >>> from numpy import array, argmin, sqrt, sum >>> observation = array([111.0, 188.0]) >>> codes = array([[102.0, 203.0], ... [132.0, 193.0], ... [45.0, 155.0], ... 
[57.0, 173.0]]) >>> diff = codes - observation # the broadcast happens here >>> dist = sqrt(sum(diff**2,axis=-1)) >>> argmin(dist) 0 In this example, the `observation` array is stretched to match the shape of the `codes` array: Observation (1d array): 2 Codes (2d array): 4 x 2 Diff (2d array): 4 x 2 _Figure 5_ _The basic operation of vector quantization calculates the distance between an object to be classified, the dark square, and multiple known codes, the gray circles. In this simple case, the codes represent individual classes. More complex cases use multiple codes per class._ Typically, a large number of `observations`, perhaps read from a database, are compared to a set of `codes`. Consider this scenario: Observation (2d array): 10 x 3 Codes (3d array): 5 x 1 x 3 Diff (3d array): 5 x 10 x 3 The three-dimensional array, `diff`, is a consequence of broadcasting, not a necessity for the calculation. Large data sets will generate a large intermediate array that is computationally inefficient. Instead, if each observation is calculated individually using a Python loop around the code in the two-dimensional example above, a much smaller array is used. Broadcasting is a powerful tool for writing short and usually intuitive code that does its computations very efficiently in C. However, there are cases when broadcasting uses unnecessarily large amounts of memory for a particular algorithm. In these cases, it is better to write the algorithm’s outer loop in Python. This may also produce more readable code, as algorithms that use broadcasting tend to become more difficult to interpret as the number of dimensions in the broadcast increases. #### Footnotes [1] In this example, weight has more impact on the distance calculation than height because of the larger values. In practice, it is important to normalize the height and weight, often by their standard deviation across the data set, so that both have equal influence on the distance calculation. 
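The batched scenario discussed above can be sketched as follows; the observation and code values here are made up for illustration, with shapes matching the `10 x 3` and `5 x 1 x 3` layout:

```python
import numpy as np

rng = np.random.default_rng(0)
observations = rng.random((10, 3))  # 10 observations, 3 features each
codes = rng.random((5, 3))          # 5 codes

# Insert an axis so codes (5, 1, 3) broadcasts against observations (10, 3).
diff = codes[:, np.newaxis, :] - observations  # shape (5, 10, 3)
dist = np.sqrt(np.sum(diff**2, axis=-1))       # shape (5, 10)
nearest = np.argmin(dist, axis=0)              # closest code index per observation
print(diff.shape, dist.shape, nearest.shape)
```

For very large numbers of observations, the `(5, 10, 3)` intermediate grows accordingly, which is exactly the memory cost the text warns about.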
# Copies and views When operating on NumPy arrays, it is possible to access the internal data buffer directly using a view without copying data around. This ensures good performance but can also cause unwanted problems if the user is not aware of how this works. Hence, it is important to know the difference between these two terms and to know which operations return copies and which return views. The NumPy array is a data structure consisting of two parts: the [contiguous](../glossary#term-contiguous) data buffer with the actual data elements and the metadata that contains information about the data buffer. The metadata includes data type, strides, and other important information that helps manipulate the [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") easily. See the [Internal organization of NumPy arrays](../dev/internals#numpy-internals) section for a detailed look. ## View It is possible to access the array differently by just changing certain metadata like [stride](../glossary#term-stride) and [dtype](../glossary#term-dtype) without changing the data buffer. This creates a new way of looking at the data, and these new arrays are called views. The data buffer remains the same, so any changes made to a view are reflected in the original array. A view can be forced through the [`ndarray.view`](../reference/generated/numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") method. ## Copy When a new array is created by duplicating the data buffer as well as the metadata, it is called a copy. Changes made to the copy do not reflect on the original array. Making a copy is slower and consumes more memory but is sometimes necessary. A copy can be forced by using [`ndarray.copy`](../reference/generated/numpy.ndarray.copy#numpy.ndarray.copy "numpy.ndarray.copy"). ## Indexing operations See also [Indexing on ndarrays](basics.indexing#basics-indexing) Views are created when elements can be addressed with offsets and strides in the original array. 
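A minimal sketch contrasting a forced view (`ndarray.view`) with a forced copy (`ndarray.copy`):

```python
import numpy as np

x = np.arange(5)

v = x.view()   # new array object, same data buffer
v[0] = 99
print(x[0])    # 99: the view shares memory with x

c = x.copy()   # duplicates the data buffer as well
c[1] = -1
print(x[1])    # 1: the copy is independent of x
```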
Hence, basic indexing always creates views. For example: >>> import numpy as np >>> x = np.arange(10) >>> x array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> y = x[1:3] # creates a view >>> y array([1, 2]) >>> x[1:3] = [10, 11] >>> x array([ 0, 10, 11, 3, 4, 5, 6, 7, 8, 9]) >>> y array([10, 11]) Here, `y` gets changed when `x` is changed because it is a view. [Advanced indexing](basics.indexing#advanced-indexing), on the other hand, always creates copies. For example: >>> import numpy as np >>> x = np.arange(9).reshape(3, 3) >>> x array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> y = x[[1, 2]] >>> y array([[3, 4, 5], [6, 7, 8]]) >>> y.base is None True Here, `y` is a copy, as signified by the [`base`](../reference/generated/numpy.ndarray.base#numpy.ndarray.base "numpy.ndarray.base") attribute. We can also confirm this by assigning new values to `x[[1, 2]]` which in turn will not affect `y` at all: >>> x[[1, 2]] = [[10, 11, 12], [13, 14, 15]] >>> x array([[ 0, 1, 2], [10, 11, 12], [13, 14, 15]]) >>> y array([[3, 4, 5], [6, 7, 8]]) It must be noted here that during the assignment of `x[[1, 2]]` no view or copy is created as the assignment happens in-place. ## Other operations The [`numpy.reshape`](../reference/generated/numpy.reshape#numpy.reshape "numpy.reshape") function creates a view where possible or a copy otherwise. In most cases, the strides can be modified to reshape the array with a view. However, in some cases where the array becomes non-contiguous (perhaps after a [`ndarray.transpose`](../reference/generated/numpy.ndarray.transpose#numpy.ndarray.transpose "numpy.ndarray.transpose") operation), the reshaping cannot be done by modifying strides and requires a copy. In these cases, we can raise an error by assigning the new shape to the shape attribute of the array. For example: >>> import numpy as np >>> x = np.ones((2, 3)) >>> y = x.T # makes the array non-contiguous >>> y array([[1., 1.], [1., 1.], [1., 1.]]) >>> z = y.view() >>> z.shape = 6 Traceback (most recent call last): ... 
AttributeError: Incompatible shape for in-place modification. Use `.reshape()` to make a copy with the desired shape. Taking the example of another operation, [`ravel`](../reference/generated/numpy.ravel#numpy.ravel "numpy.ravel") returns a contiguous flattened view of the array wherever possible. On the other hand, [`ndarray.flatten`](../reference/generated/numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten") always returns a flattened copy of the array. However, to guarantee a view in most cases, `x.reshape(-1)` may be preferable. ## How to tell if the array is a view or a copy The [`base`](../reference/generated/numpy.ndarray.base#numpy.ndarray.base "numpy.ndarray.base") attribute of the ndarray makes it easy to tell if an array is a view or a copy. The base attribute of a view returns the original array while it returns `None` for a copy. >>> import numpy as np >>> x = np.arange(9) >>> x array([0, 1, 2, 3, 4, 5, 6, 7, 8]) >>> y = x.reshape(3, 3) >>> y array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> y.base # .reshape() creates a view array([0, 1, 2, 3, 4, 5, 6, 7, 8]) >>> z = y[[2, 1]] >>> z array([[6, 7, 8], [3, 4, 5]]) >>> z.base is None # advanced indexing creates a copy True Note that the `base` attribute should not be used to determine if an ndarray object is _new_ ; only if it is a view or a copy of another ndarray. # Array creation See also [Array creation routines](../reference/routines.array-creation#routines-array- creation) ## Introduction There are 6 general mechanisms for creating arrays: 1. Conversion from other Python structures (i.e. lists and tuples) 2. Intrinsic NumPy array creation functions (e.g. arange, ones, zeros, etc.) 3. Replicating, joining, or mutating existing arrays 4. Reading arrays from disk, either from standard or custom formats 5. Creating arrays from raw bytes through the use of strings or buffers 6. 
Use of special library functions (e.g., random) You can use these methods to create ndarrays or [Structured arrays](basics.rec#structured-arrays). This document will cover general methods for ndarray creation. ## 1) Converting Python sequences to NumPy arrays NumPy arrays can be defined using Python sequences such as lists and tuples. Lists and tuples are defined using `[...]` and `(...)`, respectively. Lists and tuples can define ndarray creation: * a list of numbers will create a 1D array, * a list of lists will create a 2D array, * further nested lists will create higher-dimensional arrays. In general, any array object is called an **ndarray** in NumPy. >>> import numpy as np >>> a1D = np.array([1, 2, 3, 4]) >>> a2D = np.array([[1, 2], [3, 4]]) >>> a3D = np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) When you use [`numpy.array`](../reference/generated/numpy.array#numpy.array "numpy.array") to define a new array, you should consider the [dtype](basics.types) of the elements in the array, which can be specified explicitly. This feature gives you more control over the underlying data structures and how the elements are handled in C/C++ functions. When values do not fit and you are using a `dtype`, NumPy may raise an error: >>> import numpy as np >>> np.array([127, 128, 129], dtype=np.int8) Traceback (most recent call last): ... OverflowError: Python integer 128 out of bounds for int8 An 8-bit signed integer represents integers from -128 to 127. Assigning the `int8` array to integers outside of this range results in overflow. This feature can often be misunderstood. 
If you perform calculations with mismatching `dtypes`, you can get unwanted results, for example: >>> import numpy as np >>> a = np.array([2, 3, 4], dtype=np.uint32) >>> b = np.array([5, 6, 7], dtype=np.uint32) >>> c_unsigned32 = a - b >>> print('unsigned c:', c_unsigned32, c_unsigned32.dtype) unsigned c: [4294967293 4294967293 4294967293] uint32 >>> c_signed32 = a - b.astype(np.int32) >>> print('signed c:', c_signed32, c_signed32.dtype) signed c: [-3 -3 -3] int64 Notice that when you perform operations with two arrays of the same `dtype` (`uint32`), the resulting array is the same type. When you perform operations with arrays of different `dtype`, NumPy will assign a new type that satisfies all of the array elements involved in the computation; here `uint32` and `int32` can both be represented as `int64`. The default NumPy behavior is to create arrays in either 32 or 64-bit signed integers (platform dependent and matches C `long` size) or double precision floating point numbers. If you expect your integer arrays to be a specific type, then you need to specify the dtype while you create the array. ## 2) Intrinsic NumPy array creation functions NumPy has over 40 built-in functions for creating arrays as laid out in the [Array creation routines](../reference/routines.array-creation#routines-array-creation). These functions can be split into roughly three categories, based on the dimension of the array they create: 1. 1D arrays 2. 2D arrays 3. ndarrays ### 1 - 1D array creation functions The 1D array creation functions e.g. [`numpy.linspace`](../reference/generated/numpy.linspace#numpy.linspace "numpy.linspace") and [`numpy.arange`](../reference/generated/numpy.arange#numpy.arange "numpy.arange") generally need at least two inputs, `start` and `stop`. [`numpy.arange`](../reference/generated/numpy.arange#numpy.arange "numpy.arange") creates arrays with regularly incrementing values. Check the documentation for complete information and examples. 
A few examples are shown: >>> import numpy as np >>> np.arange(10) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.arange(2, 10, dtype=float) array([2., 3., 4., 5., 6., 7., 8., 9.]) >>> np.arange(2, 3, 0.1) array([2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9]) Note: best practice for [`numpy.arange`](../reference/generated/numpy.arange#numpy.arange "numpy.arange") is to use integer start, end, and step values. There are some subtleties regarding `dtype`. In the second example, the `dtype` is defined. In the third example, the array is `dtype=float` to accommodate the step size of `0.1`. Due to roundoff error, the `stop` value is sometimes included. [`numpy.linspace`](../reference/generated/numpy.linspace#numpy.linspace "numpy.linspace") will create arrays with a specified number of elements, and spaced equally between the specified beginning and end values. For example: >>> import numpy as np >>> np.linspace(1., 4., 6) array([1. , 1.6, 2.2, 2.8, 3.4, 4. ]) The advantage of this creation function is that you guarantee the number of elements and the starting and end point. The previous `arange(start, stop, step)` will not include the value `stop`. ### 2 - 2D array creation functions The 2D array creation functions e.g. [`numpy.eye`](../reference/generated/numpy.eye#numpy.eye "numpy.eye"), [`numpy.diag`](../reference/generated/numpy.diag#numpy.diag "numpy.diag"), and [`numpy.vander`](../reference/generated/numpy.vander#numpy.vander "numpy.vander") define properties of special matrices represented as 2D arrays. `np.eye(n, m)` defines a 2D identity matrix. 
The elements where i=j (row index and column index are equal) are 1 and the rest are 0, as such: >>> import numpy as np >>> np.eye(3) array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) >>> np.eye(3, 5) array([[1., 0., 0., 0., 0.], [0., 1., 0., 0., 0.], [0., 0., 1., 0., 0.]]) [`numpy.diag`](../reference/generated/numpy.diag#numpy.diag "numpy.diag") can define either a square 2D array with given values along the diagonal _or_ if given a 2D array returns a 1D array that is only the diagonal elements. The two array creation functions can be helpful while doing linear algebra, as such: >>> import numpy as np >>> np.diag([1, 2, 3]) array([[1, 0, 0], [0, 2, 0], [0, 0, 3]]) >>> np.diag([1, 2, 3], 1) array([[0, 1, 0, 0], [0, 0, 2, 0], [0, 0, 0, 3], [0, 0, 0, 0]]) >>> a = np.array([[1, 2], [3, 4]]) >>> np.diag(a) array([1, 4]) `vander(x, n)` defines a Vandermonde matrix as a 2D NumPy array. Each column of the Vandermonde matrix is a decreasing power of the input 1D array or list or tuple, `x` where the highest polynomial order is `n-1`. This array creation routine is helpful in generating linear least squares models, as such: >>> import numpy as np >>> np.vander(np.linspace(0, 2, 5), 2) array([[0. , 1. ], [0.5, 1. ], [1. , 1. ], [1.5, 1. ], [2. , 1. ]]) >>> np.vander([1, 2, 3, 4], 2) array([[1, 1], [2, 1], [3, 1], [4, 1]]) >>> np.vander((1, 2, 3, 4), 4) array([[ 1, 1, 1, 1], [ 8, 4, 2, 1], [27, 9, 3, 1], [64, 16, 4, 1]]) ### 3 - general ndarray creation functions The ndarray creation functions e.g. [`numpy.ones`](../reference/generated/numpy.ones#numpy.ones "numpy.ones"), [`numpy.zeros`](../reference/generated/numpy.zeros#numpy.zeros "numpy.zeros"), and [`random`](../reference/random/generated/numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random") define arrays based upon the desired shape. 
The ndarray creation functions can create arrays with any dimension by specifying how many dimensions and length along that dimension in a tuple or list. [`numpy.zeros`](../reference/generated/numpy.zeros#numpy.zeros "numpy.zeros") will create an array filled with 0 values with the specified shape. The default dtype is `float64`: >>> import numpy as np >>> np.zeros((2, 3)) array([[0., 0., 0.], [0., 0., 0.]]) >>> np.zeros((2, 3, 2)) array([[[0., 0.], [0., 0.], [0., 0.]], [[0., 0.], [0., 0.], [0., 0.]]]) [`numpy.ones`](../reference/generated/numpy.ones#numpy.ones "numpy.ones") will create an array filled with 1 values. It is identical to `zeros` in all other respects as such: >>> import numpy as np >>> np.ones((2, 3)) array([[1., 1., 1.], [1., 1., 1.]]) >>> np.ones((2, 3, 2)) array([[[1., 1.], [1., 1.], [1., 1.]], [[1., 1.], [1., 1.], [1., 1.]]]) The [`random`](../reference/random/generated/numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random") method of the result of `default_rng` will create an array filled with random values between 0 and 1. It is included with the [`numpy.random`](../reference/random/index#module-numpy.random "numpy.random") library. Below, two arrays are created with shapes (2,3) and (2,3,2), respectively. 
The seed is set to 42 so you can reproduce these pseudorandom numbers: >>> import numpy as np >>> from numpy.random import default_rng >>> default_rng(42).random((2,3)) array([[0.77395605, 0.43887844, 0.85859792], [0.69736803, 0.09417735, 0.97562235]]) >>> default_rng(42).random((2,3,2)) array([[[0.77395605, 0.43887844], [0.85859792, 0.69736803], [0.09417735, 0.97562235]], [[0.7611397 , 0.78606431], [0.12811363, 0.45038594], [0.37079802, 0.92676499]]]) [`numpy.indices`](../reference/generated/numpy.indices#numpy.indices "numpy.indices") will create a set of arrays (stacked as a one-higher dimensioned array), one per dimension with each representing variation in that dimension: >>> import numpy as np >>> np.indices((3,3)) array([[[0, 0, 0], [1, 1, 1], [2, 2, 2]], [[0, 1, 2], [0, 1, 2], [0, 1, 2]]]) This is particularly useful for evaluating functions of multiple dimensions on a regular grid. ## 3) Replicating, joining, or mutating existing arrays Once you have created arrays, you can replicate, join, or mutate those existing arrays to create new arrays. When you assign an array or its elements to a new variable, you have to explicitly [`numpy.copy`](../reference/generated/numpy.copy#numpy.copy "numpy.copy") the array, otherwise the variable is a view into the original array. Consider the following example: >>> import numpy as np >>> a = np.array([1, 2, 3, 4, 5, 6]) >>> b = a[:2] >>> b += 1 >>> print('a =', a, '; b =', b) a = [2 3 3 4 5 6] ; b = [2 3] In this example, you did not create a new array. You created a variable, `b` that viewed the first 2 elements of `a`. When you added 1 to `b` you would get the same result by adding 1 to `a[:2]`. 
If you want to create a _new_ array, use the [`numpy.copy`](../reference/generated/numpy.copy#numpy.copy "numpy.copy") array creation routine as such: >>> import numpy as np >>> a = np.array([1, 2, 3, 4]) >>> b = a[:2].copy() >>> b += 1 >>> print('a = ', a, 'b = ', b) a = [1 2 3 4] b = [2 3] For more information and examples look at [Copies and Views](quickstart#quickstart-copies-and-views). There are a number of routines to join existing arrays e.g. [`numpy.vstack`](../reference/generated/numpy.vstack#numpy.vstack "numpy.vstack"), [`numpy.hstack`](../reference/generated/numpy.hstack#numpy.hstack "numpy.hstack"), and [`numpy.block`](../reference/generated/numpy.block#numpy.block "numpy.block"). Here is an example of joining four 2-by-2 arrays into a 4-by-4 array using `block`: >>> import numpy as np >>> A = np.ones((2, 2)) >>> B = np.eye(2, 2) >>> C = np.zeros((2, 2)) >>> D = np.diag((-3, -4)) >>> np.block([[A, B], [C, D]]) array([[ 1., 1., 1., 0.], [ 1., 1., 0., 1.], [ 0., 0., -3., 0.], [ 0., 0., 0., -4.]]) Other routines use similar syntax to join ndarrays. Check the routine’s documentation for further examples and syntax. ## 4) Reading arrays from disk, either from standard or custom formats This is the most common case of large array creation. The details depend greatly on the format of data on disk. This section gives general pointers on how to handle various formats. For more detailed examples of IO look at [How to Read and Write files](how-to-io#how-to-io). ### Standard binary formats Various fields have standard formats for array data. 
The following lists the formats with known Python libraries that read them and return NumPy arrays (there may be others for which it is possible to read and convert to NumPy arrays, so check the last section as well):

* HDF5: h5py
* FITS: Astropy

Examples of formats that cannot be read directly but for which it is not hard to convert are those formats supported by libraries like PIL (able to read and write many image formats such as jpg, png, etc). ### Common ASCII formats Delimited files such as comma separated value (csv) and tab separated value (tsv) files are used by programs like Excel and LabView. Python functions can read and parse these files line-by-line. NumPy has two standard routines for importing a file with delimited data: [`numpy.loadtxt`](../reference/generated/numpy.loadtxt#numpy.loadtxt "numpy.loadtxt") and [`numpy.genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt"). These functions have more involved use cases in [Reading and writing files](how-to-io). A simple example given a `simple.csv`: $ cat simple.csv x, y 0, 0 1, 1 2, 4 3, 9 Importing `simple.csv` is accomplished using [`numpy.loadtxt`](../reference/generated/numpy.loadtxt#numpy.loadtxt "numpy.loadtxt"): >>> import numpy as np >>> np.loadtxt('simple.csv', delimiter=',', skiprows=1) array([[0., 0.], [1., 1.], [2., 4.], [3., 9.]]) More generic ASCII files can be read using [`scipy.io`](https://docs.scipy.org/doc/scipy/reference/io.html#module-scipy.io "\(in SciPy v1.14.1\)") and [Pandas](https://pandas.pydata.org/). ## 5) Creating arrays from raw bytes through the use of strings or buffers There are a variety of approaches one can use. If the file has a relatively simple format then one can write a simple I/O library and use the NumPy `fromfile()` function and `.tofile()` method to read and write NumPy arrays directly (mind your byteorder though!). 
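A minimal sketch of the `tofile`/`fromfile` round trip just mentioned; note that raw binary files store no shape or dtype metadata, so both must be supplied again when reading back (the file name here is arbitrary):

```python
import os
import tempfile

import numpy as np

a = np.arange(6, dtype=np.float64).reshape(2, 3)

# Write the raw bytes, then read them back; dtype and shape are not stored.
path = os.path.join(tempfile.mkdtemp(), 'raw.bin')
a.tofile(path)
b = np.fromfile(path, dtype=np.float64).reshape(2, 3)
print(np.array_equal(a, b))  # True
```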
If a good C or C++ library exists that reads the data, one can wrap that library with a variety of techniques, though that certainly is much more work and requires significantly more advanced knowledge to interface with C or C++. ## 6) Use of special library functions (e.g., SciPy, pandas, and OpenCV) NumPy is the fundamental library for array containers in the Python Scientific Computing stack. Many Python libraries, including SciPy, Pandas, and OpenCV, use NumPy ndarrays as the common format for data exchange. These libraries can create, operate on, and work with NumPy arrays. # Writing custom array containers Numpy’s dispatch mechanism, introduced in numpy version v1.16, is the recommended approach for writing custom N-dimensional array containers that are compatible with the numpy API and provide custom implementations of numpy functionality. Applications include [dask](http://dask.pydata.org) arrays, an N-dimensional array distributed across multiple nodes, and [cupy](https://docs-cupy.chainer.org/en/stable/) arrays, an N-dimensional array on a GPU. To get a feel for writing custom array containers, we’ll begin with a simple example that has rather narrow utility but illustrates the concepts involved. >>> import numpy as np >>> class DiagonalArray: ... def __init__(self, N, value): ... self._N = N ... self._i = value ... def __repr__(self): ... return f"{self.__class__.__name__}(N={self._N}, value={self._i})" ... def __array__(self, dtype=None, copy=None): ... if copy is False: ... raise ValueError( ... "`copy=False` isn't supported. A copy is always created." ... ) ... 
return self._i * np.eye(self._N, dtype=dtype) Our custom array can be instantiated like: >>> arr = DiagonalArray(5, 1) >>> arr DiagonalArray(N=5, value=1) We can convert to a numpy array using [`numpy.array`](../reference/generated/numpy.array#numpy.array "numpy.array") or [`numpy.asarray`](../reference/generated/numpy.asarray#numpy.asarray "numpy.asarray"), which will call its `__array__` method to obtain a standard `numpy.ndarray`. >>> np.asarray(arr) array([[1., 0., 0., 0., 0.], [0., 1., 0., 0., 0.], [0., 0., 1., 0., 0.], [0., 0., 0., 1., 0.], [0., 0., 0., 0., 1.]]) If we operate on `arr` with a numpy function, numpy will again use the `__array__` interface to convert it to an array and then apply the function in the usual way. >>> np.multiply(arr, 2) array([[2., 0., 0., 0., 0.], [0., 2., 0., 0., 0.], [0., 0., 2., 0., 0.], [0., 0., 0., 2., 0.], [0., 0., 0., 0., 2.]]) Notice that the return type is a standard `numpy.ndarray`. >>> type(np.multiply(arr, 2)) <class 'numpy.ndarray'> How can we pass our custom array type through this function? Numpy allows a class to indicate that it would like to handle computations in a custom-defined way through the interfaces `__array_ufunc__` and `__array_function__`. Let’s take one at a time, starting with `__array_ufunc__`. This method covers [Universal functions (ufunc)](../reference/ufuncs#ufuncs), a class of functions that includes, for example, [`numpy.multiply`](../reference/generated/numpy.multiply#numpy.multiply "numpy.multiply") and [`numpy.sin`](../reference/generated/numpy.sin#numpy.sin "numpy.sin"). The `__array_ufunc__` receives: * `ufunc`, a function like `numpy.multiply` * `method`, a string, differentiating between `numpy.multiply(...)` and variants like `numpy.multiply.outer`, `numpy.multiply.accumulate`, and so on. For the common case, `numpy.multiply(...)`, `method == '__call__'`. 
* `inputs`, which could be a mixture of different types * `kwargs`, keyword arguments passed to the function For this example we will only handle the method `__call__` >>> from numbers import Number >>> class DiagonalArray: ... def __init__(self, N, value): ... self._N = N ... self._i = value ... def __repr__(self): ... return f"{self.__class__.__name__}(N={self._N}, value={self._i})" ... def __array__(self, dtype=None, copy=None): ... if copy is False: ... raise ValueError( ... "`copy=False` isn't supported. A copy is always created." ... ) ... return self._i * np.eye(self._N, dtype=dtype) ... def __array_ufunc__(self, ufunc, method, *inputs, **kwargs): ... if method == '__call__': ... N = None ... scalars = [] ... for input in inputs: ... if isinstance(input, Number): ... scalars.append(input) ... elif isinstance(input, self.__class__): ... scalars.append(input._i) ... if N is not None: ... if N != input._N: ... raise TypeError("inconsistent sizes") ... else: ... N = input._N ... else: ... return NotImplemented ... return self.__class__(N, ufunc(*scalars, **kwargs)) ... else: ... return NotImplemented Now our custom array type passes through numpy functions. >>> arr = DiagonalArray(5, 1) >>> np.multiply(arr, 3) DiagonalArray(N=5, value=3) >>> np.add(arr, 3) DiagonalArray(N=5, value=4) >>> np.sin(arr) DiagonalArray(N=5, value=0.8414709848078965) At this point `arr + 3` does not work. >>> arr + 3 Traceback (most recent call last): ... TypeError: unsupported operand type(s) for +: 'DiagonalArray' and 'int' To support it, we need to define the Python interfaces `__add__`, `__lt__`, and so on to dispatch to the corresponding ufunc. We can achieve this conveniently by inheriting from the mixin [`NDArrayOperatorsMixin`](../reference/generated/numpy.lib.mixins.ndarrayoperatorsmixin#numpy.lib.mixins.NDArrayOperatorsMixin "numpy.lib.mixins.NDArrayOperatorsMixin"). >>> import numpy.lib.mixins >>> class DiagonalArray(numpy.lib.mixins.NDArrayOperatorsMixin): ... 
def __init__(self, N, value): ... self._N = N ... self._i = value ... def __repr__(self): ... return f"{self.__class__.__name__}(N={self._N}, value={self._i})" ... def __array__(self, dtype=None, copy=None): ... if copy is False: ... raise ValueError( ... "`copy=False` isn't supported. A copy is always created." ... ) ... return self._i * np.eye(self._N, dtype=dtype) ... def __array_ufunc__(self, ufunc, method, *inputs, **kwargs): ... if method == '__call__': ... N = None ... scalars = [] ... for input in inputs: ... if isinstance(input, Number): ... scalars.append(input) ... elif isinstance(input, self.__class__): ... scalars.append(input._i) ... if N is not None: ... if N != input._N: ... raise TypeError("inconsistent sizes") ... else: ... N = input._N ... else: ... return NotImplemented ... return self.__class__(N, ufunc(*scalars, **kwargs)) ... else: ... return NotImplemented >>> arr = DiagonalArray(5, 1) >>> arr + 3 DiagonalArray(N=5, value=4) >>> arr > 0 DiagonalArray(N=5, value=True) Now let’s tackle `__array_function__`. We’ll create dict that maps numpy functions to our custom variants. >>> HANDLED_FUNCTIONS = {} >>> class DiagonalArray(numpy.lib.mixins.NDArrayOperatorsMixin): ... def __init__(self, N, value): ... self._N = N ... self._i = value ... def __repr__(self): ... return f"{self.__class__.__name__}(N={self._N}, value={self._i})" ... def __array__(self, dtype=None, copy=None): ... if copy is False: ... raise ValueError( ... "`copy=False` isn't supported. A copy is always created." ... ) ... return self._i * np.eye(self._N, dtype=dtype) ... def __array_ufunc__(self, ufunc, method, *inputs, **kwargs): ... if method == '__call__': ... N = None ... scalars = [] ... for input in inputs: ... # In this case we accept only scalar numbers or DiagonalArrays. ... if isinstance(input, Number): ... scalars.append(input) ... elif isinstance(input, self.__class__): ... scalars.append(input._i) ... if N is not None: ... if N != input._N: ... 
raise TypeError("inconsistent sizes") ... else: ... N = input._N ... else: ... return NotImplemented ... return self.__class__(N, ufunc(*scalars, **kwargs)) ... else: ... return NotImplemented ... def __array_function__(self, func, types, args, kwargs): ... if func not in HANDLED_FUNCTIONS: ... return NotImplemented ... # Note: this allows subclasses that don't override ... # __array_function__ to handle DiagonalArray objects. ... if not all(issubclass(t, self.__class__) for t in types): ... return NotImplemented ... return HANDLED_FUNCTIONS[func](*args, **kwargs) ... A convenient pattern is to define a decorator `implements` that can be used to add functions to `HANDLED_FUNCTIONS`. >>> def implements(np_function): ... "Register an __array_function__ implementation for DiagonalArray objects." ... def decorator(func): ... HANDLED_FUNCTIONS[np_function] = func ... return func ... return decorator ... Now we write implementations of numpy functions for `DiagonalArray`. For completeness, to support the usage `arr.sum()` add a method `sum` that calls `numpy.sum(self)`, and the same for `mean`. >>> @implements(np.sum) ... def sum(arr): ... "Implementation of np.sum for DiagonalArray objects" ... return arr._i * arr._N ... >>> @implements(np.mean) ... def mean(arr): ... "Implementation of np.mean for DiagonalArray objects" ... return arr._i / arr._N ... >>> arr = DiagonalArray(5, 1) >>> np.sum(arr) 5 >>> np.mean(arr) 0.2 If the user tries to use any numpy functions not included in `HANDLED_FUNCTIONS`, a `TypeError` will be raised by numpy, indicating that this operation is not supported. For example, concatenating two `DiagonalArrays` does not produce another diagonal array, so it is not supported. >>> np.concatenate([arr, arr]) Traceback (most recent call last): ... 
TypeError: no implementation found for 'numpy.concatenate' on types that implement __array_function__: [<class 'DiagonalArray'>]

Additionally, our implementations of `sum` and `mean` do not accept the optional arguments that numpy’s implementation does.

>>> np.sum(arr, axis=0)
Traceback (most recent call last):
...
TypeError: sum() got an unexpected keyword argument 'axis'

The user always has the option of converting to a normal `numpy.ndarray` with [`numpy.asarray`](../reference/generated/numpy.asarray#numpy.asarray "numpy.asarray") and using standard numpy from there.

>>> np.concatenate([np.asarray(arr), np.asarray(arr)])
array([[1., 0., 0., 0., 0.],
       [0., 1., 0., 0., 0.],
       [0., 0., 1., 0., 0.],
       [0., 0., 0., 1., 0.],
       [0., 0., 0., 0., 1.],
       [1., 0., 0., 0., 0.],
       [0., 1., 0., 0., 0.],
       [0., 0., 1., 0., 0.],
       [0., 0., 0., 1., 0.],
       [0., 0., 0., 0., 1.]])

The implementation of `DiagonalArray` in this example only handles the `np.sum` and `np.mean` functions for brevity. Many other functions in the NumPy API are also available to wrap, and a full-fledged custom array container can explicitly support all functions that NumPy makes available to wrap.

NumPy provides some utilities to aid testing of custom array containers that implement the `__array_ufunc__` and `__array_function__` protocols in the `numpy.testing.overrides` namespace.
To check if a NumPy function can be overridden via `__array_ufunc__`, you can use [`allows_array_ufunc_override`](../reference/generated/numpy.testing.overrides.allows_array_ufunc_override#numpy.testing.overrides.allows_array_ufunc_override "numpy.testing.overrides.allows_array_ufunc_override"):

>>> from numpy.testing.overrides import allows_array_ufunc_override
>>> allows_array_ufunc_override(np.add)
True

Similarly, you can check if a function can be overridden via `__array_function__` using [`allows_array_function_override`](../reference/generated/numpy.testing.overrides.allows_array_function_override#numpy.testing.overrides.allows_array_function_override "numpy.testing.overrides.allows_array_function_override").

Lists of every overridable function in the NumPy API are also available via [`get_overridable_numpy_array_functions`](../reference/generated/numpy.testing.overrides.get_overridable_numpy_array_functions#numpy.testing.overrides.get_overridable_numpy_array_functions "numpy.testing.overrides.get_overridable_numpy_array_functions") for functions that support the `__array_function__` protocol and [`get_overridable_numpy_ufuncs`](../reference/generated/numpy.testing.overrides.get_overridable_numpy_ufuncs#numpy.testing.overrides.get_overridable_numpy_ufuncs "numpy.testing.overrides.get_overridable_numpy_ufuncs") for functions that support the `__array_ufunc__` protocol. Both functions return sets of functions that are present in the NumPy public API. User-defined ufuncs or ufuncs defined in other libraries that depend on NumPy are not present in these sets.

Refer to the [dask source code](https://github.com/dask/dask) and [cupy source code](https://github.com/cupy/cupy) for more fully-worked examples of custom array containers.

See also [NEP 18](https://numpy.org/neps/nep-0018-array-function-protocol.html "\(in NumPy Enhancement Proposals\)").

# NumPy fundamentals

These documents clarify concepts, design decisions, and technical constraints in NumPy.
This is a great place to understand the fundamental NumPy ideas and philosophy. * [Array creation](basics.creation) * [Indexing on `ndarrays`](basics.indexing) * [I/O with NumPy](basics.io) * [Data types](basics.types) * [Broadcasting](basics.broadcasting) * [Copies and views](basics.copies) * [Working with Arrays of Strings And Bytes](basics.strings) * [Structured arrays](basics.rec) * [Universal functions (`ufunc`) basics](basics.ufuncs) # Indexing on ndarrays See also [Indexing routines](../reference/routines.indexing#routines-indexing) [`ndarrays`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") can be indexed using the standard Python `x[obj]` syntax, where _x_ is the array and _obj_ the selection. There are different kinds of indexing available depending on _obj_ : basic indexing, advanced indexing and field access. Most of the following examples show the use of indexing when referencing data in an array. The examples work just as well when assigning to an array. See Assigning values to indexed arrays for specific examples and explanations on how assignments work. Note that in Python, `x[(exp1, exp2, ..., expN)]` is equivalent to `x[exp1, exp2, ..., expN]`; the latter is just syntactic sugar for the former. ## Basic indexing ### Single element indexing Single element indexing works exactly like that for other standard Python sequences. It is 0-based, and accepts negative indices for indexing from the end of the array. >>> x = np.arange(10) >>> x[2] 2 >>> x[-2] 8 It is not necessary to separate each dimension’s index into its own set of square brackets. >>> x.shape = (2, 5) # now x is 2-dimensional >>> x[1, 3] 8 >>> x[1, -1] 9 Note that if one indexes a multidimensional array with fewer indices than dimensions, one gets a subdimensional array. For example: >>> x[0] array([0, 1, 2, 3, 4]) That is, each index specified selects the array corresponding to the rest of the dimensions selected. 
In the above example, choosing 0 means that the remaining dimension of length 5 is being left unspecified, and that what is returned is an array of that dimensionality and size. It must be noted that the returned array is a [view](../glossary#term-view), i.e., it is not a copy of the original, but points to the same values in memory as does the original array. In this case, the 1-D array at the first position (0) is returned. So using a single index on the returned array results in a single element being returned. That is:

>>> x[0][2]
2

So note that `x[0, 2] == x[0][2]`, though the second case is less efficient, as a new temporary array is created after the first index that is subsequently indexed by 2.

Note

NumPy uses C-order indexing. That means that the last index usually represents the most rapidly changing memory location, unlike Fortran or IDL, where the first index represents the most rapidly changing location in memory. This difference represents a great potential for confusion.

### Slicing and striding

Basic slicing extends Python’s basic concept of slicing to N dimensions. Basic slicing occurs when _obj_ is a [`slice`](https://docs.python.org/3/library/functions.html#slice "\(in Python v3.13\)") object (constructed by `start:stop:step` notation inside of brackets), an integer, or a tuple of slice objects and integers. [`Ellipsis`](https://docs.python.org/3/library/constants.html#Ellipsis "\(in Python v3.13\)") and [`newaxis`](../reference/constants#numpy.newaxis "numpy.newaxis") objects can be interspersed with these as well.

The simplest case of indexing with _N_ integers returns an [array scalar](../reference/arrays.scalars#arrays-scalars) representing the corresponding item. As in Python, all indices are zero-based: for the _i_-th index \\(n_i\\), the valid range is \\(0 \le n_i < d_i\\) where \\(d_i\\) is the _i_-th element of the shape of the array.
Negative indices are interpreted as counting from the end of the array (_i.e._, if \\(n_i < 0\\), it means \\(n_i + d_i\\)).

All arrays generated by basic slicing are always [views](../glossary#term-view) of the original array.

Note

NumPy slicing creates a [view](../glossary#term-view) instead of a copy as in the case of built-in Python sequences such as string, tuple and list. Care must be taken when extracting a small portion from a large array which becomes useless after the extraction, because the small portion extracted contains a reference to the large original array whose memory will not be released until all arrays derived from it are garbage-collected. In such cases an explicit `copy()` is recommended.

The standard rules of sequence slicing apply to basic slicing on a per-dimension basis (including using a step index). Some useful concepts to remember include:

* The basic slice syntax is `i:j:k` where _i_ is the starting index, _j_ is the stopping index, and _k_ is the step (\\(k\neq0\\)). This selects the _m_ elements (in the corresponding dimension) with index values _i_, _i + k_, …, _i + (m - 1) k_ where \\(m = q + (r\neq0)\\) and _q_ and _r_ are the quotient and remainder obtained by dividing _j - i_ by _k_: _j - i = q k + r_, so that _i + (m - 1) k < j_. For example:

  >>> x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
  >>> x[1:7:2]
  array([1, 3, 5])

* Negative _i_ and _j_ are interpreted as _n + i_ and _n + j_ where _n_ is the number of elements in the corresponding dimension. Negative _k_ makes stepping go towards smaller indices. From the above example:

  >>> x[-2:10]
  array([8, 9])
  >>> x[-3:3:-1]
  array([7, 6, 5, 4])

* Assume _n_ is the number of elements in the dimension being sliced. Then, if _i_ is not given it defaults to 0 for _k > 0_ and _n - 1_ for _k < 0_. If _j_ is not given it defaults to _n_ for _k > 0_ and _-n-1_ for _k < 0_. If _k_ is not given it defaults to 1.
Note that `::` is the same as `:` and means select all indices along this axis. From the above example: >>> x[5:] array([5, 6, 7, 8, 9]) * If the number of objects in the selection tuple is less than _N_ , then `:` is assumed for any subsequent dimensions. For example: >>> x = np.array([[[1],[2],[3]], [[4],[5],[6]]]) >>> x.shape (2, 3, 1) >>> x[1:2] array([[[4], [5], [6]]]) * An integer, _i_ , returns the same values as `i:i+1` **except** the dimensionality of the returned object is reduced by 1. In particular, a selection tuple with the _p_ -th element an integer (and all other entries `:`) returns the corresponding sub-array with dimension _N - 1_. If _N = 1_ then the returned object is an array scalar. These objects are explained in [Scalars](../reference/arrays.scalars#arrays-scalars). * If the selection tuple has all entries `:` except the _p_ -th entry which is a slice object `i:j:k`, then the returned array has dimension _N_ formed by stacking, along the _p_ -th axis, the sub-arrays returned by integer indexing of elements _i_ , _i+k_ , …, _i + (m - 1) k < j_. * Basic slicing with more than one non-`:` entry in the slicing tuple, acts like repeated application of slicing using a single non-`:` entry, where the non-`:` entries are successively taken (with all other non-`:` entries replaced by `:`). Thus, `x[ind1, ..., ind2,:]` acts like `x[ind1][..., ind2, :]` under basic slicing. Warning The above is **not** true for advanced indexing. * You may use slicing to set values in the array, but (unlike lists) you can never grow the array. The size of the value to be set in `x[obj] = value` must be (broadcastable to) the same shape as `x[obj]`. * A slicing tuple can always be constructed as _obj_ and used in the `x[obj]` notation. Slice objects can be used in the construction in place of the `[start:stop:step]` notation. For example, `x[1:10:5, ::-1]` can also be implemented as `obj = (slice(1, 10, 5), slice(None, None, -1)); x[obj]` . 
This can be useful for constructing generic code that works on arrays of arbitrary dimensions. See Dealing with variable numbers of indices within programs for more information.

### Dimensional indexing tools

There are some tools to facilitate the easy matching of array shapes with expressions and in assignments.

[`Ellipsis`](https://docs.python.org/3/library/constants.html#Ellipsis "\(in Python v3.13\)") expands to the number of `:` objects needed for the selection tuple to index all dimensions. In most cases, this means that the length of the expanded selection tuple is `x.ndim`. There may only be a single ellipsis present. From the above example:

>>> x[..., 0]
array([[1, 2, 3],
       [4, 5, 6]])

This is equivalent to:

>>> x[:, :, 0]
array([[1, 2, 3],
       [4, 5, 6]])

Each [`newaxis`](../reference/constants#numpy.newaxis "numpy.newaxis") object in the selection tuple serves to expand the dimensions of the resulting selection by one unit-length dimension. The added dimension is the position of the [`newaxis`](../reference/constants#numpy.newaxis "numpy.newaxis") object in the selection tuple. [`newaxis`](../reference/constants#numpy.newaxis "numpy.newaxis") is an alias for `None`, and `None` can be used in place of this with the same result. From the above example:

>>> x[:, np.newaxis, :, :].shape
(2, 1, 3, 1)
>>> x[:, None, :, :].shape
(2, 1, 3, 1)

This can be handy to combine two arrays in a way that otherwise would require explicit reshaping operations. For example:

>>> x = np.arange(5)
>>> x[:, np.newaxis] + x[np.newaxis, :]
array([[0, 1, 2, 3, 4],
       [1, 2, 3, 4, 5],
       [2, 3, 4, 5, 6],
       [3, 4, 5, 6, 7],
       [4, 5, 6, 7, 8]])

## Advanced indexing

Advanced indexing is triggered when the selection object, _obj_, is a non-tuple sequence object, an [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") (of data type integer or bool), or a tuple with at least one sequence object or ndarray (of data type integer or bool).
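The tuple/non-tuple distinction can be checked directly; a small sketch using an arbitrary 3x4 array:

```python
import numpy as np

x = np.arange(12).reshape(3, 4)

# A list is a non-tuple sequence, so it triggers advanced indexing:
# rows 0 and 2 are selected.
assert x[[0, 2]].shape == (2, 4)

# A tuple is treated as one index per dimension, i.e. basic indexing:
# x[(0, 2)] is the same as x[0, 2].
assert x[(0, 2)] == x[0, 2] == 2
```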
There are two types of advanced indexing: integer and Boolean.

Advanced indexing always returns a _copy_ of the data (contrast with basic slicing that returns a [view](../glossary#term-view)).

Warning

The definition of advanced indexing means that `x[(1, 2, 3),]` is fundamentally different from `x[(1, 2, 3)]`. The latter is equivalent to `x[1, 2, 3]`, which will trigger basic selection, while the former will trigger advanced indexing. Be sure to understand why this occurs.

### Integer array indexing

Integer array indexing allows selection of arbitrary items in the array based on their _N_-dimensional index. Each integer array represents a number of indices into that dimension. Negative values are permitted in the index arrays and work as they do with single indices or slices:

>>> x = np.arange(10, 1, -1)
>>> x
array([10,  9,  8,  7,  6,  5,  4,  3,  2])
>>> x[np.array([3, 3, 1, 8])]
array([7, 7, 9, 2])
>>> x[np.array([3, 3, -3, 8])]
array([7, 7, 4, 2])

If the index values are out of bounds then an `IndexError` is thrown:

>>> x = np.array([[1, 2], [3, 4], [5, 6]])
>>> x[np.array([1, -1])]
array([[3, 4],
       [5, 6]])
>>> x[np.array([3, 4])]
Traceback (most recent call last):
...
IndexError: index 3 is out of bounds for axis 0 with size 3

When the index consists of as many integer arrays as dimensions of the array being indexed, the indexing is straightforward, but different from slicing. Advanced indices are always [broadcast](basics.broadcasting#basics-broadcasting) and iterated as _one_:

result[i_1, ..., i_M] == x[ind_1[i_1, ..., i_M], ind_2[i_1, ..., i_M],
                           ..., ind_N[i_1, ..., i_M]]

Note that the resulting shape is identical to the (broadcast) indexing array shapes `ind_1, ..., ind_N`. If the indices cannot be broadcast to the same shape, an exception `IndexError: shape mismatch: indexing arrays could not be broadcast together with shapes...` is raised.
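The broadcast-and-iterate rule above can be verified with a short sketch that compares the advanced-indexing result against an explicit loop (the arrays here are illustrative):

```python
import numpy as np

x = np.arange(12).reshape(3, 4)
ind_1 = np.array([[0, 0], [2, 2]])   # shape (2, 2)
ind_2 = np.array([1, 3])             # shape (2,), broadcasts to (2, 2)

result = x[ind_1, ind_2]

# Apply the rule by hand: result[i, j] == x[ind_1[i, j], ind_2_b[i, j]]
ind_2_b = np.broadcast_to(ind_2, ind_1.shape)
expected = np.array([[x[ind_1[i, j], ind_2_b[i, j]] for j in range(2)]
                     for i in range(2)])
assert (result == expected).all()
# The result shape follows the broadcast shape of the index arrays.
assert result.shape == ind_1.shape
```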
Indexing with multidimensional index arrays tends to be a more unusual use, but it is permitted, and it is useful for some problems. We’ll start with the simplest multidimensional case:

>>> y = np.arange(35).reshape(5, 7)
>>> y
array([[ 0,  1,  2,  3,  4,  5,  6],
       [ 7,  8,  9, 10, 11, 12, 13],
       [14, 15, 16, 17, 18, 19, 20],
       [21, 22, 23, 24, 25, 26, 27],
       [28, 29, 30, 31, 32, 33, 34]])
>>> y[np.array([0, 2, 4]), np.array([0, 1, 2])]
array([ 0, 15, 30])

In this case, if the index arrays have a matching shape, and there is an index array for each dimension of the array being indexed, the resultant array has the same shape as the index arrays, and the values correspond to the index set for each position in the index arrays. In this example, the first index value is 0 for both index arrays, and thus the first value of the resultant array is `y[0, 0]`. The next value is `y[2, 1]`, and the last is `y[4, 2]`.

If the index arrays do not have the same shape, there is an attempt to broadcast them to the same shape. If they cannot be broadcast to the same shape, an exception is raised:

>>> y[np.array([0, 2, 4]), np.array([0, 1])]
Traceback (most recent call last):
...
IndexError: shape mismatch: indexing arrays could not be broadcast together with shapes (3,) (2,)

The broadcasting mechanism permits index arrays to be combined with scalars for other indices. The effect is that the scalar value is used for all the corresponding values of the index arrays:

>>> y[np.array([0, 2, 4]), 1]
array([ 1, 15, 29])

Jumping to the next level of complexity, it is possible to only partially index an array with index arrays. It takes a bit of thought to understand what happens in such cases.
For example if we just use one index array with y: >>> y[np.array([0, 2, 4])] array([[ 0, 1, 2, 3, 4, 5, 6], [14, 15, 16, 17, 18, 19, 20], [28, 29, 30, 31, 32, 33, 34]]) It results in the construction of a new array where each value of the index array selects one row from the array being indexed and the resultant array has the resulting shape (number of index elements, size of row). In general, the shape of the resultant array will be the concatenation of the shape of the index array (or the shape that all the index arrays were broadcast to) with the shape of any unused dimensions (those not indexed) in the array being indexed. #### Example From each row, a specific element should be selected. The row index is just `[0, 1, 2]` and the column index specifies the element to choose for the corresponding row, here `[0, 1, 0]`. Using both together the task can be solved using advanced indexing: >>> x = np.array([[1, 2], [3, 4], [5, 6]]) >>> x[[0, 1, 2], [0, 1, 0]] array([1, 4, 5]) To achieve a behaviour similar to the basic slicing above, broadcasting can be used. The function [`ix_`](../reference/generated/numpy.ix_#numpy.ix_ "numpy.ix_") can help with this broadcasting. This is best understood with an example. #### Example From a 4x3 array the corner elements should be selected using advanced indexing. Thus all elements for which the column is one of `[0, 2]` and the row is one of `[0, 3]` need to be selected. To use advanced indexing one needs to select all elements _explicitly_. Using the method explained previously one could write: >>> x = np.array([[ 0, 1, 2], ... [ 3, 4, 5], ... [ 6, 7, 8], ... [ 9, 10, 11]]) >>> rows = np.array([[0, 0], ... [3, 3]], dtype=np.intp) >>> columns = np.array([[0, 2], ... 
[0, 2]], dtype=np.intp)
>>> x[rows, columns]
array([[ 0,  2],
       [ 9, 11]])

However, since the indexing arrays above just repeat themselves, broadcasting can be used (compare operations such as `rows[:, np.newaxis] + columns`) to simplify this:

>>> rows = np.array([0, 3], dtype=np.intp)
>>> columns = np.array([0, 2], dtype=np.intp)
>>> rows[:, np.newaxis]
array([[0],
       [3]])
>>> x[rows[:, np.newaxis], columns]
array([[ 0,  2],
       [ 9, 11]])

This broadcasting can also be achieved using the function [`ix_`](../reference/generated/numpy.ix_#numpy.ix_ "numpy.ix_"):

>>> x[np.ix_(rows, columns)]
array([[ 0,  2],
       [ 9, 11]])

Note that without the `np.ix_` call, only the diagonal elements would be selected:

>>> x[rows, columns]
array([ 0, 11])

This difference is the most important thing to remember about indexing with multiple advanced indices.

#### Example

A real-life example of where advanced indexing may be useful is for a color lookup table where we want to map the values of an image into RGB triples for display. The lookup table could have a shape (nlookup, 3). Indexing such an array with an image with shape (ny, nx) with dtype=np.uint8 (or any integer type so long as values are within the bounds of the lookup table) will result in an array of shape (ny, nx, 3) where a triple of RGB values is associated with each pixel location.

### Boolean array indexing

This advanced indexing occurs when _obj_ is an array object of Boolean type, such as may be returned from comparison operators. A single boolean index array is practically identical to `x[obj.nonzero()]` where, as described above, [`obj.nonzero()`](../reference/generated/numpy.ndarray.nonzero#numpy.ndarray.nonzero "numpy.ndarray.nonzero") returns a tuple (of length [`obj.ndim`](../reference/generated/numpy.ndarray.ndim#numpy.ndarray.ndim "numpy.ndarray.ndim")) of integer index arrays showing the [`True`](https://docs.python.org/3/library/constants.html#True "\(in Python v3.13\)") elements of _obj_.
However, it is faster when `obj.shape == x.shape`. If `obj.ndim == x.ndim`, `x[obj]` returns a 1-dimensional array filled with the elements of _x_ corresponding to the [`True`](https://docs.python.org/3/library/constants.html#True "\(in Python v3.13\)") values of _obj_. The search order will be [row-major](../glossary#term-row-major), C-style. An `IndexError` will be raised if the shape of _obj_ does not match the corresponding dimensions of _x_, regardless of whether those values are [`True`](https://docs.python.org/3/library/constants.html#True "\(in Python v3.13\)") or [`False`](https://docs.python.org/3/library/constants.html#False "\(in Python v3.13\)").

A common use case for this is filtering for desired element values. For example, one may wish to select all entries from an array which are not [`numpy.nan`](../reference/constants#numpy.nan "numpy.nan"):

>>> x = np.array([[1., 2.], [np.nan, 3.], [np.nan, np.nan]])
>>> x[~np.isnan(x)]
array([1., 2., 3.])

Or wish to add a constant to all negative elements:

>>> x = np.array([1., -1., -2., 3])
>>> x[x < 0] += 20
>>> x
array([ 1., 19., 18.,  3.])

In general if an index includes a Boolean array, the result will be identical to inserting `obj.nonzero()` into the same position and using the integer array indexing mechanism described above. `x[ind_1, boolean_array, ind_2]` is equivalent to `x[(ind_1,) + boolean_array.nonzero() + (ind_2,)]`.

If there is only one Boolean array and no integer indexing array present, this is straightforward. Care must only be taken to make sure that the boolean index has _exactly_ as many dimensions as it is supposed to work with. In general, when the boolean array has fewer dimensions than the array being indexed, this is equivalent to `x[b, ...]`, which means x is indexed by b followed by as many `:` as are needed to fill out the rank of x.
Thus the shape of the result is one dimension containing the number of True elements of the boolean array, followed by the remaining dimensions of the array being indexed:

>>> x = np.arange(35).reshape(5, 7)
>>> b = x > 20
>>> b[:, 5]
array([False, False, False,  True,  True])
>>> x[b[:, 5]]
array([[21, 22, 23, 24, 25, 26, 27],
       [28, 29, 30, 31, 32, 33, 34]])

Here the 4th and 5th rows are selected from the indexed array and combined to make a 2-D array.

#### Example

From an array, select all rows which sum up to less than or equal to two:

>>> x = np.array([[0, 1], [1, 1], [2, 2]])
>>> rowsum = x.sum(-1)
>>> x[rowsum <= 2, :]
array([[0, 1],
       [1, 1]])

Combining multiple Boolean indexing arrays or a Boolean with an integer indexing array can best be understood with the [`obj.nonzero()`](../reference/generated/numpy.ndarray.nonzero#numpy.ndarray.nonzero "numpy.ndarray.nonzero") analogy. The function [`ix_`](../reference/generated/numpy.ix_#numpy.ix_ "numpy.ix_") also supports boolean arrays and will work without any surprises.

#### Example

Use boolean indexing to select all rows adding up to an even number. At the same time columns 0 and 2 should be selected with an advanced integer index. Using the [`ix_`](../reference/generated/numpy.ix_#numpy.ix_ "numpy.ix_") function this can be done with:

>>> x = np.array([[ 0, 1, 2],
...               [ 3, 4, 5],
...               [ 6, 7, 8],
...               [ 9, 10, 11]])
>>> rows = (x.sum(-1) % 2) == 0
>>> rows
array([False,  True, False,  True])
>>> columns = [0, 2]
>>> x[np.ix_(rows, columns)]
array([[ 3,  5],
       [ 9, 11]])

Without the `np.ix_` call, only the diagonal elements would be selected.
Or without `np.ix_` (compare the integer array examples):

>>> rows = rows.nonzero()[0]
>>> x[rows[:, np.newaxis], columns]
array([[ 3,  5],
       [ 9, 11]])

#### Example

Using a 2-D boolean array of shape (2, 3) with four True elements to select rows from a 3-D array of shape (2, 3, 5) results in a 2-D result of shape (4, 5):

>>> x = np.arange(30).reshape(2, 3, 5)
>>> x
array([[[ 0,  1,  2,  3,  4],
        [ 5,  6,  7,  8,  9],
        [10, 11, 12, 13, 14]],

       [[15, 16, 17, 18, 19],
        [20, 21, 22, 23, 24],
        [25, 26, 27, 28, 29]]])
>>> b = np.array([[True, True, False], [False, True, True]])
>>> x[b]
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [20, 21, 22, 23, 24],
       [25, 26, 27, 28, 29]])

### Combining advanced and basic indexing

When there is at least one slice (`:`), ellipsis (`...`) or [`newaxis`](../reference/constants#numpy.newaxis "numpy.newaxis") in the index (or the array has more dimensions than there are advanced indices), then the behaviour can be more complicated. It is like concatenating the indexing result for each advanced index element.

In the simplest case, there is only a _single_ advanced index combined with a slice. For example:

>>> y = np.arange(35).reshape(5, 7)
>>> y[np.array([0, 2, 4]), 1:3]
array([[ 1,  2],
       [15, 16],
       [29, 30]])

In effect, the slice and index array operations are independent. The slice operation extracts columns with index 1 and 2 (i.e. the 2nd and 3rd columns), followed by the index array operation which extracts rows with index 0, 2 and 4 (i.e. the first, third and fifth rows). This is equivalent to:

>>> y[:, 1:3][np.array([0, 2, 4]), :]
array([[ 1,  2],
       [15, 16],
       [29, 30]])

A single advanced index can, for example, replace a slice and the result array will be the same. However, it is a copy and may have a different memory layout. A slice is preferable when it is possible. For example:

>>> x = np.array([[ 0, 1, 2],
...               [ 3, 4, 5],
...               [ 6, 7, 8],
...
[ 9, 10, 11]]) >>> x[1:2, 1:3] array([[4, 5]]) >>> x[1:2, [1, 2]] array([[4, 5]]) The easiest way to understand a combination of _multiple_ advanced indices may be to think in terms of the resulting shape. There are two parts to the indexing operation, the subspace defined by the basic indexing (excluding integers) and the subspace from the advanced indexing part. Two cases of index combination need to be distinguished: * The advanced indices are separated by a slice, [`Ellipsis`](https://docs.python.org/3/library/constants.html#Ellipsis "\(in Python v3.13\)") or [`newaxis`](../reference/constants#numpy.newaxis "numpy.newaxis"). For example `x[arr1, :, arr2]`. * The advanced indices are all next to each other. For example `x[..., arr1, arr2, :]` but _not_ `x[arr1, :, 1]` since `1` is an advanced index in this regard. In the first case, the dimensions resulting from the advanced indexing operation come first in the result array, and the subspace dimensions after that. In the second case, the dimensions from the advanced indexing operations are inserted into the result array at the same spot as they were in the initial array (the latter logic is what makes simple advanced indexing behave just like slicing). #### Example Suppose `x.shape` is (10, 20, 30) and `ind` is a (2, 5, 2)-shaped indexing [`intp`](../reference/arrays.scalars#numpy.intp "numpy.intp") array, then `result = x[..., ind, :]` has shape (10, 2, 5, 2, 30) because the (20,)-shaped subspace has been replaced with a (2, 5, 2)-shaped broadcasted indexing subspace. If we let _i, j, k_ loop over the (2, 5, 2)-shaped subspace then `result[..., i, j, k, :] = x[..., ind[i, j, k], :]`. This example produces the same result as [`x.take(ind, axis=-2)`](../reference/generated/numpy.ndarray.take#numpy.ndarray.take "numpy.ndarray.take"). #### Example Let `x.shape` be (10, 20, 30, 40, 50) and suppose `ind_1` and `ind_2` can be broadcast to the shape (2, 3, 4). 
Then `x[:, ind_1, ind_2]` has shape (10, 2, 3, 4, 40, 50) because the (20, 30)-shaped subspace from X has been replaced with the (2, 3, 4) subspace from the indices. However, `x[:, ind_1, :, ind_2]` has shape (2, 3, 4, 10, 30, 50) because there is no unambiguous place to drop in the indexing subspace, thus it is tacked on to the beginning. It is always possible to use [`.transpose()`](../reference/generated/numpy.ndarray.transpose#numpy.ndarray.transpose "numpy.ndarray.transpose") to move the subspace anywhere desired. Note that this example cannot be replicated using [`take`](../reference/generated/numpy.take#numpy.take "numpy.take").

#### Example

Slicing can be combined with broadcasted boolean indices:

>>> x = np.arange(35).reshape(5, 7)
>>> b = x > 20
>>> b
array([[False, False, False, False, False, False, False],
       [False, False, False, False, False, False, False],
       [False, False, False, False, False, False, False],
       [ True,  True,  True,  True,  True,  True,  True],
       [ True,  True,  True,  True,  True,  True,  True]])
>>> x[b[:, 5], 1:3]
array([[22, 23],
       [29, 30]])

## Field access

See also [Structured arrays](basics.rec#structured-arrays)

If the [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") object is a structured array the [fields](../glossary#term-field) of the array can be accessed by indexing the array with strings, dictionary-like.

Indexing `x['field-name']` returns a new [view](../glossary#term-view) to the array, which is of the same shape as _x_ (except when the field is a sub-array) but of data type `x.dtype['field-name']` and contains only the part of the data in the specified field. Also, [record array](../reference/arrays.classes#arrays-classes-rec) scalars can be “indexed” this way.

Indexing into a structured array can also be done with a list of field names, e.g. `x[['field-name1', 'field-name2']]`. As of NumPy 1.16, this returns a view containing only those fields. In older versions of NumPy, it returned a copy.
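The view semantics described above can be demonstrated with a short sketch (the field names and dtypes are illustrative):

```python
import numpy as np

# A small structured array with two fields.
x = np.zeros(3, dtype=[('a', np.int32), ('b', np.float64)])

# Single-field indexing returns a view: writing through it
# modifies the original structured array.
a = x['a']
a[:] = [1, 2, 3]
assert x['a'].tolist() == [1, 2, 3]
assert a.dtype == x.dtype['a']

# Multi-field indexing (NumPy >= 1.16) also returns a view, not a copy.
sub = x[['a', 'b']]
assert sub.base is x
```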
See the user guide section on [Structured arrays](basics.rec#structured-arrays) for more information on multifield indexing.

If the accessed field is a sub-array, the dimensions of the sub-array are appended to the shape of the result. For example:

>>> x = np.zeros((2, 2), dtype=[('a', np.int32), ('b', np.float64, (3, 3))])
>>> x['a'].shape
(2, 2)
>>> x['a'].dtype
dtype('int32')
>>> x['b'].shape
(2, 2, 3, 3)
>>> x['b'].dtype
dtype('float64')

## Flat iterator indexing

[`x.flat`](../reference/generated/numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") returns an iterator that will iterate over the entire array (in C-contiguous style with the last index varying the fastest). This iterator object can also be indexed using basic slicing or advanced indexing as long as the selection object is not a tuple. This should be clear from the fact that [`x.flat`](../reference/generated/numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") is a 1-dimensional view. It can be used for integer indexing with 1-dimensional C-style-flat indices. The shape of any returned array is therefore the shape of the integer indexing object.

## Assigning values to indexed arrays

As mentioned, one can select a subset of an array to assign to using a single index, slices, and index and mask arrays. The value being assigned to the indexed array must be shape consistent (the same shape or broadcastable to the shape the index produces). For example, it is permitted to assign a constant to a slice:

>>> x = np.arange(10)
>>> x[2:7] = 1

or an array of the right size:

>>> x[2:7] = np.arange(5)

Note that assignments may result in changes if assigning higher types to lower types (like floats to ints) or even exceptions (assigning complex to floats or ints):

>>> x[1] = 1.2
>>> x[1]
1
>>> x[1] = 1.2j
Traceback (most recent call last):
...
TypeError: can't convert complex to int

Unlike some of the references (such as array and mask indices), assignments are always made to the original data in the array (indeed, nothing else would make sense!). Note, though, that some actions may not work as one may naively expect. This particular example is often surprising to people:

>>> x = np.arange(0, 50, 10)
>>> x
array([ 0, 10, 20, 30, 40])
>>> x[np.array([1, 1, 3, 1])] += 1
>>> x
array([ 0, 11, 20, 31, 40])

People expect that the 1st location will be incremented by 3; in fact, it will only be incremented by 1. The reason is that a new array is extracted from the original (as a temporary) containing the values at 1, 1, 3, 1, then the value 1 is added to the temporary, and then the temporary is assigned back to the original array. Thus the value `x[1] + 1` is assigned to `x[1]` three times, rather than `x[1]` being incremented 3 times.

## Dealing with variable numbers of indices within programs

The indexing syntax is very powerful but limiting when dealing with a variable number of indices. For example, if you want to write a function that can handle arguments with various numbers of dimensions without having to write special case code for each number of possible dimensions, how can that be done? If one supplies to the index a tuple, the tuple will be interpreted as a list of indices. For example:

>>> z = np.arange(81).reshape(3, 3, 3, 3)
>>> indices = (1, 1, 1, 1)
>>> z[indices]
40

So one can use code to construct tuples of any number of indices and then use these within an index. Slices can be specified within programs by using the slice() function in Python.
For example: >>> indices = (1, 1, 1, slice(0, 2)) # same as [1, 1, 1, 0:2] >>> z[indices] array([39, 40]) Likewise, ellipsis can be specified by code by using the Ellipsis object: >>> indices = (1, Ellipsis, 1) # same as [1, ..., 1] >>> z[indices] array([[28, 31, 34], [37, 40, 43], [46, 49, 52]]) For this reason, it is possible to use the output from the [`np.nonzero()`](../reference/generated/numpy.ndarray.nonzero#numpy.ndarray.nonzero "numpy.ndarray.nonzero") function directly as an index since it always returns a tuple of index arrays. Because of the special treatment of tuples, they are not automatically converted to an array as a list would be. As an example: >>> z[[1, 1, 1, 1]] # produces a large array array([[[[27, 28, 29], [30, 31, 32], ... >>> z[(1, 1, 1, 1)] # returns a single value 40 ## Detailed notes These are some detailed notes, which are not of importance for day to day indexing (in no particular order): * The native NumPy indexing type is `intp` and may differ from the default integer array type. `intp` is the smallest data type sufficient to safely index any array; for advanced indexing it may be faster than other types. * For advanced assignments, there is in general no guarantee for the iteration order. This means that if an element is set more than once, it is not possible to predict the final result. * An empty (tuple) index is a full scalar index into a zero-dimensional array. `x[()]` returns a _scalar_ if `x` is zero-dimensional and a view otherwise. On the other hand, `x[...]` always returns a view. * If a zero-dimensional array is present in the index _and_ it is a full integer index the result will be a _scalar_ and not a zero-dimensional array. (Advanced indexing is not triggered.) * When an ellipsis (`...`) is present but has no size (i.e. replaces zero `:`) the result will still always be an array. A view if no advanced index is present, otherwise a copy. 
* The `nonzero` equivalence for Boolean arrays does not hold for zero dimensional boolean arrays. * When the result of an advanced indexing operation has no elements but an individual index is out of bounds, whether or not an `IndexError` is raised is undefined (e.g. `x[[], [123]]` with `123` being out of bounds). * When a _casting_ error occurs during assignment (for example updating a numerical array using a sequence of strings), the array being assigned to may end up in an unpredictable partially updated state. However, if any other error (such as an out of bounds index) occurs, the array will remain unchanged. * The memory layout of an advanced indexing result is optimized for each indexing operation and no particular memory order can be assumed. * When using a subclass (especially one which manipulates its shape), the default `ndarray.__setitem__` behaviour will call `__getitem__` for _basic_ indexing but not for _advanced_ indexing. For such a subclass it may be preferable to call `ndarray.__setitem__` with a _base class_ ndarray view on the data. This _must_ be done if the subclass's `__getitem__` does not return views. # Interoperability with NumPy NumPy's ndarray objects provide both a high-level API for operations on array-structured data and a concrete implementation of the API based on [strided in-RAM storage](../reference/arrays#arrays). While this API is powerful and fairly general, its concrete implementation has limitations. As datasets grow and NumPy becomes used in a variety of new environments and architectures, there are cases where the strided in-RAM storage strategy is inappropriate, which has caused different libraries to reimplement this API for their own uses.
This includes GPU arrays ([CuPy](https://cupy.dev/)), Sparse arrays ([`scipy.sparse`](https://docs.scipy.org/doc/scipy/reference/sparse.html#module-scipy.sparse "\(in SciPy v1.14.1\)"), [PyData/Sparse](https://sparse.pydata.org/)) and parallel arrays ([Dask](https://docs.dask.org/) arrays) as well as various NumPy-like implementations in deep learning frameworks, like [TensorFlow](https://www.tensorflow.org/) and [PyTorch](https://pytorch.org/). Similarly, there are many projects that build on top of the NumPy API for labeled and indexed arrays ([XArray](https://xarray.dev/)), automatic differentiation ([JAX](https://jax.readthedocs.io/)), masked arrays ([`numpy.ma`](../reference/maskedarray.generic#module-numpy.ma "numpy.ma")), physical units ([astropy.units](https://docs.astropy.org/en/stable/units/), [pint](https://pint.readthedocs.io/), [unyt](https://unyt.readthedocs.io/)), among others that add additional functionality on top of the NumPy API. Yet, users still want to work with these arrays using the familiar NumPy API and reuse existing code with minimal (ideally zero) porting overhead. With this goal in mind, various protocols are defined for implementations of multi-dimensional arrays with high-level APIs matching NumPy. Broadly speaking, there are three groups of features used for interoperability with NumPy: 1. Methods of turning a foreign object into an ndarray; 2. Methods of deferring execution from a NumPy function to another array library; 3. Methods that use NumPy functions and return an instance of a foreign object. We describe these features below. ## 1\. Using arbitrary objects in NumPy The first set of interoperability features from the NumPy API allows foreign objects to be treated as NumPy arrays whenever possible. When NumPy functions encounter a foreign object, they will try (in order): 1. The buffer protocol, described [in the Python C-API documentation](https://docs.python.org/3/c-api/buffer.html "\(in Python v3.13\)"). 2.
The `__array_interface__` protocol, described [in this page](../reference/arrays.interface#arrays-interface). A precursor to Python’s buffer protocol, it defines a way to access the contents of a NumPy array from other C extensions. 3. The `__array__()` method, which asks an arbitrary object to convert itself into an array. For both the buffer and the `__array_interface__` protocols, the object describes its memory layout and NumPy does everything else (zero-copy if possible). If that’s not possible, the object itself is responsible for returning a `ndarray` from `__array__()`. [DLPack](https://dmlc.github.io/dlpack/latest/index.html "\(in DLPack\)") is yet another protocol to convert foreign objects to NumPy arrays in a language and device agnostic manner. NumPy doesn’t implicitly convert objects to ndarrays using DLPack. It provides the function [`numpy.from_dlpack`](../reference/generated/numpy.from_dlpack#numpy.from_dlpack "numpy.from_dlpack") that accepts any object implementing the `__dlpack__` method and outputs a NumPy ndarray (which is generally a view of the input object’s data buffer). The [Python Specification for DLPack](https://dmlc.github.io/dlpack/latest/python_spec.html#python-spec "\(in DLPack\)") page explains the `__dlpack__` protocol in detail. ### The array interface protocol The [array interface protocol](../reference/arrays.interface#arrays-interface) defines a way for array-like objects to reuse each other’s data buffers. 
Its implementation relies on the existence of the following attributes or methods: * `__array_interface__`: a Python dictionary containing the shape, the element type, and optionally, the data buffer address and the strides of an array-like object; * `__array__()`: a method returning the NumPy ndarray copy or a view of an array-like object; The `__array_interface__` attribute can be inspected directly: >>> import numpy as np >>> x = np.array([1, 2, 5.0, 8]) >>> x.__array_interface__ {'data': (94708397920832, False), 'strides': None, 'descr': [('', '<f8')], 'typestr': '<f8', 'shape': (4,), 'version': 3} The `__array_interface__` attribute can also be used to manipulate the object data in place: >>> class wrapper(): ... pass ... >>> arr = np.array([1, 2, 3, 4]) >>> buf = arr.__array_interface__ >>> buf {'data': (140497590272032, False), 'strides': None, 'descr': [('', '<i8')], 'typestr': '<i8', 'shape': (4,), 'version': 3} >>> buf['shape'] = (2, 2) >>> w = wrapper() >>> w.__array_interface__ = buf >>> new_arr = np.array(w, copy=False) >>> new_arr array([[1, 2], [3, 4]]) We can check that `arr` and `new_arr` share the same data buffer: >>> new_arr[0, 0] = 1000 >>> new_arr array([[1000, 2], [ 3, 4]]) >>> arr array([1000, 2, 3, 4]) ### The `__array__()` method The `__array__()` method ensures that any NumPy-like object (an array, any object exposing the array interface, an object whose `__array__()` method returns an array or any nested sequence) that implements it can be used as a NumPy array. If possible, this will mean using `__array__()` to create a NumPy ndarray view of the array-like object. Otherwise, this copies the data into a new ndarray object. This is not optimal, as coercing arrays into ndarrays may cause performance problems or create the need for copies and loss of metadata, as the original object and any attributes/behavior it may have had, is lost. The signature of the method should be `__array__(self, dtype=None, copy=None)`. If a passed `dtype` isn't `None` and different than the object's data type, a casting should happen to a specified type. If `copy` is `None`, a copy should be made only if `dtype` argument enforces it.
For `copy=True`, a copy should always be made, where `copy=False` should raise an exception if a copy is needed. If a class implements the old signature `__array__(self)`, for `np.array(a)` a warning will be raised saying that `dtype` and `copy` arguments are missing. To see an example of a custom array implementation including the use of `__array__()`, see [Writing custom array containers](basics.dispatch#basics-dispatch). ### The DLPack Protocol The [DLPack](https://dmlc.github.io/dlpack/latest/index.html "\(in DLPack\)") protocol defines a memory-layout of strided n-dimensional array objects. It offers the following syntax for data exchange: 1. A [`numpy.from_dlpack`](../reference/generated/numpy.from_dlpack#numpy.from_dlpack "numpy.from_dlpack") function, which accepts (array) objects with a `__dlpack__` method and uses that method to construct a new array containing the data from `x`. 2. `__dlpack__(self, stream=None)` and `__dlpack_device__` methods on the array object, which will be called from within `from_dlpack`, to query what device the array is on (may be needed to pass in the correct stream, e.g. in the case of multiple GPUs) and to access the data. Unlike the buffer protocol, DLPack allows exchanging arrays containing data on devices other than the CPU (e.g. Vulkan or GPU). Since NumPy only supports CPU, it can only convert objects whose data exists on the CPU. But other libraries, like [PyTorch](https://pytorch.org/) and [CuPy](https://cupy.dev/), may exchange data on GPU using this protocol. ## 2\. Operating on foreign objects without converting A second set of methods defined by the NumPy API allows us to defer the execution from a NumPy function to another array library. Consider the following function. >>> import numpy as np >>> def f(x): ...
return np.mean(np.exp(x)) Note that [`np.exp`](../reference/generated/numpy.exp#numpy.exp "numpy.exp") is a [ufunc](basics.ufuncs#ufuncs-basics), which means that it operates on ndarrays in an element-by-element fashion. On the other hand, [`np.mean`](../reference/generated/numpy.mean#numpy.mean "numpy.mean") operates along one of the array’s axes. We can apply `f` to a NumPy ndarray object directly: >>> x = np.array([1, 2, 3, 4]) >>> f(x) 21.1977562209304 We would like this function to work equally well with any NumPy-like array object. NumPy allows a class to indicate that it would like to handle computations in a custom-defined way through the following interfaces: * `__array_ufunc__`: allows third-party objects to support and override [ufuncs](basics.ufuncs#ufuncs-basics). * `__array_function__`: a catch-all for NumPy functionality that is not covered by the `__array_ufunc__` protocol for universal functions. As long as foreign objects implement the `__array_ufunc__` or `__array_function__` protocols, it is possible to operate on them without the need for explicit conversion. ### The `__array_ufunc__` protocol A [universal function (or ufunc for short)](basics.ufuncs#ufuncs-basics) is a “vectorized” wrapper for a function that takes a fixed number of specific inputs and produces a fixed number of specific outputs. The output of the ufunc (and its methods) is not necessarily a ndarray, if not all input arguments are ndarrays. Indeed, if any input defines an `__array_ufunc__` method, control will be passed completely to that function, i.e., the ufunc is overridden. The `__array_ufunc__` method defined on that (non-ndarray) object has access to the NumPy ufunc. Because ufuncs have a well-defined structure, the foreign `__array_ufunc__` method may rely on ufunc attributes like `.at()`, `.reduce()`, and others. A subclass can override what happens when executing NumPy ufuncs on it by overriding the default `ndarray.__array_ufunc__` method. 
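As a concrete sketch of such an override, here is a minimal duck array that handles ufunc calls itself (a stripped-down variant of the `DiagonalArray` toy container from [Writing custom array containers](basics.dispatch#basics-dispatch); the class and its attributes are purely illustrative, not a NumPy API):

```python
import numpy as np

class DiagonalArray:
    """Illustrative duck array storing one value for the diagonal of an n x n matrix."""

    def __init__(self, n, value):
        self._n = n
        self._value = value

    def __repr__(self):
        return f"DiagonalArray(n={self._n}, value={self._value})"

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        if method != "__call__":
            # Defer ufunc methods such as .reduce() or .at() back to NumPy.
            return NotImplemented
        scalars = []
        for x in inputs:
            if isinstance(x, DiagonalArray):
                scalars.append(x._value)
            elif np.isscalar(x):
                scalars.append(x)
            else:
                return NotImplemented
        # Apply the ufunc to the scalar diagonal values and re-wrap the result.
        return DiagonalArray(self._n, ufunc(*scalars, **kwargs))

d = DiagonalArray(5, 2.0)
result = np.multiply(d, 3)  # dispatches to DiagonalArray.__array_ufunc__
```

Because `np.multiply` sees that its input defines `__array_ufunc__`, the call above never materializes a dense 5x5 array; the result stays a `DiagonalArray` holding the value 6.0.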
This method is executed instead of the ufunc and should return either the result of the operation, or `NotImplemented` if the operation requested is not implemented. ### The `__array_function__` protocol To achieve enough coverage of the NumPy API to support downstream projects, there is a need to go beyond `__array_ufunc__` and implement a protocol that allows arguments of a NumPy function to take control and divert execution to another function (for example, a GPU or parallel implementation) in a way that is safe and consistent across projects. The semantics of `__array_function__` are very similar to `__array_ufunc__`, except the operation is specified by an arbitrary callable object rather than a ufunc instance and method. For more details, see [NEP 18 — A dispatch mechanism for NumPy’s high level array functions](https://numpy.org/neps/nep-0018-array-function-protocol.html#nep18 "\(in NumPy Enhancement Proposals\)"). ## 3\. Returning foreign objects A third type of feature set is meant to use the NumPy function implementation and then convert the return value back into an instance of the foreign object. The `__array_finalize__` and `__array_wrap__` methods act behind the scenes to ensure that the return type of a NumPy function can be specified as needed. The `__array_finalize__` method is the mechanism that NumPy provides to allow subclasses to handle the various ways that new instances get created. This method is called whenever the system internally allocates a new array from an object which is a subclass (subtype) of the ndarray. It can be used to change attributes after construction, or to update meta-information from the “parent.” The `__array_wrap__` method “wraps up the action” in the sense of allowing any object (such as user-defined functions) to set the type of its return value and update attributes and metadata. This can be seen as the opposite of the `__array__` method. 
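As a minimal sketch of the wrapping mechanism (the `Tagged` subclass here is hypothetical, and the `*args` signature is used to stay agnostic about the extra `context`/`return_scalar` arguments NumPy passes):

```python
import numpy as np

class Tagged(np.ndarray):
    """Illustrative ndarray subclass whose __array_wrap__ re-wraps ufunc results."""

    def __array_wrap__(self, obj, *args, **kwargs):
        # Called after a ufunc has produced its raw result: view it as Tagged again.
        return np.asarray(obj).view(Tagged)

t = np.arange(4).view(Tagged)
r = np.exp(t)
print(type(r).__name__)  # prints: Tagged
```

The ufunc computes into a plain result and then hands it to `__array_wrap__`, which decides the final return type, here turning it back into a `Tagged` view.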
At the end of every ufunc, this method is called on the input object with the highest _array priority_, or the output object if one was specified. The `__array_priority__` attribute is used to determine what type of object to return in situations where there is more than one possibility for the Python type of the returned object. For example, subclasses may opt to use this method to transform the output array into an instance of the subclass and update metadata before returning the array to the user. For more information on these methods, see [Subclassing ndarray](basics.subclassing#basics-subclassing) and [Specific features of ndarray sub-typing](c-info.beyond-basics#specific-array-subtyping). ## Interoperability examples ### Example: Pandas `Series` objects Consider the following: >>> import pandas as pd >>> ser = pd.Series([1, 2, 3, 4]) >>> type(ser) pandas.core.series.Series Now, `ser` is **not** a ndarray, but because it [implements the __array_ufunc__ protocol](https://pandas.pydata.org/docs/user_guide/dsintro.html#dataframe-interoperability-with-numpy-functions), we can apply ufuncs to it as if it were a ndarray: >>> np.exp(ser) 0 2.718282 1 7.389056 2 20.085537 3 54.598150 dtype: float64 >>> np.sin(ser) 0 0.841471 1 0.909297 2 0.141120 3 -0.756802 dtype: float64 We can even do operations with other ndarrays: >>> np.add(ser, np.array([5, 6, 7, 8])) 0 6 1 8 2 10 3 12 dtype: int64 >>> f(ser) 21.1977562209304 >>> result = ser.__array__() >>> type(result) numpy.ndarray ### Example: PyTorch tensors [PyTorch](https://pytorch.org/) is an optimized tensor library for deep learning using GPUs and CPUs. PyTorch arrays are commonly called _tensors_. Tensors are similar to NumPy's ndarrays, except that tensors can run on GPUs or other hardware accelerators. In fact, tensors and NumPy arrays can often share the same underlying memory, eliminating the need to copy data.
>>> import torch >>> data = [[1, 2],[3, 4]] >>> x_np = np.array(data) >>> x_tensor = torch.tensor(data) Note that `x_np` and `x_tensor` are different kinds of objects: >>> x_np array([[1, 2], [3, 4]]) >>> x_tensor tensor([[1, 2], [3, 4]]) However, we can treat PyTorch tensors as NumPy arrays without the need for explicit conversion: >>> np.exp(x_tensor) tensor([[ 2.7183, 7.3891], [20.0855, 54.5982]], dtype=torch.float64) Also, note that the return type of this function is compatible with the initial data type. Warning While this mixing of ndarrays and tensors may be convenient, it is not recommended. It will not work for non-CPU tensors, and will have unexpected behavior in corner cases. Users should prefer explicitly converting the ndarray to a tensor. Note PyTorch does not implement `__array_function__` or `__array_ufunc__`. Under the hood, the `Tensor.__array__()` method returns a NumPy ndarray as a view of the tensor data buffer. See [this issue](https://github.com/pytorch/pytorch/issues/24015) and the [__torch_function__ implementation](https://github.com/pytorch/pytorch/blob/master/torch/overrides.py) for details. Note also that we can see `__array_wrap__` in action here, even though `torch.Tensor` is not a subclass of ndarray: >>> import torch >>> t = torch.arange(4) >>> np.abs(t) tensor([0, 1, 2, 3]) PyTorch implements `__array_wrap__` to be able to get tensors back from NumPy functions, and we can modify it directly to control which type of objects are returned from these functions. ### Example: CuPy arrays CuPy is a NumPy/SciPy-compatible array library for GPU-accelerated computing with Python. CuPy implements a subset of the NumPy interface by implementing `cupy.ndarray`, [a counterpart to NumPy ndarrays](https://docs.cupy.dev/en/stable/reference/ndarray.html). >>> import cupy as cp >>> x_gpu = cp.array([1, 2, 3, 4]) The `cupy.ndarray` object implements the `__array_ufunc__` interface. 
This enables NumPy ufuncs to be applied to CuPy arrays (this will defer operation to the matching CuPy CUDA/ROCm implementation of the ufunc): >>> np.mean(np.exp(x_gpu)) array(21.19775622) Note that the return type of these operations is still consistent with the initial type: >>> arr = cp.random.randn(1, 2, 3, 4).astype(cp.float32) >>> result = np.sum(arr) >>> print(type(result)) <class 'cupy._core.core.ndarray'> See [this page in the CuPy documentation for details](https://docs.cupy.dev/en/stable/reference/ufunc.html). `cupy.ndarray` also implements the `__array_function__` interface, meaning it is possible to do operations such as >>> a = np.random.randn(100, 100) >>> a_gpu = cp.asarray(a) >>> qr_gpu = np.linalg.qr(a_gpu) CuPy implements many NumPy functions on `cupy.ndarray` objects, but not all. See [the CuPy documentation](https://docs.cupy.dev/en/stable/user_guide/difference.html) for details. ### Example: Dask arrays Dask is a flexible library for parallel computing in Python. Dask Array implements a subset of the NumPy ndarray interface using blocked algorithms, cutting up the large array into many small arrays. This allows computations on larger-than-memory arrays using multiple cores. Dask supports `__array__()` and `__array_ufunc__`. >>> import dask.array as da >>> x = da.random.normal(1, 0.1, size=(20, 20), chunks=(10, 10)) >>> np.mean(np.exp(x)) dask.array<mean_agg-aggregate, shape=(), dtype=float64, chunksize=(), chunktype=numpy.ndarray> >>> np.mean(np.exp(x)).compute() 5.090097550553843 Note Dask is lazily evaluated, and the result from a computation isn't computed until you ask for it by invoking `compute()`. See [the Dask array documentation](https://docs.dask.org/en/stable/array.html) and the [scope of Dask arrays interoperability with NumPy arrays](https://docs.dask.org/en/stable/array.html#scope) for details. ### Example: DLPack Several Python data science libraries implement the `__dlpack__` protocol. Among them are [PyTorch](https://pytorch.org/) and [CuPy](https://cupy.dev/).
A full list of libraries that implement this protocol can be found on [this page of DLPack documentation](https://dmlc.github.io/dlpack/latest/index.html "\(in DLPack\)"). Convert a PyTorch CPU tensor to NumPy array: >>> import torch >>> x_torch = torch.arange(5) >>> x_torch tensor([0, 1, 2, 3, 4]) >>> x_np = np.from_dlpack(x_torch) >>> x_np array([0, 1, 2, 3, 4]) >>> # note that x_np is a view of x_torch >>> x_torch[1] = 100 >>> x_torch tensor([ 0, 100, 2, 3, 4]) >>> x_np array([ 0, 100, 2, 3, 4]) The imported arrays are read-only so writing or operating in-place will fail: >>> x_np.flags.writeable False >>> x_np[1] = 1 Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: assignment destination is read-only A copy must be created in order to operate on the imported arrays in-place, but will mean duplicating the memory. Do not do this for very large arrays: >>> x_np_copy = x_np.copy() >>> x_np_copy.sort() # works Note GPU tensors can't be converted to NumPy arrays since NumPy doesn't support GPU devices: >>> x_torch = torch.arange(5, device='cuda') >>> np.from_dlpack(x_torch) Traceback (most recent call last): File "<stdin>", line 1, in <module> RuntimeError: Unsupported device in DLTensor. But, if both libraries support the device the data buffer is on, it is possible to use the `__dlpack__` protocol (e.g.
[PyTorch](https://pytorch.org/) and [CuPy](https://cupy.dev/)): >>> x_torch = torch.arange(5, device='cuda') >>> x_cupy = cupy.from_dlpack(x_torch) Similarly, a NumPy array can be converted to a PyTorch tensor: >>> x_np = np.arange(5) >>> x_torch = torch.from_dlpack(x_np) Read-only arrays cannot be exported: >>> x_np = np.arange(5) >>> x_np.flags.writeable = False >>> torch.from_dlpack(x_np) Traceback (most recent call last): File "<stdin>", line 1, in <module> File ".../site-packages/torch/utils/dlpack.py", line 63, in from_dlpack dlpack = ext_tensor.__dlpack__() TypeError: NumPy currently only supports dlpack for writeable arrays ## Further reading * [The array interface protocol](../reference/arrays.interface#arrays-interface) * [Writing custom array containers](basics.dispatch#basics-dispatch) * [Special attributes and methods](../reference/arrays.classes#special-attributes-and-methods) (details on the `__array_ufunc__` and `__array_function__` protocols) * [Subclassing ndarray](basics.subclassing#basics-subclassing) (details on the `__array_wrap__` and `__array_finalize__` methods) * [Specific features of ndarray sub-typing](c-info.beyond-basics#specific-array-subtyping) (more details on the implementation of `__array_finalize__`, `__array_wrap__` and `__array_priority__`) * [NumPy roadmap: interoperability](https://numpy.org/neps/roadmap.html "\(in NumPy Enhancement Proposals\)") * [PyTorch documentation on the Bridge with NumPy](https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#bridge-to-np-label) # Importing data with genfromtxt NumPy provides several functions to create arrays from tabular data. We focus here on the [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") function. In a nutshell, [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") runs two main loops. The first loop converts each line of the file into a sequence of strings.
The second loop converts each string to the appropriate data type. This mechanism is slower than a single loop, but gives more flexibility. In particular, [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") is able to take missing data into account, when other faster and simpler functions like [`loadtxt`](../reference/generated/numpy.loadtxt#numpy.loadtxt "numpy.loadtxt") cannot. Note When giving examples, we will use the following conventions: >>> import numpy as np >>> from io import StringIO ## Defining the input The only mandatory argument of [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") is the source of the data. It can be a string, a list of strings, a generator or an open file-like object with a `read` method, for example, a file or [`io.StringIO`](https://docs.python.org/3/library/io.html#io.StringIO "\(in Python v3.13\)") object. If a single string is provided, it is assumed to be the name of a local or remote file. If a list of strings or a generator returning strings is provided, each string is treated as one line in a file. When the URL of a remote file is passed, the file is automatically downloaded to the current directory and opened. Recognized file types are text files and archives. Currently, the function recognizes `gzip` and `bz2` (`bzip2`) archives. The type of the archive is determined from the extension of the file: if the filename ends with `'.gz'`, a `gzip` archive is expected; if it ends with `'bz2'`, a `bzip2` archive is assumed. ## Splitting the lines into columns ### The `delimiter` argument Once the file is defined and open for reading, [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") splits each non-empty line into a sequence of strings. Empty or commented lines are just skipped. The `delimiter` keyword is used to define how the splitting should take place. 
Quite often, a single character marks the separation between columns. For example, comma-separated files (CSV) use a comma (`,`) or a semicolon (`;`) as delimiter: >>> data = "1, 2, 3\n4, 5, 6" >>> np.genfromtxt(StringIO(data), delimiter=",") array([[1., 2., 3.], [4., 5., 6.]]) Another common separator is `"\t"`, the tabulation character. However, we are not limited to a single character, any string will do. By default, [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") assumes `delimiter=None`, meaning that the line is split along white spaces (including tabs) and that consecutive white spaces are considered as a single white space. Alternatively, we may be dealing with a fixed-width file, where columns are defined as a given number of characters. In that case, we need to set `delimiter` to a single integer (if all the columns have the same size) or to a sequence of integers (if columns can have different sizes): >>> data = " 1 2 3\n 4 5 67\n890123 4" >>> np.genfromtxt(StringIO(data), delimiter=3) array([[ 1., 2., 3.], [ 4., 5., 67.], [890., 123., 4.]]) >>> data = "123456789\n 4 7 9\n 4567 9" >>> np.genfromtxt(StringIO(data), delimiter=(4, 3, 2)) array([[1234., 567., 89.], [ 4., 7., 9.], [ 4., 567., 9.]]) ### The `autostrip` argument By default, when a line is decomposed into a series of strings, the individual entries are not stripped of leading nor trailing white spaces. This behavior can be overwritten by setting the optional argument `autostrip` to a value of `True`: >>> data = "1, abc , 2\n 3, xxx, 4" >>> # Without autostrip >>> np.genfromtxt(StringIO(data), delimiter=",", dtype="|U5") array([['1', ' abc ', ' 2'], ['3', ' xxx', ' 4']], dtype='<U5') >>> # With autostrip >>> np.genfromtxt(StringIO(data), delimiter=",", dtype="|U5", autostrip=True) array([['1', 'abc', '2'], ['3', 'xxx', '4']], dtype='<U5') ### The `comments` argument The optional argument `comments` is used to define a character string that marks the beginning of a comment. By default, [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") assumes `comments='#'`. The comment marker may occur anywhere on the line. Any character present after the comment marker(s) is simply ignored: >>> data = """# ... # Skip me ! ... # Skip me too ! ... 1, 2 ... 3, 4 ... 5, 6 #This is the third line of the data ... 7, 8 ...
# And here comes the last line ... 9, 0 ... """ >>> np.genfromtxt(StringIO(data), comments="#", delimiter=",") array([[1., 2.], [3., 4.], [5., 6.], [7., 8.], [9., 0.]]) Note There is one notable exception to this behavior: if the optional argument `names=True`, the first commented line will be examined for names. ## Skipping lines and choosing columns ### The `skip_header` and `skip_footer` arguments The presence of a header in the file can hinder data processing. In that case, we need to use the `skip_header` optional argument. The values of this argument must be an integer which corresponds to the number of lines to skip at the beginning of the file, before any other action is performed. Similarly, we can skip the last `n` lines of the file by using the `skip_footer` attribute and giving it a value of `n`: >>> data = "\n".join(str(i) for i in range(10)) >>> np.genfromtxt(StringIO(data),) array([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]) >>> np.genfromtxt(StringIO(data), ... skip_header=3, skip_footer=5) array([3., 4.]) By default, `skip_header=0` and `skip_footer=0`, meaning that no lines are skipped. ### The `usecols` argument In some cases, we are not interested in all the columns of the data but only a few of them. We can select which columns to import with the `usecols` argument. This argument accepts a single integer or a sequence of integers corresponding to the indices of the columns to import. Remember that by convention, the first column has an index of 0. Negative integers behave the same as regular Python negative indexes. 
For example, if we want to import only the first and the last columns, we can use `usecols=(0, -1)`: >>> data = "1 2 3\n4 5 6" >>> np.genfromtxt(StringIO(data), usecols=(0, -1)) array([[1., 3.], [4., 6.]]) If the columns have names, we can also select which columns to import by giving their name to the `usecols` argument, either as a sequence of strings or a comma-separated string: >>> data = "1 2 3\n4 5 6" >>> np.genfromtxt(StringIO(data), ... names="a, b, c", usecols=("a", "c")) array([(1., 3.), (4., 6.)], dtype=[('a', '<f8'), ('c', '<f8')]) >>> np.genfromtxt(StringIO(data), ... names="a, b, c", usecols=("a, c")) array([(1., 3.), (4., 6.)], dtype=[('a', '<f8'), ('c', '<f8')]) ## Setting the names ### The `names` argument A natural approach when dealing with tabular data is to allocate a name to each column. A first possibility is to use an explicit structured dtype, as mentioned previously: >>> data = StringIO("1 2 3\n 4 5 6") >>> np.genfromtxt(data, dtype=[(_, int) for _ in "abc"]) array([(1, 2, 3), (4, 5, 6)], dtype=[('a', '<i8'), ('b', '<i8'), ('c', '<i8')]) Another simpler possibility is to use the `names` keyword with a comma-separated string or a sequence of strings: >>> data = StringIO("1 2 3\n 4 5 6") >>> np.genfromtxt(data, names="A, B, C") array([(1., 2., 3.), (4., 5., 6.)], dtype=[('A', '<f8'), ('B', '<f8'), ('C', '<f8')]) We may sometimes need to define the column names from the data itself. In that case, we must use the `names` keyword with a value of `True`. The names will then be read from the first line (after the `skip_header` ones), even if the line is commented out: >>> data = StringIO("So it goes\n#a b c\n1 2 3\n 4 5 6") >>> np.genfromtxt(data, skip_header=1, names=True) array([(1., 2., 3.), (4., 5., 6.)], dtype=[('a', '<f8'), ('b', '<f8'), ('c', '<f8')]) The default value of `names` is `None`. If we give any other value to the keyword, the new names will overwrite the field names we may have defined with the dtype: >>> data = StringIO("1 2 3\n 4 5 6") >>> ndtype=[('a',int), ('b', float), ('c', int)] >>> names = ["A", "B", "C"] >>> np.genfromtxt(data, names=names, dtype=ndtype) array([(1, 2., 3), (4, 5., 6)], dtype=[('A', '<i8'), ('B', '<f8'), ('C', '<i8')]) ### The `defaultfmt` argument If `names=None` but a structured dtype is expected, names are defined with the standard NumPy default of `"f%i"`, yielding names like `f0`, `f1` and so forth: >>> data = StringIO("1 2 3\n 4 5 6") >>> np.genfromtxt(data, dtype=(int, float, int)) array([(1, 2., 3), (4, 5., 6)], dtype=[('f0', '<i8'), ('f1', '<f8'), ('f2', '<i8')]) In the same way, if we don't give enough names to match the length of the dtype, the missing names will be defined with this default template: >>> data = StringIO("1 2 3\n 4 5 6") >>> np.genfromtxt(data, dtype=(int, float, int), names="a") array([(1, 2., 3), (4, 5., 6)], dtype=[('a', '<i8'), ('f0', '<f8'), ('f1', '<i8')]) We can overwrite this default with the `defaultfmt` argument, that takes any format string: >>> data = StringIO("1 2 3\n 4 5 6") >>> np.genfromtxt(data, dtype=(int, float, int), defaultfmt="var_%02i") array([(1, 2., 3), (4, 5., 6)], dtype=[('var_00', '<i8'), ('var_01', '<f8'), ('var_02', '<i8')]) ### Validating names NumPy arrays with a structured dtype can also be viewed as recarrays, where a field can be accessed as if it were an attribute. For that reason, we may need to make sure that the field name doesn't contain any space or invalid character, or that it does not correspond to the name of a standard attribute (like `size` or `shape`). [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") accepts three optional arguments that provide a finer control on the names: `deletechars` Gives a string combining all the characters that must be deleted from the name. By default, invalid characters are `~!@#$%^&*()-=+~\|]}[{';: /?.>,<`. `excludelist` Gives a list of the names to exclude, such as `return`, `file`, `print`… If one of the input names is part of this list, an underscore character (`'_'`) will be appended to it.
`case_sensitive` Whether the names should be case-sensitive (`case_sensitive=True`), converted to upper case (`case_sensitive=False` or `case_sensitive='upper'`) or to lower case (`case_sensitive='lower'`). ## Tweaking the conversion ### The `converters` argument Usually, defining a dtype is sufficient to define how the sequence of strings must be converted. However, some additional control may sometimes be required. For example, we may want to make sure that a date in a format `YYYY/MM/DD` is converted to a [`datetime`](https://docs.python.org/3/library/datetime.html#datetime.datetime "\(in Python v3.13\)") object, or that a string like `xx%` is properly converted to a float between 0 and 1. In such cases, we should define conversion functions with the `converters` argument. The value of this argument is typically a dictionary with column indices or column names as keys and conversion functions as values. These conversion functions can either be actual functions or lambda functions. In any case, they should accept only a string as input and output only a single element of the wanted type. In the following example, the second column is converted from a string representing a percentage to a float between 0 and 1: >>> convertfunc = lambda x: float(x.strip("%"))/100. >>> data = "1, 2.3%, 45.\n6, 78.9%, 0" >>> names = ("i", "p", "n") >>> # General case ..... >>> np.genfromtxt(StringIO(data), delimiter=",", names=names) array([(1., nan, 45.), (6., nan, 0.)], dtype=[('i', '<f8'), ('p', '<f8'), ('n', '<f8')]) We need to keep in mind that by default, `dtype=float`. A float is therefore expected for the second column. However, the strings `' 2.3%'` and `' 78.9%'` cannot be converted to float and we end up having `np.nan` instead. Let's now use a converter: >>> # Converted case ... >>> np.genfromtxt(StringIO(data), delimiter=",", names=names, ... converters={1: convertfunc}) array([(1., 0.023, 45.), (6., 0.789, 0.)], dtype=[('i', '<f8'), ('p', '<f8'), ('n', '<f8')]) The same results can be obtained by using the name of the second column (`"p"`) as a key instead of its index (1): >>> # Using a name for the converter ... >>> np.genfromtxt(StringIO(data), delimiter=",", names=names, ... converters={"p": convertfunc}) array([(1., 0.023, 45.), (6., 0.789, 0.)], dtype=[('i', '<f8'), ('p', '<f8'), ('n', '<f8')]) Converters can also be used to provide a default value for missing entries. In the following example, the converter `convert` transforms a stripped string into the corresponding float, or into -999 if the string is empty: >>> data = "1, , 3\n 4, 5, 6" >>> convert = lambda x: float(x.strip() or -999) >>> np.genfromtxt(StringIO(data), delimiter=",", ...
converters={1: convert}) array([[ 1., -999., 3.], [ 4., 5., 6.]]) ### Using missing and filling values Some entries may be missing in the dataset we are trying to import. In a previous example, we used a converter to transform an empty string into a float. However, user-defined converters may rapidly become cumbersome to manage. The [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") function provides two other complementary mechanisms: the `missing_values` argument is used to recognize missing data and a second argument, `filling_values`, is used to process these missing data. ### `missing_values` By default, any empty string is marked as missing. We can also consider more complex strings, such as `"N/A"` or `"???"` to represent missing or invalid data. The `missing_values` argument accepts three kinds of values: a string or a comma-separated string This string will be used as the marker for missing data for all the columns a sequence of strings In that case, each item is associated to a column, in order. a dictionary Values of the dictionary are strings or sequence of strings. The corresponding keys can be column indices (integers) or column names (strings). In addition, the special key `None` can be used to define a default applicable to all columns. ### `filling_values` We know how to recognize missing data, but we still need to provide a value for these missing entries. By default, this value is determined from the expected dtype according to this table: Expected type | Default ---|--- `bool` | `False` `int` | `-1` `float` | `np.nan` `complex` | `np.nan+0j` `string` | `'???'` We can get a finer control on the conversion of missing values with the `filling_values` optional argument. 
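The interplay of `missing_values` and `filling_values` can be sketched in its simplest form, a single marker string and a single fill value applied to every column (the marker `"???"` and fill `-999` are illustrative):

```python
import numpy as np
from io import StringIO

# "???" marks invalid entries in any column; fill them with -999.
data = "1,???,3\n4,5,???"
result = np.genfromtxt(StringIO(data), delimiter=",", dtype=int,
                       missing_values="???", filling_values=-999)
```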
Like `missing_values`, this argument accepts different kind of values: a single value This will be the default for all columns a sequence of values Each entry will be the default for the corresponding column a dictionary Each key can be a column index or a column name, and the corresponding value should be a single object. We can use the special key `None` to define a default for all columns. In the following example, we suppose that the missing values are flagged with `"N/A"` in the first column and by `"???"` in the third column. We wish to transform these missing values to 0 if they occur in the first and second column, and to -999 if they occur in the last column: >>> data = "N/A, 2, 3\n4, ,???" >>> kwargs = dict(delimiter=",", ... dtype=int, ... names="a,b,c", ... missing_values={0:"N/A", 'b':" ", 2:"???"}, ... filling_values={0:0, 'b':0, 2:-999}) >>> np.genfromtxt(StringIO(data), **kwargs) array([(0, 2, 3), (4, 0, -999)], dtype=[('a', '>> x = np.array([('Rex', 9, 81.0), ('Fido', 3, 27.0)], ... dtype=[('name', 'U10'), ('age', 'i4'), ('weight', 'f4')]) >>> x array([('Rex', 9, 81.), ('Fido', 3, 27.)], dtype=[('name', '>> x[1] np.void(('Fido', 3, 27.0), dtype=[('name', '>> x['age'] array([9, 3], dtype=int32) >>> x['age'] = 5 >>> x array([('Rex', 5, 81.), ('Fido', 5, 27.)], dtype=[('name', '>> np.dtype([('x', 'f4'), ('y', np.float32), ('z', 'f4', (2, 2))]) dtype([('x', '>> np.dtype([('x', 'f4'), ('', 'i4'), ('z', 'i8')]) dtype([('x', '>> np.dtype('i8, f4, S3') dtype([('f0', '>> np.dtype('3int8, float32, (2, 3)float64') dtype([('f0', 'i1', (3,)), ('f1', '>> np.dtype({'names': ['col1', 'col2'], 'formats': ['i4', 'f4']}) dtype([('col1', '>> np.dtype({'names': ['col1', 'col2'], ... 'formats': ['i4', 'f4'], ... 'offsets': [0, 4], ... 
'itemsize': 12}) dtype({'names': ['col1', 'col2'], 'formats': ['>> np.dtype({'col1': ('i1', 0), 'col2': ('f4', 1)}) dtype([('col1', 'i1'), ('col2', '>> d = np.dtype([('x', 'i8'), ('y', 'f4')]) >>> d.names ('x', 'y') The dtype of each individual field can be looked up by name: >>> d['x'] dtype('int64') The field names may be modified by assigning to the `names` attribute using a sequence of strings of the same length. The dtype object also has a dictionary-like attribute, `fields`, whose keys are the field names (and Field Titles, see below) and whose values are tuples containing the dtype and byte offset of each field. >>> d.fields mappingproxy({'x': (dtype('int64'), 0), 'y': (dtype('float32'), 8)}) Both the `names` and `fields` attributes will equal `None` for unstructured arrays. The recommended way to test if a dtype is structured is with `if dt.names is not None` rather than `if dt.names`, to account for dtypes with 0 fields. The string representation of a structured datatype is shown in the “list of tuples” form if possible, otherwise numpy falls back to using the more general dictionary form. ### Automatic byte offsets and alignment Numpy uses one of two methods to automatically determine the field byte offsets and the overall itemsize of a structured datatype, depending on whether `align=True` was specified as a keyword argument to [`numpy.dtype`](../reference/generated/numpy.dtype#numpy.dtype "numpy.dtype"). By default (`align=False`), numpy will pack the fields together such that each field starts at the byte offset the previous field ended, and the fields are contiguous in memory. >>> def print_offsets(d): ... print("offsets:", [d.fields[name][1] for name in d.names]) ... print("itemsize:", d.itemsize) >>> print_offsets(np.dtype('u1, u1, i4, u1, i8, u2')) offsets: [0, 1, 2, 6, 7, 15] itemsize: 17 If `align=True` is set, numpy will pad the structure in the same way many C compilers would pad a C-struct. 
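The packed-versus-aligned layouts can be inspected directly, reusing the offsets idea from the `print_offsets` helper above:

```python
import numpy as np

def field_offsets(d):
    # Byte offset of each field, in declaration order.
    return [d.fields[name][1] for name in d.names]

packed = np.dtype('u1, u1, i4, u1, i8, u2')              # align=False (default)
aligned = np.dtype('u1, u1, i4, u1, i8, u2', align=True)  # C-struct-like padding
```

With these dtypes, `field_offsets(packed)` gives contiguous offsets `[0, 1, 2, 6, 7, 15]` (itemsize 17), while the aligned variant pads to `[0, 1, 4, 8, 16, 24]` (itemsize 32).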
Aligned structures can give a performance improvement in some cases, at the cost of increased datatype size. Padding bytes are inserted between fields such that each field’s byte offset will be a multiple of that field’s alignment, which is usually equal to the field’s size in bytes for simple datatypes, see [`PyArray_Descr.alignment`](../reference/c-api/types-and- structures#c.PyArray_Descr.alignment "PyArray_Descr.alignment"). The structure will also have trailing padding added so that its itemsize is a multiple of the largest field’s alignment. >>> print_offsets(np.dtype('u1, u1, i4, u1, i8, u2', align=True)) offsets: [0, 1, 4, 8, 16, 24] itemsize: 32 Note that although almost all modern C compilers pad in this way by default, padding in C structs is C-implementation-dependent so this memory layout is not guaranteed to exactly match that of a corresponding struct in a C program. Some work may be needed, either on the numpy side or the C side, to obtain exact correspondence. If offsets were specified using the optional `offsets` key in the dictionary- based dtype specification, setting `align=True` will check that each field’s offset is a multiple of its size and that the itemsize is a multiple of the largest field size, and raise an exception if not. If the offsets of the fields and itemsize of a structured array satisfy the alignment conditions, the array will have the `ALIGNED` [`flag`](../reference/generated/numpy.ndarray.flags#numpy.ndarray.flags "numpy.ndarray.flags") set. A convenience function `numpy.lib.recfunctions.repack_fields` converts an aligned dtype or array to a packed one and vice versa. It takes either a dtype or structured ndarray as an argument, and returns a copy with fields re- packed, with or without padding bytes. ### Field titles In addition to field names, fields may also have an associated [title](../glossary#term-title), an alternate name, which is sometimes used as an additional description or alias for the field. 
The title may be used to index an array, just like a field name. To add titles when using the list-of-tuples form of dtype specification, the field name may be specified as a tuple of two strings instead of a single string, which will be the field’s title and field name respectively. For example: >>> np.dtype([(('my title', 'name'), 'f4')]) dtype([(('my title', 'name'), '>> np.dtype({'name': ('i4', 0, 'my title')}) dtype([(('my title', 'name'), '>> for name in d.names: ... print(d.fields[name][:2]) (dtype('int64'), 0) (dtype('float32'), 8) ### Union types Structured datatypes are implemented in numpy to have base type [`numpy.void`](../reference/arrays.scalars#numpy.void "numpy.void") by default, but it is possible to interpret other numpy types as structured types using the `(base_dtype, dtype)` form of dtype specification described in [Data Type Objects](../reference/arrays.dtypes#arrays-dtypes-constructing). Here, `base_dtype` is the desired underlying dtype, and fields and flags will be copied from `dtype`. This dtype is similar to a ‘union’ in C. ## Indexing and assignment to structured arrays ### Assigning data to a structured array There are a number of ways to assign values to a structured array: Using python tuples, using scalar values, or using other structured arrays. #### Assignment from Python Native Types (Tuples) The simplest way to assign values to a structured array is using python tuples. Each assigned value should be a tuple of length equal to the number of fields in the array, and not a list or array as these will trigger numpy’s broadcasting rules. 
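The title mechanism described above can be sketched end to end; the title and field name here are illustrative:

```python
import numpy as np

# The field is declared as ((title, name), format); both strings index it.
dt = np.dtype([(('my title', 'name'), 'f4')])
a = np.zeros(1, dtype=dt)
a['name'] = 1.5   # the same element is reachable as a['my title']
```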
The tuple’s elements are assigned to the successive fields of the array, from left to right: >>> x = np.array([(1, 2, 3), (4, 5, 6)], dtype='i8, f4, f8') >>> x[1] = (7, 8, 9) >>> x array([(1, 2., 3.), (7, 8., 9.)], dtype=[('f0', '>> x = np.zeros(2, dtype='i8, f4, ?, S1') >>> x[:] = 3 >>> x array([(3, 3., True, b'3'), (3, 3., True, b'3')], dtype=[('f0', '>> x[:] = np.arange(2) >>> x array([(0, 0., False, b'0'), (1, 1., True, b'1')], dtype=[('f0', '>> twofield = np.zeros(2, dtype=[('A', 'i4'), ('B', 'i4')]) >>> onefield = np.zeros(2, dtype=[('A', 'i4')]) >>> nostruct = np.zeros(2, dtype='i4') >>> nostruct[:] = twofield Traceback (most recent call last): ... TypeError: Cannot cast array data from dtype([('A', '>> a = np.zeros(3, dtype=[('a', 'i8'), ('b', 'f4'), ('c', 'S3')]) >>> b = np.ones(3, dtype=[('x', 'f4'), ('y', 'S3'), ('z', 'O')]) >>> b[:] = a >>> b array([(0., b'0.0', b''), (0., b'0.0', b''), (0., b'0.0', b'')], dtype=[('x', '>> x = np.array([(1, 2), (3, 4)], dtype=[('foo', 'i8'), ('bar', 'f4')]) >>> x['foo'] array([1, 3]) >>> x['foo'] = 10 >>> x array([(10, 2.), (10, 4.)], dtype=[('foo', '>> y = x['bar'] >>> y[:] = 11 >>> x array([(10, 11.), (10, 11.)], dtype=[('foo', '>> y.dtype, y.shape, y.strides (dtype('float32'), (2,), (12,)) If the accessed field is a subarray, the dimensions of the subarray are appended to the shape of the result: >>> x = np.zeros((2, 2), dtype=[('a', np.int32), ('b', np.float64, (3, 3))]) >>> x['a'].shape (2, 2) >>> x['b'].shape (2, 2, 3, 3) #### Accessing Multiple Fields One can index and assign to a structured array with a multi-field index, where the index is a list of field names. Warning The behavior of multi-field indexes changed from Numpy 1.15 to Numpy 1.16. 
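Tuple assignment and the view semantics of single-field access can be combined in one short sketch:

```python
import numpy as np

x = np.array([(1, 2, 3), (4, 5, 6)], dtype='i8, f4, f8')
x[1] = (7, 8, 9)     # a tuple is assigned field by field, left to right
y = x['f1']          # single-field access returns a view...
y[:] = 11            # ...so writes through it are visible in x
```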
The result of indexing with a multi-field index is a view into the original array, as follows: >>> a = np.zeros(3, dtype=[('a', 'i4'), ('b', 'i4'), ('c', 'f4')]) >>> a[['a', 'c']] array([(0, 0.), (0, 0.), (0, 0.)], dtype={'names': ['a', 'c'], 'formats': ['>> a[['a', 'c']].view('i8') # Fails in Numpy 1.16 Traceback (most recent call last): File "", line 1, in ValueError: When changing to a smaller dtype, its size must be a divisor of the size of original dtype will need to be changed. This code has raised a `FutureWarning` since Numpy 1.12, and similar code has raised `FutureWarning` since 1.7. In 1.16 a number of functions have been introduced in the `numpy.lib.recfunctions` module to help users account for this change. These are `numpy.lib.recfunctions.repack_fields`. `numpy.lib.recfunctions.structured_to_unstructured`, `numpy.lib.recfunctions.unstructured_to_structured`, `numpy.lib.recfunctions.apply_along_fields`, `numpy.lib.recfunctions.assign_fields_by_name`, and `numpy.lib.recfunctions.require_fields`. The function `numpy.lib.recfunctions.repack_fields` can always be used to reproduce the old behavior, as it will return a packed copy of the structured array. The code above, for example, can be replaced with: >>> from numpy.lib.recfunctions import repack_fields >>> repack_fields(a[['a', 'c']]).view('i8') # supported in 1.16 array([0, 0, 0]) Furthermore, numpy now provides a new function `numpy.lib.recfunctions.structured_to_unstructured` which is a safer and more efficient alternative for users who wish to convert structured arrays to unstructured arrays, as the view above is often intended to do. This function allows safe conversion to an unstructured type taking into account padding, often avoids a copy, and also casts the datatypes as needed, unlike the view. 
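The `repack_fields` workaround can be shown as a minimal sketch: because the multi-field index returns a view that keeps the original field offsets, the copy must be repacked before its bytes are reinterpreted:

```python
import numpy as np
from numpy.lib.recfunctions import repack_fields

a = np.zeros(3, dtype=[('a', 'i4'), ('b', 'i4'), ('c', 'f4')])
# Repack the two selected fields into a contiguous 8-byte struct,
# then reinterpret each struct as a single int64.
as_i8 = repack_fields(a[['a', 'c']]).view('i8')
```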
Code such as: >>> b = np.zeros(3, dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4')]) >>> b[['x', 'z']].view('f4') array([0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32) can be made safer by replacing with: >>> from numpy.lib.recfunctions import structured_to_unstructured >>> structured_to_unstructured(b[['x', 'z']]) array([[0., 0.], [0., 0.], [0., 0.]], dtype=float32) Assignment to an array with a multi-field index modifies the original array: >>> a[['a', 'c']] = (2, 3) >>> a array([(2, 0, 3.), (2, 0, 3.), (2, 0, 3.)], dtype=[('a', '>> a[['a', 'c']] = a[['c', 'a']] #### Indexing with an Integer to get a Structured Scalar Indexing a single element of a structured array (with an integer index) returns a structured scalar: >>> x = np.array([(1, 2., 3.)], dtype='i, f, f') >>> scalar = x[0] >>> scalar np.void((1, 2.0, 3.0), dtype=[('f0', '>> type(scalar) Unlike other numpy scalars, structured scalars are mutable and act like views into the original array, such that modifying the scalar will modify the original array. Structured scalars also support access and assignment by field name: >>> x = np.array([(1, 2), (3, 4)], dtype=[('foo', 'i8'), ('bar', 'f4')]) >>> s = x[0] >>> s['bar'] = 100 >>> x array([(1, 100.), (3, 4.)], dtype=[('foo', '>> scalar = np.array([(1, 2., 3.)], dtype='i, f, f')[0] >>> scalar[0] np.int32(1) >>> scalar[1] = 4 Thus, tuples might be thought of as the native Python equivalent to numpy’s structured types, much like native python integers are the equivalent to numpy’s integer types. 
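The view behaviour of structured scalars can be sketched in two lines:

```python
import numpy as np

x = np.array([(1, 2), (3, 4)], dtype=[('foo', 'i8'), ('bar', 'f4')])
s = x[0]          # integer indexing returns a structured scalar (a view)
s['bar'] = 100    # modifying the scalar writes back into the array
```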
Structured scalars may be converted to a tuple by calling [`numpy.ndarray.item`](../reference/generated/numpy.ndarray.item#numpy.ndarray.item "numpy.ndarray.item"): >>> scalar.item(), type(scalar.item()) ((1, 4.0, 3.0), ) ### Viewing structured arrays containing objects In order to prevent clobbering object pointers in fields of [`object`](https://docs.python.org/3/library/functions.html#object "\(in Python v3.13\)") type, numpy currently does not allow views of structured arrays containing objects. ### Structure comparison and promotion If the dtypes of two void structured arrays are equal, testing the equality of the arrays will result in a boolean array with the dimensions of the original arrays, with elements set to `True` where all fields of the corresponding structures are equal: >>> a = np.array([(1, 1), (2, 2)], dtype=[('a', 'i4'), ('b', 'i4')]) >>> b = np.array([(1, 1), (2, 3)], dtype=[('a', 'i4'), ('b', 'i4')]) >>> a == b array([True, False]) NumPy will promote individual field datatypes to perform the comparison. So the following is also valid (note the `'f4'` dtype for the `'a'` field): >>> b = np.array([(1.0, 1), (2.5, 2)], dtype=[("a", "f4"), ("b", "i4")]) >>> a == b array([True, False]) To compare two structured arrays, it must be possible to promote them to a common dtype as returned by [`numpy.result_type`](../reference/generated/numpy.result_type#numpy.result_type "numpy.result_type") and [`numpy.promote_types`](../reference/generated/numpy.promote_types#numpy.promote_types "numpy.promote_types"). This enforces that the number of fields, the field names, and the field titles must match precisely. When promotion is not possible, for example due to mismatching field names, NumPy will raise an error. 
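The comparison-with-promotion behaviour described above can be sketched as:

```python
import numpy as np

a = np.array([(1, 1), (2, 2)], dtype=[('a', 'i4'), ('b', 'i4')])
# Field dtypes are promoted before comparing; note 'a' is f4 here.
b = np.array([(1.0, 1), (2.5, 2)], dtype=[('a', 'f4'), ('b', 'i4')])
eq = a == b   # elementwise: True only where all fields match
```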
Promotion between two structured dtypes results in a canonical dtype that ensures native byte-order for all fields: >>> np.result_type(np.dtype("i,>i")) dtype([('f0', '>> np.result_type(np.dtype("i,>i"), np.dtype("i,i")) dtype([('f0', '>> dt = np.dtype("i1,V3,i4,V1")[["f0", "f2"]] >>> dt dtype({'names': ['f0', 'f2'], 'formats': ['i1', '>> np.result_type(dt) dtype([('f0', 'i1'), ('f2', '>> dt = np.dtype("i1,V3,i4,V1", align=True)[["f0", "f2"]] >>> dt dtype({'names': ['f0', 'f2'], 'formats': ['i1', '>> np.result_type(dt) dtype([('f0', 'i1'), ('f2', '>> np.result_type(dt).isalignedstruct True When promoting multiple dtypes, the result is aligned if any of the inputs is: >>> np.result_type(np.dtype("i,i"), np.dtype("i,i", align=True)) dtype([('f0', '` operators always return `False` when comparing void structured arrays, and arithmetic and bitwise operations are not supported. Changed in version 1.23: Before NumPy 1.23, a warning was given and `False` returned when promotion to a common dtype failed. Further, promotion was much more restrictive: It would reject the mixed float/integer comparison example above. ## Record arrays As an optional convenience numpy provides an ndarray subclass, [`numpy.recarray`](../reference/generated/numpy.recarray#numpy.recarray "numpy.recarray") that allows access to fields of structured arrays by attribute instead of only by index. Record arrays use a special datatype, [`numpy.record`](../reference/generated/numpy.record#numpy.record "numpy.record"), that allows field access by attribute on the structured scalars obtained from the array. The `numpy.rec` module provides functions for creating recarrays from various objects. Additional helper functions for creating and manipulating structured arrays can be found in `numpy.lib.recfunctions`. 
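The canonicalization performed by promotion can be checked directly; the test on the second field is byte-order-agnostic, since a canonical dtype always compares equal to the native one:

```python
import numpy as np

# result_type canonicalizes a structured dtype: the big-endian '>i4'
# field comes back in native byte order.
dt = np.result_type(np.dtype("i4,>i4"))
```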
The simplest way to create a record array is with [`numpy.rec.array`](../reference/generated/numpy.rec.array#numpy.rec.array "numpy.rec.array"): >>> recordarr = np.rec.array([(1, 2., 'Hello'), (2, 3., "World")], ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')]) >>> recordarr.bar array([2., 3.], dtype=float32) >>> recordarr[1:2] rec.array([(2, 3., b'World')], dtype=[('foo', '>> recordarr[1:2].foo array([2], dtype=int32) >>> recordarr.foo[1:2] array([2], dtype=int32) >>> recordarr[1].baz b'World' [`numpy.rec.array`](../reference/generated/numpy.rec.array#numpy.rec.array "numpy.rec.array") can convert a wide variety of arguments into record arrays, including structured arrays: >>> arr = np.array([(1, 2., 'Hello'), (2, 3., "World")], ... dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'S10')]) >>> recordarr = np.rec.array(arr) The `numpy.rec` module provides a number of other convenience functions for creating record arrays, see [record array creation routines](../reference/routines.array-creation#routines-array-creation-rec). A record array representation of a structured array can be obtained using the appropriate [`view`](../reference/generated/numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view"): >>> arr = np.array([(1, 2., 'Hello'), (2, 3., "World")], ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')]) >>> recordarr = arr.view(dtype=np.dtype((np.record, arr.dtype)), ... 
type=np.recarray) For convenience, viewing an ndarray as type [`numpy.recarray`](../reference/generated/numpy.recarray#numpy.recarray "numpy.recarray") will automatically convert to [`numpy.record`](../reference/generated/numpy.record#numpy.record "numpy.record") datatype, so the dtype can be left out of the view: >>> recordarr = arr.view(np.recarray) >>> recordarr.dtype dtype((numpy.record, [('foo', '>> arr2 = recordarr.view(recordarr.dtype.fields or recordarr.dtype, np.ndarray) Record array fields accessed by index or by attribute are returned as a record array if the field has a structured type but as a plain ndarray otherwise. >>> recordarr = np.rec.array([('Hello', (1, 2)), ("World", (3, 4))], ... dtype=[('foo', 'S6'),('bar', [('A', int), ('B', int)])]) >>> type(recordarr.foo) >>> type(recordarr.bar) Note that if a field has the same name as an ndarray attribute, the ndarray attribute takes precedence. Such fields will be inaccessible by attribute but will still be accessible by index. ### Recarray helper functions Collection of utilities to manipulate structured arrays. Most of these functions were initially implemented by John Hunter for matplotlib. They have been rewritten and extended for convenience. numpy.lib.recfunctions.append_fields(_base_ , _names_ , _data_ , _dtypes =None_, _fill_value =-1_, _usemask =True_, _asrecarray =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/recfunctions.py#L655-L723) Add new fields to an existing array. The names of the fields are given with the `names` arguments, the corresponding values with the `data` arguments. If a single field is appended, `names`, `data` and `dtypes` do not have to be lists but just values. Parameters: **base** array Input array to extend. **names** string, sequence String or sequence of strings corresponding to the names of the new fields. **data** array or sequence of arrays Array or sequence of arrays storing the fields to add to the base. 
**dtypes** sequence of datatypes, optional Datatype or sequence of datatypes. If None, the datatypes are estimated from the `data`. **fill_value**{float}, optional Filling value used to pad missing data on the shorter arrays. **usemask**{False, True}, optional Whether to return a masked array or not. **asrecarray**{False, True}, optional Whether to return a recarray (MaskedRecords) or not. numpy.lib.recfunctions.apply_along_fields(_func_ , _arr_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/recfunctions.py#L1186-L1228) Apply function ‘func’ as a reduction across fields of a structured array. This is similar to [`numpy.apply_along_axis`](../reference/generated/numpy.apply_along_axis#numpy.apply_along_axis "numpy.apply_along_axis"), but treats the fields of a structured array as an extra axis. The fields are all first cast to a common type following the type- promotion rules from [`numpy.result_type`](../reference/generated/numpy.result_type#numpy.result_type "numpy.result_type") applied to the field’s dtypes. Parameters: **func** function Function to apply on the “field” dimension. This function must support an `axis` argument, like [`numpy.mean`](../reference/generated/numpy.mean#numpy.mean "numpy.mean"), [`numpy.sum`](../reference/generated/numpy.sum#numpy.sum "numpy.sum"), etc. **arr** ndarray Structured array for which to apply func. Returns: **out** ndarray Result of the reduction operation #### Examples >>> import numpy as np >>> from numpy.lib import recfunctions as rfn >>> b = np.array([(1, 2, 5), (4, 5, 7), (7, 8 ,11), (10, 11, 12)], ... dtype=[('x', 'i4'), ('y', 'f4'), ('z', 'f8')]) >>> rfn.apply_along_fields(np.mean, b) array([ 2.66666667, 5.33333333, 8.66666667, 11. ]) >>> rfn.apply_along_fields(np.mean, b[['x', 'z']]) array([ 3. , 5.5, 9. , 11. 
]) numpy.lib.recfunctions.assign_fields_by_name(_dst_ , _src_ , _zero_unassigned =True_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/recfunctions.py#L1233-L1269) Assigns values from one structured array to another by field name. Normally in numpy >= 1.14, assignment of one structured array to another copies fields “by position”, meaning that the first field from the src is copied to the first field of the dst, and so on, regardless of field name. This function instead copies “by field name”, such that fields in the dst are assigned from the identically named field in the src. This applies recursively for nested structures. This is how structure assignment worked in numpy >= 1.6 to <= 1.13. Parameters: **dst** ndarray **src** ndarray The source and destination arrays during assignment. **zero_unassigned** bool, optional If True, fields in the dst for which there was no matching field in the src are filled with the value 0 (zero). This was the behavior of numpy <= 1.13. If False, those fields are not modified. numpy.lib.recfunctions.drop_fields(_base_ , _drop_names_ , _usemask =True_, _asrecarray =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/recfunctions.py#L505-L564) Return a new array with fields in `drop_names` dropped. Nested fields are supported. Parameters: **base** array Input array **drop_names** string or sequence String or sequence of strings corresponding to the names of the fields to drop. **usemask**{False, True}, optional Whether to return a masked array or not. **asrecarray** string or sequence, optional Whether to return a recarray or a mrecarray (`asrecarray=True`) or a plain ndarray or masked array with flexible dtype. The default is False. #### Examples >>> import numpy as np >>> from numpy.lib import recfunctions as rfn >>> a = np.array([(1, (2, 3.0)), (4, (5, 6.0))], ... 
dtype=[('a', np.int64), ('b', [('ba', np.double), ('bb', np.int64)])]) >>> rfn.drop_fields(a, 'a') array([((2., 3),), ((5., 6),)], dtype=[('b', [('ba', '>> rfn.drop_fields(a, 'ba') array([(1, (3,)), (4, (6,))], dtype=[('a', '>> rfn.drop_fields(a, ['ba', 'bb']) array([(1,), (4,)], dtype=[('a', '>> import numpy as np >>> from numpy.lib import recfunctions as rfn >>> ndtype = [('a', int)] >>> a = np.ma.array([1, 1, 1, 2, 2, 3, 3], ... mask=[0, 0, 1, 0, 0, 0, 1]).view(ndtype) >>> rfn.find_duplicates(a, ignoremask=True, return_index=True) (masked_array(data=[(1,), (1,), (2,), (2,)], mask=[(False,), (False,), (False,), (False,)], fill_value=(999999,), dtype=[('a', '>> import numpy as np >>> from numpy.lib import recfunctions as rfn >>> ndtype = np.dtype([('a', '>> rfn.flatten_descr(ndtype) (('a', dtype('int32')), ('ba', dtype('float64')), ('bb', dtype('int32'))) numpy.lib.recfunctions.get_fieldstructure(_adtype_ , _lastname =None_, _parents =None_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/recfunctions.py#L227-L272) Returns a dictionary with fields indexing lists of their parent fields. This function is used to simplify access to fields nested in other fields. Parameters: **adtype** np.dtype Input datatype **lastname** optional Last processed field name (used internally during recursion). **parents** dictionary Dictionary of parent fields (used internally during recursion). #### Examples >>> import numpy as np >>> from numpy.lib import recfunctions as rfn >>> ndtype = np.dtype([('A', int), ... ('B', [('BA', int), ... ('BB', [('BBA', int), ('BBB', int)])])]) >>> rfn.get_fieldstructure(ndtype) ... # XXX: possible regression, order of BBA and BBB is swapped {'A': [], 'B': [], 'BA': ['B'], 'BB': ['B'], 'BBA': ['B', 'BB'], 'BBB': ['B', 'BB']} numpy.lib.recfunctions.get_names(_adtype_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/recfunctions.py#L104-L134) Returns the field names of the input datatype as a tuple. 
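The by-name assignment performed by `assign_fields_by_name` can be sketched with two small arrays whose fields are declared in different orders (the field names are illustrative):

```python
import numpy as np
from numpy.lib import recfunctions as rfn

dst = np.zeros(2, dtype=[('A', 'i4'), ('B', 'f8'), ('C', 'i4')])
src = np.ones(2, dtype=[('B', 'f8'), ('A', 'i4')])   # different field order
# Copy by matching names, not position; 'C' has no source field and,
# with the default zero_unassigned=True, is set to zero.
rfn.assign_fields_by_name(dst, src)
```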
Input datatype must have fields, otherwise an error is raised. Parameters: **adtype** dtype Input datatype #### Examples >>> import numpy as np >>> from numpy.lib import recfunctions as rfn >>> rfn.get_names(np.empty((1,), dtype=[('A', int)]).dtype) ('A',) >>> rfn.get_names(np.empty((1,), dtype=[('A',int), ('B', float)]).dtype) ('A', 'B') >>> adtype = np.dtype([('a', int), ('b', [('ba', int), ('bb', int)])]) >>> rfn.get_names(adtype) ('a', ('b', ('ba', 'bb'))) numpy.lib.recfunctions.get_names_flat(_adtype_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/recfunctions.py#L137-L167) Returns the field names of the input datatype as a tuple. Input datatype must have fields, otherwise an error is raised. Nested structures are flattened beforehand. Parameters: **adtype** dtype Input datatype #### Examples >>> import numpy as np >>> from numpy.lib import recfunctions as rfn >>> rfn.get_names_flat(np.empty((1,), dtype=[('A', int)]).dtype) is None False >>> rfn.get_names_flat(np.empty((1,), dtype=[('A',int), ('B', str)]).dtype) ('A', 'B') >>> adtype = np.dtype([('a', int), ('b', [('ba', int), ('bb', int)])]) >>> rfn.get_names_flat(adtype) ('a', 'b', 'ba', 'bb') numpy.lib.recfunctions.join_by(_key_ , _r1_ , _r2_ , _jointype ='inner'_, _r1postfix ='1'_, _r2postfix ='2'_, _defaults =None_, _usemask =True_, _asrecarray =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/recfunctions.py#L1483-L1660) Join arrays `r1` and `r2` on key `key`. The key should be either a string or a sequence of strings corresponding to the fields used to join the arrays. An exception is raised if the `key` field cannot be found in the two input arrays. Neither `r1` nor `r2` should have any duplicates along `key`: the presence of duplicates will make the output quite unreliable. Note that duplicates are not looked for by the algorithm. Parameters: **key**{string, sequence} A string or a sequence of strings corresponding to the fields used for comparison. 
**r1, r2** arrays Structured arrays. **jointype**{‘inner’, ‘outer’, ‘leftouter’}, optional If ‘inner’, returns the elements common to both r1 and r2. If ‘outer’, returns the common elements as well as the elements of r1 not in r2 and the elements of r2 not in r1. If ‘leftouter’, returns the common elements and the elements of r1 not in r2. **r1postfix** string, optional String appended to the names of the fields of r1 that are present in r2 but absent from the key. **r2postfix** string, optional String appended to the names of the fields of r2 that are present in r1 but absent from the key. **defaults**{dictionary}, optional Dictionary mapping field names to the corresponding default values. **usemask**{True, False}, optional Whether to return a MaskedArray (or MaskedRecords if `asrecarray==True`) or a ndarray. **asrecarray**{False, True}, optional Whether to return a recarray (or MaskedRecords if `usemask==True`) or just a flexible-type ndarray. #### Notes * The output is sorted along the key. * A temporary array is formed by dropping the fields not in the key for the two arrays and concatenating the result. This array is then sorted, and the common entries selected. The output is constructed by filling the fields with the selected entries. Matching is not preserved if there are some duplicates… numpy.lib.recfunctions.merge_arrays(_seqarrays_ , _fill_value =-1_, _flatten =False_, _usemask =False_, _asrecarray =False_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/recfunctions.py#L364-L498) Merge arrays field by field. Parameters: **seqarrays** sequence of ndarrays Sequence of arrays **fill_value**{float}, optional Filling value used to pad missing data on the shorter arrays. **flatten**{False, True}, optional Whether to collapse nested fields. **usemask**{False, True}, optional Whether to return a masked array or not. **asrecarray**{False, True}, optional Whether to return a recarray (MaskedRecords) or not. 
#### Notes * Without a mask, the missing value will be filled with something, depending on what its corresponding type: * `-1` for integers * `-1.0` for floating point numbers * `'-'` for characters * `'-1'` for strings * `True` for boolean values * XXX: I just obtained these values empirically #### Examples >>> import numpy as np >>> from numpy.lib import recfunctions as rfn >>> rfn.merge_arrays((np.array([1, 2]), np.array([10., 20., 30.]))) array([( 1, 10.), ( 2, 20.), (-1, 30.)], dtype=[('f0', '>> rfn.merge_arrays((np.array([1, 2], dtype=np.int64), ... np.array([10., 20., 30.])), usemask=False) array([(1, 10.0), (2, 20.0), (-1, 30.0)], dtype=[('f0', '>> rfn.merge_arrays((np.array([1, 2]).view([('a', np.int64)]), ... np.array([10., 20., 30.])), ... usemask=False, asrecarray=True) rec.array([( 1, 10.), ( 2, 20.), (-1, 30.)], dtype=[('a', '>> import numpy as np >>> from numpy.lib import recfunctions as rfn >>> a = np.array([(1, 10.), (2, 20.)], dtype=[('A', np.int64), ('B', np.float64)]) >>> b = np.zeros((3,), dtype=a.dtype) >>> rfn.recursive_fill_fields(a, b) array([(1, 10.), (2, 20.), (0, 0.)], dtype=[('A', '>> import numpy as np >>> from numpy.lib import recfunctions as rfn >>> a = np.array([(1, (2, [3.0, 30.])), (4, (5, [6.0, 60.]))], ... dtype=[('a', int),('b', [('ba', float), ('bb', (float, 2))])]) >>> rfn.rename_fields(a, {'a':'A', 'bb':'BB'}) array([(1, (2., [ 3., 30.])), (4, (5., [ 6., 60.]))], dtype=[('A', '>> import numpy as np >>> from numpy.lib import recfunctions as rfn >>> def print_offsets(d): ... print("offsets:", [d.fields[name][1] for name in d.names]) ... print("itemsize:", d.itemsize) ... 
>>> dt = np.dtype('u1, <i8, <f8', align=True)
>>> dt
dtype({'names': ['f0', 'f1', 'f2'], 'formats': ['u1', '<i8', '<f8'], 'offsets': [0, 8, 16], 'itemsize': 24}, align=True)
>>> print_offsets(dt)
offsets: [0, 8, 16]
itemsize: 24
>>> packed_dt = rfn.repack_fields(dt)
>>> packed_dt
dtype([('f0', 'u1'), ('f1', '<i8'), ('f2', '<f8')])
>>> print_offsets(packed_dt)
offsets: [0, 1, 9]
itemsize: 17

numpy.lib.recfunctions.require_fields(_array_ , _required_dtype_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/recfunctions.py#L1274-L1316)

Casts a structured array to a new dtype using assignment by field-name. This function assigns from the old to the new array by name, so the value of a field in the output array is the value of the field with the same name in the source array. This has the effect of creating a new ndarray containing only the fields “required” by the required_dtype. If a field name in the required_dtype does not exist in the input array, that field is created and set to 0 in the output array.

Parameters: **a** ndarray Array to cast. **required_dtype** dtype Datatype for output array. Returns: **out** ndarray Array with the new dtype, with field values copied from the fields in the input array with the same name.

#### Examples

>>> import numpy as np
>>> from numpy.lib import recfunctions as rfn
>>> a = np.ones(4, dtype=[('a', 'i4'), ('b', 'f8'), ('c', 'u1')])
>>> rfn.require_fields(a, [('b', 'f4'), ('c', 'u1')])
array([(1., 1), (1., 1), (1., 1), (1., 1)], dtype=[('b', '<f4'), ('c', 'u1')])
>>> rfn.require_fields(a, [('b', 'f4'), ('newf', 'u1')])
array([(1., 0), (1., 0), (1., 0), (1., 0)], dtype=[('b', '<f4'), ('newf', 'u1')])

numpy.lib.recfunctions.stack_arrays(_arrays_ , _defaults =None_, _usemask =True_, _asrecarray =False_, _autoconvert =False_)

Superposes arrays fields by fields.

Parameters: **arrays** array or sequence Sequence of input arrays. **defaults** dictionary, optional Dictionary mapping field names to the corresponding default values. **usemask**{True, False}, optional Whether to return a MaskedArray (or MaskedRecords if `asrecarray==True`) or an ndarray. **asrecarray**{False, True}, optional Whether to return a recarray or just a flexible-type ndarray. **autoconvert**{False, True}, optional Whether to automatically cast the type of a field to the maximum.

#### Examples

>>> import numpy as np
>>> from numpy.lib import recfunctions as rfn
>>> x = np.array([1, 2,])
>>> rfn.stack_arrays(x) is x
True
>>> z = np.array([('A', 1), ('B', 2)], dtype=[('A', '|S3'), ('B', float)])
>>> zz = np.array([('a', 10., 100.), ('b', 20., 200.), ('c', 30., 300.)],
...
...   dtype=[('A', '|S3'), ('B', np.double), ('C', np.double)])
>>> test = rfn.stack_arrays((z,zz))
>>> test
masked_array(data=[(b'A', 1.0, --), (b'B', 2.0, --), (b'a', 10.0, 100.0),
                   (b'b', 20.0, 200.0), (b'c', 30.0, 300.0)],
             mask=[(False, False,  True), (False, False,  True),
                   (False, False, False), (False, False, False),
                   (False, False, False)],
       fill_value=(b'N/A', 1e+20, 1e+20),
            dtype=[('A', 'S3'), ('B', '<f8'), ('C', '<f8')])

numpy.lib.recfunctions.structured_to_unstructured(_arr_ , _dtype =None_, _copy =False_, _casting ='unsafe'_)

Converts an n-D structured array into an (n+1)-D unstructured array. The new array will have a new last dimension equal in size to the number of field-elements of the input array. If not supplied, the output datatype is determined from the numpy type promotion rules applied to all the field datatypes. Nested fields, as well as each element of any subarray fields, all count as a single field-element.

Parameters: **arr** ndarray Structured array or dtype to convert. Cannot contain object datatype. **dtype** dtype, optional The dtype of the output unstructured array. **copy** bool, optional If true, always return a copy. If false, a view is returned if possible. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur. Returns: **unstructured** ndarray Unstructured array with one more dimension.

#### Examples

>>> import numpy as np
>>> from numpy.lib import recfunctions as rfn
>>> a = np.zeros(4, dtype=[('a', 'i4'), ('b', 'f4,u2'), ('c', 'f4', 2)])
>>> a
array([(0, (0., 0), [0., 0.]), (0, (0., 0), [0., 0.]),
       (0, (0., 0), [0., 0.]), (0, (0., 0), [0., 0.])],
      dtype=[('a', '<i4'), ('b', [('f0', '<f4'), ('f1', '<u2')]), ('c', '<f4', (2,))])
>>> rfn.structured_to_unstructured(a)
array([[0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0.]])
>>> b = np.array([(1, 2, 5), (4, 5, 7), (7, 8 ,11), (10, 11, 12)],
...              dtype=[('x', 'i4'), ('y', 'f4'), ('z', 'f8')])
>>> np.mean(rfn.structured_to_unstructured(b[['x', 'z']]), axis=-1)
array([ 3. ,  5.5,  9. , 11. ])

numpy.lib.recfunctions.unstructured_to_structured(_arr_ , _dtype =None_, _names =None_, _align =False_, _copy =False_, _casting ='unsafe'_)[[source]](https://github.com/numpy/numpy/blob/v2.2.0/numpy/lib/recfunctions.py#L1075-L1181)

Converts an n-D unstructured array into an (n-1)-D structured array. The last dimension of the input array is converted into a structure, with number of field-elements equal to the size of the last dimension of the input array. By default all output fields have the input array’s dtype, but an output structured dtype with an equal number of field-elements can be supplied instead. Nested fields, as well as each element of any subarray fields, all count towards the number of field-elements.

Parameters: **arr** ndarray Unstructured array or dtype to convert.
**dtype** dtype, optional The structured dtype of the output array. **names** list of strings, optional If dtype is not supplied, this specifies the field names for the output dtype, in order. The field dtypes will be the same as the input array. **align** boolean, optional Whether to create an aligned memory layout. **copy** bool, optional See copy argument to [`numpy.ndarray.astype`](../reference/generated/numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype"). If true, always return a copy. If false, and `dtype` requirements are satisfied, a view is returned. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional See casting argument of [`numpy.ndarray.astype`](../reference/generated/numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype"). Controls what kind of data casting may occur. Returns: **structured** ndarray Structured array with fewer dimensions.

#### Examples

>>> import numpy as np
>>> from numpy.lib import recfunctions as rfn
>>> dt = np.dtype([('a', 'i4'), ('b', 'f4,u2'), ('c', 'f4', 2)])
>>> a = np.arange(20).reshape((4,5))
>>> a
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14],
       [15, 16, 17, 18, 19]])
>>> rfn.unstructured_to_structured(a, dt)
array([( 0, ( 1., 2), [ 3.,  4.]), ( 5, ( 6., 7), [ 8.,  9.]),
       (10, (11., 12), [13., 14.]), (15, (16., 17), [18., 19.])],
      dtype=[('a', '<i4'), ('b', [('f0', '<f4'), ('f1', '<u2')]), ('c', '<f4', (2,))])

# Working with arrays of strings and bytes

## Fixed-width data types

When given string data, NumPy detects a fixed-width unicode data type:

>>> np.array(["hello", "world"])
array(['hello', 'world'], dtype='<U5')

Here the detected data type is a little-endian unicode string with a maximum of 5 code points. Bytestrings are instead stored with a fixed-width bytes data type:

>>> np.array([b"hello", b"world"])
array([b'hello', b'world'], dtype='|S5')

Since this is a one-byte encoding, the byteorder is `'|'` (not applicable), and the data type detected is a maximum 5 character bytestring.
You can also use [`numpy.void`](../reference/arrays.scalars#numpy.void "numpy.void") to represent bytestrings:

>>> np.array([b"hello", b"world"]).astype(np.void)
array([b'\x68\x65\x6C\x6C\x6F', b'\x77\x6F\x72\x6C\x64'], dtype='|V5')

This is most useful when working with byte streams that are not well represented as bytestrings, and instead are better thought of as collections of 8-bit integers.

## Variable-width strings

New in version 2.0.

Note [`numpy.dtypes.StringDType`](../reference/routines.dtypes#numpy.dtypes.StringDType "numpy.dtypes.StringDType") is a new addition to NumPy, implemented using the new support in NumPy for flexible user-defined data types, and is not as extensively tested in production workflows as the older NumPy data types.

Often, real-world string data does not have a predictable length. In these cases it is awkward to use fixed-width strings, since storing all the data without truncation requires knowing the length of the longest string one would like to store in the array before the array is created. To support situations like this, NumPy provides [`numpy.dtypes.StringDType`](../reference/routines.dtypes#numpy.dtypes.StringDType "numpy.dtypes.StringDType"), which stores variable-width string data in a UTF-8 encoding in a NumPy array:

>>> from numpy.dtypes import StringDType
>>> data = ["this is a longer string", "short string"]
>>> arr = np.array(data, dtype=StringDType())
>>> arr
array(['this is a longer string', 'short string'], dtype=StringDType())

Note that unlike fixed-width strings, `StringDType` is not parameterized by the maximum length of an array element; arbitrarily long or short strings can live in the same array without needing to reserve storage for padding bytes in the short strings. Also note that unlike fixed-width strings and most other NumPy data types, `StringDType` does not store the string data in the “main” `ndarray` data buffer.
Instead, the array buffer is used to store metadata about where the string data are stored in memory. This difference means that code expecting the array buffer to contain string data will not function correctly, and will need to be updated to support `StringDType`.

### Missing data support

Often string datasets are not complete, and a special label is needed to indicate that a value is missing. By default `StringDType` does not have any special support for missing values, besides the fact that empty strings are used to populate empty arrays:

>>> np.empty(3, dtype=StringDType())
array(['', '', ''], dtype=StringDType())

Optionally, you can create an instance of `StringDType` with support for missing values by passing `na_object` as a keyword argument to the initializer:

>>> dt = StringDType(na_object=None)
>>> arr = np.array(["this array has", None, "as an entry"], dtype=dt)
>>> arr
array(['this array has', None, 'as an entry'],
      dtype=StringDType(na_object=None))
>>> arr[1] is None
True

The `na_object` can be any arbitrary python object. Common choices are [`numpy.nan`](../reference/constants#numpy.nan "numpy.nan"), `float('nan')`, `None`, an object specifically intended to represent missing data like `pandas.NA`, or a (hopefully) unique string like `"__placeholder__"`. NumPy has special handling for NaN-like sentinels and string sentinels.

#### NaN-like Missing Data Sentinels

A NaN-like sentinel returns itself as the result of arithmetic operations. This includes the python `nan` float and the Pandas missing data sentinel `pd.NA`. NaN-like sentinels inherit these behaviors in string operations.
This means that, for example, the result of addition with any other string is the sentinel:

>>> dt = StringDType(na_object=np.nan)
>>> arr = np.array(["hello", np.nan, "world"], dtype=dt)
>>> arr + arr
array(['hellohello', nan, 'worldworld'], dtype=StringDType(na_object=nan))

Following the behavior of `nan` in float arrays, NaN-like sentinels sort to the end of the array:

>>> np.sort(arr)
array(['hello', 'world', nan], dtype=StringDType(na_object=nan))

#### String Missing Data Sentinels

A string missing data value is an instance of `str` or a subtype of `str`. If such an array is passed to a string operation or a cast, “missing” entries are treated as if they have a value given by the string sentinel. Comparison operations similarly use the sentinel value directly for missing entries.

#### Other Sentinels

Other objects, such as `None`, are also supported as missing data sentinels. If any missing data are present in an array using such a sentinel, then string operations will raise an error:

>>> dt = StringDType(na_object=None)
>>> arr = np.array(["this array has", None, "as an entry"], dtype=dt)
>>> np.sort(arr)
Traceback (most recent call last):
...
TypeError: '<' not supported between instances of 'NoneType' and 'str'

### Coercing Non-strings

By default, non-string data are coerced to strings:

>>> np.array([1, object(), 3.4], dtype=StringDType())
array(['1', '<object object at 0x...>', '3.4'], dtype=StringDType())

If this behavior is not desired, an instance of the DType can be created that disables string coercion by setting `coerce=False` in the initializer:

>>> np.array([1, object(), 3.4], dtype=StringDType(coerce=False))
Traceback (most recent call last):
...
ValueError: StringDType only allows string data when string coercion is disabled.

This allows strict data validation in the same pass over the data NumPy uses to create the array. Setting `coerce=True` recovers the default behavior allowing coercion to strings.
### Casting To and From Fixed-Width Strings

`StringDType` supports round-trip casts between [`numpy.str_`](../reference/arrays.scalars#numpy.str_ "numpy.str_"), [`numpy.bytes_`](../reference/arrays.scalars#numpy.bytes_ "numpy.bytes_"), and [`numpy.void`](../reference/arrays.scalars#numpy.void "numpy.void"). Casting to a fixed-width string is most useful when strings need to be memory-mapped in an ndarray or when a fixed-width string is needed for reading and writing to a columnar data format with a known maximum string length. In all cases, casting to a fixed-width string requires specifying the maximum allowed string length:

>>> arr = np.array(["hello", "world"], dtype=StringDType())
>>> arr.astype(np.str_)
Traceback (most recent call last):
...
TypeError: Casting from StringDType to a fixed-width dtype with an unspecified size is not currently supported, specify an explicit size for the output dtype instead.
The above exception was the direct cause of the following exception:
TypeError: cannot cast dtype StringDType() to <class 'numpy.dtypes.StrDType'>.
>>> arr.astype("U5")
array(['hello', 'world'], dtype='<U5')
>>> arr = np.array(["hello", "world"], dtype=StringDType())
>>> arr.astype("V5")
array([b'\x68\x65\x6C\x6C\x6F', b'\x77\x6F\x72\x6C\x64'], dtype='|V5')

Care must be taken to ensure that the output array has enough space for the UTF-8 bytes in the string, since the size of a UTF-8 bytestream in bytes is not necessarily the same as the number of characters in the string.

# Subclassing ndarray

## Introduction

Subclassing ndarray is relatively simple, but it has some complications compared to other Python objects. On this page we explain the machinery that allows you to subclass ndarray, and the implications for implementing a subclass.

### ndarrays and object creation

Subclassing ndarray is complicated by the fact that new instances of ndarray classes can come about in three different ways. These are:

1. Explicit constructor call - as in `MySubClass(params)`.
This is the usual route to Python instance creation. 2. View casting - casting an existing ndarray as a given subclass. 3. New from template - creating a new instance from a template instance. Examples include returning slices from a subclassed array, creating return types from ufuncs, and copying arrays. See Creating new from template for more details.

The last two are characteristics of ndarrays - in order to support things like array slicing. The complications of subclassing ndarray are due to the mechanisms numpy has to support these latter two routes of instance creation.

### When to use subclassing

Besides the additional complexities of subclassing a NumPy array, subclasses can run into unexpected behaviour because some functions may convert the subclass to a baseclass and “forget” any additional information associated with the subclass. This can result in surprising behavior if you use NumPy methods or functions you have not explicitly tested.

On the other hand, compared to other interoperability approaches, subclassing can be useful because many things will “just work”. This means that subclassing can be a convenient approach, and for a long time it was also often the only available approach. However, NumPy now provides additional interoperability protocols described in “[Interoperability with NumPy](basics.interoperability#basics-interoperability)”. For many use-cases these interoperability protocols may now be a better fit or supplement the use of subclassing.

Subclassing can be a good fit if:

* you are less worried about maintainability or users other than yourself: a subclass will be faster to implement, and additional interoperability can be added “as-needed”. And with few users, possible surprises are not an issue.
* you do not think it is problematic if the subclass information is ignored or lost silently. An example is `np.memmap`, where “forgetting” about data being memory mapped cannot lead to a wrong result.
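The "forgetting" behavior described above can be seen directly by comparing `np.asarray`, which converts a subclass to the base ndarray, with `np.asanyarray`, which passes subclasses through (a minimal sketch with a do-nothing subclass):

```python
import numpy as np

# A trivial ndarray subclass carrying no extra behavior
class C(np.ndarray):
    pass

c = np.zeros(3).view(C)

# np.asanyarray passes subclass instances through untouched...
assert type(np.asanyarray(c)) is C
# ...while np.asarray converts to the base class, "forgetting" C
assert type(np.asarray(c)) is np.ndarray
```

Any library routine that calls `np.asarray` internally will silently drop the subclass in the same way, which is exactly the surprise described above.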
An example of a subclass that sometimes confuses users is NumPy’s masked arrays. When they were introduced, subclassing was the only approach for implementation. However, today we would possibly try to avoid subclassing and rely only on interoperability protocols. Note that subclass authors may also wish to study [Interoperability with NumPy](basics.interoperability#basics-interoperability) to support more complex use-cases or work around the surprising behavior. `astropy.units.Quantity` and `xarray` are examples of array-like objects that interoperate well with NumPy. Astropy’s `Quantity` is an example which uses a dual approach of both subclassing and interoperability protocols.

## View casting

_View casting_ is the standard ndarray mechanism by which you take an ndarray of any subclass, and return a view of the array as another (specified) subclass:

>>> import numpy as np
>>> # create a completely useless ndarray subclass
>>> class C(np.ndarray): pass
>>> # create a standard ndarray
>>> arr = np.zeros((3,))
>>> # take a view of it, as our useless subclass
>>> c_arr = arr.view(C)
>>> type(c_arr)
<class '__main__.C'>

## Creating new from template

New instances of an ndarray subclass can also come about by a very similar mechanism to View casting, when numpy finds it needs to create a new instance from a template instance. The most obvious place this has to happen is when you are taking slices of subclassed arrays. For example:

>>> v = c_arr[1:]
>>> type(v) # the view is of type 'C'
<class '__main__.C'>
>>> v is c_arr # but it's a new instance
False

The slice is a _view_ onto the original `c_arr` data. So, when we take a view from the ndarray, we return a new ndarray, of the same class, that points to the data in the original. There are other points in the use of ndarrays where we need such views, such as copying arrays (`c_arr.copy()`), creating ufunc output arrays (see also __array_wrap__ for ufuncs and other functions), and reducing methods (like `c_arr.mean()`).
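These new-from-template paths can be checked directly; a small sketch using the same kind of do-nothing subclass:

```python
import numpy as np

class C(np.ndarray):
    pass

c_arr = np.zeros((3,)).view(C)

assert type(c_arr[1:]) is C         # slicing: new-from-template
assert type(c_arr.copy()) is C      # copying preserves the subclass
assert type(np.add(c_arr, 1)) is C  # ufunc output arrays are also C
```

In each case NumPy created a fresh instance of `C` from the template `c_arr`, without ever calling `C`'s constructor explicitly.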
## Relationship of view casting and new-from-template

These paths both use the same machinery. We make the distinction here, because they result in different input to your methods. Specifically, View casting means you have created a new instance of your array type from any potential subclass of ndarray. Creating new from template means you have created a new instance of your class from a pre-existing instance, allowing you - for example - to copy across attributes that are particular to your subclass.

## Implications for subclassing

If we subclass ndarray, we need to deal not only with explicit construction of our array type, but also View casting or Creating new from template. NumPy has the machinery to do this, and it is this machinery that makes subclassing slightly non-standard. There are two aspects to the machinery that ndarray uses to support views and new-from-template in subclasses. The first is the use of the `ndarray.__new__` method for the main work of object initialization, rather than the more usual `__init__` method. The second is the use of the `__array_finalize__` method to allow subclasses to clean up after the creation of views and new instances from templates.

### A brief Python primer on `__new__` and `__init__`

`__new__` is a standard Python method, and, if present, is called before `__init__` when we create a class instance. See the [python __new__ documentation](https://docs.python.org/reference/datamodel.html#object.__new__) for more detail. For example, consider the following Python code:

>>> class C:
...     def __new__(cls, *args):
...         print('Cls in __new__:', cls)
...         print('Args in __new__:', args)
...         # The `object` type __new__ method takes a single argument.
...         return object.__new__(cls)
...     def __init__(self, *args):
...         print('type(self) in __init__:', type(self))
...
...         print('Args in __init__:', args)

meaning that we get:

>>> c = C('hello')
Cls in __new__: <class 'C'>
Args in __new__: ('hello',)
type(self) in __init__: <class 'C'>
Args in __init__: ('hello',)

When we call `C('hello')`, the `__new__` method gets its own class as first argument, and the passed argument, which is the string `'hello'`. After python calls `__new__`, it usually (see below) calls our `__init__` method, with the output of `__new__` as the first argument (now a class instance), and the passed arguments following. As you can see, the object can be initialized in the `__new__` method or the `__init__` method, or both, and in fact ndarray does not have an `__init__` method, because all the initialization is done in the `__new__` method.

Why use `__new__` rather than just the usual `__init__`? Because in some cases, as for ndarray, we want to be able to return an object of some other class. Consider the following:

class D(C):
    def __new__(cls, *args):
        print('D cls is:', cls)
        print('D args in __new__:', args)
        return C.__new__(C, *args)

    def __init__(self, *args):
        # we never get here
        print('In D __init__')

meaning that:

>>> obj = D('hello')
D cls is: <class 'D'>
D args in __new__: ('hello',)
Cls in __new__: <class 'C'>
Args in __new__: ('hello',)
>>> type(obj)
<class 'C'>

The definition of `C` is the same as before, but for `D`, the `__new__` method returns an instance of class `C` rather than `D`. Note that the `__init__` method of `D` does not get called. In general, when the `__new__` method returns an object of class other than the class in which it is defined, the `__init__` method of that class is not called. This is how subclasses of the ndarray class are able to return views that preserve the class type. When taking a view, the standard ndarray machinery creates the new ndarray object with something like:

obj = ndarray.__new__(subtype, shape, ...

where `subtype` is the subclass. Thus the returned view is of the same class as the subclass, rather than being of class `ndarray`.
That solves the problem of returning views of the same type, but now we have a new problem. The machinery of ndarray can set the class this way, in its standard methods for taking views, but the ndarray `__new__` method knows nothing of what we have done in our own `__new__` method in order to set attributes, and so on. (Aside - why not call `obj = subtype.__new__(...` then? Because we may not have a `__new__` method with the same call signature).

### The role of `__array_finalize__`

`__array_finalize__` is the mechanism that numpy provides to allow subclasses to handle the various ways that new instances get created. Remember that subclass instances can come about in these three ways:

1. explicit constructor call (`obj = MySubClass(params)`). This will call the usual sequence of `MySubClass.__new__` then (if it exists) `MySubClass.__init__`.
2. View casting
3. Creating new from template

Our `MySubClass.__new__` method only gets called in the case of the explicit constructor call, so we can’t rely on `MySubClass.__new__` or `MySubClass.__init__` to deal with the view casting and new-from-template. It turns out that `MySubClass.__array_finalize__` _does_ get called for all three methods of object creation, so this is where our object creation housekeeping usually goes.

* For the explicit constructor call, our subclass will need to create a new ndarray instance of its own class. In practice this means that we, the authors of the code, will need to make a call to `ndarray.__new__(MySubClass,...)`, a class-hierarchy prepared call to `super().__new__(cls, ...)`, or do view casting of an existing array (see below).
* For view casting and new-from-template, the equivalent of `ndarray.__new__(MySubClass,...` is called, at the C level.

The arguments that `__array_finalize__` receives differ for the three methods of instance creation above.
The following code allows us to look at the call sequences and arguments:

import numpy as np

class C(np.ndarray):
    def __new__(cls, *args, **kwargs):
        print('In __new__ with class %s' % cls)
        return super().__new__(cls, *args, **kwargs)

    def __init__(self, *args, **kwargs):
        # in practice you probably will not need or want an __init__
        # method for your subclass
        print('In __init__ with class %s' % self.__class__)

    def __array_finalize__(self, obj):
        print('In array_finalize:')
        print('   self type is %s' % type(self))
        print('   obj type is %s' % type(obj))

Now:

>>> # Explicit constructor
>>> c = C((10,))
In __new__ with class <class 'C'>
In array_finalize:
   self type is <class 'C'>
   obj type is <class 'NoneType'>
In __init__ with class <class 'C'>
>>> # View casting
>>> a = np.arange(10)
>>> cast_a = a.view(C)
In array_finalize:
   self type is <class 'C'>
   obj type is <class 'numpy.ndarray'>
>>> # Slicing (example of new-from-template)
>>> cv = c[:1]
In array_finalize:
   self type is <class 'C'>
   obj type is <class 'C'>

The signature of `__array_finalize__` is:

def __array_finalize__(self, obj):

One sees that the `super` call, which goes to `ndarray.__new__`, passes `__array_finalize__` the new object, of our own class (`self`) as well as the object from which the view has been taken (`obj`). As you can see from the output above, the `self` is always a newly created instance of our subclass, and the type of `obj` differs for the three instance creation methods:

* When called from the explicit constructor, `obj` is `None`.
* When called from view casting, `obj` can be an instance of any subclass of ndarray, including our own.
* When called in new-from-template, `obj` is another instance of our own subclass, that we might use to update the new `self` instance.

Because `__array_finalize__` is the only method that always sees new instances being created, it is the sensible place to fill in instance defaults for new object attributes, among other tasks. This may be clearer with an example.
## Simple example - adding an extra attribute to ndarray

import numpy as np

class InfoArray(np.ndarray):

    def __new__(subtype, shape, dtype=float, buffer=None, offset=0,
                strides=None, order=None, info=None):
        # Create the ndarray instance of our type, given the usual
        # ndarray input arguments.  This will call the standard
        # ndarray constructor, but return an object of our type.
        # It also triggers a call to InfoArray.__array_finalize__
        obj = super().__new__(subtype, shape, dtype,
                              buffer, offset, strides, order)
        # set the new 'info' attribute to the value passed
        obj.info = info
        # Finally, we must return the newly created object:
        return obj

    def __array_finalize__(self, obj):
        # ``self`` is a new object resulting from
        # ndarray.__new__(InfoArray, ...), therefore it only has
        # attributes that the ndarray.__new__ constructor gave it -
        # i.e. those of a standard ndarray.
        #
        # We could have got to the ndarray.__new__ call in 3 ways:
        # From an explicit constructor - e.g. InfoArray():
        #    obj is None
        #    (we're in the middle of the InfoArray.__new__
        #    constructor, and self.info will be set when we return to
        #    InfoArray.__new__)
        if obj is None: return
        # From view casting - e.g arr.view(InfoArray):
        #    obj is arr
        #    (type(obj) can be InfoArray)
        # From new-from-template - e.g infoarr[:3]
        #    type(obj) is InfoArray
        #
        # Note that it is here, rather than in the __new__ method,
        # that we set the default value for 'info', because this
        # method sees all creation of default objects - with the
        # InfoArray.__new__ constructor, but also with
        # arr.view(InfoArray).
        self.info = getattr(obj, 'info', None)
        # We do not need to return anything

Using the object looks like this:

>>> obj = InfoArray(shape=(3,)) # explicit constructor
>>> type(obj)
<class 'InfoArray'>
>>> obj.info is None
True
>>> obj = InfoArray(shape=(3,), info='information')
>>> obj.info
'information'
>>> v = obj[1:] # new-from-template - here - slicing
>>> type(v)
<class 'InfoArray'>
>>> v.info
'information'
>>> arr = np.arange(10)
>>> cast_arr = arr.view(InfoArray) # view casting
>>> type(cast_arr)
<class 'InfoArray'>
>>> cast_arr.info is None
True

This class isn’t very useful, because it has the same constructor as the bare ndarray object, including passing in buffers and shapes and so on. We would probably prefer the constructor to be able to take an already formed ndarray from the usual numpy calls to `np.array` and return an object.

## Slightly more realistic example - attribute added to existing array

Here is a class that takes a standard ndarray that already exists, casts as our type, and adds an extra attribute.

import numpy as np

class RealisticInfoArray(np.ndarray):

    def __new__(cls, input_array, info=None):
        # Input array is an already formed ndarray instance
        # We first cast to be our class type
        obj = np.asarray(input_array).view(cls)
        # add the new attribute to the created instance
        obj.info = info
        # Finally, we must return the newly created object:
        return obj

    def __array_finalize__(self, obj):
        # see InfoArray.__array_finalize__ for comments
        if obj is None: return
        self.info = getattr(obj, 'info', None)

So:

>>> arr = np.arange(5)
>>> obj = RealisticInfoArray(arr, info='information')
>>> type(obj)
<class 'RealisticInfoArray'>
>>> obj.info
'information'
>>> v = obj[1:]
>>> type(v)
<class 'RealisticInfoArray'>
>>> v.info
'information'

## `__array_ufunc__` for ufuncs

A subclass can override what happens when executing numpy ufuncs on it by overriding the default `ndarray.__array_ufunc__` method.
This method is executed _instead_ of the ufunc and should return either the result of the operation, or [`NotImplemented`](https://docs.python.org/3/library/constants.html#NotImplemented "\(in Python v3.13\)") if the operation requested is not implemented.

The signature of `__array_ufunc__` is:

def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):

* _ufunc_ is the ufunc object that was called.
* _method_ is a string indicating how the Ufunc was called, either `"__call__"` to indicate it was called directly, or one of its [methods](../reference/ufuncs#ufuncs-methods): `"reduce"`, `"accumulate"`, `"reduceat"`, `"outer"`, or `"at"`.
* _inputs_ is a tuple of the input arguments to the `ufunc`.
* _kwargs_ contains any optional or keyword arguments passed to the function. This includes any `out` arguments, which are always contained in a tuple.

A typical implementation would convert any inputs or outputs that are instances of one’s own class, pass everything on to a superclass using `super()`, and finally return the results after possible back-conversion. An example, taken from the test case `test_ufunc_override_with_super` in `_core/tests/test_umath.py`, is the following.
import numpy as np

class A(np.ndarray):
    def __array_ufunc__(self, ufunc, method, *inputs, out=None, **kwargs):
        args = []
        in_no = []
        for i, input_ in enumerate(inputs):
            if isinstance(input_, A):
                in_no.append(i)
                args.append(input_.view(np.ndarray))
            else:
                args.append(input_)

        outputs = out
        out_no = []
        if outputs:
            out_args = []
            for j, output in enumerate(outputs):
                if isinstance(output, A):
                    out_no.append(j)
                    out_args.append(output.view(np.ndarray))
                else:
                    out_args.append(output)
            kwargs['out'] = tuple(out_args)
        else:
            outputs = (None,) * ufunc.nout

        info = {}
        if in_no:
            info['inputs'] = in_no
        if out_no:
            info['outputs'] = out_no

        results = super().__array_ufunc__(ufunc, method, *args, **kwargs)
        if results is NotImplemented:
            return NotImplemented

        if method == 'at':
            if isinstance(inputs[0], A):
                inputs[0].info = info
            return

        if ufunc.nout == 1:
            results = (results,)

        results = tuple((np.asarray(result).view(A)
                         if output is None else output)
                        for result, output in zip(results, outputs))
        if results and isinstance(results[0], A):
            results[0].info = info

        return results[0] if len(results) == 1 else results

So, this class does not actually do anything interesting: it just converts any instances of its own to regular ndarray (otherwise, we’d get infinite recursion!), and adds an `info` dictionary that tells which inputs and outputs it converted. Hence, e.g.,

>>> a = np.arange(5.).view(A)
>>> b = np.sin(a)
>>> b.info
{'inputs': [0]}
>>> b = np.sin(np.arange(5.), out=(a,))
>>> b.info
{'outputs': [0]}
>>> a = np.arange(5.).view(A)
>>> b = np.ones(1).view(A)
>>> c = a + b
>>> c.info
{'inputs': [0, 1]}
>>> a += b
>>> a.info
{'inputs': [0, 1], 'outputs': [0]}

Note that another approach would be to use `getattr(ufunc, method)(*inputs, **kwargs)` instead of the `super` call. For this example, the result would be identical, but there is a difference if another operand also defines `__array_ufunc__`.
E.g., let’s assume that we evaluate `np.add(a, b)`, where `b` is an instance of another class `B` that has an override. If you use `super` as in the example, `ndarray.__array_ufunc__` will notice that `b` has an override, which means it cannot evaluate the result itself. Thus, it will return `NotImplemented` and so will our class `A`. Then, control will be passed over to `b`, which either knows how to deal with us and produces a result, or does not and returns `NotImplemented`; if both sides return `NotImplemented`, Python raises a `TypeError`. If instead, we replace our `super` call with `getattr(ufunc, method)`, we effectively do `np.add(a.view(np.ndarray), b)`. Again, `B.__array_ufunc__` will be called, but now it sees an `ndarray` as the other argument. Likely, it will know how to handle this, and return a new instance of the `B` class to us. Our example class is not set up to handle this, but it might well be the best approach if, e.g., one were to re-implement `MaskedArray` using `__array_ufunc__`.

As a final note: if the `super` route is suited to a given class, an advantage of using it is that it helps in constructing class hierarchies. E.g., suppose that our other class `B` also used the `super` in its `__array_ufunc__` implementation, and we created a class `C` that depended on both, i.e., `class C(A, B)` (with, for simplicity, no further `__array_ufunc__` override). Then any ufunc on an instance of `C` would pass on to `A.__array_ufunc__`, the `super` call in `A` would go to `B.__array_ufunc__`, and the `super` call in `B` would go to `ndarray.__array_ufunc__`, thus allowing `A` and `B` to collaborate.

## `__array_wrap__` for ufuncs and other functions

Prior to numpy 1.13, the behaviour of ufuncs could only be tuned using `__array_wrap__` and `__array_prepare__` (the latter is now removed). These two allowed one to change the output type of a ufunc, but, in contrast to `__array_ufunc__`, did not allow one to make any changes to the inputs.
It is hoped to eventually deprecate these, but `__array_wrap__` is also used by other numpy functions and methods, such as `squeeze`, so at the present time is still needed for full functionality.

Conceptually, `__array_wrap__` “wraps up the action” in the sense of allowing a subclass to set the type of the return value and update attributes and metadata. Let’s show how this works with an example. First we return to the simpler example subclass, but with a different name and some print statements:

    import numpy as np

    class MySubClass(np.ndarray):

        def __new__(cls, input_array, info=None):
            obj = np.asarray(input_array).view(cls)
            obj.info = info
            return obj

        def __array_finalize__(self, obj):
            print('In __array_finalize__:')
            print('   self is %s' % repr(self))
            print('   obj is %s' % repr(obj))
            if obj is None:
                return
            self.info = getattr(obj, 'info', None)

        def __array_wrap__(self, out_arr, context=None, return_scalar=False):
            print('In __array_wrap__:')
            print('   self is %s' % repr(self))
            print('   arr is %s' % repr(out_arr))
            # then just call the parent
            return super().__array_wrap__(out_arr, context, return_scalar)

We run a ufunc on an instance of our new array:

    >>> obj = MySubClass(np.arange(5), info='spam')
    In __array_finalize__:
       self is MySubClass([0, 1, 2, 3, 4])
       obj is array([0, 1, 2, 3, 4])
    >>> arr2 = np.arange(5)+1
    >>> ret = np.add(arr2, obj)
    In __array_wrap__:
       self is MySubClass([0, 1, 2, 3, 4])
       arr is array([1, 3, 5, 7, 9])
    In __array_finalize__:
       self is MySubClass([1, 3, 5, 7, 9])
       obj is MySubClass([0, 1, 2, 3, 4])
    >>> ret
    MySubClass([1, 3, 5, 7, 9])
    >>> ret.info
    'spam'

Note that the ufunc (`np.add`) has called the `__array_wrap__` method with arguments `self` as `obj`, and `out_arr` as the (ndarray) result of the addition. In turn, the default `__array_wrap__` (`ndarray.__array_wrap__`) has cast the result to class `MySubClass`, and called `__array_finalize__` - hence the copying of the `info` attribute. This has all happened at the C level.
But, we could do anything we wanted:

    class SillySubClass(np.ndarray):

        def __array_wrap__(self, arr, context=None, return_scalar=False):
            return 'I lost your data'

    >>> arr1 = np.arange(5)
    >>> obj = arr1.view(SillySubClass)
    >>> arr2 = np.arange(5)
    >>> ret = np.multiply(obj, arr2)
    >>> ret
    'I lost your data'

So, by defining a specific `__array_wrap__` method for our subclass, we can tweak the output from ufuncs. The `__array_wrap__` method requires `self`, then an argument - which is the result of the ufunc or another NumPy function - and an optional parameter _context_. This parameter is passed by ufuncs as a 3-element tuple: (name of the ufunc, arguments of the ufunc, domain of the ufunc), but is not passed by other numpy functions. Though, as seen above, it is possible to do otherwise, `__array_wrap__` should return an instance of its containing class. See the masked array subclass for an implementation. `__array_wrap__` is always passed a NumPy array which may or may not be a subclass (usually of the caller).

## Extra gotchas - custom `__del__` methods and ndarray.base

One of the problems that ndarray solves is keeping track of memory ownership of ndarrays and their views. Consider the case where we have created an ndarray, `arr`, and have taken a slice with `v = arr[1:]`. The two objects are looking at the same memory.
NumPy keeps track of where the data came from for a particular array or view, with the `base` attribute:

    >>> # A normal ndarray, that owns its own data
    >>> arr = np.zeros((4,))
    >>> # In this case, base is None
    >>> arr.base is None
    True
    >>> # We take a view
    >>> v1 = arr[1:]
    >>> # base now points to the array that it derived from
    >>> v1.base is arr
    True
    >>> # Take a view of a view
    >>> v2 = v1[1:]
    >>> # base points to the original array that it was derived from
    >>> v2.base is arr
    True

In general, if the array owns its own memory, as for `arr` in this case, then `arr.base` will be None - there are some exceptions to this - see the numpy book for more details.

The `base` attribute is useful in being able to tell whether we have a view or the original array. This in turn can be useful if we need to know whether or not to do some specific cleanup when the subclassed array is deleted. For example, we may only want to do the cleanup if the original array is deleted, but not the views. For an example of how this can work, have a look at the `memmap` class in `numpy._core`.

## Subclassing and downstream compatibility

When sub-classing `ndarray` or creating duck-types that mimic the `ndarray` interface, it is your responsibility to decide how aligned your APIs will be with those of numpy. For convenience, many numpy functions that have a corresponding `ndarray` method (e.g., `sum`, `mean`, `take`, `reshape`) work by checking if the first argument to a function has a method of the same name. If it exists, the method is called instead of coercing the arguments to a numpy array.

For example, if you want your sub-class or duck-type to be compatible with numpy’s `sum` function, the method signature for this object’s `sum` method should be the following:

    def sum(self, axis=None, dtype=None, out=None, keepdims=False):
        ...
This is the exact same method signature as `np.sum`, so now if a user calls `np.sum` on this object, numpy will call the object’s own `sum` method and pass in the arguments enumerated above in the signature, and no errors will be raised because the signatures are completely compatible with each other.

If, however, you decide to deviate from this signature and do something like this:

    def sum(self, axis=None, dtype=None):
        ...

This object is no longer compatible with `np.sum` because if you call `np.sum`, it will pass in unexpected arguments `out` and `keepdims`, causing a `TypeError` to be raised.

If you wish to maintain compatibility with numpy and its subsequent versions (which might add new keyword arguments) but do not want to surface all of numpy’s arguments, your function’s signature should accept `**kwargs`. For example:

    def sum(self, axis=None, dtype=None, **unused_kwargs):
        ...

This object is now compatible with `np.sum` again because any extraneous arguments (i.e. keywords that are not `axis` or `dtype`) will be hidden away in the `**unused_kwargs` parameter.

# Data types

See also [Data type objects](../reference/arrays.dtypes#arrays-dtypes)

## Array types and conversions between types

NumPy supports a much greater variety of numerical types than Python does. This section shows which are available, and how to modify an array’s data-type.

NumPy numerical types are instances of [`numpy.dtype`](../reference/generated/numpy.dtype#numpy.dtype "numpy.dtype") (data-type) objects, each having unique characteristics. Once you have imported NumPy using `import numpy as np` you can create arrays with a specified dtype using the scalar types in the numpy top-level API, e.g. [`numpy.bool`](../reference/arrays.scalars#numpy.bool "numpy.bool"), [`numpy.float32`](../reference/arrays.scalars#numpy.float32 "numpy.float32"), etc. These scalar types can be used as arguments to the dtype keyword that many numpy functions and methods accept.
For example:

    >>> z = np.arange(3, dtype=np.uint8)
    >>> z
    array([0, 1, 2], dtype=uint8)

Array types can also be referred to by character codes, for example:

    >>> np.array([1, 2, 3], dtype='f')
    array([1., 2., 3.], dtype=float32)
    >>> np.array([1, 2, 3], dtype='d')
    array([1., 2., 3.], dtype=float64)

See [Specifying and constructing data types](../reference/arrays.dtypes#arrays-dtypes-constructing) for more information about specifying and constructing data type objects, including how to specify parameters like the byte order.

To convert the type of an array, use the .astype() method. For example:

    >>> z.astype(np.float64)
    array([0., 1., 2.])

Note that, above, we could have used the _Python_ float object as a dtype instead of [`numpy.float64`](../reference/arrays.scalars#numpy.float64 "numpy.float64"). NumPy knows that [`int`](https://docs.python.org/3/library/functions.html#int "\(in Python v3.13\)") refers to [`numpy.int_`](../reference/arrays.scalars#numpy.int_ "numpy.int_"), [`bool`](https://docs.python.org/3/library/functions.html#bool "\(in Python v3.13\)") means [`numpy.bool`](../reference/arrays.scalars#numpy.bool "numpy.bool"), that [`float`](https://docs.python.org/3/library/functions.html#float "\(in Python v3.13\)") is [`numpy.float64`](../reference/arrays.scalars#numpy.float64 "numpy.float64") and [`complex`](https://docs.python.org/3/library/functions.html#complex "\(in Python v3.13\)") is [`numpy.complex128`](../reference/arrays.scalars#numpy.complex128 "numpy.complex128"). The other data-types do not have Python equivalents.

To determine the type of an array, look at the dtype attribute:

    >>> z.dtype
    dtype('uint8')

dtype objects also contain information about the type, such as its bit-width and its byte-order.
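These properties can be read directly off the dtype object; a quick sketch:

```python
import numpy as np

d = np.dtype(np.int16)
# itemsize is the element width in bytes; kind is a one-character type
# category ('i' signed integer, 'u' unsigned, 'f' float, ...); byteorder
# is '=' (native), '<' (little-endian), '>' (big-endian) or '|' (n/a).
assert d.itemsize == 2
assert d.kind == 'i'
assert d.byteorder in '=<>|'
```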
The data type can also be used indirectly to query properties of the type, such as whether it is an integer:

    >>> d = np.dtype(np.int64)
    >>> d
    dtype('int64')
    >>> np.issubdtype(d, np.integer)
    True
    >>> np.issubdtype(d, np.floating)
    False

### Numerical Data Types

There are 5 basic numerical types representing booleans (`bool`), integers (`int`), unsigned integers (`uint`), floating point (`float`) and `complex`. A basic numerical type name combined with a numeric bitsize defines a concrete type. The bitsize is the number of bits that are needed to represent a single value in memory. For example, [`numpy.float64`](../reference/arrays.scalars#numpy.float64 "numpy.float64") is a 64 bit floating point data type. Some types, such as [`numpy.int_`](../reference/arrays.scalars#numpy.int_ "numpy.int_") and [`numpy.intp`](../reference/arrays.scalars#numpy.intp "numpy.intp"), have differing bitsizes, depending on the platform (e.g. 32-bit vs. 64-bit CPU architectures). This should be taken into account when interfacing with low-level code (such as C or Fortran) where the raw memory is addressed.

### Data Types for Strings and Bytes

In addition to numerical types, NumPy also supports storing unicode strings, via the [`numpy.str_`](../reference/arrays.scalars#numpy.str_ "numpy.str_") dtype (`U` character code), null-terminated byte sequences via [`numpy.bytes_`](../reference/arrays.scalars#numpy.bytes_ "numpy.bytes_") (`S` character code), and arbitrary byte sequences, via [`numpy.void`](../reference/arrays.scalars#numpy.void "numpy.void") (`V` character code).

All of the above are _fixed-width_ data types. They are parameterized by a width, in either bytes or unicode points, that a single data element in the array must fit inside. This means that storing an array of byte sequences or strings using this dtype requires knowing or calculating the sizes of the longest text or byte sequence in advance.
As an example, we can create an array storing the words `"hello"` and `"world!"`:

    >>> np.array(["hello", "world!"])
    array(['hello', 'world!'], dtype='<U6')

Here the data type was detected as `<U6`: a unicode string six code points wide, the length of the longest entry. Requesting a smaller width truncates the data:

    >>> np.array(["hello", "world!"], dtype="U5")
    array(['hello', 'world'], dtype='<U5')

while a larger width simply leaves room to spare:

    >>> np.array(["hello", "world!"], dtype="U7")
    array(['hello', 'world!'], dtype='<U7')

For the bytes dtype, the unused space is filled with trailing null bytes, visible in the raw buffer:

    >>> np.array(["hello", "world"], dtype="S7").tobytes()
    b'hello\x00\x00world\x00\x00'

Each entry is padded with two extra null bytes. Note however that NumPy cannot tell the difference between intentionally stored trailing nulls and padding nulls:

    >>> x = [b"hello\0\0", b"world"]
    >>> a = np.array(x, dtype="S7")
    >>> print(a[0])
    b'hello'
    >>> a[0] == x[0]
    False

If you need to store and round-trip any trailing null bytes, you will need to use an unstructured void data type:

    >>> a = np.array(x, dtype="V7")
    >>> a
    array([b'\x68\x65\x6C\x6C\x6F\x00\x00', b'\x77\x6F\x72\x6C\x64\x00\x00'],
          dtype='|V7')
    >>> a[0] == np.void(x[0])
    True

Advanced types, not listed above, are explored in section [Structured arrays](basics.rec#structured-arrays).

## Relationship Between NumPy Data Types and C Data Types

NumPy provides both bit sized type names and names based on the names of C types. Since the definitions of the C types are platform dependent, the explicitly bit sized names should be preferred to avoid platform-dependent behavior in programs using NumPy.

To ease integration with C code, where it is more natural to refer to platform-dependent C types, NumPy also provides type aliases that correspond to the C types for the platform. Some dtypes have a trailing underscore to avoid confusion with builtin python type names, such as [`numpy.bool_`](../reference/arrays.scalars#numpy.bool_ "numpy.bool_").
| Canonical Python API name | Python API “C-like” name | Actual C type | Description |
| --- | --- | --- | --- |
| [`numpy.bool`](../reference/arrays.scalars#numpy.bool "numpy.bool") or [`numpy.bool_`](../reference/arrays.scalars#numpy.bool_ "numpy.bool_") | N/A | `bool` (defined in `stdbool.h`) | Boolean (True or False) stored as a byte. |
| [`numpy.int8`](../reference/arrays.scalars#numpy.int8 "numpy.int8") | [`numpy.byte`](../reference/arrays.scalars#numpy.byte "numpy.byte") | `signed char` | Platform-defined integer type with 8 bits. |
| [`numpy.uint8`](../reference/arrays.scalars#numpy.uint8 "numpy.uint8") | [`numpy.ubyte`](../reference/arrays.scalars#numpy.ubyte "numpy.ubyte") | `unsigned char` | Platform-defined integer type with 8 bits without sign. |
| [`numpy.int16`](../reference/arrays.scalars#numpy.int16 "numpy.int16") | [`numpy.short`](../reference/arrays.scalars#numpy.short "numpy.short") | `short` | Platform-defined integer type with 16 bits. |
| [`numpy.uint16`](../reference/arrays.scalars#numpy.uint16 "numpy.uint16") | [`numpy.ushort`](../reference/arrays.scalars#numpy.ushort "numpy.ushort") | `unsigned short` | Platform-defined integer type with 16 bits without sign. |
| [`numpy.int32`](../reference/arrays.scalars#numpy.int32 "numpy.int32") | [`numpy.intc`](../reference/arrays.scalars#numpy.intc "numpy.intc") | `int` | Platform-defined integer type with 32 bits. |
| [`numpy.uint32`](../reference/arrays.scalars#numpy.uint32 "numpy.uint32") | [`numpy.uintc`](../reference/arrays.scalars#numpy.uintc "numpy.uintc") | `unsigned int` | Platform-defined integer type with 32 bits without sign. |
| [`numpy.intp`](../reference/arrays.scalars#numpy.intp "numpy.intp") | N/A | `ssize_t`/`Py_ssize_t` | Platform-defined integer of size `size_t`; used e.g. for sizes. |
| [`numpy.uintp`](../reference/arrays.scalars#numpy.uintp "numpy.uintp") | N/A | `size_t` | Platform-defined integer type capable of storing the maximum allocation size. |
| N/A | `'p'` | `intptr_t` | Guaranteed to hold pointers. Character code only (Python and C). |
| N/A | `'P'` | `uintptr_t` | Guaranteed to hold pointers. Character code only (Python and C). |
| [`numpy.int32`](../reference/arrays.scalars#numpy.int32 "numpy.int32") or [`numpy.int64`](../reference/arrays.scalars#numpy.int64 "numpy.int64") | [`numpy.long`](../reference/arrays.scalars#numpy.long "numpy.long") | `long` | Platform-defined integer type with at least 32 bits. |
| [`numpy.uint32`](../reference/arrays.scalars#numpy.uint32 "numpy.uint32") or [`numpy.uint64`](../reference/arrays.scalars#numpy.uint64 "numpy.uint64") | [`numpy.ulong`](../reference/arrays.scalars#numpy.ulong "numpy.ulong") | `unsigned long` | Platform-defined integer type with at least 32 bits without sign. |
| N/A | [`numpy.longlong`](../reference/arrays.scalars#numpy.longlong "numpy.longlong") | `long long` | Platform-defined integer type with at least 64 bits. |
| N/A | [`numpy.ulonglong`](../reference/arrays.scalars#numpy.ulonglong "numpy.ulonglong") | `unsigned long long` | Platform-defined integer type with at least 64 bits without sign. |
| [`numpy.float16`](../reference/arrays.scalars#numpy.float16 "numpy.float16") | [`numpy.half`](../reference/arrays.scalars#numpy.half "numpy.half") | N/A | Half precision float: sign bit, 5 bits exponent, 10 bits mantissa. |
| [`numpy.float32`](../reference/arrays.scalars#numpy.float32 "numpy.float32") | [`numpy.single`](../reference/arrays.scalars#numpy.single "numpy.single") | `float` | Platform-defined single precision float: typically sign bit, 8 bits exponent, 23 bits mantissa. |
| [`numpy.float64`](../reference/arrays.scalars#numpy.float64 "numpy.float64") | [`numpy.double`](../reference/arrays.scalars#numpy.double "numpy.double") | `double` | Platform-defined double precision float: typically sign bit, 11 bits exponent, 52 bits mantissa. |
| `numpy.float96` or [`numpy.float128`](../reference/arrays.scalars#numpy.float128 "numpy.float128") | [`numpy.longdouble`](../reference/arrays.scalars#numpy.longdouble "numpy.longdouble") | `long double` | Platform-defined extended-precision float. |
| [`numpy.complex64`](../reference/arrays.scalars#numpy.complex64 "numpy.complex64") | [`numpy.csingle`](../reference/arrays.scalars#numpy.csingle "numpy.csingle") | `float complex` | Complex number, represented by two single-precision floats (real and imaginary components). |
| [`numpy.complex128`](../reference/arrays.scalars#numpy.complex128 "numpy.complex128") | [`numpy.cdouble`](../reference/arrays.scalars#numpy.cdouble "numpy.cdouble") | `double complex` | Complex number, represented by two double-precision floats (real and imaginary components). |
| `numpy.complex192` or [`numpy.complex256`](../reference/arrays.scalars#numpy.complex256 "numpy.complex256") | [`numpy.clongdouble`](../reference/arrays.scalars#numpy.clongdouble "numpy.clongdouble") | `long double complex` | Complex number, represented by two extended-precision floats (real and imaginary components). |

Since many of these have platform-dependent definitions, a set of fixed-size aliases are provided (See [Sized aliases](../reference/arrays.scalars#sized-aliases)).

## Array scalars

NumPy generally returns elements of arrays as array scalars (a scalar with an associated dtype). Array scalars differ from Python scalars, but for the most part they can be used interchangeably (the primary exception is for versions of Python older than v2.x, where integer array scalars cannot act as indices for lists and tuples). There are some exceptions, such as when code requires very specific attributes of a scalar or when it checks specifically whether a value is a Python scalar.
Generally, problems are easily fixed by explicitly converting array scalars to Python scalars, using the corresponding Python type function (e.g., [`int`](https://docs.python.org/3/library/functions.html#int "\(in Python v3.13\)"), [`float`](https://docs.python.org/3/library/functions.html#float "\(in Python v3.13\)"), [`complex`](https://docs.python.org/3/library/functions.html#complex "\(in Python v3.13\)"), [`str`](https://docs.python.org/3/library/stdtypes.html#str "\(in Python v3.13\)")).

The primary advantage of using array scalars is that they preserve the array type (Python may not have a matching scalar type available, e.g. `int16`). Therefore, the use of array scalars ensures identical behaviour between arrays and scalars, irrespective of whether the value is inside an array or not. NumPy scalars also have many of the same methods arrays do.

## Overflow errors

The fixed size of NumPy numeric types may cause overflow errors when a value requires more memory than available in the data type. For example, [`numpy.power`](../reference/generated/numpy.power#numpy.power "numpy.power") evaluates `100 ** 9` correctly for 64-bit integers, but gives -1486618624 (incorrect) for a 32-bit integer.

    >>> np.power(100, 9, dtype=np.int64)
    1000000000000000000
    >>> np.power(100, 9, dtype=np.int32)
    np.int32(-1486618624)

The behaviour of NumPy and Python integer types differs significantly for integer overflows and may confuse users expecting NumPy integers to behave similarly to Python’s [`int`](https://docs.python.org/3/library/functions.html#int "\(in Python v3.13\)"). Unlike NumPy, the size of Python’s [`int`](https://docs.python.org/3/library/functions.html#int "\(in Python v3.13\)") is flexible. This means Python integers may expand to accommodate any integer and will not overflow.
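As a small sketch of the difference: fixed-width NumPy array arithmetic wraps around on overflow, while Python ints simply grow:

```python
import numpy as np

# Python ints are arbitrary precision and never overflow:
assert (2**31 - 1) + 1 == 2**31

# A 32-bit NumPy array silently wraps around instead:
a = np.array([2**31 - 1], dtype=np.int32)
assert (a + 1)[0] == -2**31
```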
NumPy provides [`numpy.iinfo`](../reference/generated/numpy.iinfo#numpy.iinfo "numpy.iinfo") and [`numpy.finfo`](../reference/generated/numpy.finfo#numpy.finfo "numpy.finfo") to verify the minimum and maximum values of NumPy integer and floating point types respectively:

    >>> np.iinfo(int)  # Bounds of the default integer on this system.
    iinfo(min=-9223372036854775808, max=9223372036854775807, dtype=int64)
    >>> np.iinfo(np.int32)  # Bounds of a 32-bit integer
    iinfo(min=-2147483648, max=2147483647, dtype=int32)
    >>> np.iinfo(np.int64)  # Bounds of a 64-bit integer
    iinfo(min=-9223372036854775808, max=9223372036854775807, dtype=int64)

If 64-bit integers are still too small, the result may be cast to a floating point number. Floating point numbers offer a larger, but inexact, range of possible values.

    >>> np.power(100, 100, dtype=np.int64)  # Incorrect even with 64-bit int
    0
    >>> np.power(100, 100, dtype=np.float64)
    1e+200

## Floating point precision

Many functions in NumPy, especially those in [`numpy.linalg`](../reference/routines.linalg#module-numpy.linalg "numpy.linalg"), involve floating-point arithmetic, which can introduce small inaccuracies due to the way computers represent decimal numbers. For instance, when performing basic arithmetic operations involving floating-point numbers:

    >>> 0.3 - 0.2 - 0.1  # This does not equal 0 due to floating-point precision
    -2.7755575615628914e-17

To handle such cases, it’s advisable to use functions like `np.isclose` to compare values, rather than checking for exact equality:

    >>> np.isclose(0.3 - 0.2 - 0.1, 0, rtol=1e-05)  # Check for closeness to 0
    True

In this example, `np.isclose` accounts for the minor inaccuracies that occur in floating-point calculations by applying a relative tolerance, ensuring that results within a small threshold are considered close. For more information about precision in calculations, see [Floating-Point Arithmetic](https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html).
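One subtlety worth noting: when the reference value is zero, the relative tolerance contributes nothing (it is scaled by the magnitude of the reference value), so the absolute tolerance `atol` is what actually decides the comparison. A small sketch:

```python
import numpy as np

diff = 0.3 - 0.2 - 0.1           # about -2.8e-17, not exactly 0

# With the reference value 0, rtol * |0| == 0, so atol does the work:
assert np.isclose(diff, 0, atol=1e-12)

# A value well above atol is correctly reported as not close to 0:
assert not np.isclose(1e-7, 0, atol=1e-12)
```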
## Extended precision Python’s floating-point numbers are usually 64-bit floating-point numbers, nearly equivalent to [`numpy.float64`](../reference/arrays.scalars#numpy.float64 "numpy.float64"). In some unusual situations it may be useful to use floating-point numbers with more precision. Whether this is possible in numpy depends on the hardware and on the development environment: specifically, x86 machines provide hardware floating-point with 80-bit precision, and while most C compilers provide this as their `long double` type, MSVC (standard for Windows builds) makes `long double` identical to `double` (64 bits). NumPy makes the compiler’s `long double` available as [`numpy.longdouble`](../reference/arrays.scalars#numpy.longdouble "numpy.longdouble") (and `np.clongdouble` for the complex numbers). You can find out what your numpy provides with `np.finfo(np.longdouble)`. NumPy does not provide a dtype with more precision than C’s `long double`; in particular, the 128-bit IEEE quad precision data type (FORTRAN’s `REAL*16`) is not available. For efficient memory alignment, [`numpy.longdouble`](../reference/arrays.scalars#numpy.longdouble "numpy.longdouble") is usually stored padded with zero bits, either to 96 or 128 bits. Which is more efficient depends on hardware and development environment; typically on 32-bit systems they are padded to 96 bits, while on 64-bit systems they are typically padded to 128 bits. `np.longdouble` is padded to the system default; `np.float96` and `np.float128` are provided for users who want specific padding. In spite of the names, `np.float96` and `np.float128` provide only as much precision as `np.longdouble`, that is, 80 bits on most x86 machines and 64 bits in standard Windows builds. 
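To see what your platform actually provides, `np.finfo` reports the storage size and precision. The exact numbers are platform-dependent, so treat this as a sketch:

```python
import numpy as np

info = np.finfo(np.longdouble)
# On x86 Linux this typically reports 128 storage bits (80 bits of which
# are used) with 18 decimal digits of precision; on MSVC builds,
# longdouble is identical to float64.
print(info.bits, info.precision)
assert info.bits in (64, 80, 96, 128)
assert info.eps <= np.finfo(np.float64).eps
```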
Be warned that even if [`numpy.longdouble`](../reference/arrays.scalars#numpy.longdouble "numpy.longdouble") offers more precision than python [`float`](https://docs.python.org/3/library/functions.html#float "\(in Python v3.13\)"), it is easy to lose that extra precision, since python often forces values to pass through `float`. For example, the `%` formatting operator requires its arguments to be converted to standard python types, and it is therefore impossible to preserve extended precision even if many decimal places are requested. It can be useful to test your code with the value `1 + np.finfo(np.longdouble).eps`.

# Universal functions (ufunc) basics

See also [Universal functions (ufunc)](../reference/ufuncs#ufuncs)

A universal function (or [ufunc](../glossary#term-ufunc) for short) is a function that operates on [`ndarrays`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") in an element-by-element fashion, supporting array broadcasting, type casting, and several other standard features. That is, a ufunc is a “[vectorized](../glossary#term-vectorization)” wrapper for a function that takes a fixed number of specific inputs and produces a fixed number of specific outputs. In NumPy, universal functions are instances of the [`numpy.ufunc`](../reference/generated/numpy.ufunc#numpy.ufunc "numpy.ufunc") class. Many of the built-in functions are implemented in compiled C code. The basic ufuncs operate on scalars, but there is also a generalized kind for which the basic elements are sub-arrays (vectors, matrices, etc.), and broadcasting is done over other dimensions. The simplest example is the addition operator:

    >>> np.array([0,2,3,4]) + np.array([1,1,-1,2])
    array([1, 3, 2, 6])

One can also produce custom [`numpy.ufunc`](../reference/generated/numpy.ufunc#numpy.ufunc "numpy.ufunc") instances using the [`numpy.frompyfunc`](../reference/generated/numpy.frompyfunc#numpy.frompyfunc "numpy.frompyfunc") factory function.
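As a small illustration of the factory function, here is a plain Python function turned into a ufunc. Note that `frompyfunc` always produces `object`-dtype results:

```python
import numpy as np

def py_hypot(a, b):
    # a plain Python scalar function
    return (a * a + b * b) ** 0.5

# 2 inputs, 1 output
uhypot = np.frompyfunc(py_hypot, 2, 1)

result = uhypot([3, 5], [4, 12])   # broadcasts like any ufunc
assert result.tolist() == [5.0, 13.0]
assert result.dtype == object
```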
## Ufunc methods

All ufuncs have four methods. They can be found at [Methods](../reference/ufuncs#ufuncs-methods). However, these methods only make sense on scalar ufuncs that take two input arguments and return one output argument. Attempting to call these methods on other ufuncs will cause a [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError "\(in Python v3.13\)").

The reduce-like methods all take an _axis_ keyword, a _dtype_ keyword, and an _out_ keyword, and the arrays must all have dimension >= 1. The _axis_ keyword specifies the axis of the array over which the reduction will take place (with negative values counting backwards). Generally, it is an integer, though for [`numpy.ufunc.reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce"), it can also be a tuple of `int` to reduce over several axes at once, or `None`, to reduce over all axes. For example:

    >>> x = np.arange(9).reshape(3,3)
    >>> x
    array([[0, 1, 2],
           [3, 4, 5],
           [6, 7, 8]])
    >>> np.add.reduce(x, 1)
    array([ 3, 12, 21])
    >>> np.add.reduce(x, (0, 1))
    36

The _dtype_ keyword allows you to manage a very common problem that arises when naively using [`ufunc.reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce"). Sometimes you may have an array of a certain data type and wish to add up all of its elements, but the result does not fit into the data type of the array. This commonly happens if you have an array of single-byte integers. The _dtype_ keyword allows you to alter the data type over which the reduction takes place (and therefore the type of the output). Thus, you can ensure that the output is a data type with precision large enough to handle your output. The responsibility of altering the reduce type is mostly up to you.
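For instance, a small sketch with single-byte integers and an explicitly requested accumulator type:

```python
import numpy as np

a = np.full(300, 100, dtype=np.int8)  # the total, 30000, cannot fit in int8

# An explicit dtype controls both the accumulator and the output type:
total = np.add.reduce(a, dtype=np.int64)
assert total == 30000
assert total.dtype == np.int64
```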
There is one exception: if no _dtype_ is given for a reduction on the “add” or “multiply” operations, then if the input type is an integer (or Boolean) data-type and smaller than the size of the [`numpy.int_`](../reference/arrays.scalars#numpy.int_ "numpy.int_") data type, it will be internally upcast to the [`int_`](../reference/arrays.scalars#numpy.int_ "numpy.int_") (or [`numpy.uint`](../reference/arrays.scalars#numpy.uint "numpy.uint")) data-type. In the previous example:

    >>> x.dtype
    dtype('int64')
    >>> np.multiply.reduce(x, dtype=float)
    array([ 0., 28., 80.])

Finally, the _out_ keyword allows you to provide an output array (or a tuple of output arrays for multi-output ufuncs). If _out_ is given, the _dtype_ argument is only used for the internal computations. Considering `x` from the previous example:

    >>> y = np.zeros(3, dtype=int)
    >>> y
    array([0, 0, 0])
    >>> np.multiply.reduce(x, dtype=float, out=y)
    array([ 0, 28, 80])

Ufuncs also have a fifth method, [`numpy.ufunc.at`](../reference/generated/numpy.ufunc.at#numpy.ufunc.at "numpy.ufunc.at"), that allows in place operations to be performed using advanced indexing. No buffering is used on the dimensions where advanced indexing is used, so the advanced index can list an item more than once and the operation will be performed on the result of the previous operation for that item.

## Output type determination

The output of the ufunc (and its methods) is not necessarily an [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"), if all input arguments are not [`ndarrays`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"). Indeed, if any input defines an [`__array_ufunc__`](../reference/arrays.classes#numpy.class.__array_ufunc__ "numpy.class.__array_ufunc__") method, control will be passed completely to that function, i.e., the ufunc is overridden.
If none of the inputs overrides the ufunc, then all output arrays will be passed to the [`__array_wrap__`](../reference/arrays.classes#numpy.class.__array_wrap__ "numpy.class.__array_wrap__") method of the input (besides [`ndarrays`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"), and scalars) that defines it **and** has the highest [`__array_priority__`](../reference/arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") of any other input to the universal function. The default [`__array_priority__`](../reference/arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") of the ndarray is 0.0, and the default [`__array_priority__`](../reference/arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") of a subtype is 0.0. Matrices have [`__array_priority__`](../reference/arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") equal to 10.0. All ufuncs can also take output arguments which must be arrays or subclasses. If necessary, the result will be cast to the data-type(s) of the provided output array(s). If the output has an [`__array_wrap__`](../reference/arrays.classes#numpy.class.__array_wrap__ "numpy.class.__array_wrap__") method it is called instead of the one found on the inputs. ## Broadcasting See also [Broadcasting basics](basics.broadcasting) Each universal function takes array inputs and produces array outputs by performing the core function element-wise on the inputs (where an element is generally a scalar, but can be a vector or higher-order sub-array for generalized ufuncs). Standard [broadcasting rules](basics.broadcasting#general-broadcasting-rules) are applied so that inputs not sharing exactly the same shapes can still be usefully operated on. By these rules, if an input has a dimension size of 1 in its shape, the first data entry in that dimension will be used for all calculations along that dimension. 
In other words, the stepping machinery of the [ufunc](../glossary#term-ufunc) will simply not step along that dimension (the [stride](../reference/arrays.ndarray#memory-layout) will be 0 for that dimension).

## Type casting rules

Note

In NumPy 1.6.0, a type promotion API was created to encapsulate the mechanism for determining output types. See the functions [`numpy.result_type`](../reference/generated/numpy.result_type#numpy.result_type "numpy.result_type"), [`numpy.promote_types`](../reference/generated/numpy.promote_types#numpy.promote_types "numpy.promote_types"), and [`numpy.min_scalar_type`](../reference/generated/numpy.min_scalar_type#numpy.min_scalar_type "numpy.min_scalar_type") for more details.

At the core of every ufunc is a one-dimensional strided loop that implements the actual function for a specific type combination. When a ufunc is created, it is given a static list of inner loops and a corresponding list of type signatures over which the ufunc operates. The ufunc machinery uses this list to determine which inner loop to use for a particular case. You can inspect the [`.types`](../reference/generated/numpy.ufunc.types#numpy.ufunc.types "numpy.ufunc.types") attribute for a particular ufunc to see which type combinations have a defined inner loop and which output type they produce ([character codes](../reference/arrays.scalars#arrays-scalars-character-codes) are used in that output for brevity).

Casting must be done on one or more of the inputs whenever the ufunc does not have a core loop implementation for the input types provided. If an implementation for the input types cannot be found, then the algorithm searches for an implementation with a type signature to which all of the inputs can be cast “safely.” The first one it finds in its internal list of loops is selected and performed, after all necessary type casting. Recall that internal copies during ufuncs (even for casting) are limited to the size of an internal buffer (which is user settable).
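For example, the signatures that `np.add` defines can be listed directly (a quick sketch; the exact list is platform-dependent):

```python
import numpy as np

# Each entry maps input character codes to the output code,
# e.g. 'dd->d' is double + double -> double:
sigs = np.add.types
assert 'dd->d' in sigs
assert all('->' in sig for sig in sigs)
print(sigs[:5])
```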
Note

Universal functions in NumPy are flexible enough to have mixed type signatures. Thus, for example, a universal function could be defined that works with floating-point and integer values. See [`numpy.ldexp`](../reference/generated/numpy.ldexp#numpy.ldexp "numpy.ldexp") for an example.

By the above description, the casting rules are essentially implemented by the question of when a data type can be cast “safely” to another data type. The answer to this question can be determined in Python with a function call: [`can_cast(fromtype, totype)`](../reference/generated/numpy.can_cast#numpy.can_cast "numpy.can_cast"). The example below shows the results of this call for the internally supported types on the author’s 64-bit system. You can generate this table for your system with the code given in the example.

#### Example

Code segment showing the “can cast safely” table for a 64-bit system. Generally the output depends on the system; your system might result in a different table.

    >>> mark = {False: ' -', True: ' Y'}
    >>> def print_table(ntypes):
    ...     print('X ' + ' '.join(ntypes))
    ...     for row in ntypes:
    ...         print(row, end='')
    ...         for col in ntypes:
    ...             print(mark[np.can_cast(row, col)], end='')
    ...         print()
    ...
    >>> print_table(np.typecodes['All'])
    X ? b h i l q n p B H I L Q N P e f d g F D G S U V O M m
    ? Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y - Y
    b - Y Y Y Y Y Y Y - - - - - - - Y Y Y Y Y Y Y Y Y Y Y - Y
    h - - Y Y Y Y Y Y - - - - - - - - Y Y Y Y Y Y Y Y Y Y - Y
    i - - - Y Y Y Y Y - - - - - - - - - Y Y - Y Y Y Y Y Y - Y
    l - - - - Y Y Y Y - - - - - - - - - Y Y - Y Y Y Y Y Y - Y
    q - - - - Y Y Y Y - - - - - - - - - Y Y - Y Y Y Y Y Y - Y
    n - - - - Y Y Y Y - - - - - - - - - Y Y - Y Y Y Y Y Y - Y
    p - - - - Y Y Y Y - - - - - - - - - Y Y - Y Y Y Y Y Y - Y
    B - - Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y - Y
    H - - - Y Y Y Y Y - Y Y Y Y Y Y - Y Y Y Y Y Y Y Y Y Y - Y
    I - - - - Y Y Y Y - - Y Y Y Y Y - - Y Y - Y Y Y Y Y Y - Y
    L - - - - - - - - - - - Y Y Y Y - - Y Y - Y Y Y Y Y Y - -
    Q - - - - - - - - - - - Y Y Y Y - - Y Y - Y Y Y Y Y Y - -
    N - - - - - - - - - - - Y Y Y Y - - Y Y - Y Y Y Y Y Y - -
    P - - - - - - - - - - - Y Y Y Y - - Y Y - Y Y Y Y Y Y - -
    e - - - - - - - - - - - - - - - Y Y Y Y Y Y Y Y Y Y Y - -
    f - - - - - - - - - - - - - - - - Y Y Y Y Y Y Y Y Y Y - -
    d - - - - - - - - - - - - - - - - - Y Y - Y Y Y Y Y Y - -
    g - - - - - - - - - - - - - - - - - - Y - - Y Y Y Y Y - -
    F - - - - - - - - - - - - - - - - - - - Y Y Y Y Y Y Y - -
    D - - - - - - - - - - - - - - - - - - - - Y Y Y Y Y Y - -
    G - - - - - - - - - - - - - - - - - - - - - Y Y Y Y Y - -
    S - - - - - - - - - - - - - - - - - - - - - - Y Y Y Y - -
    U - - - - - - - - - - - - - - - - - - - - - - - Y Y Y - -
    V - - - - - - - - - - - - - - - - - - - - - - - - Y Y - -
    O - - - - - - - - - - - - - - - - - - - - - - - - - Y - -
    M - - - - - - - - - - - - - - - - - - - - - - - - Y Y Y -
    m - - - - - - - - - - - - - - - - - - - - - - - - Y Y - Y

You should note that, while included in the table for completeness, the ‘S’, ‘U’, and ‘V’ types cannot be operated on by ufuncs. Also, note that on a 32-bit system the integer types may have different sizes, resulting in a slightly altered table.
Mixed scalar-array operations use a different set of casting rules that ensure that a scalar cannot “upcast” an array unless the scalar is of a fundamentally different kind of data (i.e., under a different hierarchy in the data-type hierarchy) than the array. This rule enables you to use scalar constants in your code (which, as Python types, are interpreted accordingly in ufuncs) without worrying about whether the precision of the scalar constant will cause upcasting on your large (small precision) array.

## Use of internal buffers

Internally, buffers are used for misaligned data, swapped data, and data that has to be converted from one data type to another. The size of internal buffers is settable on a per-thread basis. There can be up to \\(2 (n_{\mathrm{inputs}} + n_{\mathrm{outputs}})\\) buffers of the specified size created to handle the data from all the inputs and outputs of a ufunc. The default size of a buffer is 8192 elements. Whenever buffer-based calculation would be needed, but all input arrays are smaller than the buffer size, those misbehaved or incorrectly-typed arrays will be copied before the calculation proceeds. Adjusting the size of the buffer may therefore alter the speed at which ufunc calculations of various sorts are completed. A simple interface for setting this variable is accessible using the function [`numpy.setbufsize`](../reference/generated/numpy.setbufsize#numpy.setbufsize "numpy.setbufsize").

## Error handling

Universal functions can trip special floating-point status registers in your hardware (such as divide-by-zero). If available on your platform, these registers will be regularly checked during calculation. Error handling is controlled on a per-thread basis, and can be configured using the functions [`numpy.seterr`](../reference/generated/numpy.seterr#numpy.seterr "numpy.seterr") and [`numpy.seterrcall`](../reference/generated/numpy.seterrcall#numpy.seterrcall "numpy.seterrcall").
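Both of these per-thread settings can be exercised from Python. A brief sketch (the doubled buffer size is an arbitrary choice, not a recommendation):

```python
import numpy as np

# Adjust the per-thread ufunc buffer size, then restore it.
old_size = np.getbufsize()
np.setbufsize(2 * old_size)
assert np.getbufsize() == 2 * old_size
np.setbufsize(old_size)

# Silence divide-by-zero reporting for this thread, then restore.
old_err = np.seterr(divide='ignore')
result = np.float64(1.0) / np.float64(0.0)   # no warning raised
np.seterr(**old_err)
print(result)   # inf
```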
## Overriding ufunc behavior

Classes (including ndarray subclasses) can override how ufuncs act on them by defining certain special methods. For details, see [Standard array subclasses](../reference/arrays.classes#arrays-classes).

# Byte-swapping

## Introduction to byte ordering and ndarrays

The `ndarray` is an object that provides a Python array interface to data in memory. It often happens that the memory that you want to view with an array is not of the same byte ordering as the computer on which you are running Python.

For example, I might be working on a computer with a little-endian CPU - such as an Intel Pentium, but I have loaded some data from a file written by a computer that is big-endian. Let’s say I have loaded 4 bytes from a file written by a Sun (big-endian) computer. I know that these 4 bytes represent two 16-bit integers. On a big-endian machine, a two-byte integer is stored with the Most Significant Byte (MSB) first, and then the Least Significant Byte (LSB). Thus the bytes are, in memory order:

1. MSB integer 1
2. LSB integer 1
3. MSB integer 2
4. LSB integer 2

Let’s say the two integers were in fact 1 and 770. Because 770 = 256 * 3 + 2, the 4 bytes in memory would contain respectively: 0, 1, 3, 2. The bytes I have loaded from the file would have these contents:

    >>> big_end_buffer = bytearray([0,1,3,2])
    >>> big_end_buffer
    bytearray(b'\x00\x01\x03\x02')

We might want to use an `ndarray` to access these integers. In that case, we can create an array around this memory, and tell numpy that there are two integers, and that they are 16 bit and big-endian:

    >>> import numpy as np
    >>> big_end_arr = np.ndarray(shape=(2,),dtype='>i2', buffer=big_end_buffer)
    >>> big_end_arr[0]
    np.int16(1)
    >>> big_end_arr[1]
    np.int16(770)

Note the array `dtype` above of `>i2`. The `>` means ‘big-endian’ (`<` is little-endian) and `i2` means ‘signed 2-byte integer’.
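A detail worth knowing when inspecting these flags: a dtype that already matches the machine's native ordering is reported with the code `'='`, regardless of whether you spelled it `'<'` or `'>'`. A small endianness-agnostic sketch:

```python
import sys
import numpy as np

native = '<' if sys.byteorder == 'little' else '>'
swapped = '>' if native == '<' else '<'

# A dtype spelled with the native code is reported as '=' ...
assert np.dtype(native + 'i2').byteorder == '='
# ... while the non-native spelling is kept verbatim.
assert np.dtype(swapped + 'i2').byteorder == swapped
# Single-byte types have no meaningful byte order at all: '|'
assert np.dtype('u1').byteorder == '|'
```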
For example, if our data represented a single unsigned 4-byte little-endian integer, the dtype string would be `<u4`:

    >>> little_end_u4 = np.ndarray(shape=(1,),dtype='<u4', buffer=big_end_buffer)
    >>> little_end_u4[0] == 1 * 256**1 + 3 * 256**2 + 2 * 256**3
    True

Returning to our `big_end_arr` - in this case our underlying data is big-endian (data endianness) and we’ve set the dtype to match (the dtype is also big-endian). However, sometimes you need to flip these around.

Warning

Scalars do not include byte order information, so extracting a scalar from an array will return an integer in native byte order. Hence:

    >>> big_end_arr[0].dtype.byteorder == little_end_u4[0].dtype.byteorder
    True

NumPy intentionally does not attempt to always preserve byte-order and for example converts to native byte-order in [`numpy.concatenate`](../reference/generated/numpy.concatenate#numpy.concatenate "numpy.concatenate").

## Changing byte ordering

As you can imagine from the introduction, there are two ways you can affect the relationship between the byte ordering of the array and the underlying memory it is looking at:

* Change the byte-ordering information in the array dtype so that it interprets the underlying data as being in a different byte order. This is the role of `arr.view(arr.dtype.newbyteorder())`
* Change the byte-ordering of the underlying data, leaving the dtype interpretation as it was. This is what `arr.byteswap()` does.

The common situations in which you need to change byte ordering are:

1. Your data and dtype endianness don’t match, and you want to change the dtype so that it matches the data.
2. Your data and dtype endianness don’t match, and you want to swap the data so that they match the dtype.
3.
Your data and dtype endianness match, but you want the data swapped and the dtype to reflect this.

### Data and dtype endianness don’t match, change dtype to match data

We make something where they don’t match:

    >>> wrong_end_dtype_arr = np.ndarray(shape=(2,),dtype='<i2', buffer=big_end_buffer)
    >>> wrong_end_dtype_arr[0]
    np.int16(256)

The obvious fix for this situation is to change the dtype so it gives the correct endianness:

    >>> fixed_end_dtype_arr = wrong_end_dtype_arr.view(np.dtype('<i2').newbyteorder())
    >>> fixed_end_dtype_arr[0]
    np.int16(1)

Note the array has not changed in memory:

    >>> fixed_end_dtype_arr.tobytes() == big_end_buffer
    True

### Data and type endianness don’t match, change data to match dtype

You might want to do this if you need the data in memory to be a certain ordering. For example you might be writing the memory out to a file that needs a certain byte ordering.

    >>> fixed_end_mem_arr = wrong_end_dtype_arr.byteswap()
    >>> fixed_end_mem_arr[0]
    np.int16(1)

Now the array _has_ changed in memory:

    >>> fixed_end_mem_arr.tobytes() == big_end_buffer
    False

### Data and dtype endianness match, swap data and dtype

You may have a correctly specified array dtype, but you need the array to have the opposite byte order in memory, and you want the dtype to match so the array values make sense. In this case you just do both of the previous operations:

    >>> swapped_end_arr = big_end_arr.byteswap()
    >>> swapped_end_arr = swapped_end_arr.view(swapped_end_arr.dtype.newbyteorder())
    >>> swapped_end_arr[0]
    np.int16(1)
    >>> swapped_end_arr.tobytes() == big_end_buffer
    False

An easier way of casting the data to a specific dtype and byte ordering can be achieved with the ndarray astype method:

    >>> swapped_end_arr = big_end_arr.astype('<i2')
    >>> swapped_end_arr[0]
    np.int16(1)
    >>> swapped_end_arr.tobytes() == big_end_buffer
    False

# Beyond the basics

## Iterating over elements in the array

### Basic iteration

One common algorithmic requirement is to be able to walk over all elements in a multidimensional array.
The array iterator object makes this easy to do in a generic way that works for arrays of any dimension. Naturally, if you know the number of dimensions you will be using, then you can always write nested for loops to accomplish the iteration. If, however, you want to write code that works with any number of dimensions, then you can make use of the array iterator. An array iterator object is returned when accessing the .flat attribute of an array. Basic usage is to call [`PyArray_IterNew`](../reference/c-api/array#c.PyArray_IterNew "PyArray_IterNew") ( `array` ) where array is an ndarray object (or one of its sub-classes). The returned object is an array-iterator object (the same object returned by the .flat attribute of the ndarray). This object is usually cast to PyArrayIterObject* so that its members can be accessed. The only members that are needed are `iter->size` which contains the total size of the array, `iter->index`, which contains the current 1-d index into the array, and `iter->dataptr` which is a pointer to the data for the current element of the array. Sometimes it is also useful to access `iter->ao` which is a pointer to the underlying ndarray object. After processing data at the current element of the array, the next element of the array can be obtained using the macro [`PyArray_ITER_NEXT`](../reference/c-api/array#c.PyArray_ITER_NEXT "PyArray_ITER_NEXT") ( `iter` ). The iteration always proceeds in a C-style contiguous fashion (last index varying the fastest). The [`PyArray_ITER_GOTO`](../reference/c-api/array#c.PyArray_ITER_GOTO "PyArray_ITER_GOTO") ( `iter`, `destination` ) can be used to jump to a particular point in the array, where `destination` is an array of npy_intp data-type with space to handle at least the number of dimensions in the underlying array. 
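Since the same iterator object is exposed in Python as the `.flat` attribute, its behavior — the 1-d index, the data at the current element, and the C-contiguous iteration order — can be checked interactively. A small sketch:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# .flat is the Python face of the array-iterator object
assert a.flat[4] == a[1, 1]                # 1-d index, like PyArray_ITER_GOTO1D
assert list(a.flat) == [0, 1, 2, 3, 4, 5]  # C-contiguous order (last index fastest)
```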
Occasionally it is useful to use [`PyArray_ITER_GOTO1D`](../reference/c-api/array#c.PyArray_ITER_GOTO1D "PyArray_ITER_GOTO1D") ( `iter`, `index` ) which will jump to the 1-d index given by the value of `index`. The most common usage, however, is given in the following example.

    PyObject *obj; /* assumed to be some ndarray object */
    PyArrayIterObject *iter;
    ...
    iter = (PyArrayIterObject *)PyArray_IterNew(obj);
    if (iter == NULL) goto fail; /* Assume fail has clean-up code */
    while (iter->index < iter->size) {
        /* do something with the data at iter->dataptr */
        PyArray_ITER_NEXT(iter);
    }
    ...

You can also use [`PyArrayIter_Check`](../reference/c-api/array#c.PyArrayIter_Check "PyArrayIter_Check") ( `obj` ) to ensure you have an iterator object and [`PyArray_ITER_RESET`](../reference/c-api/array#c.PyArray_ITER_RESET "PyArray_ITER_RESET") ( `iter` ) to reset an iterator object back to the beginning of the array.

It should be emphasized at this point that you may not need the array iterator if your array is already contiguous (using an array iterator will work but will be slower than the fastest code you could write). The major purpose of array iterators is to encapsulate iteration over N-dimensional arrays with arbitrary strides. They are used in many, many places in the NumPy source code itself. If you already know your array is contiguous (Fortran or C), then simply adding the element-size to a running pointer variable will step you through the array very efficiently. In other words, code like this will probably be faster for you in the contiguous case (assuming doubles).

    npy_intp size;
    double *dptr;  /* could make this any variable type */
    size = PyArray_SIZE(obj);
    dptr = PyArray_DATA(obj);
    while(size--) {
        /* do something with the data at dptr */
        dptr++;
    }

### Iterating over all but one axis

A common algorithm is to loop over all elements of an array and perform some function with each element by issuing a function call.
As function calls can be time-consuming, one way to speed up this kind of algorithm is to write the function so it takes a vector of data and then write the iteration so the function call is performed for an entire dimension of data at a time. This increases the amount of work done per function call, thereby reducing the function-call overhead to a small(er) fraction of the total time. Even if the interior of the loop is performed without a function call it can be advantageous to perform the inner loop over the dimension with the highest number of elements to take advantage of speed enhancements available on microprocessors that use pipelining to enhance fundamental operations.

The [`PyArray_IterAllButAxis`](../reference/c-api/array#c.PyArray_IterAllButAxis "PyArray_IterAllButAxis") ( `array`, `&dim` ) constructs an iterator object that is modified so that it will not iterate over the dimension indicated by dim. The only restriction on this iterator object is that the [`PyArray_ITER_GOTO1D`](../reference/c-api/array#c.PyArray_ITER_GOTO1D "PyArray_ITER_GOTO1D") ( `it`, `ind` ) macro cannot be used (thus flat indexing won’t work either if you pass this object back to Python — so you shouldn’t do this). Note that the returned object from this routine is still usually cast to PyArrayIterObject *. All that’s been done is to modify the strides and dimensions of the returned iterator to simulate iterating over array[…,0,…] where 0 is placed on the \\(\textrm{dim}^{\textrm{th}}\\) dimension. If dim is negative, then the dimension with the largest axis is found and used.

### Iterating over multiple arrays

Very often, it is desirable to iterate over several arrays at the same time. The universal functions are an example of this kind of behavior. If all you want to do is iterate over arrays with the same shape, then simply creating several iterator objects is the standard procedure.
For example, the following code iterates over two arrays assumed to be the same shape and size (actually obj1 just has to have at least as many total elements as does obj2):

    /* It is already assumed that obj1 and obj2
       are ndarrays of the same shape and size. */
    iter1 = (PyArrayIterObject *)PyArray_IterNew(obj1);
    if (iter1 == NULL) goto fail;
    iter2 = (PyArrayIterObject *)PyArray_IterNew(obj2);
    if (iter2 == NULL) goto fail; /* assume iter1 is DECREF'd at fail */
    while (iter2->index < iter2->size) {
        /* process with iter1->dataptr and iter2->dataptr */
        PyArray_ITER_NEXT(iter1);
        PyArray_ITER_NEXT(iter2);
    }

### Broadcasting over multiple arrays

When multiple arrays are involved in an operation, you may want to use the same broadcasting rules that the math operations (_i.e._ the ufuncs) use. This can be done easily using the [`PyArrayMultiIterObject`](../reference/c-api/types-and-structures#c.PyArrayMultiIterObject "PyArrayMultiIterObject"). This is the object returned from the Python command numpy.broadcast and it is almost as easy to use from C. The function [`PyArray_MultiIterNew`](../reference/c-api/array#c.PyArray_MultiIterNew "PyArray_MultiIterNew") ( `n`, `...` ) is used (with `n` input objects in place of `...` ). The input objects can be arrays or anything that can be converted into an array. A pointer to a PyArrayMultiIterObject is returned. Broadcasting has already been accomplished which adjusts the iterators so that all that needs to be done to advance to the next element in each array is for PyArray_ITER_NEXT to be called for each of the inputs.
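From Python, the same machinery is exposed as `numpy.broadcast`, mentioned above. A quick sketch of the equivalent lockstep iteration:

```python
import numpy as np

a = np.arange(3).reshape(3, 1)   # shape (3, 1)
b = np.arange(2)                 # shape (2,)

bcast = np.broadcast(a, b)       # same object a PyArrayMultiIterObject wraps
assert bcast.shape == (3, 2)     # broadcasting has already been accomplished

# Each advance yields one element from every input, in lockstep
pairs = list(bcast)
assert len(pairs) == 6
assert pairs[0] == (0, 0)
```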
This incrementing is automatically performed by the [`PyArray_MultiIter_NEXT`](../reference/c-api/array#c.PyArray_MultiIter_NEXT "PyArray_MultiIter_NEXT") ( `obj` ) macro (which can handle a multiterator `obj` as either a [PyArrayMultiIterObject](../reference/c-api/types-and-structures#c.PyArrayMultiIterObject "PyArrayMultiIterObject")* or a [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")*). The data from input number `i` is available using [`PyArray_MultiIter_DATA`](../reference/c-api/array#c.PyArray_MultiIter_DATA "PyArray_MultiIter_DATA") ( `obj`, `i` ). An example of using this feature follows.

    mobj = PyArray_MultiIterNew(2, obj1, obj2);
    size = mobj->size;
    while(size--) {
        ptr1 = PyArray_MultiIter_DATA(mobj, 0);
        ptr2 = PyArray_MultiIter_DATA(mobj, 1);
        /* code using contents of ptr1 and ptr2 */
        PyArray_MultiIter_NEXT(mobj);
    }

The function [`PyArray_RemoveSmallest`](../reference/c-api/array#c.PyArray_RemoveSmallest "PyArray_RemoveSmallest") ( `multi` ) can be used to take a multi-iterator object and adjust all the iterators so that iteration does not take place over the largest dimension (it makes that dimension of size 1). The code being looped over that makes use of the pointers will very likely also need the strides data for each of the iterators. This information is stored in multi->iters[i]->strides.

There are several examples of using the multi-iterator in the NumPy source code as it makes N-dimensional broadcasting code very simple to write. Browse the source for more examples.

## User-defined data-types

NumPy comes with 24 builtin data-types. While this covers a large majority of possible use cases, it is conceivable that a user may have a need for an additional data-type. There is some support for adding an additional data-type into the NumPy system. This additional data-type will behave much like a regular data-type except ufuncs must have 1-d loops registered to handle it separately.
Also, checking whether other data-types can be cast “safely” to and from this new type will always return “can cast” unless you also register which types your new data-type can be cast to and from.

The NumPy source code includes an example of a custom data-type as part of its test suite. The file `_rational_tests.c.src` in the source code directory `numpy/_core/src/umath/` contains an implementation of a data-type that represents a rational number as the ratio of two 32-bit integers.

### Adding the new data-type

To begin to make use of the new data-type, you need to first define a new Python type to hold the scalars of your new data-type. It should be acceptable to inherit from one of the array scalars if your new type has a binary compatible layout. This will allow your new data type to have the methods and attributes of array scalars. New data-types must have a fixed memory size (if you want to define a data-type that needs a flexible representation, like a variable-precision number, then use a pointer to the object as the data-type). The memory layout of the object structure for the new Python type must be PyObject_HEAD followed by the fixed-size memory needed for the data-type. For example, a suitable structure for the new Python type is:

    typedef struct {
        PyObject_HEAD
        some_data_type obval; /* the name can be whatever you want */
    } PySomeDataTypeObject;

After you have defined a new Python type object, you must then define a new [`PyArray_Descr`](../reference/c-api/types-and-structures#c.PyArray_Descr "PyArray_Descr") structure whose typeobject member will contain a pointer to the data-type you’ve just defined. In addition, the required functions in the “.f” member must be defined: nonzero, copyswap, copyswapn, setitem, getitem, and cast. The more functions in the “.f” member you define, however, the more useful the new data-type will be. It is very important to initialize unused functions to NULL.
This can be achieved using [`PyArray_InitArrFuncs`](../reference/c-api/array#c.PyArray_InitArrFuncs "PyArray_InitArrFuncs") (f).

Once a new [`PyArray_Descr`](../reference/c-api/types-and-structures#c.PyArray_Descr "PyArray_Descr") structure is created and filled with the needed information and useful functions you call [`PyArray_RegisterDataType`](../reference/c-api/array#c.PyArray_RegisterDataType "PyArray_RegisterDataType") (new_descr). The return value from this call is an integer providing you with a unique type_number that specifies your data-type. This type number should be stored and made available by your module so that other modules can use it to recognize your data-type.

Note that this API is inherently thread-unsafe. See `thread_safety` for more details about thread safety in NumPy.

### Registering a casting function

You may want to allow builtin (and other user-defined) data-types to be cast automatically to your data-type. In order to make this possible, you must register a casting function with the data-type you want to be able to cast from. This requires writing low-level casting functions for each conversion you want to support and then registering these functions with the data-type descriptor. A low-level casting function has the signature:

    void castfunc(void *from, void *to, npy_intp n, void *fromarr, void *toarr)

Cast `n` elements `from` one type `to` another. The data to cast from is in a contiguous, correctly-swapped and aligned chunk of memory pointed to by from. The buffer to cast to is also contiguous, correctly-swapped and aligned. The fromarr and toarr arguments should only be used for flexible-element-sized arrays (string, unicode, void).
An example castfunc is:

    static void
    double_to_float(double *from, float *to, npy_intp n,
                    void *ignore1, void *ignore2)
    {
        while (n--) {
            (*to++) = (float) *(from++);
        }
    }

This could then be registered to convert doubles to floats using the code:

    doub = PyArray_DescrFromType(NPY_DOUBLE);
    PyArray_RegisterCastFunc(doub, NPY_FLOAT,
        (PyArray_VectorUnaryFunc *)double_to_float);
    Py_DECREF(doub);

### Registering coercion rules

By default, all user-defined data-types are not presumed to be safely castable to any builtin data-types. In addition, builtin data-types are not presumed to be safely castable to user-defined data-types. This situation limits the ability of user-defined data-types to participate in the coercion system used by ufuncs and other times when automatic coercion takes place in NumPy. This can be changed by registering data-types as safely castable from a particular data-type object. The function [`PyArray_RegisterCanCast`](../reference/c-api/array#c.PyArray_RegisterCanCast "PyArray_RegisterCanCast") (from_descr, totype_number, scalarkind) should be used to specify that the data-type object from_descr can be cast to the data-type with type number totype_number. If you are not trying to alter scalar coercion rules, then use [`NPY_NOSCALAR`](../reference/c-api/array#c.NPY_SCALARKIND.NPY_NOSCALAR "NPY_NOSCALAR") for the scalarkind argument.

If you want to allow your new data-type to also be able to share in the scalar coercion rules, then you need to specify the scalarkind function in the data-type object’s “.f” member to return the kind of scalar the new data-type should be seen as (the value of the scalar is available to that function). Then, you can register data-types that can be cast to separately for each scalar kind that may be returned from your user-defined data-type.
If you don’t register scalar coercion handling, then all of your user-defined data-types will be seen as [`NPY_NOSCALAR`](../reference/c-api/array#c.NPY_SCALARKIND.NPY_NOSCALAR "NPY_NOSCALAR").

### Registering a ufunc loop

You may also want to register low-level ufunc loops for your data-type so that an ndarray of your data-type can have math applied to it seamlessly. Registering a new loop with exactly the same arg_types signature silently replaces any previously registered loops for that data-type.

Before you can register a 1-d loop for a ufunc, the ufunc must already have been created. Then you call [`PyUFunc_RegisterLoopForType`](../reference/c-api/ufunc#c.PyUFunc_RegisterLoopForType "PyUFunc_RegisterLoopForType") (…) with the information needed for the loop. The return value of this function is `0` if the process was successful and `-1` with an error condition set if it was not successful.

## Subtyping the ndarray in C

One of the lesser-used features that has been lurking in Python since 2.2 is the ability to sub-class types in C. This facility is one of the important reasons for basing NumPy on the Numeric code-base, which was already in C. A sub-type in C allows much more flexibility with regards to memory management. Sub-typing in C is not difficult even if you have only a rudimentary understanding of how to create new types for Python. While it is easiest to sub-type from a single parent type, sub-typing from multiple parent types is also possible. Multiple inheritance in C is generally less useful than it is in Python because a restriction on Python sub-types is that they have a binary compatible memory layout. Perhaps for this reason, it is somewhat easier to sub-type from a single parent type.
All C-structures corresponding to Python objects must begin with [`PyObject_HEAD`](https://docs.python.org/3/c-api/structures.html#c.PyObject_HEAD "\(in Python v3.13\)") (or [`PyObject_VAR_HEAD`](https://docs.python.org/3/c-api/structures.html#c.PyObject_VAR_HEAD "\(in Python v3.13\)")). In the same way, any sub-type must have a C-structure that begins with exactly the same memory layout as the parent type (or all of the parent types in the case of multiple-inheritance). The reason for this is that Python may attempt to access a member of the sub-type structure as if it had the parent structure (_i.e._ it will cast a given pointer to a pointer to the parent structure and then dereference one of its members). If the memory layouts are not compatible, then this attempt will cause unpredictable behavior (eventually leading to a memory violation and program crash).

One of the elements in [`PyObject_HEAD`](https://docs.python.org/3/c-api/structures.html#c.PyObject_HEAD "\(in Python v3.13\)") is a pointer to a type-object structure. A new Python type is created by creating a new type-object structure and populating it with functions and pointers to describe the desired behavior of the type. Typically, a new C-structure is also created to contain the instance-specific information needed for each object of the type. For example, [`&PyArray_Type`](../reference/c-api/types-and-structures#c.PyArray_Type "PyArray_Type") is a pointer to the type-object table for the ndarray while a [PyArrayObject](../reference/c-api/types-and-structures#c.PyArrayObject "PyArrayObject")* variable is a pointer to a particular instance of an ndarray (one of the members of the ndarray structure is, in turn, a pointer to the type-object table [`&PyArray_Type`](../reference/c-api/types-and-structures#c.PyArray_Type "PyArray_Type")). Finally [`PyType_Ready`](https://docs.python.org/3/c-api/type.html#c.PyType_Ready "\(in Python v3.13\)") () must be called for every new Python type.
### Creating sub-types

To create a sub-type, a similar procedure must be followed except only behaviors that are different require new entries in the type-object structure. All other entries can be NULL and will be filled in by [`PyType_Ready`](https://docs.python.org/3/c-api/type.html#c.PyType_Ready "\(in Python v3.13\)") with appropriate functions from the parent type(s). In particular, to create a sub-type in C follow these steps:

1. If needed create a new C-structure to handle each instance of your type. A typical C-structure would be:

        typedef struct {
            PyArrayObject base;
            /* new things here */
        } NewArrayObject;

   Notice that the full PyArrayObject is used as the first entry in order to ensure that the binary layout of instances of the new type is identical to the PyArrayObject.

2. Fill in a new Python type-object structure with pointers to new functions that will over-ride the default behavior while leaving any function that should remain the same unfilled (or NULL). The tp_name element should be different.

3. Fill in the tp_base member of the new type-object structure with a pointer to the (main) parent type object. For multiple-inheritance, also fill in the tp_bases member with a tuple containing all of the parent objects in the order they should be used to define inheritance. Remember, all parent-types must have the same C-structure for multiple inheritance to work properly.

4. Call [`PyType_Ready`](https://docs.python.org/3/c-api/type.html#c.PyType_Ready "\(in Python v3.13\)") (). If this function returns a negative number, a failure occurred and the type is not initialized. Otherwise, the type is ready to be used. It is generally important to place a reference to the new type into the module dictionary so it can be accessed from Python.

More information on creating sub-types in C can be learned by reading PEP 253.
### Specific features of ndarray sub-typing

Some special methods and attributes are used by arrays in order to facilitate the interoperation of sub-types with the base ndarray type.

#### The __array_finalize__ method

ndarray.__array_finalize__

Several array-creation functions of the ndarray allow specification of a particular sub-type to be created. This allows sub-types to be handled seamlessly in many routines. When a sub-type is created in such a fashion, however, neither the __new__ method nor the __init__ method gets called. Instead, the sub-type is allocated and the appropriate instance-structure members are filled in. Finally, the [`__array_finalize__`](../reference/arrays.classes#numpy.class.__array_finalize__ "numpy.class.__array_finalize__") attribute is looked-up in the object dictionary. If it is present and not None, then it can be either a [`PyCapsule`](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule "\(in Python v3.13\)") containing a pointer to a [`PyArray_FinalizeFunc`](../reference/c-api/array#c.PyArray_FinalizeFunc "PyArray_FinalizeFunc") or it can be a method taking a single argument (which could be None).

If the [`__array_finalize__`](../reference/arrays.classes#numpy.class.__array_finalize__ "numpy.class.__array_finalize__") attribute is a [`PyCapsule`](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule "\(in Python v3.13\)"), then the pointer must be a pointer to a function with the signature:

    (int) (PyArrayObject *, PyObject *)

The first argument is the newly created sub-type. The second argument (if not NULL) is the “parent” array (if the array was created using slicing or some other operation where a clearly-distinguishable parent is present). This routine can do anything it wants to. It should return -1 on error and 0 otherwise.
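The Python-method form of `__array_finalize__` is the one most pure-Python subclasses use to propagate extra attributes through slicing and viewing. A minimal sketch (the `info` attribute is an arbitrary example name):

```python
import numpy as np

class InfoArray(np.ndarray):
    def __new__(cls, input_array, info=None):
        obj = np.asarray(input_array).view(cls)
        obj.info = info
        return obj

    def __array_finalize__(self, obj):
        # obj is None for explicit construction; otherwise it is the
        # "parent" array (e.g. the array that was sliced or viewed)
        if obj is None:
            return
        self.info = getattr(obj, 'info', None)

a = InfoArray([1, 2, 3], info='meters')
b = a[1:]                   # slicing calls neither __new__ nor __init__ ...
assert b.info == 'meters'   # ... yet __array_finalize__ propagated .info
```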
If the [`__array_finalize__`](../reference/arrays.classes#numpy.class.__array_finalize__ "numpy.class.__array_finalize__") attribute is neither None nor a [`PyCapsule`](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule "\(in Python v3.13\)"), then it must be a Python method that takes the parent array as an argument (which could be None if there is no parent) and returns nothing. Errors in this method will be caught and handled.

#### The __array_priority__ attribute

ndarray.__array_priority__

This attribute allows simple but flexible determination of which sub-type should be considered "primary" when an operation involving two or more sub-types arises. In operations where different sub-types are being used, the sub-type with the largest [`__array_priority__`](../reference/arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") attribute will determine the sub-type of the output(s). If two sub-types have the same [`__array_priority__`](../reference/arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__"), then the sub-type of the first argument determines the output. The default [`__array_priority__`](../reference/arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") attribute is 0.0 for the base ndarray type and 1.0 for a sub-type. This attribute can also be defined by objects that are not sub-types of the ndarray and can be used to determine which [`__array_wrap__`](../reference/arrays.classes#numpy.class.__array_wrap__ "numpy.class.__array_wrap__") method should be called for the return output.

#### The __array_wrap__ method

ndarray.__array_wrap__

Any class or type can define this method, which should take an ndarray argument and return an instance of the type. It can be seen as the opposite of the [`__array__`](../reference/arrays.classes#numpy.class.__array__ "numpy.class.__array__") method.
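From Python, the effect of `__array_priority__` can be seen with two minimal subclasses (the class names here are illustrative, not part of NumPy):

```python
import numpy as np

class LowPriority(np.ndarray):
    __array_priority__ = 1.0

class HighPriority(np.ndarray):
    __array_priority__ = 10.0

lo = np.arange(3.0).view(LowPriority)
hi = np.arange(3.0).view(HighPriority)

# A sub-type (default priority 1.0) wins over the base ndarray (0.0),
# and between two different sub-types the larger __array_priority__
# determines the output type, regardless of argument order.
vs_base = type(np.arange(3.0) + lo)   # LowPriority
mixed = type(lo + hi)                 # HighPriority either way
```

This is the same mechanism a ufunc uses when selecting whose `__array_wrap__` to call for the output.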
This method is used by the ufuncs (and other NumPy functions) to allow other objects to pass through. It can also be used to write a decorator that converts a function that works only with ndarrays into one that works with any type that has [`__array__`](../reference/arrays.classes#numpy.class.__array__ "numpy.class.__array__") and [`__array_wrap__`](../reference/arrays.classes#numpy.class.__array_wrap__ "numpy.class.__array_wrap__") methods.

# How to extend NumPy

## Writing an extension module

While the ndarray object is designed to allow rapid computation in Python, it is also designed to be general-purpose and satisfy a wide variety of computational needs. As a result, if absolute speed is essential, there is no replacement for a well-crafted, compiled loop specific to your application and hardware. This is one of the reasons that NumPy includes f2py, so that an easy-to-use mechanism for linking (simple) C/C++ and (arbitrary) Fortran code directly into Python is available. You are encouraged to use and improve this mechanism. The purpose of this section is not to document this tool but to document the more basic steps to writing an extension module that this tool depends on.

When an extension module is written, compiled, and installed to somewhere in the Python path (sys.path), the code can then be imported into Python as if it were a standard Python file. It will contain objects and methods that have been defined and compiled in C code. The basic steps for doing this in Python are well-documented and you can find more information in the documentation for Python itself, available online at [www.python.org](https://www.python.org).

In addition to the Python C-API, there is a full and rich C-API for NumPy allowing sophisticated manipulations on a C level. However, for most applications, only a few API calls will typically be used.
For example, if you just need to extract a pointer to memory along with some shape information to pass to another calculation routine, then you will use very different calls than if you are trying to create a new array-like type or add a new data type for ndarrays. This chapter documents the API calls and macros that are most commonly used.

## Required subroutine

There is exactly one function that must be defined in your C-code in order for Python to use it as an extension module. The function must be called init{name}, where {name} is the name of the module from Python. This function must be declared so that it is visible to code outside of the routine. Besides adding the methods and constants you desire, this subroutine must also contain calls like `import_array()` and/or `import_ufunc()` depending on which C-API is needed. Forgetting to place these commands will show itself as an ugly segmentation fault (crash) as soon as any C-API subroutine is actually called. It is actually possible to have multiple init{name} functions in a single file, in which case multiple modules will be defined by that file. However, there are some tricks to get that to work correctly and it is not covered here. A minimal `init{name}` method looks like:

```c
PyMODINIT_FUNC
init{name}(void)
{
    (void)Py_InitModule({name}, mymethods);
    import_array();
}
```

The mymethods must be an array (usually statically declared) of PyMethodDef structures which contain method names, actual C-functions, a variable indicating whether the method uses keyword arguments or not, and docstrings. These are explained in the next section. If you want to add constants to the module, then you store the returned value from Py_InitModule, which is a module object. The most general way to add items to the module is to get the module dictionary using PyModule_GetDict(module). With the module dictionary, you can add whatever you like to the module manually.
An easier way to add objects to the module is to use one of three additional Python C-API calls that do not require a separate extraction of the module dictionary. These are documented in the Python documentation, but repeated here for convenience:

```c
int PyModule_AddObject(PyObject *module, char *name, PyObject *value)
int PyModule_AddIntConstant(PyObject *module, char *name, long value)
int PyModule_AddStringConstant(PyObject *module, char *name, char *value)
```

All three of these functions require the _module_ object (the return value of Py_InitModule). The _name_ is a string that labels the value in the module. Depending on which function is called, the _value_ argument is either a general object (`PyModule_AddObject` steals a reference to it), an integer constant, or a string constant.

## Defining functions

The second argument passed in to the Py_InitModule function is a structure that makes it easy to define functions in the module. In the example given above, the mymethods structure would have been defined earlier in the file (usually right before the init{name} subroutine) as:

```c
static PyMethodDef mymethods[] = {
    {nokeywordfunc, nokeyword_cfunc,
     METH_VARARGS,
     Doc string},
    {keywordfunc, keyword_cfunc,
     METH_VARARGS | METH_KEYWORDS,
     Doc string},
    {NULL, NULL, 0, NULL}   /* Sentinel */
};
```

Each entry in the mymethods array is a [`PyMethodDef`](https://docs.python.org/3/c-api/structures.html#c.PyMethodDef "\(in Python v3.13\)") structure containing 1) the Python name, 2) the C-function that implements the function, 3) flags indicating whether or not keywords are accepted for this function, and 4) the docstring for the function.
Any number of functions may be defined for a single module by adding more entries to this table. The last entry must be all NULL as shown to act as a sentinel. Python looks for this entry to know that all of the functions for the module have been defined.

The last thing that must be done to finish the extension module is to actually write the code that performs the desired functions. There are two kinds of functions: those that don't accept keyword arguments, and those that do.

### Functions without keyword arguments

Functions that don't accept keyword arguments should be written as:

```c
static PyObject*
nokeyword_cfunc (PyObject *dummy, PyObject *args)
{
    /* convert Python arguments */
    /* do function */
    /* return something */
}
```

The dummy argument is not used in this context and can be safely ignored. The _args_ argument contains all of the arguments passed in to the function as a tuple. You can do anything you want at this point, but usually the easiest way to manage the input arguments is to call [`PyArg_ParseTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTuple "\(in Python v3.13\)")(args, format_string, addresses_to_C_variables…) or [`PyArg_UnpackTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_UnpackTuple "\(in Python v3.13\)")(tuple, "name", min, max, …). A good description of how to use the first function is contained in the Python C-API reference manual under section 5.5 (Parsing arguments and building values). You should pay particular attention to the "O&" format which uses converter functions to go between the Python object and the C object. All of the other format functions can be (mostly) thought of as special cases of this general rule. There are several converter functions defined in the NumPy C-API that may be of use. In particular, the [`PyArray_DescrConverter`](../reference/c-api/array#c.PyArray_DescrConverter "PyArray_DescrConverter") function is very useful to support arbitrary data-type specification.
This function transforms any valid data-type Python object into a [PyArray_Descr](../reference/c-api/types-and-structures#c.PyArray_Descr "PyArray_Descr")* object. Remember to pass in the address of the C-variables that should be filled in. There are lots of examples of how to use [`PyArg_ParseTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTuple "\(in Python v3.13\)") throughout the NumPy source code. The standard usage is like this:

```c
PyObject *input;
PyArray_Descr *dtype;

if (!PyArg_ParseTuple(args, "OO&", &input,
                      PyArray_DescrConverter, &dtype)) {
    return NULL;
}
```

It is important to keep in mind that you get a _borrowed_ reference to the object when using the "O" format string. However, the converter functions usually require some form of memory handling. In this example, if the conversion is successful, _dtype_ will hold a new reference to a [PyArray_Descr](../reference/c-api/types-and-structures#c.PyArray_Descr "PyArray_Descr")* object, while _input_ will hold a borrowed reference. Therefore, if this conversion were mixed with another conversion (say to an integer) and the data-type conversion was successful but the integer conversion failed, then you would need to release the reference count to the data-type object before returning. A typical way to do this is to set _dtype_ to `NULL` before calling [`PyArg_ParseTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTuple "\(in Python v3.13\)") and then use [`Py_XDECREF`](https://docs.python.org/3/c-api/refcounting.html#c.Py_XDECREF "\(in Python v3.13\)") on _dtype_ before returning.

After the input arguments are processed, the code that actually does the work is written (likely calling other functions as needed). The final step of the C-function is to return something. If an error is encountered then `NULL` should be returned (making sure an error has actually been set).
If nothing should be returned, then increment [`Py_None`](https://docs.python.org/3/c-api/none.html#c.Py_None "\(in Python v3.13\)") and return it. If a single object should be returned, then it is returned (ensuring that you own a reference to it first). If multiple objects should be returned, then you need to return a tuple. The [`Py_BuildValue`](https://docs.python.org/3/c-api/arg.html#c.Py_BuildValue "\(in Python v3.13\)")(format_string, c_variables…) function makes it easy to build tuples of Python objects from C variables. Pay special attention to the difference between 'N' and 'O' in the format string or you can easily create memory leaks. The 'O' format string increments the reference count of the [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")* C-variable it corresponds to, while the 'N' format string steals a reference to the corresponding [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "\(in Python v3.13\)")* C-variable. You should use 'N' if you have already created a reference for the object and just want to give that reference to the tuple. You should use 'O' if you only have a borrowed reference to an object and need to create one to provide for the tuple.

### Functions with keyword arguments

These functions are very similar to functions without keyword arguments. The only difference is that the function signature is:

```c
static PyObject*
keyword_cfunc (PyObject *dummy, PyObject *args, PyObject *kwds)
{
    ...
}
```

The kwds argument holds a Python dictionary whose keys are the names of the keyword arguments and whose values are the corresponding keyword-argument values. This dictionary can be processed however you see fit.
The easiest way to handle it, however, is to replace the [`PyArg_ParseTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTuple "\(in Python v3.13\)")(args, format_string, addresses…) function with a call to [`PyArg_ParseTupleAndKeywords`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTupleAndKeywords "\(in Python v3.13\)")(args, kwds, format_string, char *kwlist[], addresses…). The kwlist parameter to this function is a `NULL`-terminated array of strings providing the expected keyword arguments. There should be one string for each entry in the format_string. Using this function will raise a TypeError if invalid keyword arguments are passed in. For more help on this function please see section 1.8 (Keyword Parameters for Extension Functions) of the Extending and Embedding tutorial in the Python documentation.

### Reference counting

The biggest difficulty when writing extension modules is reference counting. It is an important reason for the popularity of f2py, weave, Cython, ctypes, etc. If you mishandle reference counts you can get problems ranging from memory leaks to segmentation faults. The only strategy I know of to handle reference counts correctly is blood, sweat, and tears. First, you force it into your head that every Python variable has a reference count. Then, you understand exactly what each function does to the reference count of your objects, so that you can properly use DECREF and INCREF when you need them. Reference counting can really test the amount of patience and diligence you have towards your programming craft. Despite the grim depiction, most cases of reference counting are quite straightforward, with the most common difficulty being failure to DECREF objects before exiting early from a routine due to some error.
In second place is the common error of not owning the reference on an object that is passed to a function or macro that is going to steal the reference (_e.g._ [`PyTuple_SET_ITEM`](https://docs.python.org/3/c-api/tuple.html#c.PyTuple_SET_ITEM "\(in Python v3.13\)"), and most functions that take [`PyArray_Descr`](../reference/c-api/types-and-structures#c.PyArray_Descr "PyArray_Descr") objects). Typically you get a new reference to a variable when it is created or is the return value of some function (there are some prominent exceptions, however, such as getting an item out of a tuple or a dictionary). When you own the reference, you are responsible for making sure that [`Py_DECREF`](https://docs.python.org/3/c-api/refcounting.html#c.Py_DECREF "\(in Python v3.13\)")(var) is called when the variable is no longer necessary (and no other function has "stolen" its reference). Also, if you are passing a Python object to a function that will "steal" the reference, then you need to make sure you own it (or use [`Py_INCREF`](https://docs.python.org/3/c-api/refcounting.html#c.Py_INCREF "\(in Python v3.13\)") to get your own reference). You will also encounter the notion of borrowing a reference. A function that borrows a reference does not alter the reference count of the object and does not expect to "hold on" to the reference. It's just going to use the object temporarily. When you use [`PyArg_ParseTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTuple "\(in Python v3.13\)") or [`PyArg_UnpackTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_UnpackTuple "\(in Python v3.13\)") you receive a borrowed reference to the objects in the tuple and should not alter their reference count inside your function. With practice, you can learn to get reference counting right, but it can be frustrating at first. One common source of reference-count errors is the [`Py_BuildValue`](https://docs.python.org/3/c-api/arg.html#c.Py_BuildValue "\(in Python v3.13\)") function.
Pay careful attention to the difference between the 'N' format character and the 'O' format character. If you create a new object in your subroutine (such as an output array), and you are passing it back in a tuple of return values, then you should most likely use the 'N' format character in [`Py_BuildValue`](https://docs.python.org/3/c-api/arg.html#c.Py_BuildValue "\(in Python v3.13\)"). The 'O' character will increase the reference count by one. This will leave the caller with two reference counts for a brand-new array. When the variable is deleted and the reference count decremented by one, there will still be that extra reference count, and the array will never be deallocated. You will have a reference-counting-induced memory leak. Using the 'N' character will avoid this situation, as it will return to the caller an object (inside the tuple) with a single reference count.

## Dealing with array objects

Most extension modules for NumPy will need to access the memory for an ndarray object (or one of its sub-classes). The easiest way to do this doesn't require you to know much about the internals of NumPy. The method is to:

1. Ensure you are dealing with a well-behaved array (aligned, in machine byte-order, and single-segment) of the correct type and number of dimensions, either:
   1. by converting it from some Python object using [`PyArray_FromAny`](../reference/c-api/array#c.PyArray_FromAny "PyArray_FromAny") or a macro built on it, or
   2. by constructing a new ndarray of your desired shape and type using [`PyArray_NewFromDescr`](../reference/c-api/array#c.PyArray_NewFromDescr "PyArray_NewFromDescr") or a simpler macro or function based on it.
2. Get the shape of the array and a pointer to its actual data.
3. Pass the data and shape information on to a subroutine or other section of code that actually performs the computation.
4.
If you are writing the algorithm, then I recommend that you use the stride information contained in the array to access its elements (the [`PyArray_GetPtr`](../reference/c-api/array#c.PyArray_GetPtr "PyArray_GetPtr") macros make this painless). Then, you can relax your requirements so as not to force a single-segment array and the data-copying that might result.

Each of these sub-topics is covered in the following sub-sections.

### Converting an arbitrary sequence object

The main routine for obtaining an array from any Python object that can be converted to an array is [`PyArray_FromAny`](../reference/c-api/array#c.PyArray_FromAny "PyArray_FromAny"). This function is very flexible with many input arguments. Several macros make it easier to use the basic function. [`PyArray_FROM_OTF`](../reference/c-api/array#c.PyArray_FROM_OTF "PyArray_FROM_OTF") is arguably the most useful of these macros for the most common uses. It allows you to convert an arbitrary Python object to an array of a specific builtin data-type (_e.g._ float), while specifying a particular set of requirements (_e.g._ contiguous, aligned, and writeable). The syntax is:

```c
PyObject *PyArray_FROM_OTF(PyObject *obj, int typenum, int requirements)
```

Return an ndarray from any Python object, _obj_, that can be converted to an array. The number of dimensions in the returned array is determined by the object. The desired data-type of the returned array is provided in _typenum_, which should be one of the enumerated types. The _requirements_ for the returned array can be any combination of standard array flags. Each of these arguments is explained in more detail below. You receive a new reference to the array on success. On failure, `NULL` is returned and an exception is set.

_obj_

The object can be any Python object convertible to an ndarray. If the object is already (a subclass of) the ndarray that satisfies the requirements then a new reference is returned.
Otherwise, a new array is constructed. The contents of _obj_ are copied to the new array unless the array interface is used so that data does not have to be copied. Objects that can be converted to an array include: 1) any nested sequence object, 2) any object exposing the array interface, 3) any object with an [`__array__`](../reference/arrays.classes#numpy.class.__array__ "numpy.class.__array__") method (which should return an ndarray), and 4) any scalar object (becomes a zero-dimensional array). Sub-classes of the ndarray that otherwise fit the requirements will be passed through. If you want to ensure a base-class ndarray, then use [`NPY_ARRAY_ENSUREARRAY`](../reference/c-api/array#c.NPY_ARRAY_ENSUREARRAY "NPY_ARRAY_ENSUREARRAY") in the requirements flag. A copy is made only if necessary. If you want to guarantee a copy, then pass in [`NPY_ARRAY_ENSURECOPY`](../reference/c-api/array#c.NPY_ARRAY_ENSURECOPY "NPY_ARRAY_ENSURECOPY") to the requirements flag.

_typenum_

One of the enumerated types or [`NPY_NOTYPE`](../reference/c-api/dtype#c.NPY_NOTYPE "NPY_NOTYPE") if the data-type should be determined from the object itself.
The C-based names can be used: [`NPY_BOOL`](../reference/c-api/dtype#c.NPY_TYPES.NPY_BOOL "NPY_BOOL"), [`NPY_BYTE`](../reference/c-api/dtype#c.NPY_TYPES.NPY_BYTE "NPY_BYTE"), [`NPY_UBYTE`](../reference/c-api/dtype#c.NPY_TYPES.NPY_UBYTE "NPY_UBYTE"), [`NPY_SHORT`](../reference/c-api/dtype#c.NPY_TYPES.NPY_SHORT "NPY_SHORT"), [`NPY_USHORT`](../reference/c-api/dtype#c.NPY_TYPES.NPY_USHORT "NPY_USHORT"), [`NPY_INT`](../reference/c-api/dtype#c.NPY_TYPES.NPY_INT "NPY_INT"), [`NPY_UINT`](../reference/c-api/dtype#c.NPY_TYPES.NPY_UINT "NPY_UINT"), [`NPY_LONG`](../reference/c-api/dtype#c.NPY_TYPES.NPY_LONG "NPY_LONG"), [`NPY_ULONG`](../reference/c-api/dtype#c.NPY_TYPES.NPY_ULONG "NPY_ULONG"), [`NPY_LONGLONG`](../reference/c-api/dtype#c.NPY_TYPES.NPY_LONGLONG "NPY_LONGLONG"), [`NPY_ULONGLONG`](../reference/c-api/dtype#c.NPY_TYPES.NPY_ULONGLONG "NPY_ULONGLONG"), [`NPY_DOUBLE`](../reference/c-api/dtype#c.NPY_TYPES.NPY_DOUBLE "NPY_DOUBLE"), [`NPY_LONGDOUBLE`](../reference/c-api/dtype#c.NPY_TYPES.NPY_LONGDOUBLE "NPY_LONGDOUBLE"), [`NPY_CFLOAT`](../reference/c-api/dtype#c.NPY_TYPES.NPY_CFLOAT "NPY_CFLOAT"), [`NPY_CDOUBLE`](../reference/c-api/dtype#c.NPY_TYPES.NPY_CDOUBLE "NPY_CDOUBLE"), [`NPY_CLONGDOUBLE`](../reference/c-api/dtype#c.NPY_TYPES.NPY_CLONGDOUBLE "NPY_CLONGDOUBLE"), [`NPY_OBJECT`](../reference/c-api/dtype#c.NPY_TYPES.NPY_OBJECT "NPY_OBJECT"). Alternatively, the bit-width names can be used as supported on the platform. 
For example: [`NPY_INT8`](../reference/c-api/dtype#c.NPY_TYPES.NPY_INT8 "NPY_INT8"), [`NPY_INT16`](../reference/c-api/dtype#c.NPY_TYPES.NPY_INT16 "NPY_INT16"), [`NPY_INT32`](../reference/c-api/dtype#c.NPY_TYPES.NPY_INT32 "NPY_INT32"), [`NPY_INT64`](../reference/c-api/dtype#c.NPY_TYPES.NPY_INT64 "NPY_INT64"), [`NPY_UINT8`](../reference/c-api/dtype#c.NPY_TYPES.NPY_UINT8 "NPY_UINT8"), [`NPY_UINT16`](../reference/c-api/dtype#c.NPY_TYPES.NPY_UINT16 "NPY_UINT16"), [`NPY_UINT32`](../reference/c-api/dtype#c.NPY_TYPES.NPY_UINT32 "NPY_UINT32"), [`NPY_UINT64`](../reference/c-api/dtype#c.NPY_TYPES.NPY_UINT64 "NPY_UINT64"), [`NPY_FLOAT32`](../reference/c-api/dtype#c.NPY_TYPES.NPY_FLOAT32 "NPY_FLOAT32"), [`NPY_FLOAT64`](../reference/c-api/dtype#c.NPY_TYPES.NPY_FLOAT64 "NPY_FLOAT64"), [`NPY_COMPLEX64`](../reference/c-api/dtype#c.NPY_TYPES.NPY_COMPLEX64 "NPY_COMPLEX64"), [`NPY_COMPLEX128`](../reference/c-api/dtype#c.NPY_TYPES.NPY_COMPLEX128 "NPY_COMPLEX128").

The object will be converted to the desired type only if it can be done without losing precision. Otherwise `NULL` will be returned and an error raised. Use [`NPY_ARRAY_FORCECAST`](../reference/c-api/array#c.NPY_ARRAY_FORCECAST "NPY_ARRAY_FORCECAST") in the requirements flag to override this behavior.

_requirements_

The memory model for an ndarray admits arbitrary strides in each dimension to advance to the next element of the array. Often, however, you need to interface with code that expects a C-contiguous or a Fortran-contiguous memory layout. In addition, an ndarray can be misaligned (the address of an element is not at an integral multiple of the size of the element), which can cause your program to crash (or at least work more slowly) if you try to dereference a pointer into the array data. Both of these problems can be solved by converting the Python object into an array that is more "well-behaved" for your specific usage. The requirements flag allows specification of what kind of array is acceptable.
If the object passed in does not satisfy these requirements then a copy is made so that the returned object will satisfy them. The requirements flag allows specification of the desired properties of the returned array object. All of the flags are explained in the detailed API chapter. The flags most commonly needed are [`NPY_ARRAY_IN_ARRAY`](../reference/c-api/array#c.NPY_ARRAY_IN_ARRAY "NPY_ARRAY_IN_ARRAY"), [`NPY_ARRAY_OUT_ARRAY`](../reference/c-api/array#c.NPY_ARRAY_OUT_ARRAY "NPY_ARRAY_OUT_ARRAY"), and [`NPY_ARRAY_INOUT_ARRAY`](../reference/c-api/array#c.NPY_ARRAY_INOUT_ARRAY "NPY_ARRAY_INOUT_ARRAY"):

[`NPY_ARRAY_IN_ARRAY`](../reference/c-api/array#c.NPY_ARRAY_IN_ARRAY "NPY_ARRAY_IN_ARRAY")

This flag is useful for arrays that must be in C-contiguous order and aligned. These kinds of arrays are usually input arrays for some algorithm.

[`NPY_ARRAY_OUT_ARRAY`](../reference/c-api/array#c.NPY_ARRAY_OUT_ARRAY "NPY_ARRAY_OUT_ARRAY")

This flag is useful to specify an array that is in C-contiguous order, is aligned, and can be written to as well. Such an array is usually returned as output (although normally such output arrays are created from scratch).

[`NPY_ARRAY_INOUT_ARRAY`](../reference/c-api/array#c.NPY_ARRAY_INOUT_ARRAY "NPY_ARRAY_INOUT_ARRAY")

This flag is useful to specify an array that will be used for both input and output. [`PyArray_ResolveWritebackIfCopy`](../reference/c-api/array#c.PyArray_ResolveWritebackIfCopy "PyArray_ResolveWritebackIfCopy") must be called before [`Py_DECREF`](https://docs.python.org/3/c-api/refcounting.html#c.Py_DECREF "\(in Python v3.13\)") at the end of the interface routine to write back the temporary data into the original array passed in.
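For comparison, `np.require` exposes roughly the same idea at the Python level. This sketch shows an approximate analogue of the `NPY_ARRAY_IN_ARRAY` requirements (C-contiguous and aligned), copying only when the input does not already qualify:

```python
import numpy as np

# A Fortran-ordered array is not C-contiguous for shape (2, 3)
f = np.asfortranarray(np.arange(6.0).reshape(2, 3))

# Rough Python-level analogue of
# PyArray_FROM_OTF(obj, NPY_DOUBLE, NPY_ARRAY_IN_ARRAY):
a = np.require(f, dtype=np.float64, requirements=["C", "A"])

# The result is well-behaved: C-contiguous, aligned, float64.
# A copy was made here only because f was Fortran-ordered;
# adding "W" to the requirements list also demands writeability,
# much like NPY_ARRAY_OUT_ARRAY.
```

This mirrors the "copy only if necessary" behaviour described above: passing an array that already satisfies the requirements returns it unchanged.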
Use of the [`NPY_ARRAY_WRITEBACKIFCOPY`](../reference/c-api/array#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY") flag requires that the input object is already an array (because other objects cannot be automatically updated in this fashion). If an error occurs, use [`PyArray_DiscardWritebackIfCopy`](../reference/c-api/array#c.PyArray_DiscardWritebackIfCopy "PyArray_DiscardWritebackIfCopy")(obj) on an array with these flags set. This will set the underlying base array writable without causing the contents to be copied back into the original array.

Other useful flags that can be OR'd as additional requirements are:

[`NPY_ARRAY_FORCECAST`](../reference/c-api/array#c.NPY_ARRAY_FORCECAST "NPY_ARRAY_FORCECAST")

Cast to the desired type, even if it can't be done without losing information.

[`NPY_ARRAY_ENSURECOPY`](../reference/c-api/array#c.NPY_ARRAY_ENSURECOPY "NPY_ARRAY_ENSURECOPY")

Make sure the resulting array is a copy of the original.

[`NPY_ARRAY_ENSUREARRAY`](../reference/c-api/array#c.NPY_ARRAY_ENSUREARRAY "NPY_ARRAY_ENSUREARRAY")

Make sure the resulting object is an actual ndarray and not a sub-class.

Note: Whether or not an array is byte-swapped is determined by the data-type of the array. Native byte-order arrays are always requested by [`PyArray_FROM_OTF`](../reference/c-api/array#c.PyArray_FROM_OTF "PyArray_FROM_OTF"), so there is no need for a [`NPY_ARRAY_NOTSWAPPED`](../reference/c-api/array#c.NPY_ARRAY_NOTSWAPPED "NPY_ARRAY_NOTSWAPPED") flag in the requirements argument. There is also no way to get a byte-swapped array from this routine.

### Creating a brand-new ndarray

Quite often, new arrays must be created from within extension-module code. Perhaps an output array is needed and you don't want the caller to have to supply it. Perhaps only a temporary array is needed to hold an intermediate calculation. Whatever the need, there are simple ways to get an ndarray object of whatever data-type is needed.
The most general function for doing this is [`PyArray_NewFromDescr`](../reference/c-api/array#c.PyArray_NewFromDescr "PyArray_NewFromDescr"). All array creation functions go through this heavily re-used code. Because of its flexibility, it can be somewhat confusing to use. As a result, simpler forms exist that are easier to use. These forms are part of the [`PyArray_SimpleNew`](../reference/c-api/array#c.PyArray_SimpleNew "PyArray_SimpleNew") family of functions, which simplify the interface by providing default values for common use cases.

### Getting at ndarray memory and accessing elements of the ndarray

If obj is an ndarray ([PyArrayObject](../reference/c-api/types-and-structures#c.PyArrayObject "PyArrayObject")*), then the data-area of the ndarray is pointed to by the void* pointer [`PyArray_DATA`](../reference/c-api/array#c.PyArray_DATA "PyArray_DATA")(obj) or the char* pointer [`PyArray_BYTES`](../reference/c-api/array#c.PyArray_BYTES "PyArray_BYTES")(obj). Remember that (in general) this data-area may not be aligned according to the data-type, it may represent byte-swapped data, and/or it may not be writeable. If the data area is aligned and in native byte-order, then how to get at a specific element of the array is determined only by the array of npy_intp variables, [`PyArray_STRIDES`](../reference/c-api/array#c.PyArray_STRIDES "PyArray_STRIDES")(obj). In particular, this C-array of integers shows how many **bytes** must be added to the current element pointer to get to the next element in each dimension. For arrays with fewer than 4 dimensions there are `PyArray_GETPTR{k}`(obj, …) macros, where {k} is the integer 1, 2, 3, or 4, that make using the array strides easier. The arguments … represent {k} non-negative integer indices into the array. For example, suppose `E` is a 3-dimensional ndarray. A (void*) pointer to the element `E[i,j,k]` is obtained as [`PyArray_GETPTR3`](../reference/c-api/array#c.PyArray_GETPTR3 "PyArray_GETPTR3")(E, i, j, k).
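The byte-offset arithmetic that `PyArray_GETPTR3` performs can be reproduced from Python using the same strides. A small sketch for a C-contiguous array:

```python
import numpy as np

E = np.arange(24.0).reshape(2, 3, 4)  # 3-d float64, C-contiguous
i, j, k = 1, 2, 3

# PyArray_GETPTR3(E, i, j, k) computes
#   PyArray_BYTES(E) + i*strides[0] + j*strides[1] + k*strides[2]
offset = i * E.strides[0] + j * E.strides[1] + k * E.strides[2]

# Read the element at that byte offset from a copy of the data buffer
value = np.frombuffer(E.tobytes(), dtype=E.dtype, count=1,
                      offset=offset)[0]
```

For this shape and dtype the strides are `(96, 32, 8)` bytes, so the offset is 184 and `value` equals `E[1, 2, 3]`. The same arithmetic works for non-contiguous arrays at the C level, which is why stride-based access lets you avoid forcing a single-segment copy.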
As explained previously, C-style contiguous arrays and Fortran-style contiguous arrays have particular striding patterns. Two array flags ([`NPY_ARRAY_C_CONTIGUOUS`](../reference/c-api/array#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS") and [`NPY_ARRAY_F_CONTIGUOUS`](../reference/c-api/array#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS")) indicate whether the striding pattern of a particular array matches the C-style contiguous pattern, the Fortran-style contiguous pattern, or neither. Whether or not the striding pattern matches a standard C or Fortran one can be tested using [`PyArray_IS_C_CONTIGUOUS`](../reference/c-api/array#c.PyArray_IS_C_CONTIGUOUS "PyArray_IS_C_CONTIGUOUS")(obj) and [`PyArray_ISFORTRAN`](../reference/c-api/array#c.PyArray_ISFORTRAN "PyArray_ISFORTRAN")(obj), respectively. Most third-party libraries expect contiguous arrays. But, often it is not difficult to support general-purpose striding. I encourage you to use the striding information in your own code whenever possible, and reserve single-segment requirements for wrapping third-party code. Using the striding information provided with the ndarray rather than requiring a contiguous striding reduces copying that would otherwise be required.

## Example

The following example shows how you might write a wrapper that accepts two input arguments (that will be converted to an array) and an output argument (that must be an array). The function returns None and updates the output array.
Note the use of WRITEBACKIFCOPY semantics for NumPy v1.14 and above:

```c
static PyObject *
example_wrapper(PyObject *dummy, PyObject *args)
{
    PyObject *arg1=NULL, *arg2=NULL, *out=NULL;
    PyObject *arr1=NULL, *arr2=NULL, *oarr=NULL;

    if (!PyArg_ParseTuple(args, "OOO!", &arg1, &arg2,
                          &PyArray_Type, &out)) return NULL;

    arr1 = PyArray_FROM_OTF(arg1, NPY_DOUBLE, NPY_ARRAY_IN_ARRAY);
    if (arr1 == NULL) return NULL;
    arr2 = PyArray_FROM_OTF(arg2, NPY_DOUBLE, NPY_ARRAY_IN_ARRAY);
    if (arr2 == NULL) goto fail;
#if NPY_API_VERSION >= 0x0000000c
    oarr = PyArray_FROM_OTF(out, NPY_DOUBLE, NPY_ARRAY_INOUT_ARRAY2);
#else
    oarr = PyArray_FROM_OTF(out, NPY_DOUBLE, NPY_ARRAY_INOUT_ARRAY);
#endif
    if (oarr == NULL) goto fail;

    /* code that makes use of arguments */
    /* You will probably need at least
       nd = PyArray_NDIM(<..>)    -- number of dimensions
       dims = PyArray_DIMS(<..>)  -- npy_intp array of length nd
                                     showing length in each dim.
       dptr = (double *)PyArray_DATA(<..>) -- pointer to data.

       If an error occurs goto fail.
     */

    Py_DECREF(arr1);
    Py_DECREF(arr2);
#if NPY_API_VERSION >= 0x0000000c
    PyArray_ResolveWritebackIfCopy(oarr);
#endif
    Py_DECREF(oarr);
    Py_INCREF(Py_None);
    return Py_None;

 fail:
    Py_XDECREF(arr1);
    Py_XDECREF(arr2);
#if NPY_API_VERSION >= 0x0000000c
    PyArray_DiscardWritebackIfCopy(oarr);
#endif
    Py_XDECREF(oarr);
    return NULL;
}
```
# Using Python as glue

Warning

This was written in 2008 as part of the original [Guide to NumPy](https://archive.org/details/NumPyBook) book by Travis E. Oliphant and is out of date.

Many people like to say that Python is a fantastic glue language. Hopefully, this chapter will convince you that this is true. The first adopters of Python for science were typically people who used it to glue together large application codes running on supercomputers. Not only was it much nicer to code in Python than in a shell script or Perl; the ability to easily extend Python also made it relatively easy to create new classes and types specifically adapted to the problems being solved. From the interactions of these early contributors, Numeric emerged as an array-like object that could be used to pass data between these applications. As Numeric matured and developed into NumPy, people have been able to write more code directly in NumPy.
Often this code is fast enough for production use, but there are still times when there is a need to access compiled code, either to get that last bit of efficiency out of an algorithm or to make it easier to access widely available code written in C/C++ or Fortran. This chapter will review many of the tools that are available for the purpose of accessing code written in other compiled languages. There are many resources available for learning to call other compiled libraries from Python, and the purpose of this chapter is not to make you an expert. The main goal is to make you aware of some of the possibilities so that you will know what to "Google" in order to learn more.

## Calling other compiled libraries from Python

While Python is a great language and a pleasure to code in, its dynamic nature results in overhead that can cause some code (_i.e._ raw computations inside of for loops) to be up to 10-100 times slower than equivalent code written in a statically compiled language. In addition, it can cause memory usage to be larger than necessary as temporary arrays are created and destroyed during computation. For many types of computing needs, the extra slow-down and memory consumption often cannot be spared (at least for time- or memory-critical portions of your code). Therefore, one of the most common needs is to call out from Python code to a fast, machine-code routine (e.g. compiled using C/C++ or Fortran). The fact that this is relatively easy to do is a big reason why Python is such an excellent high-level language for scientific and engineering programming. There are two basic approaches to calling compiled code: writing an extension module that is then imported to Python using the import command, or calling a shared-library subroutine directly from Python using the [ctypes](https://docs.python.org/3/library/ctypes.html) module. Writing an extension module is the most common method.
Warning

Calling C code from Python can result in Python crashes if you are not careful. None of the approaches in this chapter are immune. You have to know something about the way data is handled by both NumPy and by the third-party library being used.

## Hand-generated wrappers

Extension modules were discussed in [Writing an extension module](c-info.how-to-extend#writing-an-extension). The most basic way to interface with compiled code is to write an extension module and construct a module method that calls the compiled code. For improved readability, your method should take advantage of the `PyArg_ParseTuple` call to convert between Python objects and C data-types. For standard C data-types there is probably already a built-in converter. For others you may need to write your own converter and use the `"O&"` format string, which allows you to specify a function that will be used to perform the conversion from the Python object to whatever C structures are needed. Once the conversions to the appropriate C structures and C data-types have been performed, the next step in the wrapper is to call the underlying function. This is straightforward if the underlying function is in C or C++. However, in order to call Fortran code you must be familiar with how Fortran subroutines are called from C/C++ using your compiler and platform. This can vary somewhat across platforms and compilers (which is another reason f2py makes life much simpler for interfacing Fortran code), but generally involves underscore mangling of the name and the fact that all variables are passed by reference (i.e. all arguments are pointers). The advantage of the hand-generated wrapper is that you have complete control over how the C library gets used and called, which can lead to a lean and tight interface with minimal overhead.
The disadvantage is that you have to write, debug, and maintain C code, although most of it can be adapted using the time-honored technique of "cutting-pasting-and-modifying" from other extension modules. Because the procedure of calling out to additional C code is fairly regimented, code-generation procedures have been developed to make this process easier. One of these code-generation techniques is distributed with NumPy and allows easy integration with Fortran and (simple) C code. This package, f2py, will be covered briefly in the next section.

## F2PY

F2PY allows you to automatically construct an extension module that interfaces to routines in Fortran 77/90/95 code. It has the ability to parse Fortran 77/90/95 code and automatically generate Python signatures for the subroutines it encounters, or you can guide how the subroutine interfaces with Python by constructing an interface-definition file (or modifying the f2py-produced one). See the [F2PY documentation](../f2py/index#f2py) for more information and examples. The f2py method of linking compiled code is currently the most sophisticated and integrated approach. It allows clean separation of Python from compiled code while still allowing for separate distribution of the extension module. The only drawback is that it requires the existence of a Fortran compiler in order for a user to install the code. However, with the existence of the free compilers g77, gfortran, and g95, as well as high-quality commercial compilers, this restriction is not particularly onerous. In our opinion, Fortran is still the easiest way to write fast and clear code for scientific computing. It handles complex numbers and multi-dimensional indexing in the most straightforward way. Be aware, however, that some Fortran compilers will not be able to optimize code as well as good hand-written C code.
## Cython

[Cython](http://cython.org) is a compiler for a Python dialect that adds (optional) static typing for speed, and allows mixing C or C++ code into your modules. It produces C or C++ extensions that can be compiled and imported in Python code. If you are writing an extension module that will include quite a bit of your own algorithmic code as well, then Cython is a good match. Among its features is the ability to easily and quickly work with multidimensional arrays. Notice that Cython is an extension-module generator only. Unlike f2py, it includes no automatic facility for compiling and linking the extension module (which must be done in the usual fashion). It does provide a modified distutils class called `build_ext` which lets you build an extension module from a `.pyx` source. Thus, you could write in a `setup.py` file:

```python
from Cython.Distutils import build_ext
from distutils.extension import Extension
from distutils.core import setup
import numpy

setup(name='mine', description='Nothing',
      ext_modules=[Extension('filter', ['filter.pyx'],
                             include_dirs=[numpy.get_include()])],
      cmdclass={'build_ext': build_ext})
```

Adding the NumPy include directory is, of course, only necessary if you are using NumPy arrays in the extension module (which is what we assume you are using Cython for). The distutils extensions in NumPy also include support for automatically producing the extension module and linking it from a `.pyx` file. It works so that if the user does not have Cython installed, then it looks for a file with the same file name but a `.c` extension, which it then uses instead of trying to produce the `.c` file again. If you just use Cython to compile a standard Python module, then you will get a C extension module that typically runs a bit faster than the equivalent Python module. Further speed increases can be gained by using the `cdef` keyword to statically define C variables.
Let’s look at two examples we’ve seen before to see how they might be implemented using Cython. These examples were compiled into extension modules using Cython 0.21.1.

### Complex addition in Cython

Here is part of a Cython module named `add.pyx` which implements the complex addition functions we previously implemented using f2py:

```cython
cimport cython
cimport numpy as np
import numpy as np

# We need to initialize NumPy.
np.import_array()

#@cython.boundscheck(False)
def zadd(in1, in2):
    cdef double complex[:] a = in1.ravel()
    cdef double complex[:] b = in2.ravel()

    # complex128 is the NumPy dtype matching "double complex"
    out = np.empty(a.shape[0], np.complex128)
    cdef double complex[:] c = out.ravel()

    for i in range(c.shape[0]):
        c[i].real = a[i].real + b[i].real
        c[i].imag = a[i].imag + b[i].imag
    return out
```

This module shows use of the `cimport` statement to load the definitions from the `numpy.pxd` header that ships with Cython. It looks like NumPy is imported twice; `cimport` only makes the NumPy C-API available, while the regular `import` causes a Python-style import at runtime and makes it possible to call into the familiar NumPy Python API.

The example also demonstrates Cython’s “typed memoryviews”, which are like NumPy arrays at the C level, in the sense that they are shaped and strided arrays that know their own extent (unlike a C array addressed through a bare pointer). The syntax `double complex[:]` denotes a one-dimensional array (vector) of doubles, with arbitrary strides. A contiguous array of ints would be `int[::1]`, while a matrix of floats would be `float[:, :]`.

Shown commented is the `cython.boundscheck` decorator, which turns bounds-checking for memory view accesses on or off on a per-function basis. We can use this to further speed up our code, at the expense of safety (or a manual check prior to entering the loop). Other than the view syntax, the function is immediately readable to a Python programmer. Static typing of the variable `i` is implicit.
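For reference, the result `zadd` produces can be mirrored in plain NumPy. This hypothetical `zadd_ref` helper is only a sketch of the intended semantics, not part of the module above:

```python
import numpy as np

def zadd_ref(in1, in2):
    # Mirror of the Cython zadd: flatten both inputs and add
    # elementwise as double-precision complex (complex128, the
    # NumPy dtype matching the C-level "double complex").
    a = np.asarray(in1, dtype=np.complex128).ravel()
    b = np.asarray(in2, dtype=np.complex128).ravel()
    return a + b
```

The point of the Cython version is not the arithmetic, which NumPy already does, but that the loop runs at C speed and generalizes to operations NumPy does not provide.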
Instead of the view syntax, we could also have used Cython’s special NumPy array syntax, but the view syntax is preferred.

### Image filter in Cython

The two-dimensional example we created using Fortran is just as easy to write in Cython:

```cython
cimport numpy as np
import numpy as np

np.import_array()

def filter(img):
    cdef double[:, :] a = np.asarray(img, dtype=np.double)
    out = np.zeros(img.shape, dtype=np.double)
    cdef double[:, ::1] b = out

    cdef np.npy_intp i, j

    for i in range(1, a.shape[0] - 1):
        for j in range(1, a.shape[1] - 1):
            b[i, j] = (a[i, j]
                       + .5 * (a[i-1, j] + a[i+1, j]
                               + a[i, j-1] + a[i, j+1])
                       + .25 * (a[i-1, j-1] + a[i-1, j+1]
                                + a[i+1, j-1] + a[i+1, j+1]))
    return out
```

This 2-d averaging filter runs quickly because the loop is in C and the pointer computations are done only as needed. If the code above is compiled as a module `image`, then a 2-d image, `img`, can be filtered using this code very quickly using:

```python
import image
out = image.filter(img)
```

Regarding the code, two things are of note: firstly, it is impossible to return a memory view to Python. Instead, a NumPy array `out` is first created, and then a view `b` onto this array is used for the computation. Secondly, the view `b` is typed `double[:, ::1]`. This means a 2-d array with contiguous rows, i.e., C matrix order. Specifying the order explicitly can speed up some algorithms since they can skip stride computations.

### Conclusion

Cython is the extension mechanism of choice for several scientific Python libraries, including Scipy, Pandas, SAGE, scikit-image and scikit-learn, as well as the XML processing library LXML. The language and compiler are well-maintained.

There are several disadvantages of using Cython:

1. When coding custom algorithms, and sometimes when wrapping existing C libraries, some familiarity with C is required. In particular, when using C memory management (`malloc` and friends), it’s easy to introduce memory leaks.
However, just compiling a Python module renamed to `.pyx` can already speed it up, and adding a few type declarations can give dramatic speedups in some code.

2. It is easy to lose a clean separation between Python and C, which makes re-using your C code for other non-Python-related projects more difficult.

3. The C code generated by Cython is hard to read and modify (and typically compiles with annoying but harmless warnings).

One big advantage of Cython-generated extension modules is that they are easy to distribute. In summary, Cython is a very capable tool for either gluing C code or generating an extension module quickly and should not be overlooked. It is especially useful for people that can’t or won’t write C or Fortran code.

## ctypes

[ctypes](https://docs.python.org/3/library/ctypes.html) is a Python extension module, included in the stdlib, that allows you to call an arbitrary function in a shared library directly from Python. This approach allows you to interface with C code directly from Python. This opens up an enormous number of libraries for use from Python. The drawback, however, is that coding mistakes can lead to ugly program crashes very easily (just as can happen in C) because there is little type or bounds checking done on the parameters. This is especially true when array data is passed in as a pointer to a raw memory location. The responsibility is then on you to ensure that the subroutine will not access memory outside the actual array area. But, if you don’t mind living a little dangerously, ctypes can be an effective tool for quickly taking advantage of a large shared library (or writing extended functionality in your own shared library). Because the ctypes approach exposes a raw interface to the compiled code, it is not always tolerant of user mistakes. Robust use of the ctypes module typically involves an additional layer of Python code in order to check the data types and array bounds of objects passed to the underlying subroutine.
This additional layer of checking (not to mention the conversion from ctypes objects to C data-types that ctypes itself performs) will make the interface slower than a hand-written extension-module interface. However, this overhead should be negligible if the C routine being called is doing any significant amount of work. If you are a great Python programmer with weak C skills, ctypes is an easy way to write a useful interface to a (shared) library of compiled code.

To use ctypes you must

1. Have a shared library.
2. Load the shared library.
3. Convert the Python objects to ctypes-understood arguments.
4. Call the function from the library with the ctypes arguments.

### Having a shared library

There are several requirements for a shared library that can be used with ctypes that are platform specific. This guide assumes you have some familiarity with making a shared library on your system (or simply have a shared library available to you). Items to remember are:

* A shared library must be compiled in a special way (_e.g._ using the `-shared` flag with gcc).
* On some platforms (_e.g._ Windows), a shared library requires a .def file that specifies the functions to be exported. For example, a mylib.def file might contain:

```
LIBRARY mylib.dll
EXPORTS
cool_function1
cool_function2
```

Alternatively, you may be able to use the storage-class specifier `__declspec(dllexport)` in the C definition of the function to avoid the need for this `.def` file.

There is no standard way in Python distutils to create a standard shared library (an extension module is a “special” shared library Python understands) in a cross-platform manner. Thus, a big disadvantage of ctypes at the time of writing this book is that it is difficult to distribute, in a cross-platform manner, a Python extension that uses ctypes and includes your own code, which should be compiled as a shared library on the user's system.
### Loading the shared library

A simple, but robust way to load the shared library is to get the absolute path name and load it using the cdll object of ctypes:

```
lib = ctypes.cdll[<full_path_name>]
```

However, on Windows accessing an attribute of the `cdll` method will load the first DLL by that name found in the current directory or on the PATH. Loading the absolute path name requires a little finesse for cross-platform work since the extension of shared libraries varies. There is a `ctypes.util.find_library` utility available that can simplify the process of finding the library to load, but it is not foolproof. Complicating matters, different platforms have different default extensions used by shared libraries (e.g. .dll – Windows, .so – Linux, .dylib – Mac OS X). This must also be taken into account if you are using ctypes to wrap code that needs to work on several platforms.

NumPy provides a convenience function called `ctypeslib.load_library` (name, path). This function takes the name of the shared library (including any prefix like ‘lib’ but excluding the extension) and a path where the shared library can be located. It returns a ctypes library object, raises an `OSError` if the library cannot be found, or raises an `ImportError` if the ctypes module is not available. (Windows users: the ctypes library object loaded using `load_library` is always loaded assuming cdecl calling convention. See the ctypes documentation under `ctypes.windll` and/or `ctypes.oledll` for ways to load libraries under other calling conventions.)

The functions in the shared library are available as attributes of the ctypes library object (returned from `ctypeslib.load_library`) or as items using `lib['func_name']` syntax. The latter method for retrieving a function name is particularly useful if the function name contains characters that are not allowable in Python variable names.
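As a concrete illustration of the points above, here is a hedged sketch that uses `ctypes.util.find_library` to locate the C math library and load it with `ctypes.CDLL`. The name it finds is platform-dependent (e.g. `libm.so.6` on Linux, a full `libm.dylib` path on macOS), and `find_library` can return None on systems without the usual toolchain:

```python
import ctypes
import ctypes.util

# Locate the math library; the result is a platform-specific
# name or path rather than something you should hard-code.
libm_name = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_name)

# Functions are attributes of the library object; declare the
# signature so ctypes converts arguments and results correctly.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

result = libm.cos(0.0)
```

Without the `restype`/`argtypes` declarations, ctypes would treat `cos` as returning int and pass the argument incorrectly, which is exactly the kind of silent mistake the checking layer discussed above is meant to catch.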
### Converting arguments

Python ints/longs, strings, and unicode objects are automatically converted as needed to equivalent ctypes arguments. The None object is also converted automatically to a NULL pointer. All other Python objects must be converted to ctypes-specific types. There are two ways around this restriction that allow ctypes to integrate with other objects.

1. Don’t set the argtypes attribute of the function object and define an `_as_parameter_` method for the object you want to pass in. The `_as_parameter_` method must return a Python int which will be passed directly to the function.

2. Set the argtypes attribute to a list whose entries contain objects with a classmethod named from_param that knows how to convert your object to an object that ctypes can understand (an int/long, string, unicode, or object with the `_as_parameter_` attribute).

NumPy uses both methods, with a preference for the second method because it can be safer. The ctypes attribute of the ndarray returns an object that has an `_as_parameter_` attribute which returns an integer representing the address of the ndarray to which it is associated. As a result, one can pass this ctypes attribute object directly to a function expecting a pointer to the data in your ndarray. The caller must be sure that the ndarray object is of the correct type and shape and has the correct flags set, or risk nasty crashes if a data-pointer to an inappropriate array is passed in.

To implement the second method, NumPy provides the class-factory function `ndpointer` in the [`numpy.ctypeslib`](../reference/routines.ctypeslib#module-numpy.ctypeslib "numpy.ctypeslib") module. This class-factory function produces an appropriate class that can be placed in an argtypes attribute entry of a ctypes function. The class will contain a from_param method which ctypes will use to convert any ndarray passed in to the function to a ctypes-recognized object.
In the process, the conversion will perform checking on any properties of the ndarray that were specified by the user in the call to `ndpointer`. Aspects of the ndarray that can be checked include the data-type, the number of dimensions, the shape, and/or the state of the flags on any array passed. The return value of the from_param method is the ctypes attribute of the array which (because it contains the `_as_parameter_` attribute pointing to the array data area) can be used by ctypes directly.

The ctypes attribute of an ndarray is also endowed with additional attributes that may be convenient when passing additional information about the array into a ctypes function. The attributes **data**, **shape**, and **strides** can provide ctypes-compatible types corresponding to the data area, the shape, and the strides of the array. The data attribute returns a `c_void_p` representing a pointer to the data area. The shape and strides attributes each return an array of ctypes integers (or None representing a NULL pointer, if a 0-d array). The base ctype of the array is a ctype integer of the same size as a pointer on the platform. There are also methods `data_as({ctype})`, `shape_as(<base ctype>)`, and `strides_as(<base ctype>)`. These return the data as a ctype object of your choice and the shape/strides arrays using an underlying base type of your choice. For convenience, the `ctypeslib` module also contains `c_intp` as a ctypes integer data-type whose size is the same as the size of `c_void_p` on the platform (its value is None if ctypes is not installed).

### Calling the function

The function is accessed as an attribute of, or an item from, the loaded shared library. Thus, if `./mylib.so` has a function named `cool_function1`, it may be accessed either as:

```python
lib = numpy.ctypeslib.load_library('mylib', '.')
func1 = lib.cool_function1   # or equivalently
func1 = lib['cool_function1']
```

In ctypes, the return value of a function is set to be ‘int’ by default.
This behavior can be changed by setting the restype attribute of the function. Use None for the restype if the function has no return value (‘void’):

```python
func1.restype = None
```

As previously discussed, you can also set the argtypes attribute of the function in order to have ctypes check the types of the input arguments when the function is called. Use the `ndpointer` factory function to generate a ready-made class for data-type, shape, and flags checking on your new function. The `ndpointer` function has the signature

ndpointer(dtype=None, ndim=None, shape=None, flags=None)

Keyword arguments with the value `None` are not checked. Specifying a keyword enforces checking of that aspect of the ndarray on conversion to a ctypes-compatible object. The dtype keyword can be any object understood as a data-type object. The ndim keyword should be an integer, and the shape keyword should be an integer or a sequence of integers. The flags keyword specifies the minimal flags that are required on any array passed in. This can be specified as a string of comma-separated requirements, an integer indicating the requirement bits OR’d together, or a flags object returned from the flags attribute of an array with the necessary requirements.

Using an ndpointer class in the argtypes method can make it significantly safer to call a C function using ctypes and the data area of an ndarray. You may still want to wrap the function in an additional Python wrapper to make it user-friendly (hiding some obvious arguments and making some arguments output arguments). In this process, the `requires` function in NumPy may be useful to return the right kind of array from a given input.

### Complete example

In this example, we will demonstrate how the addition function and the filter function implemented previously using the other approaches can be implemented using ctypes.
First, the C code which implements the algorithms contains the functions `zadd`, `dadd`, `sadd`, `cadd`, and `dfilter2d`. The `zadd` function is:

```c
/* Add arrays of contiguous data */
typedef struct {double real; double imag;} cdouble;
typedef struct {float real; float imag;} cfloat;

void zadd(cdouble *a, cdouble *b, cdouble *c, long n)
{
    while (n--) {
        c->real = a->real + b->real;
        c->imag = a->imag + b->imag;
        a++; b++; c++;
    }
}
```

with similar code for `cadd`, `dadd`, and `sadd` that handles complex float, double, and float data-types, respectively:

```c
void cadd(cfloat *a, cfloat *b, cfloat *c, long n)
{
    while (n--) {
        c->real = a->real + b->real;
        c->imag = a->imag + b->imag;
        a++; b++; c++;
    }
}

void dadd(double *a, double *b, double *c, long n)
{
    while (n--) {
        *c++ = *a++ + *b++;
    }
}

void sadd(float *a, float *b, float *c, long n)
{
    while (n--) {
        *c++ = *a++ + *b++;
    }
}
```

The `code.c` file also contains the function `dfilter2d`:

```c
/*
 * Assumes b is contiguous and has strides that are multiples of
 * sizeof(double)
 */
void
dfilter2d(double *a, double *b, ssize_t *astrides, ssize_t *dims)
{
    ssize_t i, j, M, N, S0, S1;
    ssize_t r, c, rm1, rp1, cp1, cm1;

    M = dims[0]; N = dims[1];
    S0 = astrides[0]/sizeof(double);
    S1 = astrides[1]/sizeof(double);
    for (i = 1; i < M - 1; i++) {
        r = i*S0;
        rp1 = r + S0;
        rm1 = r - S0;
        for (j = 1; j < N - 1; j++) {
            c = j*S1;
            cp1 = c + S1;
            cm1 = c - S1;
            b[i*N + j] = a[r + c] +
                (a[rp1 + c] + a[rm1 + c] +
                 a[r + cp1] + a[r + cm1])*0.5 +
                (a[rp1 + cp1] + a[rp1 + cm1] +
                 a[rm1 + cp1] + a[rm1 + cm1])*0.25;
        }
    }
}
```

A possible advantage this code has over the Fortran-equivalent code is that it takes arbitrarily strided (i.e. non-contiguous) arrays and may also run faster depending on the optimization capability of your compiler. But it is obviously more complicated than the simple code in `filter.f`. This code must be compiled into a shared library.
On my Linux system this is accomplished using:

```shell
gcc -o code.so -shared code.c
```

which creates a shared library named code.so in the current directory. On Windows don’t forget to either add `__declspec(dllexport)` in front of void on the line preceding each function definition, or write a `code.def` file that lists the names of the functions to be exported.

A suitable Python interface to this shared library should be constructed. To do this, create a file named interface.py with the following lines at the top:

```python
__all__ = ['add', 'filter2d']

import ctypes
import os

import numpy as np

_path = os.path.dirname(__file__)
lib = np.ctypeslib.load_library('code', _path)
_typedict = {'zadd': complex, 'sadd': np.single,
             'cadd': np.csingle, 'dadd': float}
for name in _typedict.keys():
    val = getattr(lib, name)
    val.restype = None
    _type = _typedict[name]
    val.argtypes = [np.ctypeslib.ndpointer(_type,
                          flags='aligned, contiguous'),
                    np.ctypeslib.ndpointer(_type,
                          flags='aligned, contiguous'),
                    np.ctypeslib.ndpointer(_type,
                          flags='aligned, contiguous,'
                                'writeable'),
                    np.ctypeslib.c_intp]
```

This code loads the shared library named `code.{ext}` located in the same path as this file. It then adds a return type of void to the functions contained in the library. It also adds argument checking to the functions in the library so that ndarrays can be passed as the first three arguments along with an integer (large enough to hold a pointer on the platform) as the fourth argument.
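The checking that `ndpointer` adds here can be exercised without any shared library at all, by calling the generated class's from_param method directly, which is exactly what ctypes does at call time. A small illustrative sketch:

```python
import numpy as np
from numpy.ctypeslib import ndpointer

# An argtypes entry demanding an aligned, contiguous, writeable
# float64 array -- the same style of restriction used above.
arg_t = ndpointer(np.float64, flags='aligned, contiguous, writeable')

good = np.zeros(4)                    # satisfies the restrictions
converted = arg_t.from_param(good)    # succeeds: a ctypes-usable object

try:
    arg_t.from_param(np.zeros(4, dtype=np.int32))  # wrong dtype
    rejected = False
except TypeError:
    rejected = True
```

An array that fails any requested check is rejected with a TypeError before the foreign function ever sees a pointer, which is the safety the text above is describing.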
Setting up the filtering function is similar and allows the filtering function to be called with ndarray arguments as the first two arguments and with pointers to integers (large enough to handle the strides and shape of an ndarray) as the last two arguments (this requires `import ctypes` at the top of `interface.py`):

```python
lib.dfilter2d.restype = None
lib.dfilter2d.argtypes = [np.ctypeslib.ndpointer(float, ndim=2,
                                          flags='aligned'),
                          np.ctypeslib.ndpointer(float, ndim=2,
                              flags='aligned, contiguous,'
                                    'writeable'),
                          ctypes.POINTER(np.ctypeslib.c_intp),
                          ctypes.POINTER(np.ctypeslib.c_intp)]
```

Next, define a simple selection function that chooses which addition function to call in the shared library based on the data-type:

```python
def select(dtype):
    if dtype.char in '?bBhHf':
        return lib.sadd, np.single
    elif dtype.char in 'F':
        return lib.cadd, np.csingle
    elif dtype.char in 'DG':
        return lib.zadd, complex
    else:
        return lib.dadd, float
```

Finally, the two functions to be exported by the interface can be written simply as:

```python
def add(a, b):
    requires = ['CONTIGUOUS', 'ALIGNED']
    a = np.asanyarray(a)
    func, dtype = select(a.dtype)
    a = np.require(a, dtype, requires)
    b = np.require(b, dtype, requires)
    c = np.empty_like(a)
    func(a, b, c, a.size)
    return c
```

and:

```python
def filter2d(a):
    a = np.require(a, float, ['ALIGNED'])
    b = np.zeros_like(a)
    lib.dfilter2d(a, b, a.ctypes.strides, a.ctypes.shape)
    return b
```

### Conclusion

Using ctypes is a powerful way to connect Python with arbitrary C code. Its advantages for extending Python include:

* clean separation of C code from Python code
* no need to learn a new syntax except Python and C
* allows reuse of C code
* functionality in shared libraries written for other purposes can be obtained with a simple Python wrapper and search for the library.
* easy integration with NumPy through the ctypes attribute
* full argument checking with the ndpointer class factory

Its disadvantages include:

* It is difficult to distribute an extension module made using ctypes because of a lack of support for building shared libraries in distutils.
* You must have shared libraries of your code (no static libraries).
* Very little support for C++ code and its different library-calling conventions. You will probably need a C wrapper around C++ code to use with ctypes (or just use Boost.Python instead).

Because of the difficulty in distributing an extension module made using ctypes, f2py and Cython are still the easiest ways to extend Python for package creation. However, ctypes is in some cases a useful alternative, and improvements in Python packaging tools should eliminate much of the difficulty in extending Python and distributing the extension using ctypes.

## Additional tools you may find useful

These tools have been found useful by others using Python and so are included here. They are discussed separately because they are either older ways to do things now handled by f2py, Cython, or ctypes (SWIG, Pyfort) or because of a lack of reasonable documentation (SIP, Boost). Links to these methods are not included since the most relevant can be found using Google or some other search engine, and any links provided here would quickly become dated. Do not assume that inclusion in this list means that the package deserves attention. Information about these packages is collected here because many people have found them useful and we’d like to give you as many options as possible for tackling the problem of easily integrating your code.

### SWIG

Simplified Wrapper and Interface Generator (SWIG) is an old and fairly stable method for wrapping C/C++ libraries for a large variety of other languages. It does not specifically understand NumPy arrays but can be made usable with NumPy through the use of typemaps.
There are some sample typemaps in the `numpy/tools/swig` directory under `numpy.i`, together with an example module that makes use of them. SWIG excels at wrapping large C/C++ libraries because it can (almost) parse their headers and auto-produce an interface. Technically, you need to generate a `.i` file that defines the interface. Often, however, this `.i` file can consist of parts of the header itself. The interface usually needs a bit of tweaking to be very useful.

This ability to parse C/C++ headers and auto-generate the interface still makes SWIG a useful approach to adding functionality from C/C++ into Python, despite the other methods that have emerged that are more targeted to Python. SWIG can actually target extensions for several languages, but the typemaps usually have to be language-specific. Nonetheless, with modifications to the Python-specific typemaps, SWIG can be used to interface a library with other languages such as Perl, Tcl, and Ruby.

My experience with SWIG has been generally positive in that it is relatively easy to use and quite powerful. I used it often before becoming more proficient at writing C extensions. However, writing custom interfaces with SWIG is often troublesome because it must be done using the concept of typemaps, which are not Python specific and are written in a C-like syntax. Therefore, other gluing strategies are preferred, and SWIG would probably be considered only to wrap a very large C/C++ library. Nonetheless, there are others who use SWIG quite happily.

### SIP

SIP is another tool for wrapping C/C++ libraries that is Python specific and appears to have very good support for C++. Riverbank Computing developed SIP in order to create Python bindings to the Qt library. An interface file must be written to generate the binding, but the interface file looks a lot like a C/C++ header file.
While SIP is not a full C++ parser, it understands quite a bit of C++ syntax as well as its own special directives that allow modification of how the Python binding is accomplished. It also allows the user to define mappings between Python types and C/C++ structures and classes.

### Boost Python

Boost is a repository of C++ libraries, and Boost.Python is one of those libraries, which provides a concise interface for binding C++ classes and functions to Python. The amazing part of the Boost.Python approach is that it works entirely in pure C++ without introducing a new syntax. Many users of C++ report that Boost.Python makes it possible to combine the best of both worlds in a seamless fashion.

Using Boost to wrap simple C subroutines is usually overkill. Its primary purpose is to make C++ classes available in Python. So, if you have a set of C++ classes that need to be integrated cleanly into Python, consider learning about and using Boost.Python.

### Pyfort

Pyfort is a nice tool for wrapping Fortran and Fortran-like C code into Python with support for Numeric arrays. It was written by Paul Dubois, a distinguished computer scientist and the very first maintainer of Numeric (now retired). It is worth mentioning in the hopes that somebody will update Pyfort to work with NumPy arrays as well, which now support either Fortran- or C-style contiguous arrays.

# Writing your own ufunc

## Creating a new universal function

Before reading this, it may help to familiarize yourself with the basics of C extensions for Python by reading/skimming the tutorials in Section 1 of [Extending and Embedding the Python Interpreter](https://docs.python.org/extending/index.html) and in [How to extend NumPy](c-info.how-to-extend).

The umath module is a computer-generated C-module that creates many ufuncs. It provides a great many examples of how to create a universal function. Creating your own ufunc that will make use of the ufunc machinery is not difficult either.
Suppose you have a function that you want to operate element-by-element over its inputs. By creating a new ufunc you will obtain a function that handles

* broadcasting
* N-dimensional looping
* automatic type-conversions with minimal memory usage
* optional output arrays

It is not difficult to create your own ufunc. All that is required is a 1-d loop for each data-type you want to support. Each 1-d loop must have a specific signature, and only ufuncs for fixed-size data-types can be used. The function call used to create a new ufunc to work on built-in data-types is given below. A different mechanism is used to register ufuncs for user-defined data-types.

In the next several sections we give example code that can be easily modified to create your own ufuncs. The examples are successively more complete or complicated versions of the logit function, a common function in statistical modeling. Logit is also interesting because, due to the magic of IEEE standards (specifically IEEE 754), all of the logit functions created below automatically have the following behavior.

```
>>> logit(0)
-inf
>>> logit(1)
inf
>>> logit(2)
nan
>>> logit(-2)
nan
```

This is wonderful because the function writer doesn’t have to manually propagate infs or nans.

## Example non-ufunc extension

For comparison and general edification of the reader we provide a simple implementation of a C extension of `logit` that uses no NumPy. To do this we need two files. The first is the C file which contains the actual code, and the second is the `setup.py` file used to create the module.

```c
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <math.h>

/*
 * spammodule.c
 * This is the C code for a non-numpy Python extension to
 * define the logit function, where logit(p) = log(p/(1-p)).
 * This function will not work on numpy arrays automatically.
 * numpy.vectorize must be called in python to generate
 * a numpy-friendly function.
 *
 * Details explaining the Python-C API can be found under
 * 'Extending and Embedding' and 'Python/C API' at
 * docs.python.org .
 */

/* This declares the logit function */
static PyObject *spam_logit(PyObject *self, PyObject *args);

/*
 * This tells Python what methods this module has.
 * See the Python-C API for more information.
 */
static PyMethodDef SpamMethods[] = {
    {"logit", spam_logit, METH_VARARGS, "compute logit"},
    {NULL, NULL, 0, NULL}
};

/*
 * This actually defines the logit function for
 * input args from Python.
 */
static PyObject *spam_logit(PyObject *self, PyObject *args)
{
    double p;

    /* This parses the Python argument into a double */
    if (!PyArg_ParseTuple(args, "d", &p)) {
        return NULL;
    }

    /* THE ACTUAL LOGIT FUNCTION */
    p = p / (1 - p);
    p = log(p);

    /* This builds the answer back into a python object */
    return Py_BuildValue("d", p);
}

/* This initiates the module using the above definitions. */
static struct PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT,
    "spam",
    NULL,
    -1,
    SpamMethods,
    NULL,
    NULL,
    NULL,
    NULL
};

PyMODINIT_FUNC PyInit_spam(void)
{
    PyObject *m;
    m = PyModule_Create(&moduledef);
    if (!m) {
        return NULL;
    }
    return m;
}
```

To use the `setup.py` file, place `setup.py` and `spammodule.c` in the same folder. Then `python setup.py build` will build the module to import, or `python setup.py install` will install the module to your site-packages directory.

```python
'''
setup.py file for spammodule.c

Calling
$python setup.py build_ext --inplace
will build the extension library in the current directory.

Calling
$python setup.py build
will build a file that looks like ./build/lib*, where
lib* is a file that begins with lib. The library will
be in this file and end with a C library extension,
such as .so

Calling
$python setup.py install
will install the module in your site-packages file.

See the setuptools section 'Building Extension Modules'
at setuptools.pypa.io for more information.
'''

from setuptools import setup, Extension

module1 = Extension('spam', sources=['spammodule.c'])

setup(name='spam',
      version='1.0',
      ext_modules=[module1])
```

Once the spam module is imported into python, you can call logit via `spam.logit`. Note that the function used above cannot be applied as-is to numpy arrays. To do so we must call [`numpy.vectorize`](../reference/generated/numpy.vectorize#numpy.vectorize "numpy.vectorize") on it. For example, if a python interpreter is opened in the file containing the spam library or spam has been installed, one can perform the following commands:

```
>>> import numpy as np
>>> import spam
>>> spam.logit(0)
-inf
>>> spam.logit(1)
inf
>>> spam.logit(0.5)
0.0
>>> x = np.linspace(0, 1, 10)
>>> spam.logit(x)
TypeError: only length-1 arrays can be converted to Python scalars
>>> f = np.vectorize(spam.logit)
>>> f(x)
array([       -inf, -2.07944154, -1.25276297, -0.69314718, -0.22314355,
        0.22314355,  0.69314718,  1.25276297,  2.07944154,         inf])
```

THE RESULTING LOGIT FUNCTION IS NOT FAST! `numpy.vectorize` simply loops over `spam.logit`. The loop is done at the C level, but the numpy array is constantly being parsed and built back up. This is expensive. When the author compared `numpy.vectorize(spam.logit)` against the logit ufuncs constructed below, the logit ufuncs were almost exactly 4 times faster. Larger or smaller speedups are, of course, possible depending on the nature of the function.

## Example NumPy ufunc for one dtype

For simplicity we give a ufunc for a single dtype, the `'f8'` `double`. As in the previous section, we first give the `.c` file and then the `setup.py` file used to create the module containing the ufunc.

The places in the code corresponding to the actual computations for the ufunc are marked with `/* BEGIN main ufunc computation */` and `/* END main ufunc computation */`. The code in between those lines is the primary thing that must be changed to create your own ufunc.
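As a point of reference for the C versions, the same IEEE-driven behavior can be sketched in plain NumPy (an illustrative reference implementation, not part of any extension module):

```python
import numpy as np

def logit(p):
    """Reference logit: log(p / (1 - p)).

    IEEE 754 arithmetic propagates inf/nan for p outside (0, 1)
    exactly as the C loops do, so no special-casing is needed.
    """
    p = np.asarray(p, dtype=float)
    with np.errstate(divide='ignore', invalid='ignore'):
        return np.log(p / (1 - p))

print(logit(0), logit(1), logit(2), logit(-2))  # -inf inf nan nan
```

Comparing a compiled ufunc's output against this function is an easy sanity check.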
```c
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include "numpy/ndarraytypes.h"
#include "numpy/ufuncobject.h"
#include "numpy/npy_3kcompat.h"
#include <math.h>

/*
 * single_type_logit.c
 * This is the C code for creating your own
 * NumPy ufunc for a logit function.
 *
 * In this code we only define the ufunc for
 * a single dtype. The computations that must
 * be replaced to create a ufunc for
 * a different function are marked with BEGIN
 * and END.
 *
 * Details explaining the Python-C API can be found under
 * 'Extending and Embedding' and 'Python/C API' at
 * docs.python.org .
 */

static PyMethodDef LogitMethods[] = {
    {NULL, NULL, 0, NULL}
};

/* The loop definition must precede the PyMODINIT_FUNC. */

static void double_logit(char **args, const npy_intp *dimensions,
                         const npy_intp *steps, void *data)
{
    npy_intp i;
    npy_intp n = dimensions[0];
    char *in = args[0], *out = args[1];
    npy_intp in_step = steps[0], out_step = steps[1];

    double tmp;

    for (i = 0; i < n; i++) {
        /* BEGIN main ufunc computation */
        tmp = *(double *)in;
        tmp /= 1 - tmp;
        *((double *)out) = log(tmp);
        /* END main ufunc computation */

        in += in_step;
        out += out_step;
    }
}

/* This is a pointer to the above function */
PyUFuncGenericFunction funcs[1] = {&double_logit};

/* These are the input and return dtypes of logit. */
static const char types[2] = {NPY_DOUBLE, NPY_DOUBLE};

static struct PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT,
    "npufunc",
    NULL,
    -1,
    LogitMethods,
    NULL,
    NULL,
    NULL,
    NULL
};

PyMODINIT_FUNC PyInit_npufunc(void)
{
    PyObject *m, *logit, *d;

    import_array();
    import_umath();

    m = PyModule_Create(&moduledef);
    if (!m) {
        return NULL;
    }

    logit = PyUFunc_FromFuncAndData(funcs, NULL, types, 1, 1, 1,
                                    PyUFunc_None, "logit",
                                    "logit_docstring", 0);

    d = PyModule_GetDict(m);

    PyDict_SetItemString(d, "logit", logit);
    Py_DECREF(logit);

    return m;
}
```

This is a `setup.py` file for the above code. As before, the module can be built by calling `python setup.py build` at the command prompt, or installed to site-packages via `python setup.py install`.
The module can also be placed into a local folder, e.g. `npufunc_directory` below, using `python setup.py build_ext --inplace`.

```python
'''
setup.py file for single_type_logit.c
Note that since this is a numpy extension
we add an include_dirs=[get_include()]
so that the extension is built with numpy's C/C++ header files.

Calling
$python setup.py build_ext --inplace
will build the extension library in the npufunc_directory.

Calling
$python setup.py build
will build a file that looks like ./build/lib*, where
lib* is a file that begins with lib. The library will
be in this file and end with a C library extension,
such as .so

Calling
$python setup.py install
will install the module in your site-packages file.

See the setuptools section 'Building Extension Modules'
at setuptools.pypa.io for more information.
'''

from setuptools import setup, Extension
from numpy import get_include

npufunc = Extension('npufunc',
                    sources=['single_type_logit.c'],
                    include_dirs=[get_include()])

setup(name='npufunc', version='1.0', ext_modules=[npufunc])
```

After the above has been installed, it can be imported and used as follows.

```
>>> import numpy as np
>>> import npufunc
>>> npufunc.logit(0.5)
np.float64(0.0)
>>> a = np.linspace(0, 1, 5)
>>> npufunc.logit(a)
array([       -inf, -1.09861229,  0.        ,  1.09861229,         inf])
```

## Example NumPy ufunc with multiple dtypes

We finally give an example of a full ufunc, with inner loops for half-floats, floats, doubles, and long doubles. As in the previous sections we first give the `.c` file and then the corresponding `setup.py` file.

The places in the code corresponding to the actual computations for the ufunc are marked with `/* BEGIN main ufunc computation */` and `/* END main ufunc computation */`. The code in between those lines is the primary thing that must be changed to create your own ufunc.
```c
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include "numpy/ndarraytypes.h"
#include "numpy/ufuncobject.h"
#include "numpy/halffloat.h"
#include <math.h>

/*
 * multi_type_logit.c
 * This is the C code for creating your own
 * NumPy ufunc for a logit function.
 *
 * Each function of the form type_logit defines the
 * logit function for a different numpy dtype. Each
 * of these functions must be modified when you
 * create your own ufunc. The computations that must
 * be replaced to create a ufunc for
 * a different function are marked with BEGIN
 * and END.
 *
 * Details explaining the Python-C API can be found under
 * 'Extending and Embedding' and 'Python/C API' at
 * docs.python.org .
 */

static PyMethodDef LogitMethods[] = {
    {NULL, NULL, 0, NULL}
};

/* The loop definitions must precede the PyMODINIT_FUNC. */

static void long_double_logit(char **args, const npy_intp *dimensions,
                              const npy_intp *steps, void *data)
{
    npy_intp i;
    npy_intp n = dimensions[0];
    char *in = args[0], *out = args[1];
    npy_intp in_step = steps[0], out_step = steps[1];

    long double tmp;

    for (i = 0; i < n; i++) {
        /* BEGIN main ufunc computation */
        tmp = *(long double *)in;
        tmp /= 1 - tmp;
        *((long double *)out) = logl(tmp);
        /* END main ufunc computation */

        in += in_step;
        out += out_step;
    }
}

static void double_logit(char **args, const npy_intp *dimensions,
                         const npy_intp *steps, void *data)
{
    npy_intp i;
    npy_intp n = dimensions[0];
    char *in = args[0], *out = args[1];
    npy_intp in_step = steps[0], out_step = steps[1];

    double tmp;

    for (i = 0; i < n; i++) {
        /* BEGIN main ufunc computation */
        tmp = *(double *)in;
        tmp /= 1 - tmp;
        *((double *)out) = log(tmp);
        /* END main ufunc computation */

        in += in_step;
        out += out_step;
    }
}

static void float_logit(char **args, const npy_intp *dimensions,
                        const npy_intp *steps, void *data)
{
    npy_intp i;
    npy_intp n = dimensions[0];
    char *in = args[0], *out = args[1];
    npy_intp in_step = steps[0], out_step = steps[1];

    float tmp;

    for (i = 0; i < n; i++) {
        /* BEGIN main ufunc computation */
        tmp = *(float *)in;
        tmp /= 1 - tmp;
        *((float *)out) = logf(tmp);
        /* END main ufunc computation */

        in += in_step;
        out += out_step;
    }
}

static void half_float_logit(char **args, const npy_intp *dimensions,
                             const npy_intp *steps, void *data)
{
    npy_intp i;
    npy_intp n = dimensions[0];
    char *in = args[0], *out = args[1];
    npy_intp in_step = steps[0], out_step = steps[1];

    float tmp;

    for (i = 0; i < n; i++) {
        /* BEGIN main ufunc computation */
        tmp = npy_half_to_float(*(npy_half *)in);
        tmp /= 1 - tmp;
        tmp = logf(tmp);
        *((npy_half *)out) = npy_float_to_half(tmp);
        /* END main ufunc computation */

        in += in_step;
        out += out_step;
    }
}

/* This gives pointers to the above functions */
PyUFuncGenericFunction funcs[4] = {&half_float_logit,
                                   &float_logit,
                                   &double_logit,
                                   &long_double_logit};

static const char types[8] = {NPY_HALF, NPY_HALF,
                              NPY_FLOAT, NPY_FLOAT,
                              NPY_DOUBLE, NPY_DOUBLE,
                              NPY_LONGDOUBLE, NPY_LONGDOUBLE};

static struct PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT,
    "npufunc",
    NULL,
    -1,
    LogitMethods,
    NULL,
    NULL,
    NULL,
    NULL
};

PyMODINIT_FUNC PyInit_npufunc(void)
{
    PyObject *m, *logit, *d;

    import_array();
    import_umath();

    m = PyModule_Create(&moduledef);
    if (!m) {
        return NULL;
    }

    logit = PyUFunc_FromFuncAndData(funcs, NULL, types, 4, 1, 1,
                                    PyUFunc_None, "logit",
                                    "logit_docstring", 0);

    d = PyModule_GetDict(m);

    PyDict_SetItemString(d, "logit", logit);
    Py_DECREF(logit);

    return m;
}
```

This is a `setup.py` file for the above code. As before, the module can be built by calling `python setup.py build` at the command prompt, or installed to site-packages via `python setup.py install`.

```python
'''
setup.py file for multi_type_logit.c
Note that since this is a numpy extension
we add an include_dirs=[get_include()]
so that the extension is built with numpy's C/C++ header files.
Furthermore, we also have to link against the npymath
library for the half-float dtype.

Calling
$python setup.py build_ext --inplace
will build the extension library in the current directory.

Calling
$python setup.py build
will build a file that looks like ./build/lib*, where
lib* is a file that begins with lib. The library will
be in this file and end with a C library extension,
such as .so

Calling
$python setup.py install
will install the module in your site-packages file.

See the setuptools section 'Building Extension Modules'
at setuptools.pypa.io for more information.
'''

from setuptools import setup, Extension
from numpy import get_include
from os import path

path_to_npymath = path.join(get_include(), '..', 'lib')
npufunc = Extension('npufunc',
                    sources=['multi_type_logit.c'],
                    include_dirs=[get_include()],
                    # Necessary for the half-float dtype.
                    library_dirs=[path_to_npymath],
                    libraries=["npymath"])

setup(name='npufunc', version='1.0', ext_modules=[npufunc])
```

After the above has been installed, it can be imported and used as follows.

```
>>> import numpy as np
>>> import npufunc
>>> npufunc.logit(0.5)
np.float64(0.0)
>>> a = np.linspace(0, 1, 5)
>>> npufunc.logit(a)
array([       -inf, -1.09861229,  0.        ,  1.09861229,         inf])
```

## Example NumPy ufunc with multiple arguments/return values

Our final example is a ufunc with multiple arguments. It is a modification of the code for a logit ufunc for data with a single dtype. We compute `(A * B, logit(A * B))`.

We only give the C code as the `setup.py` file is exactly the same as the `setup.py` file in Example NumPy ufunc for one dtype, except that the line

```python
npufunc = Extension('npufunc',
                    sources=['single_type_logit.c'],
                    include_dirs=[get_include()])
```

is replaced with

```python
npufunc = Extension('npufunc',
                    sources=['multi_arg_logit.c'],
                    include_dirs=[get_include()])
```

The C file is given below. The ufunc generated takes two arguments `A` and `B`. It returns a tuple whose first element is `A * B` and whose second element is `logit(A * B)`. Note that it automatically supports broadcasting, as well as all other properties of a ufunc.
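The intended two-output behavior can be prototyped in plain NumPy before writing the C loop (an illustrative sketch; `logitprod` is a hypothetical name, not the ufunc itself):

```python
import numpy as np

def logitprod(a, b):
    """Reference for the multi-arg ufunc: returns (A*B, logit(A*B))."""
    prod = np.asarray(a, dtype=float) * np.asarray(b, dtype=float)
    with np.errstate(divide='ignore', invalid='ignore'):
        return prod, np.log(prod / (1 - prod))

# Broadcasting a scalar against a list, just as the real ufunc would.
p, lp = logitprod(0.5, [1.0, 2.0])
print(p)   # products 0.5 and 1.0
print(lp)  # logit(0.5) = 0, logit(1.0) = inf
```

Checking the compiled ufunc against a reference like this catches stride and casting bugs early.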
```c
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include "numpy/ndarraytypes.h"
#include "numpy/ufuncobject.h"
#include "numpy/halffloat.h"
#include <math.h>

/*
 * multi_arg_logit.c
 * This is the C code for creating your own
 * NumPy ufunc for a multiple argument, multiple
 * return value ufunc. The places where the
 * ufunc computation is carried out are marked
 * with comments.
 *
 * Details explaining the Python-C API can be found under
 * 'Extending and Embedding' and 'Python/C API' at
 * docs.python.org.
 */

static PyMethodDef LogitMethods[] = {
    {NULL, NULL, 0, NULL}
};

/* The loop definition must precede the PyMODINIT_FUNC. */

static void double_logitprod(char **args, const npy_intp *dimensions,
                             const npy_intp *steps, void *data)
{
    npy_intp i;
    npy_intp n = dimensions[0];
    char *in1 = args[0], *in2 = args[1];
    char *out1 = args[2], *out2 = args[3];
    npy_intp in1_step = steps[0], in2_step = steps[1];
    npy_intp out1_step = steps[2], out2_step = steps[3];

    double tmp;

    for (i = 0; i < n; i++) {
        /* BEGIN main ufunc computation */
        tmp = *(double *)in1;
        tmp *= *(double *)in2;
        *((double *)out1) = tmp;
        *((double *)out2) = log(tmp / (1 - tmp));
        /* END main ufunc computation */

        in1 += in1_step;
        in2 += in2_step;
        out1 += out1_step;
        out2 += out2_step;
    }
}

/* This is a pointer to the above function */
PyUFuncGenericFunction funcs[1] = {&double_logitprod};

/* These are the input and return dtypes of logit. */
static const char types[4] = {NPY_DOUBLE, NPY_DOUBLE,
                              NPY_DOUBLE, NPY_DOUBLE};

static struct PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT,
    "npufunc",
    NULL,
    -1,
    LogitMethods,
    NULL,
    NULL,
    NULL,
    NULL
};

PyMODINIT_FUNC PyInit_npufunc(void)
{
    PyObject *m, *logit, *d;

    import_array();
    import_umath();

    m = PyModule_Create(&moduledef);
    if (!m) {
        return NULL;
    }

    logit = PyUFunc_FromFuncAndData(funcs, NULL, types, 1, 2, 2,
                                    PyUFunc_None, "logit",
                                    "logit_docstring", 0);

    d = PyModule_GetDict(m);

    PyDict_SetItemString(d, "logit", logit);
    Py_DECREF(logit);
    return m;
}
```

## Example NumPy ufunc with structured array dtype arguments
This example shows how to create a ufunc for a structured array dtype. For the example we show a trivial ufunc for adding two arrays with dtype `'u8,u8,u8'`. The process is a bit different from the other examples since a call to [`PyUFunc_FromFuncAndData`](../reference/c-api/ufunc#c.PyUFunc_FromFuncAndData "PyUFunc_FromFuncAndData") doesn’t fully register ufuncs for custom dtypes and structured array dtypes. We need to also call [`PyUFunc_RegisterLoopForDescr`](../reference/c-api/ufunc#c.PyUFunc_RegisterLoopForDescr "PyUFunc_RegisterLoopForDescr") to finish setting up the ufunc.

We only give the C code as the `setup.py` file is exactly the same as the `setup.py` file in Example NumPy ufunc for one dtype, except that the line

```python
npufunc = Extension('npufunc',
                    sources=['single_type_logit.c'],
                    include_dirs=[get_include()])
```

is replaced with

```python
npufunc = Extension('npufunc',
                    sources=['add_triplet.c'],
                    include_dirs=[get_include()])
```

The C file is given below.

```c
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include "numpy/ndarraytypes.h"
#include "numpy/ufuncobject.h"
#include "numpy/npy_3kcompat.h"
#include <math.h>

/*
 * add_triplet.c
 * This is the C code for creating your own
 * NumPy ufunc for a structured array dtype.
 *
 * Details explaining the Python-C API can be found under
 * 'Extending and Embedding' and 'Python/C API' at
 * docs.python.org.
 */

static PyMethodDef StructUfuncTestMethods[] = {
    {NULL, NULL, 0, NULL}
};

/* The loop definition must precede the PyMODINIT_FUNC. */

static void add_uint64_triplet(char **args, const npy_intp *dimensions,
                               const npy_intp *steps, void *data)
{
    npy_intp i;
    npy_intp is1 = steps[0];
    npy_intp is2 = steps[1];
    npy_intp os = steps[2];
    npy_intp n = dimensions[0];
    uint64_t *x, *y, *z;

    char *i1 = args[0];
    char *i2 = args[1];
    char *op = args[2];

    for (i = 0; i < n; i++) {
        x = (uint64_t *)i1;
        y = (uint64_t *)i2;
        z = (uint64_t *)op;

        z[0] = x[0] + y[0];
        z[1] = x[1] + y[1];
        z[2] = x[2] + y[2];

        i1 += is1;
        i2 += is2;
        op += os;
    }
}

/* This is a pointer to the above function */
PyUFuncGenericFunction funcs[1] = {&add_uint64_triplet};

/* These are the input and return dtypes of add_uint64_triplet. */
static const char types[3] = {NPY_UINT64, NPY_UINT64, NPY_UINT64};

static struct PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT,
    "struct_ufunc_test",
    NULL,
    -1,
    StructUfuncTestMethods,
    NULL,
    NULL,
    NULL,
    NULL
};

PyMODINIT_FUNC PyInit_npufunc(void)
{
    PyObject *m, *add_triplet, *d;
    PyObject *dtype_dict;
    PyArray_Descr *dtype;
    PyArray_Descr *dtypes[3];

    import_array();
    import_umath();

    m = PyModule_Create(&moduledef);
    if (m == NULL) {
        return NULL;
    }

    /* Create a new ufunc object */
    add_triplet = PyUFunc_FromFuncAndData(NULL, NULL, NULL, 0, 2, 1,
                                          PyUFunc_None, "add_triplet",
                                          "add_triplet_docstring", 0);

    dtype_dict = Py_BuildValue("[(s, s), (s, s), (s, s)]",
                               "f0", "u8", "f1", "u8", "f2", "u8");
    PyArray_DescrConverter(dtype_dict, &dtype);
    Py_DECREF(dtype_dict);

    dtypes[0] = dtype;
    dtypes[1] = dtype;
    dtypes[2] = dtype;

    /* Register ufunc for structured dtype */
    PyUFunc_RegisterLoopForDescr(add_triplet,
                                 dtype,
                                 &add_uint64_triplet,
                                 dtypes,
                                 NULL);

    d = PyModule_GetDict(m);

    PyDict_SetItemString(d, "add_triplet", add_triplet);
    Py_DECREF(add_triplet);
    return m;
}
```

The returned ufunc object is a callable Python object. It should be placed in a (module) dictionary under the same name as was used in the name argument to the ufunc-creation routine.
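What `add_triplet` computes can be reproduced field-by-field in plain NumPy, which is handy for testing the compiled ufunc (an illustrative sketch using the `'u8,u8,u8'` dtype from the example):

```python
import numpy as np

dt = np.dtype('u8,u8,u8')
a = np.array([(1, 2, 3)], dtype=dt)
b = np.array([(10, 20, 30)], dtype=dt)

# Field-wise addition: the same result add_triplet produces in C.
c = np.zeros_like(a)
for name in dt.names:
    c[name] = a[name] + b[name]

print(c)  # [(11, 22, 33)]
```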
The following example is adapted from the umath module:

```c
static PyUFuncGenericFunction atan2_functions[] = {
    PyUFunc_ff_f, PyUFunc_dd_d,
    PyUFunc_gg_g, PyUFunc_OO_O_method};

static void *atan2_data[] = {
    (void *)atan2f, (void *)atan2,
    (void *)atan2l, (void *)"arctan2"};

static const char atan2_signatures[] = {
    NPY_FLOAT, NPY_FLOAT, NPY_FLOAT,
    NPY_DOUBLE, NPY_DOUBLE, NPY_DOUBLE,
    NPY_LONGDOUBLE, NPY_LONGDOUBLE, NPY_LONGDOUBLE,
    NPY_OBJECT, NPY_OBJECT, NPY_OBJECT};

...

/* in the module initialization code */
PyObject *f, *dict, *module;

...

dict = PyModule_GetDict(module);

...

f = PyUFunc_FromFuncAndData(atan2_functions,
                            atan2_data, atan2_signatures, 4, 2, 1,
                            PyUFunc_None, "arctan2",
                            "a safe and correct arctan(x1/x2)", 0);
PyDict_SetItemString(dict, "arctan2", f);
Py_DECREF(f);

...
```

# How to write a NumPy how-to

How-tos get straight to the point – they

* answer a focused question, or
* narrow a broad question into focused questions that the user can choose among.

## A stranger has asked for directions…

**“I need to refuel my car.”**

## Give a brief but explicit answer

* _“Three kilometers/miles, take a right at Hayseed Road, it’s on your left.”_

Add helpful details for newcomers (“Hayseed Road”, even though it’s the only turnoff at three km/mi). But not irrelevant ones:

* Don’t also give directions from Route 7.
* Don’t explain why the town has only one filling station.

If there’s related background (tutorial, explanation, reference, alternative approach), bring it to the user’s attention with a link (“Directions from Route 7,” “Why so few filling stations?”).

## Delegate

* _“Three km/mi, take a right at Hayseed Road, follow the signs.”_

If the information is already documented and succinct enough for a how-to, just link to it, possibly after an introduction (“Three km/mi, take a right”).
## If the question is broad, narrow and redirect it

**“I want to see the sights.”**

The _See the sights_ how-to should link to a set of narrower how-tos:

* Find historic buildings
* Find scenic lookouts
* Find the town center

and these might in turn link to still narrower how-tos – so the town center page might link to

* Find the court house
* Find city hall

By organizing how-tos this way, you not only display the options for people who need to narrow their question, you also provide answers for users who start with narrower questions (“I want to see historic buildings,” “Which way to city hall?”).

## If there are many steps, break them up

If a how-to has many steps:

* Consider breaking a step out into an individual how-to and linking to it.
* Include subheadings. They help readers grasp what’s coming and return where they left off.

## Why write how-tos when there’s Stack Overflow, Reddit, Gitter…?

* We have authoritative answers.
* How-tos make the site less forbidding to non-experts.
* How-tos bring people into the site and help them discover other information that’s here.
* Creating how-tos helps us see NumPy usability through new eyes.

## Aren’t how-tos and tutorials the same thing?

People use the terms “how-to” and “tutorial” interchangeably, but we draw a distinction, following Daniele Procida’s [taxonomy of documentation](https://documentation.divio.com/).

Documentation needs to meet users where they are. _How-tos_ offer get-it-done information; the user wants steps to copy and doesn’t necessarily want to understand NumPy. _Tutorials_ are warm-fuzzy information; the user wants a feel for some aspect of NumPy (and again, may or may not care about deeper knowledge).
We distinguish both tutorials and how-tos from _Explanations_, which are deep dives intended to give understanding rather than immediate assistance, and _References_, which give complete, authoritative data on some concrete part of NumPy (like its API) but aren’t obligated to paint a broader picture.

For more on tutorials, see [Learn to write a NumPy tutorial](https://numpy.org/numpy-tutorials/content/tutorial-style-guide.html "\(in NumPy tutorials\)").

## Is this page an example of a how-to?

Yes – until the sections with question-mark headings; they explain rather than give directions. In a how-to, those would be links.

# How to index ndarrays

See also [Indexing on ndarrays](basics.indexing#basics-indexing)

This page tackles common examples. For an in-depth look into indexing, refer to [Indexing on ndarrays](basics.indexing#basics-indexing).

## Access specific/arbitrary rows and columns

Use [Basic indexing](basics.indexing#basic-indexing) features like [Slicing and striding](basics.indexing#slicing-and-striding), and [Dimensional indexing tools](basics.indexing#dimensional-indexing-tools).

```
>>> a = np.arange(30).reshape(2, 3, 5)
>>> a
array([[[ 0,  1,  2,  3,  4],
        [ 5,  6,  7,  8,  9],
        [10, 11, 12, 13, 14]],
       [[15, 16, 17, 18, 19],
        [20, 21, 22, 23, 24],
        [25, 26, 27, 28, 29]]])
>>> a[0, 2, :]
array([10, 11, 12, 13, 14])
>>> a[0, :, 3]
array([ 3,  8, 13])
```

Note that the output from indexing operations can have a different shape from the original object. To preserve the original dimensions after indexing, you can use [`newaxis`](../reference/constants#numpy.newaxis "numpy.newaxis"). To use other such tools, refer to [Dimensional indexing tools](basics.indexing#dimensional-indexing-tools).
>>> a[0, :, 3].shape (3,) >>> a[0, :, 3, np.newaxis].shape (3, 1) >>> a[0, :, 3, np.newaxis, np.newaxis].shape (3, 1, 1) Variables can also be used to index: >>> y = 0 >>> a[y, :, y+3] array([ 3, 8, 13]) Refer to [Dealing with variable numbers of indices within programs](basics.indexing#dealing-with-variable-indices) to see how to use [slice](https://docs.python.org/3/glossary.html#term-slice "\(in Python v3.13\)") and [`Ellipsis`](https://docs.python.org/3/library/constants.html#Ellipsis "\(in Python v3.13\)") in your index variables. ### Index columns To index columns, you have to index the last axis. Use [Dimensional indexing tools](basics.indexing#dimensional-indexing-tools) to get the desired number of dimensions: >>> a = np.arange(24).reshape(2, 3, 4) >>> a array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]) >>> a[..., 3] array([[ 3, 7, 11], [15, 19, 23]]) To index specific elements in each column, make use of [Advanced indexing](basics.indexing#advanced-indexing) as below: >>> arr = np.arange(3*4).reshape(3, 4) >>> arr array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> column_indices = [[1, 3], [0, 2], [2, 2]] >>> np.arange(arr.shape[0]) array([0, 1, 2]) >>> row_indices = np.arange(arr.shape[0])[:, np.newaxis] >>> row_indices array([[0], [1], [2]]) Use the `row_indices` and `column_indices` for advanced indexing: >>> arr[row_indices, column_indices] array([[ 1, 3], [ 4, 6], [10, 10]]) ### Index along a specific axis Use [`take`](../reference/generated/numpy.take#numpy.take "numpy.take"). See also [`take_along_axis`](../reference/generated/numpy.take_along_axis#numpy.take_along_axis "numpy.take_along_axis") and [`put_along_axis`](../reference/generated/numpy.put_along_axis#numpy.put_along_axis "numpy.put_along_axis"). 
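Where `take` applies one list of indices uniformly along an axis, `take_along_axis` matches an index array element-by-element along the chosen axis, which makes it the natural companion of `argsort`. A minimal sketch (the array values here are illustrative):

```python
import numpy as np

a = np.array([[10, 30, 20],
              [60, 40, 50]])

# Per row, the column order that would sort that row
order = np.argsort(a, axis=1)

# take_along_axis pairs `order` with `a` element-by-element along axis 1,
# so every row is reordered by its own index row
sorted_rows = np.take_along_axis(a, order, axis=1)
print(sorted_rows)  # [[10 20 30]
                    #  [40 50 60]]
```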
>>> a = np.arange(30).reshape(2, 3, 5) >>> a array([[[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]], [[15, 16, 17, 18, 19], [20, 21, 22, 23, 24], [25, 26, 27, 28, 29]]]) >>> np.take(a, [2, 3], axis=2) array([[[ 2, 3], [ 7, 8], [12, 13]], [[17, 18], [22, 23], [27, 28]]]) >>> np.take(a, [2], axis=1) array([[[10, 11, 12, 13, 14]], [[25, 26, 27, 28, 29]]]) ## Create subsets of larger matrices Use [Slicing and striding](basics.indexing#slicing-and-striding) to access chunks of a large array: >>> a = np.arange(100).reshape(10, 10) >>> a array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [10, 11, 12, 13, 14, 15, 16, 17, 18, 19], [20, 21, 22, 23, 24, 25, 26, 27, 28, 29], [30, 31, 32, 33, 34, 35, 36, 37, 38, 39], [40, 41, 42, 43, 44, 45, 46, 47, 48, 49], [50, 51, 52, 53, 54, 55, 56, 57, 58, 59], [60, 61, 62, 63, 64, 65, 66, 67, 68, 69], [70, 71, 72, 73, 74, 75, 76, 77, 78, 79], [80, 81, 82, 83, 84, 85, 86, 87, 88, 89], [90, 91, 92, 93, 94, 95, 96, 97, 98, 99]]) >>> a[2:5, 2:5] array([[22, 23, 24], [32, 33, 34], [42, 43, 44]]) >>> a[2:5, 1:3] array([[21, 22], [31, 32], [41, 42]]) >>> a[:5, :5] array([[ 0, 1, 2, 3, 4], [10, 11, 12, 13, 14], [20, 21, 22, 23, 24], [30, 31, 32, 33, 34], [40, 41, 42, 43, 44]]) The same thing can be done with advanced indexing in a slightly more complex way. 
Remember that [advanced indexing creates a copy](basics.copies#indexing-operations): >>> a[np.arange(5)[:, None], np.arange(5)[None, :]] array([[ 0, 1, 2, 3, 4], [10, 11, 12, 13, 14], [20, 21, 22, 23, 24], [30, 31, 32, 33, 34], [40, 41, 42, 43, 44]]) You can also use [`mgrid`](../reference/generated/numpy.mgrid#numpy.mgrid "numpy.mgrid") to generate indices: >>> indices = np.mgrid[0:6:2] >>> indices array([0, 2, 4]) >>> a[:, indices] array([[ 0, 2, 4], [10, 12, 14], [20, 22, 24], [30, 32, 34], [40, 42, 44], [50, 52, 54], [60, 62, 64], [70, 72, 74], [80, 82, 84], [90, 92, 94]]) ## Filter values ### Non-zero elements Use [`nonzero`](../reference/generated/numpy.nonzero#numpy.nonzero "numpy.nonzero") to get a tuple of array indices of non-zero elements corresponding to every dimension: >>> z = np.array([[1, 2, 3, 0], [0, 0, 5, 3], [4, 6, 0, 0]]) >>> z array([[1, 2, 3, 0], [0, 0, 5, 3], [4, 6, 0, 0]]) >>> np.nonzero(z) (array([0, 0, 0, 1, 1, 2, 2]), array([0, 1, 2, 2, 3, 0, 1])) Use [`flatnonzero`](../reference/generated/numpy.flatnonzero#numpy.flatnonzero "numpy.flatnonzero") to fetch indices of elements that are non-zero in the flattened version of the ndarray: >>> np.flatnonzero(z) array([0, 1, 2, 6, 7, 8, 9]) ### Arbitrary conditions Use [`where`](../reference/generated/numpy.where#numpy.where "numpy.where") to generate indices based on conditions and then use [Advanced indexing](basics.indexing#advanced-indexing). 
>>> a = np.arange(30).reshape(2, 3, 5) >>> indices = np.where(a % 2 == 0) >>> indices (array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]), array([0, 0, 0, 1, 1, 2, 2, 2, 0, 0, 1, 1, 1, 2, 2]), array([0, 2, 4, 1, 3, 0, 2, 4, 1, 3, 0, 2, 4, 1, 3])) >>> a[indices] array([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28]) Or, use [Boolean array indexing](basics.indexing#boolean-indexing): >>> a > 14 array([[[False, False, False, False, False], [False, False, False, False, False], [False, False, False, False, False]], [[ True, True, True, True, True], [ True, True, True, True, True], [ True, True, True, True, True]]]) >>> a[a > 14] array([15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]) ### Replace values after filtering Use assignment with filtering to replace desired values: >>> p = np.arange(-10, 10).reshape(2, 2, 5) >>> p array([[[-10, -9, -8, -7, -6], [ -5, -4, -3, -2, -1]], [[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9]]]) >>> q = p < 0 >>> q array([[[ True, True, True, True, True], [ True, True, True, True, True]], [[False, False, False, False, False], [False, False, False, False, False]]]) >>> p[q] = 0 >>> p array([[[0, 0, 0, 0, 0], [0, 0, 0, 0, 0]], [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]]) ## Fetch indices of max/min values Use [`argmax`](../reference/generated/numpy.argmax#numpy.argmax "numpy.argmax") and [`argmin`](../reference/generated/numpy.argmin#numpy.argmin "numpy.argmin"): >>> a = np.arange(30).reshape(2, 3, 5) >>> np.argmax(a) 29 >>> np.argmin(a) 0 Use the `axis` keyword to get the indices of maximum and minimum values along a specific axis: >>> np.argmax(a, axis=0) array([[1, 1, 1, 1, 1], [1, 1, 1, 1, 1], [1, 1, 1, 1, 1]]) >>> np.argmax(a, axis=1) array([[2, 2, 2, 2, 2], [2, 2, 2, 2, 2]]) >>> np.argmax(a, axis=2) array([[4, 4, 4], [4, 4, 4]]) >>> np.argmin(a, axis=1) array([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]) >>> np.argmin(a, axis=2) array([[0, 0, 0], [0, 0, 0]]) Set `keepdims` to `True` to keep the axes which are reduced in the result as 
dimensions with size one: >>> np.argmin(a, axis=2, keepdims=True) array([[[0], [0], [0]], [[0], [0], [0]]]) >>> np.argmax(a, axis=1, keepdims=True) array([[[2, 2, 2, 2, 2]], [[2, 2, 2, 2, 2]]]) To get the indices of each maximum or minimum value for each (N-1)-dimensional array in an N-dimensional array, use [`reshape`](../reference/generated/numpy.reshape#numpy.reshape "numpy.reshape") to reshape the array to a 2D array, apply [`argmax`](../reference/generated/numpy.argmax#numpy.argmax "numpy.argmax") or [`argmin`](../reference/generated/numpy.argmin#numpy.argmin "numpy.argmin") along `axis=1` and use [`unravel_index`](../reference/generated/numpy.unravel_index#numpy.unravel_index "numpy.unravel_index") to recover the index of the values per slice: >>> x = np.arange(2*2*3).reshape(2, 2, 3) % 7 # 3D example array >>> x array([[[0, 1, 2], [3, 4, 5]], [[6, 0, 1], [2, 3, 4]]]) >>> x_2d = np.reshape(x, (x.shape[0], -1)) >>> indices_2d = np.argmax(x_2d, axis=1) >>> indices_2d array([5, 0]) >>> np.unravel_index(indices_2d, x.shape[1:]) (array([1, 0]), array([2, 0])) The first array returned contains the indices along axis 1 in the original array, the second array contains the indices along axis 2. The highest value in `x[0]` is therefore `x[0, 1, 2]`. ## Index the same ndarray multiple times efficiently Keep in mind that basic indexing produces [views](../glossary#term-view), while advanced indexing produces [copies](../glossary#term-copy), which are computationally more expensive. Prefer basic indexing over advanced indexing wherever possible. ## Further reading Nicolas Rougier’s [100 NumPy exercises](https://github.com/rougier/numpy-100) provide a good insight into how indexing is combined with other operations. 
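The view-versus-copy point above can be checked directly with `np.shares_memory`; a small sketch:

```python
import numpy as np

a = np.arange(10)

view = a[2:5]        # basic indexing (a slice) -> view
copy = a[[2, 3, 4]]  # advanced indexing (an index list) -> copy

print(np.shares_memory(a, view))  # True: writing into `view` changes `a`
print(np.shares_memory(a, copy))  # False: `copy` has its own data
```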
Exercises [6](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#6-create-a-null-vector-of-size-10-but-the-fifth-value-which-is-1-), [8](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#8-reverse-a-vector-first-element-becomes-last-), [10](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#10-find-indices-of-non-zero-elements-from-120040-), [15](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#15-create-a-2d-array-with-1-on-the-border-and-0-inside-), [16](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#16-how-to-add-a-border-filled-with-0s-around-an-existing-array-), [19](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#19-create-a-8x8-matrix-and-fill-it-with-a-checkerboard-pattern-), [20](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#20-consider-a-678-shape-array-what-is-the-index-xyz-of-the-100th-element-), [45](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#45-create-random-vector-of-size-10-and-replace-the-maximum-value-by-0-), [59](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#59-how-to-sort-an-array-by-the-nth-column-), [64](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#64-consider-a-given-vector-how-to-add-1-to-each-element-indexed-by-a-second-vector-be-careful-with-repeated-indices-), [65](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#65-how-to-accumulate-elements-of-a-vector-x-to-an-array-f-based-on-an-index-list-i-), [70](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#70-consider-the-vector-1-2-3-4-5-how-to-build-a-new-vector-with-3-consecutive-zeros-interleaved-between-each-value-), [71](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#71-consider-an-array-of-dimension-553-how-to-mulitply-it-by-an-array-with-dimensions-55-), [72](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#72-how-to-swap-two-rows-of-an-array-), [76](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#76-consider-a-one-dimensional-array-z-build-a-two-dimensional-array-whose-first-row-is-z0z1z2-and-each-subsequent-row-is--shifted-by-1-last-row-should-be-z-3z-2z-1-), [80](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#80-consider-an-arbitrary-array-write-a-function-that-extract-a-subpart-with-a-fixed-shape-and-centered-on-a-given-element-pad-with-a-fill-value-when-necessary-), [81](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#81-consider-an-array-z--1234567891011121314-how-to-generate-an-array-r--1234-2345-3456--11121314-), [84](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#84-extract-all-the-contiguous-3x3-blocks-from-a-random-10x10-matrix-), [87](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#87-consider-a-16x16-array-how-to-get-the-block-sum-block-size-is-4x4-), [90](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#90-given-an-arbitrary-number-of-vectors-build-the-cartesian-product-every-combinations-of-every-item-), [93](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#93-consider-two-arrays-a-and-b-of-shape-83-and-22-how-to-find-rows-of-a-that-contain-elements-of-each-row-of-b-regardless-of-the-order-of-the-elements-in-b-), [94](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#94-considering-a-10x3-matrix-extract-rows-with-unequal-values-eg-223-) are specially focused on indexing. # Reading and writing files This page tackles common applications; for the full collection of I/O routines, see [Input and output](../reference/routines.io#routines-io). ## Reading text and [CSV](https://en.wikipedia.org/wiki/Comma-separated_values) files ### With no missing values Use [`numpy.loadtxt`](../reference/generated/numpy.loadtxt#numpy.loadtxt "numpy.loadtxt"). ### With missing values Use [`numpy.genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt"). [`numpy.genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") will either * return a [masked array](../reference/maskedarray.generic#maskedarray-generic) **masking out missing values** (if `usemask=True`), or * **fill in the missing value** with the value specified in `filling_values` (default is `np.nan` for float, -1 for int). #### With non-whitespace delimiters >>> with open("csv.txt", "r") as f: ... print(f.read()) 1, 2, 3 4,, 6 7, 8, 9 ##### Masked-array output >>> np.genfromtxt("csv.txt", delimiter=",", usemask=True) masked_array( data=[[1.0, 2.0, 3.0], [4.0, --, 6.0], [7.0, 8.0, 9.0]], mask=[[False, False, False], [False, True, False], [False, False, False]], fill_value=1e+20) ##### Array output >>> np.genfromtxt("csv.txt", delimiter=",") array([[ 1., 2., 3.], [ 4., nan, 6.], [ 7., 8., 9.]]) ##### Array output, specified fill-in value >>> np.genfromtxt("csv.txt", delimiter=",", dtype=np.int8, filling_values=99) array([[ 1, 2, 3], [ 4, 99, 6], [ 7, 8, 9]], dtype=int8) #### Whitespace-delimited [`numpy.genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") can also parse whitespace-delimited data files that have missing values if * **Each field has a fixed width**: Use the width as the `delimiter` argument: # File with width=4. 
The data does not have to be justified (for example, # the 2 in row 1), the last column can be less than width (for example, the 6 # in row 2), and no delimiting character is required (for instance 8888 and 9 # in row 3) >>> with open("fixedwidth.txt", "r") as f: ... data = (f.read()) >>> print(data) 1 2 3 44 6 7 88889 # Showing spaces as ^ >>> print(data.replace(" ","^")) 1^^^2^^^^^^3 44^^^^^^6 7^^^88889 >>> np.genfromtxt("fixedwidth.txt", delimiter=4) array([[1.000e+00, 2.000e+00, 3.000e+00], [4.400e+01, nan, 6.000e+00], [7.000e+00, 8.888e+03, 9.000e+00]]) * **A special value (e.g. “x”) indicates a missing field**: Use it as the `missing_values` argument. >>> with open("nan.txt", "r") as f: ... print(f.read()) 1 2 3 44 x 6 7 8888 9 >>> np.genfromtxt("nan.txt", missing_values="x") array([[1.000e+00, 2.000e+00, 3.000e+00], [4.400e+01, nan, 6.000e+00], [7.000e+00, 8.888e+03, 9.000e+00]]) * **You want to skip the rows with missing values**: Set `invalid_raise=False`. >>> with open("skip.txt", "r") as f: ... print(f.read()) 1 2 3 44 6 7 888 9 >>> np.genfromtxt("skip.txt", invalid_raise=False) __main__:1: ConversionWarning: Some errors were detected ! Line #2 (got 2 columns instead of 3) array([[ 1., 2., 3.], [ 7., 888., 9.]]) * **The delimiter whitespace character is different from the whitespace that indicates missing data**. For instance, if columns are delimited by `\t`, then missing data will be recognized if it consists of one or more spaces: >>> with open("tabs.txt", "r") as f: ... data = (f.read()) >>> print(data) 1 2 3 44 6 7 888 9 # Tabs vs. spaces >>> print(data.replace("\t","^")) 1^2^3 44^ ^6 7^888^9 >>> np.genfromtxt("tabs.txt", delimiter="\t", missing_values=" +") array([[ 1., 2., 3.], [ 44., nan, 6.], [ 7., 888., 9.]]) ## Read a file in .npy or .npz format Choices: * Use [`numpy.load`](../reference/generated/numpy.load#numpy.load "numpy.load"). 
It can read files generated by any of [`numpy.save`](../reference/generated/numpy.save#numpy.save "numpy.save"), [`numpy.savez`](../reference/generated/numpy.savez#numpy.savez "numpy.savez"), or [`numpy.savez_compressed`](../reference/generated/numpy.savez_compressed#numpy.savez_compressed "numpy.savez_compressed"). * Use memory mapping. See [`numpy.lib.format.open_memmap`](../reference/generated/numpy.lib.format.open_memmap#numpy.lib.format.open_memmap "numpy.lib.format.open_memmap"). ## Write to a file to be read back by NumPy ### Binary Use [`numpy.save`](../reference/generated/numpy.save#numpy.save "numpy.save"), or to store multiple arrays [`numpy.savez`](../reference/generated/numpy.savez#numpy.savez "numpy.savez") or [`numpy.savez_compressed`](../reference/generated/numpy.savez_compressed#numpy.savez_compressed "numpy.savez_compressed"). For security and portability, set `allow_pickle=False` unless the dtype contains Python objects, which requires pickling. Masked arrays [`can't currently be saved`](../reference/generated/numpy.ma.maskedarray.tofile#numpy.ma.MaskedArray.tofile "numpy.ma.MaskedArray.tofile"), nor can other arbitrary array subclasses. ### Human-readable [`numpy.save`](../reference/generated/numpy.save#numpy.save "numpy.save") and [`numpy.savez`](../reference/generated/numpy.savez#numpy.savez "numpy.savez") create binary files. To **write a human-readable file**, use [`numpy.savetxt`](../reference/generated/numpy.savetxt#numpy.savetxt "numpy.savetxt"). The array can only be 1- or 2-dimensional, and there’s no `savetxtz` for multiple files. ### Large arrays See Write or read large arrays. ## Read an arbitrarily formatted binary file (“binary blob”) Use a [structured array](basics.rec). 
**Example:** The `.wav` file header is a 44-byte block preceding `data_size` bytes of the actual sound data: chunk_id "RIFF" chunk_size 4-byte unsigned little-endian integer format "WAVE" fmt_id "fmt " fmt_size 4-byte unsigned little-endian integer audio_fmt 2-byte unsigned little-endian integer num_channels 2-byte unsigned little-endian integer sample_rate 4-byte unsigned little-endian integer byte_rate 4-byte unsigned little-endian integer block_align 2-byte unsigned little-endian integer bits_per_sample 2-byte unsigned little-endian integer data_id "data" data_size 4-byte unsigned little-endian integer The `.wav` file header as a NumPy structured dtype:

    wav_header_dtype = np.dtype([
        ("chunk_id", (bytes, 4)),   # flexible-sized scalar type, item size 4
        ("chunk_size", "<u4"),      # little-endian unsigned 32-bit integer
        ("format", "S4"),           # 4-byte string
        ("fmt_id", "S4"),
        ("fmt_size", "<u4"),
        ("audio_fmt", "<u2"),       # little-endian unsigned 16-bit integer
        ("num_channels", "<u2"),
        ("sample_rate", "<u4"),
        ("byte_rate", "<u4"),
        ("block_align", "<u2"),
        ("bits_per_sample", "<u2"),
        ("data_id", "S4"),
        ("data_size", "<u4"),
    ])

Reading one item with this dtype (for example with [`numpy.fromfile`](../reference/generated/numpy.fromfile#numpy.fromfile "numpy.fromfile") and `count=1`) returns the header as a structured scalar whose fields are accessible by name. # How to create arrays with regularly-spaced values ## 1D domains (intervals) ### `arange` * **Use [`numpy.arange`](../reference/generated/numpy.arange#numpy.arange "numpy.arange") if you want integer steps.** `numpy.arange` relies on step size to determine how many elements are in the returned array. Example: >>> np.arange(0, 10, 2) # np.arange(start, stop, step) array([0, 2, 4, 6, 8]) The arguments `start` and `stop` should be integer or real, but not complex numbers. [`numpy.arange`](../reference/generated/numpy.arange#numpy.arange "numpy.arange") is similar to the Python built-in [`range`](https://docs.python.org/3/library/stdtypes.html#range "\(in Python v3.13\)"). Floating-point inaccuracies can make `arange` results with floating-point numbers confusing. In this case, you should use [`numpy.linspace`](../reference/generated/numpy.linspace#numpy.linspace "numpy.linspace") instead. * **Use** [`numpy.linspace`](../reference/generated/numpy.linspace#numpy.linspace "numpy.linspace") **if you want the endpoint to be included in the result, or if you are using a non-integer step size.** [`numpy.linspace`](../reference/generated/numpy.linspace#numpy.linspace "numpy.linspace") _can_ include the endpoint and determines step size from the `num` argument, which specifies the number of elements in the returned array. The inclusion of the endpoint is determined by an optional boolean argument `endpoint`, which defaults to `True`. 
Note that selecting `endpoint=False` changes the step size computation, and with it the output of the function. Example: >>> np.linspace(0.1, 0.2, num=5) # np.linspace(start, stop, num) array([0.1 , 0.125, 0.15 , 0.175, 0.2 ]) >>> np.linspace(0.1, 0.2, num=5, endpoint=False) array([0.1, 0.12, 0.14, 0.16, 0.18]) [`numpy.linspace`](../reference/generated/numpy.linspace#numpy.linspace "numpy.linspace") can also be used with complex arguments: >>> np.linspace(1+1.j, 4, 5, dtype=np.complex64) array([1. +1.j , 1.75+0.75j, 2.5 +0.5j , 3.25+0.25j, 4. +0.j ], dtype=complex64) ### Other examples 1. Unexpected results may happen if floating point values are used as `step` in `numpy.arange`. To avoid this, make sure all floating point conversion happens after the computation of results. For example, replace >>> list(np.arange(0.1,0.4,0.1).round(1)) [0.1, 0.2, 0.3, 0.4] # endpoint should not be included! with >>> list(np.arange(1, 4, 1) / 10.0) [0.1, 0.2, 0.3] # expected result 2. Note that >>> np.arange(0, 1.12, 0.04) array([0. , 0.04, 0.08, 0.12, 0.16, 0.2 , 0.24, 0.28, 0.32, 0.36, 0.4 , 0.44, 0.48, 0.52, 0.56, 0.6 , 0.64, 0.68, 0.72, 0.76, 0.8 , 0.84, 0.88, 0.92, 0.96, 1. , 1.04, 1.08, 1.12]) and >>> np.arange(0, 1.08, 0.04) array([0. , 0.04, 0.08, 0.12, 0.16, 0.2 , 0.24, 0.28, 0.32, 0.36, 0.4 , 0.44, 0.48, 0.52, 0.56, 0.6 , 0.64, 0.68, 0.72, 0.76, 0.8 , 0.84, 0.88, 0.92, 0.96, 1. , 1.04]) These differ because of numeric noise. When using floating point values, it is possible that `0 + 0.04 * 28 < 1.12`, and so `1.12` is in the interval. In fact, this is exactly the case: >>> 1.12/0.04 28.000000000000004 But `0 + 0.04 * 27 >= 1.08` so that 1.08 is excluded: >>> 1.08/0.04 27.0 Alternatively, you could use `np.arange(0, 28)*0.04` which would always give you precise control of the end point since it is integral: >>> np.arange(0, 28)*0.04 array([0. , 0.04, 0.08, 0.12, 0.16, 0.2 , 0.24, 0.28, 0.32, 0.36, 0.4 , 0.44, 0.48, 0.52, 0.56, 0.6 , 0.64, 0.68, 0.72, 0.76, 0.8 , 0.84, 0.88, 0.92, 0.96, 1. , 1.04, 1.08]) ### `geomspace` and `logspace` `numpy.geomspace` is similar to `numpy.linspace`, but with numbers spaced evenly on a log scale (a geometric progression). The endpoint is included in the result. Example: >>> np.geomspace(2, 3, num=5) array([2. , 2.21336384, 2.44948974, 2.71080601, 3. ]) `numpy.logspace` is similar to `numpy.geomspace`, but with the start and end points specified as logarithms (with base 10 as default): >>> np.logspace(2, 3, num=5) array([ 100. , 177.827941 , 316.22776602, 562.34132519, 1000. ]) In linear space, the sequence starts at `base ** start` (`base` to the power of `start`) and ends with `base ** stop`: >>> np.logspace(2, 3, num=5, base=2) array([4. , 4.75682846, 5.65685425, 6.72717132, 8. ]) ## N-D domains N-D domains can be partitioned into _grids_. This can be done using one of the following functions. ### `meshgrid` The purpose of `numpy.meshgrid` is to create a rectangular grid out of a set of one-dimensional coordinate arrays. Given arrays: >>> x = np.array([0, 1, 2, 3]) >>> y = np.array([0, 1, 2, 3, 4, 5]) `meshgrid` will create two coordinate arrays, which can be used to generate the coordinate pairs determining this grid: >>> xx, yy = np.meshgrid(x, y) >>> xx array([[0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3]]) >>> yy array([[0, 0, 0, 0], [1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4], [5, 5, 5, 5]]) >>> import matplotlib.pyplot as plt >>> plt.plot(xx, yy, marker='.', color='k', linestyle='none') ### `mgrid` `numpy.mgrid` can be used as a shortcut for creating meshgrids. It is not a function, but when indexed, returns a multidimensional meshgrid. 
>>> xx, yy = np.meshgrid(np.array([0, 1, 2, 3]), np.array([0, 1, 2, 3, 4, 5])) >>> xx.T, yy.T (array([[0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]]), array([[0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5]])) >>> np.mgrid[0:4, 0:6] array([[[0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]], [[0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5]]]) ### `ogrid` Similar to `numpy.mgrid`, `numpy.ogrid` returns an _open_ multidimensional meshgrid. This means that when it is indexed, only one dimension of each returned array is greater than 1. This avoids repeating the data and thus saves memory, which is often desirable. These sparse coordinate grids are intended to be used with [Broadcasting](https://scipy-lectures.org/intro/numpy/operations.html#broadcasting "\(in Scipy lecture notes v2022.1\)"). When all coordinates are used in an expression, broadcasting still leads to a fully-dimensional result array. >>> np.ogrid[0:4, 0:6] (array([[0], [1], [2], [3]]), array([[0, 1, 2, 3, 4, 5]])) All three methods described here can be used to evaluate function values on a grid. >>> g = np.ogrid[0:4, 0:6] >>> zg = np.sqrt(g[0]**2 + g[1]**2) >>> g[0].shape, g[1].shape, zg.shape ((4, 1), (1, 6), (4, 6)) >>> m = np.mgrid[0:4, 0:6] >>> zm = np.sqrt(m[0]**2 + m[1]**2) >>> np.array_equal(zm, zg) True # Verifying bugs and bug fixes in NumPy In this how-to you will learn how to: * Verify the existence of a bug in NumPy * Verify the fix, if any, made for the bug While you walk through the verification process, you will learn how to: * Set up a Python virtual environment (using `virtualenv`) * Install appropriate versions of NumPy, first to see the bug in action, then to verify its fix [Issue 16354](https://github.com/numpy/numpy/issues/16354) is used as an example. 
This issue was: **Title**: _np.polymul return type is np.float64 or np.complex128 when given an all-zero argument_ _np.polymul returns an object with type np.float64 when one argument is all zero, and both arguments have type np.int64 or np.float32. Something similar happens with all zero np.complex64 giving result type np.complex128._ _This doesn’t happen with non-zero arguments; there the result is as expected._ _This bug isn’t present in np.convolve._ **Reproducing code example**: >>> import numpy as np >>> np.__version__ '1.18.4' >>> a = np.array([1,2,3]) >>> z = np.array([0,0,0]) >>> np.polymul(a.astype(np.int64), a.astype(np.int64)).dtype dtype('int64') >>> np.polymul(a.astype(np.int64), z.astype(np.int64)).dtype dtype('float64') >>> np.polymul(a.astype(np.float32), z.astype(np.float32)).dtype dtype('float64') >>> np.polymul(a.astype(np.complex64), z.astype(np.complex64)).dtype dtype('complex128') Numpy/Python version information: >>> import sys, numpy; print(numpy.__version__, sys.version) 1.18.4 3.7.5 (default, Nov 7 2019, 10:50:52) [GCC 8.3.0] ## 1\. Set up a virtual environment Create a new directory, enter into it, and set up a virtual environment using your preferred method. For example, this is how to do it using `virtualenv` on Linux or macOS: virtualenv venv_np_bug source venv_np_bug/bin/activate This ensures the system/global/default Python/NumPy installation will not be altered. ## 2\. Install the NumPy version in which the bug was reported The report references NumPy version 1.18.4, so that is the version you need to install in this case. Since this bug is tied to a release and not a specific commit, a pre-built wheel installed in your virtual environment via `pip` will suffice: pip install numpy==1.18.4 Some bugs may require you to build the NumPy version referenced in the issue report. To learn how to do that, visit [Building from source](../building/index#building-from-source). ## 3\. Reproduce the bug The issue reported in [#16354](https://github.com/numpy/numpy/issues/16354) is that the wrong `dtype` is returned if one of the inputs of the method [`numpy.polymul`](../reference/generated/numpy.polymul#numpy.polymul "numpy.polymul") is a zero array. To reproduce the bug, start a Python terminal, enter the code snippet shown in the bug report, and ensure that the results match those in the issue: >>> import numpy as np >>> np.__version__ '...' # 1.18.4 >>> a = np.array([1,2,3]) >>> z = np.array([0,0,0]) >>> np.polymul(a.astype(np.int64), a.astype(np.int64)).dtype dtype('int64') >>> np.polymul(a.astype(np.int64), z.astype(np.int64)).dtype dtype('...') # float64 >>> np.polymul(a.astype(np.float32), z.astype(np.float32)).dtype dtype('...') # float64 >>> np.polymul(a.astype(np.complex64), z.astype(np.complex64)).dtype dtype('...') # complex128 As reported, whenever the zero array, `z` in the example above, is one of the arguments to [`numpy.polymul`](../reference/generated/numpy.polymul#numpy.polymul "numpy.polymul"), an incorrect `dtype` is returned. ## 4\. Check for fixes in the latest version of NumPy If the issue report for your bug has not yet been resolved, further action or patches need to be submitted. In this case, however, the issue was resolved by [PR 17577](https://github.com/numpy/numpy/pull/17577) and is now closed. So you can try to verify the fix. To verify the fix: 1. Uninstall the version of NumPy in which the bug still exists: pip uninstall numpy 2. Install the latest version of NumPy: pip install numpy 3. In your Python terminal, run the reported code snippet you used to verify the existence of the bug and confirm that the issue has been resolved: >>> import numpy as np >>> np.__version__ '...' 
# latest version >>> a = np.array([1,2,3]) >>> z = np.array([0,0,0]) >>> np.polymul(a.astype(np.int64), a.astype(np.int64)).dtype dtype('int64') >>> np.polymul(a.astype(np.int64), z.astype(np.int64)).dtype dtype('int64') >>> np.polymul(a.astype(np.float32), z.astype(np.float32)).dtype dtype('float32') >>> np.polymul(a.astype(np.complex64), z.astype(np.complex64)).dtype dtype('complex64') Note that the correct `dtype` is now returned even when a zero array is one of the arguments to [`numpy.polymul`](../reference/generated/numpy.polymul#numpy.polymul "numpy.polymul"). ## 5\. Support NumPy development by verifying and fixing bugs Go to the [NumPy GitHub issues page](https://github.com/numpy/numpy/issues) and see if you can reproduce any other bugs which have not been confirmed yet. In particular, it is useful for the developers to know if a bug can be reproduced on a newer version of NumPy. Comments verifying the existence of bugs alert the NumPy developers that more than one user can reproduce the issue. # NumPy how-tos These documents are intended as recipes for common tasks using NumPy. For detailed reference documentation of the functions and classes contained in the package, see the [API reference](../reference/index#reference). * [How to write a NumPy how-to](how-to-how-to) * [Reading and writing files](how-to-io) * [How to index `ndarrays`](how-to-index) * [Verifying bugs and bug fixes in NumPy](how-to-verify-bug) * [How to create arrays with regularly-spaced values](how-to-partition) # NumPy user guide This guide is an overview and explains the important features; details are found in [NumPy reference](../reference/index#reference). 
Getting started * [What is NumPy?](whatisnumpy) * [Installation](https://numpy.org/install/) * [NumPy quickstart](quickstart) * [NumPy: the absolute basics for beginners](absolute_beginners) Fundamentals and usage * [NumPy fundamentals](basics) * [Array creation](basics.creation) * [Indexing on `ndarrays`](basics.indexing) * [I/O with NumPy](basics.io) * [Data types](basics.types) * [Broadcasting](basics.broadcasting) * [Copies and views](basics.copies) * [Working with Arrays of Strings And Bytes](basics.strings) * [Structured arrays](basics.rec) * [Universal functions (`ufunc`) basics](basics.ufuncs) * [NumPy for MATLAB users](numpy-for-matlab-users) * [NumPy tutorials](https://numpy.org/numpy-tutorials/) * [NumPy how-tos](howtos_index) Advanced usage and interoperability * [Using NumPy C-API](c-info) * [F2PY user guide and reference manual](../f2py/index) * [Under-the-hood documentation for developers](../dev/underthehood) * [Interoperability with NumPy](basics.interoperability) # NumPy for MATLAB users ## Introduction MATLAB® and NumPy have a lot in common, but NumPy was created to work with Python, not to be a MATLAB clone. This guide will help MATLAB users get started with NumPy. ## Some key differences In MATLAB, the basic type, even for scalars, is a multidimensional array. Array assignments in MATLAB are stored as 2D arrays of double precision floating point numbers, unless you specify the number of dimensions and type. Operations on the 2D instances of these arrays are modeled on matrix operations in linear algebra. | In NumPy, the basic type is a multidimensional `array`. Array assignments in NumPy are usually stored as [n-dimensional arrays](../reference/arrays#arrays) with the minimum type required to hold the objects in sequence, unless you specify the number of dimensions and type. NumPy performs operations element-by-element, so multiplying 2D arrays with `*` is not a matrix multiplication – it’s an element-by-element multiplication. 
(The `@` operator, available since Python 3.5, can be used for conventional matrix multiplication.)

MATLAB numbers indices from 1; `a(1)` is the first element. See note INDEXING | NumPy, like Python, numbers indices from 0; `a[0]` is the first element.

MATLAB’s scripting language was created for linear algebra, so the syntax for some array manipulations is more compact than NumPy’s. On the other hand, the API for adding GUIs and creating full-fledged applications is more or less an afterthought. | NumPy is based on Python, a general-purpose language. The advantage to NumPy is access to Python libraries including: [SciPy](https://www.scipy.org/), [Matplotlib](https://matplotlib.org/), [Pandas](https://pandas.pydata.org/), [OpenCV](https://opencv.org/), and more. In addition, Python is often [embedded as a scripting language](https://en.wikipedia.org/wiki/List_of_Python_software#Embedded_as_a_scripting_language) in other software, allowing NumPy to be used there too.

MATLAB array slicing uses pass-by-value semantics, with a lazy copy-on-write scheme to prevent creating copies until they are needed. Slicing operations copy parts of the array. | NumPy array slicing uses pass-by-reference semantics, which does not copy the arguments. Slicing operations are views into an array.

## Rough equivalents

The table below gives rough equivalents for some common MATLAB expressions. These are similar expressions, not equivalents. For details, see the [documentation](../reference/index#reference). In the table below, it is assumed that you have executed the following commands in Python:

    import numpy as np
    from scipy import io, integrate, linalg, signal
    from scipy.sparse.linalg import cg, eigs

Also assume below that where the Notes mention “matrix”, the arguments are two-dimensional entities.
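The slicing difference described under “Some key differences” is easy to check directly; a minimal sketch:

```python
import numpy as np

a = np.arange(6)      # array([0, 1, 2, 3, 4, 5])
view = a[1:4]         # NumPy slices are views into the original data
view[0] = 99
print(a[1])           # 99 -- the original array was modified through the view

b = a[1:4].copy()     # an explicit .copy() gives MATLAB-like value semantics
b[0] = -1
print(a[1])           # still 99 -- the original array is untouched
```

If MATLAB-style copy semantics are needed, calling `.copy()` on the slice is the idiomatic translation.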
### General purpose equivalents

MATLAB | NumPy | Notes
---|---|---
`help func` | `info(func)` or `help(func)` or `func?` (in IPython) | get help on the function _func_
`which func` | see note HELP | find out where _func_ is defined
`type func` | `np.source(func)` or `func??` (in IPython) | print source for _func_ (if not a native function)
`% comment` | `# comment` | comment a line of code with the text `comment`
`for i=1:3 fprintf('%i\n',i) end` | `for i in range(1, 4): print(i)` | use a for-loop to print the numbers 1, 2, and 3 using [`range`](https://docs.python.org/3/library/stdtypes.html#range "\(in Python v3.13\)")
`a && b` | `a and b` | short-circuiting logical AND operator ([Python native operator](https://docs.python.org/3/library/stdtypes.html#boolean "\(in Python v3.13\)")); scalar arguments only
`a || b` | `a or b` | short-circuiting logical OR operator ([Python native operator](https://docs.python.org/3/library/stdtypes.html#boolean "\(in Python v3.13\)")); scalar arguments only
`4 == 4` gives `ans = 1`; `4 == 5` gives `ans = 0` | `4 == 4` gives `True`; `4 == 5` gives `False` | The [boolean objects](https://docs.python.org/3/library/stdtypes.html#bltin-boolean-values "\(in Python v3.13\)") in Python are `True` and `False`, as opposed to MATLAB logical types of `1` and `0`.
`a=4 if a==4 fprintf('a = 4\n') elseif a==5 fprintf('a = 5\n') end` | `a = 4` then `if a == 4: print('a = 4') elif a == 5: print('a = 5')` | create an if-else statement to check if `a` is 4 or 5 and print result
`1*i`, `1*j`, `1i`, `1j` | `1j` | complex numbers
`eps` | `np.finfo(float).eps` or `np.spacing(1)` | distance from 1 to the next larger representable real number in double precision
`load data.mat` | `io.loadmat('data.mat')` | Load MATLAB variables saved to the file `data.mat`. (Note: When saving arrays to `data.mat` in MATLAB/Octave, use a recent binary format. [`scipy.io.loadmat`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.loadmat.html#scipy.io.loadmat "\(in SciPy v1.14.1\)") will create a dictionary with the saved arrays and further information.)
`ode45` | `integrate.solve_ivp(f)` | integrate an ODE with Runge-Kutta 4,5
`ode15s` | `integrate.solve_ivp(f, method='BDF')` | integrate an ODE with BDF method

### Linear algebra equivalents

MATLAB | NumPy | Notes
---|---|---
`ndims(a)` | `np.ndim(a)` or `a.ndim` | number of dimensions of array `a`
`numel(a)` | `np.size(a)` or `a.size` | number of elements of array `a`
`size(a)` | `np.shape(a)` or `a.shape` | “size” of array `a`
`size(a,n)` | `a.shape[n-1]` | get the number of elements of the n-th dimension of array `a`. (Note that MATLAB uses 1 based indexing while Python uses 0 based indexing, see note INDEXING)
`[ 1 2 3; 4 5 6 ]` | `np.array([[1., 2., 3.], [4., 5., 6.]])` | define a 2x3 2D array
`[ a b; c d ]` | `np.block([[a, b], [c, d]])` | construct a matrix from blocks `a`, `b`, `c`, and `d`
`a(end)` | `a[-1]` | access last element in MATLAB vector (1xn or nx1) or 1D NumPy array `a` (length n)
`a(2,5)` | `a[1, 4]` | access element in second row, fifth column in 2D array `a`
`a(2,:)` | `a[1]` or `a[1, :]` | entire second row of 2D array `a`
`a(1:5,:)` | `a[0:5]` or `a[:5]` or `a[0:5, :]` | first 5 rows of 2D array `a`
`a(end-4:end,:)` | `a[-5:]` | last 5 rows of 2D array `a`
`a(1:3,5:9)` | `a[0:3, 4:9]` | The first through third rows and fifth through ninth columns of a 2D array, `a`.
`a([2,4,5],[1,3])` | `a[np.ix_([1, 3, 4], [0, 2])]` | rows 2, 4 and 5 and columns 1 and 3. This allows the matrix to be modified, and doesn’t require a regular slice.
`a(3:2:21,:)` | `a[2:21:2,:]` | every other row of `a`, starting with the third and going to the twenty-first
`a(1:2:end,:)` | `a[::2, :]` | every other row of `a`, starting with the first
`a(end:-1:1,:)` or `flipud(a)` | `a[::-1,:]` | `a` with rows in reverse order
`a([1:end 1],:)` | `a[np.r_[:len(a),0]]` | `a` with copy of the first row appended to the end
`a.'` | `a.transpose()` or `a.T` | transpose of `a`
`a'` | `a.conj().transpose()` or `a.conj().T` | conjugate transpose of `a`
`a * b` | `a @ b` | matrix multiply
`a .* b` | `a * b` | element-wise multiply
`a./b` | `a/b` | element-wise divide
`a.^3` | `a**3` | element-wise exponentiation
`(a > 0.5)` | `(a > 0.5)` | matrix whose i,jth element is (a_ij > 0.5). The MATLAB result is an array of logical values 0 and 1. The NumPy result is an array of the boolean values `False` and `True`.
`find(a > 0.5)` | `np.nonzero(a > 0.5)` | find the indices where (`a` > 0.5)
`a(:,find(v > 0.5))` | `a[:, np.nonzero(v > 0.5)[0]]` | extract the columns of `a` where vector v > 0.5
`a(:,find(v>0.5))` | `a[:, v.T > 0.5]` | extract the columns of `a` where column vector v > 0.5
`a(a<0.5)=0` | `a[a < 0.5] = 0` | `a` with elements less than 0.5 zeroed out
`a .* (a>0.5)` | `a * (a > 0.5)` | `a` with elements less than 0.5 zeroed out
`a(:) = 3` | `a[:] = 3` | set all values to the same scalar value
`y=x` | `y = x.copy()` | NumPy assigns by reference
`y=x(2,:)` | `y = x[1, :].copy()` | NumPy slices are by reference
`y=x(:)` | `y = x.flatten()` | turn array into vector (note that this forces a copy). To obtain the same data ordering as in MATLAB, use `x.flatten('F')`.
`1:10` | `np.arange(1., 11.)` or `np.r_[1.:11.]` or `np.r_[1:10:10j]` | create an increasing vector (see note RANGES)
`0:9` | `np.arange(10.)` or `np.r_[:10.]` or `np.r_[:9:10j]` | create an increasing vector (see note RANGES)
`[1:10]'` | `np.arange(1.,11.)[:, np.newaxis]` | create a column vector
`zeros(3,4)` | `np.zeros((3, 4))` | 3x4 two-dimensional array full of 64-bit floating point zeros
`zeros(3,4,5)` | `np.zeros((3, 4, 5))` | 3x4x5 three-dimensional array full of 64-bit floating point zeros
`ones(3,4)` | `np.ones((3, 4))` | 3x4 two-dimensional array full of 64-bit floating point ones
`eye(3)` | `np.eye(3)` | 3x3 identity matrix
`diag(a)` | `np.diag(a)` | returns a vector of the diagonal elements of 2D array, `a`
`diag(v,0)` | `np.diag(v, 0)` | returns a square diagonal matrix whose nonzero values are the elements of vector, `v`
`rng(42,'twister'); rand(3,4)` | `from numpy.random import default_rng; rng = default_rng(42); rng.random((3, 4))` or older version: `np.random.rand(3, 4)` | generate a random 3x4 array with default random number generator and seed = 42
`linspace(1,3,4)` | `np.linspace(1,3,4)` | 4 equally spaced samples between 1 and 3, inclusive
`[x,y]=meshgrid(0:8,0:5)` | `np.mgrid[0:9.,0:6.]` or `np.meshgrid(np.r_[0:9.], np.r_[0:6.])` | two 2D arrays: one of x values, the other of y values
| `np.ogrid[0:9.,0:6.]` or `np.ix_(np.r_[0:9.], np.r_[0:6.])` | the best way to eval functions on a grid
`[x,y]=meshgrid([1,2,4],[2,4,5])` | `np.meshgrid([1,2,4],[2,4,5])` |
| `np.ix_([1,2,4],[2,4,5])` | the best way to eval functions on a grid
`repmat(a, m, n)` | `np.tile(a, (m, n))` | create m by n copies of `a`
`[a b]` | `np.concatenate((a,b),1)` or `np.hstack((a,b))` or `np.column_stack((a,b))` or `np.c_[a,b]` | concatenate columns of `a` and `b`
`[a; b]` | `np.concatenate((a,b))` or `np.vstack((a,b))` or `np.r_[a,b]` | concatenate rows of `a` and `b`
`max(max(a))` | `a.max()` or `np.nanmax(a)` | maximum element of `a` (with ndims(a)<=2 for MATLAB; if there are NaN’s, `nanmax` will ignore these and return largest value)
`max(a)` | `a.max(0)` | maximum element of each column of array `a`
`max(a,[],2)` | `a.max(1)` | maximum element of each row of array `a`
`max(a,b)` | `np.maximum(a, b)` | compares `a` and `b` element-wise, and returns the maximum value from each pair
`norm(v)` | `np.sqrt(v @ v)` or `np.linalg.norm(v)` | L2 norm of vector `v`
`a & b` | `np.logical_and(a, b)` | element-by-element AND operator (NumPy ufunc) See note LOGICOPS
`a | b` | `np.logical_or(a, b)` | element-by-element OR operator (NumPy ufunc) See note LOGICOPS
`bitand(a,b)` | `a & b` | bitwise AND operator (Python native and NumPy ufunc)
`bitor(a,b)` | `a | b` | bitwise OR operator (Python native and NumPy ufunc)
`inv(a)` | `linalg.inv(a)` | inverse of square 2D array `a`
`pinv(a)` | `linalg.pinv(a)` | pseudo-inverse of 2D array `a`
`rank(a)` | `np.linalg.matrix_rank(a)` | matrix rank of a 2D array `a`
`a\b` | `linalg.solve(a, b)` if `a` is square; `linalg.lstsq(a, b)` otherwise | solution of a x = b for x
`b/a` | Solve `a.T x.T = b.T` instead | solution of x a = b for x
`[U,S,V]=svd(a)` | `U, S, Vh = linalg.svd(a); V = Vh.T` | singular value decomposition of `a`
`chol(a)` | `linalg.cholesky(a)` | Cholesky factorization of a 2D array
`[V,D]=eig(a)` | `D,V = linalg.eig(a)` | eigenvalues \\(\lambda\\) and eigenvectors \\(v\\) of `a`, where \\(\mathbf{a} v = \lambda v\\)
`[V,D]=eig(a,b)` | `D,V = linalg.eig(a, b)` | eigenvalues \\(\lambda\\) and eigenvectors \\(v\\) of `a`, `b` where \\(\mathbf{a} v = \lambda \mathbf{b} v\\)
`[V,D]=eigs(a,3)` | `D,V = eigs(a, k=3)` | find the `k=3` largest eigenvalues and eigenvectors of 2D array, `a`
`[Q,R]=qr(a,0)` | `Q,R = linalg.qr(a)` | QR decomposition
`[L,U,P]=lu(a)` where `a==P'*L*U` | `P,L,U = linalg.lu(a)` where `a == P@L@U` | LU decomposition with partial pivoting (note: P(MATLAB) == transpose(P(NumPy)))
`conjgrad` | `cg` | conjugate gradients solver
`fft(a)` | `np.fft.fft(a)` | Fourier transform of `a`
`ifft(a)` | `np.fft.ifft(a)` | inverse Fourier transform of `a`
`sort(a)` | `np.sort(a)` or `a.sort(axis=0)` | sort each column of a 2D array, `a`
`sort(a, 2)` | `np.sort(a, axis=1)` or `a.sort(axis=1)` | sort each row of 2D array, `a`
`[b,I]=sortrows(a,1)` | `I = np.argsort(a[:, 0]); b = a[I,:]` | save the array `a` as array `b` with rows sorted by the first column
`x = Z\y` | `x = linalg.lstsq(Z, y)` | perform a linear regression of the form \\(\mathbf{Zx}=\mathbf{y}\\)
`decimate(x, q)` | `signal.resample(x, int(np.ceil(len(x)/q)))` | downsample with low-pass filtering
`unique(a)` | `np.unique(a)` | a vector of unique values in array `a`
`squeeze(a)` | `a.squeeze()` | remove singleton dimensions of array `a`. Note that MATLAB will always return arrays of 2D or higher while NumPy will return arrays of 0D or higher

## Notes

**Submatrix** : Assignment to a submatrix can be done with lists of indices using the `ix_` command. E.g., for 2D array `a`, one might do: `ind=[1, 3]; a[np.ix_(ind, ind)] += 100`.

**HELP** : There is no direct equivalent of MATLAB’s `which` command, but the command [`help`](https://docs.python.org/3/library/functions.html#help "\(in Python v3.13\)") will usually list the filename where the function is located. Python also has an `inspect` module (do `import inspect`) which provides a `getfile` that often works.

**INDEXING** : MATLAB uses one based indexing, so the initial element of a sequence has index 1. Python uses zero based indexing, so the initial element of a sequence has index 0. Confusion and flamewars arise because each has advantages and disadvantages. One based indexing is consistent with common human language usage, where the “first” element of a sequence has index 1. Zero based indexing [simplifies indexing](https://groups.google.com/group/comp.lang.python/msg/1bf4d925dfbf368?q=g:thl3498076713d&hl=en). See also [a text by prof.dr. Edsger W. Dijkstra](https://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html).
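The **Submatrix** note can be written out as a short runnable sketch:

```python
import numpy as np

a = np.zeros((4, 4), dtype=int)
ind = [1, 3]
# np.ix_ forms the outer product of the index lists, so this addresses
# the 2x2 submatrix at rows 1,3 crossed with columns 1,3
a[np.ix_(ind, ind)] += 100
print(a)
# the four addressed elements are now 100; everything else is still 0
```

Unlike a boolean mask or a plain slice, `np.ix_` lets you select (and assign to) an arbitrary rectangular submatrix, which is the closest match to MATLAB's `a([2,4],[2,4])` style of indexing.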
**RANGES** : In MATLAB, `0:5` can be used as both a range literal and a ‘slice’ index (inside parentheses); however, in Python, constructs like `0:5` can _only_ be used as a slice index (inside square brackets). Thus the somewhat quirky `r_` object was created to allow NumPy to have a similarly terse range construction mechanism. Note that `r_` is not called like a function or a constructor, but rather _indexed_ using square brackets, which allows the use of Python’s slice syntax in the arguments.

**LOGICOPS** : `&` or `|` in NumPy is bitwise AND/OR, while in MATLAB `&` and `|` are logical AND/OR. The two can appear to work the same, but there are important differences. If you would have used MATLAB’s `&` or `|` operators, you should use the NumPy ufuncs `logical_and`/`logical_or`. The notable differences between MATLAB’s and NumPy’s `&` and `|` operators are:

* Non-logical {0,1} inputs: NumPy’s output is the bitwise AND of the inputs. MATLAB treats any non-zero value as 1 and returns the logical AND. For example `(3 & 4)` in NumPy is `0`, while in MATLAB both `3` and `4` are considered logical true and `(3 & 4)` returns `1`.
* Precedence: NumPy’s `&` operator has higher precedence than comparison operators like `<` and `>`; MATLAB’s is the reverse.

If you know you have boolean arguments, you can get away with using NumPy’s bitwise operators, but be careful with parentheses, like this: `z = (x > 1) & (x < 2)`. The absence of NumPy operator forms of `logical_and` and `logical_or` is an unfortunate consequence of Python’s design.

**RESHAPE and LINEAR INDEXING** : MATLAB always allows multi-dimensional arrays to be accessed using scalar or linear indices, NumPy does not. Linear indices are common in MATLAB programs, e.g. `find()` on a matrix returns them, whereas NumPy’s find behaves differently. When converting MATLAB code it might be necessary to first reshape a matrix to a linear sequence, perform some indexing operations and then reshape back.
As reshape (usually) produces views onto the same storage, it should be possible to do this fairly efficiently. Note that the scan order used by reshape in NumPy defaults to the ‘C’ order, whereas MATLAB uses the Fortran order. If you are simply converting to a linear sequence and back this doesn’t matter. But if you are converting reshapes from MATLAB code which relies on the scan order, then this MATLAB code: `z = reshape(x,3,4);` should become `z = x.reshape(3,4,order='F').copy()` in NumPy. ## ‘array’ or ‘matrix’? Which should I use? Historically, NumPy has provided a special matrix type, `np.matrix`, which is a subclass of ndarray which makes binary operations linear algebra operations. You may see it used in some existing code instead of `np.array`. So, which one to use? ### Short answer **Use arrays**. * They support multidimensional array algebra that is supported in MATLAB * They are the standard vector/matrix/tensor type of NumPy. Many NumPy functions return arrays, not matrices. * There is a clear distinction between element-wise operations and linear algebra operations. * You can have standard vectors or row/column vectors if you like. Until Python 3.5 the only disadvantage of using the array type was that you had to use `dot` instead of `*` to multiply (reduce) two tensors (scalar product, matrix vector multiplication etc.). Since Python 3.5 you can use the matrix multiplication `@` operator. Given the above, we intend to deprecate `matrix` eventually. ### Long answer NumPy contains both an `array` class and a `matrix` class. The `array` class is intended to be a general-purpose n-dimensional array for many kinds of numerical computing, while `matrix` is intended to facilitate linear algebra computations specifically. In practice there are only a handful of key differences between the two. 
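For plain `array` objects, the operator distinction described above looks like this (a minimal sketch):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)     # element-wise product: [[ 5 12] [21 32]]
print(A @ B)     # matrix product:       [[19 22] [43 50]]
print(A.dot(B))  # dot() is equivalent to @ for 2D arrays
```

With `np.matrix` the roles of `*` would be reversed, which is one of the main sources of confusion the sections below discuss.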
* Operators `*` and `@`, functions `dot()`, and `multiply()`:
  * For `array`, **`*` means element-wise multiplication**, while **`@` means matrix multiplication**; they have associated functions `multiply()` and `dot()`. (Before Python 3.5, `@` did not exist and one had to use `dot()` for matrix multiplication).
  * For `matrix`, **`*` means matrix multiplication**, and for element-wise multiplication one has to use the `multiply()` function.
* Handling of vectors (one-dimensional arrays)
  * For `array`, the **vector shapes 1xN, Nx1, and N are all different things**. Operations like `A[:,1]` return a one-dimensional array of shape N, not a two-dimensional array of shape Nx1. Transpose on a one-dimensional `array` does nothing.
  * For `matrix`, **one-dimensional arrays are always upconverted to 1xN or Nx1 matrices** (row or column vectors). `A[:,1]` returns a two-dimensional matrix of shape Nx1.
* Handling of higher-dimensional arrays (ndim > 2)
  * `array` objects **can have number of dimensions > 2**;
  * `matrix` objects **always have exactly two dimensions**.
* Convenience attributes
  * `array` **has a .T attribute**, which returns the transpose of the data.
  * `matrix` **also has .H, .I, and .A attributes**, which return the conjugate transpose, inverse, and `asarray()` of the matrix, respectively.
* Convenience constructor
  * The `array` constructor **takes (nested) Python sequences as initializers**. As in, `array([[1,2,3],[4,5,6]])`.
  * The `matrix` constructor additionally **takes a convenient string initializer**. As in `matrix("[1 2 3; 4 5 6]")`.

There are pros and cons to using both:

* `array`
  * `:)` Element-wise multiplication is easy: `A*B`.
  * `:(` You have to remember that matrix multiplication has its own operator, `@`.
  * `:)` You can treat one-dimensional arrays as _either_ row or column vectors. `A @ v` treats `v` as a column vector, while `v @ A` treats `v` as a row vector. This can save you having to type a lot of transposes.
  * `:)` `array` is the “default” NumPy type, so it gets the most testing, and is the type most likely to be returned by 3rd party code that uses NumPy.
  * `:)` Is quite at home handling data of any number of dimensions.
  * `:)` Closer in semantics to tensor algebra, if you are familiar with that.
  * `:)` _All_ operations (`*`, `/`, `+`, `-` etc.) are element-wise.
  * `:(` Sparse matrices from `scipy.sparse` do not interact as well with arrays.
* `matrix`
  * `:\` Behavior is more like that of MATLAB matrices.
  * `<:(` Maximum of two-dimensional. To hold three-dimensional data you need `array` or perhaps a Python list of `matrix`.
  * `<:(` Minimum of two-dimensional. You cannot have vectors. They must be cast as single-column or single-row matrices.
  * `<:(` Since `array` is the default in NumPy, some functions may return an `array` even if you give them a `matrix` as an argument. This shouldn’t happen with NumPy functions (if it does it’s a bug), but 3rd party code based on NumPy may not honor type preservation like NumPy does.
  * `:)` `A*B` is matrix multiplication, so it looks just like you write it in linear algebra (For Python >= 3.5 plain arrays have the same convenience with the `@` operator).
  * `<:(` Element-wise multiplication requires calling a function, `multiply(A,B)`.
  * `<:(` The use of operator overloading is a bit illogical: `*` does not work element-wise but `/` does.
  * Interaction with `scipy.sparse` is a bit cleaner.

The `array` is thus much more advisable to use. Indeed, we intend to deprecate `matrix` eventually.

## Customizing your environment

In MATLAB the main tool available to you for customizing the environment is to modify the search path with the locations of your favorite functions. You can put such customizations into a startup script that MATLAB will run on startup. NumPy, or rather Python, has similar facilities.

* To modify your Python search path to include the locations of your own modules, define the `PYTHONPATH` environment variable.
* To have a particular script file executed when the interactive Python interpreter is started, define the `PYTHONSTARTUP` environment variable to contain the name of your startup script. Unlike MATLAB, where anything on your path can be called immediately, with Python you need to first do an ‘import’ statement to make functions in a particular file accessible. For example you might make a startup script that looks like this (Note: this is just an example, not a statement of “best practices”): # Make all numpy available via shorter 'np' prefix import numpy as np # # Make the SciPy linear algebra functions available as linalg.func() # e.g. linalg.lu, linalg.eig (for general l*B@u==A@u solution) from scipy import linalg # # Define a Hermitian function def hermitian(A, **kwargs): return np.conj(A,**kwargs).T # Make a shortcut for hermitian: # hermitian(A) --> H(A) H = hermitian To use the deprecated `matrix` and other `matlib` functions: # Make all matlib functions accessible at the top level via M.func() import numpy.matlib as M # Make some matlib functions accessible directly at the top level via, e.g. rand(3,3) from numpy.matlib import matrix,rand,zeros,ones,empty,eye ## Links Another somewhat outdated MATLAB/NumPy cross-reference can be found at An extensive list of tools for scientific work with Python can be found in the [topical software page](https://scipy.org/topical-software.html). See [List of Python software: scripting](https://en.wikipedia.org/wiki/List_of_Python_software#Embedded_as_a_scripting_language) for a list of software that use Python as a scripting language MATLAB® and SimuLink® are registered trademarks of The MathWorks, Inc. # NumPy quickstart ## Prerequisites You’ll need to know a bit of Python. For a refresher, see the [Python tutorial](https://docs.python.org/tutorial/). To work the examples, you’ll need `matplotlib` installed in addition to NumPy. **Learner profile** This is a quick overview of arrays in NumPy. 
It demonstrates how n-dimensional (\\(n>=2\\)) arrays are represented and can be manipulated. In particular, if you don’t know how to apply common functions to n-dimensional arrays (without using for-loops), or if you want to understand axis and shape properties for n-dimensional arrays, this article might be of help. **Learning Objectives** After reading, you should be able to: * Understand the difference between one-, two- and n-dimensional arrays in NumPy; * Understand how to apply some linear algebra operations to n-dimensional arrays without using for-loops; * Understand axis and shape properties for n-dimensional arrays. ## The basics NumPy’s main object is the homogeneous multidimensional array. It is a table of elements (usually numbers), all of the same type, indexed by a tuple of non-negative integers. In NumPy dimensions are called _axes_. For example, the array for the coordinates of a point in 3D space, `[1, 2, 1]`, has one axis. That axis has 3 elements in it, so we say it has a length of 3. In the example pictured below, the array has 2 axes. The first axis has a length of 2, the second axis has a length of 3. [[1., 0., 0.], [0., 1., 2.]] NumPy’s array class is called `ndarray`. It is also known by the alias `array`. Note that `numpy.array` is not the same as the Standard Python Library class `array.array`, which only handles one-dimensional arrays and offers less functionality. The more important attributes of an `ndarray` object are: ndarray.ndim the number of axes (dimensions) of the array. ndarray.shape the dimensions of the array. This is a tuple of integers indicating the size of the array in each dimension. For a matrix with _n_ rows and _m_ columns, `shape` will be `(n,m)`. The length of the `shape` tuple is therefore the number of axes, `ndim`. ndarray.size the total number of elements of the array. This is equal to the product of the elements of `shape`. ndarray.dtype an object describing the type of the elements in the array. 
One can create or specify dtype’s using standard Python types. Additionally NumPy provides types of its own. numpy.int32, numpy.int16, and numpy.float64 are some examples. ndarray.itemsize the size in bytes of each element of the array. For example, an array of elements of type `float64` has `itemsize` 8 (=64/8), while one of type `complex32` has `itemsize` 4 (=32/8). It is equivalent to `ndarray.dtype.itemsize`. ndarray.data the buffer containing the actual elements of the array. Normally, we won’t need to use this attribute because we will access the elements in an array using indexing facilities. ### An example >>> import numpy as np >>> a = np.arange(15).reshape(3, 5) >>> a array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]]) >>> a.shape (3, 5) >>> a.ndim 2 >>> a.dtype.name 'int64' >>> a.itemsize 8 >>> a.size 15 >>> type(a) <class 'numpy.ndarray'> >>> b = np.array([6, 7, 8]) >>> b array([6, 7, 8]) >>> type(b) <class 'numpy.ndarray'> ### Array creation There are several ways to create arrays. For example, you can create an array from a regular Python list or tuple using the `array` function. The type of the resulting array is deduced from the type of the elements in the sequences. >>> import numpy as np >>> a = np.array([2, 3, 4]) >>> a array([2, 3, 4]) >>> a.dtype dtype('int64') >>> b = np.array([1.2, 3.5, 5.1]) >>> b.dtype dtype('float64') A frequent error consists in calling `array` with multiple arguments, rather than providing a single sequence as an argument. >>> a = np.array(1, 2, 3, 4) # WRONG Traceback (most recent call last): ... TypeError: array() takes from 1 to 2 positional arguments but 4 were given >>> a = np.array([1, 2, 3, 4]) # RIGHT `array` transforms sequences of sequences into two-dimensional arrays, sequences of sequences of sequences into three-dimensional arrays, and so on. >>> b = np.array([(1.5, 2, 3), (4, 5, 6)]) >>> b array([[1.5, 2. , 3. ], [4. , 5. , 6. 
]]) The type of the array can also be explicitly specified at creation time: >>> c = np.array([[1, 2], [3, 4]], dtype=complex) >>> c array([[1.+0.j, 2.+0.j], [3.+0.j, 4.+0.j]]) Often, the elements of an array are originally unknown, but its size is known. Hence, NumPy offers several functions to create arrays with initial placeholder content. These minimize the necessity of growing arrays, an expensive operation. The function `zeros` creates an array full of zeros, the function `ones` creates an array full of ones, and the function `empty` creates an array whose initial content is random and depends on the state of the memory. By default, the dtype of the created array is `float64`, but it can be specified via the keyword argument `dtype`. >>> np.zeros((3, 4)) array([[0., 0., 0., 0.], [0., 0., 0., 0.], [0., 0., 0., 0.]]) >>> np.ones((2, 3, 4), dtype=np.int16) array([[[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]], [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]], dtype=int16) >>> np.empty((2, 3)) array([[3.73603959e-262, 6.02658058e-154, 6.55490914e-260], # may vary [5.30498948e-313, 3.14673309e-307, 1.00000000e+000]]) To create sequences of numbers, NumPy provides the `arange` function which is analogous to the Python built-in `range`, but returns an array. >>> np.arange(10, 30, 5) array([10, 15, 20, 25]) >>> np.arange(0, 2, 0.3) # it accepts float arguments array([0. , 0.3, 0.6, 0.9, 1.2, 1.5, 1.8]) When `arange` is used with floating point arguments, it is generally not possible to predict the number of elements obtained, due to the finite floating point precision. For this reason, it is usually better to use the function `linspace` that receives as an argument the number of elements that we want, instead of the step: >>> from numpy import pi >>> np.linspace(0, 2, 9) # 9 numbers from 0 to 2 array([0. , 0.25, 0.5 , 0.75, 1. , 1.25, 1.5 , 1.75, 2. 
]) >>> x = np.linspace(0, 2 * pi, 100) # useful to evaluate function at lots of points >>> f = np.sin(x) See also [`array`](../reference/generated/numpy.array#numpy.array "numpy.array"), [`zeros`](../reference/generated/numpy.zeros#numpy.zeros "numpy.zeros"), [`zeros_like`](../reference/generated/numpy.zeros_like#numpy.zeros_like "numpy.zeros_like"), [`ones`](../reference/generated/numpy.ones#numpy.ones "numpy.ones"), [`ones_like`](../reference/generated/numpy.ones_like#numpy.ones_like "numpy.ones_like"), [`empty`](../reference/generated/numpy.empty#numpy.empty "numpy.empty"), [`empty_like`](../reference/generated/numpy.empty_like#numpy.empty_like "numpy.empty_like"), [`arange`](../reference/generated/numpy.arange#numpy.arange "numpy.arange"), [`linspace`](../reference/generated/numpy.linspace#numpy.linspace "numpy.linspace"), [`random.Generator.random`](../reference/random/generated/numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random"), [`random.Generator.normal`](../reference/random/generated/numpy.random.generator.normal#numpy.random.Generator.normal "numpy.random.Generator.normal"), [`fromfunction`](../reference/generated/numpy.fromfunction#numpy.fromfunction "numpy.fromfunction"), [`fromfile`](../reference/generated/numpy.fromfile#numpy.fromfile "numpy.fromfile") ### Printing arrays When you print an array, NumPy displays it in a similar way to nested lists, but with the following layout: * the last axis is printed from left to right, * the second-to-last is printed from top to bottom, * the rest are also printed from top to bottom, with each slice separated from the next by an empty line. One-dimensional arrays are then printed as rows, bidimensionals as matrices and tridimensionals as lists of matrices. 
>>> a = np.arange(6) # 1d array >>> print(a) [0 1 2 3 4 5] >>> >>> b = np.arange(12).reshape(4, 3) # 2d array >>> print(b) [[ 0 1 2] [ 3 4 5] [ 6 7 8] [ 9 10 11]] >>> >>> c = np.arange(24).reshape(2, 3, 4) # 3d array >>> print(c) [[[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11]] [[12 13 14 15] [16 17 18 19] [20 21 22 23]]] See below to get more details on `reshape`. If an array is too large to be printed, NumPy automatically skips the central part of the array and only prints the corners: >>> print(np.arange(10000)) [ 0 1 2 ... 9997 9998 9999] >>> >>> print(np.arange(10000).reshape(100, 100)) [[ 0 1 2 ... 97 98 99] [ 100 101 102 ... 197 198 199] [ 200 201 202 ... 297 298 299] ... [9700 9701 9702 ... 9797 9798 9799] [9800 9801 9802 ... 9897 9898 9899] [9900 9901 9902 ... 9997 9998 9999]] To disable this behaviour and force NumPy to print the entire array, you can change the printing options using `set_printoptions`. >>> np.set_printoptions(threshold=sys.maxsize) # sys module should be imported ### Basic operations Arithmetic operators on arrays apply _elementwise_. A new array is created and filled with the result. >>> a = np.array([20, 30, 40, 50]) >>> b = np.arange(4) >>> b array([0, 1, 2, 3]) >>> c = a - b >>> c array([20, 29, 38, 47]) >>> b**2 array([0, 1, 4, 9]) >>> 10 * np.sin(a) array([ 9.12945251, -9.88031624, 7.4511316 , -2.62374854]) >>> a < 35 array([ True, True, False, False]) Unlike in many matrix languages, the product operator `*` operates elementwise in NumPy arrays. The matrix product can be performed using the `@` operator (in python >=3.5) or the `dot` function or method: >>> A = np.array([[1, 1], ... [0, 1]]) >>> B = np.array([[2, 0], ... [3, 4]]) >>> A * B # elementwise product array([[2, 0], [0, 4]]) >>> A @ B # matrix product array([[5, 4], [3, 4]]) >>> A.dot(B) # another matrix product array([[5, 4], [3, 4]]) Some operations, such as `+=` and `*=`, act in place to modify an existing array rather than create a new one. 
>>> rg = np.random.default_rng(1) # create instance of default random number generator >>> a = np.ones((2, 3), dtype=int) >>> b = rg.random((2, 3)) >>> a *= 3 >>> a array([[3, 3, 3], [3, 3, 3]]) >>> b += a >>> b array([[3.51182162, 3.9504637 , 3.14415961], [3.94864945, 3.31183145, 3.42332645]]) >>> a += b # b is not automatically converted to integer type Traceback (most recent call last): ... numpy._core._exceptions._UFuncOutputCastingError: Cannot cast ufunc 'add' output from dtype('float64') to dtype('int64') with casting rule 'same_kind' When operating with arrays of different types, the type of the resulting array corresponds to the more general or precise one (a behavior known as upcasting). >>> a = np.ones(3, dtype=np.int32) >>> b = np.linspace(0, pi, 3) >>> b.dtype.name 'float64' >>> c = a + b >>> c array([1. , 2.57079633, 4.14159265]) >>> c.dtype.name 'float64' >>> d = np.exp(c * 1j) >>> d array([ 0.54030231+0.84147098j, -0.84147098+0.54030231j, -0.54030231-0.84147098j]) >>> d.dtype.name 'complex128' Many unary operations, such as computing the sum of all the elements in the array, are implemented as methods of the `ndarray` class. >>> a = rg.random((2, 3)) >>> a array([[0.82770259, 0.40919914, 0.54959369], [0.02755911, 0.75351311, 0.53814331]]) >>> a.sum() 3.1057109529998157 >>> a.min() 0.027559113243068367 >>> a.max() 0.8277025938204418 By default, these operations apply to the array as though it were a list of numbers, regardless of its shape. 
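As a small sketch of this default behaviour, a reduction with no `axis` argument gives the same result as reducing the flattened array:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)   # shape (2, 3)
# With no axis argument, the reduction runs over every element,
# exactly as if the array had been flattened first.
total = a.sum()
flat_total = a.ravel().sum()
print(total, flat_total)  # 15 15
```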
However, by specifying the `axis` parameter you can apply an operation along the specified axis of an array: >>> b = np.arange(12).reshape(3, 4) >>> b array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> >>> b.sum(axis=0) # sum of each column array([12, 15, 18, 21]) >>> >>> b.min(axis=1) # min of each row array([0, 4, 8]) >>> >>> b.cumsum(axis=1) # cumulative sum along each row array([[ 0, 1, 3, 6], [ 4, 9, 15, 22], [ 8, 17, 27, 38]]) ### Universal functions NumPy provides familiar mathematical functions such as sin, cos, and exp. In NumPy, these are called “universal functions” (`ufunc`). Within NumPy, these functions operate elementwise on an array, producing an array as output. >>> B = np.arange(3) >>> B array([0, 1, 2]) >>> np.exp(B) array([1. , 2.71828183, 7.3890561 ]) >>> np.sqrt(B) array([0. , 1. , 1.41421356]) >>> C = np.array([2., -1., 4.]) >>> np.add(B, C) array([2., 0., 6.]) See also [`all`](../reference/generated/numpy.all#numpy.all "numpy.all"), [`any`](../reference/generated/numpy.any#numpy.any "numpy.any"), [`apply_along_axis`](../reference/generated/numpy.apply_along_axis#numpy.apply_along_axis "numpy.apply_along_axis"), [`argmax`](../reference/generated/numpy.argmax#numpy.argmax "numpy.argmax"), [`argmin`](../reference/generated/numpy.argmin#numpy.argmin "numpy.argmin"), [`argsort`](../reference/generated/numpy.argsort#numpy.argsort "numpy.argsort"), [`average`](../reference/generated/numpy.average#numpy.average "numpy.average"), [`bincount`](../reference/generated/numpy.bincount#numpy.bincount "numpy.bincount"), [`ceil`](../reference/generated/numpy.ceil#numpy.ceil "numpy.ceil"), [`clip`](../reference/generated/numpy.clip#numpy.clip "numpy.clip"), [`conj`](../reference/generated/numpy.conj#numpy.conj "numpy.conj"), [`corrcoef`](../reference/generated/numpy.corrcoef#numpy.corrcoef "numpy.corrcoef"), [`cov`](../reference/generated/numpy.cov#numpy.cov "numpy.cov"), [`cross`](../reference/generated/numpy.cross#numpy.cross "numpy.cross"), 
[`cumprod`](../reference/generated/numpy.cumprod#numpy.cumprod "numpy.cumprod"), [`cumsum`](../reference/generated/numpy.cumsum#numpy.cumsum "numpy.cumsum"), [`diff`](../reference/generated/numpy.diff#numpy.diff "numpy.diff"), [`dot`](../reference/generated/numpy.dot#numpy.dot "numpy.dot"), [`floor`](../reference/generated/numpy.floor#numpy.floor "numpy.floor"), [`inner`](../reference/generated/numpy.inner#numpy.inner "numpy.inner"), [`invert`](../reference/generated/numpy.invert#numpy.invert "numpy.invert"), [`lexsort`](../reference/generated/numpy.lexsort#numpy.lexsort "numpy.lexsort"), [`max`](../reference/generated/numpy.max#numpy.max "numpy.max"), [`maximum`](../reference/generated/numpy.maximum#numpy.maximum "numpy.maximum"), [`mean`](../reference/generated/numpy.mean#numpy.mean "numpy.mean"), [`median`](../reference/generated/numpy.median#numpy.median "numpy.median"), [`min`](../reference/generated/numpy.min#numpy.min "numpy.min"), [`minimum`](../reference/generated/numpy.minimum#numpy.minimum "numpy.minimum"), [`nonzero`](../reference/generated/numpy.nonzero#numpy.nonzero "numpy.nonzero"), [`outer`](../reference/generated/numpy.outer#numpy.outer "numpy.outer"), [`prod`](../reference/generated/numpy.prod#numpy.prod "numpy.prod"), [`re`](https://docs.python.org/3/library/re.html#module-re "\(in Python v3.13\)"), [`round`](../reference/generated/numpy.round#numpy.round "numpy.round"), [`sort`](../reference/generated/numpy.sort#numpy.sort "numpy.sort"), [`std`](../reference/generated/numpy.std#numpy.std "numpy.std"), [`sum`](../reference/generated/numpy.sum#numpy.sum "numpy.sum"), [`trace`](../reference/generated/numpy.trace#numpy.trace "numpy.trace"), [`transpose`](../reference/generated/numpy.transpose#numpy.transpose "numpy.transpose"), [`var`](../reference/generated/numpy.var#numpy.var "numpy.var"), [`vdot`](../reference/generated/numpy.vdot#numpy.vdot "numpy.vdot"), [`vectorize`](../reference/generated/numpy.vectorize#numpy.vectorize "numpy.vectorize"), 
[`where`](../reference/generated/numpy.where#numpy.where "numpy.where") ### Indexing, slicing and iterating **One-dimensional** arrays can be indexed, sliced and iterated over, much like [lists](https://docs.python.org/tutorial/introduction.html#lists) and other Python sequences. >>> a = np.arange(10)**3 >>> a array([ 0, 1, 8, 27, 64, 125, 216, 343, 512, 729]) >>> a[2] 8 >>> a[2:5] array([ 8, 27, 64]) >>> # equivalent to a[0:6:2] = 1000; >>> # from start to position 6, exclusive, set every 2nd element to 1000 >>> a[:6:2] = 1000 >>> a array([1000, 1, 1000, 27, 1000, 125, 216, 343, 512, 729]) >>> a[::-1] # reversed a array([ 729, 512, 343, 216, 125, 1000, 27, 1000, 1, 1000]) >>> for i in a: ... print(i**(1 / 3.)) ... 9.999999999999998 # may vary 1.0 9.999999999999998 3.0 9.999999999999998 4.999999999999999 5.999999999999999 6.999999999999999 7.999999999999999 8.999999999999998 **Multidimensional** arrays can have one index per axis. These indices are given in a tuple separated by commas: >>> def f(x, y): ... return 10 * x + y ... >>> b = np.fromfunction(f, (5, 4), dtype=int) >>> b array([[ 0, 1, 2, 3], [10, 11, 12, 13], [20, 21, 22, 23], [30, 31, 32, 33], [40, 41, 42, 43]]) >>> b[2, 3] 23 >>> b[0:5, 1] # each row in the second column of b array([ 1, 11, 21, 31, 41]) >>> b[:, 1] # equivalent to the previous example array([ 1, 11, 21, 31, 41]) >>> b[1:3, :] # each column in the second and third row of b array([[10, 11, 12, 13], [20, 21, 22, 23]]) When fewer indices are provided than the number of axes, the missing indices are considered complete slices`:` >>> b[-1] # the last row. Equivalent to b[-1, :] array([40, 41, 42, 43]) The expression within brackets in `b[i]` is treated as an `i` followed by as many instances of `:` as needed to represent the remaining axes. NumPy also allows you to write this using dots as `b[i, ...]`. The **dots** (`...`) represent as many colons as needed to produce a complete indexing tuple. 
For example, if `x` is an array with 5 axes, then * `x[1, 2, ...]` is equivalent to `x[1, 2, :, :, :]`, * `x[..., 3]` to `x[:, :, :, :, 3]` and * `x[4, ..., 5, :]` to `x[4, :, :, 5, :]`. >>> c = np.array([[[ 0, 1, 2], # a 3D array (two stacked 2D arrays) ... [ 10, 12, 13]], ... [[100, 101, 102], ... [110, 112, 113]]]) >>> c.shape (2, 2, 3) >>> c[1, ...] # same as c[1, :, :] or c[1] array([[100, 101, 102], [110, 112, 113]]) >>> c[..., 2] # same as c[:, :, 2] array([[ 2, 13], [102, 113]]) **Iterating** over multidimensional arrays is done with respect to the first axis: >>> for row in b: ... print(row) ... [0 1 2 3] [10 11 12 13] [20 21 22 23] [30 31 32 33] [40 41 42 43] However, if one wants to perform an operation on each element in the array, one can use the `flat` attribute which is an [iterator](https://docs.python.org/tutorial/classes.html#iterators) over all the elements of the array: >>> for element in b.flat: ... print(element) ... 0 1 2 3 10 11 12 13 20 21 22 23 30 31 32 33 40 41 42 43 See also [Indexing on ndarrays](basics.indexing#basics-indexing), [Indexing routines](../reference/routines.indexing#arrays-indexing) (reference), [`newaxis`](../reference/constants#numpy.newaxis "numpy.newaxis"), [`ndenumerate`](../reference/generated/numpy.ndenumerate#numpy.ndenumerate "numpy.ndenumerate"), [`indices`](../reference/generated/numpy.indices#numpy.indices "numpy.indices") ## Shape manipulation ### Changing the shape of an array An array has a shape given by the number of elements along each axis: >>> a = np.floor(10 * rg.random((3, 4))) >>> a array([[3., 7., 3., 4.], [1., 4., 2., 2.], [7., 2., 4., 9.]]) >>> a.shape (3, 4) The shape of an array can be changed with various commands. 
Note that the following three commands all return a modified array, but do not change the original array: >>> a.ravel() # returns the array, flattened array([3., 7., 3., 4., 1., 4., 2., 2., 7., 2., 4., 9.]) >>> a.reshape(6, 2) # returns the array with a modified shape array([[3., 7.], [3., 4.], [1., 4.], [2., 2.], [7., 2.], [4., 9.]]) >>> a.T # returns the array, transposed array([[3., 1., 7.], [7., 4., 2.], [3., 2., 4.], [4., 2., 9.]]) >>> a.T.shape (4, 3) >>> a.shape (3, 4) The order of the elements in the array resulting from `ravel` is normally “C-style”, that is, the rightmost index “changes the fastest”, so the element after `a[0, 0]` is `a[0, 1]`. If the array is reshaped to some other shape, again the array is treated as “C-style”. NumPy normally creates arrays stored in this order, so `ravel` will usually not need to copy its argument, but if the array was made by taking slices of another array or created with unusual options, it may need to be copied. The functions `ravel` and `reshape` can also be instructed, using an optional argument, to use FORTRAN-style arrays, in which the leftmost index changes the fastest. 
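The two element orders described above can be requested explicitly through the optional `order` argument; a small sketch:

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])
# C order (the default): the rightmost index changes the fastest.
print(a.ravel())             # [1 2 3 4 5 6]
# FORTRAN order: the leftmost index changes the fastest.
print(a.ravel(order='F'))    # [1 4 2 5 3 6]
# reshape accepts the same argument.
print(a.reshape(3, 2, order='F'))
```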
The [`reshape`](../reference/generated/numpy.reshape#numpy.reshape "numpy.reshape") function returns its argument with a modified shape, whereas the [`ndarray.resize`](../reference/generated/numpy.ndarray.resize#numpy.ndarray.resize "numpy.ndarray.resize") method modifies the array itself: >>> a array([[3., 7., 3., 4.], [1., 4., 2., 2.], [7., 2., 4., 9.]]) >>> a.resize((2, 6)) >>> a array([[3., 7., 3., 4., 1., 4.], [2., 2., 7., 2., 4., 9.]]) If a dimension is given as `-1` in a reshaping operation, the other dimensions are automatically calculated: >>> a.reshape(3, -1) array([[3., 7., 3., 4.], [1., 4., 2., 2.], [7., 2., 4., 9.]]) See also [`ndarray.shape`](../reference/generated/numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape"), [`reshape`](../reference/generated/numpy.reshape#numpy.reshape "numpy.reshape"), [`resize`](../reference/generated/numpy.resize#numpy.resize "numpy.resize"), [`ravel`](../reference/generated/numpy.ravel#numpy.ravel "numpy.ravel") ### Stacking together different arrays Several arrays can be stacked together along different axes: >>> a = np.floor(10 * rg.random((2, 2))) >>> a array([[9., 7.], [5., 2.]]) >>> b = np.floor(10 * rg.random((2, 2))) >>> b array([[1., 9.], [5., 1.]]) >>> np.vstack((a, b)) array([[9., 7.], [5., 2.], [1., 9.], [5., 1.]]) >>> np.hstack((a, b)) array([[9., 7., 1., 9.], [5., 2., 5., 1.]]) The function [`column_stack`](../reference/generated/numpy.column_stack#numpy.column_stack "numpy.column_stack") stacks 1D arrays as columns into a 2D array. 
It is equivalent to [`hstack`](../reference/generated/numpy.hstack#numpy.hstack "numpy.hstack") only for 2D arrays: >>> from numpy import newaxis >>> np.column_stack((a, b)) # with 2D arrays array([[9., 7., 1., 9.], [5., 2., 5., 1.]]) >>> a = np.array([4., 2.]) >>> b = np.array([3., 8.]) >>> np.column_stack((a, b)) # returns a 2D array array([[4., 3.], [2., 8.]]) >>> np.hstack((a, b)) # the result is different array([4., 2., 3., 8.]) >>> a[:, newaxis] # view `a` as a 2D column vector array([[4.], [2.]]) >>> np.column_stack((a[:, newaxis], b[:, newaxis])) array([[4., 3.], [2., 8.]]) >>> np.hstack((a[:, newaxis], b[:, newaxis])) # the result is the same array([[4., 3.], [2., 8.]]) In general, for arrays with more than two dimensions, [`hstack`](../reference/generated/numpy.hstack#numpy.hstack "numpy.hstack") stacks along their second axes, [`vstack`](../reference/generated/numpy.vstack#numpy.vstack "numpy.vstack") stacks along their first axes, and [`concatenate`](../reference/generated/numpy.concatenate#numpy.concatenate "numpy.concatenate") allows for an optional argument giving the number of the axis along which the concatenation should happen. **Note** In complex cases, [`r_`](../reference/generated/numpy.r_#numpy.r_ "numpy.r_") and [`c_`](../reference/generated/numpy.c_#numpy.c_ "numpy.c_") are useful for creating arrays by stacking numbers along one axis. They allow the use of range literals `:`. >>> np.r_[1:4, 0, 4] array([1, 2, 3, 0, 4]) When used with arrays as arguments, [`r_`](../reference/generated/numpy.r_#numpy.r_ "numpy.r_") and [`c_`](../reference/generated/numpy.c_#numpy.c_ "numpy.c_") are similar to [`vstack`](../reference/generated/numpy.vstack#numpy.vstack "numpy.vstack") and [`hstack`](../reference/generated/numpy.hstack#numpy.hstack "numpy.hstack") in their default behavior, but allow for an optional argument giving the number of the axis along which to concatenate. 
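As a companion to the `r_` example above, a small sketch of `c_`, which concatenates along the second axis and upgrades 1D inputs to columns:

```python
import numpy as np

# c_ stacks its arguments as columns, turning two 1D
# vectors into a 2D array with one column per input.
result = np.c_[np.array([1, 2, 3]), np.array([4, 5, 6])]
print(result)
# [[1 4]
#  [2 5]
#  [3 6]]
```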
See also [`hstack`](../reference/generated/numpy.hstack#numpy.hstack "numpy.hstack"), [`vstack`](../reference/generated/numpy.vstack#numpy.vstack "numpy.vstack"), [`column_stack`](../reference/generated/numpy.column_stack#numpy.column_stack "numpy.column_stack"), [`concatenate`](../reference/generated/numpy.concatenate#numpy.concatenate "numpy.concatenate"), [`c_`](../reference/generated/numpy.c_#numpy.c_ "numpy.c_"), [`r_`](../reference/generated/numpy.r_#numpy.r_ "numpy.r_") ### Splitting one array into several smaller ones Using [`hsplit`](../reference/generated/numpy.hsplit#numpy.hsplit "numpy.hsplit"), you can split an array along its horizontal axis, either by specifying the number of equally shaped arrays to return, or by specifying the columns after which the division should occur: >>> a = np.floor(10 * rg.random((2, 12))) >>> a array([[6., 7., 6., 9., 0., 5., 4., 0., 6., 8., 5., 2.], [8., 5., 5., 7., 1., 8., 6., 7., 1., 8., 1., 0.]]) >>> # Split `a` into 3 >>> np.hsplit(a, 3) [array([[6., 7., 6., 9.], [8., 5., 5., 7.]]), array([[0., 5., 4., 0.], [1., 8., 6., 7.]]), array([[6., 8., 5., 2.], [1., 8., 1., 0.]])] >>> # Split `a` after the third and the fourth column >>> np.hsplit(a, (3, 4)) [array([[6., 7., 6.], [8., 5., 5.]]), array([[9.], [7.]]), array([[0., 5., 4., 0., 6., 8., 5., 2.], [1., 8., 6., 7., 1., 8., 1., 0.]])] [`vsplit`](../reference/generated/numpy.vsplit#numpy.vsplit "numpy.vsplit") splits along the vertical axis, and [`array_split`](../reference/generated/numpy.array_split#numpy.array_split "numpy.array_split") allows one to specify along which axis to split. ## Copies and views When operating and manipulating arrays, their data is sometimes copied into a new array and sometimes not. This is often a source of confusion for beginners. There are three cases: ### No copy at all Simple assignments make no copy of objects or their data. >>> a = np.array([[ 0, 1, 2, 3], ... [ 4, 5, 6, 7], ... 
[ 8, 9, 10, 11]]) >>> b = a # no new object is created >>> b is a # a and b are two names for the same ndarray object True Python passes mutable objects as references, so function calls make no copy. >>> def f(x): ... print(id(x)) ... >>> id(a) # id is a unique identifier of an object 148293216 # may vary >>> f(a) 148293216 # may vary ### View or shallow copy Different array objects can share the same data. The `view` method creates a new array object that looks at the same data. >>> c = a.view() >>> c is a False >>> c.base is a # c is a view of the data owned by a True >>> c.flags.owndata False >>> >>> c = c.reshape((2, 6)) # a's shape doesn't change, reassigned c is still a view of a >>> a.shape (3, 4) >>> c[0, 4] = 1234 # a's data changes >>> a array([[ 0, 1, 2, 3], [1234, 5, 6, 7], [ 8, 9, 10, 11]]) Slicing an array returns a view of it: >>> s = a[:, 1:3] >>> s[:] = 10 # s[:] is a view of s. Note the difference between s = 10 and s[:] = 10 >>> a array([[ 0, 10, 10, 3], [1234, 10, 10, 7], [ 8, 10, 10, 11]]) ### Deep copy The `copy` method makes a complete copy of the array and its data. >>> d = a.copy() # a new array object with new data is created >>> d is a False >>> d.base is a # d doesn't share anything with a False >>> d[0, 0] = 9999 >>> a array([[ 0, 10, 10, 3], [1234, 10, 10, 7], [ 8, 10, 10, 11]]) Sometimes `copy` should be called after slicing if the original array is not required anymore. For example, suppose `a` is a huge intermediate result and the final result `b` only contains a small fraction of `a`, a deep copy should be made when constructing `b` with slicing: >>> a = np.arange(int(1e8)) >>> b = a[:100].copy() >>> del a # the memory of ``a`` can be released. If `b = a[:100]` is used instead, `a` is referenced by `b` and will persist in memory even if `del a` is executed. See also [Copies and views](basics.copies#basics-copies-and-views). 
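When in doubt whether two arrays share data, `np.shares_memory` gives a direct answer; a small sketch:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
view = a[:, 1:3]         # slicing returns a view of a's data
copy = a[:, 1:3].copy()  # copy() allocates new data

print(np.shares_memory(a, view))  # True
print(np.shares_memory(a, copy))  # False
```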
### Functions and methods overview Here is a list of some useful NumPy functions and methods names ordered in categories. See [Routines and objects by topic](../reference/routines#routines) for the full list. Array Creation [`arange`](../reference/generated/numpy.arange#numpy.arange "numpy.arange"), [`array`](../reference/generated/numpy.array#numpy.array "numpy.array"), [`copy`](../reference/generated/numpy.copy#numpy.copy "numpy.copy"), [`empty`](../reference/generated/numpy.empty#numpy.empty "numpy.empty"), [`empty_like`](../reference/generated/numpy.empty_like#numpy.empty_like "numpy.empty_like"), [`eye`](../reference/generated/numpy.eye#numpy.eye "numpy.eye"), [`fromfile`](../reference/generated/numpy.fromfile#numpy.fromfile "numpy.fromfile"), [`fromfunction`](../reference/generated/numpy.fromfunction#numpy.fromfunction "numpy.fromfunction"), [`identity`](../reference/generated/numpy.identity#numpy.identity "numpy.identity"), [`linspace`](../reference/generated/numpy.linspace#numpy.linspace "numpy.linspace"), [`logspace`](../reference/generated/numpy.logspace#numpy.logspace "numpy.logspace"), [`mgrid`](../reference/generated/numpy.mgrid#numpy.mgrid "numpy.mgrid"), [`ogrid`](../reference/generated/numpy.ogrid#numpy.ogrid "numpy.ogrid"), [`ones`](../reference/generated/numpy.ones#numpy.ones "numpy.ones"), [`ones_like`](../reference/generated/numpy.ones_like#numpy.ones_like "numpy.ones_like"), [`r_`](../reference/generated/numpy.r_#numpy.r_ "numpy.r_"), [`zeros`](../reference/generated/numpy.zeros#numpy.zeros "numpy.zeros"), [`zeros_like`](../reference/generated/numpy.zeros_like#numpy.zeros_like "numpy.zeros_like") Conversions [`ndarray.astype`](../reference/generated/numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype"), [`atleast_1d`](../reference/generated/numpy.atleast_1d#numpy.atleast_1d "numpy.atleast_1d"), [`atleast_2d`](../reference/generated/numpy.atleast_2d#numpy.atleast_2d "numpy.atleast_2d"), 
[`atleast_3d`](../reference/generated/numpy.atleast_3d#numpy.atleast_3d "numpy.atleast_3d"), `mat` Manipulations [`array_split`](../reference/generated/numpy.array_split#numpy.array_split "numpy.array_split"), [`column_stack`](../reference/generated/numpy.column_stack#numpy.column_stack "numpy.column_stack"), [`concatenate`](../reference/generated/numpy.concatenate#numpy.concatenate "numpy.concatenate"), [`diagonal`](../reference/generated/numpy.diagonal#numpy.diagonal "numpy.diagonal"), [`dsplit`](../reference/generated/numpy.dsplit#numpy.dsplit "numpy.dsplit"), [`dstack`](../reference/generated/numpy.dstack#numpy.dstack "numpy.dstack"), [`hsplit`](../reference/generated/numpy.hsplit#numpy.hsplit "numpy.hsplit"), [`hstack`](../reference/generated/numpy.hstack#numpy.hstack "numpy.hstack"), [`ndarray.item`](../reference/generated/numpy.ndarray.item#numpy.ndarray.item "numpy.ndarray.item"), [`newaxis`](../reference/constants#numpy.newaxis "numpy.newaxis"), [`ravel`](../reference/generated/numpy.ravel#numpy.ravel "numpy.ravel"), [`repeat`](../reference/generated/numpy.repeat#numpy.repeat "numpy.repeat"), [`reshape`](../reference/generated/numpy.reshape#numpy.reshape "numpy.reshape"), [`resize`](../reference/generated/numpy.resize#numpy.resize "numpy.resize"), [`squeeze`](../reference/generated/numpy.squeeze#numpy.squeeze "numpy.squeeze"), [`swapaxes`](../reference/generated/numpy.swapaxes#numpy.swapaxes "numpy.swapaxes"), [`take`](../reference/generated/numpy.take#numpy.take "numpy.take"), [`transpose`](../reference/generated/numpy.transpose#numpy.transpose "numpy.transpose"), [`vsplit`](../reference/generated/numpy.vsplit#numpy.vsplit "numpy.vsplit"), [`vstack`](../reference/generated/numpy.vstack#numpy.vstack "numpy.vstack") Questions [`all`](../reference/generated/numpy.all#numpy.all "numpy.all"), [`any`](../reference/generated/numpy.any#numpy.any "numpy.any"), [`nonzero`](../reference/generated/numpy.nonzero#numpy.nonzero "numpy.nonzero"), 
[`where`](../reference/generated/numpy.where#numpy.where "numpy.where") Ordering [`argmax`](../reference/generated/numpy.argmax#numpy.argmax "numpy.argmax"), [`argmin`](../reference/generated/numpy.argmin#numpy.argmin "numpy.argmin"), [`argsort`](../reference/generated/numpy.argsort#numpy.argsort "numpy.argsort"), [`max`](../reference/generated/numpy.max#numpy.max "numpy.max"), [`min`](../reference/generated/numpy.min#numpy.min "numpy.min"), [`ptp`](../reference/generated/numpy.ptp#numpy.ptp "numpy.ptp"), [`searchsorted`](../reference/generated/numpy.searchsorted#numpy.searchsorted "numpy.searchsorted"), [`sort`](../reference/generated/numpy.sort#numpy.sort "numpy.sort") Operations [`choose`](../reference/generated/numpy.choose#numpy.choose "numpy.choose"), [`compress`](../reference/generated/numpy.compress#numpy.compress "numpy.compress"), [`cumprod`](../reference/generated/numpy.cumprod#numpy.cumprod "numpy.cumprod"), [`cumsum`](../reference/generated/numpy.cumsum#numpy.cumsum "numpy.cumsum"), [`inner`](../reference/generated/numpy.inner#numpy.inner "numpy.inner"), [`ndarray.fill`](../reference/generated/numpy.ndarray.fill#numpy.ndarray.fill "numpy.ndarray.fill"), [`imag`](../reference/generated/numpy.imag#numpy.imag "numpy.imag"), [`prod`](../reference/generated/numpy.prod#numpy.prod "numpy.prod"), [`put`](../reference/generated/numpy.put#numpy.put "numpy.put"), [`putmask`](../reference/generated/numpy.putmask#numpy.putmask "numpy.putmask"), [`real`](../reference/generated/numpy.real#numpy.real "numpy.real"), [`sum`](../reference/generated/numpy.sum#numpy.sum "numpy.sum") Basic Statistics [`cov`](../reference/generated/numpy.cov#numpy.cov "numpy.cov"), [`mean`](../reference/generated/numpy.mean#numpy.mean "numpy.mean"), [`std`](../reference/generated/numpy.std#numpy.std "numpy.std"), [`var`](../reference/generated/numpy.var#numpy.var "numpy.var") Basic Linear Algebra [`cross`](../reference/generated/numpy.cross#numpy.cross "numpy.cross"), 
[`dot`](../reference/generated/numpy.dot#numpy.dot "numpy.dot"), [`outer`](../reference/generated/numpy.outer#numpy.outer "numpy.outer"), [`linalg.svd`](../reference/generated/numpy.linalg.svd#numpy.linalg.svd "numpy.linalg.svd"), [`vdot`](../reference/generated/numpy.vdot#numpy.vdot "numpy.vdot") ## Less basic ### Broadcasting rules Broadcasting allows universal functions to deal in a meaningful way with inputs that do not have exactly the same shape. The first rule of broadcasting is that if all input arrays do not have the same number of dimensions, a “1” will be repeatedly prepended to the shapes of the smaller arrays until all the arrays have the same number of dimensions. The second rule of broadcasting ensures that arrays with a size of 1 along a particular dimension act as if they had the size of the array with the largest shape along that dimension. The value of the array element is assumed to be the same along that dimension for the “broadcast” array. After application of the broadcasting rules, the sizes of all arrays must match. More details can be found in [Broadcasting](basics.broadcasting#basics-broadcasting). ## Advanced indexing and index tricks NumPy offers more indexing facilities than regular Python sequences. In addition to indexing by integers and slices, as we saw before, arrays can be indexed by arrays of integers and arrays of booleans. ### Indexing with arrays of indices >>> a = np.arange(12)**2 # the first 12 square numbers >>> i = np.array([1, 1, 3, 8, 5]) # an array of indices >>> a[i] # the elements of `a` at the positions `i` array([ 1, 1, 9, 64, 25]) >>> >>> j = np.array([[3, 4], [9, 7]]) # a bidimensional array of indices >>> a[j] # the same shape as `j` array([[ 9, 16], [81, 49]]) When the indexed array `a` is multidimensional, a single array of indices refers to the first dimension of `a`. The following example shows this behavior by converting an image of labels into a color image using a palette. 
>>> palette = np.array([[0, 0, 0], # black ... [255, 0, 0], # red ... [0, 255, 0], # green ... [0, 0, 255], # blue ... [255, 255, 255]]) # white >>> image = np.array([[0, 1, 2, 0], # each value corresponds to a color in the palette ... [0, 3, 4, 0]]) >>> palette[image] # the (2, 4, 3) color image array([[[ 0, 0, 0], [255, 0, 0], [ 0, 255, 0], [ 0, 0, 0]], [[ 0, 0, 0], [ 0, 0, 255], [255, 255, 255], [ 0, 0, 0]]]) We can also give indexes for more than one dimension. The arrays of indices for each dimension must have the same shape. >>> a = np.arange(12).reshape(3, 4) >>> a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> i = np.array([[0, 1], # indices for the first dim of `a` ... [1, 2]]) >>> j = np.array([[2, 1], # indices for the second dim ... [3, 3]]) >>> >>> a[i, j] # i and j must have equal shape array([[ 2, 5], [ 7, 11]]) >>> >>> a[i, 2] array([[ 2, 6], [ 6, 10]]) >>> >>> a[:, j] array([[[ 2, 1], [ 3, 3]], [[ 6, 5], [ 7, 7]], [[10, 9], [11, 11]]]) In Python, `arr[i, j]` is exactly the same as `arr[(i, j)]`—so we can put `i` and `j` in a `tuple` and then do the indexing with that. >>> l = (i, j) >>> # equivalent to a[i, j] >>> a[l] array([[ 2, 5], [ 7, 11]]) However, we can not do this by putting `i` and `j` into an array, because this array will be interpreted as indexing the first dimension of `a`. >>> s = np.array([i, j]) >>> # not what we want >>> a[s] Traceback (most recent call last): File "", line 1, in IndexError: index 3 is out of bounds for axis 0 with size 3 >>> # same as `a[i, j]` >>> a[tuple(s)] array([[ 2, 5], [ 7, 11]]) Another common use of indexing with arrays is the search of the maximum value of time-dependent series: >>> time = np.linspace(20, 145, 5) # time scale >>> data = np.sin(np.arange(20)).reshape(5, 4) # 4 time-dependent series >>> time array([ 20. , 51.25, 82.5 , 113.75, 145. ]) >>> data array([[ 0. 
, 0.84147098, 0.90929743, 0.14112001], [-0.7568025 , -0.95892427, -0.2794155 , 0.6569866 ], [ 0.98935825, 0.41211849, -0.54402111, -0.99999021], [-0.53657292, 0.42016704, 0.99060736, 0.65028784], [-0.28790332, -0.96139749, -0.75098725, 0.14987721]]) >>> # index of the maxima for each series >>> ind = data.argmax(axis=0) >>> ind array([2, 0, 3, 1]) >>> # times corresponding to the maxima >>> time_max = time[ind] >>> >>> data_max = data[ind, range(data.shape[1])] # => data[ind[0], 0], data[ind[1], 1]... >>> time_max array([ 82.5 , 20. , 113.75, 51.25]) >>> data_max array([0.98935825, 0.84147098, 0.99060736, 0.6569866 ]) >>> np.all(data_max == data.max(axis=0)) True You can also use indexing with arrays as a target to assign to: >>> a = np.arange(5) >>> a array([0, 1, 2, 3, 4]) >>> a[[1, 3, 4]] = 0 >>> a array([0, 0, 2, 0, 0]) However, when the list of indices contains repetitions, the assignment is done several times, leaving behind the last value: >>> a = np.arange(5) >>> a[[0, 0, 2]] = [1, 2, 3] >>> a array([2, 1, 3, 3, 4]) This is reasonable enough, but watch out if you want to use Python’s `+=` construct, as it may not do what you expect: >>> a = np.arange(5) >>> a[[0, 0, 2]] += 1 >>> a array([1, 1, 3, 3, 4]) Even though 0 occurs twice in the list of indices, the 0th element is only incremented once. This is because Python requires `a += 1` to be equivalent to `a = a + 1`. ### Indexing with boolean arrays When we index arrays with arrays of (integer) indices we are providing the list of indices to pick. With boolean indices the approach is different; we explicitly choose which items in the array we want and which ones we don’t. 
The most natural way to think of boolean indexing is to use boolean arrays that have _the same shape_ as the original array:

```python
>>> a = np.arange(12).reshape(3, 4)
>>> b = a > 4
>>> b  # `b` is a boolean array with `a`'s shape
array([[False, False, False, False],
       [False,  True,  True,  True],
       [ True,  True,  True,  True]])
>>> a[b]  # 1d array with the selected elements
array([ 5,  6,  7,  8,  9, 10, 11])
```

This property can be very useful in assignments:

```python
>>> a[b] = 0  # All elements of `a` higher than 4 become 0
>>> a
array([[0, 1, 2, 3],
       [4, 0, 0, 0],
       [0, 0, 0, 0]])
```

You can look at the following example to see how to use boolean indexing to generate an image of the [Mandelbrot set](https://en.wikipedia.org/wiki/Mandelbrot_set):

```python
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> def mandelbrot(h, w, maxit=20, r=2):
...     """Returns an image of the Mandelbrot fractal of size (h,w)."""
...     x = np.linspace(-2.5, 1.5, 4*h+1)
...     y = np.linspace(-1.5, 1.5, 3*w+1)
...     A, B = np.meshgrid(x, y)
...     C = A + B*1j
...     z = np.zeros_like(C)
...     divtime = maxit + np.zeros(z.shape, dtype=int)
...
...     for i in range(maxit):
...         z = z**2 + C
...         diverge = abs(z) > r                    # who is diverging
...         div_now = diverge & (divtime == maxit)  # who is diverging now
...         divtime[div_now] = i                    # note when
...         z[diverge] = r                          # avoid diverging too much
...
...     return divtime
>>> plt.clf()
>>> plt.imshow(mandelbrot(400, 400))
```

The second way of indexing with booleans is more similar to integer indexing; for each dimension of the array we give a 1D boolean array selecting the slices we want:

```python
>>> a = np.arange(12).reshape(3, 4)
>>> b1 = np.array([False, True, True])         # first dim selection
>>> b2 = np.array([True, False, True, False])  # second dim selection
>>>
>>> a[b1, :]  # selecting rows
array([[ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
>>>
>>> a[b1]  # same thing
array([[ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
>>>
>>> a[:, b2]  # selecting columns
array([[ 0,  2],
       [ 4,  6],
       [ 8, 10]])
>>>
>>> a[b1, b2]  # a weird thing to do
array([ 4, 10])
```

Note that the length of the 1D boolean array must coincide with the length of the dimension (or axis) you want to slice. In the previous example, `b1` has length 3 (the number of _rows_ in `a`), and `b2` (of length 4) is suitable to index the 2nd axis (columns) of `a`.

### The ix_() function

The [`ix_`](../reference/generated/numpy.ix_#numpy.ix_ "numpy.ix_") function can be used to combine different vectors so as to obtain the result for each n-uplet. For example, if you want to compute `a + b*c` for all the triplets taken from each of the vectors `a`, `b` and `c`:

```python
>>> a = np.array([2, 3, 4, 5])
>>> b = np.array([8, 5, 4])
>>> c = np.array([5, 4, 6, 8, 3])
>>> ax, bx, cx = np.ix_(a, b, c)
>>> ax
array([[[2]],
       [[3]],
       [[4]],
       [[5]]])
>>> bx
array([[[8],
        [5],
        [4]]])
>>> cx
array([[[5, 4, 6, 8, 3]]])
>>> ax.shape, bx.shape, cx.shape
((4, 1, 1), (1, 3, 1), (1, 1, 5))
>>> result = ax + bx * cx
>>> result
array([[[42, 34, 50, 66, 26],
        [27, 22, 32, 42, 17],
        [22, 18, 26, 34, 14]],
       [[43, 35, 51, 67, 27],
        [28, 23, 33, 43, 18],
        [23, 19, 27, 35, 15]],
       [[44, 36, 52, 68, 28],
        [29, 24, 34, 44, 19],
        [24, 20, 28, 36, 16]],
       [[45, 37, 53, 69, 29],
        [30, 25, 35, 45, 20],
        [25, 21, 29, 37, 17]]])
>>> result[3, 2, 4]
17
>>> a[3] + b[2] * c[4]
17
```

You could also implement the reduce as follows:

```python
>>> def ufunc_reduce(ufct, *vectors):
...     vs = np.ix_(*vectors)
...     r = ufct.identity
...     for v in vs:
...         r = ufct(r, v)
...     return r
```

and then use it as:

```python
>>> ufunc_reduce(np.add, a, b, c)
array([[[15, 14, 16, 18, 13],
        [12, 11, 13, 15, 10],
        [11, 10, 12, 14,  9]],
       [[16, 15, 17, 19, 14],
        [13, 12, 14, 16, 11],
        [12, 11, 13, 15, 10]],
       [[17, 16, 18, 20, 15],
        [14, 13, 15, 17, 12],
        [13, 12, 14, 16, 11]],
       [[18, 17, 19, 21, 16],
        [15, 14, 16, 18, 13],
        [14, 13, 15, 17, 12]]])
```

The advantage of this version of reduce compared to the normal `ufunc.reduce` is that it makes use of the broadcasting rules in order to avoid creating an argument array the size of the output times the number of vectors.

### Indexing with strings

See [Structured arrays](basics.rec#structured-arrays).

## Tricks and tips

Here we give a list of short and useful tips.

### "Automatic" reshaping

To change the dimensions of an array, you can omit one of the sizes, which will then be deduced automatically:

```python
>>> a = np.arange(30)
>>> b = a.reshape((2, -1, 3))  # -1 means "whatever is needed"
>>> b.shape
(2, 5, 3)
>>> b
array([[[ 0,  1,  2],
        [ 3,  4,  5],
        [ 6,  7,  8],
        [ 9, 10, 11],
        [12, 13, 14]],
       [[15, 16, 17],
        [18, 19, 20],
        [21, 22, 23],
        [24, 25, 26],
        [27, 28, 29]]])
```

### Vector stacking

How do we construct a 2D array from a list of equally-sized row vectors? In MATLAB this is quite easy: if `x` and `y` are two vectors of the same length you only need do `m=[x;y]`. In NumPy this works via the functions `column_stack`, `dstack`, `hstack` and `vstack`, depending on the dimension in which the stacking is to be done. For example:

```python
>>> x = np.arange(0, 10, 2)
>>> y = np.arange(5)
>>> m = np.vstack([x, y])
>>> m
array([[0, 2, 4, 6, 8],
       [0, 1, 2, 3, 4]])
>>> xy = np.hstack([x, y])
>>> xy
array([0, 2, 4, 6, 8, 0, 1, 2, 3, 4])
```

The logic behind those functions in more than two dimensions can be strange.
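One place the stacking functions differ is on 1D inputs: `column_stack` treats each 1D vector as a column, while `hstack` concatenates them end to end; on 2D inputs the two agree. A short sketch of that distinction:

```python
import numpy as np

x = np.arange(0, 10, 2)   # array([0, 2, 4, 6, 8])
y = np.arange(5)          # array([0, 1, 2, 3, 4])

# 1D inputs: column_stack makes each vector a column of a 2D result,
# hstack just concatenates along the single axis.
cols = np.column_stack([x, y])
assert cols.shape == (5, 2)
flat = np.hstack([x, y])
assert flat.shape == (10,)

# 2D inputs: column_stack and hstack behave identically.
x2 = x[:, np.newaxis]     # shape (5, 1)
y2 = y[:, np.newaxis]
assert np.array_equal(np.column_stack([x2, y2]), np.hstack([x2, y2]))
```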
See also [NumPy for MATLAB users](numpy-for-matlab-users).

### Histograms

The NumPy `histogram` function applied to an array returns a pair of vectors: the histogram of the array and a vector of the bin edges. Beware: `matplotlib` also has a function to build histograms (called `hist`, as in Matlab) that differs from the one in NumPy. The main difference is that `pylab.hist` plots the histogram automatically, while `numpy.histogram` only generates the data.

```python
>>> import numpy as np
>>> rg = np.random.default_rng(1)
>>> import matplotlib.pyplot as plt
>>> # Build a vector of 10000 normal deviates with variance 0.5^2 and mean 2
>>> mu, sigma = 2, 0.5
>>> v = rg.normal(mu, sigma, 10000)
>>> # Plot a normalized histogram with 50 bins
>>> plt.hist(v, bins=50, density=True)  # matplotlib version (plot)
(array...)
>>> # Compute the histogram with numpy and then plot it
>>> (n, bins) = np.histogram(v, bins=50, density=True)  # NumPy version (no plot)
>>> plt.plot(.5 * (bins[1:] + bins[:-1]), n)
```

With Matplotlib >= 3.4 you can also use `plt.stairs(n, bins)`.

## Further reading

* The [Python tutorial](https://docs.python.org/tutorial/)
* [NumPy reference](../reference/index#reference)
* [SciPy Tutorial](https://docs.scipy.org/doc/scipy/tutorial/index.html)
* [SciPy Lecture Notes](https://scipy-lectures.org)
* A [matlab, R, IDL, NumPy/SciPy dictionary](https://mathesaurus.sourceforge.net/)
* [tutorial-svd](https://numpy.org/numpy-tutorials/content/tutorial-svd.html "\(in NumPy tutorials\)")

# What is NumPy?

NumPy is the fundamental package for scientific computing in Python. It is a Python library that provides a multidimensional array object, various derived objects (such as masked arrays and matrices), and an assortment of routines for fast operations on arrays, including mathematical, logical, shape manipulation, sorting, selecting, I/O, discrete Fourier transforms, basic linear algebra, basic statistical operations, random simulation and much more.
At the core of the NumPy package is the [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") object. This encapsulates _n_-dimensional arrays of homogeneous data types, with many operations being performed in compiled code for performance. There are several important differences between NumPy arrays and the standard Python sequences:

* NumPy arrays have a fixed size at creation, unlike Python lists (which can grow dynamically). Changing the size of an [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") will create a new array and delete the original.
* The elements in a NumPy array are all required to be of the same data type, and thus will be the same size in memory. The exception: one can have arrays of (Python, including NumPy) objects, thereby allowing for arrays of different sized elements.
* NumPy arrays facilitate advanced mathematical and other types of operations on large numbers of data. Typically, such operations are executed more efficiently and with less code than is possible using Python's built-in sequences.
* A growing plethora of scientific and mathematical Python-based packages use NumPy arrays; though these typically support Python-sequence input, they convert such input to NumPy arrays prior to processing, and they often output NumPy arrays. In other words, in order to efficiently use much (perhaps even most) of today's scientific/mathematical Python-based software, just knowing how to use Python's built-in sequence types is insufficient - one also needs to know how to use NumPy arrays.

The points about sequence size and speed are particularly important in scientific computing. As a simple example, consider the case of multiplying each element in a 1-D sequence with the corresponding element in another sequence of the same length.
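The fixed-size and homogeneous-dtype points can be observed directly. A brief illustrative sketch (the variable names are ours, not from the text):

```python
import numpy as np

a = np.array([1, 2, 3])
# Every element shares one dtype; mixing ints and floats upcasts all of them.
b = np.array([1, 2.5, 3])
assert b.dtype == np.float64

# "Growing" an ndarray does not resize it in place the way list.append does:
# np.append returns a brand-new array, leaving the original untouched.
c = np.append(a, 4)
assert c.shape == (4,)
assert a.shape == (3,)  # `a` is unchanged
```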
If the data are stored in two Python lists, `a` and `b`, we could iterate over each element:

```python
c = []
for i in range(len(a)):
    c.append(a[i]*b[i])
```

This produces the correct answer, but if `a` and `b` each contain millions of numbers, we will pay the price for the inefficiencies of looping in Python. We could accomplish the same task much more quickly in C by writing (for clarity we neglect variable declarations and initializations, memory allocation, etc.)

```c
for (i = 0; i < rows; i++) {
    c[i] = a[i]*b[i];
}
```

This saves all the overhead involved in interpreting the Python code and manipulating Python objects, but at the expense of the benefits gained from coding in Python. Furthermore, the coding work required increases with the dimensionality of our data. In the case of a 2-D array, for example, the C code (abridged as before) expands to

```c
for (i = 0; i < rows; i++) {
    for (j = 0; j < columns; j++) {
        c[i][j] = a[i][j]*b[i][j];
    }
}
```

NumPy gives us the best of both worlds: element-by-element operations are the "default mode" when an [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") is involved, but the element-by-element operation is speedily executed by pre-compiled C code. In NumPy

```python
c = a * b
```

does what the earlier examples do, at near-C speeds, but with the code simplicity we expect from something based on Python. Indeed, the NumPy idiom is even simpler! This last example illustrates two of NumPy's features which are the basis of much of its power: vectorization and broadcasting.

## Why is NumPy fast?

Vectorization describes the absence of any explicit looping, indexing, etc., in the code - these things are taking place, of course, just "behind the scenes" in optimized, pre-compiled C code.
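To see that the loop and the vectorized expression really compute the same thing, here is a small self-contained comparison (the sample values are ours, chosen for illustration):

```python
import numpy as np

a = list(range(1, 6))      # [1, 2, 3, 4, 5]
b = list(range(10, 15))    # [10, 11, 12, 13, 14]

# Pure-Python loop, as in the list example above
c_loop = []
for i in range(len(a)):
    c_loop.append(a[i] * b[i])

# NumPy: one vectorized expression, executed in compiled code
c_np = np.array(a) * np.array(b)

assert c_np.tolist() == c_loop  # identical results
```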
Vectorized code has many advantages, among which are:

* vectorized code is more concise and easier to read
* fewer lines of code generally means fewer bugs
* the code more closely resembles standard mathematical notation (making it easier, typically, to correctly code mathematical constructs)
* vectorization results in more "Pythonic" code. Without vectorization, our code would be littered with inefficient and difficult to read `for` loops.

Broadcasting is the term used to describe the implicit element-by-element behavior of operations; generally speaking, in NumPy all operations, not just arithmetic operations, but logical, bit-wise, functional, etc., behave in this implicit element-by-element fashion, i.e., they broadcast. Moreover, in the example above, `a` and `b` could be multidimensional arrays of the same shape, or a scalar and an array, or even two arrays with different shapes, provided that the smaller array is "expandable" to the shape of the larger in such a way that the resulting broadcast is unambiguous. For detailed "rules" of broadcasting see [Broadcasting](basics.broadcasting#basics-broadcasting).

## Who else uses NumPy?

NumPy fully supports an object-oriented approach, starting, once again, with [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"). For example, [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") is a class, possessing numerous methods and attributes. Many of its methods are mirrored by functions in the outer-most NumPy namespace, allowing the programmer to code in whichever paradigm they prefer. This flexibility has allowed the NumPy array dialect and NumPy [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") class to become the _de-facto_ language of multi-dimensional data interchange used in Python.
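The two broadcasting cases mentioned above (scalar with array, and arrays of different but compatible shapes) can be sketched concretely; the sample arrays here are ours:

```python
import numpy as np

a = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])      # shape (2, 3)

# Scalar and array: the scalar is conceptually "stretched" over every element.
doubled = a * 2
assert doubled.tolist() == [[2.0, 4.0, 6.0], [8.0, 10.0, 12.0]]

# Different shapes: a length-3 vector broadcasts across each row of `a`,
# because its trailing dimension (3) matches a's last dimension.
row = np.array([10.0, 20.0, 30.0])   # shape (3,)
shifted = a + row
assert shifted.tolist() == [[11.0, 22.0, 33.0], [14.0, 25.0, 36.0]]
```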