How is einx notation universal?

To address this question, let’s first look at how tensor operations are commonly expressed in existing tensor frameworks.

Classical notation

Tensor operations can be dissected into two distinct components:

1. An elementary operation that is performed. Example: np.sum computes a sum-reduction.
2. A division of the input tensor into sub-tensors. The elementary operation is applied to each sub-tensor independently. We refer to this as vectorization. Example: Sub-tensors in np.sum span the dimensions specified by the axis parameter. The sum-reduction is vectorized over all other dimensions.
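
To make the two components concrete, here is a minimal NumPy sketch (the tensor shape is chosen purely for illustration): the elementary operation is the sum-reduction over the last axis, and vectorization means that this reduction is applied independently to every sub-tensor spanned by that axis.

import numpy as np

# An example (3, 4, 5) tensor.
x = np.random.rand(3, 4, 5)

# Elementary operation: sum-reduction over the last axis.
# Vectorization: the reduction is applied independently to each of the
# 3 * 4 sub-tensors of shape (5,), i.e. it is vectorized over the first two axes.
y = np.sum(x, axis=-1)
print(y.shape)  # (3, 4)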

In common tensor frameworks like NumPy, PyTorch, TensorFlow, or JAX, different elementary operations are implemented with different vectorization rules. For example, to express vectorization:

- np.sum uses the axis parameter,
- np.add follows implicit broadcasting rules (e.g. in combination with np.newaxis), and
- np.matmul provides an implicit and custom set of rules.
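
The following sketch contrasts these three mechanisms side by side in NumPy (shapes are illustrative only): each operation expresses the same idea of applying an elementary operation to sub-tensors, but through a different interface.

import numpy as np

a = np.random.rand(3, 4, 5)

# np.sum: vectorization is expressed via the axis parameter.
s = np.sum(a, axis=1)                      # shape (3, 5)

# np.add: vectorization follows broadcasting rules, often combined
# with np.newaxis to align axes explicitly.
c = np.random.rand(3, 5)
d = a + c[:, np.newaxis, :]                # (3, 4, 5) + (3, 1, 5) -> (3, 4, 5)

# np.matmul: vectorization follows its own built-in convention, where
# leading dimensions are treated as batch dimensions.
m1 = np.random.rand(3, 4, 5)
m2 = np.random.rand(3, 5, 6)
m = np.matmul(m1, m2)                      # shape (3, 4, 6)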