# Data Types

The data types supported by Ivy are as follows:

int8

int16

int32

int64

uint8

uint16

uint32

uint64

bfloat16

float16

float32

float64

bool

complex64

complex128

The supported data types are all defined at import time, with each of these set as an `ivy.Dtype` instance.
The `ivy.Dtype` class derives from `str`, and has simple logic in the constructor to verify that the string formatting is correct.
All data types can be queried as attributes of the `ivy` namespace, such as `ivy.float32` etc.

In addition, *native* data types are also specified at import time.
Likewise, these are all *initially* set as `ivy.Dtype` instances.
There is also an `ivy.NativeDtype` class defined, but this is initially set as an empty class.

The following tuples are also defined: `all_dtypes`, `all_numeric_dtypes`, `all_int_dtypes`, `all_float_dtypes`.
These each contain all possible data types which fall into the corresponding category.
Each of these tuples is also replicated in a new set of four valid tuples and a set of four invalid tuples.
When no backend is set, all data types are assumed to be valid, and so the invalid tuples are all empty, and the valid tuples are set as equal to the original four *"all"* tuples.
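A rough sketch of how such category tuples could be derived from the full dtype list is shown below. The variable names mirror those mentioned above, but the derivation itself is an illustrative assumption, not Ivy's actual initialization code.

```python
# Illustrative derivation of the category tuples; not Ivy's actual globals.
all_dtypes = (
    "int8", "int16", "int32", "int64",
    "uint8", "uint16", "uint32", "uint64",
    "bfloat16", "float16", "float32", "float64",
    "bool", "complex64", "complex128",
)
# signed and unsigned integer dtype strings all contain "int"
all_int_dtypes = tuple(d for d in all_dtypes if "int" in d)
# float dtype strings (including bfloat16) all contain "float"
all_float_dtypes = tuple(d for d in all_dtypes if "float" in d)
# everything except bool counts as numeric
all_numeric_dtypes = tuple(d for d in all_dtypes if d != "bool")

# with no backend set, everything is valid and nothing is invalid
valid_dtypes, invalid_dtypes = all_dtypes, ()
```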

However, when a backend is set, then some of these are updated.
Firstly, the `ivy.NativeDtype` is replaced with the backend-specific data type class.
Secondly, each of the native data types is replaced with the true native data types.
Thirdly, the valid data types are updated.
Finally, the invalid data types are updated.

This leaves each of the data types unmodified; for example, `ivy.float32` will still reference the original definition in `ivy/ivy/__init__.py`, whereas `ivy.native_float32` will now reference the new definition in `/ivy/functional/backends/backend/__init__.py`.

The tuples `all_dtypes`, `all_numeric_dtypes`, `all_int_dtypes` and `all_float_dtypes` are also left unmodified.
Importantly, we must ensure that unsupported data types are removed from the `ivy` namespace.
For example, torch supports `uint8`, but does not support `uint16`, `uint32` or `uint64`.
Therefore, after setting a torch backend via `ivy.set_backend('torch')`, we should no longer be able to access `ivy.uint16`.
This is handled in `ivy.set_backend()`.

## Data Type Module

The `data_type.py` module provides a variety of functions for working with data types.
A few examples include `ivy.astype()`, which copies an array to a specified data type, `ivy.broadcast_to()`, which broadcasts an array to a specified shape, and `ivy.result_type()`, which returns the dtype that results from applying the type promotion rules to the arguments.

Many functions in the `data_type.py` module are *convenience* functions, which means that they do not directly modify arrays, as explained in the Function Types section.

For example, the following are all convenience functions: `ivy.can_cast()`, which determines if one data type can be cast to another data type according to type-promotion rules; `ivy.dtype()`, which gets the data type for the input array; `ivy.set_default_dtype()`, which sets the global default data type; and `ivy.default_dtype()`, which returns the correct data type to use.

`ivy.default_dtype()` is arguably the most important function.
Any function in the functional API that receives a `dtype` argument will make use of this function, as explained below.

## Data Type Promotion

In order to ensure that the same data type is always returned when operations are performed on arrays with different data types, regardless of which backend framework is set, Ivy has its own set of data type promotion rules and corresponding functions.
These rules build directly on top of the rules outlined in the Array API Standard.

The rules are simple: all data type promotions in Ivy should adhere to a promotion table that extends the Array API Standard promotion table, together with one of two extra promotion tables depending on the precision mode, which will be explained in the following section.

In order to ensure adherence to this promotion table, many backend functions make use of the functions `ivy.promote_types()`, `ivy.type_promote_arrays()`, or `ivy.promote_types_of_inputs()`.
These functions promote the data types in the inputs and return the new data types, promote the data types of the arrays in the input and return new arrays, and promote the data types of the numeric or array inputs and return new type-promoted values, respectively.
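Conceptually, these helpers consult a promotion table keyed by pairs of dtypes. The following is a minimal sketch of a table-driven lookup in the spirit of `ivy.promote_types`; the tiny table here is illustrative only (it covers three pairs, not Ivy's full table), and the helper name is an assumption for this sketch.

```python
# Minimal sketch of table-driven type promotion; not Ivy's actual table.
_PROMOTION_TABLE = {
    ("int32", "int64"): "int64",
    ("float32", "float64"): "float64",
    # precise-mode style int/float mixed promotion, as in the example below
    ("int32", "float32"): "float64",
}


def promote_types(type1, type2):
    # the table is symmetric, so look up both orderings of the pair
    key = (type1, type2)
    if key in _PROMOTION_TABLE:
        return _PROMOTION_TABLE[key]
    return _PROMOTION_TABLE[(type2, type1)]
```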

For an example of how some of these functions are used, the implementations for `ivy.add()` in each backend framework are as follows:

JAX:

```
def add(
    x1: Union[float, JaxArray],
    x2: Union[float, JaxArray],
    /,
    *,
    out: Optional[JaxArray] = None,
) -> JaxArray:
    x1, x2 = ivy.promote_types_of_inputs(x1, x2)
    return jnp.add(x1, x2)
```

NumPy:

```
@_handle_0_dim_output
def add(
    x1: Union[float, np.ndarray],
    x2: Union[float, np.ndarray],
    /,
    *,
    out: Optional[np.ndarray] = None,
) -> np.ndarray:
    x1, x2 = ivy.promote_types_of_inputs(x1, x2)
    return np.add(x1, x2, out=out)
```

TensorFlow:

```
def add(
    x1: Union[float, tf.Tensor, tf.Variable],
    x2: Union[float, tf.Tensor, tf.Variable],
    /,
    *,
    out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
    x1, x2 = ivy.promote_types_of_inputs(x1, x2)
    return tf.experimental.numpy.add(x1, x2)
```

PyTorch:

```
def add(
    x1: Union[float, torch.Tensor],
    x2: Union[float, torch.Tensor],
    /,
    *,
    out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
    x1, x2 = ivy.promote_types_of_inputs(x1, x2)
    return torch.add(x1, x2, out=out)
```

It's important to always make use of the Ivy promotion functions as opposed to backend-specific promotion functions such as `jax.numpy.promote_types()`, `numpy.promote_types()`, `tf.experimental.numpy.promote_types()` and `torch.promote_types()`, as these will generally have promotion rules which subtly differ from one another and from Ivy's unified promotion rules.

On the other hand, each frontend framework has its own set of rules for how data types should be promoted, and its own type promoting functions `promote_types_frontend_name()` and `promote_types_of_frontend_name_inputs()` in `ivy/functional/frontends/frontend_name/__init__.py`.
We should always use these functions in any frontend implementation, to ensure we follow exactly the same promotion rules as the frontend framework uses.

It should be noted that data type promotion is only used for unifying the data types of inputs to a common one for performing various mathematical operations.
The examples shown above demonstrate the usage of the `add` operation.
As different data types cannot simply be summed, they are promoted to the least common type, according to the presented promotion table.
This ensures that functions always return specific and expected values, independently of the specified backend.

However, data type promotion is never used for increasing the accuracy or precision of computations.
This is a required condition for all operations, even if upcasting could help to avoid numerical instabilities caused by underflow or overflow.

Assume that an algorithm is required to compute the inverse of a nearly singular matrix defined in the `float32` data type.
This operation is likely to produce numerical instabilities and generate `inf` or `nan` values.
Temporarily upcasting the input matrix to `float64` for computing the inverse and then downcasting the result back to `float32` may help to produce a stable result.
However, this temporary upcasting and subsequent downcasting cannot be performed, as it is not expected by the user.
Whenever the user defines data with a specific data type, they expect a certain memory footprint.

The user expects specific behaviour and memory constraints whenever they specify and use concrete data types, and those decisions should be respected.
Therefore, Ivy does not upcast specific values to improve the stability or precision of the computation.

### Precise Mode

There are cases that arise in mixed promotion (integer and float, complex and float) that aren't covered by the Array API Standard promotion table, and the mixed promotion rules differ across use cases, as observed in different frameworks. For example, TensorFlow leaves integer/floating mixed promotion undefined to make behaviour utterly predictable (at some cost to user convenience), while NumPy avoids precision loss at all costs, even if that means casting arrays to wider-than-necessary dtypes.

#### Precise Promotion Table

This table focuses on numerical accuracy at the cost of a higher memory footprint. A 16-bit signed or unsigned integer cannot be represented at full precision by a 16-bit float, which has only 10 bits of mantissa. Therefore, it might make sense to promote integers to floats represented by twice the number of bits. There are two disadvantages of this approach:

- It still leaves `int64` and `uint64` promotion undefined, because there is no standard floating point type with enough bits of mantissa to represent their full range of values. We could relax the precision constraint and use `float64` as the upper bound for this case.
- Some operations result in types that are much wider than necessary; for example, mixed operations between `uint16` and `float16` would promote all the way to `float64`, which is not ideal.

```
with ivy.PreciseMode(True):
    print(ivy.promote_types("float32", "int32"))
# float64
```

#### Non-Precise Promotion Table

The advantage of this approach is that, outside unsigned ints, it avoids all wider-than-necessary promotions: you can never get a `float64` output without a 64-bit input, and you can never get a `float32` output without a 32-bit input. This results in convenient semantics for working on accelerators while avoiding unwanted 64-bit values. This feature of giving primacy to floating point types resembles the type promotion behaviour of PyTorch.
The disadvantage of this approach is that mixed float/integer promotion is very prone to precision loss: for example, `int64` (with a maximum value of 9.2*10^18) can be promoted to `float16` (with a maximum value of 6.5*10^4), meaning most representable values will become `inf`. However, we are fine accepting potential loss of precision (but not loss of magnitude) in mixed type promotion, which satisfies most use cases in deep learning scenarios.

```
with ivy.PreciseMode(False):
    print(ivy.promote_types("float32", "int32"))
# float32
```

## Arguments in other Functions

All `dtype` arguments are keyword-only.
All creation functions include the `dtype` argument, for specifying the data type of the created array.
Some other non-creation functions also support the `dtype` argument, such as `ivy.prod()` and `ivy.sum()`, but most functions do not include it.
The non-creation functions which do support it are generally functions that involve a compounding reduction across the array, which could result in overflows, and so an explicit `dtype` argument is useful for handling such cases.

The `dtype` argument is handled in the `infer_dtype` wrapper, for all functions which have the `@infer_dtype` decorator.
This wrapper calls `ivy.default_dtype()` in order to determine the correct data type.
As discussed in the Function Wrapping section, this is applied to all applicable functions dynamically during backend setting.

Overall, `ivy.default_dtype()` infers the data type as follows:

- if the `dtype` argument is provided, use this directly
- otherwise, if an array is present in the arguments, set `arr` to this array. This will then be used to infer the data type by calling `ivy.dtype()` on the array
- otherwise, if a *relevant* scalar is present in the arguments, set `arr` to this scalar and derive the data type from this by calling either `ivy.default_int_dtype()` or `ivy.default_float_dtype()` depending on whether the scalar is an int or float. This will either return the globally set default int data type or the globally set default float data type (settable via `ivy.set_default_int_dtype()` and `ivy.set_default_float_dtype()` respectively). An example of a *relevant* scalar is `start` in the function `ivy.arange()`, which is used to set the starting value of the returned array. Examples of *irrelevant* scalars which should **not** be used for determining the data type are `axis`, `axes`, `dims` etc., which must be integers, and control other configurations of the function being called, with no bearing at all on the data types used by that function
- otherwise, if no arrays or relevant scalars are present in the arguments, then use the global default data type, which can either be an int or float data type. This is settable via `ivy.set_default_dtype()`
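The inference order above can be sketched in plain Python as follows. This is a hedged, simplified stand-in for Ivy's internal logic: the constant names, the single `item` parameter, and the `hasattr` check for array-likes are all assumptions made for the sketch.

```python
# Simplified sketch of the dtype inference order; not Ivy's actual code.
DEFAULT_INT_DTYPE = "int32"
DEFAULT_FLOAT_DTYPE = "float32"
DEFAULT_DTYPE = "float32"


def default_dtype(dtype=None, item=None):
    # 1. an explicit dtype argument always wins
    if dtype is not None:
        return dtype
    # 2. infer from an array-like input, if one is present
    if hasattr(item, "dtype"):
        return item.dtype
    # 3. infer from a relevant scalar, if one is present
    if isinstance(item, int):
        return DEFAULT_INT_DTYPE
    if isinstance(item, float):
        return DEFAULT_FLOAT_DTYPE
    # 4. otherwise fall back to the global default data type
    return DEFAULT_DTYPE
```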

For the majority of functions which defer to `infer_dtype` for handling the data type, these steps will have been followed and the `dtype` argument will be populated with the correct value before the backend-specific implementation is even entered.
Therefore, whereas the `dtype` argument is listed as optional in the Ivy API at `ivy/functional/ivy/category_name.py`, the argument is listed as required in the backend-specific implementations at `ivy/functional/backends/backend_name/category_name.py`.

Let's take a look at the function `ivy.zeros()` as an example.
The implementation in `ivy/functional/ivy/creation.py` has the following signature:

```
@outputs_to_ivy_arrays
@handle_out_argument
@infer_dtype
@infer_device
def zeros(
    shape: Union[int, Sequence[int]],
    *,
    dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype]] = None,
    device: Optional[Union[ivy.Device, ivy.NativeDevice]] = None,
) -> ivy.Array:
```

Whereas the backend-specific implementations in `ivy/functional/backends/backend_name/creation.py` all list `dtype` as required.

JAX:

```
def zeros(
    shape: Union[int, Sequence[int]],
    *,
    dtype: jnp.dtype,
    device: jaxlib.xla_extension.Device,
) -> JaxArray:
```

NumPy:

```
def zeros(
    shape: Union[int, Sequence[int]],
    *,
    dtype: np.dtype,
    device: str,
) -> np.ndarray:
```

TensorFlow:

```
def zeros(
    shape: Union[int, Sequence[int]],
    *,
    dtype: tf.DType,
    device: str,
) -> Union[tf.Tensor, tf.Variable]:
```

PyTorch:

```
def zeros(
    shape: Union[int, Sequence[int]],
    *,
    dtype: torch.dtype,
    device: torch.device,
) -> torch.Tensor:
```

This makes it clear that these backend-specific functions are only entered once the correct `dtype` has been determined.

However, the `dtype` argument for functions which don't have the `@infer_dtype` decorator is **not** handled by `infer_dtype`, and so these defaults must be handled by the backend-specific implementations themselves.

One reason for not adding `@infer_dtype` to a function is that it includes *relevant* scalar arguments for inferring the data type from.
`infer_dtype` is not able to correctly handle such cases, and so the dtype handling is delegated to the backend-specific implementations.

For example, `ivy.full()` doesn't have the `@infer_dtype` decorator even though it has a `dtype` argument, because of the *relevant* `fill_value` argument which cannot be correctly handled by `infer_dtype`.

The PyTorch-specific implementation is as follows:

```
def full(
    shape: Union[int, Sequence[int]],
    fill_value: Union[int, float],
    *,
    dtype: Optional[Union[ivy.Dtype, torch.dtype]] = None,
    device: torch.device,
) -> Tensor:
    return torch.full(
        shape_to_tuple(shape),
        fill_value,
        dtype=ivy.default_dtype(dtype=dtype, item=fill_value, as_native=True),
        device=device,
    )
```

The implementations for all other backends follow a similar pattern to this PyTorch implementation, where the `dtype` argument is optional and `ivy.default_dtype()` is called inside the backend-specific implementation.

## Supported and Unsupported Data Types

Some backend functions (implemented in `ivy/functional/backends/`) make use of the decorators `@with_supported_dtypes` or `@with_unsupported_dtypes`, which flag the data types which this particular function does and does not support respectively for the associated backend.
Only one of these decorators can be specified for any given function.
In the case of `@with_supported_dtypes` it is assumed that all unmentioned data types are unsupported, and in the case of `@with_unsupported_dtypes` it is assumed that all unmentioned data types are supported.

The decorators take two arguments: a dictionary mapping versions of the backend framework to the dtypes which are unsupported (or supported) in those versions, and the current version of the backend framework on the user's system. Based on these, the version-specific unsupported dtypes and devices are set for the given function every time the function is called.

For Backend Functions:

```
@with_unsupported_dtypes({"2.0.1 and below": ("float16",)}, backend_version)
def expm1(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
    x = _cast_for_unary_op(x)
    return torch.expm1(x, out=out)
```

and for frontend functions we add the corresponding framework string as the second argument instead of the version.

For Frontend Functions:

```
@with_unsupported_dtypes({"2.0.1 and below": ("float16", "bfloat16")}, "torch")
def trace(input):
    if "int" in input.dtype:
        input = input.astype("int64")
    target_type = "int64" if "int" in input.dtype else input.dtype
    return ivy.astype(ivy.trace(input), target_type)
```
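Version string keys such as `"2.0.1 and below"` are resolved against the installed backend version. A much-simplified sketch of such a check is shown below; it only handles `"X.Y.Z and below"` keys with purely numeric version parts, and the helper names are assumptions for this sketch, not Ivy's actual resolver.

```python
# Simplified sketch of resolving "X.Y.Z and below" version keys.
def version_leq(v1, v2):
    # compare dotted version strings numerically, e.g. "1.13.1" <= "2.0.1"
    return tuple(map(int, v1.split("."))) <= tuple(map(int, v2.split(".")))


def is_unsupported(version_dict, backend_version, dtype):
    for key, dtypes in version_dict.items():
        if key.endswith(" and below"):
            bound = key[: -len(" and below")]
            # the entry applies when the installed version is within the bound
            if version_leq(backend_version, bound) and dtype in dtypes:
                return True
    return False
```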

For compositional functions, the supported and unsupported data types can then be inferred automatically using the helper functions `function_supported_dtypes()` and `function_unsupported_dtypes()` respectively, which traverse the abstract syntax tree of the compositional function and evaluate the relevant attributes for each primary function in the composition. The same approach applies for most stateful methods, which are themselves compositional.

It is also possible to specify supported and unsupported dtypes as a combination of both dtype classes and individual dtypes. The allowed dtype classes are: `valid`, `numeric`, `float`, `integer`, and `unsigned`.

For example, using the decorator:

```
@with_unsupported_dtypes({"2.0.1 and below": ("unsigned", "bfloat16", "float16")}, backend_version)
```

would consider all the unsigned integer dtypes (`uint8`, `uint16`, `uint32`, `uint64`), `bfloat16` and `float16` as unsupported for the function.

In order to find the supported and unsupported devices and dtypes for a function, the corresponding documentation of that function for that specific framework can be referred to. However, sometimes new unsupported dtypes are also discovered while testing, so it is suggested to explore both avenues.

It should be noted that `unsupported_dtypes` is different from `ivy.invalid_dtypes`, which consists of all the data types that every function of that particular backend does not support. If a certain `dtype` is already present in `ivy.invalid_dtypes`, then we should not add it to the `@with_unsupported_dtypes` decorator.

Sometimes, it might be possible to support a natively unsupported data type by either casting to a supported data type and then casting back, or explicitly handling these data types without deferring to a backend function at all.

An example of the former is `ivy.logical_not()` with a tensorflow backend:

```
def logical_not(
    x: Union[tf.Tensor, tf.Variable],
    /,
    *,
    out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
    return tf.logical_not(tf.cast(x, tf.bool))
```

An example of the latter is `ivy.abs()` with a tensorflow backend:

```
def abs(
    x: Union[float, tf.Tensor, tf.Variable],
    /,
    *,
    out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
    if "uint" in ivy.dtype(x):
        return x
    else:
        return tf.abs(x)
```

The `with_[un]supported_device_and_dtypes` decorators can be used for more specific cases where a certain set of dtypes is not supported by a certain device.

```
@with_unsupported_device_and_dtypes({"2.6.0 and below": {"cpu": ("int8", "int16", "uint8")}}, backend_version)
def gcd(
    x1: Union[paddle.Tensor, int, list, tuple],
    x2: Union[paddle.Tensor, float, list, tuple],
    /,
    *,
    out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
    x1, x2 = promote_types_of_inputs(x1, x2)
    return paddle.gcd(x1, x2)
```

These decorators can also be used as context managers and applied to a block of code at once, or even to a module, so that the decorator is applied to all the functions within that context. For example:

```
# we define this function each time we use this context manager
# so that context managers can access the globals in the
# module they are being used in
def globals_getter_func(x=None):
    if not x:
        return globals()
    else:
        globals()[x[0]] = x[1]

with with_unsupported_dtypes({"0.4.11 and below": ("complex",)}, backend_version):
    def f1(*args, **kwargs):
        pass

    def f2(*args, **kwargs):
        pass

    from . import activations
    from . import operations
```

In some cases, the lack of support for a particular data type by the backend function might be more difficult to handle correctly.
For example, in many cases casting to another data type will result in a loss of precision, input range, or both.
In such cases, the best solution is to simply add the data type to the `@with_unsupported_dtypes` decorator, rather than trying to implement a long and complex patch to achieve the desired behaviour.

Some cases where a data type is not supported are very subtle.
For example, `uint8` is not supported for `ivy.prod()` with a torch backend, despite `torch.prod()` handling `torch.uint8` inputs totally fine.

The reason for this is that the Array API Standard mandates that `prod()` upcasts the unsigned integer return to have the same number of bits as the default integer data type.
By default, the default integer data type in Ivy is `int32`, and so we should return an array of type `uint32` despite the input arrays being of type `uint8`.
However, torch does not support `uint32`, and so we cannot fully adhere to the requirements of the standard for `uint8` inputs.
Rather than breaking this rule and returning arrays of type `uint8` only with a torch backend, we instead opt to remove official support entirely for this combination of data type, function, and backend framework.
This avoids all of the potential confusion that could arise if we were to have inconsistent and unexpected outputs when using officially supported data types in Ivy.

Another important point to note is the case where an entire dtype series is supported or unsupported. For example, if `float16`, `float32` and `float64` are all supported or all unsupported by a framework (which could be a backend or frontend framework), then we identify that by simply replacing the individual float dtypes with the string `"float"`. The same logic is applied to other dtype series such as complex, where we replace the individual complex dtypes with the string `"complex"`.

An example is `ivy.fmin()` with a tensorflow backend:

```
@with_supported_dtypes({"2.13.0 and below": ("float",)}, backend_version)
def fmin(
x1: Union[tf.Tensor, tf.Variable],
x2: Union[tf.Tensor, tf.Variable],
/,
*,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
x1, x2 = promote_types_of_inputs(x1, x2)
x1 = tf.where(tf.math.is_nan(x1), x2, x1)
x2 = tf.where(tf.math.is_nan(x2), x1, x2)
ret = tf.experimental.numpy.minimum(x1, x2)
return ret
```

As seen in the above code, we simply use the string `"float"` instead of writing out all the float dtypes that are supported.

Another example is `ivy.floor_divide()` with a tensorflow backend:

```
@with_unsupported_dtypes({"2.13.0 and below": ("complex",)}, backend_version)
def floor_divide(
    x1: Union[float, tf.Tensor, tf.Variable],
    x2: Union[float, tf.Tensor, tf.Variable],
    /,
    *,
    out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
    x1, x2 = ivy.promote_types_of_inputs(x1, x2)
    return tf.experimental.numpy.floor_divide(x1, x2)
```

As seen in the above code, we simply use the string `"complex"` instead of writing out all the complex dtypes that are not supported.

### Supported and Unsupported Data Types Attributes

In addition to the supported / unsupported data type decorators, we also have the `unsupported_dtypes` and `supported_dtypes` attributes. These attributes operate in a manner similar to the `@with_unsupported_dtypes` and `@with_supported_dtypes` decorators.

#### Special Case

However, the major difference between the attributes and the decorators is that the attributes are set and assigned in the ivy function itself (`ivy/functional/ivy/`), while the decorators are used within the frontend (`ivy/functional/frontends/`) and backend (`ivy/functional/backends/`) to identify the supported or unsupported data types, depending on the use case.
The attributes are set for functions that don't have a specific backend implementation for each backend. We provide the backend name as one of the keys of the attribute on the framework-agnostic function (all ivy functions are framework-agnostic), which allows us to identify the supported or unsupported dtypes for each backend.

An example of an ivy function which does not have a specific backend implementation for each backend is the `einops_reduce` function. This function makes use of a third-party library, `einops`, which has its own backend-agnostic implementations.

The `unsupported_dtypes` and `supported_dtypes` attributes are assigned a dictionary mapping each backend framework name to its unsupported (or supported) dtypes. Based on this, the specific unsupported dtypes are set for the given function every time the function is called.
For example, we use the `unsupported_dtypes` attribute for the `einops_reduce` function within the ivy functional API as shown below:

```
einops_reduce.unsupported_dtypes = {
"torch": ("float16",),
"tensorflow": ("complex",),
"paddle": ("complex", "uint8", "int8", "int16", "float16"),
}
```

With the above approach, we ensure that any time the backend is set to torch, the `einops_reduce` function does not support `float16`; likewise, complex dtypes are not supported with a tensorflow backend, and complex, `uint8`, `int8`, `int16` and `float16` are not supported with a paddle backend.

## Backend Data Type Bugs

In some cases, the lack of support might just be a bug which will likely be resolved in a future release of the framework.
In these cases, as well as adding to the `unsupported_dtypes` attribute, we should also add a `#ToDo` comment in the implementation, explaining that support for the data type will be added as soon as the bug is fixed, with a link to an associated open issue in the framework repository included in the comment.

For example, the following code throws an error when `dtype` is `torch.int32`, but not when it is `torch.int64`.
This was tested with torch version `1.12.1`.
This is a known bug:

```
dtype = torch.int32 # or torch.int64
x = torch.randint(1, 10, ([1, 2, 3]), dtype=dtype)
torch.tensordot(x, x, dims=([0], [0]))
```

Despite `torch.int32` working correctly with `torch.tensordot()` in the vast majority of cases, our solution is to still add `"int32"` to the unsupported dtypes, which will prevent the unit tests from failing in the CI.
We also add the following comment above the decorator:

```
# ToDo: re-add int32 support once
# (https://github.com/pytorch/pytorch/issues/84530) is fixed
@with_unsupported_dtypes({"2.0.1 and below": ("int32",)}, backend_version)
```

Similarly, the following code throws an error for torch version `1.11.0`, but not for `1.12.1`:

```
x = torch.tensor([0], dtype=torch.float32)
torch.cumsum(x, axis=0, dtype=torch.bfloat16)
```

Writing short-lived patches for these temporary issues would add unwarranted complexity to the backend implementations, and introduce the risk of forgetting about the patch, needlessly bloating the codebase with redundant code. In such cases, we can explicitly flag which versions support which data types like so:

```
@with_unsupported_dtypes(
    {"2.0.1 and below": ("uint8", "bfloat16", "float16"), "1.12.1": ()}, backend_version
)
def cumsum(
    x: torch.Tensor,
    axis: int = 0,
    exclusive: bool = False,
    reverse: bool = False,
    *,
    dtype: Optional[torch.dtype] = None,
    out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
```

In the above example the `torch.cumsum` function undergoes changes in its unsupported dtypes from one version to another.
Starting from version `1.12.1` it doesn't have any unsupported dtypes.
The decorator assigns the version-specific unsupported dtypes to the function, and if the current version is not found in the dictionary, it defaults to the behaviour of the last known version.

The same workflow has been implemented for `supported_dtypes`, `unsupported_devices` and `supported_devices`.

The slight downside of this approach is that there is less data type coverage for each version of each backend, but taking responsibility for patching this support for all versions would substantially inflate the implementational requirements for ivy, and so we have decided to opt out of this responsibility!

## Data Type Casting Modes

As discussed earlier, many backend functions have a set of unsupported dtypes which are otherwise supported by the backend itself. This raises the question of whether we should support these dtypes by casting them to some other, close dtype. We avoid manually casting unsupported dtypes for the most part, as this could be seen as undesirable behaviour by some users. This is why we have various dtype casting modes, which give users the option to automatically cast unsupported dtype operations to a supported and nearly equivalent dtype.

There are currently four modes that accomplish this:

- `upcast_data_types`
- `downcast_data_types`
- `crosscast_data_types`
- `cast_data_types`

The `upcast_data_types` mode casts an unsupported dtype to the next highest supported dtype in the same dtype group, i.e. if the unsupported dtype encountered is `uint8`, then this mode will try to upcast it to the next available supported `uint` dtype. If no higher `uint` dtype is available, then no upcasting is performed. You can set this mode by calling `ivy.upcast_data_types()` with an optional `val` keyword argument that defaults to `True`.
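The "next highest supported dtype in the same group" lookup might be sketched as follows. The group table and helper name here are hypothetical, intended only to illustrate the rule; Ivy's actual implementation may differ.

```python
# Loose sketch of the upcasting mode's dtype lookup; names are hypothetical.
_DTYPE_GROUPS = {
    "uint": ("uint8", "uint16", "uint32", "uint64"),
    "int": ("int8", "int16", "int32", "int64"),
    "float": ("float16", "float32", "float64"),
}


def next_supported_upcast(dtype, supported):
    for group in _DTYPE_GROUPS.values():
        if dtype in group:
            # scan the wider dtypes in the same group, narrowest first
            for candidate in group[group.index(dtype) + 1:]:
                if candidate in supported:
                    return candidate
    # no higher supported dtype in the group: no upcasting is performed
    return None
```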

Similarly, `downcast_data_types` tries to downcast to the next lower supported dtype in the same dtype group. No casting is performed if no lower dtype is found in the same group.
It can be set by calling `ivy.downcast_data_types()` with the optional `val` keyword argument that defaults to `True`.

`crosscast_data_types` is for cases when a function doesn't support `int` dtypes, but supports `float`, and vice-versa. In such cases, we cast to the default supported `float` dtype in the unsupported integer case, or to the default supported `int` dtype in the unsupported `float` case.

The `cast_data_types` mode is the combination of the three modes discussed so far. It works its way from crosscasting to upcasting and finally to downcasting, to provide support for any unsupported dtype that is encountered by a function.

The decorator below shows the unsupported dtypes for `expm1` with a torch backend; it doesn't support `float16`. We will see how we can still pass `float16` arrays and watch the call succeed under the different modes.

Example of Upcasting mode :

```
@with_unsupported_dtypes({"2.0.1 and below": ("float16", "complex")}, backend_version)
@handle_numpy_arrays_in_specific_backend
def expm1(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
    x = _cast_for_unary_op(x)
    return torch.expm1(x, out=out)
```

The function `expm1` has `float16` as one of its unsupported dtypes for version `2.0.1`, which was being used for execution at the time of writing. We will see how the casting modes handle this.

```
import ivy

ivy.set_backend('torch')
ret = ivy.expm1(ivy.array([1], dtype='float16'))  # raises exception
ivy.upcast_data_types()
ret = ivy.expm1(ivy.array([1], dtype='float16'))  # doesn't raise exception
```

Example of Downcasting mode :

```
import ivy

ivy.set_backend('torch')
ret = ivy.expm1(ivy.array([1], dtype='float16'))  # raises exception
ivy.downcast_data_types()
ret = ivy.expm1(ivy.array([1], dtype='float16'))  # doesn't raise exception
```

Example of Mixed casting mode :

```
import ivy

ivy.set_backend('torch')
ret = ivy.expm1(ivy.array([1], dtype='float16'))  # raises exception
ivy.cast_data_types()
ret = ivy.expm1(ivy.array([1], dtype='float16'))  # doesn't raise exception
```

Example of Cross casting mode :

```
@with_unsupported_dtypes({"2.0.1 and below": ("float",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def lcm(
    x1: torch.Tensor,
    x2: torch.Tensor,
    /,
    *,
    out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
    x1, x2 = promote_types_of_inputs(x1, x2)
    return torch.lcm(x1, x2, out=out)
```

This function doesn't support any of the `float` dtypes, so we will see how cross casting mode can enable `float` dtypes to be passed here too.

```
import ivy

ivy.set_backend('torch')
ret = ivy.lcm(ivy.array([1], dtype='float16'), ivy.array([1], dtype='float16'))  # raises exception
ivy.crosscast_data_types()
ret = ivy.lcm(ivy.array([1], dtype='float16'), ivy.array([1], dtype='float16'))  # doesn't raise exception
```

Since none of the `float` dtypes are supported by the `lcm` function in `torch`, the input is cast to the default integer dtype, i.e. `int32`.

While casting modes can handle a lot of cases, they don't guarantee 100% support for the unsupported dtypes. In cases where there is no other supported dtype available to cast to, the casting mode won't work and the function will throw the usual error. Since casting modes simply try to cast an array or dtype to a different one that the given function supports, they are not intended to provide optimal performance or precision, and hence should be avoided if these are the prime concerns of the user.

Together, these modes provide some level of flexibility to users when they encounter functions that don't support a dtype which is otherwise supported by the backend. However, it should be well understood that this may lead to loss of precision and/or an increase in memory consumption.

## Superset Data Type Support

As explained in the superset section of the Deep Dive, we generally go for the superset of behaviour for all Ivy functions, and data type support is no exception.
Some backends like tensorflow do not support integer array inputs for certain functions.
For example, `tensorflow.cos()` only supports non-integer values.
However, backends like torch and JAX support integer arrays as inputs.
To ensure that integer types are supported in Ivy when a tensorflow backend is set, we simply promote any integer array passed to the function to the default float dtype.
As with all superset design decisions, this behaviour makes it much easier to support all frameworks in our frontends, without the need for lots of extra logic for handling integer array inputs for the frameworks which support them natively.
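The promotion pattern described above can be illustrated with a small pure-Python stand-in. The function name, the list-based "array", and the `DEFAULT_FLOAT` constant are all simplifying assumptions for this sketch; it only mirrors the idea of casting integer inputs to the default float dtype before calling an operation that rejects integers.

```python
import math

# Pure-Python stand-in for the "promote integer inputs to the default float
# dtype" pattern; not Ivy's actual backend code.
DEFAULT_FLOAT = float


def cos(values):
    # promote integer elements before computing, mirroring how an integer
    # array would be cast for a backend whose cos rejects integer inputs
    values = [DEFAULT_FLOAT(v) if isinstance(v, int) else v for v in values]
    return [math.cos(v) for v in values]
```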

**Round Up**

This should have hopefully given you a good feel for data types, and how these are handled in Ivy.

If you have any questions, please feel free to reach out on discord in the data types thread!
