heat
Adds modules/namespaces to the heat namespace.
Subpackages
- heat.classification
- heat.cluster
- heat.core
- heat.linalg
- heat._operations
- heat.arithmetics
- heat.base
- heat.communication
- heat.complex_math
- heat.constants
- heat.devices
- heat.dndarray
- heat.exponential
- heat.factories
- heat.indexing
- heat.io
- heat.logical
- heat.manipulations
- heat.memory
- heat.printing
- heat.random
- heat.relational
- heat.rounding
- heat.sanitation
- heat.signal
- heat.statistics
- heat.stride_tricks
- heat.tiling
- heat.trigonometrics
- heat.types
- heat.version
- heat.datasets
- heat.fft
- heat.graph
- heat.naive_bayes
- heat.nn
- heat.optim
- heat.preprocessing
- heat.regression
- heat.sparse
- heat.spatial
- heat.utils
Package Contents
- class finfo
Class describing machine limits (bit representation) of floating point types.
- Variables:
bits (int) – The number of bits occupied by the type.
eps (float) – The smallest representable positive number such that 1.0 + eps != 1.0. The type of eps is an appropriate floating point type.
max (float) – The largest representable number.
min (float) – The smallest representable number, typically -max.
tiny (float) – The smallest positive usable number. The type of tiny is an appropriate floating point type.
- Parameters:
dtype (datatype) – Kind of floating point data-type about which to get information.
Examples
>>> import heat as ht
>>> info = ht.types.finfo(ht.float32)
>>> info.bits
32
>>> info.eps
1.1920928955078125e-07
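For intuition, the same machine-limit quantities exist for Python's built-in float (IEEE 754 float64) via the standard library's sys.float_info; a minimal sketch, not using heat:

```python
import sys

# eps is the smallest positive number for which 1.0 + eps is still
# distinguishable from 1.0; anything smaller is rounded away.
eps = sys.float_info.epsilon
print(1.0 + eps != 1.0)       # True: eps is large enough to change 1.0
print(1.0 + eps / 2 == 1.0)   # True: half of eps rounds back to 1.0
print(sys.float_info.max)     # largest finite float64
```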
- class iinfo
Class describing machine limits (bit representation) of integer types.
- Variables:
bits (int) – The number of bits occupied by the type.
max (int) – The largest representable number.
min (int) – The smallest representable number, typically -max - 1 for signed types.
- Parameters:
dtype (datatype) – Kind of integer data-type about which to get information.
Examples
>>> import heat as ht
>>> info = ht.types.iinfo(ht.int32)
>>> info.bits
32
- __version__ : str
The combined version string, consisting of major, minor, micro, and possibly an extension.
- add(t1: heat.core.dndarray.DNDarray | float, t2: heat.core.dndarray.DNDarray | float, /, out: heat.core.dndarray.DNDarray | None = None, *, where: bool | heat.core.dndarray.DNDarray = True) heat.core.dndarray.DNDarray
Element-wise addition of values from two operands, commutative. Takes the first and second operand (scalar or DNDarray) whose elements are to be added as argument and returns a DNDarray containing the results of element-wise addition of t1 and t2.
- Parameters:
t1 (DNDarray or scalar) – The first operand involved in the addition
t2 (DNDarray or scalar) – The second operand involved in the addition
out (DNDarray, optional) – The output array. It must have a shape that the inputs broadcast to and matching split axis. If not provided, a freshly allocated array is returned.
where (DNDarray, optional) – Condition to broadcast over the inputs. At locations where the condition is True, the out array will be set to the added value. Elsewhere, the out array will retain its original value. If an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. If distributed, the split axis (after broadcasting if required) must match that of the out array.
Examples
>>> import heat as ht
>>> ht.add(1.0, 4.0)
DNDarray(5., dtype=ht.float32, device=cpu:0, split=None)
>>> T1 = ht.float32([[1, 2], [3, 4]])
>>> T2 = ht.float32([[2, 2], [2, 2]])
>>> ht.add(T1, T2)
DNDarray([[3., 4.],
          [5., 6.]], dtype=ht.float32, device=cpu:0, split=None)
>>> s = 2.0
>>> ht.add(T1, s)
DNDarray([[3., 4.],
          [5., 6.]], dtype=ht.float32, device=cpu:0, split=None)
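The out/where semantics shared by heat's element-wise operations can be sketched in plain Python (illustration only, not heat code; add_where is a hypothetical helper): where the condition is True the result is written, elsewhere out keeps its previous value.

```python
# Sketch of element-wise addition with a boolean mask: positions where the
# mask is False retain whatever `out` already held.
def add_where(t1, t2, out, where):
    return [a + b if w else o for a, b, o, w in zip(t1, t2, out, where)]

out = [99.0, 99.0, 99.0]
print(add_where([1.0, 2.0, 3.0], [10.0, 10.0, 10.0], out, [True, False, True]))
# [11.0, 99.0, 13.0]
```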
- bitwise_and(t1: heat.core.dndarray.DNDarray | float, t2: heat.core.dndarray.DNDarray | float, /, out: heat.core.dndarray.DNDarray | None = None, *, where: bool | heat.core.dndarray.DNDarray = True) heat.core.dndarray.DNDarray
Compute the bitwise AND of two DNDarrays t1 and t2 element-wise. Only integer and boolean types are handled. If t1.shape != t2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).
- Parameters:
t1 (DNDarray or scalar) – The first operand involved in the operation
t2 (DNDarray or scalar) – The second operand involved in the operation
out (DNDarray, optional) – The output array. It must have a shape that the inputs broadcast to and matching split axis. If not provided, a freshly allocated array is returned.
where (DNDarray, optional) – Condition to broadcast over the inputs. At locations where the condition is True, the out array will be set to the result of the operation. Elsewhere, the out array will retain its original value. If an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. If distributed, the split axis (after broadcasting if required) must match that of the out array.
Examples
>>> ht.bitwise_and(13, 17)
DNDarray(1, dtype=ht.int64, device=cpu:0, split=None)
>>> ht.bitwise_and(14, 13)
DNDarray(12, dtype=ht.int64, device=cpu:0, split=None)
>>> ht.bitwise_and(ht.array([14, 3]), 13)
DNDarray([12, 1], dtype=ht.int64, device=cpu:0, split=None)
>>> ht.bitwise_and(ht.array([11, 7]), ht.array([4, 25]))
DNDarray([0, 1], dtype=ht.int64, device=cpu:0, split=None)
>>> ht.bitwise_and(ht.array([2, 5, 255]), ht.array([3, 14, 16]))
DNDarray([ 2,  4, 16], dtype=ht.int64, device=cpu:0, split=None)
>>> ht.bitwise_and(ht.array([True, True]), ht.array([False, True]))
DNDarray([False,  True], dtype=ht.bool, device=cpu:0, split=None)
- bitwise_or(t1: heat.core.dndarray.DNDarray | float, t2: heat.core.dndarray.DNDarray | float, /, out: heat.core.dndarray.DNDarray | None = None, *, where: bool | heat.core.dndarray.DNDarray = True) heat.core.dndarray.DNDarray
Compute the bitwise OR of two DNDarrays t1 and t2 element-wise. Only integer and boolean types are handled. If t1.shape != t2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).
- Parameters:
t1 (DNDarray or scalar) – The first operand involved in the operation
t2 (DNDarray or scalar) – The second operand involved in the operation
out (DNDarray, optional) – The output array. It must have a shape that the inputs broadcast to and matching split axis. If not provided, a freshly allocated array is returned.
where (DNDarray, optional) – Condition to broadcast over the inputs. At locations where the condition is True, the out array will be set to the result of the operation. Elsewhere, the out array will retain its original value. If an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. If distributed, the split axis (after broadcasting if required) must match that of the out array.
Examples
>>> ht.bitwise_or(13, 16)
DNDarray(29, dtype=ht.int64, device=cpu:0, split=None)
>>> ht.bitwise_or(32, 2)
DNDarray(34, dtype=ht.int64, device=cpu:0, split=None)
>>> ht.bitwise_or(ht.array([33, 4]), 1)
DNDarray([33, 5], dtype=ht.int64, device=cpu:0, split=None)
>>> ht.bitwise_or(ht.array([33, 4]), ht.array([1, 2]))
DNDarray([33, 6], dtype=ht.int64, device=cpu:0, split=None)
>>> ht.bitwise_or(ht.array([2, 5, 255]), ht.array([4, 4, 4]))
DNDarray([ 6, 5, 255], dtype=ht.int64, device=cpu:0, split=None)
>>> ht.bitwise_or(ht.array([2, 5, 255, 2147483647], dtype=ht.int32), ht.array([4, 4, 4, 2147483647], dtype=ht.int32))
DNDarray([ 6, 5, 255, 2147483647], dtype=ht.int32, device=cpu:0, split=None)
>>> ht.bitwise_or(ht.array([True, True]), ht.array([False, True]))
DNDarray([True, True], dtype=ht.bool, device=cpu:0, split=None)
- bitwise_xor(t1: heat.core.dndarray.DNDarray | float, t2: heat.core.dndarray.DNDarray | float, /, out: heat.core.dndarray.DNDarray | None = None, *, where: bool | heat.core.dndarray.DNDarray = True) heat.core.dndarray.DNDarray
Compute the bitwise XOR of two arrays t1 and t2 element-wise. Only integer and boolean types are handled. If t1.shape != t2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).
- Parameters:
t1 (DNDarray or scalar) – The first operand involved in the operation
t2 (DNDarray or scalar) – The second operand involved in the operation
out (DNDarray, optional) – The output array. It must have a shape that the inputs broadcast to and matching split axis. If not provided, a freshly allocated array is returned.
where (DNDarray, optional) – Condition to broadcast over the inputs. At locations where the condition is True, the out array will be set to the result of the operation. Elsewhere, the out array will retain its original value. If an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. If distributed, the split axis (after broadcasting if required) must match that of the out array.
Examples
>>> ht.bitwise_xor(13, 17)
DNDarray(28, dtype=ht.int64, device=cpu:0, split=None)
>>> ht.bitwise_xor(31, 5)
DNDarray(26, dtype=ht.int64, device=cpu:0, split=None)
>>> ht.bitwise_xor(ht.array([31, 3]), 5)
DNDarray([26, 6], dtype=ht.int64, device=cpu:0, split=None)
>>> ht.bitwise_xor(ht.array([31, 3]), ht.array([5, 6]))
DNDarray([26, 5], dtype=ht.int64, device=cpu:0, split=None)
>>> ht.bitwise_xor(ht.array([True, True]), ht.array([False, True]))
DNDarray([ True, False], dtype=ht.bool, device=cpu:0, split=None)
- copysign(a: heat.core.dndarray.DNDarray, b: heat.core.dndarray.DNDarray | float | int, /, out: heat.core.dndarray.DNDarray | None = None, *, where: bool | heat.core.dndarray.DNDarray = True) heat.core.dndarray.DNDarray
Create a new floating-point tensor with the magnitude of ‘a’ and the sign of ‘b’, element-wise
- Parameters:
a (DNDarray) – The input array
b (DNDarray or Number) – value(s) whose signbit(s) are applied to the magnitudes in ‘a’
out (DNDarray, optional) – The output array. It must have a shape that the inputs broadcast to and matching split axis. If not provided, a freshly allocated array is returned.
where (DNDarray, optional) – Condition to broadcast over the inputs. At locations where the condition is True, the out array will be set to the result of the operation. Elsewhere, the out array will retain its original value. If an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. If distributed, the split axis (after broadcasting if required) must match that of the out array.
Examples
>>> ht.copysign(ht.array([3, 2, -8, -2, 4]), 1)
DNDarray([3, 2, 8, 2, 4], dtype=ht.int64, device=cpu:0, split=None)
>>> ht.copysign(ht.array([3., 2., -8., -2., 4.]), ht.array([1., -1., 1., -1., 1.]))
DNDarray([ 3., -2.,  8., -2.,  4.], dtype=ht.float32, device=cpu:0, split=None)
- cumprod(a: heat.core.dndarray.DNDarray, axis: int, dtype: heat.core.types.datatype = None, out=None) heat.core.dndarray.DNDarray
Return the cumulative product of elements along a given axis.
- Parameters:
a (DNDarray) – Input array.
axis (int) – Axis along which the cumulative product is computed.
dtype (datatype, optional) – Type of the returned array, as well as of the accumulator in which the elements are multiplied. If dtype is not specified, it defaults to the datatype of a, unless a has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used instead.
out (DNDarray, optional) – Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type of the resulting values will be cast if necessary.
Examples
>>> a = ht.full((3, 3), 2)
>>> ht.cumprod(a, 0)
DNDarray([[2., 2., 2.],
          [4., 4., 4.],
          [8., 8., 8.]], dtype=ht.float32, device=cpu:0, split=None)
- cumsum(a: heat.core.dndarray.DNDarray, axis: int, dtype: heat.core.types.datatype = None, out=None) heat.core.dndarray.DNDarray
Return the cumulative sum of the elements along a given axis.
- Parameters:
a (DNDarray) – Input array.
axis (int) – Axis along which the cumulative sum is computed.
dtype (datatype, optional) – Type of the returned array and of the accumulator in which the elements are summed. If dtype is not specified, it defaults to the datatype of a, unless a has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used.
out (DNDarray, optional) – Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type will be cast if necessary.
Examples
>>> a = ht.ones((3, 3))
>>> ht.cumsum(a, 0)
DNDarray([[1., 1., 1.],
          [2., 2., 2.],
          [3., 3., 3.]], dtype=ht.float32, device=cpu:0, split=None)
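The semantics of cumsum and cumprod along a single axis can be mimicked with the standard library's itertools.accumulate (illustration only, not heat code):

```python
from itertools import accumulate
from operator import mul

# Each output element is the running sum (or product) of all elements
# up to and including that position.
print(list(accumulate([1, 1, 1])))       # [1, 2, 3]  (cumulative sum)
print(list(accumulate([2, 2, 2], mul)))  # [2, 4, 8]  (cumulative product)
```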
- diff(a: heat.core.dndarray.DNDarray, n: int = 1, axis: int = -1, prepend: int | float | heat.core.dndarray.DNDarray = None, append: int | float | heat.core.dndarray.DNDarray = None) heat.core.dndarray.DNDarray
Calculate the n-th discrete difference along the given axis. The first difference is given by out[i] = a[i+1] - a[i] along the given axis; higher differences are calculated by using diff recursively. The shape of the output is the same as a except along axis, where the dimension is smaller by n. The datatype of the output is the same as the datatype of the difference between any two elements of a. The split does not change. The output array is balanced.
- Parameters:
a (DNDarray) – Input array
n (int, optional) – The number of times values are differenced. If zero, the input is returned as-is. n=2 is equivalent to diff(diff(a)).
axis (int, optional) – The axis along which the difference is taken, default is the last axis.
prepend (Union[int, float, DNDarray]) – Value to prepend along axis prior to performing the difference. Scalar values are expanded to arrays with length 1 in the direction of axis and the shape of the input array along all other axes. Otherwise the dimension and shape must match a except along axis.
append (Union[int, float, DNDarray]) – Values to append along axis prior to performing the difference. Scalar values are expanded to arrays with length 1 in the direction of axis and the shape of the input array along all other axes. Otherwise the dimension and shape must match a except along axis.
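The recurrence above can be sketched in plain Python for a 1-D sequence (illustration only; the diff below is a local helper, not ht.diff):

```python
# n-th discrete difference: the first difference is out[i] = a[i+1] - a[i];
# higher orders repeat the step, shrinking the length by one each time.
def diff(a, n=1):
    for _ in range(n):
        a = [a[i + 1] - a[i] for i in range(len(a) - 1)]
    return a

print(diff([1, 2, 4, 7]))       # [1, 2, 3]
print(diff([1, 2, 4, 7], n=2))  # [1, 1]  (diff of [1, 2, 3])
```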
- div(t1: heat.core.dndarray.DNDarray | float, t2: heat.core.dndarray.DNDarray | float, /, out: heat.core.dndarray.DNDarray | None = None, *, where: bool | heat.core.dndarray.DNDarray = True) heat.core.dndarray.DNDarray
Element-wise true division of values of operand t1 by values of operand t2 (i.e. t1 / t2). Operation is not commutative.
- Parameters:
t1 (DNDarray or scalar) – The first operand whose values are divided.
t2 (DNDarray or scalar) – The second operand by whose values is divided.
out (DNDarray, optional) – The output array. It must have a shape that the inputs broadcast to and matching split axis. If not provided, a freshly allocated array is returned.
where (DNDarray, optional) – Condition to broadcast over the inputs. At locations where the condition is True, the out array will be set to the divided value. Elsewhere, the out array will retain its original value. If an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. If distributed, the split axis (after broadcasting if required) must match that of the out array.
Example
>>> ht.div(2.0, 2.0)
DNDarray(1., dtype=ht.float32, device=cpu:0, split=None)
>>> T1 = ht.float32([[1, 2], [3, 4]])
>>> T2 = ht.float32([[2, 2], [2, 2]])
>>> ht.div(T1, T2)
DNDarray([[0.5000, 1.0000],
          [1.5000, 2.0000]], dtype=ht.float32, device=cpu:0, split=None)
>>> s = 2.0
>>> ht.div(s, T1)
DNDarray([[2.0000, 1.0000],
          [0.6667, 0.5000]], dtype=ht.float32, device=cpu:0, split=None)
- divmod(t1: heat.core.dndarray.DNDarray | float, t2: heat.core.dndarray.DNDarray | float, out1: heat.core.dndarray.DNDarray = None, out2: heat.core.dndarray.DNDarray = None, /, out: Tuple[heat.core.dndarray.DNDarray, heat.core.dndarray.DNDarray] = (None, None), *, where: bool | heat.core.dndarray.DNDarray = True) Tuple[heat.core.dndarray.DNDarray, heat.core.dndarray.DNDarray]
Element-wise division remainder and quotient from an integer division of values of operand t1 by values of operand t2 (i.e. the C library function divmod). The remainder has the same sign as the dividend t1. Operation is not commutative.
- Parameters:
t1 (DNDarray or scalar) – The first operand whose values are divided (may be floats)
t2 (DNDarray or scalar) – The second operand by whose values is divided (may be floats)
out1 (DNDarray, optional) – The output array for the quotient. It must have a shape that the inputs broadcast to and matching split axis. If not provided, a freshly allocated array is returned. If provided, it must be of the same shape as the expected output. Only one of out1 and out can be provided.
out2 (DNDarray, optional) – The output array for the remainder. It must have a shape that the inputs broadcast to and matching split axis. If not provided, a freshly allocated array is returned. If provided, it must be of the same shape as the expected output. Only one of out2 and out can be provided.
out (tuple of two DNDarrays, optional) – Tuple of two output arrays (quotient, remainder), respectively. Both must have a shape that the inputs broadcast to and matching split axis. If not provided, a freshly allocated array is returned. If provided, they must be of the same shape as the expected output. out1 and out2 cannot be used at the same time.
where (DNDarray, optional) – Condition to broadcast over the inputs. At locations where the condition is True, the out1 array will be set to the quotient value and the out2 array will be set to the remainder value. Elsewhere, the out1 and out2 arrays will retain their original value. If an uninitialized out1 and out2 array is created via the default out1=None and out2=None, locations within them where the condition is False will remain uninitialized. If distributed, the split axis (after broadcasting if required) must match that of the out1 and out2 arrays.
Examples
>>> ht.divmod(2.0, 2.0)
(DNDarray(1., dtype=ht.float32, device=cpu:0, split=None), DNDarray(0., dtype=ht.float32, device=cpu:0, split=None))
>>> T1 = ht.float32([[1, 2], [3, 4]])
>>> T2 = ht.float32([[2, 2], [2, 2]])
>>> ht.divmod(T1, T2)
(DNDarray([[0., 1.],
           [1., 2.]], dtype=ht.float32, device=cpu:0, split=None), DNDarray([[1., 0.],
           [1., 0.]], dtype=ht.float32, device=cpu:0, split=None))
>>> s = 2.0
>>> ht.divmod(s, T1)
(DNDarray([[2., 1.],
           [0., 0.]], dtype=ht.float32, device=cpu:0, split=None), DNDarray([[0., 0.],
           [2., 2.]], dtype=ht.float32, device=cpu:0, split=None))
- floordiv(t1: heat.core.dndarray.DNDarray | float, t2: heat.core.dndarray.DNDarray | float, /, out: heat.core.dndarray.DNDarray | None = None, *, where: bool | heat.core.dndarray.DNDarray = True) heat.core.dndarray.DNDarray
Element-wise floor division of value(s) of operand t1 by value(s) of operand t2 (i.e. t1 // t2), not commutative.
- Parameters:
t1 (DNDarray or scalar) – The first operand whose values are divided
t2 (DNDarray or scalar) – The second operand by whose values is divided
out (DNDarray, optional) – The output array. It must have a shape that the inputs broadcast to and matching split axis. If not provided, a freshly allocated array is returned.
where (DNDarray, optional) – Condition to broadcast over the inputs. At locations where the condition is True, the out array will be set to the divided value. Elsewhere, the out array will retain its original value. If an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. If distributed, the split axis (after broadcasting if required) must match that of the out array.
Examples
>>> T1 = ht.float32([[1.7, 2.0], [1.9, 4.2]])
>>> ht.floordiv(T1, 1)
DNDarray([[1., 2.],
          [1., 4.]], dtype=ht.float64, device=cpu:0, split=None)
>>> T2 = ht.float32([1.5, 2.5])
>>> ht.floordiv(T1, T2)
DNDarray([[1., 0.],
          [1., 1.]], dtype=ht.float32, device=cpu:0, split=None)
- floor_divide
Alias for floordiv()
- fmod(t1: heat.core.dndarray.DNDarray | float, t2: heat.core.dndarray.DNDarray | float, /, out: heat.core.dndarray.DNDarray | None = None, *, where: bool | heat.core.dndarray.DNDarray = True) heat.core.dndarray.DNDarray
Element-wise division remainder of values of operand t1 by values of operand t2 (i.e. the C library function fmod). The result has the same sign as the dividend t1. Operation is not commutative.
- Parameters:
t1 (DNDarray or scalar) – The first operand whose values are divided (may be floats)
t2 (DNDarray or scalar) – The second operand by whose values is divided (may be floats)
out (DNDarray, optional) – The output array. It must have a shape that the inputs broadcast to and matching split axis. If not provided, a freshly allocated array is returned. If provided, it must be of the same shape as the expected output.
where (DNDarray, optional) – Condition to broadcast over the inputs. At locations where the condition is True, the out array will be set to the divided value. Elsewhere, the out array will retain its original value. If an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. If distributed, the split axis (after broadcasting if required) must match that of the out array.
Examples
>>> ht.fmod(2.0, 2.0)
DNDarray(0., dtype=ht.float32, device=cpu:0, split=None)
>>> T1 = ht.float32([[1, 2], [3, 4]])
>>> T2 = ht.float32([[2, 2], [2, 2]])
>>> ht.fmod(T1, T2)
DNDarray([[1., 0.],
          [1., 0.]], dtype=ht.float32, device=cpu:0, split=None)
>>> s = 2.0
>>> ht.fmod(s, T1)
DNDarray([[0., 0.],
          [2., 2.]], dtype=ht.float32, device=cpu:0, split=None)
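The sign convention described above (C-library fmod: the remainder takes the sign of the dividend) differs from Python's % operator, which follows the divisor; the standard library illustrates the difference:

```python
import math

# math.fmod follows the C convention; % follows the divisor's sign.
print(math.fmod(-7.0, 3.0))  # -1.0 (sign of the dividend -7)
print(-7.0 % 3.0)            #  2.0 (sign of the divisor 3)
```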
- gcd(a: heat.core.dndarray.DNDarray, b: heat.core.dndarray.DNDarray, /, out: heat.core.dndarray.DNDarray | None = None, *, where: bool | heat.core.dndarray.DNDarray = True) heat.core.dndarray.DNDarray
Returns the greatest common divisor of |a| and |b| element-wise.
- Parameters:
a (DNDarray) – The first input array, must be of integer type
b (DNDarray) – The second input array, must be of integer type
out (DNDarray, optional) – The output array. It must have a shape that the inputs broadcast to and matching split axis. If not provided, a freshly allocated array is returned.
where (DNDarray, optional) – Condition to broadcast over the inputs. At locations where the condition is True, the out array will be set to the result of the operation. Elsewhere, the out array will retain its original value. If an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. If distributed, the split axis (after broadcasting if required) must match that of the out array.
Examples
>>> import heat as ht
>>> T1 = ht.int(ht.ones(3)) * 9
>>> T2 = ht.arange(3) + 1
>>> ht.gcd(T1, T2)
DNDarray([1, 1, 3], dtype=ht.int32, device=cpu:0, split=None)
- hypot(a: heat.core.dndarray.DNDarray, b: heat.core.dndarray.DNDarray, /, out: heat.core.dndarray.DNDarray | None = None, *, where: bool | heat.core.dndarray.DNDarray = True) heat.core.dndarray.DNDarray
Given the ‘legs’ of a right triangle, return its hypotenuse. Equivalent to \(\sqrt{a^2 + b^2}\), element-wise.
- Parameters:
a (DNDarray) – The first input array
b (DNDarray) – The second input array
out (DNDarray, optional) – The output array. It must have a shape that the inputs broadcast to and matching split axis. If not provided, a freshly allocated array is returned.
where (DNDarray, optional) – Condition to broadcast over the inputs. At locations where the condition is True, the out array will be set to the result of the operation. Elsewhere, the out array will retain its original value. If an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. If distributed, the split axis (after broadcasting if required) must match that of the out array.
Examples
>>> a = ht.array([2.])
>>> b = ht.array([1., 3., 3.])
>>> ht.hypot(a, b)
DNDarray([2.2361, 3.6056, 3.6056], dtype=ht.float32, device=cpu:0, split=None)
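The same formula is available for plain scalars via the standard library's math.hypot (not heat code):

```python
import math

# Hypotenuse of a right triangle with legs a and b: sqrt(a**2 + b**2).
print(math.hypot(3.0, 4.0))             # 5.0
print(round(math.hypot(2.0, 1.0), 4))   # 2.2361, matching the first entry above
```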
- invert(a: heat.core.dndarray.DNDarray, /, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Computes the bitwise NOT of the given input DNDarray. The input array must be of integral or Boolean type. For boolean arrays, it computes the logical NOT. bitwise_not is an alias for invert.
- Parameters:
a (DNDarray) – The input array to invert. Must be of integral or Boolean types
out (DNDarray, optional) – Alternative output array in which to place the result. It must have the same shape as the expected output. The dtype of the output will be the one of the input array, unless it is logical, in which case it will be cast to int8. If not provided or None, a freshly-allocated array is returned.
Examples
>>> ht.invert(ht.array([13], dtype=ht.uint8))
DNDarray([242], dtype=ht.uint8, device=cpu:0, split=None)
>>> ht.bitwise_not(ht.array([-1, -2, 3], dtype=ht.int8))
DNDarray([ 0,  1, -4], dtype=ht.int8, device=cpu:0, split=None)
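The underlying two's-complement identity can be checked with plain Python integers (not heat code); masking to 8 bits reproduces the uint8 result shown above:

```python
# For signed integers, bitwise NOT satisfies ~x == -x - 1; masking the
# result to the type width gives the unsigned interpretation.
print(~13)          # -14, i.e. -13 - 1
print(~13 & 0xFF)   # 242, the uint8 view of the same bit pattern
```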
- lcm(a: heat.core.dndarray.DNDarray, b: heat.core.dndarray.DNDarray, /, out: heat.core.dndarray.DNDarray | None = None, *, where: bool | heat.core.dndarray.DNDarray = True) heat.core.dndarray.DNDarray
Returns the lowest common multiple of |a| and |b| element-wise.
- Parameters:
a (DNDarray or scalar) – The first input (array), must be of integer type
b (DNDarray or scalar) – The second input (array), must be of integer type
out (DNDarray, optional) – The output array. It must have a shape that the inputs broadcast to and matching split axis. If not provided, a freshly allocated array is returned.
where (DNDarray, optional) – Condition to broadcast over the inputs. At locations where the condition is True, the out array will be set to the result of the operation. Elsewhere, the out array will retain its original value. If an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. If distributed, the split axis (after broadcasting if required) must match that of the out array.
Examples
>>> a = ht.array([6, 12, 15])
>>> b = ht.array([3, 4, 5])
>>> ht.lcm(a, b)
DNDarray([ 6, 12, 15], dtype=ht.int64, device=cpu:0, split=None)
>>> s = 2
>>> ht.lcm(s, a)
DNDarray([ 6, 12, 30], dtype=ht.int64, device=cpu:0, split=None)
>>> ht.lcm(b, s)
DNDarray([ 6,  4, 10], dtype=ht.int64, device=cpu:0, split=None)
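For plain integers the standard library provides math.gcd and math.lcm (the latter since Python 3.9), which satisfy the usual identity relating the two:

```python
import math

# For positive integers, lcm(a, b) * gcd(a, b) == a * b.
a, b = 6, 4
print(math.gcd(a, b))   # 2
print(math.lcm(a, b))   # 12
print(math.gcd(a, b) * math.lcm(a, b) == a * b)  # True
```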
- left_shift(t1: heat.core.dndarray.DNDarray, t2: heat.core.dndarray.DNDarray | float, /, out: heat.core.dndarray.DNDarray | None = None, *, where: bool | heat.core.dndarray.DNDarray = True) heat.core.dndarray.DNDarray
Shift the bits of an integer to the left.
- Parameters:
t1 (DNDarray) – Input array
t2 (DNDarray or float) – Integer number of zero bits to add
out (DNDarray, optional) – Output array for the result. Must have the same shape as the expected output. The dtype of the output will be the one of the input array, unless it is logical, in which case it will be casted to int8. If not provided or None, a freshly-allocated array is returned.
where (DNDarray, optional) – Condition to broadcast over the inputs. At locations where the condition is True, the out array will be set to the shifted value. Elsewhere, the out array will retain its original value. If an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. If distributed, the split axis (after broadcasting if required) must match that of the out array.
Examples
>>> ht.left_shift(ht.array([1, 2, 3]), 1)
DNDarray([2, 4, 6], dtype=ht.int64, device=cpu:0, split=None)
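For plain integers, shifting left by n bits is multiplication by 2**n, which is what the example above computes element-wise:

```python
# Each left shift by one bit doubles the value.
xs = [1, 2, 3]
print([x << 1 for x in xs])                 # [2, 4, 6]
print(all((x << 1) == x * 2 for x in xs))   # True
```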
- mod
Alias for remainder()
- mul(t1: heat.core.dndarray.DNDarray | float, t2: heat.core.dndarray.DNDarray | float, /, out: heat.core.dndarray.DNDarray | None = None, *, where: bool | heat.core.dndarray.DNDarray = True) heat.core.dndarray.DNDarray
Element-wise multiplication (NOT matrix multiplication) of values from two operands, commutative. Takes the first and second operand (scalar or DNDarray) whose elements are to be multiplied as argument.
- Parameters:
t1 (DNDarray or scalar) – The first operand involved in the multiplication
t2 (DNDarray or scalar) – The second operand involved in the multiplication
out (DNDarray, optional) – Output array. It must have a shape that the inputs broadcast to and matching split axis. If not provided or None, a freshly-allocated array is returned.
where (DNDarray, optional) – Condition to broadcast over the inputs. At locations where the condition is True, the out array will be set to the multiplied value. Elsewhere, the out array will retain its original value. If an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. If distributed, the split axis (after broadcasting if required) must match that of the out array.
Examples
>>> ht.mul(2.0, 4.0)
DNDarray(8., dtype=ht.float32, device=cpu:0, split=None)
>>> T1 = ht.float32([[1, 2], [3, 4]])
>>> s = 3.0
>>> ht.mul(T1, s)
DNDarray([[ 3.,  6.],
          [ 9., 12.]], dtype=ht.float32, device=cpu:0, split=None)
>>> T2 = ht.float32([[2, 2], [2, 2]])
>>> ht.mul(T1, T2)
DNDarray([[2., 4.],
          [6., 8.]], dtype=ht.float32, device=cpu:0, split=None)
- nan_to_num(a: heat.core.dndarray.DNDarray, nan: float = 0.0, posinf: float = None, neginf: float = None, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Replaces NaNs, positive infinity values, and negative infinity values in the input ‘a’ with the values specified by nan, posinf, and neginf, respectively. By default, NaNs are replaced with zero, positive infinity is replaced with the greatest finite value representable by input’s dtype, and negative infinity is replaced with the least finite value representable by input’s dtype.
- Parameters:
a (DNDarray) – Input array.
nan (float, optional) – Value to be used to replace NaNs. Default value is 0.0.
posinf (float, optional) – Value to replace positive infinity values with. If None, positive infinity values are replaced with the greatest finite value of the input’s dtype. Default value is None.
neginf (float, optional) – Value to replace negative infinity values with. If None, negative infinity values are replaced with the greatest negative finite value of the input’s dtype. Default value is None.
out (DNDarray, optional) – Alternative output array in which to place the result. It must have the same shape as the expected output, but the datatype of the output values will be cast if necessary.
Examples
>>> x = ht.array([float('nan'), float('inf'), -float('inf')])
>>> ht.nan_to_num(x)
DNDarray([ 0.0000e+00,  3.4028e+38, -3.4028e+38], dtype=ht.float32, device=cpu:0, split=None)
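A plain-Python sketch of the replacement rules (illustration only, not heat code; the float64 limits from sys.float_info stand in for the input dtype's finite bounds):

```python
import math
import sys

# NaN -> nan, +inf -> posinf, -inf -> neginf, finite values pass through.
def nan_to_num(xs, nan=0.0, posinf=sys.float_info.max, neginf=-sys.float_info.max):
    out = []
    for x in xs:
        if math.isnan(x):
            out.append(nan)
        elif math.isinf(x):
            out.append(posinf if x > 0 else neginf)
        else:
            out.append(x)
    return out

print(nan_to_num([float("nan"), float("inf"), -float("inf"), 1.5]))
```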
- nanprod(a: heat.core.dndarray.DNDarray, axis: int | Tuple[int, Ellipsis] = None, out: heat.core.dndarray.DNDarray = None, keepdims: bool = None) heat.core.dndarray.DNDarray
Return the product of array elements over a given axis treating Not a Numbers (NaNs) as one.
- Parameters:
a (DNDarray) – Input array.
axis (None or int or Tuple[int,...], optional) – Axis or axes along which a product is performed. The default, axis=None, will calculate the product of all the elements in the input array. If axis is negative it counts from the last to the first axis. If axis is a tuple of ints, a product is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before.
out (DNDarray, optional) – Alternative output array in which to place the result. It must have the same shape as the expected output, but the datatype of the output values will be cast if necessary.
keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
Examples
>>> ht.nanprod(ht.array([4., ht.nan]))
DNDarray(4., dtype=ht.float32, device=cpu:0, split=None)
>>> ht.nanprod(ht.array([[1., ht.nan], [3., 4.]]))
DNDarray(12., dtype=ht.float32, device=cpu:0, split=None)
>>> ht.nanprod(ht.array([[1., ht.nan], [ht.nan, 4.]]), axis=1)
DNDarray([1., 4.], dtype=ht.float32, device=cpu:0, split=None)
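The "treat NaN as one" rule can be sketched for a flat sequence in plain Python (illustration only; nanprod below is a local helper, not ht.nanprod):

```python
import math

# Multiply all non-NaN values; skipping a NaN is equivalent to
# multiplying by one.
def nanprod(xs):
    prod = 1.0
    for x in xs:
        if not math.isnan(x):
            prod *= x
    return prod

print(nanprod([4.0, float("nan")]))             # 4.0
print(nanprod([1.0, float("nan"), 3.0, 4.0]))   # 12.0
```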
- nansum(a: heat.core.dndarray.DNDarray, axis: int | Tuple[int, Ellipsis] = None, out: heat.core.dndarray.DNDarray = None, keepdims: bool = None) heat.core.dndarray.DNDarray
Sum of array elements over a given axis, treating Not a Numbers (NaNs) as zero. Returns an array with the same shape as the input except for the specified axis, which becomes one, e.g. a.shape=(1, 2, 3) => ht.ones((1, 2, 3)).sum(axis=1).shape=(1, 1, 3)
- Parameters:
a (DNDarray) – Input array.
axis (None or int or Tuple[int,...], optional) – Axis along which a sum is performed. The default, axis=None, will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis. If axis is a tuple of ints, a sum is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before.
out (DNDarray, optional) – Alternative output array in which to place the result. It must have the same shape as the expected output, but the datatype of the output values will be cast if necessary.
keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
Examples
>>> ht.nansum(ht.ones(2))
DNDarray(2., dtype=ht.float32, device=cpu:0, split=None)
>>> ht.nansum(ht.ones((3,3)))
DNDarray(9., dtype=ht.float32, device=cpu:0, split=None)
>>> ht.nansum(ht.ones((3,3)).astype(ht.int))
DNDarray(9, dtype=ht.int64, device=cpu:0, split=None)
>>> ht.nansum(ht.ones((3,2,1)), axis=-3)
DNDarray([[3.], [3.]], dtype=ht.float32, device=cpu:0, split=None)
- neg(a: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Element-wise negation of a.
- Parameters:
a (DNDarray) – The input array.
out (DNDarray, optional) – Alternative output array in which to place the result.
Examples
>>> ht.neg(ht.array([-1, 1]))
DNDarray([ 1, -1], dtype=ht.int64, device=cpu:0, split=None)
>>> -ht.array([-1., 1.])
DNDarray([ 1., -1.], dtype=ht.float32, device=cpu:0, split=None)
- pos(a: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Element-wise positive of a.
- Parameters:
a (DNDarray) – The input array.
out (DNDarray, optional) – Alternative output array in which to place the result.
Notes
Equivalent to a.copy().
Examples
>>> ht.pos(ht.array([-1, 1]))
DNDarray([-1, 1], dtype=ht.int64, device=cpu:0, split=None)
>>> +ht.array([-1., 1.])
DNDarray([-1., 1.], dtype=ht.float32, device=cpu:0, split=None)
- pow(t1: heat.core.dndarray.DNDarray | float, t2: heat.core.dndarray.DNDarray | float, /, out: heat.core.dndarray.DNDarray | None = None, *, where: bool | heat.core.dndarray.DNDarray = True) heat.core.dndarray.DNDarray
Element-wise power function of values of operand t1 to the power of values of operand t2 (i.e. t1 ** t2). Operation is not commutative.
- Parameters:
t1 (DNDarray or scalar) – The first operand whose values represent the base
t2 (DNDarray or scalar) – The second operand whose values represent the exponent
out (DNDarray, optional) – Output array. It must have a shape that the inputs broadcast to and matching split axis. If not provided or None, a freshly-allocated array is returned.
where (DNDarray, optional) – Condition to broadcast over the inputs. At locations where the condition is True, the out array will be set to the exponentiated value. Elsewhere, the out array will retain its original value. If an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. If distributed, the split axis (after broadcasting if required) must match that of the out array.
Examples
>>> ht.pow(3.0, 2.0)
DNDarray(9., dtype=ht.float32, device=cpu:0, split=None)
>>> T1 = ht.float32([[1, 2], [3, 4]])
>>> T2 = ht.float32([[3, 3], [2, 2]])
>>> ht.pow(T1, T2)
DNDarray([[ 1.,  8.], [ 9., 16.]], dtype=ht.float32, device=cpu:0, split=None)
>>> s = 3.0
>>> ht.pow(T1, s)
DNDarray([[ 1.,  8.], [27., 64.]], dtype=ht.float32, device=cpu:0, split=None)
- prod(a: heat.core.dndarray.DNDarray, axis: int | Tuple[int, Ellipsis] = None, out: heat.core.dndarray.DNDarray = None, keepdims: bool = None) heat.core.dndarray.DNDarray
Return the product of array elements over a given axis in the form of a DNDarray shaped like a but with the specified axis removed.
- Parameters:
a (DNDarray) – Input array.
axis (None or int or Tuple[int,...], optional) – Axis or axes along which a product is performed. The default, axis=None, will calculate the product of all the elements in the input array. If axis is negative it counts from the last to the first axis. If axis is a tuple of ints, a product is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before.
out (DNDarray, optional) – Alternative output array in which to place the result. It must have the same shape as the expected output, but the datatype of the output values will be cast if necessary.
keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
Examples
>>> ht.prod(ht.array([1., 2.]))
DNDarray(2., dtype=ht.float32, device=cpu:0, split=None)
>>> ht.prod(ht.array([[1., 2.], [3., 4.]]))
DNDarray(24., dtype=ht.float32, device=cpu:0, split=None)
>>> ht.prod(ht.array([[1., 2.], [3., 4.]]), axis=1)
DNDarray([ 2., 12.], dtype=ht.float32, device=cpu:0, split=None)
- remainder(t1: heat.core.dndarray.DNDarray | float, t2: heat.core.dndarray.DNDarray | float, /, out: heat.core.dndarray.DNDarray | None = None, *, where: bool | heat.core.dndarray.DNDarray = True) heat.core.dndarray.DNDarray
Element-wise division remainder of values of operand t1 by values of operand t2 (i.e. t1 % t2). Result has the same sign as the divisor t2. Operation is not commutative.
- Parameters:
t1 (DNDarray or scalar) – The first operand whose values are divided
t2 (DNDarray or scalar) – The second operand by whose values is divided
out (DNDarray, optional) – Output array. It must have a shape that the inputs broadcast to and matching split axis. If not provided, a freshly allocated array is returned.
where (DNDarray, optional) – Condition to broadcast over the inputs. At locations where the condition is True, the out array will be set to the divided value. Elsewhere, the out array will retain its original value. If an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. If distributed, the split axis (after broadcasting if required) must match that of the out array.
Examples
>>> ht.remainder(2, 2)
DNDarray(0, dtype=ht.int64, device=cpu:0, split=None)
>>> T1 = ht.int32([[1, 2], [3, 4]])
>>> T2 = ht.int32([[2, 2], [2, 2]])
>>> ht.remainder(T1, T2)
DNDarray([[1, 0], [1, 0]], dtype=ht.int32, device=cpu:0, split=None)
>>> s = 2
>>> ht.remainder(s, T1)
DNDarray([[0, 0], [2, 2]], dtype=ht.int32, device=cpu:0, split=None)
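The sign convention above matches Python's built-in % operator, which can serve as a quick scalar sanity check (a plain-Python analogy, not Heat code):

```python
# Python's % also returns a remainder with the sign of the divisor,
# mirroring the convention documented for ht.remainder.
pairs = [(7, 3), (-7, 3), (7, -3), (-7, -3)]
remainders = [a % b for a, b in pairs]
print(remainders)  # [1, 2, -2, -1]
```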
- right_shift(t1: heat.core.dndarray.DNDarray | float, t2: heat.core.dndarray.DNDarray | float, /, out: heat.core.dndarray.DNDarray | None = None, *, where: bool | heat.core.dndarray.DNDarray = True) heat.core.dndarray.DNDarray
Shift the bits of an integer to the right.
- Parameters:
t1 (DNDarray or scalar) – Input array
t2 (DNDarray or scalar) – Integer number of bits to remove
out (DNDarray, optional) – Output array for the result. Must have the same shape as the expected output. The dtype of the output will be that of the input array, unless the input is logical, in which case it will be cast to int8. If not provided or None, a freshly-allocated array is returned.
where (DNDarray, optional) – Condition to broadcast over the inputs. At locations where the condition is True, the out array will be set to the shifted value. Elsewhere, the out array will retain its original value. If an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. If distributed, the split axis (after broadcasting if required) must match that of the out array.
Examples
>>> ht.right_shift(ht.array([1, 2, 3]), 1)
DNDarray([0, 1, 1], dtype=ht.int64, device=cpu:0, split=None)
- sub(t1: heat.core.dndarray.DNDarray | float, t2: heat.core.dndarray.DNDarray | float, /, out: heat.core.dndarray.DNDarray | None = None, *, where: bool | heat.core.dndarray.DNDarray = True) heat.core.dndarray.DNDarray
Element-wise subtraction of values of operand t2 from values of operand t1 (i.e. t1 - t2). Operation is not commutative.
- Parameters:
t1 (DNDarray or scalar) – The first operand from which values are subtracted
t2 (DNDarray or scalar) – The second operand whose values are subtracted
out (DNDarray, optional) – Output array. It must have a shape that the inputs broadcast to and matching split axis. If not provided or None, a freshly-allocated array is returned.
where (DNDarray, optional) – Condition to broadcast over the inputs. At locations where the condition is True, the out array will be set to the subtracted value. Elsewhere, the out array will retain its original value. If an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. If distributed, the split axis (after broadcasting if required) must match that of the out array.
Examples
>>> ht.sub(4.0, 1.0)
DNDarray(3., dtype=ht.float32, device=cpu:0, split=None)
>>> T1 = ht.float32([[1, 2], [3, 4]])
>>> T2 = ht.float32([[2, 2], [2, 2]])
>>> ht.sub(T1, T2)
DNDarray([[-1.,  0.], [ 1.,  2.]], dtype=ht.float32, device=cpu:0, split=None)
>>> s = 2.0
>>> ht.sub(s, T1)
DNDarray([[ 1.,  0.], [-1., -2.]], dtype=ht.float32, device=cpu:0, split=None)
- sum(a: heat.core.dndarray.DNDarray, axis: int | Tuple[int, Ellipsis] = None, out: heat.core.dndarray.DNDarray = None, keepdims: bool = None) heat.core.dndarray.DNDarray
Sum of array elements over a given axis. Returns an array with the same shape as the input except for the specified axis, which becomes one, e.g. a.shape=(1, 2, 3) => ht.ones((1, 2, 3)).sum(axis=1).shape=(1, 1, 3)
- Parameters:
a (DNDarray) – Input array.
axis (None or int or Tuple[int,...], optional) – Axis along which a sum is performed. The default, axis=None, will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis. If axis is a tuple of ints, a sum is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before.
out (DNDarray, optional) – Alternative output array in which to place the result. It must have the same shape as the expected output, but the datatype of the output values will be cast if necessary.
keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
Examples
>>> ht.sum(ht.ones(2))
DNDarray(2., dtype=ht.float32, device=cpu:0, split=None)
>>> ht.sum(ht.ones((3,3)))
DNDarray(9., dtype=ht.float32, device=cpu:0, split=None)
>>> ht.sum(ht.ones((3,3)).astype(ht.int))
DNDarray(9, dtype=ht.int64, device=cpu:0, split=None)
>>> ht.sum(ht.ones((3,2,1)), axis=-3)
DNDarray([[3.], [3.]], dtype=ht.float32, device=cpu:0, split=None)
- class DNDarray(array: torch.Tensor, gshape: Tuple[int, Ellipsis], dtype: heat.core.types.datatype, split: int | None, device: heat.core.devices.Device, comm: Communication, balanced: bool)
Distributed N-Dimensional array. The core element of HeAT. It is composed of PyTorch tensors local to each process.
- Parameters:
array (torch.Tensor) – Local array elements
gshape (Tuple[int,...]) – The global shape of the array
dtype (datatype) – The datatype of the array
split (int or None) – The axis on which the array is divided between processes
device (Device) – The device on which the local arrays reside (CPU or GPU)
comm (Communication) – The communications object for sending and receiving data
balanced (bool or None) – Describes whether the data are evenly distributed across processes. If this information is not available (self.balanced is None), it can be gathered via the is_balanced() method (requires communication).
- __prephalo(start, end) torch.Tensor
Extracts the halo indexed by start, end from self.array in the direction of self.split.
- Parameters:
start (int) – Start index of the halo extracted from self.array
end (int) – End index of the halo extracted from self.array
- get_halo(halo_size: int) torch.Tensor
Fetch halos of size halo_size from neighboring ranks and save them in self.halo_next / self.halo_prev.
- Parameters:
halo_size (int) – Size of the halo.
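As a toy illustration of the halo idea (plain Python, not Heat's implementation): for a 1-D chunk, the halo offered to the previous rank consists of the first halo_size items, and the halo offered to the next rank of the last halo_size items.

```python
def extract_halos(local_chunk, halo_size):
    """Hypothetical sketch: slice out the boundary regions of a local
    chunk that neighboring ranks would fetch as halos."""
    # First elements face the previous rank, last elements the next rank.
    return local_chunk[:halo_size], local_chunk[-halo_size:]

prev_halo, next_halo = extract_halos([10, 11, 12, 13, 14], 2)
print(prev_halo, next_halo)  # [10, 11] [13, 14]
```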
- __cat_halo() torch.Tensor
Return local array concatenated to halos if they are available.
- __array__() numpy.ndarray
Returns a view of the process-local slice of the DNDarray as a numpy ndarray, if the DNDarray resides on CPU. Otherwise, it returns a copy, on CPU, of the process-local slice of the DNDarray as a numpy ndarray.
- astype(dtype, copy=True) DNDarray
Returns a casted version of this array. The casted array is a new array of the same shape but with the given type. If copy is set to False, the cast is performed in-place and this array is returned.
- balance_() DNDarray
Function for balancing a DNDarray between all nodes. To determine if this is needed use the is_balanced() function. If the DNDarray is already balanced this function will do nothing. This function modifies the DNDarray itself and will not return anything.
Examples
>>> a = ht.zeros((10, 2), split=0)
>>> a[:, 0] = ht.arange(10)
>>> b = a[3:]
[0/2] tensor([[3., 0.]])
[1/2] tensor([[4., 0.], [5., 0.], [6., 0.]])
[2/2] tensor([[7., 0.], [8., 0.], [9., 0.]])
>>> print(b.gshape, b.lshape)
[0/2] (7, 2) (1, 2)
[1/2] (7, 2) (3, 2)
[2/2] (7, 2) (3, 2)
>>> b.balance_()
>>> b
[0/2] tensor([[3., 0.], [4., 0.], [5., 0.]])
[1/2] tensor([[6., 0.], [7., 0.]])
[2/2] tensor([[8., 0.], [9., 0.]])
>>> print(b.gshape, b.lshape)
[0/2] (7, 2) (3, 2)
[1/2] (7, 2) (2, 2)
[2/2] (7, 2) (2, 2)
- __cast(cast_function) float | int
Implements a generic cast function for DNDarray objects.
- Parameters:
cast_function (function) – The actual cast function, e.g. float or int
- Raises:
TypeError – If the DNDarray object cannot be converted into a scalar.
- collect_(target_rank: int | None = 0) None
A method collecting a distributed DNDarray to one MPI rank, chosen by the target_rank variable. It is a specific case of the redistribute_ method.
- Parameters:
target_rank (int, optional) – The rank to which the DNDarray will be collected. Default: 0.
- Raises:
TypeError – If the target rank is not an integer.
ValueError – If the target rank is out of bounds.
Examples
>>> st = ht.ones((50, 81, 67), split=2)
>>> print(st.lshape)
[0/2] (50, 81, 23)
[1/2] (50, 81, 22)
[2/2] (50, 81, 22)
>>> st.collect_()
>>> print(st.lshape)
[0/2] (50, 81, 67)
[1/2] (50, 81, 0)
[2/2] (50, 81, 0)
>>> st.collect_(1)
>>> print(st.lshape)
[0/2] (50, 81, 0)
[1/2] (50, 81, 67)
[2/2] (50, 81, 0)
- counts_displs() Tuple[Tuple[int], Tuple[int]]
Returns actual counts (number of items per process) and displacements (offsets) of the DNDarray. Does not assume load balance.
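For intuition: the displacements are simply the exclusive prefix sum of the counts. A minimal plain-Python sketch (hypothetical helper; the real method derives the counts from the actual distribution rather than taking them as input):

```python
from itertools import accumulate

def counts_displs(counts):
    """Given per-process item counts, return (counts, displacements),
    where each displacement is the number of items on all lower ranks."""
    displs = [0] + list(accumulate(counts))[:-1]
    return tuple(counts), tuple(displs)

# 10 items spread over 3 processes as 4 + 3 + 3:
print(counts_displs([4, 3, 3]))  # ((4, 3, 3), (0, 4, 7))
```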
- cpu() DNDarray
Returns a copy of this object in main memory. If this object is already in main memory, then no copy is performed and the original object is returned.
- create_lshape_map(force_check: bool = False) torch.Tensor
Generate a 'map' of the lshapes of the data on all processes. Units are (process rank, lshape).
- Parameters:
force_check (bool, optional) – If False (default) and the lshape map has already been created, use the previous result. Otherwise, create the lshape map anew.
- create_partition_interface()
Create a partition interface in line with the DPPY proposal. This is subject to change. The intention of this is to facilitate the usage of a general format for referencing distributed datasets.
An example of the output and shape is shown below.
- __partitioned__ = {
      'shape': (27, 3, 2),
      'partition_tiling': (4, 1, 1),
      'partitions': {
          (0, 0, 0): {'start': (0, 0, 0), 'shape': (7, 3, 2), 'data': tensor([...], dtype=torch.int32), 'location': [0], 'dtype': torch.int32, 'device': 'cpu'},
          (1, 0, 0): {'start': (7, 0, 0), 'shape': (7, 3, 2), 'data': None, 'location': [1], 'dtype': torch.int32, 'device': 'cpu'},
          (2, 0, 0): {'start': (14, 0, 0), 'shape': (7, 3, 2), 'data': None, 'location': [2], 'dtype': torch.int32, 'device': 'cpu'},
          (3, 0, 0): {'start': (21, 0, 0), 'shape': (6, 3, 2), 'data': None, 'location': [3], 'dtype': torch.int32, 'device': 'cpu'}
      },
      'locals': [(rank, 0, 0)],
      'get': lambda x: x,
  }
- Return type:
dictionary containing the partition interface as shown above.
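The start/shape bookkeeping of the example above can be reproduced in a few lines of plain Python. The sketch below (hypothetical helper partition_dict; the 'data', 'dtype', and 'device' fields are omitted) builds such a mapping for an array split along axis 0, with remainder elements going to the lowest ranks:

```python
def partition_dict(gshape, nprocs):
    """Build a __partitioned__-style mapping for an axis-0 split
    (illustrative sketch, not Heat's implementation)."""
    n = gshape[0]
    base, rem = divmod(n, nprocs)  # even chunk size and leftover items
    tail = (0,) * (len(gshape) - 1)
    parts, start = {}, 0
    for rank in range(nprocs):
        local = base + (1 if rank < rem else 0)  # low ranks absorb the remainder
        parts[(rank,) + tail] = {
            "start": (start,) + tail,
            "shape": (local,) + gshape[1:],
            "location": [rank],
        }
        start += local
    return {
        "shape": gshape,
        "partition_tiling": (nprocs,) + (1,) * (len(gshape) - 1),
        "partitions": parts,
    }

p = partition_dict((27, 3, 2), 4)
print(p["partitions"][(0, 0, 0)]["shape"])  # (7, 3, 2)
print(p["partitions"][(3, 0, 0)]["start"])  # (21, 0, 0)
```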
- fill_diagonal(value: float) DNDarray
Fill the main diagonal of a 2D DNDarray. This function modifies the input tensor in-place, and returns the input array.
- Parameters:
value (float) – The value to be placed on the DNDarray's main diagonal
- __getitem__(key: int | Tuple[int, Ellipsis] | List[int, Ellipsis]) DNDarray
Global getter function for DNDarrays. Returns a new DNDarray composed of the elements of the original tensor selected by the indices given. This does NOT redistribute or rebalance the resulting tensor. If the selection of values is unbalanced then the resulting tensor is also unbalanced! To redistribute the DNDarray use balance() (issue #187).
- Parameters:
key (int, slice, Tuple[int,...], List[int,...]) – Indices to get from the tensor.
Examples
>>> a = ht.arange(10, split=0)
(1/2) >>> tensor([0, 1, 2, 3, 4], dtype=torch.int32)
(2/2) >>> tensor([5, 6, 7, 8, 9], dtype=torch.int32)
>>> a[1:6]
(1/2) >>> tensor([1, 2, 3, 4], dtype=torch.int32)
(2/2) >>> tensor([5], dtype=torch.int32)
>>> a = ht.zeros((4, 5), split=0)
(1/2) >>> tensor([[0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.]])
(2/2) >>> tensor([[0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.]])
>>> a[1:4, 1]
(1/2) >>> tensor([0.])
(2/2) >>> tensor([0., 0.])
- is_balanced(force_check: bool = False) bool
Determine if self is distributed evenly (or as evenly as possible) across all processes. This is equivalent to returning self.balanced. If no information is available (self.balanced = None), the balanced status will be assessed via collective communication.
- Parameters:
force_check (bool, optional) – If True, the balanced status of the DNDarray will be assessed via collective communication in any case.
- is_distributed() bool
Determines whether the data of this DNDarray is distributed across multiple processes.
- item()
Returns the only element of a 1-element DNDarray. Mirror of the PyTorch command of the same name. If the DNDarray has more than one element, a ValueError is raised (by PyTorch).
Examples
>>> import heat as ht
>>> x = ht.zeros((1))
>>> x.item()
0.0
- __len__() int
The length of the DNDarray, i.e. the number of items in the first dimension.
- numpy() numpy.array
Returns a copy of the DNDarray as a numpy ndarray. If the DNDarray resides on the GPU, the underlying data will be copied to the CPU first.
If the DNDarray is distributed, an MPI Allgather operation will be performed before converting to np.ndarray, i.e. each MPI process will end up holding a copy of the entire array in memory. Make sure process memory is sufficient!
Examples
>>> import heat as ht
>>> T1 = ht.random.randn(10, 8)
>>> T1.numpy()
- __repr__() str
Computes a printable representation of the passed DNDarray.
- ravel()
Flattens the DNDarray.
Examples
>>> a = ht.ones((2, 3), split=0)
>>> b = a.ravel()
>>> a[0, 0] = 4
>>> b
DNDarray([4., 1., 1., 1., 1., 1.], dtype=ht.float32, device=cpu:0, split=0)
- redistribute_(lshape_map: torch.Tensor | None = None, target_map: torch.Tensor | None = None)
Redistributes the data of the DNDarray along the split axis to match the given target map. This function does not modify the non-split dimensions of the DNDarray. This is an abstraction and extension of the balance function.
- Parameters:
lshape_map (torch.Tensor, optional) – The current lshape of processes. Units are [rank, lshape].
target_map (torch.Tensor, optional) – The desired distribution across the processes. Units are [rank, target lshape]. Note: the only important parts of the target map are the values along the split axis; values which are not along this axis are there to mimic the shape of the lshape_map.
Examples
>>> st = ht.ones((50, 81, 67), split=2)
>>> target_map = torch.zeros((st.comm.size, 3), dtype=torch.int64)
>>> target_map[0, 2] = 67
>>> print(target_map)
[0/2] tensor([[ 0,  0, 67],
[0/2]         [ 0,  0,  0],
[0/2]         [ 0,  0,  0]], dtype=torch.int32)
[1/2] tensor([[ 0,  0, 67],
[1/2]         [ 0,  0,  0],
[1/2]         [ 0,  0,  0]], dtype=torch.int32)
[2/2] tensor([[ 0,  0, 67],
[2/2]         [ 0,  0,  0],
[2/2]         [ 0,  0,  0]], dtype=torch.int32)
>>> print(st.lshape)
[0/2] (50, 81, 23)
[1/2] (50, 81, 22)
[2/2] (50, 81, 22)
>>> st.redistribute_(target_map=target_map)
>>> print(st.lshape)
[0/2] (50, 81, 67)
[1/2] (50, 81, 0)
[2/2] (50, 81, 0)
- __redistribute_shuffle(snd_pr: int | torch.Tensor, send_amt: int | torch.Tensor, rcv_pr: int | torch.Tensor, snd_dtype: torch.dtype)
Auxiliary function used by redistribute_ to shuffle data between processes along the split axis.
- Parameters:
snd_pr (int or torch.Tensor) – Sending process
send_amt (int or torch.Tensor) – Amount of data to be sent by the sending process
rcv_pr (int or torch.Tensor) – Receiving process
snd_dtype (torch.dtype) – Torch type of the data in question
- resplit_(axis: int = None)
In-place option for resplitting a DNDarray.
- Parameters:
axis (int) – The new split axis; None denotes gathering, an int will set the new split axis.
Examples
>>> a = ht.zeros((4, 5), split=0)
>>> a.lshape
(0/2) (2, 5)
(1/2) (2, 5)
>>> ht.resplit_(a, None)
>>> a.split
None
>>> a.lshape
(0/2) (4, 5)
(1/2) (4, 5)
>>> a = ht.zeros((4, 5), split=0)
>>> a.lshape
(0/2) (2, 5)
(1/2) (2, 5)
>>> ht.resplit_(a, 1)
>>> a.split
1
>>> a.lshape
(0/2) (4, 3)
(1/2) (4, 2)
- __setitem__(key: int | Tuple[int, Ellipsis] | List[int, Ellipsis], value: float | DNDarray | torch.Tensor)
Global item setter
- Parameters:
key (Union[int, Tuple[int,...], List[int,...]]) – Index/indices to be set
value (Union[float, DNDarray,torch.Tensor]) – Value to be set to the specified positions in the DNDarray (self)
Notes
If a DNDarray is given as the value to be set then the split axes are assumed to be equal. If they are not, PyTorch will raise an error when the values are attempted to be set on the local array.
Examples
>>> a = ht.zeros((4, 5), split=0)
(1/2) >>> tensor([[0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.]])
(2/2) >>> tensor([[0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.]])
>>> a[1:4, 1] = 1
>>> a
(1/2) >>> tensor([[0., 0., 0., 0., 0.], [0., 1., 0., 0., 0.]])
(2/2) >>> tensor([[0., 1., 0., 0., 0.], [0., 1., 0., 0., 0.]])
- __setter(key: int | Tuple[int, Ellipsis] | List[int, Ellipsis], value: float | DNDarray | torch.Tensor)
Utility function for checking value and forwarding to __setitem__.
- Raises:
NotImplementedError – If the type of value is not supported
- __str__() str
Computes a string representation of the passed DNDarray.
- tolist(keepsplit: bool = False) List
Return a copy of the local array data as a (nested) Python list. For scalars, a standard Python number is returned.
- Parameters:
keepsplit (bool) – Whether the returned list should contain only the process-local data (True) or the full, global data (False, default).
Examples
>>> a = ht.array([[0, 1], [2, 3]])
>>> a.tolist()
[[0, 1], [2, 3]]
>>> a = ht.array([[0, 1], [2, 3]], split=0)
>>> a.tolist()
[[0, 1], [2, 3]]
>>> a = ht.array([[0, 1], [2, 3]], split=1)
>>> a.tolist(keepsplit=True)
(1/2) [[0], [2]]
(2/2) [[1], [3]]
- __torch_proxy__() torch.Tensor
Return a 1-element torch.Tensor strided as the global self shape. Used internally for sanitation purposes.
- __xitem_get_key_start_stop(rank: int, actives: list, key_st: int, key_sp: int, step: int, ends: torch.Tensor, og_key_st: int) Tuple[int, int]
- class BaseEstimator
Abstract base class for all estimators, i.e. parametrized analysis algorithms, in Heat. Can be used as mixin.
- _parameter_names() List[str]
Get the names of all parameters that can be set inside the constructor of the estimator.
- get_params(deep: bool = True) Dict[str, object]
Get parameters for this estimator.
- Parameters:
deep (bool, default: True) – If True, will return the parameters for this estimator and contained sub-objects that are estimators.
- __repr__(indent: int = 1) str
Returns a printable representation of the object.
- Parameters:
indent (int, default: 1) – Indicates the indentation for the top-level output.
- set_params(**params: Dict[str, object]) self
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have to be nested dictionaries.
- Parameters:
**params (dict[str, object]) – Estimator parameters to be set.
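The get_params/set_params contract can be sketched with a minimal stand-in estimator (hypothetical ToyEstimator, not part of Heat): the constructor arguments define the settable parameters, get_params reads them back from attributes, and set_params writes them and returns self for chaining.

```python
import inspect

class ToyEstimator:
    """Minimal sketch of the BaseEstimator parameter protocol."""

    def __init__(self, alpha=1.0, max_iter=100):
        self.alpha = alpha
        self.max_iter = max_iter

    def _parameter_names(self):
        # Constructor arguments (excluding self) are the settable parameters.
        return list(inspect.signature(self.__init__).parameters)

    def get_params(self):
        return {name: getattr(self, name) for name in self._parameter_names()}

    def set_params(self, **params):
        for name, value in params.items():
            setattr(self, name, value)
        return self  # allow chaining, as in the signature above

est = ToyEstimator().set_params(alpha=0.5)
print(est.get_params())  # {'alpha': 0.5, 'max_iter': 100}
```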
- class ClassificationMixin
Mixin for all classifiers in Heat.
- fit(x: heat.core.dndarray.DNDarray, y: heat.core.dndarray.DNDarray)
Fits the classification model.
- fit_predict(x: heat.core.dndarray.DNDarray, y: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Fits model and returns classes for each input sample. Convenience method; equivalent to calling fit() followed by predict().
- predict(x: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Predicts the class labels for each sample.
- Parameters:
x (DNDarray) – Values to predict the classes for. Shape = (n_samples, n_features)
- class TransformMixin
Mixin for all transformations in Heat.
- fit(x: heat.core.dndarray.DNDarray)
Fits the transformation model.
- Parameters:
x (DNDarray) – Training instances to train on. Shape = (n_samples, n_features)
- fit_transform(x: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Fits model and returns transformed data for each input sample. Convenience method; equivalent to calling fit() followed by transform().
- Parameters:
x (DNDarray) – Input data to be transformed. Shape = (n_samples, n_features)
- transform(x: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Transforms the input data.
- Parameters:
x (DNDarray) – Values to transform. Shape = (n_samples, n_features)
- class ClusteringMixin
Clustering mixin for all clusterers in Heat.
- fit(x: heat.core.dndarray.DNDarray)
Computes the clustering.
- Parameters:
x (DNDarray) – Training instances to cluster. Shape = (n_samples, n_features)
- fit_predict(x: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Compute clusters and return the predicted cluster assignment for each sample. Returns the index of the cluster each sample belongs to. Convenience method; equivalent to calling fit() followed by predict().
- Parameters:
x (DNDarray) – Input data to be clustered. Shape = (n_samples, n_features)
- class RegressionMixin
Mixin for all regression estimators in Heat.
- fit(x: heat.core.dndarray.DNDarray, y: heat.core.dndarray.DNDarray)
Fits the regression model.
- fit_predict(x: heat.core.dndarray.DNDarray, y: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Fits model and returns regression predictions for each input sample. Convenience method; equivalent to calling fit() followed by predict().
- predict(x: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Predicts the continuous labels for each sample.
- Parameters:
x (DNDarray) – Values to let the model predict. Shape = (n_samples, n_features)
- is_classifier(estimator: object) bool
Return True if the given estimator is a classifier, False otherwise.
- Parameters:
estimator (object) – Estimator object to test.
- is_transformer(estimator: object) bool
Return True if the given estimator is a transformer, False otherwise.
- Parameters:
estimator (object) – Estimator object to test.
- is_estimator(estimator: object) bool
Return True if the given estimator is an estimator, False otherwise.
- Parameters:
estimator (object) – Estimator object to test.
- is_clusterer(estimator: object) bool
Return True if the given estimator is a clusterer, False otherwise.
- Parameters:
estimator (object) – Estimator object to test.
- is_regressor(estimator: object) bool
Return True if the given estimator is a regressor, False otherwise.
- Parameters:
estimator (object) – Estimator object to test.
- sanitize_axis(shape: Tuple[int, Ellipsis], axis: int | None | Tuple[int, Ellipsis]) int | None | Tuple[int, Ellipsis]
Checks conformity of an axis with respect to a given shape. The axis will be converted to its positive equivalent and checked to be within bounds.
- Parameters:
shape (Tuple[int, ...]) – Shape of an array
axis (int or Tuple[int, ...] or None) – The axis to be sanitized
- Raises:
ValueError – if the axis cannot be sanitized, i.e. out of bounds.
TypeError – if the axis is not integral.
Examples
>>> import heat as ht
>>> ht.core.stride_tricks.sanitize_axis((5, 4, 4), 1)
1
>>> ht.core.stride_tricks.sanitize_axis((5, 4, 4), -1)
2
>>> ht.core.stride_tricks.sanitize_axis((5, 4), (1,))
(1,)
>>> ht.core.stride_tricks.sanitize_axis((5, 4), 1.0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "heat/heat/core/stride_tricks.py", line 99, in sanitize_axis
    raise TypeError("axis must be None or int or tuple, but was {}".format(type(axis)))
TypeError: axis must be None or int or tuple, but was <class 'float'>
- class MPIRequest(handle, sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any = None, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any = None, tensor: torch.Tensor = None, permutation: Tuple[int, Ellipsis] = None)
Represents a handle on a non-blocking operation
- Parameters:
handle (MPI.Communicator) – Handle for the mpi4py Communicator
sendbuf (DNDarray or torch.Tensor or Any) – The buffer for the data to be sent
recvbuf (DNDarray or torch.Tensor or Any) – The buffer for the data to be received
tensor (torch.Tensor) – Internal Data
permutation (Tuple[int,...]) – Permutation of the tensor axes
- Wait(status: mpi4py.MPI.Status = None)
Waits for an MPI request to complete
- __getattr__(name: str) Callable
Default pass-through for the communicator methods.
- Parameters:
name (str) – The name of the method to be called.
- class Communication
Base class for Communications (intended for other backends)
- is_distributed() NotImplementedError
Whether or not the Communication is distributed
- chunk(shape, split) NotImplementedError
Calculates the chunk of data that will be assigned to this compute node given a global data shape and a split axis. Returns (offset, local_shape, slices): the offset in the split dimension, the resulting local shape if the global input shape is chunked on the split axis, and the chunk slices with respect to the given shape.
- Parameters:
shape (Tuple[int,...]) – The global shape of the data to be split
split (int) – The axis along which to chunk the data
- class MPICommunication(handle=MPI.COMM_WORLD)
Bases: Communication
Class encapsulating all MPI Communication
- Parameters:
handle (MPI.Communicator) – Handle for the mpi4py Communicator
- __mpi_type_mappings
- is_distributed() bool
Determines whether the communicator is distributed, i.e. handles more than one node.
- chunk(shape: Tuple[int], split: int, rank: int = None, w_size: int = None, sparse: bool = False) Tuple[int, Tuple[int], Tuple[slice]]
Calculates the chunk of data that will be assigned to this compute node given a global data shape and a split axis. Returns (offset, local_shape, slices): the offset in the split dimension, the resulting local shape if the global input shape is chunked on the split axis, and the chunk slices with respect to the given shape.
- Parameters:
shape (Tuple[int,...]) – The global shape of the data to be split
split (int) – The axis along which to chunk the data
rank (int, optional) – Process for which the chunking is calculated, defaults to self.rank. Intended for creating chunk maps without communication.
w_size (int, optional) – The MPI world size, defaults to self.size. Intended for creating chunk maps without communication.
sparse (bool, optional) – Specifies whether the array is a sparse matrix
- counts_displs_shape(shape: Tuple[int], axis: int) Tuple[Tuple[int], Tuple[int], Tuple[int]]
Calculates the item counts, displacements and output shape for a variable-sized all-to-all MPI call (e.g. MPI_Alltoallv). The passed shape is regularly chunked along the given axis for all nodes.
- Parameters:
shape (Tuple[int,...]) – The object for which to calculate the chunking.
axis (int) – The axis along which the chunking is performed.
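The counts/displacements computation for such a v-call can be illustrated with a small self-contained sketch, under the same assumed distribution scheme as above (remainder to the lowest ranks; names are illustrative, not Heat's API):

```python
def counts_displs(n, w_size):
    # Regular chunking of n items along one axis over w_size ranks.
    base, rem = divmod(n, w_size)
    counts = [base + (1 if r < rem else 0) for r in range(w_size)]
    # Each displacement is the running sum of the preceding counts.
    displs = [sum(counts[:r]) for r in range(w_size)]
    return counts, displs

counts, displs = counts_displs(10, 4)
print(counts, displs)  # → [3, 3, 2, 2] [0, 3, 6, 8]
```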
- mpi_type_and_elements_of(obj: heat.core.dndarray.DNDarray | torch.Tensor, counts: Tuple[int], displs: Tuple[int], is_contiguous: bool | None) Tuple[mpi4py.MPI.Datatype, Tuple[int, Ellipsis]]
Determines the MPI data type and the number of respective elements for the given tensor (DNDarray or torch.Tensor). If the tensor is contiguous in memory, a native MPI data type can be used. Otherwise, a derived data type is automatically constructed using the storage information of the passed object.
- Parameters:
obj (DNDarray or torch.Tensor) – The object for which to construct the MPI data type and number of elements
counts (Tuple[int,...], optional) – Optional counts argument for variable MPI calls (e.g. Alltoallv)
displs (Tuple[int,...], optional) – Optional displacements argument for variable MPI calls (e.g. Alltoallv)
is_contiguous (bool) – Information on the global contiguity of the memory-distributed object. If None, it is determined from the local contiguity via torch.Tensor.is_contiguous().
- as_mpi_memory(obj) mpi4py.MPI.memory
Converts the passed torch.Tensor into an MPI-compatible memory view.
- Parameters:
obj (torch.Tensor) – The tensor to be converted into an MPI memory view.
- as_buffer(obj: torch.Tensor, counts: Tuple[int] = None, displs: Tuple[int] = None, is_contiguous: bool | None = None) List[mpi4py.MPI.memory | Tuple[int, int] | mpi4py.MPI.Datatype]
Converts a passed torch.Tensor into a memory buffer object with the associated number of elements and MPI data type.
- Parameters:
obj (torch.Tensor) – The object to be converted into a buffer representation.
counts (Tuple[int,...], optional) – Optional counts arguments for variable MPI-calls (e.g. Alltoallv)
displs (Tuple[int,...], optional) – Optional displacements arguments for variable MPI-calls (e.g. Alltoallv)
is_contiguous (bool, optional) – Optional information on global contiguity of the memory-distributed object.
- alltoall_sendbuffer(obj: torch.Tensor) List[mpi4py.MPI.memory | Tuple[int, int] | mpi4py.MPI.Datatype]
Converts a passed torch.Tensor into a memory buffer object with the associated number of elements and MPI data type. Note: this may not work for all MPI stacks and might require multiple type commits.
- Parameters:
obj (torch.Tensor) – The object to be transformed into a custom MPI datatype
- alltoall_recvbuffer(obj: torch.Tensor) List[mpi4py.MPI.memory | Tuple[int, int] | mpi4py.MPI.Datatype]
Converts a passed torch.Tensor into a memory buffer object with the associated number of elements and MPI data type. Note: this may not work for all MPI stacks and might require multiple type commits.
- Parameters:
obj (torch.Tensor) – The object to be transformed into a custom MPI datatype
- Free() None
Free a communicator.
- Split(color: int = 0, key: int = 0) MPICommunication
Split communicator by color and key.
- Parameters:
color (int, optional) – Determines the new communicator for a process.
key (int, optional) – Ordering within the new communicator.
- Irecv(buf: heat.core.dndarray.DNDarray | torch.Tensor | Any, source: int = MPI.ANY_SOURCE, tag: int = MPI.ANY_TAG) MPIRequest
Nonblocking receive
- Parameters:
buf (Union[DNDarray, torch.Tensor, Any]) – Buffer address at which to place the received message
source (int, optional) – Rank of the source process that sends the message
tag (int, optional) – A tag to identify the message
- Recv(buf: heat.core.dndarray.DNDarray | torch.Tensor | Any, source: int = MPI.ANY_SOURCE, tag: int = MPI.ANY_TAG, status: mpi4py.MPI.Status = None)
Blocking receive
- Parameters:
buf (Union[DNDarray, torch.Tensor, Any]) – Buffer address at which to place the received message
source (int, optional) – Rank of the source process that sends the message
tag (int, optional) – A tag to identify the message
status (MPI.Status, optional) – Details on the communication
- __send_like(func: Callable, buf: heat.core.dndarray.DNDarray | torch.Tensor | Any, dest: int, tag: int) Tuple[heat.core.dndarray.DNDarray | torch.Tensor | None]
Generic function for sending a message to the process with rank “dest”
- Parameters:
func (Callable) – The respective MPI sending function
buf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the message to be sent
dest (int, optional) – Rank of the destination process that receives the message
tag (int, optional) – A tag to identify the message
- Bsend(buf: heat.core.dndarray.DNDarray | torch.Tensor | Any, dest: int, tag: int = 0)
Blocking buffered send
- Parameters:
buf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the message to be sent
dest (int, optional) – Rank of the destination process that receives the message
tag (int, optional) – A tag to identify the message
- Ibsend(buf: heat.core.dndarray.DNDarray | torch.Tensor | Any, dest: int, tag: int = 0) MPIRequest
Nonblocking buffered send
- Parameters:
buf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the message to be sent
dest (int, optional) – Rank of the destination process that receives the message
tag (int, optional) – A tag to identify the message
- Irsend(buf: heat.core.dndarray.DNDarray | torch.Tensor | Any, dest: int, tag: int = 0) MPIRequest
Nonblocking ready send
- Parameters:
buf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the message to be sent
dest (int, optional) – Rank of the destination process that receives the message
tag (int, optional) – A tag to identify the message
- Isend(buf: heat.core.dndarray.DNDarray | torch.Tensor | Any, dest: int, tag: int = 0) MPIRequest
Nonblocking send
- Parameters:
buf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the message to be sent
dest (int, optional) – Rank of the destination process that receives the message
tag (int, optional) – A tag to identify the message
- Issend(buf: heat.core.dndarray.DNDarray | torch.Tensor | Any, dest: int, tag: int = 0) MPIRequest
Nonblocking synchronous send
- Parameters:
buf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the message to be sent
dest (int, optional) – Rank of the destination process that receives the message
tag (int, optional) – A tag to identify the message
- Rsend(buf: heat.core.dndarray.DNDarray | torch.Tensor | Any, dest: int, tag: int = 0)
Blocking ready send
- Parameters:
buf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the message to be sent
dest (int, optional) – Rank of the destination process that receives the message
tag (int, optional) – A tag to identify the message
- Ssend(buf: heat.core.dndarray.DNDarray | torch.Tensor | Any, dest: int, tag: int = 0)
Blocking synchronous send
- Parameters:
buf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the message to be sent
dest (int, optional) – Rank of the destination process that receives the message
tag (int, optional) – A tag to identify the message
- Send(buf: heat.core.dndarray.DNDarray | torch.Tensor | Any, dest: int, tag: int = 0)
Blocking send
- Parameters:
buf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the message to be sent
dest (int, optional) – Rank of the destination process that receives the message
tag (int, optional) – A tag to identify the message
- __broadcast_like(func: Callable, buf: heat.core.dndarray.DNDarray | torch.Tensor | Any, root: int) Tuple[heat.core.dndarray.DNDarray | torch.Tensor | None]
Generic function for broadcasting a message from the process with rank “root” to all other processes of the communicator
- Parameters:
func (Callable) – The respective MPI broadcast function
buf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the message to be broadcast
root (int) – Rank of the root process that broadcasts the message
- Bcast(buf: heat.core.dndarray.DNDarray | torch.Tensor | Any, root: int = 0) None
Blocking Broadcast
- Parameters:
buf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the message to be broadcast
root (int) – Rank of the root process that broadcasts the message
- Ibcast(buf: heat.core.dndarray.DNDarray | torch.Tensor | Any, root: int = 0) MPIRequest
Nonblocking Broadcast
- Parameters:
buf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the message to be broadcast
root (int) – Rank of the root process that broadcasts the message
- __reduce_like(func: Callable, sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, *args, **kwargs) Tuple[heat.core.dndarray.DNDarray | torch.Tensor | None]
Generic function for reduction operations.
- Allreduce(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, op: mpi4py.MPI.Op = MPI.SUM)
Combines values from all processes and distributes the result back to all processes
- Exscan(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, op: mpi4py.MPI.Op = MPI.SUM)
Computes the exclusive scan (partial reductions) of data on a collection of processes
- Iallreduce(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, op: mpi4py.MPI.Op = MPI.SUM) MPIRequest
Nonblocking allreduce reducing values on all processes to a single value
- Iexscan(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, op: mpi4py.MPI.Op = MPI.SUM) MPIRequest
Nonblocking Exscan
- Iscan(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, op: mpi4py.MPI.Op = MPI.SUM) MPIRequest
Nonblocking Scan
- Ireduce(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, op: mpi4py.MPI.Op = MPI.SUM, root: int = 0) MPIRequest
Nonblocking reduction operation
- Reduce(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, op: mpi4py.MPI.Op = MPI.SUM, root: int = 0)
Reduce values from all processes to a single value on process “root”
- Scan(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, op: mpi4py.MPI.Op = MPI.SUM)
Computes the scan (partial reductions) of data on a collection of processes
- __allgather_like(func: Callable, sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, axis: int, **kwargs)
Generic function for allgather operations.
- Parameters:
func (Callable) – Type of MPI Allgather function (i.e. allgather, allgatherv, iallgather)
sendbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the send message
recvbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address where to store the result
axis (int) – Concatenation axis: the axis along which sendbuf is packed and along which recvbuf puts together individual chunks
- Allgather(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recv_axis: int = 0)
Gathers data from all tasks and distributes the combined data to all tasks
- Parameters:
sendbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the send message
recvbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address where to store the result
recv_axis (int) – Concatenation axis: the axis along which sendbuf is packed and along which recvbuf puts together individual chunks
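The concatenation semantics of Allgather can be simulated without MPI; here each inner list stands for one process's send buffer, and every process ends up with the concatenation of all contributions along the receive axis (a pure-Python illustration, not Heat's implementation):

```python
def allgather_sim(local_chunks):
    # Each process contributes its local chunk; after the collective,
    # every process holds the chunks of all ranks, concatenated in
    # rank order along the receive axis (axis 0 here).
    out = []
    for chunk in local_chunks:
        out.extend(chunk)
    return out

# Three ranks, each holding two elements:
chunks = [[0, 1], [2, 3], [4, 5]]
print(allgather_sim(chunks))  # → [0, 1, 2, 3, 4, 5]
```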
- Allgatherv(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recv_axis: int = 0)
v-call of Allgather: Each process may contribute a different amount of data.
- Parameters:
sendbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the send message
recvbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address where to store the result
recv_axis (int) – Concatenation axis: the axis along which sendbuf is packed and along which recvbuf puts together individual chunks
- Iallgather(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recv_axis: int = 0) MPIRequest
Nonblocking Allgather.
- Parameters:
sendbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the send message
recvbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address where to store the result
recv_axis (int) – Concatenation axis: the axis along which sendbuf is packed and along which recvbuf puts together individual chunks
- Iallgatherv(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recv_axis: int = 0)
Nonblocking v-call of Allgather: Each process may contribute a different amount of data.
- Parameters:
sendbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the send message
recvbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address where to store the result
recv_axis (int) – Concatenation axis: the axis along which sendbuf is packed and along which recvbuf puts together individual chunks
- __alltoall_like(func: Callable, sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, send_axis: int, recv_axis: int, **kwargs)
Generic function for alltoall operations.
- Parameters:
func (Callable) – Specific alltoall function
sendbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the send message
recvbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address where to store the result
send_axis (int) – Future split axis, along which data blocks will be created that will be sent to individual ranks. If send_axis==recv_axis, or if send_axis or recv_axis is None, an error will be thrown.
recv_axis (int) – Prior split axis, along which blocks are received from the individual ranks
- Alltoall(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, send_axis: int = 0, recv_axis: int = None)
All processes send data to all processes: The jth block sent from process i is received by process j and is placed in the ith block of recvbuf.
- Parameters:
sendbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the send message
recvbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address where to store the result
send_axis (int) – Future split axis, along which data blocks will be created that will be sent to individual ranks. If send_axis==recv_axis, or if send_axis or recv_axis is None, an error will be thrown.
recv_axis (int) – Prior split axis, along which blocks are received from the individual ranks
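The block-exchange rule stated above (the jth block sent from process i is received by process j as the ith block) can be simulated with nested lists, which is often the easiest way to reason about Alltoall-based redistributions (illustrative only, not Heat's implementation):

```python
def alltoall_sim(send_matrix):
    # send_matrix[i][j] is the block that process i sends to process j.
    # After Alltoall, process j holds the block from process i at
    # position i of its receive buffer: recv[j][i] == send[i][j].
    n = len(send_matrix)
    return [[send_matrix[i][j] for i in range(n)] for j in range(n)]

# Three processes; label each block by its (source, destination) pair:
send = [[f"{i}->{j}" for j in range(3)] for i in range(3)]
recv = alltoall_sim(send)
print(recv[2])  # → ['0->2', '1->2', '2->2']
```

Note that the operation is a transpose of the block matrix, which is why Alltoall is the natural primitive for changing the split axis of a distributed array.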
- Alltoallv(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, send_axis: int = 0, recv_axis: int = None)
v-call of Alltoall: All processes send different amounts of data to, and receive different amounts of data from, all processes
- Parameters:
sendbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the send message
recvbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address where to store the result
send_axis (int) – Future split axis, along which data blocks will be created that will be sent to individual ranks. If send_axis==recv_axis, or if send_axis or recv_axis is None, an error will be thrown.
recv_axis (int) – Prior split axis, along which blocks are received from the individual ranks
- Ialltoall(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, send_axis: int = 0, recv_axis: int = None) MPIRequest
Nonblocking Alltoall
- Parameters:
sendbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the send message
recvbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address where to store the result
send_axis (int) – Future split axis, along which data blocks will be created that will be sent to individual ranks. If send_axis==recv_axis, or if send_axis or recv_axis is None, an error will be thrown.
recv_axis (int) – Prior split axis, along which blocks are received from the individual ranks
- Ialltoallv(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, send_axis: int = 0, recv_axis: int = None) MPIRequest
Nonblocking v-call of Alltoall: All processes send different amounts of data to, and receive different amounts of data from, all processes
- Parameters:
sendbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the send message
recvbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address where to store the result
send_axis (int) – Future split axis, along which data blocks will be created that will be sent to individual ranks. If send_axis==recv_axis, or if send_axis or recv_axis is None, an error will be thrown.
recv_axis (int) – Prior split axis, along which blocks are received from the individual ranks
- __gather_like(func: Callable, sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, send_axis: int, recv_axis: int, send_factor: int = 1, recv_factor: int = 1, **kwargs)
Generic function for gather operations.
- Parameters:
func (Callable) – Type of MPI Scatter/Gather function
sendbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the send message
recvbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address where to store the result
send_axis (int) – The axis along which sendbuf is packed
recv_axis (int) – The axis along which recvbuf is packed
send_factor (int) – Number of elements to be scattered (for non-v-calls)
recv_factor (int) – Number of elements to be gathered (for non-v-calls)
- Gather(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, root: int = 0, axis: int = 0, recv_axis: int = None)
Gathers together values from a group of processes
- Parameters:
sendbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the send message
recvbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address where to store the result
root (int) – Rank of receiving process
axis (int) – The axis along which sendbuf is packed
recv_axis (int) – The axis along which recvbuf is packed
- Gatherv(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, root: int = 0, axis: int = 0, recv_axis: int = None)
v-call for Gather: All processes send different amounts of data
- Parameters:
sendbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the send message
recvbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address where to store the result
root (int) – Rank of receiving process
axis (int) – The axis along which sendbuf is packed
recv_axis (int) – The axis along which recvbuf is packed
- Igather(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, root: int = 0, axis: int = 0, recv_axis: int = None) MPIRequest
Non-blocking Gather
- Parameters:
sendbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the send message
recvbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address where to store the result
root (int) – Rank of receiving process
axis (int) – The axis along which sendbuf is packed
recv_axis (int) – The axis along which recvbuf is packed
- Igatherv(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, root: int = 0, axis: int = 0, recv_axis: int = None) MPIRequest
Non-blocking v-call for Gather: All processes send different amounts of data
- Parameters:
sendbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the send message
recvbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address where to store the result
root (int) – Rank of receiving process
axis (int) – The axis along which sendbuf is packed
recv_axis (int) – The axis along which recvbuf is packed
- __scatter_like(func: Callable, sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, send_axis: int, recv_axis: int, send_factor: int = 1, recv_factor: int = 1, **kwargs)
Generic function for scatter operations.
- Parameters:
func (Callable) – Type of MPI Scatter/Gather function
sendbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the send message
recvbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address where to store the result
send_axis (int) – The axis along which sendbuf is packed
recv_axis (int) – The axis along which recvbuf is packed
send_factor (int) – Number of elements to be scattered (for non-v-calls)
recv_factor (int) – Number of elements to be gathered (for non-v-calls)
- Iscatter(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, root: int = 0, axis: int = 0, recv_axis: int = None) MPIRequest
Non-blocking Scatter
- Parameters:
sendbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the send message
recvbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address where to store the result
root (int) – Rank of sending process
axis (int) – The axis along which sendbuf is packed
recv_axis (int) – The axis along which recvbuf is packed
- Iscatterv(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, root: int = 0, axis: int = 0, recv_axis: int = None) MPIRequest
Non-blocking v-call for Scatter: Sends different amounts of data to different processes
- Parameters:
sendbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the send message
recvbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address where to store the result
root (int) – Rank of sending process
axis (int) – The axis along which sendbuf is packed
recv_axis (int) – The axis along which recvbuf is packed
- Scatter(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, root: int = 0, axis: int = 0, recv_axis: int = None)
Sends data parts from one process to all other processes in a communicator
- Parameters:
sendbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the send message
recvbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address where to store the result
root (int) – Rank of sending process
axis (int) – The axis along which sendbuf is packed
recv_axis (int) – The axis along which recvbuf is packed
- Scatterv(sendbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, recvbuf: heat.core.dndarray.DNDarray | torch.Tensor | Any, root: int = 0, axis: int = 0, recv_axis: int = None)
v-call for Scatter: Sends different amounts of data to different processes
- Parameters:
sendbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address of the send message
recvbuf (Union[DNDarray, torch.Tensor, Any]) – Buffer address where to store the result
root (int) – Rank of sending process
axis (int) – The axis along which sendbuf is packed
recv_axis (int) – The axis along which recvbuf is packed
- __getattr__(name: str)
Default pass-through for the communicator methods.
- Parameters:
name (str) – The name of the method to be called.
- get_comm() Communication
Retrieves the currently globally set default communication.
- sanitize_comm(comm: Communication | None) Communication
Sanitizes a communication object, i.e. checks whether it is a proper Communication instance; if None is passed, the currently set default communication is used instead.
- Parameters:
comm (Communication) – The comm to be sanitized
- Raises:
TypeError – If the given communication is not the proper type
- use_comm(comm: Communication = None)
Sets the globally used default communicator.
- Parameters:
comm (Communication or None) – The communication to be set
- e
Euler’s number, Euler’s constant (\(e\)).
- Euler
Euler’s number, Euler’s constant (\(e\)).
- inf
IEEE 754 floating point representation of (positive) infinity (\(\infty\)).
- Inf
IEEE 754 floating point representation of (positive) infinity (\(\infty\)).
- Infty
IEEE 754 floating point representation of (positive) infinity (\(\infty\)).
- Infinity
IEEE 754 floating point representation of (positive) infinity (\(\infty\)).
- nan
IEEE 754 floating point representation of Not a Number (NaN).
- NaN
IEEE 754 floating point representation of Not a Number (NaN).
- pi
Archimedes’ constant (\(\pi\)).
- angle(x: heat.core.dndarray.DNDarray, deg: bool = False, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Calculate the element-wise angle of the complex argument.
- Parameters:
Examples
>>> ht.angle(ht.array([1.0, 1.0j, 1+1j, -2+2j, 3 - 3j])) DNDarray([ 0.0000, 1.5708, 0.7854, 2.3562, -0.7854], dtype=ht.float32, device=cpu:0, split=None) >>> ht.angle(ht.array([1.0, 1.0j, 1+1j, -2+2j, 3 - 3j]), deg=True) DNDarray([ 0., 90., 45., 135., -45.], dtype=ht.float32, device=cpu:0, split=None)
- conjugate(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Compute the complex conjugate, element-wise.
- Parameters:
Examples
>>> ht.conjugate(ht.array([1.0, 1.0j, 1+1j, -2+2j, 3 - 3j])) DNDarray([ (1-0j), -1j, (1-1j), (-2-2j), (3+3j)], dtype=ht.complex64, device=cpu:0, split=None)
- imag(x: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Return the imaginary part of the complex argument. The returned DNDarray and the input DNDarray share the same underlying storage.
- Parameters:
x (DNDarray) – Input array for which the imaginary part is returned.
Examples
>>> ht.imag(ht.array([1.0, 1.0j, 1+1j, -2+2j, 3 - 3j])) DNDarray([ 0., 1., 1., 2., -3.], dtype=ht.float32, device=cpu:0, split=None)
- real(x: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Return the real part of the complex argument. The returned DNDarray and the input DNDarray share the same underlying storage.
- Parameters:
x (DNDarray) – Input array for which the real part is returned.
Examples
>>> ht.real(ht.array([1.0, 1.0j, 1+1j, -2+2j, 3 - 3j])) DNDarray([ 1., 0., 1., -2., 3.], dtype=ht.float32, device=cpu:0, split=None)
- class Device(device_type: str, device_id: int, torch_device: str)
Implements a compute device. HeAT can run computations on different compute devices or backends. A device describes the device type and id on which said computation should be carried out.
- Parameters:
device_type (str) – Represents HeAT’s device name
device_id (int) – The device id
torch_device (str) – The corresponding PyTorch device type
Examples
>>> ht.Device("cpu", 0, "cpu:0") device(cpu:0) >>> ht.Device("gpu", 0, "cuda:0") device(gpu:0)
- __repr__() str
Return the unambiguous information of Device.
- __str__() str
Return the descriptive information of Device.
- cpu
The standard CPU Device
Examples
>>> ht.cpu device(cpu:0) >>> ht.ones((2, 3), device=ht.cpu) DNDarray([[1., 1., 1.], [1., 1., 1.]], dtype=ht.float32, device=cpu:0, split=None)
- sanitize_device(device: str | Device | None = None) Device
Sanitizes a device or device identifier, i.e. checks whether it is already an instance of Device or a string with a known device identifier, and maps it to a proper Device.
- Parameters:
device (str or Device, optional) – The device to be sanitized
- Raises:
ValueError – If the given device id is not recognized
- use_device(device: str | Device | None = None) None
Sets the globally used default Device.
- Parameters:
device (str or Device) – The device to be set
- exp(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Calculate the exponential of all elements in the input array. The result is a DNDarray of the same shape as x.
- Parameters:
Examples
>>> ht.exp(ht.arange(5)) DNDarray([ 1.0000, 2.7183, 7.3891, 20.0855, 54.5981], dtype=ht.float32, device=cpu:0, split=None)
- expm1(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Calculate \(exp(x) - 1\) for all elements in the array. The result is a DNDarray of the same shape as x.
- Parameters:
Examples
>>> ht.expm1(ht.arange(5)) + 1. DNDarray([ 1.0000, 2.7183, 7.3891, 20.0855, 54.5981], dtype=ht.float64, device=cpu:0, split=None)
- exp2(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Calculate the base-2 exponential (\(2^x\)) of all elements in the input array. The result is a DNDarray of the same shape as x.
- Parameters:
Examples
>>> ht.exp2(ht.arange(5)) DNDarray([ 1., 2., 4., 8., 16.], dtype=ht.float32, device=cpu:0, split=None)
- log(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Natural logarithm, element-wise. The natural logarithm is the inverse of the exponential function, so that \(log(exp(x)) = x\); it is the logarithm to base e. The result is a DNDarray of the same shape as x. Negative input elements are returned as NaN.
- Parameters:
Examples
>>> ht.log(ht.arange(5)) DNDarray([ -inf, 0.0000, 0.6931, 1.0986, 1.3863], dtype=ht.float32, device=cpu:0, split=None)
- log2(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Compute the logarithm to the base 2 (\(log_2(x)\)), element-wise. The result is a DNDarray of the same shape as x. Negative input elements are returned as NaN.
- Parameters:
Examples
>>> ht.log2(ht.arange(5)) DNDarray([ -inf, 0.0000, 1.0000, 1.5850, 2.0000], dtype=ht.float32, device=cpu:0, split=None)
- log10(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Compute the logarithm to the base 10 (\(log_{10}(x)\)), element-wise. The result is a DNDarray of the same shape as x. Negative input elements are returned as NaN.
- Parameters:
Examples
>>> ht.log10(ht.arange(5)) DNDarray([ -inf, 0.0000, 0.3010, 0.4771, 0.6021], dtype=ht.float32, device=cpu:0, split=None)
- log1p(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Return the natural logarithm of one plus the input array, element-wise. The result is a DNDarray of the same shape as x. Input elements below -1 are returned as NaN.
- Parameters:
Examples
>>> ht.log1p(ht.arange(5)) DNDarray([0.0000, 0.6931, 1.0986, 1.3863, 1.6094], dtype=ht.float32, device=cpu:0, split=None)
- logaddexp(x1: heat.core.dndarray.DNDarray, x2: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Calculates the logarithm of the sum of exponentiations \(log(exp(x1) + exp(x2))\) for each element \({x1}_i\) of the input array x1 with the respective element \({x2}_i\) of the input array x2.
- Parameters:
x1 (DNDarray) – first input array. Should have a floating-point data type.
x2 (DNDarray) – second input array. Must be compatible with x1. Should have a floating-point data type.
out (DNDarray, optional) – A location in which to store the results. If provided, it must have a broadcastable shape. If not provided or set to None, a fresh array is allocated.
See also
logaddexp2()
Logarithm of the sum of exponentiations of inputs in base-2.
Examples
>>> ht.logaddexp(ht.array([-1.0]), ht.array([-1.0, -2, -3])) DNDarray([-0.3069, -0.6867, -0.8731], dtype=ht.float32, device=cpu:0, split=None)
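The point of logaddexp is numerical stability: evaluating \(log(exp(x1) + exp(x2))\) naively overflows for large arguments. The standard trick of factoring out the larger argument can be sketched in plain Python with the standard library (an illustration of the technique, not Heat's implementation):

```python
import math

def logaddexp(a, b):
    # log(exp(a) + exp(b)) = hi + log(1 + exp(lo - hi)), where hi is
    # the larger argument; exp(lo - hi) <= 1, so nothing can overflow.
    hi, lo = (a, b) if a >= b else (b, a)
    return hi + math.log1p(math.exp(lo - hi))

print(round(logaddexp(-1.0, -1.0), 4))   # → -0.3069, matching the example above
print(logaddexp(1000.0, 1000.0))         # finite, although exp(1000) overflows
```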
- logaddexp2(x1: heat.core.dndarray.DNDarray, x2: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Calculates the logarithm of the sum of exponentiations in base-2 \(log2(exp(x1) + exp(x2))\) for each element \({x1}_i\) of the input array x1 with the respective element \({x2}_i\) of the input array x2.
- Parameters:
x1 (DNDarray) – first input array. Should have a floating-point data type.
x2 (DNDarray) – second input array. Must be compatible with x1. Should have a floating-point data type.
out (DNDarray, optional) – A location in which to store the results. If provided, it must have a broadcastable shape. If not provided or set to
None
, a fresh array is allocated.
See also
logaddexp()
Logarithm of the sum of exponentiations of inputs.
Examples
>>> ht.logaddexp2(ht.array([-1.0]), ht.array([-1.0, -2, -3])) DNDarray([ 0.0000, -0.4150, -0.6781], dtype=ht.float32, device=cpu:0, split=None)
- sqrt(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Return the non-negative square-root of a tensor element-wise. Result is a
DNDarray
of the same shape asx
. Negative input elements are returned as NaN.
- Parameters:
x (DNDarray) – The array for which to compute the square roots.
out (DNDarray, optional) – A location in which to store the results. If provided, it must have a broadcastable shape. If not provided or set to None, a fresh array is allocated.
Examples
>>> ht.sqrt(ht.arange(5)) DNDarray([0.0000, 1.0000, 1.4142, 1.7321, 2.0000], dtype=ht.float32, device=cpu:0, split=None) >>> ht.sqrt(ht.arange(-5, 0)) DNDarray([nan, nan, nan, nan, nan], dtype=ht.float32, device=cpu:0, split=None)
- square(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Return a new tensor with the squares of the elements of input.
- Parameters:
x (DNDarray) – The array for which to compute the squares.
out (DNDarray, optional) – A location in which to store the results. If provided, it must have a broadcastable shape. If not provided or set to
None
, a fresh array is allocated.
Examples
>>> a = ht.random.rand(4)
>>> a
DNDarray([0.8654, 0.1432, 0.9164, 0.6179], dtype=ht.float32, device=cpu:0, split=None)
>>> ht.square(a)
DNDarray([0.7488, 0.0205, 0.8397, 0.3818], dtype=ht.float32, device=cpu:0, split=None)
- arange(*args: int | float, dtype: Type[heat.core.types.datatype] | None = None, split: int | None = None, device: str | heat.core.devices.Device | None = None, comm: heat.core.communication.Communication | None = None) heat.core.dndarray.DNDarray
Return evenly spaced values within a given interval.
Values are generated within the half-open interval
[start, stop)
(in other words, the interval including start but excluding stop). For integer arguments the function is equivalent to the Python built-in range function, but returns an array rather than a list. When using a non-integer step, such as 0.1, the results may be inconsistent due to numerical rounding. In these cases, the use oflinspace()
is recommended. For floating point arguments, the length of the result is \(\lceil(stop-start)/step\rceil\). Again, due to floating point rounding, this rule may result in the last element of out being greater than stop by machine epsilon.- Parameters:
start (scalar, optional) – Start of interval. The interval includes this value. The default start value is 0.
stop (scalar) – End of interval. The interval does not include this value, except in some cases where
step
is not an integer and floating point round-off affects the length ofout
.step (scalar, optional) – Spacing between values. For any output
out
, this is the distance between two adjacent values,out[i+1]-out[i]
. The default step size is 1. Ifstep
is specified as a position argument,start
must also be given.dtype (datatype, optional) – The type of the output array. If dtype is not given, it is automatically inferred from the other input arguments.
split (int or None, optional) – The axis along which the array is split and distributed;
None
means no distribution.device (str, optional) – Specifies the device the array shall be allocated on, defaults to globally set default device.
comm (Communication, optional) – Handle to the nodes holding distributed parts or copies of this array.
See also
linspace()
Evenly spaced numbers with careful handling of endpoints.
Examples
>>> ht.arange(3) DNDarray([0, 1, 2], dtype=ht.int32, device=cpu:0, split=None) >>> ht.arange(3.0) DNDarray([0., 1., 2.], dtype=ht.float32, device=cpu:0, split=None) >>> ht.arange(3, 7) DNDarray([3, 4, 5, 6], dtype=ht.int32, device=cpu:0, split=None) >>> ht.arange(3, 7, 2) DNDarray([3, 5], dtype=ht.int32, device=cpu:0, split=None)
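The ceiling rule above is where the floating point surprise comes from: round-off in (stop - start) / step can push the quotient just above an integer, producing one extra element. A plain Python illustration of the length formula (the specific values are chosen to trigger the round-off, not taken from Heat's test suite):

```python
import math

start, stop, step = 0.1, 0.4, 0.1
# (0.4 - 0.1) / 0.1 evaluates to slightly more than 3.0 in float64,
# so the documented length ceil((stop - start) / step) becomes 4 ...
n = math.ceil((stop - start) / step)
# ... and the last generated element reaches stop instead of stopping short.
values = [start + i * step for i in range(n)]
```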
- array(obj: Iterable, dtype: Type[heat.core.types.datatype] | None = None, copy: bool | None = None, ndmin: int = 0, order: str = 'C', split: int | None = None, is_split: int | None = None, device: heat.core.devices.Device | None = None, comm: heat.core.communication.Communication | None = None) heat.core.dndarray.DNDarray
Create a
DNDarray
.- Parameters:
obj (array_like) – A tensor or array, any object exposing the array interface, an object whose
__array__
method returns an array, or any (nested) sequence.dtype (datatype, optional) – The desired data-type for the array. If not given, then the type will be determined as the minimum type required to hold the objects in the sequence. This argument can only be used to ‘upcast’ the array. For downcasting, use the
astype()
method.copy (bool, optional) – If
True
, the input object is copied. IfFalse
, input which supports the buffer protocol is never copied. IfNone
(default), the function reuses the existing memory buffer if possible, and copies otherwise.ndmin (int, optional) – Specifies the minimum number of dimensions that the resulting array should have. Ones will, if needed, be attached to the shape if
ndim > 0
and prefaced in case ofndim < 0
to meet the requirement.order (str, optional) – Options:
'C'
or'F'
. Specifies the memory layout of the newly created array. Default isorder='C'
, meaning the array will be stored in row-major order (C-like). Iforder=‘F’
, the array will be stored in column-major order (Fortran-like).split (int or None, optional) – The axis along which the passed array content
obj
is split and distributed in memory. Mutually exclusive withis_split
.is_split (int or None, optional) – Specifies the axis along which the local data portions, passed in obj, are split across all machines. Useful for interfacing with other distributed-memory code. The shape of the global array is automatically inferred. Mutually exclusive with
split
.device (str or Device, optional) – Specifies the
Device
the array shall be allocated on, defaults to globally set default device.comm (Communication, optional) – Handle to the nodes holding distributed array chunks.
- Raises:
NotImplementedError – If order is one of the NumPy options
'K'
or'A'
.ValueError – If
copy
is False but a copy is necessary to satisfy other requirements (e.g. different dtype, device, etc.).TypeError – If the input object cannot be converted to a torch.Tensor, hence it cannot be converted to a
DNDarray
.
Examples
>>> ht.array([1, 2, 3]) DNDarray([1, 2, 3], dtype=ht.int64, device=cpu:0, split=None) >>> ht.array([1, 2, 3.0]) DNDarray([1., 2., 3.], dtype=ht.float32, device=cpu:0, split=None) >>> ht.array([[1, 2], [3, 4]]) DNDarray([[1, 2], [3, 4]], dtype=ht.int64, device=cpu:0, split=None) >>> ht.array([1, 2, 3], ndmin=2) DNDarray([[1], [2], [3]], dtype=ht.int64, device=cpu:0, split=None) >>> ht.array([1, 2, 3], dtype=float) DNDarray([1., 2., 3.], dtype=ht.float32, device=cpu:0, split=None) >>> ht.array([1, 2, 3, 4], split=0) DNDarray([1, 2, 3, 4], dtype=ht.int64, device=cpu:0, split=0) >>> if ht.MPI_WORLD.rank == 0: >>> a = ht.array([1, 2], is_split=0) >>> else: >>> a = ht.array([3, 4], is_split=0) >>> a DNDarray([1, 2, 3, 4], dtype=ht.int64, device=cpu:0, split=0) >>> a = np.arange(2 * 3).reshape(2, 3) >>> a array([[ 0, 1, 2], [ 3, 4, 5]]) >>> a.strides (24, 8) >>> b = ht.array(a) >>> b DNDarray([[0, 1, 2], [3, 4, 5]], dtype=ht.int64, device=cpu:0, split=None) >>> b.strides (24, 8) >>> b.larray.untyped_storage() 0 1 2 3 4 5 [torch.LongStorage of size 6] >>> c = ht.array(a, order='F') >>> c DNDarray([[0, 1, 2], [3, 4, 5]], dtype=ht.int64, device=cpu:0, split=None) >>> c.strides (8, 16) >>> c.larray.untyped_storage() 0 3 1 4 2 5 [torch.LongStorage of size 6] >>> a = np.arange(4 * 3).reshape(4, 3) >>> a.strides (24, 8) >>> b = ht.array(a, order='F', split=0) >>> b DNDarray([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11]], dtype=ht.int64, device=cpu:0, split=0) >>> b.strides [0/2] (8, 16) [1/2] (8, 16) >>> b.larray.untyped_storage() [0/2] 0 3 1 4 2 5 [torch.LongStorage of size 6] [1/2] 6 9 7 10 8 11 [torch.LongStorage of size 6]
- asarray(obj: Iterable, dtype: Type[heat.core.types.datatype] | None = None, copy: bool | None = None, order: str = 'C', is_split: bool | None = None, device: str | heat.core.devices.Device | None = None) heat.core.dndarray.DNDarray
Convert
obj
to a DNDarray. Ifobj
is a DNDarray or Tensor with the same dtype and device or if the data is an ndarray of the correspondingdtype
and thedevice
is the CPU, no copy will be performed.- Parameters:
obj (iterable) – Input data, in any form that can be converted to an array. This includes e.g. lists, lists of tuples, tuples, tuples of tuples, tuples of lists and ndarrays.
dtype (dtype, optional) – By default, the data-type is inferred from the input data.
copy (bool, optional) – If
True
, then the object is copied. IfFalse
, the object is not copied and aValueError
is raised in the case a copy would be necessary. IfNone
, a copy will only be made if obj is a nested sequence or if a copy is needed to satisfy any of the other requirements, e.g.dtype
.order (str, optional) – Whether to use row-major (C-style) or column-major (Fortran-style) memory representation. Defaults to ‘C’.
is_split (None or int, optional) – Specifies the axis along which the local data portions, passed in obj, are split across all MPI processes. Useful for interfacing with other HPC code. The shape of the global tensor is automatically inferred.
device (str, ht.Device or None, optional) – Specifies the device the tensor shall be allocated on. By default, it is inferred from the input data.
Examples
>>> a = [1,2] >>> ht.asarray(a) DNDarray([1, 2], dtype=ht.int64, device=cpu:0, split=None) >>> a = np.array([1,2,3]) >>> n = ht.asarray(a) >>> n DNDarray([1, 2, 3], dtype=ht.int64, device=cpu:0, split=None) >>> n[0] = 0 >>> a DNDarray([0, 2, 3], dtype=ht.int64, device=cpu:0, split=None) >>> a = torch.tensor([1,2,3]) >>> t = ht.asarray(a) >>> t DNDarray([1, 2, 3], dtype=ht.int64, device=cpu:0, split=None) >>> t[0] = 0 >>> a DNDarray([0, 2, 3], dtype=ht.int64, device=cpu:0, split=None) >>> a = ht.array([1,2,3,4], dtype=ht.float32) >>> ht.asarray(a, dtype=ht.float32) is a True >>> ht.asarray(a, dtype=ht.float64) is a False
- empty(shape: int | Sequence[int], dtype: Type[heat.core.types.datatype] = types.float32, split: int | None = None, device: heat.core.devices.Device | None = None, comm: heat.core.communication.Communication | None = None, order: str = 'C') heat.core.dndarray.DNDarray
Returns a new uninitialized
DNDarray
of given shape and data type. May be allocated split up across multiple nodes along the specified axis.- Parameters:
shape (int or Sequence[int,...]) – Desired shape of the output array, e.g. 1 or (1, 2, 3,).
dtype (datatype) – The desired HeAT data type for the array.
split (int, optional) – The axis along which the array is split and distributed;
None
means no distribution.device (str or Device, optional) – Specifies the
Device
the array shall be allocated on, defaults to globally set default device.comm (Communication, optional) – Handle to the nodes holding distributed parts or copies of this array.
order (str, optional) – Options:
'C'
or'F'
. Specifies the memory layout of the newly created array. Default isorder='C'
, meaning the array will be stored in row-major order (C-like). Iforder=‘F’
, the array will be stored in column-major order (Fortran-like).
- Raises:
NotImplementedError – If order is one of the NumPy options
'K'
or'A'
.
Examples
>>> ht.empty(3) DNDarray([0., 0., 0.], dtype=ht.float32, device=cpu:0, split=None) >>> ht.empty(3, dtype=ht.int) DNDarray([59140784, 0, 59136816], dtype=ht.int32, device=cpu:0, split=None) >>> ht.empty((2, 3,)) DNDarray([[-1.7206e-10, 4.5905e-41, -1.7206e-10], [ 4.5905e-41, 4.4842e-44, 0.0000e+00]], dtype=ht.float32, device=cpu:0, split=None)
- empty_like(a: heat.core.dndarray.DNDarray, dtype: Type[heat.core.types.datatype] | None = None, split: int | None = None, device: heat.core.devices.Device | None = None, comm: heat.core.communication.Communication | None = None, order: str = 'C') heat.core.dndarray.DNDarray
Returns a new uninitialized
DNDarray
with the same type, shape and data distribution as the given object. Data type and data distribution strategy can be explicitly overridden.- Parameters:
a (DNDarray) – The shape and data-type of
a
define these same attributes of the returned array. Uninitialized array with the same shape, type and split axis asa
unless overridden.dtype (datatype, optional) – Overrides the data type of the result.
split (int or None, optional) – The axis along which the array is split and distributed;
None
means no distribution.device (str or Device, optional) – Specifies the
Device
the array shall be allocated on, defaults to globally set default device.comm (Communication, optional) – Handle to the nodes holding distributed parts or copies of this array.
order (str, optional) – Options:
'C'
or'F'
. Specifies the memory layout of the newly created array. Default isorder='C'
, meaning the array will be stored in row-major order (C-like). Iforder=‘F’
, the array will be stored in column-major order (Fortran-like).
- Raises:
NotImplementedError – If order is one of the NumPy options
'K'
or'A'
.
Examples
>>> x = ht.ones((2, 3,)) >>> x DNDarray([[1., 1., 1.], [1., 1., 1.]], dtype=ht.float32, device=cpu:0, split=None) >>> ht.empty_like(x) DNDarray([[-1.7205e-10, 4.5905e-41, 7.9442e-37], [ 0.0000e+00, 4.4842e-44, 0.0000e+00]], dtype=ht.float32, device=cpu:0, split=None)
- eye(shape: int | Sequence[int], dtype: Type[heat.core.types.datatype] = types.float32, split: int | None = None, device: heat.core.devices.Device | None = None, comm: heat.core.communication.Communication | None = None, order: str = 'C') heat.core.dndarray.DNDarray
Returns a new 2-D
DNDarray
with ones on the diagonal and zeroes elsewhere, i.e. an identity matrix.- Parameters:
shape (int or Sequence[int,...]) – The shape of the array. If only one number is provided, the returned array will be square with that size. Otherwise, the first value represents the number of rows, the second the number of columns.
dtype (datatype, optional) – Overrides the data type of the result.
split (int or None, optional) – The axis along which the array is split and distributed;
None
means no distribution.device (str or Device, optional) – Specifies the
Device
the array shall be allocated on, defaults to globally set default device.comm (Communication, optional) – Handle to the nodes holding distributed parts or copies of this array.
order (str, optional) – Options:
'C'
or'F'
. Specifies the memory layout of the newly created array. Default isorder='C'
, meaning the array will be stored in row-major order (C-like). Iforder=‘F’
, the array will be stored in column-major order (Fortran-like).
- Raises:
NotImplementedError – If order is one of the NumPy options
'K'
or'A'
.
Examples
>>> ht.eye(2) DNDarray([[1., 0.], [0., 1.]], dtype=ht.float32, device=cpu:0, split=None) >>> ht.eye((2, 3), dtype=ht.int32) DNDarray([[1, 0, 0], [0, 1, 0]], dtype=ht.int32, device=cpu:0, split=None)
- from_partitioned(x, comm: heat.core.communication.Communication | None = None) heat.core.dndarray.DNDarray
Return a newly created DNDarray constructed from the ‘__partitioned__’ attribute of the input object. Memory of local partitions will be shared (zero-copy) as long as supported by the data objects. Currently supports numpy ndarrays and torch tensors as data objects. Current limitations:
Partitions must be ordered in the partition-grid by rank
Only one split-axis
Only one partition per rank
Only SPMD-style __partitioned__
- Parameters:
x (object) – Requires x.__partitioned__
comm (Communication, optional) – Handle to the nodes holding distributed parts or copies of this array.
See also
ht.core.DNDarray.create_partition_interface
.- Raises:
AttributeError – If x has no “__partitioned__” attribute or if the underlying data has no dtype.
TypeError – If an unsupported array type is found.
RuntimeError – If other unsupported content is found.
Examples
>>> import heat as ht >>> a = ht.ones((44,55), split=0) >>> b = ht.from_partitioned(a) >>> assert (a==b).all() >>> a[40] = 4711 >>> assert (a==b).all()
- from_partition_dict(parted: dict, comm: heat.core.communication.Communication | None = None) heat.core.dndarray.DNDarray
Return a newly created DNDarray constructed from the ‘__partitioned__’ attribute of the input object. Memory of local partitions will be shared (zero-copy) as long as supported by the data objects. Currently supports numpy ndarrays and torch tensors as data objects. Current limitations:
Partitions must be ordered in the partition-grid by rank
Only one split-axis
Only one partition per rank
Only SPMD-style __partitioned__
- Parameters:
parted (dict) – A partition dictionary used to create the new DNDarray
comm (Communication, optional) – Handle to the nodes holding distributed parts or copies of this array.
See also
ht.core.DNDarray.create_partition_interface
.- Raises:
AttributeError – If the input has no “__partitioned__” attribute or if the underlying data has no dtype.
TypeError – If an unsupported array type is found.
RuntimeError – If other unsupported content is found.
Examples
>>> import heat as ht >>> a = ht.ones((44,55), split=0) >>> b = ht.from_partition_dict(a.__partitioned__) >>> assert (a==b).all() >>> a[40] = 4711 >>> assert (a==b).all()
- full(shape: int | Sequence[int], fill_value: int | float, dtype: Type[heat.core.types.datatype] = types.float32, split: int | None = None, device: heat.core.devices.Device | None = None, comm: heat.core.communication.Communication | None = None, order: str = 'C') heat.core.dndarray.DNDarray
Return a new
DNDarray
of given shape and type, filled withfill_value
.- Parameters:
shape (int or Sequence[int,...]) – Shape of the new array, e.g., (2, 3) or 2.
fill_value (scalar) – Fill value.
dtype (datatype, optional) – The desired data-type for the array
split (int or None, optional) – The axis along which the array is split and distributed;
None
means no distribution.device (str or Device, optional) – Specifies the
Device
the array shall be allocated on, defaults to globally set default device.comm (Communication, optional) – Handle to the nodes holding distributed parts or copies of this array.
order (str, optional) – Options:
'C'
or'F'
. Specifies the memory layout of the newly created array. Default isorder='C'
, meaning the array will be stored in row-major order (C-like). Iforder=‘F’
, the array will be stored in column-major order (Fortran-like).
- Raises:
NotImplementedError – If order is one of the NumPy options
'K'
or'A'
.
Examples
>>> ht.full((2, 2), ht.inf) DNDarray([[inf, inf], [inf, inf]], dtype=ht.float32, device=cpu:0, split=None) >>> ht.full((2, 2), 10) DNDarray([[10., 10.], [10., 10.]], dtype=ht.float32, device=cpu:0, split=None)
- full_like(a: heat.core.dndarray.DNDarray, fill_value: int | float, dtype: Type[heat.core.types.datatype] = types.float32, split: int | None = None, device: heat.core.devices.Device | None = None, comm: heat.core.communication.Communication | None = None, order: str = 'C') heat.core.dndarray.DNDarray
Return a full
DNDarray
with the same shape and type as a given array.- Parameters:
a (DNDarray) – The shape and data-type of
a
define these same attributes of the returned array.fill_value (scalar) – Fill value.
dtype (datatype, optional) – Overrides the data type of the result.
split (int or None, optional) – The axis along which the array is split and distributed;
None
means no distribution.device (str or Device, optional) – Specifies the
Device
the array shall be allocated on, defaults to globally set default device.comm (Communication, optional) – Handle to the nodes holding distributed parts or copies of this array.
order (str, optional) – Options:
'C'
or'F'
. Specifies the memory layout of the newly created array. Default isorder='C'
, meaning the array will be stored in row-major order (C-like). Iforder=‘F’
, the array will be stored in column-major order (Fortran-like).
- Raises:
NotImplementedError – If order is one of the NumPy options
'K'
or'A'
.
Examples
>>> x = ht.zeros((2, 3,)) >>> x DNDarray([[0., 0., 0.], [0., 0., 0.]], dtype=ht.float32, device=cpu:0, split=None) >>> ht.full_like(x, 1.0) DNDarray([[1., 1., 1.], [1., 1., 1.]], dtype=ht.float32, device=cpu:0, split=None)
- linspace(start: int | float, stop: int | float, num: int = 50, endpoint: bool = True, retstep: bool = False, dtype: Type[heat.core.types.datatype] | None = None, split: int | None = None, device: heat.core.devices.Device | None = None, comm: heat.core.communication.Communication | None = None) Tuple[heat.core.dndarray.DNDarray, float]
Returns num evenly spaced samples, calculated over the interval
[start, stop]
. The endpoint of the interval can optionally be excluded. There are num equally spaced samples in the closed interval[start, stop]
or the half-open interval[start, stop)
(depending on whether endpoint isTrue
orFalse
).- Parameters:
start (scalar or scalar-convertible) – The starting value of the sample interval, maybe a sequence if convertible to scalar
stop (scalar or scalar-convertible) – The end value of the sample interval, unless endpoint is set to False. In that case, the sequence consists of all but the last of
num+1
evenly spaced samples, so that stop is excluded. Note that the step size changes when endpoint isFalse
.num (int, optional) – Number of samples to generate, defaults to 50. Must be non-negative.
endpoint (bool, optional) – If
True
, stop is the last sample, otherwise, it is not included.retstep (bool, optional) – If
True
, return (samples, step), where step is the spacing between samples.dtype (dtype, optional) – The type of the output array.
split (int or None, optional) – The axis along which the array is split and distributed;
None
means no distribution.device (str or Device, optional) – Specifies the
Device
the array shall be allocated on, defaults to globally set default device.comm (Communication, optional) – Handle to the nodes holding distributed parts or copies of this array.
Examples
>>> ht.linspace(2.0, 3.0, num=5) DNDarray([2.0000, 2.2500, 2.5000, 2.7500, 3.0000], dtype=ht.float32, device=cpu:0, split=None) >>> ht.linspace(2.0, 3.0, num=5, endpoint=False) DNDarray([2.0000, 2.2000, 2.4000, 2.6000, 2.8000], dtype=ht.float32, device=cpu:0, split=None) >>> ht.linspace(2.0, 3.0, num=5, retstep=True) (DNDarray([2.0000, 2.2500, 2.5000, 2.7500, 3.0000], dtype=ht.float32, device=cpu:0, split=None), 0.25)
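The step returned via retstep follows directly from the endpoint rule: the interval is divided into num - 1 segments when the endpoint is included, and num segments otherwise. A small sketch of just that rule in plain Python (the helper name is illustrative, not part of the Heat API):

```python
def linspace_step(start: float, stop: float, num: int, endpoint: bool = True) -> float:
    # With the endpoint included, num samples span num - 1 gaps;
    # without it, the same interval is cut into num gaps.
    div = (num - 1) if endpoint else num
    return (stop - start) / div

print(linspace_step(2.0, 3.0, 5))         # 0.25, as in the retstep example above
print(linspace_step(2.0, 3.0, 5, False))  # 0.2
```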
- logspace(start: int | float, stop: int | float, num: int = 50, endpoint: bool = True, base: float = 10.0, dtype: Type[heat.core.types.datatype] | None = None, split: int | None = None, device: heat.core.devices.Device | None = None, comm: heat.core.communication.Communication | None = None) heat.core.dndarray.DNDarray
Return numbers spaced evenly on a log scale. In linear space, the sequence starts at
base**start
(base
to the power ofstart
) and ends withbase**stop
(seeendpoint
below).- Parameters:
start (scalar or scalar-convertible) –
base**start
is the starting value of the sequence.stop (scalar or scalar-convertible) –
base**stop
is the final value of the sequence, unless endpoint isFalse
. In that case,num+1
values are spaced over the interval in log-space, of which all but the last (a sequence of lengthnum
) are returned.num (int, optional) – Number of samples to generate.
endpoint (bool, optional) – If
True
, stop is the last sample. Otherwise, it is not included.base (float, optional) – The base of the log space. The step size between the elements in \(ln(samples) / ln(base)\) (or \(log_{base}(samples)\)) is uniform.
dtype (datatype, optional) – The type of the output array. If
dtype
is not given, infer the data type from the other input arguments.split (int or None, optional) – The axis along which the array is split and distributed;
None
means no distribution.device (str or Device, optional) – Specifies the
Device
the array shall be allocated on, defaults to globally set default device.comm (Communication, optional) – Handle to the nodes holding distributed parts or copies of this array.
See also
arange()
Similar to
linspace()
, with the step size specified instead of the number of samples. Note that, when used with a float endpoint, the endpoint may or may not be included.linspace()
Similar to
logspace
, but with the samples uniformly distributed in linear space, instead of log space.
Examples
>>> ht.logspace(2.0, 3.0, num=4) DNDarray([ 100.0000, 215.4434, 464.1590, 1000.0000], dtype=ht.float32, device=cpu:0, split=None) >>> ht.logspace(2.0, 3.0, num=4, endpoint=False) DNDarray([100.0000, 177.8279, 316.2278, 562.3413], dtype=ht.float32, device=cpu:0, split=None) >>> ht.logspace(2.0, 3.0, num=4, base=2.0) DNDarray([4.0000, 5.0397, 6.3496, 8.0000], dtype=ht.float32, device=cpu:0, split=None)
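Conceptually, logspace raises base to linearly spaced exponents, i.e. it behaves like base ** linspace(start, stop, num). A scalar sketch in plain Python (illustrative only; the helper name is hypothetical and this is not the distributed implementation):

```python
def logspace_list(start, stop, num, base=10.0, endpoint=True):
    # Exponents are linearly spaced; the returned samples are base ** exponent.
    div = (num - 1) if endpoint else num
    return [base ** (start + i * (stop - start) / div) for i in range(num)]

vals = logspace_list(2.0, 3.0, num=4)
# Close to [100.0, 215.4435, 464.1589, 1000.0], matching the example above.
```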
- meshgrid(*arrays: Sequence[heat.core.dndarray.DNDarray], indexing: str = 'xy') List[heat.core.dndarray.DNDarray]
Returns coordinate matrices from coordinate vectors.
- Parameters:
arrays (Sequence[ DNDarray ]) – one-dimensional arrays representing grid coordinates. If exactly one vector is distributed, the returned matrices will be distributed along the axis equal to the index of this vector in the input list.
indexing (str, optional) – Cartesian ‘xy’ or matrix ‘ij’ indexing of the output. It is ignored if zero or one one-dimensional arrays are provided. Default: ‘xy’.
- Raises:
ValueError – If indexing is not ‘xy’ or ‘ij’.
ValueError – If more than one input vector is distributed.
Examples
>>> x = ht.arange(4) >>> y = ht.arange(3) >>> xx, yy = ht.meshgrid(x,y) >>> xx DNDarray([[0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3]], dtype=ht.int32, device=cpu:0, split=None) >>> yy DNDarray([[0, 0, 0, 0], [1, 1, 1, 1], [2, 2, 2, 2]], dtype=ht.int32, device=cpu:0, split=None)
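The difference between ‘xy’ and ‘ij’ indexing is only in how the output shapes are arranged: for two vectors, ‘xy’ yields arrays of shape (len(y), len(x)), while ‘ij’ yields (len(x), len(y)). A nested-list sketch of the two-vector case in plain Python (the helper name is illustrative, not part of the Heat API):

```python
def meshgrid2(x, y, indexing="xy"):
    # 'xy' (Cartesian): rows vary with y, columns with x.
    if indexing == "xy":
        xx = [list(x) for _ in y]
        yy = [[yi] * len(x) for yi in y]
    else:  # 'ij' (matrix): rows vary with x, columns with y.
        xx = [[xi] * len(y) for xi in x]
        yy = [list(y) for _ in x]
    return xx, yy

xx, yy = meshgrid2([0, 1, 2, 3], [0, 1, 2])
# xx == [[0, 1, 2, 3]] * 3 and yy == [[0]*4, [1]*4, [2]*4],
# mirroring the DNDarray example above.
```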
- ones(shape: int | Sequence[int], dtype: Type[heat.core.types.datatype] = types.float32, split: int | None = None, device: heat.core.devices.Device | None = None, comm: heat.core.communication.Communication | None = None, order: str = 'C') heat.core.dndarray.DNDarray
Returns a new
DNDarray
of given shape and data type filled with one. May be allocated split up across multiple nodes along the specified axis.- Parameters:
shape (int or Sequence[int,...]) – Desired shape of the output array, e.g. 1 or (1, 2, 3,).
dtype (datatype, optional) – The desired HeAT data type for the array.
split (int or None, optional) – The axis along which the array is split and distributed;
None
means no distribution.device (str or Device, optional) – Specifies the
Device
the array shall be allocated on, defaults to globally set default device.comm (Communication, optional) – Handle to the nodes holding distributed parts or copies of this array.
order (str, optional) – Options:
'C'
or'F'
. Specifies the memory layout of the newly created array. Default isorder='C'
, meaning the array will be stored in row-major order (C-like). Iforder=‘F’
, the array will be stored in column-major order (Fortran-like).
- Raises:
NotImplementedError – If order is one of the NumPy options
'K'
or'A'
.
Examples
>>> ht.ones(3) DNDarray([1., 1., 1.], dtype=ht.float32, device=cpu:0, split=None) >>> ht.ones(3, dtype=ht.int) DNDarray([1, 1, 1], dtype=ht.int32, device=cpu:0, split=None) >>> ht.ones((2, 3,)) DNDarray([[1., 1., 1.], [1., 1., 1.]], dtype=ht.float32, device=cpu:0, split=None)
- ones_like(a: heat.core.dndarray.DNDarray, dtype: Type[heat.core.types.datatype] | None = None, split: int | None = None, device: heat.core.devices.Device | None = None, comm: heat.core.communication.Communication | None = None, order: str = 'C') heat.core.dndarray.DNDarray
Returns a new
DNDarray
filled with ones, with the same type, shape and data distribution as the given object. Data type and data distribution strategy can be explicitly overridden.- Parameters:
a (DNDarray) – The shape and data-type of
a
define these same attributes of the returned array.dtype (datatype, optional) – Overrides the data type of the result.
split (int or None, optional) – The axis along which the array is split and distributed;
None
means no distribution.device (str or Device, optional) – Specifies the
Device
the array shall be allocated on, defaults to globally set default device.comm (Communication, optional) – Handle to the nodes holding distributed parts or copies of this array.
order (str, optional) – Options:
'C'
or'F'
. Specifies the memory layout of the newly created array. Default isorder='C'
, meaning the array will be stored in row-major order (C-like). Iforder=‘F’
, the array will be stored in column-major order (Fortran-like).
- Raises:
NotImplementedError – If order is one of the NumPy options
'K'
or'A'
.
Examples
>>> x = ht.zeros((2, 3,)) >>> x DNDarray([[0., 0., 0.], [0., 0., 0.]], dtype=ht.float32, device=cpu:0, split=None) >>> ht.ones_like(x) DNDarray([[1., 1., 1.], [1., 1., 1.]], dtype=ht.float32, device=cpu:0, split=None)
- zeros(shape: int | Sequence[int], dtype: Type[heat.core.types.datatype] = types.float32, split: int | None = None, device: heat.core.devices.Device | None = None, comm: heat.core.communication.Communication | None = None, order: str = 'C') heat.core.dndarray.DNDarray
Returns a new
DNDarray
of given shape and data type filled with zero values. May be allocated split up across multiple nodes along the specified axis.- Parameters:
shape (int or Sequence[int,...]) – Desired shape of the output array, e.g. 1 or (1, 2, 3,).
dtype (datatype) – The desired HeAT data type for the array.
split (int or None, optional) – The axis along which the array is split and distributed;
None
means no distribution.device (str or Device, optional) – Specifies the
Device
the array shall be allocated on, defaults to globally set default device.comm (Communication, optional) – Handle to the nodes holding distributed parts or copies of this array.
order (str, optional) – Options:
'C'
or'F'
. Specifies the memory layout of the newly created array. Default isorder='C'
, meaning the array will be stored in row-major order (C-like). Iforder=‘F’
, the array will be stored in column-major order (Fortran-like).
- Raises:
NotImplementedError – If order is one of the NumPy options
'K'
or'A'
.
Examples
>>> ht.zeros(3) DNDarray([0., 0., 0.], dtype=ht.float32, device=cpu:0, split=None) >>> ht.zeros(3, dtype=ht.int) DNDarray([0, 0, 0], dtype=ht.int32, device=cpu:0, split=None) >>> ht.zeros((2, 3,)) DNDarray([[0., 0., 0.], [0., 0., 0.]], dtype=ht.float32, device=cpu:0, split=None)
- zeros_like(a: heat.core.dndarray.DNDarray, dtype: Type[heat.core.types.datatype] | None = None, split: int | None = None, device: heat.core.devices.Device | None = None, comm: heat.core.communication.Communication | None = None, order: str = 'C') heat.core.dndarray.DNDarray
Returns a new
DNDarray
filled with zeros, with the same type, shape and data distribution as the given object. Data type and data distribution strategy can be explicitly overridden.- Parameters:
a (DNDarray) – The shape and data-type of
a
define these same attributes of the returned array.dtype (datatype, optional) – Overrides the data type of the result.
split (int or None, optional) – The axis along which the array is split and distributed;
None
means no distribution.device (str or Device, optional) – Specifies the
Device
the array shall be allocated on, defaults to globally set default device.comm (Communication, optional) – Handle to the nodes holding distributed parts or copies of this array.
order (str, optional) – Options:
'C'
or'F'
. Specifies the memory layout of the newly created array. Default isorder='C'
, meaning the array will be stored in row-major order (C-like). Iforder=‘F’
, the array will be stored in column-major order (Fortran-like).
- Raises:
NotImplementedError – If order is one of the NumPy options
'K'
or'A'
.
Examples
>>> x = ht.ones((2, 3,)) >>> x DNDarray([[1., 1., 1.], [1., 1., 1.]], dtype=ht.float32, device=cpu:0, split=None) >>> ht.zeros_like(x) DNDarray([[0., 0., 0.], [0., 0., 0.]], dtype=ht.float32, device=cpu:0, split=None)
- nonzero(x: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Return a
DNDarray
containing the indices of the elements that are non-zero (using torch.nonzero
). If
x
is split then the result is split in the 0th dimension. However, thisDNDarray
can be UNBALANCED as it contains the indices of the non-zero elements on each node. Returns an array with one entry for each dimension ofx
, containing the indices of the non-zero elements in that dimension. The values inx
are always tested and returned in row-major, C-style order. The corresponding non-zero values can be obtained with:x[nonzero(x)]
.- Parameters:
x (DNDarray) – Input array
Examples
>>> import heat as ht >>> x = ht.array([[3, 0, 0], [0, 4, 1], [0, 6, 0]], split=0) >>> ht.nonzero(x) DNDarray([[0, 0], [1, 1], [1, 2], [2, 1]], dtype=ht.int64, device=cpu:0, split=0) >>> y = ht.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], split=0) >>> y > 3 DNDarray([[False, False, False], [ True, True, True], [ True, True, True]], dtype=ht.bool, device=cpu:0, split=0) >>> ht.nonzero(y > 3) DNDarray([[1, 0], [1, 1], [1, 2], [2, 0], [2, 1], [2, 2]], dtype=ht.int64, device=cpu:0, split=0) >>> y[ht.nonzero(y > 3)] DNDarray([4, 5, 6, 7, 8, 9], dtype=ht.int64, device=cpu:0, split=0)
- where(cond: heat.core.dndarray.DNDarray, x: None | int | float | heat.core.dndarray.DNDarray = None, y: None | int | float | heat.core.dndarray.DNDarray = None) heat.core.dndarray.DNDarray
Return a
DNDarray
containing elements chosen fromx
ory
depending on condition. Result is aDNDarray
with elements fromx
where cond isTrue
, and elements fromy
elsewhere (False
).- Parameters:
cond (DNDarray) – Condition of interest: where true, yield
x
, otherwise yield y.
x (DNDarray or int or float, optional) – Values from which to choose.
x
,y
and condition need to be broadcastable to some shape.y (DNDarray or int or float, optional) – Values from which to choose.
x
,y
and condition need to be broadcastable to some shape.
- Raises:
NotImplementedError – if splits of the two input
DNDarray
differTypeError – if only x or y is given or both are not DNDarrays or numerical scalars
Notes
When only condition is provided, this function is a shorthand for
nonzero()
.Examples
>>> import heat as ht >>> x = ht.arange(10, split=0) >>> ht.where(x < 5, x, 10*x) DNDarray([ 0, 1, 2, 3, 4, 50, 60, 70, 80, 90], dtype=ht.int64, device=cpu:0, split=0) >>> y = ht.array([[0, 1, 2], [0, 2, 4], [0, 3, 6]]) >>> ht.where(y < 4, y, -1) DNDarray([[ 0, 1, 2], [ 0, 2, -1], [ 0, 3, -1]], dtype=ht.int64, device=cpu:0, split=None)
- load(path: str, *args: List[object] | None, **kwargs: Dict[str, object] | None) heat.core.dndarray.DNDarray
Attempts to load data from a file stored on disk, auto-detecting the file format from the extension. CSV files are always supported; HDF5 and netCDF4 are additionally supported if the corresponding libraries are installed.
- Parameters:
path (str) – Path to the file to be read.
args (list, optional) – Additional options passed to the particular functions.
kwargs (dict, optional) – Additional options passed to the particular functions.
- Raises:
ValueError – If the file extension is not understood or known.
RuntimeError – If the optional dependency for a file extension is not available.
Examples
>>> ht.load('data.h5', dataset='DATA') DNDarray([ 1.0000, 2.7183, 7.3891, 20.0855, 54.5981], dtype=ht.float32, device=cpu:0, split=None) >>> ht.load('data.nc', variable='DATA') DNDarray([ 1.0000, 2.7183, 7.3891, 20.0855, 54.5981], dtype=ht.float32, device=cpu:0, split=None)
- load_csv(path: str, header_lines: int = 0, sep: str = ',', dtype: heat.core.types.datatype = types.float32, encoding: str = 'utf-8', split: int | None = None, device: str | None = None, comm: heat.core.communication.Communication | None = None) heat.core.dndarray.DNDarray
Loads data from a CSV file. The data will be distributed along the axis 0.
- Parameters:
path (str) – Path to the CSV file to be read.
header_lines (int, optional) – The number of lines at the beginning of the file that should not be considered as data.
sep (str, optional) – The single
char
orstr
that separates the values in each row.dtype (datatype, optional) – Data type of the resulting array.
encoding (str, optional) – The type of encoding which will be used to interpret the lines of the csv file as strings.
split (int or None, optional) – Along which axis the resulting array should be split. Default is
None
which means each node will have the full array.device (str, optional) – The device id on which to place the data, defaults to globally set default device.
comm (Communication, optional) – The communication to use for the data distribution, defaults to global default
- Raises:
TypeError – If any of the input parameters are not of correct type.
Examples
>>> import heat as ht >>> a = ht.load_csv('data.csv') >>> a.shape [0/3] (150, 4) [1/3] (150, 4) [2/3] (150, 4) [3/3] (150, 4) >>> a.lshape [0/3] (38, 4) [1/3] (38, 4) [2/3] (37, 4) [3/3] (37, 4) >>> b = ht.load_csv('data.csv', header_lines=10) >>> b.shape [0/3] (140, 4) [1/3] (140, 4) [2/3] (140, 4) [3/3] (140, 4) >>> b.lshape [0/3] (35, 4) [1/3] (35, 4) [2/3] (35, 4) [3/3] (35, 4)
- save_csv(data: heat.core.dndarray.DNDarray, path: str, header_lines: Iterable[str] = None, sep: str = ',', decimals: int = -1, encoding: str = 'utf-8', comm: heat.core.communication.Communication | None = None, truncate: bool = True)
Saves data to a CSV file. Only data with at most two dimensions is supported; all split axes are supported.
- Parameters:
data (DNDarray) – The DNDarray to be saved to CSV.
path (str) – The path as a string.
header_lines (Iterable[str]) – Optional iterable of str to prepend at the beginning of the file. No pound sign or any other comment marker will be inserted.
sep (str) – The separator character used in this CSV.
decimals (int) – Number of digits after decimal point.
encoding (str) – The encoding to be used in this CSV.
comm (Optional[Communication]) – An optional object of type Communication to be used.
truncate (bool) – Whether to truncate an existing file before writing, i.e. fully overwrite it. The sane default is True. Setting it to False will not shorten an existing file when needed and may thus leave garbage at the end of existing files.
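The interplay of the header_lines, sep, and decimals parameters can be sketched in plain Python. This is an illustration of the documented parameter semantics only, not Heat's implementation; format_csv is a hypothetical helper:

```python
# Illustrative sketch (not Heat's code) of how save_csv's parameters shape
# the output: header lines are written verbatim (no comment marker added),
# rows are joined with `sep`, and `decimals` controls float formatting.
def format_csv(rows, header_lines=None, sep=",", decimals=-1):
    lines = list(header_lines or [])       # prepended as-is, no pound sign
    for row in rows:
        if decimals >= 0:                  # round to a fixed number of digits
            cells = [f"{float(v):.{decimals}f}" for v in row]
        else:                              # decimals < 0: leave values as-is
            cells = [str(v) for v in row]
        lines.append(sep.join(cells))
    return "\n".join(lines) + "\n"

text = format_csv([[1.5, 2.26], [3.0, 4.0]], header_lines=["a,b"], decimals=1)
# → "a,b\n1.5,2.3\n3.0,4.0\n"
```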
- save(data: heat.core.dndarray.DNDarray, path: str, *args: List[object] | None, **kwargs: Dict[str, object] | None)
Attempts to save data from a
DNDarray
to disk. An auto-detection based on the file format extension is performed.- Parameters:
data (DNDarray) – The array holding the data to be stored
path (str) – Path to the file to be stored.
args (list, optional) – Additional options passed to the particular functions.
kwargs (dict, optional) – Additional options passed to the particular functions.
- Raises:
ValueError – If the file extension is not understood or known.
RuntimeError – If the optional dependency for a file extension is not available.
Examples
>>> x = ht.arange(100, split=0) >>> ht.save(x, 'data.h5', 'DATA', mode='a')
- supports_hdf5() bool
Returns
True
if Heat supports reading from and writing to HDF5 files,False
otherwise.
- supports_netcdf() bool
Returns
True
if Heat supports reading from and writing to netCDF4 files,False
otherwise.
- all(x: heat.core.dndarray.DNDarray, axis: int | Tuple[int] | None = None, out: heat.core.dndarray.DNDarray | None = None, keepdims: bool = False) heat.core.dndarray.DNDarray | bool
Test whether all array elements along a given axis evaluate to
True
. A new boolean orDNDarray
is returned unless out is specified, in which case a reference toout
is returned.- Parameters:
x (DNDarray) – Input array or object that can be converted to an array.
axis (None or int or Tuple[int,...], optional) – Axis or axes along which a logical AND reduction is performed. The default (
axis=None
) is to perform a logical AND over all the dimensions of the input array.axis
may be negative, in which case it counts from the last to the first axis.out (DNDarray, optional) – Alternate output array in which to place the result. It must have the same shape as the expected output and its type is preserved.
keepdims (bool, optional) – If this is set to
True
, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original array.
Examples
>>> x = ht.random.randn(4, 5) >>> x DNDarray([[ 0.7199, 1.3718, 1.5008, 0.3435, 1.2884], [ 0.1532, -0.0968, 0.3739, 1.7843, 0.5614], [ 1.1522, 1.9076, 1.7638, 0.4110, -0.2803], [-0.5475, -0.0271, 0.8564, -1.5870, 1.3108]], dtype=ht.float32, device=cpu:0, split=None) >>> y = x < 0.5 >>> y DNDarray([[False, False, False, True, False], [ True, True, True, False, False], [False, False, False, True, True], [ True, True, False, True, False]], dtype=ht.bool, device=cpu:0, split=None) >>> ht.all(y) DNDarray([False], dtype=ht.bool, device=cpu:0, split=None) >>> ht.all(y, axis=0) DNDarray([False, False, False, False, False], dtype=ht.bool, device=cpu:0, split=None) >>> ht.all(x, axis=1) DNDarray([True, True, True, True], dtype=ht.bool, device=cpu:0, split=None) >>> out = ht.zeros(5) >>> ht.all(y, axis=0, out=out) DNDarray([False, False, False, False, False], dtype=ht.float32, device=cpu:0, split=None) >>> out DNDarray([False, False, False, False, False], dtype=ht.float32, device=cpu:0, split=None)
- allclose(x: heat.core.dndarray.DNDarray, y: heat.core.dndarray.DNDarray, rtol: float = 1e-05, atol: float = 1e-08, equal_nan: bool = False) bool
Test whether two tensors are element-wise equal within a tolerance. Returns
True
if|x-y|<=atol+rtol*|y|
for all elements ofx
andy
,False
otherwise- Parameters:
x (DNDarray) – First array to compare
y (DNDarray) – Second array to compare
atol (float, optional) – Absolute tolerance.
rtol (float, optional) – Relative tolerance (with respect to
y
).equal_nan (bool, optional) – Whether to compare NaN’s as equal. If
True
, NaN’s inx
will be considered equal to NaN’s iny
in the output array.
Examples
>>> x = ht.float32([[2, 2], [2, 2]]) >>> ht.allclose(x, x) True >>> y = ht.float32([[2.00005, 2.00005], [2.00005, 2.00005]]) >>> ht.allclose(x, y) False >>> ht.allclose(x, y, atol=1e-04) True
- any(x, axis: int | None = None, out: heat.core.dndarray.DNDarray | None = None, keepdims: bool = False) heat.core.dndarray.DNDarray
Returns a
DNDarray
containing the result of the test whether any array elements along a given axis evaluate toTrue
. The resulting array is one-dimensional unless axis is not None
.- Parameters:
x (DNDarray) – Input tensor
axis (int, optional) – Axis along which a logic OR reduction is performed. With
axis=None
, the logical OR is performed over all dimensions of the array.out (DNDarray, optional) – Alternative output tensor in which to place the result. It must have the same shape as the expected output. The output is a array with
datatype=bool
.keepdims (bool, optional) – If this is set to
True
, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original array.
Examples
>>> x = ht.float32([[0.3, 0, 0.5]]) >>> x.any() DNDarray([True], dtype=ht.bool, device=cpu:0, split=None) >>> x.any(axis=0) DNDarray([ True, False, True], dtype=ht.bool, device=cpu:0, split=None) >>> x.any(axis=1) DNDarray([True], dtype=ht.bool, device=cpu:0, split=None) >>> y = ht.int32([[0, 0, 1], [0, 0, 0]]) >>> res = ht.zeros(3, dtype=ht.bool) >>> y.any(axis=0, out=res) DNDarray([False, False, True], dtype=ht.bool, device=cpu:0, split=None) >>> res DNDarray([False, False, True], dtype=ht.bool, device=cpu:0, split=None)
- isclose(x: heat.core.dndarray.DNDarray, y: heat.core.dndarray.DNDarray, rtol: float = 1e-05, atol: float = 1e-08, equal_nan: bool = False) heat.core.dndarray.DNDarray
Returns a boolean
DNDarray
, with elementsTrue
where
x
and
y
are equal within the given tolerance. If bothx
andy
are scalars, returns a single boolean value.- Parameters:
x (DNDarray) – Input array to compare.
y (DNDarray) – Input array to compare.
rtol (float) – The relative tolerance parameter.
atol (float) – The absolute tolerance parameter.
equal_nan (bool) – Whether to compare NaN’s as equal. If
True
, NaN’s in x will be considered equal to NaN’s in y in the output array.
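For a single pair of values, the tolerance criterion |x-y| <= atol+rtol*|y| documented above can be sketched as follows. This is a scalar illustration of the criterion, not Heat's element-wise implementation; note the criterion is asymmetric, since the relative tolerance is taken with respect to y:

```python
import math

# Scalar sketch of the comparison isclose/allclose apply element-wise,
# per the documented criterion |x - y| <= atol + rtol * |y|.
def close(x, y, rtol=1e-05, atol=1e-08, equal_nan=False):
    if math.isnan(x) or math.isnan(y):
        # NaN never compares close unless equal_nan is set and both are NaN
        return equal_nan and math.isnan(x) and math.isnan(y)
    return abs(x - y) <= atol + rtol * abs(y)

close(2.0, 2.00005)              # → False (|diff| = 5e-5 exceeds ~2e-5 bound)
close(2.0, 2.00005, atol=1e-04)  # → True (larger absolute tolerance)
```

With the default tolerances this reproduces the allclose example above: the pair (2, 2.00005) fails, but passes once atol is raised to 1e-04.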
- isfinite(x: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Test element-wise for finiteness (neither infinity nor NaN) and return the result as a boolean
DNDarray
.- Parameters:
x (DNDarray) – Input tensor
Examples
>>> ht.isfinite(ht.array([1, ht.inf, -ht.inf, ht.nan])) DNDarray([ True, False, False, False], dtype=ht.bool, device=cpu:0, split=None)
- isinf(x: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Test element-wise for positive or negative infinity and return result as a boolean
DNDarray
.- Parameters:
x (DNDarray) – Input tensor
Examples
>>> ht.isinf(ht.array([1, ht.inf, -ht.inf, ht.nan])) DNDarray([False, True, True, False], dtype=ht.bool, device=cpu:0, split=None)
- isnan(x: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Test element-wise for NaN and return result as a boolean
DNDarray
.- Parameters:
x (DNDarray) – Input tensor
Examples
>>> ht.isnan(ht.array([1, ht.inf, -ht.inf, ht.nan])) DNDarray([False, False, False, True], dtype=ht.bool, device=cpu:0, split=None)
- isneginf(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Test if each element of x is negative infinite, return result as a boolean
DNDarray
.- Parameters:
Examples
>>> ht.isneginf(ht.array([1, ht.inf, -ht.inf, ht.nan])) DNDarray([False, False, True, False], dtype=ht.bool, device=cpu:0, split=None)
- isposinf(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None)
Test if each element of x is positive infinite, return result as a boolean
DNDarray
.- Parameters:
Examples
>>> ht.isposinf(ht.array([1, ht.inf, -ht.inf, ht.nan])) DNDarray([False, True, False, False], dtype=ht.bool, device=cpu:0, split=None)
- logical_and(x: heat.core.dndarray.DNDarray, y: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Compute the truth value of
x
ANDy
element-wise. Returns a booleanDNDarray
containing the truth value ofx
ANDy
element-wise.Examples
>>> ht.logical_and(ht.array([True, False]), ht.array([False, False])) DNDarray([False, False], dtype=ht.bool, device=cpu:0, split=None)
- logical_not(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Computes the element-wise logical NOT of the given input
DNDarray
.- Parameters:
Examples
>>> ht.logical_not(ht.array([True, False])) DNDarray([False, True], dtype=ht.bool, device=cpu:0, split=None)
- logical_or(x: heat.core.dndarray.DNDarray, y: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Returns boolean
DNDarray
containing the element-wise logical OR of the two given input DNDarrays
.Examples
>>> ht.logical_or(ht.array([True, False]), ht.array([False, False])) DNDarray([ True, False], dtype=ht.bool, device=cpu:0, split=None)
- logical_xor(x: heat.core.dndarray.DNDarray, y: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Computes the element-wise logical XOR of the given input
DNDarray
.Examples
>>> ht.logical_xor(ht.array([True, False, True]), ht.array([True, False, False])) DNDarray([False, False, True], dtype=ht.bool, device=cpu:0, split=None)
- signbit(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Checks element-wise whether the sign bit is set (i.e. the value is less than zero).
Examples
>>> a = ht.array([2, -1.3, 0]) >>> ht.signbit(a) DNDarray([False, True, False], dtype=ht.bool, device=cpu:0, split=None)
- balance(array: heat.core.dndarray.DNDarray, copy=False) heat.core.dndarray.DNDarray
Out of place balance function. More information on the meaning of balance can be found in
DNDarray.balance_()
.
- broadcast_arrays(*arrays: heat.core.dndarray.DNDarray) List[heat.core.dndarray.DNDarray]
Broadcasts one or more arrays against one another. Returns the broadcasted arrays, distributed along the split dimension of the first array in the list. If the first array is not distributed, the output will not be distributed.
Notes
Broadcasted arrays are a view of the original arrays if possible, otherwise a copy is made.
Examples
>>> import heat as ht >>> a = ht.ones((100, 10), split=0) >>> b = ht.ones((10,), split=None) >>> c = ht.ones((1, 10), split=1) >>> d, e, f = ht.broadcast_arrays(a, b, c) >>> d.shape (100, 10) >>> e.shape (100, 10) >>> f.shape (100, 10) >>> d.split 0 >>> e.split 0 >>> f.split 0
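The shape-compatibility rule applied here follows standard NumPy/PyTorch broadcasting: shapes are aligned right-to-left, and each pair of dimensions must be equal or contain a 1. A shape-only sketch (broadcast_shape is a hypothetical helper, not part of Heat):

```python
# Shape-only sketch of the broadcasting rule: align shapes right-to-left;
# in each position, dimensions must match or be 1, and the output takes
# the larger of the two.
def broadcast_shape(*shapes):
    ndim = max(len(s) for s in shapes)
    # left-pad shorter shapes with 1s so all shapes have the same rank
    padded = [(1,) * (ndim - len(s)) + tuple(s) for s in shapes]
    out = []
    for dims in zip(*padded):
        target = max(dims)
        if any(d not in (1, target) for d in dims):
            raise ValueError(f"shapes are not broadcastable: {shapes}")
        out.append(target)
    return tuple(out)

broadcast_shape((100, 10), (10,), (1, 10))  # the shapes from the example above → (100, 10)
```

The same rule explains the ValueError in the broadcast_to example: (100,) aligns against the trailing 10 of (100, 10), not against the leading 100, so the shapes are incompatible.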
- broadcast_to(x: heat.core.dndarray.DNDarray, shape: Tuple[int, Ellipsis]) heat.core.dndarray.DNDarray
Broadcasts an array to a specified shape. Returns a view of
x
ifx
is not distributed, otherwise it returns a broadcasted, distributed, load-balanced copy ofx
.- Parameters:
x (DNDarray) – DNDarray to broadcast.
shape (Tuple[int, ...]) – Array shape. Must be compatible with
x
.
- Raises:
ValueError – If the array is not compatible with the new shape according to PyTorch’s broadcasting rules.
Examples
>>> import heat as ht >>> a = ht.arange(100, split=0) >>> b = ht.broadcast_to(a, (10,100)) >>> b.shape (10, 100) >>> b.split 1 >>> c = ht.broadcast_to(a, (100, 10)) ValueError: Shape mismatch: object cannot be broadcast to the given shape. Original shape: (100,), target shape: (100, 10)
- collect(arr: heat.core.dndarray.DNDarray, target_rank: int | None = 0) heat.core.dndarray.DNDarray
Collects a distributed DNDarray onto a single rank, specified by the target_rank argument. It is a special case of the
redistribute_
method.- Parameters:
arr (DNDarray) – The DNDarray to be collected.
target_rank (int, optional) – The rank to which the DNDarray will be collected. Default: 0.
- Raises:
TypeError – If the target rank is not an integer.
ValueError – If the target rank is out of bounds.
Examples
>>> st = ht.ones((50, 81, 67), split=2) >>> print(st.lshape) [0/2] (50, 81, 23) [1/2] (50, 81, 22) [2/2] (50, 81, 22) >>> collected_st = ht.collect(st) >>> print(collected_st.lshape) [0/2] (50, 81, 67) [1/2] (50, 81, 0) [2/2] (50, 81, 0) >>> collected_st = ht.collect(collected_st, 1) >>> print(collected_st.lshape) [0/2] (50, 81, 0) [1/2] (50, 81, 67) [2/2] (50, 81, 0)
- column_stack(arrays: Sequence[heat.core.dndarray.DNDarray, Ellipsis]) heat.core.dndarray.DNDarray
Stack 1-D or 2-D DNDarrays as columns into a 2-D DNDarray. If the input arrays are 1-D, they will be stacked as columns. If they are 2-D, they will be concatenated along the second axis.
- Parameters:
- Raises:
ValueError – If arrays have more than 2 dimensions
Notes
All DNDarrays in the sequence must have the same number of rows. All DNDarrays must be split along the same axis! Note that distributed 1-D arrays (split=0) by default will be transposed into distributed column arrays with split=1.
See also
Examples
>>> # 1-D tensors >>> a = ht.array([1, 2, 3]) >>> b = ht.array([2, 3, 4]) >>> ht.column_stack((a, b)).larray tensor([[1, 2], [2, 3], [3, 4]]) >>> # 1-D and 2-D tensors >>> a = ht.array([1, 2, 3]) >>> b = ht.array([[2, 5], [3, 6], [4, 7]]) >>> c = ht.array([[7, 10], [8, 11], [9, 12]]) >>> ht.column_stack((a, b, c)).larray tensor([[ 1, 2, 5, 7, 10], [ 2, 3, 6, 8, 11], [ 3, 4, 7, 9, 12]]) >>> # distributed DNDarrays, 3 processes >>> a = ht.arange(10, split=0).reshape((5, 2)) >>> b = ht.arange(5, 20, split=0).reshape((5, 3)) >>> c = ht.arange(20, 40, split=0).reshape((5, 4)) >>> ht.column_stack((a, b, c)).larray [0/2] tensor([[ 0, 1, 5, 6, 7, 20, 21, 22, 23], [0/2] [ 2, 3, 8, 9, 10, 24, 25, 26, 27]], dtype=torch.int32) [1/2] tensor([[ 4, 5, 11, 12, 13, 28, 29, 30, 31], [1/2] [ 6, 7, 14, 15, 16, 32, 33, 34, 35]], dtype=torch.int32) [2/2] tensor([[ 8, 9, 17, 18, 19, 36, 37, 38, 39]], dtype=torch.int32) >>> # distributed 1-D and 2-D DNDarrays, 3 processes >>> a = ht.arange(5, split=0) >>> b = ht.arange(5, 20, split=1).reshape((5, 3)) >>> ht.column_stack((a, b)).larray [0/2] tensor([[ 0, 5], [0/2] [ 1, 8], [0/2] [ 2, 11], [0/2] [ 3, 14], [0/2] [ 4, 17]], dtype=torch.int32) [1/2] tensor([[ 6], [1/2] [ 9], [1/2] [12], [1/2] [15], [1/2] [18]], dtype=torch.int32) [2/2] tensor([[ 7], [2/2] [10], [2/2] [13], [2/2] [16], [2/2] [19]], dtype=torch.int32)
- concatenate(arrays: Sequence[heat.core.dndarray.DNDarray, Ellipsis], axis: int = 0) heat.core.dndarray.DNDarray
Join 2 or more DNDarrays along an existing axis.
- Parameters:
arrays (Sequence[DNDarray, ...]) – The arrays must have the same shape, except in the dimension corresponding to axis.
axis (int, optional) – The axis along which the arrays will be joined (default is 0).
- Raises:
RuntimeError – If the concatenated
DNDarray
meta information, e.g. split or comm, does not match.TypeError – If the passed parameters are not of correct type.
ValueError – If the number of passed arrays is less than two or their shapes do not match.
Examples
>>> x = ht.zeros((3, 5), split=None) [0/1] tensor([[0., 0., 0., 0., 0.], [0/1] [0., 0., 0., 0., 0.], [0/1] [0., 0., 0., 0., 0.]]) [1/1] tensor([[0., 0., 0., 0., 0.], [1/1] [0., 0., 0., 0., 0.], [1/1] [0., 0., 0., 0., 0.]]) >>> y = ht.ones((3, 6), split=0) [0/1] tensor([[1., 1., 1., 1., 1., 1.], [0/1] [1., 1., 1., 1., 1., 1.]]) [1/1] tensor([[1., 1., 1., 1., 1., 1.]]) >>> ht.concatenate((x, y), axis=1) [0/1] tensor([[0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1.], [0/1] [0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1.]]) [1/1] tensor([[0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1.]]) >>> x = ht.zeros((4, 5), split=1) [0/1] tensor([[0., 0., 0.], [0/1] [0., 0., 0.], [0/1] [0., 0., 0.], [0/1] [0., 0., 0.]]) [1/1] tensor([[0., 0.], [1/1] [0., 0.], [1/1] [0., 0.], [1/1] [0., 0.]]) >>> y = ht.ones((3, 5), split=1) [0/1] tensor([[1., 1., 1.], [0/1] [1., 1., 1.], [0/1] [1., 1., 1.]]) [1/1] tensor([[1., 1.], [1/1] [1., 1.], [1/1] [1., 1.]]) >>> ht.concatenate((x, y), axis=0) [0/1] tensor([[0., 0., 0.], [0/1] [0., 0., 0.], [0/1] [0., 0., 0.], [0/1] [0., 0., 0.], [0/1] [1., 1., 1.], [0/1] [1., 1., 1.], [0/1] [1., 1., 1.]]) [1/1] tensor([[0., 0.], [1/1] [0., 0.], [1/1] [0., 0.], [1/1] [0., 0.], [1/1] [1., 1.], [1/1] [1., 1.], [1/1] [1., 1.]])
- diag(a: heat.core.dndarray.DNDarray, offset: int = 0) heat.core.dndarray.DNDarray
Extract a diagonal or construct a diagonal array. See the documentation for
diagonal()
for more information about extracting the diagonal.- Parameters:
a (DNDarray) – The array holding data for creating a diagonal array or extracting a diagonal. If a is a 1-dimensional array, a diagonal 2d-array will be returned. If a is a n-dimensional array with n > 1 the diagonal entries will be returned in an n-1 dimensional array.
offset (int, optional) – The offset from the main diagonal. Offset greater than zero means above the main diagonal, smaller than zero is below the main diagonal.
See also
Examples
>>> import heat as ht >>> a = ht.array([1, 2]) >>> ht.diag(a) DNDarray([[1, 0], [0, 2]], dtype=ht.int64, device=cpu:0, split=None) >>> ht.diag(a, offset=1) DNDarray([[0, 1, 0], [0, 0, 2], [0, 0, 0]], dtype=ht.int64, device=cpu:0, split=None) >>> ht.equal(ht.diag(ht.diag(a)), a) True >>> a = ht.array([[1, 2], [3, 4]]) >>> ht.diag(a) DNDarray([1, 4], dtype=ht.int64, device=cpu:0, split=None)
- diagonal(a: heat.core.dndarray.DNDarray, offset: int = 0, dim1: int = 0, dim2: int = 1) heat.core.dndarray.DNDarray
Extract a diagonal of an n-dimensional array with n > 1. The returned array will be of dimension n-1.
- Parameters:
a (DNDarray) – The array of which the diagonal should be extracted.
offset (int, optional) – The offset from the main diagonal. Offset greater than zero means above the main diagonal, smaller than zero is below the main diagonal. Default is 0 which means the main diagonal will be selected.
dim1 (int, optional) – First dimension with respect to which to take the diagonal.
dim2 (int, optional) – Second dimension with respect to which to take the diagonal.
Examples
>>> import heat as ht >>> a = ht.array([[1, 2], [3, 4]]) >>> ht.diagonal(a) DNDarray([1, 4], dtype=ht.int64, device=cpu:0, split=None) >>> ht.diagonal(a, offset=1) DNDarray([2], dtype=ht.int64, device=cpu:0, split=None) >>> ht.diagonal(a, offset=-1) DNDarray([3], dtype=ht.int64, device=cpu:0, split=None) >>> a = ht.array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) >>> ht.diagonal(a) DNDarray([[0, 6], [1, 7]], dtype=ht.int64, device=cpu:0, split=None) >>> ht.diagonal(a, dim2=2) DNDarray([[0, 5], [2, 7]], dtype=ht.int64, device=cpu:0, split=None)
- dsplit(x: Sequence[heat.core.dndarray.DNDarray, Ellipsis], indices_or_sections: Iterable) List[heat.core.dndarray.DNDarray, Ellipsis]
Split array into multiple sub-DNDarrays along the 3rd axis (depth). Returns a list of sub-DNDarrays as copies of parts of x.
- Parameters:
x (DNDarray) – DNDArray to be divided into sub-DNDarrays.
indices_or_sections (int or 1-dimensional array_like (i.e. undistributed DNDarray, list or tuple)) – If indices_or_sections is an integer, N, the DNDarray will be divided into N equal DNDarrays along the 3rd axis. If such a split is not possible, an error is raised. If indices_or_sections is a 1-D DNDarray of sorted integers, the entries indicate where along the 3rd axis the array is split. If an index exceeds the dimension of the array along the 3rd axis, an empty sub-DNDarray is returned correspondingly.
- Raises:
ValueError – If indices_or_sections is given as integer, but a split does not result in equal division.
Notes
Please refer to the split documentation. dsplit is equivalent to split with axis=2; the array is always split along the third axis, provided the array dimension is greater than or equal to 3.
Examples
>>> x = ht.arange(24).reshape((2, 3, 4)) >>> ht.dsplit(x, 2) [DNDarray([[[ 0, 1], [ 4, 5], [ 8, 9]], [[12, 13], [16, 17], [20, 21]]]), DNDarray([[[ 2, 3], [ 6, 7], [10, 11]], [[14, 15], [18, 19], [22, 23]]])] >>> ht.dsplit(x, [1, 4]) [DNDarray([[[ 0], [ 4], [ 8]], [[12], [16], [20]]]), DNDarray([[[ 1, 2, 3], [ 5, 6, 7], [ 9, 10, 11]], [[13, 14, 15], [17, 18, 19], [21, 22, 23]]]), DNDarray([])]
- expand_dims(a: heat.core.dndarray.DNDarray, axis: int) heat.core.dndarray.DNDarray
Expand the shape of an array. Insert a new axis that will appear at the axis position in the expanded array shape.
- Parameters:
a (DNDarray) – Input array to be expanded.
axis (int) – Position in the expanded axes where the new axis is placed.
- Raises:
ValueError – If axis is not consistent with the available dimensions.
Examples
>>> x = ht.array([1,2]) >>> x.shape (2,) >>> y = ht.expand_dims(x, axis=0) >>> y array([[1, 2]]) >>> y.shape (1, 2) >>> y = ht.expand_dims(x, axis=1) >>> y array([[1], [2]]) >>> y.shape (2, 1)
- flatten(a: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Flattens an array into one dimension.
- Parameters:
a (DNDarray) – Array to collapse
Warning
If a.split>0, the array must be redistributed along the first axis (see
resplit()
).See also
Examples
>>> a = ht.array([[[1,2],[3,4]],[[5,6],[7,8]]]) >>> ht.flatten(a) DNDarray([1, 2, 3, 4, 5, 6, 7, 8], dtype=ht.int64, device=cpu:0, split=None)
- flip(a: heat.core.dndarray.DNDarray, axis: int | Tuple[int, Ellipsis] = None) heat.core.dndarray.DNDarray
Reverse the order of elements in an array along the given axis. The shape of the array is preserved, but the elements are reordered.
- Parameters:
a (DNDarray) – Input array to be flipped
axis (int or Tuple[int,...]) – A list of axes to be flipped
Examples
>>> a = ht.array([[0,1],[2,3]]) >>> ht.flip(a, [0]) DNDarray([[2, 3], [0, 1]], dtype=ht.int64, device=cpu:0, split=None) >>> b = ht.array([[0,1,2],[3,4,5]], split=1) >>> ht.flip(b, [0,1]) (1/2) tensor([5,4,3]) (2/2) tensor([2,1,0])
- fliplr(a: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Flip array in the left/right direction. If a.ndim>2, flip along dimension 1.
- Parameters:
a (DNDarray) – Input array to be flipped, must be at least 2-D
Examples
>>> a = ht.array([[0,1],[2,3]]) >>> ht.fliplr(a) DNDarray([[1, 0], [3, 2]], dtype=ht.int64, device=cpu:0, split=None) >>> b = ht.array([[0,1,2],[3,4,5]], split=0) >>> ht.fliplr(b) (1/2) tensor([[2, 1, 0]]) (2/2) tensor([[5, 4, 3]])
- flipud(a: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Flip array in the up/down direction.
- Parameters:
a (DNDarray) – Input array to be flipped
Examples
>>> a = ht.array([[0,1],[2,3]]) >>> ht.flipud(a) DNDarray([[2, 3], [0, 1]], dtype=ht.int64, device=cpu:0, split=None) >>> b = ht.array([[0,1,2],[3,4,5]], split=0) >>> ht.flipud(b) (1/2) tensor([3,4,5]) (2/2) tensor([0,1,2])
- hsplit(x: heat.core.dndarray.DNDarray, indices_or_sections: Iterable) List[heat.core.dndarray.DNDarray, Ellipsis]
Split array into multiple sub-DNDarrays along the 2nd axis (horizontally/column-wise). Returns a list of sub-DNDarrays as copies of parts of x.
- Parameters:
x (DNDarray) – DNDArray to be divided into sub-DNDarrays.
indices_or_sections (int or 1-dimensional array_like (i.e. undistributed DNDarray, list or tuple)) – If indices_or_sections is an integer, N, the DNDarray will be divided into N equal DNDarrays along the 2nd axis. If such a split is not possible, an error is raised. If indices_or_sections is a 1-D DNDarray of sorted integers, the entries indicate where along the 2nd axis the array is split. If an index exceeds the dimension of the array along the 2nd axis, an empty sub-DNDarray is returned correspondingly.
- Raises:
ValueError – If indices_or_sections is given as integer, but a split does not result in equal division.
Notes
Please refer to the split documentation. hsplit is nearly equivalent to split with axis=1; however, in contrast to split, the array is always split along the second axis regardless of the array dimension.
Examples
>>> x = ht.arange(24).reshape((2, 4, 3)) >>> ht.hsplit(x, 2) [DNDarray([[[ 0, 1, 2], [ 3, 4, 5]], [[12, 13, 14], [15, 16, 17]]]), DNDarray([[[ 6, 7, 8], [ 9, 10, 11]], [[18, 19, 20], [21, 22, 23]]])] >>> ht.hsplit(x, [1, 3]) [DNDarray([[[ 0, 1, 2]], [[12, 13, 14]]]), DNDarray([[[ 3, 4, 5], [ 6, 7, 8]], [[15, 16, 17], [18, 19, 20]]]), DNDarray([[[ 9, 10, 11]], [[21, 22, 23]]])]
- hstack(arrays: Sequence[heat.core.dndarray.DNDarray, Ellipsis]) heat.core.dndarray.DNDarray
Stack arrays in sequence horizontally (column-wise). This is equivalent to concatenation along the second axis, except for 1-D arrays where it concatenates along the first axis.
- Parameters:
arrays (Sequence[DNDarray, ...]) – The arrays must have the same shape along all but the second axis, except 1-D arrays which can be any length.
See also
concatenate()
,stack()
,vstack()
,column_stack()
,row_stack()
Examples
>>> a = ht.array((1,2,3)) >>> b = ht.array((2,3,4)) >>> ht.hstack((a,b)).larray [0/1] tensor([1, 2, 3, 2, 3, 4]) [1/1] tensor([1, 2, 3, 2, 3, 4]) >>> a = ht.array((1,2,3), split=0) >>> b = ht.array((2,3,4), split=0) >>> ht.hstack((a,b)).larray [0/1] tensor([1, 2, 3]) [1/1] tensor([2, 3, 4]) >>> a = ht.array([[1],[2],[3]], split=0) >>> b = ht.array([[2],[3],[4]], split=0) >>> ht.hstack((a,b)).larray [0/1] tensor([[1, 2], [0/1] [2, 3]]) [1/1] tensor([[3, 4]])
- moveaxis(x: heat.core.dndarray.DNDarray, source: int | Sequence[int], destination: int | Sequence[int]) heat.core.dndarray.DNDarray
Moves axes at the positions in source to new positions.
- Parameters:
x (DNDarray) – The input array.
source (int or Sequence[int, ...]) – Original positions of the axes to move. These must be unique.
destination (int or Sequence[int, ...]) – Destination positions for each of the original axes. These must also be unique.
See also
transpose
Permute the dimensions of an array.
- Raises:
TypeError – If source or destination are not ints, lists or tuples.
ValueError – If source and destination do not have the same number of elements.
Examples
>>> x = ht.zeros((3, 4, 5)) >>> ht.moveaxis(x, 0, -1).shape (4, 5, 3) >>> ht.moveaxis(x, -1, 0).shape (5, 3, 4)
- pad(array: heat.core.dndarray.DNDarray, pad_width: int | Sequence[Sequence[int, int], Ellipsis], mode: str = 'constant', constant_values: int = 0) heat.core.dndarray.DNDarray
Pads a tensor with a specified value (default: 0). (Not all dimensions are supported.)
- Parameters:
array (DNDarray) – Array to be padded
pad_width (Union[int, Sequence[Sequence[int, int], ...]]) –
Number of values padded to the edges of each axis. ((before_1, after_1),…(before_N, after_N)) unique pad widths for each axis. Determines how many elements are padded along which dimension.
Shortcuts:
((before, after),) or (before, after): before and after pad width for each axis.
(pad_width,) or int: before = after = pad width for all axes.
Therefore:
pad last dimension: (padding_left, padding_right)
pad last 2 dimensions: ((padding_top, padding_bottom),(padding_left, padding_right))
pad last 3 dimensions: ((padding_front, padding_back), (padding_top, padding_bottom), (padding_left, padding_right))
… (same pattern)
mode (str, optional) –
‘constant’ (default): Pads the input tensor boundaries with a constant value. This is available for arbitrary dimensions
constant_values (Union[int, float, Sequence[Sequence[int,int], ...], Sequence[Sequence[float,float], ...]]) –
Number or tuple of 2-element-sequences (containing numbers), optional (default=0) The fill values for each axis (1 tuple per axis). ((before_1, after_1), … (before_N, after_N)) unique pad values for each axis.
Shortcuts:
((before, after),) or (before, after): before and after padding values for each axis.
(value,) or int: before = after = padding value for all axes.
Notes
This function follows the principle of datatype integrity. Therefore, an array can only be padded with values of the same datatype. All values that violate this rule are implicitly cast to the datatype of the DNDarray.
Examples
>>> import torch
>>> a = torch.arange(2 * 3 * 4).reshape(2, 3, 4)
>>> b = ht.array(a, split=0)

Pad last dimension:

>>> c = ht.pad(b, (2, 1), constant_values=1)
>>> c
tensor([[[ 1,  1,  0,  1,  2,  3,  1],
         [ 1,  1,  4,  5,  6,  7,  1],
         [ 1,  1,  8,  9, 10, 11,  1]],
        [[ 1,  1, 12, 13, 14, 15,  1],
         [ 1,  1, 16, 17, 18, 19,  1],
         [ 1,  1, 20, 21, 22, 23,  1]]])

Pad last 2 dimensions:

>>> d = ht.pad(b, [(1, 0), (2, 1)])
>>> d
DNDarray([[[ 0,  0,  0,  0,  0,  0,  0],
           [ 0,  0,  0,  1,  2,  3,  0],
           [ 0,  0,  4,  5,  6,  7,  0],
           [ 0,  0,  8,  9, 10, 11,  0]],
          [[ 0,  0,  0,  0,  0,  0,  0],
           [ 0,  0, 12, 13, 14, 15,  0],
           [ 0,  0, 16, 17, 18, 19,  0],
           [ 0,  0, 20, 21, 22, 23,  0]]], dtype=ht.int64, device=cpu:0, split=0)

Pad last 3 dimensions:

>>> e = ht.pad(b, ((2, 1), [1, 0], (2, 1)))
>>> e
DNDarray([[[ 0,  0,  0,  0,  0,  0,  0],
           [ 0,  0,  0,  0,  0,  0,  0],
           [ 0,  0,  0,  0,  0,  0,  0],
           [ 0,  0,  0,  0,  0,  0,  0]],
          [[ 0,  0,  0,  0,  0,  0,  0],
           [ 0,  0,  0,  0,  0,  0,  0],
           [ 0,  0,  0,  0,  0,  0,  0],
           [ 0,  0,  0,  0,  0,  0,  0]],
          [[ 0,  0,  0,  0,  0,  0,  0],
           [ 0,  0,  0,  1,  2,  3,  0],
           [ 0,  0,  4,  5,  6,  7,  0],
           [ 0,  0,  8,  9, 10, 11,  0]],
          [[ 0,  0,  0,  0,  0,  0,  0],
           [ 0,  0, 12, 13, 14, 15,  0],
           [ 0,  0, 16, 17, 18, 19,  0],
           [ 0,  0, 20, 21, 22, 23,  0]],
          [[ 0,  0,  0,  0,  0,  0,  0],
           [ 0,  0,  0,  0,  0,  0,  0],
           [ 0,  0,  0,  0,  0,  0,  0],
           [ 0,  0,  0,  0,  0,  0,  0]]], dtype=ht.int64, device=cpu:0, split=0)
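The datatype-integrity note above (pad values are implicitly cast to the array's dtype) mirrors NumPy's `np.pad` behavior, so it can be sketched without a distributed setup. This NumPy snippet is an illustrative stand-in for the analogous heat call, not heat code:

```python
import numpy as np

# Padding an integer array with a float fill value: the fill value is
# implicitly cast to the array's dtype (1.5 -> 1); the dtype of the
# padded result does not change.
a = np.arange(3)                             # integer dtype: [0, 1, 2]
padded = np.pad(a, 1, constant_values=1.5)   # fill value cast to int
print(padded)        # [1 0 1 2 1]
print(padded.dtype)  # same integer dtype as the input
```

The same rule applies in heat: a float `constant_values` on an integer `DNDarray` is truncated, not promoted.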
- ravel(a: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Return a flattened view of a if possible. A copy is returned otherwise.
- Parameters:
a (DNDarray) – array to collapse
Notes
Returning a view of distributed data is only possible when split != 0. The returned DNDarray may be unbalanced. Otherwise, data must be communicated among processes, and ravel falls back to flatten.
Examples
>>> a = ht.ones((2,3), split=0) >>> b = ht.ravel(a) >>> a[0,0] = 4 >>> b DNDarray([4., 1., 1., 1., 1., 1.], dtype=ht.float32, device=cpu:0, split=0)
- redistribute(arr: heat.core.dndarray.DNDarray, lshape_map: torch.Tensor = None, target_map: torch.Tensor = None) heat.core.dndarray.DNDarray
Redistributes the data of the DNDarray along the split axis to match the given target map. This function does not modify the non-split dimensions of the DNDarray. This is an abstraction and extension of the balance function.
- Parameters:
arr (DNDarray) – DNDarray to redistribute
lshape_map (torch.Tensor, optional) – The current lshape of the processes. Units are [rank, lshape].
target_map (torch.Tensor, optional) – The desired distribution across the processes. Units are [rank, target lshape]. Note: only the values along the split axis matter; the values not along this axis are there to mimic the shape of the lshape_map.
Examples
>>> st = ht.ones((50, 81, 67), split=2) >>> target_map = torch.zeros((st.comm.size, 3), dtype=torch.int64) >>> target_map[0, 2] = 67 >>> print(target_map) [0/2] tensor([[ 0, 0, 67], [0/2] [ 0, 0, 0], [0/2] [ 0, 0, 0]], dtype=torch.int32) [1/2] tensor([[ 0, 0, 67], [1/2] [ 0, 0, 0], [1/2] [ 0, 0, 0]], dtype=torch.int32) [2/2] tensor([[ 0, 0, 67], [2/2] [ 0, 0, 0], [2/2] [ 0, 0, 0]], dtype=torch.int32) >>> print(st.lshape) [0/2] (50, 81, 23) [1/2] (50, 81, 22) [2/2] (50, 81, 22) >>> ht.redistribute_(st, target_map=target_map) >>> print(st.lshape) [0/2] (50, 81, 67) [1/2] (50, 81, 0) [2/2] (50, 81, 0)
- repeat(a: Iterable, repeats: Iterable, axis: int | None = None) heat.core.dndarray.DNDarray
Creates a new DNDarray by repeating elements of array a. The output has the same shape as a, except along the given axis. If axis is None, this function returns a flattened DNDarray.
- Parameters:
a (array_like (i.e. int, float, or tuple/ list/ np.ndarray/ ht.DNDarray of ints/floats)) – Array containing the elements to be repeated.
repeats (int, or 1-dimensional/ DNDarray/ np.ndarray/ list/ tuple of ints) – The number of repetitions for each element, indicates broadcast if int or array_like of 1 element. In this case, the given value is broadcasted to fit the shape of the given axis. Otherwise, its length must be the same as a in the specified axis. To put it differently, the amount of repetitions has to be determined for each element in the corresponding dimension (or in all dimensions if axis is None).
axis (int, optional) – The axis along which to repeat values. By default, use the flattened input array and return a flat output array.
Examples
>>> ht.repeat(3, 4) DNDarray([3, 3, 3, 3])
>>> x = ht.array([[1,2],[3,4]]) >>> ht.repeat(x, 2) DNDarray([1, 1, 2, 2, 3, 3, 4, 4])
>>> x = ht.array([[1,2],[3,4]]) >>> ht.repeat(x, [0, 1, 2, 0]) DNDarray([2, 3, 3])
>>> ht.repeat(x, [1,2], axis=0) DNDarray([[1, 2], [3, 4], [3, 4]])
- reshape(a: heat.core.dndarray.DNDarray, *shape: int | Tuple[int, Ellipsis], **kwargs) heat.core.dndarray.DNDarray
Returns an array with the same data and number of elements as a, but with the specified shape.
- Parameters:
a (DNDarray) – The input array
shape (Union[int, Tuple[int,...]]) – Shape of the new array. Must be compatible with the original shape. If an integer, then the result will be a 1-D array of that length. One shape dimension can be -1. In this case, the value is inferred from the length of the array and remaining dimensions.
new_split (int, optional) – The distribution axis of the reshaped array. Default: None (same distribution axis as a).
- Raises:
ValueError – If the number of elements in the new shape is inconsistent with the input data.
Notes
reshape() might require significant communication among processes. Operating along split axis 0 is recommended.
Examples
>>> a = ht.zeros((3,4)) >>> ht.reshape(a, (4,3)) DNDarray([[0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], dtype=ht.float32, device=cpu:0, split=None) >>> a = ht.linspace(0, 14, 8, split=0) >>> ht.reshape(a, (2,4)) (1/2) tensor([[0., 2., 4., 6.]]) (2/2) tensor([[ 8., 10., 12., 14.]])
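The -1 shape inference described above follows NumPy semantics, which heat mirrors; a quick NumPy sketch of both the inference and the documented ValueError (an illustrative stand-in, not heat code):

```python
import numpy as np

a = np.arange(12)
# One dimension may be -1: it is inferred from the total number of
# elements and the remaining dimensions (12 / 3 = 4 here).
b = a.reshape(3, -1)
print(b.shape)  # (3, 4)

# An inconsistent shape raises, matching the ValueError documented above.
try:
    a.reshape(5, -1)  # 12 elements cannot fill 5 equal rows
except ValueError as e:
    print("ValueError:", e)
```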
- resplit(arr: heat.core.dndarray.DNDarray, axis: int = None) heat.core.dndarray.DNDarray
Out-of-place redistribution of the content of the DNDarray. Allows one to “unsplit” (i.e., gather) all values from all nodes, as well as to define a new axis along which the array is split, without changing the values.
- Parameters:
arr (DNDarray) – The array to resplit
axis (int or None) – The new split axis, None denotes gathering, an int will set the new split axis
Warning
This operation might involve a significant communication overhead. Use it sparingly and preferably for small arrays.
Examples
>>> a = ht.zeros((4, 5,), split=0) >>> a.lshape (0/2) (2, 5) (1/2) (2, 5) >>> b = resplit(a, None) >>> b.split None >>> b.lshape (0/2) (4, 5) (1/2) (4, 5) >>> a = ht.zeros((4, 5,), split=0) >>> a.lshape (0/2) (2, 5) (1/2) (2, 5) >>> b = resplit(a, 1) >>> b.split 1 >>> b.lshape (0/2) (4, 3) (1/2) (4, 2)
- roll(x: heat.core.dndarray.DNDarray, shift: int | Tuple[int], axis: int | Tuple[int] | None = None) heat.core.dndarray.DNDarray
Rolls array elements along a specified axis. Array elements that roll beyond the last position are re-introduced at the first position. Array elements that roll beyond the first position are re-introduced at the last position.
- Parameters:
x (DNDarray) – input array
shift (Union[int, Tuple[int, ...]]) – number of places by which the elements are shifted. If ‘shift’ is a tuple, then ‘axis’ must be a tuple of the same size, and each of the given axes is shifted by the corresponding element in ‘shift’. If ‘shift’ is an int and ‘axis’ a tuple, then the same shift is used for all specified axes.
axis (Optional[Union[int, Tuple[int, ...]]]) – axis (or axes) along which elements to shift. If ‘axis’ is None, the array is flattened, shifted, and then restored to its original shape. Default: None.
- Raises:
TypeError – If ‘shift’ or ‘axis’ is not of type int, list or tuple.
ValueError – If ‘shift’ and ‘axis’ are tuples with different sizes.
Examples
>>> a = ht.arange(20).reshape((4,5)) >>> a DNDarray([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]], dtype=ht.int32, device=cpu:0, split=None) >>> ht.roll(a, 1) DNDarray([[19, 0, 1, 2, 3], [ 4, 5, 6, 7, 8], [ 9, 10, 11, 12, 13], [14, 15, 16, 17, 18]], dtype=ht.int32, device=cpu:0, split=None) >>> ht.roll(a, -1, 0) DNDarray([[ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [ 0, 1, 2, 3, 4]], dtype=ht.int32, device=cpu:0, split=None)
- rot90(m: heat.core.dndarray.DNDarray, k: int = 1, axes: Sequence[int, int] = (0, 1)) heat.core.dndarray.DNDarray
Rotate an array by 90 degrees in the plane specified by axes. Rotation direction is from the first towards the second axis.
- Parameters:
m (DNDarray) – Array of two or more dimensions.
k (int, optional) – Number of times the array is rotated by 90 degrees.
axes (Sequence[int, int], optional) – The array is rotated in the plane defined by the axes. The axes must be different.
- Raises:
ValueError – If len(axis)!=2.
ValueError – If the axes are the same.
ValueError – If axes are out of range.
Notes
rot90(m, k=1, axes=(1,0)) is the reverse of rot90(m, k=1, axes=(0,1)).
rot90(m, k=1, axes=(1,0)) is equivalent to rot90(m, k=-1, axes=(0,1)).
May change the split axis on distributed tensors.
Examples
>>> m = ht.array([[1,2],[3,4]], dtype=ht.int) >>> m DNDarray([[1, 2], [3, 4]], dtype=ht.int32, device=cpu:0, split=None) >>> ht.rot90(m) DNDarray([[2, 4], [1, 3]], dtype=ht.int32, device=cpu:0, split=None) >>> ht.rot90(m, 2) DNDarray([[4, 3], [2, 1]], dtype=ht.int32, device=cpu:0, split=None) >>> m = ht.arange(8).reshape((2,2,2)) >>> ht.rot90(m, 1, (1,2)) DNDarray([[[1, 3], [0, 2]], [[5, 7], [4, 6]]], dtype=ht.int32, device=cpu:0, split=None)
- row_stack(arrays: Sequence[heat.core.dndarray.DNDarray, Ellipsis]) heat.core.dndarray.DNDarray
Stack 1-D or 2-D DNDarrays as rows into a 2-D DNDarray. If the input arrays are 1-D, they will be stacked as rows. If they are 2-D, they will be concatenated along the first axis.
- Parameters:
arrays (Sequence[DNDarray, ...]) – Sequence of DNDarrays.
- Raises:
ValueError – If arrays have more than 2 dimensions
Notes
All DNDarrays in the sequence must have the same number of columns, and all must be split along the same axis.
Examples
>>> # 1-D tensors >>> a = ht.array([1, 2, 3]) >>> b = ht.array([2, 3, 4]) >>> ht.row_stack((a, b)).larray tensor([[1, 2, 3], [2, 3, 4]]) >>> # 1-D and 2-D tensors >>> a = ht.array([1, 2, 3]) >>> b = ht.array([[2, 3, 4], [5, 6, 7]]) >>> c = ht.array([[7, 8, 9], [10, 11, 12]]) >>> ht.row_stack((a, b, c)).larray tensor([[ 1, 2, 3], [ 2, 3, 4], [ 5, 6, 7], [ 7, 8, 9], [10, 11, 12]]) >>> # distributed DNDarrays, 3 processes >>> a = ht.arange(10, split=0).reshape((2, 5)) >>> b = ht.arange(5, 20, split=0).reshape((3, 5)) >>> c = ht.arange(20, 40, split=0).reshape((4, 5)) >>> ht.row_stack((a, b, c)).larray [0/2] tensor([[0, 1, 2, 3, 4], [0/2] [5, 6, 7, 8, 9], [0/2] [5, 6, 7, 8, 9]], dtype=torch.int32) [1/2] tensor([[10, 11, 12, 13, 14], [1/2] [15, 16, 17, 18, 19], [1/2] [20, 21, 22, 23, 24]], dtype=torch.int32) [2/2] tensor([[25, 26, 27, 28, 29], [2/2] [30, 31, 32, 33, 34], [2/2] [35, 36, 37, 38, 39]], dtype=torch.int32) >>> # distributed 1-D and 2-D DNDarrays, 3 processes >>> a = ht.arange(5, split=0) >>> b = ht.arange(5, 20, split=0).reshape((3, 5)) >>> ht.row_stack((a, b)).larray [0/2] tensor([[0, 1, 2, 3, 4], [0/2] [5, 6, 7, 8, 9]]) [1/2] tensor([[10, 11, 12, 13, 14]]) [2/2] tensor([[15, 16, 17, 18, 19]])
- shape(a: heat.core.dndarray.DNDarray) Tuple[int, Ellipsis]
Returns the global shape of a (potentially distributed) DNDarray as a tuple.
- Parameters:
a (DNDarray) – The input DNDarray.
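heat's shape mirrors numpy.shape and returns the global shape as a plain tuple (for a distributed DNDarray this is the overall shape, not the process-local one); a NumPy sketch of the semantics, offered as an illustrative stand-in:

```python
import numpy as np

# shape() returns a tuple with one entry per dimension; for a
# distributed heat array this would be the *global* shape.
print(np.shape([[1, 2, 3], [4, 5, 6]]))  # (2, 3)
print(np.shape(7))                       # () -- a scalar has an empty shape
```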
- sort(a: heat.core.dndarray.DNDarray, axis: int = -1, descending: bool = False, out: heat.core.dndarray.DNDarray | None = None)
Sorts the elements of a along the given dimension (by default in ascending order) by their value. The sorting is not stable, which means that equal elements in the result may have a different ordering than in the original array. Sorting along axis==a.split requires significant MPI communication between processes. Returns a tuple (values, indices) with the sorted values and the indices of the elements in the original data.
- Parameters:
a (DNDarray) – Input array to be sorted.
axis (int, optional) – The dimension to sort along. Default is the last axis.
descending (bool, optional) – If set to True, values are sorted in descending order.
out (DNDarray, optional) – A location in which to store the results. If provided, it must have a broadcastable shape. If not provided or set to None, a fresh array is allocated.
- Raises:
ValueError – If axis is not consistent with the available dimensions.
Examples
>>> x = ht.array([[4, 1], [2, 3]], split=0) >>> x.shape (1, 2) (1, 2) >>> y = ht.sort(x, axis=0) >>> y (array([[2, 1]], array([[1, 0]])) (array([[4, 3]], array([[0, 1]])) >>> ht.sort(x, descending=True) (array([[4, 1]], array([[0, 1]])) (array([[3, 2]], array([[1, 0]]))
- split(x: heat.core.dndarray.DNDarray, indices_or_sections: Iterable, axis: int = 0) List[heat.core.dndarray.DNDarray, Ellipsis]
Split a DNDarray into multiple sub-DNDarrays. Returns a list of sub-DNDarrays as copies of parts of x.
- Parameters:
x (DNDarray) – DNDArray to be divided into sub-DNDarrays.
indices_or_sections (int or 1-dimensional array_like (i.e. undistributed DNDarray, list or tuple)) –
If indices_or_sections is an integer, N, the DNDarray will be divided into N equal DNDarrays along axis. If such a split is not possible, an error is raised. If indices_or_sections is a 1-D DNDarray of sorted integers, the entries indicate where along axis the array is split. For example, indices_or_sections = [2, 3] would, for axis = 0, result in
x[:2]
x[2:3]
x[3:]
If an index exceeds the dimension of the array along axis, an empty sub-array is returned correspondingly.
axis (int, optional) – The axis along which to split, default is 0. axis is not allowed to equal x.split if x is distributed.
- Raises:
ValueError – If indices_or_sections is given as integer, but a split does not result in equal division.
Warning
Though it is possible to distribute x, this function has nothing to do with the split parameter of a DNDarray.
Examples
>>> x = ht.arange(12).reshape((4,3)) >>> ht.split(x, 2) [ DNDarray([[0, 1, 2], [3, 4, 5]]), DNDarray([[ 6, 7, 8], [ 9, 10, 11]])] >>> ht.split(x, [2, 3, 5]) [ DNDarray([[0, 1, 2], [3, 4, 5]]), DNDarray([[6, 7, 8]] DNDarray([[ 9, 10, 11]]), DNDarray([])] >>> ht.split(x, [1, 2], 1) [DNDarray([[0], [3], [6], [9]]), DNDarray([[ 1], [ 4], [ 7], [10]], DNDarray([[ 2], [ 5], [ 8], [11]])]
- squeeze(x: heat.core.dndarray.DNDarray, axis: int | Tuple[int, Ellipsis] = None) heat.core.dndarray.DNDarray
Remove single-element entries from the shape of a DNDarray. Returns the input array, but with all or a subset (indicated by axis) of the dimensions of length 1 removed. Split semantics: see Notes below.
- Parameters:
x (DNDarray) – Input data.
axis (None or int or Tuple[int,...], optional) – Selects a subset of the single-element entries in the shape. If axis is None, all single-element entries will be removed from the shape.
- Raises:
ValueError – If axis is not None and the dimension of x along axis is not 1.
Notes
Split semantics: a distributed DNDarray will keep its original split dimension after “squeezing”, which, depending on the squeeze axis, may result in a lower numerical split value (see Examples).
Examples
>>> import heat as ht >>> a = ht.random.randn(1,3,1,5) >>> a DNDarray([[[[-0.2604, 1.3512, 0.1175, 0.4197, 1.3590]], [[-0.2777, -1.1029, 0.0697, -1.3074, -1.1931]], [[-0.4512, -1.2348, -1.1479, -0.0242, 0.4050]]]], dtype=ht.float32, device=cpu:0, split=None) >>> a.shape (1, 3, 1, 5) >>> ht.squeeze(a).shape (3, 5) >>> ht.squeeze(a) DNDarray([[-0.2604, 1.3512, 0.1175, 0.4197, 1.3590], [-0.2777, -1.1029, 0.0697, -1.3074, -1.1931], [-0.4512, -1.2348, -1.1479, -0.0242, 0.4050]], dtype=ht.float32, device=cpu:0, split=None) >>> ht.squeeze(a,axis=0).shape (3, 1, 5) >>> ht.squeeze(a,axis=-2).shape (1, 3, 5) >>> ht.squeeze(a,axis=1).shape Traceback (most recent call last): ... ValueError: Dimension along axis 1 is not 1 for shape (1, 3, 1, 5) >>> x.shape (10, 1, 12, 13) >>> x.split 2 >>> x.squeeze().shape (10, 12, 13) >>> x.squeeze().split 1
- stack(arrays: Sequence[heat.core.dndarray.DNDarray, Ellipsis], axis: int = 0, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Join a sequence of DNDarrays along a new axis. The axis parameter specifies the index of the new axis in the dimensions of the result. For example, if axis=0, the arrays will be stacked along the first dimension; if axis=-1, they will be stacked along the last dimension. See Notes below for split semantics.
- Parameters:
arrays (Sequence[DNDarrays, ...]) – Each DNDarray must have the same shape, must be split along the same axis, and must be balanced.
axis (int, optional) – The axis in the result array along which the input arrays are stacked.
out (DNDarray, optional) – If provided, the destination to place the result. The shape and split axis must be correct, matching that of what stack would have returned if no out argument were specified (see Notes below).
- Raises:
TypeError – If the arrays in the sequence are not DNDarrays, or if their dtype attributes do not match.
ValueError – If arrays contains fewer than 2 DNDarrays.
ValueError – If the DNDarrays are of different shapes, or if they are split along different axes (split attribute).
RuntimeError – If the DNDarrays reside on different devices.
Notes
Split semantics: stack() requires that all arrays in the sequence be split along the same dimension. After stacking, the data are still distributed along the original dimension; however, a new dimension has been added at axis, therefore:
if \(axis <= split\), the output will be distributed along \(split+1\)
if \(axis > split\), the output will be distributed along \(split\)
See also
column_stack(), concatenate(), hstack(), row_stack(), vstack()
Examples
>>> a = ht.arange(20).reshape((4, 5)) >>> b = ht.arange(20, 40).reshape((4, 5)) >>> ht.stack((a,b), axis=0).larray tensor([[[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]], [[20, 21, 22, 23, 24], [25, 26, 27, 28, 29], [30, 31, 32, 33, 34], [35, 36, 37, 38, 39]]]) >>> # distributed DNDarrays, 3 processes, stack along last dimension >>> a = ht.arange(20, split=0).reshape(4, 5) >>> b = ht.arange(20, 40, split=0).reshape(4, 5) >>> ht.stack((a,b), axis=-1).larray [0/2] tensor([[[ 0, 20], [0/2] [ 1, 21], [0/2] [ 2, 22], [0/2] [ 3, 23], [0/2] [ 4, 24]], [0/2] [[ 5, 25], [0/2] [ 6, 26], [0/2] [ 7, 27], [0/2] [ 8, 28], [0/2] [ 9, 29]]]) [1/2] tensor([[[10, 30], [1/2] [11, 31], [1/2] [12, 32], [1/2] [13, 33], [1/2] [14, 34]]]) [2/2] tensor([[[15, 35], [2/2] [16, 36], [2/2] [17, 37], [2/2] [18, 38], [2/2] [19, 39]]])
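The split-axis rule from the Notes (the output split becomes split+1 when axis <= split, and stays put otherwise) can be written as a tiny pure-Python helper; `output_split` is a hypothetical name used for illustration and is not part of the heat API:

```python
from typing import Optional

def output_split(split: Optional[int], axis: int) -> Optional[int]:
    """Predict the split axis of a stack result (sketch, not heat API).

    split: common split axis of the input arrays (None = not distributed)
    axis:  position of the new axis inserted by stack
           (assumed already normalized to a non-negative index)
    """
    if split is None:
        return None  # undistributed inputs stay undistributed
    # a new axis at or before the split dimension shifts it one to the right
    return split + 1 if axis <= split else split

# stacking split=0 arrays along a new leading axis moves the data axis to 1
assert output_split(0, 0) == 1
# stacking along a trailing axis (axis 2 > split 0) leaves the split untouched
assert output_split(0, 2) == 0
```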
- swapaxes(x: heat.core.dndarray.DNDarray, axis1: int, axis2: int) heat.core.dndarray.DNDarray
Interchanges two axes of an array.
- Parameters:
x (DNDarray) – Input array.
axis1 (int) – First axis.
axis2 (int) – Second axis.
See also
transpose()
Permute the dimensions of an array.
Examples
>>> x = ht.array([[[0,1],[2,3]],[[4,5],[6,7]]]) >>> ht.swapaxes(x, 0, 1) DNDarray([[[0, 1], [4, 5]], [[2, 3], [6, 7]]], dtype=ht.int64, device=cpu:0, split=None) >>> ht.swapaxes(x, 0, 2) DNDarray([[[0, 4], [2, 6]], [[1, 5], [3, 7]]], dtype=ht.int64, device=cpu:0, split=None)
- tile(x: heat.core.dndarray.DNDarray, reps: Sequence[int, Ellipsis]) heat.core.dndarray.DNDarray
Construct a new DNDarray by repeating ‘x’ the number of times given by ‘reps’.
If ‘reps’ has length ‘d’, the result will have ‘max(d, x.ndim)’ dimensions:
if ‘x.ndim < d’, ‘x’ is promoted to be d-dimensional by prepending new axes.
So a shape (3,) array is promoted to (1, 3) for 2-D replication, or shape (1, 1, 3) for 3-D replication (if this is not the desired behavior, promote ‘x’ to d-dimensions manually before calling this function);
if ‘x.ndim > d’, ‘reps’ will replicate the last ‘d’ dimensions of ‘x’, i.e., if ‘x.shape’ is (2, 3, 4, 5), a ‘reps’ of (2, 2) will be expanded to (1, 1, 2, 2).
- Parameters:
x (DNDarray) – Input
reps (Sequence[ints,...]) – Repetitions
- Returns:
tiled – Split semantics: if x is distributed, the tiled data will be distributed along the same dimension. Note that nominally tiled.split != x.split in the case where len(reps) > x.ndim. See example below.
- Return type:
DNDarray
Examples
>>> x = ht.arange(12).reshape((4,3)).resplit_(0) >>> x DNDarray([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11]], dtype=ht.int32, device=cpu:0, split=0) >>> reps = (1, 2, 2) >>> tiled = ht.tile(x, reps) >>> tiled DNDarray([[[ 0, 1, 2, 0, 1, 2], [ 3, 4, 5, 3, 4, 5], [ 6, 7, 8, 6, 7, 8], [ 9, 10, 11, 9, 10, 11], [ 0, 1, 2, 0, 1, 2], [ 3, 4, 5, 3, 4, 5], [ 6, 7, 8, 6, 7, 8], [ 9, 10, 11, 9, 10, 11]]], dtype=ht.int32, device=cpu:0, split=1)
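The reps-promotion rules above match numpy.tile, which heat's tile mirrors; a NumPy sketch of the two cases, offered as an illustrative stand-in rather than heat code:

```python
import numpy as np

# Case x.ndim < d: a shape-(3,) array is promoted to (1, 3) before
# 2-D replication, so reps=(2, 2) yields shape (2, 6).
x = np.arange(3)
print(np.tile(x, (2, 2)).shape)  # (2, 6)

# Case x.ndim > d: reps=(2, 2) is expanded to (1, 1, 2, 2), i.e. only
# the last two dimensions of a (2, 3, 4, 5) array are replicated.
y = np.ones((2, 3, 4, 5))
print(np.tile(y, (2, 2)).shape)  # (2, 3, 8, 10)
```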
- topk(a: heat.core.dndarray.DNDarray, k: int, dim: int = -1, largest: bool = True, sorted: bool = True, out: Tuple[heat.core.dndarray.DNDarray, heat.core.dndarray.DNDarray] | None = None) Tuple[heat.core.dndarray.DNDarray, heat.core.dndarray.DNDarray]
Returns the \(k\) highest entries in the array. (Not stable for split arrays.)
- Parameters:
a (DNDarray) – Input data
k (int) – Desired number of output items
dim (int, optional) – Dimension along which to sort, per default the last dimension
largest (bool, optional) – If True, return the \(k\) largest items, otherwise return the \(k\) smallest items
sorted (bool, optional) – Whether to sort the output (descending if largest is True, else ascending)
out (Tuple[DNDarray, ...], optional) – output buffer
Examples
>>> a = ht.array([1, 2, 3]) >>> ht.topk(a,2) (DNDarray([3, 2], dtype=ht.int64, device=cpu:0, split=None), DNDarray([2, 1], dtype=ht.int64, device=cpu:0, split=None)) >>> a = ht.array([[1,2,3],[1,2,3]]) >>> ht.topk(a,2,dim=1) (DNDarray([[3, 2], [3, 2]], dtype=ht.int64, device=cpu:0, split=None), DNDarray([[2, 1], [2, 1]], dtype=ht.int64, device=cpu:0, split=None)) >>> a = ht.array([[1,2,3],[1,2,3]], split=1) >>> ht.topk(a,2,dim=1) (DNDarray([[3, 2], [3, 2]], dtype=ht.int64, device=cpu:0, split=1), DNDarray([[2, 1], [2, 1]], dtype=ht.int64, device=cpu:0, split=1))
- unique(a: heat.core.dndarray.DNDarray, sorted: bool = False, return_inverse: bool = False, axis: int = None) Tuple[heat.core.dndarray.DNDarray, torch.tensor]
Finds and returns the unique elements of a DNDarray. If return_inverse is True, the second tensor will hold the list of inverse indices. If distributed, it is most efficient if axis != a.split.
- Parameters:
a (DNDarray) – Input array.
sorted (bool, optional) – Whether the found elements should be sorted before returning as output. Warning: sorted does not work if axis != None and axis != a.split.
return_inverse (bool, optional) – Whether to also return the indices for where elements in the original input ended up in the returned unique list.
axis (int, optional) – Axis along which unique elements should be found. Default to None, which will return a one dimensional list of unique values.
Examples
>>> x = ht.array([[3, 2], [1, 3]]) >>> ht.unique(x, sorted=True) array([1, 2, 3]) >>> ht.unique(x, sorted=True, axis=0) array([[1, 3], [2, 3]]) >>> ht.unique(x, sorted=True, axis=1) array([[2, 3], [3, 1]])
- vsplit(x: heat.core.dndarray.DNDarray, indices_or_sections: Iterable) List[heat.core.dndarray.DNDarray, Ellipsis]
Split the array into multiple sub-DNDarrays along the 1st axis (vertically/row-wise). Returns a list of sub-DNDarrays as copies of parts of x.
- Parameters:
x (DNDarray) – DNDArray to be divided into sub-DNDarrays.
indices_or_sections (Iterable) –
If indices_or_sections is an integer, N, the DNDarray will be divided into N equal DNDarrays along the 1st axis.
If such a split is not possible, an error is raised.
If indices_or_sections is a 1-D DNDarray of sorted integers, the entries indicate where along the 1st axis the array is split.
If an index exceeds the dimension of the array along the 1st axis, an empty sub-DNDarray is returned correspondingly.
- Raises:
ValueError – If indices_or_sections is given as integer, but a split does not result in equal division.
Notes
Please refer to the split documentation.
vsplit() is equivalent to split with axis=0: the array is always split along the first axis regardless of the array dimension.
Examples
>>> x = ht.arange(24).reshape((4, 3, 2)) >>> ht.vsplit(x, 2) [DNDarray([[[ 0, 1], [ 2, 3], [ 4, 5]], [[ 6, 7], [ 8, 9], [10, 11]]]), DNDarray([[[12, 13], [14, 15], [16, 17]], [[18, 19], [20, 21], [22, 23]]])] >>> ht.vsplit(x, [1, 3]) [DNDarray([[[0, 1], [2, 3], [4, 5]]]), DNDarray([[[ 6, 7], [ 8, 9], [10, 11]], [[12, 13], [14, 15], [16, 17]]]), DNDarray([[[18, 19], [20, 21], [22, 23]]])]
- vstack(arrays: Sequence[heat.core.dndarray.DNDarray, Ellipsis]) heat.core.dndarray.DNDarray
Stack arrays in sequence vertically (row-wise). This is equivalent to concatenation along the first axis. This function makes the most sense for arrays with up to 3 dimensions, for instance, for pixel data with a height (first axis), width (second axis), and r/g/b channels (third axis). The concatenate() function provides more general stacking operations.
- Parameters:
arrays (Sequence[DNDarray,...]) – The arrays must have the same shape along all but the first axis. 1-D arrays must have the same length.
Notes
The split axis will be switched to 1 in the case that both elements are 1-D and split=0.
See also
concatenate(), stack(), hstack(), column_stack(), row_stack()
Examples
>>> a = ht.array([1, 2, 3]) >>> b = ht.array([2, 3, 4]) >>> ht.vstack((a,b)).larray [0/1] tensor([[1, 2, 3], [0/1] [2, 3, 4]]) [1/1] tensor([[1, 2, 3], [1/1] [2, 3, 4]]) >>> a = ht.array([1, 2, 3], split=0) >>> b = ht.array([2, 3, 4], split=0) >>> ht.vstack((a,b)).larray [0/1] tensor([[1, 2], [0/1] [2, 3]]) [1/1] tensor([[3], [1/1] [4]]) >>> a = ht.array([[1], [2], [3]], split=0) >>> b = ht.array([[2], [3], [4]], split=0) >>> ht.vstack((a,b)).larray [0] tensor([[1], [0] [2], [0] [3]]) [1] tensor([[2], [1] [3], [1] [4]])
- copy(x: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Return a deep copy of the given object.
- Parameters:
x (DNDarray) – Input array to be copied.
Examples
>>> a = ht.array([1,2,3]) >>> b = ht.copy(a) >>> b DNDarray([1, 2, 3], dtype=ht.int64, device=cpu:0, split=None) >>> a[0] = 4 >>> a DNDarray([4, 2, 3], dtype=ht.int64, device=cpu:0, split=None) >>> b DNDarray([1, 2, 3], dtype=ht.int64, device=cpu:0, split=None)
- sanitize_memory_layout(x: torch.Tensor, order: str = 'C') torch.Tensor
Return the given object with memory layout as defined below. The default memory distribution is assumed.
- Parameters:
x (torch.Tensor) – Input data
order (str, optional) – Default is 'C', as in C-like (row-major) memory layout: the array is stored first dimension first (rows first if ndim=2). The alternative is 'F', as in Fortran-like (column-major) memory layout: the array is stored last dimension first (columns first if ndim=2).
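The 'C' vs 'F' layouts described above correspond to NumPy's contiguity flags; a NumPy sketch of the distinction (heat's function performs the analogous conversion on the underlying torch tensors):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)      # default: C-like layout, rows first
print(a.flags['C_CONTIGUOUS'])      # True

f = np.asfortranarray(a)            # convert to F-like layout, columns first
print(f.flags['F_CONTIGUOUS'])      # True

# the logical contents are unchanged; only the memory order differs
print(np.array_equal(a, f))         # True
```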
- get_printoptions() dict
Returns the currently configured printing options as key-value pairs.
- global_printing() None
For DNDarrays, the builtin print function will gather all of the data, format it, and print it on rank 0 only.
- Return type:
None
Examples
>>> x = ht.arange(15 * 5, dtype=ht.float).reshape((15, 5)).resplit(0) >>> print(x) [0] DNDarray([[ 0., 1., 2., 3., 4.], [ 5., 6., 7., 8., 9.], [10., 11., 12., 13., 14.], [15., 16., 17., 18., 19.], [20., 21., 22., 23., 24.], [25., 26., 27., 28., 29.], [30., 31., 32., 33., 34.], [35., 36., 37., 38., 39.], [40., 41., 42., 43., 44.], [45., 46., 47., 48., 49.], [50., 51., 52., 53., 54.], [55., 56., 57., 58., 59.], [60., 61., 62., 63., 64.], [65., 66., 67., 68., 69.], [70., 71., 72., 73., 74.]], dtype=ht.float32, device=cpu:0, split=0)
- local_printing() None
The builtin print function will now print the local PyTorch Tensor values for DNDarrays given as arguments.
Examples
>>> x = ht.arange(15 * 5, dtype=ht.float).reshape((15, 5)).resplit(0) >>> ht.local_printing() [0/2] Printing options set to LOCAL. DNDarrays will print the local PyTorch Tensors >>> print(x) [0/2] [[ 0., 1., 2., 3., 4.], [0/2] [ 5., 6., 7., 8., 9.], [0/2] [10., 11., 12., 13., 14.], [0/2] [15., 16., 17., 18., 19.], [0/2] [20., 21., 22., 23., 24.]] [1/2] [[25., 26., 27., 28., 29.], [1/2] [30., 31., 32., 33., 34.], [1/2] [35., 36., 37., 38., 39.], [1/2] [40., 41., 42., 43., 44.], [1/2] [45., 46., 47., 48., 49.]] [2/2] [[50., 51., 52., 53., 54.], [2/2] [55., 56., 57., 58., 59.], [2/2] [60., 61., 62., 63., 64.], [2/2] [65., 66., 67., 68., 69.], [2/2] [70., 71., 72., 73., 74.]]
- print0(*args, **kwargs) None
Wraps the builtin print function in such a way that it will only run the command on rank 0. If this is called with DNDarrays and local printing, only the data local to process 0 is printed. For more information see the examples.
This function is also available as a builtin when importing heat.
Examples
>>> x = ht.arange(15 * 5, dtype=ht.float).reshape((15, 5)).resplit(0) >>> # GLOBAL PRINTING >>> ht.print0(x) [0] DNDarray([[ 0., 1., 2., 3., 4.], [ 5., 6., 7., 8., 9.], [10., 11., 12., 13., 14.], [15., 16., 17., 18., 19.], [20., 21., 22., 23., 24.], [25., 26., 27., 28., 29.], [30., 31., 32., 33., 34.], [35., 36., 37., 38., 39.], [40., 41., 42., 43., 44.], [45., 46., 47., 48., 49.], [50., 51., 52., 53., 54.], [55., 56., 57., 58., 59.], [60., 61., 62., 63., 64.], [65., 66., 67., 68., 69.], [70., 71., 72., 73., 74.]], dtype=ht.float32, device=cpu:0, split=0) >>> ht.local_printing() [0/2] Printing options set to LOCAL. DNDarrays will print the local PyTorch Tensors >>> print0(x) [0/2] [[ 0., 1., 2., 3., 4.], [0/2] [ 5., 6., 7., 8., 9.], [0/2] [10., 11., 12., 13., 14.], [0/2] [15., 16., 17., 18., 19.], [0/2] [20., 21., 22., 23., 24.]], device: cpu:0, split: 0
- set_printoptions(precision=None, threshold=None, edgeitems=None, linewidth=None, profile=None, sci_mode=None)
Configures the printing options. List of items shamelessly taken from NumPy and PyTorch (thanks guys!).
- Parameters:
precision (int, optional) – Number of digits of precision for floating point output (default=4).
threshold (int, optional) – Total number of array elements which trigger summarization rather than full repr string (default=1000).
edgeitems (int, optional) – Number of array items in summary at beginning and end of each dimension (default=3).
linewidth (int, optional) – The number of characters per line for the purpose of inserting line breaks (default = 80).
profile (str, optional) – Sane defaults for pretty printing. Can override with any of the above options. Can be any one of default, short, full.
sci_mode (bool, optional) – Enable (True) or disable (False) scientific notation. If None (default) is specified, the value is automatically inferred by HeAT.
- eq(x: heat.core.dndarray.DNDarray | float | int, y: heat.core.dndarray.DNDarray | float | int) heat.core.dndarray.DNDarray
Returns a DNDarray containing the results of element-wise equality comparison. Takes the first and second operand (scalar or DNDarray) whose elements are to be compared as arguments.
- Parameters:
Examples
>>> import heat as ht >>> x = ht.float32([[1, 2],[3, 4]]) >>> ht.eq(x, 3.0) DNDarray([[False, False], [ True, False]], dtype=ht.bool, device=cpu:0, split=None) >>> y = ht.float32([[2, 2], [2, 2]]) >>> ht.eq(x, y) DNDarray([[False, True], [False, False]], dtype=ht.bool, device=cpu:0, split=None)
- equal(x: heat.core.dndarray.DNDarray | float | int, y: heat.core.dndarray.DNDarray | float | int) bool
Overall comparison of equality between two DNDarrays. Returns True if the two arrays have the same size and elements, and False otherwise.
- Parameters:
Examples
>>> import heat as ht >>> x = ht.float32([[1, 2],[3, 4]]) >>> ht.equal(x, ht.float32([[1, 2],[3, 4]])) True >>> y = ht.float32([[2, 2], [2, 2]]) >>> ht.equal(x, y) False >>> ht.equal(x, 3.0) False
- ge(x: heat.core.dndarray.DNDarray | float | int, y: heat.core.dndarray.DNDarray | float | int) heat.core.dndarray.DNDarray
Returns a DNDarray containing the results of element-wise rich greater-than-or-equal comparison between values from operand x and values from operand y (i.e. x>=y); the operation is not commutative. Takes the first and second operand (scalar or DNDarray) whose elements are to be compared as arguments.
- Parameters:
Examples
>>> import heat as ht >>> x = ht.float32([[1, 2],[3, 4]]) >>> ht.ge(x, 3.0) DNDarray([[False, False], [ True, True]], dtype=ht.bool, device=cpu:0, split=None) >>> y = ht.float32([[2, 2], [2, 2]]) >>> ht.ge(x, y) DNDarray([[False, True], [ True, True]], dtype=ht.bool, device=cpu:0, split=None)
- gt(x: heat.core.dndarray.DNDarray | float | int, y: heat.core.dndarray.DNDarray | float | int) heat.core.dndarray.DNDarray
Returns a DNDarray containing the results of element-wise rich greater-than comparison between values from operand x and values from operand y (i.e. x>y); the operation is not commutative. Takes the first and second operand (scalar or DNDarray) whose elements are to be compared as arguments.
- Parameters:
Examples
>>> import heat as ht >>> x = ht.float32([[1, 2],[3, 4]]) >>> ht.gt(x, 3.0) DNDarray([[False, False], [False, True]], dtype=ht.bool, device=cpu:0, split=None) >>> y = ht.float32([[2, 2], [2, 2]]) >>> ht.gt(x, y) DNDarray([[False, False], [ True, True]], dtype=ht.bool, device=cpu:0, split=None)
- le(x: heat.core.dndarray.DNDarray | float | int, y: heat.core.dndarray.DNDarray | float | int) heat.core.dndarray.DNDarray
Returns a DNDarray containing the results of element-wise rich less-than-or-equal comparison between values from operand x and values from operand y (i.e. x<=y); the operation is not commutative. Takes the first and second operand (scalar or DNDarray) whose elements are to be compared as arguments.
- Parameters:
Examples
>>> import heat as ht >>> x = ht.float32([[1, 2],[3, 4]]) >>> ht.le(x, 3.0) DNDarray([[ True, True], [ True, False]], dtype=ht.bool, device=cpu:0, split=None) >>> y = ht.float32([[2, 2], [2, 2]]) >>> ht.le(x, y) DNDarray([[ True, True], [False, False]], dtype=ht.bool, device=cpu:0, split=None)
- lt(x: heat.core.dndarray.DNDarray | float | int, y: heat.core.dndarray.DNDarray | float | int) heat.core.dndarray.DNDarray
Returns a DNDarray containing the results of element-wise rich less-than comparison between values from operand x and values from operand y (i.e. x<y); the operation is not commutative. Takes the first and second operand (scalar or DNDarray) whose elements are to be compared as arguments.
- Parameters:
Examples
>>> import heat as ht >>> x = ht.float32([[1, 2],[3, 4]]) >>> ht.lt(x, 3.0) DNDarray([[ True, True], [False, False]], dtype=ht.bool, device=cpu:0, split=None) >>> y = ht.float32([[2, 2], [2, 2]]) >>> ht.lt(x, y) DNDarray([[ True, False], [False, False]], dtype=ht.bool, device=cpu:0, split=None)
- ne(x: heat.core.dndarray.DNDarray | float | int, y: heat.core.dndarray.DNDarray | float | int) heat.core.dndarray.DNDarray
Returns a DNDarray containing the results of element-wise rich non-equality comparison between values from two operands; the operation is commutative. Takes the first and second operand (scalar or DNDarray) whose elements are to be compared as arguments.
- Parameters:
Examples
>>> import heat as ht >>> x = ht.float32([[1, 2],[3, 4]]) >>> ht.ne(x, 3.0) DNDarray([[ True, True], [False, True]], dtype=ht.bool, device=cpu:0, split=None) >>> y = ht.float32([[2, 2], [2, 2]]) >>> ht.ne(x, y) DNDarray([[ True, False], [ True, True]], dtype=ht.bool, device=cpu:0, split=None)
- abs(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None, dtype: Type[heat.core.types.datatype] | None = None) heat.core.dndarray.DNDarray
Returns a DNDarray containing the element-wise absolute values of the input array x.
- Parameters:
x (DNDarray) – The array for which to compute the absolute value.
out (DNDarray, optional) – A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned.
dtype (datatype, optional) – Determines the data type of the output array. The values are cast to this type with potential loss of precision.
- Raises:
TypeError – If dtype is not a heat type.
- absolute(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None, dtype: Type[heat.core.types.datatype] | None = None) heat.core.dndarray.DNDarray
Calculate the absolute value element-wise. abs() is a shorthand for this function.
- Parameters:
x (DNDarray) – The array for which to compute the absolute value.
out (DNDarray, optional) – A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned.
dtype (datatype, optional) – Determines the data type of the output array. The values are cast to this type with potential loss of precision.
- ceil(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Return the ceil of the input, element-wise. The result is a DNDarray of the same shape as x. The ceil of the scalar x is the smallest integer i such that i>=x. It is often denoted as \(\lceil x \rceil\).
- Parameters:
Examples
>>> import heat as ht >>> ht.ceil(ht.arange(-2.0, 2.0, 0.4)) DNDarray([-2., -1., -1., -0., -0., 0., 1., 1., 2., 2.], dtype=ht.float32, device=cpu:0, split=None)
- clip(x: heat.core.dndarray.DNDarray, min, max, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Returns a DNDarray with the elements of this array, but where values < min are replaced with min, and those > max with max.
- Parameters:
x (DNDarray) – Array containing elements to clip.
min (scalar or None) – Minimum value. If None, clipping is not performed on the lower interval edge. Not more than one of min and max may be None.
max (scalar or None) – Maximum value. If None, clipping is not performed on the upper interval edge. Not more than one of min and max may be None.
out (DNDarray, optional) – The results will be placed in this array. It may be the input array for in-place clipping. out must be of the right shape to hold the output. Its type is preserved.
- Raises:
ValueError – if both min and max are None
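heat's clip follows NumPy's clipping semantics; the behaviour can be sketched with a plain (non-distributed) NumPy call:

```python
import numpy as np

# NumPy analogue of ht.clip: values below min are set to min, above max to max.
a = np.arange(10)
clipped = np.clip(a, 1, 8)        # 0 -> 1, 9 -> 8, the rest unchanged
lower_only = np.clip(a, 3, None)  # one bound may be None, but not both
```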
- fabs(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Calculate the absolute value element-wise and return a floating-point DNDarray. This function exists in addition to abs() and absolute() because it will be needed if complex numbers are introduced in the future.
- floor(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Return the floor of the input, element-wise. The floor of the scalar x is the largest integer i such that i<=x. It is often denoted as \(\lfloor x \rfloor\).
- Parameters:
Examples
>>> import heat as ht >>> ht.floor(ht.arange(-2.0, 2.0, 0.4)) DNDarray([-2., -2., -2., -1., -1., 0., 0., 0., 1., 1.], dtype=ht.float32, device=cpu:0, split=None)
- modf(x: heat.core.dndarray.DNDarray, out: Tuple[heat.core.dndarray.DNDarray, heat.core.dndarray.DNDarray] | None = None) Tuple[heat.core.dndarray.DNDarray, heat.core.dndarray.DNDarray]
Return the fractional and integral parts of a DNDarray, element-wise. The fractional and integral parts are negative if the given number is negative.
- Parameters:
- Raises:
Examples
>>> import heat as ht >>> ht.modf(ht.arange(-2.0, 2.0, 0.4)) (DNDarray([ 0.0000, -0.6000, -0.2000, -0.8000, -0.4000, 0.0000, 0.4000, 0.8000, 0.2000, 0.6000], dtype=ht.float32, device=cpu:0, split=None), DNDarray([-2., -1., -1., -0., -0., 0., 0., 0., 1., 1.], dtype=ht.float32, device=cpu:0, split=None))
- round(x: heat.core.dndarray.DNDarray, decimals: int = 0, out: heat.core.dndarray.DNDarray | None = None, dtype: Type[heat.core.types.datatype] | None = None) heat.core.dndarray.DNDarray
Calculate the rounded value element-wise.
- Parameters:
x (DNDarray) – The array for which to compute the rounded value.
decimals (int, optional) – Number of decimal places to round to. If decimals is negative, it specifies the number of positions to the left of the decimal point.
out (DNDarray, optional) – A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned.
dtype (datatype, optional) – Determines the data type of the output array. The values are cast to this type with potential loss of precision.
- Raises:
TypeError – if dtype is not a heat data type
Examples
>>> import heat as ht >>> ht.round(ht.arange(-2.0, 2.0, 0.4)) DNDarray([-2., -2., -1., -1., -0., 0., 0., 1., 1., 2.], dtype=ht.float32, device=cpu:0, split=None)
- sgn(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Returns an indication of the sign of a number, element-wise. The definition for complex values is equivalent to \(x / |x|\).
- Parameters:
See also
sign()
Equivalent function on non-complex arrays. The definition for complex values is equivalent to \(x / \sqrt{x \cdot x}\)
Examples
>>> a = ht.array([-1, -0.5, 0, 0.5, 1]) >>> ht.sign(a) DNDarray([-1., -1., 0., 1., 1.], dtype=ht.float32, device=cpu:0, split=None) >>> ht.sgn(ht.array([5-2j, 3+4j])) DNDarray([(0.9284766912460327-0.3713906705379486j), (0.6000000238418579+0.800000011920929j)], dtype=ht.complex64, device=cpu:0, split=None)
- sign(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Returns an indication of the sign of a number, element-wise. The definition for complex values is equivalent to \(x / \sqrt{x \cdot x}\).
- Parameters:
See also
sgn()
Equivalent function on non-complex arrays. The definition for complex values is equivalent to \(x / |x|\).
Examples
>>> a = ht.array([-1, -0.5, 0, 0.5, 1]) >>> ht.sign(a) DNDarray([-1., -1., 0., 1., 1.], dtype=ht.float32, device=cpu:0, split=None) >>> ht.sign(ht.array([5-2j, 3+4j])) DNDarray([(1+0j), (1+0j)], dtype=ht.complex64, device=cpu:0, split=None)
- trunc(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Return the trunc of the input, element-wise. The truncated value of the scalar x is the nearest integer i which is closer to zero than x is. In short, the fractional part of the signed number x is discarded.
- Parameters:
Examples
>>> import heat as ht >>> ht.trunc(ht.arange(-2.0, 2.0, 0.4)) DNDarray([-2., -1., -1., -0., -0., 0., 0., 0., 1., 1.], dtype=ht.float32, device=cpu:0, split=None)
- sanitize_distribution(*args: heat.core.dndarray.DNDarray, target: heat.core.dndarray.DNDarray, diff_map: torch.Tensor = None) Union[heat.core.dndarray.DNDarray, Tuple(DNDarray)]
Distribute every arg according to target.lshape_map or, if provided, diff_map. After this sanitation, the lshapes are compatible along the split dimension. Args can contain non-distributed DNDarrays; if target is split, they will be split as well.
- Parameters:
args (DNDarray) – DNDarrays to be distributed.
target (DNDarray) – DNDarray used to sanitize the metadata and, if diff_map is not given, to determine the resulting distribution.
diff_map (torch.Tensor (optional)) – Different lshape_map. Overwrites the distribution of the target array. Used in cases when the target array does not correspond to the actually wanted distribution, e.g. because it only contains a single element along the split axis and gets broadcast.
- Raises:
TypeError – When an argument is not a DNDarray or None.
ValueError – When the split-axes or sizes along the split-axis do not match.
See also
create_lshape_map()
Function to create the lshape_map.
- sanitize_in(x: Any)
Verify that the input object is a DNDarray.
- Parameters:
x (Any) – Input object
- Raises:
TypeError – When x is not a DNDarray.
- sanitize_infinity(x: heat.core.dndarray.DNDarray | torch.Tensor) int | float
Returns the largest possible value for the dtype of the input array.
- Parameters:
x (Union[DNDarray, torch.Tensor]) – Input object.
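The "largest possible value" corresponds to the dtype's finfo/iinfo maximum. A minimal NumPy sketch of the same lookup (largest_for_dtype is a hypothetical helper name, not part of heat):

```python
import numpy as np

def largest_for_dtype(x):
    """Largest representable value for x's dtype (sketch of sanitize_infinity's
    contract; largest_for_dtype is a hypothetical name, not a heat API)."""
    if np.issubdtype(x.dtype, np.integer):
        return np.iinfo(x.dtype).max
    return np.finfo(x.dtype).max

cap = largest_for_dtype(np.zeros(3, dtype=np.int32))  # 2**31 - 1 for int32
```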
- sanitize_in_tensor(x: Any)
Verify that the input object is a torch.Tensor.
- Parameters:
x (Any) – Input object.
- Raises:
TypeError – When x is not a torch.Tensor.
- sanitize_lshape(array: heat.core.dndarray.DNDarray, tensor: torch.Tensor)
Verify shape consistency when manipulating process-local arrays.
- Parameters:
array (DNDarray) – the original, potentially distributed DNDarray
tensor (torch.Tensor) – process-local data meant to replace array.larray
- Raises:
ValueError – if the shape of the local torch.Tensor is inconsistent with the global DNDarray.
- sanitize_out(out: heat.core.dndarray.DNDarray, output_shape: Tuple, output_split: int, output_device: str, output_comm: heat.core.communication.Communication = None)
Validate the output buffer out.
- Parameters:
out (DNDarray) – the out buffer where the result of some operation will be stored
output_shape (Tuple) – the calculated shape returned by the operation
output_split (Int) – the calculated split axis returned by the operation
output_device (Str) – “cpu” or “gpu” as per location of data
output_comm (Communication) – Communication object of the result of the operation
- Raises:
TypeError – if out is not a DNDarray.
ValueError – if the shape, split direction, or device of the output buffer out does not match the operation result.
- sanitize_sequence(seq: Sequence[int, Ellipsis] | Sequence[float, Ellipsis] | heat.core.dndarray.DNDarray | torch.Tensor) List
Check if sequence is valid, return list.
- Parameters:
seq (Union[Sequence[int, ...], Sequence[float, ...], DNDarray, torch.Tensor]) – Input sequence.
- Raises:
TypeError – if seq is neither a list nor a tuple.
- scalar_to_1d(x: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Turn a scalar DNDarray into a 1-D DNDarray with one element.
- Parameters:
x (DNDarray) – with x.ndim = 0
- argmax(x: heat.core.dndarray.DNDarray, axis: int | None = None, out: heat.core.dndarray.DNDarray | None = None, **kwargs: object) heat.core.dndarray.DNDarray
Returns an array of the indices of the maximum values along an axis. It has the same shape as x.shape with the dimension along axis removed.
- Parameters:
Examples
>>> a = ht.random.randn(3, 3) >>> a DNDarray([[ 1.0661, 0.7036, -2.0908], [-0.7534, -0.4986, -0.7751], [-0.4815, 1.9436, 0.6400]], dtype=ht.float32, device=cpu:0, split=None) >>> ht.argmax(a) DNDarray([7], dtype=ht.int64, device=cpu:0, split=None) >>> ht.argmax(a, axis=0) DNDarray([0, 2, 2], dtype=ht.int64, device=cpu:0, split=None) >>> ht.argmax(a, axis=1) DNDarray([0, 1, 1], dtype=ht.int64, device=cpu:0, split=None)
- argmin(x: heat.core.dndarray.DNDarray, axis: int | None = None, out: heat.core.dndarray.DNDarray | None = None, **kwargs: object) heat.core.dndarray.DNDarray
Returns an array of the indices of the minimum values along an axis. It has the same shape as x.shape with the dimension along axis removed.
- Parameters:
Examples
>>> a = ht.random.randn(3, 3) >>> a DNDarray([[ 1.0661, 0.7036, -2.0908], [-0.7534, -0.4986, -0.7751], [-0.4815, 1.9436, 0.6400]], dtype=ht.float32, device=cpu:0, split=None) >>> ht.argmin(a) DNDarray([2], dtype=ht.int64, device=cpu:0, split=None) >>> ht.argmin(a, axis=0) DNDarray([1, 1, 0], dtype=ht.int64, device=cpu:0, split=None) >>> ht.argmin(a, axis=1) DNDarray([2, 2, 0], dtype=ht.int64, device=cpu:0, split=None)
- average(x: heat.core.dndarray.DNDarray, axis: int | Tuple[int, Ellipsis] | None = None, weights: heat.core.dndarray.DNDarray | None = None, returned: bool = False) heat.core.dndarray.DNDarray | Tuple[heat.core.dndarray.DNDarray, Ellipsis]
Compute the weighted average along the specified axis.
If returned=True, return a tuple with the average as the first element and the sum of the weights as the second element. sum_of_weights is of the same type as average.
- Parameters:
x (DNDarray) – Array containing data to be averaged.
axis (None or int or Tuple[int,...], optional) – Axis or axes along which to average x. The default, axis=None, averages over all elements of the input array. If axis is negative, it counts from the last to the first axis. #TODO Issue #351: If axis is a tuple of ints, averaging is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before.
weights (DNDarray, optional) – An array of weights associated with the values in x. Each value in x contributes to the average according to its associated weight. The weights array can either be 1D (in which case its length must be the size of x along the given axis) or of the same shape as x. If weights=None, all data in x are assumed to have a weight equal to one; the result is then equivalent to mean().
returned (bool, optional) – If True, the tuple (average, sum_of_weights) is returned, otherwise only the average. If weights=None, sum_of_weights is equivalent to the number of elements over which the average is taken.
- Raises:
ZeroDivisionError – When all weights along axis are zero.
TypeError – When the length of a 1D weights array is not the same as the shape of x along axis.
Examples
>>> data = ht.arange(1,5, dtype=float) >>> data DNDarray([1., 2., 3., 4.], dtype=ht.float32, device=cpu:0, split=None) >>> ht.average(data) DNDarray(2.5000, dtype=ht.float32, device=cpu:0, split=None) >>> ht.average(ht.arange(1,11, dtype=float), weights=ht.arange(10,0,-1)) DNDarray([4.], dtype=ht.float64, device=cpu:0, split=None) >>> data = ht.array([[0, 1], [2, 3], [4, 5]], dtype=float, split=1) >>> weights = ht.array([1./4, 3./4]) >>> ht.average(data, axis=1, weights=weights) DNDarray([0.7500, 2.7500, 4.7500], dtype=ht.float32, device=cpu:0, split=None) >>> ht.average(data, weights=weights) Traceback (most recent call last): ... TypeError: Axis must be specified when shapes of x and weights differ.
- bincount(x: heat.core.dndarray.DNDarray, weights: heat.core.dndarray.DNDarray | None = None, minlength: int = 0) heat.core.dndarray.DNDarray
Count the number of occurrences of each value in an array of non-negative ints. Returns a non-distributed DNDarray of length max(x) + 1 if the input is non-empty, else 0.
The number of bins (each of size 1) is one larger than the largest value in x, unless x is empty, in which case the result is a tensor of size 0. If minlength is specified, the number of bins is at least minlength; if x is empty, the result is a tensor of size minlength filled with zeros. If n is the value at position i, then out[n] += weights[i] if weights is specified, else out[n] += 1.
- Parameters:
- Raises:
ValueError – If x and weights don’t have the same distribution.
Examples
>>> ht.bincount(ht.arange(5)) DNDarray([1, 1, 1, 1, 1], dtype=ht.int64, device=cpu:0, split=None) >>> ht.bincount(ht.array([0, 1, 3, 2, 1]), weights=ht.array([0, 0.5, 1, 1.5, 2])) DNDarray([0.0000, 2.5000, 1.5000, 1.0000], dtype=ht.float32, device=cpu:0, split=None)
- bucketize(input: heat.core.dndarray.DNDarray, boundaries: heat.core.dndarray.DNDarray | torch.Tensor, out_int32: bool = False, right: bool = False, out: heat.core.dndarray.DNDarray = None) heat.core.dndarray.DNDarray
Returns the indices of the buckets to which each value in the input belongs, where the boundaries of the buckets are set by boundaries.
- Parameters:
input (DNDarray) – The input array.
boundaries (DNDarray or torch.Tensor) – monotonically increasing sequence defining the bucket boundaries, 1-dimensional, not distributed
out_int32 (bool, optional) – set the dtype of the output to ht.int64 (False) or ht.int32 (True)
out (DNDarray, optional) – The output array; must be the same shape and split as the input array.
Notes
This function uses PyTorch's setting for right:

right | returned index i satisfies
False | boundaries[i-1] < x <= boundaries[i]
True  | boundaries[i-1] <= x < boundaries[i]
- Raises:
RuntimeError – If boundaries is distributed.
See also
digitize
NumPy-like version of this function.
Examples
>>> boundaries = ht.array([1, 3, 5, 7, 9]) >>> v = ht.array([[3, 6, 9], [3, 6, 9]]) >>> ht.bucketize(v, boundaries) DNDarray([[1, 3, 4], [1, 3, 4]], dtype=ht.int64, device=cpu:0, split=None) >>> ht.bucketize(v, boundaries, right=True) DNDarray([[2, 3, 5], [2, 3, 5]], dtype=ht.int64, device=cpu:0, split=None)
- cov(m: heat.core.dndarray.DNDarray, y: heat.core.dndarray.DNDarray | None = None, rowvar: bool = True, bias: bool = False, ddof: int | None = None) heat.core.dndarray.DNDarray
Estimate the covariance matrix of some data, m. For more information on the algorithm, please see the NumPy function of the same name.
- Parameters:
m (DNDarray) – A 1-D or 2-D array containing multiple variables and observations. Each row of m represents a variable, and each column a single observation of all those variables.
y (DNDarray, optional) – An additional set of variables and observations. y has the same form as m.
rowvar (bool, optional) – If True (default), each row represents a variable, with observations in the columns. Otherwise, the relationship is transposed: each column represents a variable, while the rows contain observations.
bias (bool, optional) – Default normalization (False) is by (N - 1), where N is the number of observations given (unbiased estimate). If True, normalization is by N. These values can be overridden by using the keyword ddof.
ddof (int, optional) – If not None, the default value implied by bias is overridden. Note that ddof=1 returns the unbiased estimate and ddof=0 returns the simple average.
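Since heat's cov follows numpy.cov, the normalization options can be sketched with a plain (non-distributed) NumPy call:

```python
import numpy as np

# Rows are variables (rowvar=True, the default); columns are observations.
m = np.array([[0.0, 1.0, 2.0],
              [2.0, 1.0, 0.0]])
c = np.cov(m)                    # unbiased: normalized by N - 1
c_biased = np.cov(m, bias=True)  # normalized by N, same as ddof=0
```

The off-diagonal entries are negative here because the two rows move in opposite directions.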
- digitize(x: heat.core.dndarray.DNDarray, bins: heat.core.dndarray.DNDarray | torch.Tensor, right: bool = False) heat.core.dndarray.DNDarray
Return the indices of the bins to which each value in the input array x belongs. If values in x are beyond the bounds of bins, 0 or len(bins) is returned as appropriate.
- Parameters:
Notes
This function uses NumPy's setting for right:

right | order of bins | returned index i satisfies
False | increasing    | bins[i-1] <= x < bins[i]
True  | increasing    | bins[i-1] < x <= bins[i]
False | decreasing    | bins[i-1] > x >= bins[i]
True  | decreasing    | bins[i-1] >= x > bins[i]
- Raises:
RuntimeError – If bins is distributed.
See also
bucketize
PyTorch-like version of this function.
Examples
>>> x = ht.array([1.2, 10.0, 12.4, 15.5, 20.]) >>> bins = ht.array([0, 5, 10, 15, 20]) >>> ht.digitize(x,bins,right=True) DNDarray([1, 2, 3, 4, 4], dtype=ht.int64, device=cpu:0, split=None) >>> ht.digitize(x,bins,right=False) DNDarray([1, 3, 3, 4, 5], dtype=ht.int64, device=cpu:0, split=None)
- histc(input: heat.core.dndarray.DNDarray, bins: int = 100, min: int = 0, max: int = 0, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Return the histogram of a DNDarray.
The elements are sorted into equal width bins between min and max. If min and max are both equal, the minimum and maximum values of the data are used. Elements lower than min and higher than max are ignored.
- Parameters:
Examples
>>> ht.histc(ht.array([1., 2, 1]), bins=4, min=0, max=3) DNDarray([0., 2., 1., 0.], dtype=ht.float32, device=cpu:0, split=None) >>> ht.histc(ht.arange(10, dtype=ht.float64, split=0), bins=10) DNDarray([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], dtype=ht.float64, device=cpu:0, split=None)
- histogram(a: heat.core.dndarray.DNDarray, bins: int = 10, range: Tuple[int, int] = (0, 0), normed: bool | None = None, weights: heat.core.dndarray.DNDarray | None = None, density: bool | None = None) heat.core.dndarray.DNDarray
Compute the histogram of a DNDarray.
- Parameters:
a (DNDarray) – the input array, must be of float type
bins (int, optional) – number of histogram bins
range (Tuple[int,int], optional) – lower and upper end of the bins. If not provided, range is simply (a.min(), a.max()).
normed (bool, optional) – Deprecated since NumPy version 1.6. TODO: remove.
weights (DNDarray, optional) – array of weights. Not implemented yet.
density (bool, optional) – Not implemented yet.
Notes
This is a wrapper function around histc() for some basic compatibility with the NumPy API.
See also
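A NumPy sketch of the call this wrapper approximates (np.histogram additionally returns the bin edges):

```python
import numpy as np

# Mirrors the histc example above: 3 values sorted into 4 equal-width bins on [0, 3].
a = np.array([1.0, 2.0, 1.0])
hist, edges = np.histogram(a, bins=4, range=(0, 3))
```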
- kurtosis(x: heat.core.dndarray.DNDarray, axis: int | None = None, unbiased: bool = True, Fischer: bool = True) heat.core.dndarray.DNDarray
Compute the kurtosis (Fisher or Pearson) of a dataset. Kurtosis is the fourth central moment divided by the square of the variance. If Fisher’s definition is used, then 3.0 is subtracted from the result to give 0.0 for a normal distribution.
If unbiased is True (default), the kurtosis is calculated using k-statistics to eliminate bias coming from biased moment estimators.
- Parameters:
x (ht.DNDarray) – Input array
axis (NoneType or Int) – Axis along which skewness is calculated, Default is to compute over the whole array x
unbiased (Bool) – if True (default) the calculations are corrected for bias
Fischer (bool) – Whether to use Fisher's definition or not. If True, 3.0 is subtracted from the result.
Warning
UserWarning: Dependent on the axis given and the split configuration, a UserWarning may be thrown during this function as data is transferred between processes.
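The biased Fisher variant can be sketched in plain NumPy; note this omits the k-statistics correction heat applies when unbiased=True:

```python
import numpy as np

def fisher_kurtosis(x):
    # Fourth central moment over squared variance, minus 3 (biased estimator).
    d = x - x.mean()
    m2 = np.mean(d**2)   # second central moment (biased variance)
    m4 = np.mean(d**4)   # fourth central moment
    return m4 / m2**2 - 3.0

k = fisher_kurtosis(np.array([1.0, 2.0, 3.0, 4.0, 5.0]))  # negative: flatter than normal
```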
- max(x: heat.core.dndarray.DNDarray, axis: int | Tuple[int, Ellipsis] | None = None, out: heat.core.dndarray.DNDarray | None = None, keepdims: bool | None = None) heat.core.dndarray.DNDarray
Return the maximum along a given axis.
- Parameters:
x (DNDarray) – Input array.
axis (None or int or Tuple[int,...], optional) – Axis or axes along which to operate. By default, flattened input is used. If this is a tuple of ints, the maximum is selected over multiple axes, instead of a single axis or all the axes as before.
out (Tuple[DNDarray, DNDarray], optional) – Tuple of two output arrays (max, max_indices). Must be of the same shape and buffer length as the expected output.
keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original array.
Examples
>>> a = ht.float32([ [1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12] ]) >>> ht.max(a) DNDarray([12.], dtype=ht.float32, device=cpu:0, split=None) >>> ht.max(a, axis=0) DNDarray([10., 11., 12.], dtype=ht.float32, device=cpu:0, split=None) >>> ht.max(a, axis=1) DNDarray([ 3., 6., 9., 12.], dtype=ht.float32, device=cpu:0, split=None)
- maximum(x1: heat.core.dndarray.DNDarray, x2: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Compares two DNDarrays and returns a new DNDarray containing the element-wise maxima. The DNDarrays must have the same shape, or shapes that can be broadcast to a single shape. For broadcasting semantics, see: https://pytorch.org/docs/stable/notes/broadcasting.html If one of the elements being compared is NaN, then that element is returned. TODO: Check this: If both elements are NaNs then the first is returned. The latter distinction is important for complex NaNs, which are defined as at least one of the real or imaginary parts being NaN. The net effect is that NaNs are propagated.
- Parameters:
x1 (DNDarray) – The first array containing the elements to be compared.
x2 (DNDarray) – The second array containing the elements to be compared.
out (DNDarray, optional) – A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned.
Examples
>>> import heat as ht >>> a = ht.random.randn(3, 4) >>> a DNDarray([[ 0.2701, -0.6993, 1.2197, 0.0579], [ 0.6815, 0.4722, -0.3947, -0.3030], [ 1.0101, -1.2460, -1.3953, -0.6879]], dtype=ht.float32, device=cpu:0, split=None) >>> b = ht.random.randn(3, 4) >>> b DNDarray([[ 0.9664, 0.6159, -0.8555, 0.8204], [-1.2200, -0.0759, 0.0437, 0.4700], [ 1.2271, 1.0530, 0.1095, 0.8386]], dtype=ht.float32, device=cpu:0, split=None) >>> ht.maximum(a, b) DNDarray([[0.9664, 0.6159, 1.2197, 0.8204], [0.6815, 0.4722, 0.0437, 0.4700], [1.2271, 1.0530, 0.1095, 0.8386]], dtype=ht.float32, device=cpu:0, split=None) >>> c = ht.random.randn(1, 4) >>> c DNDarray([[-0.5363, -0.9765, 0.4099, 0.3520]], dtype=ht.float32, device=cpu:0, split=None) >>> ht.maximum(a, c) DNDarray([[ 0.2701, -0.6993, 1.2197, 0.3520], [ 0.6815, 0.4722, 0.4099, 0.3520], [ 1.0101, -0.9765, 0.4099, 0.3520]], dtype=ht.float32, device=cpu:0, split=None) >>> d = ht.random.randn(3, 4, 5) >>> ht.maximum(a, d) ValueError: operands could not be broadcast, input shapes (3, 4) (3, 4, 5)
- mean(x: heat.core.dndarray.DNDarray, axis: int | Tuple[int, Ellipsis] | None = None) heat.core.dndarray.DNDarray
Calculates and returns the mean of a DNDarray. If an axis is given, the mean is taken along that axis.
- Parameters:
x (DNDarray) – Values for which the mean is calculated. The dtype of x must be a float.
axis (None or int or iterable) – Axis along which the mean is taken. Default None calculates the mean of all data items.
Notes
Split semantics when axis is an integer:
if axis == x.split, then mean(x).split = None
if axis > split, then mean(x).split = x.split
if axis < split, then mean(x).split = x.split - 1
Examples
>>> a = ht.random.randn(1,3) >>> a DNDarray([[-0.1164, 1.0446, -0.4093]], dtype=ht.float32, device=cpu:0, split=None) >>> ht.mean(a) DNDarray(0.1730, dtype=ht.float32, device=cpu:0, split=None) >>> a = ht.random.randn(4,4) >>> a DNDarray([[-1.0585, 0.7541, -1.1011, 0.5009], [-1.3575, 0.3344, 0.4506, 0.7379], [-0.4337, -0.6516, -1.3690, -0.8772], [ 0.6929, -1.0989, -0.9961, 0.3547]], dtype=ht.float32, device=cpu:0, split=None) >>> ht.mean(a, 1) DNDarray([-0.2262, 0.0413, -0.8328, -0.2619], dtype=ht.float32, device=cpu:0, split=None) >>> ht.mean(a, 0) DNDarray([-0.5392, -0.1655, -0.7539, 0.1791], dtype=ht.float32, device=cpu:0, split=None) >>> a = ht.random.randn(4,4) >>> a DNDarray([[-0.1441, 0.5016, 0.8907, 0.6318], [-1.1690, -1.2657, 1.4840, -0.1014], [ 0.4133, 1.4168, 1.3499, 1.0340], [-0.9236, -0.7535, -0.2466, -0.9703]], dtype=ht.float32, device=cpu:0, split=None) >>> ht.mean(a, (0,1)) DNDarray(0.1342, dtype=ht.float32, device=cpu:0, split=None)
- median(x: heat.core.dndarray.DNDarray, axis: int | None = None, keepdims: bool = False) heat.core.dndarray.DNDarray
Compute the median of the data along the specified axis. Returns the median of the DNDarray elements.
- Parameters:
x (DNDarray) – Input tensor
axis (int or None, optional) – Axis along which the median is computed. Default is None, i.e., the median is computed along a flattened version of the DNDarray.
keepdims (bool, optional) – If True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result can broadcast correctly against the original array x.
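heat follows NumPy's median semantics; a non-distributed NumPy sketch of axis and keepdims:

```python
import numpy as np

a = np.array([[10.0, 7.0, 4.0],
              [3.0, 2.0, 1.0]])
flat = np.median(a)                         # median of all six values
per_row = np.median(a, axis=1)              # one median per row
kept = np.median(a, axis=1, keepdims=True)  # reduced axis kept with size one
```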
- min(x: heat.core.dndarray.DNDarray, axis: int | Tuple[int, Ellipsis] | None = None, out: heat.core.dndarray.DNDarray | None = None, keepdims: bool | None = None) heat.core.dndarray.DNDarray
Return the minimum along a given axis.
- Parameters:
x (DNDarray) – Input array.
axis (None or int or Tuple[int,...]) – Axis or axes along which to operate. By default, the flattened input is used. If this is a tuple of ints, the minimum is selected over multiple axes, instead of a single axis or all the axes as before.
out (DNDarray, optional) – A location into which the result is stored. Must be of the same shape and buffer length as the expected output.
keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original array.
Examples
>>> a = ht.float32([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
>>> ht.min(a)
DNDarray([1.], dtype=ht.float32, device=cpu:0, split=None)
>>> ht.min(a, axis=0)
DNDarray([1., 2., 3.], dtype=ht.float32, device=cpu:0, split=None)
>>> ht.min(a, axis=1)
DNDarray([ 1.,  4.,  7., 10.], dtype=ht.float32, device=cpu:0, split=None)
- minimum(x1: heat.core.dndarray.DNDarray, x2: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Compares two DNDarrays and returns a new DNDarray containing the element-wise minima. If one of the elements being compared is NaN, then that element is returned; the net effect is that NaNs are propagated. If both elements are NaNs, then the first is returned. The latter distinction is important for complex NaNs, which are defined as at least one of the real or imaginary parts being NaN. The inputs must have the same shape, or shapes that can be broadcast to a single shape. For broadcasting semantics, see: https://pytorch.org/docs/stable/notes/broadcasting.html
- Parameters:
x1 (DNDarray) – The first array containing the elements to be compared.
x2 (DNDarray) – The second array containing the elements to be compared.
out (DNDarray, optional) – A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned.
Examples
>>> import heat as ht
>>> a = ht.random.randn(3, 4)
>>> a
DNDarray([[-0.5462,  0.0079,  1.2828,  1.4980],
          [ 0.6503, -1.1069,  1.2131,  1.4003],
          [-0.3203, -0.2318,  1.0388,  0.4439]], dtype=ht.float32, device=cpu:0, split=None)
>>> b = ht.random.randn(3, 4)
>>> b
DNDarray([[ 1.8505,  2.3055, -0.2825, -1.4718],
          [-0.3684,  1.6866, -0.8570, -0.4779],
          [ 1.0532,  0.3775, -0.8669, -1.7275]], dtype=ht.float32, device=cpu:0, split=None)
>>> ht.minimum(a, b)
DNDarray([[-0.5462,  0.0079, -0.2825, -1.4718],
          [-0.3684, -1.1069, -0.8570, -0.4779],
          [-0.3203, -0.2318, -0.8669, -1.7275]], dtype=ht.float32, device=cpu:0, split=None)
>>> c = ht.random.randn(1, 4)
>>> c
DNDarray([[-1.4358,  1.2914, -0.6042, -1.4009]], dtype=ht.float32, device=cpu:0, split=None)
>>> ht.minimum(a, c)
DNDarray([[-1.4358,  0.0079, -0.6042, -1.4009],
          [-1.4358, -1.1069, -0.6042, -1.4009],
          [-1.4358, -0.2318, -0.6042, -1.4009]], dtype=ht.float32, device=cpu:0, split=None)
>>> d = ht.random.randn(3, 4, 5)
>>> ht.minimum(a, d)
ValueError: operands could not be broadcast, input shapes (3, 4) (3, 4, 5)
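The NaN-propagation rule described above can be expressed for a single pair of scalars as follows (a plain-Python sketch, not the Heat implementation):

```python
import math

def nan_min(x, y):
    # Element-wise rule described above: if either operand is NaN,
    # the result is NaN (NaNs propagate).
    if math.isnan(x) or math.isnan(y):
        return math.nan
    return min(x, y)
```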
- percentile(x: heat.core.dndarray.DNDarray, q: heat.core.dndarray.DNDarray | int | float | Tuple | List, axis: int | None = None, out: heat.core.dndarray.DNDarray | None = None, interpolation: str = 'linear', keepdims: bool = False) heat.core.dndarray.DNDarray
Compute the q-th percentile of the data along the specified axis. Returns the q-th percentile(s) of the tensor elements.
- Parameters:
x (DNDarray) – Input tensor
q (DNDarray, scalar, or list of scalars) – Percentile or sequence of percentiles to compute. Must belong to the interval [0, 100].
axis (int, or None, optional) – Axis along which the percentiles are computed. Default is None.
out (DNDarray, optional.) – Output buffer.
interpolation (str, optional) –
Interpolation method to use when the desired percentile lies between two data points \(i < j\). Can be one of:
‘linear’: \(i + (j - i) \cdot fraction\), where fraction is the fractional part of the index surrounded by i and j.
‘lower’: i.
‘higher’: j.
‘nearest’: i or j, whichever is nearest.
‘midpoint’: \((i + j) / 2\).
keepdims (bool, optional) – If True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result can broadcast correctly against the original array x.
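The interpolation modes listed above can be sketched in plain Python. This is not the Heat implementation, and the 'nearest' branch ignores tie-breaking subtleties; it only illustrates how the surrounding data points \(i\) and \(j\) are combined:

```python
import math

def percentile_1d(data, q, interpolation="linear"):
    # Pure-Python sketch of the interpolation modes listed above
    # (not the Heat implementation).
    s = sorted(data)
    idx = (len(s) - 1) * q / 100.0          # fractional index of the q-th percentile
    i, j = math.floor(idx), math.ceil(idx)  # surrounding data points: i <= idx <= j
    fraction = idx - i
    if interpolation == "linear":
        return s[i] + (s[j] - s[i]) * fraction
    if interpolation == "lower":
        return s[i]
    if interpolation == "higher":
        return s[j]
    if interpolation == "nearest":
        return s[j] if fraction >= 0.5 else s[i]
    if interpolation == "midpoint":
        return (s[i] + s[j]) / 2
    raise ValueError(f"unknown interpolation: {interpolation}")
```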
- skew(x: heat.core.dndarray.DNDarray, axis: int = None, unbiased: bool = True) heat.core.dndarray.DNDarray
Compute the sample skewness of a data set.
- Parameters:
x (ht.DNDarray) – Input array
axis (NoneType or int) – Axis along which skewness is calculated. Default is to compute over the whole array x
unbiased (bool) – If True (default), the calculations are corrected for bias
Warning
UserWarning: Dependent on the axis given and the split configuration, a UserWarning may be thrown during this function as data is transferred between processes.
- std(x: heat.core.dndarray.DNDarray, axis: int | Tuple[int] | List[int] = None, ddof: int = 0, **kwargs: object) heat.core.dndarray.DNDarray
Calculates the standard deviation of a DNDarray, optionally with Bessel's correction. If an axis is given, the standard deviation will be taken along that direction.
- Parameters:
x (DNDarray) – Array for which the standard deviation is calculated. The datatype of x must be a float
axis (None or int or iterable) – Axis along which the standard deviation is taken. Default None calculates the standard deviation of all data items.
ddof (int, optional) – Delta Degrees of Freedom: the denominator implicitly used in the calculation is N - ddof, where N represents the number of elements. If ddof=1, Bessel's correction will be applied. Setting ddof>1 raises a NotImplementedError.
Examples
>>> a = ht.random.randn(1, 3)
>>> a
DNDarray([[ 0.5714,  0.0048, -0.2942]], dtype=ht.float32, device=cpu:0, split=None)
>>> ht.std(a)
DNDarray(0.3590, dtype=ht.float32, device=cpu:0, split=None)
>>> a = ht.random.randn(4, 4)
>>> a
DNDarray([[ 0.8488,  1.2225,  1.2498, -1.4592],
          [-0.5820, -0.3928,  0.1509, -0.0174],
          [ 0.6426, -1.8149,  0.1369,  0.0042],
          [-0.6043, -0.0523, -1.6653,  0.6631]], dtype=ht.float32, device=cpu:0, split=None)
>>> ht.std(a, 1, ddof=1)
DNDarray([1.2961, 0.3362, 1.0739, 0.9820], dtype=ht.float32, device=cpu:0, split=None)
>>> ht.std(a, 1)
DNDarray([1.2961, 0.3362, 1.0739, 0.9820], dtype=ht.float32, device=cpu:0, split=None)
- var(x: heat.core.dndarray.DNDarray, axis: int | Tuple[int] | List[int] = None, ddof: int = 0, **kwargs: object) heat.core.dndarray.DNDarray
Calculates and returns the variance of a DNDarray. If an axis is given, the variance will be taken along that direction.
- Parameters:
x (DNDarray) – Array for which the variance is calculated. The datatype of x must be a float
axis (None or int or iterable) – Axis along which the variance is taken. Default None calculates the variance of all data items.
ddof (int, optional) – Delta Degrees of Freedom: the denominator implicitly used in the calculation is N - ddof, where N represents the number of elements. If ddof=1, Bessel's correction will be applied. Setting ddof>1 raises a NotImplementedError.
Notes
Split semantics when axis is an integer:
if axis == x.split, then var(x).split = None
if axis > x.split, then var(x).split = x.split
if axis < x.split, then var(x).split = x.split - 1
The variance is the average of the squared deviations from the mean, i.e., var = mean(abs(x - x.mean())**2). The mean is normally calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of a hypothetical infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables.
Examples
>>> a = ht.random.randn(1, 3)
>>> a
DNDarray([[-2.3589, -0.2073,  0.8806]], dtype=ht.float32, device=cpu:0, split=None)
>>> ht.var(a)
DNDarray(1.8119, dtype=ht.float32, device=cpu:0, split=None)
>>> ht.var(a, ddof=1)
DNDarray(2.7179, dtype=ht.float32, device=cpu:0, split=None)
>>> a = ht.random.randn(4, 4)
>>> a
DNDarray([[-0.8523, -1.4982, -0.5848, -0.2554],
          [ 0.8458, -0.3125, -0.2430,  1.9016],
          [-0.6778, -0.3584, -1.5112,  0.6545],
          [-0.9161,  0.0168,  0.0462,  0.5964]], dtype=ht.float32, device=cpu:0, split=None)
>>> ht.var(a, 1)
DNDarray([0.2777, 1.0957, 0.8015, 0.3936], dtype=ht.float32, device=cpu:0, split=None)
>>> ht.var(a, 0)
DNDarray([0.7001, 0.4376, 0.4576, 0.7890], dtype=ht.float32, device=cpu:0, split=None)
>>> ht.var(a, 0, ddof=1)
DNDarray([0.7001, 0.4376, 0.4576, 0.7890], dtype=ht.float32, device=cpu:0, split=None)
>>> ht.var(a, 0, ddof=0)
DNDarray([0.7001, 0.4376, 0.4576, 0.7890], dtype=ht.float32, device=cpu:0, split=None)
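The divisor rule from the notes above (N versus N - ddof) can be checked with a direct plain-Python transcription of the formula:

```python
def var_1d(values, ddof=0):
    # Direct transcription of the formula in the notes above:
    # mean of squared deviations, with divisor N - ddof.
    n = len(values)
    m = sum(values) / n
    return sum((v - m) ** 2 for v in values) / (n - ddof)
```

With ddof=0 this is the maximum-likelihood estimate; with ddof=1 it is the unbiased (Bessel-corrected) estimate.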
- broadcast_shape(shape_a: Tuple[int, Ellipsis], shape_b: Tuple[int, Ellipsis]) Tuple[int, Ellipsis]
Infers, if possible, the broadcast output shape of two operands a and b. Inspired by stackoverflow post: https://stackoverflow.com/questions/24743753/test-if-an-array-is-broadcastable-to-a-shape
- Parameters:
shape_a (Tuple[int,...]) – Shape of first operand
shape_b (Tuple[int,...]) – Shape of second operand
- Raises:
ValueError – If the two shapes cannot be broadcast.
Examples
>>> import heat as ht
>>> ht.core.stride_tricks.broadcast_shape((5, 4), (4,))
(5, 4)
>>> ht.core.stride_tricks.broadcast_shape((1, 100, 1), (10, 1, 5))
(10, 100, 5)
>>> ht.core.stride_tricks.broadcast_shape((8, 1, 6, 1), (7, 1, 5))
(8, 7, 6, 5)
>>> ht.core.stride_tricks.broadcast_shape((2, 1), (8, 4, 3))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "heat/core/stride_tricks.py", line 42, in broadcast_shape
    "operands could not be broadcast, input shapes {} {}".format(shape_a, shape_b)
ValueError: operands could not be broadcast, input shapes (2, 1) (8, 4, 3)
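The broadcasting rule behind this function is the standard NumPy/PyTorch one and fits in a few lines; the sketch below is an independent reference implementation, not the Heat code:

```python
from itertools import zip_longest

def broadcast_shape(shape_a, shape_b):
    # NumPy-style broadcasting rule: align the shapes from the right;
    # each pair of dimensions must be equal or contain a 1, and the
    # larger of the two survives.
    result = []
    for a, b in zip_longest(reversed(shape_a), reversed(shape_b), fillvalue=1):
        if a != b and a != 1 and b != 1:
            raise ValueError(
                f"operands could not be broadcast, input shapes {shape_a} {shape_b}"
            )
        result.append(max(a, b))
    return tuple(reversed(result))
```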
- broadcast_shapes(*shapes: Tuple[int, Ellipsis]) Tuple[int, Ellipsis]
Infers, if possible, the broadcast output shape of multiple operands.
- Parameters:
*shapes (Tuple[int,...]) – Shapes of operands.
- Returns:
The broadcast output shape.
- Return type:
Tuple[int, …]
- Raises:
ValueError – If the shapes cannot be broadcast.
Examples
>>> import heat as ht
>>> ht.broadcast_shapes((5, 4), (4,))
(5, 4)
>>> ht.broadcast_shapes((1, 100, 1), (10, 1, 5))
(10, 100, 5)
>>> ht.broadcast_shapes((8, 1, 6, 1), (7, 1, 5))
(8, 7, 6, 5)
>>> ht.broadcast_shapes((2, 1), (8, 4, 3))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "heat/core/stride_tricks.py", line 100, in broadcast_shapes
    "operands could not be broadcast, input shapes {}".format(shapes))
ValueError: operands could not be broadcast, input shapes ((2, 1), (8, 4, 3))
- sanitize_shape(shape: int | Tuple[int, Ellipsis], lval: int = 0) Tuple[int, Ellipsis]
Verifies and normalizes the given shape.
- Parameters:
shape (int or Tuple[int,...]) – Shape of an array.
lval (int) – Lowest legal value
- Raises:
ValueError – If the shape contains illegal values, e.g. negative numbers.
TypeError – If the given shape is neither an int nor a sequence of ints.
Examples
>>> import heat as ht
>>> ht.core.stride_tricks.sanitize_shape(3)
(3,)
>>> ht.core.stride_tricks.sanitize_shape([1, 2, 3])
(1, 2, 3)
>>> ht.core.stride_tricks.sanitize_shape(1.0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "heat/heat/core/stride_tricks.py", line 159, in sanitize_shape
    raise TypeError("expected sequence object with length >= 0 or a single integer")
TypeError: expected sequence object with length >= 0 or a single integer
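The normalization described above is straightforward to sketch: a bare int becomes a 1-tuple, and sequences are validated element-wise. This is an illustrative sketch, not the Heat implementation:

```python
def sanitize_shape(shape, lval=0):
    # Sketch of the normalization: int -> 1-tuple, sequences validated
    # element-wise against the lowest legal value lval.
    if isinstance(shape, int):
        shape = (shape,)
    try:
        shape = tuple(shape)
    except TypeError:
        raise TypeError("expected sequence object with length >= 0 or a single integer")
    for dim in shape:
        if not isinstance(dim, int):
            raise TypeError("expected sequence object with length >= 0 or a single integer")
        if dim < lval:
            raise ValueError(f"illegal dimension {dim}, must be >= {lval}")
    return shape
```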
- sanitize_slice(sl: slice, max_dim: int) slice
Remove None-types from a slice
- Parameters:
sl (slice) – slice to adjust
max_dim (int) – maximum index for the given slice
- Raises:
TypeError – if sl is not a slice.
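The behavior amounts to replacing None start/stop/step values with explicit bounds. A minimal sketch, assuming non-negative, forward slices (not the Heat implementation):

```python
def sanitize_slice(sl, max_dim):
    # Replace None start/stop/step with explicit bounds
    # (sketch; assumes non-negative, forward slices).
    if not isinstance(sl, slice):
        raise TypeError("expected a slice, got {}".format(type(sl)))
    start = 0 if sl.start is None else sl.start
    stop = max_dim if sl.stop is None else sl.stop
    step = 1 if sl.step is None else sl.step
    return slice(start, stop, step)
```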
- class SplitTiles(arr: heat.core.dndarray.DNDarray)
Initialize tiles with the tile divisions equal to the theoretical split dimensions in every dimension
- Parameters:
arr (DNDarray) – Base array for which to create the tiles
- Variables:
__DNDarray (DNDarray) – the DNDarray associated with the tiles
__lshape_map (torch.Tensor) – map of the shapes of the local torch tensors of arr
__tile_locations (torch.Tensor) – locations of the tiles of arr
__tile_ends_g (torch.Tensor) – the global indices of the ends of the tiles
__tile_dims (torch.Tensor) – the dimensions of all of the tiles
Examples
>>> a = ht.zeros((10, 11), split=None)
>>> a.create_split_tiles()
>>> print(a.tiles.tile_ends_g)
[0/2] tensor([[ 4,  7, 10],
[0/2]         [ 4,  8, 11]], dtype=torch.int32)
[1/2] tensor([[ 4,  7, 10],
[1/2]         [ 4,  8, 11]], dtype=torch.int32)
[2/2] tensor([[ 4,  7, 10],
[2/2]         [ 4,  8, 11]], dtype=torch.int32)
>>> print(a.tiles.tile_locations)
[0/2] tensor([[0, 0, 0],
[0/2]         [0, 0, 0],
[0/2]         [0, 0, 0]], dtype=torch.int32)
[1/2] tensor([[1, 1, 1],
[1/2]         [1, 1, 1],
[1/2]         [1, 1, 1]], dtype=torch.int32)
[2/2] tensor([[2, 2, 2],
[2/2]         [2, 2, 2],
[2/2]         [2, 2, 2]], dtype=torch.int32)
>>> a = ht.zeros((10, 11), split=1)
>>> a.create_split_tiles()
>>> print(a.tiles.tile_ends_g)
[0/2] tensor([[ 4,  7, 10],
[0/2]         [ 4,  8, 11]], dtype=torch.int32)
[1/2] tensor([[ 4,  7, 10],
[1/2]         [ 4,  8, 11]], dtype=torch.int32)
[2/2] tensor([[ 4,  7, 10],
[2/2]         [ 4,  8, 11]], dtype=torch.int32)
>>> print(a.tiles.tile_locations)
[0/2] tensor([[0, 1, 2],
[0/2]         [0, 1, 2],
[0/2]         [0, 1, 2]], dtype=torch.int32)
[1/2] tensor([[0, 1, 2],
[1/2]         [0, 1, 2],
[1/2]         [0, 1, 2]], dtype=torch.int32)
[2/2] tensor([[0, 1, 2],
[2/2]         [0, 1, 2],
[2/2]         [0, 1, 2]], dtype=torch.int32)
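The tile end indices in the example (e.g. [4, 7, 10] for length 10 over 3 processes, [4, 8, 11] for length 11) follow from even chunking with the remainder given to the lowest ranks. The sketch below reproduces those values; it is an illustration, not the Heat implementation:

```python
def tile_ends(length, nprocs):
    # Even chunking with the remainder distributed to the lowest ranks;
    # reproduces the tile_ends_g values shown above (a sketch, not Heat code).
    base, rem = divmod(length, nprocs)
    ends, pos = [], 0
    for rank in range(nprocs):
        pos += base + (1 if rank < rem else 0)
        ends.append(pos)
    return ends
```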
- set_tile_locations(split: int, tile_dims: torch.Tensor, arr: heat.core.dndarray.DNDarray) torch.Tensor
Create a torch.Tensor which contains the locations of the tiles of arr for the given split
- Parameters:
split (int) – Target split dimension. Does not need to be equal to arr.split
tile_dims (torch.Tensor) – Tensor containing the sizes of each tile
arr (DNDarray) – Array for which the tiles are being created
- __getitem__(key: int | slice | Tuple[int | slice, Ellipsis]) torch.Tensor
Getitem function for getting tiles. Returns the specified tile, but only on the process on which it resides
- Parameters:
key (int or Tuple or Slice) – Key which identifies the tile/s to get
Examples
>>> test = torch.arange(np.prod([i + 6 for i in range(2)])).reshape([i + 6 for i in range(2)])
>>> a = ht.array(test, split=0).larray
[0/2] tensor([[ 0.,  1.,  2.,  3.,  4.,  5.,  6.],
[0/2]         [ 7.,  8.,  9., 10., 11., 12., 13.]])
[1/2] tensor([[14., 15., 16., 17., 18., 19., 20.],
[1/2]         [21., 22., 23., 24., 25., 26., 27.]])
[2/2] tensor([[28., 29., 30., 31., 32., 33., 34.],
[2/2]         [35., 36., 37., 38., 39., 40., 41.]])
>>> a.create_split_tiles()
>>> a.tiles[:2, 2]
[0/2] tensor([[ 5.,  6.],
[0/2]         [12., 13.]])
[1/2] tensor([[19., 20.],
[1/2]         [26., 27.]])
[2/2] None
>>> a = ht.array(test, split=1)
>>> a.create_split_tiles()
>>> a.tiles[1]
[0/2] tensor([[14., 15., 16.],
[0/2]         [21., 22., 23.]])
[1/2] tensor([[17., 18.],
[1/2]         [24., 25.]])
[2/2] tensor([[19., 20.],
[2/2]         [26., 27.]])
- __get_tile_slices(key: int | slice | Tuple[int | slice, Ellipsis]) Tuple[slice, Ellipsis]
Create and return slices to convert a key from the tile indices to the normal indices
- get_tile_size(key: int | slice | Tuple[int | slice, Ellipsis]) Tuple[int, Ellipsis]
Get the size of a tile or tiles indicated by the given key
- Parameters:
key (int or slice or tuple) – which tiles to get
- __setitem__(key: int | slice | Tuple[int | slice, Ellipsis], value: int | float | torch.Tensor) None
Set the values of a tile
- Parameters:
key (int or Tuple or Slice) – Key which identifies the tile/s to get
value (int or torch.Tensor) – Value to be set on the tile
Examples
See the getitem function for this class.
- class SquareDiagTiles(arr: heat.core.dndarray.DNDarray, tiles_per_proc: int = 2)
Generate the tile map and the other objects which may be useful. The tiles generated here are based on square tiles along the diagonal. The sizes of these tiles along the diagonal dictate the divisions across all processes. If gshape[0] >> gshape[1] then there will be extra tiles generated below the diagonal. If gshape[0] is close to gshape[1], then the last tile (as well as the other tiles which correspond with said tile) will be extended to cover the whole array. However, extra tiles are not generated above the diagonal in the case that gshape[0] << gshape[1].
- Parameters:
arr (DNDarray) – The array to be tiled
tiles_per_proc (int, optional) – The number of divisions per process. Default: 2
- Variables:
__col_per_proc_list (List) – List of length equal to the number of processes; each element holds the number of tile columns on the process whose rank equals the index
__DNDarray (DNDarray) – The whole DNDarray
__lshape_map (torch.Tensor) – unit -> [rank, row size, column size] Tensor filled with the shapes of the local tensors
__tile_map (torch.Tensor) – units -> row, column, start index in each direction, process Tensor filled with the global indices of the generated tiles
__row_per_proc_list (List) – List of length equal to the number of processes; each element holds the number of tile rows on the process whose rank equals the index
Warning
The generation of these tiles may unbalance the original DNDarray!
Notes
This tiling scheme is intended for use with the qr() function.
- __adjust_cols_sp1_m_ls_n(arr: heat.core.dndarray.DNDarray, col_per_proc_list: List[int, Ellipsis], last_diag_pr: int, col_inds: List[int, Ellipsis], lshape_map: torch.Tensor) None
Add more columns after the diagonal ends if m < n and arr.split == 1
- __adjust_last_row_sp0_m_ge_n(arr: heat.core.dndarray.DNDarray, lshape_map: torch.Tensor, last_diag_pr: int, row_inds: List[int, Ellipsis], row_per_proc_list: List[int, Ellipsis], tile_columns: int) None
Need to adjust the size of the last row if arr.split == 0 and the diagonal ends before the last tile. This should only be run if arr.split == 0 and last_diag_pr < arr.comm.size - 1.
- __adjust_lshape_sp0_1tile(arr: heat.core.dndarray.DNDarray, col_inds: List[int, Ellipsis], lshape_map: torch.Tensor, tiles_per_proc: int) None
If the split is 0 and the number of tiles per process is 1, then the local data may need to be redistributed to fit the full diagonal on as many processes as possible. If there is a process with only 1 element, this function will adjust the lshape_map, then redistribute arr so that there is not a single diagonal element on one process
- __create_cols(arr: heat.core.dndarray.DNDarray, lshape_map: torch.Tensor, tiles_per_proc: int) Tuple[torch.Tensor, List[int, Ellipsis], List[int, Ellipsis], torch.Tensor]
Calculates the last diagonal process, then creates a list of the number of tile columns per process, then calculates the starting indices of the columns. Also returns the number of tile columns.
- Parameters:
arr (DNDarray) – DNDarray for which to find the tile columns for
lshape_map (torch.Tensor) – The map of the local shapes (for more info see:
create_lshape_map()
)tiles_per_proc (int) – The number of divisions per process
- __def_end_row_inds_sp0_m_ge_n(arr: heat.core.dndarray.DNDarray, row_inds: List[int, Ellipsis], last_diag_pr: int, tiles_per_proc: int, lshape_map: torch.Tensor) None
Adjust the rows on the processes which are greater than the last diagonal process to have rows which are chunked evenly into tiles_per_proc rows.
- __last_tile_row_adjust_sp1(arr: heat.core.dndarray.DNDarray, row_inds: List[int, Ellipsis]) None
Add extra row(s) if there is space below the diagonal (split=1)
- get_start_stop(key: int | slice | Tuple[int, slice, Ellipsis]) Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]
Returns the start and stop indices in the form (dim0 start, dim0 stop, dim1 start, dim1 stop), corresponding to the tile(s) identified by the given key. The key MUST use global indices.
- Parameters:
key (int or Tuple or List or slice) – Indices to select the tile. STRIDES ARE NOT ALLOWED, MUST BE GLOBAL INDICES
Examples
>>> a = ht.zeros((12, 10), split=0)
>>> a_tiles = ht.tiling.SquareDiagTiles(a, tiles_per_proc=2)  # type: tiling.SquareDiagTiles
>>> print(a_tiles.get_start_stop(key=(slice(0, 2), 2)))
[0/1] (tensor(0), tensor(6), tensor(6), tensor(8))
[1/1] (tensor(0), tensor(6), tensor(6), tensor(8))
>>> print(a_tiles.get_start_stop(key=(0, 2)))
[0/1] (tensor(0), tensor(3), tensor(6), tensor(8))
[1/1] (tensor(0), tensor(3), tensor(6), tensor(8))
>>> print(a_tiles.get_start_stop(key=2))
[0/1] (tensor(0), tensor(2), tensor(0), tensor(10))
[1/1] (tensor(0), tensor(2), tensor(0), tensor(10))
>>> print(a_tiles.get_start_stop(key=(3, 3)))
[0/1] (tensor(2), tensor(6), tensor(8), tensor(10))
[1/1] (tensor(2), tensor(6), tensor(8), tensor(10))
- __getitem__(key: int | slice | Tuple[int, slice, Ellipsis]) torch.Tensor
Returns a local selection of the DNDarray corresponding to the tile(s) desired. Standard getitem function for the tiles. The returned item is a view of the original DNDarray; operations applied to this view will change the original array. STRIDES ARE NOT AVAILABLE, NOR ARE CROSS-SPLIT SLICES
- Parameters:
key (int, slice, tuple) – indices of the tile/s desired
Examples
>>> a = ht.zeros((12, 10), split=0)
>>> a_tiles = tiling.SquareDiagTiles(a, tiles_per_proc=2)  # type: tiling.SquareDiagTiles
>>> print(a_tiles[2, 3])
[0/1] None
[1/1] tensor([[0., 0.],
[1/1]         [0., 0.]])
>>> print(a_tiles[2])
[0/1] None
[1/1] tensor([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1/1]         [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
>>> print(a_tiles[0:2, 1])
[0/1] tensor([[0., 0., 0.],
[0/1]         [0., 0., 0.],
[0/1]         [0., 0., 0.],
[0/1]         [0., 0., 0.],
[0/1]         [0., 0., 0.],
[0/1]         [0., 0., 0.]])
[1/1] None
- local_get(key: int | slice | Tuple[int, slice, Ellipsis]) torch.Tensor
Returns the local tile(s) corresponding to the given key. Getitem routine using local indices: converts to global indices, then uses getitem
- Parameters:
key (int, slice, tuple, list) – Indices of the tile(s) desired. If the stop index of a slice is larger than the end, it will be adjusted to the maximum allowed
Examples
See local_set function.
- local_set(key: int | slice | Tuple[int, slice, Ellipsis], value: int | float | torch.Tensor)
Setitem routing to set data to a local tile (using local indices)
- Parameters:
key (int or slice or Tuple[int,...]) – Indices of the tile(s) desired. If the stop index of a slice is larger than the end, it will be adjusted to the maximum allowed
value (torch.Tensor or int or float) – Data to be written to the tile
Examples
>>> a = ht.zeros((11, 10), split=0)
>>> a_tiles = tiling.SquareDiagTiles(a, tiles_per_proc=2)  # type: tiling.SquareDiagTiles
>>> local = a_tiles.local_get(key=slice(None))
>>> a_tiles.local_set(key=slice(None), value=torch.arange(local.numel()).reshape(local.shape))
>>> print(a.larray)
[0/1] tensor([[ 0.,  1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9.],
[0/1]         [10., 11., 12., 13., 14., 15., 16., 17., 18., 19.],
[0/1]         [20., 21., 22., 23., 24., 25., 26., 27., 28., 29.],
[0/1]         [30., 31., 32., 33., 34., 35., 36., 37., 38., 39.],
[0/1]         [40., 41., 42., 43., 44., 45., 46., 47., 48., 49.],
[0/1]         [50., 51., 52., 53., 54., 55., 56., 57., 58., 59.]])
[1/1] tensor([[ 0.,  1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9.],
[1/1]         [10., 11., 12., 13., 14., 15., 16., 17., 18., 19.],
[1/1]         [20., 21., 22., 23., 24., 25., 26., 27., 28., 29.],
[1/1]         [30., 31., 32., 33., 34., 35., 36., 37., 38., 39.],
[1/1]         [40., 41., 42., 43., 44., 45., 46., 47., 48., 49.]])
>>> a.lloc[:] = 0
>>> a_tiles.local_set(key=(0, 2), value=10)
[0/1] tensor([[ 0.,  0.,  0.,  0.,  0.,  0., 10., 10.,  0.,  0.],
[0/1]         [ 0.,  0.,  0.,  0.,  0.,  0., 10., 10.,  0.,  0.],
[0/1]         [ 0.,  0.,  0.,  0.,  0.,  0., 10., 10.,  0.,  0.],
[0/1]         [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
[0/1]         [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
[0/1]         [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.]])
[1/1] tensor([[ 0.,  0.,  0.,  0.,  0.,  0., 10., 10.,  0.,  0.],
[1/1]         [ 0.,  0.,  0.,  0.,  0.,  0., 10., 10.,  0.,  0.],
[1/1]         [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
[1/1]         [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
[1/1]         [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.]])
>>> a_tiles.local_set(key=(slice(None), 1), value=10)
[0/1] tensor([[ 0.,  0.,  0., 10., 10., 10.,  0.,  0.,  0.,  0.],
[0/1]         [ 0.,  0.,  0., 10., 10., 10.,  0.,  0.,  0.,  0.],
[0/1]         [ 0.,  0.,  0., 10., 10., 10.,  0.,  0.,  0.,  0.],
[0/1]         [ 0.,  0.,  0., 10., 10., 10.,  0.,  0.,  0.,  0.],
[0/1]         [ 0.,  0.,  0., 10., 10., 10.,  0.,  0.,  0.,  0.],
[0/1]         [ 0.,  0.,  0., 10., 10., 10.,  0.,  0.,  0.,  0.]])
[1/1] tensor([[ 0.,  0.,  0., 10., 10., 10.,  0.,  0.,  0.,  0.],
[1/1]         [ 0.,  0.,  0., 10., 10., 10.,  0.,  0.,  0.,  0.],
[1/1]         [ 0.,  0.,  0., 10., 10., 10.,  0.,  0.,  0.,  0.],
[1/1]         [ 0.,  0.,  0., 10., 10., 10.,  0.,  0.,  0.,  0.],
[1/1]         [ 0.,  0.,  0., 10., 10., 10.,  0.,  0.,  0.,  0.]])
- local_to_global(key: int | slice | Tuple[int, slice, Ellipsis], rank: int) Tuple[int, slice, Ellipsis]
Convert local indices to global indices
- Parameters:
key (int or slice or Tuple or List) – Indices of the tile(s) desired. If the stop index of a slice is larger than the end, it will be adjusted to the maximum allowed
rank (int) – Process rank
Examples
>>> a = ht.zeros((11, 10), split=0)
>>> a_tiles = tiling.SquareDiagTiles(a, tiles_per_proc=2)  # type: tiling.SquareDiagTiles
>>> rank = a.comm.rank
>>> print(a_tiles.local_to_global(key=(slice(None), 1), rank=rank))
[0/1] (slice(0, 2, None), 1)
[1/1] (slice(2, 4, None), 1)
>>> print(a_tiles.local_to_global(key=(0, 2), rank=0))
[0/1] (0, 2)
[1/1] (0, 2)
>>> print(a_tiles.local_to_global(key=(0, 2), rank=1))
[0/1] (2, 2)
[1/1] (2, 2)
- match_tiles(tiles_to_match: SquareDiagTiles) None
Function to match the tile sizes of another tile map
- Parameters:
tiles_to_match (SquareDiagTiles) – The tiles which should be matched by the current tiling scheme
Notes
This function overwrites most, if not all, of the elements of this class. It is intended for use with the Q matrix, to match the tiling of a/R. For this to work properly, the 0th dimension of both matrices must be equal.
- __setitem__(key: int | slice | Tuple[int, slice, Ellipsis], value: int | float | torch.Tensor) None
Item setter, uses the torch item setter and the getitem routines to set the values of the original array (arr in __init__)
- Parameters:
key (int or slice or Tuple[int,...]) – Tile indices to identify the target tiles
value (int or torch.Tensor) – Values to be set
Example
>>> a = ht.zeros((12, 10), split=0)
>>> a_tiles = tiling.SquareDiagTiles(a, tiles_per_proc=2)  # type: tiling.SquareDiagTiles
>>> a_tiles[0:2, 2] = 11
>>> a_tiles[0, 0] = 22
>>> a_tiles[2] = 33
>>> a_tiles[3, 3] = 44
>>> print(a.larray)
[0/1] tensor([[22., 22., 22.,  0.,  0.,  0., 11., 11.,  0.,  0.],
[0/1]         [22., 22., 22.,  0.,  0.,  0., 11., 11.,  0.,  0.],
[0/1]         [22., 22., 22.,  0.,  0.,  0., 11., 11.,  0.,  0.],
[0/1]         [ 0.,  0.,  0.,  0.,  0.,  0., 11., 11.,  0.,  0.],
[0/1]         [ 0.,  0.,  0.,  0.,  0.,  0., 11., 11.,  0.,  0.],
[0/1]         [ 0.,  0.,  0.,  0.,  0.,  0., 11., 11.,  0.,  0.]])
[1/1] tensor([[33., 33., 33., 33., 33., 33., 33., 33., 33., 33.],
[1/1]         [33., 33., 33., 33., 33., 33., 33., 33., 33., 33.],
[1/1]         [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0., 44., 44.],
[1/1]         [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0., 44., 44.],
[1/1]         [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0., 44., 44.],
[1/1]         [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0., 44., 44.]])
- acosh(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Compute the inverse hyperbolic cosine, element-wise. The result is a DNDarray of the same shape as x. Input elements outside [1., +infinity] are returned as NaN. If out was provided, acosh is a reference to it.
- Parameters:
x (DNDarray) – The value for which to compute the inverse hyperbolic cosine.
out (DNDarray, optional) – A location in which to store the results. If provided, it must have a broadcastable shape. If not provided or set to None, a fresh array is allocated.
Examples
>>> ht.acosh(ht.array([1., 10., 20.]))
DNDarray([0.0000, 2.9932, 3.6883], dtype=ht.float32, device=cpu:0, split=None)
- asinh(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Compute the inverse hyperbolic sine, element-wise. The result is a DNDarray of the same shape as x; asinh is defined for all real inputs. If out was provided, asinh is a reference to it.
- Parameters:
x (DNDarray) – The value for which to compute the inverse hyperbolic sine.
out (DNDarray, optional) – A location in which to store the results. If provided, it must have a broadcastable shape. If not provided or set to None, a fresh array is allocated.
Examples
>>> ht.asinh(ht.array([-10., 0., 10.]))
DNDarray([-2.9982,  0.0000,  2.9982], dtype=ht.float32, device=cpu:0, split=None)
- atanh(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Compute the inverse hyperbolic tangent, element-wise. The result is a DNDarray of the same shape as x. Input elements outside [-1., 1.] are returned as NaN. If out was provided, atanh is a reference to it.
- Parameters:
x (DNDarray) – The value for which to compute the inverse hyperbolic tangent.
out (DNDarray, optional) – A location in which to store the results. If provided, it must have a broadcastable shape. If not provided or set to None, a fresh array is allocated.
Examples
>>> ht.atanh(ht.array([-1., -0., 0.83]))
DNDarray([   -inf, -0.0000,  1.1881], dtype=ht.float32, device=cpu:0, split=None)
- arccos(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Compute the trigonometric arccos, element-wise. The result is a DNDarray of the same shape as x. Input elements outside [-1., 1.] are returned as NaN. If out was provided, arccos is a reference to it.
- Parameters:
x (DNDarray) – The value for which to compute the arccos.
out (DNDarray, optional) – A location in which to store the results. If provided, it must have a broadcastable shape. If not provided or set to None, a fresh array is allocated.
Examples
>>> ht.arccos(ht.array([-1., -0., 0.83]))
DNDarray([3.1416, 1.5708, 0.5917], dtype=ht.float32, device=cpu:0, split=None)
- arcsin(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Compute the trigonometric arcsin, element-wise. The result is a DNDarray of the same shape as x. Input elements outside [-1., 1.] are returned as NaN. If out was provided, arcsin is a reference to it.
- Parameters:
x (DNDarray) – The value for which to compute the arcsin.
out (DNDarray, optional) – A location in which to store the results. If provided, it must have a broadcastable shape. If not provided or set to None, a fresh array is allocated.
Examples
>>> ht.arcsin(ht.array([-1., -0., 0.83]))
DNDarray([-1.5708, -0.0000,  0.9791], dtype=ht.float32, device=cpu:0, split=None)
- arctan(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Compute the trigonometric arctan, element-wise. The result is a DNDarray of the same shape as x; arctan is defined for all real inputs. If out was provided, arctan is a reference to it.
- Parameters:
x (DNDarray) – The value for which to compute the arctan.
out (DNDarray, optional) – A location in which to store the results. If provided, it must have a broadcastable shape. If not provided or set to None, a fresh array is allocated.
Examples
>>> ht.arctan(ht.arange(-6, 7, 2))
DNDarray([-1.4056, -1.3258, -1.1071,  0.0000,  1.1071,  1.3258,  1.4056], dtype=ht.float32, device=cpu:0, split=None)
- arctan2(x1: heat.core.dndarray.DNDarray, x2: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Element-wise arc tangent of x1/x2, choosing the quadrant correctly. Returns a new DNDarray with the signed angles in radians between the vector (x2, x1) and the vector (1, 0).
- Parameters:
x1 (DNDarray) – y-coordinates.
x2 (DNDarray) – x-coordinates.
Examples
>>> x = ht.array([-1, +1, +1, -1])
>>> y = ht.array([-1, -1, +1, +1])
>>> ht.arctan2(y, x) * 180 / ht.pi
DNDarray([-135.0000,  -45.0000,   45.0000,  135.0000], dtype=ht.float64, device=cpu:0, split=None)
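The quadrant handling is the same as in the standard library's math.atan2: the signs of both arguments together determine the quadrant of the result, which a plain arctan of the ratio x1/x2 cannot recover. The same four angles from the example can be reproduced scalar-by-scalar:

```python
import math

# Quadrant-correct arc tangent via math.atan2: the signs of y (x1) and
# x (x2) place the angle in the correct quadrant, matching the example.
angles = [math.degrees(math.atan2(y, x))
          for y, x in [(-1, -1), (-1, 1), (1, 1), (1, -1)]]
```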
- cos(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Return the trigonometric cosine, element-wise.
- Parameters:
x (ht.DNDarray) – The value for which to compute the trigonometric cosine.
out (ht.DNDarray or None, optional) – A location in which to store the results. If provided, it must have a broadcastable shape. If not provided or set to None, a fresh tensor is allocated.
Examples
>>> ht.cos(ht.arange(-6, 7, 2))
DNDarray([ 0.9602, -0.6536, -0.4161,  1.0000, -0.4161, -0.6536,  0.9602], dtype=ht.float32, device=cpu:0, split=None)
- cosh(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Compute the hyperbolic cosine, element-wise. The result is a DNDarray of the same shape as x; cosh is defined for all real inputs. If out was provided, cosh is a reference to it.
- Parameters:
x (DNDarray) – The value for which to compute the hyperbolic cosine.
out (DNDarray, optional) – A location in which to store the results. If provided, it must have a broadcastable shape. If not provided or set to None, a fresh array is allocated.
Examples
>>> ht.cosh(ht.arange(-6, 7, 2))
DNDarray([201.7156,  27.3082,   3.7622,   1.0000,   3.7622,  27.3082, 201.7156], dtype=ht.float32, device=cpu:0, split=None)
- deg2rad(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Convert angles from degrees to radians.
- Parameters:
Examples
>>> ht.deg2rad(ht.array([0., 20., 45., 78., 94., 120., 180., 270., 311.]))
DNDarray([0.0000, 0.3491, 0.7854, 1.3614, 1.6406, 2.0944, 3.1416, 4.7124, 5.4280], dtype=ht.float32, device=cpu:0, split=None)
- degrees(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Convert angles from radians to degrees.
- Parameters:
Examples
>>> ht.degrees(ht.array([0., 0.2, 0.6, 0.9, 1.2, 2.7, 3.14]))
DNDarray([  0.0000,  11.4592,  34.3775,  51.5662,  68.7549, 154.6986, 179.9088], dtype=ht.float32, device=cpu:0, split=None)
- rad2deg(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Convert angles from radians to degrees.
- Parameters:
Examples
>>> ht.rad2deg(ht.array([0.,0.2,0.6,0.9,1.2,2.7,3.14])) DNDarray([ 0.0000, 11.4592, 34.3775, 51.5662, 68.7549, 154.6986, 179.9088], dtype=ht.float32, device=cpu:0, split=None)
- radians(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Convert angles from degrees to radians.
- Parameters:
Examples
>>> ht.radians(ht.array([0., 20., 45., 78., 94., 120., 180., 270., 311.])) DNDarray([0.0000, 0.3491, 0.7854, 1.3614, 1.6406, 2.0944, 3.1416, 4.7124, 5.4280], dtype=ht.float32, device=cpu:0, split=None)
- sin(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Compute the trigonometric sine, element-wise. The result is a DNDarray of the same shape as x. If out was provided, the result is a reference to it.
- Parameters:
Examples
>>> ht.sin(ht.arange(-6, 7, 2)) DNDarray([ 0.2794, 0.7568, -0.9093, 0.0000, 0.9093, -0.7568, -0.2794], dtype=ht.float32, device=cpu:0, split=None)
- sinh(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Compute the hyperbolic sine, element-wise. The result is a DNDarray of the same shape as x. If out was provided, the result is a reference to it.
- Parameters:
Examples
>>> ht.sinh(ht.arange(-6, 7, 2)) DNDarray([-201.7132, -27.2899, -3.6269, 0.0000, 3.6269, 27.2899, 201.7132], dtype=ht.float32, device=cpu:0, split=None)
- tan(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Compute the tangent, element-wise. The result is a DNDarray of the same shape as x, equivalent to sin(x)/cos(x) element-wise. If out was provided, the result is a reference to it.
- Parameters:
Examples
>>> ht.tan(ht.arange(-6, 7, 2)) DNDarray([ 0.2910, -1.1578, 2.1850, 0.0000, -2.1850, 1.1578, -0.2910], dtype=ht.float32, device=cpu:0, split=None)
- tanh(x: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Compute the hyperbolic tangent, element-wise. The result is a DNDarray of the same shape as x. If out was provided, the result is a reference to it.
- Parameters:
Examples
>>> ht.tanh(ht.arange(-6, 7, 2)) DNDarray([-1.0000, -0.9993, -0.9640, 0.0000, 0.9640, 0.9993, 1.0000], dtype=ht.float32, device=cpu:0, split=None)
- class datatype
Defines the basic Heat data types in the hierarchy shown below. The design is inspired by the Python package NumPy. Within the type hierarchy, xx denotes the bit-width.
generic
- torch_type() NotImplemented
Torch Datatype
- char() NotImplemented
Datatype short-hand name
- class number
Bases:
datatype
The general number datatype. Integer and Float classes will inherit from this.
- torch_type() NotImplemented
Torch Datatype
- char() NotImplemented
Datatype short-hand name
- class integer
Bases:
number
The general integer datatype. Specific integer classes inherit from this.
- torch_type() NotImplemented
Torch Datatype
- char() NotImplemented
Datatype short-hand name
- class signedinteger
Bases:
integer
The general signed integer datatype.
- torch_type() NotImplemented
Torch Datatype
- char() NotImplemented
Datatype short-hand name
- class unsignedinteger
Bases:
integer
The general unsigned integer datatype
- torch_type() NotImplemented
Torch Datatype
- char() NotImplemented
Datatype short-hand name
- class bool
Bases:
datatype
The boolean datatype in Heat
- torch_type() torch.dtype
Torch Datatype
- char() str
Datatype short-hand name
- class floating
Bases:
number
The general floating point datatype class.
- torch_type() NotImplemented
Torch Datatype
- char() NotImplemented
Datatype short-hand name
- class int8
Bases:
signedinteger
8 bit signed integer datatype
- torch_type() torch.dtype
Torch Datatype
- char() str
Datatype short-hand name
- class int16
Bases:
signedinteger
16 bit signed integer datatype
- torch_type() torch.dtype
Torch Datatype
- char() str
Datatype short-hand name
- class int32
Bases:
signedinteger
32 bit signed integer datatype
- torch_type() torch.dtype
Torch Datatype
- char() str
Datatype short-hand name
- class int64
Bases:
signedinteger
64 bit signed integer datatype
- torch_type() torch.dtype
Torch Datatype
- char() str
Datatype short-hand name
- class uint8
Bases:
unsignedinteger
8 bit unsigned integer datatype
- torch_type() torch.dtype
Torch Datatype
- char() str
Datatype short-hand name
- class float32
Bases:
floating
The 32 bit floating point datatype
- torch_type() torch.dtype
Torch Datatype
- char() str
Datatype short-hand name
- class float64
Bases:
floating
The 64 bit floating point datatype
- torch_type() torch.dtype
Torch Datatype
- char() str
Datatype short-hand name
- class flexible
Bases:
datatype
The general flexible datatype. Currently unused, placeholder for characters
- torch_type() NotImplemented
Torch Datatype
- char() NotImplemented
Datatype short-hand name
- can_cast(from_: str | Type[datatype] | Any, to: str | Type[datatype] | Any, casting: str = 'intuitive') bool
Returns True if a cast between the data types can occur according to the casting rule. If from is a scalar or array scalar, also returns True if the scalar value can be cast without overflow or truncation to the target type.
- Parameters:
from (Union[str, Type[datatype], Any]) – Scalar, data type or type specifier to cast from.
to (Union[str, Type[datatype], Any]) – Target type to cast to.
casting (str, optional) –
options: {“no”, “safe”, “same_kind”, “unsafe”, “intuitive”}, optional Controls the way the cast is evaluated
”no” the types may not be cast, i.e. they need to be identical
”safe” allows only casts that can preserve values with complete precision
”same_kind” allows safe casts plus down-casts within the same type family, e.g. int32 -> int8
”unsafe” means any conversion can be performed, i.e. this casting is always possible
”intuitive” allows all of the casts of safe plus casting from int32 to float32
- Raises:
TypeError – If the types are not understood or casting is not a string
ValueError – If the casting rule is not understood
Examples
>>> ht.can_cast(ht.int32, ht.int64) True >>> ht.can_cast(ht.int64, ht.float64) True >>> ht.can_cast(ht.int16, ht.int8) False >>> ht.can_cast(1, ht.float64) True >>> ht.can_cast(2.0e200, "u1") False >>> ht.can_cast('i8', 'i4', 'no') False >>> ht.can_cast("i8", "i4", "safe") False >>> ht.can_cast("i8", "i4", "same_kind") True >>> ht.can_cast("i8", "i4", "unsafe") True
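Apart from the Heat-specific “intuitive” mode, these rules largely mirror NumPy's np.can_cast, so a plain-NumPy sketch can be used to sanity-check the modes above (the behaviour shown is NumPy's and may differ from Heat in edge cases):

```python
import numpy as np

# Widening within a type family is always allowed.
print(np.can_cast(np.int32, np.int64))       # True
# Narrowing is rejected under "safe" ...
print(np.can_cast("i8", "i4", "safe"))       # False
# ... but allowed under "same_kind" (same type family) ...
print(np.can_cast("i8", "i4", "same_kind"))  # True
# ... and "unsafe" permits any conversion.
print(np.can_cast("i8", "i4", "unsafe"))     # True
```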
- canonical_heat_type(a_type: str | Type[datatype] | Any) Type[datatype]
Canonicalize the builtin Python type, type string or HeAT type into a canonical HeAT type.
- Parameters:
a_type (type, str, datatype) – A description of the type. It may be a Python builtin type, a string, or a HeAT type. In the first two cases the corresponding mapped type is looked up; in the latter the type is simply returned.
- Raises:
TypeError – If the type cannot be converted.
- heat_type_is_exact(ht_dtype: Type[datatype]) bool
Check if the HeAT type is an exact type, i.e. an integer type. Returns True if ht_dtype is an integer type, False otherwise.
- Parameters:
ht_dtype (Type[datatype]) – HeAT type to check
- heat_type_is_inexact(ht_dtype: Type[datatype]) bool
Check if the HeAT type is an inexact type, i.e. a floating point type. Returns True if ht_dtype is a floating point type, False otherwise.
- Parameters:
ht_dtype (Type[datatype]) – HeAT type to check
- iscomplex(x: dndarray.DNDarray) dndarray.DNDarray
Test element-wise if input is complex.
- Parameters:
x (DNDarray) – The input DNDarray
Examples
>>> ht.iscomplex(ht.array([1+1j, 1])) DNDarray([ True, False], dtype=ht.bool, device=cpu:0, split=None)
- isreal(x: dndarray.DNDarray) dndarray.DNDarray
Test element-wise if input is real-valued.
- Parameters:
x (DNDarray) – The input DNDarray
Examples
>>> ht.isreal(ht.array([1+1j, 1])) DNDarray([False,  True], dtype=ht.bool, device=cpu:0, split=None)
- issubdtype(arg1: str | Type[datatype] | Any, arg2: str | Type[datatype] | Any) bool
Returns True if the first argument is a typecode lower than or equal to the second in the type hierarchy.
- Parameters:
arg1 (type, str, ht.dtype) – A description representing the type. It may be a Python builtin type, a string or a HeAT type.
arg2 (type, str, ht.dtype) – A description representing the type. It may be a Python builtin type, a string or a HeAT type.
Examples
>>> ints = ht.array([1, 2, 3], dtype=ht.int32) >>> ht.issubdtype(ints.dtype, ht.integer) True >>> ht.issubdtype(ints.dtype, ht.floating) False >>> ht.issubdtype(ht.float64, ht.float32) False >>> ht.issubdtype('i', ht.integer) True
- heat_type_of(obj: str | Type[datatype] | Any | Iterable[str, Type[datatype], Any]) Type[datatype]
Returns the corresponding HeAT data type of given object, i.e. scalar, array or iterable. Attempts to determine the canonical data type based on the following priority list:
dtype property
type(obj)
type(obj[0])
- Parameters:
obj (scalar or DNDarray or iterable) – The object for which to infer the type.
- Raises:
TypeError – If the object’s type cannot be inferred.
- promote_types(type1: str | Type[datatype] | Any, type2: str | Type[datatype] | Any) Type[datatype]
Returns the data type with the smallest size and smallest scalar kind to which both type1 and type2 may be intuitively cast, where intuitive casting refers to maintaining the same bit length if possible. This function is symmetric.
- Parameters:
Examples
>>> ht.promote_types(ht.uint8, ht.uint8) <class 'heat.core.types.uint8'> >>> ht.promote_types(ht.int32, ht.float32) <class 'heat.core.types.float32'> >>> ht.promote_types(ht.int8, ht.uint8) <class 'heat.core.types.int16'> >>> ht.promote_types("i8", "f4") <class 'heat.core.types.float64'>
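For comparison, NumPy's np.promote_types implements a similar promotion, except that Heat's “intuitive” rule keeps the bit length where possible: Heat promotes int32 with float32 to float32, whereas NumPy widens to float64. A sketch of the overlapping cases (NumPy behaviour shown):

```python
import numpy as np

print(np.promote_types(np.uint8, np.uint8))  # uint8, as in Heat
print(np.promote_types(np.int8, np.uint8))   # int16, as in Heat
print(np.promote_types("i8", "f4"))          # float64, as in Heat
# Difference: NumPy widens here, Heat's "intuitive" rule keeps 32 bits.
print(np.promote_types(np.int32, np.float32))  # float64 in NumPy, float32 in Heat
```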
- result_type(*arrays_and_types: Tuple[dndarray.DNDarray | Type[datatype] | Any]) Type[datatype]
Returns the data type that results from type promotions rules performed in an arithmetic operation.
- Parameters:
arrays_and_types (List of arrays and types) – Input arrays, types or numbers of the operation.
Examples
>>> ht.result_type(ht.array([1], dtype=ht.int32), 1) ht.int32 >>> ht.result_type(ht.float32, ht.array(1, dtype=ht.int8)) ht.float32 >>> ht.result_type("i8", "f4") ht.float64
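When only dtypes are involved, result_type reduces to pairwise type promotion; NumPy's np.result_type follows the same convention for these cases:

```python
import numpy as np

# With pure dtypes, result_type is plain promotion.
print(np.result_type(np.int8, np.float32))  # float32
print(np.result_type("i8", "f4"))           # float64
```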
- class complex64
Bases:
complex
The complex 64 bit datatype. Both real and imaginary are 32 bit floating point
- torch_type()
Torch Datatype
- char()
Datatype short-hand name
- class complex128
Bases:
complex
The complex 128 bit datatype. Both real and imaginary are 64 bit floating point
- torch_type()
Torch Datatype
- char()
Datatype short-hand name
- convolve(a: heat.core.dndarray.DNDarray, v: heat.core.dndarray.DNDarray, mode: str = 'full') heat.core.dndarray.DNDarray
Returns the discrete, linear convolution of two one-dimensional `DNDarray`s or scalars.
- Parameters:
a (DNDarray or scalar) – One-dimensional signal DNDarray of shape (N,) or scalar.
v (DNDarray or scalar) – One-dimensional filter weight DNDarray of shape (M,) or scalar.
mode (str) –
Can be ‘full’, ‘valid’, or ‘same’. Default is ‘full’.
- ‘full’: Returns the convolution at each point of overlap, with an output shape of (N+M-1,). At the end-points of the convolution, the signals do not overlap completely, and boundary effects may be seen.
- ‘same’: Returns output of length N. Boundary effects are still visible. This mode is not supported for even-sized filter weights.
- ‘valid’: Returns output of length N-M+1. The convolution product is only given for points where the signals overlap completely. Values outside the signal boundary have no effect.
Examples
Note how the convolution operator flips the second array before “sliding” the two across one another:
>>> a = ht.ones(10)
>>> v = ht.arange(3).astype(ht.float)
>>> ht.convolve(a, v, mode='full')
DNDarray([0., 1., 3., 3., 3., 3., 3., 3., 3., 3., 3., 2.])
>>> ht.convolve(a, v, mode='same')
DNDarray([1., 3., 3., 3., 3., 3., 3., 3., 3., 3.])
>>> ht.convolve(a, v, mode='valid')
DNDarray([3., 3., 3., 3., 3., 3., 3., 3.])
>>> a = ht.ones(10, split=0)
>>> v = ht.arange(3, split=0).astype(ht.float)
>>> ht.convolve(a, v, mode='valid')
DNDarray([3., 3., 3., 3., 3., 3., 3., 3.])
[0/3] DNDarray([3., 3., 3.])
[1/3] DNDarray([3., 3., 3.])
[2/3] DNDarray([3., 3.])
>>> a = ht.ones(10, split=0)
>>> v = ht.arange(3, split=0)
>>> ht.convolve(a, v)
DNDarray([0., 1., 3., 3., 3., 3., 3., 3., 3., 3., 3., 2.], dtype=ht.float32, device=cpu:0, split=0)
[0/3] DNDarray([0., 1., 3., 3.])
[1/3] DNDarray([3., 3., 3., 3.])
[2/3] DNDarray([3., 3., 3., 2.])
- cross(a: heat.core.dndarray.DNDarray, b: heat.core.dndarray.DNDarray, axisa: int = -1, axisb: int = -1, axisc: int = -1, axis: int = -1) heat.core.dndarray.DNDarray
Returns the cross product. 2D vectors will be converted to 3D.
- Parameters:
a (DNDarray) – First input array.
b (DNDarray) – Second input array. Must have the same shape as ‘a’.
axisa (int) – Axis of a that defines the vector(s). By default, the last axis.
axisb (int) – Axis of b that defines the vector(s). By default, the last axis.
axisc (int) – Axis of the output containing the cross product vector(s). By default, the last axis.
axis (int) – Axis that defines the vectors for which to compute the cross product. Overrides axisa, axisb and axisc. Default: -1
- Raises:
ValueError – If the two input arrays don’t match in shape, split, device, or comm. If the vectors are along the split axis.
TypeError – If ‘axis’ is not an integer.
Examples
>>> a = ht.eye(3) >>> b = ht.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]]) >>> ht.cross(a, b) DNDarray([[0., 0., 1.], [1., 0., 0.], [0., 1., 0.]], dtype=ht.float32, device=cpu:0, split=None)
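The example can be reproduced with NumPy's np.cross, which follows the same row-wise convention (each row of the output is the cross product of the matching rows of the inputs):

```python
import numpy as np

a = np.eye(3)
b = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
c = np.cross(a, b)  # row-wise cross products
print(c)
# e_i x e_j yields the remaining basis vector (cyclically):
# [[0. 0. 1.]
#  [1. 0. 0.]
#  [0. 1. 0.]]
```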
- det(a: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Returns the determinant of a square matrix.
- Parameters:
a (DNDarray) – A square matrix or a stack of matrices. Shape = (…,M,M)
- Raises:
RuntimeError – If the dtype of ‘a’ is not floating-point.
RuntimeError – If a.ndim < 2 or if the length of the last two dimensions is not the same.
Examples
>>> a = ht.array([[-2, -1, 2], [2, 1, 4], [-3, 3, -1]], dtype=ht.float64) >>> ht.linalg.det(a) DNDarray(54., dtype=ht.float64, device=cpu:0, split=None)
- dot(a: heat.core.dndarray.DNDarray, b: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray | float
Returns the dot product of two DNDarrays. Specifically:
If both a and b are 1-D arrays, it is the inner product of the vectors.
If both a and b are 2-D arrays, it is matrix multiplication; using matmul or a @ b is preferred.
If either a or b is 0-D (a scalar), it is equivalent to multiplication; using multiply(a, b) or a * b is preferred.
- Parameters:
See also
vecdot
Supports (vector) dot along an axis.
- inv(a: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Computes the multiplicative inverse of a square matrix.
- Parameters:
a (DNDarray) – Square matrix of floating-point data type or a stack of square matrices. Shape = (…,M,M)
- Raises:
RuntimeError – If the inverse does not exist.
RuntimeError – If the dtype is not floating-point
RuntimeError – If a is not at least two-dimensional or if the lengths of the last two dimensions are not the same.
Examples
>>> a = ht.array([[1., 2], [2, 3]]) >>> ht.linalg.inv(a) DNDarray([[-3., 2.], [ 2., -1.]], dtype=ht.float32, device=cpu:0, split=None)
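The result above can be verified on a single process with NumPy: for this 2x2 matrix det(a) = 1*3 - 2*2 = -1, and multiplying the inverse back gives the identity:

```python
import numpy as np

a = np.array([[1., 2.], [2., 3.]])
a_inv = np.linalg.inv(a)  # adj(a) / det(a), with det(a) = -1
print(a_inv)
# [[-3.  2.]
#  [ 2. -1.]]
print(np.allclose(a @ a_inv, np.eye(2)))  # True
```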
- matmul(a: heat.core.dndarray.DNDarray, b: heat.core.dndarray.DNDarray, allow_resplit: bool = False) heat.core.dndarray.DNDarray
Matrix multiplication of two DNDarrays: a @ b = c. Returns a tensor with the result of a @ b. The split dimension of the returned array is typically the split dimension of a. However, if a.split = None then c.split will be set to the split dimension of b. If both are None then c.split is also None.
- Parameters:
a (DNDarray) – 2 dimensional: \(L \times P\)
b (DNDarray) – 2 dimensional: \(P \times Q\)
allow_resplit (bool, optional) – Whether to distribute a in the case that both a.split is None and b.split is None. Default is False. If True and both operands are not split, a will be distributed in-place along axis 0.
Notes
If a is a split vector then the returned vector will be of shape \(1 \times Q\) and will be split in the 1st dimension.
If b is a vector and either a or b is split, then the returned vector will be of shape \(L \times 1\) and will be split in the 0th dimension.
References
[1] R. Gu, et al., “Improving Execution Concurrency of Large-scale Matrix Multiplication on Distributed Data-parallel Platforms,” IEEE Transactions on Parallel and Distributed Systems, vol 28, no. 9. 2017.
[2] S. Ryu and D. Kim, “Parallel Huge Matrix Multiplication on a Cluster with GPGPU Accelerators,” 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), Vancouver, BC, 2018, pp. 877-882.
Example
>>> a = ht.ones((n, m), split=1)
>>> a[0] = ht.arange(1, m + 1)
>>> a[:, -1] = ht.arange(1, n + 1)
>>> a.larray
[0/1] tensor([[1., 2.], [1., 1.], [1., 1.], [1., 1.], [1., 1.]])
[1/1] tensor([[3., 1.], [1., 2.], [1., 3.], [1., 4.], [1., 5.]])
>>> b = ht.ones((j, k), split=0)
>>> b[0] = ht.arange(1, k + 1)
>>> b[:, 0] = ht.arange(1, j + 1)
>>> b.larray
[0/1] tensor([[1., 2., 3., 4., 5., 6., 7.], [2., 1., 1., 1., 1., 1., 1.]])
[1/1] tensor([[3., 1., 1., 1., 1., 1., 1.], [4., 1., 1., 1., 1., 1., 1.]])
>>> linalg.matmul(a, b).larray
[0/1] tensor([[18., 8., 9., 10.], [14., 6., 7., 8.], [18., 7., 8., 9.], [22., 8., 9., 10.], [26., 9., 10., 11.]])
[1/1] tensor([[11., 12., 13.], [ 9., 10., 11.], [10., 11., 12.], [11., 12., 13.], [12., 13., 14.]])
- matrix_norm(x: heat.core.dndarray.DNDarray, axis: Tuple[int, int] | None = None, keepdims: bool = False, ord: int | str | None = None) heat.core.dndarray.DNDarray
Computes the matrix norm of an array.
- Parameters:
x (DNDarray) – Input array
axis (tuple, optional) – Both axes of the matrix. If None ‘x’ must be a matrix. Default: None
keepdims (bool, optional) – Retains the reduced dimension when True. Default: False
ord (int, 'fro', 'nuc', optional) – The matrix norm order to compute. If None the Frobenius norm (‘fro’) is used. Default: None
See also
norm
Computes the vector or matrix norm of an array.
vector_norm
Computes the vector norm of an array.
Notes
The following norms are supported:
ord
norm for matrices
None
Frobenius norm
‘fro’
Frobenius norm
‘nuc’
nuclear norm
inf
max(sum(abs(x), axis=1))
-inf
min(sum(abs(x), axis=1))
1
max(sum(abs(x), axis=0))
-1
min(sum(abs(x), axis=0))
The following matrix norms are currently not supported:
ord
norm for matrices
2
largest singular value
-2
smallest singular value
- Raises:
TypeError – If axis is not a 2-tuple
ValueError – If an invalid matrix norm is given or ‘x’ is a vector.
Examples
>>> ht.matrix_norm(ht.array([[1,2],[3,4]])) DNDarray([[5.4772]], dtype=ht.float64, device=cpu:0, split=None) >>> ht.matrix_norm(ht.array([[1,2],[3,4]]), keepdims=True, ord=-1) DNDarray([[4.]], dtype=ht.float64, device=cpu:0, split=None)
- norm(x: heat.core.dndarray.DNDarray, axis: int | Tuple[int, int] | None = None, keepdims: bool = False, ord: int | float | str | None = None) heat.core.dndarray.DNDarray
Return the vector or matrix norm of an array.
- Parameters:
x (DNDarray) – Input vector
axis (int, tuple, optional) – Axes along which to compute the norm. If an integer, vector norm is used. If a 2-tuple, matrix norm is used. If None, it is inferred from the dimension of the array. Default: None
keepdims (bool, optional) – Retains the reduced dimension when True. Default: False
ord (int, float, inf, -inf, 'fro', 'nuc') – The norm order to compute. See Notes
See also
vector_norm
Computes the vector norm of an array.
matrix_norm
Computes the matrix norm of an array.
Notes
The following norms are supported:
ord
norm for matrices
norm for vectors
None
Frobenius norm
L2-norm (Euclidean)
‘fro’
Frobenius norm
–
‘nuc’
nuclear norm
–
inf
max(sum(abs(x), axis=1))
max(abs(x))
-inf
min(sum(abs(x), axis=1))
min(abs(x))
0
–
sum(x != 0)
1
max(sum(abs(x), axis=0))
L1-norm (Manhattan)
-1
min(sum(abs(x), axis=0))
1./sum(1./abs(a))
2
–
L2-norm (Euclidean)
-2
–
1./sqrt(sum(1./abs(a)**2))
other
–
sum(abs(x)**ord)**(1./ord)
The following matrix norms are currently not supported:
ord
norm for matrices
2
largest singular value
-2
smallest singular value
- Raises:
ValueError – If ‘axis’ has more than 2 elements
Examples
>>> from heat import linalg as LA
>>> a = ht.arange(9, dtype=ht.float) - 4
>>> a
DNDarray([-4., -3., -2., -1., 0., 1., 2., 3., 4.], dtype=ht.float32, device=cpu:0, split=None)
>>> b = a.reshape((3, 3))
>>> b
DNDarray([[-4., -3., -2.], [-1., 0., 1.], [ 2., 3., 4.]], dtype=ht.float32, device=cpu:0, split=None)
>>> LA.norm(a)
DNDarray(7.7460, dtype=ht.float32, device=cpu:0, split=None)
>>> LA.norm(b)
DNDarray(7.7460, dtype=ht.float32, device=cpu:0, split=None)
>>> LA.norm(b, ord='fro')
DNDarray(7.7460, dtype=ht.float32, device=cpu:0, split=None)
>>> LA.norm(a, float('inf'))
DNDarray([4.], dtype=ht.float32, device=cpu:0, split=None)
>>> LA.norm(b, ht.inf)
DNDarray([9.], dtype=ht.float32, device=cpu:0, split=None)
>>> LA.norm(a, -ht.inf)
DNDarray([0.], dtype=ht.float32, device=cpu:0, split=None)
>>> LA.norm(b, -ht.inf)
DNDarray([2.], dtype=ht.float32, device=cpu:0, split=None)
>>> LA.norm(a, 1)
DNDarray([20.], dtype=ht.float32, device=cpu:0, split=None)
>>> LA.norm(b, 1)
DNDarray([7.], dtype=ht.float32, device=cpu:0, split=None)
>>> LA.norm(a, -1)
DNDarray([0.], dtype=ht.float32, device=cpu:0, split=None)
>>> LA.norm(b, -1)
DNDarray([6.], dtype=ht.float32, device=cpu:0, split=None)
>>> LA.norm(a, 2)
DNDarray(7.7460, dtype=ht.float32, device=cpu:0, split=None)
>>> LA.norm(a, -2)
DNDarray([0.], dtype=ht.float32, device=cpu:0, split=None)
>>> LA.norm(a, 3)
DNDarray([5.8480], dtype=ht.float32, device=cpu:0, split=None)
>>> LA.norm(a, -3)
DNDarray([0.], dtype=ht.float32, device=cpu:0, split=None)
>>> c = ht.array([[ 1, 2, 3], [-1, 1, 4]])
>>> LA.norm(c, axis=0)
DNDarray([1.4142, 2.2361, 5.0000], dtype=ht.float64, device=cpu:0, split=None)
>>> LA.norm(c, axis=1)
DNDarray([3.7417, 4.2426], dtype=ht.float64, device=cpu:0, split=None)
>>> LA.norm(c, axis=1, ord=1)
DNDarray([6., 6.], dtype=ht.float64, device=cpu:0, split=None)
>>> m = ht.arange(8).reshape(2, 2, 2)
>>> LA.norm(m, axis=(1, 2))
DNDarray([ 3.7417, 11.2250], dtype=ht.float32, device=cpu:0, split=None)
>>> LA.norm(m[0, :, :]), LA.norm(m[1, :, :])
(DNDarray(3.7417, dtype=ht.float32, device=cpu:0, split=None), DNDarray(11.2250, dtype=ht.float32, device=cpu:0, split=None))
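A few of the values above can be cross-checked with np.linalg.norm, which uses the same ord conventions (Heat returns DNDarrays rather than NumPy scalars, but the numbers agree):

```python
import numpy as np

a = np.arange(9, dtype=float) - 4  # [-4., ..., 4.]
b = a.reshape(3, 3)

print(np.linalg.norm(a))             # sqrt(60) ~ 7.7460
print(np.linalg.norm(b, ord="fro"))  # same value: Frobenius == flattened L2
print(np.linalg.norm(a, np.inf))     # max(abs(x)) = 4.0
print(np.linalg.norm(b, np.inf))     # max row sum of abs = 9.0
print(np.linalg.norm(a, 1))          # L1 norm = 20.0
```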
- outer(a: heat.core.dndarray.DNDarray, b: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None, split: int | None = None) heat.core.dndarray.DNDarray
Compute the outer product of two 1-D DNDarrays: \(out(i, j) = a(i) \times b(j)\). Given two vectors, \(a = (a_0, a_1, ..., a_N)\) and \(b = (b_0, b_1, ..., b_M)\), the outer product is:
\begin{pmatrix} a_0 \cdot b_0 & a_0 \cdot b_1 & \dots & a_0 \cdot b_M \\ a_1 \cdot b_0 & a_1 \cdot b_1 & \dots & a_1 \cdot b_M \\ \vdots & \vdots & \ddots & \vdots \\ a_N \cdot b_0 & a_N \cdot b_1 & \dots & a_N \cdot b_M \end{pmatrix}
- Parameters:
a (DNDarray) – 1-dimensional: \(N\) Will be flattened by default if more than 1-D.
b (DNDarray) – 1-dimensional: \(M\) Will be flattened by default if more than 1-D.
out (DNDarray, optional) – 2-dimensional: \(N \times M\) A location where the result is stored
split (int, optional) – Split dimension of the resulting DNDarray. Can be 0, 1, or None. This is only relevant if the calculations are memory-distributed. Default is split=0 (see Notes).
Notes
Parallel implementation of outer product, assumes arrays are dense. In the classical (dense) case, one of the two arrays needs to be communicated around the processes in a ring.
Sending b around in a ring results in outer being split along the rows (outer.split = 0).
Sending a around in a ring results in outer being split along the columns (outer.split = 1).
So, if specified, split defines which DNDarray stays put and which one is passed around. If split is None or unspecified, the result will be distributed along axis 0, i.e. by default b is passed around and a stays put.
Examples
>>> a = ht.arange(4) >>> b = ht.arange(3) >>> ht.outer(a, b).larray (3 processes) [0/2] tensor([[0, 0, 0], [0, 1, 2], [0, 2, 4], [0, 3, 6]], dtype=torch.int32) [1/2] tensor([[0, 0, 0], [0, 1, 2], [0, 2, 4], [0, 3, 6]], dtype=torch.int32) [2/2] tensor([[0, 0, 0], [0, 1, 2], [0, 2, 4], [0, 3, 6]], dtype=torch.int32) >>> a = ht.arange(4, split=0) >>> b = ht.arange(3, split=0) >>> ht.outer(a, b).larray [0/2] tensor([[0, 0, 0], [0, 1, 2]], dtype=torch.int32) [1/2] tensor([[0, 2, 4]], dtype=torch.int32) [2/2] tensor([[0, 3, 6]], dtype=torch.int32) >>> ht.outer(a, b, split=1).larray [0/2] tensor([[0], [0], [0], [0]], dtype=torch.int32) [1/2] tensor([[0], [1], [2], [3]], dtype=torch.int32) [2/2] tensor([[0], [2], [4], [6]], dtype=torch.int32) >>> a = ht.arange(5, dtype=ht.float32, split=0) >>> b = ht.arange(4, dtype=ht.float64, split=0) >>> out = ht.empty((5,4), dtype=ht.float64, split=1) >>> ht.outer(a, b, split=1, out=out) >>> out.larray [0/2] tensor([[0., 0.], [0., 1.], [0., 2.], [0., 3.], [0., 4.]], dtype=torch.float64) [1/2] tensor([[0.], [2.], [4.], [6.], [8.]], dtype=torch.float64) [2/2] tensor([[ 0.], [ 3.], [ 6.], [ 9.], [12.]], dtype=torch.float64)
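The element-wise definition out[i, j] = a[i] * b[j] is easy to verify on a single process with NumPy; the split keyword only controls how this same result is distributed:

```python
import numpy as np

a = np.arange(4)
b = np.arange(3)
out = np.outer(a, b)
print(out)
# [[0 0 0]
#  [0 1 2]
#  [0 2 4]
#  [0 3 6]]
# Same result via broadcasting:
print(np.array_equal(out, a[:, None] * b[None, :]))  # True
```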
- projection(a: heat.core.dndarray.DNDarray, b: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Projection of vector a onto vector b.
- trace(a: heat.core.dndarray.DNDarray, offset: int | None = 0, axis1: int | None = 0, axis2: int | None = 1, dtype: heat.core.types.datatype | None = None, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray | float
Return the sum along diagonals of the array
If a is 2D, the sum along its diagonal with the given offset is returned, i.e. the sum of elements a[i, i+offset] for all i.
If a has more than two dimensions, then the axes specified by axis1 and axis2 are used to determine the 2D-sub-DNDarrays whose traces are returned. The shape of the resulting array is the same as that of a with axis1 and axis2 removed.
- Parameters:
a (array_like) – Input array, from which the diagonals are taken
offset (int, optional) – Offsets of the diagonal from the main diagonal. Can be both positive and negative. Defaults to 0.
axis1 (int, optional) – Axis to be used as the first axis of the 2D-sub-arrays from which the diagonals should be taken. Default is the first axis of a
axis2 (int, optional) – Axis to be used as the second axis of the 2D-sub-arrays from which the diagonals should be taken. Default is the second axis of a.
dtype (dtype, optional) – Determines the data-type of the returned array and of the accumulator where the elements are summed. If dtype is None, the dtype is the same as that of a.
out (ht.DNDarray, optional) – Array into which the output is placed. Its type is preserved and it must be of the right shape to hold the output. Only applicable if a has more than 2 dimensions, so that the result is not a scalar. If distributed, its split axis might change eventually.
- Returns:
sum_along_diagonals – If a is 2D, the sum along the diagonal is returned as a scalar. If a has more than 2 dimensions, a DNDarray of sums along diagonals is returned.
- Return type:
number (of defined dtype) or ht.DNDarray
Examples
2D case:
>>> x = ht.arange(24).reshape((4, 6))
>>> x
DNDarray([[ 0, 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17], [18, 19, 20, 21, 22, 23]], dtype=ht.int32, device=cpu:0, split=None)
>>> ht.trace(x)
42
>>> ht.trace(x, 1)
46
>>> ht.trace(x, -2)
31
Higher-dimensional case:
>>> x = x.reshape((2, 3, 4))
>>> x
DNDarray([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]], dtype=ht.int32, device=cpu:0, split=None)
>>> ht.trace(x) DNDarray([16, 18, 20, 22], dtype=ht.int32, device=cpu:0, split=None) >>> ht.trace(x, 1) DNDarray([24, 26, 28, 30], dtype=ht.int32, device=cpu:0, split=None) >>> ht.trace(x, axis1=0, axis2=2) DNDarray([13, 21, 29], dtype=ht.int32, device=cpu:0, split=None)
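The offset and axis semantics match NumPy's np.trace, which can serve as a single-process check for the examples above:

```python
import numpy as np

x = np.arange(24).reshape(4, 6)
print(np.trace(x))      # 0 + 7 + 14 + 21 = 42
print(np.trace(x, 1))   # 1 + 8 + 15 + 22 = 46
print(np.trace(x, -2))  # 12 + 19 = 31

y = np.arange(24).reshape(2, 3, 4)
print(np.trace(y))                    # sums y[i, i, :]: [16 18 20 22]
print(np.trace(y, axis1=0, axis2=2))  # sums y[i, :, i]: [13 21 29]
```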
- transpose(a: heat.core.dndarray.DNDarray, axes: List[int] | None = None) heat.core.dndarray.DNDarray
Permute the dimensions of an array.
- Parameters:
a (DNDarray) – Input array.
axes (None or List[int,...], optional) – By default, reverse the dimensions, otherwise permute the axes according to the values given.
- tril(m: heat.core.dndarray.DNDarray, k: int = 0) heat.core.dndarray.DNDarray
Returns the lower triangular part of the DNDarray. The lower triangular part of the array is defined as the elements on and below the diagonal; the other elements of the result array are set to 0. The argument k controls which diagonal to consider. If k=0, all elements on and below the main diagonal are retained. A positive value includes just as many diagonals above the main diagonal, and similarly a negative value excludes just as many diagonals below the main diagonal.
- Parameters:
m (DNDarray) – Input array for which to compute the lower triangle.
k (int, optional) – Diagonal above which to zero elements. k=0 (default) is the main diagonal, k<0 is below and k>0 is above.
- triu(m: heat.core.dndarray.DNDarray, k: int = 0) heat.core.dndarray.DNDarray
Returns the upper triangular part of the DNDarray. The upper triangular part of the array is defined as the elements on and above the diagonal; the other elements of the result array are set to 0. The argument k controls which diagonal to consider. If k=0, all elements on and above the main diagonal are retained. A positive value excludes just as many diagonals above the main diagonal, and similarly a negative value includes just as many diagonals below the main diagonal.
- Parameters:
m (DNDarray) – Input array for which to compute the upper triangle.
k (int, optional) – Diagonal below which to zero elements. k=0 (default) is the main diagonal, k<0 is below and k>0 is above.
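The effect of k on both functions can be seen with NumPy's equivalents, which use the same convention:

```python
import numpy as np

m = np.arange(1, 10).reshape(3, 3)
print(np.tril(m))        # keep the main diagonal and everything below
print(np.tril(m, k=-1))  # strictly below the main diagonal
print(np.triu(m, k=1))   # strictly above the main diagonal
```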
- vdot(x1: heat.core.dndarray.DNDarray, x2: heat.core.dndarray.DNDarray) heat.core.dndarray.DNDarray
Computes the dot product of two vectors. Higher-dimensional arrays will be flattened.
- Parameters:
- Raises:
ValueError – If the number of elements is inconsistent.
See also
dot
Return the dot product without using the complex conjugate.
Examples
>>> a = ht.array([1+1j, 2+2j]) >>> b = ht.array([1+2j, 3+4j]) >>> ht.vdot(a,b) DNDarray([(17+3j)], dtype=ht.complex64, device=cpu:0, split=None) >>> ht.vdot(b,a) DNDarray([(17-3j)], dtype=ht.complex64, device=cpu:0, split=None)
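The conjugation of the first argument, which makes vdot(a, b) the complex conjugate of vdot(b, a), can be checked with NumPy's np.vdot:

```python
import numpy as np

a = np.array([1 + 1j, 2 + 2j])
b = np.array([1 + 2j, 3 + 4j])
print(np.vdot(a, b))  # conj(a) . b = (17+3j)
print(np.vdot(b, a))  # the complex conjugate: (17-3j)
print(np.vdot(a, b) == np.conj(np.vdot(b, a)))  # True
```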
- vecdot(x1: heat.core.dndarray.DNDarray, x2: heat.core.dndarray.DNDarray, axis: int | None = None, keepdims: bool | None = None) heat.core.dndarray.DNDarray
Computes the (vector) dot product of two DNDarrays.
- Parameters:
x1 (DNDarray) – first input array.
x2 (DNDarray) – second input array. Must be compatible with x1.
axis (int, optional) – axis over which to compute the dot product. The last dimension is used if ‘None’.
keepdims (bool, optional) – If this is set to ‘True’, the axes which are reduced are left in the result as dimensions with size one.
See also
dot
NumPy-like dot function.
Examples
>>> ht.vecdot(ht.full((3,3,3),3), ht.ones((3,3)), axis=0) DNDarray([[9., 9., 9.], [9., 9., 9.], [9., 9., 9.]], dtype=ht.float32, device=cpu:0, split=None)
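vecdot reduces to an element-wise multiply (with broadcasting) followed by a sum over the chosen axis; a NumPy sketch of the example above:

```python
import numpy as np

x1 = np.full((3, 3, 3), 3.0)
x2 = np.ones((3, 3))         # broadcast against x1's trailing dimensions
res = (x1 * x2).sum(axis=0)  # dot product along axis 0
print(res)  # 3x3 array of 9.0: three terms of 3 * 1 each
```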
- vector_norm(x: heat.core.dndarray.DNDarray, axis: int | Tuple[int] | None = None, keepdims=False, ord: int | float | None = None) heat.core.dndarray.DNDarray
Computes the vector norm of an array.
- Parameters:
x (DNDarray) – Input array
axis (int, tuple, optional) – Axis along which to compute the vector norm. If None ‘x’ must be a vector. Default: None
keepdims (bool, optional) – Retains the reduced dimension when True. Default: False
ord (int, float, optional) – The norm order to compute. If None the euclidean norm (2) is used. Default: None
See also
norm
Computes the vector norm or matrix norm of an array.
matrix_norm
Computes the matrix norm of an array.
Notes
The following norms are supported:
ord
norm for vectors
None
L2-norm (Euclidean)
inf
max(abs(x))
-inf
min(abs(x))
0
sum(x != 0)
1
L1-norm (Manhattan)
-1
1./sum(1./abs(a))
2
L2-norm (Euclidean)
-2
1./sqrt(sum(1./abs(a)**2))
other
sum(abs(x)**ord)**(1./ord)
- Raises:
TypeError – If axis is not an integer or a 1-tuple
ValueError – If an invalid vector norm is given.
Examples
>>> ht.vector_norm(ht.array([1, 2, 3, 4]))
DNDarray([5.4772], dtype=ht.float64, device=cpu:0, split=None)
>>> ht.vector_norm(ht.array([[1, 2], [3, 4]]), axis=0, ord=1)
DNDarray([[4., 6.]], dtype=ht.float64, device=cpu:0, split=None)
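The norms listed in the Notes can be reproduced with plain NumPy on a concrete vector (a sketch; ht.vector_norm additionally handles distributed arrays):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
l2 = np.sqrt(np.sum(np.abs(x) ** 2))  # ord=None or ord=2: Euclidean norm
l1 = np.sum(np.abs(x))                # ord=1: Manhattan norm
linf = np.max(np.abs(x))              # ord=inf
nnz = np.sum(x != 0)                  # ord=0: number of nonzeros
```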
- cg(A: heat.core.dndarray.DNDarray, b: heat.core.dndarray.DNDarray, x0: heat.core.dndarray.DNDarray, out: heat.core.dndarray.DNDarray | None = None) heat.core.dndarray.DNDarray
Conjugate gradients method for solving a system of linear equations \(Ax = b\).
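The underlying iteration can be sketched in plain NumPy for a symmetric positive-definite A; this is a textbook sketch, not Heat's distributed implementation, and the tolerance/iteration parameters here are illustrative:

```python
import numpy as np

def cg(A, b, x0, maxiter=50, tol=1e-10):
    # Plain conjugate-gradient iteration for symmetric positive-definite A
    x = x0.astype(float)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cg(A, b, np.zeros(2))  # approximately solves Ax = b
```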
- lanczos(A: heat.core.dndarray.DNDarray, m: int, v0: heat.core.dndarray.DNDarray | None = None, V_out: heat.core.dndarray.DNDarray | None = None, T_out: heat.core.dndarray.DNDarray | None = None) Tuple[heat.core.dndarray.DNDarray, heat.core.dndarray.DNDarray]
The Lanczos algorithm is an iterative approximation of the solution to the eigenvalue problem, as an adaptation of power methods to find the m “most useful” (tending towards extreme highest/lowest) eigenvalues and eigenvectors of an \(n \times n\) Hermitian matrix, where often \(m \ll n\). It returns two matrices \(V\) and \(T\), where:
\(V\) is a matrix of size \(n\times m\) with orthonormal columns that span the Krylov subspace.
\(T\) is a tridiagonal matrix of size \(m\times m\), with coefficients \(\alpha_1,..., \alpha_m\) on the diagonal and coefficients \(\beta_1,...,\beta_{m-1}\) on the side-diagonals.
- Parameters:
A (DNDarray) – 2D Hermitian (if complex) or symmetric positive-definite matrix. Only distribution along axis 0 is supported, i.e. A.split must be 0 or None.
m (int) – Number of Lanczos iterations
v0 (DNDarray, optional) – 1D starting vector of Euclidean norm 1. If not provided, a random vector will be used to start the algorithm.
V_out (DNDarray, optional) – Output matrix for the Krylov vectors, Shape = (n, m), dtype=A.dtype, must be initialized to zero.
T_out (DNDarray, optional) – Output matrix for the tridiagonal matrix, Shape = (m, m), must be initialized to zero.
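The relation between the returned factors can be sketched with a textbook (non-distributed) Lanczos iteration; a minimal sketch, not Heat's implementation:

```python
import numpy as np

def lanczos(A, m, v0=None):
    # Textbook Lanczos sketch: builds V (n x m, orthonormal columns spanning
    # the Krylov subspace) and the tridiagonal T (m x m) with the alpha
    # coefficients on the diagonal and beta on the side-diagonals.
    n = A.shape[0]
    v = v0 if v0 is not None else np.random.default_rng(0).standard_normal(n)
    v = v / np.linalg.norm(v)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = v
    w = A @ v
    alpha[0] = v @ w
    w = w - alpha[0] * v
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        V[:, j] = w / beta[j - 1]
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j] - beta[j - 1] * V[:, j - 1]
    return V, np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
```

For a symmetric A the factors satisfy \(V^T V = I\) and \(V^T A V = T\), which is what makes the small matrix T useful for approximating extreme eigenvalues of A.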
- qr(a: heat.core.dndarray.DNDarray, tiles_per_proc: int | torch.Tensor = 2, calc_q: bool = True, overwrite_a: bool = False) Tuple[heat.core.dndarray.DNDarray, heat.core.dndarray.DNDarray]
Calculates the QR decomposition of a 2D DNDarray. Factor the matrix a as QR, where Q is orthonormal and R is upper-triangular. If calc_q==True, the function returns QR(Q=Q, R=R), else it returns QR(Q=None, R=R).
- Parameters:
a (DNDarray) – Array which will be decomposed
tiles_per_proc (int or torch.Tensor, optional) – Number of tiles per process to operate on. We highly recommend using tiles_per_proc > 1, as the choice 1 might result in an error in certain situations (in particular for split=0).
calc_q (bool, optional) – Whether or not to calculate Q. If True, the function returns (Q, R). If False, the function returns (None, R).
overwrite_a (bool, optional) – If True, the function overwrites a with R. If False, a new array will be created for R.
Notes
This function is built on top of PyTorch’s QR function, torch.linalg.qr(), which uses LAPACK on the backend. Basic information about QR factorization/decomposition can be found at https://en.wikipedia.org/wiki/QR_factorization. The algorithms are based on the CAQR and TSQR algorithms. For more information see the references.
References
[0] W. Zheng, F. Song, L. Lin, and Z. Chen, “Scaling Up Parallel Computation of Tiled QR Factorizations by a Distributed Scheduling Runtime System and Analytical Modeling,” Parallel Processing Letters, vol. 28, no. 01, p. 1850004, 2018.
[1] Bilel Hadri, Hatem Ltaief, Emmanuel Agullo, Jack Dongarra. Tile QR Factorization with Parallel Panel Processing for Multicore Architectures. 24th IEEE International Parallel and Distributed Processing Symposium (IPDPS 2010), Apr 2010, Atlanta, United States. inria-00548899.
[2] Gene H. Golub and Charles F. Van Loan. 1996. Matrix Computations (3rd Ed.).
Examples
>>> a = ht.random.randn(9, 6, split=0)
>>> qr = ht.linalg.qr(a)
>>> print(ht.allclose(a, ht.dot(qr.Q, qr.R)))
[0/1] True
[1/1] True
>>> st = torch.randn(9, 6)
>>> a = ht.array(st, split=1)
>>> a_comp = ht.array(st, split=0)
>>> q, r = ht.linalg.qr(a)
>>> print(ht.allclose(a_comp, ht.dot(q, r)))
[0/1] True
[1/1] True
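The defining properties of the factorization can be checked on the non-distributed NumPy equivalent (np.linalg.qr stands in for ht.linalg.qr in this sketch):

```python
import numpy as np

a = np.random.default_rng(0).standard_normal((9, 6))
q, r = np.linalg.qr(a)
orthonormal = np.allclose(q.T @ q, np.eye(6))  # Q has orthonormal columns
upper = np.allclose(r, np.triu(r))             # R is upper-triangular
reconstructs = np.allclose(q @ r, a)           # a = QR
```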
- hsvd_rank(A: heat.core.dndarray.DNDarray, maxrank: int, compute_sv: bool = False, maxmergedim: int | None = None, safetyshift: int = 5, silent: bool = True) Tuple[heat.core.dndarray.DNDarray, heat.core.dndarray.DNDarray, heat.core.dndarray.DNDarray, float] | Tuple[heat.core.dndarray.DNDarray, heat.core.dndarray.DNDarray, heat.core.dndarray.DNDarray] | heat.core.dndarray.DNDarray
Hierarchical SVD (hSVD) with prescribed truncation rank maxrank. If A = U diag(sigma) V^T is the true SVD of A, this routine computes an approximation for U[:,:maxrank] (and sigma[:maxrank], V[:,:maxrank]).
The accuracy of this approximation depends on the structure of A (“low-rank” is best) and appropriate choice of parameters.
One can expect a similar outcome from this routine as from scikit-learn’s TruncatedSVD (with algorithm=’randomized’), although a different, deterministic algorithm is applied here. The parameters n_components and n_oversamples (scikit-learn) roughly correspond to maxrank and safetyshift (see below).
- A : DNDarray
2D-array (float32/64) of which the hSVD has to be computed.
- maxrank : int
truncation rank. (This parameter corresponds to n_components in scikit-learn’s TruncatedSVD.)
- compute_sv : bool, optional
compute_sv=True implies that Sigma and V are also computed and returned. The default is False.
- maxmergedim : int, optional
maximal size of the concatenation matrices during the merging procedure. The default is None and results in an appropriate choice depending on the size of the local slices of A and maxrank. Too small a choice for this parameter will result in failure if the maximal size of the concatenation matrices does not allow merging at least two matrices. Too large a choice can cause memory errors if the resulting merging problem becomes too large.
- safetyshift : int, optional
Increases the actual truncation rank within the computations by a safety shift. The default is 5. (There is some similarity to n_oversamples in scikit-learn’s TruncatedSVD.)
- silent : bool, optional
silent=False implies that some information on the computations is printed. The default is True.
- Returns: Union[Tuple[DNDarray, DNDarray, DNDarray, float], Tuple[DNDarray, DNDarray, DNDarray], DNDarray]
If compute_sv=True: U, Sigma, V, and an a-posteriori estimate for the reconstruction error ||A - U Sigma V^T||_F / ||A||_F (computed according to [2] along the “true” merging tree). If compute_sv=False: U and the a-posteriori error estimate.
The size of the process-local SVDs to be computed during merging is proportional to the non-split size of the input A and (maxrank + safetyshift). Therefore, a conservative choice of maxrank and safetyshift is advised to avoid memory issues. Note that, like scikit-learn’s randomized SVD, this routine differs from numpy.linalg.svd because not all singular values and vectors are computed, and even those computed may be inaccurate if the input matrix exhibits an unfavorable structure.
See also
hsvd_rtol()
References
[1] Iwen, Ong. A distributed and incremental SVD algorithm for agglomerative data analysis on large networks. SIAM J. Matrix Anal. Appl., 37(4), 2016.
[2] Himpe, Leibner, Rave. Hierarchical approximate proper orthogonal decomposition. SIAM J. Sci. Comput., 40(5), 2018.
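The core merging step of the hierarchical algorithm can be sketched in plain NumPy: take a truncated SVD of each block, then re-factorize the concatenated, singular-value-scaled left factors. This is an illustrative single merge over column blocks, following the idea of [1]; Heat's routine additionally handles distributed slices, the merging tree, and the safetyshift.

```python
import numpy as np

def merge_svds(blocks, rank):
    # One merging step of a hierarchical SVD (illustrative sketch):
    # truncated SVD per block, then SVD of the concatenated U * Sigma factors
    factors = []
    for block in blocks:
        U, s, _ = np.linalg.svd(block, full_matrices=False)
        factors.append(U[:, :rank] * s[:rank])  # scale columns by sigma
    U, s, _ = np.linalg.svd(np.hstack(factors), full_matrices=False)
    return U[:, :rank], s[:rank]

# exactly rank-3 test matrix, split into two column blocks
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
U, s = merge_svds([A[:, :20], A[:, 20:]], rank=3)
rel_err = np.linalg.norm(A - U @ (U.T @ A)) / np.linalg.norm(A)
```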
- hsvd_rtol(A: heat.core.dndarray.DNDarray, rtol: float, compute_sv: bool = False, maxrank: int | None = None, maxmergedim: int | None = None, safetyshift: int = 5, no_of_merges: int | None = None, silent: bool = True) Tuple[heat.core.dndarray.DNDarray, heat.core.dndarray.DNDarray, heat.core.dndarray.DNDarray, float] | Tuple[heat.core.dndarray.DNDarray, heat.core.dndarray.DNDarray, heat.core.dndarray.DNDarray] | heat.core.dndarray.DNDarray
Hierarchical SVD (hSVD) with a prescribed upper bound on the relative reconstruction error. If A = U diag(sigma) V^T is the true SVD of A, this routine computes an approximation for U[:,:r] (and sigma[:r], V[:,:r]) such that the relative reconstruction error ||A - U[:,:r] diag(sigma[:r]) V[:,:r]^T||_F / ||A||_F does not exceed rtol.
The accuracy of this approximation depends on the structure of A (“low-rank” is best) and an appropriate choice of parameters. This routine is similar to hsvd_rank, with the difference that truncation is not performed after a fixed number (namely maxrank) of singular values, but after however many singular values suffice to capture a prescribed fraction of the amount of information contained in the input data (rtol).
- A : DNDarray
2D-array (float32/64) of which the hSVD has to be computed.
- rtol : float
desired upper bound on the relative reconstruction error ||A - U Sigma V^T||_F / ||A||_F. This upper bound is processed into “local” tolerances during the actual computations, assuming the worst-case scenario of a binary “merging tree”; therefore, the a-posteriori error for the relative error using the true “merging tree” (see output) may be significantly smaller than rtol. Prescribing maxrank or maxmergedim (disabled by default) can result in loss of the desired precision, but can help to avoid memory issues.
- compute_sv : bool, optional
compute_sv=True implies that Sigma and V are also computed and returned. The default is False.
- no_of_merges : int, optional
Maximum number of processes to be merged at each step. If no further arguments are provided (see below), this completely determines the “merging tree” and may cause memory issues. The default is None and results in a binary merging tree. Note that no_of_merges dominates maxrank and maxmergedim in the sense that at most no_of_merges processes are merged, even if maxrank and maxmergedim would allow merging more processes.
- maxrank : int, optional
maximal truncation rank. The default is None. Setting at least one of maxrank and maxmergedim is recommended to avoid memory issues, but can result in loss of the desired precision. Setting only maxrank (and not maxmergedim) results in an appropriate default choice for maxmergedim depending on the size of the local slices of A and the value of maxrank.
- maxmergedim : int, optional
maximal size of the concatenation matrices during the merging procedure. The default is None and results in an appropriate choice depending on the size of the local slices of A and maxrank. Too small a choice for this parameter will result in failure if the maximal size of the concatenation matrices does not allow merging at least two matrices. Too large a choice can cause memory errors if the resulting merging problem becomes too large. Setting at least one of maxrank and maxmergedim is recommended to avoid memory issues, but can result in loss of the desired precision. Setting only maxmergedim (and not maxrank) results in an appropriate default choice for maxrank.
- safetyshift : int, optional
Increases the actual truncation rank within the computations by a safety shift. The default is 5.
- silent : bool, optional
silent=False implies that some information on the computations is printed. The default is True.
- Returns: Union[Tuple[DNDarray, DNDarray, DNDarray, float], Tuple[DNDarray, DNDarray, DNDarray], DNDarray]
If compute_sv=True: U, Sigma, V, and an a-posteriori estimate for the reconstruction error ||A - U Sigma V^T||_F / ||A||_F (computed according to [2] along the “true” merging tree used in the computations). If compute_sv=False: U and the a-posteriori error estimate.
The maximum size of the process-local SVDs to be computed during merging is proportional to the non-split size of the input A and (maxrank + safetyshift). Therefore, a conservative choice of maxrank and safetyshift is advised to avoid memory issues. For similar reasons, prescribing only rtol and the number of processes to be merged in each step (without specifying maxrank or maxmergedim) may result in memory issues. Prescribing maxrank is therefore strongly recommended to avoid memory issues, but may result in loss of the desired precision (rtol); if this occurs, a separate warning will be raised.
Note that this routine differs from numpy.linalg.svd because not all singular values and vectors are computed, and even those computed may be inaccurate if the input matrix exhibits an unfavorable structure.
To avoid confusion, note that rtol in this routine does not have any similarity to tol in scikit-learn’s TruncatedSVD.
See also
hsvd_rank()
References
[1] Iwen, Ong. A distributed and incremental SVD algorithm for agglomerative data analysis on large networks. SIAM J. Matrix Anal. Appl., 37(4), 2016.
[2] Himpe, Leibner, Rave. Hierarchical approximate proper orthogonal decomposition. SIAM J. Sci. Comput., 40(5), 2018.
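The rtol truncation criterion can be sketched in terms of singular values: for the best rank-r approximation, the relative Frobenius reconstruction error is sqrt(sum of the trailing sigma_i^2) divided by the Frobenius norm of all singular values. A minimal helper (the name is hypothetical, not part of Heat) that picks the smallest sufficient rank:

```python
import numpy as np

def rank_for_rtol(sigma, rtol):
    # Smallest r with ||A - A_r||_F / ||A||_F <= rtol, given the singular
    # values sigma of A (hypothetical helper for illustration)
    sigma = np.asarray(sigma, dtype=float)
    tail = np.sqrt(np.cumsum(sigma[::-1] ** 2))[::-1]  # tail[r] = ||sigma[r:]||_2
    rel = np.append(tail, 0.0) / np.linalg.norm(sigma)  # rel[r] = error of rank r
    return int(np.argmax(rel <= rtol))                  # first rank meeting rtol
```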
- hsvd(A: heat.core.dndarray.DNDarray, maxrank: int | None = None, maxmergedim: int | None = None, rtol: float | None = None, safetyshift: int = 0, no_of_merges: int | None = 2, compute_sv: bool = False, silent: bool = True, warnings_off: bool = False) Tuple[heat.core.dndarray.DNDarray, heat.core.dndarray.DNDarray, heat.core.dndarray.DNDarray, float] | Tuple[heat.core.dndarray.DNDarray, heat.core.dndarray.DNDarray, heat.core.dndarray.DNDarray] | heat.core.dndarray.DNDarray
This function computes an approximate truncated SVD of A utilizing a distributed hierarchical algorithm; see the references. The present function hsvd is a low-level routine that provides many options/parameters but no default values, and it is not recommended for use by non-experts, since conflicts arising from inappropriate parameter choices will not be caught. We strongly recommend using the corresponding high-level functions hsvd_rank and hsvd_rtol instead.
Input
- A: DNDarray
2D-array (float32/64) of which hSVD has to be computed
- maxrank: int, optional
truncation rank of the SVD
- maxmergedim: int, optional
maximal size of the concatenation matrices when “merging” the local SVDs
- rtol: float, optional
upper bound on the relative reconstruction error ||A-U Sigma V^T ||_F / ||A||_F (may deteriorate due to other parameters)
- safetyshift: int, optional
shift that increases the actual truncation rank of the local SVDs during the computations in order to increase accuracy
- no_of_merges: int, optional
maximum number of local SVDs to be “merged” at one step
- compute_sv: bool, optional
determines whether to compute U, Sigma, V (compute_sv=True) or not (then U only)
- silent: bool, optional
determines whether to print information on the computations performed (silent=False)
- warnings_off: bool, optional
switch on and off warnings that are not intended for the high-level routines based on this function
- returns:
if compute_sv=True: U, Sigma, V, and an a-posteriori estimate for the reconstruction error ||A - U Sigma V^T||_F / ||A||_F (computed according to [2] along the “true” merging tree used in the computations). if compute_sv=False: U and the a-posteriori error estimate.
- rtype:
(Union[ Tuple[DNDarray, DNDarray, DNDarray, float], Tuple[DNDarray, DNDarray, DNDarray], DNDarray])
References
[1] Iwen, Ong. A distributed and incremental SVD algorithm for agglomerative data analysis on large networks. SIAM J. Matrix Anal. Appl., 37(4), 2016.
[2] Himpe, Leibner, Rave. Hierarchical approximate proper orthogonal decomposition. SIAM J. Sci. Comput., 40(5), 2018.
See also
hsvd_rank()
hsvd_rtol()
- __version__ :str
The combined version string, consisting of major, minor, micro, and possibly an extension.