nir#

The Neuromorphic Intermediate Representation reference implementation.

Documentation: https://nnir.readthedocs.io

Subpackages#

Submodules#

Package Contents#

Classes#

Conv1d

Convolutional layer in 1d.

Conv2d

Convolutional layer in 2d.

Delay

Simple delay node.

Flatten

Flatten node.

Input

Input Node.

NIRGraph

Neural Intermediate Representation (NIR) Graph containing a number of nodes and edges.

Output

Output Node.

Affine

Affine transform that linearly maps and translates the input signal.

Linear

Linear transform without bias:

Scale

Scales a signal by some values.

CubaLIF

Current-based leaky integrate-and-fire neuron model.

I

Integrator.

IF

Integrate-and-fire neuron model.

LI

Leaky integrator neuron model.

LIF

Leaky integrate-and-fire neuron model.

NIRNode

Base class of all Neural Intermediate Representation (NIR) nodes.

AvgPool2d

Average pooling layer in 2d.

SumPool2d

Sum pooling layer in 2d.

Threshold

Threshold node.

Functions#

str2NIRNode(→ typing.NIRNode)

dict2NIRNode(→ typing.NIRNode)

Assumes data_dict["type"] exists and corresponds to a subclass of NIRNode.

read(→ nir.NIRGraph)

Load a NIR graph from an HDF5 file.

write(→ None)

Write a NIR graph to an HDF5 file.

class nir.Conv1d#

Bases: nir.ir.node.NIRNode

Convolutional layer in 1d.

Note that the input_shape argument is required to disambiguate the shape, and is used to infer the exact output shape along with the other parameters. If the input_shape is None, the output shape will also be None.

The NIRGraph.infer_all_shapes function may be used to automatically infer the input and output types on the graph level.

Parameters
  • input_shape (Optional[int]) – Shape of spatial input (N,)

  • weight (np.ndarray) – Weight, shape (C_out, C_in, N)

  • stride (int) – Stride

  • padding (int | str) – Padding, if string must be ‘same’ or ‘valid’

  • dilation (int) – Dilation

  • groups (int) – Groups

  • bias (np.ndarray) – Bias array of shape (C_out,)

input_shape: Optional[int]#
weight: numpy.ndarray#
stride: int#
padding: Union[int, str]#
dilation: int#
groups: int#
bias: numpy.ndarray#
input_type: Optional[Dict[str, numpy.ndarray]]#
output_type: Optional[Dict[str, numpy.ndarray]]#
metadata: Dict[str, Any]#
__post_init__()#
class nir.Conv2d#

Bases: nir.ir.node.NIRNode

Convolutional layer in 2d.

Note that the input_shape argument is required to disambiguate the shape, and is used to infer the exact output shape along with the other parameters. If the input_shape is None, the output shape will also be None.

The NIRGraph.infer_all_shapes function may be used to automatically infer the input and output types on the graph level.

Parameters
  • input_shape (Optional[tuple[int, int]]) – Shape of spatial input (N_x, N_y)

  • weight (np.ndarray) – Weight, shape (C_out, C_in, N_x, N_y)

  • stride (int | tuple[int, int]) – Stride

  • padding (int | tuple[int, int] | str) – Padding, if string must be ‘same’ or ‘valid’

  • dilation (int | tuple[int, int]) – Dilation

  • groups (int) – Groups

  • bias (np.ndarray) – Bias array of shape (C_out,)

input_shape: Optional[Tuple[int, int]]#
weight: numpy.ndarray#
stride: Union[int, Tuple[int, int]]#
padding: Union[int, Tuple[int, int], str]#
dilation: Union[int, Tuple[int, int]]#
groups: int#
bias: numpy.ndarray#
__post_init__()#
class nir.Delay#

Bases: nir.ir.node.NIRNode

Simple delay node.

This node implements a simple delay:

\[y(t) = x(t - \tau)\]
delay: numpy.ndarray#
input_type: Optional[Dict[str, numpy.ndarray]]#
output_type: Optional[Dict[str, numpy.ndarray]]#
metadata: Dict[str, Any]#
__post_init__()#
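In discrete time, the delay equation above amounts to a zero-padded shift of the signal. The helper below is an illustrative sketch (not part of the nir API) that treats \(\tau\) as an integer number of samples:

```python
import numpy as np

def delay_signal(x: np.ndarray, tau: int) -> np.ndarray:
    # y[t] = x[t - tau]: zero-pad the first tau samples, shift the rest.
    y = np.zeros_like(x)
    if tau < len(x):
        y[tau:] = x[: len(x) - tau]
    return y

delay_signal(np.array([1.0, 2.0, 3.0, 4.0]), 1)  # → [0., 1., 2., 3.]
```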
class nir.Flatten#

Bases: nir.ir.node.NIRNode

Flatten node.

This node flattens its input tensor. input_type must be a dict with one key: “input”.

input_type: nir.ir.typing.Types#
start_dim: int = 1#
end_dim: int#
input_type: Optional[Dict[str, numpy.ndarray]]#
output_type: Optional[Dict[str, numpy.ndarray]]#
metadata: Dict[str, Any]#
__post_init__()#
to_dict() Dict[str, Any]#

Serialize into a dictionary.

Return type

Dict[str, Any]

classmethod from_dict(node: Dict[str, Any])#
Parameters

node (Dict[str, Any]) –

class nir.Input#

Bases: nir.ir.node.NIRNode

Input Node.

This is a virtual node, which allows feeding in data into the graph.

input_type: nir.ir.typing.Types#
__post_init__()#
to_dict() Dict[str, Any]#

Serialize into a dictionary.

Return type

Dict[str, Any]

classmethod from_dict(node: Dict[str, Any]) nir.ir.node.NIRNode#
Parameters

node (Dict[str, Any]) –

Return type

nir.ir.node.NIRNode

class nir.NIRGraph#

Bases: nir.ir.node.NIRNode

Neural Intermediate Representation (NIR) Graph containing a number of nodes and edges.

A graph of computational nodes and identity edges.

property inputs#
property outputs#
nodes: nir.ir.typing.Nodes#
edges: nir.ir.typing.Edges#
input_type: Optional[Dict[str, numpy.ndarray]]#
output_type: Optional[Dict[str, numpy.ndarray]]#
metadata: Dict[str, Any]#
static from_list(*nodes: nir.ir.node.NIRNode) NIRGraph#

Create a sequential graph from a list of nodes, labelling them by their indices.

Parameters

nodes (nir.ir.node.NIRNode) –

Return type

NIRGraph

__post_init__()#
to_dict() Dict[str, Any]#

Serialize into a dictionary.

Return type

Dict[str, Any]

classmethod from_dict(node: Dict[str, Any]) nir.ir.node.NIRNode#
Parameters

node (Dict[str, Any]) –

Return type

nir.ir.node.NIRNode

infer_types()#

Infer the shapes of all nodes in this graph. Will modify the input_type and output_type of all nodes in the graph.

Assumes that either the input type or the output type of the graph is set. Assumes that if A->B, then A.output_type.values() = B.input_type.values().

class nir.Output#

Bases: nir.ir.node.NIRNode

Output Node.

Defines an output of the graph.

output_type: nir.ir.typing.Types#
__post_init__()#
to_dict() Dict[str, Any]#

Serialize into a dictionary.

Return type

Dict[str, Any]

classmethod from_dict(node: Dict[str, Any]) nir.ir.node.NIRNode#
Parameters

node (Dict[str, Any]) –

Return type

nir.ir.node.NIRNode

class nir.Affine#

Bases: nir.ir.node.NIRNode

Affine transform that linearly maps and translates the input signal.

This node applies the affine transformation

\[y(t) = W x(t) + b\]

assuming a one-dimensional input vector of shape (N,).
weight: numpy.ndarray#
bias: numpy.ndarray#
input_type: Optional[Dict[str, numpy.ndarray]]#
output_type: Optional[Dict[str, numpy.ndarray]]#
metadata: Dict[str, Any]#
__post_init__()#
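With a weight matrix of shape (M, N) and a bias of shape (M,), the transform is a plain matrix-vector product plus a translation; the numbers below are made up for illustration:

```python
import numpy as np

W = np.array([[1.0, 0.0], [0.0, 2.0]])  # weight, shape (M, N)
b = np.array([0.5, -0.5])               # bias, shape (M,)
x = np.array([3.0, 4.0])                # input, shape (N,)

y = W @ x + b  # → [3.5, 7.5]
```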
class nir.Linear#

Bases: nir.ir.node.NIRNode

Linear transform without bias:

\[y(t) = W x(t)\]
weight: numpy.ndarray#
__post_init__()#
class nir.Scale#

Bases: nir.ir.node.NIRNode

Scales a signal by some values.

This node is equivalent to the Hadamard product.

\[y(t) = x(t) \odot s\]
scale: numpy.ndarray#
__post_init__()#
class nir.CubaLIF#

Bases: nir.ir.node.NIRNode

Current-based leaky integrate-and-fire neuron model.

The current-based leaky integrate-and-fire neuron model is defined by the following equations:

\[\tau_{syn} \dot {I} = - I + w_{in} S\]
\[\tau_{mem} \dot {v} = (v_{leak} - v) + R I\]
\[\begin{split}z = \begin{cases} 1 & v > v_{threshold} \\ 0 & else \end{cases}\end{split}\]
\[\begin{split}v = \begin{cases} v-v_{threshold} & z=1 \\ v & else \end{cases}\end{split}\]

Where \(\tau_{syn}\) is the synaptic time constant, \(\tau_{mem}\) is the membrane time constant, \(R\) is the resistance, \(v_{leak}\) is the leak voltage, \(v_{threshold}\) is the firing threshold, \(w_{in}\) is the input current weight (elementwise), and \(S\) is the input spike.

tau_syn: numpy.ndarray#
tau_mem: numpy.ndarray#
r: numpy.ndarray#
v_leak: numpy.ndarray#
v_threshold: numpy.ndarray#
w_in: numpy.ndarray = 1.0#
input_type: Optional[Dict[str, numpy.ndarray]]#
output_type: Optional[Dict[str, numpy.ndarray]]#
metadata: Dict[str, Any]#
__post_init__()#
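The two coupled differential equations above can be discretized, for instance with forward Euler; the helper below is an illustrative sketch under that assumption and is not part of the nir API:

```python
import numpy as np

def cuba_lif_step(I, v, S, dt, tau_syn, tau_mem, r, v_leak, v_threshold, w_in=1.0):
    # Forward-Euler update of the synaptic current: tau_syn dI/dt = -I + w_in S
    I = I + dt / tau_syn * (-I + w_in * S)
    # Forward-Euler update of the membrane: tau_mem dv/dt = (v_leak - v) + R I
    v = v + dt / tau_mem * ((v_leak - v) + r * I)
    # Spike when v exceeds the threshold, then subtractive reset.
    z = (np.asarray(v) > v_threshold).astype(float)
    v = v - z * v_threshold
    return I, v, z
```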
class nir.I#

Bases: nir.ir.node.NIRNode

Integrator.

The integrator neuron model is defined by the following equation:

\[\dot{v} = R I\]
r: numpy.ndarray#
input_type: Optional[Dict[str, numpy.ndarray]]#
output_type: Optional[Dict[str, numpy.ndarray]]#
metadata: Dict[str, Any]#
__post_init__()#
class nir.IF#

Bases: nir.ir.node.NIRNode

Integrate-and-fire neuron model.

The integrate-and-fire neuron model is defined by the following equations:

\[\dot{v} = R I\]
\[\begin{split}z = \begin{cases} 1 & v > v_{thr} \\ 0 & else \end{cases}\end{split}\]
\[\begin{split}v = \begin{cases} v-v_{thr} & z=1 \\ v & else \end{cases}\end{split}\]
r: numpy.ndarray#
v_threshold: numpy.ndarray#
input_type: Optional[Dict[str, numpy.ndarray]]#
output_type: Optional[Dict[str, numpy.ndarray]]#
metadata: Dict[str, Any]#
__post_init__()#
class nir.LI#

Bases: nir.ir.node.NIRNode

Leaky integrator neuron model.

The leaky integrator neuron model is defined by the following equation:

\[\tau \dot{v} = (v_{leak} - v) + R I\]

Where \(\tau\) is the time constant, \(v\) is the membrane potential, \(v_{leak}\) is the leak voltage, \(R\) is the resistance, and \(I\) is the input current.

tau: numpy.ndarray#
r: numpy.ndarray#
v_leak: numpy.ndarray#
input_type: Optional[Dict[str, numpy.ndarray]]#
output_type: Optional[Dict[str, numpy.ndarray]]#
metadata: Dict[str, Any]#
__post_init__()#
class nir.LIF#

Bases: nir.ir.node.NIRNode

Leaky integrate-and-fire neuron model.

The leaky integrate-and-fire neuron model is defined by the following equations:

\[\tau \dot{v} = (v_{leak} - v) + R I\]
\[\begin{split}z = \begin{cases} 1 & v > v_{thr} \\ 0 & else \end{cases}\end{split}\]
\[\begin{split}v = \begin{cases} v-v_{thr} & z=1 \\ v & else \end{cases}\end{split}\]

Where \(\tau\) is the time constant, \(v\) is the membrane potential, \(v_{leak}\) is the leak voltage, \(R\) is the resistance, \(v_{threshold}\) is the firing threshold, and \(I\) is the input current.

tau: numpy.ndarray#
r: numpy.ndarray#
v_leak: numpy.ndarray#
v_threshold: numpy.ndarray#
input_type: Optional[Dict[str, numpy.ndarray]]#
output_type: Optional[Dict[str, numpy.ndarray]]#
metadata: Dict[str, Any]#
__post_init__()#
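The dynamics above can be simulated with a simple forward-Euler step; the discretization choice and the function below are illustrative assumptions, not part of the nir API:

```python
import numpy as np

def lif_step(v, I, dt, tau, r, v_leak, v_threshold):
    # Leaky integration: tau dv/dt = (v_leak - v) + R I
    v = v + dt / tau * ((v_leak - v) + r * I)
    # Spike if v > v_threshold, then subtractive reset.
    z = (np.asarray(v) > v_threshold).astype(float)
    v = v - z * v_threshold
    return v, z
```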
class nir.NIRNode#

Base class of all Neural Intermediate Representation (NIR) nodes.

All NIR primitives inherit from this class, but NIRNode itself should never be instantiated.

__eq__(other)#

Return self==value.

to_dict() Dict[str, Any]#

Serialize into a dictionary.

Return type

Dict[str, Any]

classmethod from_dict(node: Dict[str, Any]) NIRNode#
Parameters

node (Dict[str, Any]) –

Return type

NIRNode

class nir.AvgPool2d#

Bases: nir.ir.node.NIRNode

Average pooling layer in 2d.

kernel_size: numpy.ndarray#
stride: numpy.ndarray#
padding: numpy.ndarray#
__post_init__()#
class nir.SumPool2d#

Bases: nir.ir.node.NIRNode

Sum pooling layer in 2d.

kernel_size: numpy.ndarray#
stride: numpy.ndarray#
padding: numpy.ndarray#
input_type: Optional[Dict[str, numpy.ndarray]]#
output_type: Optional[Dict[str, numpy.ndarray]]#
metadata: Dict[str, Any]#
__post_init__()#
class nir.Threshold#

Bases: nir.ir.node.NIRNode

Threshold node.

This node implements the Heaviside step function:

\[\begin{split}z = \begin{cases} 1 & v > v_{thr} \\ 0 & else \end{cases}\end{split}\]
threshold: numpy.ndarray#
input_type: Optional[Dict[str, numpy.ndarray]]#
output_type: Optional[Dict[str, numpy.ndarray]]#
metadata: Dict[str, Any]#
__post_init__()#
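Elementwise, the equation above is a strict-inequality comparison; in numpy it reduces to a one-liner (the example values are arbitrary):

```python
import numpy as np

v = np.array([-0.2, 0.5, 1.3])
v_threshold = 0.5

# z = 1 where v > v_threshold, else 0; the inequality is strict,
# so a value exactly at the threshold does not fire.
z = (v > v_threshold).astype(float)  # → [0., 0., 1.]
```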
nir.str2NIRNode(type: str) NIRNode#
Parameters

type (str) –

Return type

NIRNode

nir.dict2NIRNode(data_dict: Dict[str, Any]) NIRNode#

Assumes data_dict[“type”] exists and corresponds to a subclass of NIRNode.

Other items should match the fields of the corresponding NIRNode subclass, unless that subclass provides its own from_dict. Any extra items are rejected and should be removed before calling this function.

Parameters

data_dict (Dict[str, Any]) –

Return type

NIRNode

nir.read(filename: Union[str, pathlib.Path]) nir.NIRGraph#

Load a NIR graph from an HDF5 file.

Parameters

filename (Union[str, pathlib.Path]) –

Return type

nir.NIRGraph

nir.write(filename: Union[str, pathlib.Path, io.RawIOBase], graph: nir.NIRNode) None#

Write a NIR graph to an HDF5 file.

Parameters
  • filename (Union[str, pathlib.Path, io.RawIOBase]) –

  • graph (nir.NIRNode) –

Return type

None