This is an alternative implementation of the TDNN layer proposed by Waibel et al. [1]. The main difference from other implementations is that it exploits the dilation argument of PyTorch's Conv1d, making it considerably faster than other popular implementations such as SiddGururani's PyTorch-TDNN.
# Create a TDNN layer
layer_context = [-2, 0, 2]
input_n_feat = previous_layer_n_feat
tdnn_layer = TDNN(context=layer_context, input_channels=input_n_feat, output_channels=512, full_context=False)
# Run a forward pass; batch.shape = [BATCH_SIZE, INPUT_CHANNELS, SEQUENCE_LENGTH]
out = tdnn_layer(batch)
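
To illustrate the idea behind the speed-up, here is a minimal sketch (not the repository's exact implementation) of how an evenly spaced context such as [-2, 0, 2] can be mapped onto nn.Conv1d: the number of context offsets becomes the kernel size and the spacing between offsets becomes the dilation, so a single convolution covers the whole context window.

import torch
import torch.nn as nn

class SimpleTDNN(nn.Module):
    """Hypothetical minimal TDNN layer built on Conv1d dilation (for illustration only)."""
    def __init__(self, input_channels, output_channels, context):
        super().__init__()
        # For an evenly spaced context, e.g. [-2, 0, 2], the dilation is the
        # step between offsets (2) and the kernel size is the number of offsets (3),
        # giving a receptive field of dilation * (kernel_size - 1) + 1 = 5 frames.
        dilation = context[1] - context[0] if len(context) > 1 else 1
        self.conv = nn.Conv1d(input_channels, output_channels,
                              kernel_size=len(context), dilation=dilation)

    def forward(self, x):
        # x: [BATCH_SIZE, INPUT_CHANNELS, SEQUENCE_LENGTH]
        return torch.relu(self.conv(x))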