Shortcut to create a linear layer with a nonlinear activation function

nn_nonlinear(in_features, out_features, bias = TRUE, activation = nn_relu())

Arguments

in_features

(integer) size of each input sample

out_features

(integer) size of each output sample

bias

(logical) If set to FALSE, the layer will not learn an additive bias. Default: TRUE

activation

(nn_module) A nonlinear activation function (default: torch::nn_relu())
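
Details

nn_nonlinear() applies the activation to the output of a standard linear layer. A minimal sketch of an equivalent module, assuming the usual torch::nn_module() conventions (the actual implementation may differ in detail):

nn_nonlinear <- nn_module(
  "nn_nonlinear",
  initialize = function(in_features, out_features, bias = TRUE,
                        activation = nn_relu()) {
    # the learnable affine transform: y = Wx + b
    self$linear <- nn_linear(in_features, out_features, bias = bias)
    # the nonlinearity applied elementwise to the linear output
    self$activation <- activation
  },
  forward = function(x) {
    self$activation(self$linear(x))
  }
)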

Examples

library(torch)

net <- nn_nonlinear(10, 1)
x   <- torch_tensor(matrix(1, nrow = 2, ncol = 10))
# with the default ReLU activation, negative pre-activations are clipped to 0
net(x)
#> torch_tensor
#>  0
#>  0
#> [ CPUFloatType{2,1} ][ grad_fn = <ReluBackward0> ]
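
Any nn_module activation can be supplied. For illustration (nn_tanh() is a standard torch activation; net_tanh is our name for the example):

# same layer shape, but with a tanh activation and no additive bias
net_tanh <- nn_nonlinear(10, 1, bias = FALSE, activation = nn_tanh())
net_tanh(x)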