Neural Networks in R Remain Viable in 2026

The current state of R for deep learning: tools, workflows, and practical approaches

An overview of R’s neural network capabilities in 2026, featuring the new {kindling} package for simplified torch modeling
R
machine-learning
deep-learning
torch
tensorflow
keras
tidymodels
kindling
neural-networks
Author

Joshua Marie

Published

January 10, 2026

1 Introduction

Python dominates the AI/DL space (Hugging Face, diffusion models, large-scale RL), and deploying those models into production at scale is well supported. However, many data scientists, statisticians, bioinformaticians, and analysts continue working in R because:

  • Data naturally lives in tidy data frames, keeping preprocessing, exploration, and visualization in one language
  • tidymodels provides a consistent, production-ready modeling grammar
  • torch now offers excellent native GPU support without requiring Python installation
  • Strong deployment options exist (Shiny, Posit Connect/Workbench, plumber APIs, vetiver)
  • Statistical modeling and deep learning can coexist seamlessly in the same session

2 Why Neural Networks in R?

R typically isn’t the best choice for training 100B-parameter LLMs or real-time video models, but it excels at:

  • Tabular deep learning
  • Research prototyping where statistics and interpretability matter
  • Teaching deep learning in statistics-focused curricula

Before diving into technical details, let’s address a key question: why choose R for deep learning when Python has such a robust ecosystem?

For many R users, the answer is straightforward: workflow integration.

2.1 The Current R Neural Network Stack in 2026

| Package | Backend | Python dep? | tidymodels? | Best for |
|---|---|---|---|---|
| torch | LibTorch | No | Partial | Full control, custom models |
| luz | torch | No | No | High-level training loops |
| brulee | torch | No | Yes | Quick MLPs on tabular data |
| tabnet | torch | No | Yes | Tabular data (TabNet architecture) |
| keras / tensorflow | TF/Python | Yes | Partial | Familiar Keras API |
| kindling | torch | No | Yes | Full architecture tuning |

torch is clearly the most actively developed native option in 2026. brulee remains the go-to for simple multilayer perceptrons (MLPs) within tidymodels. kindling aims to combine these strengths, offering more flexible layer configuration than brulee while remaining fully tidymodels-native.
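
Since brulee is the quickest on-ramp, here is what a fit can look like. A minimal sketch, assuming the {brulee} package and its torch backend are installed; the hyperparameter values are illustrative, not recommendations:

```r
library(brulee)

set.seed(1)
fit = brulee_mlp(
    Species ~ .,          # classify iris species from the four measurements
    data = iris,
    hidden_units = 16,    # one hidden layer of 16 units
    epochs = 50,
    learn_rate = 0.01
)

predict(fit, head(iris))  # class predictions for the first rows
```

`predict()` on a brulee fit returns a tibble; passing `type = "prob"` gives class probabilities instead of hard labels.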

3 The R Neural Networks Ecosystem in 2026

Let’s examine the available tools.

3.1 Core Frameworks

These are the primary frameworks for building neural network models.

3.1.1 torch for R

The torch package provides R bindings to LibTorch (PyTorch’s C++ backend), offering:

  1. Full tensor operations with automatic differentiation
  2. GPU acceleration via CUDA
  3. Custom neural network architectures via nn_module
  4. Pre-trained models and transfer learning capabilities
box::use(
    torch[randn = torch_randn, torch_matmul]
)

x = randn(c(5, 3))      # 5 x 3 random tensor
y = randn(c(3, 2))      # 3 x 2 random tensor
z = torch_matmul(x, y)  # matrix product, shape 5 x 2
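
Point 3 above, custom architectures, is where `nn_module()` comes in. A minimal sketch using the standard torch API; the module name and layer sizes are arbitrary:

```r
box::use(
    torch[nn_module, nn_linear, nnf_relu]
)

# A two-layer MLP: linear -> ReLU -> linear
mlp = nn_module(
    "two_layer_mlp",
    initialize = function(in_dim, hidden_dim, out_dim) {
        self$fc1 = nn_linear(in_dim, hidden_dim)
        self$fc2 = nn_linear(hidden_dim, out_dim)
    },
    forward = function(x) {
        x |> self$fc1() |> nnf_relu() |> self$fc2()
    }
)

net = mlp(in_dim = 4, hidden_dim = 16, out_dim = 3)
```

Calling the generator (`mlp(...)`) instantiates the network; calling the instance (`net(x)`) runs the forward pass.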

3.1.2 Keras/TensorFlow

While tensorflow and keras remain available, they have become less prominent in R than torch in recent years, and they still require a Python installation. The torch ecosystem offers tighter R integration, more active development, and no Python dependency. Keras, however, still beats torch in terms of composability.

Example usage:

box::use(
    keras[
        model_sequential = keras_model_sequential, 
        to_categorical, 
        layer_dense, 
        optimizer_adam, 
        compile, fit, 
        evaluate
    ],
    stats[predict]
)

x = as.matrix(iris[, 1:4])  # four numeric predictors
y = {as.integer(iris$Species) - 1} |> to_categorical(num_classes = 3)  # one-hot labels

model = 
    model_sequential() |>
    layer_dense(
        units = 16,
        activation = "relu",
        input_shape = ncol(x)
    ) |>
    layer_dense(
        units = 16,
        activation = "relu"
    ) |>
    layer_dense(
        units = 3,
        activation = "softmax"
    )

model |>
    compile(
        optimizer = optimizer_adam(learning_rate = 0.01),
        loss = "categorical_crossentropy",
        metrics = "accuracy"
    )

history =
    model |>
    fit(
        x,
        y,
        epochs = 50,
        batch_size = 16,
        validation_split = 0.2,
        verbose = 1
    )

model |>
    evaluate(x, y)

pred = predict(model, x)        # class probabilities, one row per observation

class_pred = max.col(pred) - 1  # most likely class, back to 0-based labels

4 High-Level Interfaces

This is where things get interesting. Raw torch requires substantial boilerplate—defining modules, training loops, data loaders, etc. Several packages have emerged to simplify this workflow:

  • luz: A high-level interface for torch with built-in training loops
  • tabnet: Implements TabNet architecture specifically for tabular data
  • brulee: High-level modeling functions with torch
  • mlr3torch: Bridges torch deep learning into the mlr3 ecosystem
  • kindling: (My package!) Simplified torch neural network modeling with reduced boilerplate
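
Of these, luz is the most general: it collapses torch’s manual training loop into a `setup()`/`fit()` chain. A minimal sketch, assuming `mlp` is an `nn_module()` generator (such as the one in Section 3.1.1) and `train_dl` is a torch dataloader, neither of which is defined here:

```r
box::use(
    torch[optim_adam, nn_cross_entropy_loss],
    luz[setup, set_hparams, fit]
)

fitted = mlp |>
    setup(
        loss = nn_cross_entropy_loss(),  # multiclass classification loss
        optimizer = optim_adam
    ) |>
    set_hparams(in_dim = 4, hidden_dim = 16, out_dim = 3) |>
    fit(train_dl, epochs = 20)           # luz handles batching, devices, metrics
```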

5 Introducing {kindling}: Torch with Less Typing

Here’s the problem I aimed to solve: torch is powerful for deep learning and scientific computing but verbose and less expressive (no offense), while tidymodels has an ergonomic API (inherited from the tidyverse) for traditional ML but limited deep learning support.

The kindling package bridges this gap by enabling you to:

  1. Customize the number of hidden layers
  2. Specify activation functions for each layer, along with their parameters (e.g. lambd in nnf_softshrink())
  3. Tune neural network architecture while incorporating both capabilities above

I built this package for convenience, consistency, and ease of modeling. The current implementation is limited to data frames; I plan to extend it to image datasets.

Note: Core Goal

The primary goal of this package is to reduce boilerplate and typing when modeling neural networks with torch.

5.1 Philosophy

kindling is powered by R’s metaprogramming and non-standard evaluation (NSE): an embedded domain-specific language (DSL) for specifying activation functions drives code generation. Under the hood, it generates torch::nn_module() expressions based on your specifications and then wraps the training logic into reusable functions.

The *_kindling() functions expose everything through the familiar parsnip interface, integrating seamlessly with tidymodels. This package makes neural network modeling with torch easier, more concise, and more readable (with tidymodels, of course).

The package gives you flexibility to work at whatever abstraction level suits your task, from raw torch code to fully integrated tidymodels workflows.

5.2 Installation

You can install this package from:

  1. CRAN (either command works)

    install.packages("kindling")
    pak::pkg_install("kindling")
  2. GitHub (either command works)

    remotes::install_github("joshuamarie/kindling")
    pak::pak("joshuamarie/kindling")

5.3 Future Plans

As of January 13, 2026, here are my current plans for future versions of this package:

  1. Deeper mlr3 integration.
  2. Support for time series modeling with customized algorithms.
  3. Easier addition of new neural network architectures, e.g. deep belief networks.
