A step-by-step guide to integrating ONNX models into a Quarto blog post.
Categories: ai, quarto, meta

Learn how to use ONNX models in your Quarto blog with this comprehensive guide.

Author: Shon Czinner
Published: May 8, 2026
To get ONNX models working in your Quarto blog, you first need a model. In my case I'm exporting one from PyTorch. This requires installing `torch` for the model, `onnxscript` and `onnx` for exporting it, and `onnxruntime` for loading the exported model back into Python to test it out.
To get the model into your Quarto blog, you need the model file itself plus the files that load it and display/interact with it. Make sure you list all of them as resources in the YAML block at the top of the post, so Quarto copies them into the rendered site.
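For example, a front-matter block listing the files used in this post might look like the following (the title is illustrative; `resources` is the Quarto option that copies extra files alongside the rendered page):

```yaml
---
title: "ONNX in a Quarto blog post"
resources:
  - model.onnx
  - incrementbutton.html
  - loadmodel.html
---
```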
Define the model

```python
import torch
import torch.nn as nn
import onnxruntime as ort
import numpy as np

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc1 = nn.Linear(10, 20)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(20, 1)

    def forward(self, x):
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        return x
```
Export the model
```python
model = MyModel()
model.eval()  # Set the model to evaluation mode
torch.onnx.export(model, torch.randn(1, 10), "model.onnx", opset_version=20);
```
```
[torch.onnx] Obtain model graph for `MyModel([...]` with `torch.export.export(..., strict=False)`...
[torch.onnx] Obtain model graph for `MyModel([...]` with `torch.export.export(..., strict=False)`... ✅
[torch.onnx] Run decompositions...
[torch.onnx] Run decompositions... ✅
[torch.onnx] Translate the graph into ONNX...
[torch.onnx] Translate the graph into ONNX... ✅
[torch.onnx] Optimize the ONNX graph...
[torch.onnx] Optimize the ONNX graph... ✅
```
````python
from IPython.display import Markdown, HTML

with open("incrementbutton.html", "r") as f:
    raw_code = f.read()

# Using Markdown to wrap the code in a syntax-highlighted block
display(Markdown(f"```html\n{raw_code}\n```"))
````
````python
with open("loadmodel.html", "r") as f:
    model_code = f.read()

# Using Markdown to wrap the code in a syntax-highlighted block
display(Markdown(f"```html\n{model_code}\n```"))
````