# Continuous integration
Deno's built-in tools make it easy to set up Continuous Integration (CI) pipelines for your projects. Testing, linting, and formatting your code can all be done with the corresponding commands `deno test`, `deno lint`, and `deno fmt`. In addition, you can generate code coverage reports from test results with `deno coverage` in pipelines.
## Setting up a basic pipeline
You can set up basic pipelines for Deno projects in GitHub Actions. The concepts explained on this page largely apply to other CI providers as well, such as Azure Pipelines, CircleCI or GitLab.
Building a pipeline for Deno generally starts with checking out the repository and installing Deno:
```yaml
name: Build

on: push

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: denoland/setup-deno@v2
        with:
          deno-version: v2.x # Run with latest stable Deno.
```
To expand the workflow, add any of the `deno` subcommands that you might need:
```yaml
# Check if the code is formatted according to Deno's default
# formatting conventions.
- run: deno fmt --check

# Scan the code for syntax errors and style issues. If you want to use a
# custom linter configuration you can add a configuration file with --config.
- run: deno lint

# Run all test files in the repository and collect code coverage. The example
# runs with all permissions, but it is recommended to run with the minimal
# permissions your program needs (for example --allow-read).
- run: deno test --allow-all --coverage=cov/

# This generates a report from the coverage collected by `deno test --coverage`.
# It is stored as a .lcov file which integrates well with services such as
# Codecov, Coveralls and Travis CI.
- run: deno coverage --lcov cov/ > cov.lcov
```
## Cross-platform workflows
As a Deno module maintainer, you probably want to know that your code works on all of the major operating systems in use today: Linux, macOS, and Windows. A cross-platform workflow can be achieved by running a matrix of parallel jobs, each one running the build on a different OS:
```yaml
jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    steps:
      - uses: actions/checkout@v4
      - uses: denoland/setup-deno@v2
        with:
          deno-version: v2.x
      - run: deno test --allow-all --coverage=cov/
```
Note: GitHub Actions has a known issue with handling Windows-style line endings (CRLF). This may cause issues when running `deno fmt` in a pipeline with jobs that run on Windows. To prevent this, configure the Actions runner to use Linux-style line endings before running the `actions/checkout@v4` step:
```shell
git config --system core.autocrlf false
git config --system core.eol lf
```
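In a GitHub Actions workflow, these two commands can live in a shell step placed before the checkout step. A minimal sketch (the step name is chosen for illustration):

```yaml
steps:
  # Force Linux-style line endings before the repository is checked out,
  # so that `deno fmt --check` sees LF files on every OS in the matrix.
  - name: Set up line endings
    run: |
      git config --system core.autocrlf false
      git config --system core.eol lf
  - uses: actions/checkout@v4
  - uses: denoland/setup-deno@v2
    with:
      deno-version: v2.x
```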
If you are working with experimental or unstable Deno APIs, you can include a matrix job running the canary version of Deno. This can help to spot breaking changes early on:
```yaml
jobs:
  build:
    runs-on: ${{ matrix.os }}
    continue-on-error: ${{ matrix.canary }} # Continue in case the canary run does not succeed
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        deno-version: [v2.x]
        canary: [false]
        include:
          - deno-version: canary
            os: ubuntu-latest
            canary: true
```
## Speeding up Deno pipelines
### Reducing repetition
In cross-platform runs, certain steps of a pipeline do not necessarily need to run for each OS. For example, generating the same test coverage report on Linux, macOS, and Windows is redundant. You can use the `if` conditional keyword of GitHub Actions in these cases. The example below shows how to run the code coverage generation and upload steps only on the `ubuntu-latest` (Linux) runner:
```yaml
- name: Generate coverage report
  if: matrix.os == 'ubuntu-latest'
  run: deno coverage --lcov cov/ > cov.lcov

- name: Upload coverage to Coveralls.io
  if: matrix.os == 'ubuntu-latest'
  # Any code coverage service can be used, Coveralls.io is used here as an example.
  uses: coverallsapp/github-action@master
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }} # Generated by GitHub.
    path-to-lcov: cov.lcov
```
### Caching dependencies
As a project grows in size, more and more dependencies tend to be included. Deno will download these dependencies during testing and if a workflow is run many times a day, this can become a time-consuming process. A common solution to speed things up is to cache dependencies so that they do not need to be downloaded anew.
Deno stores dependencies locally in a cache directory. In a pipeline, the cache can be preserved between workflows by turning on the `cache: true` option on `denoland/setup-deno`:
```yaml
steps:
  - uses: actions/checkout@v4
  - uses: denoland/setup-deno@v2
    with:
      cache: true
```
At first, when this workflow runs, the cache is still empty and commands like `deno test` will still have to download dependencies. But when the job succeeds, the cached dependencies are saved and any subsequent runs can restore them from the cache instead of re-downloading.
To demonstrate, let's say you have a project that uses the logger from `@std/log`:
```json
{
  "imports": {
    "@std/log": "jsr:@std/log@0.224.5"
  }
}
```
To bump this version, update the dependency, then reload the cache and update the lockfile locally:

```shell
deno install --reload --frozen=false
```
You should see changes in the lockfile's contents after running this. Once the change is committed and run through the pipeline, a new cache is created and used by any runs that follow.
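For example, after bumping the dependency (the newer version number below is purely illustrative), the import map entry changes, the lockfile hashes change with it, and the pipeline produces a new cache key:

```json
{
  "imports": {
    "@std/log": "jsr:@std/log@0.224.6"
  }
}
```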
By default, the cache is automatically keyed by:

- the GitHub `job_id`
- the runner OS and architecture
- a hash of the `deno.lock` files in the project
It is possible to customize the default hash (`${{ hashFiles('**/deno.lock') }}`) used as part of the cache key via the `cache-hash` input:
```yaml
- uses: denoland/setup-deno@v2
  with:
    # Setting `cache-hash` implies `cache: true` and will replace
    # the default cache-hash of `${{ hashFiles('**/deno.lock') }}`.
    cache-hash: ${{ hashFiles('**/deno.json') }}
```