# Deno Documentation - Full Content
> This document contains the full content of the Deno documentation website.
# Compressing response bodies
URL: https://docs.deno.com/deploy/api/compression
Compressing the response body to save bandwidth is a common practice. To take
some work off your shoulders, we built this capability directly into Deploy.
Deno Deploy supports brotli and gzip compression. Compression is applied when
the following conditions are met:
1. The request to your deployment has [`Accept-Encoding`][accept-encoding]
header set to either `br` (brotli) or `gzip`.
2. The response from your deployment includes the [`Content-Type`][content-type]
header.
3. The provided content type is compressible; we use
[this database](https://github.com/jshttp/mime-db/blob/master/db.json) to
determine if the content type is compressible.
4. The response body size is greater than 20 bytes.
When Deploy compresses the response body, it sets the `Content-Encoding: gzip`
or `Content-Encoding: br` header on the response, depending on the compression
algorithm used.
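For example, a response like the one below meets conditions 2-4, so Deploy
would compress it whenever the incoming request's `Accept-Encoding` allows (a
minimal sketch; the payload is illustrative):
```ts
Deno.serve(() =>
  // The body has a compressible content type (application/json) and is
  // larger than 20 bytes, so Deploy applies brotli or gzip based on the
  // request's Accept-Encoding and sets the matching Content-Encoding header.
  new Response(JSON.stringify({ greeting: "Hello from Deno Deploy!" }), {
    headers: { "content-type": "application/json" },
  })
);
```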
### When is compression skipped?
Deno Deploy skips compression if:
- The response has a [`Content-Encoding`][content-encoding] header.
- The response has a [`Content-Range`][content-range] header.
- The response's [`Cache-Control`][cache-control] header has the
[`no-transform`][no-transform] value (e.g.
`cache-control: public, no-transform`).
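Conversely, if you want to make sure a particular response is never compressed,
the last condition gives you an opt-out via `no-transform` (a minimal sketch):
```ts
Deno.serve(() =>
  new Response("These bytes should be delivered exactly as written.", {
    headers: {
      "content-type": "text/plain",
      // `no-transform` tells Deploy (and any intermediaries) not to modify
      // the body, so compression is skipped.
      "cache-control": "public, no-transform",
    },
  })
);
```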
### What happens to my `Etag` header?
When you set an `Etag` header on the response, we convert the header value to a
weak `Etag` if we apply compression to your response body. If it is already a
weak `Etag`, we leave the header untouched.
[accept-encoding]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Encoding
[cache-control]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control
[content-encoding]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding
[content-type]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Type
[no-transform]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control#other
[content-range]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Range
---
# Dynamic import
URL: https://docs.deno.com/deploy/api/dynamic-import
Deno Deploy supports [dynamic import] but with some limitations. This page
outlines these limitations.
### Specifiers must be statically determined string literals
With ordinary dynamic imports, specifiers don't need to be determined at build
time, so all of the following forms are valid:
```ts title="Valid dynamic imports in Deno CLI"
// 1. Statically determined string literal
await import("jsr:@std/assert");
// 2. Statically determined, but via variable
const specifier = "jsr:@std/assert";
await import(specifier);
// 3. Statically determined, but template literal
const stdModuleName = "path";
await import(`jsr:@std/${stdModuleName}`);
// 4. Dynamically determined
const rand = Math.random();
const mod = rand < 0.5 ? "npm:cowsay" : "npm:node-emoji";
await import(mod);
```
In Deno Deploy, however, specifiers must be string literals with no string
interpolation. So among the four examples above, only the first one works in
Deno Deploy.
```ts title="Only static string literals work in Deno Deploy"
// 1. ✅ Works fine on Deno Deploy
await import("jsr:@std/assert");
// 2. ❌ Doesn't work on Deno Deploy
// because what's passed to `import` is a variable
const specifier = "jsr:@std/streams";
await import(specifier);
// 3. ❌ Doesn't work on Deno Deploy
// because this has an interpolation
const stdModuleName = "path";
await import(`jsr:@std/${stdModuleName}`);
// 4. ❌ Doesn't work on Deno Deploy
// because it's dynamic
const rand = Math.random();
const mod = rand < 0.5 ? "npm:cowsay" : "npm:node-emoji";
await import(mod);
```
### One exception - dynamic specifiers work for same project files
Specifiers that are dynamically determined are supported if target files
(modules) are included in the same project.
```ts title="Dynamic specifiers work for files in the same project"
// ✅ Works fine on Deno Deploy
await import("./my_module1.ts");
// ✅ Works fine on Deno Deploy
const rand = Math.random();
const modPath = rand < 0.5 ? "dir1/moduleA.ts" : "dir2/dir3/moduleB.ts";
await import(`./${modPath}`);
```
Note that template literals starting with `./` tell the module resolver that the
target module is in the same project. Conversely, if a specifier does not start
with `./`, the possible target modules will not be included in the resulting
[eszip], causing dynamic imports to fail at runtime, even if the final evaluated
specifier starts with `./`.
```ts
// ❌ Doesn't work because the analyzer can't statically determine if the
// specifier starts with `./` or not in this case.
// Compare this to the previous example. Only difference is whether to put
// `./` in the template literal or in the variable.
const rand = Math.random();
const modPath = rand < 0.5 ? "./dir1/moduleA.ts" : "./dir2/dir3/moduleB.ts";
await import(modPath);
```
We will consider if we can relax this constraint in the future.
:::tip What is eszip?
When you do a new deployment on Deno Deploy, the system analyzes your code,
constructs the module graph by recursively traversing it, and bundles all the
dependencies into a single file. We call this
[eszip](https://github.com/denoland/eszip). Since its creation is done
completely statically, dynamic import capabilities are limited on Deno Deploy.
:::
### Data URLs
A [Data URL] can be used as the specifier passed to a dynamic import.
```ts title="Static data URL"
// ✅ Works fine on Deno Deploy
const { val } = await import(
  "data:text/javascript,export const val = 42;"
);
console.log(val); // -> 42
```
For data URLs, fully dynamic data is supported.
```ts title="Dynamic data URL"
function generateDynamicDataUrl() {
  const moduleStr = `export const val = ${Math.random()};`;
  return `data:text/javascript,${moduleStr}`;
}
// ✅ Works fine on Deno Deploy
const { val } = await import(generateDynamicDataUrl());
console.log(val); // -> Random value is printed
```
By applying this technique to JavaScript code fetched from the web, you can
even simulate a true dynamic import:
```js title="external.js"
export const name = "external.js";
```
```ts title="Dynamic data URL from fetched source"
import { assert } from "jsr:@std/assert/assert";
const res = await fetch(
  "https://gist.githubusercontent.com/magurotuna/1cacb136f9fd6b786eb8bbad92c8e6d6/raw/56a96fd0d246fd3feabbeecea6ea1155bdf5f50d/external.js",
);
assert(res.ok);
const src = await res.text();
const dataUrl = `data:application/javascript,${src}`;
// ✅ Works fine on Deno Deploy
const { name } = await import(dataUrl);
console.log(`Hello from ${name}`); // -> "Hello from external.js"
```
However, note that a data URL given to `import` has to be JavaScript; passing
TypeScript throws a [TypeError] at runtime.
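A small sketch of that failure mode (the module bodies are illustrative):
```ts
// ✅ A JavaScript data URL works on Deno Deploy.
await import("data:text/javascript,export const ok = true;");
// ❌ A TypeScript data URL is rejected at runtime on Deno Deploy.
try {
  await import("data:application/typescript,export const n: number = 1;");
} catch (err) {
  console.log(err instanceof TypeError); // -> true
}
```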
[dynamic import]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/import
[eszip]: https://github.com/denoland/eszip
[Data URL]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URLs
[TypeError]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/TypeError
---
# API Reference
URL: https://docs.deno.com/deploy/api/
This is a reference for runtime APIs available on Deno Deploy. These APIs are
very similar to the standard [runtime API](/runtime/manual/runtime), but some
are not available in the same way, given that Deno Deploy is a serverless
environment.
Please use this section of the documentation to explore available APIs on Deno
Deploy.
### Web APIs
- [`console`](https://developer.mozilla.org/en-US/docs/Web/API/console)
- [`atob`](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/atob)
- [`btoa`](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/btoa)
- [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API)
- `fetch`
- `Request`
- `Response`
- `URL`
- `File`
- `Blob`
- [TextEncoder](https://developer.mozilla.org/en-US/docs/Web/API/TextEncoder)
- [TextDecoder](https://developer.mozilla.org/en-US/docs/Web/API/TextDecoder)
- [TextEncoderStream](https://developer.mozilla.org/en-US/docs/Web/API/TextEncoderStream)
- [TextDecoderStream](https://developer.mozilla.org/en-US/docs/Web/API/TextDecoderStream)
- [Performance](https://developer.mozilla.org/en-US/docs/Web/API/Performance)
- [Web Crypto API](https://developer.mozilla.org/en-US/docs/Web/API/Crypto)
- `randomUUID()`
- `getRandomValues()`
- [SubtleCrypto](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto)
- [WebSocket API](https://developer.mozilla.org/en-US/docs/Web/API/WebSocket)
- [Timers](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/setTimeout)
(`setTimeout`, `clearTimeout`, and `setInterval`)
- [Streams API](https://developer.mozilla.org/en-US/docs/Web/API/Streams_API)
- `ReadableStream`
- `WritableStream`
- `TransformStream`
- [URLPattern API](https://developer.mozilla.org/en-US/docs/Web/API/URLPattern)
- [Import Maps](https://docs.deno.com/runtime/manual/basics/import_maps/)
- Note: import maps are currently only available via
[deployctl](https://github.com/denoland/deployctl) or
[deployctl GitHub Action](https://github.com/denoland/deployctl/blob/main/action/README.md)
workflows.
### Deno APIs
> Note: only stable APIs of Deno are made available in Deploy.
- [`Deno.env`](https://docs.deno.com/api/deno/~/Deno.env) - Interact with
environment variables (secrets); see the example after this list.
- `get(key: string): string | undefined` - get the value of an environment
variable.
- `toObject(): { [key: string]: string }` - get all environment variables as
an object.
- [`Deno.connect`](https://docs.deno.com/api/deno/~/Deno.connect) - Connect to
TCP sockets.
- [`Deno.connectTls`](https://docs.deno.com/api/deno/~/Deno.connectTls) -
Connect to TCP sockets using TLS.
- [`Deno.startTls`](https://docs.deno.com/api/deno/~/Deno.startTls) - Start TLS
handshake from an existing TCP connection.
- [`Deno.resolveDns`](https://docs.deno.com/api/deno/~/Deno.resolveDns) - Make
DNS queries
- File system API
- [`Deno.cwd`](https://docs.deno.com/api/deno/~/Deno.cwd) - Get the current
working directory
- [`Deno.readDir`](https://docs.deno.com/api/deno/~/Deno.readDir) - Get
directory listings
- [`Deno.readFile`](https://docs.deno.com/api/deno/~/Deno.readFile) - Read a
file into memory
- [`Deno.readTextFile`](https://docs.deno.com/api/deno/~/Deno.readTextFile) -
Read a text file into memory
- [`Deno.open`](https://docs.deno.com/api/deno/~/Deno.open) - Open a file for
streaming reading
- [`Deno.stat`](https://docs.deno.com/api/deno/~/Deno.stat) - Get file system
entry information
- [`Deno.lstat`](https://docs.deno.com/api/deno/~/Deno.lstat) - Get file
system entry information without following symlinks
- [`Deno.realPath`](https://docs.deno.com/api/deno/~/Deno.realPath) - Get the
real path of a file after resolving symlinks
- [`Deno.readLink`](https://docs.deno.com/api/deno/~/Deno.readLink) - Get the
target path for the given symlink
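As an example of the `Deno.env` API listed above, a handler might read its
configuration like this (a minimal sketch; the `DATABASE_URL` variable is
hypothetical):
```ts
// Read a single environment variable; `get` returns undefined if unset.
const dbUrl = Deno.env.get("DATABASE_URL");

// Snapshot all environment variables as a plain object.
const allVars = Deno.env.toObject();

Deno.serve(() =>
  new Response(
    `DATABASE_URL is ${dbUrl ? "set" : "not set"}; ` +
      `${Object.keys(allVars).length} variables are defined`,
  )
);
```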
## Future support
In the future, these APIs will also be added:
- [Cache API](https://developer.mozilla.org/en-US/docs/Web/API/Cache)
- UDP API:
- `Deno.connectDatagram` for outbound UDP sockets
- Customizable `fetch` options using `Deno.createHttpClient`
## Limitations
Just like the Deno CLI, we do not implement the `__proto__` object field as
specified in ECMAScript Annex B.
---
# BroadcastChannel
URL: https://docs.deno.com/deploy/api/runtime-broadcast-channel
In Deno Deploy, code is run in different data centers around the world in order
to reduce latency by servicing requests at the data center nearest to the
client. In the browser, the
[`BroadcastChannel`](https://developer.mozilla.org/en-US/docs/Web/API/Broadcast_Channel_API)
API allows different tabs with the same origin to exchange messages. In Deno
Deploy, the BroadcastChannel API provides a communication mechanism between the
various instances; a simple message bus that connects the various Deploy
instances worldwide.
## Constructor
The `BroadcastChannel()` constructor creates a new `BroadcastChannel` instance
and connects to (or creates) the provided channel.
```ts
let channel = new BroadcastChannel(channelName);
```
#### Parameters
| name | type | description |
| ----------- | -------- | --------------------------------------------------------- |
| channelName | `string` | The name for the underlying broadcast channel connection. |
The return type of the constructor is a `BroadcastChannel` instance.
## Properties
| name | type | description |
| ---------------- | ---------------------- | ------------------------------------------------------------------------------------------------------------ |
| `name` | `string` | The name of the underlying broadcast channel. |
| `onmessage` | `function` (or `null`) | The function that's executed when the channel receives a new message ([`MessageEvent`][messageevent]). |
| `onmessageerror` | `function` (or `null`) | The function that's executed when the arrived message cannot be deserialized to a JavaScript data structure. |
## Methods
| name | description |
| ---------------------- | ---------------------------------------------------------------------------------------------------------------------------------- |
| `close()` | Close the connection to the underlying channel. After closing, you can no longer post messages to the channel. |
| `postMessage(message)` | Post a message to the underlying channel. The message can be a string, an object literal, a number, or any kind of [`Object`][object]. |
`BroadcastChannel` extends [`EventTarget`][eventtarget], which allows you to use
methods of `EventTarget` like `addEventListener` and `removeEventListener` on an
instance of `BroadcastChannel`.
## Example: Update an in-memory cache across instances
One use case for a message bus like the one enabled by `BroadcastChannel` is
updating an in-memory cache of data between isolates running in different data
centers across the network. In the example below, we show how you can configure
a simple server that uses `BroadcastChannel` to synchronize state across all
running instances of the server.
```ts
import { Hono } from "jsr:@hono/hono";

// In-memory cache of messages
const messages: string[] = [];

// A BroadcastChannel used by all isolates
const channel = new BroadcastChannel("all_messages");

// When a new message comes in from other instances, add it
channel.onmessage = (event: MessageEvent) => {
  messages.push(event.data);
};

// Create a server to add and retrieve messages
const app = new Hono();

// Add a message to the list
app.get("/send", (c) => {
  // New messages can be added by including a "message" query param
  const message = c.req.query("message");
  if (message) {
    messages.push(message);
    channel.postMessage(message);
  }
  return c.redirect("/");
});

// Get a list of messages
app.get("/", (c) => {
  // Return the current list of messages
  return c.json(messages);
});

Deno.serve(app.fetch);
```
You can test this example yourself on Deno Deploy using
[this playground](https://dash.deno.com/playground/broadcast-channel-example).
[eventtarget]: https://developer.mozilla.org/en-US/docs/Web/API/EventTarget
[messageevent]: https://developer.mozilla.org/en-US/docs/Web/API/MessageEvent
[object]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object
---
# HTTP requests (fetch)
URL: https://docs.deno.com/deploy/api/runtime-fetch
The [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API)
allows you to make outbound HTTP requests in Deno Deploy. It is a web standard
and has the following interfaces:
- `fetch()` - The method that allows you to make outbound HTTP requests
- [`Request`](./runtime-request) - represents a request resource of fetch()
- [`Response`](./runtime-response) - represents a response resource of fetch()
- [`Headers`](./runtime-headers) - represents HTTP Headers of requests and
responses.
This page shows usage for the `fetch()` method. Click the links above to learn
more about the other interfaces.
Fetch also supports fetching from file URLs to retrieve static files. For more
info on static files, see the [filesystem API documentation](./runtime-fs).
## `fetch()`
The `fetch()` method initiates a network request to the provided resource and
returns a promise that resolves after the response is available.
```ts
function fetch(
  resource: Request | string,
  init?: RequestInit,
): Promise<Response>;
```
#### Parameters
| name | type | optional | description |
| -------- | ------------------------------------------------------------- | -------- | ------------------------------------------------------------------ |
| resource | [`Request`](./runtime-request) or [`USVString`][usvstring] | `false` | The resource can either be a request object or a URL string. |
| init | [`RequestInit`](./runtime-request#requestinit) | `true` | The init object lets you apply optional parameters to the request. |
The return type of `fetch()` is a promise that resolves to a
[`Response`](./runtime-response).
## Examples
The Deno Deploy script below makes a `fetch()` request to the GitHub API for
each incoming request, and then returns that response from the handler function.
```ts
async function handler(req: Request): Promise<Response> {
  const resp = await fetch("https://api.github.com/users/denoland", {
    // The init object here has a headers object containing a
    // header that indicates what type of response we accept.
    // We're not specifying the method field since by default
    // fetch makes a GET request.
    headers: {
      accept: "application/json",
    },
  });

  return new Response(resp.body, {
    status: resp.status,
    headers: {
      "content-type": "application/json",
    },
  });
}

Deno.serve(handler);
```
[usvstring]: https://developer.mozilla.org/en-US/docs/Web/API/USVString
---
# File system APIs
URL: https://docs.deno.com/deploy/api/runtime-fs
Deno Deploy supports a limited set of the file system APIs available in Deno.
These file system APIs can access static files from your deployments. Examples
of static files include:
- The files in your GitHub repository, if you deploy via the GitHub integration.
- The entrypoint file in a playground deployment.
The APIs that are available are:
- [Deno.cwd](#deno.cwd)
- [Deno.readDir](#deno.readdir)
- [Deno.readFile](#deno.readfile)
- [Deno.readTextFile](#deno.readtextfile)
- [Deno.open](#deno.open)
- [Deno.stat](#deno.stat)
- [Deno.lstat](#deno.lstat)
- [Deno.realPath](#deno.realpath)
- [Deno.readLink](#deno.readlink)
## Deno.cwd
`Deno.cwd()` returns the current working directory of your deployment. It is
the root directory of your deployment. For example, if you deployed via the
GitHub integration, the current working directory is the root of your GitHub
repository.
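### Example
This example returns the current working directory as the response body (a
minimal sketch):
```js
function handler(_req) {
  return new Response(`The current working directory is ${Deno.cwd()}`);
}

Deno.serve(handler);
```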
## Deno.readDir
`Deno.readDir()` allows you to list the contents of a directory.
The function is fully compatible with
[Deno](https://docs.deno.com/api/deno/~/Deno.readDir).
```ts
function Deno.readDir(path: string | URL): AsyncIterable<Deno.DirEntry>
```
The path can be relative or absolute. It can also be a `file:` URL.
### Example
This example lists the contents of a directory and returns this list as a JSON
object in the response body.
```js
async function handler(_req) {
  // List the posts in the `blog` directory located at the root
  // of the repository.
  const posts = [];
  for await (const post of Deno.readDir(`./blog`)) {
    posts.push(post);
  }

  // Return JSON.
  return new Response(JSON.stringify(posts, null, 2), {
    headers: {
      "content-type": "application/json",
    },
  });
}

Deno.serve(handler);
```
## Deno.readFile
`Deno.readFile()` allows you to read a file fully into memory.
The function definition is similar to
[Deno](https://docs.deno.com/api/deno/~/Deno.readFile), but it doesn't support
[`ReadFileOptions`](https://docs.deno.com/api/deno/~/Deno.ReadFileOptions) for
the time being. Support will be added in the future.
```ts
function Deno.readFile(path: string | URL): Promise<Uint8Array>
```
The path can be relative or absolute. It can also be a `file:` URL.
### Example
This example reads the contents of a file into memory as a byte array, then
returns it as the response body.
```js
async function handler(_req) {
  // Let's read the README.md file available at the root
  // of the repository to explore the available methods.

  // Relative paths are relative to the root of the repository.
  const readmeRelative = await Deno.readFile("./README.md");

  // Absolute paths.
  // The contents of the repository are available at Deno.cwd().
  const readmeAbsolute = await Deno.readFile(`${Deno.cwd()}/README.md`);

  // File URLs are also supported.
  const readmeFileUrl = await Deno.readFile(
    new URL(`file://${Deno.cwd()}/README.md`),
  );

  // Decode the Uint8Array as string.
  const readme = new TextDecoder().decode(readmeRelative);
  return new Response(readme);
}

Deno.serve(handler);
```
> Note: to use this feature, you must link a GitHub repository to your project.
Deno Deploy supports the `Deno.readFile` API to read static assets from the file
system. This is useful for serving static assets such as images, stylesheets,
and JavaScript files. This guide demonstrates how to use this feature.
Imagine the following file structure on a GitHub repository:
```console
├── mod.ts
└── style.css
```
The contents of `mod.ts`:
```ts
async function handleRequest(request: Request): Promise<Response> {
  const { pathname } = new URL(request.url);

  // This is how the server works:
  // 1. A request comes in for a specific asset.
  // 2. We read the asset from the file system.
  // 3. We send the asset back to the client.

  // Check if the request is for style.css.
  if (pathname.startsWith("/style.css")) {
    // Read the style.css file from the file system.
    const file = await Deno.readFile("./style.css");
    // Respond to the request with the style.css file.
    return new Response(file, {
      headers: {
        "content-type": "text/css",
      },
    });
  }

  return new Response(
    `<html>
      <head>
        <link rel="stylesheet" href="style.css" />
      </head>
      <body>
        <h1>Example</h1>
      </body>
    </html>`,
    {
      headers: {
        "content-type": "text/html; charset=utf-8",
      },
    },
  );
}

Deno.serve(handleRequest);
```
The path provided to the
[`Deno.readFile`](https://docs.deno.com/api/deno/~/Deno.readFile) API is
relative to the root of the repository. You can also specify absolute paths, if
they are inside `Deno.cwd()`.
## Deno.readTextFile
This function is similar to [`Deno.readFile`](#deno.readfile) except it decodes
the file contents as a UTF-8 string.
```ts
function Deno.readTextFile(path: string | URL): Promise<string>
```
### Example
This example reads a text file into memory and returns the contents as the
response body.
```js
async function handler(_req) {
  const readme = await Deno.readTextFile("./README.md");
  return new Response(readme);
}

Deno.serve(handler);
```
## Deno.open
`Deno.open()` allows you to open a file, returning a file handle. This file
handle can then be used to read the contents of the file. See
[`Deno.File`](#deno.file) for information on the methods available on the file
handle.
The function definition is similar to
[Deno](https://docs.deno.com/api/deno/~/Deno.open), but it doesn't support
[`OpenOptions`](https://docs.deno.com/api/deno/~/Deno.OpenOptions) for the time
being. Support will be added in the future.
```ts
function Deno.open(path: string | URL): Promise<Deno.File>
```
The path can be relative or absolute. It can also be a `file:` URL.
### Example
This example opens a file, and then streams the content as the response body.
```js
async function handler(_req) {
  // Open the README.md file available at the root of the repository.
  const file = await Deno.open("./README.md");

  // Use the `readable` property, which is a `ReadableStream`. This will
  // automatically close the file handle when the response is done sending.
  return new Response(file.readable);
}

Deno.serve(handler);
```
:::note
When you iterate over a file stream as shown below, the file descriptor will be
automatically closed at the end of iteration. There is no need to manually close
the file descriptor: `const iterator = fd.readable[Symbol.asyncIterator]();`
:::
## Deno.File
`Deno.File` is a file handle returned from [`Deno.open()`](#deno.open). It can
be used to read chunks of the file using the `read()` method. The file handle
can be closed using the `close()` method.
The interface is similar to [Deno](https://docs.deno.com/api/deno/~/Deno.File),
but it doesn't support writing to the file, or seeking. Support for the latter
will be added in the future.
```ts
class File {
  readonly rid: number;

  close(): void;
  read(p: Uint8Array): Promise<number | null>;
}
```
### Deno.File#read()
The read method is used to read a chunk of the file. It should be passed a
buffer to read the data into. It returns the number of bytes read or `null` if
the end of the file has been reached.
```ts
function read(p: Uint8Array): Promise<number | null>;
```
### Deno.File#close()
The close method is used to close the file handle. Closing the handle will
interrupt all ongoing reads.
```ts
function close(): void;
```
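### Example
This example combines `Deno.open()`, `read()`, and `close()` to read a file in
chunks (a sketch that assumes a `README.md` at the repository root, as in the
earlier examples):
```js
async function handler(_req) {
  const file = await Deno.open("./README.md");

  // Read the file in 16 KiB chunks until read() returns null (end of file).
  const chunks = [];
  const buffer = new Uint8Array(16384);
  while (true) {
    const bytesRead = await file.read(buffer);
    if (bytesRead === null) break;
    chunks.push(buffer.slice(0, bytesRead));
  }
  file.close();

  // Concatenate the chunks into a single byte array and decode it.
  const size = chunks.reduce((sum, chunk) => sum + chunk.length, 0);
  const bytes = new Uint8Array(size);
  let offset = 0;
  for (const chunk of chunks) {
    bytes.set(chunk, offset);
    offset += chunk.length;
  }

  return new Response(new TextDecoder().decode(bytes));
}

Deno.serve(handler);
```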
## Deno.stat
`Deno.stat()` reads a file system entry's metadata. It returns a
[`Deno.FileInfo`](#fileinfo) object. Symlinks are followed.
The function definition is the same as
[Deno](https://docs.deno.com/api/deno/~/Deno.stat). It does not return
modification time, access time, or creation time values.
```ts
function Deno.stat(path: string | URL): Promise<Deno.FileInfo>
```
The path can be relative or absolute. It can also be a `file:` URL.
### Example
This example gets the size of a file, and returns the result as the response
body.
```js
async function handler(_req) {
  // Get file info of the README.md at the root of the repository.
  const info = await Deno.stat("./README.md");

  // Get the size of the file in bytes.
  const size = info.size;

  return new Response(`README.md is ${size} bytes large`);
}

Deno.serve(handler);
```
## Deno.lstat
`Deno.lstat()` is similar to `Deno.stat()`, but it does not follow symlinks.
The function definition is the same as
[Deno](https://docs.deno.com/api/deno/~/Deno.lstat). It does not return
modification time, access time, or creation time values.
```ts
function Deno.lstat(path: string | URL): Promise<Deno.FileInfo>
```
The path can be relative or absolute. It can also be a `file:` URL.
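### Example
This example uses `Deno.lstat()` to check whether a path is a symlink without
following it (a sketch that assumes a `./my_symlink` entry, as in the
`Deno.readLink` example below):
```js
async function handler(_req) {
  // lstat inspects the symlink itself rather than its target.
  const info = await Deno.lstat("./my_symlink");

  return new Response(`./my_symlink is a symlink: ${info.isSymlink}`);
}

Deno.serve(handler);
```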
## Deno.FileInfo
The `Deno.FileInfo` interface is used to represent a file system entry's
metadata. It is returned by the [`Deno.stat()`](#deno.stat) and
[`Deno.lstat()`](#deno.lstat) functions. It can represent either a file, a
directory, or a symlink.
In Deno Deploy, only the file type and size properties are available. The size
property behaves the same way it does on Linux.
```ts
interface FileInfo {
  isDirectory: boolean;
  isFile: boolean;
  isSymlink: boolean;
  size: number;
}
```
## Deno.realPath
`Deno.realPath()` returns the resolved absolute path to a file after following
symlinks.
The function definition is the same as
[Deno](https://docs.deno.com/api/deno/~/Deno.realPath).
```ts
function Deno.realPath(path: string | URL): Promise<string>
```
The path can be relative or absolute. It can also be a `file:` URL.
### Example
This example calls `Deno.realPath()` to get the absolute path of a file in the
root of the repository. The result is returned as the response body.
```ts
async function handler(_req) {
  const path = await Deno.realPath("./README.md");

  return new Response(`The fully resolved path for ./README.md is ${path}`);
}

Deno.serve(handler);
```
## Deno.readLink
`Deno.readLink()` returns the target path for a symlink.
The function definition is the same as
[Deno](https://docs.deno.com/api/deno/~/Deno.readLink).
```ts
function Deno.readLink(path: string | URL): Promise<string>
```
The path can be relative or absolute. It can also be a `file:` URL.
### Example
This example calls `Deno.readLink()` to get the target path of a symlink in the
root of the repository. The result is returned as the response body.
```ts
async function handler(_req) {
  const path = await Deno.readLink("./my_symlink");

  return new Response(`The target path for ./my_symlink is ${path}`);
}

Deno.serve(handler);
```
---
# HTTP Headers
URL: https://docs.deno.com/deploy/api/runtime-headers
The [Headers](https://developer.mozilla.org/en-US/docs/Web/API/Headers)
interface is part of the Fetch API. It allows you to create and manipulate the
HTTP headers of request and response resources of fetch().
- [Constructor](#constructor)
- [Parameters](#parameters)
- [Methods](#methods)
- [Example](#example)
## Constructor
The Headers() constructor creates a new `Headers` instance.
```ts
let headers = new Headers(init);
```
#### Parameters
| name | type | optional | description |
| ---- | --------------------------------------- | -------- | ------------------------------------------------------------------------------------------------------- |
| init | `Headers` / `{ [key: string]: string }` | `true` | The init option lets you initialize the headers object with an existing `Headers` or an object literal. |
The return type of the constructor is a `Headers` instance.
## Methods
| name | description |
| ------------------------------------- | ----------------------------------------------------------------- |
| `append(name: string, value: string)` | Appends a value to an existing header, or adds the header if it does not exist. |
| `delete(name: string)` | Deletes a header from the Headers object. |
| `set(name: string, value: string)` | Sets a header in the Headers object, overwriting any existing value. |
| `get(name: string)` | Get the value of a header in the Headers object. |
| `has(name: string)` | Check if a header exists in the Headers object. |
| `entries()` | Get the headers as key-value pairs. The result is iterable. |
| `keys()` | Get all the keys of the Headers object. The result is iterable. |
## Example
```ts
// Create a new headers object from an object literal.
const myHeaders = new Headers({
  accept: "application/json",
});

// Append a header to the headers object.
myHeaders.append("user-agent", "Deno Deploy");

// Print the headers of the headers object.
for (const [key, value] of myHeaders.entries()) {
  console.log(key, value);
}

// You can pass the headers instance to Response or Request constructors.
const request = new Request("https://api.github.com/users/denoland", {
  method: "POST",
  headers: myHeaders,
});
```
---
# Node.js built-in APIs
URL: https://docs.deno.com/deploy/api/runtime-node
Deno Deploy natively supports importing built-in Node.js modules like `fs`,
`path`, and `http` through `node:` specifiers. This allows running code
originally written for Node.js without changes in Deno Deploy.
Here is an example of a Node.js HTTP server running on Deno Deploy:
```js
import { createServer } from "node:http";
import process from "node:process";

const server = createServer((req, res) => {
  const message = `Hello from ${process.env.DENO_REGION} at ${new Date()}`;
  res.end(message);
});

server.listen(8080);
```
_You can see this example live here:
https://dash.deno.com/playground/node-specifiers_
When using `node:` specifiers, all other features of Deno Deploy are still
available. For example, you can use `Deno.env` to access environment variables
even when using Node.js modules. You can also import other ESM modules from
external URLs as usual.
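As a quick illustration of this mixing (a hedged sketch; the `CONFIG_DIR`
variable and `settings.json` file are hypothetical):
```js
import { join } from "node:path";

// A Node.js built-in (node:path) used side by side with Deno APIs
// (Deno.env and Deno.cwd) in the same module.
const configDir = Deno.env.get("CONFIG_DIR") ?? Deno.cwd();
const settingsPath = join(configDir, "settings.json");

console.log(`Settings are expected at ${settingsPath}`);
```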
The following Node.js modules are available:
- `assert`
- `assert/strict`
- `async_hooks`
- `buffer`
- `child_process`
- `cluster`
- `console`
- `constants`
- `crypto`
- `dgram`
- `diagnostics_channel`
- `dns`
- `dns/promises`
- `domain`
- `events`
- `fs`
- `fs/promises`
- `http`
- `http2`
- `https`
- `module`
- `net`
- `os`
- `path`
- `path/posix`
- `path/win32`
- `perf_hooks`
- `process`
- `punycode`
- `querystring`
- `readline`
- `stream`
- `stream/consumers`
- `stream/promises`
- `stream/web`
- `string_decoder`
- `sys`
- `timers`
- `timers/promises`
- `tls`
- `tty`
- `url`
- `util`
- `util/types`
- `v8`
- `vm`
- `worker_threads`
- `zlib`
The behavior of these modules should be identical to Node.js in most cases. Due
to the sandboxing behavior of Deno Deploy, some features are not available:
- Executing binaries with `child_process`
- Spawning workers using `worker_threads`
- Creating contexts and evaluating code with `vm`
> Note: the emulation of Node.js modules is sufficient for most use cases, but
> it is not yet perfect. If you encounter any issues, please
> [open an issue](https://github.com/denoland/deno).
---
# HTTP Request
URL: https://docs.deno.com/deploy/api/runtime-request
The [Request](https://developer.mozilla.org/en-US/docs/Web/API/Request)
interface is part of the Fetch API and represents the request of fetch().
- [Constructor](#constructor)
- [Parameters](#parameters)
- [Properties](#properties)
- [Methods](#methods)
- [Example](#example)
## Constructor
The Request() constructor creates a new Request instance.
```ts
let request = new Request(resource, init);
```
#### Parameters
| name | type | optional | description |
| -------- | ----------------------------- | -------- | ------------------------------------------------------------------------- |
| resource | `Request` or `USVString` | `false` | The resource can either be a request object or a URL string. |
| init | [`RequestInit`](#requestinit) | `true` | The init object lets you set optional parameters to apply to the request. |
The return type is a `Request` instance.
##### `RequestInit`
| name | type | default | description |
| ---------------------------- | --------------------------------------------------------------------------------------- | -------------- | ---------------------------------------------------------- |
| [`method`][method] | `string` | `GET` | The method of the request. |
| [`headers`][headers] | `Headers` or `{ [key: string]: string }` | none | The headers for the request. |
| [`body`][body] | `Blob`, `BufferSource`, `FormData`, `URLSearchParams`, `USVString`, or `ReadableStream` | none | The body of the request. |
| [`cache`][cache] | `string` | none | The cache mode of the request. |
| [`credentials`][credentials] | `string` | `same-origin` | The credentials mode of the request. |
| [`integrity`][integrity] | `string` | none | The cryptographic hash of the request's body. |
| [`mode`][mode] | `string` | `cors` | The request mode you want to use. |
| [`redirect`][redirect] | `string` | `follow` | The mode of how redirects are handled. |
| [`referrer`][referrer] | `string` | `about:client` | A `USVString` specifying `no-referrer`, `client` or a URL. |
## Properties
| name | type | description |
| ---------------------------------- | ------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------- |
| [`cache`][cache] | `string` | The cache mode (`default`, `no-cache`, etc.) indicates how the request should be cached by the browser. |
| [`credentials`][credentials] | `string` | The credentials mode (`omit`, `same-origin`, etc.) indicates whether the user agent should send cookies for cross-origin requests. |
| [`destination`][destination] | [`RequestDestination`][requestdestination] | The string indicates the type of content being requested. |
| [`body`][body] | [`ReadableStream`][readablestream] | The getter exposes a `ReadableStream` of the body contents. |
| [`bodyUsed`][bodyused] | `boolean` | Indicates whether the body content is read. |
| [`url`][url] | `USVString` | The URL of the request. |
| [`headers`][headers] | [`Headers`](runtime-headers) | The headers associated with the request. |
| [`integrity`][integrity] | `string` | The cryptographic hash of the request's body. |
| [`method`][method] | `string` | The request's method (`POST`, `GET`, etc). |
| [`mode`][mode] | `string` | Indicates the mode of the request (e.g. `cors` ). |
| [`redirect`][redirect] | `string` | The mode of how redirects are handled. |
| [`referrer`][referrer] | `string` | The referrer of the request. |
| [`referrerPolicy`][referrerpolicy] | `string` | The referrer policy of the request. |
All the above properties are read only.
## Methods
| name | description |
| ------------------------------ | ------------------------------------------------------------------------------------------- |
| [`arrayBuffer()`][arraybuffer] | Reads the body stream to its completion and returns an `ArrayBuffer` object. |
| [`blob()`][blob] | Reads the body stream to its completion and returns a `Blob` object. |
| [`formData()`][formdata] | Reads the body stream to its completion and returns a `FormData` object. |
| [`json()`][json] | Reads the body stream to its completion, parses it as JSON and returns a JavaScript object. |
| [`text()`][text] | Reads the body stream to its completion and returns a USVString object (text). |
| [`clone()`][clone] | Clones the Request object. |
## Example
```ts
function handler(_req) {
  // Create a post request
  const request = new Request("https://post.deno.dev", {
    method: "POST",
    body: JSON.stringify({
      message: "Hello world!",
    }),
    headers: {
      "content-type": "application/json",
    },
  });

  console.log(request.method); // POST
  console.log(request.headers.get("content-type")); // application/json

  return fetch(request);
}

Deno.serve(handler);
```
[cache]: https://developer.mozilla.org/en-US/docs/Web/API/Request/cache
[credentials]: https://developer.mozilla.org/en-US/docs/Web/API/Request/credentials
[destination]: https://developer.mozilla.org/en-us/docs/web/api/request/destination
[requestdestination]: https://developer.mozilla.org/en-US/docs/Web/API/RequestDestination
[body]: https://developer.mozilla.org/en-US/docs/Web/API/Body/body
[bodyused]: https://developer.mozilla.org/en-US/docs/Web/API/Body/bodyUsed
[url]: https://developer.mozilla.org/en-US/docs/Web/API/Request/url
[headers]: https://developer.mozilla.org/en-US/docs/Web/API/Request/headers
[method]: https://developer.mozilla.org/en-US/docs/Web/API/Request/method
[integrity]: https://developer.mozilla.org/en-US/docs/Web/API/Request/integrity
[mode]: https://developer.mozilla.org/en-US/docs/Web/API/Request/mode
[redirect]: https://developer.mozilla.org/en-US/docs/Web/API/Request/redirect
[referrer]: https://developer.mozilla.org/en-US/docs/Web/API/Request/referrer
[referrerpolicy]: https://developer.mozilla.org/en-US/docs/Web/API/Request/referrerpolicy
[readablestream]: https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream
[arraybuffer]: https://developer.mozilla.org/en-US/docs/Web/API/Body/arrayBuffer
[blob]: https://developer.mozilla.org/en-US/docs/Web/API/Body/blob
[json]: https://developer.mozilla.org/en-US/docs/Web/API/Body/json
[text]: https://developer.mozilla.org/en-US/docs/Web/API/Body/text
[formdata]: https://developer.mozilla.org/en-US/docs/Web/API/Body/formdata
[clone]: https://developer.mozilla.org/en-US/docs/Web/API/Request/clone
---
# HTTP Response
URL: https://docs.deno.com/deploy/api/runtime-response
The [Response](https://developer.mozilla.org/en-US/docs/Web/API/Response)
interface is part of the Fetch API and represents a response resource of
fetch().
- [Constructor](#constructor)
- [Parameters](#parameters)
- [Properties](#properties)
- [Methods](#methods)
- [Example](#example)
## Constructor
The Response() constructor creates a new Response instance.
```ts
let response = new Response(body, init);
```
#### Parameters
| name | type | optional | description |
| ---- | --------------------------------------------------------------------------------------- | -------- | -------------------------------------------------------------------------- |
| body | `Blob`, `BufferSource`, `FormData`, `ReadableStream`, `URLSearchParams`, or `USVString` | `true` | The body of the response. The default value is `null`. |
| init | `ResponseInit` | `true` | An optional object that allows setting status and headers of the response. |
The return type is a `Response` instance.
##### `ResponseInit`
| name | type | optional | description |
| ------------ | ----------------------------------------------------- | -------- | ----------------------------------------------------- |
| `status` | `number` | `true` | The status code of the response. |
| `statusText` | `string` | `true` | The status message representative of the status code. |
| `headers` | `Headers` or `string[][]` or `Record<string, string>` | `true` | The HTTP headers of the response. |
## Properties
| name | type | read only | description |
| -------------------------- | ---------------- | --------- | ----------------------------------------------------------- |
| [`body`][body] | `ReadableStream` | `true` | The getter exposes a `ReadableStream` of the body contents. |
| [`bodyUsed`][bodyused] | `boolean` | `true` | Indicates whether the body content is read. |
| [`url`][url] | `USVString` | `true` | The URL of the response. |
| [`headers`][headers] | `Headers` | `true` | The headers associated with the response. |
| [`ok`][ok] | `boolean` | `true` | Indicates if the response is successful (200-299 status). |
| [`redirected`][redirected] | `boolean` | `true` | Indicates if the response is the result of a redirect. |
| [`status`][status] | `number` | `true` | The status code of the response. |
| [`statusText`][statustext] | `string` | `true` | The status message of the response. |
| [`type`][type] | `string` | `true` | The type of the response. |
## Methods
| name | description |
| ---------------------------------------------------- | ------------------------------------------------------------------------------------------- |
| [`arrayBuffer()`][arraybuffer] | Reads the body stream to its completion and returns an `ArrayBuffer` object. |
| [`blob()`][blob] | Reads the body stream to its completion and returns a `Blob` object. |
| [`formData()`][formdata] | Reads the body stream to its completion and returns a `FormData` object. |
| [`json()`][json] | Reads the body stream to its completion, parses it as JSON and returns a JavaScript object. |
| [`text()`][text] | Reads the body stream to its completion and returns a USVString object (text). |
| [`clone()`][clone] | Clones the response object. |
| [`error()`][error] | Returns a new response object associated with a network error. |
| [`redirect(url: string, status?: number)`][redirect] | Creates a new response that redirects to the provided URL. |
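The last two methods are static, so they can be called without constructing a
response first. For example (a brief sketch):
```ts
// Create a redirect response without building one manually.
// The default status is 302; here we ask for a 307 (temporary) redirect.
const response = Response.redirect("https://docs.deno.com/", 307);

console.log(response.status); // 307
console.log(response.headers.get("location")); // https://docs.deno.com/
```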
## Example
```ts
function handler(_req) {
  // Create a response with html as its body.
  const response = new Response("<html> Hello </html>", {
    status: 200,
    headers: {
      "content-type": "text/html",
    },
  });

  console.log(response.status); // 200
  console.log(response.headers.get("content-type")); // text/html

  return response;
}

Deno.serve(handler);
```
[clone]: https://developer.mozilla.org/en-US/docs/Web/API/Response/clone
[error]: https://developer.mozilla.org/en-US/docs/Web/API/Response/error
[redirect]: https://developer.mozilla.org/en-US/docs/Web/API/Response/redirect
[body]: https://developer.mozilla.org/en-US/docs/Web/API/Body/body
[bodyused]: https://developer.mozilla.org/en-US/docs/Web/API/Body/bodyUsed
[url]: https://developer.mozilla.org/en-US/docs/Web/API/Request/url
[headers]: https://developer.mozilla.org/en-US/docs/Web/API/Request/headers
[ok]: https://developer.mozilla.org/en-US/docs/Web/API/Response/ok
[redirected]: https://developer.mozilla.org/en-US/docs/Web/API/Response/redirected
[status]: https://developer.mozilla.org/en-US/docs/Web/API/Response/status
[statustext]: https://developer.mozilla.org/en-US/docs/Web/API/Response/statusText
[type]: https://developer.mozilla.org/en-US/docs/Web/API/Response/type
[method]: https://developer.mozilla.org/en-US/docs/Web/API/Request/method
[readablestream]: https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream
[arraybuffer]: https://developer.mozilla.org/en-US/docs/Web/API/Body/arrayBuffer
[blob]: https://developer.mozilla.org/en-US/docs/Web/API/Body/blob
[json]: https://developer.mozilla.org/en-US/docs/Web/API/Body/json
[text]: https://developer.mozilla.org/en-US/docs/Web/API/Body/text
[formdata]: https://developer.mozilla.org/en-US/docs/Web/API/Body/formdata
---
# TCP sockets and TLS
URL: https://docs.deno.com/deploy/api/runtime-sockets
Deno Deploy supports outbound TCP and TLS connections. These APIs allow you to
use databases like PostgreSQL, SQLite, MongoDB, etc., with Deploy.
Looking for information on _serving_ TCP? Take a look at the documentation for
[`Deno.serve`](/api/deno/~/Deno.serve), including its support for
[TCP options](/api/deno/~/Deno.ServeTcpOptions).
## `Deno.connect`
Make outbound TCP connections.
The function definition is the same as
[Deno](https://docs.deno.com/api/deno/~/Deno.connect), with the limitations
that the `transport` option can only be `tcp`, and `hostname` cannot be
localhost or empty.
```ts
function Deno.connect(options: ConnectOptions): Promise<Deno.Conn>
```
### Example
```js
async function handler(_req) {
  // Make a TCP connection to example.com
  const connection = await Deno.connect({
    port: 80,
    hostname: "example.com",
  });

  // Send a raw HTTP GET request.
  const request = new TextEncoder().encode(
    "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n",
  );
  const _bytesWritten = await connection.write(request);

  // Read 15 bytes from the connection.
  const buffer = new Uint8Array(15);
  await connection.read(buffer);
  connection.close();

  // Return the bytes as plain text.
  return new Response(buffer, {
    headers: {
      "content-type": "text/plain;charset=utf-8",
    },
  });
}

Deno.serve(handler);
```
## `Deno.connectTls`
Make outbound TLS connections.
The function definition is the same as
[Deno](https://docs.deno.com/api/deno/~/Deno.connectTls), with the limitation
that `hostname` cannot be localhost or empty.
```ts
function Deno.connectTls(options: ConnectTlsOptions): Promise<Deno.TlsConn>
```
### Example
```js
async function handler(_req) {
  // Make a TLS connection to example.com
  const connection = await Deno.connectTls({
    port: 443,
    hostname: "example.com",
  });

  // Send a raw HTTP GET request.
  const request = new TextEncoder().encode(
    "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n",
  );
  const _bytesWritten = await connection.write(request);

  // Read 15 bytes from the connection.
  const buffer = new Uint8Array(15);
  await connection.read(buffer);
  connection.close();

  // Return the bytes as plain text.
  return new Response(buffer, {
    headers: {
      "content-type": "text/plain;charset=utf-8",
    },
  });
}

Deno.serve(handler);
```
---
# Deno Deployᴱᴬ changelog
> Listing notable progress in the development and evolution of Deno Deploy Early Access
URL: https://docs.deno.com/deploy/early-access/changelog
:::info
You are viewing the documentation for Deno Deployᴱᴬ. Looking for
Deploy Classic documentation? [View it here](/deploy/).
:::
## June 24th, 2025
### Features
- The playground now has live streaming logs and traces panels
- Logs and traces for the current revision are displayed for the past hour
- Logs and traces can be filtered, just like in the dedicated observability
pages
- Framework auto-detection now works for more projects out of the box, including
many Vite-based projects
- The organization dropdown now highlights the currently selected organization
more clearly
### Bug fixes
- The sparklines in the metrics overview are now working correctly
- The error rate metric now functions properly
- GitHub-triggered builds no longer run multiple times
- Next.js builds now work more reliably on older Next.js versions
## June 12th, 2025
### Features
- Deno Deployᴱᴬ now supports playgrounds!
- Playgrounds can be created and accessed from the playgrounds tab in the
organizations overview
- Playgrounds can contain multiple files and include build steps
- The playground UI features an iframe to preview your deployed app
- Three templates are currently available: hello world, Next.js, and Hono
- On mobile devices, there is now a floating navbar that doesn't intrude into
page content
## June 9th, 2025
### Features
- Deno Deployᴱᴬ has a new logo!
- Anyone can now join Early Access by signing up at
[dash.deno.com](https://dash.deno.com/account#early-access)
- Builds
- Builds can now use up to 8 GB of storage, up from 2 GB
- Builds can now use environment variables and secrets configured in the
organization or app settings (in the new "Build" context)
- Builds now have a maximum runtime of 5 minutes
- The metrics page has had a complete overhaul, with rewritten chart rendering:
- Dragging on a graph now zooms in on the selected area
- Much more data can now be shown without the page becoming slow to load
- The tooltip now follows the mouse cursor, together with a new crosshair that
allows for precise analysis
- Font sizes and colors have been improved for better readability
### Bug fixes
- Builds should not get stuck in a pending state anymore
- Dashboard pages now load significantly faster
- Correctly show spans in traces that have parents that are not exported (yet)
- The metrics page correctly refreshes now when switching time ranges
- The "Clear search" button in the telemetry search bar now works correctly
- Older Next.js versions (such as Next.js 13) build correctly now
- The environment variable drawer is now used everywhere, fixing a bug where
multiple env vars with the same name but different contexts would conflict
- Running `node` with an absolute path in the builder does not fail anymore
- `npx` is now available in the builder
- Astro builds will not sporadically fail with `--unstable-vsock` errors anymore
- Svelte projects now deploy correctly when a project explicitly specifies
`@deno/svelte-adapter`
## May 26th, 2025
### Features
- When triggering a manual build you can now choose which branch to deploy
- You can now deploy Astro static sites without having to manually install the
Deno adapter
- There are now
[reference docs for you to peruse](https://docs.deno.com/deploy/early-access/).
### Bug fixes
- SvelteKit auto detection now works when using `npm` as the package manager
- Prewarming does not trigger random POST requests to your app anymore
- Visiting a page with a trailing slash will not 404 anymore
- Drawers will no longer close if you click inside, hold and drag over the
backdrop, and release
## May 22nd, 2025
### Features
- You can now bulk import env vars during app creation by pasting a `.env` file
into the env var drawer
- SvelteKit now works out of the box without manually installing the Deno
adapter
- A preset for the Lume static site generator is now available
### Bug fixes
- Environment variables now show up correctly on the timelines page
- The production timeline page now correctly shows all builds
- app.deno.com works on older versions of Firefox now
- Page titles across app.deno.com now reflect the page you are on
- The "Provision certificate" button does not lock up after DNS verification
failures anymore
- Domains that had a provisioned certificate or attached application can now be
deleted
---
# Getting started
> Step-by-step guide to creating and configuring your first Deno Deploy Early Access application, including organization setup, build configuration, environment variables, and deployment monitoring.
URL: https://docs.deno.com/deploy/early-access/getting_started
:::info
You are viewing the documentation for Deno Deployᴱᴬ. Looking for
Deploy Classic documentation? [View it here](/deploy/).
:::
:::note
Deno Deployᴱᴬ is in private beta. To use Deno Deployᴱᴬ you must join the Early
Access program from the
[Deploy Classic account settings page](https://dash.deno.com/account#early-access).
:::
## Create an organization
To get started with Deno Deployᴱᴬ:
1. Visit [app.deno.com](http://app.deno.com)
2. Create an organization:

Note that you cannot create an organization with the same slug as any existing
project in Deploy Classic. Organization names and slugs cannot be changed after
creation.
## Create an app
After creating an organization, you'll be directed to the organization apps
page, which shows all your applications and provides access to organization
settings and custom domains.
To create an app, press the `+ New App` button:

An application is a single deployed web service with one build configuration,
build history, environment variables, attached custom domains, a linked GitHub
repository, etc.
## Select a repo
1. Choose the GitHub repository for your application:

If your repository doesn't appear, use the `Add another GitHub account` or
`Configure GitHub App permissions` buttons to grant the Deno Deploy GitHub app
access to your repositories.
> ⏳ Mono-repos (repositories where the application lives in a subdirectory) are
> not yet supported.
## Configure your app
Deno Deployᴱᴬ automatically attempts to detect your application type
and configure an appropriate build setup. You can see the detected configuration
in the `App Config` box:

To modify this configuration, click `Edit build config`.

## Configure your build
In the build config drawer, you can customize:
### Framework preset
Select your framework or choose `No Preset` if using a custom setup.
### Install command
Command for installing dependencies (e.g., `npm install`, `deno install`). This
can be empty for Deno applications without a `package.json`.
### Build command
Command to compile/bundle your application (e.g., `next build`,
`deno task build`). Leave empty if your application doesn't require building.
### Runtime configuration
For most frameworks there are no options to configure here, as Deno Deployᴱᴬ
will figure out the ideal runtime configuration for the app based on the
framework preset. When a framework is not configured, you can choose here
whether the app is a `Dynamic` app that needs to execute code server side for
every request, such as an API server, server-side rendered application, etc., or
a `Static` app that consists only of a set of static files that need to be
hosted.
### Dynamic Entrypoint
The JavaScript or TypeScript file that should be executed to start the
application. This is the file path that you would pass locally to `deno run` or
`node` to start the app. The path has to be relative to the working directory.
### Dynamic arguments
Additional command line arguments to pass to the app on startup, after the
entrypoint. These are arguments that are passed to the application, not to Deno
itself.
### Static Directory
The directory in the working directory that contains the static files to be
served. For example, `dist`, `_site`, or `.output`.
### Single Page App mode
Whether the application is a single page app that should have the root
`index.html` served for any paths that do not exist as files in the static
directory, instead of a 404 page.
Closing the drawer saves the settings.
### Environment variables
To add environment variables:
1. Click `Add/Edit environment variables`
2. Click `+ Add variable` in the drawer
3. Enter the name and value
4. Choose whether it's a plain text variable or secret
5. Select the contexts where it should be available:
- **Production**: For requests to production domains
- **Development**: For requests to preview/branch domains
6. Click `Save` to apply your changes

You can re-open the drawer to edit or remove environment variables you have
added.
You can also edit the app name on this page, and select which region(s) the
application should be served from.
## Build and deploy your app
Finally, you can press the `Create App` button to create the app. This will
create the app and immediately trigger the first build:

On the build page you can see live streaming build logs split into multiple
sections:
- **Prepare:** cloning the GitHub repository and restoring build cache
- **Install:** executing the install command, and any framework specific
pre-install setup
- **Build:** executing the build command, any framework specific pre- and
post-build setup, and preparing the build artifact for deployment
- **Warm up:** sending a request to the preview URL of the deployment to ensure
it starts up correctly. The logs shown in the Warm up section are Runtime
logs, not build logs.
- **Route:** Deno Deploy is rolling out the new version of this build into all
global regions.
In the top left of this build is a button to cancel the build. For failed
builds, there is also a button to restart the build.
For completed builds, the top right shows the preview URL of the build. Further
down all timelines that this build is deployed to are shown, such as
`Production`, or `Git Branch` timelines.
You can also see how the build was triggered on this page. This can either be
`manual action`, for builds triggered through the UI, or `GitHub repo` for
builds triggered through the GitHub integration.
You can view the application through either the preview URL, or any of the other
URLs shown in the timelines list.
## Monitor your application
After visiting your application, you can view telemetry about your application
in the form of the logs and traces available in our observability panels. You
can visit these pages by clicking the respective buttons in the left sidebar.
### Logs

The logs page shows all recent logs in the project. By default logs from all
contexts (production and development) are shown, but using the filter button and
search bar at the top, the shown logs can be restricted. For example, to filter
to only production logs, add `context:production` to the search bar. To only
show logs from a certain revision, use `revision:` etc.
You can also use full text search in the search bar. The full text search fill
filter down the log entries to only those containing the text written,
case-insensitively.
By default logs from the last hour are shown. The time picker in the top right
can be used to adjust the time frame that logs are shown for. The time zone of
the timestamps shown is the time zone set in the time picker.
The "view trace" button on the right of a log line shows up if a log line is
correlated with a trace. This happens when a log line occurs within an active
trace. Clicking this button will open the respective trace as an overlay.
### Traces

The traces page shows all recent traces in the project. By default traces from
all contexts (production and development) are shown, but using the filter button
and search bar at the top, the shown traces can be restricted. For example, to
filter to only production traces, add `context:production` to the search bar. To
only show traces from a certain revision, use `revision:` etc.
All traces that contain an incoming HTTP request are shown in the list. The text
shown for each trace is the path of the request, and the duration of the trace
in milliseconds.
Clicking on a trace will open the trace view, which shows the full trace
including all spans and logs that are part of the trace.

For each span in the trace you can see the duration of the span, the name of the
span, the start and end time, and the recorded attributes. By clicking on a span
in the timeline, the details of that span will be shown in the summary panel at
the bottom.
The logs that are emitted as part of a given span are shown in the logs tab at
the bottom. Changing the selected span will update which logs are shown in this
panel.
---
# About Early Access
> Guide to Deno Deploy Early Access features, comparison with Deploy Classic, and getting started instructions for deployment.
URL: https://docs.deno.com/deploy/early-access/
:::info
You are viewing the documentation for Deno Deploy EA. Looking for
Deploy Classic documentation? [View it here](/deploy/).
:::
Deno Deploy Early Access (Deno Deploy EA) is a complete revamp of the
original Deploy, featuring:
- Improved NPM compatibility and web framework support
- Built-in OpenTelemetry integration
- Integrated build system
- Significantly enhanced underlying infrastructure
:::note
Deno Deploy EA is in private beta. To use Deno Deploy
EA you must join the Early Access program from the
[Deploy Classic account settings page](https://dash.deno.com/account#early-access).
:::
Deno Deploy EA comes with a new dashboard at
[app.deno.com](https://app.deno.com). In this dashboard, you can create new
Deno Deploy EA organizations that contain Deno Deploy EA apps.
Within a single organization, you cannot mix Deno Deploy EA apps with
Deploy Classic projects. You can switch between different organizations using
the organization picker in the top left of the dashboard.
## What is Deno Deploy EA?
Deno Deploy is a serverless platform for running JavaScript and TypeScript
applications in the cloud (or self-hosted on your own infrastructure). It
provides a management plane for deploying and running applications through
integrations like GitHub deployment.
## Comparison to Deploy Classic
Deno Deploy EA is a complete rework of Deploy Classic. It has a new
dashboard, and a new execution environment that uses Deno 2.0 and is much more
powerful than Deploy Classic. The table below compares the two versions of Deno
Deploy.
| Feature | Deno Deploy EA | Deploy Classic |
| ------------------------------- | ------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------- |
| Web interface | app.deno.com | dash.deno.com |
| Dark mode | ✅ Supported | ❌ Not supported |
| Builds | ✅ Fully integrated | 🟠 Runs in GitHub Actions, no live streamed logs in the dashboard, caching requires manual setup, changing config requires editing YAML |
| Can run Deno apps | ✅ Full support | 🟠 Limited (no FFI, subprocesses, write permission) |
| Can run Node apps | ✅ Full support | 🟠 Limited (no FFI, native addons, subprocesses, write permission, and degraded NPM compatibility) |
| Can run Next.js/Astro/SvelteKit | ✅ First-class support | 🟠 Framework dependent, requires manual setup |
| First class static sites | ✅ Supported | ❌ Not supported |
| Environment Variables | ✅ Different dev/prod env vars | 🟠 One set of env vars for all deployments |
| CDN caching | ✅ Supported | ❌ Not supported |
| Web Cache API | ✅ Supported | ✅ Supported |
| Databases | ⏳ Coming soon | 🟠 Deno KV |
| Queues | ❌ Not supported | ✅ Supported |
| Cron | ❌ Not supported | ✅ Supported |
| Deploy from GitHub | ✅ Supported | ✅ Supported |
| Deploy from CLI | ⏳ Coming soon | ✅ Supported |
| Instant Rollback | ✅ Supported | ✅ Supported |
| Logs | ✅ Supported | ✅ Supported |
| Tracing | ✅ Supported | ❌ Not supported |
| Metrics | ✅ Supported | ❌ Not supported |
| OpenTelemetry export | ⏳ Work in progress | ❌ Not supported |
| Regions | 2 | 6 |
| Self hostable regions | ✅ Supported | ❌ Not supported |
## How to access EA
To begin using Deno Deploy EA:
1. Visit [app.deno.com](https://app.deno.com) to access the new dashboard
2. Create a new Deno Deploy EA organization
3. Create your first application within this organization
4. Deploy from your GitHub repository or directly from the dashboard
For detailed configuration instructions and framework-specific guides, please
refer to our reference documentation.
---
# deploy/early-access/reference/accounts.md
> Information about user accounts, authentication via GitHub, and managing your profile in Deno Deploy Early Access.
URL: https://docs.deno.com/deploy/early-access/reference/accounts
Deno Deploy accounts are linked to GitHub accounts. You can only sign in to Deno
Deploy using GitHub authentication.
Your primary contact email address and name are synced from GitHub. Both your
username and email address update on every sign in. After changing your email,
login, or name on GitHub, sign in again to see these changes reflected in the
Deno Deploy EA dashboard.
Currently, only accounts enrolled in the Early Access program can access Deno
Deploy EA. To join the program, visit the
[account settings in Deploy Classic](https://dash.deno.com/account#early-access)
and sign up. To access the Early Access Discord channel, connect your Discord
account to your Deno Deploy account through the same Early Access settings.
---
# deploy/early-access/reference/apps.md
> Guide to managing applications in Deno Deploy Early Access, including app creation, configuration, GitHub integration, and deployment options.
URL: https://docs.deno.com/deploy/early-access/reference/apps
Applications are web services that serve traffic within an organization. Each
application contains a history of revisions (previous versions), typically
corresponding to Git commits when using the GitHub integration.
Applications are identified by a slug, which must be unique within the
organization and is used in default domain names.
## Creating an application
To create an application:
1. Click the "+ Create App" button on the organization page
2. Select the GitHub repository to deploy from
3. Configure the app slug (name)
4. Set up build configuration
5. Add any required environment variables
> ⚠️ Currently, applications must be linked to a GitHub repository during
> creation.
The build configuration determines how the application is built during the
deployment process. Builds are automatically triggered on each push to the
linked repository or when manually clicking "Deploy Default Branch". For
detailed build configuration information, see the
[Builds documentation](/deploy/early-access/reference/builds/).
You can add environment variables during app creation by clicking "Edit
Environment Variables". For more details on environment variables, see the
[Environment Variables and Contexts](/deploy/early-access/reference/env-vars-and-contexts/)
documentation.
## Limitations
> ⚠️ Apps cannot currently be deleted.
> ⚠️ Apps cannot currently be renamed.
> ⚠️ Apps cannot currently be transferred to another organization.
## GitHub integration
The GitHub integration enables automatic deployments of the app from a GitHub
repository. Every push to the repository will trigger a new build of the app.
Depending on the branch of the commit, the build will be deployed to different
[timelines](/deploy/early-access/reference/timelines/).
Apps will generally be linked to a GitHub repository on creation. However, it is
possible to unlink the repository after creation, and optionally link it to a
new GitHub repository. This can be done from the app settings page.
Only accounts that have been authorized with the Deno Deploy GitHub app will be
visible in the GitHub repository dropdown. You can authorize new orgs or repos
by clicking the "+ Add another GitHub account" button in the user or
organization dropdown, or the "Configure GitHub app permissions" button in the
repository dropdown. This will redirect you to GitHub to authorize the Deno
Deploy GitHub app with the selected GitHub account or organization. After
authorizing, you will be redirected back to the app settings page, where you can
select the new GitHub repository.
---
# deploy/early-access/reference/builds.md
> Detailed explanation of the build process in Deno Deploy Early Access, covering build triggers, stages, configuration options, caching, and the build environment.
URL: https://docs.deno.com/deploy/early-access/reference/builds
In Deno Deploy EA, each version of your application code is
represented as a revision (or build). When deploying from GitHub, revisions
generally map one-to-one to git commits in your repository.
## Build triggers
Builds can be triggered in two ways:
- **Manually**: Using the "Deploy Default Branch" button on the builds page,
which deploys the default git branch (usually `main`). The dropdown menu lets
you select a different branch.
- **Automatically**: When a new commit is pushed to a GitHub repository linked
to your app.
## Build stages
A revision goes through these stages before becoming available:
1. **Queuing**: The revision waits to be assigned to a builder.
2. **Preparing**: A builder downloads the source code and restores any available
build caches.
3. **Install**: The install command executes (if specified), typically
downloading dependencies.
4. **Build**: The build command executes (if specified), creating a build
artifact that is uploaded to the runtime infrastructure.
5. **Warm up**: A `GET /` request tests that the application boots correctly and
can handle HTTP requests.
6. **Route**: The global infrastructure is configured to route requests to the
new revision based on its timelines.
If any step fails, the build enters a "Failed" state and does not receive
traffic.
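Because the warm-up stage sends a real `GET /` request, even a minimal dynamic
entrypoint needs to answer HTTP before it can pass. A minimal sketch:
```ts title="main.ts"
// Start an HTTP server immediately so the warm-up GET / succeeds.
Deno.serve(() => new Response("OK"));
```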
Build logs are streamed live to the dashboard during the build process and
remain available on the build page after completion.
Build caching speeds up builds by reusing files that haven't changed between
builds. This happens automatically for framework presets and the `DENO_DIR`
dependency cache.
You can cancel a running build using the "Cancel" button in the top-right corner
of the build page. Builds automatically cancel after running for 5 minutes.
## Build configuration
Build configuration defines how to convert source code into a deployable
artifact. You can modify build configuration in three places:
- During app creation by clicking "Edit build config"
- In app settings by clicking "Edit" in the build configuration section
- In the retry drawer on a failed build's page
When creating an app, build configuration may be automatically detected from
your repository if you're using a recognized framework or common build setup.
### Configuration options
- **Framework preset**: Optimized configuration for supported frameworks like
Next.js or Fresh. [Learn more about framework integrations](./frameworks/).
- **Install command**: Shell command for installing dependencies, such as
`npm install` or `deno install`.
- **Build command**: Shell command for building the project, often a task from
`package.json` or `deno.json`, such as `deno task build` or `npm run build`.
- **Runtime configuration**: Determines how the application serves traffic:
- **Dynamic**: For applications that respond to requests using a server (API
servers, server-rendered websites, etc.)
- **Entrypoint**: The JavaScript or TypeScript file to execute
- **Arguments** (optional): Command-line arguments to pass to the
application
- **Static**: For static websites serving pre-rendered content
- **Directory**: Folder containing static assets (e.g., `dist`, `.output`)
- **Single page app mode** (optional): Serves `index.html` for paths that
don't match static files instead of returning 404 errors
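For instance, a hypothetical `deno.json` could define the task that the build
command runs (here, `deno task build` would be entered as the build command):
```jsonc title="deno.json"
{
  "tasks": {
    // hypothetical build script invoked by `deno task build`
    "build": "deno run -A build.ts"
  }
}
```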
## Build environment
The build environment runs on Linux using either x64 or ARM64 architecture.
Available tools include:
- `deno` (same version as at runtime)
- `node`
- `npm`
- `npx`
- `yarn` (v1)
- `pnpm`
- `git`
- `tar`
- `gzip`
:::info
All JavaScript inside of the builder is executed using Deno.
The `node` command is actually a shim that translates Node.js invocations to
`deno run`. Similarly, `npm`, `npx`, `yarn`, and `pnpm` run through Deno rather
than Node.js.
:::
Environment variables configured for the "Build" context are available during
builds, but variables from "Production" or "Development" contexts are not.
[Learn more about environment variables](./env-vars-and-contexts/).
Builders have 8 GB of storage available during the build process.
---
# deploy/early-access/reference/caching.md
> Overview of CDN caching functionality in Deno Deploy Early Access, including cache configuration, directives, and best practices.
URL: https://docs.deno.com/deploy/early-access/reference/caching
Deno Deploy EA includes a built-in CDN that can cache responses from
your application. This improves performance for:
- Static assets (images, CSS, JavaScript files)
- API responses and server-rendered pages that don't change frequently
Caching is enabled by default for all applications, but only responses with
appropriate caching headers are actually cached.
Deno Deploy EA integrates with popular frameworks like Next.js to
automatically optimize caching for features such as Incremental Static
Regeneration (ISR).
The CDN cache is tied to both the revision and context. When you deploy a new
revision, the cache is automatically invalidated, ensuring users always see the
latest version of your application. Note that browser caching may still serve
older content if the `Cache-Control` header permits it.
## Caching a resource
To cache a resource, set the `Cache-Control` header in your response. This
standard HTTP header tells browsers and the CDN how to cache your content.
### Supported caching directives
Deno Deploy EA supports these caching directives:
| Directive | Description |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `max-age` | Maximum time (in seconds) the response is considered fresh by both CDN and browsers. After this time, the response is considered stale and revalidated with the server. |
| `s-maxage` | Maximum time (in seconds) the response is considered fresh by shared caches (CDNs only, not browsers). After this time, the response is revalidated with the server. |
| `stale-while-revalidate` | Maximum time (in seconds) a stale response can be served while a fresh one is fetched in the background. |
| `stale-if-error` | Maximum time (in seconds) a stale response can be served if the server returns an error. |
| `immutable` | Indicates the response will never change, allowing indefinite caching. Ideal for content-hashed static assets. |
| `no-store` | Prevents caching of the response. Use for dynamic content that should never be cached. |
| `no-cache` | Requires revalidation with the server before serving from cache. Use for content that changes frequently but can benefit from conditional requests. |
### Additional caching headers
- `Vary`: Specifies which request headers should be included in the cache key,
creating separate cached versions based on those headers.
- `Expires`: Sets an absolute expiration date for the response (alternative to
`max-age`).
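As a sketch (the directive values are illustrative, not recommendations), a
handler could let the CDN cache a page for five minutes and serve it stale for
up to one more minute while revalidating:
```ts
Deno.serve(() => {
  return new Response("<h1>Hello</h1>", {
    headers: {
      // Content type of the response
      "Content-Type": "text/html",
      // Fresh on the CDN for 300s; a stale copy may be served for 60s more
      // while a fresh response is fetched in the background
      "Cache-Control": "public, s-maxage=300, stale-while-revalidate=60",
      // Keep separate cached variants per requested encoding
      "Vary": "Accept-Encoding",
    },
  });
});
```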
---
# deploy/early-access/reference/domains.md
> Complete guide to domain management in Deno Deploy Early Access, including organization domains, custom domains, DNS configuration, TLS certificates, and domain assignments.
URL: https://docs.deno.com/deploy/early-access/reference/domains
Every organization has a default domain used for all applications deployed
within that organization. For example, an organization with the slug `acme-inc`
would have a default domain of `acme-inc.deno.net`. An application named
`my-app` would automatically receive the production domain
`my-app.acme-inc.deno.net`.
In addition to these default domains, you can add custom domains to your
applications. Custom domains are domains that you own and control. To use a
custom domain, you must:
1. Own the domain (purchased from a domain registrar)
2. Have access to edit its DNS records
Custom domains belong to an organization and can be attached to any application
within that organization.
A custom domain can be added as:
- A base domain (e.g., `example.com` or a specific subdomain)
- A wildcard domain (e.g., `*.example.com`)
A base domain works with a single application, while a wildcard domain offers
more flexibility. You can either:
- Assign the entire wildcard to one application (all subdomains point to the
same app)
- Partially assign it to multiple applications (different subdomains point to
different apps)
All custom domains require valid TLS certificates. Deno Deploy EA can
automatically provision these certificates using Let's Encrypt.
## Adding a custom domain
1. Go to the organization domains page (click your organization name in the top
left corner, then the "Domains" tab)
2. Click "Add Domain"
3. Enter your domain (e.g., `example.com`)
4. Select whether to add just this domain or also include the wildcard subdomain
5. Click "Add Domain"
This will open the domain configuration drawer.
### DNS configuration
The domain configuration drawer shows the DNS records needed to:
- Verify domain ownership
- Generate TLS certificates
- Route traffic to Deno Deploy EA
There are three possible configuration methods, depending on your domain
registrar's capabilities:
#### ANAME/ALIAS method (preferred)
If your registrar supports `ANAME` or `ALIAS` records, this is the best option:
- Add one `ANAME`/`ALIAS` record
- Add one `CNAME` record for verification
#### CNAME method
Works well for subdomains but not for apex domains:
- Add two `CNAME` records
- Note: This method doesn't allow other DNS records (like `MX` records) on the
same domain
#### A record method
Most compatible but requires more configuration:
- Add one `A` record
- Add one `CNAME` record for verification
> Note: Currently, Deno Deploy EA doesn't support IPv6. When using the
> `ANAME/ALIAS` or `CNAME` methods, your domain will automatically start using
> IPv6 once it is supported. With the `A` record method, you'll receive an
> email when it's time to add an `AAAA` record.
:::warning
When using Cloudflare as your DNS provider, you **MUST** disable the proxying
feature (orange cloud) for the `_acme-challenge` CNAME record, or verification
and certificate provisioning will fail.
:::
### Verification
After adding the DNS records, Deno Deploy EA will verify your domain
ownership. This process may take a few minutes depending on your DNS provider.
You can leave the domain configuration drawer open during verification - it will
refresh automatically when complete.
You can manually trigger verification by clicking the "Provision Certificate"
button. Successful verification also initiates TLS certificate provisioning.
### TLS certificate provisioning
After domain verification, click "Provision Certificate" to generate a TLS
certificate through Let's Encrypt. This process takes up to 90 seconds.
Once provisioned, you'll see certificate details including expiration date and
issue time.
Certificates are automatically renewed near expiry. You can check the current
certificate status in the domain configuration drawer.
## Assigning a custom domain to an application
After adding a custom domain to your organization:
1. Go to the organization domains page
2. Click "Assign" next to the custom domain
3. Select the target application
4. If using a wildcard domain, choose whether to attach the base domain, the
wildcard, or a specific subdomain
5. Click "Assign Domain"
## Unassigning a custom domain from an application
1. Go to the application settings page
2. Find the "Custom Domains" section
3. Click "Remove" next to the domain you want to unassign
This removes the domain from the application but keeps it available in your
organization for use with other applications.
## Removing a custom domain
1. Go to the organization domains page
2. Open the domain configuration drawer
3. Click "Delete" and confirm
This removes the custom domain from your organization and deletes all domain
assignments across all applications.
---
# deploy/early-access/reference/env-vars-and-contexts.md
> Guide to managing environment variables and contexts in Deno Deploy Early Access, including variable types, creation, editing, and accessing them in your code.
URL: https://docs.deno.com/deploy/early-access/reference/env-vars-and-contexts
Environment variables in Deno Deploy EA allow you to configure your
application with static values such as API keys or database connection strings.
## Types of environment variables
Environment variables can be stored as:
- **Plain text**: Visible in the UI and suitable for non-sensitive values like
feature flags
- **Secrets**: Never visible in the UI after creation, only readable from
application code, suitable for sensitive values like API keys
Variables can be set at:
- **Application level**: Specific to a single application
- **Organization level**: Applied to all applications in the organization, but
can be overridden by application-level variables
## Contexts
Each environment variable applies to one or more contexts. Contexts represent
the logical "environments" in which your code runs, each with its own set of
variables and secrets.
By default, there are two contexts:
- **Production**: Used for the production timeline serving production traffic
- **Development**: Used for development timelines serving non-production traffic
(preview URLs and branch URLs)
:::info
Need additional contexts? Please contact [support](../support).
:::
Additionally, there is a **Build** context used during the build process.
Environment variables in the Build context are only available during builds and
aren't accessible in Production or Development contexts (and vice versa). This
separation enables different configuration for build-time vs. runtime.
Within a single application or organization, you cannot have multiple
environment variables with the same name in the same context. You can, however,
have variables with the same name in different non-overlapping contexts.
## Adding, editing and removing environment variables
You can manage environment variables from several locations:
- On the "New App" page while creating an application
- In the application settings under the "Environment Variables" section
- In the organization settings under the "Environment Variables" section
In each location, click the relevant edit button to open the environment
variables drawer. Changes only apply when you click "Save." Clicking "Cancel"
discards your changes.
To add a variable:
1. Click "Add Environment Variable"
2. Enter the name and value
3. Specify whether it's a secret
4. Select the contexts where it should apply
You can also bulk import variables from a `.env` file:
1. Click "+ Add from .env file"
2. Paste the contents of your `.env` file
3. Click "Import variables"
Note that lines starting with `#` are treated as comments.
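For example, pasting the following (the variable names are illustrative)
imports two variables and skips the comment line:
```
# this line is treated as a comment and ignored
API_URL=https://api.example.com
API_KEY=super-secret-value
```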
To remove a variable, click the "Remove" button next to it.
To edit a variable, click the "Edit" button next to it to modify its name,
value, secret status, or applicable contexts.
## Using environment variables in your code
Access environment variables using the `Deno.env.get` API:
```ts
const myEnvVar = Deno.env.get("MY_ENV_VAR");
```
## Predefined environment variables
Deno Deploy EA provides these predefined environment variables in all
contexts:
- `DENO_DEPLOYMENT_ID`: A unique identifier representing the entire
configuration set (application ID, revision ID, context, and environment
variables). Changes if any of these components change.
- `DENO_REVISION_ID`: The ID of the currently running revision.
More predefined variables will be added in the future.
Note that you cannot manually set any environment variables starting with
`DENO_*` as these are reserved system variables.
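For example, a minimal sketch that logs both predefined variables:
```ts
// Both values are set automatically by Deno Deploy EA.
const deploymentId = Deno.env.get("DENO_DEPLOYMENT_ID");
const revisionId = Deno.env.get("DENO_REVISION_ID");
console.log(`deployment: ${deploymentId}, revision: ${revisionId}`);
```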
---
# deploy/early-access/reference/frameworks.md
> Detailed guide to supported JavaScript and TypeScript frameworks in Deno Deploy Early Access, including Next.js, Astro, Nuxt, SvelteKit, and more.
URL: https://docs.deno.com/deploy/early-access/reference/frameworks
Deno Deploy EA supports a number of JavaScript and TypeScript
frameworks out of the box. This means that you can use these frameworks without
any additional configuration or setup.
Natively supported frameworks are tested to work with Deno Deploy EA and are
automatically detected when you create a new app. Deno Deploy EA automatically
optimizes the build and runtime configuration for these frameworks.
Frameworks not listed here are still likely to work, but may require manually
configuring the install and/or build command and the runtime configuration in
the build settings.
Feel like a framework is missing? Let us know in the
[Deno Deploy Discord channel](https://discord.gg/deno) or
[contact Deno support](../support).
## Supported frameworks
### Next.js
Next.js is a React framework for building full-stack web applications. You use
React Components to build user interfaces, and Next.js for additional features
and optimizations.
Both pages and app router are supported out of the box. ISR, SSG, SSR, and PPR
are supported. Caching is supported out of the box, including using the new
`"use cache"`.
`next/image` works out of the box.
Next.js on Deno Deploy EA always builds in standalone mode.
Tracing is supported out of the box, and Next.js automatically emits some spans
for incoming requests, routing, rendering, and other operations.
### Astro
Astro is a web framework for building content-driven websites like blogs,
marketing, and e-commerce. Astro leverages server rendering over client-side
rendering in the browser as much as possible.
For static Astro sites, no additional configuration is needed to use Deno Deploy
EA.
When using SSR in Astro with Deno Deploy
EA, you need to install the
[`@deno/astro-adapter`](https://github.com/denoland/deno-astro-adapter) package
and configure your `astro.config.mjs` file to use the adapter:
```bash
$ deno add npm:@deno/astro-adapter
# or npm install @deno/astro-adapter
# or yarn add @deno/astro-adapter
# or pnpm add @deno/astro-adapter
```
```diff title="astro.config.mjs"
import { defineConfig } from 'astro/config';
+ import deno from '@deno/astro-adapter';
export default defineConfig({
+ output: 'server',
+ adapter: deno(),
});
```
Sharp image optimization is supported.
The `astro:env` API is supported.
### Nuxt
Create high-quality web applications with Nuxt, the open source framework that
makes full-stack development with Vue.js intuitive.
Nuxt requires no additional setup.
### SolidStart
SolidStart is an open source meta-framework designed to unify components that
make up a web application. It is built on top of Solid.
SolidStart requires no additional setup.
### SvelteKit
SvelteKit is a framework for rapidly developing robust, performant web
applications using Svelte.
SvelteKit requires no additional setup.
### Fresh
Fresh is a full stack modern web framework for JavaScript and TypeScript
developers. Fresh uses Preact as the JSX rendering engine.
Fresh requires no additional setup.
### Lume
Lume is a static site generator for building fast and modern websites using
Deno.
Lume requires no additional setup.
### Remix
> ⚠️ **Experimental**: Remix is not yet fully supported. It is in the process of
> being integrated into Deno Deploy EA. Some features may not work as
> expected. Please report any issues you encounter to the Deno team.
---
# deploy/early-access/reference/index.md
> Comprehensive reference guide for Deno Deploy Early Access covering accounts, organizations, applications, builds, observability, environments, and custom domains.
URL: https://docs.deno.com/deploy/early-access/reference/
Specific terminology is used in Deploy Early Access. Use this reference guide to
understand key concepts and details about the platform.
## Topics
### [Accounts](/deploy/early-access/reference/accounts)
Information about user accounts, authentication, and personal settings in Deploy
Early Access.
### [Organizations](/deploy/early-access/reference/organizations)
Learn about creating and managing organizations, team members, roles, and
permissions.
### [Applications](/deploy/early-access/reference/apps)
Details about application creation, configuration, and lifecycle management.
### [Builds](/deploy/early-access/reference/builds)
Understanding the build process, build configurations, and deployment pipelines.
### [Playgrounds](/deploy/early-access/reference/playgrounds)
Write and deploy code without needing to create a git repository.
### [Observability](/deploy/early-access/reference/observability)
Monitoring applications, accessing logs, metrics, and performance insights.
### [Environments](/deploy/early-access/reference/env-vars-and-contexts/)
Managing different deployment environments including development, staging, and
production.
### [Custom Domains](/deploy/early-access/reference/domains)
Setting up and configuring custom domains for your applications.
---
# deploy/early-access/reference/observability.md
> Comprehensive overview of monitoring features in Deno Deploy Early Access, including logs, traces, metrics, and filtering options.
URL: https://docs.deno.com/deploy/early-access/reference/observability
Deno Deploy EA provides comprehensive observability features to help
you understand application performance, debug errors, and monitor usage. These
features leverage OpenTelemetry and the
[built-in OpenTelemetry integration in Deno](/runtime/fundamentals/open_telemetry/).
The three main observability features in Deno Deploy EA are:
- **Logs**: Unstructured debug information emitted by your application code
- **Traces**: Structured information about request handling, including execution
time for each step and automatic capture of outbound I/O operations
- **Metrics**: Structured, high-level data about application performance and
usage, such as request count, error count, and latency
## Logs
Logs in Deno Deploy EA are captured using the standard `console` API
and can be queried from the logs page in the dashboard.
Logs are organized by application. You can use the search bar to filter logs
based on various attributes and message content.
When logs are emitted inside the context of a trace, they become associated with
that specific trace and span. For such logs, a "View trace" button appears in
the logs interface, allowing you to open the relevant trace in an overlay drawer
for detailed inspection.
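For example, a `console.log` call inside a request handler runs within that
request's trace context, so the resulting log line is linked to the trace (a
minimal sketch):
```ts
Deno.serve((req) => {
  // Captured as a log entry and associated with the request's trace.
  console.log(`handling ${req.method} ${new URL(req.url).pathname}`);
  return new Response("ok");
});
```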
## Traces
Traces in Deno Deploy EA are captured in three ways:
- **Automatically for built-in operations**: Incoming HTTP requests, outbound
fetch calls, and other system operations are traced automatically. This cannot
be disabled.
- **Automatically for supported frameworks**: Frameworks like Next.js, Fresh,
and Astro include built-in instrumentation. The specific frameworks and
operations covered may change over time.
- **Manually through custom instrumentation**: Your application code can create
new traces or spans using the OpenTelemetry API.
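For example, a custom span can be created with the OpenTelemetry API (a sketch
assuming the `npm:@opentelemetry/api` package; the tracer name, span name, and
attribute are illustrative):
```ts
import { trace } from "npm:@opentelemetry/api@1";

const tracer = trace.getTracer("my-app");

Deno.serve(async (req) => {
  // Creates a child span inside the automatically captured request trace.
  return await tracer.startActiveSpan("load-data", async (span) => {
    try {
      span.setAttribute("example.attribute", "value");
      const res = await fetch("https://example.com/data"); // traced automatically
      await res.body?.cancel(); // discard the body in this sketch
      return new Response(`upstream status: ${res.status}`);
    } finally {
      span.end();
    }
  });
});
```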
Traces are organized by application. The search bar lets you filter based on
various attributes and span names.
Clicking a trace opens the trace overlay drawer, showing all spans within that
trace in a waterfall view. This visualization displays the start time, end time,
and duration of each span, grouped by parent span with the root span at the top.
Clicking any span shows its details at the bottom of the drawer, including all
captured attributes. For example, outbound HTTP requests include the method,
URL, and status code.
The span details section also includes a "Logs" tab showing all logs emitted
within the selected span's context.
You can click "View logs" on any trace to open the logs page with the trace ID
pre-filled in the search bar, showing all logs related to that trace.
## Metrics
Metrics in Deno Deploy EA are automatically captured for various
operations such as incoming HTTP requests and outbound fetch calls. This
automatic capture cannot be disabled.
Metrics are organized by application and displayed in time-series graphs showing
values over time. You can use the search bar to filter metrics based on various
attributes.
## Filtering
Logs, traces, and metrics can be filtered using these general attributes:
- **Revision**: The ID of the application revision that emitted the data
- **Context**: The context in which the data was emitted ("Production" or
"Development")
For logs and traces, this additional filter is available:
- **Trace**: The ID of the trace containing the log or spans
For traces only, these additional filters are available:
- **HTTP Method**: The HTTP method of the request that triggered the trace
- **HTTP Path**: The path of the request that triggered the trace
- **HTTP Status**: The HTTP status code of the response
### Time range filter
By default, the observability pages show data for the last hour. You can change
this using the time range filter in the top right corner of each page.
You can select predefined time ranges like "Last 1 hour," "Last 24 hours," or
"Last 7 days," or set a custom time range by clicking the "Custom" button.
Custom time ranges can be either absolute (specific start and end times) or
relative (e.g., 3 days ago, 1 hour from now). Relative time ranges use the same
syntax as Grafana:
- `now` - the current time
- `now-1h` - 1 hour ago
- `now/h` - the start of the current hour
- `now-1h/h` - the start of the previous hour
- `now/d+3h` - 3 hours from the start of the current day
- `now-1d/d` - the start of the previous day
---
# deploy/early-access/reference/organizations.md
> Guide to creating and managing organizations in Deno Deploy Early Access, including members, permissions, and organization administration.
URL: https://docs.deno.com/deploy/early-access/reference/organizations
Organizations are groups of users that collectively own apps and domains. When
signing up for Deno Deploy EA, each user can either create an
organization or join an existing organization through invitation.
All users must belong to an organization to use Deno Deploy EA, as all
resources are owned at the organization level.
Organizations have both a name and a slug. The name is visible only to
organization members and appears in the organization dropdown in both Deno
Deploy
EA and Deploy Classic. The slug forms part of the default domain for
all applications in the organization.
:::caution
Organizations cannot be renamed, nor can their slug be changed after creation.
:::
Every organization has a default domain used for production, git branch, and
preview URLs for projects in that organization. For example, an organization
with the slug `acme-inc` would have a default domain of `acme-inc.deno.net`.
Organizations can have multiple members. Currently, all members have owner
permissions for the organization, which means they can invite other members,
create and delete apps, and manage domains.
## Create an organization
Organizations in Deno Deploy EA are created from the Deno Deploy
Classic dashboard:
1. Visit the [Deploy Classic dashboard](https://dash.deno.com) and sign in with
your GitHub account.
2. Click the "+" button in the organization dropdown in the top left corner of
the screen.
3. Select "Try the new Deno Deploy" option.
4. Click the "Create Early Access organization" button.
5. Enter an organization name and slug, then click "Create".
:::info
Organization slugs must be unique across all Deno Deploy EA
organizations and cannot match any existing project name in Deno Deploy Classic.
:::
## Deleting an organization
Organizations cannot currently be deleted from the dashboard. Please
[contact Deno support](../support) if you need to delete an organization.
## Inviting users to an organization
To invite a user:
1. Go to the organization settings page and click "+ Invite User"
2. Enter the user's GitHub account username (e.g., `ry`)
3. Optionally enter an email address to send the invitation to
4. Click "Invite"
If you don't specify an email address, we'll attempt to send the invitation to
the email in the user's public GitHub profile or another email we may have on
record.
After inviting a user, they will receive an email with an invite link (if we
have their email address). They must click this link and accept the invitation
to join the organization. You can also directly share the personalized invite
link displayed in the members table after inviting a user.
You can cancel an invitation before it's accepted by clicking the delete button
next to the invited user in the members table and confirming by clicking "Save".
This invalidates the previously sent invitation link.
## Removing users from an organization
To remove a member from the organization, find the user in the members table in
the organization settings, click the remove button, and confirm by clicking
"Delete". "Delete".
---
# deploy/early-access/reference/playgrounds.md
> Write and deploy code completely from Deno Deploy, without the need for a git repository.
URL: https://docs.deno.com/deploy/early-access/reference/playgrounds
Playground applications enable you to create, edit, and deploy applications
entirely from the Deno Deploy EA web dashboard, without needing to
create a GitHub repository.
Playgrounds contain one or more files (JavaScript, TypeScript, TSX, JSON, etc.)
that you can edit directly in the playground editor.
## Creating a playground
You can create playgrounds from the "Playgrounds" page in your organization.
Click the "New Playground" button to create a basic "Hello World" playground.
Using the dropdown on the "New Playground" button lets you create playgrounds
from other templates, such as Next.js or Hono.
## Editing a playground
To edit a playground, open it from the "Playgrounds" page in your organization.
The playground editor consists of five main sections:
- **Code editor**: The central area where you edit code for the currently
selected file. Above the editor is a navbar showing the current file name,
which you can click to edit.
- **File browser**: Located on the left of the code editor, this panel shows all
files in the playground. Click any file to open it in the editor. Create new
files by clicking the "New" icon at the top of the file browser. Delete files
using the delete button next to each file name.
- **Top bar**: Located above the code editor, this contains action buttons for
the playground. The "Deploy" button saves current changes and triggers a
build. "Build Config" and "Env Variables" buttons open their respective
configuration drawers. The left side of the top bar displays the playground
URL (unless the playground hasn't been deployed yet).
- **Bottom drawer**: Located beneath the code editor, this contains debugging
tools including "Build Logs" that show build progress during deployment, and
tabs for viewing logs and traces.
- **Right drawer**: Located to the right of the code editor, this contains tools
for inspecting application output. The "Preview" tab displays an iframe
showing the deployed application, while "HTTP Explorer" lets you send
individual HTTP requests to your deployment.
The playground content automatically saves when you click the "Deploy" button or
when the editor loses focus.
## Deleting a playground
> ⚠️ Playgrounds cannot currently be deleted.
## Renaming a playground
> ⚠️ Playgrounds cannot currently be renamed.
## Transferring a playground
> ⚠️ Playgrounds cannot currently be transferred to another organization.
---
# deploy/early-access/reference/runtime.md
> Details about the Deno Deploy Early Access runtime environment, including application lifecycle, startup, shutdown, and cold start optimization.
URL: https://docs.deno.com/deploy/early-access/reference/runtime
In Deno Deploy EA, all applications execute using a standard Deno
runtime in a secure, isolated Linux environment.
The Deno runtime used in Deno Deploy EA is the standard Deno runtime,
with full support for all features of the Deno CLI, including JSR and NPM
dependencies, reading and writing to the file system, making network requests,
spawning subprocesses, and loading FFI and node native addons.
The Deno runtime runs using `--allow-all` permissions.
Custom flags cannot be passed to the Deno runtime.
## Runtime environment
The runtime environment is a Linux-based environment running either x64 or ARM64
architecture. The exact set of tools available in the runtime environment is
subject to change and thus cannot be relied upon.
Currently, Deno Deploy EA runs on Deno 2.3.2.
## Lifecycle
Deno DeployEA runs applications in a serverless environment. This
means that an application is not always running and is only started when a
request is received. When no incoming traffic is received for a period of time,
the application is stopped.
Applications can be started and stopped at any time. They should start quickly
to respond to incoming requests without delay.
Multiple instances of the same application can run simultaneously. For example,
one instance could be running in the US and another in Europe. Each instance is
completely isolated from the others and they do not share CPU, memory, or disk
resources. Multiple instances can also start in the same region when needed,
such as to handle high traffic or during infrastructure updates.
### Startup
When the system decides to start an application, it provisions a new sandbox
environment for the application. This environment is isolated from all other
applications.
It then starts the application using the configured entrypoint and waits for the
HTTP server to start. If the application crashes before the HTTP server starts,
the request that triggered the start will fail with a 502 Bad Gateway error.
Once the application is started, incoming requests are routed to it and
responses are sent back to the client.
### Shutdown
The application remains alive until no new incoming requests are received or
responses (including response body bytes) are sent for a period of time. The
exact timeout is between 5 seconds and 10 minutes. WebSocket connections that
actively transmit data (including ping/pong frames) also keep the application
alive.
Once the system decides to stop the application, it sends a `SIGINT` signal to
the application as a trigger to shut down. From this point on, the application
has 5 seconds to shut down gracefully before it will be forcibly killed with a
`SIGKILL` signal.
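A sketch of a handler that uses this window to shut down gracefully:
```ts
const server = Deno.serve(() => new Response("ok"));

Deno.addSignalListener("SIGINT", async () => {
  // Stop accepting new connections and let in-flight requests finish,
  // staying within the 5-second window before SIGKILL.
  await server.shutdown();
  Deno.exit(0);
});
```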
### Eviction
Sometimes an isolate may shut down even if the application is actively receiving
traffic. Some examples of when this can happen are:
- An application was scaled up to handle load, but the load has decreased enough
to be handled by a single instance again.
- The underlying server executing the instance is too resource constrained to
continue running this application instance.
- The underlying infrastructure is being updated or has experienced a failure.
When the system decides to evict an application, it attempts to divert traffic
away from the instance being evicted as early as possible. Sometimes this means
that a request will wait for a new instance to boot up even though an existing
instance is already running.
When an application only serves requests that finish quickly, evictions are
usually unnoticeable. For applications that serve long-running requests or
WebSockets, evictions can be more noticeable because the application may need to
be evicted while still processing a request. The system will try to avoid these
scenarios, but it is not always possible.
After traffic has been diverted away from the old instance, the system sends a
`SIGINT` signal to trigger a graceful shutdown. The application should finish
processing any remaining requests quickly and shut down websockets and other
long-running connections. Clients making long-running requests should be
prepared to handle these disruptions and reconnect when disconnected.
5 seconds after the `SIGINT` signal is sent, the old instance will be forcibly
killed with a `SIGKILL` signal if it has not already shut down gracefully.
## Cold starts
Because applications are not always running, they may need to start when a
request is received. This is called a cold start. Cold starts in Deno Deploy
EA are highly optimized and complete within 100 milliseconds for
hello world applications, and within a few hundred milliseconds for larger
applications.
Deno Deploy EA uses multiple optimizations to enable fast cold starts:
- Sandboxes and the Deno runtime are pre-provisioned to ensure they don't need
to be created from scratch when starting an application.
- Applications start immediately when the client sends the first TCP packet to
establish a TLS connection. For fast-starting applications, depending on the
network round trip latency, the application may already be running before the
client sends the HTTP request.
- File system access is optimized for frequently used startup files. Deno
Deploy EA analyzes file access patterns during the build step's
warmup phase and optimizes the file system for faster access.
When cold starts are slow, they can negatively impact user experience. To
optimize your application for quick startup:
1. Minimize dependencies used by your application.
2. Load infrequently accessed code and dependencies lazily using dynamic
`import()`.
3. Minimize I/O operations during startup, especially top-level `await`
operations and network requests.
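For example, point 2 could look like the following sketch (`./report.ts` is a
hypothetical heavy module):
```ts
Deno.serve(async (req) => {
  if (new URL(req.url).pathname === "/report") {
    // Loaded on first use instead of at startup, keeping cold starts fast.
    const { renderReport } = await import("./report.ts");
    return new Response(await renderReport());
  }
  return new Response("ok");
});
```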
If your application starts slowly, please [contact Deno support](../support) for
help investigating the issue.
---
# deploy/early-access/reference/timelines.md
> Understanding deployment timelines in Deno Deploy Early Access, including production and development contexts, active revisions, rollbacks, and timeline locking.
URL: https://docs.deno.com/deploy/early-access/reference/timelines
A timeline is a representation of the history of one branch of the application.
Each timeline has a set of revisions, which are the individual items in the
timeline. One of the revisions (usually the most recent one) is the "active"
revision, which is the one that is currently serving traffic. The active
revision receives traffic on all URLs that are assigned to the timeline.
Each timeline is associated with a [context](./env-vars-and-contexts.md), which
decides which environment variables are available to the code running in that
timeline.
By default, there are multiple timelines set up for each application:
- **Production**: The production timeline contains all of the revisions from the
  default git branch. This is the timeline that serves production traffic. This
  timeline is associated with `https://<app-slug>.<org-slug>.deno.net`, and any
  custom domains that are mapped to the application. It uses the production
  context.
- **Git Branch / `<branch>`**: Each git branch has its own timeline. This
  timeline contains all of the revisions from that git branch. This timeline is
  associated with `https://<app-slug>--<branch>.<org-slug>.deno.net`. It uses
  the development context.
> There is also one timeline for each revision, that contains only that
> revision. This is the timeline that backs the preview URL for that revision.
> This timeline is associated with
> `https://<app-slug>-<revision-id>.<org-slug>.deno.net`. It uses the
> development context.
>
> Preview timelines are not visible in timeline pages in the UI. You can view
> the preview URL for a revision on that revision's build page.
You can view the timelines that each revision is associated with on the
revision's build page. You can also view the revisions that are associated with
a given timeline from the timeline pages.
## Active revision
Each timeline has an active revision. The active revision is the revision that
is currently serving traffic for that timeline. You can view the active revision
for a timeline on the timeline page.
Usually, the active revision is the most recently built revision on the
timeline. However, a different revision can be manually locked to be the active
revision. This enables rollback and timeline locking:
### Rollback
Rollback is the process of reverting the active revision to a previous revision,
usually because the newer revision has some sort of bug or issue. By rolling
back to a known good revision, you can restore the application to a working
state without having to deploy new code via Git and wait for a build to
complete.
Refer to "changing the active revision" below for more information on how to
roll back a timeline.
### Timeline locking
Timeline locking is the process of locking a timeline to a specific revision, to
ensure that new builds do not automatically become the active revision. This is
useful if you are in a feature freeze situation, for example during a big event,
and want to de-risk by not allowing new builds to be deployed. When a timeline
is locked to a specific revision you can still create new builds by pushing to
Git, but they will not automatically become the active revision on the locked
timeline.
Refer to "changing the active revision" below for more information on how to
lock a timeline to a specific revision.
### Changing the active revision
On the timelines page, you can lock any revision on that timeline as the active
revision. This locks the timeline to that revision, and new builds will no
longer automatically become the active revision on this timeline. You can then
either unlock the revision, reverting to the default behavior of the latest
revision being active, or lock a different revision as the active revision.
---
# deploy/early-access/support/index.md
URL: https://docs.deno.com/deploy/early-access/support/
:::info
You are viewing the documentation for Deno Deployᴱᴬ. Looking for
Deploy Classic documentation? [View it here](/deploy/).
:::
If you have any questions or feedback about Deno Deployᴱᴬ, please
reach out to us on the [Deno Discord](https://discord.gg/deno) in the
`#deploy-ea` channel or [contact us](mailto:deploy@deno.com).
We are actively working on improving the platform and would love to hear your
thoughts!
---
# Deno Deployᴱᴬ Usage Guidelines
> Important limitations, service level expectations, and terms of use for the Deno Deploy Early Access program.
URL: https://docs.deno.com/deploy/early-access/usage
As an early access product, Deno Deployᴱᴬ currently has a number of
limitations you should be aware of before using it:
- Deno Deploy Pro account features do not yet extend to Deno Deployᴱᴬ
- CLI deployment and deployment from GitHub Actions are not yet available in
  Deno Deployᴱᴬ
- Database features such as Deno KV are not yet available in Deno Deployᴱᴬ
- Queues and Cron are not available in Deno Deployᴱᴬ
:::info
Deno Deployᴱᴬ is an early access product, and as such is not
currently covered by our regular service level agreements.
:::
The Deno company now uses Deno Deployᴱᴬ to host our own websites
and is putting significant effort into ensuring service reliability. However,
as this is a new system, occasional service interruptions may occur.
While Deno Deployᴱᴬ is in closed beta, we are not charging for usage
of the platform. However, the
[Acceptable Use Policy](/deploy/manual/acceptable-use-policy/) and
[Terms and Conditions](/deploy/manual/terms-and-conditions/) still apply, and we
reserve the right to terminate any user, organization, or app that we find to be
in violation of these terms.
---
# deploy/index.md
URL: https://docs.deno.com/deploy/
---
# Backups
URL: https://docs.deno.com/deploy/kv/manual/backup
KV databases hosted on Deno Deploy can be continuously backed up to your own
S3-compatible storage buckets. This is in addition to the replication and
backups that we internally perform for all data stored in hosted Deno KV
databases to ensure high availability and data durability.
This backup happens continuously with very little lag, enabling
_[point-in-time-recovery](https://en.wikipedia.org/wiki/Point-in-time_recovery)_
and live replication. Enabling backup for KV databases unlocks various
interesting use-cases:
- Retrieving a consistent snapshot of your data at any point in time in the past
- Running a read-only data replica independent of Deno Deploy
- Pushing data into your favorite data pipeline by piping mutations into
streaming platforms and analytical databases like Kafka, BigQuery and
ClickHouse
## Configuring backup to Amazon S3
First you must create a bucket on AWS. Using the AWS console:
1. Go to the [AWS S3 console](https://s3.console.aws.amazon.com/s3/home)
2. Click "Create bucket"
3. Enter a bucket name and choose an AWS region, then scroll down and click
   "Next"
Or using the AWS CLI:
1. Install the
   [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
2. Run
   `aws s3api create-bucket --bucket <bucket-name> --region <region> --create-bucket-configuration LocationConstraint=<region>`
   (replace `<bucket-name>` and `<region>` with your own values)
Then, create an IAM policy with `PutObject` access to the bucket, attach it to
an IAM user, and create access keys for that user. Using the AWS console:
1. Go to the [AWS IAM console](https://console.aws.amazon.com/iam/home)
2. Click "Policies" in the left sidebar
3. Click on "Create policy"
4. Select "JSON" in the policy editor and paste the following policy:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "KVBackup",
"Effect": "Allow",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::/*"
}
]
}
```
Replace `<bucket-name>` with the name of the bucket you created earlier.
5. Click "Review policy"
6. Enter a name for the policy and click "Create policy"
7. Click "Users" in the left sidebar
8. Click "Add user"
9. Enter a name for the user and click "Next"
10. Click "Attach policies directly"
11. Search for the policy you created earlier and click the checkbox next to it
12. Click "Next"
13. Click "Create user"
14. Click on the user you just created
15. Click "Security credentials" and then "Create access key"
16. Select "Other", then click "Next"
17. Enter a description for the access key and click "Create access key"
18. Copy the access key ID and secret access key and save them somewhere safe.
You will need them later, and you will not be able to retrieve them again.
Or using the AWS CLI:
1. Copy the following command to your terminal, and replace `<policy-name>`
   with a name for the policy and `<bucket-name>` with the name of the bucket
   you created earlier, then run it:
   ```
   aws iam create-policy --policy-name <policy-name> --policy-document '{"Version":"2012-10-17","Statement":[{"Sid":"KVBackup","Effect":"Allow","Action":"s3:PutObject","Resource":"arn:aws:s3:::<bucket-name>/*"}]}'
   ```
2. Copy the following command to your terminal, and replace `<user-name>` with a
   name for the user you are creating, then run it:
   ```
   aws iam create-user --user-name <user-name>
   ```
3. Copy the following command to your terminal, and replace `<policy-arn>` with
   the ARN of the policy you created in step 1, and `<user-name>` with the name
   of the user you created in the previous step, then run it:
   ```
   aws iam attach-user-policy --policy-arn <policy-arn> --user-name <user-name>
   ```
4. Copy the following command to your terminal, and replace `<user-name>` with
   the name of the user you created in step 2, then run it:
   ```
   aws iam create-access-key --user-name <user-name>
   ```
5. Copy the access key ID and secret access key from the output and save them
   somewhere safe. You will need them later, and you will not be able to
   retrieve them again.
Now visit the [Deno Deploy dashboard](https://dash.deno.com), and click on the
"KV" tab in your project. Scroll to the "Backup" section, and click on "AWS S3".
Enter the bucket name, access key ID, and secret access key you created earlier,
and the region the bucket is in. Then click "Save".
The backup will start immediately. Once the data has been backed up and
continuous backup is active, you will see the status change to "Active".
## Configuring backup to Google Cloud Storage
Google Cloud Storage (GCS) is compatible with the S3 protocol, and can also be
used as a backup target.
First you must create a bucket on GCP. Using the GCP console:
1. Go to the
   [GCP Cloud Storage console](https://console.cloud.google.com/storage/browser)
2. Click on "Create" in the top bar
3. Enter a bucket name, choose a location, and click "Create"
Or using the gcloud CLI:
1. Install the [gcloud CLI](https://cloud.google.com/sdk/docs/install)
2. Run `gcloud storage buckets create gs://<bucket-name> --location <location>`
   (replace `<bucket-name>` and `<location>` with your own values)
Then, create a service account with `Storage Object Admin` access to the bucket,
and create an HMAC access key for the service account. Using the GCP console:
1. Go to the [GCP IAM console](https://console.cloud.google.com/iam-admin/iam)
2. Click on "Service accounts" in the left sidebar
3. Click on "Create service account"
4. Enter a name for the service account and click "Done"
5. Copy the email for the service account you just created. You will need it
later.
6. Go to the
[GCP Cloud Storage console](https://console.cloud.google.com/storage/browser)
7. Click on the bucket you created earlier
8. Click on "Permissions" in the toolbar
9. Click "Grant access"
10. Paste the email for the service account you copied earlier into the "New
principals" field
11. Select "Storage Object Admin" from the "Select a role" dropdown
12. Click "Save"
13. Click on "Settings" in the left sidebar (still in the Cloud Storage console)
14. Click on the "Interoperability" tab
15. Click on "Create a key for a service account"
16. Select the service account you created earlier
17. Click "Create key"
18. Copy the access key and secret access key and save them somewhere safe. You
will need them later, and you will not be able to retrieve them again.
Or using the gcloud CLI:
1. Run the following command, replacing `<service-account-name>` with a name for
   the service account you are creating:
   ```
   gcloud iam service-accounts create <service-account-name>
   ```
2. Run the following command, replacing `<bucket-name>` with the name of the
   bucket you created earlier, and `<service-account-email>` with the email of
   the service account you created in the previous step:
   ```
   gsutil iam ch serviceAccount:<service-account-email>:objectAdmin gs://<bucket-name>
   ```
3. Run the following command, replacing `<service-account-email>` with the email
   of the service account you created in the previous step:
   ```
   gcloud storage hmac create <service-account-email>
   ```
```
4. Copy the `accessId` and `secret` and save them somewhere safe. You will need
them later, and you will not be able to retrieve them again.
Now visit the [Deno Deploy dashboard](https://dash.deno.com), and click on the
"KV" tab in your project. Scroll to the "Backup" section, and click on "Google
Cloud Storage". Enter the bucket name, access key ID, and secret access key you
created earlier, and the region the bucket is in. Then click "Save".
The backup will start immediately. Once the data has been backed up and
continuous backup is active, you will see the status change to "Active".
## Using backups
S3 backups can be used with the `denokv` tool. Please refer to the
[documentation](https://github.com/denoland/denokv) for more details.
---
# Scheduling cron tasks
URL: https://docs.deno.com/deploy/kv/manual/cron
The [`Deno.cron`](https://docs.deno.com/api/deno/~/Deno.cron) interface enables
you to configure JavaScript or TypeScript code that executes on a configurable
schedule using [cron syntax](https://en.wikipedia.org/wiki/Cron). In the example
below, we configure a block of JavaScript code that will execute every minute.
```ts
Deno.cron("Log a message", "* * * * *", () => {
console.log("This will print once a minute.");
});
```
It's also possible to use JavaScript objects to define the cron schedule. In the
example below, we configure a block of JavaScript code that will execute once an
hour.
```ts
Deno.cron("Log a message", { hour: { every: 1 } }, () => {
console.log("This will print once an hour.");
});
```
`Deno.cron` takes three arguments:
- A human-readable name for the cron task
- A cron schedule string or JavaScript object that defines a schedule on which
the cron job will run
- A function to be executed on the given schedule
If you are new to cron syntax, there are a number of third party modules
[like this one](https://www.npmjs.com/package/cron-time-generator) that will
help you generate cron schedule strings.
## Retrying failed runs
Failed cron invocations are automatically retried with a default retry policy.
If you would like to specify a custom retry policy, you can use the
`backoffSchedule` property to specify an array of wait times (in milliseconds)
to wait before retrying the function call again. In the following example, we
will attempt to retry failed callbacks three times - after one second, five
seconds, and then ten seconds.
```ts
Deno.cron("Retry example", "* * * * *", {
backoffSchedule: [1000, 5000, 10000],
}, () => {
throw new Error("Deno.cron will retry this three times, to no avail!");
});
```
## Design and limitations
Below are some design details and limitations to be aware of when using
`Deno.cron`.
### Tasks must be defined at the top level module scope
The [`Deno.cron`](https://docs.deno.com/api/deno/~/Deno.cron) interface is
designed to support static definition of cron tasks based on pre-defined
schedules. All `Deno.cron` tasks must be defined at the top-level of a module.
Any nested `Deno.cron` definitions (e.g. inside
[`Deno.serve`](https://docs.deno.com/api/deno/~/Deno.serve) handler) will result
in an error or will be ignored.
If you need to schedule tasks dynamically during your Deno program execution,
you can use the [Deno Queues](./queue_overview) APIs.
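For example, a one-off unit of work can be scheduled at runtime with a delayed queue message. This is a sketch; the message shape is illustrative:
```ts
const kv = await Deno.openKv();

// Schedule a hypothetical reminder for delivery in one hour.
await kv.enqueue({ task: "sendReminder", userId: 42 }, {
  delay: 60 * 60 * 1000, // milliseconds
});
```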
### Time zone
`Deno.cron` schedules are specified using UTC time zone. This helps avoid issues
with time zones which observe daylight saving time.
### Overlapping executions
It's possible for the next scheduled invocation of your cron task to overlap
with the previous invocation. If this occurs, `Deno.cron` will skip the next
scheduled invocation in order to avoid overlapping executions.
### Day-of-week numeric representation
`Deno.cron` does not use 0-based day-of-week numeric representation. Instead, it
uses 1-7 (or SUN-SAT) to represent Sunday through Saturday. This may differ
from other cron engines, which use a 0-6 representation.
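For example, given the 1-7 (SUN-SAT) mapping described above, a task that should run on Sundays would look like this:
```ts
// Day-of-week 1 is Sunday in Deno.cron; some engines use 0 for Sunday.
Deno.cron("Sunday digest", "0 0 * * 1", () => {
  console.log("Runs every Sunday at midnight UTC.");
});
```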
## Usage on Deno Deploy
With [Deno Deploy](https://deno.com/deploy), you can run your background tasks
on V8 isolates in the cloud. When doing so, there are a few considerations to
keep in mind.
### Differences with Deno CLI
As with other Deno runtime built-ins (such as queues and Deno KV), the
`Deno.cron` implementation works slightly differently on Deno Deploy.
#### How cron works by default
The implementation of `Deno.cron` in the Deno runtime keeps execution state
in-memory. If you run multiple Deno programs that use `Deno.cron`, each program
will have its own independent set of cron tasks.
#### How cron works on Deno Deploy
Deno Deploy provides a serverless implementation of `Deno.cron` that is designed
for high availability and scale. Deno Deploy automatically extracts your
`Deno.cron` definitions at deployment time, and schedules them for execution
using on-demand isolates. Your latest production deployment defines the set of
active cron tasks that are scheduled for execution. To add, remove, or modify
cron tasks, simply modify your code and create a new production deployment.
Deno Deploy guarantees that your cron tasks are executed at least once per
scheduled time interval. This generally means that your cron handler will be
invoked once per scheduled time. In some failure scenarios, the handler may be
invoked multiple times for the same scheduled time.
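Because of this at-least-once guarantee, handlers with side effects should be made idempotent. Below is one possible sketch that deduplicates runs with an atomic check in Deno KV; the key layout is illustrative:
```ts
Deno.cron("Idempotent hourly task", "0 * * * *", async () => {
  const kv = await Deno.openKv();
  // One marker key per scheduled hour, e.g. ["cronRuns", "hourly", "2024-01-01T00"]
  const runKey = ["cronRuns", "hourly", new Date().toISOString().slice(0, 13)];
  const res = await kv.atomic()
    .check({ key: runKey, versionstamp: null }) // proceed only if no marker yet
    .set(runKey, true, { expireIn: 24 * 60 * 60 * 1000 })
    .commit();
  if (!res.ok) return; // a duplicate invocation already handled this slot
  // ... perform the actual work here ...
});
```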
### Cron dashboard
When you make a production deployment that includes a cron task, you can view a
list of all your cron tasks in the
[Deploy dashboard](https://dash.deno.com/projects) under the `Cron` tab for your
project.

### Pricing
`Deno.cron` invocations are charged at the same rate as inbound HTTP requests to
your deployments. Learn more about pricing
[here](https://deno.com/deploy/pricing).
### Deploy-specific limitations
- `Deno.cron` is only available for production deployments (not preview
deployments)
- The exact invocation time of your `Deno.cron` handler may vary by up to a
minute from the scheduled time
## Cron configuration examples
Here are a few common cron configurations, provided for your convenience.
```ts title="Run once a minute"
Deno.cron("Run once a minute", "* * * * *", () => {
console.log("Hello, cron!");
});
```
```ts title="Run every fifteen minutes"
Deno.cron("Run every fifteen minutes", "*/15 * * * *", () => {
console.log("Hello, cron!");
});
```
```ts title="Run once an hour on the hour"
Deno.cron("Run once an hour on the hour", "0 * * * *", () => {
console.log("Hello, cron!");
});
```
```ts title="Run every three hours"
Deno.cron("Run every three hours", "0 */3 * * *", () => {
console.log("Hello, cron!");
});
```
```ts title="Run every day at 1am"
Deno.cron("Run every day at 1am", "0 1 * * *", () => {
console.log("Hello, cron!");
});
```
```ts title="Run every Wednesday at midnight"
Deno.cron("Run every Wednesday at midnight", "0 0 * * WED", () => {
console.log("Hello, cron!");
});
```
```ts title="Run on the first of the month at midnight"
Deno.cron("Run on the first of the month at midnight", "0 0 1 * *", () => {
console.log("Hello, cron!");
});
```
---
# Data Modeling in TypeScript
URL: https://docs.deno.com/deploy/kv/manual/data_modeling_typescript
In TypeScript applications, it is usually desirable to create strongly-typed,
well-documented objects to contain the data that your application operates on.
Using [interfaces](https://www.typescriptlang.org/docs/handbook/2/objects.html)
or [classes](https://www.typescriptlang.org/docs/handbook/2/classes.html), you
can describe both the shape and behavior of objects in your programs.
If you are using Deno KV, however, there is a bit of extra work required to
persist and retrieve objects that are strongly typed. In this guide, we'll cover
strategies for working with strongly typed objects going into and back out from
Deno KV.
## Using interfaces and type assertions
When storing and retrieving application data in Deno KV, you might want to begin
by describing the shape of your data using TypeScript interfaces. Below is an
object model which describes some key components of a blogging system:
```ts title="model.ts"
export interface Author {
username: string;
fullName: string;
}
export interface Post {
slug: string;
title: string;
body: string;
author: Author;
createdAt: Date;
updatedAt: Date;
}
```
This object model describes a blog post and an associated author.
With Deno KV, you can use these TypeScript interfaces like
[data transfer objects (DTOs)](https://martinfowler.com/bliki/LocalDTO.html) - a
strongly typed wrapper around the otherwise untyped objects you might send to or
receive from Deno KV.
Without any additional work, you can happily store the contents of one of these
DTOs in Deno KV.
```ts
import { Author } from "./model.ts";
const kv = await Deno.openKv();
const a: Author = {
username: "acdoyle",
fullName: "Arthur Conan Doyle",
};
await kv.set(["authors", a.username], a);
```
When retrieving this same object from Deno KV, however, it won't by default have
type information associated with it. If you know the shape of the object that
was stored for the key, however, you can use
[type assertion](https://www.typescriptlang.org/docs/handbook/2/everyday-types.html#type-assertions)
to inform the TypeScript compiler about the shape of an object.
```ts
import { Author } from "./model.ts";
const kv = await Deno.openKv();
const r = await kv.get(["authors", "acdoyle"]);
const ac = r.value as Author;
console.log(ac.fullName);
```
You can also specify an optional
[type parameter](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.get) for
`get`:
```ts
import { Author } from "./model.ts";
const kv = await Deno.openKv();
const r = await kv.get<Author>(["authors", "acdoyle"]);
console.log(r.value?.fullName);
```
For simpler data structures, this technique may be sufficient. But often, you
will want or need to apply some business logic when creating or accessing your
domain objects. When this need arises, you can develop a set of pure functions
that can operate on your DTOs.
## Encapsulating business logic with a service layer
When your application's persistence needs become more complex - such as when you
need to create [secondary indexes](./secondary_indexes) to query your data by
different keys, or maintain relationships between objects - you will want to
create a set of functions to sit on top of your DTOs to ensure that the data
being passed around is valid (and not merely typed correctly).
From our business objects above, the `Post` object is complex enough that it is
likely to need a small layer of code to save and retrieve an instance of the
object. Below is an example of two functions that wrap the underlying Deno KV
APIs, and return strongly typed object instances for the `Post` interface.
Notably, we need to store an identifier for an `Author` object, so we can
retrieve author information from KV later.
```ts
import { Author, Post } from "./model.ts";
const kv = await Deno.openKv();
interface RawPost extends Post {
authorUsername: string;
}
export async function savePost(p: Post): Promise<Post> {
const postData: RawPost = Object.assign({}, p, {
authorUsername: p.author.username,
});
await kv.set(["posts", p.slug], postData);
return p;
}
export async function getPost(slug: string): Promise<Post> {
const postResponse = await kv.get(["posts", slug]);
const rawPost = postResponse.value as RawPost;
const authorResponse = await kv.get(["authors", rawPost.authorUsername]);
const author = authorResponse.value as Author;
const post = Object.assign({}, postResponse.value, {
author,
}) as Post;
return post;
}
```
This thin layer uses a `RawPost` interface, which extends the actual `Post`
interface, to include some additional data that is used to reference data at
another index (the associated `Author` object).
The `savePost` and `getPost` functions take the place of a direct Deno KV `get`
or `set` operation, so that they can properly serialize and "hydrate" model
objects for us with appropriate types and associations.
---
# Deno KV Quick Start
URL: https://docs.deno.com/deploy/kv/manual/
**Deno KV** is a
[key-value database](https://en.wikipedia.org/wiki/Key%E2%80%93value_database)
built directly into the Deno runtime, available in the
[`Deno.Kv` namespace](https://docs.deno.com/api/deno/~/Deno.Kv). It can be used
for many kinds of data storage use cases, but excels at storing simple data
structures that benefit from very fast reads and writes. Deno KV is available in
the Deno CLI and on [Deno Deploy](./on_deploy).
:::caution
Deno KV is still in development and may change. To use it, you must pass the
`--unstable-kv` flag to Deno.
:::
Let's walk through the key features of Deno KV.
## Opening a database
In your Deno program, you can get a reference to a KV database using
[`Deno.openKv()`](https://docs.deno.com/api/deno/~/Deno.openKv). You may pass in
an optional file system path to where you'd like to store your database,
otherwise one will be created for you based on the current working directory of
your script.
```ts
const kv = await Deno.openKv();
```
## Creating, updating, and reading a key-value pair
Data in Deno KV is stored as key-value pairs, much like properties of a
JavaScript object literal or a
[Map](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map).
[Keys](./key_space) are represented as an array of JavaScript types, like
`string`, `number`, `bigint`, or `boolean`. Values can be arbitrary JavaScript
objects. In this example, we create a key-value pair representing a user's UI
preferences, and save it with
[`kv.set()`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.set).
```ts
const kv = await Deno.openKv();
const prefs = {
username: "ada",
theme: "dark",
language: "en-US",
};
const result = await kv.set(["preferences", "ada"], prefs);
```
Once a key-value pair is set, you can read it from the database with
[`kv.get()`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.get):
```ts
const entry = await kv.get(["preferences", "ada"]);
console.log(entry.key);
console.log(entry.value);
console.log(entry.versionstamp);
```
Both `get` and `list` [operations](./operations) return a
[KvEntry](https://docs.deno.com/api/deno/~/Deno.KvEntry) object with the
following properties:
- `key` - the array key you used to set the value
- `value` - the JavaScript object you set for this key
- `versionstamp` - a generated value used to determine if a key has been
updated.
The `set` operation is also used to update objects that already exist for a
given key. When a key's value is updated, its `versionstamp` will change to a
new generated value.
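A quick sketch of that behavior:
```ts
const kv = await Deno.openKv();

const first = await kv.set(["preferences", "ada"], { theme: "dark" });
const second = await kv.set(["preferences", "ada"], { theme: "light" });

// Each successful write produces a new versionstamp for the key.
console.log(first.versionstamp !== second.versionstamp); // true
```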
## Listing several key-value pairs
To get values for a finite number of keys, you may use
[`kv.getMany()`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.getMany).
Pass in several keys as arguments, and you'll receive an array of values for
each key. Note that **values and versionstamps can be `null`** if no value
exists for the given key(s).
```ts
const kv = await Deno.openKv();
const result = await kv.getMany([
["preferences", "ada"],
["preferences", "grace"],
]);
result[0].key; // ["preferences", "ada"]
result[0].value; // { ... }
result[0].versionstamp; // "00000000000000010000"
result[1].key; // ["preferences", "grace"]
result[1].value; // null
result[1].versionstamp; // null
```
Often, it is useful to retrieve a list of key-value pairs from all keys that
share a given prefix. This type of operation is possible using
[`kv.list()`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.list). In this
example, we get a list of key-value pairs that share the `"preferences"` prefix.
```ts
const kv = await Deno.openKv();
const entries = kv.list({ prefix: ["preferences"] });
for await (const entry of entries) {
console.log(entry.key); // ["preferences", "ada"]
console.log(entry.value); // { ... }
console.log(entry.versionstamp); // "00000000000000010000"
}
```
Returned keys are ordered lexicographically based on the next component of the
key after the prefix. So KV pairs with these keys:
- `["preferences", "ada"]`
- `["preferences", "bob"]`
- `["preferences", "cassie"]`
will be returned in that order by `kv.list()`.
Read operations can either be performed in
[**strong or eventual consistency mode**](./operations). Strong consistency mode
guarantees that the read operation will return the most recently written value.
Eventual consistency mode may return a stale value, but is faster. By contrast,
writes are always performed in strong consistency mode.
## Deleting key-value pairs
You can delete a key from the database using
[`kv.delete()`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.delete). No
action is taken if no value is found for the given key.
```ts
const kv = await Deno.openKv();
await kv.delete(["preferences", "alan"]);
```
## Atomic transactions
Deno KV is capable of executing [atomic transactions](./transactions), which
enables you to conditionally execute one or many data manipulation operations at
once. In the following example, we create a new preferences object only if it
hasn't been created already.
```ts
const kv = await Deno.openKv();
const key = ["preferences", "alan"];
const value = {
username: "alan",
theme: "light",
language: "en-GB",
};
const res = await kv.atomic()
.check({ key, versionstamp: null }) // `null` versionstamps mean 'no value'
.set(key, value)
.commit();
if (res.ok) {
console.log("Preferences did not yet exist. Inserted!");
} else {
console.error("Preferences already exist.");
}
```
Learn more about transactions in Deno KV [here](./transactions).
## Improve querying with secondary indexes
[Secondary indexes](./secondary_indexes) store the same data by multiple keys,
allowing for simpler queries of the data you need. Let's say that we need to be
able to access user preferences by both username AND email. To enable this, you
could provide a function that wraps the logic to save the preferences to create
two indexes.
```ts
const kv = await Deno.openKv();
async function savePreferences(prefs) {
const key = ["preferences", prefs.username];
// Set the primary key
const r = await kv.set(key, prefs);
// Set the secondary key's value to be the primary key
await kv.set(["preferencesByEmail", prefs.email], key);
return r;
}
async function getByUsername(username) {
// Use as before...
const r = await kv.get(["preferences", username]);
return r;
}
async function getByEmail(email) {
// Look up the key by email, then second lookup for actual data
const r1 = await kv.get(["preferencesByEmail", email]);
const r2 = await kv.get(r1.value);
return r2;
}
```
Learn more about [secondary indexes in the manual here](./secondary_indexes).
## Watching for updates in Deno KV
You can also listen for updates from Deno KV with `kv.watch()`, which will emit
a new value or values of the key or keys you provide. In the below chat example,
we watch for updates on the key `["last_message_id", roomId]`. We retrieve
`messageId`, which we then use with `kv.list()` to grab all the new messages
between `seen` and `messageId`.
```ts
let seen = "";
for await (const [messageId] of kv.watch([["last_message_id", roomId]])) {
const newMessages = await Array.fromAsync(kv.list({
start: ["messages", roomId, seen, ""],
end: ["messages", roomId, messageId, ""],
}));
await websocket.write(JSON.stringify(newMessages));
seen = messageId;
}
```
Learn more about [using Deno KV watch here](./operations#watch).
## Production usage
Deno KV is available for use in live applications on [Deno Deploy](./on_deploy).
In production, Deno KV is backed by
[FoundationDB](https://www.foundationdb.org/), the open source key-value store
created by Apple.
**No additional configuration is necessary** to run your Deno programs that use
KV on Deploy - a new Deploy database will be provisioned for you when required
by your code. Learn more about Deno KV on Deno Deploy [here](./on_deploy).
## Testing
By default, [`Deno.openKv()`](https://docs.deno.com/api/deno/~/Deno.openKv)
creates or opens a persistent store based on the path from which the script that
invoked it was run. This isn't usually desirable for tests, which need to
produce the same behavior when run many times in a row.
To test code that uses Deno KV, you can use the special argument `":memory:"` to
create an ephemeral Deno KV datastore.
```ts
import { assertEquals } from "jsr:@std/assert";

async function setDisplayName(
  kv: Deno.Kv,
  username: string,
  displayname: string,
) {
  await kv.set(["preferences", username, "displayname"], displayname);
}

async function getDisplayName(
  kv: Deno.Kv,
  username: string,
): Promise<string | null> {
  return (await kv.get(["preferences", username, "displayname"]))
    .value as string | null;
}

Deno.test("Preferences", async (t) => {
  const kv = await Deno.openKv(":memory:");
  await t.step("can set displayname", async () => {
    const before = await getDisplayName(kv, "example");
    assertEquals(before, null);
    await setDisplayName(kv, "example", "Exemplary User");
    const after = await getDisplayName(kv, "example");
    assertEquals(after, "Exemplary User");
  });
});
```
This works because Deno KV is backed by SQLite when run for local development.
Just like in-memory SQLite databases, multiple ephemeral Deno KV stores can
exist at once without interfering with one another. For more information about
special database addressing modes, see
[the SQLite docs on the topic](https://www.sqlite.org/inmemorydb.html).
## Next steps
At this point, you're just beginning to scratch the surface with Deno KV. Be
sure to check out our guide on the [Deno KV key space](./key_space), and a
collection of [tutorials and example applications](../tutorials/index.md) here.
---
# Key Expiration (TTL for keys)
URL: https://docs.deno.com/deploy/kv/manual/key_expiration
Since version 1.36.2, Deno KV supports key expiration, allowing developers to
control time to live (TTL) for keys in a KV database. This allows an expiration
timestamp to be associated with a key, after which the key will be automatically
deleted from the database:
```ts
const kv = await Deno.openKv();
// `expireIn` is the number of milliseconds after which the key will expire.
async function addSession(session: Session, expireIn: number) {
await kv.set(["sessions", session.id], session, { expireIn });
}
```
Key expiration is supported on both Deno CLI and Deno Deploy.
## Atomic expiration of multiple keys
If multiple keys are set in the same atomic operation and have the same
`expireIn` value, the expiration of those keys will be atomic. For example:
```ts
const kv = await Deno.openKv();
async function addUnverifiedUser(
user: User,
verificationToken: string,
expireIn: number,
) {
await kv.atomic()
.set(["users", user.id], user, { expireIn })
.set(["verificationTokens", verificationToken], user.id, { expireIn })
.commit();
}
```
## Caveats
The expire timestamp specifies the _earliest_ time after which the key can be
deleted from the database. An implementation is allowed to expire a key at any
time after the specified timestamp, but not before. If you need to strictly
enforce an expiration time (e.g. for security purposes), please also add it as a
field of your value and do a check after retrieving the value from the database.
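For example, here is a minimal sketch that stores the expiry inside the value and enforces it on read; the `expireAt` field is an assumption of this sketch, not part of the KV API:
```ts
const kv = await Deno.openKv();

async function addSession(id: string, expireIn: number) {
  // Store the deadline in the value as well, so reads can enforce it strictly.
  await kv.set(["sessions", id], { id, expireAt: Date.now() + expireIn }, {
    expireIn,
  });
}

async function getSession(id: string) {
  const res = await kv.get<{ id: string; expireAt: number }>(["sessions", id]);
  // Treat the value as gone even if KV has not physically deleted it yet.
  if (res.value === null || Date.now() > res.value.expireAt) return null;
  return res.value;
}
```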
---
# Key Space
URL: https://docs.deno.com/deploy/kv/manual/key_space
Deno KV is a key-value store. The key space is a flat namespace of
key+value+versionstamp pairs. Keys are sequences of key parts, which allow
modeling of hierarchical data. Values are arbitrary JavaScript objects.
Versionstamps represent when a value was inserted / modified.
## Keys
Keys in Deno KV are sequences of key parts, which can be `string`s, `number`s,
`boolean`s, `Uint8Array`s, or `bigint`s.
Using a sequence of parts, rather than a single string, eliminates the
possibility of delimiter injection attacks, because there is no visible
delimiter.
> A key injection attack occurs when an attacker manipulates the structure of a
> key-value store by injecting delimiters used in the key encoding scheme into a
> user controlled variable, leading to unintended behavior or unauthorized
> access. For example, consider a key-value store using a slash (/) as a
> delimiter, with keys like "users/alice/settings" and "users/bob/settings". An
> attacker could create a new user with the name "alice/settings/hacked" to form
> the key "users/alice/settings/hacked/settings", injecting the delimiter and
> manipulating the key structure. In Deno KV, the injection would result in the
> key `["users", "alice/settings/hacked", "settings"]`, which is not harmful.
Between key parts, invisible delimiters are used to separate the parts. These
delimiters are never visible, but ensure that one part can not be confused with
another part. For example, the key parts `["abc", "def"]`, `["ab", "cdef"]`,
`["abc", "", "def"]` are all different keys.
Keys are case sensitive and are ordered lexicographically by their parts. The
first part is the most significant, and the last part is the least significant.
The order of the parts is determined by both the type and the value of the part.
### Key Part Ordering
Key parts are ordered lexicographically by their type, and within a given type,
they are ordered by their value. The ordering of types is as follows:
1. `Uint8Array`
1. `string`
1. `number`
1. `bigint`
1. `boolean`
Within a given type, the ordering is:
- `Uint8Array`: byte ordering of the array
- `string`: byte ordering of the UTF-8 encoding of the string
- `number`: -Infinity < -1.0 < -0.5 < -0.0 < 0.0 < 0.5 < 1.0 < Infinity < NaN
- `bigint`: mathematical ordering, largest negative number first, largest
positive number last
- `boolean`: false < true
This means that the part `1.0` (a number) is ordered before the part `2.0` (also
a number), and also before the part `0n` (a bigint), even though `1.0` is
numerically greater than `0n`. This is because `1.0` is a number and `0n` is a
bigint, and type ordering has precedence over the ordering of values within a
type.
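To illustrate, here are some single-part keys listed in the order Deno KV sorts them under the rules above:
```ts
[new Uint8Array([1])]; // Uint8Arrays sort before every other type
["apple"]; // then strings, in UTF-8 byte order
[100]; // then numbers...
[0n]; // ...before bigints, even when the bigint is numerically smaller
[false]; // booleans sort last
[true];
```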
### Key Examples
```js
["users", 42, "profile"]; // User with ID 42's profile
["posts", "2023-04-23", "comments"]; // Comments for all posts on 2023-04-23
["products", "electronics", "smartphones", "apple"]; // Apple smartphones in the electronics category
["orders", 1001, "shipping", "tracking"]; // Tracking information for order ID 1001
["files", new Uint8Array([1, 2, 3]), "metadata"]; // Metadata for a file with Uint8Array identifier
["projects", "openai", "tasks", 5]; // Task with ID 5 in the OpenAI project
["events", "2023-03-31", "location", "san_francisco"]; // Events in San Francisco on 2023-03-31
["invoices", 2023, "Q1", "summary"]; // Summary of Q1 invoices for 2023
["teams", "engineering", "members", 1n]; // Member with ID 1n in the engineering team
```
### Universally Unique Lexicographically Sortable Identifiers (ULIDs)
Key part ordering allows keys consisting of timestamps and ID parts to be listed
chronologically. Typically, you can generate a key using the following:
[`Date.now()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/now)
and
[`crypto.randomUUID()`](https://developer.mozilla.org/en-US/docs/Web/API/Crypto/randomUUID):
```js
async function setUser(user) {
await kv.set(["users", Date.now(), crypto.randomUUID()], user);
}
```
Run multiple times sequentially, this produces the following keys:
```js
["users", 1691377037923, "8c72fa25-40ad-42ce-80b0-44f79bc7a09e"]; // First user
["users", 1691377037924, "8063f20c-8c2e-425e-a5ab-d61e7a717765"]; // Second user
["users", 1691377037925, "35310cea-58ba-4101-b09a-86232bf230b2"]; // Third user
```
However, having the timestamp and ID represented within a single key part may be
more straightforward in some cases. You can use a
[Universally Unique Lexicographically Sortable Identifier (ULID)](https://github.com/ulid/spec)
to do this. This type of identifier encodes a UTC timestamp, is
lexicographically sortable and is cryptographically random by default:
```js
import { ulid } from "jsr:@std/ulid";
const kv = await Deno.openKv();
async function setUser(user) {
await kv.set(["users", ulid()], user);
}
```
```js
["users", "01H76YTWK3YBV020S6MP69TBEQ"]; // First user
["users", "01H76YTWK4V82VFET9YTYDQ0NY"]; // Second user
["users", "01H76YTWK5DM1G9TFR0Y5SCZQV"]; // Third user
```
Furthermore, you can generate monotonically increasing ULIDs using the
`monotonicUlid` function:
```js
import { monotonicUlid } from "jsr:@std/ulid";
async function setUser(user) {
await kv.set(["users", monotonicUlid()], user);
}
```
```js
// Strict ordering for the same timestamp by incrementing the least-significant random bit by 1
["users", "01H76YTWK3YBV020S6MP69TBEQ"]; // First user
["users", "01H76YTWK3YBV020S6MP69TBER"]; // Second user
["users", "01H76YTWK3YBV020S6MP69TBES"]; // Third user
```
## Values
Values in Deno KV can be arbitrary JavaScript values that are compatible with
the [structured clone algorithm][structured clone algorithm]. This includes:
- `undefined`
- `null`
- `boolean`
- `number`
- `string`
- `bigint`
- `Uint8Array`
- `Array`
- `Object`
- `Map`
- `Set`
- `Date`
- `RegExp`
Objects and arrays can contain any of the above types, including other objects
and arrays. `Map`s and `Set`s can contain any of the above types, including
other `Map`s and `Set`s.
Circular references within values are supported.
Objects with a non-primitive prototype (such as class instances or Web API
objects) are not supported. Functions and symbols cannot be serialized either.
### `Deno.KvU64` type
In addition to structured-clone-compatible values, Deno KV also supports the
special `Deno.KvU64` value type. This object represents a 64-bit unsigned
integer, backed by a bigint. It can be used with the `sum`, `min`, and `max` KV
operations. It cannot be stored within an object or array; it must be stored as
a top-level value.
It can be created with the `Deno.KvU64` constructor:
```js
const u64 = new Deno.KvU64(42n);
```
### Value Examples
```js,ignore
undefined;
null;
true;
false;
42;
-42.5;
42n;
"hello";
new Uint8Array([1, 2, 3]);
[1, 2, 3];
{ a: 1, b: 2, c: 3 };
new Map([["a", 1], ["b", 2], ["c", 3]]);
new Set([1, 2, 3]);
new Date("2023-04-23");
/abc/;
// Circular references are supported
const a = {};
const b = { a };
a.b = b;
// Deno.KvU64 is supported
new Deno.KvU64(42n);
```
## Versionstamp
All data in the Deno KV key-space is versioned. Every time a value is inserted
or modified, a versionstamp is assigned to it. Versionstamps are monotonically
increasing, non-sequential, 12-byte values that represent the time that the
value was modified. Versionstamps do not represent real time, but rather the
order in which the values were modified.
Because versionstamps are monotonically increasing, they can be used to
determine whether a given value is newer or older than another value. This can
be done by comparing the versionstamps of the two values. If versionstamp A is
greater than versionstamp B, then value A was modified more recently than value
B.
```js
versionstampA > versionstampB;
"000002fa526aaccb0000" > "000002fa526aacc90000"; // true
```
All data modified by a single transaction are assigned the same versionstamp.
This means that if two `set` operations are performed in the same atomic
operation, then the versionstamp of the new values will be the same.
Versionstamps are used to implement optimistic concurrency control. Atomic
operations can contain checks that ensure that the versionstamp of the data they
are operating on matches a versionstamp passed to the operation. If the
versionstamp of the data is not the same as the versionstamp passed to the
operation, then the transaction will fail and the operation will not be applied.
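A minimal sketch of this pattern, using an illustrative `["balance"]` key:
```ts
const kv = await Deno.openKv();

const entry = await kv.get<number>(["balance"]);
const res = await kv.atomic()
  .check(entry) // fails if the key's versionstamp changed since the read
  .set(["balance"], (entry.value ?? 0) + 10)
  .commit();

if (!res.ok) {
  console.log("Conflict detected; retry the read-modify-write cycle.");
}
```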
[structured clone algorithm]: https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Structured_clone_algorithm
---
# Using KV in Node.js
URL: https://docs.deno.com/deploy/kv/manual/node
Connecting to a Deno KV database in Node.js is supported via our
[official client library on npm](https://www.npmjs.com/package/@deno/kv). You
can find usage instructions for this option below.
## Installation and usage
Use your preferred npm client to install the client library for Node.js using
one of the commands below.
```sh
npm install @deno/kv
```
```sh
pnpm add @deno/kv
```
```sh
yarn add @deno/kv
```
Once you've added the package to your Node project, you can import the `openKv`
function (supports both ESM `import` and CJS `require`-based usage):
```js
import { openKv } from "@deno/kv";
// Connect to a KV instance
const kv = await openKv("<KV Connect URL>");
// Write some data
await kv.set(["users", "alice"], { name: "Alice" });
// Read it back
const result = await kv.get(["users", "alice"]);
console.log(result.value); // { name: "Alice" }
```
By default, the access token used for authentication comes from the
`DENO_KV_ACCESS_TOKEN` environment variable. You can also pass it explicitly:
```js
import { openKv } from "@deno/kv";
const kv = await openKv("<KV Connect URL>", { accessToken: myToken });
```
Once your Deno KV client is initialized, the same API available in Deno may be
used in Node as well.
## KV Connect URLs
Connecting to a KV database outside of Deno requires a
[KV Connect](https://github.com/denoland/denokv/blob/main/proto/kv-connect.md)
URL. A KV Connect URL for a database hosted on Deno Deploy will be in this
format: `https://api.deno.com/databases/<database-id>/connect`.
The `database-id` for your project can be found in the
[Deno Deploy dashboard](https://dash.deno.com/projects), under the project's
"KV" tab.

## More information
More information about how to use the Deno KV module for Node can be found on
the project's [README page](https://www.npmjs.com/package/@deno/kv).
---
# KV on Deno Deploy
URL: https://docs.deno.com/deploy/kv/manual/on_deploy
Deno Deploy now offers a built-in serverless key-value database called Deno KV.
Additionally, Deno KV is available within Deno itself, utilizing SQLite as its
backend. This feature has been accessible since Deno v1.32 with the `--unstable`
flag. Learn more about [Deno KV](/deploy/kv/manual).
## Consistency
Deno KV, by default, is a strongly-consistent database. It provides the
strictest form of strong consistency called _external consistency_, which
implies:
- **Serializability**: This is the highest level of isolation for transactions.
It ensures that the concurrent execution of multiple transactions results in a
system state that would be the same as if the transactions were executed
sequentially, one after another. In other words, the end result of
serializable transactions is equivalent to some sequential order of these
transactions.
- **Linearizability**: This consistency model guarantees that operations, such
as read and write, appear to be instantaneous and occur in real-time. Once a
write operation completes, all subsequent read operations will immediately
return the updated value. Linearizability ensures a strong real-time ordering
of operations, making the system more predictable and easier to reason about.
However, you can choose to relax consistency constraints by setting the
`consistency: "eventual"` option on individual read operations. This option
allows the system to serve the read from global replicas and caches for minimal
latency.
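For example, an individual read can opt into eventual consistency like this:
```ts
const kv = await Deno.openKv();

// May be served from a nearby replica; the value can be slightly stale.
const entry = await kv.get(["config"], { consistency: "eventual" });
console.log(entry.value);
```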
Below are the latency figures observed in our top regions:
| Region | Latency (Eventual Consistency) | Latency (Strong Consistency) |
| -------------------------- | ------------------------------ | ---------------------------- |
| North Virginia (us-east4) | 7ms | 7ms |
| Frankfurt (europe-west3) | 7ms | 94ms |
| Netherlands (europe-west4) | 13ms | 95ms |
| California (us-west2) | 72ms | 72ms |
| Hong Kong (asia-east2) | 42ms | 194ms |
## Distributed queues
Serverless distributed queues are available on Deno Deploy. See
[Queues on Deno Deploy](/deploy/kv/manual/queue_overview#queues-on-deno-deploy)
for more details.
## Connect to managed databases from outside of Deno Deploy
You can connect to your Deno Deploy KV database from your Deno application
outside of Deno Deploy. To open a managed database, set the
`DENO_KV_ACCESS_TOKEN` environment variable to a Deno Deploy personal access
token and provide the URL of the database to `Deno.openKv`:
```ts
const kv = await Deno.openKv(
"https://api.deno.com/databases//connect",
);
```
Please check the
[docs](https://github.com/denoland/deno/tree/main/ext/kv#kv-connect) for the
specification of the protocol for connecting to a remote KV database.
## Data distribution
Deno KV databases are replicated across at least 6 data centers, spanning 3
regions (US, Europe, and Asia). Once a write operation is committed, its
mutations are persistently stored in a minimum of two data centers within the
primary region. Asynchronous replication typically transfers these mutations to
the other two regions in under 10 seconds.
The system is designed to tolerate most data center-level failures without
experiencing downtime or data loss. Recovery Point Objectives (RPO) and Recovery
Time Objectives (RTO) help quantify the system's resilience under various
failure modes. RPO represents the maximum acceptable amount of data loss
measured in time, whereas RTO signifies the maximum acceptable time required to
restore the system to normal operations after a failure.
- Loss of one data center in the primary region: RPO=0 (no data loss), RTO<5s
(system restoration in under 5 seconds)
- Loss of any number of data centers in a replica region: RPO=0, RTO<5s
- Loss of two or more data centers in the primary region: RPO<60s (under 60
seconds of data loss)
---
# Operations
URL: https://docs.deno.com/deploy/kv/manual/operations
The Deno KV API provides a set of operations that can be performed on the key
space.
There are two operations that read data from the store, and five operations that
write data to the store.
Read operations can either be performed in strong or eventual consistency mode.
Strong consistency mode guarantees that the read operation will return the most
recently written value. Eventual consistency mode may return a stale value, but
is faster.
Write operations are always performed in strong consistency mode.
## `get`
The `get` operation returns the value and versionstamp associated with a given
key. If a value does not exist, get returns a `null` value and versionstamp.
There are two APIs that can be used to perform a `get` operation. The
[`Deno.Kv.prototype.get(key, options?)`][get] API, which can be used to read a
single key, and the [`Deno.Kv.prototype.getMany(keys, options?)`][getMany] API,
which can be used to read multiple keys at once.
Get operations are performed as a "snapshot read" in all consistency modes. This
means that when retrieving multiple keys at once, the values returned will be
consistent with each other.
```ts
const res = await kv.get(["config"]);
console.log(res); // { key: ["config"], value: "value", versionstamp: "000002fa526aaccb0000" }
const eventualRes = await kv.get(["config"], { consistency: "eventual" });
console.log(eventualRes); // { key: ["config"], value: "value", versionstamp: "000002fa526aaccb0000" }
const [res1, res2, res3] = await kv.getMany<[string, string, string]>([
["users", "sam"],
["users", "taylor"],
["users", "alex"],
]);
console.log(res1); // { key: ["users", "sam"], value: "sam", versionstamp: "00e0a2a0f0178b270000" }
console.log(res2); // { key: ["users", "taylor"], value: "taylor", versionstamp: "0059e9035e5e7c5e0000" }
console.log(res3); // { key: ["users", "alex"], value: "alex", versionstamp: "00a44a3c3e53b9750000" }
```
## `list`
The `list` operation returns a list of keys that match a given selector. The
associated values and versionstamps for these keys are also returned. There are
2 different selectors that can be used to filter the keys matched.
The `prefix` selector matches all keys that start with the given prefix key
parts, but not inclusive of an exact match of the key. The prefix selector may
optionally be given a `start` OR `end` key to limit the range of keys returned.
The `start` key is inclusive, and the `end` key is exclusive.
The `range` selector matches all keys that are lexicographically between the
given `start` and `end` keys. The `start` key is inclusive, and the `end` key is
exclusive.
> Note: In the case of the prefix selector, the `prefix` key must consist only
> of full (not partial) key parts. For example, if the key `["foo", "bar"]`
> exists in the store, then the prefix selector `["foo"]` will match it, but the
> prefix selector `["f"]` will not.
The list operation may optionally be given a `limit` to limit the number of keys
returned.
List operations can be performed using the
[`Deno.Kv.prototype.list(selector, options?)`][list] method. This method
returns a `Deno.KvListIterator` that can be used to iterate over the keys
returned. This is an async iterator, and can be used with `for await` loops.
```ts
// Return all users
const iter = kv.list({ prefix: ["users"] });
const users = [];
for await (const res of iter) users.push(res);
console.log(users[0]); // { key: ["users", "alex"], value: "alex", versionstamp: "00a44a3c3e53b9750000" }
console.log(users[1]); // { key: ["users", "sam"], value: "sam", versionstamp: "00e0a2a0f0178b270000" }
console.log(users[2]); // { key: ["users", "taylor"], value: "taylor", versionstamp: "0059e9035e5e7c5e0000" }
// Return the first 2 users
const iter2 = kv.list({ prefix: ["users"] }, { limit: 2 });
const users2 = [];
for await (const res of iter2) users2.push(res);
console.log(users2[0]); // { key: ["users", "alex"], value: "alex", versionstamp: "00a44a3c3e53b9750000" }
console.log(users2[1]); // { key: ["users", "sam"], value: "sam", versionstamp: "00e0a2a0f0178b270000" }
// Return all users lexicographically after "taylor"
const iter3 = kv.list({ prefix: ["users"], start: ["users", "taylor"] });
const users3 = [];
for await (const res of iter3) users3.push(res);
console.log(users3[0]); // { key: ["users", "taylor"], value: "taylor", versionstamp: "0059e9035e5e7c5e0000" }
// Return all users lexicographically before "taylor"
const iter4 = kv.list({ prefix: ["users"], end: ["users", "taylor"] });
const users4 = [];
for await (const res of iter4) users4.push(res);
console.log(users4[0]); // { key: ["users", "alex"], value: "alex", versionstamp: "00a44a3c3e53b9750000" }
console.log(users4[1]); // { key: ["users", "sam"], value: "sam", versionstamp: "00e0a2a0f0178b270000" }
// Return all users starting with characters between "a" and "n"
const iter5 = kv.list({ start: ["users", "a"], end: ["users", "n"] });
const users5 = [];
for await (const res of iter5) users5.push(res);
console.log(users5[0]); // { key: ["users", "alex"], value: "alex", versionstamp: "00a44a3c3e53b9750000" }
```
The list operation reads data from the store in batches. The size of each batch
can be controlled using the `batchSize` option. The default batch size is 500
keys. Data within a batch is read in a single snapshot read, so the values are
consistent with each other. Consistency modes apply to each batch of data read.
Across batches, data is not consistent. The borders between batches are not
visible from the API, as the iterator returns individual keys.
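For example, a sketch using the `batchSize` option:
```ts
// Each batch of up to 100 entries is read in a single consistent snapshot.
const iter = kv.list({ prefix: ["users"] }, { batchSize: 100 });
for await (const entry of iter) {
  console.log(entry.key);
}
```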
The list operation can be performed in reverse order by setting the `reverse`
option to `true`. This will return the keys in lexicographically descending
order. The `start` and `end` keys are still inclusive and exclusive
respectively, and are still interpreted as lexicographically ascending.
```ts
// Return all users in reverse order, ending with "sam"
const iter = kv.list({ prefix: ["users"], start: ["users", "sam"] }, {
reverse: true,
});
const users = [];
for await (const res of iter) users.push(res);
console.log(users[0]); // { key: ["users", "taylor"], value: "taylor", versionstamp: "0059e9035e5e7c5e0000" }
console.log(users[1]); // { key: ["users", "sam"], value: "sam", versionstamp: "00e0a2a0f0178b270000" }
```
> Note: in the above example we set the `start` key to `["users", "sam"]`, even
> though the first key returned is `["users", "taylor"]`. This is because the
> `start` and `end` keys are always evaluated in lexicographically ascending
> order, even when the list operation is performed in reverse order (which
> returns the keys in lexicographically descending order).
## `set`
The `set` operation sets the value of a key in the store. If the key does not
exist, it is created. If the key already exists, its value is overwritten.
The `set` operation can be performed using the
[`Deno.Kv.prototype.set(key, value)`][set] method. This method returns a
`Promise` that resolves to a `Deno.KvCommitResult` object, which contains the
`versionstamp` of the commit.
Set operations are always performed in strong consistency mode.
```ts
const res = await kv.set(["users", "alex"], "alex");
console.log(res.versionstamp); // "00a44a3c3e53b9750000"
```
## `delete`
The `delete` operation deletes a key from the store. If the key does not exist,
the operation is a no-op.
The `delete` operation can be performed using the
[`Deno.Kv.prototype.delete(key)`][delete] method.
Delete operations are always performed in strong consistency mode.
```ts
await kv.delete(["users", "alex"]);
```
## `sum`
The `sum` operation atomically adds a value to a key in the store. If the key
does not exist, it is created with the value of the sum. If the key already
exists, its value is added to the sum.
The `sum` operation can only be performed as part of an atomic operation. The
[`Deno.AtomicOperation.prototype.mutate({ type: "sum", value })`][mutate] method
can be used to add a sum mutation to an atomic operation.
The sum operation can only be performed on values of type `Deno.KvU64`. Both the
operand and the value in the store must be of type `Deno.KvU64`.
If the new value of the key is greater than `2^64 - 1` or less than `0`, the sum
operation wraps around. For example, if the value in the store is `2^64 - 1` and
the operand is `1`, the new value will be `0`.
Sum operations are always performed in strong consistency mode.
```ts
await kv.atomic()
.mutate({
type: "sum",
key: ["accounts", "alex"],
value: new Deno.KvU64(100n),
})
.commit();
```
## `min`
The `min` operation atomically sets a key to the minimum of its current value
and a given value. If the key does not exist, it is created with the given
value. If the key already exists, its value is set to the minimum of its current
value and the given value.
The `min` operation can only be performed as part of an atomic operation. The
[`Deno.AtomicOperation.prototype.mutate({ type: "min", value })`][mutate] method
can be used to add a min mutation to an atomic operation.
The min operation can only be performed on values of type `Deno.KvU64`. Both the
operand and the value in the store must be of type `Deno.KvU64`.
Min operations are always performed in strong consistency mode.
```ts
await kv.atomic()
.mutate({
type: "min",
key: ["accounts", "alex"],
value: new Deno.KvU64(100n),
})
.commit();
```
## `max`
The `max` operation atomically sets a key to the maximum of its current value
and a given value. If the key does not exist, it is created with the given
value. If the key already exists, its value is set to the maximum of its current
value and the given value.
The `max` operation can only be performed as part of an atomic operation. The
[`Deno.AtomicOperation.prototype.mutate({ type: "max", value })`][mutate] method
can be used to add a max mutation to an atomic operation.
The max operation can only be performed on values of type `Deno.KvU64`. Both the
operand and the value in the store must be of type `Deno.KvU64`.
Max operations are always performed in strong consistency mode.
```ts
await kv.atomic()
.mutate({
type: "max",
key: ["accounts", "alex"],
value: new Deno.KvU64(100n),
})
.commit();
```
## `watch`
The `watch` operation accepts an array of keys, and returns a
[`ReadableStream`](https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream),
which emits a new value whenever any of the watched keys change their
`versionstamp`. The emitted value is an array of
[Deno.KvEntryMaybe](https://docs.deno.com/api/deno/~/Deno.KvEntryMaybe) objects.
Note that the returned stream does not emit every intermediate state of the
watched keys; it keeps you up to date with their latest state. This means that
if a key is modified multiple times in quick succession, you may not receive a
notification for every change, but you will receive the latest state of the key.
```ts
const db = await Deno.openKv();
const stream = db.watch([["foo"], ["bar"]]);
for await (const entries of stream) {
  entries[0].key; // ["foo"]
  entries[0].value; // "bar"
  entries[0].versionstamp; // "00000000000000010000"
  entries[1].key; // ["bar"]
  entries[1].value; // null
  entries[1].versionstamp; // null
}
```
[get]: https://docs.deno.com/api/deno/~/Deno.Kv.prototype.get
[getMany]: https://docs.deno.com/api/deno/~/Deno.Kv.prototype.getMany
[list]: https://docs.deno.com/api/deno/~/Deno.Kv.prototype.list
[set]: https://docs.deno.com/api/deno/~/Deno.Kv.prototype.set
[delete]: https://docs.deno.com/api/deno/~/Deno.Kv.prototype.delete
[mutate]: https://docs.deno.com/api/deno/~/Deno.AtomicOperation.prototype.mutate
---
# Using Queues
URL: https://docs.deno.com/deploy/kv/manual/queue_overview
The Deno runtime includes a queueing API that supports offloading larger
workloads for async processing, with guaranteed at-least-once delivery of queued
messages. Queues can be used to offload tasks in a web application, or to
schedule units of work for a time in the future.
The primary APIs you'll use with queues are in the `Deno.Kv` namespace as
[`enqueue`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.enqueue) and
[`listenQueue`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.listenQueue).
## Enqueue a message
To enqueue a message for processing, use the `enqueue` method on an instance of
[`Deno.Kv`](https://docs.deno.com/api/deno/~/Deno.Kv). In the example below, we
show what it might look like to enqueue a notification for delivery.
```ts title="queue_example.ts"
// Describe the shape of your message object (optional)
interface Notification {
  forUser: string;
  body: string;
}
// Get a reference to a KV instance
const kv = await Deno.openKv();
// Create a notification object
const message: Notification = {
  forUser: "alovelace",
  body: "You've got mail!",
};
// Enqueue the message for immediate delivery
await kv.enqueue(message);
```
You can enqueue a message for later delivery by specifying a `delay` option in
milliseconds.
```ts
// Enqueue the message for delivery in 3 days
const delay = 1000 * 60 * 60 * 24 * 3;
await kv.enqueue(message, { delay });
```
You can also specify a key in Deno KV where your message value will be stored if
your message isn't delivered for any reason.
```ts
// Configure a key where a failed message would be sent
const backupKey = ["failed_notifications", "alovelace", Date.now()];
await kv.enqueue(message, { keysIfUndelivered: [backupKey] });
// ... disaster strikes ...
// Get the unsent message
const r = await kv.get<Notification>(backupKey);
// This is the message that didn't get sent:
console.log("Found failed notification for:", r.value?.forUser);
```
## Listening for messages
You can configure a JavaScript function that will process items added to your
queue with the `listenQueue` method on an instance of
[`Deno.Kv`](https://docs.deno.com/api/deno/~/Deno.Kv).
```ts title="listen_example.ts"
// Define the shape of the object we expect as a message in the queue
interface Notification {
  forUser: string;
  body: string;
}
// Create a type guard to check the type of the incoming message
function isNotification(o: unknown): o is Notification {
  return (
    ((o as Notification)?.forUser !== undefined &&
      typeof (o as Notification).forUser === "string") &&
    ((o as Notification)?.body !== undefined &&
      typeof (o as Notification).body === "string")
  );
}
// Get a reference to a KV database
const kv = await Deno.openKv();
// Register a handler function to listen for values - this example shows
// how you might send a notification
kv.listenQueue((msg: unknown) => {
  // Use type guard - then TypeScript compiler knows msg is a Notification
  if (isNotification(msg)) {
    console.log("Sending notification to user:", msg.forUser);
    // ... do something to actually send the notification!
  } else {
    // If the message is of an unknown type, it might be an error
    console.error("Unknown message received:", msg);
  }
});
```
## Queue API with KV atomic transactions
You can combine the queue API with [KV atomic transactions](./transactions) to
atomically enqueue messages and modify keys in the same transaction.
```ts title="kv_transaction_example.ts"
const kv = await Deno.openKv();
kv.listenQueue(async (msg: any) => {
  const nonce = await kv.get(["nonces", msg.nonce]);
  if (nonce.value === null) {
    // This message was already processed
    return;
  }
  const change = msg.change;
  const bob = await kv.get<number>(["balance", "bob"]);
  const liz = await kv.get<number>(["balance", "liz"]);
  const success = await kv.atomic()
    // Ensure this message was not yet processed
    .check({ key: nonce.key, versionstamp: nonce.versionstamp })
    .delete(nonce.key)
    .sum(["processed_count"], 1n)
    .check(bob, liz) // balances did not change
    .set(["balance", "bob"], bob.value! - change)
    .set(["balance", "liz"], liz.value! + change)
    .commit();
});
// Modify keys and enqueue messages in the same KV transaction!
const nonce = crypto.randomUUID();
await kv
  .atomic()
  .check({ key: ["nonces", nonce], versionstamp: null })
  .enqueue({ nonce: nonce, change: 10 })
  .set(["nonces", nonce], true)
  .sum(["enqueued_count"], 1n)
  .commit();
```
## Queue behavior
### Message delivery guarantees
The runtime guarantees at-least-once delivery. This means that for the majority
of enqueued messages, the
[`listenQueue`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.listenQueue)
handler will be invoked once for each message. In some failure scenarios, the
handler may be invoked multiple times for the same message to ensure delivery.
It's important to design your applications such that duplicate messages are
handled correctly.
You may use queues in combination with
[KV atomic transactions](https://docs.deno.com/deploy/kv/manual/transactions)
to ensure that your queue handler's KV updates are performed exactly once per
message. See
[Queue API with KV atomic transactions](#queue-api-with-kv-atomic-transactions).
### Automatic retries
The
[`listenQueue`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.listenQueue)
handler is invoked to process your queued messages when they're ready for
delivery. If your handler throws an exception, the runtime will automatically
retry calling the handler until it succeeds or until the maximum number of
retry attempts is reached. The message is considered successfully processed
once the
[`listenQueue`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.listenQueue)
handler invocation completes without error. The message will be dropped if the
handler consistently fails on retries.
### Message delivery order
The runtime makes a best effort to deliver messages in the order they were
enqueued. However, there is no strict ordering guarantee. Occasionally, messages
may be delivered out of order to ensure maximum throughput.
## Queues on Deno Deploy
Deno Deploy offers a global, serverless, distributed implementation of the
queueing API, designed for high availability and throughput. You can use it to
build applications that scale to handle large workloads.
### Just-in-time isolate spin-up
When using queues with Deno Deploy, isolates are automatically spun up on demand
to invoke your
[`listenQueue`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.listenQueue)
handler when a message becomes available for processing. Defining a
[`listenQueue`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.listenQueue)
handler is the only requirement to enable queue processing in your Deno Deploy
application; no additional configuration is needed.
### Queue size limit
The maximum number of undelivered queue messages is limited to 100,000. The
[`enqueue`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.enqueue) method
will fail with an error if the queue is full.
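Because `enqueue` returns a promise that rejects when the message cannot be
accepted, a full queue can be handled like any other error. A minimal sketch:
```ts
try {
  await kv.enqueue(message);
} catch (err) {
  // The queue may be full (or another error occurred); decide how to degrade
  // gracefully, e.g. persist the message somewhere else and retry later.
  console.error("Failed to enqueue message:", err);
}
```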
### Pricing details and limits
- [`enqueue`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.enqueue) is
treated just like other [`Deno.Kv`](https://docs.deno.com/api/deno/~/Deno.Kv)
write operations. Enqueued messages consume KV storage and write units.
- Messages delivered through
[`listenQueue`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.listenQueue)
consume requests and KV write units.
- See [Pricing details](https://deno.com/deploy/pricing) for more information.
## Use cases
Queues can be useful in many different scenarios, but there are a few use cases
you might see a lot when building web applications.
### Offloading async processes
Sometimes a task initiated by a client (like sending a notification or API
request) may take long enough that you don't want to make the client wait for
it to complete before returning a response. Other times, clients don't actually
need a response at all, such as when a client is sending your application a
[webhook request](https://en.wikipedia.org/wiki/Webhook), so there's no need to
wait for the underlying task to be completed before returning a response.
In these cases, you can offload work to a queue to keep your web application
responsive and send immediate feedback to clients. To see an example of this use
case in action, check out our
[webhook processing example](../tutorials/webhook_processor.md).
### Scheduling work for the future
Another helpful application of queues (and queue APIs like this one) is to
schedule work to happen at an appropriate time in the future. Maybe you'd like
to send a satisfaction survey to a new customer a day after they have placed an
order. You can schedule a queue message to be delivered 24 hours into the
future, and set up a listener to send out the notification at that time.
To see an example of scheduling a notification to go out in the future, check
out our [notification example](../tutorials/schedule_notification.md).
---
# Secondary Indexes
URL: https://docs.deno.com/deploy/kv/manual/secondary_indexes
Key-value stores like Deno KV organize data as collections of key-value pairs,
where each unique key is associated with a single value. This structure enables
easy retrieval of values based on their keys but does not allow for querying
based on the values themselves. To overcome this constraint, you can create
secondary indexes, which store the same value under additional keys that include
(part of) that value.
Maintaining consistency between primary and secondary keys is crucial when using
secondary indexes. If a value is updated at the primary key without updating the
secondary key, the data returned from a query targeting the secondary key will
be incorrect. To ensure that primary and secondary keys always represent the
same data, use atomic operations when inserting, updating, or deleting data.
This approach ensures that the group of mutation actions are executed as a
single unit, and either all succeed or all fail, preventing inconsistencies.
## Unique indexes (one-to-one)
Unique indexes have each key in the index associated with exactly one primary
key. For example, when storing user data and looking up users by both their
unique IDs and email addresses, store user data under two separate keys: one for
the primary key (user ID) and another for the secondary index (email). This
setup allows querying users based on either their ID or their email. The
secondary index can also enforce uniqueness constraints on values in the store.
In the case of user data, use the index to ensure that each email address is
associated with only one user; in other words, that emails are unique.
To implement a unique secondary index for this example, follow these steps:
1. Create a `User` interface representing the data:
```ts
interface User {
  id: string;
  name: string;
  email: string;
}
```
2. Define an `insertUser` function that stores user data at both the primary and
secondary keys:
```ts
async function insertUser(user: User) {
  const primaryKey = ["users", user.id];
  const byEmailKey = ["users_by_email", user.email];
  const res = await kv.atomic()
    .check({ key: primaryKey, versionstamp: null })
    .check({ key: byEmailKey, versionstamp: null })
    .set(primaryKey, user)
    .set(byEmailKey, user)
    .commit();
  if (!res.ok) {
    throw new TypeError("User with ID or email already exists");
  }
}
```
> This function performs the insert using an atomic operation that checks
> that no user with the same ID or email already exists. If either of these
> constraints is violated, the insert fails and no data is modified.
3. Define a `getUser` function to retrieve a user by their ID:
```ts
async function getUser(id: string): Promise<User | null> {
  const res = await kv.get<User>(["users", id]);
  return res.value;
}
```
4. Define a `getUserByEmail` function to retrieve a user by their email address:
```ts
async function getUserByEmail(email: string): Promise<User | null> {
  const res = await kv.get<User>(["users_by_email", email]);
  return res.value;
}
```
This function queries the store using the secondary key
(`["users_by_email", email]`).
5. Define a `deleteUser` function to delete users by their ID:
```ts
async function deleteUser(id: string) {
  let res = { ok: false };
  while (!res.ok) {
    const getRes = await kv.get<User>(["users", id]);
    if (getRes.value === null) return;
    res = await kv.atomic()
      .check(getRes)
      .delete(["users", id])
      .delete(["users_by_email", getRes.value.email])
      .commit();
  }
}
```
> This function first retrieves the user by their ID to get the user's email
> address, which is needed to construct the key for the user's secondary
> index. It then performs an atomic operation that checks that the user in the
> database has not changed, and then deletes both the primary and secondary
> key pointing to the user value. If this fails (the user has been modified
> between query and delete), the atomic operation aborts. The entire procedure
> is retried until the delete succeeds. The check is required to prevent race
> conditions where the value may have been modified between the retrieve and
> the delete. This race can occur if an update changes the user's email,
> because the secondary index moves in this case. The delete of the secondary
> index then fails, because the delete is targeting the old secondary index
> key.
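The race described in the note above is triggered by updates that change a
user's email. For completeness, here is a minimal sketch of such an update,
reusing the `kv` handle and key layout from the examples above (the
`updateUser` name is illustrative):
```ts
async function updateUser(user: User) {
  let res = { ok: false };
  while (!res.ok) {
    const getRes = await kv.get<User>(["users", user.id]);
    if (getRes.value === null) throw new Error("User not found");
    const op = kv.atomic()
      .check(getRes) // abort if the user was modified concurrently
      .set(["users", user.id], user)
      .set(["users_by_email", user.email], user);
    // If the email changed, the secondary index moves: delete the old entry
    if (getRes.value.email !== user.email) {
      op.delete(["users_by_email", getRes.value.email]);
    }
    res = await op.commit();
  }
}
```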
## Non-unique indexes (one-to-many)
Non-unique indexes are secondary indexes where a single key can be associated
with multiple primary keys, allowing you to query for multiple items based on a
shared attribute. For example, when querying users by their favorite color,
implement this using a non-unique secondary index. The favorite color is a
non-unique attribute since multiple users can have the same favorite color.
To implement a non-unique secondary index for this example, follow these steps:
1. Define the `User` interface:
```ts
interface User {
  id: string;
  name: string;
  favoriteColor: string;
}
```
2. Define the `insertUser` function:
```ts
async function insertUser(user: User) {
  const primaryKey = ["users", user.id];
  const byColorKey = [
    "users_by_favorite_color",
    user.favoriteColor,
    user.id,
  ];
  await kv.atomic()
    .check({ key: primaryKey, versionstamp: null })
    .set(primaryKey, user)
    .set(byColorKey, user)
    .commit();
}
```
3. Define a function to retrieve users by their favorite color:
```ts
async function getUsersByFavoriteColor(color: string): Promise<User[]> {
  const iter = kv.list<User>({ prefix: ["users_by_favorite_color", color] });
  const users = [];
  for await (const { value } of iter) {
    users.push(value);
  }
  return users;
}
```
This example demonstrates the use of a non-unique secondary index,
`users_by_favorite_color`, which allows querying users based on their favorite
color. The primary key remains the user `id`.
The primary difference between the implementation of unique and non-unique
indexes lies in the structure and organization of the secondary keys. In unique
indexes, each secondary key is associated with exactly one primary key, ensuring
that the indexed attribute is unique across all records. In the case of
non-unique indexes, a single secondary key can be associated with multiple
primary keys, as the indexed attribute may be shared among multiple records. To
achieve this, non-unique secondary keys are typically structured with an
additional unique identifier (e.g., primary key) as part of the key, allowing
multiple records with the same attribute to coexist without conflicts.
---
# Transactions
URL: https://docs.deno.com/deploy/kv/manual/transactions
The Deno KV store utilizes _optimistic concurrency control transactions_ rather
than _interactive transactions_ like many SQL systems such as PostgreSQL or
MySQL.
This approach employs versionstamps, which represent the current version of a
value for a given key, to manage concurrent access to shared resources without
using locks. When a read operation occurs, the system returns a versionstamp for
the associated key in addition to the value.
To execute a transaction, one performs an atomic operation that can consist of
multiple mutation actions (like set or delete). Along with these actions,
key+versionstamp pairs are provided as a condition for the transaction's
success. The optimistic concurrency control transaction will only commit if the
specified versionstamps match the current version for the values in the database
for the corresponding keys. This transaction model ensures data consistency and
integrity while allowing concurrent interactions within the Deno KV store.
Because OCC transactions are optimistic, they can fail on commit because the
version constraints specified in the atomic operation were violated. This occurs
when an agent updates a key used within the transaction between read and commit.
When this happens, the agent performing the transaction must retry the
transaction.
To illustrate how to use OCC transactions with Deno KV, this example shows how
to implement a `transferFunds(from: string, to: string, amount: number)`
function for an account ledger. The account ledger stores the balance for each
account in the key-value store. The keys are prefixed by `"account"`, followed
by the account identifier: `["account", "alice"]`. The value stored for each key
is a number that represents the account balance.
Here's a step-by-step example of implementing this `transferFunds` function:
```ts
async function transferFunds(sender: string, receiver: string, amount: number) {
  if (amount <= 0) throw new Error("Amount must be positive");
  // Construct the KV keys for the sender and receiver accounts.
  const senderKey = ["account", sender];
  const receiverKey = ["account", receiver];
  // Retry the transaction until it succeeds.
  let res = { ok: false };
  while (!res.ok) {
    // Read the current balance of both accounts.
    const [senderRes, receiverRes] = await kv.getMany<[number, number]>([
      senderKey,
      receiverKey,
    ]);
    if (senderRes.value === null) {
      throw new Error(`Account ${sender} not found`);
    }
    if (receiverRes.value === null) {
      throw new Error(`Account ${receiver} not found`);
    }
    const senderBalance = senderRes.value;
    const receiverBalance = receiverRes.value;
    // Ensure the sender has a sufficient balance to complete the transfer.
    if (senderBalance < amount) {
      throw new Error(
        `Insufficient funds to transfer ${amount} from ${sender}`,
      );
    }
    // Perform the transfer.
    const newSenderBalance = senderBalance - amount;
    const newReceiverBalance = receiverBalance + amount;
    // Attempt to commit the transaction. `commit()` returns an object with
    // `ok: false` if the transaction fails to commit due to a check failure
    // (i.e. the versionstamp for a key has changed).
    res = await kv.atomic()
      .check(senderRes) // Ensure the sender's balance hasn't changed.
      .check(receiverRes) // Ensure the receiver's balance hasn't changed.
      .set(senderKey, newSenderBalance) // Update the sender's balance.
      .set(receiverKey, newReceiverBalance) // Update the receiver's balance.
      .commit();
  }
}
```
In this example, the `transferFunds` function reads the balances and
versionstamps of both accounts, calculates the new balances after the transfer,
and checks whether the sender has sufficient funds. It then performs an atomic
operation, setting the new balances with the versionstamp constraints. If the
transaction is successful, the loop exits. If the version constraints are
violated, the transaction fails, and the loop retries the transaction until it
succeeds.
## Limits
In addition to a max key size of 2 KiB and max value size of 64 KiB, there are
certain limits with the Deno KV transaction API:
- **Max keys per `kv.getMany()`**: 10
- **Max batch size per `kv.list()`**: 1000
- **Max checks in an atomic operation**: 100
- **Max mutations in an atomic operation**: 1000
- **Max total size of an atomic operation**: 800 KiB. This includes all keys and
values in checks and mutations, and encoding overhead counts toward this limit
as well.
- **Max total size of keys**: 90 KiB. This includes all keys in checks and
mutations, and encoding overhead counts toward this limit as well.
- **Max watched keys per `kv.watch()`**: 10
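As a consequence of the 10-key limit on `kv.getMany()`, reading a larger set of
keys requires batching the calls yourself. A minimal sketch (the helper name is
illustrative; note that each batch is read separately, so the batches do not
form a single consistent snapshot):
```ts
async function getManyBatched(kv: Deno.Kv, keys: Deno.KvKey[]) {
  const entries: Deno.KvEntryMaybe<unknown>[] = [];
  for (let i = 0; i < keys.length; i += 10) {
    // Each call reads at most 10 keys, respecting the limit above
    entries.push(...(await kv.getMany(keys.slice(i, i + 10))));
  }
  return entries;
}
```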
---
# Deno KV Tutorials & Examples
URL: https://docs.deno.com/deploy/kv/tutorials/
Check out these examples showing real-world usage of Deno KV.
## Use queues to process incoming webhooks
Follow [this tutorial](./webhook_processor.md) to learn how to use queues to
offload tasks to a background process, so your web app can remain responsive.
This example shows how to enqueue tasks that handle incoming webhook requests
from [GitHub](https://www.github.com).
## Use queues to schedule a future notification
Follow [this tutorial](./schedule_notification.md) to learn how to schedule code
to execute at some time in the future using queues. This example shows how to
schedule a notification with [Courier](https://www.courier.com/).
## CRUD in Deno KV - TODO List
- Zod schema validation
- Built using Fresh
- Real-time collaboration using BroadcastChannel
- [Source code](https://github.com/denoland/showcase_todo)
- [Live preview](https://showcase-todo.deno.dev/)
## Deno SaaSKit
- Modern SaaS template built on Fresh.
- [Product Hunt](https://www.producthunt.com/)-like template entirely built on
KV.
- Uses Deno KV OAuth for GitHub OAuth 2.0 authentication
- Use to launch your next app project faster
- [Source code](https://github.com/denoland/saaskit)
- [Live preview](https://hunt.deno.land/)
## Multi-player Tic-Tac-Toe
- GitHub authentication
- Saved user state
- Real-time sync using BroadcastChannel
- [Source code](https://github.com/denoland/tic-tac-toe)
- [Live preview](https://tic-tac-toe-game.deno.dev/)
## Multi-user pixel art drawing
- Persistent canvas state
- Multi-user collaboration
- Real-time sync using BroadcastChannel
- [Source code](https://github.com/denoland/pixelpage)
- [Live preview](https://pixelpage.deno.dev/)
## GitHub authentication and KV
- Stores drawings in KV
- GitHub authentication
- [Source code](https://github.com/hashrock/kv-sketchbook)
- [Live preview](https://hashrock-kv-sketchbook.deno.dev/)
## Deno KV OAuth 2.0
- High-level OAuth 2.0 powered by Deno KV
- [Source code](https://github.com/denoland/deno_kv_oauth)
- [Live preview](https://kv-oauth.deno.dev/)
---
# Schedule a notification for a future date
URL: https://docs.deno.com/deploy/kv/tutorials/schedule_notification
A common use case for [queues](../manual/queue_overview.md) is scheduling work
to be completed at some point in the future. To help demonstrate how this works,
we've provided a sample application (described below) that schedules
notification messages sent through the [Courier API](https://www.courier.com/).
The application runs on [Deno Deploy](https://deno.com/deploy), using the
built-in KV and queue API implementations available there with zero
configuration.
## Download and configure the sample
⬇️
[**Download or clone the complete sample app here**](https://github.com/kwhinnery/deno_courier_example).
You can run and deploy this sample application yourself using the instructions
in the GitHub repo's
[`README` file](https://github.com/kwhinnery/deno_courier_example).
To run the example app above, you'll also need to
[sign up for Courier](https://app.courier.com/signup). Of course, the techniques
you'll see in the application would apply just as easily to any notification
service, from [Amazon SNS](https://aws.amazon.com/sns/) to
[Twilio](https://www.twilio.com), but Courier provides an easy-to-use
notification API that you can use with a personal Gmail account for testing (in
addition to all the other neat things it can do).
## Key functionality
After setting up and running the project, we'd like to direct your attention to
a few key parts of the code that implement the scheduling mechanics.
### Connecting to KV and adding a listener on app start
Most of the example app's functionality lives in
[server.tsx](https://github.com/kwhinnery/deno_courier_example/blob/main/server.tsx)
in the top-level directory. When the Deno app process starts, it creates a
connection to a Deno KV instance and attaches an event handler which will
process messages as they are received from the queue.
```ts title="server.tsx"
// Create a Deno KV database reference
const kv = await Deno.openKv();
// Create a queue listener that will process enqueued messages
kv.listenQueue(async (message) => {
  /* ... implementation of listener here ... */
});
```
### Creating and scheduling a notification
After a new order is submitted through the form in this demo application, the
`enqueue` function is called with a delay of five seconds before a notification
email is sent out.
```ts title="server.tsx"
app.post("/order", async (c) => {
const { email, order } = await c.req.parseBody();
const n: Notification = {
email: email as string,
body: `Order received for: "${order as string}"`,
};
// Select a time in the future - for now, just wait 5 seconds
const delay = 1000 * 5;
// Enqueue the message for processing!
kv.enqueue(n, { delay });
// Redirect back home with a success message!
setCookie(c, "flash_message", "Order created!");
return c.redirect("/");
});
```
### Defining the notification data type in TypeScript
Often, it is desirable to work with strongly typed objects when pushing data
into or out of the queue. While queue messages are an
[`unknown`](https://www.typescriptlang.org/docs/handbook/2/functions.html#unknown)
TypeScript type initially, we can use
[type guards](https://www.typescriptlang.org/docs/handbook/2/narrowing.html) to
tell the compiler the shape of the data we expect.
Here's the source code for the
[notification module](https://github.com/kwhinnery/deno_courier_example/blob/main/notification.ts),
which we use to describe the properties of a notification in our system.
```ts title="notification.ts"
// Shape of a notification object
export default interface Notification {
  email: string;
  body: string;
}
// Type guard for a notification object
export function isNotification(o: unknown): o is Notification {
  return (
    ((o as Notification)?.email !== undefined &&
      typeof (o as Notification).email === "string") &&
    ((o as Notification)?.body !== undefined &&
      typeof (o as Notification).body === "string")
  );
}
```
In `server.tsx`, we use the exported type guard to ensure we are responding to
the right message types.
```ts title="server.tsx"
kv.listenQueue(async (message) => {
  // Use type guard to short circuit early if the message is of the wrong type
  if (!isNotification(message)) return;
  // Grab the relevant data from the message, which TypeScript now knows
  // is a Notification interface
  const { email, body } = message;
  // Create an email notification with Courier
  // ...
});
```
### Sending a Courier API request
To send an email as scheduled, we use the Courier REST API. More information
about the Courier REST API can be found in
[their reference docs](https://www.courier.com/docs/reference/send/message/).
```ts title="server.tsx"
const response = await fetch("https://api.courier.com/send", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${COURIER_API_TOKEN}`,
  },
  body: JSON.stringify({
    message: {
      to: { email },
      content: {
        title: "New order placed by Deno!",
        body: "notification body goes here",
      },
    },
  }),
});
```
---
# Offload webhook processing to a queue
URL: https://docs.deno.com/deploy/kv/tutorials/webhook_processor
In a web application, it is often desirable to offload processing of async tasks
for which a client doesn't need an immediate response to a queue. Doing so can
keep your web app fast and responsive, instead of taking up valuable resources
waiting for long-running processes to complete.
One instance where you might want to deploy this technique is when
[handling webhooks](https://en.wikipedia.org/wiki/Webhook). Immediately upon
receiving the webhook request from a non-human client that doesn't need a
response, you can offload that work to a queue where it can be handled more
efficiently.
In this tutorial, we'll show you how to execute this technique when
[handling webhook requests for a GitHub repo](https://docs.github.com/en/webhooks/about-webhooks-for-repositories).
## Try in a playground
✏️
[**Check out this playground, which implements a GitHub repo webhook handler**](https://dash.deno.com/playground/github-webhook-example).
Using Deno Deploy [playgrounds](/deploy/manual/playgrounds), you can instantly
deploy your own GitHub webhook handler that uses both queues and Deno KV. We'll
walk through what this code does in a moment.
## Configuring GitHub webhooks for a repository
To try out the webhook you just launched in a playground, set up a new webhook
configuration for a GitHub repository you control. You can find webhook
configuration under "Settings" for your repository.

## Code walkthrough
Our webhook handler function is relatively simple - without comments, it's only
23 lines of code total. It connects to a Deno KV database, sets up a queue
listener to process incoming messages, and sets up a simple server with
[`Deno.serve`](https://docs.deno.com/api/deno/~/Deno.serve) which responds to
incoming webhook requests.
Read along with the comments below to see what's happening at each step.
```ts title="server.ts"
// Get a handle for a Deno KV database instance. KV is built in to the Deno
// runtime, and is available with zero config both locally and on Deno Deploy
const kv = await Deno.openKv();
// Set up a listener that will handle work that is offloaded from our server.
// In this case, it's just going to add incoming webhook payloads to a KV
// database, with a timestamp.
kv.listenQueue(async (message) => {
  await kv.set(["github", Date.now()], message);
});
// This is a simple HTTP server that will handle incoming POST requests from
// GitHub webhooks.
Deno.serve(async (req: Request) => {
  if (req.method === "POST") {
    // GitHub sends webhook requests as POST requests to your server. You can
    // configure GitHub to send JSON in the POST body, which you can then parse
    // from the request object.
    const payload = await req.json();
    await kv.enqueue(payload);
    return new Response("", { status: 200 });
  } else {
    // If the server is handling a GET request, this will just list out all the
    // webhook events that have been recorded in our KV database.
    const iter = kv.list({ prefix: ["github"] });
    const github = [];
    for await (const res of iter) {
      github.push({
        timestamp: res.key[1],
        payload: res.value,
      });
    }
    return new Response(JSON.stringify(github, null, 2));
  }
});
```
---
# Acceptable use policy
URL: https://docs.deno.com/deploy/manual/acceptable-use-policy
The Deno Deploy service includes resources (CPU time, request counts) that are
subject to this Acceptable Use policy. This document gives a rough idea of what
we consider "Acceptable Use", and what we do not.
### Examples of Acceptable Use
- ✅ Server-side rendered websites
- ✅ Jamstack sites and apps
- ✅ Single page applications
- ✅ APIs that query a DB or external API
- ✅ A personal blog
- ✅ A company website
- ✅ An e-commerce site
- ✅ Reverse proxy
### Not Acceptable Use
- ❌ Crypto mining
- ❌ Highly CPU-intensive load (e.g. machine learning)
- ❌ Media hosting for external sites
- ❌ Scrapers
- ❌ Forward proxy
- ❌ VPN
## Guidelines
We expect most projects to fall well within the usage limits. We will notify you
if your project's usage significantly deviates from the norm. We will reach out
to you where possible before taking any action to address unreasonable burdens
on our infrastructure.
---
# CI and GitHub Actions
URL: https://docs.deno.com/deploy/manual/ci_github
Deno Deploy's Git integration enables deployment of code changes that are pushed
to a GitHub repository. Commits on the production branch will be deployed as a
production deployment. Commits on all other branches will be deployed as a
preview deployment.
There are two modes of operation for the Git integration:
- **Automatic**: Deno Deploy will automatically pull code and assets from your
repository source every time you push, and deploy it. This mode is very fast,
but does not allow for a build step. _This is the recommended mode for most
users._
- **GitHub Actions**: In this mode, you push your code and assets to Deno Deploy
from a GitHub Actions workflow. This allows you to perform a build step before
deploying.
Deno Deploy will select an appropriate mode based on your custom deployment
configuration. Below, we go into more detail about the different configurations
for **Automatic** and **GitHub Actions** mode.
## Automatic
If your project doesn't require any additional build steps, then the system
chooses **Automatic** mode. The entrypoint file is simply the file that Deno
Deploy will run.
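For instance, a self-contained server like the sketch below needs no build
step, so Automatic mode can run it directly (the file name is only the usual
convention):
```ts title="main.ts"
Deno.serve((_req: Request) => new Response("Hello from Deno Deploy!"));
```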
## GitHub Actions
If you enter a command in the **Install Step** and/or **Build Step** in the
**Project Configuration**, Deno Deploy will create the necessary GitHub Actions
workflow file and push it to your repository. In this workflow file, we
leverage the `deployctl` [GitHub Action][deploy-action] to deploy your project.
You can do whatever you need to do, such as running a build command, before
deploying to Deno Deploy.
To configure the preprocessing commands you want to run, click the **Show
advanced options** button that appears after choosing your Git repository, then
enter values into the input boxes as needed.
:::tip
For example, if you want to enable [ahead-of-time builds] for a Fresh project,
you will enter `deno task build` in the **Build Step** box.
See also [the Fresh doc][Deploy to production] for deploying a Fresh project to
Deno Deploy.
:::
The GitHub Actions workflow file that Deno Deploy generates and pushes to your
repository looks as follows.
```yml title=".github/workflows/deploy.yml"
name: Deploy
on:
  push:
    branches: main
  pull_request:
    branches: main
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    permissions:
      id-token: write # Needed for auth with Deno Deploy
      contents: read # Needed to clone the repository
    steps:
      - name: Clone repository
        uses: actions/checkout@v4
      - name: Install Deno
        uses: denoland/setup-deno@v2
        with:
          deno-version: v2.x
      - name: Build step
        run: "deno task build"
      - name: Upload to Deno Deploy
        uses: denoland/deployctl@v1
        with:
          project: ""
          entrypoint: "main.ts"
          root: "."
```
See
[deployctl README](https://github.com/denoland/deployctl/blob/main/action/README.md)
for more details.
[fileserver]: https://jsr.io/@std/http#file-server
[ghapp]: https://github.com/apps/deno-deploy
[deploy-action]: https://github.com/denoland/deployctl/blob/main/action/README.md
[ahead-of-time builds]: https://fresh.deno.dev/docs/concepts/ahead-of-time-builds
[Deploy to production]: https://fresh.deno.dev/docs/getting-started/deploy-to-production
---
# Custom domains
URL: https://docs.deno.com/deploy/manual/custom-domains
By default a project can be reached at its preview URL, which is
`$PROJECT_ID.deno.dev`, e.g. `dead-clam-55.deno.dev`. You can also add a custom
domain by following the instructions below.
## **Step 1:** Add your custom domain in the Deno Deploy dashboard
1. Click the "Settings" button on the project page, then select "Domains" from
the sidebar.
2. Enter the domain name you wish to add to the project and press "Add." Note
that you must own the domain that you want to add to a project. If you do not
own a domain yet, you can register one at a domain registrar like Google
Domains, Namecheap, or gandi.net.

3. The domain is added to the domains list and will have a "setup" badge.
4. Click on the "setup" badge to visit the domain setup page, which will display
the list of DNS records that need to be created/updated for your domain.

## **Step 2:** Update your custom domain's DNS records
Go to the DNS configuration panel of your domain registrar (or the service
you're using to manage DNS) and enter the records as described on the domain
setup page.

## **Step 3:** Validate that the DNS records have been updated
Go back to the Deno Deploy dashboard and click the **Validate** button on the
domain setup page. It will check if the DNS records are correctly set and if so,
update the status to "Validated, awaiting certificate provisioning."

## **Step 4:** Provision a certificate for your custom domain
At this point you have two options. 99% of the time, you should choose the first
option.
1. Let us automatically provision a certificate using Let's Encrypt.
   To do this, press the **Get automatic certificates** button. Provisioning a
   TLS certificate can take up to a minute. It is possible that the
   provisioning fails if your domain specifies a CAA record that prevents
   [Let's Encrypt](https://letsencrypt.org/) from provisioning certificates.
   Certificates will be automatically renewed around 30 days before the
   certificate expires. When you have been issued certificates successfully,
   you will see a green checkmark.
2. Manually upload a certificate and private key.
   To manually upload a certificate chain and private key, press the **Upload
   your own certificates** button. You will be prompted to upload a certificate
   chain and private key. The certificate chain needs to be complete and valid,
   and your leaf certificate needs to be at the top of the chain.
---
# Using deployctl on the command line
URL: https://docs.deno.com/deploy/manual/deployctl
`deployctl` is a command line tool (CLI) that lets you operate the Deno Deploy
platform without leaving your terminal. With it you can deploy your code, create
and manage your projects and their deployments, and monitor their usage and
logs.
## Dependencies
The only dependency for `deployctl` is the Deno runtime. You can install it by
running the following command:
```sh
curl -fsSL https://deno.land/install.sh | sh
```
You don't need to set up a Deno Deploy account beforehand. It will be created
along the way when you deploy your first project.
## Install `deployctl`
With the Deno runtime installed, you can install the `deployctl` utility with
the following command:
```sh
deno install -gArf jsr:@deno/deployctl
```
The `-A` option in the `deno install` command grants all permissions to the
installed script. You can opt not to use it, in which case you will be prompted
to grant the necessary permissions when needed during the execution of the tool.
## Deploy
To perform a new deployment of your code, navigate to the root directory of your
project and execute:
```shell
deployctl deploy
```
### Project and Entrypoint
If this is the first deployment of the project, `deployctl` will guess the
project name based on the Git repo or directory it is in. Similarly, it will
guess the entrypoint by looking for files with common entrypoint names
(`main.ts`, `src/main.ts`, etc.). After the first deployment, the settings used
will be stored in a config file (by default `deno.json`).
You can specify the project name and/or the entrypoint using the `--project` and
`--entrypoint` arguments respectively. If the project does not exist, it will be
created automatically. By default it is created in the personal organization of
the user, but it can also be created in a custom organization by specifying the
`--org` argument. If the organization does not exist yet, it will also be
created automatically.
```shell
deployctl deploy --project=helloworld --entrypoint=src/entrypoint.ts --org=my-team
```
### Include and Exclude Files
By default, `deployctl` deploys all the files in the current directory
(recursively, except `node_modules` directories). You can customize this
behavior using the `--include` and `--exclude` arguments, which are also
supported in the config file (a sketch of the config follows the examples
below). These arguments accept specific files, whole directories, and globs.
Here are some examples:
- Include only source and static files:
```shell
deployctl deploy --include=./src --include=./static
```
- Include only TypeScript files:
```shell
deployctl deploy --include=**/*.ts
```
- Exclude local tooling and artifacts:
```shell
deployctl deploy --exclude=./tools --exclude=./benches
```
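As mentioned above, these settings can also live in the config file. A sketch
of what this might look like in `deno.json` (the `deploy` key and values shown
are illustrative of what `deployctl` stores):
```json title="deno.json"
{
  "deploy": {
    "project": "my-project",
    "entrypoint": "main.ts",
    "include": ["src/", "static/"],
    "exclude": ["tools/", "benches/"]
  }
}
```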
A common pitfall is to not include the source code modules that need to be run
(entrypoint and dependencies). The following example will fail because `main.ts`
is not included:
```shell
deployctl deploy --include=./static --entrypoint=./main.ts
```
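A corrected invocation also includes the entrypoint module (and any local files
it imports):
```shell
deployctl deploy --include=./static --include=./main.ts --entrypoint=./main.ts
```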
The entrypoint can also be a remote script. A common use case for this is to
deploy a static site using `std/http/file_server.ts` (more details in
[Static Site Tutorial](https://docs.deno.com/deploy/tutorials/static-site)):
```shell
deployctl deploy --include=dist --entrypoint=jsr:@std/http/file-server
```
### Environment variables
You can set env variables using `--env` (to set individual environment
variables) or `--env-file` (to load one or more environment files). These
options can be combined and used multiple times:
```shell
deployctl deploy --env-file --env-file=.other-env --env=DEPLOYMENT_TS=$(date +%s)
```
The deployment will have access to these variables using `Deno.env.get()`. Be
aware that the env variables set with `--env` and `--env-file` are specific for
the deployment being created and are not added to the list of
[env variables configured for the project](./environment-variables.md).
### Production Deployments
Each deployment you create has a unique URL. In addition, a project has a
"production URL" and custom domains routing traffic to its "production"
deployment. Deployments can be promoted to production at any time, or created
directly as production using the `--prod` flag:
```shell
deployctl deploy --prod
```
Learn more about production deployments in the [Deployments](./deployments)
docs.
## Deployments
The deployments subcommand groups all the operations around deployments.
### List
You can list the deployments of a project with:
```shell
deployctl deployments list
```
Output:
```
✔ Page 1 of the list of deployments of the project 'my-project' is ready
┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Deployment │ Date │ Status │ Database │ Domain │ Entrypoint │ Branch │ Commit │
├───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ kcbxc4xwe4mc │ 12/3/2024 13:21:40 CET (2 days) │ Preview │ Preview │ https://my-project-kcbxc4xwe4mc.deno.dev │ main.ts │ main │ 4b6c506 │
│ c0ph5xa9exb3 │ 12/3/2024 13:21:25 CET (2 days) │ Production │ Production │ https://my-project-c0ph5xa9exb3.deno.dev │ main.ts │ main │ 4b6c506 │
│ kwkbev9er4h2 │ 12/3/2024 13:21:12 CET (2 days) │ Preview │ Preview │ https://my-project-kwkbev9er4h2.deno.dev │ main.ts │ main │ 4b6c506 │
│ dxseq0jc8402 │ 6/3/2024 23:16:51 CET (8 days) │ Preview │ Production │ https://my-project-dxseq0jc8402.deno.dev │ main.ts │ main │ 099359b │
│ 7xr5thz8yjbz │ 6/3/2024 22:58:32 CET (8 days) │ Preview │ Preview │ https://my-project-7xr5thz8yjbz.deno.dev │ main.ts │ another │ a4d2953 │
│ 4qr4h5ac3rfn │ 6/3/2024 22:57:05 CET (8 days) │ Failed │ Preview │ n/a │ main.ts │ another │ 56d2c88 │
│ 25wryhcqmb9q │ 6/3/2024 22:56:41 CET (8 days) │ Preview │ Preview │ https://my-project-25wryhcqmb9q.deno.dev │ main.ts │ another │ 4b6c506 │
│ 64tbrn8jre9n │ 6/3/2024 8:21:33 CET (8 days) │ Preview │ Production │ https://my-project-64tbrn8jre9n.deno.dev │ main.ts │ main │ 4b6c506 │
│ hgqgccnmzg04 │ 6/3/2024 8:17:40 CET (8 days) │ Failed │ Production │ n/a │ main.ts │ main │ 8071902 │
│ rxkh1w3g74e8 │ 6/3/2024 8:17:28 CET (8 days) │ Failed │ Production │ n/a │ main.ts │ main │ b142a59 │
│ wx6cw9aya64c │ 6/3/2024 8:02:29 CET (8 days) │ Preview │ Production │ https://my-project-wx6cw9aya64c.deno.dev │ main.ts │ main │ b803784 │
│ a1qh5fmew2yf │ 5/3/2024 16:25:29 CET (9 days) │ Preview │ Production │ https://my-project-a1qh5fmew2yf.deno.dev │ main.ts │ main │ 4bb1f0f │
│ w6pf4r0rrdkb │ 5/3/2024 16:07:35 CET (9 days) │ Preview │ Production │ https://my-project-w6pf4r0rrdkb.deno.dev │ main.ts │ main │ 6e487fc │
│ nn700gexgdzq │ 5/3/2024 13:37:11 CET (9 days) │ Preview │ Production │ https://my-project-nn700gexgdzq.deno.dev │ main.ts │ main │ c5b1d1f │
│ 98crfqxa6vvf │ 5/3/2024 13:33:52 CET (9 days) │ Preview │ Production │ https://my-project-98crfqxa6vvf.deno.dev │ main.ts │ main │ 090146e │
│ xcdcs014yc5p │ 5/3/2024 13:30:58 CET (9 days) │ Preview │ Production │ https://my-project-xcdcs014yc5p.deno.dev │ main.ts │ main │ 5b78c0f │
│ btw43kx89ws1 │ 5/3/2024 13:27:31 CET (9 days) │ Preview │ Production │ https://my-project-btw43kx89ws1.deno.dev │ main.ts │ main │ 663452a │
│ 62tg1ketkjx7 │ 5/3/2024 13:27:03 CET (9 days) │ Preview │ Production │ https://my-project-62tg1ketkjx7.deno.dev │ main.ts │ main │ 24d1618 │
│ 07ag6pt6kjex │ 5/3/2024 13:19:11 CET (9 days) │ Preview │ Production │ https://my-project-07ag6pt6kjex.deno.dev │ main.ts │ main │ 4944545 │
│ 4msyne1rvwj1 │ 5/3/2024 13:17:16 CET (9 days) │ Preview │ Production │ https://my-project-4msyne1rvwj1.deno.dev │ main.ts │ main │ dda85e1 │
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
Press enter to fetch the next page [Enter]
```
This command outputs pages of 20 deployments by default. You can iterate over
the pages with the enter key, and use the `--page` and `--limit` options to
query a specific page and page size.
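For example, to fetch the third page with 50 deployments per page:
```shell
deployctl deployments list --page=3 --limit=50
```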
Like the rest of the commands, you can use the `--project` option to specify
which project's deployments to list, if you are not in a project directory or
want to list deployments from a different project.
### Show
Get all the details of a particular deployment using:
```shell
deployctl deployments show
```
Output:
```
✔ The production deployment of the project 'my-project' is 'c0ph5xa9exb3'
✔ The details of the deployment 'c0ph5xa9exb3' are ready:
c0ph5xa9exb3
------------
Status: Production
Date: 2 days, 12 hours, 29 minutes, 46 seconds ago (12/3/2024 13:21:25 CET)
Project: my-project (e54f23b5-828d-4b7f-af12-706d4591062b)
Organization: my-team (d97822ac-ee20-4ce9-b942-5389330b57ee)
Domain(s): https://my-project.deno.dev
https://my-project-c0ph5xa9exb3.deno.dev
Database: Production (0efa985f-3793-48bc-8c05-f740ffab4ca0)
Entrypoint: main.ts
Env Vars: HOME
Git
Ref: main [4b6c506]
Message: change name
Author: John Doe @johndoe [mailto:johndoe@deno.com]
Url: https://github.com/arnauorriols/my-project/commit/4b6c50629ceeeb86601347732d01dc7ed63bf34f
Crons: another cron [*/10 * * * *] succeeded at 15/3/2024 1:50:00 CET after 2 seconds (next at 15/3/2024 2:00:00 CET)
newest cron [*/10 * * * *] n/a
yet another cron [*/10 * * * *] failed at 15/3/2024 1:40:00 CET after 2 seconds (next at 15/3/2024 1:51:54 CET)
```
If no deployment is specified, the command shows the details of the current
production deployment of the project. To see the details of the last deployment,
use `--last`, and to see the details of a particular deployment, use `--id` (or
positional argument). You can also use `--next` or `--prev` to navigate the
deployments chronologically.
For example, to see the details of the second to last deployment, you can do:
```shell
deployctl deployments show --last --prev
```
And to see the details of 2 deployments after a specific deployment:
```shell
deployctl deployments show 64tbrn8jre9n --next=2
```
### Redeploy
The redeploy command creates a new deployment reusing the build of an existing
deployment, for the purpose of changing the resources associated with it. This
includes production domains, environment variables and KV databases.
:::info
The semantics of selecting the deployment to redeploy are the same as those of
the [show subcommand](#show), including `--last`, `--id`, `--next` and `--prev`.
:::
#### Production Domains
If you want to change the routing of the production domains of the project to a
particular deployment, you can redeploy it with the `--prod` option:
```shell
deployctl deployments redeploy --prod 64tbrn8jre9n
```
This will create a new deployment with the same code and environment variables
as the specified deployment, but with the production domains of the project
pointing to it. For those projects with preview/prod databases (ie projects
linked to GitHub), this will also set the production database for the new
deployment.
:::note
This feature is similar to the "promote to production" button found in the Deno
Deploy web application with the exception that the "promote to production"
button does not create a new deployment. Instead, the "promote to production"
button changes the domain routing in-place, however it's restricted to
deployments already using the production database.
:::
#### KV Database
If this is a GitHub deployment, it will have 2 databases, one for prod
deployments and one for preview deployments. You can change the database of a
deployment by redeploying it with the `--db` option:
```shell
deployctl deployments redeploy --db=prod --id=64tbrn8jre9n
```
:::note
When redeploying a deployment to prod, by default it will automatically
configure it to use the prod database. You can combine both `--prod` and `--db`
options to opt out of this behavior. For example, the following command will
redeploy the current production deployment (given the lack of positional
argument, `--id` or `--last`). The new deployment will become the new production
deployment, but it will use the preview database instead of the production
database:
```shell
deployctl deployments redeploy --prod --db=preview
```
:::
If your organization has custom databases, you can also set them by UUID:
```shell
deployctl deployments redeploy --last --db=5261e096-f9aa-4b72-8440-1c2b5b553def
```
#### Environment Variables
When a deployment is created, it inherits the environment variables of the
project. Given that the deployments are immutable, their environment variables
can never be changed. To set new environment variables in a deployment, you need
to redeploy it using `--env` (to set individual variables) and `--env-file` (to
load one or more environment files).
The following command redeploys the current production deployment with the env
variables defined in the `.env` and `.other-env` files, plus the `DEPLOYMENT_TS`
variable set to the current timestamp. The resulting deployment will be a
preview deployment (i.e. the production domains won't route traffic to it, given
the lack of `--prod`).
```shell
deployctl deployments redeploy --env-file --env-file=.other-env --env=DEPLOYMENT_TS=$(date +%s)
```
:::note
Be aware that when changing env variables, only the env variables set in the
redeploy command will be used by the new deployment. The project env variables
and the env variables of the deployment being redeployed are ignored. If this
does not suit your needs, please report your feedback at
https://github.com/denoland/deploy_feedback/issues/
:::
:::note
When you change the project environment variables in the Deno Deploy web
application, the current production deployment is redeployed with the new
environment variables, and the new deployment becomes the new production
deployment.
:::
### Delete
You can delete a deployment using the `delete` subcommand:
```shell
deployctl deployments delete 64tbrn8jre9n
```
Like `show` and `redeploy`, `delete` can also use `--last`, `--next` and
`--prev` to select the deployment to delete. Here's an example command that
deletes all the deployments of a project except the last (use with caution!):
```shell
while deployctl deployments delete --project=my-project --last --prev; do :; done
```
## Projects
The `projects` subcommand groups all the operations against projects as a whole.
This includes `list`, `show`, `rename`, `create` and `delete`.
### List
`deployctl projects list` outputs all the projects your user has access to,
grouped by organization:
```
Personal org:
blog
url-shortener
'my-team' org:
admin-site
main-site
analytics
```
You can filter by organization using `--org`:
```shell
deployctl projects list --org=my-team
```
### Show
To see the details of a particular project, use `projects show`. If you are
inside a project, it will pick up the project id from the config file. You can
also specify the project using `--project` or the positional argument:
```shell
deployctl projects show main-site
```
Output:
```
main-site
---------
Organization: my-team (5261e096-f9aa-4b72-8440-1c2b5b553def)
Domain(s): https://my-team.com
https://main-site.deno.dev
Dash URL: https://dash.deno.com/projects/8422c515-f68f-49b2-89f3-157f4b144611
Repository: https://github.com/my-team/main-site
Databases: [main] dd28e63e-f495-416b-909a-183380e3a232
[*] e061c76e-4445-409a-bc36-a1a9040c83b3
Crons: another cron [*/10 * * * *] succeeded at 12/3/2024 14:40:00 CET after 2 seconds (next at 12/3/2024 14:50:00 CET)
newest cron [*/10 * * * *] n/a
yet another cron [*/10 * * * *] failed at 12/3/2024 14:40:00 CET after 2 seconds (next at 12/3/2024 14:50:00 CET)
Deployments: kcbxc4xwe4mc c0ph5xa9exb3* kwkbev9er4h2 dxseq0jc8402 7xr5thz8yjbz
4qr4h5ac3rfn 25wryhcqmb9q 64tbrn8jre9n hgqgccnmzg04 rxkh1w3g74e8
wx6cw9aya64c a1qh5fmew2yf w6pf4r0rrdkb nn700gexgdzq 98crfqxa6vvf
xcdcs014yc5p btw43kx89ws1 62tg1ketkjx7 07ag6pt6kjex 4msyne1rvwj1
```
### Rename
Projects can be renamed easily with the `rename` subcommand. Similarly to the
other commands, if you run the command from within a project's directory, you
don't need to specify the current name of the project:
```shell
deployctl projects rename my-personal-blog
```
Output:
```
ℹ Using config file '/private/tmp/blog/deno.json'
✔ Project 'blog' (8422c515-f68f-49b2-89f3-157f4b144611) found
✔ Project 'blog' renamed to 'my-personal-blog'
```
:::note
Keep in mind that the name of the project is part of the preview domains
(https://my-personal-blog-kcbxc4xwe4mc.deno.dev) and the default production
domain (https://my-personal-blog.deno.dev). Therefore, when changing the project
name, the URLs with the previous name will no longer route to the project's
corresponding deployments.
:::
### Create
You can create an empty project with:
```shell
deployctl projects create my-new-project
```
### Delete
You can delete a project with:
```shell
deployctl projects delete my-new-project
```
## Top
The `top` subcommand is used to monitor the resource usage of a project in
real-time:
```shell
deployctl top
```
Output:
```
┌────────┬────────────────┬────────────────────────┬─────────┬───────┬─────────┬──────────┬─────────────┬────────────┬─────────┬─────────┬───────────┬───────────┐
│ (idx) │ deployment │ region │ Req/min │ CPU% │ CPU/req │ RSS/5min │ Ingress/min │ Egress/min │ KVr/min │ KVw/min │ QSenq/min │ QSdeq/min │
├────────┼────────────────┼────────────────────────┼─────────┼───────┼─────────┼──────────┼─────────────┼────────────┼─────────┼─────────┼───────────┼───────────┤
│ 6b80e8 │ "kcbxc4xwe4mc" │ "asia-northeast1" │ 80 │ 0.61 │ 4.56 │ 165.908 │ 11.657 │ 490.847 │ 0 │ 0 │ 0 │ 0 │
│ 08312f │ "kcbxc4xwe4mc" │ "asia-northeast1" │ 76 │ 3.49 │ 27.58 │ 186.278 │ 19.041 │ 3195.288 │ 0 │ 0 │ 0 │ 0 │
│ 77c10b │ "kcbxc4xwe4mc" │ "asia-south1" │ 28 │ 0.13 │ 2.86 │ 166.806 │ 7.354 │ 111.478 │ 0 │ 0 │ 0 │ 0 │
│ 15e356 │ "kcbxc4xwe4mc" │ "asia-south1" │ 66 │ 0.97 │ 8.93 │ 162.288 │ 17.56 │ 4538.371 │ 0 │ 0 │ 0 │ 0 │
│ a06817 │ "kcbxc4xwe4mc" │ "asia-southeast1" │ 126 │ 0.44 │ 2.11 │ 140.087 │ 16.504 │ 968.794 │ 0 │ 0 │ 0 │ 0 │
│ d012b6 │ "kcbxc4xwe4mc" │ "asia-southeast1" │ 119 │ 2.32 │ 11.72 │ 193.704 │ 23.44 │ 8359.829 │ 0 │ 0 │ 0 │ 0 │
│ 7d9a3d │ "kcbxc4xwe4mc" │ "australia-southeast1" │ 8 │ 0.97 │ 75 │ 158.872 │ 10.538 │ 3.027 │ 0 │ 0 │ 0 │ 0 │
│ 3c21be │ "kcbxc4xwe4mc" │ "australia-southeast1" │ 1 │ 0.04 │ 90 │ 105.292 │ 0.08 │ 1.642 │ 0 │ 0 │ 0 │ 0 │
│ b75dc7 │ "kcbxc4xwe4mc" │ "europe-west2" │ 461 │ 5.43 │ 7.08 │ 200.573 │ 63.842 │ 9832.936 │ 0 │ 0 │ 0 │ 0 │
│ 33607e │ "kcbxc4xwe4mc" │ "europe-west2" │ 35 │ 0.21 │ 3.69 │ 141.98 │ 9.438 │ 275.788 │ 0 │ 0 │ 0 │ 0 │
│ 9be3d2 │ "kcbxc4xwe4mc" │ "europe-west2" │ 132 │ 0.92 │ 4.19 │ 180.654 │ 15.959 │ 820.513 │ 0 │ 0 │ 0 │ 0 │
│ 33a859 │ "kcbxc4xwe4mc" │ "europe-west3" │ 1335 │ 7.57 │ 3.4 │ 172.032 │ 178.064 │ 10967.918 │ 0 │ 0 │ 0 │ 0 │
│ 3f54ce │ "kcbxc4xwe4mc" │ "europe-west4" │ 683 │ 4.76 │ 4.19 │ 187.802 │ 74.696 │ 7565.017 │ 0 │ 0 │ 0 │ 0 │
│ cf881c │ "kcbxc4xwe4mc" │ "europe-west4" │ 743 │ 3.95 │ 3.19 │ 177.213 │ 86.974 │ 6087.454 │ 0 │ 0 │ 0 │ 0 │
│ b4565b │ "kcbxc4xwe4mc" │ "me-west1" │ 3 │ 0.21 │ 55 │ 155.46 │ 2.181 │ 0.622 │ 0 │ 0 │ 0 │ 0 │
│ b97970 │ "kcbxc4xwe4mc" │ "southamerica-east1" │ 3 │ 0.08 │ 25 │ 186.049 │ 1.938 │ 0.555 │ 0 │ 0 │ 0 │ 0 │
│ fd7a08 │ "kcbxc4xwe4mc" │ "us-east4" │ 3 │ 0.32 │ 80 │ 201.101 │ 0.975 │ 58.495 │ 0 │ 0 │ 0 │ 0 │
│ 95d68a │ "kcbxc4xwe4mc" │ "us-east4" │ 133 │ 1.05 │ 4.77 │ 166.052 │ 28.107 │ 651.737 │ 0 │ 0 │ 0 │ 0 │
│ c473e7 │ "kcbxc4xwe4mc" │ "us-east4" │ 0 │ 0 │ 0 │ 174.154 │ 0.021 │ 0 │ 0 │ 0 │ 0 │ 0 │
│ ebabfb │ "kcbxc4xwe4mc" │ "us-east4" │ 19 │ 0.15 │ 4.78 │ 115.732 │ 7.764 │ 67.054 │ 0 │ 0 │ 0 │ 0 │
│ eac700 │ "kcbxc4xwe4mc" │ "us-south1" │ 114 │ 2.37 │ 12.54 │ 183.001 │ 18.401 │ 22417.397 │ 0 │ 0 │ 0 │ 0 │
│ cd2194 │ "kcbxc4xwe4mc" │ "us-south1" │ 35 │ 0.33 │ 5.68 │ 145.871 │ 8.142 │ 91.236 │ 0 │ 0 │ 0 │ 0 │
│ 140fec │ "kcbxc4xwe4mc" │ "us-west2" │ 110 │ 1.43 │ 7.84 │ 115.298 │ 18.093 │ 977.993 │ 0 │ 0 │ 0 │ 0 │
│ 51689f │ "kcbxc4xwe4mc" │ "us-west2" │ 1105 │ 7.66 │ 4.16 │ 187.277 │ 154.876 │ 14648.383 │ 0 │ 0 │ 0 │ 0 │
│ c5806e │ "kcbxc4xwe4mc" │ "us-west2" │ 620 │ 4.38 │ 4.24 │ 192.291 │ 109.086 │ 9685.688 │ 0 │ 0 │ 0 │ 0 │
└────────┴────────────────┴────────────────────────┴─────────┴───────┴─────────┴──────────┴─────────────┴────────────┴─────────┴─────────┴───────────┴───────────┘
⠼ Streaming...
```
The columns are defined as follows:
| Column | Description |
| ----------- | -------------------------------------------------------------------------------------------------- |
| idx | Instance discriminator. Opaque id to discriminate different executions running in the same region. |
| deployment | The id of the deployment running in the executing instance. |
| Req/min | Requests per minute received by the project. |
| CPU% | Percentage of CPU used by the project. |
| CPU/req | CPU time per request, in milliseconds. |
| RSS/5min | Max RSS used by the project during the last 5 minutes, in MB. |
| Ingress/min | Data received by the project per minute, in KB. |
| Egress/min | Data output by the project per minute, in KB. |
| KVr/min | KV reads performed by the project per minute. |
| KVw/min | KV writes performed by the project per minute. |
| QSenq/min | Queues enqueues performed by the project per minute. |
| QSdeq/min | Queues dequeues performed by the project per minute. |
You can filter by region using `--region`, which accepts substrings and can be
used multiple times:
```shell
deployctl top --region=asia --region=southamerica
```
## Logs
You can fetch the logs of your deployments with `deployctl logs`. It supports
both live logs, where the logs are streamed to the console as they are
generated, and querying persisted logs, where logs generated in the past are
fetched.
To show the live logs of the current production deployment of a project:
```shell
deployctl logs
```
:::note
Unlike in the Deno Deploy web application, at the moment the logs subcommand
does not automatically switch to the new production deployment when it changes.
:::
To show the live logs of a particular deployment:
```shell
deployctl logs --deployment=1234567890ab
```
Logs can be filtered by level, region, and text using the `--levels`,
`--regions`, and `--grep` options:
```shell
deployctl logs --levels=error,info --regions=region1,region2 --grep='unexpected'
```
To show the persisted logs, use the `--since` and/or `--until` options. On
macOS, which ships BSD `date`:
```sh
deployctl logs --since=$(date -Iseconds -v-2H) --until=$(date -Iseconds -v-30M)
```
On Linux, with GNU `date`:
```sh
deployctl logs --since=$(date -Iseconds --date='2 hours ago') --until=$(date -Iseconds --date='30 minutes ago')
```
## API
If you use the [subhosting API](../../subhosting/manual/index.md),
`deployctl api` will help you interact with the API by handling the
authentication and headers for you:
```shell
deployctl api /projects/my-personal-blog/deployments
```
Use `--method` and `--body` to specify the HTTP method and the request body:
```shell
deployctl api --method=POST --body='{"name": "main-site"}' organizations/5261e096-f9aa-4b72-8440-1c2b5b553def/projects
```
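Since the response is JSON, `deployctl api` combines well with `jq`. As a
sketch (assuming the deployments endpoint returns an array of objects that
each carry an `id` field):
```shell
deployctl api /projects/my-personal-blog/deployments | jq '.[0].id'
```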
## Local Development
For local development you can use the `deno` CLI. To install `deno`, follow the
instructions in the
[Deno manual](https://deno.land/manual/getting_started/installation).
After installation, you can run your scripts locally:
```shell
$ deno run --allow-net=:8000 ./main.ts
Listening on http://localhost:8000
```
To watch for file changes add the `--watch` flag:
```shell
$ deno run --allow-net=:8000 --watch ./main.ts
Listening on http://localhost:8000
```
For more information about the Deno CLI, and how to configure your development
environment and IDE, visit the Deno Manual's [Getting Started][manual-gs]
section.
[manual-gs]: https://deno.land/manual/getting_started
## JSON output
All the commands that output data have a `--format=json` option that outputs the
data in JSON objects. This output mode is the default when stdout is not a TTY,
notably when piping to another command. Together with `jq`, this mode enables
the programmatic use of all the data provided by `deployctl`:
Get the id of the current production deployment:
```shell
deployctl deployments show | jq .build.deploymentId
```
Get a csv stream of the CPU time per request on each isolate of each region:
```shell
deployctl top | jq -r '[.id,.region,.cpuTimePerRequest] | @csv'
```
---
# Deployments
URL: https://docs.deno.com/deploy/manual/deployments
A deployment is a snapshot of the code and environment variables required to run
an application. A new deployment can be created
[via `deployctl`](./deployctl.md#deploy) or automatically via Deploy's Github
integration if configured.
Deployments are immutable after they have been created. To deploy a new version
of the code for an application, a new deployment must be created. Once created,
deployments remain accessible.
All available deployments are listed on your project page under the
`Deployments` tab, pictured below. Old deployments can be deleted
[via `deployctl`](./deployctl.md#delete) and
[via API](https://apidocs.deno.com/#delete-/deployments/-deploymentId-).

## Custom domains
Other URLs can also point to a deployment, such as
[custom domains](custom-domains).
## Branch domains
Branch domains of the form `{project_name}--{branch_name}.deno.dev` are also
supported.
## Production vs. preview deployments
All deployments have a preview URL that can be used to view this specific
deployment. Preview URLs have the format
`{project_name}-{deployment_id}.deno.dev`.

A deployment can either be a production or a preview deployment. These
deployments do not have any differences in runtime functionality. The only
distinguishing factor is that a project's production deployment will receive
traffic from the project URL (e.g. `myproject.deno.dev`), and from custom
domains in addition to traffic to the deployment's preview URL.
## Promoting preview deployments to production deployments via Deno Deploy UI
Preview deployments can be "promoted" to production via the Deno Deploy UI:
1. Navigate to the project page.
2. Click on the **Deployments** tab.
3. Click on the three dots next to the deployment you want to promote to
production and select **Promote to Production**.

Promoting deployments to production is restricted to deployments that already
use the production KV database. This is particularly relevant for GitHub
deployments that use a different database for preview and production
deployments. Deployments (even those that use the preview KV database) can
always be redeployed to production using
[the `deployctl deployments redeploy` command](./deployctl.md#production-domains).
## Creating production deployments via `deployctl`
If you are deploying your Deno code with `deployctl`, you can deploy directly to
production with the `--prod` flag:
```sh
deployctl deploy --prod --project=helloworld main.ts
```
---
# Connect to DynamoDB
URL: https://docs.deno.com/deploy/manual/dynamodb
Amazon DynamoDB is a fully managed NoSQL database. To persist data to DynamoDB,
follow the steps below:
The tutorial assumes that you have an AWS and Deno Deploy account.
You can find a more comprehensive tutorial that builds a sample application on
top of DynamoDB [here](../tutorials/tutorial-dynamodb).
## Gather credentials from DynamoDB
The first step in the process is to generate AWS credentials to programmatically
access DynamoDB.
Generate Credentials:
1. Go to https://console.aws.amazon.com/iam/ and go to the "Users" section.
2. Click on the **Add user** button, fill the **User name** field (maybe use
`denamo`), and select **Programmatic access** type.
3. Click on **Next: Permissions**, then on **Attach existing policies
directly**, search for `AmazonDynamoDBFullAccess` and select it.
4. Click on **Next: Tags**, then on **Next: Review** and finally **Create
user**.
5. Click on **Download .csv** button to download the credentials.
## Create a project in Deno Deploy
Next, let's create a project in Deno Deploy and set it up with the requisite
environment variables:
1. Go to [https://dash.deno.com/new](https://dash.deno.com/new) (Sign in with
GitHub if you didn't already) and click on **+ Empty Project** under **Deploy
from the command line**.
2. Now click on the **Settings** button available on the project page.
3. Navigate to **Environment Variables** Section and add the following secrets.
- `AWS_ACCESS_KEY_ID` - Use the value that's available under **Access key ID**
column in the downloaded CSV.
- `AWS_SECRET_ACCESS_KEY` - Use the value that's available under **Secret access
key** column in the downloaded CSV.
## Write code that connects to DynamoDB
AWS has an
[official SDK](https://www.npmjs.com/package/@aws-sdk/client-dynamodb) that
works with browsers. As most Deno Deploy's APIs are similar to browsers', the
same SDK works with Deno Deploy. To use the SDK in Deno, import from a cdn like
below and create a client:
```js
import { serve } from "https://deno.land/x/sift@0.6.0/mod.ts";
import {
  DynamoDBClient,
  GetItemCommand,
  PutItemCommand,
} from "https://esm.sh/@aws-sdk/client-dynamodb?dts";

// Create a client instance by providing your region information.
// The credentials are obtained from the environment variables which
// we set during our project creation step on Deno Deploy.
const client = new DynamoDBClient({
  region: "us-east-1", // Use the region of your DynamoDB table.
  credentials: {
    accessKeyId: Deno.env.get("AWS_ACCESS_KEY_ID"),
    secretAccessKey: Deno.env.get("AWS_SECRET_ACCESS_KEY"),
  },
});

serve({
  "/songs": handleRequest,
});

async function handleRequest(request) {
  try {
    // Build a command from the request (e.g. a GetItemCommand or a
    // PutItemCommand; see the concrete example below) and send it.
    const data = await client.send(command);
    // process data.
  } catch (error) {
    // error handling.
  } finally {
    // finally.
  }
}
```
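As a concrete version of `handleRequest` above (a sketch, assuming a
hypothetical `songs` table with a `title` partition key, the same shape used in
the DynamoDB tutorial later in this document):
```js
async function handleRequest(request) {
  const { searchParams } = new URL(request.url);
  try {
    // Retrieve the item whose partition key matches the "title" parameter.
    const { Item } = await client.send(
      new GetItemCommand({
        TableName: "songs",
        Key: { title: { S: searchParams.get("title") ?? "" } },
      }),
    );
    return Response.json(Item ?? { error: "not found" });
  } catch (error) {
    return Response.json({ error: String(error) }, { status: 500 });
  }
}
```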
## Deploy application to Deno Deploy
Once you have finished writing your application, you can deploy it on Deno
Deploy.
To do this, go back to your project page at
`https://dash.deno.com/projects/<project-name>`.
You should see a couple of options to deploy:
- [Github integration](ci_github)
- [`deployctl`](./deployctl.md)
```sh
deployctl deploy --project=<project-name>
```
Unless you want to add a build step, we recommend that you select the Github
integration.
For more details on the different ways to deploy on Deno Deploy and the
different configuration options, read [here](how-to-deploy).
---
# Edge Cache
URL: https://docs.deno.com/deploy/manual/edge-cache
The [Web Cache API](https://developer.mozilla.org/en-US/docs/Web/API/Cache) is
supported on Deno Deploy. The cache is designed to provide microsecond-level
read latency, multi-GB/s write throughput and unbounded storage, with the
tradeoff of best-effort consistency and durability.
```ts
const cache = await caches.open("my-cache");
Deno.serve(async (req) => {
const cached = await cache.match(req);
if (cached) {
return cached;
}
const res = new Response("cached at " + new Date().toISOString());
await cache.put(req, res.clone());
return res;
});
```
Cached data is stored in the same Deno Deploy region that runs your code.
Usually your isolate observes read-after-write (RAW) and write-after-write (WAW)
consistency within the same region; however, in rare cases recent writes can be
lost, out-of-order or temporarily invisible.
## Expiration
By default, cached data is persisted for an indefinite period of time. While we
periodically scan and delete inactive objects, an object is usually kept in
cache for at least 30 days.
Edge Cache understands standard HTTP response headers `Expires` and
`Cache-Control`. You can use them to specify an expiration time for every cached
object, for example:
```
Expires: Thu, 22 Aug 2024 01:22:31 GMT
```
or:
```
Cache-Control: max-age=86400
```
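For example, building on the server above, you could set `Cache-Control` on
the response before storing it, so that cached objects expire after one day (a
minimal sketch):
```ts
const cache = await caches.open("my-cache");

Deno.serve(async (req) => {
  const cached = await cache.match(req);
  if (cached) {
    return cached;
  }
  const res = new Response("cached at " + new Date().toISOString(), {
    headers: {
      // Ask Edge Cache to drop this object after one day (86400 seconds).
      "Cache-Control": "max-age=86400",
    },
  });
  await cache.put(req, res.clone());
  return res;
});
```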
## Limitations
- If a response is not constructed from a `Uint8Array` or `string` body, the
`Content-Length` header needs to be manually set.
- Deletion is not yet supported.
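Regarding the first limitation: when the body is a stream, you can set
`Content-Length` yourself before caching (a sketch, assuming the byte length
is known up front):
```ts
const cache = await caches.open("my-cache");

const bytes = new TextEncoder().encode("hello");
const res = new Response(new Blob([bytes]).stream(), {
  headers: {
    // A stream body has no implicit length, so set it manually.
    "Content-Length": String(bytes.byteLength),
  },
});
await cache.put(new Request("https://example.com/"), res);
```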
---
# Environment variables
URL: https://docs.deno.com/deploy/manual/environment-variables
Environment variables are useful to store values like access tokens of web
services. Each deployment has a set of environment variables defined at the
moment of creation and accessible from the code via the `Deno.env` API. There
are 2 ways to define the environment variables of a deployment:
## Project environment variables
You can define environment variables at the project level. When you create a
deployment, it will get the set of environment variables the project has defined
_at that particular moment_.
For convenience, when you change the environment variables of a project, the
current production deployment is _redeployed_, creating a new production
deployment with the new set of environment variables.
:::note
Deployments are immutable, including their environment variables. Changing the
environment variables of a project does not change the environment variables of
existing deployments.
:::
To add an environment variable to your project, click on the **Settings** button
on the project page and then on **Environment Variables** from the sidebar. Fill
in the key/value fields and click on "Add" to add an environment variable to
your project.

Updating an existing environment variable works the same way. Click on the "Add
Variable" button, enter the same name as the environment variable you wish to
update, and enter the new value. Click on the "Save" button to complete the
update.
## Deployment environment variables
When deploying using `deployctl`, you can specify environment variables
[using the `--env` or `--env-file` flags](./deployctl.md#environment-variables),
complementing the environment variables already defined for the project. You can
also pass multiple `--env-file` arguments (e.g.,
`--env-file=.env.one --env-file=.env.two`) to include variables from multiple
files.
:::note
When multiple declarations for the same environment variable exist within a
single `.env` file, the first occurrence is applied. However, if the same
variable is defined across multiple `.env` files (using multiple `--env-file`
arguments), the value from the last file specified takes precedence. This means
that the first occurrence found in the last `.env` file listed will be applied.
:::
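For example, suppose two hypothetical files `.env.one` and `.env.two` both
define `GREETING`:
```shell
# .env.one contains:
#   GREETING=hello
#   GREETING=hi     <- ignored: the first occurrence within a file wins
# .env.two contains:
#   GREETING=hey

# GREETING resolves to "hey": the last file specified takes precedence.
deployctl deploy --env-file=.env.one --env-file=.env.two main.ts
```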
These env variables will be specific to the deployment being created.
### Default environment variables
Every deployment has the following environment variables preset, which you can
access from your code.
1. `DENO_REGION`
   It holds the region code of the region in which the deployment is running.
   You can use this variable to serve region-specific content.
   You can refer to the region code from the [regions page](regions).
2. `DENO_DEPLOYMENT_ID`
   It holds the ID of the deployment.
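Both variables can be read from your code with `Deno.env.get()`; a minimal
sketch:
```ts
Deno.serve(() => {
  // These variables are preset on every Deno Deploy deployment.
  const region = Deno.env.get("DENO_REGION") ?? "unknown";
  const deploymentId = Deno.env.get("DENO_DEPLOYMENT_ID") ?? "unknown";
  return new Response(`Served by deployment ${deploymentId} in ${region}`);
});
```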
---
# Connect to FaunaDB
URL: https://docs.deno.com/deploy/manual/faunadb
FaunaDB calls itself "the data API for modern applications." It's a database
with a GraphQL interface that enables you to interact with it using GraphQL.
Since you communicate with it using HTTP requests, you don't need to manage
connections, which works well for serverless applications.
This tutorial covers how to connect to a Fauna database from an application
deployed on Deno Deploy.
You can find a more comprehensive tutorial that builds a sample application on
top of Fauna [here](../tutorials/tutorial-faunadb).
## Get credentials from Fauna
We assume that you've already created a Fauna instance at
https://dashboard.fauna.com.
To access your Fauna database programmatically, you'll need to generate a
credential:
1. Click on the **Security** section inside your particular database and click on
**New Key**. 
2. Select **Server** role and click on **Save**. Copy the secret. You'll need it
for the next step.
## Create a project in Deno Deploy
Next, let's create a project on Deno Deploy and set it up with the requisite
environment variables:
1. Go to [https://dash.deno.com/new](https://dash.deno.com/new) (Sign in with
GitHub if you didn't already) and click on **+ Empty Project** under **Deploy
from the command line**.
2. Now click on the **Settings** button available on the project page.
3. Navigate to the **Environment Variables** section and add the following
secrets.
- `FAUNA_SECRET` - The value should be the secret we created in the previous
step. 
## Write code that connects to Fauna
The Fauna JavaScript driver works in Deno when imported from a CDN, and you can
also talk to Fauna's GraphQL endpoint directly over HTTP.
Fauna has a graphql endpoint for its database, and it generates essential
mutations like `create`, `update`, and `delete` for each data type defined in
the schema. For example, Fauna will generate a mutation named `createQuote` to
create a new quote in the database for the data type `Quote`.
The snippet below uses the driver to run a simple query against the database:
```javascript
import faunadb from "https://esm.sh/faunadb@4.7.1";

const { Client, query } = faunadb;

// Grab the secret from the environment.
const token = Deno.env.get("FAUNA_SECRET");
if (!token) {
  throw new Error("environment variable FAUNA_SECRET not set");
}

const client = new Client({
  secret: token,
  // Adjust the endpoint if you are using Region Groups.
  endpoint: "https://db.fauna.com/",
});

client
  .query(query.ToDate("2018-06-06"))
  .then((res) => {
    console.log("Result:", res);
  })
  .catch((err) => {
    console.log("Error:", err);
  });
```
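Alternatively, you can talk to Fauna's GraphQL endpoint directly with `fetch`,
reusing the `token` from the snippet above. A minimal sketch of such a helper
follows; the full `queryFauna` function is developed in the Fauna tutorial
later in this document:
```javascript
async function queryFauna(query, variables) {
  // POST the query and its variables to Fauna's graphql endpoint.
  const res = await fetch("https://graphql.fauna.com/graphql", {
    method: "POST",
    headers: {
      authorization: `Bearer ${token}`,
      "content-type": "application/json",
    },
    body: JSON.stringify({ query, variables }),
  });
  return await res.json();
}
```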
## Deploy application to Deno Deploy
Once you have finished writing your application, you can deploy it on Deno
Deploy.
To do this, go back to your project page at
`https://dash.deno.com/projects/<project-name>`.
You should see a couple of options to deploy:
- [Github integration](ci_github)
- [`deployctl`](./deployctl.md)
```sh
deployctl deploy --project=<project-name>
```
Unless you want to add a build step, we recommend that you select the Github
integration.
For more details on the different ways to deploy on Deno Deploy and the
different configuration options, read [here](how-to-deploy).
---
# Connect to Firebase
URL: https://docs.deno.com/deploy/manual/firebase
Firebase is a platform developed by Google for creating mobile and web
applications. Its features include authentication primitives for log in and a
NoSQL datastore, Firestore, that you can persist data to.
This tutorial covers how to connect to Firebase from an application deployed on
Deno Deploy.
You can find a more comprehensive tutorial that builds a sample application on
top of Firebase [here](../tutorials/tutorial-firebase).
## Get credentials from Firebase
> This tutorial assumes that you've already created a project in Firebase and
> added a web application to your project.
1. Navigate to your project in Firebase and click on **Project Settings**
2. Scroll down until you see a card with your app name, and a code sample that
includes a `firebaseConfig` object. It should look something like the below.
Keep this handy. We will use it later:
```js
var firebaseConfig = {
apiKey: "APIKEY",
authDomain: "example-12345.firebaseapp.com",
projectId: "example-12345",
storageBucket: "example-12345.appspot.com",
messagingSenderId: "1234567890",
appId: "APPID",
};
```
## Create a Project in Deno Deploy
1. Go to [https://dash.deno.com/new](https://dash.deno.com/new) (Sign in with
GitHub if you didn't already) and click on **+ Empty Project** under **Deploy
from the command line**.
2. Now click on the **Settings** button available on the project page.
3. Navigate to the **Environment Variables** section and add the following:
   - `FIREBASE_USERNAME` - The Firebase user (email address) that was added
     above.
   - `FIREBASE_PASSWORD` - The Firebase user password that was added above.
   - `FIREBASE_CONFIG` - The configuration of the Firebase application as a
     JSON string.
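In your code, you can then initialize Firebase from these variables (a sketch;
the full setup is shown in the Firebase tutorial later in this document):
```js
import firebase from "https://esm.sh/firebase@8.7.0/app";
import "https://esm.sh/firebase@8.7.0/auth";
import "https://esm.sh/firebase@8.7.0/firestore";

// Parse the JSON configuration stored in the environment variable.
const firebaseConfig = JSON.parse(Deno.env.get("FIREBASE_CONFIG"));
const firebaseApp = firebase.initializeApp(firebaseConfig, "example");
const auth = firebase.auth(firebaseApp);
const db = firebase.firestore(firebaseApp);
```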
---
# Deploy a static site
URL: https://docs.deno.com/deploy/tutorials/static-site
Make sure that there is an `image.png` inside `static-site`.
You now have an HTML page that says "Hello" and has a logo.
## Step 2: Deploy the static site using `deployctl`
To deploy this repo on Deno Deploy, from the `static-site` repository, run:
```console
deployctl deploy --project=<project-name> --entrypoint=jsr:@std/http/file-server
```
To give a little more explanation of these commands: because this is a static
site, there is no JavaScript to execute. Instead of giving Deno Deploy a
particular JavaScript or TypeScript file to run as the entrypoint, you give it
the external `file-server` program from the standard library. `deployctl` then
uploads all the static files in the `static-site` repo, including the image and
the HTML page, to Deno Deploy, where the file server serves them up.
## Step 3: Voila!
Your static site should now be live! Its URL will be output in the terminal, or
you can manage your new static site project in your
[Deno dashboard](https://dash.deno.com/projects/). If you click through to your
new project you will be able to view the site, configure its name, environment
variables, custom domains and more.
---
# Build a blog with Fresh
URL: https://docs.deno.com/deploy/tutorials/tutorial-blog-fresh
Tutorial [here](https://deno.com/blog/build-a-blog-with-fresh).
---
# API server with DynamoDB
URL: https://docs.deno.com/deploy/tutorials/tutorial-dynamodb
In this tutorial let's take a look at how we can use DynamoDB to build a small
API that has endpoints to insert and retrieve information.
The tutorial assumes that you have an AWS and Deno Deploy account.
- [Overview](#overview)
- [Setup DynamoDB](#setup-dynamodb)
- [Create a Project in Deno Deploy](#create-a-project-in-deno-deploy)
- [Write the Application](#write-the-application)
- [Deploy the Application](#deploy-the-application)
## Overview
We're going to build an API with a single endpoint that accepts GET/POST
requests and returns the appropriate information:
```sh
# A GET request to the endpoint should return the details of the song based on its title.
GET /songs?title=Song%20Title # '%20' == space
# response
{
title: "Song Title"
artist: "Someone"
album: "Something",
released: "1970",
genres: "country rap",
}
# A POST request to the endpoint should insert the song details.
POST /songs
# post request body
{
title: "A New Title"
artist: "Someone New"
album: "Something New",
released: "2020",
genres: "country rap",
}
```
## Setup DynamoDB
Our first step in the process is to generate AWS credentials to programmatically
access DynamoDB.
Generate Credentials:
1. Go to https://console.aws.amazon.com/iam/ and go to the "Users" section.
2. Click on the **Create user** button, fill the **User name** field (maybe use
`denamo`) and select **Programmatic access** type.
3. Click **Next**
4. Select **Attach policies directly** and search for
`AmazonDynamoDBFullAccess`. Check the box next to this policy in the results.
5. Click **Next** and **Create user**
6. On the resulting **Users** page, click through to the user you just created
7. Click on **Create access key**
8. Select **Application running outside AWS**
9. Click **Create**
10. Click **Download .csv file** to download the credentials you just created.
Create database table:
1. Go to https://console.aws.amazon.com/dynamodb and click on the **Create
table** button.
2. Fill the **Table name** field with `songs` and **Partition key** with
`title`.
3. Scroll down and click on **Create table**.
4. Once the table is created, click on the table name and find its **General
information**
5. Under **Amazon Resource Name (ARN)** take note of the region of your new
table (for example us-east-1).
## Write the Application
Create a file called `index.js` and insert the following:
```js
import {
json,
serve,
validateRequest,
} from "https://deno.land/x/sift@0.6.0/mod.ts";
// AWS has an official SDK that works with browsers. As most of Deno Deploy's
// APIs are similar to the browser's, the same SDK works with Deno Deploy.
// So we import the SDK along with some classes required to insert and
// retrieve data.
import {
DynamoDBClient,
GetItemCommand,
PutItemCommand,
} from "https://esm.sh/@aws-sdk/client-dynamodb";
// Create a client instance by providing your region information.
// The credentials are obtained from environment variables which
// we set during our project creation step on Deno Deploy.
const client = new DynamoDBClient({
region: Deno.env.get("AWS_TABLE_REGION"),
credentials: {
accessKeyId: Deno.env.get("AWS_ACCESS_KEY_ID"),
secretAccessKey: Deno.env.get("AWS_SECRET_ACCESS_KEY"),
},
});
serve({
"/songs": handleRequest,
});
async function handleRequest(request) {
// The endpoint allows GET and POST requests. A parameter named "title"
// is required for a GET request to be processed, and a body with the fields
// defined below is required to process a POST request.
// validateRequest ensures that the provided terms are met by the request.
const { error, body } = await validateRequest(request, {
GET: {
params: ["title"],
},
POST: {
body: ["title", "artist", "album", "released", "genres"],
},
});
if (error) {
return json({ error: error.message }, { status: error.status });
}
// Handle POST request.
if (request.method === "POST") {
try {
// When we want to interact with DynamoDB, we send a command using the client
// instance. Here we are sending a PutItemCommand to insert the data from the
// request.
const {
$metadata: { httpStatusCode },
} = await client.send(
new PutItemCommand({
TableName: "songs",
Item: {
// Here 'S' implies that the value is of type string
// and 'N' implies a number.
title: { S: body.title },
artist: { S: body.artist },
album: { S: body.album },
released: { N: body.released },
genres: { S: body.genres },
},
}),
);
// On a successful put item request, dynamo returns a 200 status code (weird).
// So we test status code to verify if the data has been inserted and respond
// with the data provided by the request as a confirmation.
if (httpStatusCode === 200) {
return json({ ...body }, { status: 201 });
}
} catch (error) {
// If something goes wrong while making the request, we log
// the error for our reference.
console.log(error);
}
// If the execution reaches here it implies that the insertion wasn't successful.
return json({ error: "couldn't insert data" }, { status: 500 });
}
// Handle GET request.
try {
// We grab the title from the request and send a GetItemCommand
// to retrieve the information about the song.
const { searchParams } = new URL(request.url);
const { Item } = await client.send(
new GetItemCommand({
TableName: "songs",
Key: {
title: { S: searchParams.get("title") },
},
}),
);
// The Item property contains all the data, so if it's not undefined,
// we proceed to returning the information about the title
if (Item) {
return json({
title: Item.title.S,
artist: Item.artist.S,
album: Item.album.S,
released: Item.released.N,
genres: Item.genres.S,
});
}
} catch (error) {
console.log(error);
}
// We might reach here if an error is thrown during the request to database
// or if the Item is not found in the database.
// We reflect both conditions with a general message.
return json(
{
message: "couldn't find the title",
},
{ status: 404 },
);
}
```
Initialize git in your new project and
[push it to GitHub](https://docs.github.com/en/get-started/start-your-journey/hello-world#step-1-create-a-repository).
## Deploy the Application
Now that we have everything in place, let's deploy your new application!
1. In your browser, visit [Deno Deploy](https://dash.deno.com/new_project) and
link your GitHub account.
2. Select the repository which contains your new application.
3. You can give your project a name or allow Deno to generate one for you
4. Select `index.js` in the Entrypoint dropdown
5. Click **Deploy Project**
In order for your Application to work, we will need to configure its environment
variables.
On your project's success page, or in your project dashboard, click on **Add
environmental variables**. Under Environment Variables, click **+ Add
Variable**. Create the following variables:
1. `AWS_ACCESS_KEY_ID` - with the value from the CSV you downloaded
2. `AWS_SECRET_ACCESS_KEY` - with the value from the CSV you downloaded.
3. `AWS_TABLE_REGION` - with your table's region
Click to save the variables.
Let's test the API.
POST some data.
```sh
curl --request POST --data \
'{"title": "Old Town Road", "artist": "Lil Nas X", "album": "7", "released": "2019", "genres": "Country rap, Pop"}' \
--dump-header - https://<project-name>.deno.dev/songs
```
GET information about the title.
```sh
curl https://<project-name>.deno.dev/songs?title=Old%20Town%20Road
```
Congratulations on learning how to use DynamoDB with Deno Deploy!
---
# API server with FaunaDB
URL: https://docs.deno.com/deploy/tutorials/tutorial-faunadb
FaunaDB calls itself "The data API for modern applications". It's a database
with a GraphQL interface that enables you to interact with it using GraphQL.
Since we communicate with it using HTTP requests, we don't need to manage
connections, which suits serverless applications very well.
The tutorial assumes that you have [FaunaDB](https://fauna.com) and Deno Deploy
accounts, Deno Deploy CLI installed, and some basic knowledge of GraphQL.
- [Overview](#overview)
- [Build the API Endpoints](#build-the-api-endpoints)
- [Use FaunaDB for Persistence](#use-faunadb-for-persistence)
- [Deploy the API](#deploy-the-api)
## Overview
In this tutorial, let's build a small quotes API with endpoints to insert and
retrieve quotes. And later use FaunaDB to persist the quotes.
Let's start by defining the API endpoints.
```sh
# A POST request to the endpoint should insert the quote to the list.
POST /quotes/
# Body of the request.
{
"quote": "Don't judge each day by the harvest you reap but by the seeds that you plant.",
"author": "Robert Louis Stevenson"
}
# A GET request to the endpoint should return all the quotes from the database.
GET /quotes/
# Response of the request.
{
"quotes": [
{
"quote": "Don't judge each day by the harvest you reap but by the seeds that you plant.",
"author": "Robert Louis Stevenson"
}
]
}
```
Now that we understand how the endpoint should behave, let's proceed to build
it.
## Build the API Endpoints
First, create a file named `quotes.ts` and paste the following content.
Read through the comments in the code to understand what's happening.
```ts
import {
json,
serve,
validateRequest,
} from "https://deno.land/x/sift@0.6.0/mod.ts";
serve({
"/quotes": handleQuotes,
});
// To get started, let's just use a global array of quotes.
const quotes = [
{
quote: "Those who can imagine anything, can create the impossible.",
author: "Alan Turing",
},
{
quote: "Any sufficiently advanced technology is equivalent to magic.",
author: "Arthur C. Clarke",
},
];
async function handleQuotes(request: Request) {
// Make sure the request is a GET request.
const { error } = await validateRequest(request, {
GET: {},
});
// validateRequest populates the error if the request doesn't meet
// the schema we defined.
if (error) {
return json({ error: error.message }, { status: error.status });
}
// Return all the quotes.
return json({ quotes });
}
```
Run the above program using [the Deno CLI](https://deno.land).
```sh
deno run --allow-net=:8000 ./path/to/quotes.ts
# Listening on http://0.0.0.0:8000/
```
And curl the endpoint to see some quotes.
```sh
curl http://127.0.0.1:8000/quotes
# {"quotes":[
# {"quote":"Those who can imagine anything, can create the impossible.", "author":"Alan Turing"},
# {"quote":"Any sufficiently advanced technology is equivalent to magic.","author":"Arthur C. Clarke"}
# ]}
```
Let's proceed to handle the POST request.
Update the `validateRequest` function to make sure a POST request follows the
provided body schema.
```diff
- const { error } = await validateRequest(request, {
+ const { error, body } = await validateRequest(request, {
GET: {},
+ POST: {
+ body: ["quote", "author"]
+ }
});
```
Handle the POST request by updating the `handleQuotes` function with the
following code.
```diff
async function handleQuotes(request: Request) {
const { error, body } = await validateRequest(request, {
GET: {},
POST: {
body: ["quote", "author"],
},
});
if (error) {
return json({ error: error.message }, { status: error.status });
}
+ // Handle POST requests.
+ if (request.method === "POST") {
+ const { quote, author } = body as { quote: string; author: string };
+ quotes.push({ quote, author });
+ return json({ quote, author }, { status: 201 });
+ }
return json({ quotes });
}
```
Let's test it by inserting some data.
```sh
curl --dump-header - --request POST --data '{"quote": "A program that has not been tested does not work.", "author": "Bjarne Stroustrup"}' http://127.0.0.1:8000/quotes
```
The output should look something like the below.
```console
HTTP/1.1 201 Created
transfer-encoding: chunked
content-type: application/json; charset=utf-8
{"quote":"A program that has not been tested does not work.","author":"Bjarne Stroustrup"}
```
Awesome! We built our API endpoint, and it's working as expected. Since the data
is stored in memory, it will be lost after a restart. Let's use FaunaDB to
persist our quotes.
## Use FaunaDB for Persistence
Let's define our database schema using GraphQL Schema.
```gql
# We're creating a new type named `Quote` to represent a quote and its author.
type Quote {
quote: String!
author: String!
}
type Query {
# A new field in the Query operation to retrieve all quotes.
allQuotes: [Quote!]
}
```
Fauna has a graphql endpoint for its database, and it generates essential
mutations like create, update, delete for a data type defined in the schema. For
example, fauna will generate a mutation named `createQuote` to create a new
quote in the database for the data type `Quote`. And we're additionally defining
a query field named `allQuotes` that returns all the quotes in the database.
Let's get to writing the code to interact with fauna from Deno Deploy
applications.
To interact with fauna, we need to make a POST request to its graphql endpoint
with appropriate query and parameters to get the data in return. So let's
construct a generic function that will handle those things.
```typescript
async function queryFauna(
query: string,
variables: { [key: string]: unknown },
): Promise<{
data?: any;
error?: any;
}> {
// Grab the secret from the environment.
const token = Deno.env.get("FAUNA_SECRET");
if (!token) {
throw new Error("environment variable FAUNA_SECRET not set");
}
try {
// Make a POST request to fauna's graphql endpoint with body being
// the query and its variables.
const res = await fetch("https://graphql.fauna.com/graphql", {
method: "POST",
headers: {
authorization: `Bearer ${token}`,
"content-type": "application/json",
},
body: JSON.stringify({
query,
variables,
}),
});
const { data, errors } = await res.json();
if (errors) {
// Return the first error if there are any.
return { data, error: errors[0] };
}
return { data };
} catch (error) {
return { error };
}
}
```
Add this code to the `quotes.ts` file. Now let's proceed to update the endpoint
to use fauna.
```diff
async function handleQuotes(request: Request) {
const { error, body } = await validateRequest(request, {
GET: {},
POST: {
body: ["quote", "author"],
},
});
if (error) {
return json({ error: error.message }, { status: error.status });
}
if (request.method === "POST") {
+ const { quote, author, error } = await createQuote(
+ body as { quote: string; author: string }
+ );
+ if (error) {
+ return json({ error: "couldn't create the quote" }, { status: 500 });
+ }
return json({ quote, author }, { status: 201 });
}
return json({ quotes });
}
+async function createQuote({
+ quote,
+ author,
+}: {
+ quote: string;
+ author: string;
+}): Promise<{ quote?: string; author?: string; error?: string }> {
+ const query = `
+ mutation($quote: String!, $author: String!) {
+ createQuote(data: { quote: $quote, author: $author }) {
+ quote
+ author
+ }
+ }
+ `;
+
+ const { data, error } = await queryFauna(query, { quote, author });
+ if (error) {
+ return { error };
+ }
+
+ return data;
+}
```
Now that we've updated the code to insert new quotes, let's set up a fauna
database before proceeding to test the code.
Create a new database:
1. Go to https://dashboard.fauna.com (login if required) and click on **New
Database**
2. Fill the **Database Name** field and click on **Save**.
3. Click on **GraphQL** section visible on the left sidebar.
4. Create a file ending with the `.gql` extension with the content being the
schema we defined above, and import it.
Generate a secret to access the database:
1. Click on **Security** section and click on **New Key**.
2. Select **Server** role and click on **Save**. Copy the secret.
Let's now run the application with the secret.
```sh
FAUNA_SECRET=<the-secret-you-copied> deno run --allow-env --allow-net=:8000,graphql.fauna.com --watch quotes.ts
# Listening on http://0.0.0.0:8000
```
```sh
curl --dump-header - --request POST --data '{"quote": "A program that has not been tested does not work.", "author": "Bjarne Stroustrup"}' http://127.0.0.1:8000/quotes
```
Notice how the quote was added to your collection in FaunaDB.
Let's write a new function to get all the quotes.
```ts
async function getAllQuotes() {
const query = `
query {
allQuotes {
data {
quote
author
}
}
}
`;
const {
data: {
allQuotes: { data: quotes },
},
error,
} = await queryFauna(query, {});
if (error) {
return { error };
}
return { quotes };
}
```
And update the `handleQuotes` function with the following code.
```diff
-// To get started, let's just use a global array of quotes.
-const quotes = [
- {
- quote: "Those who can imagine anything, can create the impossible.",
- author: "Alan Turing",
- },
- {
- quote: "Any sufficiently advanced technology is equivalent to magic.",
- author: "Arthur C. Clarke",
- },
-];
async function handleQuotes(request: Request) {
const { error, body } = await validateRequest(request, {
GET: {},
POST: {
body: ["quote", "author"],
},
});
if (error) {
return json({ error: error.message }, { status: error.status });
}
if (request.method === "POST") {
const { quote, author, error } = await createQuote(
body as { quote: string; author: string },
);
if (error) {
return json({ error: "couldn't create the quote" }, { status: 500 });
}
return json({ quote, author }, { status: 201 });
}
+ // It's assumed that the request method is "GET".
+ {
+ const { quotes, error } = await getAllQuotes();
+ if (error) {
+ return json({ error: "couldn't fetch the quotes" }, { status: 500 });
+ }
+
+ return json({ quotes });
+ }
}
```
```sh
curl http://127.0.0.1:8000/quotes
```
You should see all the quotes we've inserted into the database. The final code
of the API is available at https://deno.com/examples/fauna.ts.
## Deploy the API
Now that we have everything in place, let's deploy your new API!
1. In your browser, visit [Deno Deploy](https://dash.deno.com/new_project) and
link your GitHub account.
2. Select the repository which contains your new API.
3. You can give your project a name or allow Deno to generate one for you
4. Select `quotes.ts` in the Entrypoint dropdown
5. Click **Deploy Project**
In order for your Application to work, we will need to configure its environment
variables.
On your project's success page, or in your project dashboard, click on **Add
environmental variables**. Under Environment Variables, click **+ Add
Variable**. Create a new variable called `FAUNA_SECRET` - The value should be
the secret we created earlier.
Click to save the variables.
On your project overview, click **View** to view the project in your browser,
add `/quotes` to the end of the URL to see the content of your FaunaDB.
---
# API server with Firestore (Firebase)
URL: https://docs.deno.com/deploy/tutorials/tutorial-firebase
Firebase is a platform developed by Google for creating mobile and web
applications. You can persist data on the platform using Firestore. In this
tutorial let's take a look at how we can use it to build a small API that has
endpoints to insert and retrieve information.
- [Overview](#overview)
- [Concepts](#concepts)
- [Setup Firebase](#setup-firebase)
- [Write the application](#write-the-application)
- [Deploy the application](#deploy-the-application)
## Overview
We are going to build an API with a single endpoint that accepts `GET` and
`POST` requests and returns a JSON payload of information:
```sh
# A GET request to the endpoint without any sub-path should return the details
# of all songs in the store:
GET /songs
# response
[
{
title: "Song Title",
artist: "Someone",
album: "Something",
released: "1970",
genres: "country rap",
}
]
# A GET request to the endpoint with a sub-path to the title should return the
# details of the song based on its title.
GET /songs/Song%20Title # '%20' == space
# response
{
title: "Song Title"
artist: "Someone"
album: "Something",
released: "1970",
genres: "country rap",
}
# A POST request to the endpoint should insert the song details.
POST /songs
# post request body
{
title: "A New Title"
artist: "Someone New"
album: "Something New",
released: "2020",
genres: "country rap",
}
```
In this tutorial, we will be:
- Creating and setting up a
[Firebase Project](https://console.firebase.google.com/).
- Using a text editor to create our application.
- Creating a [gist](https://gist.github.com/) to "host" our application.
- Deploying our application on [Deno Deploy](https://dash.deno.com/).
- Testing our application using [cURL](https://curl.se/).
## Concepts
There are a few concepts that help in understanding why we take a particular
approach in the rest of the tutorial, and can help in extending the application.
You can skip ahead to [Setup Firebase](#setup-firebase) if you want.
### Deploy is browser-like
Even though Deploy runs in the cloud, in many aspects the APIs it provides are
based on web standards. So when using Firebase, the web-oriented Firebase APIs
are a better fit than those designed for server runtimes. That means we will be
using the Firebase web libraries in this tutorial.
### Firebase uses XHR
Firebase uses a wrapper around Closure's
[WebChannel](https://google.github.io/closure-library/api/goog.net.WebChannel.html)
and WebChannel was originally built around
[`XMLHttpRequest`](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest).
While WebChannel supports the more modern `fetch()` API, current versions of
Firebase for the web do not uniformly instantiate WebChannel with `fetch()`
support, and instead use `XMLHttpRequest`.
While Deploy is browser-like, it does not support `XMLHttpRequest`.
`XMLHttpRequest` is a "legacy" browser API that has several limitations and
features that would be difficult to implement in Deploy, which means it is
unlikely that Deploy will ever implement that API.
So, in this tutorial we will be using a limited _polyfill_ that provides enough
of the `XMLHttpRequest` feature set to allow Firebase/WebChannel to communicate
with the server.
### Firebase auth
Firebase offers quite [a few options](https://firebase.google.com/docs/auth)
around authentication. In this tutorial we are going to be using email and
password authentication.
When a user is logged in, Firebase can persist that authentication. Because we
are using the web libraries for Firebase, persisting the authentication allows
a user to navigate away from a page and not need to log in again when
returning.
Firebase allows authentication to be persisted in local storage, session storage
or none.
In a Deploy context, it is a little different. A Deploy deployment may remain
"active", meaning that in-memory state can be present from request to request,
but under various conditions a new deployment can be started up or shut down.
Currently, Deploy doesn't offer any persistence outside of
in-memory allocation. In addition it doesn't currently offer the global
`localStorage` or `sessionStorage`, which is what is used by Firebase to store
the authentication information.
In order to reduce the need to re-authenticate but also ensure that we can
support multiple users with a single deployment, we are going to use a polyfill
that will allow us to provide a `localStorage` interface to Firebase, but store
the information as a cookie in the client.
## Setup Firebase
[Firebase](https://firebase.google.com/) is a feature-rich platform. All the
details of Firebase administration are beyond the scope of this tutorial. We
will cover what is needed for this tutorial.
1. Create a new project under the
[Firebase console](https://console.firebase.google.com/).
2. Add a web application to your project. Make note of the `firebaseConfig`
provided in the setup wizard. It should look something like the below. We
will use this later:
```js title="firebase.js"
var firebaseConfig = {
apiKey: "APIKEY",
authDomain: "example-12345.firebaseapp.com",
projectId: "example-12345",
storageBucket: "example-12345.appspot.com",
messagingSenderId: "1234567890",
appId: "APPID",
};
```
3. Under `Authentication` in the administration console, you will want to
enable the `Email/Password` sign-in method.
4. You will want to add a user and password under `Authentication` and then the
`Users` section, making note of the values used for later.
5. Add `Firestore Database` to your project. The console will allow you to set
it up in _production mode_ or _test mode_. It is up to you how you configure
this, but _production mode_ will require you to set up further security rules.
6. Add a collection to the database named `songs`. This will require you to add
at least one document. Just set the document with an _Auto ID_.
_Note_ depending on the status of your Google account, there may be other setup
and administration steps that need to occur.
## Write the application
We want to create our application as a JavaScript file in our favorite editor.
The first thing we will do is import the `XMLHttpRequest` polyfill that Firebase
needs to work under Deploy as well as a polyfill for `localStorage` to allow the
Firebase auth to persist logged in users:
```js title="firebase.js"
import "https://deno.land/x/xhr@0.1.1/mod.ts";
import { installGlobals } from "https://deno.land/x/virtualstorage@0.1.0/mod.ts";
installGlobals();
```
> ℹ️ we are using the current version of packages at the time of the writing of
> this tutorial. They may not be up-to-date and you may want to double check
> current versions.
Because Deploy has a lot of the web standard APIs, it is best to use the web
libraries for Firebase under Deploy. Currently, v9 is still in beta for
Firebase, so we will use v8 in this tutorial:
```js title="firebase.js"
import firebase from "https://esm.sh/firebase@8.7.0/app";
import "https://esm.sh/firebase@8.7.0/auth";
import "https://esm.sh/firebase@8.7.0/firestore";
```
We are also going to use [oak](https://deno.land/x/oak) as the middleware
framework for creating the APIs, including middleware that will take the
`localStorage` values and set them as client cookies:
```js title="firebase.js"
import {
Application,
Router,
Status,
} from "https://deno.land/x/oak@v7.7.0/mod.ts";
import { virtualStorage } from "https://deno.land/x/virtualstorage@0.1.0/middleware.ts";
```
Now we need to set up our Firebase application. We will be getting the
configuration from environment variables we will setup later under the key
`FIREBASE_CONFIG` and get references to the parts of Firebase we are going to
use:
```js title="firebase.js"
const firebaseConfig = JSON.parse(Deno.env.get("FIREBASE_CONFIG"));
const firebaseApp = firebase.initializeApp(firebaseConfig, "example");
const auth = firebase.auth(firebaseApp);
const db = firebase.firestore(firebaseApp);
```
We are also going to set up the application to handle signed-in users per
request. So we will create a map of users that we have previously signed in
during this deployment. While in this tutorial we will only ever have one
signed-in user, the code can easily be adapted to allow clients to sign in
individually:
```js title="firebase.js"
const users = new Map();
```
Let's create our middleware router and create three different middleware
handlers to support `GET` and `POST` of `/songs` and a `GET` of a specific song
on `/songs/{title}`:
```js title="firebase.js"
const router = new Router();
// Returns any songs in the collection
router.get("/songs", async (ctx) => {
const querySnapshot = await db.collection("songs").get();
ctx.response.body = querySnapshot.docs.map((doc) => doc.data());
ctx.response.type = "json";
});
// Returns the first document that matches the title
router.get("/songs/:title", async (ctx) => {
const { title } = ctx.params;
const querySnapshot = await db.collection("songs").where("title", "==", title)
.get();
const song = querySnapshot.docs.map((doc) => doc.data())[0];
if (!song) {
ctx.response.status = 404;
ctx.response.body = `The song titled "${ctx.params.title}" was not found.`;
ctx.response.type = "text";
} else {
ctx.response.body = querySnapshot.docs.map((doc) => doc.data())[0];
ctx.response.type = "json";
}
});
function isSong(value) {
return typeof value === "object" && value !== null && "title" in value;
}
// Removes any songs with the same title and adds the new song
router.post("/songs", async (ctx) => {
const body = ctx.request.body();
if (body.type !== "json") {
ctx.throw(Status.BadRequest, "Must be a JSON document");
}
const song = await body.value;
if (!isSong(song)) {
ctx.throw(Status.BadRequest, "Payload was not well formed");
}
const querySnapshot = await db
.collection("songs")
.where("title", "==", song.title)
.get();
await Promise.all(querySnapshot.docs.map((doc) => doc.ref.delete()));
const songsRef = db.collection("songs");
await songsRef.add(song);
ctx.response.status = Status.NoContent;
});
```
Ok, we are almost done. We just need to create our middleware application, and
add the `localStorage` middleware we imported:
```js title="firebase.js"
const app = new Application();
app.use(virtualStorage());
```
And then we need to add middleware to authenticate the user. In this tutorial we
are simply grabbing the username and password from the environment variables we
will be setting up, but this could easily be adapted to redirect a user to a
sign-in page if they are not logged in:
```js title="firebase.js"
app.use(async (ctx, next) => {
const signedInUid = ctx.cookies.get("LOGGED_IN_UID");
const signedInUser = signedInUid != null ? users.get(signedInUid) : undefined;
if (!signedInUid || !signedInUser || !auth.currentUser) {
const creds = await auth.signInWithEmailAndPassword(
Deno.env.get("FIREBASE_USERNAME"),
Deno.env.get("FIREBASE_PASSWORD"),
);
const { user } = creds;
if (user) {
users.set(user.uid, user);
ctx.cookies.set("LOGGED_IN_UID", user.uid);
} else if (signedInUser && signedInUser.uid !== auth.currentUser?.uid) {
await auth.updateCurrentUser(signedInUser);
}
}
return next();
});
```
Now let's add our router to the middleware application and set the application
to listen on port 8000:
```js title="firebase.js"
app.use(router.routes());
app.use(router.allowedMethods());
await app.listen({ port: 8000 });
```
Now we have an application that should serve up our APIs.
## Deploy the Application
Now that we have everything in place, let's deploy your new application!
1. In your browser, visit [Deno Deploy](https://dash.deno.com/new_project) and
link your GitHub account.
2. Select the repository which contains your new application.
3. You can give your project a name or allow Deno to generate one for you
4. Select `firebase.js` in the Entrypoint dropdown
5. Click **Deploy Project**
In order for your Application to work, we will need to configure its environment
variables.
On your project's success page, or in your project dashboard, click on **Add
environmental variables**. Under Environment Variables, click **+ Add
Variable**. Create the following variables:
1. `FIREBASE_USERNAME` - The Firebase user (email address) that was added above
2. `FIREBASE_PASSWORD` - The Firebase user password that was added above
3. `FIREBASE_CONFIG` - The configuration of the Firebase application as a string
of JSON
The configuration needs to be a valid JSON string to be readable by the
application. If the code snippet given when setting up looked like this:
```js
var firebaseConfig = {
apiKey: "APIKEY",
authDomain: "example-12345.firebaseapp.com",
projectId: "example-12345",
storageBucket: "example-12345.appspot.com",
messagingSenderId: "1234567890",
appId: "APPID",
};
```
You would need to set the value of the string to this (noting that spacing and
new lines are not required):
```json
{
"apiKey": "APIKEY",
"authDomain": "example-12345.firebaseapp.com",
"projectId": "example-12345",
"storageBucket": "example-12345.appspot.com",
"messagingSenderId": "1234567890",
"appId": "APPID"
}
```
Click to save the variables.
Now let's take our API for a spin.
We can create a new song:
```sh
curl --request POST \
--header "Content-Type: application/json" \
--data '{"title": "Old Town Road", "artist": "Lil Nas X", "album": "7", "released": "2019", "genres": "Country rap, Pop"}' \
--dump-header - https://<project-name>.deno.dev/songs
```
And we can get all the songs in our collection:
```sh
curl https://<project-name>.deno.dev/songs
```
And we get specific information about a title we created:
```sh
curl https://<project-name>.deno.dev/songs/Old%20Town%20Road
```
---
# Simple HTTP server
URL: https://docs.deno.com/deploy/tutorials/tutorial-http-server
In this tutorial, let's build an HTTP server that responds to all incoming HTTP
requests with `Hello, world!` and a `200 OK` HTTP status. We will be using the
Deno Deploy playground to deploy and edit this script.
## Step 1: Write the HTTP server script
A simple HTTP server can be written with a single line of code in Deno using
[`Deno.serve`](https://docs.deno.com/api/deno/~/Deno.serve):
```js title="One-line HTTP server"
Deno.serve(() => new Response("Hello, world!"));
```
While this type of server is useful for getting started, `Deno.serve` is capable
of supporting more advanced usage as well
([API reference docs](https://docs.deno.com/api/deno/~/Deno.serve)). Below is an
example of a more complex server that takes advantage of other API features.
```ts title="More complex Hello World server"
Deno.serve({
onListen: ({ port }) => {
console.log("Deno server listening on *:", port);
},
}, (req: Request, conn: Deno.ServeHandlerInfo) => {
// Get information about the incoming request
const method = req.method;
const ip = conn.remoteAddr.hostname;
console.log(`${ip} just made an HTTP ${method} request.`);
// Return a web standard Response object
return new Response("Hello, world!");
});
```
## Step 2: Deploy script to Deno Deploy
1. Create a new playground project by visiting
[your Deno dashboard](https://dash.deno.com/account/overview), and clicking
the **New Playground** button.
2. On the next screen, copy the code above (either the short or the longer
example) into the editor on the left side of the screen.
3. Press the **Save & Deploy** button on the right side of the top toolbar (or
press Ctrl+S).
You can preview the result on the right side of the playground editor, in the
preview pane.
You will see that if you change the script (for example `Hello, World!` ->
`Hello, Galaxy!`) and then re-deploy, the preview will automatically update. The
URL shown at the top of the preview pane can be used to visit the deployed page
from anywhere.
Even in the playground editor, scripts are deployed worldwide across our entire
global network. This guarantees fast and reliable performance, no matter the
location of your users.
---
# Build a blog with Hugo
URL: https://docs.deno.com/deploy/tutorials/tutorial-hugo-blog
Tutorial [here](https://deno.com/blog/hugo-blog-with-deno-deploy).
---
# API server with Postgres
URL: https://docs.deno.com/deploy/tutorials/tutorial-postgres
Postgres is a popular database for web applications because of its flexibility
and ease of use. This guide will show you how to use Deno Deploy with Postgres.
- [API server with Postgres](#api-server-with-postgres)
- [Overview](#overview)
- [Setup Postgres](#setup-postgres)
- [Neon Postgres](#neon-postgres)
- [Supabase](#supabase)
- [Write and deploy the application](#write-and-deploy-the-application)
## Overview
We are going to build the API for a simple todo list application. It will have
two endpoints:
`GET /todos` will return a list of all todos, and `POST /todos` will create a
new todo.
```
GET /todos
# returns a list of all todos
[
  {
    "id": 1,
    "title": "Buy bread"
  },
  {
    "id": 2,
    "title": "Buy rice"
  },
  {
    "id": 3,
    "title": "Buy spices"
  }
]

POST /todos
# creates a new todo
"Buy milk"
# returns a 201 status code
```
In this tutorial, we will be:
- Creating and setting up a [Postgres](https://www.postgresql.org/) instance on
[Neon Postgres](https://neon.tech/) or [Supabase](https://supabase.com).
- Using a [Deno Deploy](../manual/deployctl.md) Playground to develop and deploy
the application.
- Testing our application using [cURL](https://curl.se/).
## Setup Postgres
> This tutorial will focus entirely on connecting to Postgres unencrypted. If
> you would like to use encryption with a custom CA certificate, use the
> documentation [here](https://deno-postgres.com/#/?id=ssltls-connection).
To get started we need to create a new Postgres instance for us to connect to.
For this tutorial, you can use either [Neon Postgres](https://neon.tech/) or
[Supabase](https://supabase.com), as they both provide free, managed Postgres
instances. If you'd like to host your database somewhere else, you can do that
too.
### Neon Postgres
1. Visit https://neon.tech/ and click **Sign up** to sign up with an email,
GitHub, Google, or partner account. After signing up, you are directed to the
Neon Console to create your first project.
2. Enter a name for your project, select a Postgres version, provide a database
name, and select a region. Generally, you'll want to select the region
closest to your application. When you're finished, click **Create project**.
3. You are presented with the connection string for your new project, which you
can use to connect to your database. Save the connection string, which looks
something like this:
```sh
postgres://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require
```
### Supabase
1. Visit https://app.supabase.io/ and click "New project".
2. Select a name, password, and region for your database. Make sure to save the
password, as you will need it later.
3. Click "Create new project". Creating the project can take a while, so be
patient.
4. Once the project is created, navigate to the "Database" tab on the left.
5. Go to the "Connection Pooling" settings, and copy the connection string from
the "Connection String" field. This is the connection string you will use to
connect to your database. Insert the password you saved earlier into this
string, and then save the string somewhere - you will need it later.
## Write and deploy the application
We can now start writing our application. To start, we will create a new Deno
Deploy playground in the control panel: press the "New Playground" button on
https://dash.deno.com/projects.
This will open up the playground editor. Before we can actually start writing
code, we'll need to put our Postgres connection string into the environment
variables. To do this, click on the project name in the top left corner of the
editor. This will open up the project settings.
From here, you can navigate to the "Settings" -> "Environment Variable" tab via
the left navigation menu. Enter "DATABASE_URL" into the "Key" field, and paste
your connection string into the "Value" field. Now, press "Add". Your
environment variable is now set.
Let's return to the editor: to do this, go to the "Overview" tab via the
left navigation menu, and press "Open Playground". Let's start by serving HTTP
requests using `Deno.serve()`:
```ts
Deno.serve(async (req) => {
  return new Response("Not Found", { status: 404 });
});
```
You can already save this code using Ctrl+S (or
Cmd+S on Mac). You should see the preview page on the
right refresh automatically: it now says "Not Found".
Next, let's import the Postgres module, read the connection string from the
environment variables, and create a connection pool.
```ts
import * as postgres from "https://deno.land/x/postgres@v0.14.0/mod.ts";
// Get the connection string from the environment variable "DATABASE_URL"
const databaseUrl = Deno.env.get("DATABASE_URL")!;
// Create a database pool with three connections that are lazily established
const pool = new postgres.Pool(databaseUrl, 3, true);
```
Again, you can save this code now, but this time you should see no changes. We
are creating a connection pool, but we are not actually running any queries
against the database yet. Before we can do that, we need to set up our table
schema.
We want to store a list of todos. Let's create a table called `todos` with an
auto-increment `id` column and a `title` column:
```ts
const pool = new postgres.Pool(databaseUrl, 3, true);

// Connect to the database
const connection = await pool.connect();
try {
  // Create the table
  await connection.queryObject`
    CREATE TABLE IF NOT EXISTS todos (
      id SERIAL PRIMARY KEY,
      title TEXT NOT NULL
    )
  `;
} finally {
  // Release the connection back into the pool
  connection.release();
}
```
Now that we have a table, we can add the HTTP handlers for the GET and POST
endpoints.
```ts
Deno.serve(async (req) => {
  // Parse the URL and check that the requested endpoint is /todos. If it is
  // not, return a 404 response.
  const url = new URL(req.url);
  if (url.pathname !== "/todos") {
    return new Response("Not Found", { status: 404 });
  }

  // Grab a connection from the database pool
  const connection = await pool.connect();

  try {
    switch (req.method) {
      case "GET": { // This is a GET request. Return a list of all todos.
        // Run the query
        const result = await connection.queryObject`
          SELECT * FROM todos
        `;

        // Encode the result as JSON
        const body = JSON.stringify(result.rows, null, 2);

        // Return the result as JSON
        return new Response(body, {
          headers: { "content-type": "application/json" },
        });
      }
      case "POST": { // This is a POST request. Create a new todo.
        // Parse the request body as JSON. If the request body fails to parse,
        // is not a string, or is longer than 256 chars, return a 400 response.
        const title = await req.json().catch(() => null);
        if (typeof title !== "string" || title.length > 256) {
          return new Response("Bad Request", { status: 400 });
        }

        // Insert the new todo into the database
        await connection.queryObject`
          INSERT INTO todos (title) VALUES (${title})
        `;

        // Return a 201 Created response
        return new Response("", { status: 201 });
      }
      default: // If this is neither a POST nor a GET, return a 405 response.
        return new Response("Method Not Allowed", { status: 405 });
    }
  } catch (err) {
    console.error(err);

    // If an error occurs, return a 500 response
    return new Response(`Internal Server Error\n\n${err.message}`, {
      status: 500,
    });
  } finally {
    // Release the connection back into the pool
    connection.release();
  }
});
```
And there we go - application done. Deploy this code by saving the editor. You
can now POST to the `/todos` endpoint to create a new todo, and you can get a
list of all todos by making a GET request to `/todos`:
```sh
$ curl -X GET https://tutorial-postgres.deno.dev/todos
[]⏎
$ curl -X POST -d '"Buy milk"' https://tutorial-postgres.deno.dev/todos
$ curl -X GET https://tutorial-postgres.deno.dev/todos
[
  {
    "id": 1,
    "title": "Buy milk"
  }
]⏎
```
It's all working 🎉
As an extra challenge, try adding a `DELETE /todos/:id` endpoint to delete a todo.
The [URLPattern][urlpattern] API can help with this.
[urlpattern]: https://developer.mozilla.org/en-US/docs/Web/API/URL_Pattern_API
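For instance, here is a minimal sketch of how the `DELETE` route could slot into
the handler above. The `todoPattern` name is ours, and the snippet reuses the
`pool` from earlier:
```ts
// A sketch for the extra challenge; not part of the original tutorial.
// URLPattern extracts the :id segment from the request URL.
const todoPattern = new URLPattern({ pathname: "/todos/:id" });

// Inside the `Deno.serve` handler, before the existing /todos check:
const match = todoPattern.exec(req.url);
if (req.method === "DELETE" && match) {
  const id = Number(match.pathname.groups.id);
  const connection = await pool.connect();
  try {
    // Delete the todo with the matching id
    await connection.queryObject`DELETE FROM todos WHERE id = ${id}`;
    return new Response("", { status: 204 });
  } finally {
    connection.release();
  }
}
```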
---
# Use WordPress as a headless CMS
URL: https://docs.deno.com/deploy/tutorials/tutorial-wordpress-frontend
WordPress is the most popular CMS in the world, but is difficult to use in a
"headless" form, i.e. with a custom frontend.
In this tutorial, we show how to use Fresh, a modern web framework built on
Deno, to create a frontend for headless WordPress.
## Step 1: Clone the Fresh WordPress theme
Fresh offers two ready-to-go themes, one for a blog and one for a shopfront.
### Blog
```bash
git clone https://github.com/denoland/fresh-wordpress-themes.git
cd fresh-wordpress-themes/blog
deno task docker
```
### Shop
```bash
git clone https://github.com/denoland/fresh-wordpress-themes.git
cd fresh-wordpress-themes/corporate
deno task docker
```
Note that the Blog and Shop themes use different setups for the WordPress
server. Make sure you run the `deno task docker` command in the right directory.
## Step 2: Open another terminal in the same directory and run:
```sh
deno task start
```
## Step 3: Visit http://localhost:8000/
You can manage the contents of the site via the WordPress dashboard at
http://localhost/wp-admin (username: `user`, password: `password`).
## WordPress hosting options
There are a lot of options for hosting WordPress on the internet. Many cloud
providers
[have](https://aws.amazon.com/getting-started/hands-on/launch-a-wordpress-website/)
[special](https://cloud.google.com/wordpress)
[guides](https://learn.microsoft.com/en-us/azure/app-service/quickstart-wordpress)
and
[templates](https://console.cloud.google.com/marketplace/product/click-to-deploy-images/wordpress)
dedicated to WordPress. There are also dedicated hosting services for WordPress,
such as [Bluehost](https://www.bluehost.com/),
[DreamHost](https://www.dreamhost.com/),
[SiteGround](https://www.siteground.com/), etc. You can choose which is the best
fit for your needs from these options.
There are also many resources on the internet about how to scale your WordPress
instances.
---
# Deploy a React app with Vite
URL: https://docs.deno.com/deploy/tutorials/vite
This tutorial covers how to deploy a Vite Deno and React app on Deno Deploy.
## Step 1: Create a Vite app
Let's use [Vite](https://vitejs.dev/) to quickly scaffold a Deno and React app:
```sh
deno run -RWE npm:create-vite-extra@latest
```
We'll name our project `vite-project`. Be sure to select `deno-react` in the
project configuration.
Then, `cd` into the newly created project folder.
## Step 2: Run the repo locally
To see and edit your new project locally you can run:
```sh
deno task dev
```
## Step 3: Deploy your project with Deno Deploy
Now that we have everything in place, let's deploy your new project!
1. In your browser, visit [Deno Deploy](https://dash.deno.com/new_project) and
link your GitHub account.
2. Select the repository which contains your new Vite project.
3. You can give your project a name or allow Deno to generate one for you.
4. Select **Vite** from the **Framework Preset** dropdown. This will populate
the **Entrypoint** form field.
5. Leave the **Install step** empty.
6. Set the **Build step** to `deno task build`.
7. Set the **Root directory** to `dist`.
8. Click **Deploy Project**.
> NB. The entrypoint that is set will be `jsr:@std/http/file-server`. Note that
> this is not a file that exists in the Vite repo itself. Instead, it is an
> external program. When run, this program uploads all the static asset files in
> your current repo (`vite-project/dist`) to Deno Deploy. Then when you navigate
> to the deployment URL, it serves up the local directory.
### `deployctl`
Alternatively, you can use `deployctl` directly to deploy `vite-project` to Deno
Deploy.
```console
cd /dist
deployctl deploy --project= --entrypoint=jsr:@std/http/file-server
```
---
# How to use Apollo with Deno
> Step-by-step tutorial on integrating Apollo GraphQL with Deno. Learn how to set up an Apollo Server, define schemas, implement resolvers, and build a complete GraphQL API using TypeScript.
URL: https://docs.deno.com/examples/tutorials/apollo
[Apollo Server](https://www.apollographql.com/) is a GraphQL server that you can
set up in minutes and use with your existing data source (or REST API). You can
then connect any GraphQL client to it to receive the data and take advantage of
GraphQL benefits, such as type-checking and efficient fetching.
We're going to get a simple Apollo server up and running that will allow us to
query some local data. We're only going to need three files for this:
1. `schema.ts` to set up our data model
2. `resolvers.ts` to set up how we're going to populate the data fields in our
schema
3. Our `main.ts` where the server is going to launch
We'll start by creating them:
```shell
touch schema.ts resolvers.ts main.ts
```
Let's go through setting up each.
[View source here.](https://github.com/denoland/examples/tree/main/with-apollo)
## schema.ts
Our `schema.ts` file describes our data. In this case, our data is a list of
dinosaurs. We want our users to be able to get the name and a short description
of each dino. In GraphQL language, this means that `Dinosaur` is our **type**,
and `name` and `description` are our **fields**. We can also define the data
type for each field. In this case, both are strings.
This is also where we describe the queries we allow for our data, using the
special **Query** type in GraphQL. We have two queries:
- `dinosaurs` which gets a list of all dinosaurs
- `dinosaur` which takes in the `name` of a dinosaur as an argument and returns
information about that one type of dinosaur.
We're going to export all this within our `typeDefs` type definitions variable:
```tsx
export const typeDefs = `
  type Dinosaur {
    name: String
    description: String
  }

  type Query {
    dinosaurs: [Dinosaur]
    dinosaur(name: String): Dinosaur
  }
`;
```
If we wanted to write data, this is also where we would describe the
**Mutation** to do so. Mutations are how you write data with GraphQL. Because we
are using a static dataset here, we won't be writing anything.
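For illustration only, a write operation would be declared with a `Mutation`
type along these lines (the `addDinosaur` field is hypothetical and not used in
this tutorial):
```graphql
type Mutation {
  addDinosaur(name: String, description: String): Dinosaur
}
```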
## resolvers.ts
A resolver is responsible for populating the data for each query. Here we have
our list of dinosaurs, and all the resolver is going to do is either a) pass the
entire list to the client if the user requests the `dinosaurs` query, or b) pass
just one if the user requests the `dinosaur` query.
```tsx
const dinosaurs = [
  {
    name: "Aardonyx",
    description: "An early stage in the evolution of sauropods.",
  },
  {
    name: "Abelisaurus",
    description: '"Abel\'s lizard" has been reconstructed from a single skull.',
  },
];

export const resolvers = {
  Query: {
    dinosaurs: () => dinosaurs,
    dinosaur: (_: any, args: any) => {
      return dinosaurs.find((dinosaur) => dinosaur.name === args.name);
    },
  },
};
```
With the latter, we pass the arguments from the client into a function to match
the name to a name in our dataset.
## main.ts
In our `main.ts` we're going to import the `ApolloServer` as well as `graphql`
and our `typeDefs` from the schema and our resolvers:
```tsx
import { ApolloServer } from "npm:@apollo/server@^4.1";
import { startStandaloneServer } from "npm:@apollo/server@^4.1/standalone";
import { graphql } from "npm:graphql@16.6";
import { typeDefs } from "./schema.ts";
import { resolvers } from "./resolvers.ts";

const server = new ApolloServer({
  typeDefs,
  resolvers,
});

const { url } = await startStandaloneServer(server, {
  listen: { port: 8000 },
});

console.log(`Server running on: ${url}`);
```
We pass our `typeDefs` and `resolvers` to `ApolloServer` to spool up a new
server. Finally, `startStandaloneServer` is a helper function to get the server
up and running quickly.
## Running the server
All that is left to do now is run the server:
```shell
deno run --allow-net --allow-read --allow-env main.ts
```
You should see `Server running on: 127.0.0.1:8000` in your terminal. If you go
to that address you will see the Apollo sandbox where we can enter our
`dinosaurs` query:
```graphql
query {
  dinosaurs {
    name
    description
  }
}
```
This will return our dataset:
```graphql
{
  "data": {
    "dinosaurs": [
      {
        "name": "Aardonyx",
        "description": "An early stage in the evolution of sauropods."
      },
      {
        "name": "Abelisaurus",
        "description": "\"Abel's lizard\" has been reconstructed from a single skull."
      }
    ]
  }
}
```
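If you'd rather test from the command line, the same query can also be sent as
a plain HTTP POST (a sketch, assuming the server above is listening on port
8000):
```shell
curl --request POST \
  --header "Content-Type: application/json" \
  --data '{"query": "{ dinosaurs { name description } }"}' \
  http://localhost:8000/
```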
Or if we want just one `dinosaur`:
```graphql
query {
  dinosaur(name: "Aardonyx") {
    name
    description
  }
}
```
Which returns:
```graphql
{
  "data": {
    "dinosaur": {
      "name": "Aardonyx",
      "description": "An early stage in the evolution of sauropods."
    }
  }
}
```
Awesome!
[Learn more about using Apollo and GraphQL in their tutorials](https://www.apollographql.com/tutorials/).
---
# Build Astro with Deno
> Step-by-step tutorial on building web applications with Astro and Deno. Learn how to scaffold projects, create dynamic pages, implement SSR, and deploy your Astro sites using Deno's Node.js compatibility.
URL: https://docs.deno.com/examples/tutorials/astro
[Astro](https://astro.build/) is a modern web framework focused on
content-centric websites, which leverages islands architecture and sends zero
JavaScript to the client by default. And with the recent release of
[Deno 2](https://deno.com/2), now
[backwards compatible with Node and npm](https://deno.com/blog/v2.0#backwards-compatible-forward-thinking),
the experience of using Astro and Deno has improved.
We’ll go over how to build a simple Astro project using Deno:
- [Scaffold an Astro project](#scaffold-an-astro-project)
- [Update index page](#update-index-page-to-list-all-dinosaurs)
- [Add a dynamic SSR page](#add-a-dynamic-ssr-page)
- [What’s next?](#whats-next)
Feel free to skip directly to
[the source code](https://github.com/denoland/examples/tree/main/with-astro) or
follow along below!
## Scaffold an Astro project
Astro provides a CLI tool to quickly scaffold a new Astro project. In your
terminal, run the command `deno init --npm astro@latest` to create a new Astro
project with Deno. For this tutorial, we’ll select the “Empty” template so we
can start from scratch, and skip installing dependencies so we can install them
with Deno later:
```console
deno init --npm astro@latest

 astro   Launch sequence initiated.

   dir   Where should we create your new project?
         ./dino-app

  tmpl   How would you like to start your new project?
         Empty

    ts   Do you plan to write TypeScript?
         Yes

   use   How strict should TypeScript be?
         Strict

  deps   Install dependencies?
         No
      ◼  No problem!
         Remember to install dependencies after setup.

   git   Initialize a new git repository?
         Yes

      ✔  Project initialized!
         ■ Template copied
         ■ TypeScript customized
         ■ Git initialized

  next   Liftoff confirmed. Explore your project!
         Enter your project directory using cd ./dino-app
         Run npm run dev to start the dev server. CTRL+C to stop.
         Add frameworks like react or tailwind using astro add.

         Stuck? Join us at https://astro.build/chat

 ╭─────╮  Houston:
 │ ◠ ◡ ◠  Good luck out there, astronaut! 🚀
 ╰──🍫─╯
```
As of Deno 2,
[Deno can also install packages with the new `deno install` command](https://deno.com/blog/v2.0#deno-is-now-a-package-manager-with-deno-install).
So let’s run
[`deno install`](https://docs.deno.com/runtime/reference/cli/install/) with the
flag `--allow-scripts` to execute any npm lifecycle scripts:
```bash
deno install --allow-scripts
```
To see what commands we have, let’s run `deno task`:
```bash
deno task

Available tasks:
- dev (package.json)
    astro dev
- start (package.json)
    astro dev
- build (package.json)
    astro check && astro build
- preview (package.json)
    astro preview
- astro (package.json)
    astro
```
We can start the Astro server with `deno task dev`.
## Configure the formatter
`deno fmt` supports Astro files with the
[`--unstable-component`](https://docs.deno.com/runtime/reference/cli/fmt/#formatting-options-unstable-component)
flag. To use it, run this command:
```sh
deno fmt --unstable-component
```
To configure `deno fmt` to always format your Astro files, add this at the top
level of your `deno.json` file:
```json
"unstable": ["fmt-component"]
```
## Update index page to list all dinosaurs
Our app will display facts about a variety of dinosaurs. The first page to
create will be the index page that lists links to all dinosaurs in our
“database”.
First, let’s create the data that will be used in the app. In this example,
we’ll hardcode the data in a json file, but you can use any data storage in
practice. We’ll create a `data` folder in the root of the project, then a
`dinosaurs.json` file with
[this text](https://github.com/denoland/tutorial-with-react/blob/main/api/data.json)
in it.
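The file is an array of objects with `name` and `description` fields, along
these lines (abridged):
```json
[
  {
    "name": "Aardonyx",
    "description": "An early stage in the evolution of sauropods."
  },
  {
    "name": "Abelisaurus",
    "description": "\"Abel's lizard\" has been reconstructed from a single skull."
  }
]
```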
> ⚠️️ In this tutorial we hard code the data. But you can connect to
> [a variety of databases](https://docs.deno.com/runtime/tutorials/connecting_to_databases/)
> and
> [even use ORMs like Prisma](https://docs.deno.com/runtime/tutorials/how_to_with_npm/prisma/)
> with Deno.
Once we have the data, let’s create an index page that lists all of the
dinosaurs. In the `./src/pages/index.astro` page, let’s write the following:
```jsx
---
import data from "../../data/dinosaurs.json";
---

<h1>Dinosaurs</h1>
<ul>
  {data.map((d) => <li><a href={`/${d.name.toLowerCase()}`}>{d.name}</a></li>)}
</ul>
```
Let’s start the server with `deno task dev` and point our browser to
`localhost:4321`.
Awesome! But when you click on a dinosaur, it 404’s. Let’s fix that.
## Add a dynamic SSR page
Our app will display facts about a variety of dinosaurs. In order to do that,
we'll create a dynamic server-side rendered ("SSR") page, which
[offers better performance for end users while improving your pages' SEO](https://deno.com/blog/the-future-and-past-is-server-side-rendering).
Next, let’s create a new file under `/src/pages/` called `[dinosaur].astro`. At
the top of the file, we'll add some logic to pull data from our hardcoded data
source and filter that against the `dinosaur` parameter set from the URL path.
At the bottom, we’ll render the data. Your file should look like this:
```jsx
---
import data from "../../data/dinosaurs.json";

const { dinosaur } = Astro.params;
const dinosaurObj = data.find((item) => item.name.toLowerCase() === dinosaur);

if (!dinosaurObj) return Astro.redirect("/404");

const { name, description } = dinosaurObj;
---

<h1>{name}</h1>
<p>{description}</p>
```
> ⚠️️ The
> [Deno language server](https://docs.deno.com/runtime/reference/lsp_integration/)
> does not currently support `.astro` files, so you may experience false red
> squigglies. We're working on improving this experience.
Let’s run it with `deno task dev`, and point our browser to
`localhost:4321/abrictosaurus`.
It works!
## What’s next
We hope this tutorial gives you a good idea of how to get started building with
Astro and Deno. You can learn more about Astro and
[their progressive approach to building websites](https://docs.astro.build/en/getting-started/).
If you’re interested in swapping out our hardcoded data store, here are some
resources on
[connecting to databases with Deno](https://docs.deno.com/runtime/tutorials/connecting_to_databases/),
including
[Planetscale](https://docs.deno.com/runtime/tutorials/how_to_with_npm/planetscale/),
[Redis](https://docs.deno.com/runtime/tutorials/how_to_with_npm/redis/), and
more. Or you can learn how to
[deploy your Astro project to Deno Deploy](https://deno.com/blog/astro-on-deno),
or follow these guides on how to self-host Deno to
[AWS](https://docs.deno.com/runtime/tutorials/aws_lightsail/),
[Digital Ocean](https://docs.deno.com/runtime/tutorials/digital_ocean/), and
[Google Cloud Run](https://docs.deno.com/runtime/tutorials/google_cloud_run/).
---
# How to Deploy Deno to AWS Lambda
> Step-by-step tutorial on deploying Deno applications to AWS Lambda. Learn about Docker containerization, ECR repositories, function configuration, and how to set up serverless Deno apps on AWS.
URL: https://docs.deno.com/examples/tutorials/aws_lambda
AWS Lambda is a serverless computing service provided by Amazon Web Services. It
allows you to run code without provisioning or managing servers.
Here's a step-by-step guide to deploying a Deno app to AWS Lambda using Docker.
The pre-requisites for this are:
- [`docker` CLI](https://docs.docker.com/reference/cli/docker/)
- an [AWS account](https://aws.amazon.com)
- [`aws` CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
## Step 1: Create a Deno App
Create a new Deno app using the following code:
```ts title="main.ts"
Deno.serve((req) => new Response("Hello World!"));
```
Save this code in a file named `main.ts`.
## Step 2: Create a Dockerfile
Create a new file named `Dockerfile` with the following content:
```Dockerfile
# Set up the base image
FROM public.ecr.aws/awsguru/aws-lambda-adapter:0.9.0 AS aws-lambda-adapter
FROM denoland/deno:bin-1.45.2 AS deno_bin
FROM debian:bookworm-20230703-slim AS deno_runtime
COPY --from=aws-lambda-adapter /lambda-adapter /opt/extensions/lambda-adapter
COPY --from=deno_bin /deno /usr/local/bin/deno
ENV PORT=8000
EXPOSE 8000
RUN mkdir /var/deno_dir
ENV DENO_DIR=/var/deno_dir
# Copy the function code
WORKDIR "/var/task"
COPY . /var/task
# Warmup caches
RUN timeout 10s deno run -A main.ts || [ $? -eq 124 ] || exit 1
CMD ["deno", "run", "-A", "main.ts"]
```
This Dockerfile uses the
[`aws-lambda-adapter`](https://github.com/awslabs/aws-lambda-web-adapter)
project to adapt regular HTTP servers, like Deno's `Deno.serve`, to the AWS
Lambda runtime API.
We also use the `denoland/deno:bin-1.45.2` image to get the Deno binary and
`debian:bookworm-20230703-slim` as the base image. The
`debian:bookworm-20230703-slim` image is used to keep the image size small.
The `PORT` environment variable is set to `8000` to tell the AWS Lambda adapter
that we are listening on port `8000`.
We set the `DENO_DIR` environment variable to `/var/deno_dir` to store cached
Deno source code and transpiled modules in the `/var/deno_dir` directory.
The warmup caches step is used to warm up the Deno cache before the function is
invoked. This is done to reduce the cold start time of the function. These
caches contain the compiled code and dependencies of your function code. This
step starts your server for 10 seconds and then exits.
When using a package.json, remember to run `deno install` to install
`node_modules` from your `package.json` file before warming up the caches or
running the function.
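For example, that case might look like this in the Dockerfile (a sketch,
assuming your project ships a `package.json`):
```Dockerfile
# Install node_modules from package.json before the warmup step
RUN deno install
RUN timeout 10s deno run -A main.ts || [ $? -eq 124 ] || exit 1
```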
## Step 3: Build the Docker Image
Build the Docker image using the following command:
```bash
docker build -t hello-world .
```
## Step 4: Create an ECR Docker repository and push the image
With the AWS CLI, create an ECR repository and push the Docker image to it:
```bash
aws ecr create-repository --repository-name hello-world --region us-east-1 | grep repositoryUri
```
This should output a repository URI that looks like
`.dkr.ecr.us-east-1.amazonaws.com/hello-world`.
Authenticate Docker with ECR, using the repository URI from the previous step:
```bash
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin .dkr.ecr.us-east-1.amazonaws.com
```
Tag the Docker image with the repository URI, again using the repository URI
from the previous steps:
```bash
docker tag hello-world:latest .dkr.ecr.us-east-1.amazonaws.com/hello-world:latest
```
Finally, push the Docker image to the ECR repository, using the repository URI
from the previous steps:
```bash
docker push .dkr.ecr.us-east-1.amazonaws.com/hello-world:latest
```
## Step 5: Create an AWS Lambda function
Now you can create a new AWS Lambda function from the AWS Management Console.
1. Go to the AWS Management Console and
[navigate to the Lambda service](https://us-east-1.console.aws.amazon.com/lambda/home?region=us-east-1).
2. Click on the "Create function" button.
3. Choose "Container image".
4. Enter a name for the function, like "hello-world".
5. Click on the "Browse images" button and select the image you pushed to ECR.
6. Click on the "Create function" button.
7. Wait for the function to be created.
8. In the "Configuration" tab, go to the "Function URL" section and click on
"Create function URL".
9. Choose "NONE" for the auth type (this will make the lambda function publicly
accessible).
10. Click on the "Save" button.
## Step 6: Test the Lambda function
You can now visit your Lambda function's URL to see the response from your Deno
app.
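For example (the URL below is a placeholder; use the Function URL shown in the
Lambda console):
```bash
curl https://<url-id>.lambda-url.us-east-1.on.aws/
# -> Hello World!
```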
🦕 You have successfully deployed a Deno app to AWS Lambda using Docker. You can
now use this setup to deploy more complex Deno apps to AWS Lambda.
---
# How to Deploy Deno to Amazon Lightsail
> Step-by-step tutorial on deploying Deno applications to AWS Lightsail. Learn about Docker containers, GitHub Actions automation, continuous deployment, and how to set up cost-effective cloud hosting for Deno apps.
URL: https://docs.deno.com/examples/tutorials/aws_lightsail
[Amazon Lightsail](https://aws.amazon.com/lightsail/) is the easiest and
cheapest way to get started with Amazon Web Services. It allows you to host
virtual machines and even entire container services.
This How To guide will show you how to deploy a Deno app to Amazon Lightsail
using Docker, Docker Hub, and GitHub Actions.
Before continuing, make sure you have:
- [`docker` CLI](https://docs.docker.com/engine/reference/commandline/cli/)
- a [Docker Hub account](https://hub.docker.com)
- a [GitHub account](https://github.com)
- an [AWS account](https://aws.amazon.com/)
## Create Dockerfile and docker-compose.yml
To focus on the deployment, our app will simply be a `main.ts` file that returns
a string as an HTTP response:
```ts
import { Application } from "jsr:@oak/oak";

const app = new Application();
app.use((ctx) => {
  ctx.response.body = "Hello from Deno and AWS Lightsail!";
});

await app.listen({ port: 8000 });
```
Then, we'll create two files -- `Dockerfile` and `docker-compose.yml` -- to
build the Docker image.
In our `Dockerfile`, let's add:
```Dockerfile
FROM denoland/deno
EXPOSE 8000
WORKDIR /app
ADD . /app
RUN deno install --entrypoint main.ts
CMD ["run", "--allow-net", "main.ts"]
```
Then, in our `docker-compose.yml`:
```yml
version: "3"

services:
  web:
    build: .
    container_name: deno-container
    image: deno-image
    ports:
      - "8000:8000"
```
Let's test this locally by running `docker compose -f docker-compose.yml build`,
then `docker compose up`, and going to `localhost:8000`.
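For reference, the local test looks something like this (assuming Docker is
running):
```shell
docker compose -f docker-compose.yml build
docker compose up
# In another terminal:
curl http://localhost:8000
# -> Hello from Deno and AWS Lightsail!
```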
It works!
## Build, Tag, and Push to Docker Hub
First, let's sign into [Docker Hub](https://hub.docker.com/repositories) and
create a repository. Let's name it `deno-on-aws-lightsail`.
Then, let's build, tag, and push our new image to Docker Hub, replacing
`username` with yours. First, let's build the image locally. Note our
`docker-compose.yml` file will name the build `deno-image`.
```shell
docker compose -f docker-compose.yml build
```
Let's [tag](https://docs.docker.com/engine/reference/commandline/tag/) the local
image with `{{ username }}/deno-on-aws-lightsail`:
```shell
docker tag deno-image {{ username }}/deno-on-aws-lightsail
```
We can now push the image to Docker Hub:
```shell
docker push {{ username }}/deno-on-aws-lightsail
```
After that succeeds, you should be able to see the new image on your Docker Hub
repository.
## Create and Deploy to a Lightsail Container
Let's head over to
[the Amazon Lightsail console](https://lightsail.aws.amazon.com/ls/webapp/home/container-services).
Then click "Containers" and "Create container service". Half way down the page,
click "Setup your first Deployment" and select "Specify a custom deployment".
You can write whatever container name you'd like.
In `Image`, be sure to use `{{ username }}/{{ image }}` that you have set in
your Docker Hub. For this example, it is `lambtron/deno-on-aws-lightsail`.
Let's click `Add open ports` and add `8000`.
Finally, under `PUBLIC ENDPOINT`, select the container name that you just
created.
When you're ready, click "Create container service".
After a few moments, your new container should be deployed. Click on the public
address and you should see your Deno app.
## Automate using GitHub Actions
In order to automate that process, we'll use the `aws` CLI with the
[`lightsail` subcommand](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lightsail/push-container-image.html).
The steps in our GitHub Actions workflow will be:
1. Checkout the repo
2. Build our app as a Docker image locally
3. Install and authenticate AWS CLI
4. Push local Docker image to AWS Lightsail Container Service via CLI
Pre-requisites for this GitHub Action workflow to work:
- an AWS Lightsail Container Instance is created (see section above)
- IAM user and relevant permissions set.
([Learn more about managing access to Amazon Lightsail for an IAM user.](https://docs.aws.amazon.com/lightsail/latest/userguide/amazon-lightsail-managing-access-for-an-iam-user.html))
- `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` for your user with
  permissions. (Follow
  [this AWS guide](https://lightsail.aws.amazon.com/ls/docs/en_us/articles/lightsail-how-to-set-up-access-keys-to-use-sdk-api-cli)
  to generate an `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.)
Let's create a new file `container.template.json`, which contains configuration
for how to make the service container deployment. Note the similarities these
option values have with the inputs we entered manually in the previous section.
```json
{
  "containers": {
    "app": {
      "image": "",
      "environment": {
        "APP_ENV": "release"
      },
      "ports": {
        "8000": "HTTP"
      }
    }
  },
  "publicEndpoint": {
    "containerName": "app",
    "containerPort": 8000,
    "healthCheck": {
      "healthyThreshold": 2,
      "unhealthyThreshold": 2,
      "timeoutSeconds": 5,
      "intervalSeconds": 10,
      "path": "/",
      "successCodes": "200-499"
    }
  }
}
```
Let's add the following to your `.github/workflows/deploy.yml` file:
```yml
name: Build and Deploy to AWS Lightsail

on:
  push:
    branches:
      - main

env:
  AWS_REGION: us-west-2
  AWS_LIGHTSAIL_SERVICE_NAME: container-service-2

jobs:
  build_and_deploy:
    name: Build and Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout main
        uses: actions/checkout@v4

      - name: Install Utilities
        run: |
          sudo apt-get update
          sudo apt-get install -y jq unzip

      - name: Install AWS Client
        run: |
          curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
          unzip awscliv2.zip
          sudo ./aws/install || true
          aws --version
          curl "https://s3.us-west-2.amazonaws.com/lightsailctl/latest/linux-amd64/lightsailctl" -o "lightsailctl"
          sudo mv "lightsailctl" "/usr/local/bin/lightsailctl"
          sudo chmod +x /usr/local/bin/lightsailctl

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-region: ${{ env.AWS_REGION }}
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      - name: Build Docker Image
        run: docker build -t ${{ env.AWS_LIGHTSAIL_SERVICE_NAME }}:release .

      - name: Push and Deploy
        run: |
          service_name=${{ env.AWS_LIGHTSAIL_SERVICE_NAME }}
          aws lightsail push-container-image \
            --region ${{ env.AWS_REGION }} \
            --service-name ${service_name} \
            --label ${service_name} \
            --image ${service_name}:release
          aws lightsail get-container-images --service-name ${service_name} | jq --raw-output ".containerImages[0].image" > image.txt
          jq --arg image $(cat image.txt) '.containers.app.image = $image' container.template.json > container.json
          aws lightsail create-container-service-deployment --service-name ${service_name} --cli-input-json file://$(pwd)/container.json
```
Whoa, there is a lot going on here! The last two steps are the most important:
`Build Docker Image` and `Push and Deploy`.
```shell
docker build -t ${{ env.AWS_LIGHTSAIL_SERVICE_NAME }}:release .
```
This command builds our Docker image with the name `container-service-2` and
tags it `release`.
```shell
aws lightsail push-container-image ...
```
This command pushes the local image to our Lightsail container.
```shell
aws lightsail get-container-images --service-name ${service_name} | jq --raw-output ".containerImages[0].image" > image.txt
```
This command retrieves the image information and, using
[`jq`](https://stedolan.github.io/jq/), parses it and saves the image name in a
local file `image.txt`.
```shell
jq --arg image $(cat image.txt) '.containers.app.image = $image' container.template.json > container.json
```
This command uses the image name saved in `image.txt` and
`container.template.json` and creates a new options file called
`container.json`. This options file will be passed to `aws lightsail` for the
final deployment in the next step.
```shell
aws lightsail create-container-service-deployment --service-name ${service_name} --cli-input-json file://$(pwd)/container.json
```
Finally, this command creates a new deployment using the `service_name`, along
with the config settings in `container.json`.
When you push to GitHub and the Action succeeds, you'll be able to see your new
Deno app on AWS.
🦕 Now you can deploy a Deno app to Amazon Lightsail using Docker, Docker Hub,
and GitHub Actions.
---
# Getting Started with OpenTelemetry in Deno
> Set up basic OpenTelemetry instrumentation in a Deno application. This tutorial covers creating a simple HTTP server with custom metrics and traces, and viewing the telemetry data.
URL: https://docs.deno.com/examples/tutorials/basic_opentelemetry
OpenTelemetry provides powerful observability tools for your applications. With
Deno's built-in OpenTelemetry support, you can easily instrument your code to
collect metrics, traces, and logs.
This tutorial will walk you through setting up a simple Deno application with
OpenTelemetry instrumentation.
## Prerequisites
- Deno 2.3 or later
## Step 1: Create a Simple HTTP Server
Let's start by creating a basic HTTP server that simulates a small web
application:
```ts title="server.ts"
import { metrics, trace } from "npm:@opentelemetry/api@1";
// Create a tracer and meter for our application
const tracer = trace.getTracer("my-server", "1.0.0");
const meter = metrics.getMeter("my-server", "1.0.0");
// Create some metrics
const requestCounter = meter.createCounter("http_requests_total", {
description: "Total number of HTTP requests",
});
const requestDuration = meter.createHistogram("http_request_duration_ms", {
description: "HTTP request duration in milliseconds",
unit: "ms",
});
// Start the server
Deno.serve({ port: 8000 }, (req) => {
// Record the start time for measuring request duration
const startTime = performance.now();
// Create a span for this request
return tracer.startActiveSpan("handle_request", async (span) => {
try {
// Extract the path from the URL
const url = new URL(req.url);
const path = url.pathname;
// Add attributes to the span
span.setAttribute("http.route", path);
span.setAttribute("http.method", req.method);
span.updateName(`${req.method} ${path}`);
// Add an event to the span
span.addEvent("request_started", {
timestamp: startTime,
request_path: path,
});
// Simulate some processing time
const waitTime = Math.random() * 100;
await new Promise((resolve) => setTimeout(resolve, waitTime));
// Add another event to the span
span.addEvent("processing_completed");
// Create the response
const response = new Response(`Hello from ${path}!`, {
headers: { "Content-Type": "text/plain" },
});
// Record metrics
requestCounter.add(1, {
method: req.method,
path,
status: 200,
});
const duration = performance.now() - startTime;
requestDuration.record(duration, {
method: req.method,
path,
});
span.setAttribute("request.duration_ms", duration);
return response;
} catch (error) {
// Record error in span
if (error instanceof Error) {
span.recordException(error);
span.setStatus({
code: trace.SpanStatusCode.ERROR,
message: error.message,
});
}
return new Response("Internal Server Error", { status: 500 });
} finally {
// Always end the span
span.end();
}
});
});
```
This server:
1. Creates a tracer and meter for our application
2. Sets up metrics to count requests and measure their duration
3. Creates a span for each request with attributes and events
4. Simulates some processing time
5. Records metrics for each request
## Step 2: Run the Server with OpenTelemetry Enabled
To run the server with OpenTelemetry, use these flags:
```sh
OTEL_DENO=true OTEL_SERVICE_NAME=my-server deno run --unstable-otel --allow-net server.ts
```
## Step 3: Create a Test Client
Let's create a simple client to send requests to our server:
```ts title="client.ts"
// Send 10 requests to different paths
for (let i = 0; i < 10; i++) {
const path = ["", "about", "users", "products", "contact"][i % 5];
const url = `http://localhost:8000/${path}`;
console.log(`Sending request to ${url}`);
try {
const response = await fetch(url);
const text = await response.text();
console.log(`Response from ${url}: ${text}`);
} catch (error) {
console.error(`Error fetching ${url}:`, error);
}
}
```
## Step 4: Run the Client
In a separate terminal, run the client:
```sh
deno run --allow-net client.ts
```
## Step 5: View the Telemetry Data
By default, Deno exports telemetry data to `http://localhost:4318` using the
OTLP protocol. You'll need an OpenTelemetry collector to receive and visualize
this data.
### Setting up a Local Collector
The quickest way to get started is with a local LGTM stack (Loki, Grafana,
Tempo, Mimir) in Docker:
```sh
docker run --name lgtm -p 3000:3000 -p 4317:4317 -p 4318:4318 --rm -ti \
  -v "$PWD"/lgtm/grafana:/data/grafana \
  -v "$PWD"/lgtm/prometheus:/data/prometheus \
  -v "$PWD"/lgtm/loki:/data/loki \
  -e GF_PATHS_DATA=/data/grafana \
  docker.io/grafana/otel-lgtm:0.8.1
```
Then access Grafana at http://localhost:3000 (username: admin, password: admin).
In Grafana, you can:
1. View **Traces** in Tempo to see the individual request spans
2. View **Metrics** in Mimir/Prometheus to see request counts and durations
3. View **Logs** in Loki to see any logs from your application
## Understanding What You're Seeing
### Traces
In the Traces view, you'll see spans for:
- Each HTTP request processed by your server
- Each fetch request made by your client
- The relationships between these spans
Click on any span to see its details, including:
- Duration
- Attributes (http.route, http.method, etc.)
- Events (request_started, processing_completed)
### Metrics
In the Metrics view, you can query for:
- `http_requests_total` - The counter tracking the number of HTTP requests
- `http_request_duration_ms` - The histogram of request durations
You can also see built-in Deno metrics like:
- `http.server.request.duration`
- `http.server.active_requests`
### Logs
In the Logs view, you'll see all console logs from your application with correct
trace context.
## Troubleshooting
If you're not seeing data in your collector:
1. Check that you've set `OTEL_DENO=true` and used the `--unstable-otel` flag
2. Verify the collector is running and accessible at the default endpoint
3. Check if you need to set `OTEL_EXPORTER_OTLP_ENDPOINT` to a different URL (see the sketch after this list)
4. Look for errors in your Deno console output
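For example, a hypothetical run pointing the exporter at a non-default
collector address might look like this (`collector.internal` is a placeholder):
```sh
OTEL_DENO=true \
OTEL_SERVICE_NAME=my-server \
OTEL_EXPORTER_OTLP_ENDPOINT=http://collector.internal:4318 \
deno run --unstable-otel --allow-net server.ts
```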
Remember that OpenTelemetry support in Deno is still marked as unstable and may
change in future versions.
🦕 This tutorial provides a simple starting point for users who want to
experiment with OpenTelemetry in Deno without diving into more complex concepts
immediately.
This basic example can be extended in many ways:
- Add more custom metrics for business logic
- Create additional spans for important operations
- Use baggage to pass context attributes between services
- Set up alerts based on metrics thresholds
For more advanced usage, see our
[Distributed Tracing with Context Propagation](/examples/otel_span_propagation_tutorial/)
tutorial.
---
# Behavior-Driven Development (BDD)
> Implementing Behavior-Driven Development with Deno's Standard Library's BDD module. Create readable, well organised tests with effective assertions.
URL: https://docs.deno.com/examples/tutorials/bdd
Behavior-Driven Development (BDD) is an approach to software development that
encourages collaboration between developers, QA, and non-technical stakeholders.
BDD focuses on defining the behavior of an application through examples written
in a natural, ubiquitous language that all stakeholders can understand.
Deno's Standard Library provides a BDD-style testing module that allows you to
structure tests in a way that's both readable for non-technical stakeholders and
practical for implementation. In this tutorial, we'll explore how to use the BDD
module to create descriptive test suites for your applications.
## Introduction to BDD
BDD extends
[Test-Driven Development](https://en.wikipedia.org/wiki/Test-driven_development)
(TDD) by writing tests in a natural language that is easy to read. Rather than
thinking about "tests," BDD encourages us to consider "specifications" or
"specs" that describe how software should behave from the user's perspective.
This approach helps to keep tests focused on what the code should do rather than
how it is implemented.
The basic elements of BDD include:
- **Describe** blocks that group related specifications
- **It** statements that express a single behavior
- **Before/After** hooks for setup and teardown operations
## Using Deno's BDD module
To get started with BDD testing in Deno, we'll use the `@std/testing/bdd` module
from the [Deno Standard Library](https://jsr.io/@std/testing/doc/bdd).
First, let's import the necessary functions:
```ts
import {
  afterAll,
  afterEach,
  beforeAll,
  beforeEach,
  describe,
  it,
} from "jsr:@std/testing/bdd";
import { assertEquals, assertThrows } from "jsr:@std/assert";
```
These imports provide the core BDD functions:
- `describe` creates a block that groups related tests
- `it` declares a test case that verifies a specific behavior
- `beforeEach`/`afterEach` run before or after each test case
- `beforeAll`/`afterAll` run once before or after all tests in a describe block
We'll also use assertion functions from
[`@std/assert`](https://jsr.io/@std/assert) to verify our expectations.
### Writing your first BDD test
Let's create a simple calculator module and test it using BDD:
```ts title="calculator.ts"
export class Calculator {
private value: number = 0;
constructor(initialValue: number = 0) {
this.value = initialValue;
}
add(number: number): Calculator {
this.value += number;
return this;
}
subtract(number: number): Calculator {
this.value -= number;
return this;
}
multiply(number: number): Calculator {
this.value *= number;
return this;
}
divide(number: number): Calculator {
if (number === 0) {
throw new Error("Cannot divide by zero");
}
this.value /= number;
return this;
}
get result(): number {
return this.value;
}
}
```
Now, let's test this calculator using the BDD style:
```ts title="calculator_test.ts"
import { afterEach, beforeEach, describe, it } from "jsr:@std/testing/bdd";
import { assertEquals, assertThrows } from "jsr:@std/assert";
import { Calculator } from "./calculator.ts";
describe("Calculator", () => {
let calculator: Calculator;
// Before each test, create a new Calculator instance
beforeEach(() => {
calculator = new Calculator();
});
it("should initialize with zero", () => {
assertEquals(calculator.result, 0);
});
it("should initialize with a provided value", () => {
const initializedCalculator = new Calculator(10);
assertEquals(initializedCalculator.result, 10);
});
describe("add method", () => {
it("should add a positive number correctly", () => {
calculator.add(5);
assertEquals(calculator.result, 5);
});
it("should handle negative numbers", () => {
calculator.add(-5);
assertEquals(calculator.result, -5);
});
it("should be chainable", () => {
calculator.add(5).add(10);
assertEquals(calculator.result, 15);
});
});
describe("subtract method", () => {
it("should subtract a number correctly", () => {
calculator.subtract(5);
assertEquals(calculator.result, -5);
});
it("should be chainable", () => {
calculator.subtract(5).subtract(10);
assertEquals(calculator.result, -15);
});
});
describe("multiply method", () => {
beforeEach(() => {
// For multiplication tests, start with value 10
calculator = new Calculator(10);
});
it("should multiply by a number correctly", () => {
calculator.multiply(5);
assertEquals(calculator.result, 50);
});
it("should be chainable", () => {
calculator.multiply(2).multiply(3);
assertEquals(calculator.result, 60);
});
});
describe("divide method", () => {
beforeEach(() => {
// For division tests, start with value 10
calculator = new Calculator(10);
});
it("should divide by a number correctly", () => {
calculator.divide(2);
assertEquals(calculator.result, 5);
});
it("should throw when dividing by zero", () => {
assertThrows(
() => calculator.divide(0),
Error,
"Cannot divide by zero",
);
});
});
});
```
To run this test, use the `deno test` command:
```sh
deno test calculator_test.ts
```
You'll see output similar to this:
```sh
running 1 test from file:///path/to/calculator_test.ts
Calculator
  ✓ should initialize with zero
  ✓ should initialize with a provided value
  add method
    ✓ should add a positive number correctly
    ✓ should handle negative numbers
    ✓ should be chainable
  subtract method
    ✓ should subtract a number correctly
    ✓ should be chainable
  multiply method
    ✓ should multiply by a number correctly
    ✓ should be chainable
  divide method
    ✓ should divide by a number correctly
    ✓ should throw when dividing by zero

ok | 11 passed | 0 failed (234ms)
```
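You can also run a subset of the specs with `deno test`'s `--filter` flag; for
example, to run only the tests whose names match `add method`:
```sh
deno test --filter "add method" calculator_test.ts
```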
## Organizing tests with nested describe blocks
One of the powerful features of BDD is the ability to nest `describe` blocks,
which helps organize tests hierarchically. In the calculator example, we grouped
tests for each method within their own `describe` blocks. This not only makes
the tests more readable, but also makes it easier to locate issues when the test
fails.
You can nest `describe` blocks, but be cautious of nesting too deep as excessive
nesting can make tests harder to follow.
## Hooks
The BDD module provides four hooks:
- `beforeEach` runs before each test in the current describe block
- `afterEach` runs after each test in the current describe block
- `beforeAll` runs once before all tests in the current describe block
- `afterAll` runs once after all tests in the current describe block
### beforeEach/afterEach
These hooks are ideal for:
- Setting up a fresh test environment for each test
- Cleaning up resources after each test
- Ensuring test isolation
In the calculator example, we used `beforeEach` to create a new calculator
instance before each test, ensuring each test starts with a clean state.
### beforeAll/afterAll
These hooks are useful for:
- Expensive setup operations that can be shared across tests
- Setting up and tearing down database connections
- Creating and cleaning up shared resources
Here's an example of how you might use `beforeAll` and `afterAll`:
```ts
describe("Database operations", () => {
  let db: Database;

  beforeAll(async () => {
    // Connect to the database once before all tests
    db = await Database.connect(TEST_CONNECTION_STRING);
    await db.migrate();
  });

  afterAll(async () => {
    // Disconnect after all tests are complete
    await db.close();
  });

  it("should insert a record", async () => {
    const result = await db.insert({ name: "Test" });
    assertEquals(result.success, true);
  });

  it("should retrieve a record", async () => {
    const record = await db.findById(1);
    assertEquals(record.name, "Test");
  });
});
```
## Gherkin vs. JavaScript-style BDD
If you're familiar with Cucumber or other BDD frameworks, you might be expecting
Gherkin syntax with "Given-When-Then" statements.
Deno's BDD module uses a JavaScript-style syntax rather than Gherkin. This
approach is similar to other JavaScript testing frameworks like Mocha or
Jasmine. However, you can still follow BDD principles by:
1. Writing clear, behavior-focused test descriptions
2. Structuring your tests to reflect user stories
3. Following the "Arrange-Act-Assert" pattern in your test implementations
For example, you can structure your `it` blocks to mirror the Given-When-Then
format:
```ts
describe("Calculator", () => {
  it("should add numbers correctly", () => {
    // Given
    const calculator = new Calculator();

    // When
    calculator.add(5);

    // Then
    assertEquals(calculator.result, 5);
  });
});
```
If you need full Gherkin support with natural language specifications, consider
using a dedicated BDD framework that integrates with Deno, such as
[cucumber-js](https://github.com/cucumber/cucumber-js).
## Best Practices for BDD with Deno
### Write your tests for humans to read
BDD tests should read like documentation. Use clear, descriptive language in
your `describe` and `it` statements:
```ts
// Good
describe("User authentication", () => {
  it("should reject login with incorrect password", () => {
    // Test code
  });
});

// Not good
describe("auth", () => {
  it("bad pw fails", () => {
    // Test code
  });
});
```
### Keep tests focused
Each test should verify a single behavior. Avoid testing multiple behaviors in a
single `it` block:
```ts
// Good
it("should add an item to the cart", () => {
  // Test adding to cart
});

it("should calculate the correct total", () => {
  // Test total calculation
});

// Bad
it("should add an item and calculate total", () => {
  // Test adding to cart
  // Test total calculation
});
```
### Use context-specific setup
When tests within a describe block need different setup, use nested describes
with their own `beforeEach` hooks rather than conditional logic:
```ts
// Good
describe("User operations", () => {
  describe("when user is logged in", () => {
    beforeEach(() => {
      // Setup logged-in user
    });

    it("should show the dashboard", () => {
      // Test
    });
  });

  describe("when user is logged out", () => {
    beforeEach(() => {
      // Setup logged-out state
    });

    it("should redirect to login", () => {
      // Test
    });
  });
});

// Avoid
describe("User operations", () => {
  beforeEach(() => {
    // Setup base state
    if (isLoggedInTest) {
      // Setup logged-in state
    } else {
      // Setup logged-out state
    }
  });

  it("should show dashboard when logged in", () => {
    isLoggedInTest = true;
    // Test
  });

  it("should redirect to login when logged out", () => {
    isLoggedInTest = false;
    // Test
  });
});
```
### Handle asynchronous tests properly
When testing asynchronous code, remember to:
- Mark your test functions as `async`
- Use `await` for promises
- Handle errors properly
```ts
it("should fetch user data asynchronously", async () => {
  const user = await fetchUser(1);
  assertEquals(user.name, "John Doe");
});
```
🦕 By following the BDD principles and practices outlined in this tutorial, you
can build more reliable software and solidify your reasoning about the 'business
logic' of your code.
Remember that BDD is not just about the syntax or tools but about the
collaborative approach to defining and verifying application behavior. The most
successful BDD implementations combine these technical practices with regular
conversations between developers, testers, product and business stakeholders.
To continue learning about testing in Deno, explore other modules in the
Standard Library's testing suite, such as [mocking](/examples/mocking_tutorial/)
and [snapshot testing](/examples/snapshot_tutorial/).
---
# Chat application with WebSockets
> A tutorial on building a real-time chat app using Deno WebSockets. Learn how to create a WebSocket server with Oak, handle multiple client connections, manage state, and build an interactive chat interface with HTML, CSS, and JavaScript.
URL: https://docs.deno.com/examples/tutorials/chat_app
WebSockets are a powerful tool for building real-time applications. They allow
for bidirectional communication between the client and server without the need
for constant polling. A frequent use case for WebSockets is chat applications.
In this tutorial we'll create a simple chat app using Deno and the built in
[WebSockets API](/api/web/websockets). The chat app will allow multiple chat
clients to connect to the same backend and send group messages. After a client
enters a username, they can then start sending messages to other online clients.
Each client also displays the list of currently active users.
You can see the
[finished chat app on GitHub](https://github.com/denoland/tutorial-with-websockets).
## Initialize a new project
First, create a new directory for your project and navigate into it.
```sh
deno init chat-app
cd chat-app
```
## Build the backend
We'll start by building the backend server that will handle the WebSocket
connections and broadcast messages to all connected clients. We'll use the
[`oak`](https://jsr.io/@oak/oak) middleware framework to set up our server.
Clients can connect to the server, send messages, and receive updates about
other connected users. Additionally, the server will serve the static HTML, CSS
and JavaScript files that make up the chat client.
### Import dependencies
First, we'll need to import the necessary dependencies. Use the `deno add`
command to add Oak to your project:
```sh
deno add jsr:@oak/oak
```
### Set up the server
In your `main.ts` file, add the following code:
```ts title="main.ts"
import { Application, Context, Router } from "@oak/oak";
import ChatServer from "./ChatServer.ts";
const app = new Application();
const port = 8080;
const router = new Router();
const server = new ChatServer();
router.get("/start_web_socket", (ctx: Context) => server.handleConnection(ctx));
app.use(router.routes());
app.use(router.allowedMethods());
app.use(async (context) => {
await context.send({
root: Deno.cwd(),
index: "public/index.html",
});
});
console.log("Listening at http://localhost:" + port);
await app.listen({ port });
```
Next, create a new file called `ChatServer.ts` in the same directory as your
`main.ts` file. In this file we'll put the logic for handling the WebSocket
connections:
```ts title="ChatServer.ts"
import { Context } from "@oak/oak";
type WebSocketWithUsername = WebSocket & { username: string };
type AppEvent = { event: string; [key: string]: any };
export default class ChatServer {
  private connectedClients = new Map<string, WebSocketWithUsername>();
public async handleConnection(ctx: Context) {
const socket = await ctx.upgrade() as WebSocketWithUsername;
const username = ctx.request.url.searchParams.get("username");
if (this.connectedClients.has(username)) {
socket.close(1008, `Username ${username} is already taken`);
return;
}
socket.username = username;
socket.onopen = this.broadcastUsernames.bind(this);
socket.onclose = () => {
this.clientDisconnected(socket.username);
};
socket.onmessage = (m) => {
this.send(socket.username, m);
};
this.connectedClients.set(username, socket);
console.log(`New client connected: ${username}`);
}
private send(username: string, message: any) {
const data = JSON.parse(message.data);
if (data.event !== "send-message") {
return;
}
this.broadcast({
event: "send-message",
username: username,
message: data.message,
});
}
private clientDisconnected(username: string) {
this.connectedClients.delete(username);
this.broadcastUsernames();
console.log(`Client ${username} disconnected`);
}
private broadcastUsernames() {
const usernames = [...this.connectedClients.keys()];
this.broadcast({ event: "update-users", usernames });
console.log("Sent username list:", JSON.stringify(usernames));
}
private broadcast(message: AppEvent) {
const messageString = JSON.stringify(message);
for (const client of this.connectedClients.values()) {
client.send(messageString);
}
}
}
```
This code sets up a `handleConnection` method that is called when a new
WebSocket connection is established. It receives a Context object from the Oak
framework and upgrades it to a WebSocket connection. It extracts the username
from the URL query parameters. If the username is already taken (i.e., exists in
connectedClients), it closes the socket with an appropriate message. Otherwise,
it sets the username property on the socket, assigns event handlers, and adds
the socket to `connectedClients`.
When the socket opens, it triggers the `broadcastUsernames` method, which sends
the list of connected usernames to all clients. When the socket closes, it calls
the `clientDisconnected` method to remove the client from the list of connected
clients.
When a message of type `send-message` is received, it broadcasts the message to
all connected clients, including the sender’s username.
## Build the frontend
We'll build a simple UI that shows a text input and a send button and displays
the sent messages, alongside a list of users in the chat.
### HTML
In your new project directory, create a `public` folder and add an `index.html`
file with the following code (the ids here match the elements the client script
will look up):
```html title="index.html"
<!DOCTYPE html>
<html>
  <head>
    <title>Deno Chat App</title>
    <link rel="stylesheet" href="/public/style.css" />
    <script defer src="/public/app.js"></script>
  </head>
  <body>
    <header>
      <h1>🦕 Deno Chat App</h1>
    </header>
    <main>
      <ul id="users"></ul>
      <div id="conversation"></div>
      <template id="message">
        <div>
          <span></span>
          <p></p>
        </div>
      </template>
      <form id="form">
        <input type="text" id="data" />
        <input type="submit" value="Send" />
      </form>
    </main>
  </body>
</html>
```
### CSS
If you'd like to style your chat app, create a `style.css` file in the `public`
folder and add this
[pre-made CSS](https://raw.githubusercontent.com/denoland/tutorial-with-websockets/refs/heads/main/public/style.css).
### JavaScript
We'll set up the client-side JavaScript in an `app.js` file; you'll have seen it
linked in the HTML we just wrote. In the `public` folder, add an `app.js` file
with the following code:
```js title="app.js"
const myUsername = prompt("Please enter your name") || "Anonymous";
const url = new URL(`./start_web_socket?username=${myUsername}`, location.href);
url.protocol = url.protocol.replace("http", "ws");
const socket = new WebSocket(url);
socket.onmessage = (event) => {
const data = JSON.parse(event.data);
switch (data.event) {
case "update-users":
updateUserList(data.usernames);
break;
case "send-message":
addMessage(data.username, data.message);
break;
}
};
function updateUserList(usernames) {
const userList = document.getElementById("users");
userList.replaceChildren();
for (const username of usernames) {
const listItem = document.createElement("li");
listItem.textContent = username;
userList.appendChild(listItem);
}
}
function addMessage(username, message) {
const template = document.getElementById("message");
const clone = template.content.cloneNode(true);
clone.querySelector("span").textContent = username;
clone.querySelector("p").textContent = message;
document.getElementById("conversation").prepend(clone);
}
const inputElement = document.getElementById("data");
inputElement.focus();
const form = document.getElementById("form");
form.onsubmit = (e) => {
e.preventDefault();
const message = inputElement.value;
inputElement.value = "";
socket.send(JSON.stringify({ event: "send-message", message }));
};
```
This code prompts the user for a username, then creates a WebSocket connection
to the server with the username as a query parameter. It listens for messages
from the server and either updates the list of connected users or adds a new
message to the chat window. It also sends messages to the server when the user
submits the form either by pressing enter or clicking the send button. We use an
[HTML template](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/template)
to scaffold out the new messages to show in the chat window.
## Run the server
To run the server we'll need to grant the necessary permissions to Deno. In your
`deno.json` file, update the `dev` task to allow read and network access:
```diff title="deno.json"
-"dev": "deno run --watch main.ts"
+"dev": "deno run --allow-net --allow-read --watch main.ts"
```
Now if you visit [http://localhost:8080](http://localhost:8080/) you will be
able to start a chat session. You can open 2 simultaneous tabs and try chatting
with yourself.

🦕 Now that you can use WebSockets with Deno, you're ready to build all kinds of
realtime applications! WebSockets can be used to build realtime dashboards,
games, collaborative editing tools, and much more. If you're looking for ways to
expand upon your chat app, you could consider adding data to the messages so
that you can style messages differently depending on whether they were sent by
you or someone else. Whatever you're building, Deno will WebSocket to ya!
---
# Updating from CommonJS to ESM
> Step-by-step guide to migrating Node.js projects from CommonJS to ESM modules. Learn about import/export syntax changes, module resolution differences, and how to use modern JavaScript features in Deno.
URL: https://docs.deno.com/examples/tutorials/cjs_to_esm
If your Node.js project uses CommonJS modules (e.g. it uses `require`), you'll
need to update your code to use
[ECMAScript modules (ESM)](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules)
to run it in Deno. This guide will help you update your code to use ESM syntax.
## Module imports and exports
Deno supports [ECMAScript modules](/runtime/fundamentals/modules/) exclusively.
If your Node.js code uses
[`require`](https://nodejs.org/api/modules.html#modules-commonjs-modules), you
should update it to use `import` statements instead. If your internal code uses
CommonJS-style exports, those will also need to be updated.
A typical CommonJS-style project might look similar to this:
```js title="add_numbers.js"
module.exports = function addNumbers(num1, num2) {
return num1 + num2;
};
```
```js title="index.js"
const addNumbers = require("./add_numbers");
console.log(addNumbers(2, 2));
```
To convert these to [ECMAScript modules](/runtime/fundamentals/modules/), we'll
make a few minor changes:
```js title="add_numbers.js"
export function addNumbers(num1, num2) {
return num1 + num2;
}
```
```js title="index.js"
import { addNumbers } from "./add_numbers.js";
console.log(addNumbers(2, 2));
```
Exports:
| CommonJS | ECMAScript modules |
| ------------------------------------ | ---------------------------------- |
| `module.exports = function add() {}` | `export default function add() {}` |
| `exports.add = function add() {}` | `export function add() {}` |
Imports:
| CommonJS | ECMAScript modules |
| ------------------------------------------ | ---------------------------------------- |
| `const add = require("./add_numbers");` | `import add from "./add_numbers.js";` |
| `const { add } = require("./add_numbers")` | `import { add } from "./add_numbers.js"` |
### Quick fix with VS Code
If you are using VS Code, you can use its built-in feature to convert CommonJS
to ES6 modules. Right-click on the `require` statement or the lightbulb icon,
and select `Quick Fix` and then `Convert to ES module`.

### CommonJS vs ECMAScript resolution
An important distinction between the two module systems is that ECMAScript
resolution requires the full specifier **including the file extension**.
Omitting the file extension, and special handling of `index.js`, are features
unique to CommonJS. The benefit of ECMAScript resolution is that it works the
same across the browser, Deno, and other runtimes.
| CommonJS | ECMAScript modules |
| -------------------- | ----------------------------- |
| `"./add_numbers"` | `"./add_numbers.js"` |
| `"./some/directory"` | `"./some/directory/index.js"` |
:::tip
Deno can add all the missing file extensions for you by running
`deno lint --fix`. Deno's linter comes with a `no-sloppy-imports` rule that will
show a linting error when an import path doesn't contain the file extension.
:::
🦕 Now that you know how to port from CJS to ESM, you can take advantage of the
modern features that ESM offers, such as async module loading, interop with
browsers, better readability, standardization, and future-proofing.
---
# Deploying Deno to Cloudflare Workers
> Step-by-step tutorial on deploying Deno functions to Cloudflare Workers. Learn how to configure denoflare, create worker modules, test locally, and deploy your code to Cloudflare's global edge network.
URL: https://docs.deno.com/examples/tutorials/cloudflare_workers
Cloudflare Workers allows you to run JavaScript on Cloudflare's edge network.
This is a short How To guide on deploying a Deno function to Cloudflare Workers.
Note: You can only deploy
[Module Workers](https://developers.cloudflare.com/workers/learning/migrating-to-module-workers/),
not full web servers or apps.
## Setup `denoflare`
In order to deploy Deno to Cloudflare, we'll use the community-created CLI
[`denoflare`](https://denoflare.dev/).
[Install it](https://denoflare.dev/cli/#installation):
```shell
deno install --unstable-worker-options --allow-read --allow-net --allow-env --allow-run --name denoflare --force \
https://raw.githubusercontent.com/skymethod/denoflare/v0.6.0/cli/cli.ts
```
## Create your function
In a new directory, let's create a `main.ts` file, which will contain our Module
Worker function:
```ts
export default {
fetch(request: Request): Response {
return new Response("Hello, world!");
},
};
```
At the very minimum, a Module Worker function must `export default` an object
that exposes a `fetch` function, which returns a `Response` object.
You can test this locally by running:
```shell
denoflare serve main.ts
```
If you go to `localhost:8080` in your browser, you'll see the response:
```console
Hello, world!
```
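A Module Worker's `fetch` handler can also receive the optional `env` and `ctx`
arguments from the Cloudflare runtime. Here's a minimal sketch; the shape of
`env` depends entirely on the bindings you configure, so treat it as
hypothetical:
```ts
export default {
  async fetch(
    request: Request,
    // `env` holds whatever bindings (KV namespaces, secrets, vars) you configure
    env: Record<string, unknown>,
    // `ctx.waitUntil` lets background work continue after the response is sent
    ctx: { waitUntil(promise: Promise<unknown>): void },
  ): Promise<Response> {
    ctx.waitUntil(Promise.resolve()); // e.g. fire-and-forget logging
    return new Response(`Hello from ${new URL(request.url).pathname}`);
  },
};
```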
## Configure `.denoflare`
The next step is to create a `.denoflare` config file. In it, let's add:
```json
{
"$schema": "https://raw.githubusercontent.com/skymethod/denoflare/v0.5.11/common/config.schema.json",
"scripts": {
"main": {
"path": "/absolute/path/to/main.ts",
"localPort": 8000
}
},
"profiles": {
"myprofile": {
"accountId": "abcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"apiToken": "abcxxxxxxxxx_-yyyyyyyyyyyy-11-dddddddd"
}
}
}
```
You can find your `accountId` by going to your
[Cloudflare dashboard](https://dash.cloudflare.com/), clicking "Workers", and
finding "Account ID" on the right side.
You can generate an `apiToken` from your
[Cloudflare API Tokens settings](https://dash.cloudflare.com/profile/api-tokens).
When you create an API token, be sure to use the template "Edit Cloudflare
Workers".
After you add both to your `.denoflare` config, let's try pushing it to
Cloudflare:
```console
denoflare push main
```
Next, you can view your new function in your Cloudflare account:

Boom!
---
# Connecting to databases
> A guide to database connectivity in Deno. Learn how to use MySQL, PostgreSQL, MongoDB, SQLite, Firebase, Supabase, and popular ORMs to build data-driven applications with TypeScript.
URL: https://docs.deno.com/examples/tutorials/connecting_to_databases
It is common for applications to store and retrieve data from databases. Deno
supports connecting to many database management systems.
The Deno community has published a number of third-party modules that make it
easy to connect to popular databases like MySQL, Postgres, and MongoDB.
They are hosted at Deno's third-party module site
[deno.land/x](https://deno.land/x).
## MySQL
[deno_mysql](https://deno.land/x/mysql) is a MySQL and MariaDB database driver
for Deno.
### Connect to MySQL with deno_mysql
First import the `mysql` module and create a new client instance. Then connect
to the database passing an object with the connection details:
```ts title="main.js"
import { Client } from "https://deno.land/x/mysql/mod.ts";
const client = await new Client().connect({
hostname: "127.0.0.1",
username: "root",
db: "dbname",
password: "password",
});
```
Once connected, you can execute queries, insert data and retrieve information.
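For example, here's a brief sketch of running statements with `deno_mysql`; the
`users` table is hypothetical:
```ts
// Assumes a `users` table with a `name` column exists in `dbname`
await client.execute(`INSERT INTO users (name) VALUES (?)`, ["Deno"]);
const users = await client.query(`SELECT * FROM users`);
console.log(users);
```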
## Postgres
[deno-postgres](https://deno.land/x/postgres) is a lightweight PostgreSQL driver
for Deno focused on developer experience.
### Connect to Postgres with deno-postgres
First, import the `Client` class from the `deno-postgres` module and create a
new client instance. Then connect to the database passing an object with the
connection details:
```ts
import { Client } from "https://deno.land/x/postgres/mod.ts";
const client = new Client({
user: "user",
database: "dbname",
hostname: "127.0.0.1",
port: 5432,
password: "password",
});
await client.connect();
```
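Once connected, you can run queries with the client; a short sketch (the
`people` table is hypothetical):
```ts
// queryObject returns rows as objects keyed by column name
const result = await client.queryObject<{ id: number; name: string }>(
  "SELECT id, name FROM people",
);
console.log(result.rows);
await client.end();
```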
### Connect to Postgres with postgresjs
[postgresjs](https://deno.land/x/postgresjs) is a full-featured Postgres client
for Node.js and Deno.
Import the `postgres` module and create a new client instance. Then connect to
the database passing a connection string as an argument:
```js
import postgres from "https://deno.land/x/postgresjs/mod.js";
const sql = postgres("postgres://username:password@host:port/database");
```
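Queries are written as tagged template literals, which turn interpolated values
into query parameters rather than raw SQL; a quick sketch (the `users` table is
hypothetical):
```js
const minAge = 21;
// ${minAge} is sent as a bound parameter, protecting against SQL injection
const users = await sql`SELECT * FROM users WHERE age >= ${minAge}`;
console.log(users);
```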
## MongoDB
We suggest using
[npm specifiers](/runtime/fundamentals/node/#using-npm-packages) to work with
the official [MongoDB driver on npm](https://www.npmjs.com/package/mongodb). You
can learn more about how to work with the driver
[in the official docs](https://www.mongodb.com/docs/drivers/node/current/). The
only difference when using this module with Deno is how you import it, using an
`npm:` specifier.
Import the MongoDB driver, set up the connection configuration, then connect to a
MongoDB instance. You can then perform operations like inserting documents into
a collection before closing the connection:
```ts title="main.js"
import { MongoClient } from "npm:mongodb@6";
const url = "mongodb://localhost:27017";
const client = new MongoClient(url);
const dbName = "myProject";
await client.connect();
console.log("Connected successfully to server");
// Get a reference to a collection
const db = client.db(dbName);
const collection = db.collection("documents");
// Execute an insert operation
const insertResult = await collection.insertMany([{ a: 1 }, { a: 2 }]);
console.log("Inserted documents =>", insertResult);
client.close();
```
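You can read documents back with the same driver API; for instance, a sketch of
a `find` you could run before `client.close()`:
```ts
// find({}) matches every document in the collection
const findResult = await collection.find({}).toArray();
console.log("Found documents =>", findResult);
```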
## SQLite
There are multiple solutions to connect to SQLite in Deno:
### Connect to SQLite using the `node:sqlite` module
_The `node:sqlite` module was added in Deno v2.2._
```ts
import { DatabaseSync } from "node:sqlite";
const database = new DatabaseSync("test.db");
const result = database.prepare("select sqlite_version()").get();
console.log(result);
database.close();
```
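The synchronous API matches Node.js's `node:sqlite`; here's a small sketch of
writes and reads (the `people` table is hypothetical):
```ts
database.exec("CREATE TABLE IF NOT EXISTS people (name TEXT)");
const insert = database.prepare("INSERT INTO people (name) VALUES (?)");
insert.run("Dino");
const rows = database.prepare("SELECT name FROM people").all();
console.log(rows);
```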
### Connect to SQLite with the FFI Module
[@db/sqlite](https://jsr.io/@db/sqlite) provides JavaScript bindings to the
SQLite3 C API, using [Deno FFI](/runtime/reference/deno_namespace_apis/#ffi).
```ts
import { Database } from "jsr:@db/sqlite@0.12";
const db = new Database("test.db");
const [version] = db.prepare("select sqlite_version()").value<[string]>()!;
console.log(version);
db.close();
```
### Connect to SQLite with the Wasm-Optimized Module
[sqlite](https://deno.land/x/sqlite) is a SQLite module for JavaScript and
TypeScript. The wrapper is made specifically for Deno and uses a version of
SQLite3 compiled to WebAssembly (Wasm).
```ts
import { DB } from "https://deno.land/x/sqlite/mod.ts";
const db = new DB("test.db");
db.close();
```
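Between opening and closing the database you can run statements; a brief sketch
(the `people` table is hypothetical):
```ts
db.execute("CREATE TABLE IF NOT EXISTS people (name TEXT)");
db.query("INSERT INTO people (name) VALUES (?)", ["Dino"]);
for (const [name] of db.query("SELECT name FROM people")) {
  console.log(name);
}
```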
## Firebase
To connect to Firebase with Deno, import the
[firestore npm module](https://firebase.google.com/docs/firestore/quickstart)
with the [ESM CDN](https://esm.sh/). To learn more about using npm modules in
Deno with a CDN, see
[Using npm packages with CDNs](/runtime/fundamentals/modules/#https-imports).
### Connect to Firebase with the firestore npm module
```js
import { initializeApp } from "https://www.gstatic.com/firebasejs/9.8.1/firebase-app.js";
import {
addDoc,
collection,
connectFirestoreEmulator,
deleteDoc,
doc,
Firestore,
getDoc,
getDocs,
getFirestore,
query,
QuerySnapshot,
setDoc,
where,
} from "https://www.gstatic.com/firebasejs/9.8.1/firebase-firestore.js";
import { getAuth } from "https://www.gstatic.com/firebasejs/9.8.1/firebase-auth.js";
const app = initializeApp({
apiKey: Deno.env.get("FIREBASE_API_KEY"),
authDomain: Deno.env.get("FIREBASE_AUTH_DOMAIN"),
projectId: Deno.env.get("FIREBASE_PROJECT_ID"),
storageBucket: Deno.env.get("FIREBASE_STORAGE_BUCKET"),
  messagingSenderId: Deno.env.get("FIREBASE_MESSAGING_SENDER_ID"),
appId: Deno.env.get("FIREBASE_APP_ID"),
measurementId: Deno.env.get("FIREBASE_MEASUREMENT_ID"),
});
const db = getFirestore(app);
const auth = getAuth(app);
```
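With `db` in hand, you can read and write documents using the modular functions
imported above; a short sketch (the `users` collection is hypothetical):
```js
await setDoc(doc(db, "users", "alice"), { city: "New York" });
const snapshot = await getDocs(collection(db, "users"));
snapshot.forEach((d) => console.log(d.id, d.data()));
```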
## Supabase
To connect to Supabase with Deno, import the
[supabase-js npm module](https://supabase.com/docs/reference/javascript) with
the [esm.sh CDN](https://esm.sh/). To learn more about using npm modules in Deno
with a CDN, see
[Using npm packages with CDNs](/runtime/fundamentals/modules/#https-imports).
### Connect to Supabase with the supabase-js npm module
```js
import { createClient } from "https://esm.sh/@supabase/supabase-js";
const options = {
schema: "public",
headers: { "x-my-custom-header": "my-app-name" },
autoRefreshToken: true,
persistSession: true,
detectSessionInUrl: true,
};
const supabase = createClient(
"https://xyzcompany.supabase.co",
"public-anon-key",
options,
);
```
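Once created, the client can query your tables; a quick sketch (the `countries`
table is hypothetical):
```js
const { data, error } = await supabase.from("countries").select();
if (error) console.error(error);
else console.log(data);
```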
## ORMs
Object-Relational Mappings (ORM) define your data models as classes that you can
persist to a database. You can read and write data in your database through
instances of these classes.
Deno supports multiple ORMs, including Prisma and DenoDB.
### DenoDB
[DenoDB](https://deno.land/x/denodb) is a Deno-specific ORM.
#### Connect to DenoDB
```ts
import {
Database,
DataTypes,
Model,
PostgresConnector,
} from "https://deno.land/x/denodb/mod.ts";
const connection = new PostgresConnector({
host: "...",
username: "user",
password: "password",
database: "airlines",
});
const db = new Database(connection);
```
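After connecting, you define models as classes, link them, and sync the schema;
a hedged sketch (the `Flight` model is hypothetical):
```ts
class Flight extends Model {
  static table = "flights";
  static fields = {
    id: { primaryKey: true, autoIncrement: true },
    departure: DataTypes.STRING,
  };
}

db.link([Flight]);
await db.sync();
await Flight.create({ departure: "Paris" });
```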
## GraphQL
GraphQL is an API query language often used to compose disparate data sources
into client-centric APIs. To set up a GraphQL API, you should first set up a
GraphQL server. This server exposes your data as a GraphQL API that your client
applications can query for data.
### Server
You can use [gql](https://deno.land/x/gql), a universal GraphQL HTTP middleware
for Deno, to run a GraphQL API server in Deno.
#### Run a GraphQL API server with gql
```ts
import { GraphQLHTTP } from "https://deno.land/x/gql/mod.ts";
import { makeExecutableSchema } from "https://deno.land/x/graphql_tools@0.0.2/mod.ts";
import { gql } from "https://deno.land/x/graphql_tag@0.0.1/mod.ts";
const typeDefs = gql`
type Query {
hello: String
}
`;
const resolvers = {
Query: {
hello: () => `Hello World!`,
},
};
const schema = makeExecutableSchema({ resolvers, typeDefs });
Deno.serve({ port: 3000 }, async (req) => {
const { pathname } = new URL(req.url);
return pathname === "/graphql"
? await GraphQLHTTP({
schema,
graphiql: true,
})(req)
: new Response("Not Found", { status: 404 });
});
```
### Client
To make GraphQL client calls in Deno, import the
[graphql npm module](https://www.npmjs.com/package/graphql) with the
[esm CDN](https://esm.sh/). To learn more about using npm modules in Deno via a
CDN, see
[Using npm packages with CDNs](/runtime/fundamentals/modules/#https-imports).
#### Make GraphQL client calls with the graphql npm module
```js
import { buildSchema, graphql } from "https://esm.sh/graphql";
const schema = buildSchema(`
type Query {
hello: String
}
`);
const rootValue = {
hello: () => {
return "Hello world!";
},
};
const response = await graphql({
schema,
source: "{ hello }",
rootValue,
});
console.log(response);
```
🦕 Now that you can connect your Deno project to a database, you'll be able to
work with persistent data, perform CRUD operations and start building more
complex applications.
---
# Build a React app with create-vite
> A tutorial on building React applications with Deno and Vite. Learn how to set up a project, configure TypeScript, add API endpoints, implement routing, and deploy your React app using modern development tools.
URL: https://docs.deno.com/examples/tutorials/create_react
[React](https://reactjs.org) is the most widely used JavaScript frontend
library.
In this tutorial we'll build a simple React app with Deno. The app will display
a list of dinosaurs. When you click on one, it'll take you to a dinosaur page
with more details. You can see the
[finished app repo on GitHub](https://github.com/denoland/tutorial-with-react).

## Create a React app with Vite and Deno
This tutorial will use [create-vite](https://vitejs.dev/) to quickly scaffold a
Deno and React app. Vite is a build tool and development server for modern web
projects. It pairs well with React and Deno, leveraging ES modules and allowing
you to import React components directly.
In your terminal run the following command to create a new React app with Vite
using the typescript template:
```sh
deno run -A npm:create-vite@latest --template react-ts
```
When prompted, give your app a name, and `cd` into the newly created project
directory. Then run the following command to install the dependencies:
```sh
deno install
```
Now you can serve your new React app by running:
```sh
deno task dev
```
This will start the Vite server; click the output link to localhost to see your
app in the browser. If you have the
[Deno extension for VSCode](/runtime/getting_started/setup_your_environment/#visual-studio-code)
installed, you may notice that the editor highlights some errors in the code.
This is because the app created by Vite is designed with Node in mind and so
uses conventions that Deno does not (such as 'sloppy imports' - importing
modules without the file extension). Disable the Deno extension for this project
to avoid these errors or try out the
[tutorial to build a React app with a deno.json file](/runtime/tutorials/how_to_with_npm/react/).
## Add a backend
The next step is to add a backend API. We'll create a very simple API that
returns information about dinosaurs.
In the root of your new project, create an `api` folder. In that folder, create
a `main.ts` file, which will run the server, and a `data.json`, which will
contain the hard-coded dinosaur data.
Copy and paste
[this json file](https://github.com/denoland/tutorial-with-react/blob/main/api/data.json)
into the `api/data.json` file.
We're going to build out a simple API server with routes that return dinosaur
information. We'll use the [`oak` middleware framework](https://jsr.io/@oak/oak)
and the [`cors` middleware](https://jsr.io/@tajpouria/cors) to enable
[CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS).
Use the `deno add` command to add the required dependencies to your project:
```shell
deno add jsr:@oak/oak jsr:@tajpouria/cors
```
Next, update `api/main.ts` to import the required modules and create a new
`Router` instance to define some routes:
```ts title="main.ts"
import { Application, Router } from "@oak/oak";
import { oakCors } from "@tajpouria/cors";
import data from "./data.json" with { type: "json" };
const router = new Router();
```
After this, in the same file, we'll define two routes. One at `/api/dinosaurs`
to return all the dinosaurs, and `/api/dinosaurs/:dinosaur` to return a specific
dinosaur based on the name in the URL:
```ts title="main.ts"
router.get("/api/dinosaurs", (context) => {
context.response.body = data;
});
router.get("/api/dinosaurs/:dinosaur", (context) => {
  if (!context?.params?.dinosaur) {
    context.response.body = "No dinosaur name provided.";
    return;
  }
const dinosaur = data.find((item) =>
item.name.toLowerCase() === context.params.dinosaur.toLowerCase()
);
context.response.body = dinosaur ?? "No dinosaur found.";
});
```
Finally, at the bottom of the same file, create a new `Application` instance and
attach the routes we just defined to the application using
`app.use(router.routes())` and start the server listening on port 8000:
```ts title="main.ts"
const app = new Application();
app.use(oakCors());
app.use(router.routes());
app.use(router.allowedMethods());
await app.listen({ port: 8000 });
```
You can run the API server with `deno run --allow-env --allow-net api/main.ts`.
We'll create a task to run this command in the background and update the dev
task to run both the React app and the API server.
In your `package.json` file, update the `scripts` field to include the
following:
```jsonc title="package.json"
{
"scripts": {
"dev": "deno task dev:api & deno task dev:vite",
"dev:api": "deno run --allow-env --allow-net api/main.ts",
"dev:vite": "deno run -A npm:vite",
// ...
  }
}
```
If you run `deno task dev` now and visit `localhost:8000/api/dinosaurs` in your
browser, you should see a JSON response of all of the dinosaurs.
## React support in Deno
At this point, your IDE or editor may be showing you warnings about missing
types in your project. Deno has built-in TypeScript support for React
applications. To enable this, you'll need to configure your project with the
appropriate type definitions and DOM libraries. Create or update your
`deno.json` file with the following TypeScript compiler options:
```jsonc title="deno.json"
"compilerOptions": {
"types": [
"react",
"react-dom",
"@types/react"
],
"lib": [
"dom",
"dom.iterable",
"deno.ns"
],
"jsx": "react-jsx",
"jsxImportSource": "react"
}
```
## Update the entrypoint
The entrypoint for the React app is in the `src/main.tsx` file. Ours is going to
be very basic:
```tsx title="main.tsx"
import { StrictMode } from "react";
import { createRoot } from "react-dom/client";
import "./index.css";
import App from "./App.tsx";
createRoot(document.getElementById("root")!).render(
  <StrictMode>
    <App />
  </StrictMode>,
);
```
## Add a router
The app will have two routes: `/` and `/:selectedDinosaur`.
We'll use [`react-router-dom`](https://reactrouter.com/en/main) to build out
some routing logic, so we'll need to add the `react-router-dom` dependency to
your project. In the project root run:
```shell
deno add npm:react-router-dom
```
Update the `/src/App.tsx` file to import and use the
[`BrowserRouter`](https://reactrouter.com/en/main/router-components/browser-router)
component from `react-router-dom` and define the two routes:
```tsx title="App.tsx"
import { BrowserRouter, Route, Routes } from "react-router-dom";
import Index from "./pages/index.tsx";
import Dinosaur from "./pages/Dinosaur.tsx";
import "./App.css";
function App() {
  return (
    <BrowserRouter>
      <Routes>
        <Route path="/" element={<Index />} />
        <Route path="/:selectedDinosaur" element={<Dinosaur />} />
      </Routes>
    </BrowserRouter>
  );
}
export default App;
```
### Proxy to forward the API requests
Vite will be serving the application on port `5173` while our API is running on
port `8000`. Therefore, we'll need to set up a proxy so that requests to `/api`
paths reach the API server. Overwrite `vite.config.ts` with the following to
configure a proxy:
```ts title="vite.config.ts"
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";
export default defineConfig({
plugins: [react()],
server: {
proxy: {
"/api": {
target: "http://localhost:8000",
changeOrigin: true,
},
},
},
});
```
## Create the pages
We'll create two pages: `Index` and `Dinosaur`. The `Index` page will list all
the dinosaurs and the `Dinosaur` page will show details of a specific dinosaur.
Create a `pages` folder in the `src` directory and inside that create two files:
`index.tsx` and `Dinosaur.tsx`.
### Types
Both pages will use the `Dino` type to describe the shape of data they're
expecting from the API, so let's create a `types.ts` file in the `src`
directory:
```ts title="types.ts"
export type Dino = { name: string; description: string };
```
### index.tsx
This page will fetch the list of dinosaurs from the API and render them as
links:
```tsx title="index.tsx"
import { useEffect, useState } from "react";
import { Link } from "react-router-dom";
import { Dino } from "../types.ts";
export default function Index() {
  const [dinosaurs, setDinosaurs] = useState<Dino[]>([]);
useEffect(() => {
(async () => {
const response = await fetch(`/api/dinosaurs/`);
const allDinosaurs = await response.json() as Dino[];
setDinosaurs(allDinosaurs);
})();
}, []);
  return (
    <main>
      <h1>Welcome to the Dinosaur app</h1>
      <p>Click on a dinosaur below to learn more.</p>
      {dinosaurs.map((dinosaur: Dino) => {
        return (
          <Link
            to={`/${dinosaur.name.toLowerCase()}`}
            key={dinosaur.name}
            className="dinosaur"
          >
            {dinosaur.name}
          </Link>
        );
      })}
    </main>
  );
}
```
### Dinosaur.tsx
This page will fetch the details of a specific dinosaur from the API and render
it in a paragraph:
```tsx title="Dinosaur.tsx"
import { useEffect, useState } from "react";
import { Link, useParams } from "react-router-dom";
import { Dino } from "../types.ts";
export default function Dinosaur() {
const { selectedDinosaur } = useParams();
  const [dinosaur, setDino] = useState<Dino>({ name: "", description: "" });
useEffect(() => {
(async () => {
const resp = await fetch(`/api/dinosaurs/${selectedDinosaur}`);
const dino = await resp.json() as Dino;
setDino(dino);
})();
}, [selectedDinosaur]);
  return (
    <div>
      <h1>{dinosaur.name}</h1>
      <p>{dinosaur.description}</p>
      <Link to="/">🠠 Back to all dinosaurs</Link>
    </div>
  );
}
```
### Styling the list of dinosaurs
Since we are displaying the list of dinosaurs on the main page, let's do some
basic formatting. Add the following to the bottom of `src/App.css` to display
our list of dinosaurs in an orderly fashion:
```css title="src/App.css"
.dinosaur {
display: block;
}
```
## Run the app
To run the app, use the task you set up earlier:
```sh
deno task dev
```
Navigate to the local Vite server in your browser (`localhost:5173`) and you
should see the list of dinosaurs displayed, which you can click through to find
out about each one.

## Build and deploy
At this point the app is being served by the Vite development server. To serve
the app in production, you can build the app with Vite and then serve the built
files with Deno. To do so, we'll need to update the API server to serve the
built files. We'll write some middleware to do this. In your `api` directory,
create a new folder `util` with a new file called `routeStaticFilesFrom.ts` and
add the following code:
```ts title="routeStaticFilesFrom.ts"
import { Next } from "jsr:@oak/oak/middleware";
import { Context } from "jsr:@oak/oak/context";
// Configure static site routes so that we can serve
// the Vite build output and the public folder
export default function routeStaticFilesFrom(staticPaths: string[]) {
  return async (context: Context<Record<string, unknown>>, next: Next) => {
for (const path of staticPaths) {
try {
await context.send({ root: path, index: "index.html" });
return;
} catch {
continue;
}
}
await next();
};
}
```
This middleware will attempt to serve the static files from the paths provided
in the `staticPaths` array. If the file is not found it will call the next
middleware in the chain. We can now update the `api/main.ts` file to use this
middleware:
```ts title="main.ts"
import { Application, Router } from "@oak/oak";
import { oakCors } from "@tajpouria/cors";
import data from "./data.json" with { type: "json" };
import routeStaticFilesFrom from "./util/routeStaticFilesFrom.ts";
const router = new Router();
router.get("/api/dinosaurs", (context) => {
context.response.body = data;
});
router.get("/api/dinosaurs/:dinosaur", (context) => {
  if (!context?.params?.dinosaur) {
    context.response.body = "No dinosaur name provided.";
    return;
  }
const dinosaur = data.find((item) =>
item.name.toLowerCase() === context.params.dinosaur.toLowerCase()
);
context.response.body = dinosaur ? dinosaur : "No dinosaur found.";
});
const app = new Application();
app.use(oakCors());
app.use(router.routes());
app.use(router.allowedMethods());
app.use(routeStaticFilesFrom([
`${Deno.cwd()}/dist`,
`${Deno.cwd()}/public`,
]));
await app.listen({ port: 8000 });
```
Add a `serve` script to your `package.json` file to build the app with Vite and
then run the API server:
```jsonc
{
"scripts": {
// ...
"serve": "deno task build && deno task dev:api",
  }
}
```
Now you can serve the built app with Deno by running:
```sh
deno task serve
```
If you visit `localhost:8000` in your browser you should see the app running!
🦕 Now you can scaffold and develop a React app with Vite and Deno! You’re ready
to build blazing-fast web applications. We hope you enjoy exploring these
cutting-edge tools; we can't wait to see what you make!
---
# Better debugging with the console API
> An in-depth guide to advanced console debugging in Deno. Learn about console.table, timers, counters, tracers, and how to leverage the full console API beyond basic logging for better debugging workflows.
URL: https://docs.deno.com/examples/tutorials/debugging_with_console
Some of the console API is probably muscle memory for web developers, but there
is so much more than just `console.log()` for you to use. Deno has great support
for this API, so whether you’re writing JavaScript for the browser or for the
server it’s worth learning about these helpful utilities.
Let’s take a look at some of this API’s most useful methods. Your debugging is
going to get so much easier!
## `console.log()`
Hello, old friend! You’ll most likely be using this to output logging messages
to the console to help you debug.
```js
console.log("Hello, world!"); // "Hello, world!"
```
You can output multiple items by separating them with commas, like so:
```jsx
const person = { "name": "Jane", "city": "New York" };
console.log("Hello, ", person.name, "from ", person.city); // "Hello, Jane from New York"
```
Or you can use template literals:
```jsx
const person = { "name": "Jane", "city": "New York" };
console.log(`Hello ${person.name} from ${person.city}`); // "Hello, Jane from New York"
```
You can also [apply some styling using CSS](/examples/color_logging/) using the
`%c` directive:
```jsx
console.log("Wild %cblue", "color: blue", "yonder"); // Applies a blue text color to the word "blue"
```
…but there is much more you can do with the console API.
## `console.table()`
The `table` method is helpful for outputting structured data like objects for
easier inspection.
```jsx
const people = {
"john": {
"age": 30,
"city": "New York",
},
"jane": {
"age": 25,
"city": "Los Angeles",
},
};
console.table(people);
/*
┌───────┬─────┬───────────────┐
│ (idx) │ age │ city │
├───────┼─────┼───────────────┤
│ john │ 30 │ "New York" │
│ jane │ 25 │ "Los Angeles" │
└───────┴─────┴───────────────┘
*/
```
You can also specify the properties of your object that you’d like to include in
the table. Great for inspecting a summary of those detailed objects to see just
the part you are concerned with.
```jsx
console.table(people, ["city"]);
/* outputs
┌───────┬───────────────┐
│ (idx) │ city │
├───────┼───────────────┤
│ john │ "New York" │
│ jane │ "Los Angeles" │
└───────┴───────────────┘
*/
```
## Timer methods
Understanding how long specific parts of your application take is key to
removing performance bottlenecks and expensive operations. If you’ve ever
reached for JavaScript’s date method to make yourself a timer, you’ll wish
you’d known about this one long ago. It’s more convenient and more accurate.
Try using
[`console.time()`](https://developer.mozilla.org/en-US/docs/Web/API/console/time_static),
[`console.timeLog()`](https://developer.mozilla.org/en-US/docs/Web/API/console/timeLog_static),
and
[`console.timeEnd()`](https://developer.mozilla.org/en-US/docs/Web/API/console/timeEnd_static)
instead.
```jsx
console.time("My timer"); // starts a timer with label "My timer"
// do some work...
console.timeLog("My timer"); // outputs the current timer value, e.g. "My timer: 9000ms"
// do more work...
console.timeEnd("My timer"); // stops "My timer" and reports its value, e.g. "My timer: 97338ms"
```
You can create multiple timers each with their own label. Very handy!
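For example, you can run two timers side by side; a small sketch:
```jsx
console.time("download");
console.time("total");
// ...download work happens here...
console.timeEnd("download"); // e.g. "download: 1500ms"
// ...remaining work happens here...
console.timeEnd("total"); // e.g. "total: 3200ms"
```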
## Counting things with `console.count()`
It can be helpful to keep a count of how many times specific operations in your
code have been executed. Rather than doing this manually you can use
[`console.count()`](https://developer.mozilla.org/en-US/docs/Web/API/console/count_static)
which can maintain multiple counters for you based on the label you provide.
```jsx
// increment the default counter
console.count();
console.count();
console.count();
/*
"default: 1"
"default: 2"
"default: 3"
*/
```
This can be very handy inside a function, passing in a label like so:
```jsx
function pat(animal) {
console.count(animal);
return `Patting the ${animal}`;
}
pat("cat");
pat("cat");
pat("dog");
pat("cat");
/*
"cat: 1"
"cat: 2"
"dog: 1"
"cat: 3"
*/
```
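If you need to start a count over, `console.countReset()` resets the counter for
a given label:
```jsx
console.count("cat"); // "cat: 1"
console.countReset("cat"); // resets the "cat" counter
console.count("cat"); // "cat: 1"
```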
## Going deeper with `console.trace()`
For a detailed view of what is happening in your application, you can output a
stack trace to the console with
[`console.trace()`](https://developer.mozilla.org/en-US/docs/Web/API/console/trace_static):
```jsx
// main.js
function foo() {
function bar() {
console.trace();
}
bar();
}
foo();
/*
Trace
at bar (file:///PATH_TO/main.js:3:13)
at foo (file:///PATH_TO/main.js:5:3)
at file:///PATH_TO/main.js:8:1
*/
```
There’s more to explore, but these handy methods can give your JavaScript
debugging a boost and they are ready and waiting for you to use right in your
browser or in your Deno application.
Take a look at [console support](/api/web/~/Console) in the API Reference docs
for more.
---
# Deploy an app with Deno Deploy
> A step-by-step tutorial for deploying your first Deno application to Deno Deploy Early Access.
URL: https://docs.deno.com/examples/tutorials/deno_deploy
Deno Deploy allows you to host your Deno applications on a global edge network,
with built-in telemetry and CI/CD tooling.
This tutorial guides you through creating and deploying a simple Deno
application using Deno DeployEA.
## Prerequisites
1. A [GitHub](https://github.com) account
2. [Deno installed](https://docs.deno.com/runtime/manual/getting_started/installation)
on your local machine
3. Access to the
[Deno Deploy Early Access program](https://dash.deno.com/account#early-access)
## Create a simple Deno application with Vite
First, let's create a basic application with Vite. Initialize a new
[Vite](https://vite.dev/guide/) project:
```sh
deno init --npm vite
```
Give your project a name and select your framework and variant. For this
tutorial, we'll create a vanilla TypeScript app.
Change into your newly created project directory with `cd my-project-name`,
then run:
```sh
deno install
deno run dev
```
You should see a basic app running at
[http://127.0.0.1:5173/](http://127.0.0.1:5173/).
You can edit the `main.ts` file to see changes in the browser.
## Create a GitHub repository
1. Go to [GitHub](https://github.com) and create a new repository.
2. Initialize your local directory as a Git repository:
```sh
git init
git add .
git commit -m "Initial commit"
```
3. Add your GitHub repository as a remote and push your code:
```sh
git remote add origin https://github.com/your-username/my-first-deno-app.git
git branch -M main
git push -u origin main
```
## Sign up for Deno Deploy Early Access
1. Visit the
[Deno Deploy account settings](https://dash.deno.com/account#early-access)
2. Click "Join the Early Access program"
3. Once approved, you'll receive an email with access instructions

## Create a Deno Deploy organization
1. Navigate to [app.deno.com](https://app.deno.com)
2. Click "+ New Organization"
3. Select the 'Standard Deploy' organization type
4. Enter an organization name and slug (this cannot be changed later)
5. Click "Create Standard Deploy organization"
## Create and deploy your application
1. From your organization's dashboard, click "Try new Deno Deploy Early Access"
2. Then click "+ New App"
3. Select the GitHub repository you created earlier
4. The app configuration should be automatically detected, but you can verify
these settings by clicking the "Edit build config" button:
- Framework preset: No preset
- Runtime configuration: Static Site
- Install command: `deno install`
- Build command: `deno task build`
- Static Directory: `dist`
5. Click "Create App" to start the deployment process
## Monitor your deployment
1. Watch the build logs as your application is deployed
2. Once deployment completes, you'll see a preview URL (typically
`https://your-app-name.your-org-name.deno.net`)
3. Click the URL to view your deployed application!
## Make changes and redeploy
Let's update the application and see how changes are deployed:
1. Update your `main.ts` file locally:
```ts title="main.ts"
import './style.css'
import typescriptLogo from './typescript.svg'
import viteLogo from '/vite.svg'
import { setupCounter } from './counter.ts'
document.querySelector<HTMLDivElement>('#app')!.innerHTML = `
  <div>
    <a href="https://vite.dev" target="_blank">
      <img src="${viteLogo}" class="logo" alt="Vite logo" />
    </a>
    <a href="https://www.typescriptlang.org/" target="_blank">
      <img src="${typescriptLogo}" class="logo vanilla" alt="TypeScript logo" />
    </a>
    <h1>Hello from Deno Deploy!</h1>
    <div class="card">
      <button id="counter" type="button"></button>
    </div>
    <p class="read-the-docs">
      Click on the Vite and TypeScript logos to learn more
    </p>
  </div>
`

setupCounter(document.querySelector<HTMLButtonElement>('#counter')!)
```
2. Commit and push your changes:
```sh
git add .
git commit -m "Update application"
git push
```
Return to your Deno Deploy dashboard to see a new build automatically start.
Once the build completes, visit your application URL to see the update.
## Explore observability features
Deno DeployEA provides comprehensive observability tools:
1. From your application dashboard, click "Logs" in the sidebar
- You'll see console output from your application
- Use the search bar to filter logs (e.g., `context:production`)
2. Click "Traces" to view request traces
- Select a trace to see detailed timing information
- Examine spans to understand request processing
3. Click "Metrics" to view application performance metrics
- Monitor request counts, error rates, and response times
🦕 Now that you've deployed your first application, you might want to:
1. [Add a custom domain](/deploy/early-access/reference/domains/) to your
application
2. Explore [framework support](/deploy/early-access/reference/frameworks/) for
Next.js, Astro, and other frameworks
3. Learn about [caching strategies](/deploy/early-access/reference/caching/) to
improve performance
4. Set up different
[environments](/deploy/early-access/reference/env-vars-and-contexts/) for
development and production
For more information, check out the
[Deno DeployEA Reference documentation](/deploy/early-access/reference/).
---
# Monitor your app with OpenTelemetry and Deno Deploy
> A step-by-step tutorial for adding custom OpenTelemetry instrumentation to your Deno Deploy application.
URL: https://docs.deno.com/examples/tutorials/deploy_otel
Deno DeployEA includes built-in OpenTelemetry support that
automatically captures traces for HTTP requests, database queries, and other
operations. This tutorial shows how to add custom OpenTelemetry instrumentation
to your applications for more detailed observability.
## Prerequisites
1. A [GitHub](https://github.com) account
2. [Deno installed](https://docs.deno.com/runtime/manual/getting_started/installation)
on your local machine
3. Access to the
[Deno Deploy Early Access program](https://dash.deno.com/account#early-access)
4. Basic familiarity with
[OpenTelemetry concepts](https://opentelemetry.io/docs/concepts/)
## Create a basic API application
First, let's create a simple API server that we'll instrument with
OpenTelemetry:
```ts title="main.ts"
const dataStore: Record<string, unknown> = {};
async function handler(req: Request): Promise<Response> {
const url = new URL(req.url);
// Simulate random latency
await new Promise((resolve) => setTimeout(resolve, Math.random() * 200));
try {
// Handle product listing
if (url.pathname === "/products" && req.method === "GET") {
return new Response(JSON.stringify(Object.values(dataStore)), {
headers: { "Content-Type": "application/json" },
});
}
// Handle product creation
if (url.pathname === "/products" && req.method === "POST") {
const data = await req.json();
const id = crypto.randomUUID();
dataStore[id] = data;
return new Response(JSON.stringify({ id, ...data }), {
status: 201,
headers: { "Content-Type": "application/json" },
});
}
// Handle product retrieval by ID
if (url.pathname.startsWith("/products/") && req.method === "GET") {
const id = url.pathname.split("/")[2];
const product = dataStore[id];
if (!product) {
return new Response("Product not found", { status: 404 });
}
return new Response(JSON.stringify(product), {
headers: { "Content-Type": "application/json" },
});
}
// Handle root route
if (url.pathname === "/") {
return new Response("Product API - Try /products endpoint");
}
return new Response("Not Found", { status: 404 });
} catch (error) {
console.error("Error handling request:", error);
return new Response("Internal Server Error", { status: 500 });
}
}
console.log("Server running on http://localhost:8000");
Deno.serve(handler, { port: 8000 });
```
Save this file and run it locally:
```sh
deno run --allow-net main.ts
```
Test the API with curl or a browser to ensure it works:
```sh
# List products (empty at first)
curl http://localhost:8000/products
# Add a product
curl -X POST http://localhost:8000/products \
-H "Content-Type: application/json" \
-d '{"name": "Test Product", "price": 19.99}'
```
## Add OpenTelemetry instrumentation
Now, let's add custom OpenTelemetry instrumentation to our application. Create a
new file called `instrumented-main.ts`:
```ts title="instrumented-main.ts"
import { SpanStatusCode, trace } from "npm:@opentelemetry/api@1";
// Get the OpenTelemetry tracer
const tracer = trace.getTracer("product-api");
const dataStore: Record<string, unknown> = {};
// Simulate a database operation with custom span
async function queryDatabase(
  operation: string,
  data?: unknown,
): Promise<unknown> {
return await tracer.startActiveSpan(`database.${operation}`, async (span) => {
try {
// Add attributes to the span for better context
span.setAttributes({
"db.system": "memory-store",
"db.operation": operation,
});
// Simulate database latency
const delay = Math.random() * 100;
await new Promise((resolve) => setTimeout(resolve, delay));
// Add latency information to the span
span.setAttributes({ "db.latency_ms": delay });
if (operation === "list") {
return Object.values(dataStore);
} else if (operation === "get") {
return dataStore[data as string];
} else if (operation === "insert") {
const id = crypto.randomUUID();
dataStore[id] = data as string;
return { id, data };
}
return null;
} catch (error) {
// Record any errors to the span
      span.recordException(error as Error);
      span.setStatus({ code: SpanStatusCode.ERROR });
throw error;
} finally {
// End the span when we're done
span.end();
}
});
}
async function handler(req: Request): Promise<Response> {
// Create a parent span for the entire request
return await tracer.startActiveSpan(
`${req.method} ${new URL(req.url).pathname}`,
async (parentSpan) => {
const url = new URL(req.url);
// Add request details as span attributes
parentSpan.setAttributes({
"http.method": req.method,
"http.url": req.url,
"http.route": url.pathname,
});
try {
// Handle product listing
if (url.pathname === "/products" && req.method === "GET") {
const products = await queryDatabase("list");
return new Response(JSON.stringify(products), {
headers: { "Content-Type": "application/json" },
});
}
// Handle product creation
if (url.pathname === "/products" && req.method === "POST") {
// Create a span for parsing request JSON
const data = await tracer.startActiveSpan(
"parse.request.body",
async (span) => {
try {
const result = await req.json();
return result;
} catch (error) {
            span.recordException(error as Error);
            span.setStatus({ code: SpanStatusCode.ERROR });
throw error;
} finally {
span.end();
}
},
);
const result = await queryDatabase("insert", data);
return new Response(JSON.stringify(result), {
status: 201,
headers: { "Content-Type": "application/json" },
});
}
// Handle product retrieval by ID
if (url.pathname.startsWith("/products/") && req.method === "GET") {
const id = url.pathname.split("/")[2];
parentSpan.setAttributes({ "product.id": id });
const product = await queryDatabase("get", id);
if (!product) {
parentSpan.setAttributes({
"error": true,
"error.type": "not_found",
});
return new Response("Product not found", { status: 404 });
}
return new Response(JSON.stringify(product), {
headers: { "Content-Type": "application/json" },
});
}
// Handle root route
if (url.pathname === "/") {
return new Response("Product API - Try /products endpoint");
}
parentSpan.setAttributes({ "error": true, "error.type": "not_found" });
return new Response("Not Found", { status: 404 });
} catch (error) {
console.error("Error handling request:", error);
      // Record the error in the span
      const err = error as Error;
      parentSpan.recordException(err);
      parentSpan.setAttributes({
        "error": true,
        "error.type": err.name,
        "error.message": err.message,
      });
      parentSpan.setStatus({ code: SpanStatusCode.ERROR });
return new Response("Internal Server Error", { status: 500 });
} finally {
// End the parent span when we're done
parentSpan.end();
}
},
);
}
console.log(
"Server running with OpenTelemetry instrumentation on http://localhost:8000",
);
Deno.serve(handler, { port: 8000 });
```
Run the instrumented version locally:
```sh
deno run --allow-net instrumented-main.ts
```
Test the API again with curl to generate some traces.
## Create a GitHub repository
1. Go to [GitHub](https://github.com) and create a new repository.
2. Initialize your local directory as a Git repository:
```sh
git init
git add .
git commit -m "Add OpenTelemetry instrumented API"
```
3. Add your GitHub repository as a remote and push your code:
```sh
git remote add origin https://github.com/your-username/otel-demo-app.git
git branch -M main
git push -u origin main
```
## Deploy to Deno Deploy Early Access
1. Navigate to [app.deno.com](https://app.deno.com)
2. Select your organization or create a new one if needed
3. Click "+ New App"
4. Select the GitHub repository you created earlier
5. Configure the build settings:
- Framework preset: No preset
- Runtime configuration: Dynamic
- Entrypoint: `instrumented-main.ts`
6. Click "Create App" to start the deployment process
## Generate sample traffic
To generate sample traces and metrics, let's send some traffic to your deployed
application:
1. Copy your deployment URL from the Deno Deploy dashboard
2. Send several requests to different endpoints:
```sh
# Store your app URL in a variable
APP_URL=https://your-app-name.your-org-name.deno.net
# Get the root route
curl $APP_URL/
# List products (empty at first)
curl $APP_URL/products
# Add some products
curl -X POST $APP_URL/products -H "Content-Type: application/json" -d '{"name": "Laptop", "price": 999.99}'
curl -X POST $APP_URL/products -H "Content-Type: application/json" -d '{"name": "Headphones", "price": 129.99}'
curl -X POST $APP_URL/products -H "Content-Type: application/json" -d '{"name": "Mouse", "price": 59.99}'
# List products again
curl $APP_URL/products
# Try to access a non-existent product (will generate an error span)
curl $APP_URL/products/nonexistent-id
```
## Explore OpenTelemetry traces and metrics
Now let's explore the observability data collected by Deno Deploy:
1. From your application dashboard, click "Traces" in the sidebar
- You'll see a list of traces for each request to your application
- You can filter traces by HTTP method or status code using the search bar
2. Select one of your `/products` POST traces to see detailed information:
- The parent span for the entire request
- Child spans for database operations
- The span for parsing the request body

3. Click on individual spans to see their details:
- Duration and timing information
- Attributes you set like `db.operation` and `db.latency_ms`
- Any recorded exceptions
4. Click "Logs" in the sidebar to see console output with trace context:
- Notice how logs emitted during a traced operation are automatically linked
to the trace
- Click "View trace" on a log line to see the associated trace
5. Click "Metrics" to view application performance metrics:
- HTTP request counts by endpoint
- Error rates
- Response time distributions
🦕 The automatic instrumentation in Deno DeployEA, combined with your custom
instrumentation, provides comprehensive visibility into your application's
performance and behavior.
For more information about OpenTelemetry in Deno, check out these resources:
- [OpenTelemetry in Deno documentation](/runtime/fundamentals/open_telemetry/)
- [Deno DeployEA Observability reference](/deploy/early-access/reference/observability/)
- [OpenTelemetry official documentation](https://opentelemetry.io/docs/)
---
# How to deploy Deno to Digital Ocean
> A step-by-step guide to deploying Deno applications on Digital Ocean. Learn about Docker containerization, GitHub Actions automation, container registries, and how to set up continuous deployment workflows.
URL: https://docs.deno.com/examples/tutorials/digital_ocean
Digital Ocean is a popular cloud infrastructure provider offering a variety of
hosting services ranging from networking, to compute, to storage.
Here's a step-by-step guide to deploying a Deno app to Digital Ocean using
Docker and GitHub Actions.
The prerequisites for this are:
- [`docker` CLI](https://docs.docker.com/engine/reference/commandline/cli/)
- a [GitHub account](https://github.com)
- a [Digital Ocean account](https://digitalocean.com)
- [`doctl` CLI](https://docs.digitalocean.com/reference/doctl/how-to/install/)
## Create Dockerfile and docker-compose.yml
To focus on the deployment, our app will simply be a `main.ts` file that returns
a string as an HTTP response:
```ts title="main.ts"
import { Application } from "jsr:@oak/oak";
const app = new Application();
app.use((ctx) => {
ctx.response.body = "Hello from Deno and Digital Ocean!";
});
await app.listen({ port: 8000 });
```
Then, we'll create two files -- `Dockerfile` and `docker-compose.yml` -- to
build the Docker image.
In our `Dockerfile`, let's add:
```Dockerfile title="Dockerfile"
FROM denoland/deno
EXPOSE 8000
WORKDIR /app
ADD . /app
RUN deno install --entrypoint main.ts
CMD ["run", "--allow-net", "main.ts"]
```
Then, in our `docker-compose.yml`:
```yml
version: "3"
services:
web:
build: .
container_name: deno-container
image: deno-image
ports:
- "8000:8000"
```
Let's test this locally by running `docker compose -f docker-compose.yml build`,
then `docker compose up`, and going to `localhost:8000`.

It works!
## Build, Tag, and Push your Docker image to Digital Ocean Container Registry
Digital Ocean has its own private Container Registry, with which we can push and
pull Docker images. In order to use this registry, let's
[install and authenticate `doctl` on the command line](https://docs.digitalocean.com/reference/doctl/how-to/install/).
After that, we'll create a new private registry named `deno-on-digital-ocean`:
```shell
doctl registry create deno-on-digital-ocean
```
Using our Dockerfile and docker-compose.yml, we'll build a new image, tag it,
and push it to the registry. Note that `docker-compose.yml` will name the build
locally as `deno-image`.
```shell
docker compose -f docker-compose.yml build
```
Let's [tag](https://docs.docker.com/engine/reference/commandline/tag/) it with
`new`:
```shell
docker tag deno-image registry.digitalocean.com/deno-on-digital-ocean/deno-image:new
```
Now we can push it to the registry.
```shell
docker push registry.digitalocean.com/deno-on-digital-ocean/deno-image:new
```
You should see your new `deno-image` with the `new` tag in your
[Digital Ocean container registry](https://cloud.digitalocean.com/registry):

Perfect!
## Deploy to Digital Ocean via SSH
Once our `deno-image` is in the registry, we can run it anywhere using
`docker run`. In this case, we'll run it while in our
[Digital Ocean Droplet](https://www.digitalocean.com/products/droplets), their
hosted virtual machine.
While on your [Droplet page](https://cloud.digitalocean.com/droplets), click on
your Droplet and then `console` to SSH into the virtual machine. (Or you can
[ssh directly from your command line](https://docs.digitalocean.com/products/droplets/how-to/connect-with-ssh/).)
To pull down the `deno-image` image and run it, let's run:
```shell
docker run -d --restart always -it -p 8000:8000 --name deno-image registry.digitalocean.com/deno-on-digital-ocean/deno-image:new
```
Using our browser to go to the Digital Ocean address, we now see:

Boom!
## Automate the Deployment via GitHub Actions
Let's automate that entire process with GitHub Actions.
First, let's get all of the environment variables needed for logging into
`doctl` and SSHing into the Droplet:
- [DIGITALOCEAN_ACCESS_TOKEN](https://docs.digitalocean.com/reference/api/create-personal-access-token/)
- DIGITALOCEAN_HOST (the IP address of your Droplet)
- DIGITALOCEAN_USERNAME (the default is `root`)
- DIGITALOCEAN_SSHKEY (more on this below)
### Generate `DIGITALOCEAN_SSHKEY`
The `DIGITALOCEAN_SSHKEY` is a private key whose public counterpart exists on
the virtual machine in its `~/.ssh/authorized_keys` file.
To do this, first let's run `ssh-keygen` on your local machine:
```shell
ssh-keygen
```
When prompted for an email, **be sure to use your GitHub email** for the GitHub
Action to authenticate properly. Your final output should look something like
this:
```console
Output
Your identification has been saved in /your_home/.ssh/id_rsa
Your public key has been saved in /your_home/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:/hk7MJ5n5aiqdfTVUZr+2Qt+qCiS7BIm5Iv0dxrc3ks user@host
The key's randomart image is:
+---[RSA 3072]----+
| .|
| + |
| + |
| . o . |
|o S . o |
| + o. .oo. .. .o|
|o = oooooEo+ ...o|
|.. o *o+=.*+o....|
| =+=ooB=o.... |
+----[SHA256]-----+
```
Next, we'll upload the newly generated public key to your Droplet. You can
either use [`ssh-copy-id`](https://www.ssh.com/academy/ssh/copy-id) or copy it
manually: SSH into your Droplet and paste the key into
`~/.ssh/authorized_keys`.
Using `ssh-copy-id`:
```shell
ssh-copy-id {{ username }}@{{ host }}
```
This command will prompt you for the password. Note that it will automatically
copy the `id_rsa.pub` key from your local machine and paste it into your
Droplet's `~/.ssh/authorized_keys` file. If you've named your key something
other than `id_rsa`, you can pass it to the command with the `-i` flag:
```shell
ssh-copy-id -i ~/.ssh/mykey {{ username }}@{{ host }}
```
To test whether this is done successfully:
```shell
ssh -i ~/.ssh/mykey {{ username }}@{{ host }}
```
Awesome!
### Define the yml File
The final step is to put this all together: we take each step from the manual
deployment and add it to a GitHub Actions workflow yml file:
```yml
name: Deploy to Digital Ocean

on:
  push:
    branches:
      - main

env:
  REGISTRY: "registry.digitalocean.com/deno-on-digital-ocean"
  IMAGE_NAME: "deno-image"

jobs:
  build_and_push:
    name: Build, Push, and Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout main
        uses: actions/checkout@v4

      - name: Set $TAG from shortened sha
        run: echo "TAG=`echo ${GITHUB_SHA} | cut -c1-8`" >> $GITHUB_ENV

      - name: Build container image
        run: docker compose -f docker-compose.yml build

      - name: Tag container image
        run: docker tag ${{ env.IMAGE_NAME }} ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ env.TAG }}

      - name: Install `doctl`
        uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}

      - name: Log in to Digital Ocean Container Registry
        run: doctl registry login --expiry-seconds 600

      - name: Push image to Digital Ocean Container Registry
        run: docker push ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ env.TAG }}

      - name: Deploy via SSH
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.DIGITALOCEAN_HOST }}
          username: ${{ secrets.DIGITALOCEAN_USERNAME }}
          key: ${{ secrets.DIGITALOCEAN_SSHKEY }}
          script: |
            # Log in to Digital Ocean Container Registry
            docker login -u ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }} -p ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }} registry.digitalocean.com
            # Stop and remove the running container, if any
            docker stop ${{ env.IMAGE_NAME }}
            docker rm ${{ env.IMAGE_NAME }}
            # Run a new container from the new image
            docker run -d --restart always -it -p 8000:8000 --name ${{ env.IMAGE_NAME }} ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ env.TAG }}
```
When you push to GitHub, this yml file is automatically detected, triggering the
Deploy action.
---
# Build a Database App with Drizzle ORM and Deno
> Step-by-step guide to building database applications with Drizzle ORM and Deno. Learn about schema management, type-safe queries, PostgreSQL integration, migrations, and how to implement CRUD operations.
URL: https://docs.deno.com/examples/tutorials/drizzle
[Drizzle ORM](https://orm.drizzle.team/) is a TypeScript ORM that provides a
type-safe way to interact with your database. In this tutorial, we'll set up
Drizzle ORM with Deno and PostgreSQL to create, read, update, and delete
dinosaur data:
- [Install Drizzle](#install-drizzle)
- [Configure Drizzle](#configure-drizzle)
- [Define schemas](#define-schemas)
- [Interact with the database](#interact-with-the-database)
- [What's next?](#whats-next)
You can find all the code for this tutorial in
[this GitHub repo](https://github.com/denoland/examples/tree/main/with-drizzle).
## Install Drizzle
First, we'll install the required dependencies using Deno's npm compatibility.
We'll be using Drizzle with
[Postgres](https://orm.drizzle.team/docs/get-started-postgresql), but you can
also use [MySQL](https://orm.drizzle.team/docs/get-started-mysql) or
[SQLite](https://orm.drizzle.team/docs/get-started-sqlite). (If you don't have
Postgres, you can [install it here](https://www.postgresql.org/download/).)
```bash
deno install npm:drizzle-orm npm:drizzle-kit npm:pg npm:@types/pg
```
This installs Drizzle ORM and its associated tools —
[drizzle-kit](https://orm.drizzle.team/docs/kit-overview) for schema migrations,
[pg](https://www.npmjs.com/package/pg) for PostgreSQL connectivity, and
[the TypeScript types for PostgreSQL](https://www.npmjs.com/package/@types/pg).
These packages will allow us to interact with our database in a type-safe way
while maintaining compatibility with Deno's runtime environment.
It will also create a `deno.json` file in your project root to manage the npm
dependencies:
```json
{
  "imports": {
    "@types/pg": "npm:@types/pg@^8.11.10",
    "drizzle-kit": "npm:drizzle-kit@^0.27.2",
    "drizzle-orm": "npm:drizzle-orm@^0.36.0",
    "pg": "npm:pg@^8.13.1"
  }
}
```
## Configure Drizzle
Next, let's create a `drizzle.config.ts` file in your project root. This file
will configure Drizzle to work with your PostgreSQL database:
```tsx
import { defineConfig } from "drizzle-kit";
export default defineConfig({
  out: "./drizzle",
  schema: "./src/db/schema.ts",
  dialect: "postgresql",
  dbCredentials: {
    url: Deno.env.get("DATABASE_URL")!,
  },
});
```
These config settings determine:
- where to output migration files (`./drizzle`)
- where to find your schema definition (`./src/db/schema.ts`)
- PostgreSQL as your database dialect, and
- how to connect to your database using the URL stored in your environment
variables
`drizzle-kit` will use this configuration to manage your database schema and
generate SQL migrations automatically.
We’ll also need a `.env` file in the project root containing the `DATABASE_URL`
connection string:
```bash
DATABASE_URL=postgresql://[user[:password]@][host][:port]/[dbname]
```
Be sure to replace the login credentials with yours.
Next, let's connect to the database and use Drizzle to populate our tables.
## Define schemas
There are two ways that you can define your table schema with Drizzle. If you
already have Postgres tables defined, you can infer them with `pull`; otherwise,
you can define them in code, then use Drizzle to create a new table. We'll
explore both approaches below.
### Infer schema with `pull`
If you already have Postgres tables before adding Drizzle, then you can
introspect your database schema to automatically generate TypeScript types and
table definitions with the command
[`npm:drizzle-kit pull`](https://orm.drizzle.team/docs/drizzle-kit-pull). This
is particularly useful when working with an existing database or when you want
to ensure your code stays in sync with your database structure.
Let's say our current database already has the following table schemas:

We'll run the following command to introspect the database and populate several
files under a `./drizzle` directory:
```bash
deno --env -A --node-modules-dir npm:drizzle-kit pull
Failed to find Response internal state key
No config path provided, using default 'drizzle.config.ts'
Reading config file '/private/tmp/deno-drizzle-example/drizzle.config.ts'
Pulling from ['public'] list of schemas
Using 'pg' driver for database querying
[✓] 2 tables fetched
[✓] 8 columns fetched
[✓] 0 enums fetched
[✓] 0 indexes fetched
[✓] 1 foreign keys fetched
[✓] 0 policies fetched
[✓] 0 check constraints fetched
[✓] 0 views fetched
[i] No SQL generated, you already have migrations in project
[✓] You schema file is ready ➜ drizzle/schema.ts 🚀
[✓] You relations file is ready ➜ drizzle/relations.ts 🚀
```
We use the `--env` flag to read the `.env` file with our database URL and the
`--node-modules-dir` flag to create a `node_modules` folder that will allow us
to use `drizzle-kit` correctly.
The above command will create a number of files within a `./drizzle` directory
that define the schema, track changes, and provide the necessary information for
database migrations:
- `drizzle/schema.ts`: This file defines the database schema using Drizzle ORM's
schema definition syntax.
- `drizzle/relations.ts`: This file is intended to define relationships between
tables using Drizzle ORM's relations API.
- `drizzle/0000_long_veda.sql`: A SQL migration file that contains the SQL
code to create the database table(s). The code is commented out — you can
uncomment this code if you want to run this migration to create the table(s)
in a new environment.
- `drizzle/meta/0000_snapshot.json`: A snapshot file that represents the current
state of your database schema.
- `drizzle/meta/_journal.json`: This file keeps track of the migrations that
have been applied to your database. It helps Drizzle ORM know which migrations
have been run and which ones still need to be applied.
### Define schema in Drizzle first
If you don't already have an existing table defined in Postgres (e.g. you're
starting a completely new project), you can define the tables and types in code
and have Drizzle create them.
Let's create a new directory `./src/db/` and in it, a `schema.ts` file, which
we'll populate with the below:
```ts
// schema.ts
import {
  boolean,
  foreignKey,
  integer,
  pgTable,
  serial,
  text,
  timestamp,
} from "drizzle-orm/pg-core";

export const dinosaurs = pgTable("dinosaurs", {
  id: serial().primaryKey().notNull(),
  name: text(),
  description: text(),
});

export const tasks = pgTable("tasks", {
  id: serial().primaryKey().notNull(),
  dinosaurId: integer("dinosaur_id"),
  description: text(),
  dateCreated: timestamp("date_created", { mode: "string" }).defaultNow(),
  isComplete: boolean("is_complete"),
}, (table) => {
  return {
    tasksDinosaurIdFkey: foreignKey({
      columns: [table.dinosaurId],
      foreignColumns: [dinosaurs.id],
      name: "tasks_dinosaur_id_fkey",
    }),
  };
});
```
The above code defines our two tables, `dinosaurs` and `tasks`, and the relation between them. Learn more about [defining schemas and their relations](https://orm.drizzle.team/docs/relations) in the Drizzle documentation.
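For comparison, a `relations.ts` matching this schema could look like the following (a minimal sketch using Drizzle's `relations()` helper, similar in shape to what `drizzle-kit pull` generates; the generated file may differ in detail):

```ts
// relations.ts -- wires tasks.dinosaur_id to dinosaurs.id
import { relations } from "drizzle-orm/relations";
import { dinosaurs, tasks } from "./schema.ts";

// Each task belongs to one dinosaur...
export const tasksRelations = relations(tasks, ({ one }) => ({
  dinosaur: one(dinosaurs, {
    fields: [tasks.dinosaurId],
    references: [dinosaurs.id],
  }),
}));

// ...and a dinosaur can have many tasks.
export const dinosaursRelations = relations(dinosaurs, ({ many }) => ({
  tasks: many(tasks),
}));
```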
Once we have defined `./src/db/schema.ts`, we can create the tables and their
specified relationship by creating a migration:
```bash
deno -A --node-modules-dir npm:drizzle-kit generate
Failed to find Response internal state key
No config path provided, using default 'drizzle.config.ts'
Reading config file '/private/tmp/drizzle/drizzle.config.ts'
2 tables
dinosaurs 3 columns 0 indexes 0 fks
tasks 5 columns 0 indexes 1 fks
```
The above command will create a `./drizzle/` folder that contains migration
scripts and logs.
## Interact with the database
Now that we have set up Drizzle ORM, we can use it to simplify managing data in
our Postgres database. First, Drizzle suggests taking the `schema.ts` and
`relations.ts` and copying them to the `./src/db` directory to use within an
application.
Let's create a `./src/db/db.ts` which exports a few helper functions that'll
make it easier for us to interact with the database:
```ts
import { drizzle } from "drizzle-orm/node-postgres";
import { dinosaurs as dinosaurSchema, tasks as taskSchema } from "./schema.ts";
import { dinosaursRelations, tasksRelations } from "./relations.ts";
import { eq } from "drizzle-orm/expressions";
import pg from "pg";

// Use pg driver.
const { Pool } = pg;

// Instantiate Drizzle client with pg driver and schema.
export const db = drizzle({
  client: new Pool({
    connectionString: Deno.env.get("DATABASE_URL"),
  }),
  schema: { dinosaurSchema, taskSchema, dinosaursRelations, tasksRelations },
});

// Insert dinosaur.
export async function insertDinosaur(
  dinosaurObj: typeof dinosaurSchema.$inferInsert,
) {
  return await db.insert(dinosaurSchema).values(dinosaurObj);
}

// Insert task.
export async function insertTask(taskObj: typeof taskSchema.$inferInsert) {
  return await db.insert(taskSchema).values(taskObj);
}

// Find dinosaur by id.
export async function findDinosaurById(dinosaurId: number) {
  return await db.select().from(dinosaurSchema).where(
    eq(dinosaurSchema.id, dinosaurId),
  );
}

// Find dinosaur by name.
export async function findDinosaurByName(name: string) {
  return await db.select().from(dinosaurSchema).where(
    eq(dinosaurSchema.name, name),
  );
}

// Find tasks based on dinosaur id.
export async function findDinosaurTasksByDinosaurId(dinosaurId: number) {
  return await db.select().from(taskSchema).where(
    eq(taskSchema.dinosaurId, dinosaurId),
  );
}

// Update dinosaur.
export async function updateDinosaur(
  dinosaurObj: typeof dinosaurSchema.$inferSelect,
) {
  return await db.update(dinosaurSchema).set(dinosaurObj).where(
    eq(dinosaurSchema.id, dinosaurObj.id),
  );
}

// Update task.
export async function updateTask(taskObj: typeof taskSchema.$inferSelect) {
  return await db.update(taskSchema).set(taskObj).where(
    eq(taskSchema.id, taskObj.id),
  );
}

// Delete dinosaur by id.
export async function deleteDinosaurById(id: number) {
  return await db.delete(dinosaurSchema).where(eq(dinosaurSchema.id, id));
}

// Delete task by id.
export async function deleteTask(id: number) {
  return await db.delete(taskSchema).where(eq(taskSchema.id, id));
}
```
Now we can import some of these helper functions to a script where we can
perform some simple CRUD operations on our database. Let's create a new file
`./src/script.ts`:
```ts
import {
  deleteDinosaurById,
  findDinosaurByName,
  insertDinosaur,
  insertTask,
  updateDinosaur,
} from "./db/db.ts";

// Create a new dinosaur.
await insertDinosaur({
  name: "Denosaur",
  description: "Dinosaurs should be simple.",
});

// Find that dinosaur by name. Select queries return an array of
// rows, so take the first (and only) match.
const [res] = await findDinosaurByName("Denosaur");

// Create a task with that dinosaur by its id.
await insertTask({
  dinosaurId: res.id,
  description: "Remove unnecessary config.",
  isComplete: false,
});

// Update a dinosaur with a new description.
const newDeno = {
  id: res.id,
  name: "Denosaur",
  description: "The simplest dinosaur.",
};
await updateDinosaur(newDeno);

// Delete the dinosaur (and any tasks it has).
await deleteDinosaurById(res.id);
```
We can run it and it will perform all of the actions on the database:
```bash
deno -A --env ./src/script.ts
```
## What's next?
Drizzle ORM is a popular data-mapping tool that simplifies managing and
maintaining data models and working with your database. Hopefully, this
tutorial gives you a start on how to use Drizzle in your Deno projects.
Now that you have a basic understanding of how to use Drizzle ORM with Deno, you
could:
1. Add more complex database relationships
2. [Implement a REST API](https://docs.deno.com/examples/) using
[Hono](https://jsr.io/@hono/hono) to serve your dinosaur data
3. Add validation and error handling to your database operations
4. Write tests for your database interactions
5. [Deploy your application to the cloud](https://docs.deno.com/runtime/tutorials/#deploying-deno-projects)
🦕 Happy coding with Deno and Drizzle ORM! The type-safety and simplicity of
this stack make it a great choice for building modern web applications.
---
# How to use Express with Deno
> Step-by-step guide to using Express.js with Deno. Learn how to set up an Express server, configure routes, handle middleware, and build REST APIs using Deno's Node.js compatibility features.
URL: https://docs.deno.com/examples/tutorials/express
[Express](https://expressjs.com/) is a popular web framework known for being
simple and unopinionated with a large ecosystem of middleware.
This How To guide will show you how to create a simple API using Express and
Deno.
[View source here.](https://github.com/denoland/tutorial-with-express)
## Initialize a new deno project
In your command line, run the following to create a new starter project, then
navigate into the project directory:
```sh
deno init my-express-project
cd my-express-project
```
## Install Express
To install Express, we'll use the `npm:` module specifier. This specifier allows
us to import modules from npm:
```sh
deno add npm:express
```
This will add the latest `express` package to the `imports` field in your
`deno.json` file. Now you can import `express` in your code with
`import express from "express";`.
## Update `main.ts`
In the `main.ts`, let's create a simple server:
```ts
import express from "express";

const app = express();

app.get("/", (req, res) => {
  res.send("Welcome to the Dinosaur API!");
});

app.listen(8000);
console.log(`Server is running on http://localhost:8000`);
```
You may notice that your editor is complaining about the `req` and `res`
parameters. This is because Deno does not have types for the `express` module.
To fix this, you can import the Express types file directly from npm. Add the
following comment to the top of your `main.ts` file:
```ts
// @ts-types="npm:@types/express@4.17.15"
```
This comment tells Deno to use the types from the `@types/express` package.
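With the comment in place, the top of your `main.ts` now looks like this:

```ts title="main.ts"
// @ts-types="npm:@types/express@4.17.15"
import express from "express";

const app = express();
```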
## Run the server
When you initialized the project, Deno set up a task that runs the `main.ts`
file; you can see it in the `deno.json` file. Update the `dev` task to include
the [`--allow-net`](/runtime/fundamentals/security/#network-access) flag:
```jsonc
{
  "tasks": {
    "dev": "deno run --allow-net main.ts"
  },
  ...
}
```

This will allow the project to make network requests. You can [read more about permissions flags](/runtime/fundamentals/security/).

Now you can run the server with:

```sh
deno run dev
```
If you visit `localhost:8000` in your browser, you should see:
**Welcome to the Dinosaur API!**
## Add data and routes
The next step here is to add some data. We'll use this Dinosaur data that we
found from [this article](https://www.thoughtco.com/dinosaurs-a-to-z-1093748).
Feel free to
[copy it from here](https://raw.githubusercontent.com/denoland/tutorial-with-express/refs/heads/main/data.json).
Create a `data.json` file in the root of your project, and paste in the dinosaur
data.
Next, we'll import that data into `main.ts`:
```ts
import data from "./data.json" with { type: "json" };
```
We will create the routes to access that data.
To keep it simple, let's just define `GET` handlers for `/api/` and
`/api/:dinosaur`. Add the following code after the `const app = express();`
line:
```ts
app.get("/", (req, res) => {
res.send("Welcome to the Dinosaur API!");
});
app.get("/api", (req, res) => {
res.send(data);
});
app.get("/api/:dinosaur", (req, res) => {
if (req?.params?.dinosaur) {
const found = data.find((item) =>
item.name.toLowerCase() === req.params.dinosaur.toLowerCase()
);
if (found) {
res.send(found);
} else {
res.send("No dinosaurs found.");
}
}
});
app.listen(8000);
console.log(`Server is running on http://localhost:8000`);
```
Let's run the server with `deno run dev` and check out `localhost:8000/api` in
your browser. You should see a list of dinosaurs!
```jsonc
[
  {
    "name": "Aardonyx",
    "description": "An early stage in the evolution of sauropods."
  },
  {
    "name": "Abelisaurus",
    "description": "\"Abel's lizard\" has been reconstructed from a single skull."
  },
  {
    "name": "Abrictosaurus",
    "description": "An early relative of Heterodontosaurus."
  },
  ...
```
You can also get the details of a specific dinosaur by visiting "/api/dinosaur
name", for example `localhost:8000/api/aardonyx` will display:
```json
{
  "name": "Aardonyx",
  "description": "An early stage in the evolution of sauropods."
}
```
🦕 Now you're all set to use Express with Deno. You could consider expanding
this example into a dinosaur web app. Or take a look at
[Deno's built in HTTP server](https://docs.deno.com/runtime/fundamentals/http_server/).
---
# Fetch and stream data
> A tutorial on working with network requests in Deno. Learn how to use the fetch API for HTTP requests, handle responses, implement data streaming, and manage file uploads and downloads.
URL: https://docs.deno.com/examples/tutorials/fetch_data
Deno brings several familiar Web APIs to the server-side environment. If you've
worked with browsers you may recognize the [`fetch()`](/api/web/fetch) method
and the [`streams`](/api/web/streams) API, which are used to make network
requests and access streams of data over the network. Deno implements these
APIs, allowing you to fetch and stream data from the web.
## Fetching data
When building a web application, developers will often need to retrieve
resources from somewhere else on the web. We can do so with the `fetch` API.
We'll look at how to fetch different shapes of data from a URL and how to
handle an error if the request fails.
Create a new file called `fetch.js` and add the following code:
```ts title="fetch.js"
// Output: JSON Data
const jsonResponse = await fetch("https://api.github.com/users/denoland");
const jsonData = await jsonResponse.json();
console.log(jsonData, "\n");
// Output: HTML Data
const textResponse = await fetch("https://deno.land/");
const textData = await textResponse.text();
console.log(textData, "\n");
// Output: Error Message
try {
  await fetch("https://does.not.exist/");
} catch (error) {
  console.log(error);
}
```
You can run this code with the `deno run` command. Because it is fetching data
across the network, you need to grant the `--allow-net` permission:
```sh
deno run --allow-net fetch.js
```
You should see the JSON data, HTML data as text, and an error message in the
console.
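Note that `fetch` only rejects on network failures like the unreachable host above; HTTP error statuses such as 404 resolve normally. If you want to treat those as errors too, you can check `response.ok`, as in this small sketch:

```ts
const response = await fetch("https://api.github.com/users/denoland");

// fetch() resolves even for 4xx/5xx responses, so check the status code
if (!response.ok) {
  throw new Error(`Request failed with status ${response.status}`);
}

console.log(await response.json());
```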
## Streaming data
Sometimes you may want to send or receive large files over the network. When you
don't know the size of a file in advance, streaming is a more efficient way to
handle the data. The client can read from the stream until it says it is done.
Deno provides a way to stream data using the `Streams API`. We'll look at how to
convert a file into a readable or writable stream and how to send and receive
files using streams.
Create a new file called `stream.js`.
We'll use the `fetch` API to retrieve a file. Then we'll use the
[`Deno.open`](/api/deno/Deno.open) method to create and open a writable file and
the [`pipeTo`](/api/web/~/ReadableStream.pipeTo) method from the Streams API to
send the byte stream to the created file.
Next, we'll use the `readable` property on a `POST` request to send the byte
stream of the file to a server.
```ts title="stream.js"
// Receiving a file
const fileResponse = await fetch("https://deno.land/logo.svg");

if (fileResponse.body) {
  const file = await Deno.open("./logo.svg", { write: true, create: true });
  await fileResponse.body.pipeTo(file.writable);
}

// Sending a file
const upload = await Deno.open("./logo.svg", { read: true });

await fetch("https://example.com/", {
  method: "POST",
  body: upload.readable,
});
```
You can run this code with the `deno run` command. Because it is fetching data
across the network and writing to a file, you need to grant the `--allow-net`,
`--allow-write` and `--allow-read` permissions:
```sh
deno run --allow-read --allow-write --allow-net stream.js
```
You should see the file `logo.svg` created and populated in the current
directory, and if you owned example.com, you would see the file being sent to
the server.
🦕 Now you know how to fetch and stream data across a network and how to stream
that data to and from files! Whether you're serving static files, processing
uploads, generating dynamic content or streaming large datasets, Deno’s file
handling and streaming capabilities are great tools to have in your developer
toolbox!
---
# File-based routing
> Tutorial on implementing file-based routing in Deno. Learn how to create a dynamic routing system similar to Next.js, handle HTTP methods, manage nested routes, and build a flexible server architecture.
URL: https://docs.deno.com/examples/tutorials/file_based_routing
If you've used frameworks like [Next.js](https://nextjs.org/), you might be
familiar with file based routing - you add a file in a specific directory and it
automatically becomes a route. This tutorial demonstrates how to create a simple
HTTP server that uses file based routing.
## Route requests
Create a new file called `server.ts`. This file will be used to route requests.
Set up an async function called `handler` that takes a request object as an
argument:
```ts title="server.ts"
async function handler(req: Request): Promise<Response> {
  const url = new URL(req.url);
  const path = url.pathname;
  const method = req.method;

  let module;
  try {
    module = await import(`.${path}.ts`);
  } catch (_error) {
    return new Response("Not found", { status: 404 });
  }

  if (module[method]) {
    return module[method](req);
  }

  return new Response("Method not implemented", { status: 501 });
}

Deno.serve(handler);
```
The `handler` function sets up a path variable which contains the path,
extracted from the request URL, and a method variable which contains the request
method.
It then tries to import a module based on the path. If the module is not found,
it returns a 404 response.
If the module is found, it checks if the module has a method handler for the
request method. If the method handler is found, it calls the method handler with
the request object. If the method handler is not found, it returns a 501
response.
Finally, it serves the handler function using `Deno.serve`.
> The path could be any valid URL path such as `/users`, `/posts`, etc. For
> paths like `/users`, the file `./users.ts` will be imported. However, deeper
> paths like `/org/users` will require a file `./org/users.ts`. You can create
> nested routes by creating nested directories and files.
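For example, a hypothetical nested route for `/org/users` would live at `./org/users.ts`:

```ts title="org/users.ts"
// Handles GET requests to /org/users
export function GET(_req: Request): Response {
  return new Response("Hello from org/users.ts", { status: 200 });
}
```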
## Handle requests
Create a new file called `users.ts` in the same directory as `server.ts`. This
file will be used to handle requests to the `/users` path. We'll use a `GET`
request as an example. You could add more HTTP methods such as `POST`, `PUT`,
`DELETE`, etc.
In `users.ts`, set up a function called `GET` that takes a request object as an
argument:
```ts title="users.ts"
export function GET(_req: Request): Response {
return new Response("Hello from user.ts", { status: 200 });
}
```
## Start the server
To start the server, run the following command:
```sh
deno run --allow-net --allow-read server.ts
```
This will start the server on `localhost:8000`. You can now make a `GET` request
to `localhost:8000/users` and you should see the response `Hello from user.ts`.
This command requires the `--allow-net` and `--allow-read`
[permissions flags](/runtime/fundamentals/security/) to allow access to the
network to start the server and to read the `users.ts` file from the file
system.
🦕 Now you can set up routing in your apps based on file structure. You can
extend this example to add more routes and methods as needed.
Thanks to [@naishe](https://github.com/naishe) for contributing this
tutorial.
---
# Write a file server
> Tutorial on building a file server with Deno. Learn how to handle HTTP requests, serve static files, implement streaming responses, and use the standard library's file server module for production deployments.
URL: https://docs.deno.com/examples/tutorials/file_server
A file server listens for incoming HTTP requests and serves files from the local
file system. This tutorial demonstrates how to create a simple file server using
Deno's built-in [file system APIs](/api/deno/file-system).
## Write a simple File Server
To start, create a new file called `file-server.ts`.
We'll use Deno's built in [HTTP server](/api/deno/~/Deno.serve) to listen for
incoming requests. In your new `file-server.ts` file, add the following code:
```ts title="file-server.ts"
Deno.serve(
  { hostname: "localhost", port: 8080 },
  async (request) => {
    const url = new URL(request.url);
    const filepath = decodeURIComponent(url.pathname);
  },
);
```
> If you're not familiar with the `URL` object, you can learn more about it in
> the [URL API](https://developer.mozilla.org/en-US/docs/Web/API/URL)
> documentation. The
> [decodeURIComponent function](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/decodeURIComponent)
> is used to decode the URL-encoded path, in the case that characters have been
> percent-encoded.
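For instance, a path with a percent-encoded space decodes like this:

```ts
const url = new URL("http://localhost:8080/hello%20world.txt");
console.log(decodeURIComponent(url.pathname)); // "/hello world.txt"
```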
### Open a file and stream its contents
When a request is received, we'll attempt to open the file specified in the
request URL with [`Deno.open`](/api/deno/~/Deno.open).
If the requested file exists, we'll convert it into a readable stream of data
with the
[ReadableStream API](https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream),
and stream its contents to the response. We don't know how large the requested
file might be, so streaming it will prevent memory issues when serving large
files or multiple requests concurrently.
If the file does not exist, we'll return a "404 Not Found" response.
In the body of the request handler, below the two variables, add the following
code:
```ts
try {
  const file = await Deno.open("." + filepath, { read: true });
  return new Response(file.readable);
} catch {
  return new Response("404 Not Found", { status: 404 });
}
```
### Run the file server
Run your new file server with the `deno run` command, allowing read access and
network access:
```shell
deno run --allow-read=. --allow-net file-server.ts
```
## Using the file server provided by the Deno Standard Library
Writing a file server from scratch is a good exercise to understand how Deno's
HTTP server works. However, writing a production-ready file server from scratch
can be complex and error-prone. It's better to use a tested and reliable
solution.
The Deno Standard Library provides you with a
[file server](https://jsr.io/@std/http/doc/file-server/~) so that you don't have
to write your own.
To use it, first install the remote script to your local file system:
```shell
# Deno 1.x
deno install --allow-net --allow-read jsr:@std/http/file-server
# Deno 2.x
deno install --global --allow-net --allow-read jsr:@std/http/file-server
```
> This will install the script to the Deno installation root, e.g.
> `/home/user/.deno/bin/file-server`.
You can now run the script with the simplified script name:
```shell
$ file-server .
Listening on:
- Local: http://0.0.0.0:8000
```
To see the complete list of options available with the file server, run
`file-server --help`.
If you visit [http://0.0.0.0:8000/](http://0.0.0.0:8000/) in your web browser
you will see the contents of your local directory.
### Using the @std/http file server in a Deno project
To use the file-server in a
[Deno project](/runtime/getting_started/first_project), you can add it to your
`deno.json` file with:
```sh
deno add jsr:@std/http
```
And then import it in your project:
```ts title="file-server.ts"
import { serveDir } from "@std/http/file-server";

Deno.serve((req) => {
  const pathname = new URL(req.url).pathname;

  if (pathname.startsWith("/static")) {
    return serveDir(req, {
      fsRoot: "path/to/static/files/dir",
    });
  }

  return new Response();
});
```
This code will set up an HTTP server with `Deno.serve`. When a request comes in,
it checks if the requested path starts with `/static`. If so, it serves files
from the specified directory. Otherwise, it responds with an empty response.
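By default, `serveDir` joins the full request path onto `fsRoot`, so `/static/logo.svg` is looked up at `path/to/static/files/dir/static/logo.svg`. If you'd rather strip the `/static` prefix, the file server's `urlRoot` option handles that; a small sketch:

```ts
import { serveDir } from "@std/http/file-server";

Deno.serve((req) =>
  serveDir(req, {
    fsRoot: "path/to/static/files/dir",
    // urlRoot strips this prefix from the request path, so
    // /static/logo.svg is read from path/to/static/files/dir/logo.svg
    urlRoot: "static",
  })
);
```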
🦕 Now you know how to write your own simple file server, and how to use the
file-server utility provided by the Deno Standard Library. You're equipped to
tackle a whole variety of tasks - whether it’s serving static files, handling
uploads, transforming data, or managing access control - you're ready to serve
files with Deno.
---
# File system events
> Tutorial on monitoring file system changes with Deno. Learn how to watch directories for file modifications, handle change events, and understand platform-specific behaviors across Linux, macOS, and Windows.
URL: https://docs.deno.com/examples/tutorials/file_system_events
## Concepts
- Use [Deno.watchFs](https://docs.deno.com/api/deno/~/Deno.watchFs) to watch for
file system events.
- Results may vary between operating systems.
## Example
To poll for file system events in the current directory:
```ts title="watcher.ts"
const watcher = Deno.watchFs(".");

for await (const event of watcher) {
  console.log(">>>> event", event);
  // Example event: { kind: "create", paths: [ "/home/alice/deno/foo.txt" ] }
}
```
Run with:
```shell
deno run --allow-read watcher.ts
```
Now try adding, removing and modifying files in the same directory as
`watcher.ts`.
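Each event carries a `kind` and the affected `paths`, so you can filter for just the changes you care about. A minimal sketch, assuming a `./src` directory exists:

```ts
// Log only modifications to TypeScript files under ./src
const watcher = Deno.watchFs("./src");

for await (const event of watcher) {
  if (event.kind === "modify" && event.paths.some((p) => p.endsWith(".ts"))) {
    console.log("TypeScript file changed:", event.paths);
  }
}
```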
Note that the exact ordering of the events can vary between operating systems.
This feature uses different syscalls depending on the platform:
- Linux: [inotify](https://man7.org/linux/man-pages/man7/inotify.7.html)
- macOS:
[FSEvents](https://developer.apple.com/library/archive/documentation/Darwin/Conceptual/FSEvents_ProgGuide/Introduction/Introduction.html)
- Windows:
[ReadDirectoryChangesW](https://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-readdirectorychangesw)
---
# How to deploy to Google Cloud Run
> Step-by-step guide to deploying Deno applications on Google Cloud Run. Learn about Docker containerization, Artifact Registry configuration, GitHub Actions automation, and how to set up continuous deployment to Google Cloud.
URL: https://docs.deno.com/examples/tutorials/google_cloud_run
[Google Cloud Run](https://cloud.google.com/run) is a managed compute platform
that lets you run containers on Google's scalable infrastructure.
This How To guide will show you how to use Docker to deploy your Deno app to
Google Cloud Run.
First, we'll show you how to deploy manually, then we'll show you how to
automate it with GitHub Actions.
Pre-requisites:
- [Google Cloud Platform account](https://cloud.google.com/gcp)
- [`docker` CLI](https://docs.docker.com/engine/reference/commandline/cli/)
installed
- [`gcloud`](https://cloud.google.com/sdk/gcloud) installed
## Manual Deployment
### Create `Dockerfile` and `docker-compose.yml`
To focus on the deployment, our app will simply be a `main.ts` file that returns
a string as an HTTP response:
```ts title="main.ts"
import { Application } from "jsr:@oak/oak";
const app = new Application();
app.use((ctx) => {
  ctx.response.body = "Hello from Deno and Google Cloud Run!";
});
await app.listen({ port: 8000 });
```
Then, we'll create two files -- `Dockerfile` and `docker-compose.yml` -- to
build the Docker image.
In our `Dockerfile`, let's add:
```Dockerfile
FROM denoland/deno
EXPOSE 8000
WORKDIR /app
ADD . /app
RUN deno install --entrypoint main.ts
CMD ["run", "--allow-net", "main.ts"]
```
Then, in our `docker-compose.yml`:
```yml
version: "3"

services:
  web:
    build: .
    container_name: deno-container
    image: deno-image
    ports:
      - "8000:8000"
```
Let's test this locally by running `docker compose -f docker-compose.yml build`,
then `docker compose up`, and going to `localhost:8000`.

It works!
### Set up Artifact Registry
Artifact Registry is GCP's private registry of Docker images.
Before we can use it, go to GCP's
[Artifact Registry](https://console.cloud.google.com/artifacts) and click
"Create repository". You'll be asked for a name (`deno-repository`) and a region
(`us-central1`). Then click "Create".

### Build, Tag, and Push to Artifact Registry
Once we've created a repository, we can start pushing images to it.
First, let's add the registry's address to `gcloud`:
```shell
gcloud auth configure-docker us-central1-docker.pkg.dev
```
Then, let's build your Docker image. (Note that the image name is defined in our
`docker-compose.yml` file.)
```shell
docker compose -f docker-compose.yml build
```
Then, [tag](https://docs.docker.com/engine/reference/commandline/tag/) it with
the new Google Artifact Registry address, repository, and name. The image name
should follow this structure:
`{{ location }}-docker.pkg.dev/{{ google_cloudrun_project_name }}/{{ repository }}/{{ image }}`.
```shell
docker tag deno-image us-central1-docker.pkg.dev/deno-app-368305/deno-repository/deno-cloudrun-image
```
If you don't specify a tag, it'll use `:latest` by default.
Next, push the image:
```shell
docker push us-central1-docker.pkg.dev/deno-app-368305/deno-repository/deno-cloudrun-image
```
_[More info on how to push and pull images to Google Artifact Registry](https://cloud.google.com/artifact-registry/docs/docker/pushing-and-pulling)._
Your image should now appear in your Google Artifact Registry!

### Create a Google Cloud Run Service
We need a service that can run these images, so let's go to
[Google Cloud Run](https://console.cloud.google.com/run) and click "Create
Service".
Let's name it "hello-from-deno".
Select "Deploy one revision from an existing container image". Use the drop down
to select the image from the `deno-repository` Artifact Registry.
Select "allow unauthenticated requests" and then click "Create service". Make
sure the port is `8000`.
When it's done, your app should now be live:

Awesome!
### Deploy with `gcloud`
Now that it's created, we'll be able to deploy to this service from the `gcloud`
CLI. The command follows this structure:
`gcloud run deploy {{ service_name }} --image={{ image }} --region={{ region }} --allow-unauthenticated`.
Note that the `image` name follows the structure from above.
For this example, the command is:
```shell
gcloud run deploy hello-from-deno --image=us-central1-docker.pkg.dev/deno-app-368305/deno-repository/deno-cloudrun-image --region=us-central1 --allow-unauthenticated
```

Success!
## Automate Deployment with GitHub Actions
In order for automation to work, we first need to make sure that both of these
have been created:
- the Google Artifact Registry
- the Google Cloud Run service instance
(If you haven't done that, please see the previous section.)
Now that we have done that, we can automate it with a GitHub workflow. Here's
the yaml file:
```yml
name: Build and Deploy to Cloud Run

on:
  push:
    branches:
      - main

env:
  PROJECT_ID: {{ PROJECT_ID }}
  GAR_LOCATION: {{ GAR_LOCATION }}
  REPOSITORY: {{ GAR_REPOSITORY }}
  SERVICE: {{ SERVICE }}
  REGION: {{ REGION }}

jobs:
  deploy:
    name: Deploy
    permissions:
      contents: "read"
      id-token: "write"
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Google Auth
        id: auth
        uses: "google-github-actions/auth@v0"
        with:
          credentials_json: "${{ secrets.GCP_CREDENTIALS }}"

      - name: Login to GAR
        uses: docker/login-action@v2.1.0
        with:
          registry: ${{ env.GAR_LOCATION }}-docker.pkg.dev
          username: _json_key
          password: ${{ secrets.GCP_CREDENTIALS }}

      - name: Build and Push Container
        run: |-
          docker build -t "${{ env.GAR_LOCATION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.REPOSITORY }}/${{ env.SERVICE }}:${{ github.sha }}" ./
          docker push "${{ env.GAR_LOCATION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.REPOSITORY }}/${{ env.SERVICE }}:${{ github.sha }}"

      - name: Deploy to Cloud Run
        id: deploy
        uses: google-github-actions/deploy-cloudrun@v0
        with:
          service: ${{ env.SERVICE }}
          region: ${{ env.REGION }}
          image: ${{ env.GAR_LOCATION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.REPOSITORY }}/${{ env.SERVICE }}:${{ github.sha }}

      - name: Show Output
        run: echo ${{ steps.deploy.outputs.url }}
```
The environment variables that we need to set are (the examples in parentheses
are the ones for this repository):
- `PROJECT_ID`: your project id (`deno-app-368305`)
- `GAR_LOCATION`: the location your Google Artifact Registry is set
(`us-central1`)
- `GAR_REPOSITORY`: the name you gave your Google Artifact Registry
(`deno-repository`)
- `SERVICE`: the name of the Google Cloud Run service (`hello-from-deno`)
- `REGION`: the region of your Google Cloud Run service (`us-central1`)
The secret variables that we need to set are:
- `GCP_CREDENTIALS`: this is the
[service account](https://cloud.google.com/iam/docs/service-accounts) json
key. When you create the service account, be sure to
[include the roles and permissions necessary](https://cloud.google.com/iam/docs/granting-changing-revoking-access#granting_access_to_a_user_for_a_service_account)
for Artifact Registry and Google Cloud Run.
[Check out more details and examples of deploying to Cloud Run from GitHub Actions.](https://github.com/google-github-actions/deploy-cloudrun)
For reference:
https://github.com/google-github-actions/example-workflows/blob/main/workflows/deploy-cloudrun/cloudrun-docker.yml
---
# How to export telemetry data to Grafana
> Complete guide to exporting telemetry data with OpenTelemetry and Grafana. Learn how to configure collectors, visualize traces, and monitor application performance.
URL: https://docs.deno.com/examples/tutorials/grafana
[OpenTelemetry](https://opentelemetry.io/) (often abbreviated as OTel) is an
open-source observability framework that provides a standardized way to collect
and export telemetry data such as traces, metrics and logs. Deno has built-in
support for OpenTelemetry, making it easy to instrument your applications
without adding external dependencies. This integration works out of the box with
observability platforms like [Grafana](https://grafana.com/).
Grafana is an open-source observability platform that lets DevOps teams
visualize, query, and alert on metrics, logs, and traces from diverse data
sources in real time. It’s widely used for building dashboards to monitor
infrastructure, applications, and systems health.
Grafana also offers a hosted version called
[Grafana Cloud](https://grafana.com/products/cloud/). This tutorial will help
you configure your project to export OTel data to Grafana Cloud.
In this tutorial, we'll build a simple application and export its telemetry data
to Grafana Cloud. We'll cover:
- [Set up your chat app](#set-up-your-chat-app)
- [Set up a Docker collector](#set-up-a-docker-collector)
- [Generating telemetry data](#generating-telemetry-data)
- [Viewing telemetry data](#viewing-telemetry-data)
You can find the complete source code for this tutorial
[on GitHub](https://github.com/denoland/examples/tree/main/with-grafana).
## Set up your chat app
For this tutorial, we'll use a simple chat application to demonstrate how to
export telemetry data. You can find the
[code for the app on GitHub](https://github.com/denoland/examples/tree/main/with-grafana).
Either take a copy of that repository or create a
[main.ts](https://github.com/denoland/examples/blob/main/with-grafana/main.ts)
file and a
[.env](https://github.com/denoland/examples/blob/main/with-grafana/.env.example)
file.
In order to run the app you will need an OpenAI API key. You can get one by
signing up for an account at [OpenAI](https://platform.openai.com/signup) and
creating a new secret key. You can find your API key in the
[API keys section](https://platform.openai.com/account/api-keys) of your OpenAI
account. Once you have an API key, set up an `OPENAI_API_KEY` environment
variable in your `.env` file:
```env title=".env"
OPENAI_API_KEY=your_openai_api_key
```
## Set up a Docker collector
Next, we'll set up a Docker container to run the OpenTelemetry collector. The
collector is responsible for receiving telemetry data from your application and
exporting it to Grafana Cloud.
In the same directory as your `main.ts` file, create a `Dockerfile` and an
`otel-collector.yml` file. The `Dockerfile` will be used to build a Docker
image:
```dockerfile title="Dockerfile"
FROM otel/opentelemetry-collector-contrib:latest
COPY otel-collector.yml /otel-config.yml
CMD ["--config", "/otel-config.yml"]
```
[`FROM otel/opentelemetry-collector-contrib:latest`](https://hub.docker.com/r/otel/opentelemetry-collector-contrib/) -
This line specifies the base image for the container. It uses the official
OpenTelemetry Collector Contributor image, which contains all receivers,
exporters, processors, connectors, and other optional components, and pulls the
latest version.
`COPY otel-collector.yml /otel-config.yml` - This instruction copies our
configuration file named `otel-collector.yml` from the local build context into
the container. The file is renamed to `/otel-config.yml` inside the container.
`CMD ["--config", "/otel-config.yml"]` - This sets the default command that will
run when the container starts. It tells the OpenTelemetry Collector to use the
configuration file we copied in the previous step.
Next, let's set up a Grafana Cloud account and grab some info.
If you have not already,
[create a free Grafana Cloud account](https://grafana.com/auth/sign-up/create-user).
Once created, you will receive a Grafana Cloud stack. Click "Details".

Next, find "OpenTelemetry" and click "Configure".

This page will provide you with all the details you'll need to configure your
OpenTelemetry collector. Make note of your **OTLP Endpoint**, **Instance ID**,
and **Password / API Token** (you will have to generate one).

Next, add the following to your `otel-collector.yml` file to define how
telemetry data should be collected and exported to Grafana Cloud:
```yml title="otel-collector.yml"
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  otlphttp/grafana_cloud:
    endpoint: $_YOUR_GRAFANA_OTLP_ENDPOINT
    auth:
      authenticator: basicauth/grafana_cloud

extensions:
  basicauth/grafana_cloud:
    client_auth:
      username: $_YOUR_INSTANCE_ID
      password: $_YOUR_API_TOKEN

processors:
  batch:

service:
  extensions: [basicauth/grafana_cloud]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/grafana_cloud]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/grafana_cloud]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/grafana_cloud]
```
The `receivers` section configures how the collector receives data. It sets up
an OTLP (OpenTelemetry Protocol) receiver that listens on two protocols, `gRPC`
and `HTTP`; the `0.0.0.0` address means it will accept data from any source.
The `exporters` section defines where the collected data should be sent. Be sure
to include **the OTLP endpoint** provided by your Grafana Cloud instance.
The `extensions` section defines the authentication for OTel to export data to
Grafana Cloud. Be sure to include your Grafana Cloud **Instance ID**, as well as
your generated **Password / API Token**.
The `processors` section defines how the data should be processed before
export. Here it uses the `batch` processor with its default settings, which
groups telemetry data into batches before sending it on.
The `service` section ties everything together by defining three pipelines. Each
pipeline is responsible for a different type of telemetry data. The logs
pipeline collects application logs. The traces pipeline is for distributed
tracing data. The metrics pipeline is for performance metrics.
Build and run the Docker container to start collecting your telemetry data with
the following command:
```sh
docker build -t otel-collector . && docker run -p 4317:4317 -p 4318:4318 otel-collector
```
## Generating telemetry data
Now that we have the app and the docker container set up, we can start
generating telemetry data. Run your application with these environment variables
to send data to the collector:
```sh
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
OTEL_SERVICE_NAME=chat-app \
OTEL_DENO=true \
deno run --unstable-otel --allow-net --allow-env --env-file --allow-read main.ts
```
This command:
- Points the OpenTelemetry exporter to your local collector (`localhost:4318`)
- Names your service "chat-app" in Grafana Cloud
- Enables Deno's OpenTelemetry integration
- Runs your application with the necessary permissions
To generate some telemetry data, make a few requests to your running application
in your browser at [`http://localhost:8000`](http://localhost:8000).
Each request will:
1. Generate traces as it flows through your application
2. Send logs from your application's console output
3. Create metrics about the request performance
4. Forward all this data through the collector to Grafana Cloud
## Viewing telemetry data
After making some requests to your application, you'll see three types of data
in your Grafana Cloud dashboard:
1. **Traces** - End-to-end request flows through your system
2. **Logs** - Console output and structured log data
3. **Metrics** - Performance and resource utilization data

You can drill down into individual spans to debug performance issues:

🦕 Now that you have telemetry export working, you could:
1. Add custom spans and attributes to better understand your application (see
   the sketch after this list)
2. Set up alerts based on latency or error conditions
3. Deploy your application and collector to production using platforms like:
- [Fly.io](https://docs.deno.com/examples/deploying_deno_with_docker/)
- [Digital Ocean](https://docs.deno.com/examples/digital_ocean_tutorial/)
- [AWS Lightsail](https://docs.deno.com/examples/aws_lightsail_tutorial/)
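For the first point, Deno's OpenTelemetry integration works with the standard OpenTelemetry API, so a custom span could look something like this (a sketch, assuming you add `npm:@opentelemetry/api` to your project; the span name `prepare-prompt` and attribute are illustrative):

```ts
import { trace } from "npm:@opentelemetry/api@1";

// The tracer name appears as the instrumentation scope in Grafana.
const tracer = trace.getTracer("chat-app");

tracer.startActiveSpan("prepare-prompt", (span) => {
  // ...work you want to appear as its own span...
  span.setAttribute("prompt.length", 42);
  span.end();
});
```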
For more details on OpenTelemetry configuration, check out the
[Grafana Cloud documentation](https://grafana.com/docs/grafana-cloud/monitor-applications/application-observability/collector/).
---
# Executable scripts
> Guide to creating executable scripts with Deno. Learn about hashbangs, file permissions, cross-platform compatibility, and how to create command-line tools that can run directly from the terminal.
URL: https://docs.deno.com/examples/tutorials/hashbang
Making Deno scripts executable can come in handy when creating small tools or
utilities for tasks like file manipulation, data processing or repetitive tasks
that you might want to run from the command line. Executable scripts allow you
to create ad-hoc solutions without setting up an entire project.
## Creating an example script
To make a script executable, start the script with a hashbang (sometimes called
a shebang). This is a sequence of characters (`#!`) that tells your operating
system how to execute a script. It is followed by the path to the interpreter
that should be used to run the script.
:::note
To use a hashbang on Windows you will need to install the Windows Subsystem for
Linux (WSL) or use a Unix-like shell like
[Git Bash](https://git-scm.com/downloads).
:::
We'll make a simple script that prints the Deno installation path using the
[Deno.env](/api/deno/~/Deno.env) API.
Create a file named `hashbang.ts` with the following content:
```ts title="hashbang.ts"
#!/usr/bin/env -S deno run --allow-env
const path = Deno.env.get("DENO_INSTALL");
console.log("Deno Install Path:", path);
```
This script tells the system to use the Deno runtime to run the script. The
`-S` flag tells `env` to split the string that follows into separate arguments,
so `deno run --allow-env` is executed as a command with its flags rather than
treated as a single program name.
The script then retrieves the value associated with the environment variable
named `DENO_INSTALL` with `Deno.env.get()` and assigns it to a variable called
`path`. Finally, it prints the path to the console using `console.log()`.
### Execute the script
In order to execute the script, you may need to give the script execution
permissions, you can do so using the `chmod` command with a `+x` flag (for
execute):
```sh
chmod +x hashbang.ts
```
You can execute the script directly in the command line with:
```sh
./hashbang.ts
```
## Using hashbang in files with no extension
For brevity, you may wish to omit the extension for your script's filename. In
this case, supply one using the `--ext` flag in the script itself, then you can
run the script with just the file name:
```shell title="my_script"
$ cat my_script
#!/usr/bin/env -S deno run --allow-env --ext=js
console.log("Hello!");
$ ./my_script
Hello!
```
🦕 Now you can directly execute Deno scripts from the command line! Remember to
set the execute permission (`chmod +x`) for your script file, and you’re all set
to build anything from simple utilities to complex tools. Check out the
[Deno examples](/examples/) for inspiration on what you can script.
---
# How to export telemetry data to Honeycomb
> Complete guide to exporting telemetry data with OpenTelemetry and Honeycomb.io. Learn how to configure collectors, visualize traces, and monitor application performance.
URL: https://docs.deno.com/examples/tutorials/honeycomb
[OpenTelemetry](https://opentelemetry.io/) (often abbreviated as OTel) is an
open-source observability framework that provides a standardized way to collect
and export telemetry data such as traces, metrics and logs. Deno has built-in
support for OpenTelemetry, making it easy to instrument your applications
without adding external dependencies. This integration works out of the box with
observability platforms like [Honeycomb](https://honeycomb.io).
Honeycomb is an observability platform designed for debugging and understanding
complex, modern distributed systems.
In this tutorial, we'll build a simple application and export its telemetry data
to Honeycomb. We'll cover:
- [Set up your chat app](#set-up-your-chat-app)
- [Set up a Docker collector](#set-up-a-docker-collector)
- [Generating telemetry data](#generating-telemetry-data)
- [Viewing telemetry data](#viewing-telemetry-data)
You can find the complete source code for this tutorial
[on GitHub](https://github.com/denoland/examples/tree/main/with-honeycomb).
## Set up your chat app
For this tutorial, we'll use a simple chat application to demonstrate how to
export telemetry data. You can find the
[code for the app on GitHub](https://github.com/denoland/examples/tree/main/with-honeycomb).
Either take a copy of that repository or create a
[main.ts](https://github.com/denoland/examples/blob/main/with-honeycomb/main.ts)
file and a
[.env](https://github.com/denoland/examples/blob/main/with-honeycomb/.env.example)
file.
In order to run the app you will need an OpenAI API key. You can get one by
signing up for an account at [OpenAI](https://platform.openai.com/signup) and
creating a new secret key. You can find your API key in the
[API keys section](https://platform.openai.com/account/api-keys) of your OpenAI
account. Once you have an API key, set up an `OPENAI_API_KEY` environment
variable in your `.env` file:
```env title=".env"
OPENAI_API_KEY=your_openai_api_key
```
## Set up a Docker collector
Next, we'll set up a Docker container to run the OpenTelemetry collector. The
collector is responsible for receiving telemetry data from your application and
exporting it to Honeycomb.
If you have not already, create a free Honeycomb account and set up an
[ingest API key](https://docs.honeycomb.io/configure/environments/manage-api-keys/).
In the same directory as your `main.ts` file, create a `Dockerfile` and an
`otel-collector.yml` file. The `Dockerfile` will be used to build a Docker
image:
```dockerfile title="Dockerfile"
FROM otel/opentelemetry-collector:latest
COPY otel-collector.yml /otel-config.yml
CMD ["--config", "/otel-config.yml"]
```
`FROM otel/opentelemetry-collector:latest` - This line specifies the base image
for the container. It uses the official OpenTelemetry Collector image and pulls
the latest version.
`COPY otel-collector.yml /otel-config.yml` - This instruction copies our
configuration file named `otel-collector.yml` from the local build context into
the container. The file is renamed to `/otel-config.yml` inside the container.
`CMD ["--config", "/otel-config.yml"]` - This sets the default command that will
run when the container starts. It tells the OpenTelemetry Collector to use the
configuration file we copied in the previous step.
Next, add the following to your `otel-collector.yml` file to define how
telemetry data should be collected and exported to Honeycomb:
```yml title="otel-collector.yml"
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      x-honeycomb-team: $_HONEYCOMB_API_KEY

processors:
  batch:
    timeout: 5s
    send_batch_size: 5000

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```
The `receivers` section configures how the collector receives data. It sets up
an OTLP (OpenTelemetry Protocol) receiver that listens on two protocols, `gRPC`
and `HTTP`; the `0.0.0.0` address means it will accept data from any source.
The `exporters` section defines where the collected data should be sent. It's
configured to send data to Honeycomb's API endpoint at `api.honeycomb.io:443`.
The configuration requires an API key for authentication; swap
`$_HONEYCOMB_API_KEY` for your actual Honeycomb API key.
The `processors` section defines how the data should be processed before export.
It uses batch processing with a timeout of 5 seconds and a maximum batch size of
5000 items.
The `service` section ties everything together by defining three pipelines. Each
pipeline is responsible for a different type of telemetry data: the logs
pipeline collects application logs, the traces pipeline carries distributed
tracing data, and the metrics pipeline covers performance metrics.
Build and run the Docker container to start collecting your telemetry data with
the following command:
```sh
docker build -t otel-collector . && docker run -p 4317:4317 -p 4318:4318 otel-collector
```
## Generating telemetry data
Now that we have the app and the docker container set up, we can start
generating telemetry data. Run your application with these environment variables
to send data to the collector:
```sh
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
OTEL_SERVICE_NAME=chat-app \
OTEL_DENO=true \
deno run --unstable-otel --allow-net --allow-env --env-file --allow-read main.ts
```
This command:
- Points the OpenTelemetry exporter to your local collector (`localhost:4318`)
- Names your service "chat-app" in Honeycomb
- Enables Deno's OpenTelemetry integration
- Runs your application with the necessary permissions
To generate some telemetry data, make a few requests to your running application
in your browser at [`http://localhost:8000`](http://localhost:8000).
Each request will:
1. Generate traces as it flows through your application
2. Send logs from your application's console output
3. Create metrics about the request performance
4. Forward all this data through the collector to Honeycomb
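For example, assuming the app is listening on port 8000 and serves its chat page
at the root path, you can drive a little traffic from another terminal:
```sh
# Make a handful of requests so there is something to look at in Honeycomb
for i in 1 2 3 4 5; do curl -s http://localhost:8000/ > /dev/null; done
```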
## Viewing telemetry data
After making some requests to your application, you'll see three types of data
in your Honeycomb.io dashboard:
1. **Traces** - End-to-end request flows through your system
2. **Logs** - Console output and structured log data
3. **Metrics** - Performance and resource utilization data

You can drill down into individual spans to debug performance issues:

🦕 Now that you have telemetry export working, you could:
1. Add custom spans and attributes to better understand your application (see
   the sketch after this list)
2. Set up alerts based on latency or error conditions
3. Deploy your application and collector to production using platforms like:
- [Fly.io](https://docs.deno.com/examples/deploying_deno_with_docker/)
- [Digital Ocean](https://docs.deno.com/examples/digital_ocean_tutorial/)
- [AWS Lightsail](https://docs.deno.com/examples/aws_lightsail_tutorial/)
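As a sketch of the first idea: Deno's OpenTelemetry integration picks up spans
created with the standard `npm:@opentelemetry/api` package, so you can wrap
interesting work in a custom span. The `handleMessage` helper and the
`message.length` attribute below are hypothetical, just to show the shape:
```ts
import { trace } from "npm:@opentelemetry/api@1";

const tracer = trace.getTracer("chat-app");

// Hypothetical helper: wraps message handling in a custom span
function handleMessage(message: string): string {
  return tracer.startActiveSpan("handle-message", (span) => {
    // Attach an attribute you can filter on in Honeycomb
    span.setAttribute("message.length", message.length);
    const reply = `echo: ${message}`; // ...the real work would happen here
    span.end();
    return reply;
  });
}

console.log(handleMessage("hello"));
```
Run it with the same `--unstable-otel` flags as above and the custom span is
exported alongside the built-in ones.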
For more details on OpenTelemetry configuration, check out the
[Honeycomb documentation](https://docs.honeycomb.io/send-data/opentelemetry/collector/).
---
# How to export telemetry data to HyperDX
> Complete guide to exporting telemetry data with OpenTelemetry and HyperDX. Learn how to configure collectors, visualize traces, logs, metrics, and debug distributed applications effectively.
URL: https://docs.deno.com/examples/tutorials/hyperdx
[HyperDX](https://hyperdx.io) is an open source observability platform that
unifies logs, traces, metrics, exceptions, and session replays into a single
interface. It helps developers debug applications faster by providing a complete
view of your system's behavior and performance.
[OpenTelemetry](https://opentelemetry.io/) (often abbreviated as OTel) provides
a standardized way to collect and export telemetry data. Deno includes built-in
OpenTelemetry support, allowing you to instrument your applications without
additional dependencies. This integration works seamlessly with platforms like
HyperDX to collect and visualize telemetry data.
In this tutorial, we'll build a simple application and export its telemetry data
to HyperDX:
- [Set up your chat app](#set-up-your-chat-app)
- [Set up a Docker collector](#set-up-a-docker-collector)
- [Generating telemetry data](#generating-telemetry-data)
- [Viewing telemetry data](#viewing-telemetry-data)
You can find the complete source code for this tutorial
[on GitHub](https://github.com/denoland/examples/tree/main/with-hyperdx).
## Set up your chat app
For this tutorial, we'll use a simple chat application to demonstrate how to
export telemetry data. You can find the
[code for the app on GitHub](https://github.com/denoland/examples/tree/main/with-hyperdx).
Either take a copy of that repository or create a
[main.ts](https://github.com/denoland/examples/blob/main/with-hyperdx/main.ts)
file and a
[.env](https://github.com/denoland/examples/blob/main/with-hyperdx/.env.example)
file.
In order to run the app you will need an OpenAI API key. You can get one by
signing up for an account at [OpenAI](https://platform.openai.com/signup) and
creating a new secret key. You can find your API key in the
[API keys section](https://platform.openai.com/account/api-keys) of your OpenAI
account. Once you have an API key, set up an `OPENAI_API_KEY` environment
variable in your `.env` file:
```env title=".env"
OPENAI_API_KEY=your_openai_api_key
```
## Set up a Docker collector
First, create a free HyperDX account to get your API key. Then, we'll set up two
files to configure the OpenTelemetry collector:
1. Create a `Dockerfile`:
```dockerfile title="Dockerfile"
FROM otel/opentelemetry-collector:latest
COPY otel-collector.yml /otel-config.yml
CMD ["--config", "/otel-config.yml"]
```
This Dockerfile:
- Uses the official OpenTelemetry Collector as the base image
- Copies your configuration into the container
- Sets up the collector to use your config when it starts
2. Create a file called `otel-collector.yml`:
```yml title="otel-collector.yml"
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  otlphttp/hdx:
    endpoint: "https://in-otel.hyperdx.io"
    headers:
      authorization: $_HYPERDX_API_KEY
    compression: gzip

processors:
  batch:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/hdx]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/hdx]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/hdx]
```
This configuration file sets up the OpenTelemetry collector to receive telemetry
data from your application and export it to HyperDX. It includes:
- The `receivers` section accepts data via gRPC (4317) and HTTP (4318)
- The `exporters` section sends data to HyperDX with compression and
  authentication
- The `processors` section batches telemetry data for efficient transmission
- The `pipelines` section defines separate flows for logs, traces, and metrics
Build and run the Docker container to start collecting your telemetry data with
the following command:
```sh
docker build -t otel-collector . && docker run -p 4317:4317 -p 4318:4318 otel-collector
```
## Generating telemetry data
Now that we have the app and the docker container set up, we can start
generating telemetry data. Run your application with these environment variables
to send data to the collector:
```sh
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
OTEL_SERVICE_NAME=chat-app \
OTEL_DENO=true \
deno run --unstable-otel --allow-net --allow-env --env-file --allow-read main.ts
```
This command:
- Points the OpenTelemetry exporter to your local collector (`localhost:4318`)
- Names your service "chat-app" in HyperDX
- Enables Deno's OpenTelemetry integration
- Runs your application with the necessary permissions
To generate some telemetry data, make a few requests to your running application
in your browser at [`http://localhost:8000`](http://localhost:8000).
Each request will:
1. Generate traces as it flows through your application
2. Send logs from your application's console output
3. Create metrics about the request performance
4. Forward all this data through the collector to HyperDX
## Viewing telemetry data
In your HyperDX dashboard, you'll see different views of your telemetry data:
### Logs View

Click any log to see details:

### Request Traces
See all logs within a single request:

### Metrics Dashboard
Monitor system performance:

🦕 Now that you have telemetry export working, you could:
1. Add custom spans and attributes to better understand your application
2. Set up alerts based on latency or error conditions
3. Deploy your application and collector to production using platforms like:
- [Fly.io](https://docs.deno.com/examples/deploying_deno_with_docker/)
- [Digital Ocean](https://docs.deno.com/examples/digital_ocean_tutorial/)
- [AWS Lightsail](https://docs.deno.com/examples/aws_lightsail_tutorial/)
For more details on OpenTelemetry configuration with HyperDX, see their
[documentation](https://www.hyperdx.io/docs/install/opentelemetry).
---
# Initialize a project
> Guide to creating and structuring new Deno projects. Learn about starting a new project, task configuration, dependency management, and best practices for growing applications.
URL: https://docs.deno.com/examples/tutorials/initialize_project
While it is possible to run scripts directly with `deno run`, for larger
projects it is recommended to create a sensible directory structure. This way
you can organize your code, manage dependencies, script tasks and run tests more
easily.
Initialize a new project by running the following command:
```sh
deno init my_project
```
Where `my_project` is the name of your project. You can
[read more about the project structure](/runtime/getting_started/first_project/).
### Run your project
Navigate to the project directory:
```sh
cd my_project
```
Then you can run the project directly using the `deno task` command:
```sh
deno task dev
```
Take a look in the `deno.json` file in your new project. You should see a `dev`
task in the "tasks" field.
```json title="deno.json"
"tasks": {
  "dev": "deno run --watch main.ts"
},
```
The `dev` task is a common task that runs the project in development mode. As
you can see, it runs the `main.ts` file with the `--watch` flag, which will
automatically reload the script when changes are made. You can see this in
action if you open the `main.ts` file and make a change.
### Run the tests
In the project directory run:
```sh
deno test
```
This will execute all the tests in the project. You can read more about
[testing in Deno](/runtime/fundamentals/testing/) and we'll cover tests in a
little more depth in a later tutorial. At the moment you have one test file,
`main_test.ts`, which tests the `add` function in `main.ts`.
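For reference, the generated test looks roughly like this (the exact contents
may differ slightly between Deno versions):
```ts title="main_test.ts"
import { assertEquals } from "@std/assert";
import { add } from "./main.ts";

Deno.test(function addTest() {
  assertEquals(add(2, 3), 5);
});
```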
### Adding to your project
The `main.ts` file serves as the entry point for your application. It’s where
you’ll write your main program logic. When developing your project you will
start by removing the default addition program and replace it with your own
code. For example, if you’re building a web server, this is where you’d set up
your routes and handle requests.
Beyond the initial files, you’ll likely create additional modules (files) to
organize your code. Consider grouping related functionality into separate files.
Remember that Deno [supports ES modules](/runtime/fundamentals/modules/), so you
can use import and export statements to structure your code.
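For example, a small utility module and its use from `main.ts` could look like
this; the `logger.ts` helper is illustrative, matching the structure shown
below:
```ts title="utils/logger.ts"
// A small, hypothetical helper module
export function log(message: string) {
  console.log(`[my_project] ${message}`);
}
```
```ts title="main.ts"
import { log } from "./utils/logger.ts";

log("Hello from a structured project!");
```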
Example folder structure for a deno project:
```sh
my_project/
├── deno.json
├── main.ts
├── main_test.ts
├── routes/
│   ├── home.ts
│   └── about.ts
├── services/
│   ├── user.ts
│   └── post.ts
└── utils/
    ├── logger.ts
    ├── logger_test.ts
    ├── validator_test.ts
    └── validator.ts
```
This kind of structure keeps your project clean and makes it easier to find and
manage files.
🦕 Congratulations! Now you know how to create a brand new project with
`deno init`. Remember that Deno encourages simplicity and avoids complex build
tools. Keep your project modular, testable, and organized. As your project
grows, adapt the structure to fit your needs. And most importantly, have fun
exploring Deno’s capabilities!
---
# How to deploy Deno on Kinsta
> Step-by-step guide to deploying Deno applications on Kinsta. Learn how to configure package.json, handle environment variables, set up Git deployments, and use Kinsta's application hosting platform.
URL: https://docs.deno.com/examples/tutorials/kinsta
[Kinsta Application Hosting](https://kinsta.com/application-hosting) is a
service that lets you build and deploy your web apps directly from your Git
repository.
## Preparing your application
At **Kinsta**, we recommend using the
[`deno-bin`](https://www.npmjs.com/package/deno-bin) package to run Deno
applications.
To do so, your `package.json` should look like this:
```json title="package.json"
{
  "name": "deno app",
  "scripts": {
    "start": "deno run --allow-net index.js --port=${PORT}"
  },
  "devDependencies": {
    "deno-bin": "^1.28.2"
  }
}
```
## Example application
```js
import { parseArgs } from "jsr:@std/cli";

const { args } = Deno;

// Use the --port flag if provided, otherwise default to 8000
const port = parseArgs(args).port ? Number(parseArgs(args).port) : 8000;

Deno.serve({ port }, (_req) => new Response("Hello, world"));
```
The application itself is self-explanatory. It's crucial not to hardcode the
`PORT`, but to use the environment variable **Kinsta** provides.
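If you prefer reading the variable directly instead of passing it as a CLI
flag, a minimal sketch (note this variant would also need `--allow-env` in the
start script):
```js
// Hypothetical alternative: read PORT from the environment directly
const port = Number(Deno.env.get("PORT") ?? 8000);

Deno.serve({ port }, (_req) => new Response("Hello, world"));
```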
There is also a [repository](https://github.com/kinsta/hello-world-deno) that
should help you to get started.
## Deployment
1. Register on
[Kinsta Application Hosting](https://kinsta.com/signup/?product_type=app-db)
or login directly to [My Kinsta](https://my.kinsta.com/) admin panel.
2. Go to the Applications tab.
3. Connect your GitHub repository.
4. Press the **Add service > Application** button.
5. Follow the wizard steps.
---
# Testing in isolation with mocks
> Master the art of mocking in your unit tests. Learn how spies, stubs, fake time, and other Deno tools let you improve your code and confidence.
URL: https://docs.deno.com/examples/tutorials/mocking
This guide builds on the
[basics of testing in Deno](/examples/testing_tutorial/) to focus specifically
on mocking techniques that help you isolate your code during testing.
For effective unit testing, you'll often need to "mock" the data that your code
interacts with. Mocking is a technique used in testing where you replace real
data with simulated versions that you can control. This is particularly useful
when testing components that interact with external services, such as APIs or
databases.
Deno provides [helpful mocking utilities](https://jsr.io/@std/testing/doc/mock)
through the Deno Standard Library, making your tests easier to write, more
reliable and faster.
### Spying
In Deno, you can [`spy`](https://jsr.io/@std/testing/doc/mock#spying) on a
function to track how it's called during test execution. Spies don't change how
a function behaves, but they record important details like how many times the
function was called and what arguments were passed to it.
By using spies, you can verify that your code interacts correctly with its
dependencies without setting up complex infrastructure.
In the following example we will test a function called `saveUser()`, which
takes a user object and a database object and calls the database's `save`
method:
```ts
import { assertEquals } from "jsr:@std/assert";
import { assertSpyCalls, spy } from "jsr:@std/testing/mock";

// Define types for better code quality
interface User {
  name: string;
}

interface Database {
  save: (user: User) => Promise<User & { id: number }>;
}

// Function to test
function saveUser(
  user: User,
  database: Database,
): Promise<User & { id: number }> {
  return database.save(user);
}

// Test with a mock
Deno.test("saveUser calls database.save", async () => {
  // Create a mock database with a spy on the save method
  const mockDatabase = {
    save: spy((user: User) => Promise.resolve({ id: 1, ...user })),
  };

  const user: User = { name: "Test User" };
  const result = await saveUser(user, mockDatabase);

  // Verify the mock was called correctly
  assertSpyCalls(mockDatabase.save, 1);
  assertEquals(mockDatabase.save.calls[0].args[0], user);
  assertEquals(result, { id: 1, name: "Test User" });
});
```
We import the necessary functions from the Deno Standard Library to assert
equality and to create and verify spy functions.
The mock database is a stand-in for a real database object, with a `save` method
that is wrapped in a `spy`. The spy function tracks calls to the method, records
arguments passed to it and executes the underlying implementation (in this case
returning a promise with the `user` and an `id`).
The test calls `saveUser()` with the mock data and we use assertions to verify
that:
1. The save method was called exactly once
2. The first argument of the call was the `user` object we passed in
3. The result contains both the original user data and the added ID
We were able to test the `saveUser` operation without setting up or tearing down
any complex database state.
### Clearing spies
When working with multiple tests that use spies, it's important to reset or
clear spies between tests to avoid interference. The Deno testing library
provides a simple way to restore all spies to their original state using the
`restore()` method.
Here's how to clear a spy after you're done with it:
```ts
import { assertEquals } from "jsr:@std/assert";
import { assertSpyCalls, spy } from "jsr:@std/testing/mock";

Deno.test("spy cleanup example", () => {
  const math = {
    double: (x: number) => x * 2,
  };

  // Create a spy on the method
  const doubleSpy = spy(math, "double");

  try {
    // Test code using the spy
    const result = math.double(5);
    assertEquals(result, 10);
    assertSpyCalls(doubleSpy, 1);
  } finally {
    // Always clean up method spies with restore()
    doubleSpy.restore();
  }
});
```
Method spies are disposable: they can automatically restore themselves with the
`using` keyword. This approach means that you do not need to wrap your
assertions in a try statement to ensure you restore the methods before the tests
finish.
```ts
import { assertEquals } from "jsr:@std/assert";
import { assertSpyCalls, spy } from "jsr:@std/testing/mock";

Deno.test("using disposable spies", () => {
  const calculator = {
    add: (a: number, b: number) => a + b,
    multiply: (a: number, b: number) => a * b,
  };

  // The spy will automatically be restored when it goes out of scope
  using addSpy = spy(calculator, "add");

  // Use the spy
  const sum = calculator.add(3, 4);
  assertEquals(sum, 7);
  assertSpyCalls(addSpy, 1);
  assertEquals(addSpy.calls[0].args, [3, 4]);
  // No need for try/finally blocks - the spy will be restored automatically
});

Deno.test("using multiple disposable spies", () => {
  const calculator = {
    add: (a: number, b: number) => a + b,
    multiply: (a: number, b: number) => a * b,
  };

  // Both spies will automatically be restored
  using addSpy = spy(calculator, "add");
  using multiplySpy = spy(calculator, "multiply");

  calculator.add(5, 3);
  calculator.multiply(4, 2);

  assertSpyCalls(addSpy, 1);
  assertSpyCalls(multiplySpy, 1);
  // No cleanup code needed
});
```
For cases where you have multiple spies that don't support the `using` keyword,
you can track them in an array and restore them all at once:
```ts
import { spy } from "jsr:@std/testing/mock";

Deno.test("multiple spies cleanup", () => {
  const spies = [];

  // Create spies on methods
  const objectA = {
    increment: (x: number) => x + 1,
  };
  const spyA = spy(objectA, "increment");
  spies.push(spyA);

  const objectB = {
    method: (x: number) => x * 2,
  };
  const spyB = spy(objectB, "method");
  spies.push(spyB);

  // Use the spies in tests
  // ...

  // Clean up all spies at the end
  try {
    // Test code using spies
  } finally {
    // Restore all spies
    spies.forEach((spyFn) => spyFn.restore());
  }
});
```
By properly cleaning up spies, you ensure that each test starts with a clean
state and avoid side effects between tests.
### Stubbing
While spies track method calls without changing behavior, stubs replace the
original implementation entirely.
[Stubbing](https://jsr.io/@std/testing/doc/mock#stubbing) is a form of mocking
where you temporarily replace a function or method with a controlled
implementation. This allows you to simulate specific conditions or behaviors and
return predetermined values. It can also be used when you need to override
environment-dependent functionality.
In Deno, you can create stubs using the `stub` function from the standard
testing library:
```ts
import { assertEquals } from "jsr:@std/assert";
import { stub } from "jsr:@std/testing/mock";

// Define types for better code quality
interface User {
  name: string;
  role: string;
}

// A service whose method we want to replace during the test
const userService = {
  getCurrentUser(_userId: string): User {
    // Implementation that might involve database calls
    return { name: "Real User", role: "admin" };
  },
};

// Function we want to test
function hasAdminAccess(userId: string): boolean {
  const user = userService.getCurrentUser(userId);
  return user.role === "admin";
}

Deno.test("hasAdminAccess with stubbed user", () => {
  // Create a stub that replaces userService.getCurrentUser
  const getUserStub = stub(
    userService,
    "getCurrentUser",
    // Return a test user with non-admin role
    () => ({ name: "Test User", role: "guest" }),
  );

  try {
    // Test with the stubbed method
    const result = hasAdminAccess("user123");
    assertEquals(result, false);
  } finally {
    // Always restore the original method
    getUserStub.restore();
  }

  // You can also change the stub's behavior during the test
  const adminStub = stub(
    userService,
    "getCurrentUser",
    () => ({ name: "Admin User", role: "admin" }),
  );

  try {
    const adminResult = hasAdminAccess("admin456");
    assertEquals(adminResult, true);
  } finally {
    adminStub.restore();
  }
});
```
Here we import the necessary functions from the Deno Standard Library, then we
set up the `userService` object whose `getCurrentUser` method we're going to
stub. In a real application this might connect to a database, make an API call,
or perform other operations that we may want to avoid during testing.
We set up the function under test, in this case the `hasAdminAccess()` function.
We want to test whether it:
- Calls the `getCurrentUser()` function to get a user object
- Checks if the user's role is "admin"
- Returns a boolean indicating whether the user has admin access
Next we create a test named `hasAdminAccess with stubbed user` and set up a
stub for the `getCurrentUser` method. This replaces the real implementation
with one that returns a user with a `guest` role.
We run the test with the stubbed method: it calls `hasAdminAccess` with a user
ID. Even though the real implementation would return a user with the `admin`
role, our stub returns `guest`, so we can assert that `hasAdminAccess` returns
`false`.
We can change the stub behavior to return `admin` instead and assert that the
function now returns `true`.
At the end we use a `finally` block to ensure the original function is restored
so that we don't accidentally affect other tests.
### Stubbing environment variables
For deterministic testing, you often need to control environment variables.
Deno's Standard Library provides utilities to achieve this:
```ts
import { assertEquals } from "jsr:@std/assert";
import { stub } from "jsr:@std/testing/mock";

// Function that depends on environment variables and time
function generateReport() {
  const environment = Deno.env.get("ENVIRONMENT") || "development";
  const timestamp = new Date().toISOString();
  return {
    environment,
    generatedAt: timestamp,
    data: {/* report data */},
  };
}

Deno.test("report generation with controlled environment", () => {
  // Stub environment
  const originalEnv = Deno.env.get;
  const envStub = stub(Deno.env, "get", (key: string) => {
    if (key === "ENVIRONMENT") return "production";
    return originalEnv.call(Deno.env, key);
  });

  // Stub time
  const dateStub = stub(
    Date.prototype,
    "toISOString",
    () => "2023-06-15T12:00:00Z",
  );

  try {
    const report = generateReport();

    // Verify results with controlled values
    assertEquals(report.environment, "production");
    assertEquals(report.generatedAt, "2023-06-15T12:00:00Z");
  } finally {
    // Always restore stubs to prevent affecting other tests
    envStub.restore();
    dateStub.restore();
  }
});
```
### Faking time
Time-dependent code can be challenging to test because it may produce different
results based on when the test runs. Deno provides a
[`FakeTime`](https://jsr.io/@std/testing/doc/time) utility that allows you to
simulate the passage of time and control date-related functions during tests.
The example below demonstrates how to test time-dependent functions:
`isWeekend()`, which returns true if the current day is Saturday or Sunday, and
`delayedGreeting()` which calls a callback after a 1-second delay:
```ts
import { assertEquals } from "jsr:@std/assert";
import { FakeTime } from "jsr:@std/testing/time";

// Function that depends on the current time
function isWeekend(): boolean {
  const date = new Date();
  const day = date.getDay();
  return day === 0 || day === 6; // 0 is Sunday, 6 is Saturday
}

// Function that works with timeouts
function delayedGreeting(callback: (message: string) => void): void {
  setTimeout(() => {
    callback("Hello after delay");
  }, 1000); // 1 second delay
}

Deno.test("time-dependent tests", () => {
  // Create a fake time starting at a specific date (a Monday)
  const mockedTime = new FakeTime(new Date("2023-05-01T12:00:00Z"));

  try {
    // Test with the mocked Monday
    assertEquals(isWeekend(), false);

    // Move time forward to Saturday
    mockedTime.tick(5 * 24 * 60 * 60 * 1000); // Advance 5 days
    assertEquals(isWeekend(), true);

    // Test async operations with timers
    let greeting = "";
    delayedGreeting((message) => {
      greeting = message;
    });

    // Advance time to trigger the timeout immediately
    mockedTime.tick(1000);
    assertEquals(greeting, "Hello after delay");
  } finally {
    // Always restore the real time
    mockedTime.restore();
  }
});
```
Here we set up a test which creates a controlled time environment by
constructing a `FakeTime` with a starting date of May 1, 2023 (a Monday). The
resulting `FakeTime` controller object lets us manipulate time.
We run tests with the mocked Monday and will see that the `isWeekend` function
returns `false`. Then we can advance time to Saturday and run the test again to
verify that `isWeekend` returns `true`.
`FakeTime` replaces JavaScript's timing functions (`Date`, `setTimeout`,
`setInterval`, etc.) with versions you can control. This allows you to test code
with specific dates or times regardless of when the test runs.
This powerful technique means you will avoid flaky tests that depend on the
system clock and can speed up tests by advancing time instantly instead of
waiting for real timeouts.
Fake time is particularly useful for testing:
- Calendar or date-based features, such as scheduling, appointments or
expiration dates
- Code with timeouts or intervals, such as polling, delayed operations or
debouncing
- Animations or transitions such as testing the completion of timed visual
effects
Like with stubs, always restore the real time functions after your tests using
the `restore()` method to avoid affecting other tests.
## Advanced mocking patterns
### Partial mocking
Sometimes you only want to mock certain methods of an object while keeping
others intact:
```ts
import { assertEquals } from "jsr:@std/assert";
import { stub } from "jsr:@std/testing/mock";

class UserService {
  async getUser(id: string) {
    // Complex database query
    return { id, name: "Database User" };
  }

  async formatUser(user: { id: string; name: string }) {
    return {
      ...user,
      displayName: user.name.toUpperCase(),
    };
  }

  async getUserFormatted(id: string) {
    const user = await this.getUser(id);
    return this.formatUser(user);
  }
}

Deno.test("partial mocking with stubs", async () => {
  const service = new UserService();

  // Only mock the getUser method
  const getUserMock = stub(
    service,
    "getUser",
    () => Promise.resolve({ id: "test-id", name: "Mocked User" }),
  );

  try {
    // The formatUser method will still use the real implementation
    const result = await service.getUserFormatted("test-id");

    assertEquals(result, {
      id: "test-id",
      name: "Mocked User",
      displayName: "MOCKED USER",
    });

    // Verify getUser was called with the right arguments
    assertEquals(getUserMock.calls.length, 1);
    assertEquals(getUserMock.calls[0].args[0], "test-id");
  } finally {
    getUserMock.restore();
  }
});
```
### Mocking fetch requests
Testing code that makes HTTP requests often requires mocking the `fetch` API:
```ts
import { assertEquals } from "jsr:@std/assert";
import { stub } from "jsr:@std/testing/mock";

// Function that uses fetch
async function fetchUserData(userId: string) {
  const response = await fetch(`https://api.example.com/users/${userId}`);
  if (!response.ok) {
    throw new Error(`Failed to fetch user: ${response.status}`);
  }
  return await response.json();
}

Deno.test("mocking fetch API", async () => {
  // Create a response that the mock fetch will return
  const mockResponse = new Response(
    JSON.stringify({ id: "123", name: "John Doe" }),
    { status: 200, headers: { "Content-Type": "application/json" } },
  );

  // Replace fetch with a stubbed version
  const fetchStub = stub(
    globalThis,
    "fetch",
    (_input: string | URL | Request, _init?: RequestInit) =>
      Promise.resolve(mockResponse),
  );

  try {
    const result = await fetchUserData("123");
    assertEquals(result, { id: "123", name: "John Doe" });
  } finally {
    // Restore original fetch
    fetchStub.restore();
  }
});
```
## Real-world example
Let's put everything together in a more comprehensive example. We'll test a user
authentication service that:
1. Validates user credentials
2. Calls an API to authenticate
3. Stores tokens with expiration times
In the example below, we'll create a full `AuthService` class that handles user
login, token management, and authentication. We'll test it thoroughly using
various mocking techniques covered earlier: stubbing fetch requests, stubbing
methods, and manipulating time to test token expiration, all within organized
test steps.
Deno's testing API provides a useful `t.step()` function that allows you to
organize your tests into logical steps or sub-tests. This makes complex tests
more readable and helps pinpoint exactly which part of a test is failing. Each
step can have its own assertions and will be reported separately in the test
output.
```ts
import { assertEquals, assertRejects, assertThrows } from "jsr:@std/assert";
import { stub } from "jsr:@std/testing/mock";
import { FakeTime } from "jsr:@std/testing/time";

// The service we want to test
class AuthService {
  private token: string | null = null;
  private expiresAt: Date | null = null;

  async login(username: string, password: string): Promise<string> {
    // Validate inputs
    if (!username || !password) {
      throw new Error("Username and password are required");
    }

    // Call authentication API
    const response = await fetch("https://api.example.com/login", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ username, password }),
    });

    if (!response.ok) {
      throw new Error(`Authentication failed: ${response.status}`);
    }

    const data = await response.json();

    // Store token with expiration (1 hour)
    this.token = data.token;
    this.expiresAt = new Date(Date.now() + 60 * 60 * 1000);

    return data.token;
  }

  getToken(): string {
    if (!this.token || !this.expiresAt) {
      throw new Error("Not authenticated");
    }

    if (new Date() > this.expiresAt) {
      this.token = null;
      this.expiresAt = null;
      throw new Error("Token expired");
    }

    return this.token;
  }

  logout(): void {
    this.token = null;
    this.expiresAt = null;
  }
}

Deno.test("AuthService comprehensive test", async (t) => {
  await t.step("login should validate credentials", async () => {
    const authService = new AuthService();

    await assertRejects(
      () => authService.login("", "password"),
      Error,
      "Username and password are required",
    );
  });

  await t.step("login should handle API calls", async () => {
    const authService = new AuthService();

    // Mock successful response
    const mockResponse = new Response(
      JSON.stringify({ token: "fake-jwt-token" }),
      { status: 200, headers: { "Content-Type": "application/json" } },
    );

    const fetchStub = stub(
      globalThis,
      "fetch",
      (_url: string | URL | Request, options?: RequestInit) => {
        // Verify correct data is being sent
        const body = options?.body as string;
        const parsedBody = JSON.parse(body);
        assertEquals(parsedBody.username, "testuser");
        assertEquals(parsedBody.password, "password123");
        return Promise.resolve(mockResponse);
      },
    );

    try {
      const token = await authService.login("testuser", "password123");
      assertEquals(token, "fake-jwt-token");
    } finally {
      fetchStub.restore();
    }
  });

  await t.step("token expiration should work correctly", async () => {
    const authService = new AuthService();
    const time = new FakeTime(new Date("2023-01-01T12:00:00Z"));

    // Stub the login process to set the token directly
    const loginStub = stub(
      authService,
      "login",
      () => {
        (authService as any).token = "fake-token";
        (authService as any).expiresAt = new Date(
          Date.now() + 60 * 60 * 1000,
        );
        return Promise.resolve("fake-token");
      },
    );

    try {
      // Login and verify token
      await authService.login("user", "pass");
      assertEquals(authService.getToken(), "fake-token");

      // Advance time past expiration
      time.tick(61 * 60 * 1000);

      // Token should now be expired
      assertThrows(() => authService.getToken(), Error, "Token expired");
    } finally {
      loginStub.restore();
      time.restore();
    }
  });
});
```
This code defines an `AuthService` class with three main functionalities:
- Login - Validates credentials, calls an API, and stores a token with an
expiration time
- GetToken - Returns the token if valid and not expired
- Logout - Clears the token and expiration
The testing structure is organized as a single main test with three logical
**steps**, each testing a different aspect of the service: credential
validation, API call handling, and token expiration.
🦕 Effective mocking is essential for writing reliable, maintainable unit tests.
Deno provides several powerful tools to help you isolate your code during
testing. By mastering these mocking techniques, you'll be able to write more
reliable tests that run faster and don't depend on external services.
For more testing resources, check out:
- [Deno Testing API Documentation](/api/deno/testing)
- [Deno Standard Library Testing Modules](https://jsr.io/@std/testing)
- [Basic Testing in Deno](/examples/testing_tutorial/)
---
# Module metadata
> A guide to working with module metadata in Deno. Learn about import.meta properties, main module detection, file paths, URL resolution, and how to access module context information in your applications.
URL: https://docs.deno.com/examples/tutorials/module_metadata
## Concepts
- [import.meta](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import.meta)
can provide information on the context of the module.
- The boolean
[import.meta.main](https://docs.deno.com/api/web/~/ImportMeta#property_main)
will let you know if the current module is the program entry point.
- The string
[import.meta.url](https://docs.deno.com/api/web/~/ImportMeta#property_url)
will give you the URL of the current module.
- The string
[import.meta.filename](https://docs.deno.com/api/web/~/ImportMeta#property_filename)
will give you the fully resolved path to the current module. _For local
modules only_.
- The string
[import.meta.dirname](https://docs.deno.com/api/web/~/ImportMeta#property_dirname)
will give you the fully resolved path to the directory containing the current
module. _For local modules only_.
- The
[import.meta.resolve](https://docs.deno.com/api/web/~/ImportMeta#property_resolve)
function allows you to resolve a specifier relative to the current module. It
takes into account an import map (if one was provided on startup).
- The string [Deno.mainModule](https://docs.deno.com/api/deno/~/Deno.mainModule)
will give you the URL of the main module entry point, i.e. the module invoked
by the Deno runtime.
## Example
The example below uses two modules to show the difference between
`import.meta.url`, `import.meta.main` and `Deno.mainModule`. In this example,
`module_a.ts` is the main module entry point:
```ts title="module_b.ts"
export function outputB() {
  console.log("Module B's import.meta.url", import.meta.url);
  console.log("Module B's mainModule url", Deno.mainModule);
  console.log(
    "Is module B the main module via import.meta.main?",
    import.meta.main,
  );
}
```
```ts title="module_a.ts"
import { outputB } from "./module_b.ts";

function outputA() {
  console.log("Module A's import.meta.url", import.meta.url);
  console.log("Module A's mainModule url", Deno.mainModule);
  console.log(
    "Is module A the main module via import.meta.main?",
    import.meta.main,
  );
  console.log(
    "Resolved specifier for ./module_b.ts",
    import.meta.resolve("./module_b.ts"),
  );
}

outputA();
console.log("");
outputB();
```
If `module_a.ts` is located in `/home/alice/deno` then the output of
`deno run --allow-read module_a.ts` is:
```console
Module A's import.meta.url file:///home/alice/deno/module_a.ts
Module A's mainModule url file:///home/alice/deno/module_a.ts
Is module A the main module via import.meta.main? true
Resolved specifier for ./module_b.ts file:///home/alice/deno/module_b.ts
Module B's import.meta.url file:///home/alice/deno/module_b.ts
Module B's mainModule url file:///home/alice/deno/module_a.ts
Is module B the main module via import.meta.main? false
```
---
# How to use Mongoose with Deno
> Step-by-step guide to using Mongoose with Deno. Learn how to set up MongoDB connectivity, create schemas, implement data models, and perform CRUD operations using Mongoose's schema-based modeling.
URL: https://docs.deno.com/examples/tutorials/mongoose
[Mongoose](https://mongoosejs.com/) is a popular, schema-based library that
models data for [MongoDB](https://www.mongodb.com/). It simplifies writing
MongoDB validation, casting, and other relevant business logic.
This tutorial will show you how to set up Mongoose and MongoDB with your Deno
project.
[View source](https://github.com/denoland/examples/tree/main/with-mongoose) or
[check out the video guide](https://youtu.be/dmZ9Ih0CR9g).
## Creating a Mongoose Model
Let's create a simple app that connects to MongoDB, creates a `Dinosaur` model,
and adds and updates a dinosaur to the database.
First, we'll create the necessary files and directories:
```console
touch main.ts && mkdir model && touch model/Dinosaur.ts
```
In `/model/Dinosaur.ts`, we'll import `npm:mongoose`, define the
[schema](https://mongoosejs.com/docs/guide.html), and export it:
```ts
import { model, Schema } from "npm:mongoose@^6.7";

// Define schema.
const dinosaurSchema = new Schema({
  name: { type: String, unique: true },
  description: String,
  createdAt: { type: Date, default: Date.now },
  updatedAt: { type: Date, default: Date.now },
});

// Validations
dinosaurSchema.path("name").required(true, "Dinosaur name cannot be blank.");
dinosaurSchema.path("description").required(
  true,
  "Dinosaur description cannot be blank.",
);

// Export model.
export default model("Dinosaur", dinosaurSchema);
```
## Connecting to MongoDB
Now, in our `main.ts` file, we'll import mongoose and the `Dinosaur` schema, and
connect to MongoDB:
```ts
import mongoose from "npm:mongoose@^6.7";
import Dinosaur from "./model/Dinosaur.ts";
await mongoose.connect("mongodb://localhost:27017");
// Check to see connection status.
console.log(mongoose.connection.readyState);
```
Because Deno supports top-level `await`, we're able to simply
`await mongoose.connect()`.
Running this, we should expect a log of `1`:
```shell
$ deno run --allow-read --allow-sys --allow-env --allow-net main.ts
1
```
It worked!
## Manipulating Data
Let's add an instance [method](https://mongoosejs.com/docs/guide.html#methods)
to our `Dinosaur` schema in `/model/Dinosaur.ts`:
```ts
// ./model/Dinosaur.ts

// Methods.
dinosaurSchema.methods = {
  // Update description.
  updateDescription: async function (description: string) {
    this.description = description;
    return await this.save();
  },
};

// ...
```
This instance method, `updateDescription`, will allow you to update a record's
description.
Back in `main.ts`, let's start adding and manipulating data in MongoDB.
```ts
// main.ts

// Create a new Dinosaur.
const deno = new Dinosaur({
  name: "Deno",
  description: "The fastest dinosaur ever lived.",
});

// Insert deno.
await deno.save();

// Find Deno by name.
const denoFromMongoDb = await Dinosaur.findOne({ name: "Deno" });
console.log(
  `Finding Deno in MongoDB -- \n  ${denoFromMongoDb.name}: ${denoFromMongoDb.description}`,
);

// Update description for Deno and save it.
await denoFromMongoDb.updateDescription(
  "The fastest and most secure dinosaur ever lived.",
);

// Check MongoDB to see Deno's updated description.
const newDenoFromMongoDb = await Dinosaur.findOne({ name: "Deno" });
console.log(
  `Finding Deno (again) -- \n  ${newDenoFromMongoDb.name}: ${newDenoFromMongoDb.description}`,
);
```
Running the code, we get:
```console
Finding Deno in MongoDB --
Deno: The fastest dinosaur ever lived.
Finding Deno (again) --
Deno: The fastest and most secure dinosaur ever lived.
```
Boom!
For more info on using Mongoose, please refer to
[their documentation](https://mongoosejs.com/docs/guide.html).
---
# How to use MySQL2 with Deno
> Step-by-step guide to using MySQL2 with Deno. Learn how to set up database connections, execute queries, handle transactions, and build data-driven applications using MySQL's Node.js driver.
URL: https://docs.deno.com/examples/tutorials/mysql2
[MySQL](https://www.mysql.com/) is the most popular database in the
[2022 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2022/#most-popular-technologies-database)
and counts Facebook, Twitter, YouTube, and Netflix among its users.
[View source here.](https://github.com/denoland/examples/tree/main/with-mysql2)
You can manipulate and query a MySQL database with Deno using the `mysql2` node
package and importing via `npm:mysql2`. This allows us to use its Promise
wrapper and take advantage of top-level await.
```tsx
import mysql from "npm:mysql2@^2.3.3/promise";
```
## Connecting to MySQL
We can connect to our MySQL server using the `createConnection()` method. You
need the host (`localhost` if you are testing, or more likely a cloud database
endpoint in production) and the user and password:
```tsx
const connection = await mysql.createConnection({
host: "localhost",
user: "root",
password: "password",
});
```
You can also optionally specify a database during the connection creation. Here
we are going to use `mysql2` to create the database on the fly.
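For example, if the `denos` database already existed, you could select it up
front via the `database` option:
```tsx
const connection = await mysql.createConnection({
  host: "localhost",
  user: "root",
  password: "password",
  database: "denos",
});
```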
## Creating and populating the database
Now that you have the connection running, you can use `connection.query()` with
SQL commands to create databases and tables as well as insert the initial data.
First we want to generate and select the database to use:
```tsx
await connection.query("CREATE DATABASE denos");
await connection.query("use denos");
```
Then we want to create the table:
```tsx
await connection.query(
  "CREATE TABLE `dinosaurs` ( `id` int NOT NULL AUTO_INCREMENT PRIMARY KEY, `name` varchar(255) NOT NULL, `description` varchar(255) )",
);
```
After the table is created we can populate the data:
```tsx
await connection.query(
  "INSERT INTO `dinosaurs` (id, name, description) VALUES (1, 'Aardonyx', 'An early stage in the evolution of sauropods.'), (2, 'Abelisaurus', 'Abels lizard has been reconstructed from a single skull.'), (3, 'Deno', 'The fastest dinosaur that ever lived.')",
);
```
We now have all the data ready to start querying.
## Querying MySQL
We can use the same `connection.query()` method to write our queries. First
we'll try to get all the data in our `dinosaurs` table:
```tsx
const results = await connection.query("SELECT * FROM `dinosaurs`");
console.log(results);
```
The result from this query is all the data in our database:
```tsx
[
  [
    {
      id: 1,
      name: "Aardonyx",
      description: "An early stage in the evolution of sauropods."
    },
    {
      id: 2,
      name: "Abelisaurus",
      description: `"Abel's lizard" has been reconstructed from a single skull.`
    },
    { id: 3, name: "Deno", description: "The fastest dinosaur that ever lived." }
  ],
  // ...fields metadata omitted
]
```
If we want to get just a single element from the database, we can change our
query:
```tsx
const [results, fields] = await connection.query(
  "SELECT * FROM `dinosaurs` WHERE `name` = 'Deno'",
);
console.log(results);
```
Which gives us a single row result:
```tsx
[{ id: 3, name: "Deno", description: "The fastest dinosaur that ever lived." }];
```
Finally, we can close the connection:
```tsx
await connection.end();
```
For more on `mysql2`, check out their documentation
[here](https://github.com/sidorares/node-mysql2).
---
# Build a Next.js App
> Walkthrough guide to building a Next.js application with Deno. Learn how to set up a project, create API routes, implement server-side rendering, and build a full-stack TypeScript application.
URL: https://docs.deno.com/examples/tutorials/next
[Next.js](https://nextjs.org/) is a popular framework for building
server-side-rendered applications. It is built on top of React and provides a
lot of features out of the box.
In this tutorial, we'll build a simple Next.js application and run it with Deno.
The app will display a list of dinosaurs. When you click on one, it'll take you
to a dinosaur page with more details.

Start by verifying that you have the latest version of Deno installed; you will
need at least Deno 1.46.0:
```sh
deno --version
```
## Create a Next.js app with Deno
Next provides a CLI tool to quickly scaffold a new Next.js app. In your terminal
run the following command to create a new Next.js app with Deno:
```sh
deno run -A npm:create-next-app@latest
```
When prompted, select the default options to create a new Next.js app with
TypeScript.
Then, `cd` into the newly created project folder and run the following command
to install the dependencies
```sh
deno install
```
Next.js has some dependencies that still rely on `Object.prototype.__proto__`,
so you need to allow it. In a new `deno.json` file, add the following lines:
```json title="deno.json"
{
  "unstable": ["unsafe-proto"]
}
```
Now you can serve your new Next.js app:
```sh
deno task dev
```
This will start the Next.js server, click the output link to localhost to see
your app in the browser.
## Add a backend
The next step is to add a backend API. We'll create a very simple API that
returns information about dinosaurs.
We'll use Next.js's
[built in API route handlers](https://nextjs.org/docs/app/building-your-application/routing/route-handlers)
to set up our dinosaur API. Next.js uses a file-system-based router, where the
folder structure directly defines the routes.
We'll define three routes. The first route at `/api` will return the string
`Welcome to the dinosaur API`, then we'll set up `/api/dinosaurs` to return all
the dinosaurs, and finally `/api/dinosaurs/[dinosaur]` to return a specific
dinosaur based on the name in the URL.
### /api/
In the `app` folder of your new project, create an `api` folder. In that folder,
create a `route.ts` file, which will handle requests to `/api/`.
Copy and paste the following code into the `api/route.ts` file:
```ts title="route.ts"
export async function GET() {
  return Response.json("welcome to the dinosaur API");
}
```
This code defines a simple route handler that returns a JSON response with the
string `welcome to the dinosaur API`.
### /api/dinosaurs
In the `api` folder, create a folder called `dinosaurs`. In that folder, make a
`data.json` file, which will contain the hard coded dinosaur data. Copy and
paste
[this json file](https://raw.githubusercontent.com/denoland/deno-vue-example/main/api/data.json)
into the `data.json` file.
Create a `route.ts` file in the `dinosaurs` directory, which will handle
requests to `/api/dinosaurs`. In this route we'll read the `data.json` file and
return the dinosaurs as JSON:
```ts title="route.ts"
import data from "./data.json" with { type: "json" };

export async function GET() {
  return Response.json(data);
}
```
### /api/dinosaurs/[dinosaur]
And for the final route, `/api/dinosaurs/[dinosaur]`, we'll create a folder
called `[dinosaur]` in the `dinosaurs` directory. In there, create a `route.ts`
file. In this file we'll read the `data.json` file, find the dinosaur with the
name in the URL, and return it as JSON:
```ts title="route.ts"
import { NextRequest } from "next/server";
import data from "../data.json" with { type: "json" };

type RouteParams = { params: Promise<{ dinosaur: string }> };

export const GET = async (request: NextRequest, { params }: RouteParams) => {
  const { dinosaur } = await params;

  if (!dinosaur) {
    return Response.json("No dinosaur name provided.");
  }

  const dinosaurData = data.find((item) =>
    item.name.toLowerCase() === dinosaur.toLowerCase()
  );

  return Response.json(dinosaurData ? dinosaurData : "No dinosaur found.");
};
```
Now, if you run the app with `deno task dev` and visit
`http://localhost:3000/api/dinosaurs/brachiosaurus` in your browser, you should
see the details of the brachiosaurus dinosaur.
## Build the frontend
Now that we have our backend API set up, let's build the frontend to display the
dinosaur data.
### Define the dinosaur type
Firstly we'll set up a new type, to define the shape of the dinosaur data. In
the `app` directory, create a `types.ts` file and add the following code:
```ts title="types.ts"
export type Dino = { name: string; description: string };
```
### Update the homepage
We'll update the `page.tsx` file in the `app` directory to fetch the dinosaur
data from our API and display it as a list of links.
To execute client-side code in Next.js we need to use the `"use client"`
directive at the top of the file. Then we'll import the modules that we'll need
in this page and export the default function that will render the page:
```tsx title="page.tsx"
"use client";

import { useEffect, useState } from "react";
import { Dino } from "./types";
import Link from "next/link";

export default function Home() {
}
```
Inside the body of the `Home` function, we'll define a state variable to store
the dinosaur data, and a `useEffect` hook to fetch the data from the API when
the component mounts:
```tsx title="page.tsx"
const [dinosaurs, setDinosaurs] = useState<Dino[]>([]);

useEffect(() => {
  (async () => {
    const response = await fetch(`/api/dinosaurs`);
    const allDinosaurs = await response.json() as Dino[];
    setDinosaurs(allDinosaurs);
  })();
}, []);
```
Beneath this, still inside the body of the `Home` function, we'll return a list
of links, each linking to the dinosaur's page:
```tsx title="page.tsx"
return (
  <main>
    <h1>Welcome to the Dinosaur app</h1>
    <p>Click on a dinosaur below to learn more.</p>
    {dinosaurs.map((dinosaur: Dino) => {
      return (
        <Link href={`/${dinosaur.name.toLowerCase()}`} key={dinosaur.name}>
          {dinosaur.name}
        </Link>
      );
    })}
  </main>
);
```
### Create the dinosaur page
Inside the `app` directory, create a new folder called `[dinosaur]`. Inside this
folder create a `page.tsx` file. This file will fetch the details of a specific
dinosaur from the API and render them on the page.
Much like the homepage, we'll need client-side code, and we'll import the
modules we need and export a default function. We'll pass the incoming route
params to the function and set up a type for this parameter:
```tsx title="[dinosaur]/page.tsx"
"use client";

import { useEffect, useState } from "react";
import { Dino } from "../types";
import Link from "next/link";

type RouteParams = { params: Promise<{ dinosaur: string }> };

export default function Dinosaur({ params }: RouteParams) {
}
```
Inside the body of the `Dinosaur` function we'll get the selected dinosaur from
the request, set up a state variable to store the dinosaur data, and write a
`useEffect` hook to fetch the data from the API when the component mounts:
```tsx title="[dinosaur]/page.tsx"
const selectedDinosaur = params.then((params) => params.dinosaur);
const [dinosaur, setDino] = useState({ name: "", description: "" });

useEffect(() => {
  (async () => {
    const resp = await fetch(`/api/dinosaurs/${await selectedDinosaur}`);
    const dino = await resp.json() as Dino;
    setDino(dino);
  })();
}, []);
```
Finally, still inside the `Dinosaur` function body, we'll return a paragraph
element containing the dinosaur's name and description:
```tsx title="[dinosaur]/page.tsx"
return (
  <main>
    <h1>{dinosaur.name}</h1>
    <p>{dinosaur.description}</p>
    <Link href="/">🠠 Back to all dinosaurs</Link>
  </main>
);
```
## Run the app
Now you can run the app with `deno task dev` and visit `http://localhost:3000`
in your browser to see the list of dinosaurs. Click on a dinosaur to see more
details!

🦕 Now you can build and run a Next.js app with Deno! To build on your app you
could consider [adding a database](/runtime/tutorials/connecting_to_databases/)
to replace your `data.json` file, or consider
[writing some tests](/runtime/fundamentals/testing/) to make your app reliable
and production ready.
---
# Build a Nuxt app with Deno
> Step-by-step guide to building Nuxt applications with Deno. Learn how to create a full-stack Vue.js app, implement server-side rendering, add Tailwind styling, and deploy your application.
URL: https://docs.deno.com/examples/tutorials/nuxt
[Nuxt](https://nuxt.com/) is a framework that provides an intuitive way to
create full-stack applications based on [Vue](https://vuejs.org/). It offers
file-based routing, a variety of rendering options, and automatic code splitting
out of the box. With its modular architecture, Nuxt simplifies the development
process by providing a structured approach to building Vue applications.
In this tutorial, we'll build a simple Nuxt application with Deno that will
display a list of dinosaurs and allow you to learn more about each one when you
click on the name:
- [Scaffold a Nuxt app](#scaffold-a-nuxt-app-with-deno)
- [Setup server API routes](#setup-server-api-routes)
- [Setup Vue frontend](#setup-vue-frontend)
- [Add Tailwind](#add-tailwind)
- [Next steps](#next-steps)
You can find the code for this project in this
[repo](https://github.com/denoland/examples/tree/main/with-nuxt).
## Scaffold a Nuxt app with Deno
We can create a new Nuxt project using Deno like this:
```bash
deno -A npm:nuxi@latest init
```
We'll use Deno to manage our package dependencies, and can grab the Nuxt package
from npm. This will create a nuxt-app with this project structure:
```
NUXT-APP/
├── .nuxt/               # Nuxt build directory
├── node_modules/        # Node.js dependencies
├── public/              # Static files
│   ├── favicon.ico
│   └── robots.txt
├── server/              # Server-side code
│   └── tsconfig.json
├── .gitignore
├── app.vue              # Root Vue component
├── nuxt.config.ts       # Nuxt configuration
├── package-lock.json    # NPM lock file
├── package.json         # Project manifest
├── README.md
└── tsconfig.json        # TypeScript configuration
```
## Setup server API routes
Let’s first start by creating the API routes that serve the dinosaur data.
First, our
[dinosaur data](https://github.com/denoland/examples/blob/main/with-nuxt/server/api/data.json)
will live within the server directory as `server/api/data.json`:
```json title="server/api/data.json"
[
  {
    "name": "Aardonyx",
    "description": "An early stage in the evolution of sauropods."
  },
  {
    "name": "Abelisaurus",
    "description": "\"Abel's lizard\" has been reconstructed from a single skull."
  },
  {
    "name": "Abrictosaurus",
    "description": "An early relative of Heterodontosaurus."
  },
  ...etc
]
```
This is where our data will be pulled from. In a full application, this data
would come from a database.
> ⚠️️ In this tutorial we hard code the data. But you can connect
> to [a variety of databases](https://docs.deno.com/runtime/tutorials/connecting_to_databases/) and [even use ORMs like Prisma](https://docs.deno.com/runtime/tutorials/how_to_with_npm/prisma/) with
> Deno.
This app will have two API routes. They will serve the following:
- the full list of dinosaurs for an index page
- individual dinosaur information for an individual dinosaur page
Both will be `*.get.ts` files, which Nuxt automatically converts to API
endpoints to respond to `GET` requests.
[The filename convention determines both the HTTP method and the route path](https://nuxt.com/docs/guide/directory-structure/server#matching-http-method).
The initial `dinosaurs.get.ts` is fairly simple and uses
[`defineCachedEventHandler`](https://nitro.build/guide/cache) to create a cached
endpoint for better performance. This handler simply returns our full dinosaur
data array without any filtering:
```tsx title="server/api/dinosaurs.get.ts"
import data from "./data.json" with { type: "json" };

export default defineCachedEventHandler(() => {
  return data;
});
```
The `GET` route for the individual dinosaur has a little more logic. It extracts
the name parameter from the event context, performs case-insensitive matching to
find the requested dinosaur, and includes proper error handling for missing or
invalid dinosaur names. We'll create a `dinosaurs` directory, then to pass the
name parameter, we'll make a new file named `[name].get.ts`:
```tsx title="server/api/dinosaurs/[name].get.ts"
import data from "../data.json";

export default defineCachedEventHandler((event) => {
  const name = getRouterParam(event, "name");

  if (!name) {
    throw createError({
      statusCode: 400,
      message: "No dinosaur name provided",
    });
  }

  const dinosaur = data.find(
    (dino) => dino.name.toLowerCase() === name.toLowerCase(),
  );

  if (!dinosaur) {
    throw createError({
      statusCode: 404,
      message: "Dinosaur not found",
    });
  }

  return dinosaur;
});
```
Run the server with `deno task dev` and visit
[http://localhost:3000/api/dinosaurs](http://localhost:3000/api/dinosaurs) in
your browser, and you should see the raw JSON response showing all of the
dinosaurs!

You can also retrieve data for a single dinosaur by visiting a particular
dinosaur name, for example:
[http://localhost:3000/api/dinosaurs/aardonyx](http://localhost:3000/api/dinosaurs/aardonyx).

Next, we'll set up the frontend with Vue to display the index page and each
individual dinosaur page.
## Setup the Vue frontend
We want to set up two pages within the app:
- An index page which will list all of the dinosaurs
- An individual dinosaur page showing more information about the selected
dinosaur.
First, create the index page. Nuxt uses
[file-system routing](https://nuxt.com/docs/getting-started/routing), so we will
create a `pages` directory in the root, and within that an index page called
`index.vue`.
To get the data, we’ll use the `useFetch` composable to hit the API endpoint we
created in the previous section:
```tsx title="pages/index.vue"
Welcome to the Dinosaur app
Click on a dinosaur below to learn more.
{{ dinosaur.name }}
```
For the page that shows information on each dinosaur, we'll create a new dynamic
page called `[name].vue`. This page uses Nuxt's
[dynamic route parameters](https://nuxt.com/docs/getting-started/routing#route-parameters),
where the `[name]` in the filename can be accessed in JavaScript as
`route.params.name`. We’ll use the `useRoute` composable to access the route
parameters and `useFetch` to get the specific dinosaur's data based on the name
parameter:
```tsx title="pages/[name].vue"
{{ dinosaur.name }}
{{ dinosaur.description }}
Back to all dinosaurs
```
Next, we’ll have to connect these Vue components together so that they render
properly when we visit the root of the domain. Let’s update `app.vue` at the
root of the directory to serve our application’s root component. We’ll use
[`NuxtLayout`](https://nuxt.com/docs/api/components/nuxt-layout) for consistent
page structure and [`NuxtPage`](https://nuxt.com/docs/api/components/nuxt-page)
for dynamic page rendering:
```tsx title="app.vue"
;
```
Run the server with `deno task dev` and see how it looks at
[http://localhost:3000](http://localhost:3000). Looks great!
## Add Tailwind
Like we said, we're going to add a little bit of styling to this application.
First, we'll set up a layout which will provide a consistent structure across
all pages using Nuxt's layout system with
[slot-based](https://vuejs.org/guide/components/slots) content injection:
```tsx title="layouts/default.vue"
;
```
In this project, we're also going to use [Tailwind](https://tailwindcss.com/)
for some basic design, so we need to install those dependencies:
```bash
deno install -D npm:tailwindcss npm:@tailwindcss/vite
```
Then, we're going to update `nuxt.config.ts` to import the Tailwind dependency
and configure the Nuxt application for Deno compatibility. We'll also enable the
dev tools and set up Tailwind CSS:
```tsx title="nuxt.config.ts"
import tailwindcss from "@tailwindcss/vite";

export default defineNuxtConfig({
  compatibilityDate: "2025-05-15",
  devtools: { enabled: true },
  nitro: {
    preset: "deno",
  },
  app: {
    head: {
      title: "Dinosaur Encyclopedia",
    },
  },
  css: ["~/assets/css/main.css"],
  vite: {
    plugins: [
      tailwindcss(),
    ],
  },
});
```
Next, create a new CSS file, `assets/css/main.css`, with an `@import` that
pulls in Tailwind, as well as the Tailwind utility directives:
```css title="assets/css/main.css"
@import "tailwindcss";
@tailwind base;
@tailwind components;
@tailwind utilities;
```
## Running the application
We can then run the application using:
```bash
deno task dev
```
This will start the app at `localhost:3000`.
And we’re done!
🦕 Next steps for a Nuxt app might be to add authentication using the
[Nuxt Auth](https://auth.nuxtjs.org/) module, implement state management with
[Pinia](https://pinia.vuejs.org/), add server-side data persistence with
[Prisma](https://docs.deno.com/examples/prisma_tutorial/) or
[MongoDB](https://docs.deno.com/examples/mongoose_tutorial/), and set up
automated testing with Vitest. These features would make it production-ready for
larger applications.
---
# Handle OS signals
> Tutorial on handling operating system signals in Deno. Learn how to capture SIGINT and SIGBREAK events, manage signal listeners, and implement graceful shutdown handlers in your applications.
URL: https://docs.deno.com/examples/tutorials/os_signals
> ⚠️ Windows only supports listening for SIGINT and SIGBREAK as of Deno v1.23.
## Concepts
- [Deno.addSignalListener()](https://docs.deno.com/api/deno/~/Deno.addSignalListener)
can be used to capture and monitor OS signals.
- [Deno.removeSignalListener()](https://docs.deno.com/api/deno/~/Deno.removeSignalListener)
can be used to stop watching the signal.
## Set up an OS signal listener
APIs for handling OS signals are modelled after already familiar
[`addEventListener`](https://developer.mozilla.org/en-US/docs/Web/API/EventTarget/addEventListener)
and
[`removeEventListener`](https://developer.mozilla.org/en-US/docs/Web/API/EventTarget/removeEventListener)
APIs.
> ⚠️ Note that listening for OS signals doesn't prevent the event loop from
> finishing, i.e. if there are no more pending async operations the process will
> exit.
You can use the `Deno.addSignalListener()` function to handle OS signals:
```ts title="add_signal_listener.ts"
console.log("Press Ctrl-C to trigger a SIGINT signal");
Deno.addSignalListener("SIGINT", () => {
console.log("interrupted!");
Deno.exit();
});
// Add a timeout to prevent process exiting immediately.
setTimeout(() => {}, 5000);
```
Run with:
```shell
deno run add_signal_listener.ts
```
You can use the `Deno.removeSignalListener()` function to unregister a
previously added signal handler.
```ts title="signal_listeners.ts"
console.log("Press Ctrl-C to trigger a SIGINT signal");
const sigIntHandler = () => {
console.log("interrupted!");
Deno.exit();
};
Deno.addSignalListener("SIGINT", sigIntHandler);
// Add a timeout to prevent process exiting immediately.
setTimeout(() => {}, 5000);
// Stop listening for a signal after 1s.
setTimeout(() => {
Deno.removeSignalListener("SIGINT", sigIntHandler);
}, 1000);
```
Run with:
```shell
deno run signal_listeners.ts
```
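Combining these APIs with a server gives you a graceful shutdown handler. Below
is a minimal sketch (it assumes a recent Deno where the server returned by
`Deno.serve` exposes a `shutdown()` method):
```ts title="graceful_shutdown.ts"
const server = Deno.serve({ port: 8000 }, () => new Response("ok"));

Deno.addSignalListener("SIGINT", async () => {
  console.log("SIGINT received, shutting down...");
  // Stop accepting new connections and let in-flight requests finish.
  await server.shutdown();
  Deno.exit();
});
```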
---
# Distributed Tracing with Context Propagation in Deno
> Implement end-to-end distributed tracing with automatic context propagation in Deno applications. This tutorial covers creating traced services, automatic propagation of trace context, and visualizing distributed traces.
URL: https://docs.deno.com/examples/tutorials/otel_span_propagation
Modern applications are often built as distributed systems with multiple
services communicating with each other. When debugging issues or optimizing
performance in these systems, it's crucial to be able to trace requests as they
flow through different services. This is where distributed tracing comes in.
As of Deno 2.3, the runtime now automatically preserves trace context across
service boundaries, making end-to-end tracing in distributed systems simpler and
more powerful. This means that when one service makes a request to another, the
trace context is automatically propagated, allowing you to see the entire
request flow as a single trace.
## Setting up a distributed system
Our example system will consist of two parts:
1. A server that provides an API endpoint
2. A client that makes requests to the server
### The server
We'll set up a simple HTTP server that responds to GET requests with a JSON
message:
```ts title="server.ts"
import { trace } from "npm:@opentelemetry/api@1";

const tracer = trace.getTracer("api-server", "1.0.0");

// Create a simple API server with Deno.serve
Deno.serve({ port: 8000 }, (req) => {
  return tracer.startActiveSpan("process-api-request", async (span) => {
    // Add attributes to the span for better context
    span.setAttribute("http.route", "/");
    span.updateName("GET /");

    // Add a span event to see in traces
    span.addEvent("processing_request", {
      request_id: crypto.randomUUID(),
      timestamp: Date.now(),
    });

    // Simulate processing time
    await new Promise((resolve) => setTimeout(resolve, 50));
    console.log("Server: Processing request in trace context");

    // End the span when we're done
    span.end();

    return new Response(JSON.stringify({ message: "Hello from server!" }), {
      headers: { "Content-Type": "application/json" },
    });
  });
});
```
### The client
Now, let's create a client that will make requests to our server:
```ts title="client.ts"
import { SpanStatusCode, trace } from "npm:@opentelemetry/api@1";

const tracer = trace.getTracer("api-client", "1.0.0");

// Create a parent span for the client operation
await tracer.startActiveSpan("call-api", async (parentSpan) => {
  try {
    console.log("Client: Starting API call");

    // The fetch call inside this span will automatically:
    // 1. Create a child span for the fetch operation
    // 2. Inject the trace context into the outgoing request headers
    const response = await fetch("http://localhost:8000/");
    const data = await response.json();

    console.log(`Client: Received response: ${JSON.stringify(data)}`);

    parentSpan.addEvent("received_response", {
      status: response.status,
      timestamp: Date.now(),
    });
  } catch (error) {
    console.error("Error calling API:", error);
    if (error instanceof Error) {
      parentSpan.recordException(error);
    }
    parentSpan.setStatus({
      code: SpanStatusCode.ERROR,
      message: error instanceof Error ? error.message : String(error),
    });
  } finally {
    parentSpan.end();
  }
});
```
## Tracing with OpenTelemetry
Both the client and server code already include basic OpenTelemetry
instrumentation:
1. Create a tracer - both files create a tracer using `trace.getTracer()` with a
name and version.
2. Create spans - we use `startActiveSpan()` to create spans that represent
operations.
3. Add context - we add attributes and events to spans to provide more context.
4. End spans - we make sure to end spans when operations are complete.
## Automatic context propagation
The magic happens when the client makes a request to the server. In the client
code there is a fetch call to the server:
```ts
const response = await fetch("http://localhost:8000/");
```
Since this fetch call happens inside an active span, Deno automatically creates
a child span for the fetch operation and injects the trace context into the
outgoing request headers.
When the server receives this request, Deno extracts the trace context from the
request headers and establishes the server span as a child of the client's span.
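The propagated context travels in the
[W3C Trace Context](https://www.w3.org/TR/trace-context/) `traceparent` header.
If you're curious, you can log it on the server side to watch it arrive (a
quick sanity check, not something propagation requires):
```ts
Deno.serve({ port: 8000 }, (req) => {
  // Prints something like:
  // traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
  console.log("traceparent:", req.headers.get("traceparent"));
  return new Response("ok");
});
```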
## Running the example
To run this example, first start the server, giving your OTel service a name:
```sh
OTEL_DENO=true OTEL_SERVICE_NAME=server deno run --unstable-otel --allow-net server.ts
```
Then, in another terminal, run the client, giving the client a different service
name to make observing the propagation clearer:
```sh
OTEL_DENO=true OTEL_SERVICE_NAME=client deno run --unstable-otel --allow-net client.ts
```
You should see:
1. The client logs "Client: Starting API call"
2. The server logs "Server: Processing request in trace context"
3. The client logs the response received from the server
## Viewing traces
To actually see the traces, you'll need an OpenTelemetry collector and a
visualization tool,
[for example Grafana Tempo](/runtime/fundamentals/open_telemetry/#quick-start).
When you visualize the traces, you'll see:
1. A parent span from the client
2. Connected to a child span for the HTTP request
3. Connected to a span from the server
4. All as part of a single trace!
For example, in Grafana, the trace visualization may look like this:

🦕 Now that you understand distributed tracing with Deno, you could extend this
to more complex systems with multiple services and async operations.
With Deno's automatic context propagation, implementing distributed tracing in
your applications has never been easier!
---
# How to use Planetscale with Deno
> Step-by-step guide to using Planetscale with Deno. Learn how to set up serverless MySQL databases, manage connections, execute queries, and build scalable applications with Planetscale's developer-friendly platform.
URL: https://docs.deno.com/examples/tutorials/planetscale
Planetscale is a MySQL-compatible serverless database that is designed with a
developer workflow where developers can create, branch, and deploy databases
from the command line.
[View source here.](https://github.com/denoland/examples/tree/main/with-planetscale)
We'll use the Planetscale serverless driver, `@planetscale/database`, to work
with Deno. First we want to create `main.ts` and import the connect method from
this package:
```tsx
import { connect } from "npm:@planetscale/database@^1.4";
```
## Configuring our connection
The connection requires three credentials: host, username, and password. These
are database-specific, so we first need to create a database in Planetscale. You
can do that by following the initial instructions
[here](https://planetscale.com/docs/tutorials/planetscale-quick-start-guide).
Don't worry about adding the schema—we can do that through
`@planetscale/database`.
Once you have created the database, head to Overview, click "Connect", and
choose "Connect with `@planetscale/database`" to get the host and username. Then
click through to Passwords to create a new password for your database. Once you
have all three you can plug them in directly, or better, store them as
environment variables:
```bash
export HOST=
export USERNAME=
export PASSWORD=
```
Then call them using `Deno.env`:
```tsx
const config = {
  host: Deno.env.get("HOST"),
  username: Deno.env.get("USERNAME"),
  password: Deno.env.get("PASSWORD"),
};

const conn = connect(config);
```
This will also work on Deno Deploy if you set the environment variables in the
dashboard. Run with:
```shell
deno run --allow-net --allow-env main.ts
```
The `conn` object is now an open connection to our Planetscale database.
## Creating and populating our database table
Now that you have the connection running, you can `conn.execute()` with SQL
commands to create tables and insert the initial data:
```tsx
await conn.execute(
"CREATE TABLE dinosaurs (id int NOT NULL AUTO_INCREMENT PRIMARY KEY, name varchar(255) NOT NULL, description varchar(255) NOT NULL);",
);
await conn.execute(
"INSERT INTO `dinosaurs` (id, name, description) VALUES (1, 'Aardonyx', 'An early stage in the evolution of sauropods.'), (2, 'Abelisaurus', 'Abels lizard has been reconstructed from a single skull.'), (3, 'Deno', 'The fastest dinosaur that ever lived.')",
);
```
## Querying Planetscale
We can use the same `conn.execute()` method to write our queries. Let's get a
list of all our dinosaurs:
```tsx
const results = await conn.execute("SELECT * FROM `dinosaurs`");
console.log(results.rows);
```
The result:
```tsx
[
  {
    id: 1,
    name: "Aardonyx",
    description: "An early stage in the evolution of sauropods.",
  },
  {
    id: 2,
    name: "Abelisaurus",
    description: "Abels lizard has been reconstructed from a single skull.",
  },
  { id: 3, name: "Deno", description: "The fastest dinosaur that ever lived." },
];
```
We can also get just a single row from the database by specifying a dinosaur
name:
```tsx
const result = await conn.execute(
"SELECT * FROM `dinosaurs` WHERE `name` = 'Deno'",
);
console.log(result.rows);
```
Which gives us a single row result:
```tsx
[{ id: 3, name: "Deno", description: "The fastest dinosaur that ever lived." }];
```
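Rather than interpolating values into the SQL string, you can also pass bound
parameters as a second argument to `execute()`, which the serverless driver
escapes for you:
```tsx
const result = await conn.execute(
  "SELECT * FROM `dinosaurs` WHERE `name` = ?",
  ["Deno"],
);
console.log(result.rows);
```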
You can find out more about working with Planetscale in their
[docs](https://planetscale.com/docs).
---
# How to create a RESTful API with Prisma and Oak
> Guide to building a RESTful API using Prisma and Oak with Deno. Learn how to set up database schemas, generate clients, implement CRUD operations, and deploy your API with proper type safety.
URL: https://docs.deno.com/examples/tutorials/prisma
[Prisma](https://prisma.io) has been one of our top requested modules to work
with in Deno. The demand is understandable, given that Prisma's developer
experience is top notch and plays well with so many persistent data storage
technologies.
We're excited to show you how to use Prisma with Deno.
In this How To guide, we'll set up a simple RESTful API in Deno using Oak and
Prisma.
Let's get started.
[View source](https://github.com/denoland/examples/tree/main/with-prisma) or
[check out the video guide](https://youtu.be/P8VzA_XSF8w).
## Setup the application
Let's create the folder `rest-api-with-prisma-oak` and navigate there:
```shell
mkdir rest-api-with-prisma-oak
cd rest-api-with-prisma-oak
```
Then, let's run `prisma init` with Deno:
```shell
deno run --allow-read --allow-env --allow-write npm:prisma@latest init
```
This will generate
[`prisma/schema.prisma`](https://www.prisma.io/docs/concepts/components/prisma-schema).
Let's update it with the following:
```ts
generator client {
  provider        = "prisma-client-js"
  previewFeatures = ["deno"]
  output          = "../generated/client"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model Dinosaur {
  id          Int    @id @default(autoincrement())
  name        String @unique
  description String
}
```
Prisma also generates a `.env` file with a `DATABASE_URL` environment variable.
Let's assign `DATABASE_URL` to a PostgreSQL connection string. In this example,
we'll use a free
[PostgreSQL database from Supabase](https://supabase.com/database).
Next, let's create the database schema:
```shell
deno run -A npm:prisma@latest db push
```
After that's complete, we'll need to generate a Prisma Client:
```shell
deno run -A --unstable-detect-cjs npm:prisma@latest generate --no-engine
```
## Setup Accelerate in the Prisma Data Platform
To get started with the Prisma Data Platform:
1. Sign up for a free [Prisma Data Platform account](https://console.prisma.io).
2. Create a project.
3. Navigate to the project you created.
4. Enable Accelerate by providing your database's connection string.
5. Generate an Accelerate connection string and copy it to your clipboard.
Assign the Accelerate connection string, which begins with `prisma://`, to
`DATABASE_URL` in your `.env` file, replacing your existing connection string.
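Your `.env` should end up looking something like this (with your own API key;
the host shown is the standard Accelerate endpoint):
```bash title=".env"
DATABASE_URL="prisma://accelerate.prisma-data.net/?api_key=YOUR_API_KEY"
```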
Next, let's create a seed script to seed the database.
## Seed your Database
Create `./prisma/seed.ts`:
```shell
touch prisma/seed.ts
```
And in `./prisma/seed.ts`:
```ts
import { Prisma, PrismaClient } from "../generated/client/deno/edge.ts";

const prisma = new PrismaClient({
  // DATABASE_URL is loaded from .env via the --env flag (see the tip below).
  datasourceUrl: Deno.env.get("DATABASE_URL"),
});

const dinosaurData: Prisma.DinosaurCreateInput[] = [
  {
    name: "Aardonyx",
    description: "An early stage in the evolution of sauropods.",
  },
  {
    name: "Abelisaurus",
    description: "Abel's lizard has been reconstructed from a single skull.",
  },
  {
    name: "Acanthopholis",
    description: "No, it's not a city in Greece.",
  },
];

/**
 * Seed the database.
 */
for (const u of dinosaurData) {
  const dinosaur = await prisma.dinosaur.create({
    data: u,
  });
  console.log(`Created dinosaur with id: ${dinosaur.id}`);
}
console.log(`Seeding finished.`);

await prisma.$disconnect();
```
We can now run `seed.ts` with:
```shell
deno run -A --env prisma/seed.ts
```
> [!TIP]
>
> The `--env` flag is used to tell Deno to load environment variables from the
> `.env` file.
After doing so, you should be able to see your data on Prisma Studio by running
the following command:
```bash
deno run -A npm:prisma studio
```
You should see something similar to the following screenshot:

## Create your API routes
We'll use [`oak`](https://jsr.io/@oak/oak) to create the API routes. Let's keep
them simple for now.
Let's create a `main.ts` file:
```shell
touch main.ts
```
Then, in your `main.ts` file:
```ts
import { PrismaClient } from "./generated/client/deno/edge.ts";
import { Application, Router } from "jsr:@oak/oak";

/**
 * Initialize.
 */
const prisma = new PrismaClient({
  datasources: {
    db: {
      // DATABASE_URL is loaded from .env via the --env flag.
      url: Deno.env.get("DATABASE_URL"),
    },
  },
});
const app = new Application();
const router = new Router();

/**
 * Setup routes.
 */
router
  .get("/", (context) => {
    context.response.body = "Welcome to the Dinosaur API!";
  })
  .get("/dinosaur", async (context) => {
    // Get all dinosaurs.
    const dinosaurs = await prisma.dinosaur.findMany();
    context.response.body = dinosaurs;
  })
  .get("/dinosaur/:id", async (context) => {
    // Get one dinosaur by id.
    const { id } = context.params;
    const dinosaur = await prisma.dinosaur.findUnique({
      where: {
        id: Number(id),
      },
    });
    context.response.body = dinosaur;
  })
  .post("/dinosaur", async (context) => {
    // Create a new dinosaur.
    const { name, description } = await context.request.body.json();
    const result = await prisma.dinosaur.create({
      data: {
        name,
        description,
      },
    });
    context.response.body = result;
  })
  .delete("/dinosaur/:id", async (context) => {
    // Delete a dinosaur by id.
    const { id } = context.params;
    const dinosaur = await prisma.dinosaur.delete({
      where: {
        id: Number(id),
      },
    });
    context.response.body = dinosaur;
  });

/**
 * Setup middleware.
 */
app.use(router.routes());
app.use(router.allowedMethods());

/**
 * Start server.
 */
await app.listen({ port: 8000 });
```
Now, let's run it:
```shell
deno run -A --env main.ts
```
Let's visit `localhost:8000/dinosaur`:

Next, let's `POST` a new dinosaur with this `curl` command:
```shell
curl -X POST http://localhost:8000/dinosaur -H "Content-Type: application/json" -d '{"name": "Deno", "description":"The fastest, most secure, easiest to use Dinosaur ever to walk the Earth."}'
```
You should now see a new row on Prisma Studio:

Nice!
## What's next?
Building your next app will be more productive and fun with Deno and Prisma,
since both technologies deliver an intuitive developer experience with data
modeling, type-safety, and robust IDE support.
If you're interested in connecting Prisma to Deno Deploy,
[check out this awesome guide](https://www.prisma.io/docs/guides/deployment/deployment-guides/deploying-to-deno-deploy).
---
# Build Qwik with Deno
> Step-by-step guide to building Qwik applications with Deno. Learn about resumability, server-side rendering, route handling, and how to create fast, modern web applications with zero client-side JavaScript by default.
URL: https://docs.deno.com/examples/tutorials/qwik
[Qwik](https://qwik.dev/) is a JavaScript framework that delivers
instant-loading web applications by leveraging resumability instead of
hydration. In this tutorial, we'll build a simple Qwik application and run it
with Deno. The app will display a list of dinosaurs. When you click on one,
it'll take you to a dinosaur page with more details.
We'll go over how to build a simple Qwik app using Deno:
- [Scaffold a Qwik app](#scaffold-a-qwik-app)
- [Setup data and type definitions](#setup-data-and-type-definitions)
- [Build the frontend](#build-the-frontend)
- [Next steps](#next-steps)
Feel free to skip directly to
[the source code](https://github.com/denoland/examples/tree/main/with-qwik) or
follow along below!
## Scaffold a Qwik app
We can create a new Qwik project using deno like this:
```bash
deno init --npm qwik@latest
```
This will run you through the setup process for Qwik and Qwik City. Here, we
chose the simplest “Empty App” deployment with npm dependencies.
When complete, you’ll have a project structure that looks like this:
```
.
├── node_modules/
├── public/
├── src/
│   ├── components/
│   │   └── router-head/
│   │       └── router-head.tsx
│   ├── routes/
│   │   ├── index.tsx
│   │   ├── layout.tsx
│   │   └── service-worker.ts
│   ├── entry.dev.tsx
│   ├── entry.preview.tsx
│   ├── entry.ssr.tsx
│   ├── global.css
│   └── root.tsx
├── .eslintignore
├── .eslintrc.cjs
├── .gitignore
├── .prettierignore
├── package-lock.json
├── package.json
├── qwik.env.d.ts
├── README.md
├── tsconfig.json
└── vite.config.ts
```
Most of this is boilerplate configuration that we won’t touch. A few of the
important files to know for how Qwik works are:
- `src/components/router-head/router-head.tsx`: Manages the HTML head elements
(like title, meta tags, etc.) across different routes in your Qwik
application.
- `src/routes/index.tsx`: The main entry point and home page of your application
that users see when they visit the root URL.
- `src/routes/layout.tsx`: Defines the common layout structure that wraps around
pages, allowing you to maintain consistent UI elements like headers and
footers.
- `src/routes/service-worker.ts`: Handles Progressive Web App (PWA)
functionality, offline caching, and background tasks for your application.
- `src/routes/entry.ssr.tsx`: Controls how your application is server-side
rendered, managing the initial HTML generation and hydration process.
- `src/routes/root.tsx`: The root component that serves as the application's
shell, containing global providers and the main routing structure.
Now we can build out our own routes and files within the application.
## Setup data and type definitions
We’ll start by adding our
[dinosaur data](https://github.com/denoland/examples/blob/main/with-qwik/src/data/dinosaurs.json)
to a new `./src/data` directory as `dinosaurs.json`:
```jsonc
// ./src/data/dinosaurs.json
{
  "dinosaurs": [
    {
      "name": "Tyrannosaurus Rex",
      "description": "A massive carnivorous dinosaur with powerful jaws and tiny arms."
    },
    {
      "name": "Brachiosaurus",
      "description": "A huge herbivorous dinosaur with a very long neck."
    },
    {
      "name": "Velociraptor",
      "description": "A small but fierce predator that hunted in packs."
    }
    // ...
  ]
}
```
This is where our data will be pulled from. In a full application, this data
would come from a database.
> ⚠️️ In this tutorial we hard code the data. But you can connect
> to [a variety of databases](https://docs.deno.com/runtime/tutorials/connecting_to_databases/) and [even use ORMs like Prisma](https://docs.deno.com/runtime/tutorials/how_to_with_npm/prisma/) with
> Deno.
Next, let's add type definitions for our dinosaur data. We'll put it in
`types.ts` in `./src/`:
```tsx
// ./src/types.ts
export type Dino = {
  name: string;
  description: string;
};
```
Next, let's add API routes to serve this data.
## Add API routes
First, let's create the route to load all dinosaurs for the index page. This API
endpoint uses Qwik City's
[`RequestHandler`](https://qwik.dev/docs/advanced/request-handling/) to create a
`GET` endpoint that loads and returns our dinosaur data using the json helper
for proper response formatting. We'll add the below to a new file in
`./src/routes/api/dinosaurs/index.ts`:
```tsx
// ./src/routes/api/dinosaurs/index.ts
import { RequestHandler } from "@builder.io/qwik-city";
import data from "~/data/dinosaurs.json" with { type: "json" };

export const onGet: RequestHandler = async ({ json }) => {
  const dinosaurs = data;
  json(200, dinosaurs);
};
```
Next, let's create the API route to get the information for a single dinosaur.
This takes the parameter from the URL and uses it to search through our dinosaur
data. We'll add the below code to `./src/routes/api/dinosaurs/[name]/index.ts`:
```tsx
// ./src/routes/api/dinosaurs/[name]/index.ts
import { RequestHandler } from "@builder.io/qwik-city";
import data from "~/data/dinosaurs.json" with { type: "json" };

export const onGet: RequestHandler = async ({ params, json }) => {
  const { name } = params;
  // The JSON file wraps the array in a "dinosaurs" key.
  const { dinosaurs } = data;

  if (!name) {
    json(400, { error: "No dinosaur name provided." });
    return;
  }

  const dinosaur = dinosaurs.find(
    (dino) => dino.name.toLowerCase() === name.toLowerCase(),
  );
  if (!dinosaur) {
    json(404, { error: "No dinosaur found." });
    return;
  }

  json(200, dinosaur);
};
```
Now that the API routes are wired up and serving data, let's create the two
frontend pages: the index page and the individual dinosaur detail pages.
## Build the frontend
We'll create our homepage by updating our `./src/routes/index.tsx` file using
Qwik's [`routeLoader$`](https://qwik.dev/docs/route-loader/) for server-side
data fetching. This `component$` loads and renders the dinosaur data during SSR
via `useDinosaurs()`:
```tsx
// ./src/routes/index.tsx
import { component$ } from "@builder.io/qwik";
import { Link, routeLoader$ } from "@builder.io/qwik-city";
import type { Dino } from "~/types";
import data from "~/data/dinosaurs.json" with { type: "json" };

export const useDinosaurs = routeLoader$(() => {
  return data;
});

export default component$(() => {
  const dinosaursSignal = useDinosaurs();

  return (
    // Minimal markup sketch; see the source repo linked above for the
    // full version.
    <main>
      <h1>Welcome to the Dinosaur app</h1>
      <p>Click on a dinosaur below to learn more.</p>
      {dinosaursSignal.value.dinosaurs.map((dinosaur: Dino) => (
        <Link href={`/${dinosaur.name.toLowerCase()}`} key={dinosaur.name}>
          {dinosaur.name}
        </Link>
      ))}
    </main>
  );
});
```
Now that we have our main index page, let's add a page for the individual
dinosaur information. We'll use Qwik's
[dynamic routing](https://qwik.dev/docs/routing/), with `[name]` as the key for
each dinosaur. This page leverages `routeLoader$` to fetch individual dinosaur
details based on the URL parameter, with built-in error handling if the dinosaur
isn't found.
The component uses the same SSR pattern as our index page, but with
parameter-based data loading and a simpler display layout for individual
dinosaur details:
```tsx
// ./src/routes/[name]/index.tsx
import { component$ } from "@builder.io/qwik";
import { Link, routeLoader$ } from "@builder.io/qwik-city";
import type { Dino } from "~/types";
import data from "~/data/dinosaurs.json" with { type: "json" };

export const useDinosaurDetails = routeLoader$(({ params }): Dino => {
  const { dinosaurs } = data;
  const dinosaur = dinosaurs.find(
    (dino: Dino) => dino.name.toLowerCase() === params.name.toLowerCase(),
  );
  if (!dinosaur) {
    throw new Error("Dinosaur not found");
  }
  return dinosaur;
});

export default component$(() => {
  const dinosaurSignal = useDinosaurDetails();

  return (
    // Minimal markup sketch; see the source repo linked above.
    <main>
      <h1>{dinosaurSignal.value.name}</h1>
      <p>{dinosaurSignal.value.description}</p>
      <Link href="/">Back to all dinosaurs</Link>
    </main>
  );
});
```
Now that we have built our routes and the frontend components, we can run our
application:
```bash
deno task dev
```
This will start the app at `localhost:5173`.
Tada!
## Next steps
🦕 Now you can build and run a Qwik app with Deno! Here are some ways you could
enhance your dinosaur application:
Next steps for a Qwik app might be to use Qwik's lazy loading capabilities for
dinosaur images and other components, or add client-side state management for
complex features.
- Add persistent data store
[using a database like Postgres or MongoDB](https://docs.deno.com/runtime/tutorials/connecting_to_databases/)
and an ORM like [Drizzle](https://docs.deno.com/examples/drizzle_tutorial/) or
[Prisma](https://docs.deno.com/runtime/tutorials/how_to_with_npm/prisma/)
- use Qwik's lazy loading capabilities for dinosaur images and components
- add client-side state management
- self-host your app to
[AWS](https://docs.deno.com/runtime/tutorials/aws_lightsail/),
[Digital Ocean](https://docs.deno.com/runtime/tutorials/digital_ocean/), and
[Google Cloud Run](https://docs.deno.com/runtime/tutorials/google_cloud_run/)
---
# Build a React app with a starter template
> Complete guide to building React applications with Deno and Vite. Learn how to set up a project from a template, implement routing, add API endpoints, and deploy your full-stack TypeScript application.
URL: https://docs.deno.com/examples/tutorials/react
[React](https://reactjs.org) is the most widely used JavaScript frontend
library.
In this tutorial we'll build a simple React app with Deno. The app will display
a list of dinosaurs. When you click on one, it'll take you to a dinosaur page
with more details. You can see the
[finished app repo on GitHub](https://github.com/denoland/tutorial-with-react)

This tutorial will use [Vite](https://vitejs.dev/) to serve the app locally.
Vite is a build tool and development server for modern web projects. It pairs
well with React and Deno, leveraging ES modules and allowing you to import React
components directly.
## Starter app
We've set up a
[starter template for you to use](https://github.com/denoland/react-vite-ts-template).
This will set up a basic starter app with React, Vite and a deno.json file for
you to configure your project. Visit the GitHub repository at
[https://github.com/denoland/react-vite-ts-template](https://github.com/denoland/react-vite-ts-template)
and click the "Use this template" button to create a new repository.
Once you have created a new repository from the template, clone it to your local
machine and navigate to the project directory.
## Clone the repository locally
```sh
git clone https://github.com/your-username/your-repo-name.git
cd your-repo-name
```
## Install the dependencies
Install the project dependencies by running:
```sh
deno install
```
## Run the dev server
Now you can serve your new react app by running:
```sh
deno run dev
```
This will start the Vite server, click the output link to localhost to see your
app in the browser.
## About the template
The template repository you cloned comes with a basic React app. The app uses
Vite as a dev server and provides a static file server built with
[oak](https://jsr.io/@oak/oak) which will serve the built app when deployed. The
React app is in the `client` folder and the backend server is in the `server`
folder.
The `deno.json` file is used to configure the project and specify the
permissions required to run the app, it contains the `tasks` field which defines
the tasks that can be run with `deno run`. It has a `dev` task which runs the
Vite server and a `build` task which builds the app with Vite, and a `serve`
task which runs the backend server to serve the built app.
## Add a backend API
We'll build an API into the server provided by the template. This will be where
we get our dinosaur data.
In the `server` directory of your new project, create an `api` folder. In that
folder, create a `data.json`, which will contain the hard coded dinosaur data.
Copy and paste
[this json file](https://github.com/denoland/tutorial-with-react/blob/main/api/data.json)
into the `api/data.json` file. (If you were building a real app, you would
probably fetch this data from a database or an external API.)
We're going to build out some API routes that return dinosaur information into
the server that came with the template, we'll need the
[`cors` middleware](https://jsr.io/@tajpouria/cors) to enable
[CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS).
Use the `deno install` command to add the cors dependency to your project:
```shell
deno install jsr:@tajpouria/cors
```
Next, update `server/main.ts` to import the required modules and create a new
`Router` instance to define some routes:
```ts title="main.ts"
import { Application } from "jsr:@oak/oak/application";
import { Router } from "jsr:@oak/oak/router";
import { oakCors } from "@tajpouria/cors";
import routeStaticFilesFrom from "./util/routeStaticFilesFrom.ts";
import data from "./api/data.json" with { type: "json" };
export const app = new Application();
const router = new Router();
```
After this, in the same file, we'll define two routes. One at `/api/dinosaurs`
to return all the dinosaurs, and `/api/dinosaurs/:dinosaur` to return a specific
dinosaur based on the name in the URL:
```ts title="main.ts"
router.get("/api/dinosaurs", (context) => {
context.response.body = data;
});
router.get("/api/dinosaurs/:dinosaur", (context) => {
if (!context?.params?.dinosaur) {
context.response.body = "No dinosaur name provided.";
}
const dinosaur = data.find((item) =>
item.name.toLowerCase() === context.params.dinosaur.toLowerCase()
);
context.response.body = dinosaur ?? "No dinosaur found.";
});
```
At the bottom of the same file, attach the routes we just defined to the
application. We must also include the static file server from the template, and
finally we'll start the server listening on port 8000:
```ts title="main.ts"
app.use(oakCors());
app.use(router.routes());
app.use(router.allowedMethods());
app.use(routeStaticFilesFrom([
  `${Deno.cwd()}/client/dist`,
  `${Deno.cwd()}/client/public`,
]));

if (import.meta.main) {
  console.log("Server listening on port http://localhost:8000");
  await app.listen({ port: 8000 });
}
```
You can run the API server with
`deno run --allow-env --allow-net --allow-read server/main.ts`. We'll create a
task to run this command in the background and update the dev task to run both
the React app and the API server.
In your `deno.json` file, update the `tasks` field to include the following:
```diff title="deno.json"
{
"tasks": {
+ "dev": "deno run -A npm:vite & deno run server:start",
"build": "deno run -A npm:vite build",
"server:start": "deno run -A --node-modules-dir --watch ./server/main.ts",
"serve": "deno run build && deno run server:start"
},
+ "nodeModulesDir": "auto",
```
If you run `deno run dev` now and visit `localhost:8000/api/dinosaurs`, in your
browser you should see a JSON response of all of the dinosaurs.
## Update the entrypoint
The entrypoint for the React app is in the `client/src/main.tsx` file. Ours is
going to be very basic:
```tsx title="main.tsx"
import { StrictMode } from "react";
import { createRoot } from "react-dom/client";
import "./index.css";
import App from "./App.tsx";
createRoot(document.getElementById("root")!).render(
,
);
```
## Add a router
The app will have two routes: `/` and `/:dinosaur`.
We'll use [`react-router-dom`](https://reactrouter.com/en/main) to build out
some routing logic, so we'll need to add the `react-router-dom` dependency to
your project. In the project root run:
```shell
deno install npm:react-router-dom
```
Update the `/src/App.tsx` file to import and use the
[`BrowserRouter`](https://reactrouter.com/en/main/router-components/browser-router)
component from `react-router-dom` and define the two routes:
```tsx title="App.tsx"
import { BrowserRouter, Route, Routes } from "react-router-dom";
import Index from "./pages/index.tsx";
import Dinosaur from "./pages/Dinosaur.tsx";
import "./App.css";

function App() {
  return (
    <BrowserRouter>
      <Routes>
        <Route path="/" element={<Index />} />
        <Route path="/:selectedDinosaur" element={<Dinosaur />} />
      </Routes>
    </BrowserRouter>
  );
}

export default App;
```
## Proxy to forward the api requests
Vite serves the application on port `3000` while our API runs on port `8000`,
so we'll need to set up a proxy that forwards requests on `/api` paths to the
backend server. Add a proxy setting to the `vite.config.ts`:
```diff title="vite.config.ts"
export default defineConfig({
root: "./client",
server: {
port: 3000,
+ proxy: {
+ "/api": {
+ target: "http://localhost:8000",
+ changeOrigin: true,
+ },
+ },
```
## Create the pages
We'll create two pages: `Index` and `Dinosaur`. The `Index` page will list all
the dinosaurs and the `Dinosaur` page will show details of a specific dinosaur.
Create a `pages` folder in the `src` directory and inside that create two files:
`index.tsx` and `Dinosaur.tsx`.
### Types
Both pages will use the `Dino` type to describe the shape of data they're
expecting from the API, so let's create a `types.ts` file in the `src`
directory:
```ts title="types.ts"
export type Dino = { name: string; description: string };
```
### index.tsx
This page will fetch the list of dinosaurs from the API and render them as
links:
```tsx title="index.tsx"
import { useEffect, useState } from "react";
import { Link } from "react-router-dom";
import { Dino } from "../types.ts";
export default function Index() {
const [dinosaurs, setDinosaurs] = useState([]);
useEffect(() => {
(async () => {
const response = await fetch(`/api/dinosaurs/`);
const allDinosaurs = await response.json() as Dino[];
setDinosaurs(allDinosaurs);
})();
}, []);
return (
Welcome to the Dinosaur app
Click on a dinosaur below to learn more.
{dinosaurs.map((dinosaur: Dino) => {
return (
{dinosaur.name}
);
})}
);
}
```
### Dinosaur.tsx
This page will fetch the details of a specific dinosaur from the API and render
it in a paragraph:
```tsx title="Dinosaur.tsx"
import { useEffect, useState } from "react";
import { Link, useParams } from "react-router-dom";
import { Dino } from "../types";

export default function Dinosaur() {
  const { selectedDinosaur } = useParams();
  const [dinosaur, setDino] = useState<Dino>({ name: "", description: "" });

  useEffect(() => {
    (async () => {
      const resp = await fetch(`/api/dinosaurs/${selectedDinosaur}`);
      const dino = await resp.json() as Dino;
      setDino(dino);
    })();
  }, [selectedDinosaur]);

  return (
    // Minimal markup sketch.
    <div>
      <h1>{dinosaur.name}</h1>
      <p>{dinosaur.description}</p>
      <Link to="/">🠠 Back to all dinosaurs</Link>
    </div>
  );
}
```
### Styling the list of dinosaurs
Since we are displaying the list of dinosaurs on the main page, let's do some
basic formatting. Add the following to the bottom of `src/App.css` to display
our list of dinosaurs in an orderly fashion:
```css title="src/App.css"
.dinosaur {
  display: block;
}
```
## Run the app
To run the app use the task you set up earlier
```sh
deno run dev
```
Navigate to the local Vite server in your browser (`localhost:3000`) and you
should see the list of dinosaurs displayed which you can click through to find
out about each one.

## Build and deploy
The template you cloned comes with a `serve` task that builds the app and serves
it with the backend server. Run the following command to build and serve the
app:
```sh
deno run serve
```
If you visit `localhost:8000` in your browser you should see the app running!
You can deploy this app to your favourite cloud provider. We recommend using
[Deno Deploy](https://deno.com/deploy) for a simple and easy deployment
experience.
To deploy to Deno Deploy, visit the
[Deno Deploy dashboard](https://dash.deno.com) and create a new project. You can
then deploy the app by connecting your GitHub repository and selecting the
branch you want to deploy.
Give the project a name, and make sure that the `build step` is set to
`deno run build` and the `Entrypoint` is `server/main.ts`.
Click the `Deploy Project` button and your app will be live!
🦕 Now you can scaffold and develop a React app with Vite and Deno! You’re ready
to build blazing-fast web applications. We hope you enjoy exploring these
cutting-edge tools, we can't wait to see what you make!
---
# How to use Redis with Deno
> Step-by-step guide to using Redis with Deno. Learn how to set up caching, implement message brokers, handle data streaming, and optimize your applications with Redis's in-memory data store.
URL: https://docs.deno.com/examples/tutorials/redis
[Redis](https://redis.io/) is an in-memory data store you can use for caching,
as a message broker, or for streaming data.
[View source here.](https://github.com/denoland/examples/tree/main/with-redis)
Here we're going to set up Redis to cache data from an API call to speed up any
subsequent requests for that data. We're going to:
- Set up a Redis client to save data from every API call in memory
- Set up a Deno server so we can easily request certain data
- Call the GitHub API within the server handler to get the data on first request
- Serve data from Redis on every subsequent request
We can do this within a single file, `main.ts`.
## Connecting to a Redis client
We need two modules. The first is the Deno server. We'll use this to get the
information from the user to query our API. The second is Redis. We can grab the
node package for Redis using the `npm:` modifier:
```tsx
import { createClient } from "npm:redis@^4.5";
```
We create a Redis client using `createClient` and connect to our local Redis
server:
```tsx
// make a connection to the local instance of redis
const client = createClient({
  url: "redis://localhost:6379",
});

await client.connect();
```
You can also set host, user, password, and port individually in this
[configuration](https://github.com/redis/node-redis/blob/master/docs/client-configuration.md)
object.
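For example, a sketch using node-redis v4's `socket` options (adjust the values
for your own setup):
```tsx
const client = createClient({
  socket: {
    host: "localhost",
    port: 6379,
  },
  username: "default",
  password: Deno.env.get("REDIS_PASSWORD"),
});
```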
## Setting up the server
Our server is going to act as a wrapper around the GitHub API. A client can call
our server with a GitHub username in the URL pathname, such as
`http://localhost:3000/{username}`.
Parsing out the pathname and calling the GitHub API will take place inside a
handler function in our server. We strip the leading slash so we are left with a
variable we can pass to the GitHub API as a username. We'll then pass the
response back to the user.
```tsx
Deno.serve({ port: 3000 }, async (req) => {
  const { pathname } = new URL(req.url);
  // strip the leading slash
  const username = pathname.substring(1);
  const resp = await fetch(`https://api.github.com/users/${username}`);
  const user = await resp.json();

  return new Response(JSON.stringify(user), {
    headers: {
      "content-type": "application/json",
    },
  });
});
```
We'll run this with:
```shell
deno run --allow-net main.ts
```
If we then go to [http://localhost:3000/ry](http://localhost:3000/ry) in
Postman, we'll get the GitHub response:

Let's cache this response using Redis.
## Checking the cache
Once we have our response from the GitHub API, we can cache this within Redis
using `client.set`, with our username as the key and the user object as the
value:
```tsx
await client.set(username, JSON.stringify(user));
```
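By default the key lives until it is deleted. If you'd rather have cache
entries expire, node-redis v4's `set` accepts options such as `EX`, the
time-to-live in seconds:
```tsx
// Cache the user for one hour.
await client.set(username, JSON.stringify(user), { EX: 3600 });
```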
Next time we request the same username, we can use `client.get` to get the
cached user:
```tsx
const cached_user = await client.get(username);
```
This returns `null` if the key doesn't exist, so we can use it for flow control.
When we get the username, we'll initially check whether we already have that
user in the cache. If we do, we'll serve the cached result. If not, we'll call
the GitHub API to get the user, cache it, then serve the API result. In both
cases, we'll add a custom header to show which version we're serving:
```tsx
Deno.serve({ port: 3000 }, async (req) => {
  const { pathname } = new URL(req.url);
  // strip the leading slash
  const username = pathname.substring(1);
  const cached_user = await client.get(username);

  if (cached_user) {
    return new Response(cached_user, {
      headers: {
        "content-type": "application/json",
        "is-cached": "true",
      },
    });
  } else {
    const resp = await fetch(`https://api.github.com/users/${username}`);
    const user = await resp.json();
    await client.set(username, JSON.stringify(user));

    return new Response(JSON.stringify(user), {
      headers: {
        "content-type": "application/json",
        "is-cached": "false",
      },
    });
  }
});
```
Running this first time gives us the same response as above, and we'll see the
`is-cached` header set to `false`:

But call with the same username again, and we get the cached result. The body is
identical:

But the header shows we have the cache:

We can also see that the response was ~200ms quicker!
You can check out the Redis documentation [here](https://redis.io/docs/) and the
Redis node package [here](https://github.com/redis/node-redis).
---
# Run a script
> A guide to creating and running basic scripts with Deno. Learn how to write and execute JavaScript and TypeScript code, understand runtime environments, and get started with fundamental Deno concepts.
URL: https://docs.deno.com/examples/tutorials/run_script
Deno is a secure runtime for JavaScript and TypeScript.
A runtime is the environment where your code executes. It provides the necessary
infrastructure for your programs to run, handling things like memory management,
I/O operations, and interaction with external resources. The runtime is
responsible for translating your high-level code (JavaScript or TypeScript) into
machine instructions that the computer can understand.
When you run JavaScript in a web browser (like Chrome, Firefox, or Edge), you’re
using a browser runtime.
Browser runtimes are tightly coupled with the browser itself. They provide APIs
for manipulating the Document Object Model (DOM), handling events, making
network requests, and more. These runtimes are sandboxed, they operate within
the browser’s security model. They can’t access resources outside the browser,
such as the file system or environment variables.
When you run your code with Deno, you’re executing your JavaScript or TypeScript
code directly on your machine, outside the browser context. Therefore, Deno
programs can access resources on the host computer, such as the file system,
environment variables, and network sockets.
Deno provides a seamless experience for running JavaScript and TypeScript code.
Whether you prefer the dynamic nature of JavaScript or the type safety of
TypeScript, Deno has you covered.
## Running a script
In this tutorial we'll create a simple "Hello World" example in both JavaScript
and TypeScript using Deno.
We'll define a `capitalize` function that capitalizes the first letter of a
word. Then, we define a `hello` function that returns a greeting message with
the capitalized name. Finally, we call the `hello` function with different names
and print the output to the console.
### JavaScript
First, create a `hello-world.js` file and add the following code:
```js title="hello-world.js"
function capitalize(word) {
  return word.charAt(0).toUpperCase() + word.slice(1);
}

function hello(name) {
  return "Hello " + capitalize(name);
}

console.log(hello("john"));
console.log(hello("Sarah"));
console.log(hello("kai"));
```
Run the script using the `deno run` command:
```sh
$ deno run hello-world.js
Hello John
Hello Sarah
Hello Kai
```
### TypeScript
This TypeScript example is exactly the same as the JavaScript example above, the
code just has the additional type information which TypeScript supports.
Create a `hello-world.ts` file and add the following code:
```ts title="hello-world.ts"
function capitalize(word: string): string {
  return word.charAt(0).toUpperCase() + word.slice(1);
}

function hello(name: string): string {
  return "Hello " + capitalize(name);
}

console.log(hello("john"));
console.log(hello("Sarah"));
console.log(hello("kai"));
```
Run the TypeScript script using the `deno run` command:
```sh
$ deno run hello-world.ts
Hello John
Hello Sarah
Hello Kai
```
🦕 Congratulations! Now you know how to create a simple script in both JS and TS
and how to run it in Deno with the `deno run` command. Keep exploring the
tutorials and examples to learn more about Deno!
---
# Snapshot testing
> Learn how to use snapshot testing in Deno to compare outputs against recorded references, making it easier to detect unintended changes in your code
URL: https://docs.deno.com/examples/tutorials/snapshot
Snapshot testing is a testing technique that captures the output of your code
and compares it against a stored reference version. Rather than manually writing
assertions for each property, you let the test runner record the entire output
structure, making it easier to detect any unexpected changes.
The [Deno Standard Library](/runtime/fundamentals/standard_library/) has a
[snapshot module](https://jsr.io/@std/testing/doc/snapshot), which enables
developers to write tests which assert a value against a reference snapshot.
This reference snapshot is a serialized representation of the original value and
is stored alongside the test file.
## Basic usage
The `assertSnapshot` function will create a snapshot of a value and compare it
to a reference snapshot, which is stored alongside the test file in the
`__snapshots__` directory.
To create an initial snapshot (or to update an existing snapshot), use the
`-- --update` flag with the `deno test` command.
### Basic snapshot example
The below example shows how to use the snapshot library with the `Deno.test`
API. We can test a snapshot of a basic object, containing string and number
properties.
The `assertSnapshot(t, a)` function compares the object against a stored
snapshot. The `t` parameter is the test context that Deno provides, which the
snapshot function uses to determine the test name and location for storing
snapshots.
```ts title="example_test.ts"
import { assertSnapshot } from "jsr:@std/testing/snapshot";

Deno.test("isSnapshotMatch", async (t) => {
  const a = {
    hello: "world!",
    example: 123,
  };
  await assertSnapshot(t, a);
});
```
You will need to grant read and write file permissions in order for Deno to
write a snapshot file and then read it to test the assertion. If this is the
first time you are running the test and you do not already have a snapshot, add
the `--update` flag:
```bash
deno test --allow-read --allow-write -- --update
```
If you already have a snapshot file, you can run the test with:
```bash
deno test --allow-read
```
The test will compare the current output of the object against the stored
snapshot. If they match, the test passes; if they differ, the test fails.
The snapshot file will look like this:
```ts title="__snapshots__/example_test.ts.snap"
export const snapshot = {};

snapshot[`isSnapshotMatch 1`] = `
{
  example: 123,
  hello: "world!",
}
`;
```
You can edit your test to change the `hello` string to `"everyone!"` and run the
test again with `deno test --allow-read`. This time the `assertSnapshot`
function will throw an `AssertionError`, causing the test to fail because the
snapshot created during the test does not match the one in the snapshot file.
## Updating snapshots
When adding new snapshot assertions to your test suite, or when intentionally
making changes which cause your snapshots to fail, you can update your snapshots
by running the snapshot tests in update mode. Tests can be run in update mode by
passing the `--update` or `-u` flag as an argument when running the test. When
this flag is passed, then any snapshots which do not match will be updated.
```bash
deno test --allow-read --allow-write -- --update
```
:::note
New snapshots will only be created when the `--update` flag is present.
:::
## Permissions
When running snapshot tests, the `--allow-read` permission must be enabled, or
else any calls to `assertSnapshot` will fail due to insufficient permissions.
Additionally, when updating snapshots, the `--allow-write` permission must be
enabled, as this is required in order to update snapshot files.
The `assertSnapshot` function will only attempt to read from and write to
snapshot files. As such, the allow lists for `--allow-read` and `--allow-write`
can be limited to only include existing snapshot files, if desired.
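For example, both permissions can be scoped to the snapshot directory (assuming
the default `__snapshots__` directory next to your test file):
```bash
deno test --allow-read=__snapshots__ --allow-write=__snapshots__
```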
## Version Control
Snapshot testing works best when changes to snapshot files are committed
alongside other code changes. This allows changes to reference snapshots to be
reviewed alongside the code changes that caused them, and ensures that when
others pull your changes, their tests will pass without needing to update
snapshots locally.
## Options
The `assertSnapshot` function can be called with an `options` object which
offers greater flexibility and enables some non-standard use cases:
```ts
import { assertSnapshot } from "jsr:@std/testing/snapshot";

Deno.test("isSnapshotMatch", async (t) => {
  const a = {
    hello: "world!",
    example: 123,
  };
  await assertSnapshot(t, a, { /* custom options go here */ });
});
```
### serializer
When you run a test with `assertSnapshot`, the data you're testing needs to be
converted to a string format that can be written to the snapshot file (when
creating or updating snapshots) and compared with the existing snapshot (when
validating), this is called serialization.
The `serializer` option allows you to provide a custom serializer function. This
custom function will be called by `assertSnapshot` and be passed the value being
asserted. Your custom function must:
1. Return a `string`
2. Be deterministic, (it will always produce the same output, given the same
input).
The code below shows a practical example of creating and using a custom
serializer function for snapshot testing. This serializer removes any ANSI
color codes from a string using the
[`stripColor`](https://jsr.io/@std/fmt/doc/colors) string formatter from the
Deno Standard Library.
```ts title="example_test.ts"
import { assertSnapshot, serialize } from "jsr:@std/testing/snapshot";
import { stripColor } from "jsr:@std/fmt/colors";

/**
 * Serializes `actual` and removes ANSI escape codes.
 */
function customSerializer(actual: string) {
  return serialize(stripColor(actual));
}

Deno.test("Custom Serializer", async (t) => {
  const output = "\x1b[34mHello World!\x1b[39m";

  await assertSnapshot(t, output, {
    serializer: customSerializer,
  });
});
```
```ts title="__snapshots__/example_test.ts.snap"
export const snapshot = {};

snapshot[`Custom Serializer 1`] = `"Hello World!"`;
```
Custom serializers can be useful in a variety of scenarios:
- To remove irrelevant formatting (like ANSI codes shown above) and improve
legibility
- To handle non-deterministic data. Timestamps, UUIDs, or random values can be
replaced with placeholders (see the sketch after this list)
- To mask or remove sensitive data that shouldn't be saved in snapshots
- Custom formatting to present complex objects in a domain-specific format
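For instance, to handle non-deterministic data like the timestamps mentioned
above, a serializer can mask them with a stable placeholder before comparison
(a minimal sketch; the regex and placeholder are illustrative):
```ts
import { assertSnapshot, serialize } from "jsr:@std/testing/snapshot";

// Serialize the value, then replace ISO-8601 timestamps so the
// snapshot stays stable across runs.
function maskTimestamps(actual: unknown): string {
  return serialize(actual).replace(
    /\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?Z/g,
    "[TIMESTAMP]",
  );
}

Deno.test("masked timestamps", async (t) => {
  const record = { id: 1, createdAt: new Date().toISOString() };
  await assertSnapshot(t, record, { serializer: maskTimestamps });
});
```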
### Serialization with `Deno.customInspect`
Because the default serializer uses `Deno.inspect` under the hood, you can set
the property `Symbol.for("Deno.customInspect")` to a custom serialization
function if desired:
```ts title="example_test.ts"
import { assertSnapshot } from "jsr:@std/testing/snapshot";

class HTMLTag {
  constructor(
    public name: string,
    public children: Array<string | HTMLTag> = [],
  ) {}

  public render(depth: number) {
    const indent = " ".repeat(depth);
    let output = `${indent}<${this.name}>\n`;
    for (const child of this.children) {
      if (child instanceof HTMLTag) {
        output += `${child.render(depth + 1)}\n`;
      } else {
        output += `${indent} ${child}\n`;
      }
    }
    output += `${indent}</${this.name}>`;
    return output;
  }

  public [Symbol.for("Deno.customInspect")]() {
    return this.render(0);
  }
}

Deno.test("Page HTML Tree", async (t) => {
  const page = new HTMLTag("html", [
    new HTMLTag("head", [
      new HTMLTag("title", [
        "Simple SSR Example",
      ]),
    ]),
    new HTMLTag("body", [
      new HTMLTag("h1", [
        "Simple SSR Example",
      ]),
      new HTMLTag("p", [
        "This is an example of how Deno.customInspect could be used to snapshot an intermediate SSR representation",
      ]),
    ]),
  ]);

  await assertSnapshot(t, page);
});
```
This test will produce the following snapshot.
```ts title="__snapshots__/example_test.ts.snap"
export const snapshot = {};

snapshot[`Page HTML Tree 1`] = `
<html>
 <head>
  <title>
   Simple SSR Example
  </title>
 </head>
 <body>
  <h1>
   Simple SSR Example
  </h1>
  <p>
   This is an example of how Deno.customInspect could be used to snapshot an intermediate SSR representation
  </p>
 </body>
</html>
`;
```
In contrast, when we remove the `Deno.customInspect` method, the test will
produce the following snapshot:
```ts title="__snapshots__/example_test.ts.snap"
export const snapshot = {};
snapshot[`Page HTML Tree 1`] = `HTMLTag {
children: [
HTMLTag {
children: [
HTMLTag {
children: [
"Simple SSR Example",
],
name: "title",
},
],
name: "head",
},
HTMLTag {
children: [
HTMLTag {
children: [
"Simple SSR Example",
],
name: "h1",
},
HTMLTag {
children: [
"This is an example of how Deno.customInspect could be used to snapshot an intermediate SSR representation",
],
name: "p",
},
],
name: "body",
},
],
name: "html",
}`;
```
You can see that this second snapshot is much less readable. This is because:
1. The keys are sorted alphabetically, so the name of the element is displayed
after its children
2. It includes a lot of extra information, causing the snapshot to be more than
twice as long
3. It is not an accurate serialization of the HTML which the data represents
Note that in this example it would be possible to achieve the same result by
calling:
```ts
await assertSnapshot(t, page.render(0));
```
However, depending on the public API you choose to expose, this may not be
practical.
It is also worth considering that this could have an impact beyond your snapshot
testing. For example, `Deno.customInspect` is also used to serialize objects
when calling `console.log` (and in some other cases). This may or may not be
desirable.
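For example, in this small sketch (the `Point` class is hypothetical) the same
method changes what `console.log` prints:

```ts
class Point {
  constructor(public x: number, public y: number) {}

  // Used by the default snapshot serializer *and* by console.log
  [Symbol.for("Deno.customInspect")]() {
    return `Point(${this.x}, ${this.y})`;
  }
}

console.log(new Point(1, 2)); // Prints: Point(1, 2)
```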
### `dir` and `path`
The `dir` and `path` options allow you to control where the snapshot file will
be saved to and read from. These can be absolute paths or relative paths. If
relative, they will be resolved relative to the test file.
For example, if your test file is located at `/path/to/test.ts` and the `dir`
option is set to `snapshots`, then the snapshot file would be written to
`/path/to/snapshots/test.ts.snap`.
- `dir` allows you to specify the snapshot directory, while still using the
default format for the snapshot file name.
- `path` allows you to specify the directory and file name of the snapshot file.
If your test file is located at `/path/to/test.ts` and the `path` option is set
to `snapshots/test.snapshot`, then the snapshot file would be written to
`/path/to/snapshots/test.snapshot`.
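To make the difference concrete, here is a small sketch; the paths in the
comments assume the test file lives at `/path/to/test.ts`:

```ts
import { assertSnapshot } from "jsr:@std/testing/snapshot";

Deno.test("dir and path options", async (t) => {
  // Written to /path/to/snapshots/test.ts.snap
  await assertSnapshot(t, { hello: "world" }, { dir: "snapshots" });

  // Written to /path/to/snapshots/test.snapshot
  await assertSnapshot(t, { hello: "world" }, {
    path: "snapshots/test.snapshot",
  });
});
```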
:::note
If both `dir` and `path` are specified, the `dir` option will be ignored and the
`path` option will be handled as normal.
:::
### `mode`
The `mode` option controls how `assertSnapshot` behaves regardless of command
line flags and has two settings, `assert` or `update`:
- `assert`: Always performs comparison only, ignoring any `--update` or `-u`
flags. If snapshots don't match, the test will fail with an `AssertionError`.
- `update`: Always updates snapshots. Any mismatched snapshots will be updated
after tests complete.
This option is useful when you need different snapshot behaviors within the same
test suite:
```ts
// Create a new snapshot or verify an existing one
await assertSnapshot(t, stableComponent);
// Always update this snapshot regardless of command line flags
await assertSnapshot(t, experimentalComponent, {
mode: "update",
name: "experimental feature",
});
// Always verify but never update this snapshot regardless of command line flags
await assertSnapshot(t, criticalComponent, {
mode: "assert",
name: "critical feature",
});
```
### `name`
The name of the snapshot. If unspecified, the name of the test step will be used
instead.
```ts title="example_test.ts"
import { assertSnapshot } from "jsr:@std/testing/snapshot";
Deno.test("isSnapshotMatch", async (t) => {
const a = {
hello: "world!",
example: 123,
};
await assertSnapshot(t, a, {
name: "Test Name",
});
});
```
```ts title="__snapshots__/example_test.ts.snap"
export const snapshot = {};
snapshot[`Test Name 1`] = `
{
example: 123,
hello: "world!",
}
`;
```
When `assertSnapshot` is run multiple times with the same value for `name`, the
suffix will be incremented as usual, i.e. `Test Name 1`, `Test Name 2`,
`Test Name 3`, etc.
### `msg`
Used to set a custom error message. This overrides the default error message,
which includes the diff for failed snapshots:
```ts
import { assertSnapshot } from "jsr:@std/testing/snapshot";

Deno.test("custom error message example", async (t) => {
const userData = {
name: "John Doe",
role: "admin",
};
await assertSnapshot(t, userData, {
msg:
"User data structure has changed unexpectedly. Please verify your changes are intentional.",
});
});
```
When the snapshot fails, instead of seeing the default diff message, you'll see
your custom error message.
## Testing Different Data Types
Snapshot testing works with various data types and structures:
```ts
import { assertSnapshot } from "jsr:@std/testing/snapshot";

Deno.test("snapshot various types", async (t) => {
// Arrays
await assertSnapshot(t, [1, 2, 3, "four", { five: true }]);
// Complex objects
await assertSnapshot(t, {
user: { name: "Test", roles: ["admin", "user"] },
settings: new Map([["theme", "dark"], ["language", "en"]]),
});
// Error objects
await assertSnapshot(t, new Error("Test error message"));
});
```
## Working with Asynchronous Code
When testing asynchronous functions, ensure you await the results before passing
them to the snapshot:
```ts
import { assertSnapshot } from "jsr:@std/testing/snapshot";

Deno.test("async function test", async (t) => {
const fetchData = async () => {
// Simulate API call
return { success: true, data: ["item1", "item2"] };
};
const result = await fetchData();
await assertSnapshot(t, result);
});
```
## Best Practices
### Keep Snapshots Concise
Avoid capturing large data structures that aren't necessary for your test. Focus
on capturing only what's relevant.
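For example (a sketch; `report` stands in for any large value), snapshot a
projection of just the relevant fields rather than the whole object:

```ts
import { assertSnapshot } from "jsr:@std/testing/snapshot";

Deno.test("concise snapshot", async (t) => {
  // Imagine `report` is large, with many fields irrelevant to this test
  const report = {
    id: 42,
    title: "Quarterly report",
    body: "…thousands of characters…",
    metadata: { generator: "v1.2.3", cached: true },
  };

  // Snapshot only what this test actually verifies
  await assertSnapshot(t, { id: report.id, title: report.title });
});
```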
### Descriptive Test Names
Use descriptive test names that clearly indicate what's being tested:
```ts
Deno.test(
"renders user profile card with all required fields",
async (t) => {
// ... test code
await assertSnapshot(t, component);
},
);
```
### Review Snapshots During Code Reviews
Always review snapshot changes during code reviews to ensure they represent
intentional changes and not regressions.
### Snapshot Organization
For larger projects, consider organizing snapshots by feature or component:
```ts
await assertSnapshot(t, component, {
path: `__snapshots__/components/${componentName}.snap`,
});
```
## Snapshot Testing in CI/CD
### GitHub Actions Example
When running snapshot tests in CI environments, you'll typically want to verify
existing snapshots rather than updating them:
```yaml title=".github/workflows/test.yml"
name: Test

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: denoland/setup-deno@v2
        with:
          deno-version: v2.x
      - name: Run tests
        run: deno test --allow-read
```
For pull requests that intentionally update snapshots, reviewers should verify
the changes are expected before merging.
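Locally, contributors can regenerate snapshots by forwarding the update flag to
the test runner (write access is required so the snapshot files can be
rewritten):

```bash
deno test --allow-read --allow-write -- --update
```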
## Practical Examples
### Testing HTML Output
HTML output testing with snapshots is particularly useful for web applications
where you want to ensure your components render the expected markup. This
approach allows you to catch unintended changes in your HTML structure,
attributes, or content that might affect the visual appearance or functionality
of your UI components.
By capturing a snapshot of the HTML output, you can:
- Verify that UI components render correctly with different props/data
- Detect regressions when refactoring rendering logic
- Document the expected output format of components
```ts
import { assertSnapshot } from "jsr:@std/testing/snapshot";

Deno.test("HTML rendering", async (t) => {
  const renderComponent = () => {
    return `<div class="profile">
  <h2>User Profile</h2>
  <p>Username: testuser</p>
</div>`;
  };
  await assertSnapshot(t, renderComponent());
});
```
### Testing API Responses
When building applications that interact with APIs, snapshot testing helps
ensure that the structure and format of API responses remain consistent. This is
particularly valuable for:
- Maintaining backward compatibility when updating API integrations
- Verifying that your API response parsing logic works correctly
- Documenting the expected shape of API responses for team collaboration
- Detecting unexpected changes in API responses that could break your
application
```ts
import { assertSnapshot } from "jsr:@std/testing/snapshot";

Deno.test("API response format", async (t) => {
const mockApiResponse = {
status: 200,
data: {
users: [
{ id: 1, name: "User 1" },
{ id: 2, name: "User 2" },
],
pagination: { page: 1, total: 10 },
},
};
await assertSnapshot(t, mockApiResponse);
});
```
🦕 Snapshot testing is a powerful technique that complements traditional unit
tests by allowing you to capture and verify complex outputs without writing
detailed assertions. By incorporating snapshot tests into your testing strategy,
you can catch unintended changes, document expected behavior, and build more
resilient applications.
---
# Build a SolidJS app with Deno
> Build a SolidJS application with Deno. Learn how to set up a project, implement reactive components, handle routing, create API endpoints with Hono, and build a full-stack TypeScript application.
URL: https://docs.deno.com/examples/tutorials/solidjs
[SolidJS](https://www.solidjs.com/) is a declarative JavaScript library for
creating user interfaces that emphasizes fine-grained reactivity and minimal
overhead. When combined with Deno's modern runtime environment, you get a
powerful, performant stack for building web applications. In this tutorial,
we'll build a simple dinosaur catalog app that demonstrates the key features of
both technologies.
We'll go over how to build a simple SolidJS app using Deno:
- [Scaffold a SolidJS app](#scaffold-a-solidjs-app-with-vite)
- [Set up our Hono backend](#set-up-our-hono-backend)
- [Create our SolidJS frontend](#create-our-solidjs-frontend)
- [Next steps](#next-steps)
Feel free to skip directly to
[the source code](https://github.com/denoland/examples/tree/main/with-solidjs)
or follow along below!
## Scaffold a SolidJS app with Vite
Let's set up our SolidJS application using [Vite](https://vite.dev/), a modern
build tool that provides an excellent development experience with features like
hot module replacement and optimized builds.
```bash
deno init --npm vite@latest solid-deno --template solid-ts
```
Our backend will be powered by [Hono](https://hono.dev/), which we can install
via [JSR](https://jsr.io). Let's also add `solidjs/router` for client-side
routing and navigation between our dinosaur catalog pages.
```bash
deno add jsr:@hono/hono npm:@solidjs/router
```
Learn more about deno add and using Deno as a package manager.
We'll also have to update our `deno.json` to include a few tasks and
`compilerOptions` to run our app:
```json
{
"tasks": {
"dev": "deno task dev:api & deno task dev:vite",
"dev:api": "deno run --allow-env --allow-net --allow-read api/main.ts",
"dev:vite": "deno run -A npm:vite",
"build": "deno run -A npm:vite build",
"serve": {
"command": "deno task dev:api",
"description": "Run the build, and then start the API server",
"dependencies": ["deno task build"]
}
},
"imports": {
"@hono/hono": "jsr:@hono/hono@^4.6.12",
"@solidjs/router": "npm:@solidjs/router@^0.14.10"
},
"compilerOptions": {
"jsx": "react-jsx",
"jsxImportSource": "solid-js",
"lib": ["DOM", "DOM.Iterable", "ESNext"]
}
}
```
Tasks can also be written as objects; here, the `serve` task includes a
description and a list of task dependencies.
Great! Next, let's set up our API backend.
## Set up our Hono backend
Within our main directory, we will set up an `api/` directory and create two
files. First, our dinosaur data file,
[`api/data.json`](https://github.com/denoland/examples/blob/main/with-solidjs/api/data.json):
```jsonc
// api/data.json
[
{
"name": "Aardonyx",
"description": "An early stage in the evolution of sauropods."
},
{
"name": "Abelisaurus",
"description": "\"Abel's lizard\" has been reconstructed from a single skull."
},
{
"name": "Abrictosaurus",
"description": "An early relative of Heterodontosaurus."
},
...
]
```
This is where our data will be pulled from. In a full application, this data
would come from a database.
> ⚠️️ In this tutorial we hard code the data. But you can connect
> to [a variety of databases](https://docs.deno.com/runtime/tutorials/connecting_to_databases/) and [even use ORMs like Prisma](https://docs.deno.com/runtime/tutorials/how_to_with_npm/prisma/) with
> Deno.
Secondly, we need our Hono server, `api/main.ts`:
```tsx
// api/main.ts
import { Hono } from "@hono/hono";
import data from "./data.json" with { type: "json" };
const app = new Hono();
app.get("/", (c) => {
return c.text("Welcome to the dinosaur API!");
});
app.get("/api/dinosaurs", (c) => {
return c.json(data);
});
app.get("/api/dinosaurs/:dinosaur", (c) => {
if (!c.req.param("dinosaur")) {
return c.text("No dinosaur name provided.");
}
const dinosaur = data.find((item) =>
item.name.toLowerCase() === c.req.param("dinosaur").toLowerCase()
);
console.log(dinosaur);
if (dinosaur) {
return c.json(dinosaur);
} else {
return c.notFound();
}
});
Deno.serve(app.fetch);
```
This Hono server provides two API endpoints:
- `GET /api/dinosaurs` to fetch all dinosaurs, and
- `GET /api/dinosaurs/:dinosaur` to fetch a specific dinosaur by name
This server will be started on `localhost:8000` when we run `deno task dev`.
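Once the server is running, you can sanity-check the endpoints from another
terminal; the expected response below assumes the sample `data.json` shown
above:

```bash
curl http://localhost:8000/api/dinosaurs/aardonyx
# {"name":"Aardonyx","description":"An early stage in the evolution of sauropods."}
```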
Finally, before we start building out the frontend, let's update our
`vite.config.ts` file with the below, especially the `server.proxy`, which
informs our SolidJS frontend where to locate the API endpoint.
```tsx
// vite.config.ts
import { defineConfig } from "vite";
import solid from "vite-plugin-solid";
export default defineConfig({
plugins: [solid()],
server: {
proxy: {
"/api": {
target: "http://localhost:8000",
changeOrigin: true,
},
},
},
});
```
## Create our SolidJS frontend
Before we begin building out the frontend components, let's quickly define the
`Dino` type in `src/types.ts`:
```tsx
// src/types.ts
export type Dino = {
name: string;
description: string;
};
```
The `Dino` type interface ensures type safety throughout our application,
defining the shape of our dinosaur data and enabling TypeScript's static type
checking.
Next, let's set up our frontend to receive that data. We're going to have two
pages:
- `Index.tsx`
- `Dinosaur.tsx`
Here's the code for the `src/pages/Index.tsx` page:
```tsx
// src/pages/Index.tsx
import { createSignal, For, onMount } from "solid-js";
import { A } from "@solidjs/router";
import type { Dino } from "../types.ts";
export default function Index() {
const [dinosaurs, setDinosaurs] = createSignal<Dino[]>([]);
onMount(async () => {
try {
const response = await fetch("/api/dinosaurs");
const allDinosaurs = (await response.json()) as Dino[];
setDinosaurs(allDinosaurs);
console.log("Fetched dinosaurs:", allDinosaurs);
} catch (error) {
console.error("Failed to fetch dinosaurs:", error);
}
});
  return (
    <main>
      <h1>Welcome to the Dinosaur app</h1>
      <p>Click on a dinosaur below to learn more.</p>
      <For each={dinosaurs()}>
        {(dinosaur) => (
          <A href={`/${dinosaur.name.toLowerCase()}`}>
            {dinosaur.name}
          </A>
        )}
      </For>
    </main>
  );
}
```
When using SolidJS, there are a few key differences from React to be aware of:
1. We use SolidJS-specific primitives:
- `createSignal` instead of `useState`
- `createEffect` instead of `useEffect`
- `For` component instead of `map`
- `A` component instead of `Link`
2. SolidJS components use fine-grained reactivity, so we call signals as
functions, e.g. `dinosaur()` instead of just `dinosaur`
3. The routing is handled by `@solidjs/router` instead of `react-router-dom`
4. Component imports use Solid's
[`lazy`](https://docs.solidjs.com/reference/component-apis/lazy) for code
splitting
The `Index` page uses SolidJS's `createSignal` to manage the list of dinosaurs
and `onMount` to fetch the data when the component loads. We use the `For`
component, which is SolidJS's efficient way of rendering lists, rather than
using JavaScript's map function. The `A` component from `@solidjs/router`
creates client-side navigation links to individual dinosaur pages, preventing
full page reloads.
Now the individual dinosaur data page at `src/pages/Dinosaur.tsx`:
```tsx
// src/pages/Dinosaur.tsx
import { createSignal, onMount } from "solid-js";
import { A, useParams } from "@solidjs/router";
import type { Dino } from "../types.ts";
export default function Dinosaur() {
const params = useParams();
const [dinosaur, setDinosaur] = createSignal({
name: "",
description: "",
});
onMount(async () => {
const resp = await fetch(`/api/dinosaurs/${params.selectedDinosaur}`);
const dino = (await resp.json()) as Dino;
setDinosaur(dino);
console.log("Dinosaur", dino);
});
  return (
    <div>
      <h1>{dinosaur().name}</h1>
      <p>{dinosaur().description}</p>
      <A href="/">Back to all dinosaurs</A>
    </div>
  );
}
```
The `Dinosaur` page demonstrates SolidJS's approach to dynamic routing by using
`useParams` to access the URL parameters. It follows a similar pattern to the
`Index` page, using `createSignal` for state management and `onMount` for data
fetching, but focuses on a single dinosaur's details. This `Dinosaur` component
also shows how to access signal values in the template by calling them as
functions (e.g., `dinosaur().name`), which is a key difference from React's
state management.
Finally, to tie it all together, we'll update the `App.tsx` file, which will
serve both the `Index` and `Dinosaur` pages as components. The `App` component
sets up our routing configuration using `@solidjs/router`, defining two main
routes: the index route for our dinosaur list and a dynamic route for individual
dinosaur pages. The `:selectedDinosaur` parameter in the route path creates a
dynamic segment that matches any dinosaur name in the URL.
```tsx
// src/App.tsx
import { Route, Router } from "@solidjs/router";
import Index from "./pages/Index.tsx";
import Dinosaur from "./pages/Dinosaur.tsx";
import "./App.css";
const App = () => {
  return (
    <Router>
      <Route path="/" component={Index} />
      <Route path="/:selectedDinosaur" component={Dinosaur} />
    </Router>
  );
};
export default App;
```
Finally, this `App` component will be called from our main index:
```tsx
// src/index.tsx
import { render } from "solid-js/web";
import App from "./App.tsx";
import "./index.css";
const wrapper = document.getElementById("root");
if (!wrapper) {
throw new Error("Wrapper div not found");
}
render(() => <App />, wrapper);
```
The entry point of our application mounts the App component to the DOM using
SolidJS's `render` function. It includes a safety check to ensure the root
element exists before attempting to render, providing better error handling
during initialization.
Now, let's run `deno task dev` to start both the frontend and backend together:
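```bash
deno task dev
```

Vite serves the frontend on `localhost:5173` and, per the `server.proxy`
configuration above, forwards `/api` requests to the Hono server on
`localhost:8000`.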
## Next steps
🦕 Now you can build and run a SolidJS app with Deno! Here are some ways you
could enhance your dinosaur application:
- Add persistent data store
[using a database like Postgres or MongoDB](https://docs.deno.com/runtime/tutorials/connecting_to_databases/)
and an ORM like [Drizzle](https://deno.com/blog/build-database-app-drizzle) or
[Prisma](https://docs.deno.com/runtime/tutorials/how_to_with_npm/prisma/)
- Implement global state using SolidJS's
[`createContext`](https://docs.solidjs.com/reference/component-apis/create-context)
for sharing data between components
- Add loading states using
[`createResource`](https://docs.solidjs.com/reference/basic-reactivity/create-resource)'s
loading property
- Implement route-based code splitting with
[`lazy`](https://docs.solidjs.com/reference/component-apis/lazy) imports
- Use `Index` component for more efficient list rendering
- Deploy your app to
[AWS](https://docs.deno.com/runtime/tutorials/aws_lightsail/),
[Digital Ocean](https://docs.deno.com/runtime/tutorials/digital_ocean/), or
[Google Cloud Run](https://docs.deno.com/runtime/tutorials/google_cloud_run/)
The combination of SolidJS's unique reactive primitives, true DOM
reconciliation, and Deno's modern runtime provides an incredibly efficient
foundation for web development. With no Virtual DOM overhead and granular
updates only where needed, your application can achieve optimal performance
while maintaining clean, readable code.
---
# Stubbing in tests
> Learn how to use stubs in Deno to isolate code during testing by replacing function implementations with controlled behavior
URL: https://docs.deno.com/examples/tutorials/stubbing
Stubbing is a powerful technique for isolating the code you're testing by
replacing functions with controlled implementations. While
[spies](/examples/mocking_tutorial/#spying) monitor function calls without
changing behavior, stubs go a step further by completely replacing the original
implementation, allowing you to simulate specific conditions or behaviors during
testing.
## What are stubs?
Stubs are fake implementations that replace real functions during testing. They
let you:
- Control what values functions return
- Simulate errors or specific edge cases
- Prevent external services like databases or APIs from being called
- Test code paths that would be difficult to trigger with real implementations
Deno provides robust stubbing capabilities through the
[Standard Library's testing tools](https://jsr.io/@std/testing/doc/mock#stubbing).
## Basic stub usage
Here's a simple example demonstrating how to stub a function:
```ts
import { assertEquals } from "jsr:@std/assert";
import { stub } from "jsr:@std/testing/mock";
// Original function, kept as a method on an object so it can be stubbed
const userService = {
  getUserName(_id: number): string {
    // In a real app, this might call a database
    return "Original User";
  },
};
// Function under test
function greetUser(id: number): string {
  const name = userService.getUserName(id);
  return `Hello, ${name}!`;
}
Deno.test("greetUser with stubbed getUserName", () => {
  // Create a stub that returns a controlled value
  const getUserNameStub = stub(userService, "getUserName", () => "Test User");
try {
// Test with the stubbed function
const greeting = greetUser(123);
assertEquals(greeting, "Hello, Test User!");
} finally {
// Always restore the original function
getUserNameStub.restore();
}
});
```
In this example, we:
1. Import the necessary functions from Deno's standard library
2. Create a stub for the `getUserName` function that returns "Test User" instead
of calling the real implementation
3. Call our function under test, which will use the stubbed implementation
4. Verify the result meets our expectations
5. Restore the original function to prevent affecting other tests
## Using stubs in a testing scenario
Let's look at a more practical example with a `UserRepository` class that
interacts with a database:
```ts
import { assertSpyCalls, returnsNext, stub } from "jsr:@std/testing/mock";
import { assertThrows } from "jsr:@std/assert";
type User = {
id: number;
name: string;
};
// This represents our database access layer
const database = {
getUserById(id: number): User | undefined {
// In a real app, this would query a database
return { id, name: "Ada Lovelace" };
},
};
// The class we want to test
class UserRepository {
static findOrThrow(id: number): User {
const user = database.getUserById(id);
if (!user) {
throw new Error("User not found");
}
return user;
}
}
Deno.test("findOrThrow method throws when the user was not found", () => {
// Stub the database.getUserById function to return undefined
using dbStub = stub(database, "getUserById", returnsNext([undefined]));
// We expect this function call to throw an error
assertThrows(() => UserRepository.findOrThrow(1), Error, "User not found");
// Verify the stubbed function was called once
assertSpyCalls(dbStub, 1);
});
```
In this example:
1. We're testing the `findOrThrow` method, which should throw an error when a
user is not found
2. We stub `database.getUserById` to return `undefined`, simulating a missing
user
3. We verify that `findOrThrow` throws the expected error
4. We also check that the database method was called exactly once
Note that we're using the `using` keyword with `stub`, which is a convenient way
to ensure the stub is automatically restored when it goes out of scope.
## Advanced stub techniques
### Returning different values on subsequent calls
Sometimes you want a stub to return different values each time it's called:
```ts
import { returnsNext, stub } from "jsr:@std/testing/mock";
import { assertEquals } from "jsr:@std/assert";
Deno.test("stub with multiple return values", () => {
const fetchDataStub = stub(
globalThis,
"fetchData",
// Return these values in sequence
returnsNext(["first result", "second result", "third result"]),
);
try {
assertEquals(fetchData(), "first result");
assertEquals(fetchData(), "second result");
assertEquals(fetchData(), "third result");
} finally {
fetchDataStub.restore();
}
});
```
### Stubbing with implementation logic
You can also provide custom logic in your stub implementations:
```ts
import { stub } from "jsr:@std/testing/mock";
import { assertEquals } from "jsr:@std/assert";
Deno.test("stub with custom implementation", () => {
// Create a counter to track how many times the stub is called
let callCount = 0;
const calculateStub = stub(
globalThis,
"calculate",
(a: number, b: number) => {
callCount++;
return a + b * 2; // Custom implementation
},
);
try {
const result = calculate(5, 10);
assertEquals(result, 25); // 5 + (10 * 2)
assertEquals(callCount, 1);
} finally {
calculateStub.restore();
}
});
```
## Stubbing API calls and external services
One of the most common uses of stubs is to replace API calls during testing:
```ts
import { assertEquals } from "jsr:@std/assert";
import { stub } from "jsr:@std/testing/mock";
async function fetchUserData(id: string) {
const response = await fetch(`https://api.example.com/users/${id}`);
if (!response.ok) {
throw new Error(`Failed to fetch user: ${response.status}`);
}
return await response.json();
}
Deno.test("fetchUserData with stubbed fetch", async () => {
const mockResponse = new Response(
JSON.stringify({ id: "123", name: "Jane Doe" }),
{ status: 200, headers: { "Content-Type": "application/json" } },
);
// Replace global fetch with a stubbed version
const fetchStub = stub(
globalThis,
"fetch",
() => Promise.resolve(mockResponse),
);
try {
const user = await fetchUserData("123");
assertEquals(user, { id: "123", name: "Jane Doe" });
} finally {
fetchStub.restore();
}
});
```
## Best practices
1. **Always restore stubs**: Use `try/finally` blocks or the `using` keyword to
ensure stubs are restored, even if tests fail.
2. **Use stubs for external dependencies**: Stub out database calls, API
requests, or file system operations to make tests faster and more reliable.
3. **Keep stubs simple**: Stubs should return predictable values that let you
test specific scenarios.
4. **Combine with spies when needed**: Sometimes you need to both replace
functionality (stub) and track calls (spy).
5. **Stub at the right level**: Stub at the interface boundary rather than deep
within implementation details.
🦕 Stubs are a powerful tool for isolating your code during testing, allowing
you to create deterministic test environments and easily test edge cases. By
replacing real implementations with controlled behavior, you can write more
focused, reliable tests that run quickly and consistently.
For more testing resources, check out:
- [Testing in isolation with mocks](/examples/mocking_tutorial/)
- [Deno Standard Library Testing Modules](https://jsr.io/@std/testing)
- [Basic Testing in Deno](/examples/testing_tutorial/)
---
# Creating a subprocess
> A guide to working with subprocesses in Deno. Learn how to spawn processes, handle input/output streams, manage process lifecycles, and implement inter-process communication patterns safely.
URL: https://docs.deno.com/examples/tutorials/subprocess
## Concepts
- Deno is capable of spawning a subprocess via
[Deno.Command](https://docs.deno.com/api/deno/~/Deno.Command).
- `--allow-run` permission is required to spawn a subprocess.
- Spawned subprocesses do not run in a security sandbox.
- Communicate with the subprocess via the
[stdin](https://docs.deno.com/api/deno/~/Deno.stdin),
[stdout](https://docs.deno.com/api/deno/~/Deno.stdout) and
[stderr](https://docs.deno.com/api/deno/~/Deno.stderr) streams.
## Simple example
This example is the equivalent of running `echo "Hello from Deno!"` from the
command line.
```ts title="subprocess_simple.ts"
// define command used to create the subprocess
const command = new Deno.Command("echo", {
args: [
"Hello from Deno!",
],
});
// create subprocess and collect output
const { code, stdout, stderr } = await command.output();
console.assert(code === 0);
console.log(new TextDecoder().decode(stdout));
console.log(new TextDecoder().decode(stderr));
```
Run it:
```shell
$ deno run --allow-run=echo ./subprocess_simple.ts
Hello from Deno!
```
## Security
The `--allow-run` permission is required for creation of a subprocess. Be aware
that subprocesses are not run in a Deno sandbox and therefore have the same
permissions as if you were to run the command from the command line yourself.
## Communicating with subprocesses
By default when you use `Deno.Command()` the subprocess inherits `stdin`,
`stdout` and `stderr` of the parent process. If you want to communicate with a
started subprocess you must use the `"piped"` option.
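As a minimal sketch (assuming a `cat` binary is available on your `PATH`, and
using a hypothetical file name), you can pipe data into a subprocess and read
its output back:

```ts title="subprocess_piped.ts"
// spawn `cat` with piped stdin/stdout so we can communicate with it
const command = new Deno.Command("cat", {
  stdin: "piped",
  stdout: "piped",
});
const child = command.spawn();

// write a message to the subprocess's stdin, then close the stream
const writer = child.stdin.getWriter();
await writer.write(new TextEncoder().encode("Hello from Deno!\n"));
await writer.close();

// collect everything the subprocess wrote to stdout before it exited
const { stdout } = await child.output();
console.log(new TextDecoder().decode(stdout)); // Hello from Deno!
```

Run it with `deno run --allow-run=cat ./subprocess_piped.ts`.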
## Piping to files
This example is the equivalent of running `yes &> ./process_output.txt` in
bash.
```ts title="subprocess_piping_to_files.ts"
import {
mergeReadableStreams,
} from "jsr:@std/streams@1.0.0-rc.4/merge-readable-streams";
// create the file to attach the process to
const file = await Deno.open("./process_output.txt", {
read: true,
write: true,
create: true,
});
// start the process
const command = new Deno.Command("yes", {
stdout: "piped",
stderr: "piped",
});
const process = command.spawn();
// example of combining stdout and stderr while sending to a file
const joined = mergeReadableStreams(
process.stdout,
process.stderr,
);
// returns a promise that resolves when the process is killed/closed
joined.pipeTo(file.writable).then(() => console.log("pipe join done"));
// manually stop process "yes" will never end on its own
setTimeout(() => {
process.kill();
}, 100);
```
Run it:
```shell
$ deno run --allow-run=yes --allow-read=. --allow-write=. ./subprocess_piping_to_files.ts
```
---
# Build an app with Tanstack and Deno
> Complete guide to building applications with Tanstack and Deno. Learn how to implement Query for data fetching, Router for navigation, manage server state, and create type-safe full-stack applications.
URL: https://docs.deno.com/examples/tutorials/tanstack
[Tanstack](https://tanstack.com/) is a set of framework-agnostic data management
tools. With Tanstack, developers can manage server state efficiently with
[Query](https://tanstack.com/query/latest), create powerful tables with
[Table](https://tanstack.com/table/latest), handle complex routing with
[Router](https://tanstack.com/router/latest), and build type-safe forms with
[Form](https://tanstack.com/form/latest). These tools work seamlessly across
[React](/examples/react_tutorial), [Vue](/examples/vue_tutorial),
[Solid](/examples/solidjs_tutorial), and other frameworks while maintaining
excellent TypeScript support.
In this tutorial, we’ll build a simple app using
[Tanstack Query](https://tanstack.com/query/latest) and
[Tanstack Router](https://tanstack.com/router/latest/docs/framework/react/quick-start).
The app will display a list of dinosaurs. When you click on one, it'll take you
to a dinosaur page with more details.
- [Start with the backend API](#start-with-the-backend-api)
- [Create a Tanstack-driven frontend](#create-tanstack-driven-frontend)
- [Next steps](#next-steps)
Feel free to skip directly to
[the source code](https://github.com/denoland/examples/tree/main/with-tanstack)
or follow along below!
## Start with the backend API
Within our main directory, let's set up an `api/` directory and create our
dinosaur data file, `api/data.json`:
```jsonc
// api/data.json
[
{
"name": "Aardonyx",
"description": "An early stage in the evolution of sauropods."
},
{
"name": "Abelisaurus",
"description": "\"Abel's lizard\" has been reconstructed from a single skull."
},
{
"name": "Abrictosaurus",
"description": "An early relative of Heterodontosaurus."
},
...
]
```
This is where our data will be pulled from. In a full application, this data
would come from a database.
> ⚠️️ In this tutorial we hard code the data. But you can connect
> to [a variety of databases](https://docs.deno.com/runtime/tutorials/connecting_to_databases/) and [even use ORMs like Prisma](https://docs.deno.com/runtime/tutorials/how_to_with_npm/prisma/) with
> Deno.
Secondly, let's create our [Hono](https://hono.dev/) server. We start by
installing Hono from [JSR](https://jsr.io) with `deno add`:
```shell
deno add jsr:@hono/hono
```
Next, let's create an `api/main.ts` file and populate it with the below. Note
we'll need to import
[`@hono/hono/cors`](https://hono.dev/docs/middleware/builtin/cors) and define
key attributes to allow the frontend to access the API routes.
```ts
// api/main.ts
import { Hono } from "@hono/hono";
import { cors } from "@hono/hono/cors";
import data from "./data.json" with { type: "json" };
const app = new Hono();
app.use(
"/api/*",
cors({
origin: "http://localhost:5173",
allowMethods: ["GET", "POST", "PUT", "DELETE"],
allowHeaders: ["Content-Type", "Authorization"],
exposeHeaders: ["Content-Type", "Authorization"],
credentials: true,
maxAge: 600,
}),
);
app.get("/", (c) => {
return c.text("Welcome to the dinosaur API!");
});
app.get("/api/dinosaurs", (c) => {
return c.json(data);
});
app.get("/api/dinosaurs/:dinosaur", (c) => {
if (!c.req.param("dinosaur")) {
return c.text("No dinosaur name provided.");
}
const dinosaur = data.find((item) =>
item.name.toLowerCase() === c.req.param("dinosaur").toLowerCase()
);
if (dinosaur) {
return c.json(dinosaur);
} else {
return c.notFound();
}
});
Deno.serve(app.fetch);
```
The Hono server provides two API endpoints:
- `GET /api/dinosaurs` to fetch all dinosaurs, and
- `GET /api/dinosaurs/:dinosaur` to fetch a specific dinosaur by name
Before we start working on the frontend, let's update our `deno tasks` in our
`deno.json` file. Yours should look something like this:
```jsonc
{
"tasks": {
"dev": "deno --allow-env --allow-net api/main.ts"
}
// ...
}
```
Now, the backend server will be started on `localhost:8000` when we run
`deno task dev`.
## Create Tanstack-driven frontend
Let's create the frontend that will use this data. First, we'll quickly scaffold
a new React app with Vite using the TypeScript template in the current
directory:
```shell
deno init --npm vite@latest --template react-ts ./
```
Then, we'll install our Tanstack-specific dependencies:
```shell
deno install npm:@tanstack/react-query npm:@tanstack/react-router
```
Let's update our `deno tasks` in our `deno.json` to add a command to start the
Vite server:
```jsonc
// deno.json
{
"tasks": {
"dev": "deno task dev:api & deno task dev:vite",
"dev:api": "deno --allow-env --allow-net api/main.ts",
"dev:vite": "deno -A npm:vite"
}
// ...
}
```
We can move on to building our components. We'll need two main pages for our
app:
- `DinosaurList.tsx`: the index page, which will list out all the dinosaurs, and
- `Dinosaur.tsx`: the leaf page, which displays information about a single
dinosaur
Let's create a new `./src/components` directory and, within that, the file
`DinosaurList.tsx`:
```ts
// ./src/components/DinosaurList.tsx
import { useQuery } from "@tanstack/react-query";
import { Link } from "@tanstack/react-router";
async function fetchDinosaurs() {
const response = await fetch("http://localhost:8000/api/dinosaurs");
if (!response.ok) {
throw new Error("Failed to fetch dinosaurs");
}
return response.json();
}
export function DinosaurList() {
const {
data: dinosaurs,
isLoading,
error,
} = useQuery({
queryKey: ["dinosaurs"],
queryFn: fetchDinosaurs,
});
  if (isLoading) return <div>Loading...</div>;

  if (error instanceof Error) {
    return <div>An error occurred: {error.message}</div>;
  }

  return (
    <div>
      <h2>Dinosaurs</h2>
      <ul>
        {dinosaurs?.map((dinosaur: { name: string; description: string }) => (
          <li key={dinosaur.name}>
            <Link to="/dinosaur/$name" params={{ name: dinosaur.name }}>
              {dinosaur.name}
            </Link>
          </li>
        ))}
      </ul>
    </div>
  );
}
```
This uses
[`useQuery`](https://tanstack.com/query/v4/docs/framework/react/guides/queries)
from **Tanstack Query** to fetch and cache the dinosaur data automatically, with
built-in loading and error states. Then it uses
[`Link`](https://tanstack.com/router/v1/docs/framework/react/api/router/linkComponent)
from **Tanstack Router** to create client-side navigation links with type-safe
routing parameters.
Next, let's create the `DinosaurDetail.tsx` component in the `./src/components/`
folder, which will show details about a single dinosaur:
```ts
// ./src/components/DinosaurDetail.tsx
import { useParams } from "@tanstack/react-router";
import { useQuery } from "@tanstack/react-query";
async function fetchDinosaurDetail(name: string) {
const response = await fetch(`http://localhost:8000/api/dinosaurs/${name}`);
if (!response.ok) {
throw new Error("Failed to fetch dinosaur detail");
}
return response.json();
}
export function DinosaurDetail() {
const { name } = useParams({ from: "/dinosaur/$name" });
const {
data: dinosaur,
isLoading,
error,
} = useQuery({
queryKey: ["dinosaur", name],
queryFn: () => fetchDinosaurDetail(name),
});
  if (isLoading) return <div>Loading...</div>;

  if (error instanceof Error) {
    return <div>An error occurred: {error.message}</div>;
  }

  return (
    <div>
      <h2>{name}</h2>
      <p>{dinosaur?.description}</p>
    </div>
  );
}
```
Again, this uses `useQuery` from **Tanstack Query** to fetch and cache
individual dinosaur details, with
[`queryKey`](https://tanstack.com/query/latest/docs/framework/react/guides/query-keys)
including the dinosaur name to ensure proper caching. Additionally, we use
[`useParams`](https://tanstack.com/router/v1/docs/framework/react/api/router/useParamsHook)
from **Tanstack Router** to safely extract and type the URL parameters defined
in our route configuration.
Before we can run this, we need to encapsulate these components into a layout.
Let's create another file in the `./src/components/` folder called `Layout.tsx`:
```ts
// ./src/components/Layout.tsx
import { Outlet } from "@tanstack/react-router";

export function Layout() {
  return (
    <div>
      <h1>Dinosaur Encyclopedia</h1>
      <Outlet />
    </div>
  );
}
```
You may notice the
[`Outlet`](https://tanstack.com/router/v1/docs/framework/react/guide/outlets)
component towards the bottom of our newly created layout. This component is from
**Tanstack Router** and renders the child route's content, allowing for nested
routing while maintaining a consistent layout structure.
Next, we'll have to wire up this layout in `./src/main.tsx`, which is an
important file that sets up the Tanstack Query client for managing server state
and the Tanstack Router for handling navigation:
```ts
// ./src/main.tsx
import React from "react";
import ReactDOM from "react-dom/client";
import { QueryClient, QueryClientProvider } from "@tanstack/react-query";
import { createRouter, RouterProvider } from "@tanstack/react-router";
import { routeTree } from "./routeTree";
const queryClient = new QueryClient();
const router = createRouter({ routeTree });
declare module "@tanstack/react-router" {
interface Register {
router: typeof router;
}
}
ReactDOM.createRoot(document.getElementById("root")!).render(
  <React.StrictMode>
    <QueryClientProvider client={queryClient}>
      <RouterProvider router={router} />
    </QueryClientProvider>
  </React.StrictMode>,
);
```
You'll notice we import
[`QueryClientProvider`](https://tanstack.com/query/latest/docs/framework/react/reference/QueryClientProvider),
which wraps the entire application to allow for query caching and state
management. We also import `RouterProvider`, which connects our defined routes
to React's rendering system.
Finally, we'll need to define a
[`routeTree.tsx`](https://tanstack.com/router/v1/docs/framework/react/guide/route-trees)
file in our `./src/` directory. This file defines our application's routing
structure using Tanstack Router's type-safe route definitions:
```ts
// ./src/routeTree.tsx
import { RootRoute, Route } from "@tanstack/react-router";
import { DinosaurList } from "./components/DinosaurList";
import { DinosaurDetail } from "./components/DinosaurDetail";
import { Layout } from "./components/Layout";
const rootRoute = new RootRoute({
component: Layout,
});
const indexRoute = new Route({
getParentRoute: () => rootRoute,
path: "/",
component: DinosaurList,
});
const dinosaurRoute = new Route({
getParentRoute: () => rootRoute,
path: "dinosaur/$name",
component: DinosaurDetail,
});
export const routeTree = rootRoute.addChildren([indexRoute, dinosaurRoute]);
```
In `./src/routeTree.tsx`, we create a hierarchy of routes with `Layout` as the
root component. Then we set two child routes, their paths and components — one
for the dinosaur list, `DinosaurList`, and the other for the individual dinosaur
details with a dynamic parameter, `DinosaurDetail`.
With all that complete, we can run this project:
```shell
deno task dev
```
## Next steps
This is just the beginning of building with Deno and Tanstack. You can add
persistent data storage like
[using a database like Postgres or MongoDB](https://docs.deno.com/runtime/tutorials/connecting_to_databases/)
and an ORM like [Drizzle](https://deno.com/blog/build-database-app-drizzle) or
[Prisma](https://docs.deno.com/runtime/tutorials/how_to_with_npm/prisma/). Or
deploy your app to
[AWS](https://docs.deno.com/runtime/tutorials/aws_lightsail/),
[Digital Ocean](https://docs.deno.com/runtime/tutorials/digital_ocean/), or
[Google Cloud Run](https://docs.deno.com/runtime/tutorials/google_cloud_run/)
You could also add real-time updates using
[Tanstack Query's refetching capabilities](https://tanstack.com/query/latest/docs/framework/react/examples/auto-refetching),
[implement infinite scrolling](https://tanstack.com/query/latest/docs/framework/react/examples/load-more-infinite-scroll)
for large dinosaur lists, or
[add complex filtering and sorting](https://tanstack.com/table/v8/docs/guide/column-filtering)
using **[Tanstack Table](https://tanstack.com/table/latest)**. The combination
of Deno's built-in web standards, tooling, and native TypeScript support, as
well as Tanstack's powerful data management opens up numerous possibilities for
building robust web applications.
---
# Writing tests
> Learn key concepts like test setup and structure, assertions, async testing, mocking, test fixtures, and code coverage
URL: https://docs.deno.com/examples/tutorials/testing
Testing is critical in software development to ensure your code works as
expected, and continues to work as you make changes. Tests verify that your
functions, modules, and applications behave correctly, handle edge cases
appropriately, and maintain expected performance characteristics.
## Why testing matters
Testing your code allows you to catch bugs, issues or regressions before they
reach production, saving time and resources. Tests are also useful for planning
out the logic of your application, since they can serve as a human-readable
description of how your code is meant to be used.
Deno provides [built-in testing capabilities](/runtime/fundamentals/testing/),
making it straightforward to implement robust testing practices in your
projects.
## Writing tests with `Deno.test`
Defining a test in Deno is straightforward - use the `Deno.test()` function to
register your test with the test runner. This function accepts either a test
name and function, or a configuration object with more detailed options. All
test functions in files that match patterns like `*_test.{ts,js,mjs,jsx,tsx}` or
`*.test.{ts,js,mjs,jsx,tsx}` are automatically discovered and executed when you
run the `deno test` command.
Here are the basic ways to define tests:
```ts
// Basic test with a name and function
Deno.test("my first test", () => {
// Your test code here
});
// Test with configuration options
Deno.test({
name: "my configured test",
fn: () => {
// Your test code here
},
ignore: false, // Optional: set to true to skip this test
only: false, // Optional: set to true to only run this test
permissions: { // Optional: specify required permissions
read: true,
write: false,
},
});
```
### A simple example test
Let's start with a simple test. Create a file called `hello_test.ts`; in it we
will test a basic addition operation using Deno's testing API and the
`assertEquals` function from the [Deno Standard Library](https://jsr.io/@std).
We use `Deno.test` and provide a name that describes what the test will do:
```ts title="main_test.ts"
// hello_test.ts
import { assertEquals } from "jsr:@std/assert";
// Function we want to test
function add(a: number, b: number): number {
return a + b;
}
Deno.test("basic addition test", () => {
// Arrange - set up the test data
const a = 1;
const b = 2;
// Act - call the function being tested
const result = add(a, b);
// Assert - verify the result is what we expect
assertEquals(result, 3);
});
```
To run this test, use the `deno test` command:
```sh
deno test hello_test.ts
```
You should see output indicating that your test has passed:
```
running 1 test from ./hello_test.ts
basic addition test ... ok (2ms)
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out (2ms)
```
Try changing the function implementation to make the test fail:
```ts
function add(a: number, b: number): number {
return a - b; // Changed from addition to subtraction
}
```
You'll see an error message that clearly shows what went wrong:
```sh
running 1 test from ./hello_test.ts
basic addition test ... FAILED (3ms)
failures:
basic addition test => ./hello_test.ts:12:3
error: AssertionError: Values are not equal:
[Diff] Actual / Expected
- -1
+ 3
at assertEquals (https://jsr.io/@std/assert@0.218.2/assert_equals.ts:31:9)
at Object.fn (file:///path/to/hello_test.ts:12:3)
at asyncOpSanitizer (ext:core/01_core.js:199:13)
at Object.sanitizeOps (ext:core/01_core.js:219:15)
at runTest (ext:test/06_test_runner.js:319:29)
at test (ext:test/06_test_runner.js:593:7)
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out (3ms)
```
This clear feedback helps you quickly identify and fix issues in your code.
## Test structure and organization
Deno will automatically find and run tests that match naming patterns like
`*_test.{ts,js,mjs,jsx,tsx}` or `*.test.{ts,js,mjs,jsx,tsx}`. There are plenty
of ways to organize your test files, we recommend co-locating your unit tests
with the code they are testing, and keeping integration tests and configuration
in a `tests` directory. This allows for immediate discovery of unit tests and
simplified imports, while keeping a separation between different types of tests.
Here's an example of how you might structure your project with tests:
```sh
my-deno-project/
├── src/
│ ├── models/
│ │ ├── user.ts
│ │ ├── user_test.ts // Unit tests for user model
│ │ ├── product.ts
│ │ └── product_test.ts // Unit tests for product model
│ ├── services/
│ │ ├── auth-service.ts
│ │ ├── auth-service_test.ts // Unit tests for auth service
│ │ ├── data-service.ts
│ │ └── data-service_test.ts // Unit tests for data service
│ └── utils/
│ ├── helpers.ts
│ └── helpers_test.ts // Unit tests for helpers
├── tests/
│ ├── integration/ // Integration tests directory
│ │ ├── api_test.ts // Tests API endpoints
│ │ └── db_test.ts // Tests database interactions
│ ├── e2e/ // End-to-end tests
│ │ └── user_flow_test.ts // Tests complete user workflows
│ └── fixtures/ // Shared test data and utilities
│ ├── test_data.ts // Test data used across tests
│ └── setup.ts // Common setup functions
├── main.ts
└── deno.json // Project configuration
```
This kind of structure offers a centralized place for test configuration while
maintaining the benefits of co-locating unit tests with their relevant files.
With this structure, you can:
```sh
# Run all tests
deno test
# Run only unit tests
deno test src/
# Run only integration tests
deno test tests/integration/
# Run specific module tests
deno test src/models/
# Run a specific test file
deno test src/models/user_test.ts
```
## Assertions
Assertions are the building blocks of effective tests, allowing you to verify
that your code behaves as expected. They check if a specific condition is true
and throw an error if it's not, causing the test to fail. Good assertions are
clear, specific, and help identify exactly what went wrong when a test fails.
Deno doesn't include assertions in its core library, but you can import them
from the [Deno standard library](https://jsr.io/@std/assert):
```ts
import {
assertArrayIncludes, // Check that array contains value
assertEquals, // Check that values are equal
assertExists, // Check that value is not null or undefined
assertMatch, // Check that string matches regex pattern
assertNotEquals, // Check that values are not equal
assertObjectMatch, // Check that object has expected properties
assertRejects, // Check that Promise rejects
assertStrictEquals, // Check that values are strictly equal (===)
assertStringIncludes, // Check that string contains substring
assertThrows, // Check that function throws an error
} from "jsr:@std/assert";
Deno.test("assertion examples", () => {
// Basic assertions
assertEquals(1 + 1, 2);
assertNotEquals("hello", "world");
assertExists("Hello");
// String assertions
assertStringIncludes("Hello, world!", "world");
assertMatch("deno@1.0.0", /^deno@\d+\.\d+\.\d+$/);
// Object assertions
assertObjectMatch(
{ name: "Jane", age: 25, city: "Tokyo" },
{ name: "Jane" }, // Only checks specified properties
);
// Strict equality (type + value)
assertStrictEquals("deno", "deno");
// Error assertions
assertThrows(
() => {
throw new Error("Something went wrong");
},
Error,
"Something went wrong",
);
});
```
For those who prefer fluent assertions (familiar to users of Jest), you can use
the `expect` module:
```ts
import { expect } from "jsr:@std/expect";
Deno.test("expect style assertions", () => {
// Basic matchers
expect(5).toBe(5);
expect({ name: "deno" }).toEqual({ name: "deno" });
// Collection matchers
expect([1, 2, 3]).toContain(2);
// Truthiness matchers
expect(true).toBeTruthy();
expect(0).toBeFalsy();
expect(null).toBeNull();
expect(undefined).toBeUndefined();
// Number matchers
expect(100).toBeGreaterThan(99);
expect(1).toBeLessThan(2);
// String matchers
expect("Hello world").toMatch(/world/);
// Function/error matchers
expect(() => {
throw new Error("fail");
}).toThrow();
});
```
### Real-world Example
Here's a more realistic example testing a function that processes user data:
```ts
// user_processor.ts
export function validateUser(user: any): { valid: boolean; errors: string[] } {
const errors: string[] = [];
if (!user.name || typeof user.name !== "string") {
errors.push("Name is required and must be a string");
}
if (!user.email || !user.email.includes("@")) {
errors.push("Valid email is required");
}
if (
user.age !== undefined && (typeof user.age !== "number" || user.age < 18)
) {
errors.push("Age must be a number and at least 18");
}
return {
valid: errors.length === 0,
errors,
};
}
// user_processor_test.ts
import { assertEquals } from "jsr:@std/assert";
import { validateUser } from "./user_processor.ts";
Deno.test("validateUser", async (t) => {
await t.step("should validate a correct user object", () => {
const user = {
name: "John Doe",
email: "john@example.com",
age: 30,
};
const result = validateUser(user);
assertEquals(result.valid, true);
assertEquals(result.errors.length, 0);
});
await t.step("should return errors for invalid user", () => {
const user = {
name: "",
email: "invalid-email",
age: 16,
};
const result = validateUser(user);
assertEquals(result.valid, false);
assertEquals(result.errors.length, 3);
assertEquals(result.errors[0], "Name is required and must be a string");
assertEquals(result.errors[1], "Valid email is required");
assertEquals(result.errors[2], "Age must be a number and at least 18");
});
await t.step("should handle missing properties", () => {
const user = {
name: "Jane Doe",
// email and age missing
};
const result = validateUser(user);
assertEquals(result.valid, false);
assertEquals(result.errors.length, 1);
assertEquals(result.errors[0], "Valid email is required");
});
});
```
## Async testing
Deno handles async tests naturally. Just make your test function async and use
await:
```ts
import { assertEquals } from "jsr:@std/assert";
Deno.test("async test example", async () => {
const response = await fetch("https://deno.land");
const status = response.status;
assertEquals(status, 200);
});
```
### Testing async functions
When testing functions that return promises, you should always await the result:
```ts
// async-function.ts
export async function fetchUserData(userId: string) {
const response = await fetch(`https://api.example.com/users/${userId}`);
if (!response.ok) {
throw new Error(`Failed to fetch user: ${response.status}`);
}
return await response.json();
}
// async-function_test.ts
import { assertEquals, assertRejects } from "jsr:@std/assert";
import { fetchUserData } from "./async-function.ts";
Deno.test("fetchUserData success", async () => {
// Mock the fetch function for testing
globalThis.fetch = async () => {
const data = JSON.stringify({ id: "123", name: "Test User" });
return new Response(data, { status: 200 });
};
const userData = await fetchUserData("123");
assertEquals(userData.id, "123");
assertEquals(userData.name, "Test User");
});
Deno.test("fetchUserData failure", async () => {
// Mock the fetch function to simulate an error
globalThis.fetch = async () => {
return new Response("Not Found", { status: 404 });
};
await assertRejects(
async () => await fetchUserData("nonexistent"),
Error,
"Failed to fetch user: 404",
);
});
```
## Mocking in tests
Mocking is an essential technique for isolating the code being tested from its
dependencies. Deno provides built-in utilities and third-party libraries for
creating mocks.
### Basic Mocking
You can create simple mocks by
[replacing functions or objects with your own
implementations](/examples/mocking_tutorial/). This allows you to control the
behavior of dependencies and test how your code interacts with them.
```ts
import { assertEquals } from "jsr:@std/assert";

// Example of a module with a function we want to mock
const api = {
fetchData: async () => {
const response = await fetch("https://api.example.com/data");
return response.json();
},
};
// In your test file
Deno.test("basic mocking example", async () => {
// Store the original function
const originalFetchData = api.fetchData;
// Replace with mock implementation
api.fetchData = async () => {
return { id: 1, name: "Test Data" };
};
try {
// Test using the mock
const result = await api.fetchData();
assertEquals(result, { id: 1, name: "Test Data" });
} finally {
// Restore the original function
api.fetchData = originalFetchData;
}
});
```
### Using Spy Functions
Spies allow you to track function calls without changing their behavior:
```ts
import { assertEquals } from "jsr:@std/assert";
import { spy } from "jsr:@std/testing/mock";
Deno.test("spy example", () => {
// Create a spy on console.log
const consoleSpy = spy(console, "log");
// Call the function we're spying on
console.log("Hello");
console.log("World");
// Verify the function was called correctly
assertEquals(consoleSpy.calls.length, 2);
assertEquals(consoleSpy.calls[0].args, ["Hello"]);
assertEquals(consoleSpy.calls[1].args, ["World"]);
// Restore the original function
consoleSpy.restore();
});
```
For more advanced mocking techniques, check our
[dedicated guide on mocking in Deno](/examples/mocking_tutorial/).
## Coverage
Code coverage is a metric that helps you understand how much of your code is
being tested. It measures which lines, functions, and branches of your code are
executed during your tests, giving you insight into areas that might lack proper
testing.
Coverage analysis helps you to:
- Identify untested parts of your codebase
- Ensure critical paths have tests
- Prevent regressions when making changes
- Measure testing progress over time
:::note
High coverage doesn't guarantee high-quality tests. It simply shows what code
was executed, not whether your assertions are meaningful or if edge cases are
handled correctly.
:::
Deno provides built-in coverage tools to help you analyze your test coverage. To
collect coverage information:
```bash
deno test --coverage=coverage_dir
```
This generates coverage data in a specified directory (here, `coverage_dir`). To
view a human-readable report:
```bash
deno coverage coverage_dir
```
You'll see output like:
```sh
file:///projects/my-project/src/utils.ts 85.7% (6/7)
file:///projects/my-project/src/models/user.ts 100.0% (15/15)
file:///projects/my-project/src/services/auth.ts 78.3% (18/23)
total: 87.5% (39/45)
```
For more detailed insights, you can also generate an HTML report:
```bash
deno coverage --html coverage_dir
```
This creates an interactive HTML report in the specified directory that shows
exactly which lines are covered and which are not.
By default, the coverage tool automatically excludes:
- Test files (matching patterns like `test.ts` or `test.js`)
- Remote files (those not starting with `file:`)
This ensures your coverage reports focus on your application code rather than
test files or external dependencies.
### Coverage Configuration
You can exclude files from coverage reports by using the `--exclude` flag:
```bash
deno coverage --exclude="test_,vendor/,_build/,node_modules/" coverage_dir
```
### Integrating with CI
For continuous integration environments, you might want to enforce a minimum
coverage threshold:
```yaml
# In your GitHub Actions workflow
- name: Run tests with coverage
  run: deno test --coverage=coverage_dir

- name: Check coverage meets threshold
  run: |
    COVERAGE=$(deno coverage coverage_dir | grep "total:" | grep -o '[0-9]\+\.[0-9]\+')
    if (( $(echo "$COVERAGE < 80" | bc -l) )); then
      echo "Test coverage is below 80%: $COVERAGE%"
      exit 1
    fi
When working on your test coverage, remember to set realistic goals: aim for
meaningful coverage with high-quality tests rather than chasing 100% coverage.
## Comparison with other testing frameworks
If you're coming from other JavaScript testing frameworks, here's how Deno's
testing capabilities compare:
| Feature | Deno | Jest | Mocha | Jasmine |
| ------------- | ---------------- | ---------------------- | -------------------------- | --------------------- |
| Setup | Built-in | Requires installation | Requires installation | Requires installation |
| Syntax | `Deno.test()` | `test()`, `describe()` | `it()`, `describe()` | `it()`, `describe()` |
| Assertions | From std library | Built-in expect | Requires assertion library | Built-in expect |
| Mocking | From std library | Built-in jest.mock() | Requires sinon or similar | Built-in spies |
| Async support | Native | Needs special handling | Supports promises | Supports promises |
| File watching | `--watch` flag | watch mode | Requires nodemon | Requires extra tools |
| Code coverage | Built-in | Built-in | Requires istanbul | Requires istanbul |
### Testing Style Comparison
**Deno:**

```ts
import { assertEquals } from "jsr:@std/assert";

Deno.test("add function", () => {
  assertEquals(1 + 2, 3);
});
```

**Jest:**

```ts
test("add function", () => {
  expect(1 + 2).toBe(3);
});
```

**Mocha:**

```ts
import { assert } from "chai";

describe("math", () => {
  it("should add numbers", () => {
    assert.equal(1 + 2, 3);
  });
});
```

**Jasmine:**

```ts
describe("math", () => {
  it("should add numbers", () => {
    expect(1 + 2).toBe(3);
  });
});
```
## Next steps
🦕 Deno's built-in testing capabilities make it easy to write and run tests
without needing to install extra testing frameworks or tools. By following the
patterns and practices outlined in this tutorial, you can ensure your Deno
applications are well-tested and reliable.
For more information about testing in Deno, check out:
- [Testing documentation](/runtime/fundamentals/testing)
- [Mocking data for tests](/examples/mocking_tutorial/)
- [Writing benchmark tests](/examples/benchmarking/)
---
# Build a Typesafe API with tRPC and Deno
> A guide to building type-safe APIs with tRPC and Deno. Learn how to set up endpoints, implement RPC procedures, handle data validation, and create efficient client-server applications.
URL: https://docs.deno.com/examples/tutorials/trpc
Deno is an
[all-in-one, zero-config toolchain](https://docs.deno.com/runtime/manual/tools)
for writing JavaScript and
[TypeScript](https://docs.deno.com/runtime/fundamentals/typescript/) that
[natively supports Web Platform APIs](https://docs.deno.com/runtime/reference/web_platform_apis/),
making it an ideal choice for quickly building backends and APIs. To make our
API easier to maintain, we can use [tRPC](https://trpc.io/), a TypeScript RPC
([Remote Procedure Call](https://en.wikipedia.org/wiki/Remote_procedure_call))
framework that enables you to build fully type-safe APIs without schema
declarations or code generation.
In this tutorial, we'll build a simple type-safe API with tRPC and Deno that
returns information about dinosaurs:
- [Set up tRPC](#set-up-trpc)
- [Set up the server](#set-up-the-trpc-server)
- [Set up the client](#set-up-the-trpc-client)
- [What's next?](#whats-next)
You can find all the code for this tutorial in
[this GitHub repo](https://github.com/denoland/examples/tree/main/with-trpc).
## Set up tRPC
To get started with tRPC in Deno, we'll need to install the required
dependencies. Thanks to Deno's npm compatibility, we can use the npm versions of
tRPC packages along with Zod for input validation:
```bash
deno install npm:@trpc/server@next npm:@trpc/client@next npm:zod jsr:@std/path
```
This installs the most recent tRPC server and client packages,
[Zod](https://zod.dev/) for runtime type validation, and
[the Deno Standard Library's `path`](https://jsr.io/@std/path) utility. These
packages will allow us to build a type-safe API layer between our client and
server code.
This will create a `deno.json` file in the project root to manage the npm and
[jsr](https://jsr.io/) dependencies:
```json
{
  "imports": {
    "@std/path": "jsr:@std/path@^1.0.6",
    "@trpc/client": "npm:@trpc/client@^11.0.0-rc.593",
    "@trpc/server": "npm:@trpc/server@^11.0.0-rc.593",
    "zod": "npm:zod@^3.23.8"
  }
}
```
## Set up the tRPC server
The first step in building our tRPC application is setting up the server. We'll
start by initializing tRPC and creating our base router and procedure builders.
These will be the foundation for defining our API endpoints.
Create a `server/trpc.ts` file:
```tsx
// server/trpc.ts
import { initTRPC } from "@trpc/server";

/**
 * Initialization of tRPC backend
 * Should be done only once per backend!
 */
const t = initTRPC.create();

/**
 * Export reusable router and procedure helpers
 * that can be used throughout the router
 */
export const router = t.router;
export const publicProcedure = t.procedure;
```
This initializes tRPC and exports the router and procedure builders that we'll
use to define our API endpoints. The `publicProcedure` allows us to create
endpoints that don't require authentication.
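By contrast, here is a minimal sketch of what an authenticated procedure could
look like using tRPC middleware. Everything in this snippet is hypothetical and
not part of this tutorial's files; it assumes you initialize tRPC with a
context carrying a `user` field:

```tsx
// Hypothetical sketch of a middleware-guarded procedure.
import { initTRPC, TRPCError } from "@trpc/server";

// Assumed context shape; a real app would populate this per request.
type Context = { user?: { id: string } };

const t = initTRPC.context<Context>().create();

const requireAuth = t.middleware(({ ctx, next }) => {
  if (!ctx.user) {
    throw new TRPCError({ code: "UNAUTHORIZED" });
  }
  // Narrow the context so downstream resolvers see a non-optional user
  return next({ ctx: { user: ctx.user } });
});

export const protectedProcedure = t.procedure.use(requireAuth);
```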
Next, we'll create a simple data layer to manage our dinosaur data. Create a
`server/db.ts` file with the following:
```tsx
// server/db.ts
import { join } from "@std/path";

type Dino = { name: string; description: string };

const dataPath = join("data", "data.json");

async function readData(): Promise<Dino[]> {
  const data = await Deno.readTextFile(dataPath);
  return JSON.parse(data);
}

async function writeData(dinos: Dino[]): Promise<void> {
  await Deno.writeTextFile(dataPath, JSON.stringify(dinos, null, 2));
}

export const db = {
  dino: {
    findMany: () => readData(),
    findByName: async (name: string) => {
      const dinos = await readData();
      return dinos.find((dino) => dino.name === name);
    },
    create: async (data: { name: string; description: string }) => {
      const dinos = await readData();
      const newDino = { ...data };
      dinos.push(newDino);
      await writeData(dinos);
      return newDino;
    },
  },
};
```
This creates a simple file-based database that reads and writes dinosaur data to
a JSON file. In a production environment, you'd typically use a proper database,
but this will work well for our demo.
> ⚠️️ In this tutorial, we hard code data and use a file-based database. However,
> you can
> [connect to a variety of databases](https://docs.deno.com/runtime/tutorials/connecting_to_databases/)
> and use ORMs like [Drizzle](https://docs.deno.com/examples/drizzle) or
> [Prisma](https://docs.deno.com/runtime/tutorials/how_to_with_npm/prisma/).
Finally, we'll need to provide the actual data. Let's create a
`./data/data.json` file with some sample dinosaur data:
```jsonc
// data/data.json
[
  {
    "name": "Aardonyx",
    "description": "An early stage in the evolution of sauropods."
  },
  {
    "name": "Abelisaurus",
    "description": "\"Abel's lizard\" has been reconstructed from a single skull."
  },
  {
    "name": "Abrictosaurus",
    "description": "An early relative of Heterodontosaurus."
  },
  {
    "name": "Abrosaurus",
    "description": "A close Asian relative of Camarasaurus."
  },
  ...
]
```
Now, we can create our main server file that defines our tRPC router and
procedures. Create a `server/index.ts` file:
```tsx
// server/index.ts
import { createHTTPServer } from "@trpc/server/adapters/standalone";
import { z } from "zod";
import { db } from "./db.ts";
import { publicProcedure, router } from "./trpc.ts";

const appRouter = router({
  dino: {
    list: publicProcedure.query(async () => {
      const dinos = await db.dino.findMany();
      return dinos;
    }),
    byName: publicProcedure.input(z.string()).query(async (opts) => {
      const { input } = opts;
      const dino = await db.dino.findByName(input);
      return dino;
    }),
    create: publicProcedure
      .input(z.object({ name: z.string(), description: z.string() }))
      .mutation(async (opts) => {
        const { input } = opts;
        const dino = await db.dino.create(input);
        return dino;
      }),
  },
  examples: {
    iterable: publicProcedure.query(async function* () {
      for (let i = 0; i < 3; i++) {
        await new Promise((resolve) => setTimeout(resolve, 500));
        yield i;
      }
    }),
  },
});

// Export type router type signature, this is used by the client.
export type AppRouter = typeof appRouter;

const server = createHTTPServer({
  router: appRouter,
});

server.listen(3000);
```
This sets up four endpoints:
- `dino.list`: Returns all dinosaurs
- `dino.byName`: Returns a specific dinosaur by name
- `dino.create`: Creates a new dinosaur
- `examples.iterable`: A demonstration of tRPC's support for async iterables
The server is configured to listen on port 3000 and will handle all tRPC
requests.
While you can run the server now, you won't be able to access any of the routes
and have it return data. Let's fix that!
## Set up the tRPC client
With our server ready, we can create a client that consumes our API with full
type safety. Create a `client/index.ts` file:
```tsx
// client/index.ts
/**
 * This is the client-side code that uses the inferred types from the server
 */
import {
  createTRPCClient,
  splitLink,
  unstable_httpBatchStreamLink,
  unstable_httpSubscriptionLink,
} from "@trpc/client";
/**
 * We only import the `AppRouter` type from the server - this is not available at runtime
 */
import type { AppRouter } from "../server/index.ts";

// Initialize the tRPC client
const trpc = createTRPCClient<AppRouter>({
  links: [
    splitLink({
      condition: (op) => op.type === "subscription",
      true: unstable_httpSubscriptionLink({
        url: "http://localhost:3000",
      }),
      false: unstable_httpBatchStreamLink({
        url: "http://localhost:3000",
      }),
    }),
  ],
});

const dinos = await trpc.dino.list.query();
console.log("Dinos:", dinos);

const createdDino = await trpc.dino.create.mutate({
  name: "Denosaur",
  description:
    "A dinosaur that lives in the deno ecosystem. Eats Nodes for breakfast.",
});
console.log("Created dino:", createdDino);

const dino = await trpc.dino.byName.query("Denosaur");
console.log("Denosaur:", dino);

const iterable = await trpc.examples.iterable.query();
for await (const i of iterable) {
  console.log("Iterable:", i);
}
```
This client code demonstrates several key features of tRPC:
1. **Type inference from the server router**. The client automatically inherits
all type definitions from the server through the `AppRouter` type import.
This means you get complete type support and compile-time type checking for
all your API calls. If you modify a procedure on the server, TypeScript will
immediately flag any incompatible client usage.
2. **Making queries and mutations**. The example demonstrates two types of API
calls: Queries (`list` and `byName`) used for fetching data without side
effects, and mutations (`create`) used for operations that modify server-side
state. The client automatically knows the input and output types for each
procedure, providing type safety throughout the entire request cycle.
3. **Working with async iterables**. The `examples.iterable` demonstrates tRPC's
support for streaming data using async iterables. This feature is
particularly useful for real-time updates or processing large datasets in
chunks.
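As a quick illustration of point 1, a deliberately wrong call fails at compile
time before any request is made. This snippet is hypothetical and not part of
the tutorial's files:

```tsx
// `dino.byName` is inferred to take a string input, so the compiler
// rejects this call without any server round trip.
// @ts-expect-error -- number is not assignable to string
await trpc.dino.byName.query(42);
```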
Now, let's start our server to see it in action. In our `deno.json` config file,
let's create a new property `tasks` with the following commands:
```jsonc
{
  "tasks": {
    "start": "deno -A server/index.ts",
    "client": "deno -A client/index.ts"
  }
  // Other properties in deno.json remain the same.
}
```
We can list our available tasks with `deno task`:
```bash
deno task
Available tasks:
- start
    deno -A server/index.ts
- client
    deno -A client/index.ts
```
Now, we can start the server with `deno task start`. After that's running, we
can run the client with `deno task client`. You should see an output like this:
```bash
deno task client
Dinos: [
  {
    name: "Aardonyx",
    description: "An early stage in the evolution of sauropods."
  },
  {
    name: "Abelisaurus",
    description: "Abel's lizard has been reconstructed from a single skull."
  },
  {
    name: "Abrictosaurus",
    description: "An early relative of Heterodontosaurus."
  },
  ...
]
Created dino: {
  name: "Denosaur",
  description: "A dinosaur that lives in the deno ecosystem. Eats Nodes for breakfast."
}
Denosaur: {
  name: "Denosaur",
  description: "A dinosaur that lives in the deno ecosystem. Eats Nodes for breakfast."
}
Iterable: 0
Iterable: 1
Iterable: 2
```
Success! Running `./client/index.ts` shows how to create a tRPC client and use
its JavaScript API to interact with the server. But how can we check that the
tRPC client is inferring the right types from the server? Let's modify the
code snippet below in `./client/index.ts` to pass a `number` instead of a
`string` as the `description`:
```diff
// ...
const createdDino = await trpc.dino.create.mutate({
  name: "Denosaur",
  description:
-    "A dinosaur that lives in the deno ecosystem. Eats Nodes for breakfast.",
+    100,
});
console.log("Created dino:", createdDino);
// ...
```
When we re-run the client:
```bash
deno task client
...
error: Uncaught (in promise) TRPCClientError: [
  {
    "code": "invalid_type",
    "expected": "string",
    "received": "number",
    "path": [
      "description"
    ],
    "message": "Expected string, received number"
  }
]
at Function.from (file:///Users/andyjiang/Library/Caches/deno/npm/registry.npmjs.org/@trpc/client/11.0.0-rc.608/dist/TRPCClientError.mjs:35:20)
at file:///Users/andyjiang/Library/Caches/deno/npm/registry.npmjs.org/@trpc/client/11.0.0-rc.608/dist/links/httpBatchStreamLink.mjs:118:56
at eventLoopTick (ext:core/01_core.js:175:7)
```
tRPC successfully threw an `invalid_type` error, since it was expecting a
`string` instead of a `number`.
## What’s next?
Now that you have a basic understanding of how to use tRPC with Deno, you could:
1. Build out an actual frontend using
[Next.js](https://trpc.io/docs/client/nextjs) or
[React](https://trpc.io/docs/client/react)
2. [Add authentication to your API using tRPC middleware](https://trpc.io/docs/server/middlewares#authorization)
3. Implement real-time features
[using tRPC subscriptions](https://trpc.io/docs/server/subscriptions)
4. Add [input validation](https://trpc.io/docs/server/validators) for more
complex data structures
5. Integrate with a proper database like
[PostgreSQL](https://docs.deno.com/runtime/tutorials/connecting_to_databases/#postgres)
or use an ORM like [Drizzle](https://docs.deno.com/examples/drizzle) or
[Prisma](https://docs.deno.com/runtime/tutorials/how_to_with_npm/prisma/)
6. Deploy your application to [Deno Deploy](https://deno.com/deploy) or
[any public cloud via Docker](https://docs.deno.com/runtime/tutorials/#deploying-deno-projects)
🦕 Happy type safety coding with Deno and tRPC!
---
# Build a Vue.js App
> A tutorial on building Vue.js applications with Deno. Learn how to set up a Vite project, implement component architecture, add routing, manage state, and create a full-stack TypeScript application.
URL: https://docs.deno.com/examples/tutorials/vue
[Vue.js](https://vuejs.org/) is a progressive front-end JavaScript framework. It
provides tools and features for creating dynamic and interactive user
interfaces.
In this tutorial we'll build a simple Vue.js app with Vite and Deno. The app
will display a list of dinosaurs. When you click on one, it'll take you to a
dinosaur page with more details. You can see the
[finished app on GitHub](https://github.com/denoland/tutorial-with-vue).

## Create a Vue.js app with Vite and Deno
We'll use [Vite](https://vitejs.dev/) to scaffold a basic Vue.js app. In your
terminal, run the following command to create a new Vue.js app with Vite:
```shell
deno run -A npm:create-vite
```
When prompted, give your app a name and select `Vue` from the offered frameworks
and `TypeScript` as a variant.
Once created, `cd` into your new project and run the following command to
install dependencies:
```shell
deno install
```
Then, run the following command to serve your new Vue.js app:
```shell
deno task dev
```
Deno will run the `dev` task from the `package.json` file which will start the
Vite server. Click the output link to localhost to see your app in the browser.
## Configure the formatter
`deno fmt` supports Vue files with the
[`--unstable-component`](https://docs.deno.com/runtime/reference/cli/fmt/#formatting-options-unstable-component)
flag. To use it, run this command:
```sh
deno fmt --unstable-component
```
To configure `deno fmt` to always format your Vue files, add this at the top
level of your `deno.json` file:
```json
"unstable": ["fmt-component"]
```
## Add a backend
The next step is to add a backend API. We'll create a very simple API that
returns information about dinosaurs.
In the root of your new vite project, create an `api` folder. In that folder,
create a `main.ts` file, which will run the server, and a `data.json` file,
which is where we'll put the hard-coded data.
Copy and paste
[this json file](https://raw.githubusercontent.com/denoland/tutorial-with-vue/refs/heads/main/api/data.json)
into `api/data.json`.
We're going to build out a simple API server with routes that return dinosaur
information. We'll use the [`oak` middleware framework](https://jsr.io/@oak/oak)
and the [`cors` middleware](https://jsr.io/@tajpouria/cors) to enable
[CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS).
Use the `deno add` command to add the required dependencies to your project:
```shell
deno add jsr:@oak/oak jsr:@tajpouria/cors
```
Next, update `api/main.ts` to import the required modules and create a new
`Router` instance to define some routes:
```ts title="main.ts"
import { Application, Router } from "@oak/oak";
import { oakCors } from "@tajpouria/cors";
import data from "./data.json" with { type: "json" };
const router = new Router();
```
After this, in the same file, we'll define three routes. The first route at `/`
will return the string `Welcome to dinosaur API!`, then we'll set up
`/dinosaurs` to return all the dinosaurs, and finally `/dinosaurs/:dinosaur` to
return a specific dinosaur based on the name in the URL:
```ts title="main.ts"
router
.get("/", (context) => {
context.response.body = "Welcome to dinosaur API!";
})
.get("/dinosaurs", (context) => {
context.response.body = data;
})
.get("/dinosaurs/:dinosaur", (context) => {
if (!context?.params?.dinosaur) {
context.response.body = "No dinosaur name provided.";
}
const dinosaur = data.find((item) =>
item.name.toLowerCase() === context.params.dinosaur.toLowerCase()
);
context.response.body = dinosaur ? dinosaur : "No dinosaur found.";
});
```
Finally, at the bottom of the same file, create a new `Application` instance and
attach the routes we just defined to the application using
`app.use(router.routes())` and start the server listening on port 8000:
```ts title="main.ts"
const app = new Application();
app.use(oakCors());
app.use(router.routes());
app.use(router.allowedMethods());
await app.listen({ port: 8000 });
```
You can run the API server with `deno run --allow-env --allow-net api/main.ts`.
We'll create a task to run this command and update the dev task to run both the
Vue.js app and the API server.
In your `package.json` file, update the `scripts` field to include the
following:
```jsonc
{
  "scripts": {
    "dev": "deno task dev:api & deno task dev:vite",
    "dev:api": "deno run --allow-env --allow-net api/main.ts",
    "dev:vite": "deno run -A npm:vite",
    // ...
  }
}
```
Now, if you run `deno task dev` and visit `localhost:8000`, in your browser you
should see the text `Welcome to dinosaur API!`, and if you visit
`localhost:8000/dinosaurs`, you should see a JSON response of all of the
dinosaurs.
## Build the frontend
### The entrypoint and routing
In the `src` directory, you'll find a `main.ts` file. This is the entry point
for the Vue.js app. Our app will have multiple routes, so we'll need a router to
do our client-side routing. We'll use the official
[Vue Router](https://router.vuejs.org/) for this.
Update `src/main.ts` to import and use the router:
```ts
import { createApp } from "vue";
import router from "./router/index.ts";
import "./style.css";
import App from "./App.vue";
createApp(App)
  .use(router)
  .mount("#app");
```
Add the Vue Router module to the project with `deno add`:
```shell
deno add npm:vue-router
```
Next, create a `router` directory in the `src` directory. In it, create an
`index.ts` file with the following content:
```ts title="router/index.ts"
import { createRouter, createWebHistory } from "vue-router";
import HomePage from "../components/HomePage.vue";
import Dinosaur from "../components/Dinosaur.vue";
export default createRouter({
history: createWebHistory("/"),
routes: [
{
path: "/",
name: "Home",
component: HomePage,
},
{
path: "/:dinosaur",
name: "Dinosaur",
component: Dinosaur,
props: true,
},
],
});
```
This will set up a router with two routes: `/` and `/:dinosaur`. The `HomePage`
component will be rendered at `/` and the `Dinosaur` component will be rendered
at `/:dinosaur`.
Finally, you can delete all of the code in the `src/App.vue` file and update it
to include only the `<RouterView />` component:
```html title="App.vue"
<template>
  <RouterView />
</template>
```
### The components
Vue.js splits the frontend UI into components. Each component is a reusable
piece of code. We'll create three components: one for the home page, one for the
list of dinosaurs, and one for an individual dinosaur.
Each component file is split into three parts: `<script>`, `<template>`, and
`<style>`.
In the `components` directory, create a `Dinosaurs.vue` file. This component
fetches the list of dinosaurs from the API we set up earlier and renders each
one as a link:
```html title="Dinosaurs.vue"
<script setup lang="ts">
const response = await fetch("http://localhost:8000/dinosaurs");
const dinosaurs = await response.json();
</script>

<template>
  <span v-for="dinosaur in dinosaurs" :key="dinosaur.name">
    <RouterLink
      :to="{ name: 'Dinosaur', params: { dinosaur: dinosaur.name.toLowerCase() } }"
    >
      {{ dinosaur.name }}
    </RouterLink>
  </span>
</template>
```
This code uses the Vue.js
[v-for](https://vuejs.org/api/built-in-directives.html#v-for) directive to
iterate over the `dinosaurs` array and render each dinosaur as a `RouterLink`
component. The `:to` attribute of the `RouterLink` component specifies the route
to navigate to when the link is clicked, and the `:key` attribute is used to
uniquely identify each dinosaur.
#### The Homepage component
The homepage will contain a heading and then it will render the `Dinosaurs`
component. Add the following code to the `HomePage.vue` file:
```html title="HomePage.vue"
Welcome to the Dinosaur App! 🦕
Click on a dinosaur to learn more about them
Loading...
```
Because the `Dinosaurs` component fetches data asynchronously, use the
[`Suspense` component](https://vuejs.org/guide/built-ins/suspense.html) to
handle the loading state.
#### The Dinosaur component
The `Dinosaur` component will display the name and description of a specific
dinosaur and a link to go back to the full list.
First, we'll set up some types for the data we'll be fetching. Create a
`types.ts` file in the `src` directory and add the following code:
```ts title="types.ts"
export type Dinosaur = {
  name: string;
  description: string;
};

export type ComponentData = {
  dinosaurDetails: null | Dinosaur;
};
```
Then update the `Dinosaur.vue` file:
```html title="Dinosaur.vue"
{{ dinosaurDetails?.name }}
{{ dinosaurDetails?.description }}
🠠 Back to all dinosaurs
```
This code uses the `props` option to define a prop named `dinosaur` that will be
passed to the component. The `mounted` lifecycle hook is used to fetch the
details of the dinosaur based on the `dinosaur` prop and store them in the
`dinosaurDetails` data property. This data is then rendered in the template.
## Run the app
Now that we've set up the frontend and backend, we can run the app. In your
terminal, run the following command:
```shell
deno task dev
```
Visit the output localhost link in your browser to see the app. Click on a
dinosaur to see more details!

🦕 Now that you can run a Vue app in Deno with Vite you're ready to build real
world applications! If you'd like to expand upon this demo you could consider
building out a backend server to serve the static app once built, then you'll be
able to
[deploy your dinosaur app to the cloud](https://docs.deno.com/deploy/manual/).
---
# Testing web apps
> A comprehensive guide to testing web applications with Deno
URL: https://docs.deno.com/examples/tutorials/web_testing
Deno is a JavaScript runtime that operates outside of the browser; as such, you
cannot directly manipulate the Document Object Model in Deno as you would in a
browser. However, you can use a library like
[deno-dom](https://jsr.io/@b-fuze/deno-dom),
[JSDom](https://github.com/jsdom/jsdom) or
[LinkeDOM](https://www.npmjs.com/package/linkedom) to work with the DOM. This
tutorial will guide you through how to effectively test your web applications
using Deno.
## Testing UI components and DOM manipulation
Let's say you have a website that shows a user's profile. You can set up a test
function to verify that the DOM element creation works correctly. This code sets
up a basic card element then tests whether the created DOM structure matches
what was expected.
```ts
import { assertEquals } from "jsr:@std/assert";
import { DOMParser, Element } from "jsr:@b-fuze/deno-dom";

// Component or function that manipulates the DOM
function createUserCard(user: { name: string; email: string }): Element {
  const doc = new DOMParser().parseFromString("", "text/html")!;
  const card = doc.createElement("div");
  card.className = "user-card";

  const name = doc.createElement("h2");
  name.textContent = user.name;
  card.appendChild(name);

  const email = doc.createElement("p");
  email.textContent = user.email;
  email.className = "email";
  card.appendChild(email);

  return card;
}

Deno.test("DOM manipulation test", () => {
  // Create a test user
  const testUser = { name: "Test User", email: "test@example.com" };

  // Call the function
  const card = createUserCard(testUser);

  // Assert the DOM structure is correct
  assertEquals(card.className, "user-card");
  assertEquals(card.children.length, 2);
  assertEquals(card.querySelector("h2")?.textContent, "Test User");
  assertEquals(card.querySelector(".email")?.textContent, "test@example.com");
});
```
## Testing Event Handling
Web applications often handle user interactions through events. Here's how to
test event handlers. This code sets up a button that tracks its active/inactive
state and updates its appearance when clicked. The accompanying test verifies
the toggle functionality by creating a button, checking its initial state,
simulating clicks, and asserting that the button correctly updates its state
after each interaction:
```ts
import { DOMParser } from "jsr:@b-fuze/deno-dom";
import { assertEquals } from "jsr:@std/assert";

// Component with event handling
function createToggleButton(text: string) {
  const doc = new DOMParser().parseFromString("", "text/html")!;
  const button = doc.createElement("button");
  button.textContent = text;
  button.dataset.active = "false";

  button.addEventListener("click", () => {
    const isActive = button.dataset.active === "true";
    button.dataset.active = isActive ? "false" : "true";
    button.classList.toggle("active", !isActive);
  });

  return button;
}

Deno.test("event handling test", () => {
  // Create button
  const button = createToggleButton("Toggle Me");

  // Initial state
  assertEquals(button.dataset.active, "false");
  assertEquals(button.classList.contains("active"), false);

  // Simulate click event
  button.dispatchEvent(new Event("click"));

  // Test after first click
  assertEquals(button.dataset.active, "true");
  assertEquals(button.classList.contains("active"), true);

  // Simulate another click
  button.dispatchEvent(new Event("click"));

  // Test after second click
  assertEquals(button.dataset.active, "false");
  assertEquals(button.classList.contains("active"), false);
});
```
## Testing Fetch Requests
Testing components that make network requests requires mocking the fetch API.
In the below example we will [mock](/examples/mocking_tutorial/) the `fetch` API
to test a function that retrieves user data from an external API. The test
creates a spy function that returns predefined responses based on the requested
URL, allowing you to test both successful requests and error handling without
making actual network calls:
```ts
import { assertSpyCalls, spy } from "jsr:@std/testing/mock";
import { assertEquals } from "jsr:@std/assert";

// Component that fetches data
async function fetchUserData(
  userId: string,
): Promise<{ name: string; email: string }> {
  const response = await fetch(`https://api.example.com/users/${userId}`);
  if (!response.ok) {
    throw new Error(`Failed to fetch user: ${response.status}`);
  }
  return await response.json();
}

Deno.test("fetch request test", async () => {
  // Mock fetch response
  const originalFetch = globalThis.fetch;
  const mockFetch = spy(async (input: RequestInfo | URL): Promise<Response> => {
    const url = input.toString();
    if (url === "https://api.example.com/users/123") {
      return new Response(
        JSON.stringify({ name: "John Doe", email: "john@example.com" }),
        { status: 200, headers: { "Content-Type": "application/json" } },
      );
    }
    return new Response("Not found", { status: 404 });
  });

  // Replace global fetch with mock
  globalThis.fetch = mockFetch;

  try {
    // Call the function with a valid ID
    const userData = await fetchUserData("123");

    // Assert the results
    assertEquals(userData, { name: "John Doe", email: "john@example.com" });
    assertSpyCalls(mockFetch, 1);

    // Test error handling (optional)
    try {
      await fetchUserData("invalid");
      throw new Error("Should have thrown an error for invalid ID");
    } catch (error) {
      assertEquals((error as Error).message, "Failed to fetch user: 404");
    }
    assertSpyCalls(mockFetch, 2);
  } finally {
    // Restore the original fetch
    globalThis.fetch = originalFetch;
  }
});
```
## Using Test Steps for Setup and Teardown
For complex tests, you can use steps to organize test logic into discrete
sections, making tests more readable and maintainable. Steps also enable better
isolation between different parts of your test. With named steps you can
implement setup and teardown of the test conditions.
```ts
import { DOMParser } from "jsr:@b-fuze/deno-dom";
import { assertEquals, assertExists } from "jsr:@std/assert";

Deno.test("complex web component test", async (t) => {
  const doc = new DOMParser().parseFromString(
    "",
    "text/html",
  );
  const body = doc.createElement("body");
  const container = doc.createElement("div");
  body.appendChild(container);

  await t.step("initial rendering", () => {
    container.innerHTML = `<div id="app"></div>`;
    const app = container.querySelector("#app");
    assertExists(app);
    assertEquals(app.children.length, 0);
  });

  await t.step("adding content", () => {
    const app = container.querySelector("#app");
    assertExists(app);

    const header = doc.createElement("header");
    header.textContent = "My App";
    app.appendChild(header);

    assertEquals(app.children.length, 1);
    assertEquals(app.firstElementChild?.tagName.toLowerCase(), "header");
  });

  await t.step("responding to user input", () => {
    const app = container.querySelector("#app");
    assertExists(app);

    const button = doc.createElement("button");
    button.textContent = "Click me";
    button.id = "test-button";
    app.appendChild(button);

    let clickCount = 0;
    button.addEventListener("click", () => clickCount++);

    button.dispatchEvent(new Event("click"));
    button.dispatchEvent(new Event("click"));

    assertEquals(clickCount, 2);
  });

  await t.step("removing content", () => {
    const app = container.querySelector("#app");
    assertExists(app);

    const header = app.querySelector("header");
    assertExists(header);
    header.remove();

    assertEquals(app.children.length, 1); // Only the button should remain
  });
});
```
## Best Practices for Web Testing in Deno
1. Maintain isolation - each test should be self-contained and not depend on
   other tests.
2. Use names to show intent - descriptive names for tests make it clear what is
   being tested and give more readable output in the console.
3. Clean up after your tests - remove any DOM elements created during tests to
   prevent test pollution (see the sketch after this list).
4. Mock external services (such as APIs) to make tests faster and more reliable.
5. Organize tests into logical steps using `t.step()` for complex components.
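As a minimal sketch of practice 3, you can create elements in a `try` block and
remove them in `finally`, so a failing assertion never leaks DOM state into
later tests (the element names here are illustrative):

```ts
import { DOMParser } from "jsr:@b-fuze/deno-dom";
import { assertEquals } from "jsr:@std/assert";

Deno.test("cleanup example", () => {
  const doc = new DOMParser().parseFromString("", "text/html")!;
  const container = doc.createElement("div");

  try {
    const badge = doc.createElement("span");
    badge.textContent = "42";
    container.appendChild(badge);
    assertEquals(container.children.length, 1);
  } finally {
    // Remove everything we created, even if an assertion above failed
    container.innerHTML = "";
  }

  assertEquals(container.children.length, 0);
});
```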
## Running Your Tests
Execute your tests with the Deno test command:
```bash
deno test
```
For web tests, you might need additional permissions:
```bash
deno test --allow-net --allow-read --allow-env
```
🦕 By following the patterns in this tutorial, you can write comprehensive tests
for your web applications that verify both functionality and user experience.
Remember that effective testing leads to more robust applications and helps
catch issues before they reach your users.
---
# Building a word finder app with Deno
> A tutorial on creating a word search application with Deno. Learn how to build a web server, implement pattern matching, handle HTTP requests, and create an interactive web interface using Oak framework.
URL: https://docs.deno.com/examples/tutorials/word_finder
## Getting Started
In this tutorial we'll create a simple Word Finder web application using Deno.
No prior knowledge of Deno is required.
## Introduction
Our Word Finder application will take a pattern string provided by the user and
return all words in the English dictionary that match the pattern. The pattern
can include alphabetical characters as well as `_` and `?`. The `?` can stand
for any letter that isn't present in the pattern. `_` can stand for any letter.
For example, the pattern `c?t` matches "cat" and "cut". The pattern `go?d`
matches the words "goad" and "gold" (but not "good").

## Building the View
The function below renders the HTML that creates the simple UI displayed above.
You can specify a pattern and list of words to customize the HTML content. If a
pattern is specified then it will show up in the search text box. If the word
list is specified, then a bulleted list of words will be rendered.
```jsx title="render.js"
export function renderHtml(pattern, words) {
let searchResultsContent = "";
if (words.length > 0) {
let wordList = "";
for (const word of words) {
wordList += `
${word}
`;
}
searchResultsContent = `
Words found: ${words.length}
${wordList}
`;
}
return `
Deno Word Finder
Deno Word Finder
${searchResultsContent}
Instructions
Enter a word using _ and ? as needed for unknown characters. Using ? means to include letters that aren't already used (you can think of it as a "Wheel of Fortune" placeholder). Using _ will find words that contain any character (whether it's currently "revealed" or not).
For example, d__d would return:
dand
daud
dead
deed
dird
dodd
dowd
duad
dyad
And go?d would return:
goad
gold
`;
}
```
## Searching the Dictionary
We also need a simple search function which scans the dictionary and returns all
words that match the specified pattern. The function below takes a pattern and
dictionary and then returns all matched words.
```jsx title="search.js"
export function search(pattern, dictionary) {
// Create regex pattern that excludes characters already present in word
let excludeRegex = "";
for (let i = 0; i < pattern.length; i++) {
const c = pattern[i];
if (c != "?" && c != "_") {
excludeRegex += "^" + c;
}
}
excludeRegex = "[" + excludeRegex + "]";
// Let question marks only match characters not already present in word
let searchPattern = pattern.replace(/\?/g, excludeRegex);
// Let underscores match anything
searchPattern = "^" + searchPattern.replace(/\_/g, "[a-z]") + "$";
// Find all words in dictionary that match pattern
let matches = [];
for (let i = 0; i < dictionary.length; i++) {
const word = dictionary[i];
if (word.match(new RegExp(searchPattern))) {
matches.push(word);
}
}
return matches;
}
```
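To sanity-check the function, here is a small usage sketch with an inline
mini-dictionary (the word list here is illustrative; the server below loads
`/usr/share/dict/words` instead):

```jsx
import { search } from "./search.js";

const dictionary = ["goad", "gold", "good", "cat", "cut"];

// "?" must not match letters already in the pattern, so "good" is excluded
console.log(search("go?d", dictionary)); // [ "goad", "gold" ]

// "_" matches any letter
console.log(search("c_t", dictionary)); // [ "cat", "cut" ]
```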
## Running a Deno Server
[Oak](https://jsr.io/@oak/oak) is a framework that lets you easily set up a
server in Deno (analogous to JavaScript's Express) and we'll be using it to host
our application. Our server will use our search function to populate our HTML
template with data and then return the customized HTML back to the viewer. We
can conveniently rely on the `/usr/share/dict/words` file as our dictionary
which is a standard file present on most Unix-like operating systems.
```jsx title="server.js"
import { Application, Router } from "jsr:@oak/oak";
import { search } from "./search.js";
import { renderHtml } from "./render.js";
const dictionary = (await Deno.readTextFile("/usr/share/dict/words")).split(
"\n",
);
const app = new Application();
const port = 8080;
const router = new Router();
router.get("/", async (ctx) => {
ctx.response.body = renderHtml("", []);
});
router.get("/api/search", async (ctx) => {
const pattern = ctx.request.url.searchParams.get("search-text");
ctx.response.body = renderHtml(pattern, search(pattern, dictionary));
});
app.use(router.routes());
app.use(router.allowedMethods());
console.log("Listening at http://localhost:" + port);
await app.listen({ port });
```
We can start our server with the following command. Note we need to explicitly
grant access to the file system and network because Deno is secure by default.
```bash
deno run --allow-read --allow-net server.js
```
Now if you visit [http://localhost:8080](http://localhost:8080/) you should be
able to view the Word Finder app.
## Example Code
You can find the entire example code
[here](https://github.com/awelm/deno-word-finder).
---
# All-in-one tooling
> Learn about Deno's built-in developer tools. Watch how to use the integrated formatter, linter, and test runner to improve code quality without additional configuration or third-party dependencies.
URL: https://docs.deno.com/examples/videos/all-in-one_tooling
## Video description
In Node.js, before we can get started working on our project, we have to go
through a configuration step for things like linting, formatting, and testing.
Deno saves us a ton of time by including these tools natively. Let's take a look
at what's included with these built-in CLI tools.
## Transcript and code
Here we have a function called sing:
```typescript
function sing(phrase: string, times: number): string {
  return Array(times).fill(phrase).join(" ");
}
```
Now let's run the formatter:
```shell
deno fmt
```
The formatter automatically formats your code to follow Deno's rules and
conventions. Let's run it to clean up any formatting issues.
Deno even formats code snippets in markdown files. So anything that is enclosed
in triple backticks will be formatted when you run this command as well.
The `deno lint` command is used to analyze your code for potential issues. It’s
similar to ESLint but built into Deno.
```shell
deno lint --help
```
This will lint all of the JavaScript and TypeScript files in the current
directory and in subdirectories.
You can also lint specific files by passing their names
```shell
# lint specific files
deno lint myfile1.ts myfile2.ts
```
You can run it on specific directories
```shell
deno lint src/
```
And if you want to skip linting certain files, you can add a comment at the top
of the file, and Deno will know to skip it.
```javascript
// deno-lint-ignore-file
// deno-lint-ignore-file -- reason for ignoring
```
Deno also has some CLI commands for testing. In our directory here we have a
test file. Its name combines the function's name with `_test`.
```typescript title="sing_test.ts"
import { sing } from "./sing.ts";
import { assertEquals } from "jsr:@std/assert";

Deno.test("sing repeats a phrase", () => {
  const result = sing("La", 3);
  assertEquals(result, "La La La");
});
```
Now, we’ll run our tests using the `deno test` command. Deno automatically
discovers and runs test files.
```shell
deno test
```
Deno treats a file as a test file when its name matches one of these patterns:
`_test.ts`, `_test.js`, `_test.tsx`, `_test.jsx`, `.test.js`, `.test.ts`,
`.test.tsx`, `.test.jsx`.
You can also pass a specific test file:
```shell
deno test encourage.test.js
```
Or you can pass a specific directory path and Deno will search for test files in
there:
```sh
deno test ./tests/
```
You can even check code coverage. By default, when you run `deno test --coverage`
a coverage profile will be generated in the `/coverage` directory in the current
working directory.
```shell
deno test --coverage
```
From there you can run `deno coverage` to print a coverage report to standard
output:
```shell
deno coverage
```
As you can see, Deno's built-in tools are pretty cool. We don't have to spend a
whole day configuring these settings before we can start working on our project.
And we can format, lint, and test code without the need for third-party
dependencies.
---
# Compatibility with Node & npm
URL: https://docs.deno.com/examples/videos/backward_compat_with_node_npm
## Video description
Explore how to integrate Deno into your existing Node.js projects seamlessly. In
this video, we'll use Node.js standard libraries and npm modules with simple
prefixes, maintain compatibility with CommonJS projects, and make use of Deno's
features like dependency installation, formatting, and linting. Make the
transition of your Node.js projects effortlessly without the need for major
rewrites.
## Transcript and code
Making the choice to use Deno does not mean that we can't take advantage of the
Node.js ecosystem. It also doesn't mean that we have to rebuild all of our
Node.js projects from scratch.
Using the features of the standard library, or the npm ecosystem, is as simple
as adding a prefix. If you want to learn more about the Node APIs you can check
out [the Node API documentation](/api/node/).
Here's an example of Using Node's file system module with the promises API:
```typescript title="main.ts"
import fs from "node:fs/promises";

async function readFile() {
  try {
    const data = await fs.readFile("example.txt", "utf8");
    console.log(data);
  } catch (error) {
    console.error("Error reading file", error);
  }
}

readFile();
```
We read the file and we console log the data.
In Node, we would import `fs` from `fs/promises`, e.g.:
```typescript
import fs from "fs/promises";
```
In Deno, we just put the `node:` prefix in front of the import, e.g.:
```typescript
import fs from "node:fs/promises";
```
Then we run `deno main.ts` and opt in to read access when prompted. If we run
`deno main.ts` and allow
[read access](/runtime/fundamentals/security/), it's going to read from the file.
Updating any imports in our apps to use this `node:` specifier will enable any
code that uses Node.js built-ins.
Deno even supports CommonJS projects, which feels above and beyond I think
that's pretty cool!
What if we wanted to use an npm module, from say, Sentry, in our application.
We're going to use the **npm colon specifier** this time:
```typescript title="main.ts"
import * as Sentry from "npm:@sentry/node";

Sentry.init({ dsn: "https://example.com" });

function main() {
  try {
    throw new Error("This is an error");
  } catch (error) {
    Sentry.captureException(error);
    console.error("Error caught", error);
  }
}

main();
```
We'll run the command:
```sh
deno run main.ts
```
Which will ask for access to our home directory, and other places, and there we
go! We are capturing this error as well! This backwards compatibility is pretty
amazing.
Are you working on an existing Node.js project? With Deno 2 you can do that
too. You can use `deno install` to install dependencies, `deno fmt` for
formatting, and `deno lint` for linting; we can even run `deno lint --fix` to
fix any linting problems automatically.
And yes you can also run Deno directly, so for any of the scripts that are part
of a `package.json` just run `deno task` with the name of the script, eg:
```sh
deno task dev
```
We can use all of the code that we've written before without having to change it
or stretch it too much, Deno just makes it work!
---
# Browser APIs in Deno
> Explore web standard APIs in Deno's server-side environment. Learn how to use fetch, streams, text encoders, and other browser-compatible features while building modern applications with familiar web APIs.
URL: https://docs.deno.com/examples/videos/browser_apis_in_deno
## Video description
Deno wants to give developers the most browser-like programming environment
possible. Deno uses web standard APIs, so if you're familiar with building for
the web, then you're familiar with Deno. If not, when you learn how to use Deno,
you're also learning how to build for the web.
If you take a look at the docs, it gives you a good sense of what's available,
so we got things like Canvas and internationalization and messaging and storage
and streams, temporal, WebSockets, all of those things that we like to use on
the web, we're going to find them built in to Deno.
## Transcript and code
Let's take a look at `fetch` first. This works like you might think.
We're going to take a response from fetching an example API. Then we're going
to take that response and convert it to JSON as a new variable and console.log
it. Now, if we take a look at this in the terminal, we'll run deno with
`--allow-net`, so that we can opt in to running that fetch immediately.
```javascript title="main.ts"
const response = await fetch("https://snowtooth-hotel-api.fly.dev");
const data = await response.json();
console.log(data);
```
And we're done here. All the data comes back like we would expect.
```shell
deno add jsr:@std/streams
```
So let me show you what I mean by this. We're going to keep that fetch. We're
going to say if that response body value exists, we're going to create a new
variable called `transformedStream`, and we'll set that equal to
`response.body`. And here we're going to use the function called `pipeThrough`.
`pipeThrough` is a method in JavaScript that allows us to take the output of
the readable stream and pass it through to modify the stream's data. The first
thing we're going to do is decode the byte stream into a text stream, so we'll
say `new TextDecoderStream()`. Then we'll chain on another one of these
`pipeThrough` calls.
So this time we're going to split the text stream into lines. So we'll have
different lines coming back from our data. Now the text line stream is actually
coming from a library that we need to include.
```javascript
import { TextLineStream } from "@std/streams";
import { toTransformStream } from "@std/streams/to-transform-stream";

const response = await fetch("https://example.com/data.txt");

// Ensure the response body exists
if (response.body) {
  // Create a stream reader that processes the response body line by line
  const transformedStream = response.body
    // Decode the byte stream into a text stream
    .pipeThrough(new TextDecoderStream())
    // Split the text stream into lines
    .pipeThrough(new TextLineStream())
    // Get a reader to read the lines
    //.getReader();
    .pipeThrough(toTransformStream(async function* (src) {
      for await (const chunk of src) {
        if (chunk.trim().length === 0) {
          continue;
        }
        console.log(chunk);
        yield chunk;
      }
    }));

  // Create a reader to consume the transformed stream
  const reader = transformedStream.getReader();

  // Read and log each line of text from the stream
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    console.log(value); // Log each line
  }
}
```
## Setting Up Configuration
So we're going to say `deno add jsr:@std/streams`. That will create our
`deno.json` configuration file over here. There will be another video to dig
into this in a little more depth, but just know for now that this is including
any imports that are part of our project. So the transform stream is coming
together, but there's a few more steps.
## Using the Transform Stream
The next step is we use pipeThrough again. Now this time we're going to use
another function to transform stream, and this is going to come from standard
streams and specifically the function `toTransformStream`. Now this time we're
going to pass in here an asynchronous generator. We know that it's a generator
because we use that asterisk there and the body of this function is a loop, and
here we're going to say const chunk, so the little blob of data that we're
dealing with, chunk of source, which is the value that's passed in there.
We're going to say `console.log(chunk)`, and we're also going to yield the chunk
here. Okay, so what is this `console.log` doing for us? Let's go ahead and run
`deno --allow-net main.ts`. This is showing us that this is the top line of our
HTML document.
So we actually need a way to iterate through this, and we're going to do this by
creating a reader to consume this transformed stream. So let's get rid of our
console log here. Here we're going to create a value called reader that's going
to be set equal to `transformedStream.getReader()`. Now from here, what we can
do is create a little while loop here. So while that value is true.
We want to destructure `{value, done}` from `await reader.read()`. So again, we
can call the `.read()` method on that reader. Then we're going to say if `done`
is true, then we want to break out of the loop. Otherwise, we want to
`console.log(value)`.
Nice. So now we're going to see our HTML here printed line by line in our
console.
All right, so that is a quick example of using our text line stream. We can use
it in combination with fetch. And if you want to learn more about this API, you
can check out the documentation here. Deno offers us a truly browser-like
environment for using things like fetch, Web Workers, and much, much more.
Deno has made it really smooth to use these web-standard APIs in a way that
feels familiar and friendly.
---
# Build an API server with TypeScript
> A guide to creating a RESTful API server using Hono and TypeScript in Deno. Watch how to implement CRUD operations, handle routing, manage data persistence, and build a production-ready backend service.
URL: https://docs.deno.com/examples/videos/build_api_server_ts
## Video description
Use the light-weight Hono framework (spiritual successor to Express) to build a
RESTful API server that supports CRUD operations with a database.
## Transcript and code
If you’ve worked on a Node project in the past, you might have used Express to
set up a web server or to host an API. Let’s take a look at how we might do
something similar by using Hono, a small, simple framework that we can use with
any runtime, but we’re going to use it with Deno.
## Basic Hono Setup
We’ll add Hono to our project with [JSR](https://jsr.io):
```shell
deno add jsr:@hono/hono
```
That will then be added to the deno.json file.
Then in our main file, we’ll create the basic Hono setup.
```ts
import { Hono } from "@hono/hono";

const app = new Hono();

app.get("/", (c) => {
  return c.text("Hello from the Trees!");
});

Deno.serve(app.fetch);
```
Let’s run that `deno run --allow-net main.ts` and we’ll see it in the browser at
`localhost:8000`.
## CRUD Operations
Now that we’ve set up the simple server with Hono, we can start to build out
our database.
We’re going to use localStorage for this, but keep in mind that you can use any
persistent data storage with Deno - Postgres, SQL, wherever you like to store
your data.
Let’s start by creating a container for some data. We’ll start with an interface
that describes a tree type:
```ts
interface Tree {
  id: string;
  species: string;
  age: number;
  location: string;
}
```
Then we’ll create some data:
```ts
const oak: Tree = {
  id: "3",
  species: "oak",
  age: 3,
  location: "Jim's Park",
};
```
Then we’re going to create a few helper functions that will help us interact
with localStorage:
```ts
const setItem = (key: string, value: Tree) => {
  localStorage.setItem(key, JSON.stringify(value));
};

const getItem = (key: string): Tree | null => {
  const item = localStorage.getItem(key);
  return item ? JSON.parse(item) : null;
};
```
Now let’s use them:
```ts
setItem(`trees_${oak.id}`, oak);
const newTree = getItem(`trees_${oak.id}`);
console.log(newTree);
```
```shell
deno --allow-net main.ts
```
- `setItem` is adding the tree
- You can also use `setItem` to update the record -- if the key already exists
the value will be updated
```ts
const oak: Tree = {
  id: "3",
  species: "oak",
  age: 4,
  location: "Jim's Park",
};

localStorage.setItem(`trees_${oak.id}`, JSON.stringify(oak));
```
Ok, so now let’s use Hono’s routing to create some REST API routes now that we
understand how to work with these database methods:
```ts
app.post("/trees", async (c) => {
  const { id, species, age, location } = await c.req.json();
  const tree: Tree = { id, species, age, location };
  setItem(`trees_${id}`, tree);
  return c.json({
    message: `We just added a ${species} tree!`,
  });
});
```
To test this out we’ll send a curl request:
```shell
curl -X POST http://localhost:8000/trees \
-H "Content-Type: application/json" \
-d '{"id": "2", "species": "Willow", "age": 100, "location": "Juniper Park"}'
```
To prove that we created that tree, let’s get the data by its ID:
```ts
app.get("/trees/:id", (c) => {
  const id = c.req.param("id");
  const tree = getItem(`trees_${id}`);
  if (!tree) {
    return c.json({ message: "Tree not found" }, 404);
  }
  return c.json(tree);
});
```
To test that, let’s run a curl request for the data
```shell
curl http://localhost:8000/trees/1
```
Or you can go to it in the browser: `http://localhost:8000/trees/1`
We can update a tree of course. Kind of like before but we’ll create a route for
that:
```ts
app.put("/trees/:id", async (c) => {
  const id = c.req.param("id");
  const { species, age, location } = await c.req.json();
  const updatedTree: Tree = { id, species, age, location };
  setItem(`trees_${id}`, updatedTree);
  return c.json({
    message: `Tree has relocated to ${location}!`,
  });
});
```
And we’ll change the location because we’re going to PUT this tree somewhere
else:
```shell
curl -X PUT http://localhost:8000/trees/1 \
-H "Content-Type: application/json" \
-d '{"species": "Oak", "age": 8, "location": "Theft Park"}'
```
Finally, if we want to delete a tree, we can do so using Hono's delete function.
```ts
const deleteItem = (key: string) => {
  localStorage.removeItem(key);
};

app.delete("/trees/:id", (c) => {
  const id = c.req.param("id");
  deleteItem(`trees_${id}`);
  return c.json({
    message: `Tree ${id} has been cut down!`,
  });
});
```
We’ve used Deno in combination with Hono to build a little REST API for our
tree data. If we wanted to deploy this, we could do so with zero config on
[Deno Deploy](https://deno.com/deploy), or deploy to any cloud VPS like AWS,
GCP, or Digital Ocean with the
[official Docker image](https://github.com/denoland/deno_docker).
## Complete code sample
```ts
import { Hono } from "@hono/hono";

const app = new Hono();

interface Tree {
  id: string;
  species: string;
  age: number;
  location: string;
}

const setItem = (key: string, value: Tree) => {
  localStorage.setItem(key, JSON.stringify(value));
};

const getItem = (key: string): Tree | null => {
  const item = localStorage.getItem(key);
  return item ? JSON.parse(item) : null;
};

const deleteItem = (key: string) => {
  localStorage.removeItem(key);
};

const oak: Tree = {
  id: "3",
  species: "oak",
  age: 3,
  location: "Jim's Park",
};

setItem(`trees_${oak.id}`, oak);
const newTree = getItem(`trees_${oak.id}`);
console.log(newTree);

app.get("/", (c) => {
  return c.text("Hello from the Trees!");
});

app.post("/trees", async (c) => {
  const { id, species, age, location } = await c.req.json();
  const tree: Tree = { id, species, age, location };
  setItem(`trees_${id}`, tree);
  return c.json({
    message: `We just added a ${species} tree!`,
  });
});

app.get("/trees/:id", (c) => {
  const id = c.req.param("id");
  const tree = getItem(`trees_${id}`);
  if (!tree) {
    return c.json({ message: "Tree not found" }, 404);
  }
  return c.json(tree);
});

app.put("/trees/:id", async (c) => {
  const id = c.req.param("id");
  const { species, age, location } = await c.req.json();
  const updatedTree: Tree = { id, species, age, location };
  setItem(`trees_${id}`, updatedTree);
  return c.json({
    message: `Tree has relocated to ${location}!`,
  });
});

app.delete("/trees/:id", (c) => {
  const id = c.req.param("id");
  deleteItem(`trees_${id}`);
  return c.json({
    message: `Tree ${id} has been cut down!`,
  });
});

Deno.serve(app.fetch);
```
---
# Build a Command Line Utility
URL: https://docs.deno.com/examples/videos/command_line_utility
## Video description
Learn to build a command line tool using Deno's standard library. You'll explore
how to parse arguments, handle flags, and provide helpful messages using utility
functions. Follow along as we build a ski resort information app, handle errors
gracefully, and compile the script into an executable for multiple platforms,
including Windows, MacOS, and Linux. By the end of this video, you'll understand
how to take full advantage of Deno's features to develop and distribute your own
CLI tools.
## Transcript and code
### An introduction to Deno's Standard Library
If you want to create a command line tool you can do so with
[Deno's standard Library](https://docs.deno.com/runtime/fundamentals/standard_library/).
It contains dozens of stable libraries with helpful utility functions that can
cover a lot of the basics when working with JavaScript in the web. The standard
Library also works in multiple runtimes and environments like Node.js and the
browser.
### Setting up a command line tool
We're going to create a command line tool, and then we're going to compile it so
it can be used on a number of different platforms as an executable.
Create a new file called `main.ts` and parse these arguments (remember we can
always grab them from `Deno.args`), and then we'll console log them:
```typescript title="main.ts"
const location = Deno.args[0];
console.log(`Welcome to ${location}`);
```
Now if I run `deno main.ts` and then I provide the name of a ski resort like
Aspen that's going to plug that into the string, eg:
```sh
deno main.ts Aspen
## Welcome to Aspen
```
### Installing and Using Standard Libraries
Now lets install one of those standard libraries. In the terminal run:
```sh
deno add jsr:@std/cli
```
This is going to install the [cli library](https://jsr.io/@std/cli), from the
Deno standard library, into our project so we can make use of some of its
helpful functions.
The helpful function that we'll use here is called `parseArgs`. We can import
that with:
```typescript
import { parseArgs } from "jsr:@std/cli/parse-args";
```
Then we can update our code to use this function, passing the argument and
removing the zero. Our `main.ts` file now looks like this:
```typescript title="main.ts"
import { parseArgs } from "jsr:@std/cli/parse-args";
const args = parseArgs(Deno.args);
console.log(args);
```
Let's go ahead and try this out, in your terminal run:
```sh
deno main.ts -h Hello
```
We can see that `Hello` has been added to our args object. All right, so that's
working as expected.
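To make the parsed shape concrete, here's a quick sketch of what `parseArgs`
returns for that input (positional arguments land under the `_` key):

```typescript
import { parseArgs } from "jsr:@std/cli/parse-args";

// Equivalent to running: deno main.ts -h Hello
const args = parseArgs(["-h", "Hello"]);
console.log(args); // { _: [], h: "Hello" }
```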
### Building the Ski Resort Information App
Now our app is going to be a ski resort information app, so we want to populate
our app with a little bit of data to start. We're going to create a value called
`resorts`. This is an object where each resort has a few keys: `elevation`,
`snow`, and `expectedSnowfall`. Then, copying and pasting so we can move a
little more quickly, we'll set `Aspen`'s `elevation` to `7945`, `snow` to
`packed powder`, and `expectedSnowfall` to `15`. Then let's add one more of
these: we'll set `Vail`'s `elevation` to `8120` and its `expectedSnowfall` to
`25`.
```typescript title="main.ts"
const resorts = {
Whistler: {
elevation: 2214,
snow: "Powder",
expectedSnowfall: "20",
},
Aspen: {
elevation: 7945,
snow: "packed powder",
expectedSnowfall: 15,
},
Vail: {
elevation: 8120,
snow: "packed powder",
expectedSnowfall: 25,
},
};
```
We have a few different resorts here. Ultimately we want to be able to run our
app with a command line argument that's going to provide the resort name and
then have that CLI tool return the information about that resort.
### Handling Command Line Arguments
So let's go ahead and pass another object to `parseArgs`. Here we're going to
define an alias: we're going to say "if I pass the `r` flag, assume it means
`resort`". Then let's also use a default here: we'll set the `default`
`resort` to `Whistler`:
```typescript title="main.ts"
const args = parseArgs(Deno.args, {
alias: {
resort: "r",
},
default: {
resort: "Whistler",
},
});
```
From here we can set up a const called `resortName` and set it to `args.resort`.
Then get the resort, with `resorts[resortName]` (we'll fix that type error in a
second), and update the console log:
```typescript title="main.ts"
const resortName = args.resort;
const resort = resorts[resortName];
console.log(
`Resort: ${resortName} Elevation: ${resort.elevation} feet Snow: ${resort.snow} Expected Snowfall: ${resort.expectedSnowfall}`,
);
```
To test this out we can use:
```sh
deno main.ts -r Aspen
```
Which will give us a printout of all of Aspen's details.
We can also run this without any arguments which should give the details for
Whistler, because that was set as default:
```sh
deno main.ts
```
The same goes for the full flag name, so we could say:
```sh
deno main.ts --resort Vail
```
And that should give us those details as well.
### Improving Error Handling
Now if I try to run this with a resort that's not there, let's say `Bachelor`,
there's an error, and it's kind of an ugly one: the code hits the moment where
it tries to look that resort up and can't find it. We can make this a little
nicer by saying that if there's no `resort` in our data set that matches the
input, we'll log a console error saying
`resort name not found, try Whistler, Aspen, or Vail`, and then we'll hop out
of the process with `Deno.exit`:
```typescript title="main.ts"
if (!resort) {
console.error(
`Resort ${resortName} not found. Try Whistler, Aspen, or Vail`,
);
Deno.exit(1);
}
```
### Fixing the types
Okay, so this isn't looking so good. We can look at the problems here in
TypeScript: it's telling us that this implicitly has an `any` type. You can
look up more about this error, but I'll show you how to fix this one. Update
the type of `resortName` to be a key of `resorts`:
```typescript title="main.ts"
const resortName = args.resort as keyof typeof resorts;
```
What this does is take the value of `args.resort` and assert that it is a
valid key of our data.
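To see why that works, here's a small sketch of the types involved (the object
mirrors the `resorts` data above): `keyof typeof resorts` resolves to a union
of the object's keys, so TypeScript can safely index into `resorts`.

```typescript
const resorts = { Whistler: {}, Aspen: {}, Vail: {} };

// Resolves to the union "Whistler" | "Aspen" | "Vail".
type ResortName = keyof typeof resorts;

const ok: ResortName = "Aspen"; // fine
// const bad: ResortName = "Bachelor"; // type error: not a key of `resorts`
```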
### Adding Help and Color Output
Let's take this one more step. We're going to say if `args.help`, we will
console log a little message telling our users "hey, this is actually how you
use this" if they do happen to ask for help at any moment. We'll update the
alias here to say `help` is `h`, and finally we'll make sure to call
`Deno.exit` so that we jump out of the process as soon as we're done with that:
```typescript title="main.ts"
const args = parseArgs(Deno.args, {
alias: {
resort: "r",
help: "h",
},
default: {
resort: "Whistler",
},
});
...
if (args.help) {
console.log(`
usage: ski-cli --resort
-h, --help Show Help
-r, --resort Name of the ski resort (default: Whistler)
`);
Deno.exit();
}
```
You can test your help setup by running the following:
```sh
deno main.ts -h
```
Next let's log our results here in color. Deno's `console.log` supports CSS
using the `%c` syntax.
This will take the text and apply the style that we pass in as the second
argument to `console.log()`. Here we could set `color:blue` as the second
argument, eg:
```typescript title="main.ts"
console.log(`
%c
Resort: ${resortName}
Elevation: ${resort.elevation} feet
Snow: ${resort.snow}
Expected Snowfall: ${resort.expectedSnowfall}
`, "color:blue");
```
Then run the program again:
```sh
deno main.ts -r Vail
```
You should see everything logged in a blue color. How cool is that?!
### Compiling the Tool for Different Platforms
I want other people to be able to enjoy the app too. Compiling this tool into an
executable is pretty easy with Deno. As you might imagine, the command for
running this is `deno compile` and then the name of our script. This is going to
compile the code to the project as an executable:
```sh
deno compile main.ts
```
You should see an executable in your project folder, named after the project
(here, `MyDenoProject`). Now you can run it with `./`, eg:
```sh
./MyDenoProject --resort Aspen
```
So this is really great for me, but what happens if I want to share this with
other platforms? All you would need to do is run `deno compile` again, this
time passing a `--target` flag for the platform you want to compile for.
Let's say we wanted to compile it for Windows we'd use:
```sh
deno compile --target x86_64-pc-windows-msvc --output ski-cli-windows main.ts
```
or for a Mac:
```sh
deno compile --target x86_64-apple-darwin --output ski-cli-macos main.ts
```
or for Linux:
```sh
deno compile --target x86_64-unknown-linux-gnu --output ski-cli-linux main.ts
```
You can see all of the
[options for compiling your apps](/runtime/reference/cli/compile/) in the Deno
documentation. There are a lot of different flags that you can use for your own
specific use cases.
To recap: we always have access to the Deno Standard Library and all of its
helpful functions. If we want to create a command line utility, like we've done
here, we have the [`Deno` global namespace](/api/deno/~/Deno) for reading
arguments, we can parse those arguments using the `parseArgs` function from the
Standard Library's CLI package, and we can compile for all platforms so that
our app can be consumed anywhere.
---
# Configuration with Deno JSON
URL: https://docs.deno.com/examples/videos/configuration_with_deno_json
## Video description
In this video, we use the deno.json file to manage dependencies and
configurations in your Deno projects. Learn how to create and configure tasks
like 'start' and 'format' to streamline your workflow. We'll also explore
customizing formatting and linting rules, and understand the concept of import
maps for cleaner imports. Then we'll take a look at compatibility between Deno's
deno.json and Node's package.json for seamless project integration.
## Transcript and code
### Introduction to JSR Package Management
Every time we’ve installed a package with JSR it’s been placed into this
`deno.json` file as an import.
```json title="deno.json"
{
"imports": {
"@eveporcello/sing": "jsr:@eveporcello/sing@^0.1.0"
}
}
```
### Creating and Running Tasks
So, we can use this file to manage our dependencies, but we can also use it for
a bunch of other configuration tasks. Specifically, to get us started, let’s
configure some literal tasks. We’re going to create a `"start"` task. This will
run `deno --allow-net main.ts`.
```json title="deno.json"
{
"tasks": {
"start": "deno --allow-net main.ts"
},
"imports": {
"@eveporcello/sing": "jsr:@eveporcello/sing@^0.1.0"
}
}
```
Think of this like a shortcut for running a command. So we could say
```sh
deno task start
```
This is going to run that task; the same goes for
```sh
deno run start
```
which will work as well.
Let’s add another one of these, we’re going to call it `"format"`. So, this will
combine these two different things, we’ll say `deno fmt && deno lint`.
```json title="deno.json"
{
"tasks": {
"start": "deno --allow-net main.ts",
"format": "deno fmt && deno lint"
},
"imports": {
"@eveporcello/sing": "jsr:@eveporcello/sing@^0.1.0"
}
}
```
So let’s run
```sh
deno task format
```
and then this will run everything for us.
### Formatting and Linting Configuration
You can also use this file to set configuration for these types of commands.
We can say `"fmt"` and then use a couple of different rules; the Formatting
section of the documentation [here](/runtime/fundamentals/configuration/#formatting)
will walk you through them. There are several different options that you can
take advantage of. Let's go ahead and set `"useTabs"` to `true`, and then
`"lineWidth": 80`.
```json title="deno.json"
{
"tasks": {
"start": "deno --allow-net main.ts",
"format": "deno fmt && deno lint"
},
"fmt": {
"useTabs": true,
"lineWidth": 80
},
"imports": {
"@eveporcello/sing": "jsr:@eveporcello/sing@^0.1.0"
}
}
```
Now if we run
```sh
deno task format
```
This will run everything with those rules.
Linting can be set up as well. We'll say `"lint"`. This is also in the
documentation, right above the formatting section; the Linting docs
[here](/runtime/fundamentals/configuration/#linting) will take you through all
the different configuration options depending on your project's needs. In this
case, let's add a key for `"rules"`; rules can be included or excluded.
```json title="deno.json"
{
"tasks": {
"start": "deno --allow-net main.ts",
"format": "deno fmt && deno lint"
},
"lint": {
"rules": {}
},
"fmt": {
"useTabs": true,
"lineWidth": 80
},
"imports": {
"@eveporcello/sing": "jsr:@eveporcello/sing@^0.1.0"
}
}
```
Let’s say `// @ts-ignore`, and we won’t add any comments after it.
```typescript title="main.ts"
// @ts-ignore
import { sing } from "jsr:@eveporcello/sing";
console.log(sing("sun", 3));
```
Adding this directive to the top of a file makes TypeScript ignore any type
errors in that file, so it doesn't matter whether the code adheres to the
rules. But, if I run
```sh
deno task format
```
again, this is going to tell me, "Hey, you can't do that. You can't use these
directives without a comment." That's one of the lint rules. But there is a way
out of that trap (maybe you don't want a way out, but I'll show you how
anyway). We'll say `"exclude": ["ban-ts-comment"]`.
```json title="deno.json"
{
"tasks": {
"start": "deno --allow-net main.ts",
"format": "deno fmt && deno lint"
},
"lint": {
"rules": {
"exclude": ["ban-ts-comment"]
}
},
"fmt": {
"useTabs": true,
"lineWidth": 80
},
"imports": {
"@eveporcello/sing": "jsr:@eveporcello/sing@^0.1.0"
}
}
```
Then, we’ll try to run
```sh
deno task format
```
again. We should see that it runs appropriately, and we're getting away with
our `// @ts-ignore`.
### Handling Import Maps
There’s also a concept in this `deno.json` file of the import map. So, right now
we’re using `"@eveporcello/sing"` as the import, but it’s also possible to make
this a little bit shorter. We could use just `"sing"` for this.
```json title="deno.json"
{
"tasks": {
"start": "deno --allow-net main.ts",
"format": "deno fmt && deno lint"
},
"lint": {
"rules": {
"exclude": ["ban-ts-comment"]
}
},
"fmt": {
"useTabs": true,
"lineWidth": 80
},
"imports": {
"sing": "jsr:@eveporcello/sing@^0.1.0"
}
}
```
Now if we replace this whole thing with just `"sing"`
```typescript title="main.ts"
// @ts-ignore
import { sing } from "sing";
console.log(sing("sun", 3));
```
and we run
```sh
deno main.ts
```
This should work as expected. The short name `"sing"` is what's called a "bare
specifier": the import map maps it to the JSR package, which allows for a nice,
clean import if we'd like one.
If you want to learn more about these different options, check out the docs
[here](/runtime/fundamentals/configuration/) on configuration. Deno also
supports a `package.json` for compatibility with Node.js projects. Now, if a
`deno.json` and a `package.json` are both found in the same directory, Deno
will understand the dependencies specified in both. So, a lot of options here,
but this is going to be extremely useful as you work on your Deno projects.
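As a quick sketch of that compatibility (the package names and versions here
are only illustrative), a project can carry both files side by side and import
from either source:

```json title="deno.json"
{
  "imports": {
    "@std/path": "jsr:@std/path@^1.0.0"
  }
}
```

```json title="package.json"
{
  "dependencies": {
    "chalk": "^5.3.0"
  }
}
```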
---
# Benchmarking with Deno bench
> Learn how to measure code performance using Deno's built-in benchmarking tool. Discover baseline comparisons, grouped benchmarks, and precise measurement techniques for optimizing your TypeScript and JavaScript code.
URL: https://docs.deno.com/examples/videos/deno_bench
## Video description
[`deno bench`](/runtime/reference/cli/bench/) is an easy-to-use benchmarking
tool that ships with the Deno runtime. Here are 3 ways that can level up how you
use deno bench.
## Transcript and code
What's up everyone, it's Andy from Deno, and today we're going to talk about
`deno bench`. This video is a continuation of our **Deno tool chain** series.
`deno bench` is a benchmarking tool that makes it easy to measure performance,
and if you're coming from Node, `deno bench` saves you time from finding and
integrating a third party benchmarking tool.
### Baseline Summaries
Today we're going to cover some cool use cases with `deno bench`. Most of the
time we'll want to benchmark two or more ways of doing the same thing. Here
we're comparing parsing a URL from a string, parsing a URL with a path, and
parsing a URL with a path and a URL object:
```typescript title="main_bench.ts"
Deno.bench("url parsing", () => {
new URL("https://deno.land");
});
Deno.bench("url parsing with path", () => {
new URL("./welcome.ts", "https://deno.land/");
});
const BASE_URL = new URL("https://deno.land");
Deno.bench("url parsing with a path and a URL object", () => {
new URL("./welcome.ts", BASE_URL);
});
```
Then run:
```sh
deno bench main_bench.ts
```
The output shows how long each benchmark takes in nanoseconds, as well as how
many iterations per second. It also includes the CPU chip and the runtime
version.
The results indicate that the first approach is the fastest. But what if you
want a clearer way to show exactly how much faster it is? We can pass the
`baseline: true` option into the benchmark:
```typescript title="main_bench.ts"
Deno.bench("url parsing", { baseline: true }, () => {
new URL("https://deno.land");
});
...etc
```
When we run it there is now a summary section at the bottom of the output that
shows you exactly how much faster the benchmarks are compared to the baseline.
If you have multiple benchmarks in the same file, you can organize the output
using the `group` option. If we add a fourth benchmark for splitting text and
run the file, we'll see all of the results compared together in one summary,
which isn't very helpful. Instead we can add a group of `url` to the URL
benchmarks and a group of `text` to the text benchmark:
```typescript title="main_bench.ts"
Deno.bench("url parsing", { baseline: true, group: "url" }, () => {
new URL("https://deno.land");
});
...etc
const TEXT = "Lorem ipsum dolor sit amet";
Deno.bench("split on whitespace", { group: "text" }, () => {
TEXT.split(" ");
});
```
Now you will see our results are organized by group.
### More specific benchmarking with `b.start()` and `b.end()`
Did you know that you can be specific about when to start and stop measuring
your benchmarks? Here's a new benchmark file where we plan to benchmark parsing
the first word of the `Releases.md` file, which contains all the release notes
from the Deno runtime project over the past 5 years. It's over 6,000 lines long!
```typescript title="file_bench.ts"
const FILENAME = "./Releases.md";
Deno.bench("get first word", () => {
const file = Deno.readTextFileSync(FILENAME);
const firstWord = file.split(" ")[0];
});
```
Running `deno bench` shows that this operation takes a long time, but that's
mostly because the benchmark includes reading the file into memory. So how do
we measure just parsing the first word? If we use the bench context parameter,
we have access to the `start()` and `end()` functions.
```typescript title="file_bench.ts"
const FILENAME = "./Releases.md";
Deno.bench("get first word", (b) => {
b.start();
const file = Deno.readTextFileSync(FILENAME);
const firstWord = file.split(" ")[0];
b.end();
});
```
Now when we run `deno bench`, you'll notice that this benchmark measures only
parsing the first word, not reading the file into memory.
This was just a glimpse into `deno bench`. To explore the other options
available to you, you can use your editor to `ctrl+click` through to the bench
definitions, or look at the
[`deno bench` documentation](/runtime/reference/cli/bench/). There are some
other options that you can pass such as
[`only`](/runtime/reference/cli/bench/#bench-definition-filtering) and
[`ignore`](/runtime/reference/cli/bench/#options-ignore).
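For example, here's a sketch of how those options look in a bench definition
(the skip condition is illustrative):

```typescript title="main_bench.ts"
// `ignore` accepts any boolean expression; this skips the benchmark on Windows.
Deno.bench("posix-style url parsing", {
  ignore: Deno.build.os === "windows",
}, () => {
  new URL("file:///tmp/example.txt");
});

// While iterating locally, `only: true` runs just this benchmark; the overall
// run is then reported as failed so a stray `only` can't sneak into CI.
Deno.bench("url parsing", { only: true }, () => {
  new URL("https://deno.land");
});
```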
---
# Deno coverage
> Learn how to measure test coverage in Deno projects. Watch how to generate coverage reports, analyze code coverage metrics, and use the HTML report feature.
URL: https://docs.deno.com/examples/videos/deno_coverage
## Video description
We updated `deno coverage` in 1.39 with a better output and HTML generation.
## Transcript and code
If you're using `deno test`, have you checked out `deno coverage`?
`deno coverage` is a great way to see how much test coverage you have, just add
the coverage flag to Deno test:
```sh
deno test --coverage
```
This will save coverage data to the `./coverage` directory. Then run the coverage command:
```sh
deno coverage ./coverage
```
to see a coverage report.
In Deno 1.39, `deno coverage` was updated in two ways: first, it now outputs a
concise summary table, and second, if you add the `--html` flag:
```sh
deno coverage ./coverage --html
```
the coverage tool generates static HTML so that you can explore your coverage in
a browser.
We've got more plans for `deno coverage`, like simplifying the steps into a
single command, and more.
---
# Your Deno Dev Environment
> Learn how to set up your Deno development environment. Watch how to install Deno, configure VS Code, enable type checking and autocomplete, and optimize your TypeScript development workflow.
URL: https://docs.deno.com/examples/videos/deno_dev_environment
## Video description
How to set up your development environment for Deno
## Transcript and code
To install Deno, we'll run curl. So we're going to grab this curl command
[from the documentation](https://docs.deno.com/runtime/getting_started/installation/).
```shell
curl -fsSL https://deno.land/install.sh | sh
```
We'll go to our terminal, paste that in, and hit enter; this will install the
most recent version of Deno in the background. When I do this, it'll ask me if
I want to add Deno to the path. We'll go ahead and say yes, and you can also
set up shell completions here.
And now we have installed this to our path. If you're on Windows, there are
installation instructions for you here in the documentation.
To generate a Deno project from scratch, let's go ahead and type
`deno init MyDenoProject`. This is going to create that folder for me. I can
then cd into that folder. Now if we open this up in VSCode, this has created a
`deno.json` file, a `main_test.ts` file, and a `main.ts` file. So this is a
quick way of getting started.
If you're using VSCode, there are a few configuration options that you'll want
to set up. So we'll go up here to code and settings. We'll select extensions. So
over here in your extensions, you're going to search for Deno, and then we'll
[select the one that has been created by Denoland here](https://marketplace.visualstudio.com/items?itemName=denoland.vscode-deno).
```json
{
  "deno.enable": true
}
```
We're going to run install, and this will install the Denoland extension. Next
we'll type `command shift P`. This will open up our command palette here, and we
can type `deno initialize workspace configuration`. We're going to go ahead and
click that. That's going to generate this VSCode folder with settings. This is
going to enable hints and autocomplete and all of that right here in the code
editor. So if I start to type anything from `deno serve`, for example, that's
going to give me a look at what the expected parameters of that function are.
That's very helpful.
This is also going to give us hints when importing. So we'll say import star as
path from JSR at standard slash path.
```javascript
import * as path from "jsr:@std/path";
```
So all of them are listed there. Pretty cool. And then if we wanted to do
something for a remote module, something like OpenAI from
[https://deno.land/x/openai@v4.67.1/mod.ts](https://deno.land/x/openai@v4.67.1/mod.ts)
(or now, even better, from [JSR](https://jsr.io/@openai/openai))
```javascript
import OpenAI from "jsr:@openai/openai";
```
This is then going to give us the standard library as well as `x` for all of
those third party APIs. So you can actually drill down into OpenAI from here.
You just need to select the version, so we'll say OpenAI at v4.67.1. And then
you can even drill down into that individual file.
If you take a look at
[the documentation
here, this will guide you through the process of setting up your own unique
environment](/runtime/getting_started/setup_your_environment/). There are
[shell completions](/runtime/getting_started/setup_your_environment/#shell-completions)
that you can add, so depending on which CLI tool you're using, you can set this
up over here, whether it's Bash or PowerShell or ZShell or whatever it might be.
---
# Formatting with Deno fmt
URL: https://docs.deno.com/examples/videos/deno_fmt
## Video description
A quick cut of tips and tricks on
[Deno's built in formatter, `deno fmt`](/runtime/reference/cli/fmt/).
What's up everyone, Andy from Deno here, back for another episode of the **Deno
tool chain series** where we dig a little deeper into the Deno subcommands.
Today we're going to look at `deno fmt`, our built-in formatter that's
customizable, performant and flexible enough to fit into any workflow. Let's
dive right in.
### What is `deno fmt`?
`deno fmt` will format these file extensions:
- `.js`
- `.jsx`
- `.ts`
- `.tsx`
- `.json`
- `.jsonc`
- `.md`
- `.markdown`
The simplest way to use `deno fmt` is to run it from the command line:
```sh
deno fmt
```
You could even pipe in a string or file:
```sh
echo ' console.log( 5 );' | deno fmt -
## console.log(5);
```
You can also use the `--check` flag which will check if your code has been
formatted by `deno fmt`. If it's not formatted, it will return a nonzero exit
code:
```sh
echo ' console.log( 5 );' | deno fmt --check -
## Not formatted stdin
```
This is useful in CI where you want to check if the code is formatted properly.
### Editor integration
`deno fmt` also works in your editor, like VS Code. Set `deno fmt` as your
default formatter in your editor's settings, eg for VS Code:
```json title=".vscode/settings.json"
{
"editor.defaultFormatter": "denoland.vscode-deno",
"editor.formatOnSave": true
}
```
You can also set format on save to `true`, as shown above.
### Multiple ways to format
In some situations, there are multiple ways to format, and Deno lets you
decide how you want to format. For example, an object can be formatted
horizontally or vertically, depending on where you put the first item. Eg:
```typescript
const foo = { bar: "baz", qux: "quux" };
// or
const foo = {
bar: "baz",
qux: "quux",
};
```
Same with an array. You can format it horizontally or vertically depending on
where you put your first item. Eg:
```typescript
const foo = ["bar", "baz", "qux"];
// or
const foo = [
"bar",
"baz",
"qux",
];
```
### Remove escaped quotes
`deno fmt` can also reduce the escaped characters in your strings. For example,
if you have a string with escaped quotes, `deno fmt` will remove them:
```typescript
console.log("hello \"world\"");
```
will be formatted to:
```typescript
console.log('hello "world"');
```
### Ignoring lines or files
What if you want `deno fmt` to skip a line or a file? You can use the
`//deno-fmt-ignore` comment to tell `deno fmt` to skip the following line, eg:
```typescript
console.log("This line will be formatted");
// deno-fmt-ignore
console.log("This line will not be formatted");
```
To tell `deno fmt` to skip a file, you can use the `// deno-fmt-ignore-file`
comment at the top of the file to ignore. Or you can use your `deno.json` config
file under the `fmt` field:
```json
{
"fmt": {
"exclude": ["main.ts", "*.json"]
}
}
```
### Formatting markdown
`deno fmt` also works on markdown files. You can choose how to format prose with
the option `"proseWrap"` set to either `always`, `never` or `preserve`, eg:
```json
{
"fmt": {
"proseWrap": "always"
}
}
```
`deno fmt` can also format numbered lists. If you start a numbered list with
two ones, for example:
```markdown title="list.md"
1. First
1. Second
1. Third
1. Fourth
1. Fifth
```
The formatter will automatically renumber the list to all ones, but when you
render it, the numbered list will display properly!
If that's weird to you, you can also put `1` and then `2` and then run
`deno fmt`, which will number the rest of the list correctly for you.
`deno fmt` will also format code blocks of JavaScript and TypeScript in your
markdown. It can even format markdown in markdown!
### Formatter options
Let's take a look at
[all the options available in `deno fmt`](/runtime/reference/cli/fmt/#formatting-options).
Note that all these options also have corresponding flags in the CLI.
```json
{
"fmt": {
"useTabs": true,
"lineWidth": 80,
"indentWidth": 2,
"semiColon": false,
"singleQuote": true,
"proseWrap": "always",
"exclude": ["**/logs.json"]
}
}
```
- `--use-tabs`
- `--line-width`
- `--indent-width`
- `--no-semicolons`
- `--single-quote`
- `--prose-wrap`
- `--ignore=`
### `deno fmt`'s Performance
`deno fmt` is really fast, especially on subsequent runs, due to caching, which
is enabled by default. Here's the first run that we did on Deno's standard
library. Let's run it again! The system time shows that the second run is a
third faster. If we update a file and run it again, it's still fast, since
`deno fmt` checks only the changed file. Let's compare this to Prettier (a
popular Node formatter), running Prettier with its caching flag enabled. Even
on a second run, `deno fmt` is almost 20 times faster!
---
# Getting started with Deno test
URL: https://docs.deno.com/examples/videos/deno_test
---
# Deploy Deno to AWS Lambda
URL: https://docs.deno.com/examples/videos/deploy_deno_to_aws_lambda
## Video description
Show how to deploy Deno applications to AWS Lambda (using a community runtime
for Lambda).
## Transcript and code
### Run Deno on AWS Lambda
Running Deno on AWS Lambda? Sure, you can do that. With AWS Lambda, serverless
pricing can be cheaper than a VPS, and it can be easier to maintain because it
auto-scales behind the scenes.
To make that work, we're going to use the aws-lambda-adapter project to make
sure that our `Deno.serve` function runs as we expect it to. This is a popular
approach to deploying to AWS Lambda due to control, flexibility, and
consistency.
There’s a nice article on this on the blog if you want to learn more about these
considerations.
Let’s take a look at the Dockerfile that we can use to make this work:
```dockerfile
# Set up the base image
FROM public.ecr.aws/awsguru/aws-lambda-adapter:0.9.0 AS aws-lambda-adapter
FROM denoland/deno:bin-2.0.2 AS deno_bin
FROM debian:bookworm-20230703-slim AS deno_runtime
COPY --from=aws-lambda-adapter /lambda-adapter /opt/extensions/lambda-adapter
COPY --from=deno_bin /deno /usr/local/bin/deno
ENV PORT=8000
EXPOSE 8000
RUN mkdir /var/deno_dir
ENV DENO_DIR=/var/deno_dir
# Copy the function code
WORKDIR "/var/task"
COPY . /var/task
# Warmup caches
RUN timeout 10s deno -A main.ts || [ $? -eq 124 ] || exit 1
CMD ["deno", "-A", "main.ts"]
```
Then we’ll build the Docker image.
```shell
docker build -t my-deno-project .
```
Now we need to start interfacing with AWS. If this is your first time working
with AWS, you can create an account:
[https://aws.amazon.com](https://aws.amazon.com)
And if you haven’t installed the AWS CLI, you can do that too. You know if it’s
installed by typing `aws` into your Terminal or Command Prompt. If that returns
an error you can install with homebrew or follow the instructions through the
website:
[https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
```shell
brew install awscli
```
Then you’ll want to make sure that you’re set up with `aws configure`.
Everything that it is looking for is in the
[Security Credentials section of the
AWS Console](https://us-east-1.console.aws.amazon.com/ecr/private-registry/repositories).
### Use the CLI to create an ECR
ECR is a registry service where we can push our Docker container:
```shell
aws ecr create-repository --repository-name my-deno-project --region us-east-1 | grep repositoryUri
```
This outputs a URI for the repo, e.g.
`"repositoryUri": "<account-id>.dkr.ecr.us-east-1.amazonaws.com/my-deno-project"`
(where `<account-id>` is your AWS account ID).
Then log in using the URI that comes back
```shell
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-1.amazonaws.com/my-deno-project
```
Tag the image
```shell
docker tag my-deno-project:latest <account-id>.dkr.ecr.us-east-1.amazonaws.com/my-deno-project:latest
```
Then Push the image to ECR
```shell
docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/my-deno-project:latest
```
Now we need to create a function that will host our app:
- [https://us-east-1.console.aws.amazon.com/lambda/home?region=us-east-1#/begin](https://us-east-1.console.aws.amazon.com/lambda/home?region=us-east-1#/begin)
- Think of a function as being a place where the app is going to run
- Select Create a Function
- Select Container Image Radio Button
- Call the function `tree-app`
- Select the app from the Browse Containers button
- Halfway down the page select “Configuration”
- Select `Function URL`
- Create a URL
- Select None so the endpoint is public
- Select Save
- Check the app in the browser
One thing to keep in mind with Lambda functions is cold start performance. Cold
starts happen when AWS needs to initialize your function, and it can cause
slight delays. There’s a pretty cool
[blog here that goes through Deno vs. other
tools](https://deno.com/blog/aws-lambda-coldstart-benchmarks).
Using Deno with AWS Lambda functions is a great way to stand up your app quickly
in a familiar environment.
---
# Deploying Deno with Docker
URL: https://docs.deno.com/examples/videos/deploying_deno_with_docker
## Video description
See how to deploy Deno applications with Docker to a compatible cloud
environment.
## Resources
- https://github.com/denoland/deno_docker
- https://fly.io/
- https://docs.deno.com/runtime/reference/docker/
## Transcript and code
Deno has made a lot of things seem easy: linting, formatting, interoperability
with the Node ecosystem, testing, TypeScript, but how about deployment? How easy
is it to get Deno running in production? Pretty easy!
Let’s start with a look at our app. It’s an app that provides us with some
information about trees. On the homepage we get some text At the trees route, we
get some JSON At the dynamic route based on the tree’s id, we get information
about that single tree.
```ts
import { Hono } from "jsr:@hono/hono";
const app = new Hono();
interface Tree {
id: string;
species: string;
age: number;
location: string;
}
const oak: Tree = {
id: "1",
species: "oak",
age: 3,
location: "Jim's Park",
};
const maple: Tree = {
id: "2",
species: "maple",
age: 5,
location: "Betty's Garden",
};
const trees: Tree[] = [oak, maple];
app.get("/", (c) => {
return c.text("🌲 🌳 The Trees Welcome You! 🌲 🌳");
});
app.get("/trees", (c) => {
return c.json(trees);
});
app.get("/trees/:id", (c) => {
const id = c.req.param("id");
const tree = trees.find((tree) => tree.id === id);
if (!tree) return c.json({ message: "That tree isn't here!" }, 404);
return c.json(tree);
});
Deno.serve(app.fetch);
```
## Run Locally with Docker
Make sure that Docker is installed on your machine. In your terminal or command
prompt, you can run `docker`, and if you get a big list of commands, you have it.
If not, head over to https://www.docker.com/ and download it based on your
operating system.
### Test run docker:
```shell
docker
```
Then run the command to get running on `localhost:8000` with Docker
```shell
docker run -it -p 8000:8000 -v $PWD:/my-deno-project denoland/deno:2.0.2 \
  run --allow-net /my-deno-project/main.ts
```
Visit the app running at `localhost:8000`
It’s also possible to run this with a docker config file.
```dockerfile
FROM denoland/deno:2.0.2
# The port that your application listens to.
EXPOSE 8000
WORKDIR /app
# Prefer not to run as root.
USER deno
# These steps will be re-run upon each file change in your working directory:
COPY . .
# Compile the main app so that it doesn't need to be compiled each startup/entry.
RUN deno cache main.ts
# Warmup caches
RUN timeout 10s deno -A main.ts || [ $? -eq 124 ] || exit 1
CMD ["run", "--allow-net", "main.ts"]
```
Then build it
```shell
docker build -t my-deno-project .
```
From there, you can deploy the app to your hosting provider of choice. I’m going
to use fly.io today.
## Deploy to fly.io
If you haven’t worked with fly before, it’s a cloud platform that allows you to
deploy and run fullstack apps. They run in multiple regions throughout the world
which makes them a pretty nice option. https://fly.io/
### Install Fly
Install with curl
```shell
curl -L https://fly.io/install.sh | sh
```
### Log in with Fly via CLI
```shell
fly auth login
```
This will open the browser for you to log into your account (or create one if
you haven’t already). Then we’ll launch the app with fly using:
```shell
flyctl launch
```
This will generate a fly.toml file for the app, and you can choose different
settings if you'd like to. And, more importantly, it will launch the app! We'll
just wait for the process to complete, and we should be able to view our app
running at that location.
So with Deno, we can use Docker to containerize the app and with Fly we can get
the app hosted in production in just a few minutes.
## More information on working with Docker
For a closer look at Deno's support of Docker, including best practices, running
tests with Docker, using workspaces, and more, please take a look at our
[Deno and Docker reference documentation](https://docs.deno.com/runtime/reference/docker/).
---
# ECMAScript Modules
URL: https://docs.deno.com/examples/videos/esmodules
---
# Interoperability with Node.js
URL: https://docs.deno.com/examples/videos/interoperability_with_nodejs
## Video description
Deno gained lots of interoperability capabilities at its v2.0 release. In this
video, we'll look at how to use Node.js built-in APIs, NPM modules, and JSR
packages.
## Transcript and examples
[Deno 2.0](https://deno.com/blog/v2) is here, and it's good. One of the most
amazing features of Deno is its interoperability with other platforms including
Node. For example, we can use the core Node.js built in APIs. All we have to do
is add this Node specifier here.
```ts
import fs from "node:fs/promises";
```
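For instance, here's a minimal sketch that reads a file through the Node API
under Deno (the filename is arbitrary):

```ts
import fs from "node:fs/promises";

// Node's promise-based file API, running on Deno.
const text = await fs.readFile("./deno.json", "utf8");
console.log(text);
```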
Deno also supports the use of NPM modules. All you need to do is add the NPM
specifier with your import and you're good to go.
```ts
import * as Sentry from "npm:@sentry/node";
```
We can also take advantage of [JSR](https://jsr.io), an open source package
registry for TypeScript and JavaScript.
```ts
import OpenAI from "jsr:@openai/openai";
```
JSR works with Deno, of course, but also with Node.js, Bun, and Cloudflare
Workers. You can even install JSR packages into Vite and Next.js applications.
Deno also gives us
[import maps](https://docs.deno.com/runtime/fundamentals/modules/#differentiating-between-imports-or-importmap-in-deno.json-and---import-map-option),
which help us manage our dependencies. You can install a package from JSR. The
import will be added to the `deno.json`, and you can even use a shorthand to
describe this to clean up your code even more. Deno 2.0 is focused on a really
solid developer experience. New projects and migrations feel a whole lot easier
with Deno.
---
# Introduction to Deno APIs
URL: https://docs.deno.com/examples/videos/intro_to_deno_apis
## Video description
In this video, we explore the powerful APIs provided by Deno in the global
namespace. We demonstrate file system operations like creating, reading,
writing, and appending to files using Deno's built-in methods. Then, examine how
to handle command line arguments, environment variables, and set up a basic
server. We can reduce the need for external APIs with these Deno built-in APIs.
## Transcript and examples
In the global name space, Deno has a ton of APIs that you can take advantage of.
Let's take a look at a few of them.
### Creating and writing to files
In order to write a file, first we will await `Deno.open`, passing in the name
of the file that we want to create. The second argument is an object where
we'll set `read`, `write` and `create` to `true`:
```ts title="main.ts"
await Deno.open("thoughts.txt", {
read: true,
write: true,
create: true,
});
```
To run this, we will use:
```sh
deno main.ts
```
When run, the console will prompt us to allow read access, so we'll say yes (or
`y`). Then it's going to ask us for write access, which is pretty cool (and
we'll allow that too with `y`), so we've granted both and now we have created a
file called `thoughts.txt`.
If we wanted to write some data to this file we could make some adjustments to
our `main.ts` file. Let's create a variable for our file (called file), then
we're going to add `append:true` to the object we pass to the `Deno.open` method
(we can also get rid of create I suppose, since the file has already been
created):
```ts title="main.ts"
const file = await Deno.open("thoughts.txt", {
read: true,
write: true,
append: true,
});
```
Next, below this, we'll make a constant called `encoder`, and make it equal a
new text encoder. Then we'll make a second constant called `data`, which will
call `encode`. Finally we'll add a string with a newline and some text to
`data`:
```ts title="main.ts"
const encoder = new TextEncoder();
const data = encoder.encode("\nI think basil is underrated.");
```
Then we'll `await file.write(data)`, which will take that data and write it to
the thoughts file, and finally we'll close the file.
```ts title="main.ts"
await file.write(data);
file.close();
```
This time we will run the file with the required permissions:
```sh
deno --allow-read --allow-write main.ts
```
If we take a look back at our `thoughts.txt` file it will say "I think basil is
underrated". The text has been appended to our file.
### Reading and appending to files
There are some other options as well. Let's go back to the top of our file;
this time, instead of using `Deno.open`, we'll use `Deno.readFile`. That means
we can remove the second argument object, because we're being very specific
about what we actually want to do here. Then we'll console log the file.
```ts title="main.ts"
const file = await Deno.readFile("thoughts.txt");
console.log(file);
```
If we run this with:
```sh
deno --allow-read main.ts
```
The encoded file will be logged to the console, which isn't quite what I want;
I actually want the human-readable text. So what I can do here is use
`Deno.readTextFile` instead of `Deno.readFile`, which will log the text of the
file directly to the console.
We can also write to the file with `Deno.writeTextFile`. For example:
```ts title="main.ts"
await Deno.writeTextFile(
"thoughts.txt",
"Fall is a great season",
);
```
Which, if we run with `deno --allow-write main.ts`, will overwrite the contents
of the `thoughts.txt` file with the string about fall.
We can update that code to use `append: true`:
```ts title="main.ts"
await Deno.writeTextFile(
"thoughts.txt",
"\nWinter is the most fun season!",
{ append: true },
);
```
If we run it again, with `deno --allow-write main.ts`, it's going to append the
second sentence to the end of the file.
### Exploring command line arguments
We also have the option to explore command line arguments, so we could say:
```ts title="main.ts"
const name = Deno.args[0];
console.log(name);
```
We can run this with our usual deno command, but this time pass in a command
line argument, let's say `Eve`:
```sh
deno main.ts Eve
```
The name `Eve` will be logged to the console.
If we want to get fancy, we can update the logged template string to pass out a
message:
```ts title="main.ts"
const name = Deno.args[0];
console.log(`How are you today, ${name}?`);
```
### Using env variables
On the Deno global, we also have environment variables. Let's read the `HOME`
variable into a constant called `home`, and log our home directory to the
console:
```ts title="main.ts"
const home = Deno.env.get("HOME");
console.log(`Home directory: ${home}`);
```
When run with `deno main.ts`, Deno will request environment access, which we can
allow with `y`. Or we can run the command with the `--allow-env` flag, and our
home directory will be logged to the console.
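`Deno.env` can also set, check, and list variables. Here's a short sketch (the
variable name is made up):

```ts title="main.ts"
// Set a variable for this process, then check and read it.
Deno.env.set("MY_FLAG", "on");
console.log(Deno.env.has("MY_FLAG")); // true
console.log(Deno.env.get("MY_FLAG")); // "on"

// toObject() snapshots all environment variables as a plain object.
const all = Deno.env.toObject();
console.log(Object.keys(all).length);
```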
### Setting up a simple HTTP server
Finally, let's look at our trusty server API. We can create a handler that
returns a response, and then pass that handler to the `Deno.serve` method.
```ts title="main.ts"
function handler(): Response {
return new Response("It's happening!");
}
Deno.serve(handler);
```
When run with
```sh
deno --allow-net main.ts
```
We'll see that a server is running and listening on port 8000. We can visit
`localhost:8000` in the browser and we should see the text "It's happening!".
So there are a ton of these APIs that you can take advantage of, and it's very
nice to know that we don't have to include an external library for everything;
Deno has us covered when it comes to managing errors, handling servers, and
working with the file system.
---
# Connect to Mongoose and MongoDB
URL: https://docs.deno.com/examples/videos/mongoose
---
# Connect to Prisma
URL: https://docs.deno.com/examples/videos/prisma
---
# Publishing Modules with JSR
URL: https://docs.deno.com/examples/videos/publishing_modules_with_jsr
## Transcript and examples
[JSR](https://jsr.io) is a registry specifically designed for modern JavaScript
projects. JSR - the JavaScript registry - has a bunch of cool features. But if
you've used npm before, you might be thinking, "why do I need this and why do I
need to learn another one of these?"
- Well, first it's optimized for TypeScript.
- JSR only supports ES Modules.
- And finally, npm is the centralized registry for Node projects, but there are
other runtimes: obviously Deno, but you can also use these packages in Bun,
Cloudflare Workers, and more.
Think of it like a superset. JSR doesn't replace npm, it builds on top of it.
So here at [jsr.io](https://jsr.io), you can search for whatever you want. I'm
looking for this library called Oak that is a middleware framework for handling
HTTP requests. I'll search for it here, and this will take me to
[the documentation page](https://jsr.io/@oak/oak).
If you want to install a package, all you need to do is add it:
```sh
deno add jsr:@oak/oak
```
Then we can use it inside of our file like this.
```javascript
import { Application } from "jsr:@oak/oak/application";
import { Router } from "jsr:@oak/oak/router";
const router = new Router();
router.get("/", (context) => {
context.response.body = "HEY!";
});
const app = new Application();
app.use(router.routes());
app.use(router.allowedMethods());
app.listen({ port: 8080 });
```
Pretty cool! But what is it like to publish our own JSR package? It's actually
great.
JSR packages can depend on other packages from JSR but also on any npm package.
Let's build a small library and publish it to JSR. Remember
[our `sing` function from earlier](/examples/all-in-one_tooling/), let's make
this a function that can be consumed by other people in the JavaScript
community. You're welcome everyone.
```typescript
export function sing(
phrase: string,
times: number,
): string {
return Array(times).fill(phrase).join(" ");
}
sing("la", 3);
```
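Before publishing, it's worth adding a quick test. Here's a sketch using
`jsr:@std/assert` (the `./mod.ts` path assumes the function above lives in
`mod.ts`):

```typescript
import { assertEquals } from "jsr:@std/assert";
import { sing } from "./mod.ts"; // hypothetical entry point for the package

Deno.test("sing repeats the phrase", () => {
  assertEquals(sing("la", 3), "la la la");
});
```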
Now if we [head over to jsr.io, we can publish it](https://jsr.io/new). The
first time I ever try to publish a package, JSR will ask me which scope I want
to publish to. I can create that here.
Then I'll create the package name and follow the instructions.
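Publishing also expects a `name`, `version`, and `exports` field in the package
config; here's a minimal sketch (the version and entry point are illustrative):

```json title="deno.json"
{
  "name": "@eveporcello/sing",
  "version": "0.1.0",
  "exports": "./mod.ts"
}
```

With that in place, `deno publish` pushes the package to JSR.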
Let's try using our new package in a project using Vite. The following command
will walk us through setting up a new Vite project.
```shell
deno run --allow-read --allow-write --allow-env npm:create-vite-extra@latest
```
Now we can import our new package by adding it to our project:
```shell
deno add jsr:@eveporcello/sing
```
And then importing it when we need it
```typescript
import { sing } from "@eveporcello/sing";
```
So if I had to give myself a grade on this... actually, I don't have to:
[JSR will give me a grade](https://jsr.io/@eveporcello/sing/score) of 29%,
which, I don't know, is probably not so good. But it comes with a whole list of
improvements that I can make.
I need to add a readme to my package. I need to add examples. All of these
different things. So, on my own time, I can develop this to get to 100 percent,
so that my code is well documented and very consumable by other developers.
---
# Build a React app
URL: https://docs.deno.com/examples/videos/react_app_video
---
# Build a Realtime WebSocket Application
URL: https://docs.deno.com/examples/videos/realtime_websocket_app
---
# TypeScript and JSX
URL: https://docs.deno.com/examples/videos/ts_jsx
---
# Build a Vue app
URL: https://docs.deno.com/examples/videos/vue_app_video
---
# What is Deno?
URL: https://docs.deno.com/examples/videos/what_is_deno
## Video description
A short introduction to Deno and its history
## Transcript and code
Deno is an open source runtime for JavaScript, TypeScript, and WebAssembly
projects that's built on V8 and Rust. It's modern, it's fast, it's flexible, and
it's secure by default.
Deno was created by Ryan Dahl, the creator of Node.js, and in 2018, he gave
[a famous talk at JSConf EU](https://www.youtube.com/watch?v=M3BM9TB-8yA) about
regrets that he had about Node. And Deno provides solutions to all of them.
With the hindsight of someone who's been there, Deno gives us a runtime that's
thought a lot about the details. Details like TypeScript support by default. You
can run or import TypeScript without installing anything more than the Deno CLI.
Deno has a built-in TypeScript compiler, so it'll just run your TypeScript code
without any extra configuration.
Details like linting, formatting, and testing. Deno is an all-in-one toolchain
that you can use to get started with your project without having to spend all
of your finite time on earth configuring it. Details like web standards.
Deno is built on web standards that you might recognize, like Fetch and
WebSockets.
You don't have to learn anything new to use them. If you've used them in the
browser, you're ready to use them in Deno. Deno is secure by default. You have
to specifically enable permissions for sensitive APIs like the network, the
file system, and environment access. Deno has you opt into these permissions
like you would opt into geolocation in the browser.
[In this course](https://www.youtube.com/watch?v=KPTOo4k8-GE&list=PLvvLnBDNuTEov9EBIp3MMfHlBxaKGRWTe),
we're going to walk through the most important features of Deno with hands-on
activities. Whether you've experimented with Deno in the past, or this is all
new to you, I think you're going to like it here.
---
# lint/rules/adjacent-overload-signatures.md
URL: https://docs.deno.com/lint/rules/adjacent-overload-signatures
Requires overload signatures to be adjacent to each other.
Overloaded signatures which are not next to each other can lead to code which is
hard to read and maintain.
**Invalid:**
(`bar` is declared in-between `foo` overloads)
```typescript
type FooType = {
foo(s: string): void;
foo(n: number): void;
bar(): void;
foo(sn: string | number): void;
};
```
```typescript
interface FooInterface {
foo(s: string): void;
foo(n: number): void;
bar(): void;
foo(sn: string | number): void;
}
```
```typescript
class FooClass {
foo(s: string): void;
foo(n: number): void;
bar(): void {}
foo(sn: string | number): void {}
}
```
```typescript
export function foo(s: string): void;
export function foo(n: number): void;
export function bar(): void {}
export function foo(sn: string | number): void {}
```
**Valid:**
(`bar` is declared after `foo`)
```typescript
type FooType = {
foo(s: string): void;
foo(n: number): void;
foo(sn: string | number): void;
bar(): void;
};
```
```typescript
interface FooInterface {
foo(s: string): void;
foo(n: number): void;
foo(sn: string | number): void;
bar(): void;
}
```
```typescript
class FooClass {
foo(s: string): void;
foo(n: number): void;
foo(sn: string | number): void {}
bar(): void {}
}
```
```typescript
export function foo(s: string): void;
export function foo(n: number): void;
export function foo(sn: string | number): void {}
export function bar(): void {}
```
---
# lint/rules/ban-ts-comment.md
URL: https://docs.deno.com/lint/rules/ban-ts-comment
Disallows the use of TypeScript directives without a comment.
TypeScript directives reduce the effectiveness of the compiler, something which
should only be done in exceptional circumstances. The reason why should be
documented in a comment alongside the directive.
**Invalid:**
```typescript
// @ts-expect-error
let a: number = "I am a string";
```
```typescript
// @ts-ignore
let a: number = "I am a string";
```
```typescript
// @ts-nocheck
let a: number = "I am a string";
```
**Valid:**
```typescript
// @ts-expect-error: Temporary workaround (see ticket #422)
let a: number = "I am a string";
```
```typescript
// @ts-ignore: Temporary workaround (see ticket #422)
let a: number = "I am a string";
```
```typescript
// @ts-nocheck: Temporary workaround (see ticket #422)
let a: number = "I am a string";
```
---
# lint/rules/ban-types.md
URL: https://docs.deno.com/lint/rules/ban-types
Bans the use of primitive wrapper objects (e.g. `String` the object is a wrapper
of `string` the primitive) in addition to the non-explicit `Function` type and
the misunderstood `Object` type.
There are very few situations where primitive wrapper objects are desired and
far more often a mistake was made with the case of the primitive type. You also
cannot assign a primitive wrapper object to a primitive leading to type issues
down the line. For reference, [the TypeScript handbook] also says we shouldn't
ever use these wrapper objects.
[the TypeScript handbook]: https://www.typescriptlang.org/docs/handbook/declaration-files/do-s-and-don-ts.html#number-string-boolean-symbol-and-object
With `Function`, it is better to explicitly define the entire function signature
rather than use the non-specific `Function` type which won't give you type
safety with the function.
Finally, `Object` and `{}` means "any non-nullish value" rather than "any object
type". `object` is a good choice for a meaning of "any object type".
**Invalid:**
```typescript
let a: Boolean;
let b: String;
let c: Number;
let d: Symbol;
let e: Function;
let f: Object;
let g: {};
```
**Valid:**
```typescript
let a: boolean;
let b: string;
let c: number;
let d: symbol;
let e: () => number;
let f: object;
let g: Record<string, unknown>;
```
---
# lint/rules/ban-unknown-rule-code.md
URL: https://docs.deno.com/lint/rules/ban-unknown-rule-code
Warns against the usage of unknown rule codes in ignore directives.
We sometimes have to suppress and ignore lint errors for various reasons. We can do
so using [ignore directives](/go/lint-ignore/) with rule names that should be
ignored like so:
```typescript
// deno-lint-ignore no-explicit-any no-unused-vars
const foo: any = 42;
```
This rule checks for the validity of the specified rule names (i.e. whether
`deno_lint` provides the rule or not).
**Invalid:**
```typescript
// typo
// deno-lint-ignore eq-eq-e
console.assert(x == 42);
// unknown rule name
// deno-lint-ignore UNKNOWN_RULE_NAME
const b = "b";
```
**Valid:**
```typescript
// deno-lint-ignore eq-eq-eq
console.assert(x == 42);
// deno-lint-ignore no-unused-vars
const b = "b";
```
---
# lint/rules/ban-untagged-ignore.md
URL: https://docs.deno.com/lint/rules/ban-untagged-ignore
Requires `deno-lint-ignore` to be annotated with one or more rule names.
Ignoring all rules can mask unexpected or future problems. Therefore you need to
explicitly specify which rule(s) are to be ignored.
**Invalid:**
```typescript
// deno-lint-ignore
export function duplicateArgumentsFn(a, b, a) {}
```
**Valid:**
```typescript
// deno-lint-ignore no-dupe-args
export function duplicateArgumentsFn(a, b, a) {}
```
---
# lint/rules/ban-untagged-todo.md
URL: https://docs.deno.com/lint/rules/ban-untagged-todo
Requires TODOs to be annotated with either a user tag (`@user`) or an issue
reference (`#issue`).
TODOs without reference to a user or an issue become stale with no easy way to
get more information.
**Invalid:**
```typescript
// TODO Improve calc engine
export function calcValue(): number {}
```
```typescript
// TODO Improve calc engine (@djones)
export function calcValue(): number {}
```
```typescript
// TODO Improve calc engine (#332)
export function calcValue(): number {}
```
**Valid:**
```typescript
// TODO(djones) Improve calc engine
export function calcValue(): number {}
```
```typescript
// TODO(@djones) Improve calc engine
export function calcValue(): number {}
```
```typescript
// TODO(#332)
export function calcValue(): number {}
```
```typescript
// TODO(#332) Improve calc engine
export function calcValue(): number {}
```
---
# lint/rules/ban-unused-ignore.md
URL: https://docs.deno.com/lint/rules/ban-unused-ignore
Warns unused ignore directives.
We sometimes have to suppress and ignore lint errors for various reasons, and we
can do so using [ignore directives](/go/lint-ignore/).
In some cases, however, like after refactoring, we may end up having ignore
directives that are no longer necessary. Such superfluous ignore directives are
likely to confuse future code readers, and to make matters worse, might hide
future lint errors unintentionally. To prevent such situations, this rule
detects unused, superfluous ignore directives.
**Invalid:**
```typescript
// Actually this line is valid since `export` means "used",
// so this directive is superfluous
// deno-lint-ignore no-unused-vars
export const foo = 42;
```
**Valid:**
```typescript
export const foo = 42;
```
---
# lint/rules/button-has-type.md
URL: https://docs.deno.com/lint/rules/button-has-type
Checks that a `