# Deno Documentation - Full Content

> This document contains the full content of the Deno documentation website.

# Acceptable use policy

> What constitutes acceptable use of Deno Deploy Classic.

URL: https://docs.deno.com/deploy/acceptable_use_policy

The Deno Deploy service includes resources (CPU time, request counts) that are subject to this Acceptable Use policy. This document gives a rough outline of what we consider "Acceptable Use" and what we do not.

### Examples of Acceptable Use

- ✅ Server-side rendered websites
- ✅ Jamstack sites and apps
- ✅ Single page applications
- ✅ APIs that query a DB or external API
- ✅ A personal blog
- ✅ A company website
- ✅ An e-commerce site
- ✅ Reverse proxy

### Not Acceptable Use

- ❌ Crypto mining
- ❌ Highly CPU-intensive load (e.g. machine learning)
- ❌ Media hosting for external sites
- ❌ Scrapers
- ❌ Forward proxy
- ❌ VPN

## Guidelines

We expect most projects to fall well within the usage limits. We will notify you if your project's usage significantly deviates from the norm, and where possible we will reach out before taking any action to address unreasonable burdens on our infrastructure.

---

# Deno Deploy changelog

> Listing notable progress in the development and evolution of Deno Deploy

URL: https://docs.deno.com/deploy/changelog

## August 27th, 2025

### Features

- Deno KV can now be used with the database integration:
  - Provision a Deno KV database through the "Databases" tab, and link it to an app or playground.
  - Access the Deno KV database from your code by using `Deno.openKv()`.
  - KV queues, read-replication, manual backups, and choosing a primary region are not available at this time.
- Playgrounds now support dragging in individual files and folders.
- The playground file explorer now supports inline rename and delete of files.
- New built-in environment variables have been added to enable detection of Deno Deploy EA, the app that is running, and the organization it is running in: `DENO_DEPLOY=1`, `DENO_DEPLOY_ORG_ID`, `DENO_DEPLOY_ORG_SLUG`, `DENO_DEPLOY_APP_ID`, `DENO_DEPLOY_APP_SLUG`, `DENO_DEPLOY_REVISION_ID`.
- Users can now create personal access tokens from their account page.
- The Deno Deploy EA dashboard has migrated from https://app.deno.com to https://console.deno.com. All existing URLs will automatically redirect to the new URL.

### Bug fixes

- Check that Postgres database instances support dynamic provisioning of databases before allowing them to be linked to an organization.
- Ensure that deleted Deno Deploy apps will never trigger GitHub status checks on push to the previously linked repo.
- The playground HTTP explorer now correctly sends the set headers when making requests.
- Playgrounds no longer error on top-level `await`.
- You can now add environment variables named `GOOGLE_APPLICATION_CREDENTIALS` to your Deno Deploy app.
- When bulk importing environment variables in the app settings, we now correctly import them into that app, rather than mistakenly importing them into the organization environment variables.
- Some versions of Next.js that do not support `using` declarations now correctly build again.
- `npm install` in the build step now works more reliably, and no longer fails with certificate-related issues.

## July 23rd, 2025

### Features

- New: Database support for Deno Deploy apps, allowing you to easily connect to and use Postgres databases in your applications.
  - Provision a Postgres database instance on AWS RDS, Neon, Supabase, or any other provider, and then link it to your Deno Deploy organization.
  - Assign the database instance to an application, making it available in the application's environment.
  - Every timeline (production, each git branch, and previews) has its own isolated database with a separate schema and data, allowing you to test migrations and changes without affecting production data.
  - Use any Postgres client library to connect, including `npm:pg`, `npm:drizzle`, or `npm:kysely` (see the sketch below).
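As a quick illustration, here is a minimal sketch of querying a linked database with `npm:pg`. It assumes the connection string is exposed to the app as an environment variable; `DATABASE_URL` is an assumed name used for illustration.

```ts
// Minimal sketch: querying a linked Postgres database with npm:pg.
// DATABASE_URL is an assumed environment variable name.
import pg from "npm:pg";

const pool = new pg.Pool({
  connectionString: Deno.env.get("DATABASE_URL"),
});

Deno.serve(async () => {
  const { rows } = await pool.query("SELECT now() AS now");
  return new Response(`Database time: ${rows[0].now}`);
});
```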
- Applications and playgrounds can now be renamed. Note that old `deno.net` URLs will no longer work after renaming, but custom domains will continue to function.
- Applications and playgrounds can now be deleted.
- Playgrounds now have an HTTP Explorer tab that allows you to make arbitrary HTTP requests to any URL served by the playground. This is useful for testing APIs or other services that do not serve a web page.
- You can now delete entire folders in the playground file explorer by pressing the delete button next to the folder name.
- You can now drag a zip file onto the playground file explorer to upload all files in the zip file to the playground.
- You can now enable auto-format on save in the playground, which will automatically format your code when you save a file.

### Bug fixes

- `DENO_` prefixed environment variables such as `DENO_CONDITIONS`, `DENO_COMPAT`, and `DENO_AUTH_TOKENS` can now be set without error.
- The `DENO_REVISION_ID` environment variable is now correctly exposed to applications and playgrounds.
- The custom domain assignment drawer now shows custom domains that are already assigned to another application or playground as disabled.
- The network usage graph on the metrics page now correctly shows incoming and outgoing traffic. Previously, the data shown was incorrect.
- For newly created organizations, the first build now waits until the `.deno.net` domain is provisioned before the routing step.
- Pressing `Ctrl-S` / `Cmd-S` in the playground now saves the current file and triggers a build, instead of opening the browser's save dialog.
- Viewing some specific traces previously hung the trace viewer. These now show correctly.

## July 9th, 2025

### Features

- New: Cloud Connect allows you to securely connect your Deno Deploy apps to AWS and GCP, enabling you to use services like AWS S3 and Google Cloud Storage without needing to manage credentials.
  - This is done without storing any long-lived static credentials; instead, short-lived tokens and OIDC (OpenID Connect) are used to establish a trust relationship between Deno Deploy and your cloud provider.
  - A setup flow in the app settings page, or a drawer in playgrounds, guides you through the process of connecting your app to AWS or GCP.
  - You can use the standard AWS and GCP SDKs to access the services; there is no need to rewrite any code to use a different API.
  - [Learn more in the documentation.](/deploy/reference/cloud_connections/)
- The application metrics page now shows more metrics, including V8 memory metrics such as heap size and garbage collection stats, as well as process-level metrics such as CPU usage and overall memory usage.
- There is now a new "Metrics" tab in the organization overview that shows overall metrics for all applications in the organization, including the number of requests, CPU usage, and memory usage.
- You can now edit the URL you are viewing in the playground preview iframe by editing the "address bar" that is displayed above the preview.
- Environment variables now default to being a secret when the key contains `SECRET`, `KEY`, `TOKEN`, `PRIVATE`, or `PASSWORD`. You can still manually switch them to plain text if needed.
- The maximum length limit for environment variable values has been increased to 4096 characters, up from 1024 characters.

### Bug fixes

- Playgrounds no longer get stuck when attempting to deploy an empty file.
- Playground drawer resizing now works more reliably, especially when some drawers are collapsed.
- Builds now take significantly less time to complete, especially for larger projects. The "Warmup" and "Routing" steps, which previously took more than 10 seconds each, now usually take less than 1 second each.
- Builds can now be cancelled while they are in the "Queueing" and "Routing" steps.
- The organization creation page now correctly displays whether an organization slug is taken or not, prior to submitting the form.
- `npm install` can now install `esbuild` again; previously it would fail with a generic error.

## June 24th, 2025

### Features

- The playground now has live-streaming logs and traces panels
  - Logs and traces for the current revision are displayed for the past hour
  - Logs and traces can be filtered, just like in the dedicated observability pages
- Framework auto-detection now works for more projects out of the box, including many Vite-based projects
- The organization dropdown now highlights the currently selected organization more clearly

### Bug fixes

- The sparklines in the metrics overview are now working correctly
- The error rate metric now functions properly
- GitHub-triggered builds no longer run multiple times
- Next.js builds now work more reliably on older Next.js versions

## June 12th, 2025

### Features

- Deno Deploy now supports playgrounds!
  - Playgrounds can be created and accessed from the playgrounds tab in the organizations overview
  - Playgrounds can contain multiple files and include build steps
  - The playground UI features an iframe to preview your deployed app
  - Three templates are currently available: hello world, Next.js, and Hono
- On mobile devices, there is now a floating navbar that doesn't intrude into page content

## June 9th, 2025

### Features

- Deno Deploy has a new logo!
- Anyone can now join by signing up at [console.deno.com](https://console.deno.com)
- Builds:
  - Builds can now use up to 8 GB of storage, up from 2 GB
  - Builds can now use environment variables and secrets configured in the organization or app settings (in the new "Build" context)
  - Builds now have a maximum runtime of 5 minutes
- The metrics page has had a complete overhaul by rewriting the chart rendering:
  - Dragging on a graph now zooms in on the selected area
  - Much more data can now be shown without the page becoming slow to load
  - The tooltip now follows the mouse cursor, together with a new crosshair that allows for precise analysis
  - Font sizes and colors have been improved for better readability

### Bug fixes

- Builds should not get stuck in a pending state anymore
- Dashboard pages now load significantly faster
- Traces now correctly show spans whose parents have not been exported (yet)
- The metrics page now refreshes correctly when switching time ranges
- The "Clear search" button in the telemetry search bar now works correctly
- Older Next.js versions (such as Next.js 13) build correctly now
- The environment variable drawer is now used everywhere, fixing a bug where multiple env vars with the same name but different contexts would conflict
- Running `node` in the builder no longer fails when the path is absolute
- `npx` is now available in the builder
- Astro builds will not sporadically fail with `--unstable-vsock` errors anymore
- Svelte projects now deploy correctly when a project explicitly specifies `@deno/svelte-adapter`

## May 26th, 2025

### Features

- When triggering a manual build you can now choose which branch to deploy
- You can now deploy Astro static sites without having to manually install the Deno adapter
- There are now [reference docs for you to peruse](/deploy/).

### Bug fixes

- SvelteKit auto-detection now works when using `npm` as the package manager
- Prewarming no longer triggers random POST requests to your app
- Visiting a page with a trailing slash will not 404 anymore
- Drawers will no longer close if you click inside, hold and drag over the backdrop, and release

## May 22nd, 2025

### Features

- You can now bulk import env vars during app creation by pasting a `.env` file into the env var drawer
- SvelteKit now works out of the box without manually installing the Deno adapter
- A preset for the Lume static site generator is now available

### Bug fixes

- Environment variables now show up correctly on the timelines page
- The production timeline page now correctly shows all builds
- console.deno.com works on older versions of Firefox now
- Page titles across console.deno.com now reflect the page you are on
- The "Provision certificate" button does not lock up after DNS verification failures anymore
- Domains that had a provisioned certificate or attached application can now be deleted

---

# Compressing response bodies

URL: https://docs.deno.com/deploy/classic/api/compression

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

Compressing the response body to save bandwidth is a common practice. To take some work off your shoulders, we built this capability directly into Deploy. Deno Deploy Classic supports brotli and gzip compression. Compression is applied when the following conditions are met:

1. The request to your deployment has the [`Accept-Encoding`][accept-encoding] header set to either `br` (brotli) or `gzip`.
2. The response from your deployment includes the [`Content-Type`][content-type] header.
3. The provided content type is compressible; we use [this database](https://github.com/jshttp/mime-db/blob/master/db.json) to determine if the content type is compressible.
4. The response body size is greater than 20 bytes.

When Deploy compresses the response body, it sets a `Content-Encoding: gzip` or `Content-Encoding: br` header on the response, based on the compression algorithm used.
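As a minimal sketch, the handler below meets all of the conditions above (a compressible `Content-Type` and a body larger than 20 bytes), so Deploy will compress the response for clients that send a suitable `Accept-Encoding` header:

```ts
// The JSON content type is compressible and the body exceeds 20 bytes,
// so Deploy applies brotli or gzip when the client's Accept-Encoding
// header allows it, and sets the matching Content-Encoding header.
Deno.serve(() =>
  new Response(JSON.stringify({ greeting: "Hello from Deno Deploy!" }), {
    headers: { "content-type": "application/json" },
  })
);
```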
### When is compression skipped?

Deno Deploy Classic skips compression if:

- The response has a [`Content-Encoding`][content-encoding] header.
- The response has a [`Content-Range`][content-range] header.
- The response's [`Cache-Control`][cache-control] header has the [`no-transform`][no-transform] value (e.g. `cache-control: public, no-transform`).

### What happens to my `Etag` header?

When you set an Etag header with the response, we convert the header value to a Weak Etag if we apply compression to your response body. If it is already a Weak Etag, we don't touch the header.

[accept-encoding]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Encoding
[cache-control]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control
[content-encoding]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding
[content-type]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Type
[no-transform]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control#other
[content-range]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Range

---

# Dynamic import

URL: https://docs.deno.com/deploy/classic/api/dynamic-import

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

Deno Deploy Classic supports [dynamic import] but with some limitations. This page outlines these limitations.

### Specifiers must be statically determined string literals

With ordinary dynamic imports, specifiers don't need to be determined at build time, so all of the following forms are valid:

```ts title="Valid dynamic imports in Deno CLI"
// 1. Statically determined string literal
await import("jsr:@std/assert");

// 2. Statically determined, but via variable
const specifier = "jsr:@std/assert";
await import(specifier);

// 3. Statically determined, but template literal
const stdModuleName = "path";
await import(`jsr:@std/${stdModuleName}`);

// 4. Dynamically determined
const rand = Math.random();
const mod = rand < 0.5 ? "npm:cowsay" : "npm:node-emoji";
await import(mod);
```

In Deno Deploy Classic, however, specifiers must be string literals with no string interpolation, so among the four examples above, only the first one works in Deno Deploy Classic.

```ts title="Only static string literals work in Deno Deploy Classic"
// 1. ✅ Works fine on Deno Deploy Classic
await import("jsr:@std/assert");

// 2. ❌ Doesn't work on Deno Deploy Classic
// because what's passed to `import` is a variable
const specifier = "jsr:@std/streams";
await import(specifier);

// 3. ❌ Doesn't work on Deno Deploy Classic
// because this has an interpolation
const stdModuleName = "path";
await import(`jsr:@std/${stdModuleName}`);

// 4. ❌ Doesn't work on Deno Deploy Classic
// because it's dynamic
const rand = Math.random();
const mod = rand < 0.5 ? "npm:cowsay" : "npm:node-emoji";
await import(mod);
```

### One exception - dynamic specifiers work for same project files

Specifiers that are dynamically determined are supported if the target files (modules) are included in the same project.

```ts title="Dynamic specifiers work for files in the same project"
// ✅ Works fine on Deno Deploy Classic
await import("./my_module1.ts");

// ✅ Works fine on Deno Deploy Classic
const rand = Math.random();
const modPath = rand < 0.5 ? "dir1/moduleA.ts" : "dir2/dir3/moduleB.ts";
await import(`./${modPath}`);
```

Note that template literals starting with `./` tell the module resolver that the target module is in the same project. Conversely, if a specifier does not start with `./`, the possible target modules will not be included in the resulting [eszip], causing dynamic imports to fail at runtime, even if the final evaluated specifier starts with `./`.

```ts
// ❌ Doesn't work because the analyzer can't statically determine if the
// specifier starts with `./` or not in this case.
// Compare this to the previous example. The only difference is whether the
// `./` is in the template literal or in the variable.
const rand = Math.random();
const modPath = rand < 0.5 ? "./dir1/moduleA.ts" : "./dir2/dir3/moduleB.ts";
await import(modPath);
```

We will consider relaxing this constraint in the future.

:::tip What is eszip?
When you do a new deployment on Deno Deploy Classic, the system analyzes your code, constructs the module graph by recursively traversing it, and bundles all the dependencies into a single file. We call this [eszip](https://github.com/denoland/eszip). Since its creation is done completely statically, dynamic import capabilities are limited on Deno Deploy Classic.
:::

### Data URLs

A [Data URL] can be used as a specifier passed to dynamic imports.

```ts title="Static data URL"
// ✅ Works fine on Deno Deploy Classic
const { val } = await import("data:text/javascript,export const val = 42;");
console.log(val); // -> 42
```

For data URLs, fully dynamic data is supported.

```ts title="Dynamic data URL"
function generateDynamicDataUrl() {
  const moduleStr = `export const val = ${Math.random()};`;
  return `data:text/javascript,${moduleStr}`;
}

// ✅ Works fine on Deno Deploy Classic
const { val } = await import(generateDynamicDataUrl());
console.log(val); // -> Random value is printed
```

Applying this technique to JavaScript code fetched from the web, you can even simulate a true dynamic import:

```js title="external.js"
export const name = "external.js";
```

```ts title="Dynamic data URL from fetched source"
import { assert } from "jsr:@std/assert/assert";

const res = await fetch(
  "https://gist.githubusercontent.com/magurotuna/1cacb136f9fd6b786eb8bbad92c8e6d6/raw/56a96fd0d246fd3feabbeecea6ea1155bdf5f50d/external.js",
);
assert(res.ok);
const src = await res.text();
const dataUrl = `data:application/javascript,${src}`;

// ✅ Works fine on Deno Deploy Classic
const { name } = await import(dataUrl);
console.log(`Hello from ${name}`); // -> "Hello from external.js"
```

However, note that the data URL given to `import` has to be JavaScript; passing TypeScript throws a [TypeError] at runtime.
[dynamic import]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/import
[eszip]: https://github.com/denoland/eszip
[Data URL]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URLs
[TypeError]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/TypeError

---

# API Reference

URL: https://docs.deno.com/deploy/classic/api/

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

This is a reference for runtime APIs available on Deno Deploy Classic. This API is very similar to the standard [runtime API](/runtime/manual/runtime), but some APIs are not available in the same way, given that Deno Deploy Classic is a serverless environment. Please use this section of the documentation to explore available APIs on Deno Deploy.

### Web APIs

- [`console`](https://developer.mozilla.org/en-US/docs/Web/API/console)
- [`atob`](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/atob)
- [`btoa`](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/btoa)
- [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API)
  - `fetch`
  - `Request`
  - `Response`
  - `URL`
  - `File`
  - `Blob`
- [TextEncoder](https://developer.mozilla.org/en-US/docs/Web/API/TextEncoder)
- [TextDecoder](https://developer.mozilla.org/en-US/docs/Web/API/TextDecoder)
- [TextEncoderStream](https://developer.mozilla.org/en-US/docs/Web/API/TextEncoderStream)
- [TextDecoderStream](https://developer.mozilla.org/en-US/docs/Web/API/TextDecoderStream)
- [Performance](https://developer.mozilla.org/en-US/docs/Web/API/Performance)
- [Web Crypto API](https://developer.mozilla.org/en-US/docs/Web/API/Crypto)
  - `randomUUID()`
  - `getRandomValues()`
  - [SubtleCrypto](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto)
- [WebSocket API](https://developer.mozilla.org/en-US/docs/Web/API/WebSocket)
- [Timers](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/setTimeout) (`setTimeout`, `clearTimeout`, and `setInterval`)
- [Streams API](https://developer.mozilla.org/en-US/docs/Web/API/Streams_API)
  - `ReadableStream`
  - `WritableStream`
  - `TransformStream`
- [URLPattern API](https://developer.mozilla.org/en-US/docs/Web/API/URLPattern)
- [Import Maps](https://docs.deno.com/runtime/manual/basics/import_maps/)
  - Note: `import maps` are currently only available via [deployctl](https://github.com/denoland/deployctl) or [deployctl GitHub Action](https://github.com/denoland/deployctl/blob/main/action/README.md) workflows.

### Deno APIs

> Note: only stable APIs of Deno are made available in Deploy.

- [`Deno.env`](https://docs.deno.com/api/deno/~/Deno.env) - Interact with environment variables (secrets).
  - `get(key: string): string | undefined` - get the value of an environment variable.
  - `toObject(): { [key: string]: string }` - get all environment variables as an object.
- [`Deno.connect`](https://docs.deno.com/api/deno/~/Deno.connect) - Connect to TCP sockets.
- [`Deno.connectTls`](https://docs.deno.com/api/deno/~/Deno.connectTls) - Connect to TCP sockets using TLS.
- [`Deno.startTls`](https://docs.deno.com/api/deno/~/Deno.startTls) - Start TLS handshake from an existing TCP connection.
- [`Deno.resolveDns`](https://docs.deno.com/api/deno/~/Deno.resolveDns) - Make DNS queries.
- File system API:
  - [`Deno.cwd`](https://docs.deno.com/api/deno/~/Deno.cwd) - Get the current working directory
  - [`Deno.readDir`](https://docs.deno.com/api/deno/~/Deno.readDir) - Get directory listings
  - [`Deno.readFile`](https://docs.deno.com/api/deno/~/Deno.readFile) - Read a file into memory
  - [`Deno.readTextFile`](https://docs.deno.com/api/deno/~/Deno.readTextFile) - Read a text file into memory
  - [`Deno.open`](https://docs.deno.com/api/deno/~/Deno.open) - Open a file for streaming reading
  - [`Deno.stat`](https://docs.deno.com/api/deno/~/Deno.stat) - Get file system entry information
  - [`Deno.lstat`](https://docs.deno.com/api/deno/~/Deno.lstat) - Get file system entry information without following symlinks
  - [`Deno.realPath`](https://docs.deno.com/api/deno/~/Deno.realPath) - Get the real path of a file after resolving symlinks
  - [`Deno.readLink`](https://docs.deno.com/api/deno/~/Deno.readLink) - Get the target path for the given symlink
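As a quick illustration of the `Deno.env` API listed above, here is a minimal sketch of reading a secret; `API_KEY` is an assumed variable name used for illustration.

```ts
// Minimal sketch: read a secret from the environment.
// API_KEY is an assumed variable name for illustration.
const apiKey = Deno.env.get("API_KEY");

Deno.serve(() => new Response(apiKey ? "API_KEY is set" : "API_KEY is not set"));
```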
## Future support

In the future, these APIs will also be added:

- [Cache API](https://developer.mozilla.org/en-US/docs/Web/API/Cache)
- UDP API:
  - `Deno.connectDatagram` for outbound UDP sockets
- Customizable `fetch` options using `Deno.createHttpClient`

## Limitations

Just like the Deno CLI, we do not implement the `__proto__` object field as specified in ECMAScript Annex B.

---

# BroadcastChannel

URL: https://docs.deno.com/deploy/classic/api/runtime-broadcast-channel

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

In Deno Deploy Classic, code is run in different data centers around the world in order to reduce latency by servicing requests at the data center nearest to the client. In the browser, the [`BroadcastChannel`](https://developer.mozilla.org/en-US/docs/Web/API/Broadcast_Channel_API) API allows different tabs with the same origin to exchange messages. In Deno Deploy, the BroadcastChannel API provides a communication mechanism between the various instances: a simple message bus that connects Deploy instances worldwide.

## Constructor

The `BroadcastChannel()` constructor creates a new `BroadcastChannel` instance and connects to (or creates) the provided channel.

```ts
let channel = new BroadcastChannel(channelName);
```

#### Parameters

| name | type | description |
| ----------- | -------- | --------------------------------------------------------- |
| channelName | `string` | The name for the underlying broadcast channel connection. |

The return type of the constructor is a `BroadcastChannel` instance.

## Properties

| name | type | description |
| ---------------- | ---------------------- | ------------------------------------------------------------------------------------------------------ |
| `name` | `string` | The name of the underlying broadcast channel. |
| `onmessage` | `function` (or `null`) | The function that's executed when the channel receives a new message ([`MessageEvent`][messageevent]). |
| `onmessageerror` | `function` (or `null`) | The function that's executed when a received message cannot be deserialized to a JavaScript data structure. |

## Methods

| name | description |
| ---------------------- | ------------------------------------------------------------------------------------------------------------------ |
| `close()` | Close the connection to the underlying channel. After closing, you can no longer post messages to the channel. |
| `postMessage(message)` | Post a message to the underlying channel. The message can be a string, object literal, a number, or any kind of [`Object`][object]. |
`BroadcastChannel` extends [`EventTarget`][eventtarget], which allows you to use `EventTarget` methods like `addEventListener` and `removeEventListener` on an instance of `BroadcastChannel`.

## Example: Update an in-memory cache across instances

One use case for a message bus like the one enabled by `BroadcastChannel` is updating an in-memory cache of data between isolates running in different data centers across the network. In the example below, we show how you can configure a simple server that uses `BroadcastChannel` to synchronize state across all running instances of the server.

```ts
import { Hono } from "jsr:@hono/hono";

// in-memory cache of messages
const messages = [];

// A BroadcastChannel used by all isolates
const channel = new BroadcastChannel("all_messages");

// When a new message comes in from other instances, add it
channel.onmessage = (event: MessageEvent) => {
  messages.push(event.data);
};

// Create a server to add and retrieve messages
const app = new Hono();

// Add a message to the list
app.get("/send", (c) => {
  // New messages can be added by including a "message" query param
  const message = c.req.query("message");
  if (message) {
    messages.push(message);
    channel.postMessage(message);
  }
  return c.redirect("/");
});

// Get a list of messages
app.get("/", (c) => {
  // Return the current list of messages
  return c.json(messages);
});

Deno.serve(app.fetch);
```

You can test this example yourself on Deno Deploy Classic using [this playground](https://dash.deno.com/playground/broadcast-channel-example).

[eventtarget]: https://developer.mozilla.org/en-US/docs/Web/API/EventTarget
[messageevent]: https://developer.mozilla.org/en-US/docs/Web/API/MessageEvent
[object]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object

---

# HTTP requests (fetch)

URL: https://docs.deno.com/deploy/classic/api/runtime-fetch

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

The [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) allows you to make outbound HTTP requests in Deno Deploy Classic. It is a web standard and has the following interfaces:

- `fetch()` - The method that allows you to make outbound HTTP requests
- [`Request`](./runtime-request) - represents a request resource of fetch()
- [`Response`](./runtime-response) - represents a response resource of fetch()
- [`Headers`](./runtime-headers) - represents the HTTP headers of requests and responses

This page shows usage for the fetch() method. You can click on the other interfaces above to learn more about them.

Fetch also supports fetching from file URLs to retrieve static files. For more info on static files, see the [filesystem API documentation](./runtime-fs).

## `fetch()`

The `fetch()` method initiates a network request to the provided resource and returns a promise that resolves after the response is available.

```ts
function fetch(
  resource: Request | string,
  init?: RequestInit,
): Promise<Response>;
```
#### Parameters

| name | type | optional | description |
| -------- | ------------------------------------------------------------- | -------- | ------------------------------------------------------------------ |
| resource | [`Request`](./runtime-request) or [`USVString`][usvstring] | `false` | The resource can either be a request object or a URL string. |
| init | [`RequestInit`](./runtime-request#requestinit) | `true` | The init object lets you apply optional parameters to the request. |

The return type of `fetch()` is a promise that resolves to a [`Response`](./runtime-response).

## Examples

The Deno Deploy Classic script below makes a `fetch()` request to the GitHub API for each incoming request, and then returns that response from the handler function.

```ts
async function handler(req: Request): Promise<Response> {
  const resp = await fetch("https://api.github.com/users/denoland", {
    // The init object here has a headers object containing a
    // header that indicates what type of response we accept.
    // We're not specifying the method field since by default
    // fetch makes a GET request.
    headers: {
      accept: "application/json",
    },
  });

  return new Response(resp.body, {
    status: resp.status,
    headers: {
      "content-type": "application/json",
    },
  });
}

Deno.serve(handler);
```

[usvstring]: https://developer.mozilla.org/en-US/docs/Web/API/USVString

---

# File system APIs

URL: https://docs.deno.com/deploy/classic/api/runtime-fs

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

Deno Deploy Classic supports a limited set of the file system APIs available in Deno. These file system APIs can access static files from your deployments. Static files are, for example:

- The files in your GitHub repository, if you deploy via the GitHub integration.
- The entrypoint file in a playground deployment.

The APIs that are available are:

- [Deno.cwd](#deno.cwd)
- [Deno.readDir](#deno.readdir)
- [Deno.readFile](#deno.readfile)
- [Deno.readTextFile](#deno.readtextfile)
- [Deno.open](#deno.open)
- [Deno.stat](#deno.stat)
- [Deno.lstat](#deno.lstat)
- [Deno.realPath](#deno.realpath)
- [Deno.readLink](#deno.readlink)

## Deno.cwd

`Deno.cwd()` returns the current working directory of your deployment, which is the root directory of your deployment. For example, if you deployed via the GitHub integration, the current working directory is the root of your GitHub repository.

## Deno.readDir

`Deno.readDir()` allows you to list the contents of a directory. The function is fully compatible with [Deno](https://docs.deno.com/api/deno/~/Deno.readDir).

```ts
function Deno.readDir(path: string | URL): AsyncIterable<Deno.DirEntry>
```

The path can be relative or absolute. It can also be a `file:` URL.

### Example

This example lists the contents of a directory and returns this list as a JSON object in the response body.

```js
async function handler(_req) {
  // List the posts in the `blog` directory located at the root
  // of the repository.
  const posts = [];
  for await (const post of Deno.readDir(`./blog`)) {
    posts.push(post);
  }

  // Return JSON.
  return new Response(JSON.stringify(posts, null, 2), {
    headers: {
      "content-type": "application/json",
    },
  });
}

Deno.serve(handler);
```

## Deno.readFile

`Deno.readFile()` allows you to read a file fully into memory. The function definition is similar to [Deno](https://docs.deno.com/api/deno/~/Deno.readFile), but it doesn't support [`ReadFileOptions`](https://docs.deno.com/api/deno/~/Deno.ReadFileOptions) for the time being. Support will be added in the future.

```ts
function Deno.readFile(path: string | URL): Promise<Uint8Array>
```

The path can be relative or absolute. It can also be a `file:` URL.
### Example

This example reads the contents of a file into memory as a byte array, then returns it as the response body.

```js
async function handler(_req) {
  // Let's read the README.md file available at the root
  // of the repository to explore the available methods.

  // Relative paths are relative to the root of the repository
  const readmeRelative = await Deno.readFile("./README.md");

  // Absolute paths.
  // The contents of the repository are available at Deno.cwd().
  const readmeAbsolute = await Deno.readFile(`${Deno.cwd()}/README.md`);

  // File URLs are also supported.
  const readmeFileUrl = await Deno.readFile(
    new URL(`file://${Deno.cwd()}/README.md`),
  );

  // Decode the Uint8Array as string.
  const readme = new TextDecoder().decode(readmeRelative);
  return new Response(readme);
}

Deno.serve(handler);
```

> Note: to use this feature, you must link a GitHub repository to your project.

Deno Deploy Classic supports the `Deno.readFile` API to read static assets from the file system. This is useful for serving static assets such as images, stylesheets, and JavaScript files. This guide demonstrates how to use this feature.

Imagine the following file structure on a GitHub repository:

```console
├── mod.ts
└── style.css
```

The contents of `mod.ts`:

```ts
async function handleRequest(request: Request): Promise<Response> {
  const { pathname } = new URL(request.url);

  // This is how the server works:
  // 1. A request comes in for a specific asset.
  // 2. We read the asset from the file system.
  // 3. We send the asset back to the client.

  // Check if the request is for style.css.
  if (pathname.startsWith("/style.css")) {
    // Read the style.css file from the file system.
    const file = await Deno.readFile("./style.css");
    // Respond to the request with the style.css file.
    return new Response(file, {
      headers: {
        "content-type": "text/css",
      },
    });
  }

  return new Response(
    `<html>
      <head>
        <link rel="stylesheet" href="style.css" />
      </head>
      <body>
        <h1>Example</h1>
      </body>
    </html>`,
    {
      headers: {
        "content-type": "text/html; charset=utf-8",
      },
    },
  );
}

Deno.serve(handleRequest);
```

The path provided to the [`Deno.readFile`](https://docs.deno.com/api/deno/~/Deno.readFile) API is relative to the root of the repository. You can also specify absolute paths, if they are inside `Deno.cwd()`.

## Deno.readTextFile

This function is similar to [Deno.readFile](#deno.readfile) except it decodes the file contents as a UTF-8 string.

```ts
function Deno.readTextFile(path: string | URL): Promise<string>
```

### Example

This example reads a text file into memory and returns the contents as the response body.

```js
async function handler(_req) {
  const readme = await Deno.readTextFile("./README.md");
  return new Response(readme);
}

Deno.serve(handler);
```

## Deno.open

`Deno.open()` allows you to open a file, returning a file handle. This file handle can then be used to read the contents of the file. See [`Deno.File`](#deno.file) for information on the methods available on the file handle. The function definition is similar to [Deno](https://docs.deno.com/api/deno/~/Deno.open), but it doesn't support [`OpenOptions`](https://docs.deno.com/api/deno/~/Deno.OpenOptions) for the time being. Support will be added in the future.

```ts
function Deno.open(path: string | URL): Promise<Deno.File>
```

The path can be relative or absolute. It can also be a `file:` URL.

### Example

This example opens a file, and then streams the content as the response body.

```js
async function handler(_req) {
  // Open the README.md file available at the root of the repository.
  const file = await Deno.open("./README.md");

  // Use the `readable` property, which is a `ReadableStream`. This will
  // automatically close the file handle when the response is done sending.
  return new Response(file.readable);
}

Deno.serve(handler);
```

:::note
When you iterate over a file stream as shown below, the file descriptor will be automatically closed at the end of iteration. There is no need to manually close the file descriptor: `const iterator = fd.readable[Symbol.asyncIterator]();`
:::

## Deno.File

`Deno.File` is a file handle returned from [`Deno.open()`](#deno.open). It can be used to read chunks of the file using the `read()` method. The file handle can be closed using the `close()` method. The interface is similar to [Deno](https://docs.deno.com/api/deno/~/Deno.File), but it doesn't support writing to the file, or seeking. Support for the latter will be added in the future.

```ts
class File {
  readonly rid: number;

  close(): void;
  read(p: Uint8Array): Promise<number | null>;
}
```

### Deno.File#read()

The read method is used to read a chunk of the file. It should be passed a buffer to read the data into. It returns the number of bytes read or `null` if the end of the file has been reached.

```ts
function read(p: Uint8Array): Promise<number | null>;
```

### Deno.File#close()

The close method is used to close the file handle. Closing the handle will interrupt all ongoing reads.

```ts
function close(): void;
```

## Deno.stat

`Deno.stat()` reads a file system entry's metadata. It returns a [`Deno.FileInfo`](#deno.fileinfo) object. Symlinks are followed. The function definition is the same as [Deno](https://docs.deno.com/api/deno/~/Deno.stat). It does not return modification time, access time, or creation time values.

```ts
function Deno.stat(path: string | URL): Promise<Deno.FileInfo>
```

The path can be relative or absolute. It can also be a `file:` URL.
### Example

This example gets the size of a file, and returns the result as the response body.

```js
async function handler(_req) {
  // Get file info of the README.md at the root of the repository.
  const info = await Deno.stat("./README.md");

  // Get the size of the file in bytes.
  const size = info.size;

  return new Response(`README.md is ${size} bytes large`);
}

Deno.serve(handler);
```

## Deno.lstat

`Deno.lstat()` is similar to `Deno.stat()`, but it does not follow symlinks. The function definition is the same as [Deno](https://docs.deno.com/api/deno/~/Deno.lstat). It does not return modification time, access time, or creation time values.

```ts
function Deno.lstat(path: string | URL): Promise<Deno.FileInfo>
```

The path can be relative or absolute. It can also be a `file:` URL.

## Deno.FileInfo

The `Deno.FileInfo` interface is used to represent a file system entry's metadata. It is returned by the [`Deno.stat()`](#deno.stat) and [`Deno.lstat()`](#deno.lstat) functions. It can represent either a file, a directory, or a symlink. In Deno Deploy Classic, only the file type and size properties are available. The size property behaves the same way it does on Linux.

```ts
interface FileInfo {
  isDirectory: boolean;
  isFile: boolean;
  isSymlink: boolean;
  size: number;
}
```

## Deno.realPath

`Deno.realPath()` returns the resolved absolute path to a file after following symlinks. The function definition is the same as [Deno](https://docs.deno.com/api/deno/~/Deno.realPath).

```ts
function Deno.realPath(path: string | URL): Promise<string>
```

The path can be relative or absolute. It can also be a `file:` URL.

### Example

This example calls `Deno.realPath()` to get the absolute path of a file in the root of the repository. The result is returned as the response body.

```ts
async function handler(_req) {
  const path = await Deno.realPath("./README.md");

  return new Response(`The fully resolved path for ./README.md is ${path}`);
}

Deno.serve(handler);
```

## Deno.readLink

`Deno.readLink()` returns the target path for a symlink. The function definition is the same as [Deno](https://docs.deno.com/api/deno/~/Deno.readLink).

```ts
function Deno.readLink(path: string | URL): Promise<string>
```

The path can be relative or absolute. It can also be a `file:` URL.

### Example

This example calls `Deno.readLink()` to get the target path of a symlink in the root of the repository. The result is returned as the response body.

```ts
async function handler(_req) {
  const path = await Deno.readLink("./my_symlink");

  return new Response(`The target path for ./my_symlink is ${path}`);
}

Deno.serve(handler);
```

---

# HTTP Headers

URL: https://docs.deno.com/deploy/classic/api/runtime-headers

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

The [Headers](https://developer.mozilla.org/en-US/docs/Web/API/Headers) interface is part of the Fetch API. It allows you to create and manipulate the HTTP headers of request and response resources of fetch().

- [Constructor](#constructor)
- [Parameters](#parameters)
- [Methods](#methods)
- [Example](#example)

## Constructor

The Headers() constructor creates a new `Headers` instance.
```ts
let headers = new Headers(init);
```

#### Parameters

| name | type | optional | description |
| ---- | --------------------------------------- | -------- | -------------------------------------------------------------------------------------------------------- |
| init | `Headers` / `{ [key: string]: string }` | `true` | The init option lets you initialize the headers object with an existing `Headers` or an object literal. |

The return type of the constructor is a `Headers` instance.

## Methods

| name | description |
| ------------------------------------- | ---------------------------------------------------------------------------- |
| `append(name: string, value: string)` | Appends a value to an existing header, or adds the header if it doesn't exist. |
| `delete(name: string)` | Deletes a header from the Headers object. |
| `set(name: string, value: string)` | Sets a header in the Headers object, overwriting any existing value. |
| `get(name: string)` | Get the value of a header in the Headers object. |
| `has(name: string)` | Check if the header exists in the Headers object. |
| `entries()` | Get the headers as key-value pairs. The result is iterable. |
| `keys()` | Get all the keys of the Headers object. The result is iterable. |

## Example

```ts
// Create a new headers object from an object literal.
const myHeaders = new Headers({
  accept: "application/json",
});

// Append a header to the headers object.
myHeaders.append("user-agent", "Deno Deploy Classic");

// Print the headers of the headers object.
for (const [key, value] of myHeaders.entries()) {
  console.log(key, value);
}

// You can pass the headers instance to Response or Request constructors.
const request = new Request("https://api.github.com/users/denoland", {
  method: "POST",
  headers: myHeaders,
});
```

---

# Node.js built-in APIs

URL: https://docs.deno.com/deploy/classic/api/runtime-node

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

Deno Deploy Classic natively supports importing built-in Node.js modules like `fs`, `path`, and `http` through `node:` specifiers. This allows running code originally written for Node.js without changes in Deno Deploy Classic. Here is an example of a Node.js HTTP server running on Deno Deploy Classic:

```js
import { createServer } from "node:http";
import process from "node:process";

const server = createServer((req, res) => {
  const message = `Hello from ${process.env.DENO_REGION} at ${new Date()}`;
  res.end(message);
});

server.listen(8080);
```

When using `node:` specifiers, all other features of Deno Deploy Classic are still available. For example, you can use `Deno.env` to access environment variables even when using Node.js modules. You can also import other ESM modules from external URLs as usual.

The following Node.js modules are available:

- `assert`
- `assert/strict`
- `async_hooks`
- `buffer`
- `child_process`
- `cluster`
- `console`
- `constants`
- `crypto`
- `dgram`
- `diagnostics_channel`
- `dns`
- `dns/promises`
- `domain`
- `events`
- `fs`
- `fs/promises`
- `http`
- `http2`
- `https`
- `module`
- `net`
- `os`
- `path`
- `path/posix`
- `path/win32`
- `perf_hooks`
- `process`
- `punycode`
- `querystring`
- `readline`
- `stream`
- `stream/consumers`
- `stream/promises`
- `stream/web`
- `string_decoder`
- `sys`
- `timers`
- `timers/promises`
- `tls`
- `tty`
- `url`
- `util`
- `util/types`
- `v8`
- `vm`
- `worker_threads`
- `zlib`

The behavior of these modules should be identical to Node.js in most cases.
Due to the sandboxing behavior of Deno Deploy Classic, some features are not available:

- Executing binaries with `child_process`
- Spawning workers using `worker_threads`
- Creating contexts and evaluating code with `vm`

> Note: the emulation of Node.js modules is sufficient for most use cases, but
> it is not yet perfect. If you encounter any issues, please
> [open an issue](https://github.com/denoland/deno).

---

# HTTP Request

URL: https://docs.deno.com/deploy/classic/api/runtime-request

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

The [Request](https://developer.mozilla.org/en-US/docs/Web/API/Request) interface is part of the Fetch API and represents the request of fetch().

- [Constructor](#constructor)
- [Parameters](#parameters)
- [Properties](#properties)
- [Methods](#methods)
- [Example](#example)

## Constructor

The Request() constructor creates a new Request instance.

```ts
let request = new Request(resource, init);
```

#### Parameters

| name | type | optional | description |
| -------- | ----------------------------- | -------- | -------------------------------------------------------------------------- |
| resource | `Request` or `USVString` | `false` | The resource can either be a request object or a URL string. |
| init | [`RequestInit`](#requestinit) | `true` | The init object lets you set optional parameters to apply to the request. |

The return type is a `Request` instance.

##### `RequestInit`

| name | type | default | description |
| ---------------------------- | ---------------------------------------------------------------------------------------- | -------------- | ----------------------------------------------------------- |
| [`method`][method] | `string` | `GET` | The method of the request. |
| [`headers`][headers] | `Headers` or `{ [key: string]: string }` | none | The headers for the request. |
| [`body`][body] | `Blob`, `BufferSource`, `FormData`, `URLSearchParams`, `USVString`, or `ReadableStream` | none | The body of the request. |
| [`cache`][cache] | `string` | none | The cache mode of the request. |
| [`credentials`][credentials] | `string` | `same-origin` | The credentials mode of the request. |
| [`integrity`][integrity] | `string` | none | The cryptographic hash of the request's body. |
| [`mode`][mode] | `string` | `cors` | The request mode you want to use. |
| [`redirect`][redirect] | `string` | `follow` | The mode of how redirects are handled. |
| [`referrer`][referrer] | `string` | `about:client` | A `USVString` specifying `no-referrer`, `client`, or a URL. |

## Properties

| name | type | description |
| ---------------------------- | ------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------ |
| [`cache`][cache] | `string` | The cache mode (`default`, `no-cache`, etc.) indicates how the request should be cached by the browser. |
| [`credentials`][credentials] | `string` | The credentials (`omit`, `same-origin`, etc.) indicate whether the user agent should send cookies for cross-origin requests. |
| [`destination`][destination] | [`RequestDestination`][requestdestination] | The string indicates the type of content being requested. |
| [`body`][body] | [`ReadableStream`][readablestream] | The getter exposes a `ReadableStream` of the body contents. |
| [`bodyUsed`][bodyused] | `boolean` | Indicates whether the body content is read. |
| [`url`][url] | `USVString` | The URL of the request. |
| [`headers`][headers] | [`Headers`](runtime-headers) | The headers associated with the request. |
| [`integrity`][integrity] | `string` | The cryptographic hash of the request's body. |
| [`method`][method] | `string` | The request's method (`POST`, `GET`, etc.). |
| [`mode`][mode] | `string` | Indicates the mode of the request (e.g. `cors`). |
| [`redirect`][redirect] | `string` | The mode of how redirects are handled. |
| [`referrer`][referrer] | `string` | The referrer of the request. |
| [`referrerPolicy`][referrerpolicy] | `string` | The referrer policy of the request. |

All the above properties are read only.

## Methods

| name | description |
| ------------------------------ | -------------------------------------------------------------------------------------------- |
| [`arrayBuffer()`][arraybuffer] | Reads the body stream to its completion and returns an `ArrayBuffer` object. |
| [`blob()`][blob] | Reads the body stream to its completion and returns a `Blob` object. |
| [`formData()`][formdata] | Reads the body stream to its completion and returns a `FormData` object. |
| [`json()`][json] | Reads the body stream to its completion, parses it as JSON and returns a JavaScript object. |
| [`text()`][text] | Reads the body stream to its completion and returns a USVString object (text). |
| [`clone()`][clone] | Clones the Request object. |

## Example

```ts
function handler(_req) {
  // Create a post request
  const request = new Request("https://post.deno.dev", {
    method: "POST",
    body: JSON.stringify({
      message: "Hello world!",
    }),
    headers: {
      "content-type": "application/json",
    },
  });

  console.log(request.method); // POST
  console.log(request.headers.get("content-type")); // application/json

  return fetch(request);
}

Deno.serve(handler);
```

[cache]: https://developer.mozilla.org/en-US/docs/Web/API/Request/cache
[credentials]: https://developer.mozilla.org/en-US/docs/Web/API/Request/credentials
[destination]: https://developer.mozilla.org/en-us/docs/web/api/request/destination
[requestdestination]: https://developer.mozilla.org/en-US/docs/Web/API/RequestDestination
[body]: https://developer.mozilla.org/en-US/docs/Web/API/Body/body
[bodyused]: https://developer.mozilla.org/en-US/docs/Web/API/Body/bodyUsed
[url]: https://developer.mozilla.org/en-US/docs/Web/API/Request/url
[headers]: https://developer.mozilla.org/en-US/docs/Web/API/Request/headers
[method]: https://developer.mozilla.org/en-US/docs/Web/API/Request/method
[integrity]: https://developer.mozilla.org/en-US/docs/Web/API/Request/integrity
[mode]: https://developer.mozilla.org/en-US/docs/Web/API/Request/mode
[redirect]: https://developer.mozilla.org/en-US/docs/Web/API/Request/redirect
[referrer]: https://developer.mozilla.org/en-US/docs/Web/API/Request/referrer
[referrerpolicy]: https://developer.mozilla.org/en-US/docs/Web/API/Request/referrerpolicy
[readablestream]: https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream
[arraybuffer]: https://developer.mozilla.org/en-US/docs/Web/API/Body/arrayBuffer
[blob]: https://developer.mozilla.org/en-US/docs/Web/API/Body/blob
[json]: https://developer.mozilla.org/en-US/docs/Web/API/Body/json
[text]: https://developer.mozilla.org/en-US/docs/Web/API/Body/text
[formdata]: https://developer.mozilla.org/en-US/docs/Web/API/Body/formdata
[clone]: https://developer.mozilla.org/en-US/docs/Web/API/Request/clone

---

# HTTP Response

URL: https://docs.deno.com/deploy/classic/api/runtime-response

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

The [Response](https://developer.mozilla.org/en-US/docs/Web/API/Response) interface is part of the Fetch API and represents a response resource of fetch().

- [Constructor](#constructor)
- [Parameters](#parameters)
- [Properties](#properties)
- [Methods](#methods)
- [Example](#example)

## Constructor

The Response() constructor creates a new Response instance.

```ts
let response = new Response(body, init);
```

#### Parameters

| name | type | optional | description |
| ---- | ---------------------------------------------------------------------------------------- | -------- | --------------------------------------------------------------------------- |
| body | `Blob`, `BufferSource`, `FormData`, `ReadableStream`, `URLSearchParams`, or `USVString` | `true` | The body of the response. The default value is `null`. |
| init | `ResponseInit` | `true` | An optional object that allows setting status and headers of the response. |

The return type is a `Response` instance.

##### `ResponseInit`

| name | type | optional | description |
| ------------ | ------------------------------------------------------ | -------- | ------------------------------------------------------ |
| `status` | `number` | `true` | The status code of the response. |
| `statusText` | `string` | `true` | The status message representative of the status code. |
| `headers` | `Headers`, `string[][]`, or `Record<string, string>` | `true` | The HTTP headers of the response. |

## Properties

| name | type | read only | description |
| -------------------------- | ---------------- | --------- | ------------------------------------------------------------ |
| [`body`][body] | `ReadableStream` | `true` | The getter exposes a `ReadableStream` of the body contents. |
| [`bodyUsed`][bodyused] | `boolean` | `true` | Indicates whether the body content is read. |
| [`url`][url] | `USVString` | `true` | The URL of the response. |
| [`headers`][headers] | `Headers` | `true` | The headers associated with the response. |
| [`ok`][ok] | `boolean` | `true` | Indicates if the response is successful (200-299 status). |
| [`redirected`][redirected] | `boolean` | `true` | Indicates if the response is the result of a redirect. |
| [`status`][status] | `number` | `true` | The status code of the response. |
| [`statusText`][statustext] | `string` | `true` | The status message of the response. |
| [`type`][type] | `string` | `true` | The type of the response. |

## Methods

| name | description |
| ---------------------------------------------------- | -------------------------------------------------------------------------------------------- |
| [`arrayBuffer()`][arraybuffer] | Reads the body stream to its completion and returns an `ArrayBuffer` object. |
| [`blob()`][blob] | Reads the body stream to its completion and returns a `Blob` object. |
| [`formData()`][formdata] | Reads the body stream to its completion and returns a `FormData` object. |
| [`json()`][json] | Reads the body stream to its completion, parses it as JSON and returns a JavaScript object. |
| [`text()`][text] | Reads the body stream to its completion and returns a USVString object (text). |
| [`clone()`][clone] | Clones the response object. |
| [`error()`][error] | Returns a new response object associated with a network error. |
| [`redirect(url: string, status?: number)`][redirect] | Creates a new response that redirects to the provided URL. |
## Example

```ts
function handler(_req) {
  // Create a response with html as its body.
  const response = new Response("<html><body>Hello</body></html>", {
    status: 200,
    headers: {
      "content-type": "text/html",
    },
  });

  console.log(response.status); // 200
  console.log(response.headers.get("content-type")); // text/html

  return response;
}

Deno.serve(handler);
```

[clone]: https://developer.mozilla.org/en-US/docs/Web/API/Response/clone
[error]: https://developer.mozilla.org/en-US/docs/Web/API/Response/error
[redirect]: https://developer.mozilla.org/en-US/docs/Web/API/Response/redirect
[body]: https://developer.mozilla.org/en-US/docs/Web/API/Body/body
[bodyused]: https://developer.mozilla.org/en-US/docs/Web/API/Body/bodyUsed
[url]: https://developer.mozilla.org/en-US/docs/Web/API/Request/url
[headers]: https://developer.mozilla.org/en-US/docs/Web/API/Request/headers
[ok]: https://developer.mozilla.org/en-US/docs/Web/API/Response/ok
[redirected]: https://developer.mozilla.org/en-US/docs/Web/API/Response/redirected
[status]: https://developer.mozilla.org/en-US/docs/Web/API/Response/status
[statustext]: https://developer.mozilla.org/en-US/docs/Web/API/Response/statusText
[type]: https://developer.mozilla.org/en-US/docs/Web/API/Response/type
[method]: https://developer.mozilla.org/en-US/docs/Web/API/Request/method
[readablestream]: https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream
[arraybuffer]: https://developer.mozilla.org/en-US/docs/Web/API/Body/arrayBuffer
[blob]: https://developer.mozilla.org/en-US/docs/Web/API/Body/blob
[json]: https://developer.mozilla.org/en-US/docs/Web/API/Body/json
[text]: https://developer.mozilla.org/en-US/docs/Web/API/Body/text
[formdata]: https://developer.mozilla.org/en-US/docs/Web/API/Body/formdata

---

# TCP sockets and TLS

URL: https://docs.deno.com/deploy/classic/api/runtime-sockets

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

Deno Deploy Classic supports outbound TCP and TLS connections. These APIs allow you to use databases like PostgreSQL, SQLite, MongoDB, etc., with Deploy.

Looking for information on _serving_ TCP? Take a look at the documentation for [`Deno.serve`](/api/deno/~/Deno.serve), including its support for [TCP options](/api/deno/~/Deno.ServeTcpOptions).

## `Deno.connect`

Make outbound TCP connections. The function definition is the same as [Deno](https://docs.deno.com/api/deno/~/Deno.connect), with the limitation that the `transport` option can only be `tcp` and the `hostname` cannot be localhost or empty.

```ts
function Deno.connect(options: ConnectOptions): Promise<Deno.TcpConn>
```

### Example

```js
async function handler(_req) {
  // Make a TCP connection to example.com
  const connection = await Deno.connect({
    port: 80,
    hostname: "example.com",
  });

  // Send raw HTTP GET request.
  const request = new TextEncoder().encode(
    "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n",
  );
  const _bytesWritten = await connection.write(request);

  // Read 15 bytes from the connection.
  const buffer = new Uint8Array(15);
  await connection.read(buffer);
  connection.close();

  // Return the bytes as plain text.
  return new Response(buffer, {
    headers: {
      "content-type": "text/plain;charset=utf-8",
    },
  });
}

Deno.serve(handler);
```

## `Deno.connectTls`

Make outbound TLS connections. The function definition is the same as [Deno](https://docs.deno.com/api/deno/~/Deno.connectTls), with the limitation that the hostname cannot be localhost or empty.
```ts
function Deno.connectTls(options: ConnectTlsOptions): Promise<TlsConn>
```

### Example

```js
async function handler(_req) {
  // Make a TLS connection to example.com
  const connection = await Deno.connectTls({
    port: 443,
    hostname: "example.com",
  });

  // Send a raw HTTP GET request.
  const request = new TextEncoder().encode(
    "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n",
  );
  const _bytesWritten = await connection.write(request);

  // Read 15 bytes from the connection.
  const buffer = new Uint8Array(15);
  await connection.read(buffer);
  connection.close();

  // Return the bytes as plain text.
  return new Response(buffer, {
    headers: {
      "content-type": "text/plain;charset=utf-8",
    },
  });
}

Deno.serve(handler);
```

---

# CI and GitHub Actions

URL: https://docs.deno.com/deploy/classic/ci_github

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

Deno Deploy's Git integration enables deployment of code changes that are pushed to a GitHub repository. Commits on the production branch will be deployed as a production deployment. Commits on all other branches will be deployed as a preview deployment.

There are two modes of operation for the Git integration:

- **Automatic**: Deno Deploy Classic will automatically pull code and assets from your repository source every time you push, and deploy it. This mode is very fast, but does not allow for a build step. _This is the recommended mode for most users._
- **GitHub Actions**: In this mode, you push your code and assets to Deno Deploy from a GitHub Actions workflow. This allows you to perform a build step before deploying.

Deno Deploy will select an appropriate mode based on your custom deployment configuration. Below, we go into more detail about the different configurations for **Automatic** and **GitHub Actions** mode.

## Automatic

If your project doesn't require any additional build steps, then the system chooses **Automatic** mode. The entrypoint file is simply the file that Deno Deploy will run.

## GitHub Actions

If you enter a command in **Install Step** and/or **Build Step** in the **Project Configuration**, Deno Deploy Classic will create the necessary GitHub Actions workflow file and push it into your repository. In this workflow file, we leverage the `deployctl` [GitHub action][deploy-action] to deploy your project. You can do whatever you need to do, such as running a build command, before deploying it to Deno Deploy.

To configure the preprocessing commands you want to run, click the **Show advanced options** button that appears after choosing your Git repository. Then enter values as needed in the input boxes.

:::tip
For example, if you want to enable [ahead-of-time builds] for a Fresh project, you will enter `deno task build` in the **Build Step** box. See also [the Fresh doc][Deploy to production] for deploying a Fresh project to Deno Deploy.
:::

The GitHub Actions workflow file that Deno Deploy Classic generates and pushes to your repository looks as follows.
```yml title=".github/workflows/deploy.yml"
name: Deploy

on:
  push:
    branches: main
  pull_request:
    branches: main

jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    permissions:
      id-token: write # Needed for auth with Deno Deploy
      contents: read # Needed to clone the repository

    steps:
      - name: Clone repository
        uses: actions/checkout@v4

      - name: Install Deno
        uses: denoland/setup-deno@v2
        with:
          deno-version: v2.x

      - name: Build step
        run: "deno task build"

      - name: Upload to Deno Deploy
        uses: denoland/deployctl@v1
        with:
          project: ""
          entrypoint: "main.ts"
          root: "."
```

See the [deployctl README](https://github.com/denoland/deployctl/blob/main/action/README.md) for more details.

[fileserver]: https://jsr.io/@std/http#file-server
[ghapp]: https://github.com/apps/deno-deploy
[deploy-action]: https://github.com/denoland/deployctl/blob/main/action/README.md
[ahead-of-time builds]: https://fresh.deno.dev/docs/concepts/ahead-of-time-builds
[Deploy to production]: https://fresh.deno.dev/docs/getting-started/deploy-to-production

---

# Scheduling cron tasks

URL: https://docs.deno.com/deploy/classic/cron

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

The [`Deno.cron`](https://docs.deno.com/api/deno/~/Deno.cron) interface enables you to configure JavaScript or TypeScript code that executes on a configurable schedule using [cron syntax](https://en.wikipedia.org/wiki/Cron). In the example below, we configure a block of JavaScript code that will execute every minute.

```ts
Deno.cron("Log a message", "* * * * *", () => {
  console.log("This will print once a minute.");
});
```

It's also possible to use JavaScript objects to define the cron schedule. In the example below, we configure a block of JavaScript code that will execute once an hour.

```ts
Deno.cron("Log a message", { hour: { every: 1 } }, () => {
  console.log("This will print once an hour.");
});
```

`Deno.cron` takes three arguments:

- A human-readable name for the cron task
- A cron schedule string or JavaScript object that defines a schedule on which the cron job will run
- A function to be executed on the given schedule

If you are new to cron syntax, there are a number of third party modules [like this one](https://www.npmjs.com/package/cron-time-generator) that will help you generate cron schedule strings.

## Retrying failed runs

Failed cron invocations are automatically retried with a default retry policy. If you would like to specify a custom retry policy, you can use the `backoffSchedule` property to specify an array of wait times (in milliseconds) to wait before retrying the function call again. In the following example, we will attempt to retry failed callbacks three times - after one second, five seconds, and then ten seconds.

```ts
Deno.cron("Retry example", "* * * * *", {
  backoffSchedule: [1000, 5000, 10000],
}, () => {
  throw new Error("Deno.cron will retry this three times, to no avail!");
});
```

## Design and limitations

Below are some design details and limitations to be aware of when using `Deno.cron`.

### Tasks must be defined at the top level module scope

The [`Deno.cron`](https://docs.deno.com/api/deno/~/Deno.cron) interface is designed to support static definition of cron tasks based on pre-defined schedules. All `Deno.cron` tasks must be defined at the top level of a module. Any nested `Deno.cron` definitions (e.g. inside a [`Deno.serve`](https://docs.deno.com/api/deno/~/Deno.serve) handler) will result in an error or will be ignored.
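To make the restriction concrete, here is a minimal sketch contrasting a valid top-level definition with an invalid nested one (the task names are illustrative):

```ts
// ✅ Allowed: defined at the top level of the module.
Deno.cron("Top-level task", "* * * * *", () => {
  console.log("Runs every minute.");
});

Deno.serve(() => {
  // ❌ Not allowed: a definition nested inside a handler
  // will result in an error or be ignored.
  // Deno.cron("Nested task", "* * * * *", () => {});
  return new Response("OK");
});
```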
If you need to schedule tasks dynamically during your Deno program execution, you can use the [Deno Queues](/deploy/classic/queues/) APIs.

### Time zone

`Deno.cron` schedules are specified using the UTC time zone. This helps avoid issues with time zones that observe daylight saving time.

### Overlapping executions

It's possible for the next scheduled invocation of your cron task to overlap with the previous invocation. If this occurs, `Deno.cron` will skip the next scheduled invocation in order to avoid overlapping executions.

### Day-of-week numeric representation

`Deno.cron` does not use 0-based day-of-week numeric representation. Instead, it uses 1-7 (or SUN-SAT) to represent Sunday through Saturday. This may be different compared to other cron engines which use 0-6 representation.

## Usage on Deno Deploy

With [Deno Deploy](https://deno.com/deploy), you can run your background tasks on V8 isolates in the cloud. When doing so, there are a few considerations to keep in mind.

### Differences with Deno CLI

Like other Deno runtime built-ins (such as queues and Deno KV), the `Deno.cron` implementation works slightly differently on Deno Deploy.

#### How cron works by default

The implementation of `Deno.cron` in the Deno runtime keeps execution state in memory. If you run multiple Deno programs that use `Deno.cron`, each program will have its own independent set of cron tasks.

#### How cron works on Deno Deploy

Deno Deploy provides a serverless implementation of `Deno.cron` that is designed for high availability and scale. Deno Deploy automatically extracts your `Deno.cron` definitions at deployment time, and schedules them for execution using on-demand isolates. Your latest production deployment defines the set of active cron tasks that are scheduled for execution. To add, remove, or modify cron tasks, simply modify your code and create a new production deployment.

Deno Deploy guarantees that your cron tasks are executed at least once per scheduled time interval. This generally means that your cron handler will be invoked once per scheduled time. In some failure scenarios, the handler may be invoked multiple times for the same scheduled time.

### Cron dashboard

When you make a production deployment that includes a cron task, you can view a list of all your cron tasks in the [Deploy dashboard](https://dash.deno.com/projects) under the `Cron` tab for your project.

![a listing of cron tasks in the Deno dashboard](./images/cron-tasks.png)

### Pricing

`Deno.cron` invocations are charged at the same rate as inbound HTTP requests to your deployments. Learn more about pricing [here](https://deno.com/deploy/pricing).

### Deploy-specific limitations

- `Deno.cron` is only available for production deployments (not preview deployments)
- The exact invocation time of your `Deno.cron` handler may vary by up to a minute from the scheduled time

## Cron configuration examples

Here are a few common cron configurations, provided for your convenience.
```ts title="Run once a minute" Deno.cron("Run once a minute", "* * * * *", () => { console.log("Hello, cron!"); }); ``` ```ts title="Run every fifteen minutes" Deno.cron("Run every fifteen minutes", "*/15 * * * *", () => { console.log("Hello, cron!"); }); ``` ```ts title="Run once an hour on the hour" Deno.cron("Run once an hour on the hour", "0 * * * *", () => { console.log("Hello, cron!"); }); ``` ```ts title="Run every three hours" Deno.cron("Run every three hours", "0 */3 * * *", () => { console.log("Hello, cron!"); }); ``` ```ts title="Run every day at 1am" Deno.cron("Run every day at 1am", "0 1 * * *", () => { console.log("Hello, cron!"); }); ``` ```ts title="Run every Wednesday at midnight" Deno.cron("Run every Wednesday at midnight", "0 0 * * WED", () => { console.log("Hello, cron!"); }); ``` ```ts title="Run on the first of the month at midnight" Deno.cron("Run on the first of the month at midnight", "0 0 1 * *", () => { console.log("Hello, cron!"); }); ``` --- # Custom domains URL: https://docs.deno.com/deploy/classic/custom-domains :::info Legacy Documentation You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform. ::: By default a project can be reached at its preview URL, which is `$PROJECT_ID.deno.dev`, e.g. `dead-clam-55.deno.dev`. You can also add a custom domain by following the instructions below. ## **Step 1:** Add your custom domain in the Deno Deploy Classic dashboard 1. Click the "Settings" button on the project page, then select "Domains" from the sidebar. 2. Enter the domain name you wish to add to the project and press "Add." Note that you must own the domain that you want to add to a project. If you do not own a domain yet, you can register one at a domain registrar like Google Domains, Namecheap, or gandi.net. 3. The domain is added to the domains list and will have a "setup" badge. 4. Click on the "setup" badge to visit the domain setup page, which will display the list of DNS records that need to be created/updated for your domain. ## **Step 2:** Update your custom domain's DNS records Go to the DNS configuration panel of your domain registrar (or the service you're using to manage DNS) and enter the records as described on the domain setup page. ## **Step 3:** Validate that the DNS records have been updated Go back to the Deno Deploy Classic dashboard and click the **Validate** button on the domain setup page. It will check if the DNS records are correctly set and if so, update the status to "Validated, awaiting certificate provisioning." ## **Step 4:** Provision a certificate for your custom domain At this point you have two options. 99% of the time, you should choose the first option. 1. Let us automatically provision a certificate using Let's Encrypt. To do this, press the **Get automatic certificates** button. Provisioning a TLS certificate can take up to a minute. It is possible that the provisioning fails if your domain specifies a CAA record that prevents [Let's Encrypt](https://letsencrypt.org/) from provisioning certificates. Certificates will be automatically renewed around 30 days before the certificate expires. When you have been issued certificates successfully, you will see a green checkmark. 2. Manually upload a certificate and private key. To manually upload a certificate chain and private key, press the **Upload your own certificates** button. You will be prompted to upload a certificate chain and private key. 
The certificate chain needs to be complete and valid, and your leaf certificate needs to be at the top of the chain.

---

# Using deployctl on the command line

URL: https://docs.deno.com/deploy/classic/deployctl

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

`deployctl` is a command line tool (CLI) that lets you operate the Deno Deploy platform without leaving your terminal. With it you can deploy your code, create and manage your projects and their deployments, and monitor their usage and logs.

## Dependencies

The only dependency for `deployctl` is the Deno runtime. You can install it by running the following command:

```sh
curl -fsSL https://deno.land/install.sh | sh
```

You don't need to set up a Deno Deploy Classic account beforehand. It will be created along the way when you deploy your first project.

## Install `deployctl`

With the Deno runtime installed, you can install the `deployctl` utility with the following command:

```sh
deno install -gArf jsr:@deno/deployctl
```

The `-A` option in the deno install command grants all permissions to the installed script. You can opt not to use it, in which case you will be prompted to grant the necessary permissions when needed during the execution of the tool.

## Deploy

To perform a new deployment of your code, navigate to the root directory of your project and execute:

```shell
deployctl deploy
```

### Project and Entrypoint

If this is the first deployment of the project, `deployctl` will guess the project name based on the Git repo or directory it is in. Similarly, it will guess the entrypoint by looking for files with common entrypoint names (main.ts, src/main.ts, etc.). After the first deployment, the settings used will be stored in a config file (by default deno.json).

You can specify the project name and/or the entrypoint using the `--project` and `--entrypoint` arguments respectively. If the project does not exist, it will be created automatically. By default it is created in the personal organization of the user, but it can also be created in a custom organization by specifying the `--org` argument. If the organization does not exist yet, it will also be created automatically.

```shell
deployctl deploy --project=helloworld --entrypoint=src/entrypoint.ts --org=my-team
```

### Include and Exclude Files

By default, deployctl deploys all the files in the current directory (recursively, except `node_modules` directories). You can customize this behavior using the `--include` and `--exclude` arguments (also supported in the config file). These arguments accept specific files, whole directories, and globs. Here are some examples:

- Include only source and static files:

  ```shell
  deployctl deploy --include=./src --include=./static
  ```

- Include only TypeScript files:

  ```shell
  deployctl deploy --include=**/*.ts
  ```

- Exclude local tooling and artifacts:

  ```shell
  deployctl deploy --exclude=./tools --exclude=./benches
  ```

A common pitfall is to not include the source code modules that need to be run (entrypoint and dependencies). The following example will fail because `main.ts` is not included:

```shell
deployctl deploy --include=./static --entrypoint=./main.ts
```

The entrypoint can also be a remote script. A common use case for this is to deploy a static site using `std/http/file_server.ts`.
```shell
deployctl deploy --include=dist --entrypoint=jsr:@std/http/file-server
```

### Environment variables

You can set env variables using `--env` (to set individual environment variables) or `--env-file` (to load one or more environment files). These options can be combined and used multiple times:

```shell
deployctl deploy --env-file --env-file=.other-env --env=DEPLOYMENT_TS=$(date +%s)
```

The deployment will have access to these variables using `Deno.env.get()`. Be aware that the env variables set with `--env` and `--env-file` are specific to the deployment being created and are not added to the list of [env variables configured for the project](./environment-variables.md).

### Production Deployments

Each deployment you create has a unique URL. In addition, a project has a "production URL" and custom domains routing traffic to its "production" deployment. Deployments can be promoted to production at any time, or created directly as production using the `--prod` flag:

```shell
deployctl deploy --prod
```

Learn more about production deployments in the [Deployments](./deployments) docs.

## Deployments

The deployments subcommand groups all the operations around deployments.

### List

You can list the deployments of a project with:

```shell
deployctl deployments list
```

Output:

```
✔ Page 1 of the list of deployments of the project 'my-project' is ready
┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Deployment │ Date │ Status │ Database │ Domain │ Entrypoint │ Branch │ Commit │
├───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ kcbxc4xwe4mc │ 12/3/2024 13:21:40 CET (2 days) │ Preview │ Preview │ https://my-project-kcbxc4xwe4mc.deno.dev │ main.ts │ main │ 4b6c506 │
│ c0ph5xa9exb3 │ 12/3/2024 13:21:25 CET (2 days) │ Production │ Production │ https://my-project-c0ph5xa9exb3.deno.dev │ main.ts │ main │ 4b6c506 │
│ kwkbev9er4h2 │ 12/3/2024 13:21:12 CET (2 days) │ Preview │ Preview │ https://my-project-kwkbev9er4h2.deno.dev │ main.ts │ main │ 4b6c506 │
│ dxseq0jc8402 │ 6/3/2024 23:16:51 CET (8 days) │ Preview │ Production │ https://my-project-dxseq0jc8402.deno.dev │ main.ts │ main │ 099359b │
│ 7xr5thz8yjbz │ 6/3/2024 22:58:32 CET (8 days) │ Preview │ Preview │ https://my-project-7xr5thz8yjbz.deno.dev │ main.ts │ another │ a4d2953 │
│ 4qr4h5ac3rfn │ 6/3/2024 22:57:05 CET (8 days) │ Failed │ Preview │ n/a │ main.ts │ another │ 56d2c88 │
│ 25wryhcqmb9q │ 6/3/2024 22:56:41 CET (8 days) │ Preview │ Preview │ https://my-project-25wryhcqmb9q.deno.dev │ main.ts │ another │ 4b6c506 │
│ 64tbrn8jre9n │ 6/3/2024 8:21:33 CET (8 days) │ Preview │ Production │ https://my-project-64tbrn8jre9n.deno.dev │ main.ts │ main │ 4b6c506 │
│ hgqgccnmzg04 │ 6/3/2024 8:17:40 CET (8 days) │ Failed │ Production │ n/a │ main.ts │ main │ 8071902 │
│ rxkh1w3g74e8 │ 6/3/2024 8:17:28 CET (8 days) │ Failed │ Production │ n/a │ main.ts │ main │ b142a59 │
│ wx6cw9aya64c │ 6/3/2024 8:02:29 CET (8 days) │ Preview │ Production │ https://my-project-wx6cw9aya64c.deno.dev │ main.ts │ main │ b803784 │
│ a1qh5fmew2yf │ 5/3/2024 16:25:29 CET (9 days) │ Preview │ Production │ https://my-project-a1qh5fmew2yf.deno.dev │ main.ts │ main │ 4bb1f0f │
│ w6pf4r0rrdkb │ 5/3/2024 16:07:35 CET (9 days) │ Preview │ Production │ https://my-project-w6pf4r0rrdkb.deno.dev │ main.ts │ main │ 6e487fc │
│ nn700gexgdzq │ 5/3/2024 13:37:11 CET (9 days) │ Preview │ Production │ https://my-project-nn700gexgdzq.deno.dev │ main.ts │ main │ c5b1d1f │
│ 98crfqxa6vvf │ 5/3/2024 13:33:52 CET (9 days) │ Preview │ Production │ https://my-project-98crfqxa6vvf.deno.dev │ main.ts │ main │ 090146e │
│ xcdcs014yc5p │ 5/3/2024 13:30:58 CET (9 days) │ Preview │ Production │ https://my-project-xcdcs014yc5p.deno.dev │ main.ts │ main │ 5b78c0f │
│ btw43kx89ws1 │ 5/3/2024 13:27:31 CET (9 days) │ Preview │ Production │ https://my-project-btw43kx89ws1.deno.dev │ main.ts │ main │ 663452a │
│ 62tg1ketkjx7 │ 5/3/2024 13:27:03 CET (9 days) │ Preview │ Production │ https://my-project-62tg1ketkjx7.deno.dev │ main.ts │ main │ 24d1618 │
│ 07ag6pt6kjex │ 5/3/2024 13:19:11 CET (9 days) │ Preview │ Production │ https://my-project-07ag6pt6kjex.deno.dev │ main.ts │ main │ 4944545 │
│ 4msyne1rvwj1 │ 5/3/2024 13:17:16 CET (9 days) │ Preview │ Production │ https://my-project-4msyne1rvwj1.deno.dev │ main.ts │ main │ dda85e1 │
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
Press enter to fetch the next page [Enter]
```

This command outputs pages of 20 deployments by default. You can iterate over the pages with the enter key, and use the `--page` and `--limit` options to query a specific page and page size. As with the rest of the commands, you can use the `--project` option to specify the project of which to list deployments, if you are not in a project directory or want to list deployments from a different project.

### Show

Get all the details of a particular deployment using:

```shell
deployctl deployments show
```

Output:

```
✔ The production deployment of the project 'my-project' is 'c0ph5xa9exb3'
✔ The details of the deployment 'c0ph5xa9exb3' are ready:

c0ph5xa9exb3
------------
Status: Production
Date: 2 days, 12 hours, 29 minutes, 46 seconds ago (12/3/2024 13:21:25 CET)
Project: my-project (e54f23b5-828d-4b7f-af12-706d4591062b)
Organization: my-team (d97822ac-ee20-4ce9-b942-5389330b57ee)
Domain(s): https://my-project.deno.dev
           https://my-project-c0ph5xa9exb3.deno.dev
Database: Production (0efa985f-3793-48bc-8c05-f740ffab4ca0)
Entrypoint: main.ts
Env Vars: HOME
Git:
  Ref: main [4b6c506]
  Message: change name
  Author: John Doe @johndoe [mailto:johndoe@deno.com]
  Url: https://github.com/arnauorriols/my-project/commit/4b6c50629ceeeb86601347732d01dc7ed63bf34f
Crons:
  another cron [*/10 * * * *] succeeded at 15/3/2024 1:50:00 CET after 2 seconds (next at 15/3/2024 2:00:00 CET)
  newest cron [*/10 * * * *] n/a
  yet another cron [*/10 * * * *] failed at 15/3/2024 1:40:00 CET after 2 seconds (next at 15/3/2024 1:51:54 CET)
```

If no deployment is specified, the command shows the details of the current production deployment of the project. To see the details of the last deployment, use `--last`, and to see the details of a particular deployment, use `--id` (or the positional argument). You can also use `--next` or `--prev` to navigate the deployments chronologically.

For example, to see the details of the second to last deployment, you can do:

```shell
deployctl deployments show --last --prev
```

And to see the details of 2 deployments after a specific deployment:

```shell
deployctl deployments show 64tbrn8jre9n --next=2
```

### Redeploy

The redeploy command creates a new deployment reusing the build of an existing deployment, for the purpose of changing the resources associated with it.
This includes production domains, environment variables, and KV databases.

:::info
The semantics of selecting the deployment to redeploy are the same as those of the [show subcommand](#show), including `--last`, `--id`, `--next` and `--prev`.
:::

#### Production Domains

If you want to change the routing of the production domains of the project to a particular deployment, you can redeploy it with the `--prod` option:

```shell
deployctl deployments redeploy --prod 64tbrn8jre9n
```

This will create a new deployment with the same code and environment variables as the specified deployment, but with the production domains of the project pointing to it. For those projects with preview/prod databases (i.e. projects linked to GitHub), this will also set the production database for the new deployment.

:::note
This feature is similar to the "promote to production" button found in the Deno Deploy Classic web application, with the exception that the "promote to production" button does not create a new deployment. Instead, the "promote to production" button changes the domain routing in-place; however, it's restricted to deployments already using the production database.
:::

#### KV Database

If this is a GitHub deployment, it will have two databases, one for prod deployments and one for preview deployments. You can change the database of a deployment by redeploying it with the `--db` option:

```shell
deployctl deployments redeploy --db=prod --id=64tbrn8jre9n
```

:::note
When redeploying a deployment to prod, by default it will automatically be configured to use the prod database. You can combine both `--prod` and `--db` options to opt out of this behavior. For example, the following command will redeploy the current production deployment (given the lack of positional argument, `--id` or `--last`). The new deployment will become the new production deployment, but it will use the preview database instead of the production database:

```shell
deployctl deployments redeploy --prod --db=preview
```
:::

If your organization has custom databases, you can also set them by UUID:

```shell
deployctl deployments redeploy --last --db=5261e096-f9aa-4b72-8440-1c2b5b553def
```

#### Environment Variables

When a deployment is created, it inherits the environment variables of the project. Given that deployments are immutable, their environment variables can never be changed. To set new environment variables in a deployment, you need to redeploy it using `--env` (to set individual variables) and `--env-file` (to load one or more environment files).

The following command redeploys the current production deployment with the env variables defined in the `.env` and `.other-env` files, plus the `DEPLOYMENT_TS` variable set to the current timestamp. The resulting deployment will be a preview deployment (i.e. the production domains won't route traffic to it, given the lack of `--prod`).

```shell
deployctl deployments redeploy --env-file --env-file=.other-env --env=DEPLOYMENT_TS=$(date +%s)
```

:::note
Be aware that when changing env variables, only the env variables set in the redeploy command will be used by the new deployment. The project env variables and the env variables of the deployment being redeployed are ignored.
If this does not suit your needs, please report your feedback at https://github.com/denoland/deploy_feedback/issues/
:::

:::note
When you change the project environment variables in the Deno Deploy Classic web application, the current production deployment is redeployed with the new environment variables, and the new deployment becomes the new production deployment.
:::

### Delete

You can delete a deployment using the `delete` subcommand:

```shell
deployctl deployments delete 64tbrn8jre9n
```

Like `show` and `redeploy`, `delete` can also use `--last`, `--next` and `--prev` to select the deployment to delete. Here's an example command that deletes all the deployments of a project except the last (use with caution!):

```shell
while deployctl deployments delete --project=my-project --last --prev; do :; done
```

## Projects

The `projects` subcommand groups all the operations against projects as a whole. This includes `list`, `show`, `rename`, `create` and `delete`.

### List

`deployctl projects list` outputs all the projects your user has access to, grouped by organization:

```
Personal org:
    blog
    url-shortener

'my-team' org:
    admin-site
    main-site
    analytics
```

You can filter by organization using `--org`:

```shell
deployctl projects list --org=my-team
```

### Show

To see the details of a particular project, use `projects show`. If you are inside a project, it will pick up the project id from the config file. You can also specify the project using `--project` or the positional argument:

```shell
deployctl projects show main-site
```

Output:

```
main-site
---------
Organization: my-team (5261e096-f9aa-4b72-8440-1c2b5b553def)
Domain(s): https://my-team.com
           https://main-site.deno.dev
Dash URL: https://dash.deno.com/projects/8422c515-f68f-49b2-89f3-157f4b144611
Repository: https://github.com/my-team/main-site
Databases: [main] dd28e63e-f495-416b-909a-183380e3a232
           [*] e061c76e-4445-409a-bc36-a1a9040c83b3
Crons: another cron [*/10 * * * *] succeeded at 12/3/2024 14:40:00 CET after 2 seconds (next at 12/3/2024 14:50:00 CET)
       newest cron [*/10 * * * *] n/a
       yet another cron [*/10 * * * *] failed at 12/3/2024 14:40:00 CET after 2 seconds (next at 12/3/2024 14:50:00 CET)
Deployments: kcbxc4xwe4mc c0ph5xa9exb3* kwkbev9er4h2 dxseq0jc8402 7xr5thz8yjbz
             4qr4h5ac3rfn 25wryhcqmb9q 64tbrn8jre9n hgqgccnmzg04 rxkh1w3g74e8
             wx6cw9aya64c a1qh5fmew2yf w6pf4r0rrdkb nn700gexgdzq 98crfqxa6vvf
             xcdcs014yc5p btw43kx89ws1 62tg1ketkjx7 07ag6pt6kjex 4msyne1rvwj1
```

### Rename

Projects can be renamed easily with the `rename` subcommand. Similarly to the other commands, if you run the command from within a project's directory, you don't need to specify the current name of the project:

```shell
deployctl projects rename my-personal-blog
```

Output:

```
ℹ Using config file '/private/tmp/blog/deno.json'
✔ Project 'blog' (8422c515-f68f-49b2-89f3-157f4b144611) found
✔ Project 'blog' renamed to 'my-personal-blog'
```

:::note
Keep in mind that the name of the project is part of the preview domains (https://my-personal-blog-kcbxc4xwe4mc.deno.dev) and the default production domain (https://my-personal-blog.deno.dev). Therefore, when changing the project name, the URLs with the previous name will no longer route to the project's corresponding deployments.
:::

### Create

You can create an empty project with:

```shell
deployctl projects create my-new-project
```

### Delete

You can delete a project with:

```shell
deployctl projects delete my-new-project
```

## Top

The `top` subcommand is used to monitor the resource usage of a project in real-time:

```shell
deployctl top
```

Output:

```
┌────────┬────────────────┬────────────────────────┬─────────┬───────┬─────────┬──────────┬─────────────┬────────────┬─────────┬─────────┬───────────┬───────────┐
│ (idx) │ deployment │ region │ Req/min │ CPU% │ CPU/req │ RSS/5min │ Ingress/min │ Egress/min │ KVr/min │ KVw/min │ QSenq/min │ QSdeq/min │
├────────┼────────────────┼────────────────────────┼─────────┼───────┼─────────┼──────────┼─────────────┼────────────┼─────────┼─────────┼───────────┼───────────┤
│ 6b80e8 │ "kcbxc4xwe4mc" │ "asia-northeast1" │ 80 │ 0.61 │ 4.56 │ 165.908 │ 11.657 │ 490.847 │ 0 │ 0 │ 0 │ 0 │
│ 08312f │ "kcbxc4xwe4mc" │ "asia-northeast1" │ 76 │ 3.49 │ 27.58 │ 186.278 │ 19.041 │ 3195.288 │ 0 │ 0 │ 0 │ 0 │
│ 77c10b │ "kcbxc4xwe4mc" │ "asia-south1" │ 28 │ 0.13 │ 2.86 │ 166.806 │ 7.354 │ 111.478 │ 0 │ 0 │ 0 │ 0 │
│ 15e356 │ "kcbxc4xwe4mc" │ "asia-south1" │ 66 │ 0.97 │ 8.93 │ 162.288 │ 17.56 │ 4538.371 │ 0 │ 0 │ 0 │ 0 │
│ a06817 │ "kcbxc4xwe4mc" │ "asia-southeast1" │ 126 │ 0.44 │ 2.11 │ 140.087 │ 16.504 │ 968.794 │ 0 │ 0 │ 0 │ 0 │
│ d012b6 │ "kcbxc4xwe4mc" │ "asia-southeast1" │ 119 │ 2.32 │ 11.72 │ 193.704 │ 23.44 │ 8359.829 │ 0 │ 0 │ 0 │ 0 │
│ 7d9a3d │ "kcbxc4xwe4mc" │ "australia-southeast1" │ 8 │ 0.97 │ 75 │ 158.872 │ 10.538 │ 3.027 │ 0 │ 0 │ 0 │ 0 │
│ 3c21be │ "kcbxc4xwe4mc" │ "australia-southeast1" │ 1 │ 0.04 │ 90 │ 105.292 │ 0.08 │ 1.642 │ 0 │ 0 │ 0 │ 0 │
│ b75dc7 │ "kcbxc4xwe4mc" │ "europe-west2" │ 461 │ 5.43 │ 7.08 │ 200.573 │ 63.842 │ 9832.936 │ 0 │ 0 │ 0 │ 0 │
│ 33607e │ "kcbxc4xwe4mc" │ "europe-west2" │ 35 │ 0.21 │ 3.69 │ 141.98 │ 9.438 │ 275.788 │ 0 │ 0 │ 0 │ 0 │
│ 9be3d2 │ "kcbxc4xwe4mc" │ "europe-west2" │ 132 │ 0.92 │ 4.19 │ 180.654 │ 15.959 │ 820.513 │ 0 │ 0 │ 0 │ 0 │
│ 33a859 │ "kcbxc4xwe4mc" │ "europe-west3" │ 1335 │ 7.57 │ 3.4 │ 172.032 │ 178.064 │ 10967.918 │ 0 │ 0 │ 0 │ 0 │
│ 3f54ce │ "kcbxc4xwe4mc" │ "europe-west4" │ 683 │ 4.76 │ 4.19 │ 187.802 │ 74.696 │ 7565.017 │ 0 │ 0 │ 0 │ 0 │
│ cf881c │ "kcbxc4xwe4mc" │ "europe-west4" │ 743 │ 3.95 │ 3.19 │ 177.213 │ 86.974 │ 6087.454 │ 0 │ 0 │ 0 │ 0 │
│ b4565b │ "kcbxc4xwe4mc" │ "me-west1" │ 3 │ 0.21 │ 55 │ 155.46 │ 2.181 │ 0.622 │ 0 │ 0 │ 0 │ 0 │
│ b97970 │ "kcbxc4xwe4mc" │ "southamerica-east1" │ 3 │ 0.08 │ 25 │ 186.049 │ 1.938 │ 0.555 │ 0 │ 0 │ 0 │ 0 │
│ fd7a08 │ "kcbxc4xwe4mc" │ "us-east4" │ 3 │ 0.32 │ 80 │ 201.101 │ 0.975 │ 58.495 │ 0 │ 0 │ 0 │ 0 │
│ 95d68a │ "kcbxc4xwe4mc" │ "us-east4" │ 133 │ 1.05 │ 4.77 │ 166.052 │ 28.107 │ 651.737 │ 0 │ 0 │ 0 │ 0 │
│ c473e7 │ "kcbxc4xwe4mc" │ "us-east4" │ 0 │ 0 │ 0 │ 174.154 │ 0.021 │ 0 │ 0 │ 0 │ 0 │ 0 │
│ ebabfb │ "kcbxc4xwe4mc" │ "us-east4" │ 19 │ 0.15 │ 4.78 │ 115.732 │ 7.764 │ 67.054 │ 0 │ 0 │ 0 │ 0 │
│ eac700 │ "kcbxc4xwe4mc" │ "us-south1" │ 114 │ 2.37 │ 12.54 │ 183.001 │ 18.401 │ 22417.397 │ 0 │ 0 │ 0 │ 0 │
│ cd2194 │ "kcbxc4xwe4mc" │ "us-south1" │ 35 │ 0.33 │ 5.68 │ 145.871 │ 8.142 │ 91.236 │ 0 │ 0 │ 0 │ 0 │
│ 140fec │ "kcbxc4xwe4mc" │ "us-west2" │ 110 │ 1.43 │ 7.84 │ 115.298 │ 18.093 │ 977.993 │ 0 │ 0 │ 0 │ 0 │
│ 51689f │ "kcbxc4xwe4mc" │ "us-west2" │ 1105 │ 7.66 │ 4.16 │ 187.277 │ 154.876 │ 14648.383 │ 0 │ 0 │ 0 │ 0 │
│ c5806e │ "kcbxc4xwe4mc" │ "us-west2" │ 620 │ 4.38 │ 4.24 │ 192.291 │ 109.086 │ 9685.688 │ 0 │ 0 │ 0 │ 0 │
└────────┴────────────────┴────────────────────────┴─────────┴───────┴─────────┴──────────┴─────────────┴────────────┴─────────┴─────────┴───────────┴───────────┘
⠼ Streaming...
```

The columns are defined as follows:

| Column | Description |
| ------ | ----------- |
| idx | Instance discriminator. Opaque id to discriminate different executions running in the same region. |
| deployment | The id of the deployment running in the executing instance. |
| Req/min | Requests per minute received by the project. |
| CPU% | Percentage of CPU used by the project. |
| CPU/req | CPU time per request, in milliseconds. |
| RSS/5min | Max RSS used by the project during the last 5 minutes, in MB. |
| Ingress/min | Data received by the project per minute, in KB. |
| Egress/min | Data output by the project per minute, in KB. |
| KVr/min | KV reads performed by the project per minute. |
| KVw/min | KV writes performed by the project per minute. |
| QSenq/min | Queues enqueues performed by the project per minute. |
| QSdeq/min | Queues dequeues performed by the project per minute. |

You can filter by region using `--region`, which accepts substrings and can be used multiple times:

```shell
deployctl top --region=asia --region=southamerica
```

## Logs

You can fetch the logs of your deployments with `deployctl logs`. It supports both live logs, where the logs are streamed to the console as they are generated, and querying persisted logs, where the logs generated in the past are fetched.

To show the live logs of the current production deployment of a project:

```shell
deployctl logs
```

:::note
Unlike in the Deno Deploy Classic web application, at the moment the logs subcommand does not automatically switch to the new production deployment when it changes.
:::

To show the live logs of a particular deployment:

```shell
deployctl logs --deployment=1234567890ab
```

Logs can be filtered by level, region and text using the `--levels`, `--regions` and `--grep` options:

```shell
deployctl logs --levels=error,info --regions=region1,region2 --grep='unexpected'
```

To show the persisted logs, use the `--since` and/or `--until` options:

```sh
deployctl logs --since=$(date -Iseconds -v-2H) --until=$(date -Iseconds -v-30M)
```

```sh
deployctl logs --since=$(date -Iseconds --date='2 hours ago') --until=$(date -Iseconds --date='30 minutes ago')
```

## API

If you use the [subhosting API](../../subhosting/manual/index.md), `deployctl api` will help you interact with the API by handling the authentication and headers for you:

```shell
deployctl api /projects/my-personal-blog/deployments
```

Use `--method` and `--body` to specify the HTTP method and the request body:

```shell
deployctl api --method=POST --body='{"name": "main-site"}' organizations/5261e096-f9aa-4b72-8440-1c2b5b553def/projects
```

## Local Development

For local development you can use the `deno` CLI. To install `deno`, follow the instructions in the [Deno manual](https://deno.land/manual/getting_started/installation). After installation, you can run your scripts locally:

```shell
$ deno run --allow-net=:8000 ./main.ts
Listening on http://localhost:8000
```

To watch for file changes add the `--watch` flag:

```shell
$ deno run --allow-net=:8000 --watch ./main.ts
Listening on http://localhost:8000
```

For more information about the Deno CLI, and how to configure your development environment and IDE, visit the Deno Manual's [Getting Started][manual-gs] section.
[manual-gs]: https://deno.land/manual/getting_started

## JSON output

All the commands that output data have a `--format=json` option that outputs the data as JSON objects. This output mode is the default when stdout is not a TTY, notably when piping to another command. Together with `jq`, this mode enables the programmatic use of all the data provided by `deployctl`:

Get the id of the current production deployment:

```shell
deployctl deployments show | jq .build.deploymentId
```

Get a csv stream of the CPU time per request on each isolate of each region:

```shell
deployctl top | jq -r '[.id,.region,.cpuTimePerRequest] | @csv'
```

---

# Deployments

URL: https://docs.deno.com/deploy/classic/deployments

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

A deployment is a snapshot of the code and environment variables required to run an application. A new deployment can be created [via `deployctl`](./deployctl.md#deploy) or automatically via Deploy's GitHub integration, if configured. Deployments are immutable after they have been created. To deploy a new version of the code for an application, a new deployment must be created. Once created, deployments remain accessible.

All available deployments are listed on your project page under the `Deployments` tab, pictured below. Old deployments can be deleted [via `deployctl`](./deployctl.md#delete) and [via API](https://apidocs.deno.com/#delete-/deployments/-deploymentId-).

![showing the deployments tab in the project dashboard](./images/project_deployments.png)

## Custom domains

There can also be other URLs that point to a deployment, such as [custom domains](custom-domains).

## Branch domains

Branch domains (`{project_name}--{branch_name}.deno.dev`) are also supported.

## Production vs. preview deployments

All deployments have a preview URL that can be used to view this specific deployment. Preview URLs have the format `{project_name}-{deployment_id}.deno.dev`.

![a preview deployment URL](../docs-images/preview_deployment.png)

A deployment can either be a production or a preview deployment. These deployments do not have any differences in runtime functionality. The only distinguishing factor is that a project's production deployment will receive traffic from the project URL (e.g. `myproject.deno.dev`), and from custom domains, in addition to traffic to the deployment's preview URL.

## Promoting preview deployments to production deployments via Deno Deploy Classic UI

Preview deployments can be "promoted" to production via the Deno Deploy Classic UI:

1. Navigate to the project page.
2. Click on the **Deployments** tab.
3. Click on the three dots next to the deployment you want to promote to production and select **Promote to Production**

![promote_to_production](../docs-images/promote_to_production.png)

Promoting deployments to production is restricted to deployments that already use the production KV database. This is particularly relevant for GitHub deployments, which use a different database for preview and production deployments. Deployments (even those that use the preview KV database) can always be redeployed to production using [the `deployctl deployments redeploy` command](./deployctl.md#production-domains).
## Creating production deployments via `deployctl`

If you are deploying your Deno code with `deployctl`, you can deploy directly to production with the `--prod` flag:

```sh
deployctl deploy --prod --project=helloworld main.ts
```

---

# Connect to DynamoDB

URL: https://docs.deno.com/deploy/classic/dynamodb

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

Amazon DynamoDB is a fully managed NoSQL database. To persist data to DynamoDB, follow the steps below. The tutorial assumes that you have an AWS and a Deno Deploy Classic account.

## Gather credentials from DynamoDB

The first step in the process is to generate AWS credentials to programmatically access DynamoDB.

Generate credentials:

1. Go to https://console.aws.amazon.com/iam/ and go to the "Users" section.
2. Click on the **Add user** button, fill the **User name** field (maybe use `denamo`), and select the **Programmatic access** type.
3. Click on **Next: Permissions**, then on **Attach existing policies directly**, search for `AmazonDynamoDBFullAccess` and select it.
4. Click on **Next: Tags**, then on **Next: Review** and finally **Create user**.
5. Click on the **Download .csv** button to download the credentials.

## Create a project in Deno Deploy

Next, let's create a project in Deno Deploy Classic and set it up with the requisite environment variables:

1. Go to [https://dash.deno.com/new](https://dash.deno.com/new) (sign in with GitHub if you didn't already) and click on **+ Empty Project** under **Deploy from the command line**.
2. Now click on the **Settings** button available on the project page.
3. Navigate to the **Environment Variables** section and add the following secrets:
   - `AWS_ACCESS_KEY_ID` - Use the value that's available under the **Access key ID** column in the downloaded CSV.
   - `AWS_SECRET_ACCESS_KEY` - Use the value that's available under the **Secret access key** column in the downloaded CSV.

## Write code that connects to DynamoDB

AWS has an [official SDK](https://www.npmjs.com/package/@aws-sdk/client-dynamodb) that works with browsers. As most of Deno Deploy's APIs are similar to browsers', the same SDK works with Deno Deploy. To use the SDK in Deno, import it from a CDN and create a client. The sketch below assumes a `songs` table keyed by `title`, and an example region; adjust both to match your setup:

```js
import {
  DynamoDBClient,
  GetItemCommand,
  PutItemCommand,
} from "https://esm.sh/@aws-sdk/client-dynamodb?dts";

// Create a client instance by providing your region information and the
// credentials we set as environment variables during our project creation
// step on Deno Deploy.
const client = new DynamoDBClient({
  region: "us-east-1", // Replace with the region of your DynamoDB table.
  credentials: {
    accessKeyId: Deno.env.get("AWS_ACCESS_KEY_ID"),
    secretAccessKey: Deno.env.get("AWS_SECRET_ACCESS_KEY"),
  },
});

Deno.serve(async (request) => {
  // Only handle requests to /songs.
  const url = new URL(request.url);
  if (url.pathname !== "/songs") {
    return new Response("Not Found", { status: 404 });
  }
  try {
    // Read an item from the table. PutItemCommand can be used in the
    // same way to write data.
    const data = await client.send(
      new GetItemCommand({
        TableName: "songs",
        Key: { title: { S: "Song Title" } },
      }),
    );
    return Response.json(data.Item ?? {});
  } catch (error) {
    // Error handling.
    return new Response(error.message, { status: 500 });
  }
});
```

## Deploy application to Deno Deploy Classic

Once you have finished writing your application, you can deploy it on Deno Deploy Classic. To do this, go back to your project page at `https://dash.deno.com/projects/`.

You should see a couple of options to deploy:

- [GitHub integration](ci_github)
- [`deployctl`](./deployctl.md)

  ```sh
  deployctl deploy --project=
  ```

Unless you want to add a build step, we recommend that you select the GitHub integration.

For more details on the different ways to deploy on Deno Deploy Classic and the different configuration options, read [here](how-to-deploy).
---

# Edge Cache

URL: https://docs.deno.com/deploy/classic/edge_cache

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

The [Web Cache API](https://developer.mozilla.org/en-US/docs/Web/API/Cache) is supported on Deno Deploy. The cache is designed to provide microsecond-level read latency, multi-GB/s write throughput and unbounded storage, with the tradeoff of best-effort consistency and durability.

```ts
const cache = await caches.open("my-cache");

Deno.serve(async (req) => {
  const cached = await cache.match(req);
  if (cached) {
    return cached;
  }

  const res = new Response("cached at " + new Date().toISOString());
  await cache.put(req, res.clone());
  return res;
});
```

Cached data is stored in the same Deno Deploy Classic region that runs your code. Usually your isolate observes read-after-write (RAW) and write-after-write (WAW) consistency within the same region; however, in rare cases recent writes can be lost, out-of-order, or temporarily invisible.

## Expiration

By default, cached data is persisted for an indefinite period of time. While we periodically scan and delete inactive objects, an object is usually kept in cache for at least 30 days. Edge Cache understands the standard HTTP response headers `Expires` and `Cache-Control`. You can use them to specify an expiration time for every cached object, for example:

```
Expires: Thu, 22 Aug 2024 01:22:31 GMT
```

or:

```
Cache-Control: max-age=86400
```

## Limitations

- If a response is not constructed from a `Uint8Array` or `string` body, the `Content-Length` header needs to be manually set.
- Deletion is not yet supported.

---

# Environment variables

URL: https://docs.deno.com/deploy/classic/environment-variables

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

Environment variables are useful for storing values like access tokens of web services. Each deployment has a set of environment variables defined at the moment of creation and accessible from the code via the `Deno.env` API. There are two ways to define the environment variables of a deployment:

## Project environment variables

You can define environment variables at the project level. When you create a deployment, it will get the set of environment variables the project has defined _at that particular moment_. For convenience, when you change the environment variables of a project, the current production deployment is _redeployed_, creating a new production deployment with the new set of environment variables.

:::note
Deployments are immutable, including their environment variables. Changing the environment variables of a project does not change the environment variables of existing deployments.
:::

To add an environment variable to your project, click on the **Settings** button on the project page and then on **Environment Variables** from the sidebar. Fill in the key/value fields and click on "Add" to add an environment variable to your project.

![environment_variable](../docs-images/fauna2.png)

Updating an existing environment variable works the same way. Click on the "Add Variable" button, enter the same name of the environment variable you wish to update, and enter the new value. Click on the "Save" button to complete the update.
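For example, a deployment can read one of these variables with `Deno.env.get()`. A minimal sketch (`ACCESS_TOKEN` is a hypothetical variable name):

```ts
// Read an environment variable defined in the project settings.
// Deno.env.get() returns undefined when the variable is not set.
const token = Deno.env.get("ACCESS_TOKEN");

Deno.serve(() =>
  new Response(token ? "Token is configured." : "No token set.")
);
```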
## Deployment environment variables

When deploying using `deployctl`, you can specify environment variables [using the `--env` or `--env-file` flags](./deployctl.md#environment-variables), complementing the environment variables already defined for the project. You can also pass multiple `--env-file` arguments (e.g., `--env-file=.env.one --env-file=.env.two`) to include variables from multiple files.

:::note
When multiple declarations for the same environment variable exist within a single `.env` file, the first occurrence is applied. However, if the same variable is defined across multiple `.env` files (using multiple `--env-file` arguments), the value from the last file specified takes precedence. This means that the first occurrence found in the last `.env` file listed will be applied.
:::

These env variables will be specific to the deployment being created.

### Default environment variables

Every deployment has the following environment variables preset, which you can access from your code:

1. `DENO_REGION` - Holds the region code of the region in which the deployment is running. You can use this variable to serve region-specific content. You can refer to the region code from the [regions page](regions).
2. `DENO_DEPLOYMENT_ID` - Holds the ID of the deployment.

---

# Connect to Firebase

URL: https://docs.deno.com/deploy/classic/firebase

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

Firebase is a platform developed by Google for creating mobile and web applications. Its features include authentication primitives for login and a NoSQL datastore, Firestore, that you can persist data to.

This tutorial covers how to connect to Firebase from an application deployed on Deno Deploy.

## Get credentials from Firebase

> This tutorial assumes that you've already created a project in Firebase and
> added a web application to your project.

1. Navigate to your project in Firebase and click on **Project Settings**
2. Scroll down until you see a card with your app name, and a code sample that includes a `firebaseConfig` object. It should look something like the below. Keep this handy. We will use it later:

   ```js
   var firebaseConfig = {
     apiKey: "APIKEY",
     authDomain: "example-12345.firebaseapp.com",
     projectId: "example-12345",
     storageBucket: "example-12345.appspot.com",
     messagingSenderId: "1234567890",
     appId: "APPID",
   };
   ```

## Create a Project in Deno Deploy

1. Go to [https://dash.deno.com/new](https://dash.deno.com/new) (sign in with GitHub if you didn't already) and click on **+ Empty Project** under **Deploy from the command line**.
2. Now click on the **Settings** button available on the project page.
3. Navigate to the **Environment Variables** section and add the following:
   - `FIREBASE_USERNAME` - The Firebase user (email address) that was added above.
   - `FIREBASE_PASSWORD` - The Firebase user password that was added above.
   - `FIREBASE_CONFIG` - The configuration of the Firebase application as a JSON string.
The configuration needs to be a valid JSON string to be readable by the application. If the code snippet given when setting up looked like this:

```js
var firebaseConfig = {
  apiKey: "APIKEY",
  authDomain: "example-12345.firebaseapp.com",
  projectId: "example-12345",
  storageBucket: "example-12345.appspot.com",
  messagingSenderId: "1234567890",
  appId: "APPID",
};
```

You would need to set the value of the string to this (noting that spacing and new lines are not required):

```json
{
  "apiKey": "APIKEY",
  "authDomain": "example-12345.firebaseapp.com",
  "projectId": "example-12345",
  "storageBucket": "example-12345.appspot.com",
  "messagingSenderId": "1234567890",
  "appId": "APPID"
}
```

## Write code that connects to Firebase

The first thing we will do is import the `XMLHttpRequest` polyfill that Firebase needs to work under Deploy, as well as a polyfill for `localStorage` to allow the Firebase auth to persist logged-in users:

```js
import "https://deno.land/x/xhr@0.1.1/mod.ts";
import { installGlobals } from "https://deno.land/x/virtualstorage@0.1.0/mod.ts";

installGlobals();
```

> ℹ️ We are using the current version of packages at the time of the writing of
> this tutorial. They may not be up-to-date and you may want to double check
> current versions.

Because Deploy has a lot of the web standard APIs, it is best to use the web libraries for Firebase under Deploy. Since this tutorial's code uses the v8-style API, we import it via the `compat` builds of the v9 packages:

```js
import firebase from "https://esm.sh/firebase@9.17.0/compat/app";
import "https://esm.sh/firebase@9.17.0/compat/auth";
import "https://esm.sh/firebase@9.17.0/compat/firestore";
```

Now we need to set up our Firebase application. We will be getting the configuration from the environment variables we set up previously and get references to the parts of Firebase we are going to use:

```js
const firebaseConfig = JSON.parse(Deno.env.get("FIREBASE_CONFIG"));
const firebaseApp = firebase.initializeApp(firebaseConfig, "example");
const auth = firebase.auth(firebaseApp);
const db = firebase.firestore(firebaseApp);
```

OK, we are almost done. We just need to create our middleware application (this tutorial assumes the [oak](https://deno.land/x/oak) framework) and add the `localStorage` middleware:

```js
import { Application } from "https://deno.land/x/oak@v11.1.0/mod.ts";
// The virtualStorage middleware backs the `localStorage` polyfill
// installed earlier with per-request storage.
import { virtualStorage } from "https://deno.land/x/virtualstorage@0.1.0/middleware.ts";

const app = new Application();
app.use(virtualStorage());
```

And then we need to add middleware to authenticate the user. In this tutorial we are simply grabbing the username and password from the environment variables we will be setting up, but this could easily be adapted to redirect a user to a sign-in page if they are not logged in:

```js
// Cache of signed-in Firebase users, keyed by uid.
const users = new Map();

app.use(async (ctx, next) => {
  const signedInUid = await ctx.cookies.get("LOGGED_IN_UID");
  const signedInUser = signedInUid != null ? users.get(signedInUid) : undefined;
  if (!signedInUid || !signedInUser || !auth.currentUser) {
    const creds = await auth.signInWithEmailAndPassword(
      Deno.env.get("FIREBASE_USERNAME"),
      Deno.env.get("FIREBASE_PASSWORD"),
    );
    const { user } = creds;
    if (user) {
      users.set(user.uid, user);
      await ctx.cookies.set("LOGGED_IN_UID", user.uid);
    } else if (signedInUser && signedInUid !== auth.currentUser?.uid) {
      await auth.updateCurrentUser(signedInUser);
    }
  }
  return next();
});
```

## Deploy the application to Deno Deploy

Once you have finished writing your application, you can deploy it on Deno Deploy. To do this, go back to your project page at `https://dash.deno.com/projects/`.
You should see a couple of options to deploy:

- [GitHub integration](ci_github)
- [`deployctl`](./deployctl.md)

  ```sh
  deployctl deploy --project=
  ```

Unless you want to add a build step, we recommend that you select the GitHub integration.

For more details on the different ways to deploy on Deno Deploy Classic and the different configuration options, read [here](how-to-deploy).

---

# Deploy with GitHub integration

URL: https://docs.deno.com/deploy/classic/how-to-deploy

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

The simplest way to deploy more complex projects is via our GitHub integration. This allows you to link a Deno Deploy Classic project to a GitHub repository. Every time you push to the repository, your changes will be automatically deployed.

Via the GitHub integration, you can add a GitHub Action that defines a build step in your deployment process. See [the GitHub integration page](ci_github) for more details.

### Deploy from command line with [`deployctl`](./deployctl.md)

`deployctl` is a command line tool for deploying your code to Deno Deploy. You can control more details of your deployment than with the automatic GitHub integration above by using `deployctl`. See [the `deployctl` page](./deployctl.md) for more details.

### Deploy with playground

The easiest way to deploy some code is via a Deno Deploy Classic playground. See the [playground page](playgrounds) for more details.

---

# Deploy Classic

URL: https://docs.deno.com/deploy/classic/

:::info Legacy Documentation
You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.
:::

This section documents the legacy Deno Deploy Classic platform (dash.deno.com). We are no longer onboarding new users or organizations to Deploy Classic. If you already have existing Deploy Classic projects, you may continue to operate them for now, but we strongly encourage you to begin migrating to the new Deno Deploy platform at [console.deno.com](https://console.deno.com).

Key differences:

- Deploy Classic is in maintenance mode (no new features, limited updates).
- New capabilities (enhanced Node/NPM support, integrated builds, metrics, tracing, framework presets, static assets, improved infra) are available in Deno Deploy.

Get started with the new platform here: [About Deno Deploy Early Access](/deploy/). Migration guidance is coming soon. In the meantime, you can set up a new Deno Deploy org and redeploy your apps there. Reach out to support if you need assistance.

## What is Deno Deploy Classic?

Deno Deploy Classic is a globally distributed platform for serverless JavaScript applications. Your JavaScript, TypeScript, and WebAssembly code runs on managed servers geographically close to your users, enabling low latency and faster response times. Deploy Classic applications run on fast, light-weight [V8 isolates](https://deno.com/blog/anatomy-isolate-cloud) rather than virtual machines, powered by the [Deno runtime](/runtime/manual).

Let's deploy your first application - it should only take a few minutes.
## Install Deno and `deployctl`

If you haven't already, you can [install the Deno runtime](/runtime/getting_started/installation) using one of the commands below:

```sh
curl -fsSL https://deno.land/install.sh | sh
```

```powershell
irm https://deno.land/install.ps1 | iex
```

After Deno is installed, install the [`deployctl`](./deployctl.md) utility:

```sh
deno install -A jsr:@deno/deployctl --global
```

You can confirm `deployctl` has been installed correctly by running:

```console
deployctl --help
```

Now, you're ready to deploy a Deno script from the command line!

## Write and test a Deno program

First, create a directory for the project and create a file called `main.ts` in it, with the following "Hello World" web server:

```ts title="main.ts"
Deno.serve(() => new Response("Hello, world!"));
```

You can test that it works by running it with the command below:

```sh
deno run --allow-net main.ts
```

Your server should be viewable at [localhost:8000](http://localhost:8000). Now let's run this code on the edge with Deno Deploy!

## Deploy your project

From the directory of the `main.ts` file you just created, run this command:

```sh
deployctl deploy
```

You will be asked to authorize Deno Deploy in GitHub to sign up to Deno Deploy and/or to provision an access token for `deployctl`. A few moments after that, your Hello World server will be deployed on Deno Deploy Classic infrastructure all around the world, ready to handle all the traffic you expect.

## Next Steps

Now that you've created your first deployment, you can [learn what kinds of apps](./use-cases.md) you can run on Deno Deploy, check out [what else you can do with deployctl](./deployctl.md), or keep reading to find out what other options you have to deploy your code to Deno Deploy. We're so excited to see what you'll ship with Deno Deploy!

### Deploy your existing project

Import a project and run it on the edge with Deno Deploy:

1. [From the Deno Deploy Classic dashboard](https://dash.deno.com) click the "New Project" button.
2. Connect to your GitHub account and select the repository you would like to deploy.
3. Follow the on-screen instructions to deploy your existing application. If your project requires a build step, use the Project Configuration form to create a GitHub Action to deploy your project. Give your project a name and select from the optional framework presets. If you are not using a framework, you can set up your build settings using the form.
4. Confirm that your build options are correct and click the "Deploy Project" button to kick off your new GitHub Action and deploy your project.

In a few moments, your project will be deployed across ~12 data centers around the world, ready to handle large volumes of traffic. Once your deployment is successful, you can visit your newly deployed project at the URL provided on the success page or manage it in your dashboard.

### Start with a playground

A [playground](./playgrounds.md) is a browser-based editor that enables you to write and run JavaScript or TypeScript code right away. This is a great choice for just kicking the tires on Deno and Deno Deploy!

From the [Deno Deploy Classic dashboard](https://dash.deno.com), click the "New Playground" button to create a playground.
We also have a variety of ready-built tutorials for you to try out Deno Deploy Classic. Try them out by clicking on "Learning Playground" or visiting:

- [Simple HTTP server playground](https://dash.deno.com/tutorial/tutorial-http)
- [Using the Deno KV database playground](https://dash.deno.com/tutorial/tutorial-http-kv)
- [RESTful API server playground](https://dash.deno.com/tutorial/tutorial-restful)
- [Realtime app with WebSockets playground](https://dash.deno.com/tutorial/tutorial-websocket)
- [Recurring tasks with Deno.cron playground](https://dash.deno.com/tutorial/tutorial-cron)

---

# KV on Deno Deploy

URL: https://docs.deno.com/deploy/classic/kv_on_deploy

:::info Legacy Documentation

You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.

:::

Deno Deploy Classic offers a built-in serverless key-value database called Deno KV. Additionally, Deno KV is available within Deno itself, utilizing SQLite as its backend. This feature has been accessible since Deno v1.32 with the `--unstable` flag. Learn more about [Deno KV](/deploy/kv/manual).

## Consistency

Deno KV, by default, is a strongly-consistent database. It provides the strictest form of strong consistency called _external consistency_, which implies:

- **Serializability**: This is the highest level of isolation for transactions. It ensures that the concurrent execution of multiple transactions results in a system state that would be the same as if the transactions were executed sequentially, one after another. In other words, the end result of serializable transactions is equivalent to some sequential order of these transactions.
- **Linearizability**: This consistency model guarantees that operations, such as read and write, appear to be instantaneous and occur in real-time. Once a write operation completes, all subsequent read operations will immediately return the updated value. Linearizability ensures a strong real-time ordering of operations, making the system more predictable and easier to reason about.

Meanwhile, you can choose to relax consistency constraints by setting the `consistency: "eventual"` option on individual read operations. This option allows the system to serve the read from global replicas and caches for minimal latency.

Below are the latency figures observed in our top regions:

| Region                     | Latency (Eventual Consistency) | Latency (Strong Consistency) |
| -------------------------- | ------------------------------ | ---------------------------- |
| North Virginia (us-east4)  | 7ms                            | 7ms                          |
| Frankfurt (europe-west3)   | 7ms                            | 94ms                         |
| Netherlands (europe-west4) | 13ms                           | 95ms                         |
| California (us-west2)      | 72ms                           | 72ms                         |
| Hong Kong (asia-east2)     | 42ms                           | 194ms                        |
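For example, a read that can tolerate slightly stale data can opt into eventual consistency on a per-operation basis. A minimal sketch (the key used here is illustrative):

```ts
const kv = await Deno.openKv();

// Strongly consistent read (the default): always observes the latest write
const strong = await kv.get(["counters", "pageviews"]);

// Eventually consistent read: may be served from a nearby replica or cache
// for lower latency, at the cost of possibly stale data
const relaxed = await kv.get(["counters", "pageviews"], {
  consistency: "eventual",
});

console.log(strong.value, relaxed.value);
```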
## Connect to managed databases from outside of Deno Deploy

You can connect to your Deno Deploy KV database from your Deno application outside of Deno Deploy. To open a managed database, set the `DENO_KV_ACCESS_TOKEN` environment variable to a Deno Deploy personal access token and provide the URL of the database to `Deno.openKv`:

```ts
const kv = await Deno.openKv(
  "https://api.deno.com/databases/<database-id>/connect",
);
```

Please check the [docs](https://github.com/denoland/deno/tree/main/ext/kv#kv-connect) for the specification of the protocol for connecting to a remote KV database.

## Data distribution

Deno KV databases are replicated across at least 3 data centers in the primary region. Once a write operation is committed, its mutations are persistently stored in a quorum of data centers within the primary region. If cross-region replication is enabled, asynchronous replication typically transfers mutations to the destination region in under 5 seconds.

The system is designed to tolerate most data center-level failures without experiencing downtime or data loss. Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO) help quantify the system's resilience under various failure modes. RPO represents the maximum acceptable amount of data loss measured in time, whereas RTO signifies the maximum acceptable time required to restore the system to normal operations after a failure.

- Loss of one data center in the primary region: RPO=0 (no data loss), RTO<5s (system restoration in under 5 seconds)
- Loss of any number of data centers in a replica region: RPO=0, RTO<5s
- Loss of two or more data centers in the primary region: RPO<60s (under 60 seconds of data loss)

---

# Application logging

URL: https://docs.deno.com/deploy/classic/logs

:::info Legacy Documentation

You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.

:::

Applications can generate logs at runtime using the console API, with methods such as `console.log`, `console.error`, etc. These logs can be viewed in real time by either:

- Navigating to the `Logs` panel of a project or deployment.
- Using the `logs` subcommand in [deployctl](/deploy/classic/deployctl).

Logs will be streamed directly from the application to the log panel or displayed in `deployctl logs`.

In addition to real-time logs, logs are also retained for a certain duration, which depends on the subscription plan you are on. To view persisted logs, you can:

- If you are using the log panel in your browser, switch from `Live` to either `Recent` or `Custom` in the dropdown menu next to the search box.
- If you prefer the command line, add `--since=<datetime>` and/or `--until=<datetime>` to your `deployctl logs` command. For more details, consult `deployctl logs --help`.

Logs older than the retention period are automatically deleted from the system.
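For example, a hypothetical invocation that combines these flags to fetch one hour of persisted logs might look like this (the project name and timestamps are placeholders):

```console
deployctl logs --project=my-project --since=2024-05-01T12:00:00Z --until=2024-05-01T13:00:00Z
```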
## Limits

There are limits on both the size of a log message and the volume of logs produced in a certain amount of time.

Log messages have a maximum size of 2KB. Messages larger than this limit are trimmed to 2KB.

A deployment is allowed to produce up to 1000 log entries per second. If this limit is exceeded, we may terminate the deployment.

---

# Reverse proxy middleware

URL: https://docs.deno.com/deploy/classic/middleware

:::info Legacy Documentation

You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.

:::

This quickstart will cover how to deploy a small piece of middleware that reverse proxies another server (in this case example.com). For additional examples of common middleware functions, see the [example gallery](../tutorials/index.md).

## **Step 1:** Create a new playground project on Deno Deploy

Navigate to https://dash.deno.com/projects and click on the "New Playground" button.

## **Step 2:** Deploy middleware code via playground

On the next page, copy and paste the code below into the editor. It is an HTTP server that proxies all requests to https://example.com.

```ts
async function reqHandler(req: Request) {
  const reqPath = new URL(req.url).pathname;
  return await fetch("https://example.com" + reqPath, { headers: req.headers });
}

Deno.serve(reqHandler);
```

Click **Save and Deploy**.

You should see something like this:

![image](../docs-images/proxy_to_example.png)

---

# Connect to Neon Postgres

URL: https://docs.deno.com/deploy/classic/neon-postgres

:::info Legacy Documentation

You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.

:::

This tutorial covers how to connect to a Neon Postgres database from an application deployed on Deno Deploy.

## Setup Postgres

To get started, we need to create a new Postgres instance for us to connect to. For this tutorial, we will be using [Neon Postgres](https://neon.tech/) as they provide free, managed Postgres instances. If you'd like to host your database somewhere else, you can do that too.

1. Visit https://neon.tech/ and click **Sign up** to sign up with an email, GitHub, Google, or partner account. After signing up, you are directed to the Neon Console to create your first project.
2. Enter a name for your project, select a Postgres version, provide a database name, and select a region. Generally, you'll want to select the region closest to your application. When you're finished, click **Create project**.
3. You are presented with the connection string for your new project, which you can use to connect to your database. Save the connection string, which looks something like this:

   ```sh
   postgres://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require
   ```

You will need the connection string in the next step.

## Create a project in Deno Deploy

Next, let's create a project in Deno Deploy Classic and set it up with the requisite environment variables:

1. Go to [https://dash.deno.com/new](https://dash.deno.com/new) (Sign in with GitHub if you didn't already) and click on **Create an empty project** under **Deploy your own code**.
2. Now click on the **Settings** button available on the project page.
3. Navigate to the **Environment Variables** section and add the following secret:

   - `DATABASE_URL` - The value should be set to the connection string you saved in the last step.

![postgres_env_variable](../docs-images/neon_postgres_env_variable.png)

## Write code that connects to Postgres

To read/write to Postgres using the [Neon serverless driver](https://deno.com/blog/neon-on-jsr), first install it using the `deno add` command:

```sh
deno add jsr:@neon/serverless
```

This will create or update your `deno.json` file with the dependency:

```json
{
  "imports": {
    "@neon/serverless": "jsr:@neon/serverless@^0.10.1"
  }
}
```

Now you can use the driver in your code:

```ts
import { neon } from "@neon/serverless";

// Get the connection string from the environment variable "DATABASE_URL"
const databaseUrl = Deno.env.get("DATABASE_URL")!;

// Create a SQL query executor
const sql = neon(databaseUrl);

try {
  // Create the table
  await sql`
    CREATE TABLE IF NOT EXISTS todos (
      id SERIAL PRIMARY KEY,
      title TEXT NOT NULL
    )
  `;
} catch (error) {
  console.error(error);
}
```

## Deploy application to Deno Deploy Classic

Once you have finished writing your application, you can deploy it on Deno Deploy Classic.

To do this, go back to your project page at `https://dash.deno.com/projects/<project-name>`.

You should see a couple of options to deploy:

- [GitHub integration](ci_github)
- [`deployctl`](./deployctl.md)

```sh
deployctl deploy --project=<project-name>
```

Unless you want to add a build step, we recommend that you select the GitHub integration.

For more details on the different ways to deploy on Deno Deploy Classic and the different configuration options, read [here](how-to-deploy).
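To round out the example, here is a minimal, illustrative sketch of how the same driver could serve and insert rows over HTTP. The routes and handler shape are assumptions, not part of the original tutorial:

```ts
import { neon } from "@neon/serverless";

const sql = neon(Deno.env.get("DATABASE_URL")!);

Deno.serve(async (req) => {
  if (req.method === "POST") {
    // Insert a new todo from the JSON request body
    const { title } = await req.json();
    const [todo] = await sql`
      INSERT INTO todos (title) VALUES (${title}) RETURNING id, title
    `;
    return Response.json(todo, { status: 201 });
  }

  // List all todos
  return Response.json(await sql`SELECT id, title FROM todos`);
});
```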
---

# Organizations

URL: https://docs.deno.com/deploy/classic/organizations

:::info Legacy Documentation

You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.

:::

**Organizations** allow you to collaborate with other users. A project created in an organization is accessible to all members of the organization. Users should first sign up for Deno Deploy Classic before they can be added to an organization.

Currently, all organization members have full access to the organization. They can add/remove members, and create/delete/modify all projects in the organization.

### Create an organization

1. On your Classic dashboard, click on the organization dropdown in the top left of the screen, in the navigation bar. ![organizations](../docs-images/organizations.png)
2. Select **Organization +**.
3. Enter a name for your organization and click on **Create**.

### Add members

1. Select the desired organization in the organization dropdown in the top left of the screen, in the navigation bar.
2. Click on the **Members** icon button.
3. Under the **Members** panel, click on **+ Invite member**.

   > **Note:** Users should first sign up for Deno Deploy Classic using
   > [this link](https://dash.deno.com/signin) before you invite them.

4. Enter the GitHub username of the user and click on **Invite**.

Deno Deploy Classic will send the user an invite email. They can then either accept or decline your invite. Once they accept the invite, they're added to your organization and shown in the members panel.

Pending invites are displayed in the **Invites** panel. You can revoke pending invites by clicking on the delete icon next to the pending invite.

### Remove members

1. Select the desired organization in the organization dropdown in the top left of the screen, in the navigation bar.
2. Click on the **Members** icon button.
3. In the **Members** panel, click on the delete button beside the user you want to remove.

---

# Playgrounds

URL: https://docs.deno.com/deploy/classic/playgrounds

:::info Legacy Documentation

You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.

:::

**Playgrounds** are an easy way to play around with Deno Deploy, and to create small projects. Using playgrounds you can write code, run it, and see the output fully inside the browser.

Playgrounds have the full power of Deno Deploy: they support all the same features as a normal project, including environment variables, custom domains, and logs. Playgrounds are also just as performant as all other projects on Deno Deploy: they make full use of our global network to run your code as close to users as possible.

- [Creating a playground](#creating-a-playground)
- [Using the playground editor](#using-the-playground-editor)
- [Making a playground public](#making-a-playground-public)
- [Exporting a playground to GitHub](#exporting-a-playground-to-github)

## Creating a playground

To create a new playground, press the **New Playground** button in the top right corner of the [project overview page](https://dash.deno.com/projects). This will create a new playground with a randomly generated name. You can change this name in the project settings later.

## Using the playground editor

The playground editor is opened automatically when you create a new playground. You can also open it by navigating to your project's overview page and clicking the **Edit** button.
The editor consists of two main areas: the editor on the left, and the preview panel on the right. The editor is where you write your code, and the preview panel is where you can see the output of your code through a browser window. There is also a logs panel underneath the editor panel on the left side. This panel shows the console output of your code, and is useful for debugging your code.

After editing your code, you need to save and deploy it so the preview on the right updates. You can do this by clicking the **Save & Deploy** button in the top right, by pressing Ctrl + S, or by opening the command palette with F1 and selecting **Deploy: Save & Deploy**. In the toolbar in the top right of the editor you can see the current deployment status of your project while saving. The preview panel on the right will refresh automatically every time you save and deploy your code.

The language dropdown in the top right of the editor allows you to switch between JavaScript, JSX, TypeScript, and TSX. The default selected language is TSX, which will work for most cases.

## Making a playground public

Playgrounds can be shared with other users by making them public. This means that anyone can view the playground and its preview. Public playgrounds cannot be edited by other users: they can still only be edited by you. Logs are also only shown to you. Users have the option to fork a public playground to make a private copy of it that they can edit.

To make a playground public, press the **Share** button in the top toolbar in the editor. The URL to your playground will be copied to your clipboard automatically.

You can also change the playground visibility from the playground settings page in the Deno Deploy Classic dashboard. This can be used to change the visibility of a playground from public to private again.

## Exporting a playground to GitHub

Playgrounds can be exported to GitHub. This is useful if your project is starting to outgrow the single file limit of the playground editor.

Doing this will create a new GitHub repository containing the playground code. This project will be automatically turned into a git project that is linked to this new GitHub repository. Environment variables and domains will be retained.

The new GitHub repository will be created in your personal account, and will be set to private. You can change these settings later in the GitHub repository settings.

After exporting a playground, you can no longer use the Deno Deploy Classic playground editor for this project. This is a one-way operation.

To export the playground, visit the playground settings page in the Deno Deploy dashboard or select **Deploy: Export to GitHub** from the command palette (press F1 in the editor). Here you can enter a name for the new GitHub repository. This name will be used to create the repository on GitHub. The repository must not already exist. Press **Export** to export the playground to GitHub.

---

# Connect to Postgres

URL: https://docs.deno.com/deploy/classic/postgres

:::info Legacy Documentation

You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.

:::

This tutorial covers how to connect to a Postgres database from an application deployed on Deno Deploy.

## Setup Postgres

> This tutorial will focus entirely on connecting to Postgres unencrypted. If
> you would like to use encryption with a custom CA certificate, use the
> documentation [here](https://deno-postgres.com/#/?id=ssltls-connection).
To get started, we need to create a new Postgres instance for us to connect to. For this tutorial, we will be using [Supabase](https://supabase.com) as they provide free, managed Postgres instances. If you'd like to host your database somewhere else, you can do that too.

1. Visit https://app.supabase.io/ and click **New project**.
2. Select a name, password, and region for your database. Make sure to save the password, as you will need it later.
3. Click **Create new project**. Creating the project can take a while, so be patient.

## Gather credentials from Postgres

Once you've set up your Postgres database, gather your connection information from your Postgres instance.

### Supabase

For the Supabase instance above, to get your connection information:

1. Navigate to the **Database** tab on the left.
2. Go to **Project Settings** >> **Database** and copy the connection string from the **Connection String** >> **URI** field. This is the connection string you will use to connect to your database. Insert the password you saved earlier into this string, and then save the string somewhere - you will need it later.

### psql

If you are using psql, you should generally be able to find your connection information by running:

```psql
test=# \conninfo
```

Your Postgres connection string will take the form:

```sh
postgres://user:password@127.0.0.1:5432/deploy?sslmode=disable
```

## Create a project in Deno Deploy

Next, let's create a project in Deno Deploy Classic and set it up with the requisite environment variables:

1. Go to [https://dash.deno.com/new](https://dash.deno.com/new) (Sign in with GitHub if you didn't already) and click on **+ Empty Project** under **Deploy from the command line**.
2. Now click on the **Settings** button available on the project page.
3. Navigate to the **Environment Variables** section and add the following secret:

   - `DATABASE_URL` - The value should be your connection string that you retrieved in the last step.

![postgres_env_variable](../docs-images/postgres_env_variable.png)

## Write code that connects to Postgres

To read/write to Postgres, import a suitable Postgres module such as [this one from JSR](https://jsr.io/@bartlomieju/postgres), read the connection string from the environment variables, and create a connection pool.

```ts
import { Pool } from "jsr:@bartlomieju/postgres";

// Get the connection string from the environment variable "DATABASE_URL"
const databaseUrl = Deno.env.get("DATABASE_URL")!;

// Create a database pool with three connections that are lazily established
const pool = new Pool(databaseUrl, 3, true);

// Connect to the database
const connection = await pool.connect();

try {
  // Create the table
  await connection.queryObject`
    CREATE TABLE IF NOT EXISTS todos (
      id SERIAL PRIMARY KEY,
      title TEXT NOT NULL
    )
  `;
} finally {
  // Release the connection back into the pool
  connection.release();
}
```

## Deploy application to Deno Deploy Classic

Once you have finished writing your application, you can deploy it on Deno Deploy Classic.

To do this, go back to your project page at `https://dash.deno.com/projects/<project-name>`.

You should see a couple of options to deploy:

- [GitHub integration](ci_github)
- [`deployctl`](./deployctl.md)

```sh
deployctl deploy --project=<project-name>
```

Unless you want to add a build step, we recommend that you select the GitHub integration.

For more details on the different ways to deploy on Deno Deploy Classic and the different configuration options, read [here](how-to-deploy).
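In a deployed application you would typically run queries per request rather than once at startup. A minimal, illustrative sketch of a request handler built on the same pool (the route and column types are assumptions):

```ts
import { Pool } from "jsr:@bartlomieju/postgres";

const pool = new Pool(Deno.env.get("DATABASE_URL")!, 3, true);

Deno.serve(async () => {
  // Borrow a connection from the pool for the duration of the request
  const connection = await pool.connect();
  try {
    const result = await connection.queryObject<{ id: number; title: string }>`
      SELECT id, title FROM todos
    `;
    return Response.json(result.rows);
  } finally {
    // Always return the connection to the pool
    connection.release();
  }
});
```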
---

# Connect to Prisma Postgres

URL: https://docs.deno.com/deploy/classic/prisma-postgres

:::info Legacy Documentation

You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.

:::

This tutorial covers how to connect to a Prisma Postgres database from an application deployed on Deno Deploy.

## Setup Postgres

There are several ways to set up a Prisma Postgres database for your Prisma project. This guide covers the most common approaches.

### Method 1: Using Prisma CLI

Run the following command to initialize a new Prisma project with a database:

```bash
npx prisma init --db
```

This will prompt you to select your preferred region and database name. Once completed, you'll find the `DATABASE_URL` connection string in your `.env` file.

### Method 2: Using `npx create-db`

Alternatively, you can use the dedicated database creation tool:

```bash
npx create-db@latest
```

This command will provide you with two connection strings tied to the same database:

**Prisma ORM optimized connection string:**

```txt
prisma+postgres://accelerate.prisma-data.net/?api_key=<api-key>
```

**Standard Prisma Postgres connection string:**

```txt
postgresql://<username>:<password>@db.prisma.io:5432/postgres
```

In order to keep the database created with `npx create-db`, you must follow through with the claim process. That can be done via the claim link provided in the terminal.

The Prisma ORM optimized connection string (`prisma+postgres://`) only works with the Prisma ORM, while the standard Prisma Postgres connection string can be used with other database tools and libraries.

## Create a project in Deno Deploy

Next, let's create a project in Deno Deploy Classic and set it up with the requisite environment variables:

1. Go to [https://dash.deno.com/new](https://dash.deno.com/new) (Sign in with GitHub if you didn't already) and click on **Create an empty project** under **Deploy your own code**.
2. Now click on the **Settings** button available on the project page.
3. Navigate to the **Environment Variables** section and add the following secret:

   - `DATABASE_URL` - The value should be set to the connection string you saved in the last step.

![postgres_env_variable](../docs-images/prisma_postgres_env_variable.png)

## Write code that connects to Postgres

Now that you have your database set up, let's create a simple application that connects to the Prisma Postgres database using Prisma ORM.

### 1. Install dependencies

First, install the required dependencies:

```bash
deno install npm:@prisma/client
deno install npm:@prisma/extension-accelerate
deno install npm:dotenv-cli
```

:::note

The `dotenv-cli` package is needed because Prisma Client doesn't read `.env` files by default on Deno.

:::

### 2. Create the database schema

With your database connection configured, you can now apply the data model to your database:

```bash
deno run -A npm:prisma migrate dev --name init
```

This command creates a new SQL migration file and runs it against your database.

### 3. Update your Prisma schema

Edit your `prisma/schema.prisma` file to define a `Log` model and configure it for Deno:

```prisma
generator client {
  provider = "prisma-client"
  output   = "../generated/prisma"
  runtime  = "deno"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model Log {
  id      Int    @id @default(autoincrement())
  level   Level
  message String
  meta    Json
}

enum Level {
  Info
  Warn
  Error
}
```

### 4. Create your application
Create `index.ts` in your project root with the following content:

```typescript
import { withAccelerate } from "npm:@prisma/extension-accelerate";
import { PrismaClient } from "./generated/prisma/client.ts";

const prisma = new PrismaClient().$extends(withAccelerate());

async function handler(request: Request) {
  // Ignore /favicon.ico requests:
  const url = new URL(request.url);
  if (url.pathname === "/favicon.ico") {
    return new Response(null, { status: 204 });
  }

  const log = await prisma.log.create({
    data: {
      level: "Info",
      message: `${request.method} ${request.url}`,
      meta: {
        // Convert the Headers object into a plain object so it serializes
        // into the Json column (stringifying Headers directly yields "{}")
        headers: JSON.stringify(Object.fromEntries(request.headers)),
      },
    },
  });

  const body = JSON.stringify(log, null, 2);
  return new Response(body, {
    headers: { "content-type": "application/json; charset=utf-8" },
  });
}

Deno.serve(handler);
```

### 5. Test your application locally

Start your application locally to test the database connection:

```bash
npx dotenv -- deno run -A ./index.ts
```

Visit `http://localhost:8000` in your browser. Each request will create a new log entry in your database and return the log data as JSON.

## Deploy application to Deno Deploy Classic

Once you have finished writing your application, you can deploy it on Deno Deploy Classic.

To do this, go back to your project page at `https://dash.deno.com/projects/<project-name>`.

You should see a couple of options to deploy:

- [GitHub integration](ci_github)
- [`deployctl`](./deployctl.md)

```sh
deployctl deploy --project=<project-name>
```

Unless you want to add a build step, we recommend that you select the GitHub integration.

For more details on the different ways to deploy on Deno Deploy Classic and the different configuration options, read [here](how-to-deploy).

---

# Using Queues

URL: https://docs.deno.com/deploy/classic/queues

:::info Legacy Documentation

You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.

:::

The Deno runtime includes a queueing API that supports offloading larger workloads for async processing, with guaranteed at-least-once delivery of queued messages. Queues can be used to offload tasks in a web application, or to schedule units of work for a time in the future.

The primary APIs you'll use with queues are in the `Deno.Kv` namespace as [`enqueue`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.enqueue) and [`listenQueue`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.listenQueue).

## Enqueue a message

To enqueue a message for processing, use the `enqueue` method on an instance of [`Deno.Kv`](https://docs.deno.com/api/deno/~/Deno.Kv). In the example below, we show what it might look like to enqueue a notification for delivery.

```ts title="queue_example.ts"
// Describe the shape of your message object (optional)
interface Notification {
  forUser: string;
  body: string;
}

// Get a reference to a KV instance
const kv = await Deno.openKv();

// Create a notification object
const message: Notification = {
  forUser: "alovelace",
  body: "You've got mail!",
};

// Enqueue the message for immediate delivery
await kv.enqueue(message);
```

You can enqueue a message for later delivery by specifying a `delay` option in milliseconds.

```ts
// Enqueue the message for delivery in 3 days
const delay = 1000 * 60 * 60 * 24 * 3;
await kv.enqueue(message, { delay });
```

You can also specify a key in Deno KV where your message value will be stored if your message isn't delivered for any reason.
```ts
// Configure a key where a failed message would be sent
const backupKey = ["failed_notifications", "alovelace", Date.now()];
await kv.enqueue(message, { keysIfUndelivered: [backupKey] });

// ... disaster strikes ...

// Get the unsent message
const r = await kv.get<Notification>(backupKey);
// This is the message that didn't get sent:
console.log("Found failed notification for:", r.value?.forUser);
```

## Listening for messages

You can configure a JavaScript function that will process items added to your queue with the `listenQueue` method on an instance of [`Deno.Kv`](https://docs.deno.com/api/deno/~/Deno.Kv).

```ts title="listen_example.ts"
// Define the shape of the object we expect as a message in the queue
interface Notification {
  forUser: string;
  body: string;
}

// Create a type guard to check the type of the incoming message
function isNotification(o: unknown): o is Notification {
  return (
    ((o as Notification)?.forUser !== undefined &&
      typeof (o as Notification).forUser === "string") &&
    ((o as Notification)?.body !== undefined &&
      typeof (o as Notification).body === "string")
  );
}

// Get a reference to a KV database
const kv = await Deno.openKv();

// Register a handler function to listen for values - this example shows
// how you might send a notification
kv.listenQueue((msg: unknown) => {
  // Use type guard - then TypeScript compiler knows msg is a Notification
  if (isNotification(msg)) {
    console.log("Sending notification to user:", msg.forUser);
    // ... do something to actually send the notification!
  } else {
    // If the message is of an unknown type, it might be an error
    console.error("Unknown message received:", msg);
  }
});
```

## Queue API with KV atomic transactions

You can combine the queue API with [KV atomic transactions](/deploy/kv/transactions/) to atomically enqueue messages and modify keys in the same transaction.

```ts title="kv_transaction_example.ts"
const kv = await Deno.openKv();

kv.listenQueue(async (msg: unknown) => {
  // Narrow the incoming message to the expected shape (a type guard would
  // be more robust in production code)
  const { nonce: msgNonce, change } = msg as { nonce: string; change: number };

  const nonce = await kv.get(["nonces", msgNonce]);
  if (nonce.value === null) {
    // This message was already processed
    return;
  }

  const bob = await kv.get<number>(["balance", "bob"]);
  const liz = await kv.get<number>(["balance", "liz"]);

  const success = await kv.atomic()
    // Ensure this message was not yet processed
    .check({ key: nonce.key, versionstamp: nonce.versionstamp })
    .delete(nonce.key)
    .sum(["processed_count"], 1n)
    .check(bob, liz) // balances did not change
    .set(["balance", "bob"], bob.value! - change)
    .set(["balance", "liz"], liz.value! + change)
    .commit();
});

// Modify keys and enqueue messages in the same KV transaction!
const nonce = crypto.randomUUID();
await kv
  .atomic()
  .check({ key: ["nonces", nonce], versionstamp: null })
  .enqueue({ nonce: nonce, change: 10 })
  .set(["nonces", nonce], true)
  .sum(["enqueued_count"], 1n)
  .commit();
```

## Queue behavior

### Message delivery guarantees

The runtime guarantees at-least-once delivery. This means that for the majority of enqueued messages, the [`listenQueue`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.listenQueue) handler will be invoked once for each message. In some failure scenarios, the handler may be invoked multiple times for the same message to ensure delivery. It's important to design your applications such that duplicate messages are handled correctly.

You may use queues in combination with [KV atomic transactions](/deploy/kv/transactions) primitives to ensure that your queue handler KV updates are performed exactly once per message.
See [Queue API with KV atomic transactions](#queue-api-with-kv-atomic-transactions).

### Automatic retries

The [`listenQueue`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.listenQueue) handler is invoked to process your queued messages when they're ready for delivery. If your handler throws an exception, the runtime will automatically retry calling the handler until it succeeds, or until the maximum number of retry attempts is reached. The message is considered to be successfully processed once the [`listenQueue`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.listenQueue) handler invocation completes successfully. The message will be dropped if the handler consistently fails on retries.

### Message delivery order

The runtime makes a best effort to deliver messages in the order they were enqueued. However, there is no strict ordering guarantee. Occasionally, messages may be delivered out of order to ensure maximum throughput.

## Queues on Deno Deploy

Deno Deploy offers a global, serverless, distributed implementation of the queueing API, designed for high availability and throughput. You can use it to build applications that scale to handle large workloads.

### Just-in-time isolate spin-up

When using queues with Deno Deploy, isolates are automatically spun up on demand to invoke your [`listenQueue`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.listenQueue) handler when a message becomes available for processing. Defining a [`listenQueue`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.listenQueue) handler is the only requirement to enable queue processing in your Deno Deploy application; no additional configuration is needed.

### Queue size limit

The maximum number of undelivered queue messages is limited to 100,000. The [`enqueue`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.enqueue) method will fail with an error if the queue is full.

### Pricing details and limits

- [`enqueue`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.enqueue) is treated just like other [`Deno.Kv`](https://docs.deno.com/api/deno/~/Deno.Kv) write operations. Enqueued messages consume KV storage and write units.
- Messages delivered through [`listenQueue`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.listenQueue) consume requests and KV write units.
- See [Pricing details](https://deno.com/deploy/pricing) for more information.

## Use cases

Queues can be useful in many different scenarios, but there are a few use cases you might see a lot when building web applications.

### Offloading async processes

Sometimes a task that's initiated by a client (like sending a notification or API request) may take long enough that you don't want to make clients wait for it to be completed before returning a response. Other times, clients don't actually need a response at all, such as when a client is sending your application a [webhook request](https://en.wikipedia.org/wiki/Webhook), so there's no need to wait for the underlying task to be completed before returning a response. In these cases, you can offload work to a queue to keep your web application responsive and send immediate feedback to clients.

### Scheduling work for the future

Another helpful application of queues (and queue APIs like this one) is to schedule work to happen at an appropriate time in the future. Maybe you'd like to send a notification to a new customer a day after they have placed an order to send them a satisfaction survey. You can schedule a queue message to be delivered 24 hours into the future, and set up a listener to send out the notification at that time.
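Reusing the notification shape from earlier, the scheduling half might look like this minimal sketch (the message contents are illustrative):

```ts
const kv = await Deno.openKv();

// Enqueue a follow-up survey notification for delivery in 24 hours
await kv.enqueue(
  { forUser: "alovelace", body: "How was your order?" },
  { delay: 1000 * 60 * 60 * 24 },
);
```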
---

# Regions

URL: https://docs.deno.com/deploy/classic/regions

:::info Legacy Documentation

You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.

:::

Deno Deploy Classic deploys your code throughout the world. Each new request is served from the closest region to your user. Deno Deploy Classic is presently located in the following regions:

- Singapore (`asia-southeast1`)
- London (`europe-west2`)
- Frankfurt (`europe-west3`)
- São Paulo (`southamerica-east1`)
- North Virginia (`us-east4`)
- California (`us-west2`)

This list will be maintained to reflect the latest summary of our regions.

Code is deployed to all regions and is served from the region closest to the end user to minimize latency. It is not currently possible to restrict the regions in which your code is deployed.

---

# Local development

URL: https://docs.deno.com/deploy/classic/running-scripts-locally

:::info Legacy Documentation

You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.

:::

For local development you can use the `deno` CLI. To install `deno`, follow the instructions in the [Deno manual](https://deno.land/manual/getting_started/installation).

After installation, you can run your scripts locally:

```shell
$ deno run --allow-net=:8000 https://deno.com/examples/hello.js
Listening on http://localhost:8000
```

To watch for file changes, add the `--watch` flag:

```shell
$ deno run --allow-net=:8000 --watch ./main.js
Listening on http://localhost:8000
```

For more information about the Deno CLI, and how to configure your development environment and IDE, visit the Deno Manual's [Getting Started][manual-gs] section.

[manual-gs]: https://deno.land/manual/getting_started

---

# Deno Deploy Classic Use Cases

URL: https://docs.deno.com/deploy/classic/use-cases

:::info Legacy Documentation

You are viewing legacy documentation for Deno Deploy Classic. We recommend migrating to the new Deno Deploy platform.

:::

Some popular use cases for Deno currently are:

- [Middleware](#middleware)
- [API servers](#api-servers)
- [Full websites](#full-websites)

## Middleware

Middleware refers to bits of code that execute before and after the request gets to the application server. You'll be writing middleware if you want to execute some JavaScript or any other code very fast, early in the request. By deploying your middleware code at the edge, Deno Deploy Classic ensures the best performance for your app.

Some examples include:

- setting a cookie
- serving different versions of a site depending on geolocation
- path rewriting
- redirecting requests
- dynamically changing the HTML on its way back from the server before it gets to the user

Deno Deploy Classic is a good alternative to other platforms you might be using to host your middleware right now, for example:

- Cloudflare Workers
- AWS Lambda@Edge
- Traditional load balancers like nginx
- Custom rules

## API servers

Deno is also a great fit for API servers. By deploying these servers "at the edge", closer to clients who are using them, Deno Deploy Classic is able to offer lower latency, improved performance, and reduced bandwidth costs compared to traditional hosting platforms like Heroku or even modern centralized hosting services like DigitalOcean.
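As a concrete illustration of this use case, a small JSON API endpoint can be a complete Deno Deploy Classic application on its own. A minimal sketch (the route is just an example):

```ts
Deno.serve((req) => {
  const url = new URL(req.url);
  if (url.pathname === "/api/time") {
    // Return the current server time as JSON
    return Response.json({ now: new Date().toISOString() });
  }
  return new Response("Not Found", { status: 404 });
});
```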
## Full websites

We foresee a future where you can actually write your entire website on edge functions. Some examples of sites that are already doing this include:

- [blog](https://github.com/ry/tinyclouds)
- [chat](https://github.com/denoland/showcase_chat)
- [calendly clone](https://github.com/denoland/meet-me)

---

# Fulfillment Policy

> Our policies regarding refunds and cancellations for Deno Deploy.

URL: https://docs.deno.com/deploy/fulfillment_policy

## Refund Policy

At Deno Deploy, we strive to provide exceptional service. If you are not satisfied with our service, you may request a refund under the following conditions:

A refund must be requested within 14 days of the initial purchase or upgrade of any subscription plan. Refunds may be considered if the service fails to function correctly and if the issue cannot be resolved by our support team within a reasonable time frame. No refunds will be issued for services used in violation of our terms of service or for problems clearly attributable to user error or external platform changes. Recurring subscriptions may be canceled but are only eligible for a refund for the initial billing cycle if requested within the 14-day period.

## Cancellation Policy

You can cancel your Deno Deploy or Deno Deploy Classic subscription at any time under the following terms:

Subscription cancellations are effective immediately, and the service will continue to run until the end of the current billing period. To cancel your subscription, please navigate to your account settings on the Deno Deploy dashboard and select 'Cancel Subscription'. Once the subscription is canceled, no further charges will be incurred, but you are responsible for any charges accrued before the effective date of cancellation.

## Contact Us

For more information about our fulfillment policies, or if you require assistance, please contact our support team at [deploy@deno.com](mailto:deploy@deno.com).

---

# Getting started

> Step-by-step guide to creating and configuring your first Deno Deploy application, including organization setup, build configuration, environment variables, and deployment monitoring.

URL: https://docs.deno.com/deploy/getting_started

## Create an organization

Deno Deploy will eventually replace Deno Deploy Classic. Until that time, both systems will be available simultaneously. By creating an organization for use with Deploy, it is possible to explore Deploy without disrupting projects already using Deploy Classic.

To get started with Deno Deploy:

1. Visit [console.deno.com](https://console.deno.com)
2. Create an organization:

![The Deno Deploy organization creation screen.](./images/create_org.png)

Note that you cannot create an organization with the same slug as any existing project in Deploy Classic. Organization names and slugs cannot be changed after creation.

## Create an app

After creating an organization, you'll be directed to the organization apps page, which shows all your applications and provides access to organization settings and custom domains.

To create an app, press the `+ New App` button:

![Screenshot of deploy app creation screen](./images/create_app.png)

An application is a single deployed web service with one build configuration, build history, environment variables, attached custom domains, a linked GitHub repository, etc.

## Select a repo
1. Choose the GitHub repository for your application:

![Screenshot of deploy org selection screen](./images/select_org.png)

If your repository doesn't appear, use the `Add another GitHub account` or `Configure GitHub App permissions` buttons to grant the Deno Deploy GitHub app access to your repositories.

> ⏳ Mono-repos (repositories where the application lives in a subdirectory) are
> not yet supported.

## Configure your app

Deno Deploy automatically attempts to detect your application type and configure an appropriate build setup. You can see the detected configuration in the `App Config` box:

![Screenshot of Deploy application configuration screen](./images/app_config.png)

To modify this configuration, click `Edit build config`.

![Screenshot of Deploy build configuration screen](./images/build_config.png)

## Configure your build

In the build config drawer, you can customize:

### Framework preset

Select your framework or choose `No Preset` if using a custom setup.

### Install command

Command for installing dependencies (e.g., `npm install`, `deno install`). This can be empty for Deno applications without a `package.json`.

### Build command

Command to compile/bundle your application (e.g., `next build`, `deno task build`). Leave empty if your application doesn't require building.

### Runtime configuration

For most frameworks there are no options to configure here, as Deno Deploy will figure out the ideal runtime configuration for the app based on the framework preset. When a framework is not configured, you can choose here whether the app is a `Dynamic` app that needs to execute code server side for every request, such as an API server or a server-side rendered application, or a `Static` app that consists only of a set of static files that need to be hosted.

### Dynamic Entrypoint

The JavaScript or TypeScript file that should be executed to start the application. This is the file path that you would pass locally to `deno run` or `node` to start the app. The path has to be relative to the working directory.

### Dynamic arguments

Additional command line arguments to pass to the app on startup, after the entrypoint. These are arguments that are passed to the application, not to Deno itself.

### Static Directory

The directory in the working directory that contains the static files to be served. For example, `dist`, `_site`, or `.output`.

### Single Page App mode

Whether the application is a single page app that should have the root `index.html` served for any paths that do not exist as files in the static directory, instead of a 404 page.

Closing the drawer saves the settings.

### Environment variables

To add environment variables:

1. Click `Add/Edit environment variables`
2. Click `+ Add variable` in the drawer
3. Enter the name and value
4. Choose whether it's a plain text variable or secret
5. Select the contexts where it should be available:
   - **Production**: For requests to production domains
   - **Development**: For requests to preview/branch domains
6. Click `Save` to apply your changes

![Screenshot of the Deploy env variables config screen](./images/env_var.png)

You can re-open the drawer to edit or remove environment variables you have added. You can also edit the app name on this page, and select which region(s) the application should be served from.
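At runtime, your application reads these variables through the standard Deno API; for example (the variable name is illustrative):

```ts
// Read an environment variable configured in the dashboard
const apiKey = Deno.env.get("MY_API_KEY");
console.log(apiKey ? "API key is configured" : "API key is missing");
```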
## Build and deploy your app

Finally, you can press the `Create App` button to create the app. This will create the app and immediately trigger the first build:

![Screenshot of app build logs](./images/build_logs.png)

On the build page you can see live streaming build logs split into multiple sections:

- **Prepare:** cloning the GitHub repository and restoring build cache
- **Install:** executing the install command, and any framework specific pre-install setup
- **Build:** executing the build command, any framework specific pre- and post-build setup, and preparing the build artifact for deployment
- **Warm up:** sending a request to the preview URL of the deployment to ensure it starts up correctly. The logs shown in the Warm up section are runtime logs, not build logs.
- **Route:** Deno Deploy is rolling out the new version of this build into all global regions.

In the top left of this build is a button to cancel the build. For failed builds, there is also a button to restart the build. For completed builds, the top right shows the preview URL of the build. Further down, all timelines that this build is deployed to are shown, such as `Production` or `Git Branch` timelines.

You can also see how the build was triggered on this page. This can either be `manual action`, for builds triggered through the UI, or `GitHub repo`, for builds triggered through the GitHub integration.

You can view the application through either the preview URL, or any of the other URLs shown in the timelines list.

## Monitor your application

After visiting your application, you can view telemetry about your application in the form of the logs and traces available in our observability panels.
You can visit these pages by clicking the respective buttons in the left sidebar.

### Logs

![Screenshot of the Logs page](./images/logs.png)

The logs page shows all recent logs in the project. By default, logs from all contexts (production and development) are shown, but using the filter button and search bar at the top, the shown logs can be restricted. For example, to filter to only production logs, add `context:production` to the search bar. To only show logs from a certain revision, use `revision:<revision-id>`, etc.

You can also use full text search in the search bar. The full text search will filter down the log entries to only those containing the text written, case-insensitively.

By default, logs from the last hour are shown. The time picker in the top right can be used to adjust the time frame that logs are shown for. The time zone of the timestamps shown is the time zone set in the time picker.

The "view trace" button on the right of a log line shows up if a log line is correlated with a trace. This happens when a log line occurs within an active trace. Clicking this button will open the respective trace as an overlay.

### Traces

![Screenshot of the Traces page](./images/traces.png)

The traces page shows all recent traces in the project. By default, traces from all contexts (production and development) are shown, but using the filter button and search bar at the top, the shown traces can be restricted. For example, to filter to only production traces, add `context:production` to the search bar. To only show traces from a certain revision, use `revision:<revision-id>`, etc.

All traces that contain an incoming HTTP request are shown in the list. The text shown for each trace is the path of the request, and the duration of the trace in milliseconds.

Clicking on a trace will open the trace view, which shows the full trace including all spans and logs that are part of the trace.

![Screenshot of the Trace view](./images/trace.png)

For each span in the trace you can see the duration of the span, the name of the span, the start and end time, and the recorded attributes. By clicking on a span in the timeline, the details of that span will be shown in the summary panel at the bottom. The logs that are emitted as part of a given span are shown in the logs tab at the bottom. Changing the selected span will update which logs are shown in this panel.

---

# About Deno Deploy

> Guide to Deno Deploy features, comparison with Deploy Classic, and getting started instructions for deployment.

URL: https://docs.deno.com/deploy/

Deno Deploy comes with an easy-to-use dashboard at [console.deno.com](https://console.deno.com). In this dashboard, you can create new Deno Deploy organizations that contain Deno Deploy apps. Within a single organization, you cannot mix Deno Deploy apps with Deploy Classic projects. You can switch between different organizations using the organization picker in the top left of the dashboard.

## What is Deno Deploy?

Deno Deploy is a serverless platform for running JavaScript and TypeScript applications in the cloud (or self-hosted on your own infrastructure). It provides a management plane for deploying and running applications with the built-in CI or through integrations such as GitHub Actions.

## Comparison to Deploy Classic

Deno Deploy is a complete rework of Deploy Classic. It has a new dashboard, and a new execution environment that uses Deno 2.0 and is much more powerful than Deploy Classic. The table below compares the two versions of Deno Deploy.
| Feature                         | Deno Deploy                    | Deploy Classic                                                                                                                           |
| ------------------------------- | ------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------- |
| Web interface                   | console.deno.com               | dash.deno.com                                                                                                                              |
| Dark mode                       | ✅ Supported                   | ❌ Not supported                                                                                                                           |
| Builds                          | ✅ Fully integrated            | 🟠 Runs in GitHub Actions, no live streamed logs in the dashboard, caching requires manual setup, changing config requires editing YAML  |
| Can run Deno apps               | ✅ Full support                | 🟠 Limited (no FFI, subprocesses, write permission)                                                                                        |
| Can run Node apps               | ✅ Full support                | 🟠 Limited (no FFI, native addons, subprocesses, write permission, and degraded NPM compatibility)                                         |
| Can run Next.js/Astro/SvelteKit | ✅ First-class support         | 🟠 Framework dependent, requires manual setup                                                                                              |
| First class static sites        | ✅ Supported                   | ❌ Not supported                                                                                                                           |
| Environment Variables           | ✅ Different dev/prod env vars | 🟠 One set of env vars for all deployments                                                                                                 |
| CDN caching                     | ✅ Supported                   | ❌ Not supported                                                                                                                           |
| Web Cache API                   | ✅ Supported                   | ✅ Supported                                                                                                                               |
| Databases                       | ✅ Supported                   | 🟠 Deno KV                                                                                                                                 |
| Queues                          | ❌ Not supported               | ✅ Supported                                                                                                                               |
| Cron                            | ❌ Not supported               | ✅ Supported                                                                                                                               |
| Deploy from GitHub              | ✅ Supported                   | ✅ Supported                                                                                                                               |
| Deploy from CLI                 | ✅ Supported                   | ✅ Supported                                                                                                                               |
| Instant Rollback                | ✅ Supported                   | ✅ Supported                                                                                                                               |
| Logs                            | ✅ Supported                   | ✅ Supported                                                                                                                               |
| Tracing                         | ✅ Supported                   | ❌ Not supported                                                                                                                           |
| Metrics                         | ✅ Supported                   | ❌ Not supported                                                                                                                           |
| OpenTelemetry export            | ⏳ Work in progress            | ❌ Not supported                                                                                                                           |
| Regions                         | 2                              | 6                                                                                                                                          |
| Self hostable regions           | ✅ Supported                   | ❌ Not supported                                                                                                                           |

## How to access Deno Deploy

To begin using Deno Deploy:

1. Visit [console.deno.com](https://console.deno.com) to access the new dashboard
2. Create a new Deno Deploy organization
3. Create your first application within this organization
4. Deploy from your GitHub repository or directly from the dashboard

For detailed configuration instructions and framework-specific guides, please refer to our reference documentation.

---

# Backups

URL: https://docs.deno.com/deploy/kv/backup

KV databases hosted on Deno Deploy can be continuously backed up to your own S3-compatible storage buckets. This is in addition to the replication and backups that we internally perform for all data stored in hosted Deno KV databases to ensure high availability and data durability.

This backup happens continuously with very little lag, enabling _[point-in-time-recovery](https://en.wikipedia.org/wiki/Point-in-time_recovery)_ and live replication. Enabling backup for KV databases unlocks various interesting use cases:

- Retrieving a consistent snapshot of your data at any point in time in the past
- Running a read-only data replica independent of Deno Deploy
- Pushing data into your favorite data pipeline by piping mutations into streaming platforms and analytical databases like Kafka, BigQuery and ClickHouse

## Configuring backup to Amazon S3

First you must create a bucket on AWS. In the AWS console:

1. Go to the [AWS S3 console](https://s3.console.aws.amazon.com/s3/home)
2. Click "Create bucket"
3. Enter a bucket name and choose an AWS region, then scroll down and click "Next"

Alternatively, using the AWS CLI:

1. Install the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
2. Run `aws s3api create-bucket --bucket <bucket-name> --region <region> --create-bucket-configuration LocationConstraint=<region>` (replace `<bucket-name>` and `<region>` with your own values)

Then, create an IAM policy with `PutObject` access to the bucket, attach it to an IAM user, and create access keys for that user. In the AWS console:

1. Go to the [AWS IAM console](https://console.aws.amazon.com/iam/home)
2. Click "Policies" in the left sidebar
3. Click on "Create policy"
4. Select the "JSON" tab in the policy editor and paste the following policy:

   ```json
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "KVBackup",
         "Effect": "Allow",
         "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::<bucket-name>/*"
       }
     ]
   }
   ```

   Replace `<bucket-name>` with the name of the bucket you created earlier.

5. Click "Review policy"
6. Enter a name for the policy and click "Create policy"
7. Click "Users" in the left sidebar
8. Click "Add user"
9. Enter a name for the user and click "Next"
10. Click "Attach policies directly"
11. Search for the policy you created earlier and click the checkbox next to it
12. Click "Next"
13. Click "Create user"
14. Click on the user you just created
15. Click "Security credentials" and then "Create access key"
16. Select "Other", then click "Next"
17. Enter a description for the access key and click "Create access key"
18. Copy the access key ID and secret access key and save them somewhere safe. You will need them later, and you will not be able to retrieve them again.

Alternatively, using the AWS CLI:

1. Copy the following command to your terminal, and replace `<bucket-name>` with the name of the bucket you created earlier, then run it:

   ```
   aws iam create-policy --policy-name <policy-name> --policy-document '{"Version":"2012-10-17","Statement":[{"Sid":"KVBackup","Effect":"Allow","Action":"s3:PutObject","Resource":"arn:aws:s3:::<bucket-name>/*"}]}'
   ```

2. Copy the following command to your terminal, and replace `<user-name>` with a name for the user you are creating, then run it:

   ```
   aws iam create-user --user-name <user-name>
   ```

3. Copy the following command to your terminal, and replace `<policy-arn>` with the ARN of the policy you created in step 1, and `<user-name>` with the name of the user you created in the previous step, then run it:

   ```
   aws iam attach-user-policy --policy-arn <policy-arn> --user-name <user-name>
   ```

4. Copy the following command to your terminal, and replace `<user-name>` with the name of the user you created in step 2, then run it:

   ```
   aws iam create-access-key --user-name <user-name>
   ```

5. Copy the access key ID and secret access key and save them somewhere safe. You will need them later, and you will not be able to retrieve them again.

Now visit the [Deno Deploy dashboard](https://dash.deno.com), and click on the "KV" tab in your project. Scroll to the "Backup" section, and click on "AWS S3". Enter the bucket name, access key ID, and secret access key you created earlier, and the region the bucket is in. Then click "Save".

The backup will start immediately. Once the data has been backed up, and continuous backup is active, you will see the status change to "Active".

## Configuring backup to Google Cloud Storage

Google Cloud Storage (GCS) is compatible with the S3 protocol, and can also be used as a backup target.

First you must create a bucket on GCP. In the GCP console:

1. Go to the [GCP Cloud Storage console](https://console.cloud.google.com/storage/browser)
2. Click on "Create" in the top bar
3. Enter a bucket name, choose a location, and click "Create"

Alternatively, using the gcloud CLI:

1. Install the [gcloud CLI](https://cloud.google.com/sdk/docs/install)
2. Run `gcloud storage buckets create gs://<bucket-name> --location <location>` (replace `<bucket-name>` and `<location>` with your own values)

Then, create a service account with `Storage Object Admin` access to the bucket, and create an HMAC access key for the service account. In the GCP console:

1. Go to the [GCP IAM console](https://console.cloud.google.com/iam-admin/iam)
2. Click on "Service accounts" in the left sidebar
3. Click on "Create service account"
4. Enter a name for the service account and click "Done"
5. Copy the email for the service account you just created. You will need it later.
6. Go to the [GCP Cloud Storage console](https://console.cloud.google.com/storage/browser)
7. Click on the bucket you created earlier
8. Click on "Permissions" in the toolbar
9. Click "Grant access"
10. Paste the email for the service account you copied earlier into the "New principals" field
11. Select "Storage Object Admin" from the "Select a role" dropdown
12. Click "Save"
13. Click on "Settings" in the left sidebar (still in the Cloud Storage console)
14. Click on the "Interoperability" tab
15. Click on "Create a key for a service account"
16. Select the service account you created earlier
17. Click "Create key"
18. Copy the access key and secret access key and save them somewhere safe. You will need them later, and you will not be able to retrieve them again.

Or with the gcloud CLI:

1. Run the following command, replacing `<service-account-name>` with a name for the service account you are creating:

   ```
   gcloud iam service-accounts create <service-account-name>
   ```

2. Run the following command, replacing `<bucket-name>` with the name of the bucket you created earlier, and `<service-account-email>` with the email of the service account you created in the previous step:

   ```
   gsutil iam ch serviceAccount:<service-account-email>:objectAdmin gs://<bucket-name>
   ```

3. Run the following command, replacing `<service-account-email>` with the email of the service account you created in the previous step:

   ```
   gcloud storage hmac create <service-account-email>
   ```

4. Copy the `accessId` and `secret` and save them somewhere safe. You will need them later, and you will not be able to retrieve them again.

Now visit the [Deno Deploy dashboard](https://dash.deno.com), and click on the "KV" tab in your project. Scroll to the "Backup" section, and click on "Google Cloud Storage". Enter the bucket name, access key ID, and secret access key you created earlier, and the region the bucket is in. Then click "Save".

The backup will start immediately. Once the data has been backed up, and continuous backup is active, you will see the status change to "Active".

## Using backups

S3 backups can be used with the `denokv` tool. Please refer to the [documentation](https://github.com/denoland/denokv) for more details.

---

# Data Modeling in TypeScript

URL: https://docs.deno.com/deploy/kv/data_modeling_typescript

In TypeScript applications, it is usually desirable to create strongly typed, well-documented objects to contain the data that your application operates on. Using [interfaces](https://www.typescriptlang.org/docs/handbook/2/objects.html) or [classes](https://www.typescriptlang.org/docs/handbook/2/classes.html), you can describe both the shape and behavior of objects in your programs.

If you are using Deno KV, however, there is a bit of extra work required to persist and retrieve objects that are strongly typed. In this guide, we'll cover strategies for working with strongly typed objects going into and back out of Deno KV.

## Using interfaces and type assertions

When storing and retrieving application data in Deno KV, you might want to begin by describing the shape of your data using TypeScript interfaces.
Below is an object model which describes some key components of a blogging system:

```ts title="model.ts"
export interface Author {
  username: string;
  fullName: string;
}

export interface Post {
  slug: string;
  title: string;
  body: string;
  author: Author;
  createdAt: Date;
  updatedAt: Date;
}
```

This object model describes a blog post and an associated author. With Deno KV, you can use these TypeScript interfaces like [data transfer objects (DTOs)](https://martinfowler.com/bliki/LocalDTO.html) - a strongly typed wrapper around the otherwise untyped objects you might send to or receive from Deno KV.

Without any additional work, you can happily store the contents of one of these DTOs in Deno KV.

```ts
import { Author } from "./model.ts";

const kv = await Deno.openKv();

const a: Author = {
  username: "acdoyle",
  fullName: "Arthur Conan Doyle",
};

await kv.set(["authors", a.username], a);
```

When retrieving this same object from Deno KV, however, it won't by default have type information associated with it. If you know the shape of the object that was stored for the key, however, you can use [type assertion](https://www.typescriptlang.org/docs/handbook/2/everyday-types.html#type-assertions) to inform the TypeScript compiler about the shape of an object.

```ts
import { Author } from "./model.ts";

const kv = await Deno.openKv();

const r = await kv.get(["authors", "acdoyle"]);
const ac = r.value as Author;

console.log(ac.fullName);
```

You can also specify an optional [type parameter](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.get) for `get`:

```ts
import { Author } from "./model.ts";

const kv = await Deno.openKv();

const r = await kv.get<Author>(["authors", "acdoyle"]);

console.log(r.value?.fullName);
```

For simpler data structures, this technique may be sufficient. But often, you will want or need to apply some business logic when creating or accessing your domain objects. When this need arises, you can develop a set of pure functions that can operate on your DTOs.

## Encapsulating business logic with a service layer

When your application's persistence needs become more complex - such as when you need to create [secondary indexes](./secondary_indexes) to query your data by different keys, or maintain relationships between objects - you will want to create a set of functions to sit on top of your DTOs to ensure that the data being passed around is valid (and not merely typed correctly).

From our business objects above, the `Post` object is complex enough that it is likely to need a small layer of code to save and retrieve an instance of the object. Below is an example of two functions that wrap the underlying Deno KV APIs, and return strongly typed object instances for the `Post` interface.

Notably, we need to store an identifier for an `Author` object, so we can retrieve author information from KV later.
```ts
import { Author, Post } from "./model.ts";

const kv = await Deno.openKv();

interface RawPost extends Post {
  authorUsername: string;
}

export async function savePost(p: Post): Promise<Post> {
  const postData: RawPost = Object.assign({}, p, {
    authorUsername: p.author.username,
  });
  await kv.set(["posts", p.slug], postData);
  return p;
}

export async function getPost(slug: string): Promise<Post> {
  const postResponse = await kv.get(["posts", slug]);
  const rawPost = postResponse.value as RawPost;
  const authorResponse = await kv.get(["authors", rawPost.authorUsername]);
  const author = authorResponse.value as Author;
  const post = Object.assign({}, rawPost, { author }) as Post;
  return post;
}
```

This thin layer uses a `RawPost` interface, which extends the actual `Post` interface, to include some additional data that is used to reference data at another index (the associated `Author` object). The `savePost` and `getPost` functions take the place of a direct Deno KV `get` or `set` operation, so that they can properly serialize and "hydrate" model objects for us with appropriate types and associations.

---

# Deno KV Quick Start

URL: https://docs.deno.com/deploy/kv/

Deno KV is a [key-value database](https://en.wikipedia.org/wiki/Key%E2%80%93value_database) built directly into the Deno runtime, available in the [`Deno.Kv` namespace](https://docs.deno.com/api/deno/~/Deno.Kv). It can be used for many kinds of data storage use cases, but excels at storing simple data structures that benefit from very fast reads and writes. Deno KV is available in the Deno CLI and on [Deno Deploy](/deploy/reference/deno_kv/).

:::caution

Deno KV is still in development and may change. To use it, you must pass the `--unstable-kv` flag to Deno.

:::

Let's walk through the key features of Deno KV.

## Opening a database

In your Deno program, you can get a reference to a KV database using [`Deno.openKv()`](https://docs.deno.com/api/deno/~/Deno.openKv). You may pass in an optional file system path to where you'd like to store your database; otherwise, one will be created for you based on the current working directory of your script.

```ts
const kv = await Deno.openKv();
```

## Creating, updating, and reading a key-value pair

Data in Deno KV is stored as key-value pairs, much like properties of a JavaScript object literal or a [Map](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map). [Keys](./key_space) are represented as an array of JavaScript types, like `string`, `number`, `bigint`, or `boolean`. Values can be arbitrary JavaScript objects. In this example, we create a key-value pair representing a user's UI preferences, and save it with [`kv.set()`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.set).

```ts
const kv = await Deno.openKv();

const prefs = {
  username: "ada",
  theme: "dark",
  language: "en-US",
};

const result = await kv.set(["preferences", "ada"], prefs);
```

Once a key-value pair is set, you can read it from the database with [`kv.get()`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.get):

```ts
const entry = await kv.get(["preferences", "ada"]);
console.log(entry.key);
console.log(entry.value);
console.log(entry.versionstamp);
```

Both `get` and `list` [operations](./operations) return a [KvEntry](https://docs.deno.com/api/deno/~/Deno.KvEntry) object with the following properties:

- `key` - the array key you used to set the value
- `value` - the JavaScript object you set for this key
- `versionstamp` - a generated value used to determine if a key has been updated

The `set` operation is also used to update objects that already exist for a given key. When a key's value is updated, its `versionstamp` will change to a new generated value.
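As a small illustration, reusing the preferences example above, setting the same key twice yields two different versionstamps:

```ts
const kv = await Deno.openKv();

const key = ["preferences", "ada"];
const first = await kv.set(key, { theme: "dark" });

// Overwrite the value at the same key; the commit produces a new versionstamp.
const second = await kv.set(key, { theme: "light" });

console.log(first.versionstamp !== second.versionstamp); // true
```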
## Listing several key-value pairs

To get values for a finite number of keys, you may use [`kv.getMany()`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.getMany). Pass in several keys as arguments, and you'll receive an array of values for each key. Note that **values and versionstamps can be `null`** if no value exists for the given key(s).

```ts
const kv = await Deno.openKv();

const result = await kv.getMany([
  ["preferences", "ada"],
  ["preferences", "grace"],
]);

result[0].key; // ["preferences", "ada"]
result[0].value; // { ... }
result[0].versionstamp; // "00000000000000010000"
result[1].key; // ["preferences", "grace"]
result[1].value; // null
result[1].versionstamp; // null
```

Often, it is useful to retrieve a list of key-value pairs from all keys that share a given prefix. This type of operation is possible using [`kv.list()`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.list). In this example, we get a list of key-value pairs that share the `"preferences"` prefix.

```ts
const kv = await Deno.openKv();

const entries = kv.list({ prefix: ["preferences"] });
for await (const entry of entries) {
  console.log(entry.key); // ["preferences", "ada"]
  console.log(entry.value); // { ... }
  console.log(entry.versionstamp); // "00000000000000010000"
}
```

Returned keys are ordered lexicographically based on the next component of the key after the prefix. So KV pairs with these keys:

- `["preferences", "ada"]`
- `["preferences", "bob"]`
- `["preferences", "cassie"]`

will be returned in that order by `kv.list()`.

Read operations can either be performed in [**strong or eventual consistency mode**](./operations). Strong consistency mode guarantees that the read operation will return the most recently written value. Eventual consistency mode may return a stale value, but is faster. By contrast, writes are always performed in strong consistency mode.

## Deleting key-value pairs

You can delete a key from the database using [`kv.delete()`](https://docs.deno.com/api/deno/~/Deno.Kv.prototype.delete). No action is taken if no value is found for the given key.

```ts
const kv = await Deno.openKv();

await kv.delete(["preferences", "alan"]);
```

## Atomic transactions

Deno KV is capable of executing [atomic transactions](./transactions), which enable you to conditionally execute one or many data manipulation operations at once. In the following example, we create a new preferences object only if it hasn't been created already.

```ts
const kv = await Deno.openKv();

const key = ["preferences", "alan"];
const value = {
  username: "alan",
  theme: "light",
  language: "en-GB",
};

const res = await kv.atomic()
  .check({ key, versionstamp: null }) // `null` versionstamps mean "no value"
  .set(key, value)
  .commit();
if (res.ok) {
  console.log("Preferences did not yet exist. Inserted!");
} else {
  console.error("Preferences already exist.");
}
```

Learn more about transactions in Deno KV [here](./transactions).

## Improve querying with secondary indexes

[Secondary indexes](./secondary_indexes) store the same data by multiple keys, allowing for simpler queries of the data you need. Let's say that we need to be able to access user preferences by both username AND email. To enable this, you could provide a function that wraps the preference-saving logic and creates two indexes.
```ts
const kv = await Deno.openKv();

async function savePreferences(prefs) {
  const key = ["preferences", prefs.username];

  // Set the primary key
  const r = await kv.set(key, prefs);

  // Set the secondary key's value to be the primary key
  await kv.set(["preferencesByEmail", prefs.email], key);

  return r;
}

async function getByUsername(username) {
  // Use as before...
  const r = await kv.get(["preferences", username]);
  return r;
}

async function getByEmail(email) {
  // Look up the key by email, then do a second lookup for the actual data
  const r1 = await kv.get(["preferencesByEmail", email]);
  const r2 = await kv.get(r1.value);
  return r2;
}
```

Learn more about [secondary indexes in the manual here](./secondary_indexes/).

## Watching for updates in Deno KV

You can also listen for updates from Deno KV with `kv.watch()`, which will emit a new value or values of the key or keys you provide. In the chat example below, we watch for updates on the key `["last_message_id", roomId]`. We retrieve `messageId`, which we then use with `kv.list()` to grab all the new messages between `seen` and `messageId`.

```ts
const kv = await Deno.openKv();

// `roomId` and `websocket` are assumed to be provided by the surrounding
// chat application.
let seen = "";
for await (const [messageId] of kv.watch([["last_message_id", roomId]])) {
  const newMessages = await Array.fromAsync(kv.list({
    start: ["messages", roomId, seen, ""],
    end: ["messages", roomId, messageId, ""],
  }));
  await websocket.write(JSON.stringify(newMessages));
  seen = messageId;
}
```

Learn more about [using Deno KV watch here](./operations#watch).

## Production usage

Deno KV is available for use in live applications on [Deno Deploy](/deploy/reference/deno_kv/). In production, Deno KV is backed by [FoundationDB](https://www.foundationdb.org/), the open source key-value store created by Apple.

## Testing

By default, [`Deno.openKv()`](https://docs.deno.com/api/deno/~/Deno.openKv) creates or opens a persistent store based on the path from which the script that invoked it was run. This isn't usually desirable for tests, which need to produce the same behavior when run many times in a row.

To test code that uses Deno KV, you can use the special argument `":memory:"` to create an ephemeral Deno KV datastore.

```ts
import { assertEquals } from "jsr:@std/assert";

async function setDisplayName(
  kv: Deno.Kv,
  username: string,
  displayname: string,
) {
  await kv.set(["preferences", username, "displayname"], displayname);
}

async function getDisplayName(
  kv: Deno.Kv,
  username: string,
): Promise<string | null> {
  return (await kv.get(["preferences", username, "displayname"]))
    .value as string | null;
}

Deno.test("Preferences", async (t) => {
  const kv = await Deno.openKv(":memory:");

  await t.step("can set displayname", async () => {
    const displayName = await getDisplayName(kv, "example");
    assertEquals(displayName, null);

    await setDisplayName(kv, "example", "Exemplary User");

    const updatedDisplayName = await getDisplayName(kv, "example");
    assertEquals(updatedDisplayName, "Exemplary User");
  });

  // Close the database so the test's resource sanitizer passes.
  kv.close();
});
```

This works because Deno KV is backed by SQLite when run for local development. Just like in-memory SQLite databases, multiple ephemeral Deno KV stores can exist at once without interfering with one another. For more information about special database addressing modes, see [the SQLite docs on the topic](https://www.sqlite.org/inmemorydb.html).

## Next steps

At this point, you're just beginning to scratch the surface with Deno KV. Be sure to check out our guide on the [Deno KV key space](./key_space), and a collection of [tutorials and example applications](../tutorials/index.md) here.
---

# Key Expiration (TTL for keys)

URL: https://docs.deno.com/deploy/kv/key_expiration

Since version 1.36.2, Deno KV supports key expiration, allowing developers to control the time to live (TTL) for keys in a KV database. This allows an expiration timestamp to be associated with a key, after which the key will be automatically deleted from the database:

```ts
const kv = await Deno.openKv();

// `expireIn` is the number of milliseconds after which the key will expire.
async function addSession(session: Session, expireIn: number) {
  await kv.set(["sessions", session.id], session, { expireIn });
}
```

Key expiration is supported on both Deno CLI and Deno Deploy.

## Atomic expiration of multiple keys

If multiple keys are set in the same atomic operation and have the same `expireIn` value, the expiration of those keys will be atomic. For example:

```ts
const kv = await Deno.openKv();

async function addUnverifiedUser(
  user: User,
  verificationToken: string,
  expireIn: number,
) {
  await kv.atomic()
    .set(["users", user.id], user, { expireIn })
    .set(["verificationTokens", verificationToken], user.id, { expireIn })
    .commit();
}
```

## Caveats

The expire timestamp specifies the _earliest_ time after which the key can be deleted from the database. An implementation is allowed to expire a key at any time after the specified timestamp, but not before. If you need to strictly enforce an expiration time (e.g. for security purposes), please also add it as a field of your value and do a check after retrieving the value from the database.
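A minimal sketch of such a retrieval-time check follows, assuming the expiry deadline was stored on the value as an `expiresAt` field (a field name chosen here for illustration):

```ts
const kv = await Deno.openKv();

// Treat a session as expired as soon as its stored deadline has passed,
// even if KV has not physically deleted the key yet.
async function getLiveSession(id: string) {
  const res = await kv.get<{ expiresAt: number }>(["sessions", id]);
  if (res.value === null) return null; // already deleted
  if (Date.now() >= res.value.expiresAt) return null; // expired, not yet deleted
  return res.value;
}
```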
---

# Key Space

URL: https://docs.deno.com/deploy/kv/key_space

Deno KV is a key-value store. The key space is a flat namespace of key+value+versionstamp pairs. Keys are sequences of key parts, which allow modeling of hierarchical data. Values are arbitrary JavaScript objects. Versionstamps represent when a value was inserted / modified.

## Keys

Keys in Deno KV are sequences of key parts, which can be `string`s, `number`s, `boolean`s, `Uint8Array`s, or `bigint`s.

Using a sequence of parts, rather than a single string, eliminates the possibility of delimiter injection attacks, because there is no visible delimiter.

> A key injection attack occurs when an attacker manipulates the structure of a
> key-value store by injecting delimiters used in the key encoding scheme into a
> user controlled variable, leading to unintended behavior or unauthorized
> access. For example, consider a key-value store using a slash (/) as a
> delimiter, with keys like "users/alice/settings" and "users/bob/settings". An
> attacker could create a new user with the name "alice/settings/hacked" to form
> the key "users/alice/settings/hacked/settings", injecting the delimiter and
> manipulating the key structure. In Deno KV, the injection would result in the
> key `["users", "alice/settings/hacked", "settings"]`, which is not harmful.

Between key parts, invisible delimiters are used to separate the parts. These delimiters are never visible, but ensure that one part can not be confused with another part. For example, the key parts `["abc", "def"]`, `["ab", "cdef"]`, and `["abc", "", "def"]` are all different keys.

Keys are case sensitive and are ordered lexicographically by their parts. The first part is the most significant, and the last part is the least significant. The order of the parts is determined by both the type and the value of the part.

### Key Part Ordering

Key parts are ordered lexicographically by their type, and within a given type, they are ordered by their value. The ordering of types is as follows:

1. `Uint8Array`
2. `string`
3. `number`
4. `bigint`
5. `boolean`

Within a given type, the ordering is:

- `Uint8Array`: byte ordering of the array
- `string`: byte ordering of the UTF-8 encoding of the string
- `number`: -Infinity < -1.0 < -0.5 < -0.0 < 0.0 < 0.5 < 1.0 < Infinity < NaN
- `bigint`: mathematical ordering, largest negative number first, largest positive number last
- `boolean`: false < true

This means that the part `1.0` (a number) is ordered before the part `2.0` (also a number), but is greater than the part `0n` (a bigint), because `1.0` is a number and `0n` is a bigint, and type ordering has precedence over the ordering of values within a type.

### Key Examples

```js
["users", 42, "profile"]; // User with ID 42's profile
["posts", "2023-04-23", "comments"]; // Comments for all posts on 2023-04-23
["products", "electronics", "smartphones", "apple"]; // Apple smartphones in the electronics category
["orders", 1001, "shipping", "tracking"]; // Tracking information for order ID 1001
["files", new Uint8Array([1, 2, 3]), "metadata"]; // Metadata for a file with Uint8Array identifier
["projects", "openai", "tasks", 5]; // Task with ID 5 in the OpenAI project
["events", "2023-03-31", "location", "san_francisco"]; // Events in San Francisco on 2023-03-31
["invoices", 2023, "Q1", "summary"]; // Summary of Q1 invoices for 2023
["teams", "engineering", "members", 1n]; // Member with ID 1n in the engineering team
```

### Universally Unique Lexicographically Sortable Identifiers (ULIDs)

Key part ordering allows keys consisting of timestamps and ID parts to be listed chronologically. Typically, you can generate such a key using [`Date.now()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/now) and [`crypto.randomUUID()`](https://developer.mozilla.org/en-US/docs/Web/API/Crypto/randomUUID):

```js
const kv = await Deno.openKv();

async function setUser(user) {
  await kv.set(["users", Date.now(), crypto.randomUUID()], user);
}
```

Run multiple times sequentially, this produces the following keys:

```js
["users", 1691377037923, "8c72fa25-40ad-42ce-80b0-44f79bc7a09e"]; // First user
["users", 1691377037924, "8063f20c-8c2e-425e-a5ab-d61e7a717765"]; // Second user
["users", 1691377037925, "35310cea-58ba-4101-b09a-86232bf230b2"]; // Third user
```

However, having the timestamp and ID represented within a single key part may be more straightforward in some cases. You can use a [Universally Unique Lexicographically Sortable Identifier (ULID)](https://github.com/ulid/spec) to do this.
This type of identifier encodes a UTC timestamp, is lexicographically sortable, and is cryptographically random by default:

```js
import { ulid } from "jsr:@std/ulid";

const kv = await Deno.openKv();

async function setUser(user) {
  await kv.set(["users", ulid()], user);
}
```

```js
["users", "01H76YTWK3YBV020S6MP69TBEQ"]; // First user
["users", "01H76YTWK4V82VFET9YTYDQ0NY"]; // Second user
["users", "01H76YTWK5DM1G9TFR0Y5SCZQV"]; // Third user
```

Furthermore, you can generate monotonically increasing ULIDs using the `monotonicUlid` function:

```js
import { monotonicUlid } from "jsr:@std/ulid";

async function setUser(user) {
  await kv.set(["users", monotonicUlid()], user);
}
```

```js
// Strict ordering for the same timestamp by incrementing the least-significant random bit by 1
["users", "01H76YTWK3YBV020S6MP69TBEQ"]; // First user
["users", "01H76YTWK3YBV020S6MP69TBER"]; // Second user
["users", "01H76YTWK3YBV020S6MP69TBES"]; // Third user
```

## Values

Values in Deno KV can be arbitrary JavaScript values that are compatible with the [structured clone algorithm][structured clone algorithm]. This includes:

- `undefined`
- `null`
- `boolean`
- `number`
- `string`
- `bigint`
- `Uint8Array`
- `Array`
- `Object`
- `Map`
- `Set`
- `Date`
- `RegExp`

Objects and arrays can contain any of the above types, including other objects and arrays. `Map`s and `Set`s can contain any of the above types, including other `Map`s and `Set`s. Circular references within values are supported.

Objects with a non-primitive prototype are not supported (such as class instances or Web API objects). Functions and symbols can also not be serialized.

### `Deno.KvU64` type

In addition to structured serializable values, the special value `Deno.KvU64` is also supported as a value. This object represents a 64-bit unsigned integer, represented as a bigint. It can be used with the `sum`, `min`, and `max` KV operations. It can not be stored within an object or array. It must be stored as a top-level value.

It can be created with the `Deno.KvU64` constructor:

```js
const u64 = new Deno.KvU64(42n);
```

### Value Examples

```js,ignore
undefined;
null;
true;
false;
42;
-42.5;
42n;
"hello";
new Uint8Array([1, 2, 3]);
[1, 2, 3];
{ a: 1, b: 2, c: 3 };
new Map([["a", 1], ["b", 2], ["c", 3]]);
new Set([1, 2, 3]);
new Date("2023-04-23");
/abc/;

// Circular references are supported
const a = {};
const b = { a };
a.b = b;

// Deno.KvU64 is supported
new Deno.KvU64(42n);
```

## Versionstamp

All data in the Deno KV key-space is versioned. Every time a value is inserted or modified, a versionstamp is assigned to it. Versionstamps are monotonically increasing, non-sequential, 12 byte values that represent the time that the value was modified. Versionstamps do not represent real time, but rather the order in which the values were modified.

Because versionstamps are monotonically increasing, they can be used to determine whether a given value is newer or older than another value. This can be done by comparing the versionstamps of the two values. If versionstamp A is greater than versionstamp B, then value A was modified more recently than value B.

```js
versionstampA > versionstampB;
"000002fa526aaccb0000" > "000002fa526aacc90000"; // true
```

All data modified by a single transaction are assigned the same versionstamp. This means that if two `set` operations are performed in the same atomic operation, then the versionstamp of the new values will be the same.

Versionstamps are used to implement optimistic concurrency control.
Atomic operations can contain checks that ensure that the versionstamp of the data they are operating on matches a versionstamp passed to the operation. If the versionstamp of the data is not the same as the versionstamp passed to the operation, the transaction will fail and the operation will not be applied.

[structured clone algorithm]: https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Structured_clone_algorithm

---

# Using KV in Node.js

URL: https://docs.deno.com/deploy/kv/node

Connecting to a Deno KV database in Node.js is supported via our [official client library on npm](https://www.npmjs.com/package/@deno/kv). You can find usage instructions for this option below.

## Installation and usage

Use your preferred npm client to install the client library for Node.js using one of the commands below.

```sh
npm install @deno/kv
```

```sh
pnpm add @deno/kv
```

```sh
yarn add @deno/kv
```

Once you've added the package to your Node project, you can import the `openKv` function (supports both ESM `import` and CJS `require`-based usage):

```js
import { openKv } from "@deno/kv";

// Connect to a KV instance
const kv = await openKv("<KV Connect URL>");

// Write some data
await kv.set(["users", "alice"], { name: "Alice" });

// Read it back
const result = await kv.get(["users", "alice"]);
console.log(result.value); // { name: "Alice" }
```

By default, the access token used for authentication comes from the `DENO_KV_ACCESS_TOKEN` environment variable. You can also pass it explicitly:

```js
import { openKv } from "@deno/kv";

const kv = await openKv("<KV Connect URL>", { accessToken: myToken });
```

Once your Deno KV client is initialized, the same API available in Deno may be used in Node as well.

## KV Connect URLs

Connecting to a KV database outside of Deno requires a [KV Connect](https://github.com/denoland/denokv/blob/main/proto/kv-connect.md) URL. A KV Connect URL for a database hosted on Deno Deploy will be in this format: `https://api.deno.com/databases/<database-id>/connect`. The `database-id` for your project can be found in the [Deno Deploy dashboard](https://dash.deno.com/projects), under the project's "KV" tab.

![Connection string locations in Deploy](./images/kv-connect.png)

## More information

More information about how to use the Deno KV module for Node can be found on the project's [README page](https://www.npmjs.com/package/@deno/kv).

---

# Operations

URL: https://docs.deno.com/deploy/kv/operations

The Deno KV API provides a set of operations that can be performed on the key space. There are two operations that read data from the store, and five operations that write data to the store.

Read operations can either be performed in strong or eventual consistency mode. Strong consistency mode guarantees that the read operation will return the most recently written value. Eventual consistency mode may return a stale value, but is faster. Write operations are always performed in strong consistency mode.

## `get`

The `get` operation returns the value and versionstamp associated with a given key. If a value does not exist, get returns a `null` value and versionstamp.

There are two APIs that can be used to perform a `get` operation. The [`Deno.Kv.prototype.get(key, options?)`][get] API, which can be used to read a single key, and the [`Deno.Kv.prototype.getMany(keys, options?)`][getMany] API, which can be used to read multiple keys at once.

Get operations are performed as a "snapshot read" in all consistency modes.
This means that when retrieving multiple keys at once, the values returned will be consistent with each other.

```ts
const res = await kv.get(["config"]);
console.log(res); // { key: ["config"], value: "value", versionstamp: "000002fa526aaccb0000" }

const eventualRes = await kv.get(["config"], { consistency: "eventual" });
console.log(eventualRes); // { key: ["config"], value: "value", versionstamp: "000002fa526aaccb0000" }

const [res1, res2, res3] = await kv.getMany<[string, string, string]>([
  ["users", "sam"],
  ["users", "taylor"],
  ["users", "alex"],
]);
console.log(res1); // { key: ["users", "sam"], value: "sam", versionstamp: "00e0a2a0f0178b270000" }
console.log(res2); // { key: ["users", "taylor"], value: "taylor", versionstamp: "0059e9035e5e7c5e0000" }
console.log(res3); // { key: ["users", "alex"], value: "alex", versionstamp: "00a44a3c3e53b9750000" }
```

## `list`

The `list` operation returns a list of keys that match a given selector. The associated values and versionstamps for these keys are also returned. There are two different selectors that can be used to filter the keys matched.

The `prefix` selector matches all keys that start with the given prefix key parts, but not an exact match of the key itself. The prefix selector may optionally be given a `start` OR `end` key to limit the range of keys returned. The `start` key is inclusive, and the `end` key is exclusive.

The `range` selector matches all keys that are lexicographically between the given `start` and `end` keys. The `start` key is inclusive, and the `end` key is exclusive.

> Note: In the case of the prefix selector, the `prefix` key must consist only
> of full (not partial) key parts. For example, if the key `["foo", "bar"]`
> exists in the store, then the prefix selector `["foo"]` will match it, but the
> prefix selector `["f"]` will not.

The list operation may optionally be given a `limit` to limit the number of keys returned.

List operations can be performed using the [`Deno.Kv.prototype.list(selector, options?)`][list] method. This method returns a `Deno.KvListIterator` that can be used to iterate over the keys returned. This is an async iterator, and can be used with `for await` loops.
```ts
// Return all users
const allUsersIter = kv.list({ prefix: ["users"] });
const allUsers = [];
for await (const res of allUsersIter) allUsers.push(res);
console.log(allUsers[0]); // { key: ["users", "alex"], value: "alex", versionstamp: "00a44a3c3e53b9750000" }
console.log(allUsers[1]); // { key: ["users", "sam"], value: "sam", versionstamp: "00e0a2a0f0178b270000" }
console.log(allUsers[2]); // { key: ["users", "taylor"], value: "taylor", versionstamp: "0059e9035e5e7c5e0000" }

// Return the first 2 users
const firstTwoIter = kv.list({ prefix: ["users"] }, { limit: 2 });
const firstTwo = [];
for await (const res of firstTwoIter) firstTwo.push(res);
console.log(firstTwo[0]); // { key: ["users", "alex"], value: "alex", versionstamp: "00a44a3c3e53b9750000" }
console.log(firstTwo[1]); // { key: ["users", "sam"], value: "sam", versionstamp: "00e0a2a0f0178b270000" }

// Return all users lexicographically after "taylor"
const afterTaylorIter = kv.list({ prefix: ["users"], start: ["users", "taylor"] });
const afterTaylor = [];
for await (const res of afterTaylorIter) afterTaylor.push(res);
console.log(afterTaylor[0]); // { key: ["users", "taylor"], value: "taylor", versionstamp: "0059e9035e5e7c5e0000" }

// Return all users lexicographically before "taylor"
const beforeTaylorIter = kv.list({ prefix: ["users"], end: ["users", "taylor"] });
const beforeTaylor = [];
for await (const res of beforeTaylorIter) beforeTaylor.push(res);
console.log(beforeTaylor[0]); // { key: ["users", "alex"], value: "alex", versionstamp: "00a44a3c3e53b9750000" }
console.log(beforeTaylor[1]); // { key: ["users", "sam"], value: "sam", versionstamp: "00e0a2a0f0178b270000" }

// Return all users starting with characters between "a" and "n"
const aToNIter = kv.list({ start: ["users", "a"], end: ["users", "n"] });
const aToN = [];
for await (const res of aToNIter) aToN.push(res);
console.log(aToN[0]); // { key: ["users", "alex"], value: "alex", versionstamp: "00a44a3c3e53b9750000" }
```

The list operation reads data from the store in batches. The size of each batch can be controlled using the `batchSize` option. The default batch size is 500 keys. Data within a batch is read in a single snapshot read, so the values are consistent with each other. Consistency modes apply to each batch of data read. Across batches, data is not consistent. The borders between batches are not visible from the API as the iterator returns individual keys.
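For example, a large scan can be tuned to read in bigger batches (the maximum batch size is 1000; see the limits listed on the Transactions page):

```ts
// Read up to 1000 entries per underlying batch instead of the default 500.
const iter = kv.list({ prefix: ["users"] }, { batchSize: 1000 });
for await (const entry of iter) {
  // Each batch is internally consistent; consistency is not guaranteed
  // across batch boundaries.
  console.log(entry.key);
}
```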
The list operation can be performed in reverse order by setting the `reverse` option to `true`. This will return the keys in lexicographically descending order. The `start` and `end` keys are still inclusive and exclusive respectively, and are still interpreted as lexicographically ascending.

```ts
// Return all users in reverse order, ending with "sam"
const reverseIter = kv.list({ prefix: ["users"], start: ["users", "sam"] }, {
  reverse: true,
});
const reverseUsers = [];
for await (const res of reverseIter) reverseUsers.push(res);
console.log(reverseUsers[0]); // { key: ["users", "taylor"], value: "taylor", versionstamp: "0059e9035e5e7c5e0000" }
console.log(reverseUsers[1]); // { key: ["users", "sam"], value: "sam", versionstamp: "00e0a2a0f0178b270000" }
```

> Note: in the above example we set the `start` key to `["users", "sam"]`, even
> though the first key returned is `["users", "taylor"]`. This is because the
> `start` and `end` keys are always evaluated in lexicographically ascending
> order, even when the list operation is performed in reverse order (which
> returns the keys in lexicographically descending order).

## `set`

The `set` operation sets the value of a key in the store. If the key does not exist, it is created. If the key already exists, its value is overwritten.

The `set` operation can be performed using the [`Deno.Kv.prototype.set(key, value)`][set] method. This method returns a `Promise` that resolves to a `Deno.KvCommitResult` object, which contains the `versionstamp` of the commit.

Set operations are always performed in strong consistency mode.

```ts
const res = await kv.set(["users", "alex"], "alex");
console.log(res.versionstamp); // "00a44a3c3e53b9750000"
```

## `delete`

The `delete` operation deletes a key from the store. If the key does not exist, the operation is a no-op.

The `delete` operation can be performed using the [`Deno.Kv.prototype.delete(key)`][delete] method.

Delete operations are always performed in strong consistency mode.

```ts
await kv.delete(["users", "alex"]);
```

## `sum`

The `sum` operation atomically adds a value to a key in the store. If the key does not exist, it is created with the value of the sum. If the key already exists, the given value is added to its current value.

The `sum` operation can only be performed as part of an atomic operation. The [`Deno.AtomicOperation.prototype.mutate({ type: "sum", value })`][mutate] method can be used to add a sum mutation to an atomic operation.

The sum operation can only be performed on values of type `Deno.KvU64`. Both the operand and the value in the store must be of type `Deno.KvU64`.

If the new value of the key is greater than `2^64 - 1` or less than `0`, the sum operation wraps around. For example, if the value in the store is `2^64 - 1` and the operand is `1`, the new value will be `0`.

Sum operations are always performed in strong consistency mode.

```ts
await kv.atomic()
  .mutate({
    type: "sum",
    key: ["accounts", "alex"],
    value: new Deno.KvU64(100n),
  })
  .commit();
```

## `min`

The `min` operation atomically sets a key to the minimum of its current value and a given value. If the key does not exist, it is created with the given value. If the key already exists, its value is set to the minimum of its current value and the given value.

The `min` operation can only be performed as part of an atomic operation. The [`Deno.AtomicOperation.prototype.mutate({ type: "min", value })`][mutate] method can be used to add a min mutation to an atomic operation.

The min operation can only be performed on values of type `Deno.KvU64`. Both the operand and the value in the store must be of type `Deno.KvU64`.

Min operations are always performed in strong consistency mode.

```ts
await kv.atomic()
  .mutate({
    type: "min",
    key: ["accounts", "alex"],
    value: new Deno.KvU64(100n),
  })
  .commit();
```

## `max`

The `max` operation atomically sets a key to the maximum of its current value and a given value. If the key does not exist, it is created with the given value. If the key already exists, its value is set to the maximum of its current value and the given value.

The `max` operation can only be performed as part of an atomic operation. The [`Deno.AtomicOperation.prototype.mutate({ type: "max", value })`][mutate] method can be used to add a max mutation to an atomic operation.

The max operation can only be performed on values of type `Deno.KvU64`. Both the operand and the value in the store must be of type `Deno.KvU64`.

Max operations are always performed in strong consistency mode.
```ts
await kv.atomic()
  .mutate({
    type: "max",
    key: ["accounts", "alex"],
    value: new Deno.KvU64(100n),
  })
  .commit();
```

## `watch`

The `watch` operation accepts an array of keys, and returns a [`ReadableStream`](https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream), which emits a new value whenever any of the watched keys change their `versionstamp`. The emitted value is an array of [Deno.KvEntryMaybe](https://docs.deno.com/api/deno/~/Deno.KvEntryMaybe) objects.

Note that the returned stream does not return every single intermediate state of the watched keys, but keeps you up to date with the latest state of keys. This means if a key is modified multiple times quickly, you may not receive a notification for every change, but only the latest state of the key.

```ts
const db = await Deno.openKv();

const stream = db.watch([["foo"], ["bar"]]);
for await (const entries of stream) {
  entries[0].key; // ["foo"]
  entries[0].value; // "bar"
  entries[0].versionstamp; // "00000000000000010000"
  entries[1].key; // ["bar"]
  entries[1].value; // null
  entries[1].versionstamp; // null
}
```

[get]: https://docs.deno.com/api/deno/~/Deno.Kv.prototype.get
[getMany]: https://docs.deno.com/api/deno/~/Deno.Kv.prototype.getMany
[list]: https://docs.deno.com/api/deno/~/Deno.Kv.prototype.list
[set]: https://docs.deno.com/api/deno/~/Deno.Kv.prototype.set
[delete]: https://docs.deno.com/api/deno/~/Deno.Kv.prototype.delete
[mutate]: https://docs.deno.com/api/deno/~/Deno.AtomicOperation.prototype.mutate

---

# Secondary Indexes

URL: https://docs.deno.com/deploy/kv/secondary_indexes

Key-value stores like Deno KV organize data as collections of key-value pairs, where each unique key is associated with a single value. This structure enables easy retrieval of values based on their keys but does not allow for querying based on the values themselves. To overcome this constraint, you can create secondary indexes, which store an additional key that lets you look up related data by an alternate attribute (for example, email → user). A best practice is to store a pointer to the primary key in the secondary index, rather than duplicating the full value.

:::tip Recommended approach for pointer indexes

Prefer storing the primary key (or a compact reference to it) as the value in a secondary index. This reduces storage usage and avoids keeping multiple copies of the same data in sync. The trade-off is a double read when querying through the index (index → primary).

Pros

- Lower storage and write amplification
- Fewer updates when non-indexed fields change
- Clearer transactional updates: update primary + index together

Cons

- Requires a second read to resolve the primary value
- You must maintain referential integrity atomically (create/update/delete in a single transaction)

:::

Maintaining consistency between primary and secondary keys is crucial when using secondary indexes. If a value is updated at the primary key without updating the secondary key, the data returned from a query targeting the secondary key will be incorrect. To ensure that primary and secondary keys always represent the same data, use atomic operations when inserting, updating, or deleting data. This approach ensures that the group of mutation actions are executed as a single unit, and either all succeed or all fail, preventing inconsistencies.

## Unique indexes (one-to-one)

In a unique index, each key in the index is associated with exactly one primary key.
For example, when storing user data and looking up users by both their unique IDs and email addresses, store user data under two separate keys: one for the primary key (user ID) and another for the secondary index (email → user ID). This setup allows querying users based on either their ID or their email. The secondary index can also enforce uniqueness constraints on values in the store. In the case of user data, use the index to ensure that each email address is associated with only one user.

To implement a unique secondary index for this example, follow these steps:

1. Create a `User` interface representing the data:

   ```ts
   interface User {
     id: string;
     name: string;
     email: string;
   }
   ```

2. Define an `insertUser` function that stores user data at the primary key and stores a pointer (the primary key) at the secondary key:

   ```ts
   async function insertUser(user: User) {
     const primaryKey = ["users", user.id] as const;
     const byEmailKey = ["users_by_email", user.email.toLowerCase()] as const;
     const res = await kv.atomic()
       .check({ key: primaryKey, versionstamp: null })
       .check({ key: byEmailKey, versionstamp: null })
       .set(primaryKey, user)
       .set(byEmailKey, user.id) // store a pointer, not the full user
       .commit();
     if (!res.ok) {
       throw new TypeError("User with ID or email already exists");
     }
   }
   ```

   > This function performs the insert using an atomic operation that checks
   > that no user with the same ID or email already exists. If either of these
   > constraints is violated, the insert fails and no data is modified.

3. Define a `getUser` function to retrieve a user by their ID:

   ```ts
   async function getUser(id: string): Promise<User | null> {
     const res = await kv.get<User>(["users", id]);
     return res.value;
   }
   ```

4. Define a `getUserByEmail` function to retrieve a user by their email address using a double lookup (email → user ID → user):

   ```ts
   async function getUserByEmail(email: string): Promise<User | null> {
     const idRes = await kv.get<string>([
       "users_by_email",
       email.toLowerCase(),
     ]);
     if (!idRes.value) return null;
     const res = await kv.get<User>(["users", idRes.value]);
     return res.value;
   }
   ```

   This function queries the store using the secondary key (`["users_by_email", email]`).

5. Define a `deleteUser` function to delete users by their ID, removing the index entry too:

   ```ts
   async function deleteUser(id: string) {
     let res = { ok: false };
     while (!res.ok) {
       const cur = await kv.get<User>(["users", id]);
       if (cur.value === null) return;
       res = await kv.atomic()
         .check(cur)
         .delete(["users", id])
         .delete(["users_by_email", cur.value.email.toLowerCase()])
         .commit();
     }
   }
   ```

   > This function first retrieves the user by their ID to get the user's email
   > address, which is needed to construct the key for the user's entry in the
   > secondary index. It then performs an atomic operation that checks that the
   > user in the database has not changed, and then deletes both the primary
   > and secondary key pointing to the user value. If this fails (because the
   > user was modified between the read and the delete), the atomic operation
   > aborts, and the entire procedure is retried until the delete succeeds. The
   > check is required to prevent race conditions where the value may have been
   > modified between the read and the delete. This race can occur if an update
   > changes the user's email, because the secondary index moves in this case:
   > the delete of the secondary index entry would then fail, because it
   > targets the old secondary index key.
## Non-Unique Indexes (One-to-Many)

Non-unique indexes are secondary indexes where a single key can be associated with multiple primary keys, allowing you to query for multiple items based on a shared attribute. For example, when querying users by their favorite color, implement this using a non-unique secondary index. The favorite color is a non-unique attribute since multiple users can have the same favorite color.

To implement a non-unique secondary index for this example, follow these steps:

1. Define the `User` interface:

   ```ts
   interface User {
     id: string;
     name: string;
     favoriteColor: string;
   }
   ```

2. Define the `insertUser` function (store the primary key as the value in the non-unique index; note that the composite key includes the user ID to avoid collisions):

   ```ts
   async function insertUser(user: User) {
     const primaryKey = ["users", user.id] as const;
     const byColorKey = [
       "users_by_favorite_color",
       user.favoriteColor,
       user.id,
     ] as const;
     await kv.atomic()
       .check({ key: primaryKey, versionstamp: null })
       .set(primaryKey, user)
       .set(byColorKey, user.id) // store a pointer, not the full user
       .commit();
   }
   ```

3. Define a function to retrieve users by their favorite color. This performs a double lookup per result (index → primary):

   ```ts
   async function getUsersByFavoriteColor(color: string): Promise<User[]> {
     const iter = kv.list<string>({
       prefix: ["users_by_favorite_color", color],
     });
     const ids: string[] = [];
     for await (const { value: id } of iter) {
       ids.push(id);
     }
     if (ids.length === 0) return [];
     const results = await kv.getMany(ids.map((id) => ["users", id]));
     return results
       .map((r) => r.value as User | null)
       .filter((v): v is User => v !== null);
   }
   ```

This example demonstrates the use of a non-unique secondary index, `users_by_favorite_color`, which allows querying users based on their favorite color. The index stores pointers (user IDs) and requires resolving to the primary key to read full values.

The primary difference between unique and non-unique indexes lies in the structure and organization of secondary keys. In unique indexes, each secondary key is associated with exactly one primary key, ensuring that the indexed attribute is unique across all records. In non-unique indexes, a single secondary key can be associated with multiple primary keys, as the indexed attribute may be shared among multiple records. To achieve this, non-unique secondary keys are typically structured with an additional unique identifier (e.g., the primary key) as part of the key, allowing multiple records with the same attribute to coexist without conflicts.

### When duplicating values may be acceptable

While pointer indexes are recommended, duplicating the full value in a secondary index can be acceptable when:

- The value is small and reads occur almost exclusively via the secondary index
- You want to avoid a second read and can tolerate the extra storage
- You can reliably keep the primary and secondary in sync via atomic transactions

If duplicating, ensure inserts/updates/deletes modify both keys in the same atomic transaction.

### Migration from duplicated-value indexes

To migrate existing duplicated-value indexes to pointer indexes (a sketch of the backfill step follows below):

1. Backfill: scan primary keys and set secondary index values to the primary key (e.g., user ID).
2. Cutover: update write paths to maintain pointer indexes; keep the old index temporarily for reads.
3. Cleanup: switch readers to the pointer index, then remove the duplicated index entries.
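A rough sketch of the backfill step, reusing the `User` interface and the key prefixes from the examples above (adjust the prefixes if your index uses different names):

```ts
const kv = await Deno.openKv();

// Step 1 (backfill): rewrite each index entry so that it stores the primary
// key's ID instead of a full copy of the user. `User` is the interface
// defined in the examples above.
for await (const entry of kv.list<User>({ prefix: ["users"] })) {
  const user = entry.value;
  await kv.atomic()
    .check(entry) // abort if the user changed since we read it
    .set(["users_by_favorite_color", user.favoriteColor, user.id], user.id)
    .commit();
  // In production you would retry commits that fail the check.
}
```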
---

# Transactions

URL: https://docs.deno.com/deploy/kv/transactions

The Deno KV store utilizes _optimistic concurrency control (OCC) transactions_ rather than the _interactive transactions_ found in many SQL systems like PostgreSQL or MySQL. This approach employs versionstamps, which represent the current version of a value for a given key, to manage concurrent access to shared resources without using locks. When a read operation occurs, the system returns a versionstamp for the associated key in addition to the value.

To execute a transaction, one performs an atomic operation that can consist of multiple mutation actions (like set or delete). Along with these actions, key+versionstamp pairs are provided as a condition for the transaction's success. The optimistic concurrency control transaction will only commit if the specified versionstamps match the current version for the values in the database for the corresponding keys. This transaction model ensures data consistency and integrity while allowing concurrent interactions within the Deno KV store.

Because OCC transactions are optimistic, they can fail on commit when the version constraints specified in the atomic operation are violated. This occurs when another agent updates a key used within the transaction between read and commit. When this happens, the agent performing the transaction must retry the transaction.

To illustrate how to use OCC transactions with Deno KV, this example shows how to implement a `transferFunds(sender: string, receiver: string, amount: number)` function for an account ledger. The account ledger stores the balance for each account in the key-value store. The keys are prefixed by `"account"`, followed by the account identifier: `["account", "alice"]`. The value stored for each key is a number that represents the account balance.

Here's a step-by-step example of implementing this `transferFunds` function:

```ts
async function transferFunds(sender: string, receiver: string, amount: number) {
  if (amount <= 0) throw new Error("Amount must be positive");

  // Construct the KV keys for the sender and receiver accounts.
  const senderKey = ["account", sender];
  const receiverKey = ["account", receiver];

  // Retry the transaction until it succeeds.
  let res = { ok: false };
  while (!res.ok) {
    // Read the current balance of both accounts.
    const [senderRes, receiverRes] = await kv.getMany<[number, number]>([
      senderKey,
      receiverKey,
    ]);
    if (senderRes.value === null) {
      throw new Error(`Account ${sender} not found`);
    }
    if (receiverRes.value === null) {
      throw new Error(`Account ${receiver} not found`);
    }

    const senderBalance = senderRes.value;
    const receiverBalance = receiverRes.value;

    // Ensure the sender has a sufficient balance to complete the transfer.
    if (senderBalance < amount) {
      throw new Error(
        `Insufficient funds to transfer ${amount} from ${sender}`,
      );
    }

    // Perform the transfer.
    const newSenderBalance = senderBalance - amount;
    const newReceiverBalance = receiverBalance + amount;

    // Attempt to commit the transaction. `res` returns an object with
    // `ok: false` if the transaction fails to commit due to a check failure
    // (i.e. the versionstamp for a key has changed)
    res = await kv.atomic()
      .check(senderRes) // Ensure the sender's balance hasn't changed.
      .check(receiverRes) // Ensure the receiver's balance hasn't changed.
      .set(senderKey, newSenderBalance) // Update the sender's balance.
      .set(receiverKey, newReceiverBalance) // Update the receiver's balance.
      .commit();
  }
}
```

In this example, the `transferFunds` function reads the balances and versionstamps of both accounts, calculates the new balances after the transfer, and checks whether the sender has sufficient funds. It then performs an atomic operation, setting the new balances with the versionstamp constraints. If the transaction is successful, the loop exits. If the version constraints are violated, the transaction fails, and the loop retries the transaction until it succeeds.

## Limits

In addition to a max key size of 2 KiB and max value size of 64 KiB, there are certain limits with the Deno KV transaction API:

- **Max keys per `kv.getMany()`**: 10
- **Max batch size per `kv.list()`**: 1000
- **Max checks in an atomic operation**: 100
- **Max mutations in an atomic operation**: 1000
- **Max total size of an atomic operation**: 800 KiB. This includes all keys and values in checks and mutations, and encoding overhead counts toward this limit as well.
- **Max total size of keys**: 90 KiB. This includes all keys in checks and mutations, and encoding overhead counts toward this limit as well.
- **Max watched keys per `kv.watch()`**: 10
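For example, since `kv.getMany()` accepts at most 10 keys per call, reading a larger set of keys means issuing the call in chunks. A minimal sketch:

```ts
const kv = await Deno.openKv();

// Read an arbitrary number of keys by issuing getMany() in chunks of 10,
// the per-call maximum. Note that each chunk is a separate snapshot read;
// consistency is only guaranteed within a single getMany() call.
async function getManyChunked(keys: Deno.KvKey[]) {
  const entries: Deno.KvEntryMaybe<unknown>[] = [];
  for (let i = 0; i < keys.length; i += 10) {
    entries.push(...await kv.getMany(keys.slice(i, i + 10)));
  }
  return entries;
}
```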
---

# Pricing and limitations

> Important limitations, service level expectations, and terms of use for Deno Deploy.

URL: https://docs.deno.com/deploy/pricing_and_limits

Please see [our pricing page](https://deno.com/deploy/pricing) for an overview of the features available in each plan. If you have a use case that exceeds any of these limits, [please reach out](mailto:deploy@deno.com).

No uptime guarantees are provided during the initial public beta for Deno Deploy. Access to the service will be controlled by [our acceptable use policy](/deploy/acceptable_use_policy). Any user we deem to be in violation of this policy runs the risk of having their account terminated.

## Maximum size for deployments

When uploading assets to a deployment, the total size of all files within the deployment (source files and static files) **should not exceed 1 gigabyte**.

## Memory allocation

Applications have a maximum memory allocation of 512 MB.

## Upload request limits

We do not set a limit on the number of upload requests your application may handle, as long as your application is within [our acceptable use policy](/deploy/acceptable_use_policy).

## TLS proxying

TLS termination is required for outgoing connections to port 443 (the port used for HTTPS). Using [Deno.connect](https://docs.deno.com/api/deno/~/Deno.connect) to connect to these ports is prohibited. If you need to establish a TLS connection to port 443, please use [Deno.connectTls](https://docs.deno.com/api/deno/~/Deno.connectTls) instead. `fetch` is not impacted by this restriction.

This restriction is in place because connecting to port 443 without terminating TLS is frequently used in TLS-over-TLS proxies, which are prohibited on Deno Deploy Classic as per [our acceptable use policy](/deploy/acceptable_use_policy).
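For example, a permitted TLS connection to port 443 looks like this (`example.com` is a placeholder hostname):

```ts
// Allowed: TLS is terminated by Deno itself.
const conn = await Deno.connectTls({ hostname: "example.com", port: 443 });
conn.close();

// Prohibited on Deploy: a plain TCP connection to port 443.
// const raw = await Deno.connect({ hostname: "example.com", port: 443 });
```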
---

# Privacy Policy

> Deno's Privacy Policy

URL: https://docs.deno.com/deploy/privacy_policy

**DENO PRIVACY POLICY**

09 September 2024

Deno Land Inc. (“Deno,” “we,” “us,” or “our”) collects and uses personal information in order to provide its products and services to you. This Privacy Policy (the “Policy”) describes the personal information we collect, the purposes for which we use it, the parties with whom we may share it, and your choices with respect to such information.

For purposes of this Privacy Policy, “personal information” means any information that relates to you as an individual and could reasonably be used to identify you. This Privacy Policy applies to our collection and use of personal information through (i) our website at [https://deno.com](https://deno.com) (the “Site”); (ii) any websites, applications or other digital properties that link to this Privacy Policy; and (iii) the products and services (the “Deno Offerings”) we offer to you on our proprietary platform (the “Platform”) via the following websites:

- Deno Deploy ([https://deno.com/deploy](https://deno.com/deploy))
- Deno Deploy Classic ([https://deno.com/deploy/classic](https://deno.com/deploy/classic))
- Deno Subhosting ([https://deno.com/subhosting](https://deno.com/subhosting))

By accessing or using the Site or any other digital property that links to this Privacy Policy, you may learn about Deno and our technology platform, and registered customers may also access the Deno Offerings (collectively, the “Services”). To the extent permitted by applicable law, your use of Deno’s products and services constitutes your acknowledgment and/or consent to the practices described in this Policy.

This Privacy Policy incorporates [Deno’s Terms and Conditions](https://docs.deno.com/deploy/terms_and_conditions/) (the “Terms”). Capitalized terms that are not defined in the Privacy Policy have the meaning given to them in the Terms.

**I. The Information We Collect, And How We Collect It**

We collect the following categories of information, which may include personal information (collectively, the “**Information**”).

**1\. Information You Provide To Us**

We collect information from and about you directly when you provide it to us. This information may be collected when you contact us, fill out a form, create an account, subscribe to our blog, access or participate on our Sites, respond to surveys, or otherwise interact with us. This information may include:

_Contact Information._ We collect your contact information when you voluntarily provide it to us. For example, you may disclose contact information to us via the “Contact” link on our Sites, submit information by mail, telephone, in person or electronically, when signing up for our newsletters and other marketing communications, or when you register to attend an event or program. Contact Information typically includes first name, last name, e-mail address, postal address, organization, telephone number and other information that identifies you or can be used to identify or contact you.

_Account Credentials_. When you register to create an account with us, we will collect certain additional personal information, including your name, email address, and potentially other information such as your GitHub user name and public GitHub profile.
In addition to Contact Information and Account Credentials, we may collect other kinds of information, such as: - Comments, questions, and requests you may make; - Information about your preferences, such as your preferred methods of communication and the types of information in which you are interested; - Event and service-related information (such as information required for registration, access to premises or online resources, dietary restrictions, and areas of interest); - Audio and visual information, such as photographs, video and voice recordings (e.g., from events you attended with us), or security camera recordings if you visit our premises; - Details of downloads from our Sites; - Records and copies of your correspondence (including email addresses and phone numbers), if you contact us; and - Any other information you voluntarily provide. **2\. Information Obtained From Third Parties** We may receive certain information about you from other sources, including publicly available sources (such as public records and social media platforms), as well as our service providers and marketing partners. When we collect personal information from users and visitors of other sites on which you have interacted with us, we will do so in accordance with the terms of use and privacy policies of those sites and applicable law. We may also receive personal information when you comment on our social media advertisements, post comments about us, or tag us in a public-facing social media post. Personal information may also be collected by the third-party social media sites that host our social media pages. These sites may provide aggregate information and analysis to us about visitors’ use of our social media pages. This allows us to better understand and analyze our user growth, general demographic information about the users of these pages, and interaction with the content that we post. Overall, this information may be used to help us understand the types of visitors and users of our social media pages and use of the content. This Privacy Policy does not cover personal information collected by such third-party social media sites. For more information on their privacy and security practices please review the privacy policies and terms of use on their respective websites. **3\. Information Collected Automatically** We and our service providers may automatically obtain certain information about you, your electronic device, and your interactions with us, including the following: - _Device data_. We may collect data such as the type of device and its operating system and settings, browser type, mobile device carrier, country, IP address, and unique identifiers. - _Internet and other electronic activity data_. This includes information about your interaction with our Sites, emails, and other online content. - _Tracking Data_. We may collect tracking data using first and third-party cookies, pixels, web server logs, web beacons, and similar data collection and tracking technologies on the Sites, third party websites, apps and online services, and across your devices (such as IP address, browser type, ISP, platform type, device type). Third parties such as advertising networks and analytics providers may also collect information about your online activities over time and across different websites and devices when you access or use the Sites. **II.
How We Use And Share Your Information** Deno uses the Information for the purpose for which it was collected and in a manner that is consistent with this Privacy Policy. These functions include operation, maintenance and improvements to the Sites, providing our products and services, solicitation of your feedback, gaining a better understanding of our customers and visitors of our Sites, responding to your requests and questions, hosting events, and informing you about our organization, products, services, events, and other areas of interest. _Analytics Services_. We may use third-party web analytics services, such as Google Analytics, to help us understand and analyze how Site visitors use our services. For more information on how Google Analytics uses data collected through our Sites, visit [www.google.com/policies/privacy/partners](http://www.google.com/policies/privacy/partners). _Aggregated Data_. We may analyze your personal information in aggregate form which does not identify you personally (“**Aggregated Data**”). The Aggregated Data may be used to operate, maintain, manage, and improve the Sites, shared with our affiliates, agents, and business partners, and otherwise used and disclosed for lawful business purposes. We do not re-identify de-identified or aggregated information. _Service Providers/Vendors_. Like many businesses, we hire other companies to perform certain business-related services. We may disclose personal information to certain types of third party companies but only to the extent needed to enable them to provide such services, for example web hosting, disaster recovery, client survey and marketing, and data storage. _Reorganization_. If, in the future, Deno undergoes a corporate, partnership, or business reorganization, we may transfer the Information, including personal information, to the new or surviving entity.  _Protection of Rights and Compliance_. We may use your Information to protect the rights, privacy or safety of you, us or others; to ensure our compliance with legal and contractual requirements; and to prevent and investigate illegal, unethical, or unauthorized activities (including cyberattacks and identity theft). If Deno intends on using or disclosing your personal information in any manner that is not consistent with this Privacy Policy, you will be informed of such anticipated use prior to or at the time at which the personal information is collected. **III. How We Protect Your Information** We take commercially reasonable steps to protect your personal information from loss, misuse, and unauthorized access, disclosure, alteration, or destruction. Please understand, however, that no security system is impenetrable. We cannot guarantee the security of our databases, nor can we guarantee that the personal information that you supply will not be intercepted while being transmitted to and from us over the Internet. **IV. Data Retention** Deno determines the retention period for all Information based on the purposes for which we collect and/or receive the Information and/or tax, legal and regulatory requirements. In addition to this, we may consider other factors, such as the nature and sensitivity of the data, and whether we can achieve the purpose for which we collected the data through other means. **V. Your Privacy Choices** **1\. Your Information** You may request access to, correction of, or deletion of the personal information we maintain about you, and we will endeavor to respond promptly to your request. 
In order to make such a request, please contact us as indicated below. **2\. Marketing Communications** You may opt out of marketing-related emails by clicking on the “unsubscribe” link located on the bottom of any marketing email or emailing us at [support@deno.com](mailto:support@deno.com). We will use commercially reasonable efforts to process such requests in a timely manner. Please note that even if you opt out of marketing-related emails, you will continue to receive service-related and other non-marketing emails. **3\. Tracking Technology** You can choose not to permit tracking technologies, such as cookies and web beacons, when you use our services, but blocking some types of these tracking technologies may interfere with your experience. _Browser-Based Opt-Outs_. You may be able to disable tracking technologies using your web browser settings. Please review your browser’s instructions or visit [All About Cookies](https://allaboutcookies.org/) for general information. Note that your web browser may have settings that allow you to transmit a “Do Not Track” signal when you use online services. Like many websites, our Sites are not currently designed to respond to “Do Not Track” signals received from browsers. _Self-Regulatory Program Opt-Outs_. Two self-regulatory programs are available to help you control the use of tracking technologies on your browsers — the [Digital Advertising Alliance](https://digitaladvertisingalliance.org/) and the [Network Advertising Initiative](https://thenai.org/). Both programs help to regulate vendors in the digital advertising space. One function of their self-regulatory programs is to give you the ability to opt out of targeted (or interest-based) advertising, including the use of tracking technologies, from their member companies. You can visit the Digital Advertising Alliance’s Your Ad Choices website to opt out of targeted advertising for participating vendors. The Network Advertising Initiative similarly assists with opt outs through their Opt Out of Interest-Based Advertising webpage. _Google Analytics Opt-Out._ To opt out of Google Analytics cookies, visit Google’s [My Ad Center](https://myadcenter.google.com/personalizationoff) and/or download the Google Analytics Opt-Out Browser Add-On. **VI. Children** We do not knowingly collect personal information from children under the age of 18 through the Sites. If you are under 18, please do not give us any personal information. We encourage parents and legal guardians to monitor their children’s Internet usage and to help enforce our Privacy Policy by instructing their children never to provide personal information through the Sites without their permission. If you have reason to believe that a child under the age of 18 has provided personal information to us, please contact us at [support@deno.com](mailto:support@deno.com), and we will endeavor to delete that information from our databases. **VII. External Websites** The Sites may contain links to third-party websites. These third-party sites may collect information about you if you click on a link. We have no control over the privacy practices or the content of these websites. As such, we are not responsible for the content or the privacy policies of those third-party websites. You should check the applicable third-party privacy policy and terms of use when visiting any other websites. **VIII. Important Notice To Non-U.S. Residents** The Sites are hosted in and provided from the United States and other countries.
If you are located outside of the United States, please be aware that any information you provide to us may be transferred to the United States or other countries where the privacy laws may not be as protective as those in your country of origin. If you are located outside the United States and choose to use the Sites, you consent to any transfer and processing of your personal information in accordance with this Privacy Policy, and you do so at your own risk. **IX. Notice To California Residents** Pursuant to Section 1798.83 of the California Civil Code, residents of California have the right to obtain certain information about the types of personal information that companies with whom they have an established business relationship (and that are not otherwise exempt) have shared with third parties for direct marketing purposes during the preceding calendar year, including the names and addresses of those third parties, and examples of the types of services or products marketed by those third parties. In order to submit such a request, please contact us using the contact information provided at the end of this document. Please note, however, that we do not share, nor have we shared in the past, personal information with third parties for direct marketing purposes. **X. Notice To Nevada Residents** If you are a resident of Nevada, you have the right to opt out of the sale of personal information to third parties. You can exercise this right by contacting us at [support@deno.com](mailto:support@deno.com) with the subject line “Nevada Do Not Sell Request” and providing us with your name and email address. Please note, however, that we do not sell any personal information to third parties. **XI. Changes To This Privacy Policy** This Privacy Policy is effective as of the date stated at the top of this Privacy Policy. We may change this Privacy Policy from time to time. Any such changes will be posted on the Sites. By accessing the Sites after we make any such changes to this Privacy Policy, you are deemed to have accepted such changes. Please be aware that, to the extent permitted by applicable law, our use of the Information is governed by the Privacy Policy in effect at the time we collect the Information. Please refer back to this Privacy Policy on a regular basis. **XII. How To Contact Us** Please reach out to [support@deno.com](mailto:support@deno.com) for any questions, complaints, or requests regarding this Privacy Policy, and include in the subject line “Privacy Policy”, or contact us by mail at: Deno Land Inc.\ 1111 6th Ave Ste 550\ PMB 702973\ San Diego, CA 92101\ USA **© 2024 Deno Land Inc. All rights reserved.** --- # deploy/reference/accounts.md > Information about user accounts, authentication via GitHub, and managing your profile in Deno Deploy. URL: https://docs.deno.com/deploy/reference/accounts Deno Deploy accounts are linked to GitHub accounts. You can only sign in to Deno Deploy with GitHub authentication. Your primary contact email address and name are synced from GitHub. Both your username and email address update on every sign-in. After changing your email, login, or name on GitHub, sign in again to see these changes reflected in the Deno Deploy dashboard. --- # deploy/reference/apps.md > Guide to managing applications in Deno Deploy, including app creation, configuration, GitHub integration, and deployment options. URL: https://docs.deno.com/deploy/reference/apps Applications are web services that serve traffic within an organization.
Each application contains a history of revisions (previous versions), typically corresponding to Git commits when using the GitHub integration. Applications are identified by a slug, which must be unique within the organization and is used in default domain names. ## Creating an application To create an application: 1. Click the "+ Create App" button on the organization page 2. Select the GitHub repository to deploy from 3. Configure the app slug (name) 4. Set up build configuration 5. Add any required environment variables > ⚠️ Currently, applications must be linked to a GitHub repository during > creation. The build configuration determines how the application is built during the deployment process. Builds are automatically triggered on each push to the linked repository or when manually clicking "Deploy Default Branch". For detailed build configuration information, see the [Builds documentation](/deploy/reference/builds/). You can add environment variables during app creation by clicking "Edit Environment Variables". For more details on environment variables, see the [Environment Variables and Contexts](/deploy/reference/env_vars_and_contexts/) documentation. ## Renaming an application Applications can be renamed by editing the app slug on the app settings page. This will update the default domain names associated with the app since they are based on the app slug. The new slug must be unique within the organization (i.e. must not be in use by another app or playground in the same organization). :::warning Any previous `deno.net` URLs pointing to the app will no longer work after renaming. Custom domains will continue to work, as they are not tied to the app slug. ::: ## Deleting an application Applications can be deleted from the app settings page. This will remove the app and all its revisions from the organization. All existing deployments will immediately stop serving traffic, and all custom domain associations will be removed. The app and its revisions will no longer be accessible after deletion, and no traffic will be served from it. Deleted apps cannot be restored through the Deno Deploy UI. :::info Deleted an app by mistake? Contact Deno support within 30 days to restore it. ::: ## Limitations > ⚠️ Apps cannot currently be transferred to another organization. ## GitHub integration The GitHub integration enables automatic deployments of the app from a GitHub repository. Every push to the repository will trigger a new build of the app. Depending on the branch of the commit, the build will be deployed to different [timelines](/deploy/reference/timelines/). Apps are linked to a GitHub repository during creation. However, it is possible to unlink the repository after creation, and optionally link it to a new GitHub repository. This can be done from the app settings page. Only accounts that have been authorized with the Deno Deploy GitHub app will be visible in the GitHub repository dropdown. You can authorize new organizations or repositories by clicking the "+ Add another GitHub account" button in the user or organization dropdown, or the "Configure GitHub app permissions" button in the repository dropdown. This will redirect you to GitHub to authorize the Deno Deploy GitHub app with the selected GitHub account or organization. After authorization, you will be redirected back to the app settings page, where you can select the newly authorized GitHub repository. 
### GitHub events integration Whenever Deno Deploy builds an app from a GitHub repository, it will send a [`repository_dispatch`](https://docs.github.com/en/actions/reference/workflows-and-actions/events-that-trigger-workflows#repository_dispatch) event to the repository at the start and end of the build. This allows you to trigger GitHub Actions workflows based on the build status. Deno Deploy will send the following events:

| Event Name                    | Description                                                                 |
| ----------------------------- | --------------------------------------------------------------------------- |
| `deno_deploy.build.enqueued`  | Sent when a build is enqueued, i.e. when a push is made to the repository.  |
| `deno_deploy.build.cancelled` | Sent when a build is cancelled, either manually or due to a timeout.        |
| `deno_deploy.build.failed`    | Sent when a build fails.                                                     |
| `deno_deploy.build.routed`    | Sent when a build completes successfully, and traffic is routed to it.      |

The payload of the event matches the following TypeScript type definition:

```ts
interface DenoDeployBuildEventPayload {
  app: {
    /** The UUID of the Deno Deploy app. */
    id: string;
    /** The slug (name) of the Deno Deploy app. */
    slug: string;
  };
  organization: {
    /** The UUID of the Deno Deploy organization containing the app. */
    id: string;
    /** The slug (name) of the Deno Deploy organization containing the app. */
    slug: string;
  };
  revision: {
    /** The ID of the revision being built. */
    id: string;
    /** A URL to view the revision and build status in the Deno Deploy dashboard. */
    html_url: string;
    /** The Git commit SHA being built. */
    git: { sha: string };
    /** The preview URL the revision is available at, if the build succeeded. */
    preview_url: string | null;
  };
}
```

You can receive these events in a GitHub Actions workflow by adding a `repository_dispatch` trigger. For example:

```yaml
on:
  repository_dispatch:
    types: [deno_deploy.build.routed] # Listen for successful builds

jobs:
  notify:
    runs-on: ubuntu-latest
    steps:
      - name: Test the preview_url
        run: |
          echo "The Deno Deploy app is available at ${{ github.event.client_payload.revision.preview_url }}"
          curl -I ${{ github.event.client_payload.revision.preview_url }}
```

--- # deploy/reference/builds.md > Detailed explanation of the build process in Deno Deploy, covering build triggers, stages, configuration options, caching, and the build environment. URL: https://docs.deno.com/deploy/reference/builds In Deno Deploy, each version of your application code is represented as a revision (or build). When deploying from GitHub, revisions generally map one-to-one to git commits in your repository. ## Build triggers Builds can be triggered in two ways: - **Manually**: Using the "Deploy Default Branch" button on the builds page, which deploys the default git branch (usually `main`). The dropdown menu lets you select a different branch. - **Automatically**: When a new commit is pushed to a GitHub repository linked to your app. ## Build stages A revision goes through these stages before becoming available: 1. **Queuing**: The revision waits to be assigned to a builder. 2. **Preparing**: A builder downloads the source code and restores any available build caches. 3. **Install**: The install command executes (if specified), typically downloading dependencies. 4. **Build**: The build command executes (if specified), creating a build artifact that is uploaded to the runtime infrastructure. 5. **Warm up**: A `GET /` request tests that the application boots correctly and can handle HTTP requests. 6.
**Route**: The global infrastructure is configured to route requests to the new revision based on its timelines. If any step fails, the build enters a "Failed" state and does not receive traffic. Build logs are streamed live to the dashboard during the build process and remain available on the build page after completion. Build caching speeds up builds by reusing files that haven't changed between builds. This happens automatically for framework presets and the `DENO_DIR` dependency cache. You can cancel a running build using the "Cancel" button in the top-right corner of the build page. Builds automatically cancel after running for 5 minutes. ## Build configuration Build configuration defines how to convert source code into a deployable artifact. You can modify build configuration in three places: - During app creation by clicking "Edit build config" - In app settings by clicking "Edit" in the build configuration section - In the retry drawer on a failed build's page When creating an app, build configuration may be automatically detected from your repository if you're using a recognized framework or common build setup. ### Configuration options - **Framework preset**: Optimized configuration for supported frameworks like Next.js or Fresh. [Learn more about framework integrations](./frameworks/). - **Install command**: Shell command for installing dependencies, such as `npm install` or `deno install`. - **Build command**: Shell command for building the project, often a task from `package.json` or `deno.json`, such as `deno task build` or `npm run build`. - **Runtime configuration**: Determines how the application serves traffic: - **Dynamic**: For applications that respond to requests using a server (API servers, server-rendered websites, etc.) - **Entrypoint**: The JavaScript or TypeScript file to execute - **Arguments** (optional): Command-line arguments to pass to the application - **Static**: For static websites serving pre-rendered content - **Directory**: Folder containing static assets (e.g., `dist`, `.output`) - **Single page app mode** (optional): Serves `index.html` for paths that don't match static files instead of returning 404 errors ## Build environment The build environment runs on Linux using either x64 or ARM64 architecture. Available tools include: - `deno` (same version as at runtime) - `node` - `npm` - `npx` - `yarn` (v1) - `pnpm` - `git` - `tar` - `gzip` :::info All JavaScript inside of the builder is executed using Deno. The `node` command is actually a shim that translates Node.js invocations to `deno run`. Similarly, `npm`, `npx`, `yarn`, and `pnpm` run through Deno rather than Node.js. ::: Environment variables configured for the "Build" context are available during builds, but variables from "Production" or "Development" contexts are not. [Learn more about environment variables](/deploy/reference/env_vars_and_contexts/). Builders have 8 GB of storage available during the build process. --- # deploy/reference/button.md > Help users quickly and easily clone code and deploy it to Deno Deploy with the click of a button URL: https://docs.deno.com/deploy/reference/button The Deploy Button offers a shortcut for users to create and deploy a new application on Deno Deploy based on existing code hosted in a Git repository. It provides a link directly into the Deno Deploy application creation flow, and populates settings in the creation flow based on provided query parameters or values found in the `deno.json` file of the specified source. 
The specified repository will be cloned to the user’s GitHub account and set as the source for a new project. By default, the new repository will be public, but can be set to be private if required. ## Example The deploy button below demonstrates the creation of a new application based on a simple starter project: [![Deploy on Deno](https://deno.com/button)](https://console.deno.com/new?clone=https://github.com/denoland/examples&path=hello-world) ## Create and deploy a new application Use the code below to add a button that creates and deploys a new application: **Markdown**

```md
[![Deploy on Deno](https://deno.com/button)](https://console.deno.com/new?clone=REPOSITORY_URL)
```

**HTML**

```html
<a href="https://console.deno.com/new?clone=REPOSITORY_URL">
  <img src="https://deno.com/button" alt="Deploy on Deno" />
</a>
```

**URL**

```
https://console.deno.com/new?clone=REPOSITORY_URL
```

### Parameters The following query parameters can be used to configure a Deploy Button: - `clone` — (required) The URL of the source repo to clone as a new repo which will then be deployed - `path` — (optional) The path within the source repo to clone from. Providing this will create a new repo whose root is this directory from within the source repository. - `install` — (optional) The command to execute prior to a build in order to install dependencies - `build` — (optional) The command to execute to build the application --- # deploy/reference/caching.md > Overview of CDN caching functionality in Deno Deploy, including cache configuration, directives, and best practices. URL: https://docs.deno.com/deploy/reference/caching Deno Deploy includes a built-in CDN that can cache responses from your application. This improves performance for: - Static assets (images, CSS, JavaScript files) - API responses and server-rendered pages that don't change frequently Caching is enabled by default for all applications, but only responses with appropriate caching headers are actually cached. Deno Deploy integrates with popular frameworks like Next.js to automatically optimize caching for features such as Incremental Static Regeneration (ISR). The CDN cache is tied to both the revision and context. When you deploy a new revision, the cache is automatically invalidated, ensuring users always see the latest version of your application. Note that browser caching may still serve older content if the `Cache-Control` header permits it. ## Caching a resource To cache a resource, set the `Cache-Control` header in your response. This standard HTTP header tells browsers and the CDN how to cache your content. ### Supported caching directives Deno Deploy supports these caching directives:

| Directive                | Description                                                                                                                                                              |
| ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `max-age`                | Maximum time (in seconds) the response is considered fresh by both CDN and browsers. After this time, the response is considered stale and revalidated with the server.  |
| `s-maxage`               | Maximum time (in seconds) the response is considered fresh by shared caches (CDNs only, not browsers). After this time, the response is revalidated with the server.     |
| `stale-while-revalidate` | Maximum time (in seconds) a stale response can be served while a fresh one is fetched in the background.                                                                  |
| `stale-if-error`         | Maximum time (in seconds) a stale response can be served if the server returns an error.                                                                                  |
| `immutable`              | Indicates the response will never change, allowing indefinite caching. Ideal for content-hashed static assets.                                                            |
| `no-store`               | Prevents caching of the response. Use for dynamic content that should never be cached.                                                                                    |
| `no-cache`               | Requires revalidation with the server before serving from cache. Use for content that changes frequently but can benefit from conditional requests.                      |

### Additional caching headers - `Vary`: Specifies which request headers should be included in the cache key, creating separate cached versions based on those headers. - `Expires`: Sets an absolute expiration date for the response (alternative to `max-age`). --- # deploy/reference/cloud_connections.md URL: https://docs.deno.com/deploy/reference/cloud_connections Deno Deploy allows you to connect to cloud providers like AWS and Google Cloud Platform (GCP) without needing to manually manage static credentials. This is done through the use of OpenID Connect (OIDC) and identity federation. ## How it works Deno Deploy is an OIDC provider. Every running application on Deno Deploy can be issued short-lived JWT tokens that are signed by Deno Deploy. These tokens contain information about the application, such as the organization and application IDs and slugs, the context in which an application is executing, and the running revision ID. Learn more about [OIDC in Deno Deploy](/deploy/reference/oidc). By sending these tokens to AWS or GCP, one can exchange them for short-lived AWS or GCP credentials that can be used to access cloud resources such as AWS S3 buckets or Google Cloud Spanner instances. When sending the token to AWS or GCP, the token is verified by the cloud provider, which checks that it was issued by Deno Deploy and that it is valid for the application and context that should be allowed to access the cloud resources. To enable AWS or GCP to exchange OIDC tokens for credentials, the cloud provider needs to be configured to trust Deno Deploy as an OIDC identity provider, and an AWS IAM role or GCP service account needs to be created that allows the exchange of tokens for credentials, for a specific Deno Deploy application. ## Setting up AWS This section contains four guides for setting up these AWS resources; you can use any one of them. - [Using the `deno deploy setup-aws` command from your local machine](#aws%3A-easy-setup-with-deno-deploy-setup-aws) (recommended) - [Using the `aws` CLI](#setup-aws-cli) - [Using the AWS Console](#setup-aws-console) - [Using Terraform](#setup-aws-terraform) To set up AWS with Deno Deploy, the following resources need to be created inside of your AWS account: - An [AWS IAM OIDC Identity Provider](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html) that trusts Deno Deploy as an OIDC provider. - The OIDC provider URL is `https://oidc.deno.com`. - The audience (client ID) is `sts.amazonaws.com`.
- An [AWS IAM Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-idp_oidc.html) that can be "assumed" (signed into) using a Deno Deploy OIDC token. - The trust policy of the role should allow the OIDC provider to assume the role, such as:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<account-id>:oidc-provider/oidc.deno.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.deno.com:aud": "sts.amazonaws.com",
          "oidc.deno.com:sub": "deployment:<org>/<app>/<context>"
        }
      }
    }
  ]
}
```

- The role should have permissions to access the AWS resources you want to use, such as S3 buckets or DynamoDB tables. After setting up the AWS resources, navigate to the AWS cloud integration setup page from the app settings. There you must select the context(s) in which the cloud connection should be available. Then you must enter the ARN (Amazon Resource Name) for the AWS IAM Role created earlier. After entering the ARN you can start a connection test by pressing the "Test connection" button. The connection test will check that the AWS IAM Role and OIDC provider are configured correctly, and does not allow access from apps, orgs, or contexts that should not have access. After testing the connection, you can save the cloud connection. ### Usage After setting up a cloud connection between AWS and Deno Deploy, you can access AWS resources such as S3 directly from your application code, without having to configure any credentials. The AWS SDK v3 automatically picks up on the cloud connection configuration. Here is an example of accessing an S3 bucket from a Deno Deploy application with a configured AWS account.

```ts
import { ListBucketsCommand, S3Client } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-west-2" });

Deno.serve(async () => {
  // Credentials are supplied automatically through the cloud connection.
  const { Buckets } = await s3.send(new ListBucketsCommand({}));
  return Response.json(Buckets);
});
```

## Setting up GCP To set up GCP with Deno Deploy, the following resources need to be created inside of your GCP account: - A [Workload Identity Pool and Workload Identity Provider](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers) that trusts Deno Deploy as an OIDC provider. - The OIDC provider URL is `https://oidc.deno.com`. - The audience should be the default (starts with `https://iam.googleapis.com`). - At least the following attribute mappings must be set: - `google.subject = assertion.sub` - `attribute.full_slug = assertion.org_slug + "/" + assertion.app_slug` - A [Service account](https://cloud.google.com/iam/docs/service-accounts-create) that can be "impersonated" (signed into) using the OIDC token. - A principal or principal set from the workload identity pool should have access to the service account using the Workload Identity User role (`roles/iam.workloadIdentityUser`). Examples: - A specific context in an app: `principal://iam.googleapis.com/projects/<project-number>/locations/global/workloadIdentityPools/oidc-deno-com/subject/deployment:<org>/<app>/<context>` - All contexts in an app: `principalSet://iam.googleapis.com/projects/<project-number>/locations/global/workloadIdentityPools/oidc-deno-com/attribute.full_slug/<org>/<app>` - The service account should have access to the GCP resources you want to use, such as a Google Cloud Storage bucket. This section contains four guides for setting up these GCP resources; you can use any one of them.
- [Using the `deno deploy setup-gcp` command from your local machine](#setup-gcp-easy) (recommended) - [Using the `gcloud` CLI](#setup-gcp-cli) - [Using the GCP Console](#setup-gcp-console) - [Using Terraform](#setup-gcp-terraform) After setting up the GCP resources, navigate to the GCP cloud integration setup page from the app settings. There you must select the context(s) in which the cloud connection should be available. Then you must enter the workload identity provider ID, in the form `projects/<project-number>/locations/global/workloadIdentityPools/oidc-deno-com/providers/oidc-deno-com`, and the email address of the GCP Service Account created earlier. After entering the email address you can start a connection test by pressing the "Test connection" button. The connection test will check that the GCP Service Account and OIDC provider are configured correctly, and does not allow access from apps, orgs, or contexts that should not have access. After testing the connection, you can save the cloud connection. ### Usage After setting up a cloud connection between GCP and Deno Deploy, you can access GCP resources such as Cloud Storage directly from your application code, without having to configure any credentials. The Google Cloud SDK automatically picks up on the cloud connection configuration. Here is an example of accessing a Cloud Storage bucket from a Deno Deploy application with a configured GCP account.

```ts
import { Storage } from "@google-cloud/storage";

const storage = new Storage();

Deno.serve(async () => {
  // Credentials are supplied automatically through the cloud connection.
  const [buckets] = await storage.getBuckets();
  return Response.json(buckets);
});
```

## Removing a cloud integration You can remove a cloud connection by pressing the "Delete" button in the cloud integration section, next to a specific cloud connection. ## Setup Guides ### AWS: Easy setup with `deno deploy setup-aws` For instructions on how to set up AWS with Deno Deploy using the `deno deploy setup-aws` command, please see the instructions on the AWS cloud integration setup page in your app settings. ### AWS: Using the `aws` CLI You can manually set up AWS resources using the AWS CLI. This requires having the AWS CLI installed and configured with appropriate permissions to create IAM roles, OIDC providers, and attach policies. #### Prerequisites - AWS CLI installed and configured - Permissions to create IAM roles, OIDC providers, and attach policies #### Step 1: Create OIDC Provider First, create the OIDC provider if it doesn't already exist:

```bash
aws iam create-open-id-connect-provider \
  --url https://oidc.deno.com \
  --client-id-list sts.amazonaws.com
```

#### Step 2: Create IAM Role with Trust Policy Create a trust policy file that allows your Deno Deploy application to assume the role. You can choose between allowing access to all contexts or specific contexts only.
**For all contexts in your app:** ```bash # Create trust policy file for entire app cat > trust-policy-all-contexts.json << EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::YOUR_ACCOUNT_ID:oidc-provider/oidc.deno.com" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringLike": { "oidc.deno.com:sub": "deployment:YOUR_ORG/YOUR_APP/*" } } } ] } EOF ``` **For specific contexts only:** ```bash # Create trust policy file for specific contexts cat > trust-policy-specific-contexts.json << EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::YOUR_ACCOUNT_ID:oidc-provider/oidc.deno.com" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "oidc.deno.com:sub": [ "deployment:YOUR_ORG/YOUR_APP/production", "deployment:YOUR_ORG/YOUR_APP/staging" ] } } } ] } EOF ``` #### Step 3: Create the IAM Role Create the role using the appropriate trust policy: ```bash # For entire app aws iam create-role \ --role-name DenoDeploy-YourOrg-YourApp \ --assume-role-policy-document file://trust-policy-all-contexts.json # OR for specific contexts aws iam create-role \ --role-name DenoDeploy-YourOrg-YourApp \ --assume-role-policy-document file://trust-policy-specific-contexts.json ``` #### Step 4: Attach Policies Attach the necessary policies to grant permissions for the AWS resources your application needs: ```bash aws iam attach-role-policy \ --role-name DenoDeploy-YourOrg-YourApp \ --policy-arn arn:aws:iam::aws:policy/POLICY_NAME ``` Replace `POLICY_NAME` with the appropriate AWS policies (e.g., `AmazonS3ReadOnlyAccess`, `AmazonDynamoDBReadOnlyAccess`, etc.) based on your requirements. After completing these steps, use the Role ARN in your Deno Deploy cloud connection configuration. ### AWS: Using the AWS Console You can set up AWS resources using the AWS Management Console web interface. This method provides a visual way to configure the necessary IAM resources. #### Step 1: Create OIDC Identity Provider 1. **Navigate to IAM Console** → Identity providers 2. **Create OIDC Provider**: - Click "Add provider" - Select "OpenID Connect" - Provider URL: `https://oidc.deno.com` - Audience: `sts.amazonaws.com` - Click "Add provider" #### Step 2: Create IAM Role 1. **Navigate to IAM Console** → Roles 2. **Create role**: - Click "Create role" - Trusted entity type: **Web identity** - Identity provider: Select the created OIDC provider (`oidc.deno.com`) - Audience: `sts.amazonaws.com` #### Step 3: Configure Trust Policy Conditions Add a condition to restrict which Deno Deploy applications can assume this role. Choose one approach: **For all contexts in your app:** - Condition key: `oidc.deno.com:sub` - Operator: `StringLike` - Value: `deployment:YOUR_ORG/YOUR_APP/*` **For specific contexts only:** - Condition key: `oidc.deno.com:sub` - Operator: `StringEquals` - Value: `deployment:YOUR_ORG/YOUR_APP/production` - Add additional conditions for each context (e.g., staging, development) Click "Next" to continue. #### Step 4: Attach Permissions Policies 1. Search and select appropriate policies based on your needs: - For S3 access: `AmazonS3ReadOnlyAccess` or `AmazonS3FullAccess` - For DynamoDB access: `AmazonDynamoDBReadOnlyAccess` or `AmazonDynamoDBFullAccess` - For other services: Select relevant policies 2. Click "Next" #### Step 5: Name and Create Role 1. **Role name**: `DenoDeploy-YourOrg-YourApp` (replace with your actual organization and app names) 2. 
**Description**: Optional description of the role's purpose 3. Review the trust policy and permissions 4. Click "Create role" #### Step 6: Copy Role ARN After creating the role: 1. Go to the role details page 2. Copy the Role ARN (it looks like `arn:aws:iam::123456789012:role/DenoDeploy-YourOrg-YourApp`) 3. Use this ARN in your Deno Deploy cloud connection configuration ### AWS: Using Terraform You can use Terraform to programmatically create the AWS resources needed for cloud connections. This approach is ideal for infrastructure-as-code workflows. #### Terraform Configuration Create a Terraform configuration file with the following content: ```hcl # Variables variable "org" { description = "Deno Deploy organization name" type = string } variable "app" { description = "Deno Deploy app name" type = string } variable "contexts" { description = "List of specific contexts to allow (leave empty for all contexts)" type = list(string) default = [] } # OIDC Provider resource "aws_iam_openid_connect_provider" "deno_deploy" { url = "https://oidc.deno.com" client_id_list = ["sts.amazonaws.com"] } # IAM Role with dynamic trust policy based on contexts resource "aws_iam_role" "deno_deploy_role" { name = "DenoDeploy-${var.org}-${var.app}" assume_role_policy = jsonencode({ Version = "2012-10-17" Statement = [ { Effect = "Allow" Principal = { Federated = aws_iam_openid_connect_provider.deno_deploy.arn } Action = "sts:AssumeRoleWithWebIdentity" Condition = length(var.contexts) > 0 ? { # Specific contexts only StringEquals = { "oidc.deno.com:sub" = [ for context in var.contexts : "deployment:${var.org}/${var.app}/${context}" ] } } : { # All contexts (wildcard) StringLike = { "oidc.deno.com:sub" = "deployment:${var.org}/${var.app}/*" } } } ] }) } # Attach policies resource "aws_iam_role_policy_attachment" "example" { role = aws_iam_role.deno_deploy_role.name policy_arn = "arn:aws:iam::aws:policy/POLICY_NAME" } # Output the role ARN output "role_arn" { value = aws_iam_role.deno_deploy_role.arn } ``` #### Usage Examples **For entire app access (all contexts):** ```hcl module "deno_deploy_aws" { source = "./path-to-terraform-module" org = "your-org" app = "your-app" contexts = [] # Empty list allows all contexts } ``` **For specific contexts only:** ```hcl module "deno_deploy_aws" { source = "./path-to-terraform-module" org = "your-org" app = "your-app" contexts = ["production", "staging"] } ``` #### Applying the Configuration 1. Initialize Terraform: ```bash terraform init ``` 2. Plan the deployment: ```bash terraform plan ``` 3. Apply the configuration: ```bash terraform apply ``` After applying, Terraform will output the Role ARN that you can use in your Deno Deploy cloud connection configuration. #### Customizing Policies Replace `POLICY_NAME` in the `aws_iam_role_policy_attachment` resource with the appropriate AWS managed policies or create custom policies based on your requirements. You can add multiple policy attachments by creating additional `aws_iam_role_policy_attachment` resources. ### GCP: Easy setup with `deno deploy setup-gcp` For instructions on how to set up GCP with Deno Deploy using the `deno deploy setup-gcp` command, please see the instructions on the Google cloud integration setup page in your app settings. ### GCP: Using the `gcloud` CLI You can manually set up GCP resources using the gcloud CLI. This requires having the gcloud CLI installed and authenticated with appropriate permissions to create workload identity pools, service accounts, and grant IAM roles. 
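The steps below reference a `PROJECT_ID` placeholder. As an optional convenience (a sketch; `PROJECT_ID` stands in for your actual GCP project ID), you can point `gcloud` at the right project once before running the commands:

```bash
# Set the default project for subsequent gcloud commands
# (PROJECT_ID is a placeholder for your GCP project ID)
gcloud config set project PROJECT_ID
```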
#### Prerequisites - gcloud CLI installed and authenticated - Access to create workload identity pools, service accounts, and grant IAM roles - Required APIs enabled: - `iam.googleapis.com` - `iamcredentials.googleapis.com` - `sts.googleapis.com` #### Step 1: Enable Required APIs First, enable the required APIs for your project: ```bash gcloud services enable iam.googleapis.com gcloud services enable iamcredentials.googleapis.com gcloud services enable sts.googleapis.com ``` #### Step 2: Create Workload Identity Pool Create a workload identity pool to manage external identities: ```bash gcloud iam workload-identity-pools create oidc-deno-com \ --location=global \ --display-name="Deno Deploy Workload Identity Pool" ``` #### Step 3: Create Workload Identity Provider Configure the OIDC provider within the workload identity pool: ```bash gcloud iam workload-identity-pools providers create-oidc oidc-deno-com \ --workload-identity-pool=oidc-deno-com \ --location=global \ --issuer-uri=https://oidc.deno.com \ --attribute-mapping="google.subject=assertion.sub,attribute.org_slug=assertion.org_slug,attribute.app_slug=assertion.app_slug,attribute.full_slug=assertion.org_slug+\"/\"+assertion.app_slug" ``` #### Step 4: Create Service Account Create a service account that will be used by your Deno Deploy application: ```bash gcloud iam service-accounts create deno-your-org-your-app \ --display-name="Deno Deploy YourOrg/YourApp" ``` #### Step 5: Configure Workload Identity Binding Get your project number and configure the workload identity binding. Choose between allowing access to all contexts or specific contexts only. ```bash # Get project number PROJECT_NUMBER=$(gcloud projects describe PROJECT_ID --format="value(projectNumber)") ``` **For all contexts in your app:** ```bash gcloud iam service-accounts add-iam-policy-binding \ deno-your-org-your-app@PROJECT_ID.iam.gserviceaccount.com \ --role=roles/iam.workloadIdentityUser \ --member="principalSet://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/oidc-deno-com/attribute.full_slug/YOUR_ORG/YOUR_APP" ``` **For specific contexts only:** ```bash # Bind for production context gcloud iam service-accounts add-iam-policy-binding \ deno-your-org-your-app@PROJECT_ID.iam.gserviceaccount.com \ --role=roles/iam.workloadIdentityUser \ --member="principal://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/oidc-deno-com/subject/deployment:YOUR_ORG/YOUR_APP/production" # Bind for staging context gcloud iam service-accounts add-iam-policy-binding \ deno-your-org-your-app@PROJECT_ID.iam.gserviceaccount.com \ --role=roles/iam.workloadIdentityUser \ --member="principal://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/oidc-deno-com/subject/deployment:YOUR_ORG/YOUR_APP/staging" # Add more bindings for each specific context as needed ``` #### Step 6: Grant Roles to Service Account Grant the necessary roles to the service account for accessing GCP resources: ```bash gcloud projects add-iam-policy-binding PROJECT_ID \ --member="serviceAccount:deno-your-org-your-app@PROJECT_ID.iam.gserviceaccount.com" \ --role="roles/ROLE_NAME" ``` Replace `ROLE_NAME` with appropriate roles such as: - `roles/storage.objectViewer` for Cloud Storage read access - `roles/storage.objectAdmin` for Cloud Storage full access - `roles/cloudsql.client` for Cloud SQL access - Other roles based on your requirements #### Step 7: Get Required Values After completing the setup, you'll need two values for 
your Deno Deploy configuration: 1. **Workload Provider ID**: `projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/oidc-deno-com/providers/oidc-deno-com` 2. **Service Account Email**: `deno-your-org-your-app@PROJECT_ID.iam.gserviceaccount.com` Use these values in your Deno Deploy cloud connection configuration. ### GCP: Using the GCP Console You can set up GCP resources using the Google Cloud Console web interface. This method provides a visual way to configure workload identity federation and service accounts. #### Step 1: Enable Required APIs 1. **Navigate to APIs & Services** → Library 2. Search for and enable the following APIs: - "Identity and Access Management (IAM) API" - "IAM Service Account Credentials API" - "Security Token Service API" #### Step 2: Create Workload Identity Pool 1. **Navigate to IAM & Admin** → Workload Identity Federation 2. **Create Pool**: - Click "Create Pool" - Pool name: `Deno Deploy Workload Id Pool` - Pool ID: `oidc-deno-com` - Click "Continue" #### Step 3: Add Provider to Pool 1. **Add a provider**: - Click "Add a provider" - Provider type: **OpenID Connect (OIDC)** - Provider name: `Deno Deploy OIDC Provider` - Provider ID: `oidc-deno-com` - Issuer URL: `https://oidc.deno.com` 2. **Configure attribute mappings**: - `google.subject` → `assertion.sub` - `attribute.org_slug` → `assertion.org_slug` - `attribute.app_slug` → `assertion.app_slug` - `attribute.full_slug` → `assertion.org_slug + "/" + assertion.app_slug` 3. Click "Save" #### Step 4: Create Service Account 1. **Navigate to IAM & Admin** → Service Accounts 2. **Create Service Account**: - Click "Create Service Account" - Service account name: `deno-your-org-your-app` - Service account ID: `deno-your-org-your-app` - Description: `Service account for Deno Deploy project your-org/your-app` - Click "Create and Continue" #### Step 5: Grant Roles to Service Account 1. Select appropriate roles based on your needs: - For Cloud Storage: `Storage Object Viewer` or `Storage Admin` - For Cloud SQL: `Cloud SQL Client` - For other services: Select relevant roles 2. Click "Continue" then "Done" #### Step 6: Configure Workload Identity Binding 1. **Go back to the created service account** 2. Click on the "Principals with access" tab 3. Click "Grant Access" 4. Configure principals - choose one approach: **For all contexts in your app:** - New principals: `principalSet://iam.googleapis.com/projects/YOUR_PROJECT_NUMBER/locations/global/workloadIdentityPools/oidc-deno-com/attribute.full_slug/YOUR_ORG/YOUR_APP` **For specific contexts only:** - New principals: `principal://iam.googleapis.com/projects/YOUR_PROJECT_NUMBER/locations/global/workloadIdentityPools/oidc-deno-com/subject/deployment:YOUR_ORG/YOUR_APP/production` - Repeat for each context (staging, etc.) 5. Role: **Workload Identity User** 6. Click "Save" #### Step 7: Get Required Values You'll need two values for your Deno Deploy configuration: 1. **Workload Provider ID**: - Navigate back to Workload Identity Federation - Click on your pool, then your provider - Copy the provider resource name (full path starting with `projects/`) 2. **Service Account Email**: Copy from the service account details page #### Step 8: Verify Configuration The final workload identity pool overview should show: - Your pool with the OIDC provider - The connected service account - Proper bindings configured Use the Service Account Email and Workload Provider ID in your Deno Deploy cloud connection configuration. 
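As an alternative to copying the provider resource name out of the console, you should be able to print it with the `gcloud` CLI (a sketch assuming the pool and provider IDs used above; the service account email follows the fixed pattern `deno-your-org-your-app@PROJECT_ID.iam.gserviceaccount.com`):

```bash
# Print the full resource name of the workload identity provider
# created above (pool and provider IDs are both "oidc-deno-com")
gcloud iam workload-identity-pools providers describe oidc-deno-com \
  --workload-identity-pool=oidc-deno-com \
  --location=global \
  --format="value(name)"
```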
### GCP: Using Terraform You can use Terraform to programmatically create the GCP resources needed for cloud connections. This approach is ideal for infrastructure-as-code workflows. #### Terraform Configuration Create a Terraform configuration file with the following content: ```hcl # Variables variable "org" { description = "Deno Deploy organization name" type = string } variable "app" { description = "Deno Deploy app name" type = string } variable "contexts" { description = "List of specific contexts to allow (leave empty for all contexts)" type = list(string) default = [] } variable "project_id" { description = "GCP Project ID" type = string } variable "roles" { description = "List of IAM roles to grant to the service account" type = list(string) default = [] } # Data source for project information data "google_project" "project" { project_id = var.project_id } # Workload Identity Pool resource "google_iam_workload_identity_pool" "deno_deploy" { workload_identity_pool_id = "oidc-deno-com" display_name = "Deno Deploy Workload Id Pool" } # Workload Identity Provider resource "google_iam_workload_identity_pool_provider" "deno_deploy" { workload_identity_pool_id = google_iam_workload_identity_pool.deno_deploy.workload_identity_pool_id workload_identity_pool_provider_id = "oidc-deno-com" display_name = "Deno Deploy OIDC Provider" attribute_mapping = { "google.subject" = "assertion.sub" "attribute.org_slug" = "assertion.org_slug" "attribute.app_slug" = "assertion.app_slug" "attribute.full_slug" = "assertion.org_slug + \"/\" + assertion.app_slug" } oidc { issuer_uri = "https://oidc.deno.com" } } # Service Account resource "google_service_account" "deno_deploy" { account_id = "deno-${var.org}-${var.app}" display_name = "Deno Deploy ${var.org}/${var.app}" } # Workload Identity Binding - dynamic based on contexts resource "google_service_account_iam_binding" "workload_identity" { service_account_id = google_service_account.deno_deploy.name role = "roles/iam.workloadIdentityUser" members = length(var.contexts) > 0 ? 
[ # Specific contexts only for context in var.contexts : "principal://iam.googleapis.com/projects/${data.google_project.project.number}/locations/global/workloadIdentityPools/${google_iam_workload_identity_pool.deno_deploy.workload_identity_pool_id}/subject/deployment:${var.org}/${var.app}/${context}" ] : [ # All contexts (using attribute mapping) "principalSet://iam.googleapis.com/projects/${data.google_project.project.number}/locations/global/workloadIdentityPools/${google_iam_workload_identity_pool.deno_deploy.workload_identity_pool_id}/attribute.full_slug/${var.org}/${var.app}" ] } # Grant roles to service account resource "google_project_iam_member" "service_account_roles" { for_each = toset(var.roles) project = var.project_id role = each.value member = "serviceAccount:${google_service_account.deno_deploy.email}" } # Outputs output "workload_provider_id" { value = "projects/${data.google_project.project.number}/locations/global/workloadIdentityPools/${google_iam_workload_identity_pool.deno_deploy.workload_identity_pool_id}/providers/${google_iam_workload_identity_pool_provider.deno_deploy.workload_identity_pool_provider_id}" } output "service_account_email" { value = google_service_account.deno_deploy.email } ``` #### Usage Examples **For entire app access (all contexts):** ```hcl module "deno_deploy_gcp" { source = "./path-to-terraform-module" org = "your-org" app = "your-app" project_id = "your-gcp-project-id" contexts = [] # Empty list allows all contexts roles = [ "roles/storage.objectViewer", "roles/cloudsql.client" ] } ``` **For specific contexts only:** ```hcl module "deno_deploy_gcp" { source = "./path-to-terraform-module" org = "your-org" app = "your-app" project_id = "your-gcp-project-id" contexts = ["production", "staging"] roles = [ "roles/storage.objectAdmin", "roles/cloudsql.client" ] } ``` #### Applying the Configuration 1. Initialize Terraform: ```bash terraform init ``` 2. Plan the deployment: ```bash terraform plan ``` 3. Apply the configuration: ```bash terraform apply ``` After applying, Terraform will output the Workload Provider ID and Service Account Email that you can use in your Deno Deploy cloud connection configuration. #### Customizing Roles The `roles` variable accepts a list of GCP IAM roles. Common roles include: - `roles/storage.objectViewer` - Read access to Cloud Storage - `roles/storage.objectAdmin` - Full access to Cloud Storage objects - `roles/cloudsql.client` - Access to Cloud SQL instances - `roles/secretmanager.secretAccessor` - Access to Secret Manager secrets - Custom roles can also be specified --- # deploy/reference/databases.md URL: https://docs.deno.com/deploy/reference/databases The databases feature allows you to connect your applications to external databases and provision managed data stores. When you assign a database to an app, Deno Deploy automatically provisions separate databases for each deployment environment - production, Git branches, and preview timelines. Your code automatically connects to the correct database for each environment without requiring timeline detection or manual database name handling. Simply use your favorite database driver to connect - Deno Deploy handles the connection details automatically via environment variables. ## Getting Started There are two ways to add data backends to your apps on the Databases page: - Link Database: Connect an existing external database (for example, a PostgreSQL server you run or a managed instance from a cloud provider). 
- Provision Database: Create and attach a managed data store from Deploy (Deno KV or Prisma Postgres). ### Adding a Database Navigate to your organization dashboard and click "Databases" in the navigation bar. From here, choose the flow that matches your use case: #### Link an external database - Click "**Link Database**" to connect an existing database instance. - Choose PostgreSQL and either enter connection details manually or paste a connection string to automatically populate the form. - Details typically include hostname, port (usually 5432), username, password, and optionally a CA certificate if required by your provider. - Use "Test Connection" to verify settings, then give the instance a name and click "Save". Instead of filling out individual fields, you can paste a connection string like `postgresql://username:password@hostname:port/database` to automatically populate the form fields. **Common formats:** - PostgreSQL: `postgresql://user:pass@localhost:5432/dbname` or `postgres://user:pass@localhost:5432/dbname` #### Provision a managed database - Click "Provision Database" to create a managed data store from Deploy. - Available today: - Deno KV — a fast, globally distributed key‑value store built for the edge. - Prisma Postgres - the world's most advanced open source relational database, hosted by Prisma. ### Connecting an App to a Database Once you have a database instance (linked or provisioned), you can assign it to your apps. From the database instances list, click "Assign" next to your database instance and select the app from the dropdown. Optionally, you can configure a migration command that will run automatically after each build (see [Automated Migration Commands](#automated-migration-commands) for details). Deno Deploy automatically creates isolated data scopes for each timeline. For PostgreSQL, this means separate databases with the following naming scheme: - Production deployments use `{app-id}-production` - Git branches get `{app-id}--{branch-name}` - Preview deployments use `{app-id}-preview` This ensures your production data stays safe while developing and testing. You can monitor the provisioning process and watch the status change to "Connected". If there are any errors, use the "Fix" button to retry. ## Using Databases in Your Code ### Zero Configuration Required Once you've assigned a database to your app, connecting to it from your code is simple. You don't need to configure connection strings, set up environment variables, or manage credentials - Deno Deploy handles all of this automatically. Simply use your favorite database library as you normally would, and it will automatically connect to the correct database for your current environment. ### Automatic Environment Variables For PostgreSQL databases (both linked and provisioned), Deno Deploy automatically injects standard database environment variables into your app's runtime environment: `PGHOST`, `PGPORT`, `PGDATABASE` (automatically selected for your environment), `PGUSER`, `PGPASSWORD`, `PGSSLMODE`, and `DATABASE_URL`. These variables follow standard conventions, so most database libraries automatically detect and use them without any configuration. 
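If you want to confirm which environment-specific database was selected, you can read the injected variables directly. A minimal sketch using only the variable names documented above:

```typescript
// Sketch: log the connection details Deno Deploy injects at runtime.
// PGDATABASE differs per timeline (production, git branch, or preview).
const database = Deno.env.get("PGDATABASE");
const hasUrl = Deno.env.get("DATABASE_URL") !== undefined;

console.log(`Database selected for this environment: ${database}`);
console.log(`DATABASE_URL is set: ${hasUrl}`);
```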
### PostgreSQL Example Here's how to connect to PostgreSQL in your Deno Deploy app: ```typescript import { Pool } from "npm:pg"; // No configuration needed - Deno Deploy handles this automatically const pool = new Pool(); Deno.serve(async () => { // Use the database const result = await pool.query("SELECT * FROM users WHERE id = $1", [123]); return new Response(JSON.stringify(result.rows), { headers: { "content-type": "application/json" }, }); }); ``` ### How It Works Deno Deploy automatically detects which environment your code is running in (production, Git branch, or preview), then selects the appropriate database based on that environment. The correct connection details are automatically set as environment variables, and your database library reads these standard environment variables automatically. Your code runs the same way across all environments but connects to different databases. The same `new Pool()` code works in production (connecting to `myappid-production`), Git branches (connecting to `myappid--branch-name`), and previews (connecting to `myappid-preview`). ### Migration and Schema Management Since each environment has its own database, you can safely test migrations in a Git branch without affecting production or other branch-specific databases. #### Automated Migration Commands When assigning a database to an app, you can configure a migration command that automatically runs after each successful build. This ensures your database schema stays synchronized with your application code across all environments. **Setting Up a Migration Command:** 1. When assigning a database to an app (or editing an existing assignment), enter a migration command in the "Migration Command" field (e.g., `deno task migrate` or `npm run migrate`). 2. This command executes automatically after every successful build of a new revision. 3. The command runs once for each database that the revision can connect to - meaning it executes separately for production, each Git branch database, and preview databases. 4. The migration command runs with the same environment variables available to your application, including `PGHOST`, `PGPORT`, `PGDATABASE`, etc. **Example migration setup using node-pg-migrate:** Add a task to your `deno.json`: ```json { "tasks": { "migrate": "deno run --allow-net --allow-env --allow-read --allow-write npm:node-pg-migrate up" } } ``` Create a migrations directory and add migration files. For example, `migrations/1234567890_create-users-table.js`: ```javascript exports.up = (pgm) => { pgm.createTable("users", { id: "id", name: { type: "varchar(100)", notNull: true }, email: { type: "varchar(100)", notNull: true }, created_at: { type: "timestamp", notNull: true, default: pgm.func("current_timestamp"), }, }); }; exports.down = (pgm) => { pgm.dropTable("users"); }; ``` Then set your migration command to `deno task migrate` when assigning the database to your app. Deno Deploy will automatically run this command after each build, ensuring all your environment-specific databases stay up to date. ### Local Development When developing locally, you can use either a local PostgreSQL instance (install PostgreSQL through your package manager or download it from postgresql.org) or connect to a remote database server. 
Create a `.env` file (if one does not yet exist) in your project root and add to it the PostgreSQL connection details: ```bash PGHOST=localhost # or your remote host PGPORT=5432 PGDATABASE=myapp_dev PGUSER=myuser PGPASSWORD=mypassword PGSSLMODE=prefer # or `require` for remote connections ``` Then run your application with the `--env` flag to automatically load these environment variables: ```bash deno run --env --allow-all main.ts ``` Your application code remains the same - it will automatically use these environment variables to connect to your chosen database during local development. ## SSL Configuration (Linked databases) All connections to linked external databases use SSL encryption for security. The main difference is how certificates are handled depending on your database provider. This section does not apply to provisioned Deno KV. ### Certificate Types **Trusted Root CA Certificates:** Some database providers use certificates signed by trusted root Certificate Authorities (like Let's Encrypt or DigiCert). These work automatically without any configuration. **Private Root CA Certificates:** Some providers use self-signed certificates or private Certificate Authorities. In these cases, you need to upload the CA certificate that was used to sign your database's certificate. ### Certificate Configuration **For databases with trusted root CA certificates:** No certificate upload is needed and SSL connections work automatically. Some managed database services fall into this category. **For databases with private root CA certificates:** AWS RDS users can click "Use AWS Certificate Bundle" to automatically configure AWS RDS certificates without downloading them from AWS documentation. Other providers require you to upload the specific CA certificate provided by your database provider. ### Common Providers **AWS RDS** uses AWS's own Certificate Authority (not publicly trusted). Click "Use AWS Certificate Bundle" for automatic configuration without needing to manually download certificates from AWS docs. **Google Cloud SQL** uses Google's own Certificate Authority (not publicly trusted). You need to upload the Google Cloud SQL CA certificate, which you can download from your Google Cloud Console. **Self-Hosted Databases** require you to upload your custom CA certificate if using self-signed certificates, or you can configure your database to use publicly trusted CA certificates. ## Database Management ### Viewing Database Details Click on any database instance to see connection information (hostname, port, engine type), assigned apps, individual databases created within the instance, and overall health and connection status. ### Database Status Indicators The dashboard shows clear status indicators: - **🟢 Connected** - All databases are ready and working - **🟡 Creating** - Databases are being provisioned - **🔴 Error** - Some databases failed to create - **⚪ Unassigned** - No apps are using this database yet ### Managing App Assignments To assign a database to an app, click "Assign" on the database instance, select the app from the dropdown, optionally configure a migration command (see [Automated Migration Commands](#automated-migration-commands)), and confirm the assignment. To edit an existing app-database assignment (including updating the migration command), go to the database detail page, find the app in the "Assigned Apps" table, and click "Edit" next to the app. 
To remove an app from a database, go to the database detail page, find the app in the "Assigned Apps" table, and click "Remove" next to the app. ### Editing Database Settings Click "Edit" on any database instance to update connection details. Test the connection to ensure it still works before saving your changes. ## Supported Database Engines - Deno KV (Provision Database) — fast, globally distributed key‑value store built for the edge. - Prisma Postgres (Provision Database) — the world's most advanced open source relational database, hosted by Prisma. - PostgreSQL (Link Database) — connect an existing external instance. Additional engines, such as MySQL, MongoDB, and Redis, are planned for future releases. ## Troubleshooting ### Connection Issues **"Connection failed" errors** typically indicate: - Incorrect hostname or port - Wrong username or password - Database server not running - Network connectivity issues Verify all connection details and ensure your database server is accessible. **"Permission denied" errors** mean the database user lacks necessary permissions. Verify the database user has the required permissions, can create databases, and can connect from Deno Deploy's servers. **SSL connection issues** occur when: - Database instance uses a trusted root CA, but SSL connectivity is not configured correctly on your database server - Database instance uses a private root CA, but you haven't uploaded the correct CA certificate - Database server doesn't support SSL connections - Certificate has expired Check your database server's SSL configuration and certificate validity. ### Provisioning Issues **"Database creation failed"** usually indicates: - Database user lacks CREATE privileges - Insufficient disk space - Naming conflicts with existing databases Check your database user permissions and server capacity. **"Timeout" errors** suggest: - Network connectivity issues between Deno Deploy and your database server - Database server is slow to respond Check server load and performance. **"Error" status** can be resolved by: - Using the "Fix" button to retry failed operations - Checking your database server logs for more detailed information ## Frequently Asked Questions **Q: Can multiple apps share the same database instance?** Yes! Multiple apps can be assigned to the same database instance. Each app gets its own isolated databases within that instance. **Q: What happens to my data when I remove an app assignment?** The databases remain on your database server. Only the connection between your app and the database instance is removed. **Q: Can I use the same database for multiple environments?** By default, each environment (production, branch, preview) gets its own database to ensure isolation and prevent data conflicts. However, you can customize the database your code connects to using options in your database library (see the sketch below). **Q: How do I access my databases directly?** You can connect directly to your database server using the connection details you provided. Use the database names shown in the Deno Deploy dashboard. **Q: Can I change database connection details?** Yes, click "Edit" on any database instance to update connection details. Test the connection before saving to ensure it works. **Q: How do I delete a database instance?** First remove all app assignments, then click "Delete" on the database instance. This only removes the connection from Deno Deploy - your actual database server is not affected.
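For the FAQ above about sharing one database across environments, here is a minimal sketch of overriding the automatic database selection with `npm:pg`. `my-shared-db` is a hypothetical database name that must already exist on your server:

```typescript
import { Pool } from "npm:pg";

// Host, port, user, and password still come from the injected PG*
// environment variables; only the database name is pinned here.
// "my-shared-db" is a hypothetical name - create it on your server first.
const pool = new Pool({ database: "my-shared-db" });

const { rows } = await pool.query("SELECT current_database() AS db");
console.log(rows[0].db); // always "my-shared-db", in every environment
```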
--- # deploy/reference/deno_kv.md URL: https://docs.deno.com/deploy/reference/deno_kv [Deno KV] is a key-value database supported in Deno Deploy as a database engine option in the [databases] feature. Thanks to the [timelines] capability in Deno Deploy Early Access (EA), each of your app's timelines gets its own Deno KV database (for example, one for production and one for each Git branch), ensuring data isolation and security across environments. As with other database engines, your code automatically connects to the correct database for each environment—no timeline detection or manual database naming required. ## Getting Started ### Add a KV database Navigate to your organization dashboard and click "Databases" in the navigation bar. Click "Provision Database", choose Deno KV as the database engine, provide a memorable name, and save. ### Connect an app to a KV database Once you have a database instance you can assign it to an app. From the database instances list, click "Assign" next to the database you wish to use and select the app from the dropdown. Deno Deploy automatically creates a separate database for each timeline. This keeps your production data safe while you develop and test. You can monitor provisioning and watch the status change to "Connected." If any errors occur, click "Fix" to retry. ## Using Deno KV in Your Code Once you've assigned a database to your app, connecting from code is simple. Deno Deploy sets up the connection to the correct database based on the current environment. ### Example Here's how to connect to Deno KV in your Deno Deploy app:

```typescript
const kv = await Deno.openKv();

Deno.serve(async () => {
  // The key may not exist yet, so fall back to 0 before incrementing.
  const res = await kv.get<number>(["requests"]);
  const requests = (res.value ?? 0) + 1;
  await kv.set(["requests"], requests);
  return new Response(JSON.stringify(requests));
});
```

For detailed information about Deno KV and its features, see the [Deno KV documentation][Deno KV]. ## Un-assigning a KV database If you remove a database assignment from an app, the app will no longer be able to access that database. However, the database itself and its data will remain intact and can be reassigned to another app or the same app at a later time. Hover over the name of the assigned app in the databases list and click the 'remove app assignment' icon to un-assign it. ## Data Distribution Deno KV databases are replicated across at least three data centers in the primary region, Northern Virginia (us-east4). Once a write operation is committed, its mutations are durably stored in a quorum of data centers within the primary region. Cross-region replication is not currently available. ## Data storage In local development, data is kept in memory. You do not need to create or allocate a database before using the KV APIs locally, and your KV code remains consistent across environments. ## Deleting a database instance Click "Delete" on the Deno KV entry in the database instances list. Unlike other database engines, this action deletes all existing Deno KV databases and their data. Be sure to back up your data before proceeding. [Deno KV]: /deploy/kv/ [databases]: /deploy/reference/databases/ [timelines]: /deploy/reference/timelines/ --- # deploy/reference/domains.md > Complete guide to domain management in Deno Deploy, including organization domains, custom domains, DNS configuration, TLS certificates, and domain assignments. URL: https://docs.deno.com/deploy/reference/domains Every organization has a default domain used for all applications deployed within that organization.
For example, an organization with the slug `acme-inc` would have a default domain of `acme-inc.deno.net`. An application named `my-app` would automatically receive the production domain `my-app.acme-inc.deno.net`. In addition to these default domains, you can add custom domains to your applications. Custom domains are domains that you own and control. To use a custom domain, you must: 1. Own the domain (purchased from a domain registrar) 2. Have access to edit its DNS records Custom domains belong to an organization and can be attached to any application within that organization. A custom domain can be added as: - A base domain (e.g., `example.com` or a specific subdomain) - A wildcard domain (e.g., `*.example.com`) A base domain works with a single application, while a wildcard domain offers more flexibility. You can either: - Assign the entire wildcard to one application (all subdomains point to the same app) - Partially assign it to multiple applications (different subdomains point to different apps) All custom domains require valid TLS certificates. Deno Deploy can automatically provision these certificates using [Let's Encrypt](https://letsencrypt.org/). Alternatively, you can bring your own TLS certificates, which you will then need to renew manually. ## Adding a custom domain 1. Go to the organization domains page (click your organization name in the top left corner, then the "Domains" tab) 2. Click "Add Domain" 3. Enter your domain (e.g., `example.com`) 4. Select whether to add just this domain or also include the wildcard subdomain 5. Click "Add Domain" This will open the domain configuration drawer. ### DNS configuration The domain configuration drawer shows the DNS records needed to: - Verify domain ownership - Optionally provision TLS certificates - Route traffic to Deno Deploy There are three possible configuration methods, depending on your domain registrar's capabilities: #### ANAME/ALIAS method (preferred) If your registrar supports `ANAME` or `ALIAS` records, this is the best option: - Add one `ANAME`/`ALIAS` record - Add one `CNAME` record for verification #### CNAME method Works well for subdomains but not for apex domains: - Add two `CNAME` records - Note: This method doesn't allow other DNS records (like `MX` records) on the same domain #### A record method Most compatible but requires more configuration: - Add one `A` record - Add one `CNAME` record for verification > Note: Deno Deploy does not currently support IPv6. When using the `ANAME/ALIAS` or `CNAME` methods, your domain will automatically begin using IPv6 once support is added. With the `A` method, you'll receive an email when it's time to add an `AAAA` record. :::caution When using Cloudflare as your DNS provider, you **MUST** disable the proxying feature (orange cloud) for the `_acme-challenge` CNAME record, or verification and certificate provisioning will fail. ::: ### Verification After adding the DNS records, Deno Deploy will verify your domain ownership. This process may take a few minutes depending on your DNS provider. You can leave the domain configuration drawer open during verification — it will refresh automatically when complete. You can manually trigger verification by clicking the "Provision Certificate" button. Successful verification also initiates TLS certificate provisioning. ### TLS certificates After domain verification, you need a valid TLS certificate to use the domain with Deno Deploy.
You can either have Deno Deploy provision a certificate for you using Let's Encrypt, or you can bring your own certificate. #### Automatic provisioning (Let's Encrypt) After domain verification, click "Provision Certificate" to generate a TLS certificate through Let's Encrypt. This process can take up to 90 seconds. Once provisioned, you'll see certificate details including expiration date and issue time. Certificates are automatically renewed near expiration. You can check the current certificate status in the domain configuration drawer. If automatic renewal fails (for example, because DNS records changed), you will receive an email notification 14 days before the certificate expires. You then have a chance to fix the issue and contact support to retry the renewal. If the certificate is not renewed before expiration, the domain will stop working. #### Bring your own certificate If you prefer to use your own TLS certificate, you can upload it in the domain configuration drawer. You'll need to provide the following: - The certificate file (PEM format) - The private key file (PEM format) Once uploaded, the certificate will be used for the domain. You are responsible for renewing and updating the certificate before it expires. You will receive email notifications 14 days before the certificate expires reminding you to update it. If the certificate expires, the domain will stop working. The TLS certificate must be valid at the time of upload. It must cover the base domain (and, if you have a wildcard domain, the wildcard subdomain as well) through either the common name or the subject alternative names in the certificate. The private key and certificate must match, and must be either RSA (2048, 3072, or 4096 bits) or ECDSA (P-256, P-384, or P-521). ## Assigning a custom domain to an application After adding a custom domain to your organization: 1. Go to the organization domains page 2. Click "Assign" next to the custom domain 3. Select the target application 4. If using a wildcard domain, choose whether to attach the base domain, the wildcard, or a specific subdomain 5. Click "Assign Domain" ## Unassigning a custom domain from an application 1. Go to the application settings page 2. Find the "Custom Domains" section 3. Click "Remove" next to the domain you want to unassign This removes the domain from the application but keeps it available in your organization for use with other applications. ## Removing a custom domain 1. Go to the organization domains page 2. Open the domain configuration drawer 3. Click "Delete" and confirm This removes the custom domain from your organization and deletes all domain assignments across all applications. ## Migrating a custom domain from Deploy Classic to Deno Deploy If you have previously set up a custom domain on Deploy Classic and want to migrate it to Deno Deploy, we've created a [step-by-step tutorial](/examples/migrate_custom_domain_tutorial/) to guide you through the process. --- # deploy/reference/env_vars_and_contexts.md > Guide to managing environment variables and contexts in Deno Deploy, including variable types, creation, editing, and accessing them in your code. URL: https://docs.deno.com/deploy/reference/env_vars_and_contexts Environment variables in Deno Deploy allow you to configure your application with static values such as API keys or database connection strings. 
## Types of environment variables Environment variables can be stored as: - **Plain text**: Visible in the UI and suitable for non-sensitive values like feature flags - **Secrets**: Never visible in the UI after creation, only readable from application code, suitable for sensitive values like API keys Variables can be set at: - **Application level**: Specific to a single application - **Organization level**: Applied to all applications in the organization, but can be overridden by application-level variables ## Contexts Each environment variable applies to one or more contexts. Contexts represent the logical "environments" in which your code runs, each with its own set of variables and secrets. By default, there are two contexts: - **Production**: Used for the production timeline serving production traffic - **Development**: Used for development timelines serving non-production traffic (preview URLs and branch URLs) :::info Need additional contexts? Please contact [support](../support). ::: Additionally, there is a **Build** context used during the build process. Environment variables in the Build context are only available during builds and aren't accessible in Production or Development contexts (and vice versa). This separation enables different configuration for build-time vs. runtime. Within a single application or organization, you cannot have multiple environment variables with the same name in the same context. You can, however, have variables with the same name in different non-overlapping contexts. ## Adding, editing and removing environment variables You can manage environment variables from several locations: - On the "New App" page while creating an application - In the application settings under the "Environment Variables" section - In the organization settings under the "Environment Variables" section In each location, click the relevant edit button to open the environment variables drawer. Changes only apply when you click "Save." Clicking "Cancel" discards your changes. To add a variable: 1. Click "Add Environment Variable" 2. Enter the name and value 3. Specify whether it's a secret 4. Select the contexts where it should apply You can also bulk import variables from a `.env` file: 1. Click "+ Add from .env file" 2. Paste the contents of your `.env` file 3. Click "Import variables" Note that lines starting with `#` are treated as comments. To remove a variable, click the "Remove" button next to it. To edit a variable, click the "Edit" button next to it to modify its name, value, secret status, or applicable contexts. ## Using environment variables in your code Access environment variables using the `Deno.env.get` API:

```ts
const myEnvVar = Deno.env.get("MY_ENV_VAR");
```

## Limits Environment variables have the following limits: - Environment variable keys can be at most 128 bytes long. - Environment variable keys cannot start with: - `DENO_`, except for `DENO_AUTH_TOKENS`, `DENO_COMPAT`, `DENO_CONDITIONS`, `DENO_DEPLOY_ENDPOINT`, or `DENO_DEPLOY_TOKEN` - `LD_` - `OTEL_` - Environment variable values can be at most 16 KB (16,384 bytes) long. - Environment variable keys cannot be any of the following reserved keys; use [Cloud Connections](/deploy/reference/cloud_connections) instead:
- `AWS_ROLE_ARN` - `AWS_WEB_IDENTITY_TOKEN_FILE` - `GCP_WORKLOAD_PROVIDER_ID` - `GCP_SERVICE_ACCOUNT_EMAIL` - `GCP_PROJECT_ID` - `AZURE_CLIENT_ID` - `AZURE_TENANT_ID` - `AZURE_FEDERATED_TOKEN_FILE` ## Predefined environment variables Deno Deploy provides these predefined environment variables in all contexts: - `DENO_DEPLOY=1`: Indicates that the application is running in the Deno Deploy environment. - `DENO_DEPLOYMENT_ID`: A unique identifier representing the entire configuration set (application ID, revision ID, context, and environment variables). Changes if any of these components change. - `DENO_DEPLOY_ORG_ID`: The ID of the organization the application belongs to. - `DENO_DEPLOY_ORG_SLUG`: The slug of the organization the application belongs to. - `DENO_DEPLOY_APP_ID`: The ID of the application. - `DENO_DEPLOY_APP_SLUG`: The slug of the application. - `DENO_DEPLOY_BUILD_ID`: The ID of the currently running revision. During builds, the environment variable `CI=1` is additionally set. --- # deploy/reference/frameworks.md > Detailed guide to supported JavaScript and TypeScript frameworks in Deno Deploy, including Next.js, Astro, Nuxt, SvelteKit, and more. URL: https://docs.deno.com/deploy/reference/frameworks Deno Deploy supports a number of JavaScript and TypeScript frameworks out of the box. This means that you can use these frameworks without any additional configuration or setup. Natively supported frameworks are tested to work with Deno Deploy and are automatically detected when you create a new app. Deno Deploy automatically optimizes the build and runtime configuration for these frameworks. Frameworks not listed here are still likely to work, but may require manually configuring the install and/or build command and the runtime configuration in the build settings. Feel like a framework is missing? Let us know in the [Deno Deploy Discord channel](https://discord.gg/deno) or [contact Deno support](/deploy/support/). ## Supported frameworks ### Next.js Next.js is a React framework for building full-stack web applications. You use React Components to build user interfaces, and Next.js for additional features and optimizations. Both pages and app router are supported out of the box. ISR, SSG, SSR, and PPR are supported. Caching is supported out of the box, including the new `"use cache"` directive. `next/image` works out of the box. Next.js on Deno Deploy always builds in standalone mode. Tracing is supported out of the box, and Next.js automatically emits some spans for incoming requests, routing, rendering, and other operations. ### Astro Astro is a web framework for building content-driven websites like blogs, marketing, and e-commerce. Astro leverages server rendering over client-side rendering in the browser as much as possible. For static Astro sites, no additional configuration is needed to use Deno Deploy.
When using SSR in Astro with Deno Deploy , you need to install the [`@deno/astro-adapter`](https://github.com/denoland/deno-astro-adapter) package and configure your `astro.config.mjs` file to use the adapter: ```bash $ deno add npm:@deno/astro-adapter # or npm install @deno/astro-adapter # or yarn add @deno/astro-adapter # or pnpm add @deno/astro-adapter ``` ```diff title="astro.config.mjs" import { defineConfig } from 'astro/config'; + import deno from '@deno/astro-adapter'; export default defineConfig({ + output: 'server', + adapter: deno(), }); ``` Sharp image optimization is supported. The `astro:env` API is supported. ### Nuxt Create high-quality web applications with Nuxt, the open source framework that makes full-stack development with Vue.js intuitive. Nuxt requires no additional setup. ### SolidStart SolidStart is an open source meta-framework designed to unify components that make up a web application. It is built on top of Solid. SolidStart requires no additional setup. ### SvelteKit SvelteKit is a framework for rapidly developing robust, performant web applications using Svelte. SvelteKit requires no additional setup. ### Fresh Fresh is a full stack modern web framework for JavaScript and TypeScript developers. Fresh uses Preact as the JSX rendering engine. Fresh requires no additional setup. ### Lume Lume is a static site generator for building fast and modern websites using Deno. Lume requires no additional setup. ### Remix > ⚠️ **Experimental**: Remix is not yet fully supported. It is in the process of > being integrated into Deno Deploy. Some features may not work as expected. > Please report any issues you encounter to the Deno team. --- # deploy/reference/observability.md > Comprehensive overview of monitoring features in Deno Deploy, including logs, traces, metrics, and filtering options. URL: https://docs.deno.com/deploy/reference/observability Deno Deploy provides comprehensive observability features to help you understand application performance, debug errors, and monitor usage. These features leverage OpenTelemetry and the [built-in OpenTelemetry integration in Deno](/runtime/fundamentals/open_telemetry/). The three main observability features in Deno Deploy are: - **Logs**: Unstructured debug information emitted by your application code - **Traces**: Structured information about request handling, including execution time for each step and automatic capture of outbound I/O operations - **Metrics**: Structured, high-level data about application performance and usage, such as request count, error count, and latency ## Logs Logs in Deno Deploy are captured using the standard `console` API and can be queried from the logs page in the dashboard. Logs are organized by application. You can use the search bar to filter logs based on various attributes and message content. When logs are emitted inside the context of a trace, they become associated with that specific trace and span. For such logs, a "View trace" button appears in the logs interface, allowing you to open the relevant trace in an overlay drawer for detailed inspection. ## Traces Traces in Deno Deploy are captured in three ways: - **Automatically for built-in operations**: Incoming HTTP requests, outbound fetch calls, and other system operations are traced automatically. This cannot be disabled. - **Automatically for supported frameworks**: Frameworks like Next.js, Fresh, and Astro include built-in instrumentation. The specific frameworks and operations covered may change over time. 
- **Manually through custom instrumentation**: Your application code can create new traces or spans using the OpenTelemetry API. Traces are organized by application. The search bar lets you filter based on various attributes and span names. Clicking a trace opens the trace overlay drawer, showing all spans within that trace in a waterfall view. This visualization displays the start time, end time, and duration of each span, grouped by parent span with the root span at the top. Clicking any span shows its details at the bottom of the drawer, including all captured attributes. For example, outbound HTTP requests include the method, URL, and status code. The span details section also includes a "Logs" tab showing all logs emitted within the selected span's context. You can click "View logs" on any trace to open the logs page with the trace ID pre-filled in the search bar, showing all logs related to that trace. ## Metrics Metrics in Deno Deploy are automatically captured for various operations such as incoming HTTP requests and outbound fetch calls. This automatic capture cannot be disabled. Metrics are organized by application and displayed in time-series graphs showing values over time. You can use the search bar to filter metrics based on various attributes. ## Filtering Logs, traces, and metrics can be filtered using these general attributes: - **Revision**: The ID of the application revision that emitted the data - **Context**: The context in which the data was emitted ("Production" or "Development") For logs and traces, this additional filter is available: - **Trace**: The ID of the trace containing the log or spans For traces only, these additional filters are available: - **HTTP Method**: The HTTP method of the request that triggered the trace - **HTTP Path**: The path of the request that triggered the trace - **HTTP Status**: The HTTP status code of the response ### Time range filter By default, the observability pages show data for the last hour. You can change this using the time range filter in the top right corner of each page. You can select predefined time ranges like "Last 1 hour," "Last 24 hours," or "Last 7 days," or set a custom time range by clicking the "Custom" button. Custom time ranges can be either absolute (specific start and end times) or relative (e.g., 3 days ago, 1 hour from now). Relative time ranges use the same syntax as Grafana: - `now` - the current time - `now-1h` - 1 hour ago - `now/h` - the start of the current hour - `now-1h/h` - the start of the previous hour - `now/d+3h` - 3 hours from the start of the current day - `now-1d/d` - the start of the previous day
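To illustrate the manual instrumentation mentioned under "Traces" above, here is a minimal sketch of creating a custom span with the OpenTelemetry API. It assumes the built-in OpenTelemetry integration is active; the tracer name `my-app` and the fetched URL are placeholders:

```typescript
import { trace } from "npm:@opentelemetry/api@1";

const tracer = trace.getTracer("my-app");

Deno.serve((req) =>
  // startActiveSpan makes the span current, so logs emitted inside the
  // callback and automatic child spans (like the fetch below) attach to it.
  tracer.startActiveSpan("handle-request", async (span) => {
    span.setAttribute("http.route", new URL(req.url).pathname);
    const upstream = await fetch("https://example.com/"); // traced automatically
    span.end();
    return new Response(await upstream.text());
  })
);
```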
--- # deploy/reference/oidc.md URL: https://docs.deno.com/deploy/reference/oidc Deno Deploy is an OIDC provider. Every running application on Deno Deploy can be issued short-lived JWT tokens that are signed by Deno Deploy. These tokens contain information about the application, such as the organization and application IDs and slugs, the context in which an application is executing, and the running revision ID. The tokens can be used to authenticate with third-party services that support OIDC authentication, such as major cloud providers, but also HashiCorp Vault, NPM, and others. :::tip Do you want to use OIDC tokens to authenticate with AWS or Google Cloud? Use the [Cloud Connections](/deploy/reference/cloud_connections) feature instead of manually configuring OIDC authentication. Cloud Connections handle the entire configuration for you, including setting up trust relationships and permissions. OIDC is used under the hood. ::: ## Issuing Tokens To issue a token for the currently running application, use the `getIdToken()` function from the [`@deno/oidc` module on JSR](https://jsr.io/@deno/oidc). First, install `@deno/oidc` as a dependency of your application:

```sh
deno add jsr:@deno/oidc
```

Then, import the `getIdToken()` function and call it with the desired audience:

```ts
import { getIdToken } from "jsr:@deno/oidc";

const token = await getIdToken("https://example.com/");
console.log(token);
```

The `audience` parameter is a string that identifies the intended recipient of the token. It is typically a URL or an identifier that represents the service or application that will consume the token. The audience value must match the value configured in the third-party service that you want to authenticate with. It will be placed into the `aud` claim of the issued JWT token. The `getIdToken()` function returns a promise that resolves to a JWT token as a string. To check whether your current environment supports OIDC (i.e. whether your application is running on Deno Deploy), you can use the `supportsIssuingIdTokens` namespaced property:

```ts
import { supportsIssuingIdTokens } from "jsr:@deno/oidc";

if (supportsIssuingIdTokens) {
  // OIDC is supported
} else {
  // OIDC is not supported
}
```

## Token Structure The issued tokens are JWT tokens that are signed using the ES256 algorithm. The tokens contain the following claims:

| Claim Name | Example Value | Description |
| --- | --- | --- |
| `iss` | `https://oidc.deno.com` | The issuer of the token, which is always `https://oidc.deno.com`. |
| `aud` | `https://example.com/` | The audience of the token, which is the value passed to the `getIdToken()` function. |
| `iat` | `1757924011` | The issued-at time of the token, which is a Unix timestamp indicating when the token was issued. |
| `exp` | `1757924311` | The expiration time of the token, which is a Unix timestamp indicating when the token will expire. |
| `nbf` | `1757923951` | The not-before time of the token, which is a Unix timestamp indicating when the token becomes valid. |
| `sub` | `deployment:deno/astro-app/production` | The subject of the token, which has the form `deployment:<org_slug>/<app_slug>/<context_name>`. |
| `org_id` | `729adb8f-20d6-4b09-bb14-fac14cb260d1` | The unique identifier of the organization that owns the application. |
| `org_slug` | `deno` | The slug of the organization that owns the application. |
| `app_id` | `16ad21d8-7aeb-4155-8aa3-9f58df87cd3e` | The unique identifier of the application. |
| `app_slug` | `astro-app` | The slug of the application. |
| `context_id` | `1d685676-92d7-418d-b103-75b46f1a58b4` | The unique identifier of the context in which the application is running. |
| `context_name` | `production` | The context in which the application is running. |
| `revision_id` | `rh2r15rgy802` | The unique identifier of the revision of the application that is currently running. |
| `deployment_id` | | A unique hash containing the entire deployment metadata, including the application, revision, and context IDs. |

Tokens expire 5 minutes after they are issued. To account for clock skew, the token's `nbf` claim is set to 1 minute before the `iat` claim. ## Verifying Tokens To verify the tokens issued by Deno Deploy, you need to fetch the public keys from the OIDC provider's JWKS endpoint. The JWKS endpoint for Deno Deploy is:

```
https://oidc.deno.com/.well-known/jwks.json
```

Use the `kid` (key ID) from the JWT token header to select the correct key from the JWKS response. Deno Deploy also provides a standard OIDC discovery document at:

```
https://oidc.deno.com/.well-known/openid-configuration
```

Deno Deploy rotates its signing keys periodically. Therefore, it is important to fetch the JWKS keys dynamically from the JWKS endpoint rather than hardcoding them. Currently, Deno Deploy signing keys use the `ES256` algorithm. This may change in the future, depending on security requirements, best practices, and support in third-party services. To verify the tokens, you can use a JWT library that supports OIDC and JWKS. In TypeScript, you can use the [`jose`](https://jsr.io/@panva/jose) library. --- # deploy/reference/organizations.md > Guide to creating and managing organizations in Deno Deploy, including members, permissions, and organization administration. URL: https://docs.deno.com/deploy/reference/organizations Organizations are groups of users that collectively own apps and domains. When signing up for Deno Deploy, each user can either create an organization or join an existing organization through invitation. All users must belong to an organization to use Deno Deploy, as all resources are owned at the organization level. Organizations have both a name and a slug. The name is visible only to organization members and appears in the organization dropdown in both Deno Deploy and Deploy Classic. The slug forms part of the default domain for all applications in the organization. :::caution Organizations cannot be renamed, nor can their slug be changed after creation. ::: Every organization has a default domain used for production, git branch, and preview URLs for projects in that organization. For example, an organization with the slug `acme-inc` would have a default domain of `acme-inc.deno.net`. Organizations can have multiple members. Currently, all members have owner permissions for the organization, which means they can invite other members, create and delete apps, and manage domains. ## Create an organization Organizations in Deno Deploy are created when you sign up for a Deno Deploy account.
If you do not yet have a Deno Deploy account, you can create one by visiting the [Deno Deploy dashboard](https://console.deno.com) and signing in with your GitHub account. You will be prompted to create an organization as part of the sign-up process. :::info Organization slugs must be unique across all Deno Deploy organizations and cannot match any existing project name in Deno Deploy Classic. ::: ## Deleting an organization Organizations cannot currently be deleted from the dashboard. Please [contact Deno support](/deploy/support/) if you need to delete an organization. ## Inviting users to an organization To invite a user: 1. Go to the organization settings page and click "+ Invite User" 2. Enter the user's GitHub account username (e.g., `ry`) 3. Optionally enter an email address to send the invitation to 4. Click "Invite" If you don't specify an email address, we'll attempt to send the invitation to the email in the user's public GitHub profile or another email we may have on record. After inviting a user, they will receive an email with an invite link (if we have their email address). They must click this link and accept the invitation to join the organization. You can also directly share the personalized invite link displayed in the members table after inviting a user. You can cancel an invitation before it's accepted by clicking the delete button next to the invited user in the members table and confirming by clicking "Save". This invalidates the previously sent invitation link. ## Removing users from an organization To remove a member from the organization, find the user in the members table in the organization settings, click the remove button, and confirm by clicking "Delete". --- # deploy/reference/playgrounds.md > Write and deploy code completely from Deno Deploy, without the need for a git repository. URL: https://docs.deno.com/deploy/reference/playgrounds ![Deno Deploy Playground](https://deno.com/video/playground-1-720.mp4) Playground applications enable you to create, edit, and deploy applications entirely from the Deno Deploy web dashboard, without needing to create a GitHub repository. Playgrounds contain one or more files (JavaScript, TypeScript, TSX, JSON, etc.) that you can edit directly in the playground editor. ## Creating a playground You can create playgrounds from the "Applications" page in your organization. Click the "New Playground" button to create a basic "Hello World" playground. Using the dropdown on the "New Playground" button lets you create playgrounds from other templates, such as Next.js or Hono. ## Editing a playground To edit a playground, open it from the "Applications" page in your organization. The playground editor consists of five main sections: - **Code editor**: The central area where you edit code for the currently selected file. Above the editor is a navbar showing the current file name, which you can click to edit. - **File browser**: Located on the left of the code editor, this panel shows all files in the playground. Click any file to open it in the editor. Create new files by clicking the "New" icon at the top of the file browser. Delete files using the delete button next to each file name. - **Top bar**: Located above the code editor, this contains action buttons for the playground. The "Deploy" button saves current changes and triggers a build. "Build Config" and "Env Variables" buttons open their respective configuration drawers.
The left side of the top bar displays the playground URL (unless the playground hasn't been deployed yet). - **Bottom drawer**: Located beneath the code editor, this contains debugging tools including "Build Logs" that show build progress during deployment, and tabs for viewing logs and traces. - **Right drawer**: Located to the right of the code editor, this contains tools for inspecting application output. The "Preview" tab displays an iframe showing the deployed application, while "HTTP Explorer" lets you send individual HTTP requests to your deployment. The playground content automatically saves when you click the "Deploy" button or when the editor loses focus. ## Uploading files You can upload a zip file containing files and directories to the playground by dragging it into the file browser area. The contents of the zip file will be extracted into the playground, preserving the directory structure. > ⚠️ The playground editor does not support uploading individual files or > directories. ## Using the HTTP explorer The HTTP Explorer tab in the playground allows you to make arbitrary HTTP requests to any URL served by the playground. This is useful for testing APIs or other services that do not serve a web page. To use the HTTP Explorer, enter the path and query parameters for the request you want to make, select the HTTP method (GET, POST, etc.), and click on the button labeled with the selected method. Additional request headers can be added by clicking the "Set Headers" button. After the response has been received, the HTTP Explorer will display the response status, headers, and body. To view the trace for the request, click on the "Trace" button in the response section. This will open the trace for the request in a drawer on top of the playground editor. From there you can also view any `console.log` output that was captured during the request. ## Renaming a playground You can rename a playground by editing the playground slug on the playground settings page. This will update the default domain names associated with the playground since they are based on the playground slug. The new slug must be unique within the organization (i.e. must not be in use by another app or playground in the same organization). :::info Any previous `deno.net` URLs pointing to the playground will no longer work after renaming. Custom domains will continue to work, as they are not tied to the playground slug. ::: ## Deleting a playground Playgrounds can be deleted from the playground settings page. This will remove the playground and all its revisions from the organization. All existing deployments will immediately stop serving traffic, and all custom domain associations will be removed. The playground and its revisions will no longer be accessible after deletion. Deleted playgrounds cannot be restored through the Deno Deploy UI. :::info Deleted a playground by mistake? Contact Deno support within 30 days to restore it. ::: ## Limitations > ⚠️ Playgrounds cannot currently be transferred to another organization. --- # deploy/reference/prisma_postgres.md URL: https://docs.deno.com/deploy/reference/prisma_postgres Prisma Postgres is a serverless PostgreSQL database that is instantly responsive and effortlessly scalable. Built on bare metal infrastructure with zero cold starts and built-in global caching, it scales to zero when idle and handles traffic spikes seamlessly.
Through Deno Deploy's database provisioning feature, you can create and manage Prisma Postgres instances that automatically integrate with your applications and their deployment environments. ## Overview When you provision a Prisma Postgres database and assign it to an app, Deno Deploy automatically creates separate databases for each deployment environment: - Production deployments use `{app-id}-production` - Git branches get `{app-id}--{branch-name}` - Preview deployments use `{app-id}-preview` Your code automatically connects to the correct database for each environment without requiring timeline detection or manual configuration. ## Provisioning a Prisma Postgres Database ### Creating the Instance 1. Navigate to your organization dashboard and click "Databases" in the navigation bar. 2. Click "Provision Database". 3. Select "Prisma Postgres" from the available options. 4. Give your database instance a name. 5. Complete the provisioning flow. ### Assigning to an App Once your Prisma Postgres instance is provisioned: 1. From the database instances list, click "Assign" next to your Prisma Postgres instance. 2. Select the app from the dropdown. 3. Optionally, configure a migration command that will run automatically after each build (see [Automated Migrations](#automated-migrations) for details). 4. Deno Deploy will automatically provision separate databases for production, Git branches, and preview environments. 5. Monitor the provisioning status as it changes to "Connected". ## Using Prisma Postgres in Your Code ### Zero Configuration Required Once assigned, your code automatically connects to the correct Prisma Postgres database for each environment. Deno Deploy injects standard PostgreSQL environment variables into your runtime: - `PGHOST` - Database host (db.prisma.io) - `PGPORT` - Database port (5432) - `PGDATABASE` - Database name (automatically selected for your environment) - `PGUSER` - Database username - `PGPASSWORD` - Database password - `PGSSLMODE` - SSL mode configuration - `DATABASE_URL` - Standard PostgreSQL connection string (`postgresql://user:password@db.prisma.io:5432/database`) - `PRISMA_ACCELERATE_URL` - Connection URL for Prisma Accelerate, a global connection pooling and caching layer that provides optimized database access with reduced latency ### Example with pg ```typescript import { Pool } from "npm:pg"; // No configuration needed - Deno Deploy handles this automatically const pool = new Pool(); Deno.serve(async () => { const result = await pool.query("SELECT * FROM users WHERE id = $1", [123]); return new Response(JSON.stringify(result.rows), { headers: { "content-type": "application/json" }, }); }); ``` ### Example with Prisma ORM ```typescript import { PrismaClient } from "@prisma/client"; // Prisma Client automatically uses DATABASE_URL environment variable const prisma = new PrismaClient(); Deno.serve(async () => { const users = await prisma.user.findMany(); return new Response(JSON.stringify(users), { headers: { "content-type": "application/json" }, }); }); ``` ## Environment-Specific Databases Each environment automatically receives its own isolated database: - **Production**: When code is deployed to production, it connects to `{app-id}-production` - **Git Branches**: Branch deployments connect to `{app-id}--{branch-name}` - **Preview Deployments**: Preview timelines connect to `{app-id}-preview` This isolation ensures production data stays safe while developing and testing. 
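The `PRISMA_ACCELERATE_URL` variable can be paired with Prisma's Accelerate extension for pooled, cached queries. A sketch under the assumption that your schema defines a `User` model, that `@prisma/extension-accelerate` is installed, and that you are on a Prisma version supporting the `datasourceUrl` option (5.2+):

```typescript
import { PrismaClient } from "@prisma/client";
import { withAccelerate } from "@prisma/extension-accelerate";

// Route queries through the injected Accelerate connection URL.
const prisma = new PrismaClient({
  datasourceUrl: Deno.env.get("PRISMA_ACCELERATE_URL"),
}).$extends(withAccelerate());

Deno.serve(async () => {
  // cacheStrategy is an Accelerate feature: serve cached results for 60s.
  const users = await prisma.user.findMany({ cacheStrategy: { ttl: 60 } });
  return new Response(JSON.stringify(users), {
    headers: { "content-type": "application/json" },
  });
});
```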
## Schema Management and Migrations Since each environment has its own database, you can safely test schema changes and migrations without affecting production data. ### Automated Migrations When assigning a Prisma Postgres database to an app, you can configure a migration command that automatically runs after each successful build. This ensures your database schema stays synchronized with your application code across all environments. **Setting Up Automated Migrations:** 1. When assigning a database to an app (or editing an existing assignment), enter a migration command in the "Migration Command" field. 2. This command executes automatically after every successful build of a new revision. 3. The command runs once for each database that the revision can connect to - meaning it executes separately for production, each Git branch database, and preview databases. 4. The migration command runs with the same environment variables available to your application, including `DATABASE_URL`. **Example using Prisma Migrate:** Add a task to your `deno.json`: ```json { "tasks": { "migrate": "deno run --allow-net --allow-env --allow-read npm:prisma migrate deploy" } } ``` Then set your migration command to `deno task migrate` when assigning the database to your app. Deno Deploy will automatically run this command after each build, applying your migrations to all environment-specific databases. ### Using Prisma Tooling Locally To manage your database schema with Prisma from your local machine, you'll need the connection string for the specific environment database you want to work with. You can obtain the `DATABASE_URL` by clicking the URL button found in the database table on your database instance detail page in the Deno Deploy dashboard. #### Generate Prisma Client After defining or updating your Prisma schema, generate the Prisma Client: ```bash npx prisma generate ``` This creates the type-safe database client based on your schema. #### Run Migrations To apply migrations to a specific environment database from your local machine, use the connection string for that environment: ```bash # Apply migrations to production database DATABASE_URL="postgresql://user:pass@db.prisma.io:5432/3ba03b-production" npx prisma migrate deploy # Apply migrations to a branch database DATABASE_URL="postgresql://user:pass@db.prisma.io:5432/3ba03b--feature-branch" npx prisma migrate deploy # Apply migrations to preview database DATABASE_URL="postgresql://user:pass@db.prisma.io:5432/3ba03b-preview" npx prisma migrate deploy ``` For development, you can create and apply migrations interactively: ```bash DATABASE_URL="postgresql://user:pass@db.prisma.io:5432/3ba03b-dev" npx prisma migrate dev ``` #### Seed the Database To populate your database with initial data using Prisma's seeding feature: ```bash # Seed production database DATABASE_URL="postgresql://user:pass@db.prisma.io:5432/3ba03b-production" npx prisma db seed # Seed branch database DATABASE_URL="postgresql://user:pass@db.prisma.io:5432/3ba03b--feature-branch" npx prisma db seed ``` ## Local Development When developing locally with a Prisma Postgres database from Deploy, create a `.env` file in your project root with the connection details. You can obtain the `DATABASE_URL` by clicking the URL button found in the database table on your database instance detail page in the Deno Deploy dashboard. 
```bash
PGHOST=db.prisma.io
PGPORT=5432
PGDATABASE=3ba03b-dev
PGUSER=your-username
PGPASSWORD=your-password
PGSSLMODE=require
DATABASE_URL=postgresql://your-username:your-password@db.prisma.io:5432/3ba03b-dev
```

Run your application with the `--env` flag:

```bash
deno run --env --allow-all main.ts
```

## Managing Your Prisma Postgres Instance

### Viewing Details

Click on your Prisma Postgres instance in the Databases dashboard to view:

- Connection information
- Assigned apps
- Individual databases created for each environment
- Health and connection status

### Claiming Your Prisma Project

When you provision a Prisma Postgres database, Deno Deploy creates a free-tier project on prisma.io. This free tier includes 100K operations per month, 500 MB storage, and 5 databases.

To upgrade your Prisma subscription plan and lift the free tier limits, you'll need to claim your database project on prisma.io:

1. Go to your database instance detail page in the Deno Deploy dashboard.
2. Click the "Claim on Prisma" button.
3. You'll be guided through the Prisma project claim flow.
4. Select a workspace in Prisma where you want to claim the project.

Once claimed, you can manage your Prisma subscription and upgrade your plan directly through prisma.io to increase operation limits, storage capacity, and access additional features.

### Status Indicators

- **🟢 Connected** - All databases are ready and working
- **🟡 Creating** - Databases are being provisioned
- **🔴 Error** - Some databases failed to create
- **⚪ Unassigned** - No apps are using this database yet

### Managing App Assignments

To edit an existing app-database assignment (including updating the migration command):

1. Go to the database detail page.
2. Find the app in the "Assigned Apps" table.
3. Click "Edit" next to the app.

To disconnect an app from your Prisma Postgres instance:

1. Go to the database detail page.
2. Find the app in the "Assigned Apps" table.
3. Click "Remove" next to the app.

The databases remain in your Prisma Postgres instance - only the connection between your app and the instance is removed.

## Troubleshooting

### Provisioning Issues

**"Database creation failed"** may indicate:

- Insufficient capacity or quota limits
- Naming conflicts with existing databases
- Temporary service issues

Try using the "Fix" button to retry failed operations.

### Connection Issues

**"Error" status** can be resolved by:

- Using the "Fix" button to retry failed operations
- Checking the database detail page for more information
- Verifying your app is deployed and running

## Frequently Asked Questions

**Q: Can multiple apps share the same Prisma Postgres instance?**

Yes! Multiple apps can be assigned to the same Prisma Postgres instance. Each app gets its own isolated databases within that instance.

**Q: What happens to my data when I remove an app assignment?**

The databases remain in your Prisma Postgres instance. Only the connection between your app and the database is removed.

**Q: How do I access my Prisma Postgres databases directly?**

Use the connection details from your Deno Deploy dashboard with any PostgreSQL client tool (psql, pgAdmin, TablePlus, etc.). Connect using the specific database name shown for each environment.

**Q: Can I use the same database for multiple environments?**

By default, each environment gets its own database for isolation. You can override this by explicitly configuring your database connection in code, though this is not recommended for production applications.
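As a sketch of what such an override might look like, the following builds a connection explicitly from the injected variables but pins a single database name; `3ba03b-shared` is a hypothetical name, not one Deno Deploy creates for you:

```typescript
import { Pool } from "npm:pg";

// Explicitly configured connection: every environment talks to the
// same database instead of its environment-specific default.
// "3ba03b-shared" is a hypothetical name - substitute one that
// actually exists in your Prisma Postgres instance.
const pool = new Pool({
  host: Deno.env.get("PGHOST"),
  port: Number(Deno.env.get("PGPORT") ?? 5432),
  user: Deno.env.get("PGUSER"),
  password: Deno.env.get("PGPASSWORD"),
  database: "3ba03b-shared",
  ssl: true,
});

const { rows } = await pool.query("SELECT current_database()");
console.log(rows[0]);
```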
**Q: How do I delete a Prisma Postgres instance?**

First remove all app assignments, then click "Delete" on the database instance. This removes the Prisma Postgres instance and all its data permanently.

---

# deploy/reference/runtime.md

> Details about the Deno Deploy runtime environment, including application lifecycle, startup, shutdown, and cold start optimization.

URL: https://docs.deno.com/deploy/reference/runtime

In Deno Deploy, all applications execute using a standard Deno runtime in a secure, isolated Linux environment.

The Deno runtime used in Deno Deploy is [the standard Deno runtime](/runtime/), with full support for all features of the Deno CLI, including JSR and NPM dependencies, reading and writing to the file system, making network requests, spawning subprocesses, and loading FFI and Node native addons. The Deno runtime runs using `--allow-all` permissions. Custom flags cannot be passed to the Deno runtime.

## Runtime environment

The runtime environment is a Linux-based environment running either x64 or ARM64 architecture. The exact set of tools available in the runtime environment is subject to change and thus cannot be relied upon. Currently, Deno Deploy runs on Deno 2.5.0.

## Lifecycle

Deno Deploy runs applications in a serverless environment. This means that an application is not always running and is only started when a request is received. When no incoming traffic is received for a period of time, the application is stopped.

Applications can be started and stopped at any time. They should start quickly to respond to incoming requests without delay.

Multiple instances of the same application can run simultaneously. For example, one instance could be running in the US and another in Europe. Each instance is completely isolated from the others and they do not share CPU, memory, or disk resources. Multiple instances can also start in the same region when needed, such as to handle high traffic or during infrastructure updates.

### Startup

When the system decides to start an application, it provisions a new sandbox environment for the application. This environment is isolated from all other applications. It then starts the application using the configured entrypoint and waits for the HTTP server to start. If the application crashes before the HTTP server starts, the request that triggered the start will fail with a 502 Bad Gateway error. Once the application is started, incoming requests are routed to it and responses are sent back to the client.

### Shutdown

The application remains alive as long as it is receiving requests or sending responses (including response body bytes). Once neither has happened for a period of time - the exact timeout is between 5 seconds and 10 minutes - the application is stopped. WebSocket connections that actively transmit data (including ping/pong frames) also keep the application alive.

Once the system decides to stop the application, it sends a `SIGINT` signal to the application as a trigger to shut down. From this point on, the application has 5 seconds to shut down gracefully before it will be forcibly killed with a `SIGKILL` signal.

### Eviction

Sometimes an isolate may shut down even if the application is actively receiving traffic. Some examples of when this can happen are:

- An application was scaled up to handle load, but the load has decreased enough to be handled by a single instance again.
- The underlying server executing the instance is too resource constrained to continue running this application instance.
- The underlying infrastructure is being updated or has experienced a failure.

When the system decides to evict an application, it attempts to divert traffic away from the instance being evicted as early as possible. Sometimes this means that a request will wait for a new instance to boot up even though an existing instance is already running.

When an application only serves requests that finish quickly, evictions are usually unnoticeable. For applications that serve long-running requests or WebSockets, evictions can be more noticeable because the application may need to be evicted while still processing a request. The system will try to avoid these scenarios, but it is not always possible. After traffic has been diverted away from the old instance, the system sends a `SIGINT` signal to trigger a graceful shutdown. The application should finish processing any remaining requests quickly and shut down WebSockets and other long-running connections. Clients making long-running requests should be prepared to handle these disruptions and reconnect when disconnected. 5 seconds after the `SIGINT` signal is sent, the old instance will be forcibly killed with a `SIGKILL` signal if it has not already shut down gracefully.

## Cold starts

Because applications are not always running, they may need to start when a request is received. This is called a cold start. Cold starts in Deno Deploy are highly optimized and complete within 100 milliseconds for hello world applications, and within a few hundred milliseconds for larger applications.

Deno Deploy uses multiple optimizations to enable fast cold starts:

- Sandboxes and the Deno runtime are pre-provisioned to ensure they don't need to be created from scratch when starting an application.
- Applications start immediately when the client sends the first TCP packet to establish a TLS connection. For fast-starting applications, depending on the network round trip latency, the application may already be running before the client sends the HTTP request.
- File system access is optimized for frequently used startup files. Deno Deploy analyzes file access patterns during the build step's warmup phase and optimizes the file system for faster access.

When cold starts are slow, they can negatively impact user experience. To optimize your application for quick startup:

1. Minimize dependencies used by your application.
2. Load infrequently accessed code and dependencies lazily using dynamic `import()`.
3. Minimize I/O operations during startup, especially top-level `await` operations and network requests.

If your application starts slowly, please [contact Deno support](../support) for help investigating the issue.

---

# deploy/reference/timelines.md

> Understanding deployment timelines in Deno Deploy, including production and development contexts, active revisions, rollbacks, and timeline locking.

URL: https://docs.deno.com/deploy/reference/timelines

A timeline is a representation of the history of one branch of the application. Each timeline has a set of revisions, which are the individual items in the timeline. One of the revisions (usually the most recent one) is the "active" revision, which is the one that is currently serving traffic. The active revision receives traffic on all URLs that are assigned to the timeline.

Each timeline is associated with a [context](/deploy/reference/env_vars_and_contexts/), which determines which environment variables are available to the code running in that timeline.
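For example, the same revision can read a variable whose value is configured differently in the production and development contexts. A minimal sketch; `LOG_LEVEL` here is a hypothetical, user-defined variable, not one Deno Deploy provides:

```typescript
// Which value appears here depends on the context associated with
// the timeline serving this request: e.g. "error" when set in the
// production context, "debug" when set in the development context.
// LOG_LEVEL is a hypothetical, user-defined environment variable.
const logLevel = Deno.env.get("LOG_LEVEL") ?? "info";

Deno.serve(() => new Response(`Log level: ${logLevel}`));
```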
By default, there are multiple timelines set up for each application:

- **Production**: The production timeline contains all of the revisions from the default git branch. This is the timeline that serves production traffic. This timeline is associated with `https://<app>.<org>.deno.net`, and any custom domains that are mapped to the application. It uses the production context.
- **Git Branch / `<branch>`**: Each git branch has its own timeline. This timeline contains all of the revisions from that git branch. This timeline is associated with `https://<app>--<branch>.<org>.deno.net`. It uses the development context.

> There is also one timeline for each revision, that contains only that revision. This is the timeline that backs the preview URL for that revision. This timeline is associated with `https://<app>-<revision>.<org>.deno.net`. It uses the development context.
>
> Preview timelines are not visible in timeline pages in the UI. You can view the preview URL for a revision on that revision's build page.

You can view the timelines that each revision is associated with on the revision's build page. You can also view the revisions that are associated with a given timeline from the timeline pages.

## Active revision

Each timeline has an active revision. The active revision is the revision that is currently serving traffic for that timeline. You can view the active revision for a timeline on the timeline page.

Usually, the active revision is the most recently built revision on the timeline. However, a different revision can be manually locked to be the active revision. This enables rollback and timeline locking:

### Rollback

Rollback is the process of reverting the active revision to a previous revision, usually because the newer revision has some sort of bug or issue. By rolling back to a known good revision, you can restore the application to a working state without having to deploy new code via Git and wait for a build to complete.

Refer to "changing the active revision" below for more information on how to roll back a timeline.

### Timeline locking

Timeline locking is the process of locking a timeline to a specific revision, to ensure that new builds do not automatically become the active revision. This is useful if you are in a feature freeze situation, for example during a big event, and want to de-risk by not allowing new builds to be deployed. When a timeline is locked to a specific revision, you can still create new builds by pushing to Git, but they will not automatically become the active revision on the locked timeline.

Refer to "changing the active revision" below for more information on how to lock a timeline to a specific revision.

### Changing the active revision

On the timelines page, you can lock any revision on that timeline to be the active revision. This will lock the timeline to that revision, and new builds will not automatically become the active revision on this timeline anymore. You can then either unlock the revision from the timeline, reverting back to the default behavior of the latest revision being the active revision, or you can lock a different revision to be the active revision.

---

# Security and responsible disclosure

> How to report security vulnerabilities in Deno Deploy.

URL: https://docs.deno.com/deploy/security

We consider the security of our systems, and all data controlled by those systems, a top priority. No matter how much effort we put into system security, it is still possible that security vulnerabilities are present.
We appreciate investigative work into system security carried out by well-intentioned, ethical security researchers. If you discover a vulnerability, however small, we would like to know about it so we can address it with appropriate measures, as quickly as possible. This page outlines the method we use to work with the security research community to address our system security.

## Reporting a vulnerability

Please email your findings to security@deno.com. We strive to resolve all problems as quickly as possible, and are more than happy to play an active role in publication of writeups after the problem is resolved.

## Please do the following:

- Do not take advantage of the vulnerability or problem you have discovered. For example, only download data that is necessary to demonstrate the vulnerability - do not download any more. Also do not delete, modify, or view other people's data.
- Do not publish or reveal the problem until it has been resolved.
- Do not use attacks on physical security, social engineering, distributed denial of service, spam, or applications of third parties.
- Do provide sufficient information to reproduce the problem, so we will be able to resolve it as quickly as possible. Usually, the IP address or the URL of the affected system and a description of the vulnerability will be sufficient, but complex vulnerabilities may require further explanation.

## Our commitment

- If you act in accordance with this policy, we will not take legal action against you in regard to your report.
- We will handle your report with strict confidentiality, and not pass on your personal details to third parties without your permission.

---

# deploy/support/index.md

URL: https://docs.deno.com/deploy/support/

If you have any questions or feedback about Deno Deploy, please reach out to us on the [Deno Discord](https://discord.gg/deno) in the `#deploy` channel or [contact us](mailto:deploy@deno.com). We are actively working on improving the platform and would love to hear your thoughts!

---

# Terms and Conditions

> Deno Terms and Conditions

URL: https://docs.deno.com/deploy/terms_and_conditions

**DENO TERMS AND CONDITIONS**

09 September 2024

These Terms and Conditions (these “Terms”) are a legal agreement between you and Deno Land Inc. (“Deno,” “we,” “us,” or “our”). They specify the terms under which you may access and use (i) our website at [https://deno.com](https://deno.com) (the “Site”); (ii) any websites, applications or other digital properties that link to these Terms; and (iii) the products and services (the “Deno Offerings”) we offer to you on our proprietary platform (the “Platform”) via the following websites:

- Deno Deploy ([https://deno.com/deploy](https://deno.com/deploy))
- Deno Deploy Classic ([https://deno.com/deploy/classic](https://deno.com/deploy/classic))
- Deno Subhosting ([https://deno.com/subhosting](https://deno.com/subhosting))

By accessing or using the Site or any other digital property that links to these Terms, you may learn about Deno and our technology platform, and registered customers may also access the Deno Offerings (collectively, the “Services”).

PLEASE READ THESE TERMS CAREFULLY. BY ACCESSING AND/OR USING THE SERVICES, YOU ACKNOWLEDGE THAT YOU HAVE READ, UNDERSTOOD, AND AGREE TO BE LEGALLY BOUND BY THESE TERMS, THE DATA PROCESSING ADDENDUM (THE “DPA”), AND THE TERMS AND CONDITIONS OF OUR PRIVACY POLICY (THE “PRIVACY POLICY”), WHICH ARE HEREBY INCORPORATED INTO THESE TERMS AND MADE A PART HEREOF BY REFERENCE (COLLECTIVELY, THE “AGREEMENT”).
IF YOU DO NOT AGREE TO ANY OF THE TERMS IN THIS AGREEMENT, THEN PLEASE DO NOT USE THE SERVICES. If you accept or agree to the Agreement on behalf of a company or other legal entity, you represent and warrant that you have the authority to bind that company or other legal entity to the Agreement and, in such event, “you” and “your” will refer and apply to that company or other legal entity. We reserve the right, at our sole discretion, to modify, discontinue, or terminate the availability of any Services, or modify this Agreement, at any time and without prior notice. We encourage you to check these Terms and the “Last Update” date above whenever you access or use the Services. By continuing to access or use the Services after we have posted a modification to these Terms, you are indicating that you agree to be bound by the modified Agreement. If the modified Agreement is not acceptable to you, your only recourse is to cease accessing or using the Services. Deno also offers fee-based products and services (including, from time to time, as free trials), which may offer access to certain data products and/or services (“Paid Products”). We provide access to and use of our Paid Products pursuant to commercial agreements, associated with the applicable Paid Products made available to you at the time of purchase (each, a “Commercial Agreement”). If there is a conflict between these Terms and terms and conditions of the applicable Commercial Agreement associated with the Paid Products you are purchasing, the terms and conditions of the Commercial Agreement will take precedence with respect to the use of or access to such Paid Products. Capitalized terms not defined in these Terms shall have the meaning set forth in our Privacy Policy. **THE SECTIONS BELOW TITLED “BINDING ARBITRATION” AND “CLASS ACTION WAIVER” CONTAIN A BINDING ARBITRATION AGREEMENT AND CLASS ACTION WAIVER. THEY AFFECT YOUR LEGAL RIGHTS. PLEASE READ THEM CAREFULLY.** 1. **DESCRIPTION OF THE SERVICES; RIGHT TO ACCESS AND USE THE SERVICES** **Deno Deploy** and **Deno Subhosting** are globally distributed platforms for serverless JavaScript applications. Your JavaScript, TypeScript, and WebAssembly code runs on managed servers geographically close to your users, enabling low latency and faster response times. Deploy and Subhosting applications run on fast, light-weight V8 isolates rather than virtual machines, powered by the Deno runtime. Subject to the terms and conditions of this Agreement, Deno hereby grants you during the term of this Agreement a limited, non-exclusive, non-transferable, non-sublicensable, revocable right, to access and use the Services solely for your internal business purposes. Deno reserves the right to, at any time, and without notice or liability to you: 1. Block and disable any deployments that, for any reason, make the Platform unstable; 2. Change the regions in which the Services run, 3. Change which features are supported by the Services; and 4. Modify or discontinue the availability of any other feature, function, or content relating to the Services. You agree that we will not be liable to you or to any third party for any modification, suspension, or discontinuance of the Services or any part thereof. You are free to stop using the Services at any time. 2. **ACCOUNT CREDENTIALS** In order to use the Deno Offerings, you must be an “Authorized User”. To become an Authorized User, you need to create an account on the Platform, and authenticate via GitHub (collectively, the “Account Credentials”). 
When creating the account, each Authorized User must provide true, accurate, current, and complete information. Each Account Credential can be used by only one Authorized User. Each Authorized User is responsible for the confidentiality and use of his/her Account Credentials, including all activities that are associated with his/her Account Credentials. Authorized Users must promptly inform us of any need to deactivate any Account Credentials. Deno is under no obligation to accept any individual as Authorized User, and may accept or reject any registration in its sole and complete discretion. We have the right to disable any Account Credentials at any time for any reason, including if, in our sole discretion, we believe that you have failed to comply with these Terms. 3. **USE OF PERSONAL INFORMATION** Your use of the Services may involve the transmission to us of certain personal information. Our policies with respect to the collection and use of such personal information are governed according to our Privacy Policy, which is hereby incorporated by reference in its entirety. 4. **INTELLECTUAL PROPERTY** The Services may contain material, such as software, text, graphics, images, sound recordings, audiovisual works, and other material provided by or on behalf of Deno (collectively referred to as the “Content”). The Content may be owned by us or by third parties. The Content is protected under both United States and foreign laws. Unauthorized use of the Content may violate copyright, trademark, and other laws. You have no rights in or to the Content, and you will not use the Content except as permitted under this Agreement. No other use is permitted without prior written consent from us. You must retain all copyright and other proprietary notices contained in the original Content on any copy you make of the Content. You may not sell, transfer, assign, license, sublicense, or modify the Content or reproduce, display, publicly perform, make a derivative version of, distribute, or otherwise use the Content in any way for any public or commercial purpose. The use or posting of the Content on any other website or in a networked computer environment for any purpose is expressly prohibited. If you violate any part of this Agreement, your permission to access and/or use the Content, and the Services automatically terminates and you must immediately destroy any copies you have made of the Content. The trademarks, service marks, and logos of Deno (the “Deno Trademarks”) used and displayed on the Services are registered and unregistered trademarks or service marks of Deno. Other company, product, and service names located on the Services may be trademarks or service marks owned by others (the “Third-Party Trademarks,” and, collectively with Deno Trademarks, the “Trademarks”). Nothing on the Services should be construed as granting, by implication, estoppel, or otherwise, any license or right to use the Trademarks, without our prior written permission specific for each such use. Use of the Trademarks as part of a link to or from any website is prohibited unless establishment of such a link is approved in advance by us in writing. All goodwill generated from the use of Deno Trademarks inures to our benefit. Elements of the Services are protected by trade dress, trademark, unfair competition, and other state and federal laws and may not be copied or imitated in whole or in part, by any means, including, but not limited to, the use of framing or mirrors.
None of the Content may be retransmitted without our express, written consent for each and every instance. 5. **USER DATA; USAGE DATA; AGGREGATE DATA** For purposes of this Agreement, “User Data” means (i) any data and information that we ingest by connecting to Authorized Users’ business systems, including but not limited to event logs; and (ii) any data and information that Authorized Users submit through the Services; and “Usage Data” means anonymous, analytical data that Deno collects concerning the performance and your use of the Services, including, without limitation, date and time that you access the Services, the portions of the Services visited, the frequency and number of times such pages are accessed, the number of times the Services is used in a given time period and other usage and performance data. As between the parties, Authorized Users own all right, title, and interest in and to User Data, including all modifications, improvements, adaptations, enhancements, or translations made thereto, and all intellectual rights therein. Authorized Users hereby grant Deno a non-exclusive, worldwide, fully paid-up, royalty-free right and license, with the right to grant sublicenses, to reproduce, execute, use, store, archive, modify, perform, display and distribute User Data: (i) during the term of this Agreement, in furtherance of Deno’s obligations hereunder; and (ii) for Deno’s internal business purposes, including using such data to analyze, update, and improve the Services and Deno’s analytics capabilities and for benchmarking purposes. Notwithstanding anything to the contrary herein, we may use, and may permit our third-party service providers to access and use, User Data, as well as any Usage Data that we may collect, in an anonymous and aggregated form (“Aggregate Data”) for the purposes of operating, maintaining, managing, and improving our products and services including the Services. Aggregate Data does not identify Authorized Users or any individual. You hereby agree that we may collect, use, publish, disseminate, transfer, and otherwise exploit such Aggregate Data. 6. **FEES** Deno offers and Authorized Users can purchase a monthly or annual subscription for the Services (“Subscription”) for a fee set forth on our website (the “Subscription Fee”). Deno may add new fees and charges, or amend fees and charges, at any time in its sole discretion. Payment for a Subscription is due immediately upon making a purchase for a subscription. By making a purchase, you agree to pay Deno, through our third-party payment processor (“Third-Party Payment Processor”), all charges at the fees then in effect for Subscriptions. Any information you provide to the Third-Party Payment Processor will be processed by such Third-Party Payment Processor in accordance with its privacy policy and terms of use. YOU MUST PROVIDE CURRENT, COMPLETE, AND ACCURATE INFORMATION FOR YOUR ACCOUNT, AND PROMPTLY UPDATE ALL INFORMATION TO KEEP SUCH ACCOUNT INFORMATION CURRENT, COMPLETE, AND ACCURATE (SUCH AS A CHANGE IN BILLING ADDRESS, CREDIT CARD NUMBER, OR CREDIT CARD EXPIRATION DATE). FURTHER, YOU MUST PROMPTLY NOTIFY US IF A PAYMENT METHOD IS CANCELED (E.G., FOR LOSS OR THEFT) OR IF YOU BECOME AWARE OF A POTENTIAL BREACH OF SECURITY, SUCH AS THE UNAUTHORIZED DISCLOSURE OR USE OF YOUR USERNAME OR PASSWORD. CHANGES TO SUCH INFORMATION CAN BE MADE THROUGH YOUR ACCOUNT.
By purchasing a Subscription, you acknowledge that your Subscription has an initial and recurring payment charge at the then-current Subscription rate, and you agree that Deno may submit monthly charges, in advance, to your chosen payment method without further authorization from you, until you provide notice that you wish to cancel your Subscription or to change your payment method. You further accept responsibility for all recurring charges prior to cancellation, including, where applicable, any charges processed by Deno after the expiration date of your payment card. You may change or terminate your Subscription by emailing us at [support@deno.com](mailto:support@deno.com). If you terminate your Subscription, you may use your Subscription until the end of the then-current billing cycle, and the Subscription will not be renewed after that period expires. Deno does not refund any pre-paid portion of the Subscription fee. Deno may immediately terminate or suspend your Subscription for any reason or no reason in accordance with these Terms, including for failure to pay the applicable fees when due. If we terminate or suspend your Subscription, your right to use any software or content provided in connection with the Subscription is also terminated or suspended (as applicable). From time to time, Deno may offer a free trial of the Services. Deno reserves the right in its sole discretion to stop offering free trials of the Services at any time without any liability to you. 7. **COMMUNITY GUIDELINES** By accessing and/or using the Services, you hereby agree to comply with the following guidelines: - You will not use the Services for any unlawful purpose; - You will not access or use the Services to collect any market research for a competing business; - You will not upload, post, e-mail, transmit, or otherwise make available any content that infringes any copyright, trademark, right of publicity, or other proprietary rights of any person or entity; - You will not impersonate any person or entity or falsely state or otherwise misrepresent your affiliation with a person or entity; - You will not decompile, reverse engineer, disassemble, or otherwise attempt to discern the source code or interface protocols of any software or other products or processes accessible through the Services; - You will not remove or modify any proprietary markings or restrictive legends placed on the Services; - You will not use the Services, or any portion or component thereof in violation of any applicable law, in order to build a competitive product or service, or for any purpose not specifically permitted in these Terms; - You will not cover, obscure, block, or in any way interfere with any advertisements and/or safety features on the Services; - You will not circumvent, remove, alter, deactivate, degrade, or thwart any of the protections in the Services; - You will not introduce, post, or upload to the Services any Harmful Code.
As used herein, “Harmful Code” means computer code, programs, or programming devices that are intentionally designed to disrupt, modify, access, delete, damage, deactivate, disable, harm, or otherwise impede in any manner, including aesthetic disruptions or distortions, the operation of the Services, or any other associated software, firmware, hardware, computer system, or network (including, without limitation, “Trojan horses,” “viruses,” “worms,” “time bombs,” “time locks,” “devices,” “traps,” “access codes,” or “drop dead” or “trap door” devices) or any other harmful, malicious, or hidden procedures, routines or mechanisms that would cause the Services to cease functioning or to damage or corrupt data, storage media, programs, equipment, or communications, or otherwise interfere with the operations of the Services; - You will not take any action that imposes or may impose (in our sole discretion) an unreasonable or disproportionately large load on our technical infrastructure; and - You will not interfere with or attempt to interrupt the proper operation of the Services through the use of any virus, device, information collection or transmission mechanism, software or routine, or access or attempt to gain access to any data, files, or passwords related to the Services through hacking, password or data mining, or any other means. Although we are not obligated to monitor access to or use of the Services, we have the right to do so for the purpose of operating them, to ensure compliance with these Terms, and to comply with applicable law or other legal requirements. We have the right to investigate violations of these Terms or conduct that affects the Services. We may also consult and cooperate with law enforcement authorities to prosecute Users who violate the law. If you find something that violates our User Guidelines, please let us know, and we will review it. 8. **LINKING AND CITATION OF CONTENT** Deno does not object to links on third-party Services to our homepage in an appropriate context. However, “framing” or “mirroring” the Services or the Content is prohibited without the prior express written consent of Deno. 9. **RESTRICTIONS** The Services are available only for individuals aged 18 years or older. If you are under 18 years of age, then please do not access and/or use the Services. By entering into this Agreement, you represent and warrant that you are 18 years or older. 10. **FEEDBACK** We welcome and encourage you to provide feedback, comments, and suggestions for improvements to the Services and our services (“Feedback”). Although we encourage you to e-mail us, we do not want you to, and you should not, e-mail us any content that contains confidential information. With respect to any Feedback you provide, we shall be free to use and disclose any ideas, concepts, know-how, techniques, or other materials contained in your Feedback for any purpose whatsoever, including, but not limited to, the development, production and marketing of products and services that incorporate such information, without compensation or attribution to you. 11. 
**NO WARRANTIES; LIMITATION OF LIABILITY** THE SERVICES AND THE CONTENT ARE PROVIDED ON AN “AS IS” AND “AS AVAILABLE” BASIS, AND NEITHER DENO NOR DENO’S SUPPLIERS MAKE ANY WARRANTIES WITH RESPECT TO THE SAME OR OTHERWISE IN CONNECTION WITH THIS AGREEMENT, AND DENO HEREBY DISCLAIMS ANY AND ALL EXPRESS, IMPLIED, OR STATUTORY WARRANTIES, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AVAILABILITY, ERROR-FREE OR UNINTERRUPTED OPERATION, AND ANY WARRANTIES ARISING FROM A COURSE OF DEALING, COURSE OF PERFORMANCE, OR USAGE OF TRADE. TO THE EXTENT THAT DENO AND DENO’S SUPPLIERS MAY NOT AS A MATTER OF APPLICABLE LAW DISCLAIM ANY IMPLIED WARRANTY, THE SCOPE AND DURATION OF SUCH WARRANTY WILL BE THE MINIMUM PERMITTED UNDER SUCH LAW. WITHOUT LIMITING THE FOREGOING, WE DO NOT WARRANT, GUARANTEE OR MAKE ANY REPRESENTATION, NOR SHALL WE BE RESPONSIBLE FOR (A) THE CORRECTNESS, ACCURACY, RELIABILITY, COMPLETENESS OR CURRENCY OF THE SERVICES; OR (B) ANY RESULTS ACHIEVED OR ACTION TAKEN BY YOU IN RELIANCE ON THE SERVICES OR THE CONTENT OR ALERTS PROVIDED THROUGH THE SERVICES. ANY DECISION, ACT OR OMISSION OF YOURS THAT IS BASED ON THE SERVICES OR THE CONTENT OR ALERTS PROVIDED THROUGH THE SERVICES IS AT YOUR OWN AND SOLE RISK. THE SERVICES AND THE CONTENT AND ALERTS PROVIDED THROUGH THE SERVICES IS PROVIDED AS A CONVENIENCE ONLY AND DOES NOT REPLACE THE NEED TO REVIEW ITS ACCURACY, COMPLETENESS AND CORRECTNESS. IN CONNECTION WITH ANY WARRANTY, CONTRACT, OR COMMON LAW TORT CLAIMS: (I) WE SHALL NOT BE LIABLE FOR ANY INCIDENTAL OR CONSEQUENTIAL DAMAGES, LOST PROFITS, OR DAMAGES RESULTING FROM LOST DATA OR BUSINESS INTERRUPTION RESULTING FROM THE USE OR INABILITY TO ACCESS AND USE THE SERVICES, EVEN IF WE HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES; AND (II) ANY DIRECT DAMAGES THAT YOU MAY SUFFER AS A RESULT OF YOUR USE OF THE SERVICES, SHALL BE LIMITED TO THE GREATER OF (I) MONIES YOU HAVE PAID US IN CONNECTION WITH YOUR USE OF THE SERVICES DURING THE TWELVE (12) MONTHS IMMEDIATELY PRECEDING THE DATE THAT GAVE RISE TO THE CLAIM OR (II) ONE HUNDRED DOLLARS ($100). 12. **EXTERNAL SITES** The Services may contain links to third-party websites (“External Sites”). These links are provided solely as a convenience to you and not as an endorsement by us of the content on such External Sites. The content of such External Sites is developed and provided by others. You should contact the website administrator or webmaster for those External Sites if you have any concerns regarding such links or any content located on such External Sites. We are not responsible for the content of any linked External Sites and do not make any representations regarding the content or accuracy of materials on such External Sites. You should take precautions when downloading files from all websites to protect your computer from viruses and other destructive programs. If you decide to access linked External Sites, you do so at your own risk. 13. **REPRESENTATIONS AND WARRANTIES** You represent and warrant that you have: (i) all rights and permissions necessary to provide us with or grant us access to and use of User Data, and (ii) obtained all necessary and appropriate consents, permissions, and authorizations in accordance with all applicable laws and regulations with respect to User Data provided hereunder. 14. 
**INDEMNIFICATION** You will indemnify, defend, and hold Deno, its affiliates, and our and their respective shareholders, members, officers, directors, employees, agents, and representatives (collectively, “Deno Indemnitees”) harmless from and against any and all damages, liabilities, losses, costs, and expenses, including reasonable attorney’s fees (collectively, “Losses”) incurred by any Deno Indemnitee in connection with a third-party claim, action, or proceeding (each, a “Claim”) arising from your (i) breach of this Agreement, including but not limited to, any breach of your representations and warranties; (ii) misuse of the Services, and/or the Content; (iii) negligence, gross negligence, willful misconduct, fraud, misrepresentation or violation of law; or (iv) violation of any third-party right, including without limitation any copyright, trademark, property, or privacy right; _provided_, _however_, that the foregoing obligations shall be subject to our: (i) promptly notifying you of the Claim; (ii) providing you, at your expense, with reasonable cooperation in the defense of the Claim; and (iii) providing you with sole control over the defense and negotiations for a settlement or compromise. 15. **COMPLIANCE WITH APPLICABLE LAWS** The Services are based in the United States. We make no claims concerning whether the Services may be viewed or be appropriate for use outside of the United States. If you access the Services from outside of the United States, you do so at your own risk. Whether inside or outside of the United States, you are solely responsible for ensuring compliance with the laws of your specific jurisdiction. 16. **TERM; TERMINATION** These Terms, and your right to access and use the Services, will commence upon your acceptance of these Terms and will continue for the period of your Subscription and/or use of the Services. We reserve the right, in our sole discretion, to restrict, suspend, or terminate these Terms and your access to all or any part of the Services, at any time and for any reason without prior notice or liability. We reserve the right to change, suspend, or discontinue all or any part of the Services at any time without prior notice or liability. The Sections “Description of the Services; Right to Use and Access the Service;” “Use of Personal Information,” “Intellectual Property,” “Feedback,” “No Warranties; Limitation of Liability,” “Indemnification,” “Compliance with Applicable Laws,” “Term; Termination,” “Binding Arbitration,” “Class Action Waiver,” “Equitable Relief,” “Controlling Law; Exclusive Forum,” and “Miscellaneous” shall survive the termination of these Terms. 17. **BINDING ARBITRATION** In the event of a dispute arising under or relating to this Agreement, and/or the Services (each, a “Dispute”), such dispute will be finally and exclusively resolved by binding arbitration governed by the Federal Arbitration Act (“FAA”). NEITHER PARTY SHALL HAVE THE RIGHT TO LITIGATE SUCH CLAIM IN COURT OR TO HAVE A JURY TRIAL, EXCEPT EITHER PARTY MAY BRING ITS CLAIM IN ITS LOCAL SMALL CLAIMS COURT, IF PERMITTED BY THAT SMALL CLAIMS COURT RULES AND IF WITHIN SUCH COURT’S JURISDICTION. ARBITRATION IS DIFFERENT FROM COURT, AND DISCOVERY AND APPEAL RIGHTS MAY ALSO BE LIMITED IN ARBITRATION. All disputes will be resolved before a neutral arbitrator selected jointly by the parties, whose decision will be final, except for a limited right of appeal under the FAA. 
The arbitration shall be commenced and conducted by JAMS pursuant to its then current Comprehensive Arbitration Rules and Procedures and in accordance with the Expedited Procedures in those rules, or, where appropriate, pursuant to JAMS’ Streamlined Arbitration Rules and Procedures. All applicable JAMS’ rules and procedures are available at the JAMS website [www.jamsadr.com](http://www.jamsadr.com). Each party will be responsible for paying any JAMS filing, administrative, and arbitrator fees in accordance with JAMS rules. Judgment on the arbitrator’s award may be entered in any court having jurisdiction. This clause shall not preclude parties from seeking provisional remedies in aid of arbitration from a court of appropriate jurisdiction. The arbitration may be conducted in person, through the submission of documents, by phone, or online. If conducted in person, the arbitration shall take place in the United States county where you reside. The parties may litigate in court to compel arbitration, to stay a proceeding pending arbitration, or to confirm, modify, vacate, or enter judgment on the award entered by the arbitrator. The parties shall cooperate in good faith in the voluntary and informal exchange of all non-privileged documents and other information (including electronically stored information) relevant to the Dispute immediately after commencement of the arbitration. As set forth in Section 18 below, nothing in this Agreement will prevent us from seeking injunctive relief in any court of competent jurisdiction as necessary to protect our proprietary interests. 18. **CLASS ACTION WAIVER** You agree that any arbitration or proceeding shall be limited to the Dispute between us and you individually. To the full extent permitted by law, (i) no arbitration or proceeding shall be joined with any other; (ii) there is no right or authority for any Dispute to be arbitrated or resolved on a class action-basis or to utilize class action procedures; and (iii) there is no right or authority for any Dispute to be brought in a purported representative capacity on behalf of the general public or any other persons. YOU AGREE THAT YOU MAY BRING CLAIMS AGAINST US ONLY IN YOUR INDIVIDUAL CAPACITY AND NOT AS A PLAINTIFF OR CLASS MEMBER IN ANY PURPORTED CLASS OR REPRESENTATIVE PROCEEDING. 19. **EQUITABLE RELIEF** You acknowledge and agree that in the event of a breach or threatened violation of our intellectual property rights and confidential and proprietary information by you, we will suffer irreparable harm and will therefore be entitled to injunctive relief to enforce this Agreement. We may, without waiving any other remedies under this Agreement, seek from any court having jurisdiction any interim, equitable, provisional, or injunctive relief that is necessary to protect our rights and property pending the outcome of the arbitration referenced above. You hereby irrevocably and unconditionally consent to the personal and subject matter jurisdiction of the federal and state courts in the State of New York for purposes of any such action by us. 20. **CONTROLLING LAW; EXCLUSIVE FORUM** The Agreement and any action related thereto will be governed by the laws of the State of New York without regard to its conflict of laws provisions. 
The parties hereby consent and agree to the exclusive jurisdiction of the state and federal courts located in the State of New York for all suits, actions, or proceedings directly or indirectly arising out of or relating to this Agreement, and waive any and all objections to such courts, including but not limited to, objections based on improper venue or inconvenient forum, and each party hereby irrevocably submits to the exclusive jurisdiction of such courts in any suits, actions, or proceedings arising out of or relating to this Agreement. 21. **MISCELLANEOUS** Notwithstanding anything to the contrary set forth in these Terms, each party may, during the term of this Agreement, use the other party’s name and/or logo for marketing and promotional purposes, including, without limitation, identifying Authorized Users as a customer of Deno on Deno’s website or elsewhere. You may not assign any of your rights, duties, or obligations under these Terms to any person or entity, in whole or in part, without written consent from Deno. Our failure to act on or enforce any provision of the Agreement shall not be construed as a waiver of that provision or any other provision in this Agreement. No waiver shall be effective against us unless made in writing, and no such waiver shall be construed as a waiver in any other or subsequent instance. Except as expressly agreed by us and you in writing, the Agreement constitutes the entire agreement between you and us with respect to the subject matter, and supersedes all previous or contemporaneous agreements, whether written or oral, between the parties with respect to the subject matter. The section headings are provided merely for convenience and shall not be given any legal import. This Agreement will inure to the benefit of our successors, assigns, licensees, and sublicensees. **Copyright 2025 Deno Land Inc. All rights reserved.** --- # Deno Deploy Usage Guidelines > Important limitations, service level expectations, and terms of use for Deno Deploy. URL: https://docs.deno.com/deploy/usage Deno Deploy offers a generous free tier that allows you to run applications at the edge with minimal cost. However, to ensure a fair and reliable service for all users, there are certain usage guidelines and limitations in place. To review the pricing details, please visit the [Deno Deploy pricing page](https://deno.com/deploy/pricing). If your application experiences a surprise traffic spike, we want to protect you from an unexpectedly large bill. At the same time, we don't want to dampen your success by automatically suspending traffic when limits are reached, unless that is your preference. Before October 1st 2025, you'll be able to configure alert thresholds and hard spending limits directly from your organization dashboard. These controls, along with timely notifications about your quota usage, will help protect you from unexpected billing charges. You’ll be able to review and control these in the [Billing](https://console.deno.com/go/billing) section of your console. The Deno company is now using Deno Deploy to host our own websites and is putting significant effort into ensuring service reliability. - [Deno Deploy Acceptable Use Policy](/deploy/acceptable_use_policy/) - [Deno Deploy Terms and Conditions](/deploy/terms_and_conditions/) Deno reserves the right to terminate any user, organization, or app that we find to be in violation of the terms and conditions. --- # How to use Apollo with Deno > Step-by-step tutorial on integrating Apollo GraphQL with Deno.
Learn how to set up an Apollo Server, define schemas, implement resolvers, and build a complete GraphQL API using TypeScript.

URL: https://docs.deno.com/examples/tutorials/apollo

[Apollo Server](https://www.apollographql.com/) is a GraphQL server that you can set up in minutes and use with your existing data source (or REST API). You can then connect any GraphQL client to it to receive the data and take advantage of GraphQL benefits, such as type-checking and efficient fetching.

We're going to get a simple Apollo server up and running that will allow us to query some local data. We're only going to need three files for this:

1. `schema.ts` to set up our data model
2. `resolvers.ts` to set up how we're going to populate the data fields in our schema
3. Our `main.ts` where the server is going to launch

We'll start by creating them:

```shell
touch schema.ts resolvers.ts main.ts
```

Let's go through setting up each. [View source here.](https://github.com/denoland/examples/tree/main/with-apollo)

## schema.ts

Our `schema.ts` file describes our data. In this case, our data is a list of dinosaurs. We want our users to be able to get the name and a short description of each dino. In GraphQL language, this means that `Dinosaur` is our **type**, and `name` and `description` are our **fields**. We can also define the data type for each field. In this case, both are strings.

This is also where we describe the queries we allow for our data, using the special **Query** type in GraphQL. We have two queries:

- `dinosaurs`, which gets a list of all dinosaurs
- `dinosaur`, which takes in the `name` of a dinosaur as an argument and returns information about that one type of dinosaur

We're going to export all this within our `typeDefs` (type definitions) variable:

```tsx
export const typeDefs = `
  type Dinosaur {
    name: String
    description: String
  }

  type Query {
    dinosaurs: [Dinosaur]
    dinosaur(name: String): Dinosaur
  }
`;
```

If we wanted to write data, this is also where we would describe the **Mutation** to do so. Mutations are how you write data with GraphQL. Because we are using a static dataset here, we won't be writing anything.

## resolvers.ts

A resolver is responsible for populating the data for each query. Here we have our list of dinosaurs, and all the resolver is going to do is either pass that entire list to the client if the user requests the `dinosaurs` query, or pass just one if the user requests the `dinosaur` query.

```tsx
const dinosaurs = [
  {
    name: "Aardonyx",
    description: "An early stage in the evolution of sauropods.",
  },
  {
    name: "Abelisaurus",
    description: '"Abel\'s lizard" has been reconstructed from a single skull.',
  },
];

export const resolvers = {
  Query: {
    dinosaurs: () => dinosaurs,
    dinosaur: (_: any, args: any) => {
      return dinosaurs.find((dinosaur) => dinosaur.name === args.name);
    },
  },
};
```

With the latter, we pass the arguments from the client into a function to match the name to a name in our dataset.
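Before wiring this into a server, you can sanity-check the resolver by calling it directly; a minimal sketch (the first resolver argument is the parent value, which our resolver ignores):

```typescript
import { resolvers } from "./resolvers.ts";

// Invoke the resolver directly to verify the name lookup works.
const result = resolvers.Query.dinosaur(undefined, { name: "Abelisaurus" });
console.log(result?.description);
// "Abel's lizard" has been reconstructed from a single skull.
```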
## main.ts

In our `main.ts` we're going to import the `ApolloServer` as well as `graphql` and our `typeDefs` from the schema and our resolvers:

```tsx
import { ApolloServer } from "npm:@apollo/server@^4.1";
import { startStandaloneServer } from "npm:@apollo/server@^4.1/standalone";
import { graphql } from "npm:graphql@16.6";
import { typeDefs } from "./schema.ts";
import { resolvers } from "./resolvers.ts";

const server = new ApolloServer({
  typeDefs,
  resolvers,
});

const { url } = await startStandaloneServer(server, {
  listen: { port: 8000 },
});

console.log(`Server running on: ${url}`);
```

We pass our `typeDefs` and `resolvers` to `ApolloServer` to spool up a new server. Finally, `startStandaloneServer` is a helper function to get the server up and running quickly.

## Running the server

All that is left to do now is run the server:

```shell
deno run --allow-net --allow-read --allow-env main.ts
```

You should see `Server running on: 127.0.0.1:8000` in your terminal. If you go to that address you will see the Apollo sandbox, where we can enter our `dinosaurs` query:

```graphql
query {
  dinosaurs {
    name
    description
  }
}
```

This will return our dataset:

```json
{
  "data": {
    "dinosaurs": [
      {
        "name": "Aardonyx",
        "description": "An early stage in the evolution of sauropods."
      },
      {
        "name": "Abelisaurus",
        "description": "\"Abel's lizard\" has been reconstructed from a single skull."
      }
    ]
  }
}
```

Or if we want just one `dinosaur`:

```graphql
query {
  dinosaur(name: "Aardonyx") {
    name
    description
  }
}
```

Which returns:

```json
{
  "data": {
    "dinosaur": {
      "name": "Aardonyx",
      "description": "An early stage in the evolution of sauropods."
    }
  }
}
```

Awesome! [Learn more about using Apollo and GraphQL in their tutorials](https://www.apollographql.com/tutorials/).

---

# Build Astro with Deno

> Step-by-step tutorial on building web applications with Astro and Deno. Learn how to scaffold projects, create dynamic pages, implement SSR, and deploy your Astro sites using Deno's Node.js compatibility.

URL: https://docs.deno.com/examples/tutorials/astro

[Astro](https://astro.build/) is a modern web framework focused on content-centric websites, which leverages islands architecture and sends zero JavaScript to the client by default.

You can see the [finished app on GitHub](https://github.com/denoland/tutorial-with-astro). You can see a live version of the app on [Deno Deploy](https://tutorial-with-astro.deno.deno.net/).

:::info Deploy your own

Want to skip the tutorial and deploy the finished app right now? Click the button below to instantly deploy your own copy of the complete Astro dinosaur app to Deno Deploy. You'll get a live, working application that you can customize and modify as you learn!

[![Deploy on Deno](https://deno.com/button)](https://console.deno.com/new?clone=https://github.com/denoland/tutorial-with-astro)

:::

## Scaffold an Astro project

Astro provides a CLI tool to quickly scaffold a new Astro project. In your terminal, run the following command to create a new Astro project with Deno.

```sh
deno init --npm astro@latest
```

For this tutorial, we'll select the "Empty" template so we can start from scratch, and we'll install the dependencies. This will set us up with a basic Astro project structure, including a `package.json` file and a `src` directory where our application code will live.

## Start the Astro server

We can start the local Astro server with the `dev` task.
In your terminal, change directory into your new project and run run ```sh deno task dev ``` This will start the Astro development server, which will watch for changes in your files and automatically reload the page in your browser. You should see a message indicating that the server is running on `http://localhost:4321`. Upon visiting the output URL in your browser, you should see a very basic Astro welcome page. ## Build out the app architecture Now that we have a basic Astro project set up, let's build out the architecture of our app. We'll create a few directories to organize our code and set up some basic routing. Create the following directories ```text src/ ├── data/ ├── lib/ └── pages/ └── index.astro ``` ## Add dinosaur data In the `data` directory, create a new file called `data.json` file, which will contain the hard coded dinosaur data. Copy and paste [this json file](https://raw.githubusercontent.com/denoland/tutorial-with-astro/refs/heads/main/src/data/data.json) into the `data.json` file. (If you were building a real app, you would probably fetch this data from a database or an external API.) ## Set up the business logic Next, we’ll create a `lib` directory to hold our business logic. In this case, we’ll create a file called `dinosaur-service.ts` that will contain a function to fetch the dinosaur data. Create `src/lib/dinosaur-service.ts` with the following code: ```ts title="src/lib/dinosaur-service.ts" // Simple utility functions for working with dinosaur data import dinosaursData from "../data/data.json"; export interface Dinosaur { name?: string; description: string; } export class DinosaurService { private static dinosaurs: Dinosaur[] = dinosaursData; // Get all dinosaurs with names (filter out unnamed ones) static getNamedDinosaurs(): Dinosaur[] { return this.dinosaurs.filter((dino) => dino.name); } // Create a URL-friendly slug from dinosaur name static createSlug(name: string): string { return name .toLowerCase() .replace(/[^a-z0-9]+/g, "-") .replace(/^-+|-+$/g, ""); } // Get dinosaur by slug static getDinosaurBySlug(slug: string): Dinosaur | undefined { return this.dinosaurs.find((dino) => { if (!dino.name) return false; return this.createSlug(dino.name) === slug; }); } // Get all dinosaurs with their slugs for linking static getDinosaursWithSlugs() { return this.getNamedDinosaurs().map((dino) => ({ ...dino, slug: this.createSlug(dino.name!), })); } } export default DinosaurService; ``` This file contains a `DinosaurService` class with methods to get all dinosaurs, create a URL-friendly slug from a dinosaur name, and get a dinosaur by its slug. ## Update the index page to use the service Now we can update our `index.astro` page to use the `DinosaurService` to fetch the dinosaur data and render it as a list of links. Update the `src/pages/index.astro` file to look like this: ```jsx title="src/pages/index.astro" --- import DinosaurService from '../lib/dinosaur-service'; import '../../styles/index.css'; // Get all dinosaurs with slugs for linking const dinosaursWithSlugs = DinosaurService.getDinosaursWithSlugs(); --- Dinosaur Directory
  <body>
    <main>
      <h1>🦕 Dinosaur Directory</h1>
      <p>Click on any dinosaur name to learn more about it!</p>
      <ul>
        {dinosaursWithSlugs.map((dinosaur) => (
          <li>
            <a href={`/dinosaurs/${dinosaur.slug}`}>{dinosaur.name}</a>
          </li>
        ))}
      </ul>
    </main>
  </body>
</html>
```

We import the `DinosaurService`, then map over the dinosaurs to create links to individual dinosaur pages.

## Create individual dinosaur pages

Next, we'll create individual pages for each dinosaur. In the `src/pages` directory, create a directory called `dinosaurs`, and inside that directory, create a file called `[slug].astro`. This file will be used to render the individual dinosaur pages:

```jsx title="src/pages/dinosaurs/[slug].astro"
---
import DinosaurService from '../../lib/dinosaur-service';
import '../../styles/index.css';

export async function getStaticPaths() {
  const dinosaursWithSlugs = DinosaurService.getDinosaursWithSlugs();
  return dinosaursWithSlugs.map((dinosaur) => ({
    params: { slug: dinosaur.slug },
    props: { dinosaur }
  }));
}

const { dinosaur } = Astro.props;
---

<html lang="en">
  <head>
    <title>{dinosaur.name} - Dinosaur Directory</title>
  </head>
  <body>
    <main>
      <h1>🦕 {dinosaur.name}</h1>
      <p>{dinosaur.description}</p>
      <a href="/">Back to Directory</a>
    </main>
  </body>
</html>
```

This file uses the `getStaticPaths` function to generate static paths for each dinosaur based on the slugs we created earlier. The `Astro.props` object will contain the dinosaur data for the specific slug, which we can then render in the page.

## Add some styles

You can style your app to make it your own in `src/styles/index.css`. This file is imported in both the `index.astro` and `[slug].astro` files, so any styles you add here will apply to both pages.

## Build and deploy

Astro has a built-in command to build your site for production:

```sh
deno run build
```

This will:

- Generate static HTML files for each page in the `dist` directory.
- Optimize your assets (CSS, JavaScript, images, etc.) for production.

You can deploy this app to your favorite cloud provider. We recommend using [Deno Deploy](https://deno.com/deploy) for a simple and easy deployment experience. You can deploy your app directly from GitHub: create a GitHub repository, push your code there, then connect it to Deno Deploy.

### Create a GitHub repository

[Create a new GitHub repository](https://github.com/new), then initialize and push your app to GitHub:

```sh
git init -b main
git remote add origin https://github.com/<your-username>/<your-repo>.git
git add .
git commit -am 'initial commit'
git push -u origin main
```

### Deploy to Deno Deploy

Once your app is on GitHub, you can deploy it from the [Deno Deploy](https://console.deno.com/) dashboard. For a walkthrough of deploying your app, check out the [Deno Deploy tutorial](/examples/deno_deploy_tutorial/).

🦕 Now you can scaffold and develop an Astro app that will run on Deno! You could extend this app by adding more features, such as user authentication, a database, or even a CMS. We can't wait to see what you build with Astro and Deno!

---

# How to Deploy Deno to AWS Lambda

> Step-by-step tutorial on deploying Deno applications to AWS Lambda. Learn about Docker containerization, ECR repositories, function configuration, and how to set up serverless Deno apps on AWS.

URL: https://docs.deno.com/examples/tutorials/aws_lambda

AWS Lambda is a serverless computing service provided by Amazon Web Services. It allows you to run code without provisioning or managing servers.

Here's a step-by-step guide to deploying a Deno app to AWS Lambda using Docker. The prerequisites for this are:

- [`docker` CLI](https://docs.docker.com/reference/cli/docker/)
- an [AWS account](https://aws.amazon.com)
- [`aws` CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)

## Step 1: Create a Deno App

Create a new Deno app using the following code:

```ts title="main.ts"
Deno.serve((req) => new Response("Hello World!"));
```

Save this code in a file named `main.ts`.

## Step 2: Create a Dockerfile

Create a new file named `Dockerfile` with the following content:

```Dockerfile
# Set up the base image
FROM public.ecr.aws/awsguru/aws-lambda-adapter:0.9.0 AS aws-lambda-adapter
FROM denoland/deno:bin-1.45.2 AS deno_bin
FROM debian:bookworm-20230703-slim AS deno_runtime
COPY --from=aws-lambda-adapter /lambda-adapter /opt/extensions/lambda-adapter
COPY --from=deno_bin /deno /usr/local/bin/deno
ENV PORT=8000
EXPOSE 8000
RUN mkdir /var/deno_dir
ENV DENO_DIR=/var/deno_dir

# Copy the function code
WORKDIR "/var/task"
COPY . /var/task

# Warmup caches
RUN timeout 10s deno run -A main.ts || [ $? -eq 124 ] || exit 1
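# Start the HTTP server; the Lambda adapter extension (copied in above)
# forwards Lambda invocations to this process on $PORT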
CMD ["deno", "run", "-A", "main.ts"]
```

This Dockerfile uses the [`aws-lambda-adapter`](https://github.com/awslabs/aws-lambda-web-adapter) project to adapt regular HTTP servers, like Deno's `Deno.serve`, to the AWS Lambda runtime API.

We also use the `denoland/deno:bin-1.45.2` image to get the Deno binary and `debian:bookworm-20230703-slim` as the base image. The `debian:bookworm-20230703-slim` image is used to keep the image size small.

The `PORT` environment variable is set to `8000` to tell the AWS Lambda adapter that we are listening on port `8000`.

We set the `DENO_DIR` environment variable to `/var/deno_dir` to store cached Deno source code and transpiled modules in the `/var/deno_dir` directory.

The warmup caches step is used to warm up the Deno cache before the function is invoked. This is done to reduce the cold start time of the function. These caches contain the compiled code and dependencies of your function code. This step starts your server for 10 seconds and then exits.

When using a package.json, remember to run `deno install` to install `node_modules` from your `package.json` file before warming up the caches or running the function.

## Step 3: Build the Docker Image

Build the Docker image using the following command:

```bash
docker build -t hello-world .
```

## Step 4: Create an ECR Docker repository and push the image

With the AWS CLI, create an ECR repository and push the Docker image to it:

```bash
aws ecr create-repository --repository-name hello-world --region us-east-1 | grep repositoryUri
```

This should output a repository URI that looks like `<account_id>.dkr.ecr.us-east-1.amazonaws.com/hello-world`.

Authenticate Docker with ECR, using the repository URI from the previous step:

```bash
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <account_id>.dkr.ecr.us-east-1.amazonaws.com
```

Tag the Docker image with the repository URI, again using the repository URI from the previous steps:

```bash
docker tag hello-world:latest <account_id>.dkr.ecr.us-east-1.amazonaws.com/hello-world:latest
```

Finally, push the Docker image to the ECR repository, using the repository URI from the previous steps:

```bash
docker push <account_id>.dkr.ecr.us-east-1.amazonaws.com/hello-world:latest
```

## Step 5: Create an AWS Lambda function

Now you can create a new AWS Lambda function from the AWS Management Console.

1. Go to the AWS Management Console and [navigate to the Lambda service](https://us-east-1.console.aws.amazon.com/lambda/home?region=us-east-1).
2. Click on the "Create function" button.
3. Choose "Container image".
4. Enter a name for the function, like "hello-world".
5. Click on the "Browse images" button and select the image you pushed to ECR.
6. Click on the "Create function" button.
7. Wait for the function to be created.
8. In the "Configuration" tab, go to the "Function URL" section and click on "Create function URL".
9. Choose "NONE" for the auth type (this will make the lambda function publicly accessible).
10. Click on the "Save" button.

## Step 6: Test the Lambda function

You can now visit your Lambda function's URL to see the response from your Deno app.

🦕 You have successfully deployed a Deno app to AWS Lambda using Docker. You can now use this setup to deploy more complex Deno apps to AWS Lambda.

---

# How to Deploy Deno to AWS Lightsail

> Step-by-step tutorial on deploying Deno applications to AWS Lightsail.
> Learn about Docker containers, GitHub Actions automation, continuous deployment, and how to set up cost-effective cloud hosting for Deno apps.

URL: https://docs.deno.com/examples/tutorials/aws_lightsail

[Amazon Lightsail](https://aws.amazon.com/lightsail/) is the easiest and cheapest way to get started with Amazon Web Services. It allows you to host virtual machines and even entire container services.

This How To guide will show you how to deploy a Deno app to Amazon Lightsail using Docker, Docker Hub, and GitHub Actions.

Before continuing, make sure you have:

- [`docker` CLI](https://docs.docker.com/engine/reference/commandline/cli/)
- a [Docker Hub account](https://hub.docker.com)
- a [GitHub account](https://github.com)
- an [AWS account](https://aws.amazon.com/)

## Create Dockerfile and docker-compose.yml

To focus on the deployment, our app will simply be a `main.ts` file that returns a string as an HTTP response:

```ts
import { Application } from "jsr:@oak/oak";

const app = new Application();

app.use((ctx) => {
  ctx.response.body = "Hello from Deno and AWS Lightsail!";
});

await app.listen({ port: 8000 });
```

Then, we'll create two files -- `Dockerfile` and `docker-compose.yml` -- to build the Docker image.

In our `Dockerfile`, let's add:

```Dockerfile
FROM denoland/deno

EXPOSE 8000

WORKDIR /app

ADD . /app

RUN deno install --entrypoint main.ts

CMD ["run", "--allow-net", "main.ts"]
```

Then, in our `docker-compose.yml`:

```yml
version: "3"

services:
  web:
    build: .
    container_name: deno-container
    image: deno-image
    ports:
      - "8000:8000"
```

Let's test this locally by running `docker compose -f docker-compose.yml build`, then `docker compose up`, and going to `localhost:8000`.

![hello world from localhost](./images/how-to/aws-lightsail/hello-world-from-localhost.png)

It works!

## Build, Tag, and Push to Docker Hub

First, let's sign into [Docker Hub](https://hub.docker.com/repositories) and create a repository. Let's name it `deno-on-aws-lightsail`.

Then we'll build, tag, and push our new image, replacing `{{ username }}` with your Docker Hub username.

First, let's build the image locally. Note our `docker-compose.yml` file will name the build `deno-image`:

```shell
docker compose -f docker-compose.yml build
```

Let's [tag](https://docs.docker.com/engine/reference/commandline/tag/) the local image with `{{ username }}/deno-on-aws-lightsail`:

```shell
docker tag deno-image {{ username }}/deno-on-aws-lightsail
```

We can now push the image to Docker Hub:

```shell
docker push {{ username }}/deno-on-aws-lightsail
```

After that succeeds, you should be able to see the new image on your Docker Hub repository:

![new image on docker hub](./images/how-to/aws-lightsail/new-image-on-docker-hub.png)

## Create and Deploy to a Lightsail Container

Let's head over to [the Amazon Lightsail console](https://lightsail.aws.amazon.com/ls/webapp/home/container-services).

Then click "Containers" and "Create container service". Halfway down the page, click "Setup your first Deployment" and select "Specify a custom deployment".

You can write whatever container name you'd like.

In `Image`, be sure to use `{{ username }}/{{ image }}` that you have set in your Docker Hub. For this example, it is `lambtron/deno-on-aws-lightsail`.

Let's click `Add open ports` and add `8000`.

Finally, under `PUBLIC ENDPOINT`, select the container name that you just created.

The full form should look like this:

![create container service interface](./images/how-to/aws-lightsail/create-container-service-on-aws.png)

When you're ready, click "Create container service".
After a few moments, your new container should be deployed. Click on the public address and you should see your Deno app:

![Hello world from Deno and AWS Lightsail](./images/how-to/aws-lightsail/hello-world-from-deno-and-aws-lightsail.png)

## Automate using GitHub Actions

In order to automate that process, we'll use the `aws` CLI with the [`lightsail` subcommand](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lightsail/push-container-image.html).

The steps in our GitHub Actions workflow will be:

1. Checkout the repo
2. Build our app as a Docker image locally
3. Install and authenticate AWS CLI
4. Push local Docker image to AWS Lightsail Container Service via CLI

Prerequisites for this GitHub Action workflow to work:

- an AWS Lightsail Container Instance is created (see section above)
- IAM user and relevant permissions set. ([Learn more about managing access to Amazon Lightsail for an IAM user.](https://docs.aws.amazon.com/lightsail/latest/userguide/amazon-lightsail-managing-access-for-an-iam-user.html))
- `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` for your user with permissions. (Follow [this AWS guide](https://lightsail.aws.amazon.com/ls/docs/en_us/articles/lightsail-how-to-set-up-access-keys-to-use-sdk-api-cli) to generate an `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.)

Let's create a new file `container.template.json`, which contains configuration for how to make the service container deployment. Note the similarities these option values have with the inputs we entered manually in the previous section.

```json
{
  "containers": {
    "app": {
      "image": "",
      "environment": {
        "APP_ENV": "release"
      },
      "ports": {
        "8000": "HTTP"
      }
    }
  },
  "publicEndpoint": {
    "containerName": "app",
    "containerPort": 8000,
    "healthCheck": {
      "healthyThreshold": 2,
      "unhealthyThreshold": 2,
      "timeoutSeconds": 5,
      "intervalSeconds": 10,
      "path": "/",
      "successCodes": "200-499"
    }
  }
}
```

Let's add the below to your `.github/workflows/deploy.yml` file:

```yml
name: Build and Deploy to AWS Lightsail

on:
  push:
    branches:
      - main

env:
  AWS_REGION: us-west-2
  AWS_LIGHTSAIL_SERVICE_NAME: container-service-2

jobs:
  build_and_deploy:
    name: Build and Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout main
        uses: actions/checkout@v4

      - name: Install Utilities
        run: |
          sudo apt-get update
          sudo apt-get install -y jq unzip

      - name: Install AWS Client
        run: |
          curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
          unzip awscliv2.zip
          sudo ./aws/install || true
          aws --version
          curl "https://s3.us-west-2.amazonaws.com/lightsailctl/latest/linux-amd64/lightsailctl" -o "lightsailctl"
          sudo mv "lightsailctl" "/usr/local/bin/lightsailctl"
          sudo chmod +x /usr/local/bin/lightsailctl

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-region: ${{ env.AWS_REGION }}
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      - name: Build Docker Image
        run: docker build -t ${{ env.AWS_LIGHTSAIL_SERVICE_NAME }}:release .
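      # Next: push the image to Lightsail via the lightsailctl plugin
      # (installed above), then roll out a new deployment with jq + the CLI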
      - name: Push and Deploy
        run: |
          service_name=${{ env.AWS_LIGHTSAIL_SERVICE_NAME }}
          aws lightsail push-container-image \
            --region ${{ env.AWS_REGION }} \
            --service-name ${service_name} \
            --label ${service_name} \
            --image ${service_name}:release
          aws lightsail get-container-images --service-name ${service_name} | jq --raw-output ".containerImages[0].image" > image.txt
          jq --arg image $(cat image.txt) '.containers.app.image = $image' container.template.json > container.json
          aws lightsail create-container-service-deployment --service-name ${service_name} --cli-input-json file://$(pwd)/container.json
```

Whoa, there is a lot going on here! The last two steps are most important: `Build Docker Image` and `Push and Deploy`.

```shell
docker build -t ${{ env.AWS_LIGHTSAIL_SERVICE_NAME }}:release .
```

This command builds our Docker image with the name `container-service-2` and tags it `release`.

```shell
aws lightsail push-container-image ...
```

This command pushes the local image to our Lightsail container.

```shell
aws lightsail get-container-images --service-name ${service_name} | jq --raw-output ".containerImages[0].image" > image.txt
```

This command retrieves the image information and, using [`jq`](https://stedolan.github.io/jq/), parses it and saves the image name in a local file `image.txt`.

```shell
jq --arg image $(cat image.txt) '.containers.app.image = $image' container.template.json > container.json
```

This command uses the image name saved in `image.txt` and `container.template.json` and creates a new options file called `container.json`. This options file will be passed to `aws lightsail` for the final deployment in the next step.

```shell
aws lightsail create-container-service-deployment --service-name ${service_name} --cli-input-json file://$(pwd)/container.json
```

Finally, this command creates a new deployment using the `service_name`, along with the config settings in `container.json`.

When you push to GitHub and the Action succeeds, you'll be able to see your new Deno app on AWS:

![deno on aws](./images/how-to/aws-lightsail/hello-world-from-deno-and-aws-lightsail.png)

🦕 Now you can deploy a Deno app to Amazon Lightsail using Docker, Docker Hub, and GitHub Actions.

---

# Getting Started with OpenTelemetry in Deno

> Set up basic OpenTelemetry instrumentation in a Deno application. This tutorial covers creating a simple HTTP server with custom metrics and traces, and viewing the telemetry data.

URL: https://docs.deno.com/examples/tutorials/basic_opentelemetry

OpenTelemetry provides powerful observability tools for your applications. With Deno's built-in OpenTelemetry support, you can easily instrument your code to collect metrics, traces, and logs. This tutorial will walk you through setting up a simple Deno application with OpenTelemetry instrumentation.
## Prerequisites

- Deno 2.3 or later

## Step 1: Create a Simple HTTP Server

Let's start by creating a basic HTTP server that simulates a small web application:

```ts title="server.ts"
import { metrics, SpanStatusCode, trace } from "npm:@opentelemetry/api@1";

// Create a tracer and meter for our application
const tracer = trace.getTracer("my-server", "1.0.0");
const meter = metrics.getMeter("my-server", "1.0.0");

// Create some metrics
const requestCounter = meter.createCounter("http_requests_total", {
  description: "Total number of HTTP requests",
});

const requestDuration = meter.createHistogram("http_request_duration_ms", {
  description: "HTTP request duration in milliseconds",
  unit: "ms",
});

// Start the server
Deno.serve({ port: 8000 }, (req) => {
  // Record the start time for measuring request duration
  const startTime = performance.now();

  // Create a span for this request
  return tracer.startActiveSpan("handle_request", async (span) => {
    try {
      // Extract the path from the URL
      const url = new URL(req.url);
      const path = url.pathname;

      // Add attributes to the span
      span.setAttribute("http.route", path);
      span.setAttribute("http.method", req.method);
      span.updateName(`${req.method} ${path}`);

      // Add an event to the span
      span.addEvent("request_started", {
        timestamp: startTime,
        request_path: path,
      });

      // Simulate some processing time
      const waitTime = Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, waitTime));

      // Add another event to the span
      span.addEvent("processing_completed");

      // Create the response
      const response = new Response(`Hello from ${path}!`, {
        headers: { "Content-Type": "text/plain" },
      });

      // Record metrics
      requestCounter.add(1, {
        method: req.method,
        path,
        status: 200,
      });

      const duration = performance.now() - startTime;
      requestDuration.record(duration, {
        method: req.method,
        path,
      });

      span.setAttribute("request.duration_ms", duration);

      return response;
    } catch (error) {
      // Record error in span
      if (error instanceof Error) {
        span.recordException(error);
        span.setStatus({
          code: SpanStatusCode.ERROR,
          message: error.message,
        });
      }
      return new Response("Internal Server Error", { status: 500 });
    } finally {
      // Always end the span
      span.end();
    }
  });
});
```

This server:

1. Creates a tracer and meter for our application
2. Sets up metrics to count requests and measure their duration
3. Creates a span for each request with attributes and events
4. Simulates some processing time
5. Records metrics for each request

## Step 2: Run the Server with OpenTelemetry Enabled

To run the server with OpenTelemetry enabled, set these environment variables when starting it:

```sh
OTEL_DENO=true OTEL_SERVICE_NAME=my-server deno run --allow-net server.ts
```

## Step 3: Create a Test Client

Let's create a simple client to send requests to our server:

```ts title="client.ts"
// Send 10 requests to different paths
for (let i = 0; i < 10; i++) {
  const path = ["", "about", "users", "products", "contact"][i % 5];
  const url = `http://localhost:8000/${path}`;

  console.log(`Sending request to ${url}`);

  try {
    const response = await fetch(url);
    const text = await response.text();
    console.log(`Response from ${url}: ${text}`);
  } catch (error) {
    console.error(`Error fetching ${url}:`, error);
  }
}
```

## Step 4: Run the Client

In a separate terminal, run the client:

```sh
deno run --allow-net client.ts
```

## Step 5: View the Telemetry Data

By default, Deno exports telemetry data to `http://localhost:4318` using the OTLP protocol. You'll need an OpenTelemetry collector to receive and visualize this data.
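If your collector is not listening on that default endpoint, you can point Deno's exporter somewhere else with the standard `OTEL_EXPORTER_OTLP_ENDPOINT` variable (the collector hostname below is a placeholder for illustration):

```sh
OTEL_DENO=true \
OTEL_SERVICE_NAME=my-server \
OTEL_EXPORTER_OTLP_ENDPOINT=http://my-collector:4318 \
deno run --allow-net server.ts
```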
### Setting up a Local Collector The quickest way to get started is with a local LGTM stack (Loki, Grafana, Tempo, Mimir) in Docker: ```sh docker run --name lgtm -p 3000:3000 -p 4317:4317 -p 4318:4318 --rm -ti \ -v "$PWD"/lgtm/grafana:/data/grafana \ -v "$PWD"/lgtm/prometheus:/data/prometheus \ -v "$PWD"/lgtm/loki:/data/loki \ -e GF_PATHS_DATA=/data/grafana \ docker.io/grafana/otel-lgtm:0.8.1 ``` Then access Grafana at http://localhost:3000 (username: admin, password: admin). In Grafana, you can: 1. View **Traces** in Tempo to see the individual request spans 2. View **Metrics** in Mimir/Prometheus to see request counts and durations 3. View **Logs** in Loki to see any logs from your application ## Understanding What You're Seeing ### Traces In the Traces view, you'll see spans for: - Each HTTP request processed by your server - Each fetch request made by your client - The relationships between these spans Click on any span to see its details, including: - Duration - Attributes (http.route, http.method, etc.) - Events (request_started, processing_completed) ### Metrics In the Metrics view, you can query for: - `http_requests_total` - The counter tracking the number of HTTP requests - `http_request_duration_ms` - The histogram of request durations You can also see built-in Deno metrics like: - `http.server.request.duration` - `http.server.active_requests` ### Logs In the Logs view, you'll see all console logs from your application with correct trace context. ## Troubleshooting If you're not seeing data in your collector: 1. Check that you've set `OTEL_DENO=true` 2. Verify the collector is running and accessible at the default endpoint 3. Check if you need to set `OTEL_EXPORTER_OTLP_ENDPOINT` to a different URL 4. Look for errors in your Deno console output Remember that OpenTelemetry support in Deno is still marked as unstable and may change in future versions. 🦕 This tutorial provides a simple starting point for users who want to experiment with OpenTelemetry in Deno without diving into more complex concepts immediately. This basic example can be extended in many ways: - Add more custom metrics for business logic - Create additional spans for important operations - Use baggage to pass context attributes between services - Set up alerts based on metrics thresholds For more advanced usage, see our [Distributed Tracing with Context Propagation](/examples/otel_span_propagation_tutorial/) tutorial. --- # Behavior-Driven Development (BDD) > Implementing Behavior-Driven Development with Deno's Standard Library's BDD module. Create readable, well organised tests with effective assertions. URL: https://docs.deno.com/examples/tutorials/bdd Behavior-Driven Development (BDD) is an approach to software development that encourages collaboration between developers, QA, and non-technical stakeholders. BDD focuses on defining the behavior of an application through examples written in a natural, ubiquitous language that all stakeholders can understand. Deno's Standard Library provides a BDD-style testing module that allows you to structure tests in a way that's both readable for non-technical stakeholders and practical for implementation. In this tutorial, we'll explore how to use the BDD module to create descriptive test suites for your applications. ## Introduction to BDD BDD extends [Test-Driven Development](https://en.wikipedia.org/wiki/Test-driven_development) (TDD) by writing tests in a natural language that is easy to read. 
Rather than thinking about "tests," BDD encourages us to consider "specifications" or "specs" that describe how software should behave from the user's perspective. This approach helps to keep tests focused on what the code should do rather than how it is implemented. The basic elements of BDD include: - **Describe** blocks that group related specifications - **It** statements that express a single behavior - **Before/After** hooks for setup and teardown operations ## Using Deno's BDD module To get started with BDD testing in Deno, we'll use the `@std/testing/bdd` module from the [Deno Standard Library](https://jsr.io/@std/testing/doc/bdd). First, let's import the necessary functions: ```ts import { afterAll, afterEach, beforeAll, beforeEach, describe, it, } from "jsr:@std/testing/bdd"; import { assertEquals, assertThrows } from "jsr:@std/assert"; ``` These imports provide the core BDD functions: - `describe` creates a block that groups related tests - `it` declares a test case that verifies a specific behavior - `beforeEach`/`afterEach` run before or after each test case - `beforeAll`/`afterAll` run once before or after all tests in a describe block We'll also use assertion functions from [`@std/assert`](https://jsr.io/@std/assert) to verify our expectations. ### Writing your first BDD test Let's create a simple calculator module and test it using BDD: ```ts title="calculator.ts" export class Calculator { private value: number = 0; constructor(initialValue: number = 0) { this.value = initialValue; } add(number: number): Calculator { this.value += number; return this; } subtract(number: number): Calculator { this.value -= number; return this; } multiply(number: number): Calculator { this.value *= number; return this; } divide(number: number): Calculator { if (number === 0) { throw new Error("Cannot divide by zero"); } this.value /= number; return this; } get result(): number { return this.value; } } ``` Now, let's test this calculator using the BDD style: ```ts title="calculator_test.ts" import { afterEach, beforeEach, describe, it } from "jsr:@std/testing/bdd"; import { assertEquals, assertThrows } from "jsr:@std/assert"; import { Calculator } from "./calculator.ts"; describe("Calculator", () => { let calculator: Calculator; // Before each test, create a new Calculator instance beforeEach(() => { calculator = new Calculator(); }); it("should initialize with zero", () => { assertEquals(calculator.result, 0); }); it("should initialize with a provided value", () => { const initializedCalculator = new Calculator(10); assertEquals(initializedCalculator.result, 10); }); describe("add method", () => { it("should add a positive number correctly", () => { calculator.add(5); assertEquals(calculator.result, 5); }); it("should handle negative numbers", () => { calculator.add(-5); assertEquals(calculator.result, -5); }); it("should be chainable", () => { calculator.add(5).add(10); assertEquals(calculator.result, 15); }); }); describe("subtract method", () => { it("should subtract a number correctly", () => { calculator.subtract(5); assertEquals(calculator.result, -5); }); it("should be chainable", () => { calculator.subtract(5).subtract(10); assertEquals(calculator.result, -15); }); }); describe("multiply method", () => { beforeEach(() => { // For multiplication tests, start with value 10 calculator = new Calculator(10); }); it("should multiply by a number correctly", () => { calculator.multiply(5); assertEquals(calculator.result, 50); }); it("should be chainable", () => { 
calculator.multiply(2).multiply(3); assertEquals(calculator.result, 60); }); }); describe("divide method", () => { beforeEach(() => { // For division tests, start with value 10 calculator = new Calculator(10); }); it("should divide by a number correctly", () => { calculator.divide(2); assertEquals(calculator.result, 5); }); it("should throw when dividing by zero", () => { assertThrows( () => calculator.divide(0), Error, "Cannot divide by zero", ); }); }); }); ``` To run this test, use the `deno test` command: ```sh deno test calculator_test.ts ``` You'll see output similar to this: ```sh running 1 test from file:///path/to/calculator_test.ts Calculator ✓ should initialize with zero ✓ should initialize with a provided value add method ✓ should add a positive number correctly ✓ should handle negative numbers ✓ should be chainable subtract method ✓ should subtract a number correctly ✓ should be chainable multiply method ✓ should multiply by a number correctly ✓ should be chainable divide method ✓ should divide by a number correctly ✓ should throw when dividing by zero ok | 11 passed | 0 failed (234ms) ``` ## Organizing tests with nested describe blocks One of the powerful features of BDD is the ability to nest `describe` blocks, which helps organize tests hierarchically. In the calculator example, we grouped tests for each method within their own `describe` blocks. This not only makes the tests more readable, but also makes it easier to locate issues when the test fails. You can nest `describe` blocks, but be cautious of nesting too deep as excessive nesting can make tests harder to follow. ## Hooks The BDD module provides four hooks: - `beforeEach` runs before each test in the current describe block - `afterEach` runs after each test in the current describe block - `beforeAll` runs once before all tests in the current describe block - `afterAll` runs once after all tests in the current describe block ### beforeEach/afterEach These hooks are ideal for: - Setting up a fresh test environment for each test - Cleaning up resources after each test - Ensuring test isolation In the calculator example, we used `beforeEach` to create a new calculator instance before each test, ensuring each test starts with a clean state. ### beforeAll/afterAll These hooks are useful for: - Expensive setup operations that can be shared across tests - Setting up and tearing down database connections - Creating and cleaning up shared resources Here's an example of how you might use `beforeAll` and `afterAll`: ```ts describe("Database operations", () => { let db: Database; beforeAll(async () => { // Connect to the database once before all tests db = await Database.connect(TEST_CONNECTION_STRING); await db.migrate(); }); afterAll(async () => { // Disconnect after all tests are complete await db.close(); }); it("should insert a record", async () => { const result = await db.insert({ name: "Test" }); assertEquals(result.success, true); }); it("should retrieve a record", async () => { const record = await db.findById(1); assertEquals(record.name, "Test"); }); }); ``` ## Gherkin vs. JavaScript-style BDD If you're familiar with Cucumber or other BDD frameworks, you might be expecting Gherkin syntax with "Given-When-Then" statements. Deno's BDD module uses a JavaScript-style syntax rather than Gherkin. This approach is similar to other JavaScript testing frameworks like Mocha or Jasmine. However, you can still follow BDD principles by: 1. Writing clear, behavior-focused test descriptions 2. 
Structuring your tests to reflect user stories 3. Following the "Arrange-Act-Assert" pattern in your test implementations For example, you can structure your `it` blocks to mirror the Given-When-Then format: ```ts describe("Calculator", () => { it("should add numbers correctly", () => { // Given const calculator = new Calculator(); // When calculator.add(5); // Then assertEquals(calculator.result, 5); }); }); ``` If you need full Gherkin support with natural language specifications, consider using a dedicated BDD framework that integrates with Deno, such as [cucumber-js](https://github.com/cucumber/cucumber-js). ## Best Practices for BDD with Deno ### Write your tests for humans to read BDD tests should read like documentation. Use clear, descriptive language in your `describe` and `it` statements: ```ts // Good describe("User authentication", () => { it("should reject login with incorrect password", () => { // Test code }); }); // Not good describe("auth", () => { it("bad pw fails", () => { // Test code }); }); ``` ### Keep tests focused Each test should verify a single behavior. Avoid testing multiple behaviors in a single `it` block: ```ts // Good it("should add an item to the cart", () => { // Test adding to cart }); it("should calculate the correct total", () => { // Test total calculation }); // Bad it("should add an item and calculate total", () => { // Test adding to cart // Test total calculation }); ``` ### Use context-specific setup When tests within a describe block need different setup, use nested describes with their own `beforeEach` hooks rather than conditional logic: ```ts // Good describe("User operations", () => { describe("when user is logged in", () => { beforeEach(() => { // Setup logged-in user }); it("should show the dashboard", () => { // Test }); }); describe("when user is logged out", () => { beforeEach(() => { // Setup logged-out state }); it("should redirect to login", () => { // Test }); }); }); // Avoid describe("User operations", () => { beforeEach(() => { // Setup base state if (isLoggedInTest) { // Setup logged-in state } else { // Setup logged-out state } }); it("should show dashboard when logged in", () => { isLoggedInTest = true; // Test }); it("should redirect to login when logged out", () => { isLoggedInTest = false; // Test }); }); ``` ### Handle asynchronous tests properly When testing asynchronous code, remember to: - Mark your test functions as `async` - Use `await` for promises - Handle errors properly ```ts it("should fetch user data asynchronously", async () => { const user = await fetchUser(1); assertEquals(user.name, "John Doe"); }); ``` 🦕 By following the BDD principles and practices outlined in this tutorial, you can build more reliable software and solidify your reasoning about the 'business logic' of your code. Remember that BDD is not just about the syntax or tools but about the collaborative approach to defining and verifying application behavior. The most successful BDD implementations combine these technical practices with regular conversations between developers, testers, product and business stakeholders. To continue learning about testing in Deno, explore other modules in the Standard Library's testing suite, such as [mocking](/examples/mocking_tutorial/) and [snapshot testing](/examples/snapshot_tutorial/). --- # Chat application with WebSockets > A tutorial on building a real-time chat app using Deno WebSockets. 
Learn how to create a WebSocket server with Oak, handle multiple client connections, manage state, and build an interactive chat interface with HTML, CSS, and JavaScript.

URL: https://docs.deno.com/examples/tutorials/chat_app

WebSockets are a powerful tool for building real-time applications. They allow for bidirectional communication between the client and server without the need for constant polling. A frequent use case for WebSockets is chat applications.

In this tutorial we'll create a simple chat app using Deno and the built-in [WebSockets API](/api/web/websockets). The chat app will allow multiple chat clients to connect to the same backend and send group messages. After a client enters a username, they can then start sending messages to other online clients. Each client also displays the list of currently active users.

You can see the [finished chat app on GitHub](https://github.com/denoland/tutorial-with-websockets).

![Chat app UI](./images/websockets.gif)

## Initialize a new project

First, create a new directory for your project and navigate into it.

```sh
deno init chat-app
cd chat-app
```

## Build the backend

We'll start by building the backend server that will handle the WebSocket connections and broadcast messages to all connected clients. We'll use the [`oak`](https://jsr.io/@oak/oak) middleware framework to set up our server. Clients can connect to the server, send messages, and receive updates about other connected users. Additionally, the server will serve the static HTML, CSS and JavaScript files that make up the chat client.

### Import dependencies

First, we'll need to import the necessary dependencies. Use the `deno add` command to add Oak to your project:

```sh
deno add jsr:@oak/oak
```

### Set up the server

In your `main.ts` file, add the following code:

```ts title="main.ts"
import { Application, Context, Router } from "@oak/oak";
import ChatServer from "./ChatServer.ts";

const app = new Application();
const port = 8080;
const router = new Router();
const server = new ChatServer();

router.get("/start_web_socket", (ctx: Context) => server.handleConnection(ctx));

app.use(router.routes());
app.use(router.allowedMethods());
app.use(async (context) => {
  await context.send({
    root: Deno.cwd(),
    index: "public/index.html",
  });
});

console.log("Listening at http://localhost:" + port);
await app.listen({ port });
```

Next, create a new file called `ChatServer.ts` in the same directory as your `main.ts` file.
In this file we'll put the logic for handling the WebSocket connections:

```ts title="ChatServer.ts"
import { Context } from "@oak/oak";

type WebSocketWithUsername = WebSocket & { username: string };
type AppEvent = { event: string; [key: string]: any };

export default class ChatServer {
  private connectedClients = new Map<string, WebSocketWithUsername>();

  public async handleConnection(ctx: Context) {
    const socket = await ctx.upgrade() as WebSocketWithUsername;
    const username = ctx.request.url.searchParams.get("username");

    // Reject the connection if no username was supplied or it is in use
    if (!username || this.connectedClients.has(username)) {
      socket.close(1008, `Username ${username} is already taken`);
      return;
    }

    socket.username = username;
    socket.onopen = this.broadcastUsernames.bind(this);
    socket.onclose = () => {
      this.clientDisconnected(socket.username);
    };
    socket.onmessage = (m) => {
      this.send(socket.username, m);
    };
    this.connectedClients.set(username, socket);

    console.log(`New client connected: ${username}`);
  }

  private send(username: string, message: any) {
    const data = JSON.parse(message.data);
    if (data.event !== "send-message") {
      return;
    }

    this.broadcast({
      event: "send-message",
      username: username,
      message: data.message,
    });
  }

  private clientDisconnected(username: string) {
    this.connectedClients.delete(username);
    this.broadcastUsernames();

    console.log(`Client ${username} disconnected`);
  }

  private broadcastUsernames() {
    const usernames = [...this.connectedClients.keys()];
    this.broadcast({ event: "update-users", usernames });

    console.log("Sent username list:", JSON.stringify(usernames));
  }

  private broadcast(message: AppEvent) {
    const messageString = JSON.stringify(message);
    for (const client of this.connectedClients.values()) {
      client.send(messageString);
    }
  }
}
```

This code sets up a `handleConnection` method that is called when a new WebSocket connection is established. It receives a Context object from the Oak framework and upgrades it to a WebSocket connection. It extracts the username from the URL query parameters. If no username was provided, or the username is already taken (i.e., it exists in `connectedClients`), it closes the socket with an appropriate message. Otherwise, it sets the username property on the socket, assigns event handlers, and adds the socket to `connectedClients`.

When the socket opens, it triggers the `broadcastUsernames` method, which sends the list of connected usernames to all clients.

When the socket closes, it calls the `clientDisconnected` method to remove the client from the list of connected clients.

When a message of type `send-message` is received, it broadcasts the message to all connected clients, including the sender's username.

## Build the frontend

We'll build a simple UI that shows a text input and a send button and displays the sent messages, alongside a list of users in the chat.

### HTML

In your new project directory, create a `public` folder and add an `index.html` file with the following code:

```html title="index.html"
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Deno Chat App</title>
    <link rel="stylesheet" href="style.css" />
  </head>
  <body>
    <header>
      <h1>🦕 Deno Chat App</h1>
    </header>
    <aside>
      <ul id="users"></ul>
    </aside>
    <main>
      <div id="conversation"></div>
      <form id="form">
        <input type="text" id="data" placeholder="Type a message..." />
        <button type="submit">Send</button>
      </form>
    </main>
    <template id="message">
      <div>
        <span></span>
        <p></p>
      </div>
    </template>
    <script src="app.js"></script>
  </body>
</html>
```

### CSS

If you'd like to style your chat app, create a `style.css` file in the `public` folder and add this [pre-made CSS](https://raw.githubusercontent.com/denoland/tutorial-with-websockets/refs/heads/main/public/style.css).

### JavaScript

We'll set up the client-side JavaScript in an `app.js` file; you'll have seen it linked in the HTML we just wrote. In the `public` folder, add an `app.js` file with the following code:

```js title="app.js"
const myUsername = prompt("Please enter your name") || "Anonymous";
const url = new URL(`./start_web_socket?username=${myUsername}`, location.href);
url.protocol = url.protocol.replace("http", "ws");
const socket = new WebSocket(url);

socket.onmessage = (event) => {
  const data = JSON.parse(event.data);

  switch (data.event) {
    case "update-users":
      updateUserList(data.usernames);
      break;

    case "send-message":
      addMessage(data.username, data.message);
      break;
  }
};

function updateUserList(usernames) {
  const userList = document.getElementById("users");
  userList.replaceChildren();

  for (const username of usernames) {
    const listItem = document.createElement("li");
    listItem.textContent = username;
    userList.appendChild(listItem);
  }
}

function addMessage(username, message) {
  const template = document.getElementById("message");
  const clone = template.content.cloneNode(true);
  clone.querySelector("span").textContent = username;
  clone.querySelector("p").textContent = message;
  document.getElementById("conversation").prepend(clone);
}

const inputElement = document.getElementById("data");
inputElement.focus();

const form = document.getElementById("form");
form.onsubmit = (e) => {
  e.preventDefault();
  const message = inputElement.value;
  inputElement.value = "";
  socket.send(JSON.stringify({ event: "send-message", message }));
};
```

This code prompts the user for a username, then creates a WebSocket connection to the server with the username as a query parameter. It listens for messages from the server and either updates the list of connected users or adds a new message to the chat window. It also sends messages to the server when the user submits the form, either by pressing enter or clicking the send button. We use an [HTML template](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/template) to scaffold out the new messages to show in the chat window.

## Run the server

To run the server we'll need to grant the necessary permissions to Deno. In your `deno.json` file, update the `dev` task to allow read and network access:

```diff title="deno.json"
-"dev": "deno run --watch main.ts"
+"dev": "deno run --allow-net --allow-read --watch main.ts"
```

Now if you visit [http://localhost:8080](http://localhost:8080/) you will be able to start a chat session. You can open 2 simultaneous tabs and try chatting with yourself.

![Chat app UI](./images/websockets.gif)

🦕 Now that you can use WebSockets with Deno, you're ready to build all kinds of realtime applications! WebSockets can be used to build realtime dashboards, games, collaborative editing tools, and much more! If you're looking for ways to expand upon your chat app, perhaps you could consider adding data to the messages to allow you to style messages differently if they're sent from you or someone else. Whatever you're building, Deno will WebSocket to ya!

---

# Updating from CommonJS to ESM

> Step-by-step guide to migrating Node.js projects from CommonJS to ESM modules. Learn about import/export syntax changes, module resolution differences, and how to use modern JavaScript features in Deno.
URL: https://docs.deno.com/examples/tutorials/cjs_to_esm If your Node.js project uses CommonJS modules (e.g. it uses `require`), you'll need to update your code to use [ECMAScript modules (ESM)](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules) to run it in Deno. This guide will help you update your code to use ESM syntax. ## Module imports and exports Deno supports [ECMAScript modules](/runtime/fundamentals/modules/) exclusively. If your Node.js code uses [`require`](https://nodejs.org/api/modules.html#modules-commonjs-modules), you should update it to use `import` statements instead. If your internal code uses CommonJS-style exports, those will also need to be updated. A typical CommonJS-style project might look similar to this: ```js title="add_numbers.js" module.exports = function addNumbers(num1, num2) { return num1 + num2; }; ``` ```js title="index.js" const addNumbers = require("./add_numbers"); console.log(addNumbers(2, 2)); ``` To convert these to [ECMAScript modules](/runtime/fundamentals/modules/), we'll make a few minor changes: ```js title="add_numbers.js" export function addNumbers(num1, num2) { return num1 + num2; } ``` ```js title="index.js" import { addNumbers } from "./add_numbers.js"; console.log(addNumbers(2, 2)); ``` Exports: | CommonJS | ECMAScript modules | | ------------------------------------ | ---------------------------------- | | `module.exports = function add() {}` | `export default function add() {}` | | `exports.add = function add() {}` | `export function add() {}` | Imports: | CommonJS | ECMAScript modules | | ------------------------------------------ | ---------------------------------------- | | `const add = require("./add_numbers");` | `import add from "./add_numbers.js";` | | `const { add } = require("./add_numbers")` | `import { add } from "./add_numbers.js"` | ### Quick fix with VS Code If you are using VS Code, you can use its built-in feature to convert CommonJS to ES6 modules. Right-click on the `require` statement, or the lightbulb icon and select `Quick Fix` and then `Convert to ES module`. ![Quick Fix](./images/quick-fix.png) ### CommonJS vs ECMAScript resolution An important distinction between the two module systems is that ECMAScript resolution requires the full specifier **including the file extension**. Omitting the file extension, and special handling of `index.js`, are features unique to CommonJS. The benefit of the ECMAScript resolution is that it works the same across the browser, Deno, and other runtimes. | CommonJS | ECMAScript modules | | -------------------- | ----------------------------- | | `"./add_numbers"` | `"./add_numbers.js"` | | `"./some/directory"` | `"./some/directory/index.js"` | :::tip Deno can add all the missing file extensions for you by running `deno lint --fix`. Deno's linter comes with a `no-sloppy-imports` rule that will show a linting error when an import path doesn't contain the file extension. ::: 🦕 Now that you know how to port from CJS to ESM you can take advantage of the modern features that ESM offers, such as async module loading, interop with browsers, better readability, standardization and future proofing. --- # Deploying Deno to Cloudflare Workers > Step-by-step tutorial on deploying Deno functions to Cloudflare Workers. Learn how to configure denoflare, create worker modules, test locally, and deploy your code to Cloudflare's global edge network. 
URL: https://docs.deno.com/examples/tutorials/cloudflare_workers Cloudflare Workers allows you to run JavaScript on Cloudflare's edge network. This is a short How To guide on deploying a Deno function to Cloudflare Workers. Note: You would only be able to deploy [Module Workers](https://developers.cloudflare.com/workers/learning/migrating-to-module-workers/) instead of web servers or apps. ## Setup `denoflare` In order to deploy Deno to Cloudflare, we'll use this community created CLI [`denoflare`](https://denoflare.dev/). [Install it](https://denoflare.dev/cli/#installation): ```shell deno install --unstable-worker-options --allow-read --allow-net --global --allow-env --allow-run --name denoflare --force \ https://raw.githubusercontent.com/skymethod/denoflare/v0.6.0/cli/cli.ts ``` ## Create your function In a new directory, let's create a `main.ts` file, which will contain our Module Worker function: ```ts export default { fetch(request: Request): Response { return new Response("Hello, world!"); }, }; ``` At the very minimum, a Module Worker function must `export default` an object that exposes a `fetch` function, which returns a `Response` object. You can test this locally by running: ```shell denoflare serve main.ts ``` If you go to `localhost:8080` in your browser, you'll see the response will say: ```console Hello, world! ``` ## Configure `.denoflare` The next step is to create a `.denoflare` config file. In it, let's add: ```json { "$schema": "https://raw.githubusercontent.com/skymethod/denoflare/v0.5.11/common/config.schema.json", "scripts": { "main": { "path": "/absolute/path/to/main.ts", "localPort": 8000 } }, "profiles": { "myprofile": { "accountId": "abcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", "apiToken": "abcxxxxxxxxx_-yyyyyyyyyyyy-11-dddddddd" } } } ``` You can find your `accountId` by going to your [Cloudflare dashboard](https://dash.cloudflare.com/), clicking "Workers", and finding "Account ID" on the right side. You can generate an `apiToken` from your [Cloudflare API Tokens settings](https://dash.cloudflare.com/profile/api-tokens). When you create an API token, be sure to use the template "Edit Cloudflare Workers". After you add both to your `.denoflare` config, let's try pushing it to Cloudflare: ```console denoflare push main ``` Next, you can view your new function in your Cloudflare account: ![New function on Cloudflare Workers](./images/how-to/cloudflare-workers/main-on-cloudflare.png) Boom! --- # Connecting to databases > A guide to database connectivity in Deno. Learn how to use MySQL, PostgreSQL, MongoDB, SQLite, Firebase, Supabase, and popular ORMs to build data-driven applications with TypeScript. URL: https://docs.deno.com/examples/tutorials/connecting_to_databases It is common for applications to store and retrieve data from databases. Deno supports connecting to many database management systems. Deno supports multiple third-party modules that allow you to connect to SQL and NoSQL databases, including MySQL, PostgreSQL, MongoDB, SQLite, Firebase, and Supabase. You can find helpful database connectivity modules on [JSR](https://jsr.io/@db) and deno supports many npm packages with the use of [npm specifiers](/runtime/fundamentals/node/#using-npm-packages). ## SQLite SQLite is a self-contained, serverless, zero-configuration, and transactional SQL database engine. It is a popular choice for local storage in applications. 
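For a first taste, Deno's Node compatibility layer includes the `node:sqlite` module, so a dependency-free sketch looks like this (an in-memory database is used purely for illustration):

```ts
// A minimal sketch with the built-in node:sqlite module (no dependencies).
import { DatabaseSync } from "node:sqlite";

// ":memory:" keeps the database in RAM; pass a file path to persist it.
const db = new DatabaseSync(":memory:");

db.exec("CREATE TABLE greetings (id INTEGER PRIMARY KEY, message TEXT)");
db.prepare("INSERT INTO greetings (message) VALUES (?)").run("Hello, SQLite!");

const row = db.prepare("SELECT id, message FROM greetings").get();
console.log(row); // { id: 1, message: "Hello, SQLite!" }

db.close();
```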
You can use multiple modules to connect to SQLite in Deno, including the built-in [`node:sqlite` module](/api/node_sqlite/) shown above and the [sqlite](https://jsr.io/@db/sqlite) module on JSR.

To use the [sqlite](https://jsr.io/@db/sqlite) module to connect to SQLite in your Deno apps:

```sh
deno add jsr:@db/sqlite
```

Then, import the `Database` class from the module and create a new database instance. You can then execute SQL queries against the database:

```ts title="main.ts"
import { Database } from "@db/sqlite";

const db = new Database("test.db");

const [version] = db.prepare("select sqlite_version()").value<[string]>()!;
console.log(version);

db.close();
```

This module depends on Deno FFI, so you need to run your script with the `--allow-ffi` flag:

```sh
deno run --allow-ffi main.ts
```

## MySQL

You can use the [mysql npm module](https://www.npmjs.com/package/mysql) to connect to MySQL databases. Install the module with the npm specifier:

```sh
deno add npm:mysql
```

Then, import the `mysql` module and create a connection to your MySQL database:

```ts
import mysql from "mysql";

// Minimal connection config (edit as needed or use env vars)
const connection = mysql.createConnection({
  host: Deno.env.get("MYSQL_HOST") || "localhost",
  port: Number(Deno.env.get("MYSQL_PORT") || "3306"),
  user: Deno.env.get("MYSQL_USER") || "root",
  password: Deno.env.get("MYSQL_PASSWORD") || "",
  database: Deno.env.get("MYSQL_DATABASE") || "test",
});

connection.connect((err) => {
  if (err) {
    console.error("Connection error:", err);
    return;
  }
  console.log("Connected!");

  connection.query("SELECT VERSION() AS version", (err, results) => {
    if (err) {
      console.error("Query error:", err);
    } else {
      console.log("MySQL version:", results[0].version);
    }
    connection.end();
  });
});
```

## Postgres

PostgreSQL is a powerful, open source object-relational database system. You can use multiple modules to connect to PostgreSQL in Deno, including [pg](https://www.npmjs.com/package/pg) or [postgresjs](https://www.npmjs.com/package/postgres). Install the module with the npm specifier:

```sh
deno add npm:pg
```

First, import the `Client` class from the `pg` module and create a new client instance. Then connect to the database passing an object with the connection details:

```ts
import { Client } from "pg";

// Connection config (edit or use env vars)
const client = new Client({
  host: Deno.env.get("PGHOST") || "localhost",
  port: Number(Deno.env.get("PGPORT") || "5432"),
  user: Deno.env.get("PGUSER") || "postgres",
  password: Deno.env.get("PGPASSWORD") || "postgres",
  database: Deno.env.get("PGDATABASE") || "postgres",
});

async function main() {
  try {
    await client.connect();
    console.log("Connected!");
    const res = await client.query("SELECT version() AS version");
    console.log("Postgres version:", res.rows[0].version);
  } catch (err) {
    console.error("Connection/query error:", err);
  } finally {
    await client.end();
  }
}

main();
```

## MongoDB

MongoDB is a popular NoSQL database that stores data in flexible, JSON-like documents. You can use the official [MongoDB Node.js](https://www.npmjs.com/package/mongodb) driver to connect to MongoDB, or the [Mongo db driver](https://jsr.io/@db/mongo) from JSR.
Import the MongoDB driver, set up connection configuration then connect to a MongoDB instance:

```ts title="main.js"
import { MongoClient } from "mongodb";

const url = "mongodb://mongo:mongo@localhost:27017"; // username:password@host:port
const client = new MongoClient(url);
const dbName = "myProject";

await client.connect();
console.log("Connected successfully to server");

const db = client.db(dbName);
const collection = db.collection("documents");

const insertResult = await collection.insertMany([{ a: 1 }, { a: 2 }]);
console.log("Inserted documents =>", insertResult);

await client.close();
```

## Firebase

Firebase is a platform developed by Google for creating mobile and web applications. It provides a variety of services, including a NoSQL database, authentication, and hosting.

To connect to Firebase, you can use the official npm modules provided by Firebase. You will need to update your `deno.json` to tell Deno to use a `node_modules` directory, and allow scripts when installing:

```json title="deno.json"
"nodeModulesDir": "auto"
```

```sh
deno add npm:firebase --allow-scripts
```

Then import the necessary functions from the Firebase modules and initialize your app and services:

```js
import { initializeApp } from "firebase/app";
import { doc, getDoc, getFirestore, setDoc } from "firebase/firestore";

// Replace with your Firebase config (get from Firebase Console)
const firebaseConfig = {
  apiKey: "YOUR_API_KEY",
  authDomain: "YOUR_PROJECT_ID.firebaseapp.com",
  projectId: "YOUR_PROJECT_ID",
  storageBucket: "YOUR_PROJECT_ID.appspot.com",
  messagingSenderId: "YOUR_MESSAGING_SENDER_ID",
  appId: "YOUR_APP_ID",
};

// Initialize Firebase
const app = initializeApp(firebaseConfig);
const db = getFirestore(app);

// Demo: write and read a document
async function demo() {
  const ref = doc(db, "demo", "testdoc");
  await setDoc(ref, { hello: "world", time: Date.now() });
  const snap = await getDoc(ref);
  console.log("Document data:", snap.data());
}

demo().catch(console.error);
```

## Supabase

Supabase is an open-source Firebase alternative that provides a suite of tools and services to help you build and scale applications. It offers a hosted PostgreSQL database, authentication, real-time subscriptions, and storage.

To connect to Supabase, you can use the [@supabase/supabase-js](https://www.npmjs.com/package/@supabase/supabase-js) npm module. First, install the module with the npm specifier:

```sh
deno add npm:@supabase/supabase-js --allow-scripts
```

Then, import the `createClient` function from the module and create a new Supabase client instance. You will need your Supabase project URL and an API key, which you can find in your Supabase project settings:

```ts
import { createClient } from "@supabase/supabase-js";

const url = Deno.env.get("SUPABASE_URL") ?? "https://YOUR-PROJECT.ref.supabase.co";
const key = Deno.env.get("SUPABASE_SERVICE_ROLE_KEY") ?? "";
const supabase = createClient(url, key);

async function main() {
  const { data, error } = await supabase
    .from("demo")
    .insert({ message: `Hello @ ${new Date().toISOString()}` })
    .select()
    .maybeSingle();

  if (error) {
    console.error("Insert failed:", error.message);
    console.error(
      "Hint: If this is an RLS error, either disable RLS on 'demo' or add a policy allowing anon inserts.",
    );
    return;
  }

  console.log("Inserted row:", data);
}

if (import.meta.main) main();
```

## ORMs

Object-Relational Mappings (ORM) define your data models as classes that you can persist to a database. You can read and write data in your database through instances of these classes. Deno supports multiple ORMs, including Prisma, Drizzle, and Kysely.
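To make that idea concrete, here is a minimal sketch of the query-builder/ORM style using [Kysely](https://www.npmjs.com/package/kysely) with the `pg` driver. The `person` table, its columns, and the `DATABASE_URL` environment variable are assumptions for illustration (install with `deno add npm:kysely npm:pg`):

```ts
import { Kysely, PostgresDialect } from "npm:kysely";
import pg from "npm:pg";

// Describe your schema as types; Kysely type-checks queries against it.
// The "person" table here is a hypothetical example.
interface Database {
  person: { id: number; name: string };
}

const db = new Kysely<Database>({
  dialect: new PostgresDialect({
    // DATABASE_URL is an assumed env var, e.g. postgres://user:pass@localhost:5432/db
    pool: new pg.Pool({ connectionString: Deno.env.get("DATABASE_URL") }),
  }),
});

// Fully typed: table and column names are validated at compile time.
const people = await db
  .selectFrom("person")
  .select(["id", "name"])
  .execute();

console.log(people);
```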
Deno supports multiple ORMs, including Prisma, Drizzle, and Kysely. 🦕 Now that you can connect your Deno project to a database, you'll be able to work with persistent data, perform CRUD operations, and start building more complex applications. --- # Better debugging with the console API > An in-depth guide to advanced console debugging in Deno. Learn about console.table, timers, counters, tracers, and how to leverage the full console API beyond basic logging for better debugging workflows. URL: https://docs.deno.com/examples/tutorials/debugging_with_console Some of the console API is probably muscle memory for web developers, but there is so much more than just `console.log()` for you to use. Deno has great support for this API, so whether you’re writing JavaScript for the browser or for the server, it’s worth learning about these helpful utilities. Let’s take a look at some of this API’s most useful methods. Your debugging is going to get so much easier! ## `console.log()` Hello, old friend! You’ll most likely be using this to output logging messages to the console to help you debug. ```js console.log("Hello, world!"); // "Hello, world!" ``` You can output multiple items by separating them with commas like so: ```jsx const person = { "name": "Jane", "city": "New York" }; console.log("Hello,", person.name, "from", person.city); // "Hello, Jane from New York" ``` Or you can use template literals: ```jsx const person = { "name": "Jane", "city": "New York" }; console.log(`Hello ${person.name} from ${person.city}`); // "Hello Jane from New York" ``` You can also [apply some styling using CSS](/examples/color_logging/) with the `%c` directive: ```jsx console.log("Wild %cblue", "color: blue", "yonder"); // Applies a blue text color to the word "blue" ``` …but there is much more you can do with the console API. ## `console.table()` The `table` method is helpful for outputting structured data like objects for easier inspection. ```jsx const people = { "john": { "age": 30, "city": "New York", }, "jane": { "age": 25, "city": "Los Angeles", }, }; console.table(people); /* ┌───────┬─────┬───────────────┐ │ (idx) │ age │ city │ ├───────┼─────┼───────────────┤ │ john │ 30 │ "New York" │ │ jane │ 25 │ "Los Angeles" │ └───────┴─────┴───────────────┘ */ ``` You can also specify the properties of your object that you’d like to include in the table. This is great for inspecting a summary of those detailed objects to see just the part you are concerned with. ```jsx console.table(people, ["city"]); /* outputs ┌───────┬───────────────┐ │ (idx) │ city │ ├───────┼───────────────┤ │ john │ "New York" │ │ jane │ "Los Angeles" │ └───────┴───────────────┘ */ ``` ## Timer methods Understanding how long specific parts of your application take is key to removing performance bottlenecks and expensive operations. If you’ve ever reached for JavaScript’s date method to make yourself a timer, you’ll wish you’d known about this one long ago. It’s more convenient and more accurate. Try using [`console.time()`](https://developer.mozilla.org/en-US/docs/Web/API/console/time_static), [`console.timeLog()`](https://developer.mozilla.org/en-US/docs/Web/API/console/timeLog_static), and [`console.timeEnd()`](https://developer.mozilla.org/en-US/docs/Web/API/console/timeEnd_static) instead. ```jsx console.time("My timer"); // starts a timer with label "My timer" // do some work... console.timeLog("My timer"); // outputs the current timer value, e.g. "My timer: 9000ms" // do more work... console.timeEnd("My timer"); // stops "My timer" and reports its value, e.g. "My timer: 97338ms" ``` You can create multiple timers each with their own label. Very handy! ## Counting things with `console.count()` It can be helpful to keep a count of how many times specific operations in your code have been executed. Rather than doing this manually you can use [`console.count()`](https://developer.mozilla.org/en-US/docs/Web/API/console/count_static) which can maintain multiple counters for you based on the label you provide. ```jsx // increment the default counter console.count(); console.count(); console.count(); /* "default: 1" "default: 2" "default: 3" */ ``` This can be very handy inside a function, passing in a label like so: ```jsx function pat(animal) { console.count(animal); return `Patting the ${animal}`; } pat("cat"); pat("cat"); pat("dog"); pat("cat"); /* "cat: 1" "cat: 2" "dog: 1" "cat: 3" */ ``` ## Going deeper with `console.trace()` For a detailed view of what is happening in your application, you can output a stack trace to the console with [`console.trace()`](https://developer.mozilla.org/en-US/docs/Web/API/console/trace_static): ```jsx // main.js function foo() { function bar() { console.trace(); } bar(); } foo(); /* Trace at bar (file:///PATH_TO/main.js:3:13) at foo (file:///PATH_TO/main.js:5:3) at file:///PATH_TO/main.js:8:1 */ ``` There’s more to explore, but these handy methods can give your JavaScript debugging a boost and they are ready and waiting for you to use right in your browser or in your Deno application. Take a look at [console support](/api/web/~/Console) in the API Reference docs for more. --- # Deploy an app with Deno Deploy > A step-by-step tutorial for deploying your first Deno application to Deno Deploy. URL: https://docs.deno.com/examples/tutorials/deno_deploy Deno Deploy allows you to host your Deno applications on a global edge network, with built-in telemetry and CI/CD tooling. This tutorial guides you through creating and deploying a simple Deno application using Deno Deploy. ## Prerequisites 1. A [GitHub](https://github.com) account 2. [Deno installed](https://docs.deno.com/runtime/manual/getting_started/installation) on your local machine 3. A [Deno Deploy](https://console.deno.com/account) account ## Create a simple Deno application with Vite First, let's create a basic application with Vite. Initialize a new [Vite](https://vite.dev/guide/) project: ```sh deno init --npm vite ``` Give your project a name and select your framework and variant. For this tutorial, we'll create a vanilla TypeScript app. Change into your newly created project directory with `cd my-project-name`, then run: ```sh deno install deno run dev ``` You should see a basic app running at [http://127.0.0.1:5173/](http://127.0.0.1:5173/). You can edit the `main.ts` file to see changes in the browser. ## Create a GitHub repository 1. Go to [GitHub](https://github.com) and create a new repository. 2. Initialize your local directory as a Git repository: ```sh git init git add . git commit -m "Initial commit" ``` 3. Add your GitHub repository as a remote and push your code: ```sh git remote add origin https://github.com/your-username/my-first-deno-app.git git branch -M main git push -u origin main ``` ## Create a Deno Deploy organization 1. Navigate to [console.deno.com](https://console.deno.com) 2. Click "+ New Organization" 3. Select the 'Standard Deploy' organization type 4. Enter an organization name and slug (this cannot be changed later) 5. Click "Create Standard Deploy organization" ## Create and deploy your application 1. Click "+ New App"
2. Select the GitHub repository you created earlier 3. The app configuration should be automatically detected, but you can verify these settings by clicking the "Edit build config" button: - Framework preset: No preset - Runtime configuration: Static Site - Install command: `deno install` - Build command: `deno task build` - Static Directory: `dist` 4. Click "Create App" to start the deployment process ## Monitor your deployment 1. Watch the build logs as your application is deployed 2. Once deployment completes, you'll see a preview URL (typically `https://your-app-name.your-org-name.deno.net`) 3. Click the URL to view your deployed application! ## Make changes and redeploy Let's update the application and see how changes are deployed: 1. Update your `main.ts` file locally: ```ts title="main.ts" import './style.css' import typescriptLogo from './typescript.svg' import viteLogo from '/vite.svg' import { setupCounter } from './counter.ts' document.querySelector('#app')!.innerHTML = `
  <div>
    <a href="https://vite.dev" target="_blank">
      <img src="${viteLogo}" class="logo" alt="Vite logo" />
    </a>
    <a href="https://www.typescriptlang.org/" target="_blank">
      <img src="${typescriptLogo}" class="logo vanilla" alt="TypeScript logo" />
    </a>
    <h1>Hello from Deno Deploy!</h1>
    <div class="card">
      <button id="counter" type="button"></button>
    </div>
    <p class="read-the-docs">
      Click on the Vite and TypeScript logos to learn more
    </p>
  </div>
`
setupCounter(document.querySelector('#counter')!)
```

2. Commit and push your changes: ```sh git add . git commit -m "Update application" git push ``` Return to your Deno Deploy dashboard to see a new build automatically start. Once the build completes, visit your application URL to see the update. ## Explore observability features Deno Deploy provides comprehensive observability tools: 1. From your application dashboard, click "Logs" in the sidebar - You'll see console output from your application - Use the search bar to filter logs (e.g., `context:production`) 2. Click "Traces" to view request traces - Select a trace to see detailed timing information - Examine spans to understand request processing 3. Click "Metrics" to view application performance metrics - Monitor request counts, error rates, and response times 🦕 Now that you've deployed your first application, you might want to: 1. [Add a custom domain](/deploy/reference/domains/) to your application 2. Explore [framework support](/deploy/reference/frameworks/) for Next.js, Astro, and other frameworks 3. Learn about [caching strategies](/deploy/reference/caching/) to improve performance 4. Set up different [environments](/deploy/reference/env_vars_and_contexts/) for development and production --- # Generating documentation with deno doc > Learn how to generate professional documentation for your Deno projects using the built-in deno doc command. This tutorial covers JSDoc comments, HTML output, linting, and best practices for documenting your code. URL: https://docs.deno.com/examples/tutorials/deno_doc Good documentation is essential for any software project. It helps other developers understand your code, makes maintenance easier, and improves the overall quality of your project. Deno includes a built-in documentation generator called `deno doc` that can automatically generate searchable documentation from your TypeScript and JavaScript code. `deno doc` works out of the box with no setup required, and can generate HTML, JSON, or terminal output. It leverages JSDoc comments for documentation and automatically extracts type information from TypeScript type annotations in your code. :::info Automatic documentation with JSR If you're publishing your package to [JSR (JavaScript Registry)](https://jsr.io), you get beautiful documentation automatically generated for free! JSR uses the same `deno doc` technology under the hood to create searchable, web-based documentation for all published packages. Simply publish your well-documented code with `deno publish` and JSR handles the rest. ::: ## Setting up a sample project Let's create a sample library to demonstrate `deno doc` features. We'll build a simple mathematics utilities library with proper documentation. ````ts title="math.ts" /** * A collection of mathematical utility functions. * @module */ /** * Adds two numbers together. * * @example * ```ts * import { add } from "./math.ts"; * * const result = add(5, 3); * console.log(result); // 8 * ``` * * @param x The first number * @param y The second number * @returns The sum of x and y */ export function add(x: number, y: number): number { return x + y; } /** * Multiplies two numbers together. * * @example * ```ts * import { multiply } from "./math.ts"; * * const result = multiply(4, 3); * console.log(result); // 12 * ``` * * @param x The first number * @param y The second number * @returns The product of x and y */ export function multiply(x: number, y: number): number { return x * y; } /** * Represents a 2D point in space.
* * @example * ```ts * import { Point } from "./math.ts"; * * const point = new Point(5, 10); * console.log(point.distance()); // 11.180339887498949 * ``` */ export class Point { /** * Creates a new Point instance. * * @param x The x-coordinate * @param y The y-coordinate */ constructor(public x: number, public y: number) {} /** * Calculates the distance from the origin (0, 0). * * @returns The distance from the origin */ distance(): number { return Math.sqrt(this.x * this.x + this.y * this.y); } /** * Calculates the distance to another point. * * @param other The other point * @returns The distance between the two points */ distanceTo(other: Point): number { const dx = this.x - other.x; const dy = this.y - other.y; return Math.sqrt(dx * dx + dy * dy); } } /** * Configuration options for mathematical operations. */ export interface MathConfig { /** The precision for floating-point calculations */ precision?: number; /** Whether to round results to integers */ roundToInt?: boolean; } /** * Performs advanced mathematical operations with configuration. * * @example * ```ts * import { calculate } from "./math.ts"; * * const result = calculate(10, 3, { precision: 2, roundToInt: false }); * console.log(result); // 3.33 * ``` */ export function calculate( dividend: number, divisor: number, config: MathConfig = {}, ): number { const { precision = 10, roundToInt = false } = config; let result = dividend / divisor; if (roundToInt) { result = Math.round(result); } else { result = Math.round(result * Math.pow(10, precision)) / Math.pow(10, precision); } return result; } ```` ## Basic documentation generation The simplest way to generate documentation is to run `deno doc` with your source files: ```bash deno doc math.ts ``` This will output the documentation to your terminal, showing all exported functions, classes, and interfaces with their JSDoc comments. ## Generating HTML documentation To create a documentation website with HTML, CSS and JS, use the `--html` flag: ```bash deno doc --html --name="Math Utilities" math.ts ``` This generates a static site in the `./docs/` directory. The site includes: - A searchable interface - Syntax highlighting - Cross-references between symbols - Mobile-friendly responsive design You can also specify a custom output directory: ```bash deno doc --html --name="Math Utilities" --output=./documentation/ math.ts ``` ## Documentation linting Use the `--lint` flag to check for documentation issues: ```bash deno doc --lint math.ts ``` This will report several types of problems: 1. Missing JSDoc comments on exported functions, classes, or interfaces 2. Missing return types on functions 3. Unexported types referenced by exported symbols Let's create a file with some documentation issues to see the linter in action: ```ts title="bad_example.ts" // Missing JSDoc comment export function badFunction(x) { return x * 2; } interface InternalType { value: string; } // References non-exported type export function anotherFunction(param: InternalType) { return param.value; } ``` Running `deno doc --lint bad_example.ts` will show errors for these issues. ## Working with multiple files You can document multiple files at once: ```bash deno doc --html --name="My Library" ./mod.ts ./utils.ts ./types.ts ``` For larger projects, create a main module file that re-exports everything: ````ts title="mod.ts" /** * Math Utilities Library * * A comprehensive collection of mathematical functions and utilities. 
* * @example * ```ts * import { add, multiply, Point } from "./mod.ts"; * * const sum = add(5, 3); * const product = multiply(4, 2); * const point = new Point(3, 4); * ``` * * @module */ export * from "./math.ts"; ```` Then generate documentation from the main module: ```bash deno doc --html --name="Math Utilities" mod.ts ``` ## JSON output for automation Generate documentation in JSON format for use with other tools: ```bash deno doc --json math.ts > documentation.json ``` The JSON output provides a low-level representation of your code's structure, including symbol definitions and basic type information. This format is primarily useful for building custom documentation tools or integrating with other systems that need programmatic access to your code's API surface. ## Best practices for JSDoc comments To get the most out of `deno doc`, follow these JSDoc best practices: ### 1. Use descriptive summaries ```ts /** * Calculates the factorial of a number using recursion. * * @param n The number to calculate factorial for * @returns The factorial of n */ export function factorial(n: number): number { // Implementation... } ``` ### 2. Provide concrete examples ````ts /** * Formats a number as currency. * * @example * ```ts * formatCurrency(123.456); // "$123.46" * formatCurrency(1000); // "$1,000.00" * ``` * * @param amount The amount to format * @returns The formatted currency string */ export function formatCurrency(amount: number): string { // Implementation... } ```` ### 3. Document parameters and return values ```ts /** * Validates an email address format. * * @param email The email address to validate * @returns true if valid, false otherwise * @throws {Error} When email is null or undefined */ export function validateEmail(email: string): boolean { // Implementation... } ``` ### 4. Use module-level documentation ```ts /** * Email validation utilities. * * This module provides functions for validating and formatting email addresses * according to RFC 5322 standards. * * @module */ ``` ### 5. Mark deprecated or experimental features ```ts /** * Legacy function for backward compatibility. * * @deprecated Use `newFunction()` instead * @param data The input data */ export function oldFunction(data: string): void { // Implementation... } /** * New experimental feature. * * @experimental This API may change in future versions * @param options Configuration options */ export function experimentalFunction(options: unknown): void { // Implementation... 
} ``` ## Integrating with CI/CD You can integrate documentation generation into your continuous integration pipeline: ```yaml title=".github/workflows/docs.yml" name: Generate Documentation on: push: branches: [main] jobs: docs: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - name: Setup Deno uses: denoland/setup-deno@v2 with: deno-version: v2.x - name: Generate documentation run: deno doc --html --name="My Library" --output=./docs/ mod.ts - name: Deploy to GitHub Pages uses: peaceiris/actions-gh-pages@v3 with: github_token: ${{ secrets.GITHUB_TOKEN }} publish_dir: ./docs ``` ## Configuration in deno.json You can configure documentation generation in your `deno.json` file: ```json title="deno.json" { "tasks": { "doc": "deno doc --html --name='Math Utilities' --output=./docs/ mod.ts", "doc:lint": "deno doc --lint mod.ts", "doc:json": "deno doc --json mod.ts > documentation.json" } } ``` Then run documentation tasks easily: ```bash deno task doc deno task doc:lint deno task doc:json ``` 🦕 The `deno doc` command is a powerful tool for generating professional documentation from your Deno projects. Good documentation makes your code more maintainable and helps other developers understand and use your projects effectively. With `deno doc`, creating comprehensive documentation is just a command away! --- # Deploy an app with the deno deploy command > Step-by-step tutorial for using the deno deploy CLI command to create and deploy your first application to Deno Deploy. URL: https://docs.deno.com/examples/tutorials/deploy_command The `deno deploy` command provides a powerful CLI for deploying and managing applications on [Deno Deploy](https://deno.com/deploy). If you already have an app to deploy you can skip to [Deploying your application](#deploy-your-application), or read on to make and then deploy a simple app. ## Prerequisites Before using the deploy command, you will need a Deno Deploy account, and a Deno Deploy organization set up on that account. To create an account visit [the Deno Deploy dashboard](https://console.deno.com/). To create an organization, follow the steps in the [Deno Deploy documentation](/deploy/). ## Create a simple web application First, let's create a basic HTTP server that will serve as our application. Create a new directory for your project and navigate into it: ```bash mkdir my-deploy-app cd my-deploy-app ``` Initialize a new Deno project: ```bash deno init ``` Replace the contents of `main.ts` with a simple HTTP server: ```ts title="main.ts" Deno.serve({ port: 8000 }, (req) => { const url = new URL(req.url); const userAgent = req.headers.get("user-agent") || "unknown"; const timestamp = new Date().toISOString(); // Log every request console.log( `[${timestamp}] ${req.method} ${url.pathname} - User-Agent: ${userAgent}`, ); // Simple routing if (url.pathname === "/") { console.log("Serving home page"); return new Response(
      `<html>
        <head><title>My Deploy App</title></head>
        <body>
          <h1>Welcome to My Deploy App!</h1>
          <p>This app was deployed using the deno deploy command.</p>
        </body>
      </html>`,
 { headers: { "content-type": "text/html" }, }, ); } if (url.pathname === "/about") { console.log("Serving about page"); return new Response(
      `<html>
        <head><title>About - My Deploy App</title></head>
        <body>
          <h1>About This App</h1>
          <p>This is a simple demonstration of deploying with the deno deploy CLI.</p>
          <p>Check the logs to see request information!</p>
          <p><a href="/">← Back to Home</a></p>
        </body>
      </html>`,
 { headers: { "content-type": "text/html" }, }, ); } if (url.pathname === "/api/status") { const responseData = { status: "ok", timestamp: new Date().toISOString(), message: "API is running successfully", requestCount: Math.floor(Math.random() * 1000) + 1, // Simulate request counter }; console.log("API status check - all systems operational"); console.log(`Response data:`, responseData); return Response.json(responseData); } if (url.pathname === "/api/error") { // This endpoint demonstrates error logging console.error("Error endpoint accessed - demonstrating error logging"); console.warn("This is a warning message that will appear in logs"); return Response.json({ error: "This is a test error for demonstration", timestamp: new Date().toISOString(), tip: "Check the logs with: deno deploy logs", }, { status: 500 }); } // 404 for all other routes console.warn(`404 - Route not found: ${url.pathname}`); return new Response("Not Found", { status: 404 }); }); ``` ### Test your application locally Update the `dev` task in the `deno.json` file in the root to allow network access: ```json "dev": "deno run -N --watch main.ts" ``` Then run the dev command: ```sh deno run dev ``` Visit `http://localhost:8000` to see your application running. Try navigating to the different routes (`/about`, `/api/status`, and `/api/error`) to verify everything works. You'll notice that each request is logged to the console - these are the same logs you'll be able to see when the app is deployed! ## Authentication The `deno deploy` command handles authentication automatically. When you first run a deploy command, it will prompt you to authenticate. Run the deploy command with the `--help` flag to see all available options: ```bash deno deploy --help ``` :::note Deno Deploy organization requirement The `deno deploy` command requires a Deno Deploy organization. If you don't already have an organization set up in your account, you can create one through the [Deno Deploy web app](https://console.deno.com). ::: ## Deploy your application Now let's use the `deno deploy` command to deploy your application! Ensure that you are in the root directory of your project and run: ```bash deno deploy ``` Select the appropriate options in the terminal when prompted. The deployment process will: 1. Make a tarball of your application code 2. Upload the tarball to Deno Deploy 3. Unpack the tarball 4. Build and deploy to the edge network 5. Provide you with a live URL You have now successfully deployed your application! You can visit the returned URL to see your app in action. If you need to make changes to your application, simply update your code and run the `deno deploy` command again. Our demo application has some logging built in, so we can use the built-in logging features of Deno Deploy to monitor the application. ## Monitoring your application ### View application logs After deploying your application, you can stream live logs to see exactly what's happening on the app: ```bash deno deploy logs ``` Visit your application URL and navigate to different pages. You'll see logs like: - Request logs showing HTTP method, path, and user agent - Info logs from `console.log()` calls - Warning logs from `console.warn()` calls - Error logs from `console.error()` calls Open your app URL in the browser and try visiting the `/api/error` endpoint to see the error logs in action.
### View logs for a specific time range To view logs for a specific time range, you can use the `--start` and `--end` flags: ```bash deno deploy logs \ --start "2024-01-01T00:00:00Z" \ --end "2024-01-01T23:59:59Z" ``` ## Managing environment variables Your application might need environment variables for configuration. The `deno deploy` command provides comprehensive environment variable management. ### List environment variables You can view all environment variables for your application: ```bash deno deploy env list ``` ### Add and update environment variables To add individual environment variables, use the `deno deploy env add` command, for example: ```bash deno deploy env add API_KEY "your-secret-key" deno deploy env add DATABASE_URL "postgresql://..." ``` Then to update them, use the `deno deploy env update-value` command, for example: ```bash deno deploy env update-value API_KEY "new-secret-key" deno deploy env update-value DATABASE_URL "postgresql://new-user:new-pass@localhost/new-db" ``` ### Delete environment variables To delete an environment variable, use the `deno deploy env delete` command, for example: ```bash deno deploy env delete API_KEY deno deploy env delete DATABASE_URL ``` ### Load environment variables from a .env file You can also use a `.env` file to load your environment variables into your deployed app: ```bash deno deploy env load .env ``` 🦕 You've successfully deployed your first application with the `deno deploy` command! Check out the [`deno deploy` docs](/runtime/reference/cli/deploy/) for more commands and options. For more information on Deno Deploy, check the [Deno Deploy documentation](/deploy/). --- # Monitor your app with OpenTelemetry and Deno Deploy > A step-by-step tutorial for adding custom OpenTelemetry instrumentation to your Deno Deploy application. URL: https://docs.deno.com/examples/tutorials/deploy_otel Deno Deploy includes built-in OpenTelemetry support that automatically captures traces for HTTP requests, database queries, and other operations. This tutorial shows how to add custom OpenTelemetry instrumentation to your applications for more detailed observability. ## Prerequisites 1. A [GitHub](https://github.com) account 2. [Deno installed](https://docs.deno.com/runtime/manual/getting_started/installation) on your local machine 3. Access to a [Deno Deploy account](https://console.deno.com/) 4.
Basic familiarity with [OpenTelemetry concepts](https://opentelemetry.io/docs/concepts/) ## Create a basic API application First, let's create a simple API server that we'll instrument with OpenTelemetry: ```ts title="main.ts" const dataStore: Record<string, unknown> = {}; async function handler(req: Request): Promise<Response> { const url = new URL(req.url); // Simulate random latency await new Promise((resolve) => setTimeout(resolve, Math.random() * 200)); try { // Handle product listing if (url.pathname === "/products" && req.method === "GET") { return new Response(JSON.stringify(Object.values(dataStore)), { headers: { "Content-Type": "application/json" }, }); } // Handle product creation if (url.pathname === "/products" && req.method === "POST") { const data = await req.json(); const id = crypto.randomUUID(); dataStore[id] = data; return new Response(JSON.stringify({ id, ...data }), { status: 201, headers: { "Content-Type": "application/json" }, }); } // Handle product retrieval by ID if (url.pathname.startsWith("/products/") && req.method === "GET") { const id = url.pathname.split("/")[2]; const product = dataStore[id]; if (!product) { return new Response("Product not found", { status: 404 }); } return new Response(JSON.stringify(product), { headers: { "Content-Type": "application/json" }, }); } // Handle root route if (url.pathname === "/") { return new Response("Product API - Try /products endpoint"); } return new Response("Not Found", { status: 404 }); } catch (error) { console.error("Error handling request:", error); return new Response("Internal Server Error", { status: 500 }); } } console.log("Server running on http://localhost:8000"); Deno.serve({ port: 8000 }, handler); ``` Save this file and run it locally: ```sh deno run --allow-net main.ts ``` Test the API with curl or a browser to ensure it works: ```sh # List products (empty at first) curl http://localhost:8000/products # Add a product curl -X POST http://localhost:8000/products \ -H "Content-Type: application/json" \ -d '{"name": "Test Product", "price": 19.99}' ``` ## Add OpenTelemetry instrumentation Now, let's add custom OpenTelemetry instrumentation to our application.
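The core pattern we will use throughout is `tracer.startActiveSpan()`, which runs a callback with a new span active and returns the callback's result. Here is a minimal sketch of that pattern in isolation (`doWork` is a hypothetical unit of work, not part of the tutorial's code):

```ts
import { trace } from "npm:@opentelemetry/api@1";

const tracer = trace.getTracer("example");

// Hypothetical unit of work to be traced.
async function doWork(): Promise<string> {
  return await Promise.resolve("done");
}

// startActiveSpan runs the callback with the span active and returns its
// result; the callback is responsible for ending the span.
const result = await tracer.startActiveSpan("work", async (span) => {
  try {
    return await doWork();
  } finally {
    span.end();
  }
});
console.log(result);
```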
Create a new file called `instrumented-main.ts`: ```ts title="instrumented-main.ts" import { SpanStatusCode, trace } from "npm:@opentelemetry/api@1"; // Get the OpenTelemetry tracer const tracer = trace.getTracer("product-api"); const dataStore: Record<string, unknown> = {}; // Simulate a database operation with custom span async function queryDatabase( operation: string, data?: unknown, ): Promise<unknown> { return await tracer.startActiveSpan(`database.${operation}`, async (span) => { try { // Add attributes to the span for better context span.setAttributes({ "db.system": "memory-store", "db.operation": operation, }); // Simulate database latency const delay = Math.random() * 100; await new Promise((resolve) => setTimeout(resolve, delay)); // Add latency information to the span span.setAttributes({ "db.latency_ms": delay }); if (operation === "list") { return Object.values(dataStore); } else if (operation === "get") { return dataStore[data as string]; } else if (operation === "insert") { const id = crypto.randomUUID(); dataStore[id] = data as string; return { id, data }; } return null; } catch (error) { // Record any errors to the span span.recordException(error); span.setStatus({ code: SpanStatusCode.ERROR }); throw error; } finally { // End the span when we're done span.end(); } }); } async function handler(req: Request): Promise<Response> { // Create a parent span for the entire request return await tracer.startActiveSpan( `${req.method} ${new URL(req.url).pathname}`, async (parentSpan) => { const url = new URL(req.url); // Add request details as span attributes parentSpan.setAttributes({ "http.method": req.method, "http.url": req.url, "http.route": url.pathname, }); try { // Handle product listing if (url.pathname === "/products" && req.method === "GET") { const products = await queryDatabase("list"); return new Response(JSON.stringify(products), { headers: { "Content-Type": "application/json" }, }); } // Handle product creation if (url.pathname === "/products" && req.method === "POST") { // Create a span for parsing request JSON const data = await tracer.startActiveSpan( "parse.request.body", async (span) => { try { const result = await req.json(); return result; } catch (error) { span.recordException(error); span.setStatus({ code: SpanStatusCode.ERROR }); throw error; } finally { span.end(); } }, ); const result = await queryDatabase("insert", data); return new Response(JSON.stringify(result), { status: 201, headers: { "Content-Type": "application/json" }, }); } // Handle product retrieval by ID if (url.pathname.startsWith("/products/") && req.method === "GET") { const id = url.pathname.split("/")[2]; parentSpan.setAttributes({ "product.id": id }); const product = await queryDatabase("get", id); if (!product) { parentSpan.setAttributes({ "error": true, "error.type": "not_found", }); return new Response("Product not found", { status: 404 }); } return new Response(JSON.stringify(product), { headers: { "Content-Type": "application/json" }, }); } // Handle root route if (url.pathname === "/") { return new Response("Product API - Try /products endpoint"); } parentSpan.setAttributes({ "error": true, "error.type": "not_found" }); return new Response("Not Found", { status: 404 }); } catch (error) { console.error("Error handling request:", error); // Record the error in the span parentSpan.recordException(error); parentSpan.setAttributes({ "error": true, "error.type": error.name, "error.message": error.message, }); parentSpan.setStatus({ code: SpanStatusCode.ERROR }); return new Response("Internal Server Error", { status:
500 }); } finally { // End the parent span when we're done parentSpan.end(); } }, ); } console.log( "Server running with OpenTelemetry instrumentation on http://localhost:8000", ); Deno.serve({ port: 8000 }, handler); ``` Run the instrumented version locally: ```sh deno run --allow-net instrumented-main.ts ``` Test the API again with curl to generate some traces. ## Create a GitHub repository 1. Go to [GitHub](https://github.com) and create a new repository. 2. Initialize your local directory as a Git repository: ```sh git init git add . git commit -m "Add OpenTelemetry instrumented API" ``` 3. Add your GitHub repository as a remote and push your code: ```sh git remote add origin https://github.com/your-username/otel-demo-app.git git branch -M main git push -u origin main ``` ## Deploy to Deno Deploy 1. Navigate to [console.deno.com](https://console.deno.com) 2. Select your organization or create a new one if needed 3. Click "+ New App" 4. Select the GitHub repository you created earlier 5. Configure the build settings: - Framework preset: No preset - Runtime configuration: Dynamic - Entrypoint: `instrumented-main.ts` 6. Click "Create App" to start the deployment process ## Generate sample traffic To generate sample traces and metrics, let's send some traffic to your deployed application: 1. Copy your deployment URL from the Deno Deploy dashboard 2. Send several requests to different endpoints: ```sh # Store your app URL in a variable APP_URL=https://your-app-name.your-org-name.deno.net # Get the root route curl $APP_URL/ # List products (empty at first) curl $APP_URL/products # Add some products curl -X POST $APP_URL/products -H "Content-Type: application/json" -d '{"name": "Laptop", "price": 999.99}' curl -X POST $APP_URL/products -H "Content-Type: application/json" -d '{"name": "Headphones", "price": 129.99}' curl -X POST $APP_URL/products -H "Content-Type: application/json" -d '{"name": "Mouse", "price": 59.99}' # List products again curl $APP_URL/products # Try to access a non-existent product (will generate an error span) curl $APP_URL/products/nonexistent-id ``` ## Explore OpenTelemetry traces and metrics Now let's explore the observability data collected by Deno Deploy: 1. From your application dashboard, click "Traces" in the sidebar - You'll see a list of traces for each request to your application - You can filter traces by HTTP method or status code using the search bar 2. Select one of your `/products` POST traces to see detailed information: - The parent span for the entire request - Child spans for database operations - The span for parsing the request body ![Trace waterfall view](./images/otel_trace.png) 3. Click on individual spans to see their details: - Duration and timing information - Attributes you set like `db.operation` and `db.latency_ms` - Any recorded exceptions 4. Click "Logs" in the sidebar to see console output with trace context: - Notice how logs emitted during a traced operation are automatically linked to the trace - Click "View trace" on a log line to see the associated trace 5. Click "Metrics" to view application performance metrics: - HTTP request counts by endpoint - Error rates - Response time distributions 🦕 The automatic instrumentation in Deno Deploy combined with your custom instrumentation provides comprehensive visibility into your application's performance and behavior.
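Traces are not the only custom signal you can emit. The same `@opentelemetry/api` package also exposes a metrics API, which Deno's built-in OpenTelemetry support collects alongside spans. A minimal sketch; the meter and instrument names here are illustrative, not part of the tutorial's code:

```ts
import { metrics } from "npm:@opentelemetry/api@1";

// Meter and instrument names below are illustrative.
const meter = metrics.getMeter("product-api");

// A counter for created products and a histogram for simulated DB latency.
const productsCreated = meter.createCounter("products_created", {
  description: "Number of products inserted into the data store",
});
const dbLatency = meter.createHistogram("db_latency", {
  description: "Simulated database latency",
  unit: "ms",
});

// Inside your handlers or queryDatabase helper you would then call:
productsCreated.add(1);
dbLatency.record(42, { "db.operation": "insert" });
```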
For more information about OpenTelemetry in Deno, check out these resources: - [OpenTelemetry in Deno documentation](/runtime/fundamentals/open_telemetry/) - [Deno Deploy Observability reference](/deploy/reference/observability/) - [OpenTelemetry official documentation](https://opentelemetry.io/docs/) --- # How to deploy Deno to Digital Ocean > A step-by-step guide to deploying Deno applications on Digital Ocean. Learn about Docker containerization, GitHub Actions automation, container registries, and how to set up continuous deployment workflows. URL: https://docs.deno.com/examples/tutorials/digital_ocean Digital Ocean is a popular cloud infrastructure provider offering a variety of hosting services ranging from networking to compute to storage. Here's a step-by-step guide to deploying a Deno app to Digital Ocean using Docker and GitHub Actions. The prerequisites for this are: - [`docker` CLI](https://docs.docker.com/engine/reference/commandline/cli/) - a [GitHub account](https://github.com) - a [Digital Ocean account](https://digitalocean.com) - [`doctl` CLI](https://docs.digitalocean.com/reference/doctl/how-to/install/) ## Create Dockerfile and docker-compose.yml To focus on the deployment, our app will simply be a `main.ts` file that returns a string as an HTTP response: ```ts title="main.ts" import { Application } from "jsr:@oak/oak"; const app = new Application(); app.use((ctx) => { ctx.response.body = "Hello from Deno and Digital Ocean!"; }); await app.listen({ port: 8000 }); ``` Then, we'll create two files -- `Dockerfile` and `docker-compose.yml` -- to build the Docker image. In our `Dockerfile`, let's add: ```Dockerfile title="Dockerfile" FROM denoland/deno EXPOSE 8000 WORKDIR /app ADD . /app RUN deno install --entrypoint main.ts CMD ["run", "--allow-net", "main.ts"] ``` Then, in our `docker-compose.yml`: ```yml version: "3" services: web: build: . container_name: deno-container image: deno-image ports: - "8000:8000" ``` Let's test this locally by running `docker compose -f docker-compose.yml build`, then `docker compose up`, and going to `localhost:8000`. ![Hello from localhost](./images/how-to/digital-ocean/hello-world-from-localhost.png) It works! ## Build, Tag, and Push your Docker image to Digital Ocean Container Registry Digital Ocean has its own private Container Registry, with which we can push and pull Docker images. In order to use this registry, let's [install and authenticate `doctl` on the command line](https://docs.digitalocean.com/reference/doctl/how-to/install/). After that, we'll create a new private registry named `deno-on-digital-ocean`: ```shell doctl registry create deno-on-digital-ocean ``` Using our Dockerfile and docker-compose.yml, we'll build a new image, tag it, and push it to the registry. Note that `docker-compose.yml` will name the build locally as `deno-image`. ```shell docker compose -f docker-compose.yml build ``` Let's [tag](https://docs.docker.com/engine/reference/commandline/tag/) it with `new`: ```shell docker tag deno-image registry.digitalocean.com/deno-on-digital-ocean/deno-image:new ``` Before pushing, authenticate your Docker client with Digital Ocean Container Registry: ```shell doctl registry login ``` Now we can push it to the registry.
```shell docker push registry.digitalocean.com/deno-on-digital-ocean/deno-image:new ``` You should see your new `deno-image` with the `new` tag in your [Digital Ocean container registry](https://cloud.digitalocean.com/registry): ![New deno image on Digital Ocean container registry](./images/how-to/digital-ocean/new-deno-image-on-digital-ocean-container-registry.png) Perfect! ## Deploy to Digital Ocean via SSH Once our `deno-image` is in the registry, we can run it anywhere using `docker run`. In this case, we'll run it while in our [Digital Ocean Droplet](https://www.digitalocean.com/products/droplets), their hosted virtual machine. While on your [Droplet page](https://cloud.digitalocean.com/droplets), click on your Droplet and then `console` to SSH into the virtual machine. (Or you can [ssh directly from your command line](https://docs.digitalocean.com/products/droplets/how-to/connect-with-ssh/).) To pull down the `deno-image` image and run it, let's run: ```shell docker run -d --restart always -it -p 8000:8000 --name deno-image registry.digitalocean.com/deno-on-digital-ocean/deno-image:new ``` Using our browser to go to the Digital Ocean address, we now see: ![Hello from Deno and Digital Ocean](./images/how-to/digital-ocean/hello-from-deno-and-digital-ocean.png) Boom! ## Automate the Deployment via GitHub Actions Let's automate that entire process with GitHub Actions. First, let's get all of our environmental variables needed for logging into `doctl` and SSHing into the Droplet: - [DIGITALOCEAN_ACCESS_TOKEN](https://docs.digitalocean.com/reference/api/create-personal-access-token/) - DIGITALOCEAN_HOST (the IP address of your Droplet) - DIGITALOCEAN_USERNAME (the default is `root`) - DIGITALOCEAN_SSHKEY (more on this below) ### Generate `DIGITALOCEAN_SSHKEY` The `DIGITALOCEAN_SSHKEY` is a private key whose public counterpart exists on the virtual machine in its `~/.ssh/authorized_keys` file. To do this, first let's run `ssh-keygen` on your local machine: ```shell ssh-keygen ``` When prompted for an email, **be sure to use your GitHub email** for the GitHub Action to authenticate properly. Your final output should look something like this: ```console Output Your identification has been saved in /your_home/.ssh/id_rsa Your public key has been saved in /your_home/.ssh/id_rsa.pub The key fingerprint is: SHA256:/hk7MJ5n5aiqdfTVUZr+2Qt+qCiS7BIm5Iv0dxrc3ks user@host The key's randomart image is: +---[RSA 3072]----+ | .| | + | | + | | . o . | |o S . o | | + o. .oo. .. .o| |o = oooooEo+ ...o| |.. o *o+=.*+o....| | =+=ooB=o.... | +----[SHA256]-----+ ``` Next, we'll have to upload the newly generated public key to your Droplet. You can either use [`ssh-copy-id`](https://www.ssh.com/academy/ssh/copy-id) or manually copy it, SSH into your Droplet, and paste it into `~/.ssh/authorized_keys`. Using `ssh-copy-id`: ```shell ssh-copy-id {{ username }}@{{ host }} ``` This command will prompt you for the password. Note that this will automatically copy the `id_rsa.pub` key from your local machine and paste it to your Droplet's `~/.ssh/authorized_keys` file. If you've named your key something other than `id_rsa`, you can pass it with the `-i` flag to the command: ```shell ssh-copy-id -i ~/.ssh/mykey {{ username }}@{{ host }} ``` To test whether this is done successfully: ```shell ssh -i ~/.ssh/mykey {{ username }}@{{ host }} ``` Awesome! ### Define the yml File The final step is to put this all together.
We're basically taking each step during the manual deployment and adding them to a GitHub Actions workflow yml file: ```yml name: Deploy to Digital Ocean on: push: branches: - main env: REGISTRY: "registry.digitalocean.com/deno-on-digital-ocean" IMAGE_NAME: "deno-image" jobs: build_and_push: name: Build, Push, and Deploy runs-on: ubuntu-latest steps: - name: Checkout main uses: actions/checkout@v4 - name: Set $TAG from shortened sha run: echo "TAG=`echo ${GITHUB_SHA} | cut -c1-8`" >> $GITHUB_ENV - name: Build container image run: docker compose -f docker-compose.yml build - name: Tag container image run: docker tag ${{ env.IMAGE_NAME }} ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ env.TAG }} - name: Install `doctl` uses: digitalocean/action-doctl@v2 with: token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }} - name: Log in to Digital Ocean Container Registry run: doctl registry login --expiry-seconds 600 - name: Push image to Digital Ocean Container Registry run: docker push ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ env.TAG }} - name: Deploy via SSH uses: appleboy/ssh-action@master with: host: ${{ secrets.DIGITALOCEAN_HOST }} username: ${{ secrets.DIGITALOCEAN_USERNAME }} key: ${{ secrets.DIGITALOCEAN_SSHKEY }} script: | # Login to Digital Ocean Container Registry docker login -u ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }} -p ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }} registry.digitalocean.com # Stop and remove a running image. docker stop ${{ env.IMAGE_NAME }} docker rm ${{ env.IMAGE_NAME }} # Run a new container from a new image docker run -d --restart always -it -p 8000:8000 --name ${{ env.IMAGE_NAME }} ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ env.TAG }} ``` When you push to GitHub, this yml file is automatically detected, triggering the Deploy action. --- # Build a Database App with Drizzle ORM and Deno > Step-by-step guide to building database applications with Drizzle ORM and Deno. Learn about schema management, type-safe queries, PostgreSQL integration, migrations, and how to implement CRUD operations. URL: https://docs.deno.com/examples/tutorials/drizzle [Drizzle ORM](https://orm.drizzle.team/) is a TypeScript ORM that provides a type-safe way to interact with your database. In this tutorial, we'll set up Drizzle ORM with Deno and PostgreSQL to create, read, update, and delete dinosaur data: - [Install Drizzle](#install-drizzle) - [Configure Drizzle](#configure-drizzle) - [Define schemas](#define-schemas) - [Interact with the database](#interact-with-the-database) - [What's next?](#whats-next) You can find all the code for this tutorial in [this GitHub repo](https://github.com/denoland/examples/tree/main/with-drizzle). ## Install Drizzle First, we'll install the required dependencies using Deno's npm compatibility. We'll be using Drizzle with [Postgres](https://orm.drizzle.team/docs/get-started-postgresql), but you can also use [MySQL](https://orm.drizzle.team/docs/get-started-mysql) or [SQLite](https://orm.drizzle.team/docs/get-started-sqlite). (If you don't have Postgres, you can [install it here](https://www.postgresql.org/download/).) ```bash deno install npm:drizzle-orm npm:drizzle-kit npm:pg npm:@types/pg ``` This installs Drizzle ORM and its associated tools — [drizzle-kit](https://orm.drizzle.team/docs/kit-overview) for schema migrations, [pg](https://www.npmjs.com/package/pg) for PostgreSQL connectivity, and [the TypeScript types for PostgreSQL](https://www.npmjs.com/package/@types/pg). 
These packages will allow us to interact with our database in a type-safe way while maintaining compatibility with Deno's runtime environment. The install command will also create a `deno.json` file in your project root to manage the npm dependencies: ```json { "imports": { "@types/pg": "npm:@types/pg@^8.11.10", "drizzle-kit": "npm:drizzle-kit@^0.27.2", "drizzle-orm": "npm:drizzle-orm@^0.36.0", "pg": "npm:pg@^8.13.1" } } ``` ## Configure Drizzle Next, let's create a `drizzle.config.ts` file in your project root. This file will configure Drizzle to work with your PostgreSQL database: ```tsx import { defineConfig } from "drizzle-kit"; export default defineConfig({ out: "./drizzle", schema: "./src/db/schema.ts", dialect: "postgresql", dbCredentials: { url: Deno.env.get("DATABASE_URL")!, }, }); ``` These config settings determine: - where to output migration files (`./drizzle`) - where to find your schema definition (`./src/db/schema.ts`) - that PostgreSQL is your database dialect, and - how to connect to your database using the URL stored in your environment variables `drizzle-kit` will use this configuration to manage your database schema and generate SQL migrations automatically. We’ll also need a `.env` file in the project root containing the `DATABASE_URL` connection string: ```bash DATABASE_URL=postgresql://[user[:password]@][host][:port]/[dbname] ``` Be sure to replace the login credentials with yours. Next, let's connect to the database and use Drizzle to populate our tables. ## Define schemas There are two ways that you can define your table schema with Drizzle. If you already have Postgres tables defined, you can infer them with `pull`; otherwise, you can define them in code, then use Drizzle to create a new table. We'll explore both approaches below. ### Infer schema with `pull` If you already have Postgres tables before adding Drizzle, then you can introspect your database schema to automatically generate TypeScript types and table definitions with the command [`npm:drizzle-kit pull`](https://orm.drizzle.team/docs/drizzle-kit-pull). This is particularly useful when working with an existing database or when you want to ensure your code stays in sync with your database structure. Let's say our current database already has the following table schemas: ![Diagram of table schema in postgres](./images/how-to/drizzle/table-diagram.png) We'll run the following command to introspect the database and populate several files under a `./drizzle` directory:
```bash deno --env -A --node-modules-dir npm:drizzle-kit pull Failed to find Response internal state key No config path provided, using default 'drizzle.config.ts' Reading config file '/private/tmp/deno-drizzle-example/drizzle.config.ts' Pulling from ['public'] list of schemas Using 'pg' driver for database querying [✓] 2 tables fetched [✓] 8 columns fetched [✓] 0 enums fetched [✓] 0 indexes fetched [✓] 1 foreign keys fetched [✓] 0 policies fetched [✓] 0 check constraints fetched [✓] 0 views fetched [i] No SQL generated, you already have migrations in project [✓] You schema file is ready ➜ drizzle/schema.ts 🚀 [✓] You relations file is ready ➜ drizzle/relations.ts 🚀 ```
We use the `--env` flag to read the `.env` file with our database URL, and the `--node-modules-dir` flag to create a `node_modules` folder that will allow us to use `drizzle-kit` correctly.

The above command will create a number of files within a `./drizzle` directory that define the schema, track changes, and provide the necessary information for database migrations: - `drizzle/schema.ts`: This file defines the database schema using Drizzle ORM's schema definition syntax. - `drizzle/relations.ts`: This file is intended to define relationships between tables using Drizzle ORM's relations API. - `drizzle/0000_long_veda.sql`: A SQL migration file that contains the SQL code to create the database table(s). The code is commented out — you can uncomment this code if you want to run this migration to create the table(s) in a new environment. - `drizzle/meta/0000_snapshot.json`: A snapshot file that represents the current state of your database schema. - `drizzle/meta/_journal.json`: This file keeps track of the migrations that have been applied to your database. It helps Drizzle ORM know which migrations have been run and which ones still need to be applied. ### Define schema in Drizzle first If you don't already have an existing table defined in Postgres (e.g. you're starting a completely new project), you can define the tables and types in code and have Drizzle create them. Let's create a new directory `./src/db/` and in it, a `schema.ts` file, which we'll populate with the below:
```ts // schema.ts import { boolean, foreignKey, integer, pgTable, serial, text, timestamp, } from "drizzle-orm/pg-core"; export const dinosaurs = pgTable("dinosaurs", { id: serial().primaryKey().notNull(), name: text(), description: text(), }); export const tasks = pgTable("tasks", { id: serial().primaryKey().notNull(), dinosaurId: integer("dinosaur_id"), description: text(), dateCreated: timestamp("date_created", { mode: "string" }).defaultNow(), isComplete: boolean("is_complete"), }, (table) => { return { tasksDinosaurIdFkey: foreignKey({ columns: [table.dinosaurId], foreignColumns: [dinosaurs.id], name: "tasks_dinosaur_id_fkey", }), }; }); ```
The above represents, in code, the two tables `dinosaurs` and `tasks`, and their relation. [Learn more about using Drizzle to define schemas and their relations](https://orm.drizzle.team/docs/sql-schema-declaration).
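Later in this tutorial, a `db.ts` helper imports `dinosaursRelations` and `tasksRelations` from `./relations.ts`. drizzle-kit generates that file for you when you `pull` an existing database; if you are defining the schema in code first, you can write it by hand with Drizzle's relations API. A minimal sketch matching the schema above (file location assumed to be `./src/db/relations.ts`):

```ts
// relations.ts (hand-written equivalent of what `drizzle-kit pull` generates)
import { relations } from "drizzle-orm/relations";
import { dinosaurs, tasks } from "./schema.ts";

// A dinosaur can have many tasks...
export const dinosaursRelations = relations(dinosaurs, ({ many }) => ({
  tasks: many(tasks),
}));

// ...and each task belongs to one dinosaur.
export const tasksRelations = relations(tasks, ({ one }) => ({
  dinosaur: one(dinosaurs, {
    fields: [tasks.dinosaurId],
    references: [dinosaurs.id],
  }),
}));
```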

Once we have defined `./src/db/schema.ts`, we can create the tables and their specified relationship by creating a migration: ```bash deno -A --node-modules-dir npm:drizzle-kit generate Failed to find Response internal state key No config path provided, using default 'drizzle.config.ts' Reading config file '/private/tmp/drizzle/drizzle.config.ts' 2 tables dinosaurs 3 columns 0 indexes 0 fks tasks 5 columns 0 indexes 1 fks ``` The above command will create a `./drizzle/` folder that contains migration scripts and logs. ## Interact with the database Now that we have set up Drizzle ORM, we can use it to simplify managing data in our Postgres database. First, Drizzle suggests taking the `schema.ts` and `relations.ts` and copying them to the `./src/db` directory to use within an application. Let's create a `./src/db/db.ts` which exports a few helper functions that'll make it easier for us to interact with the database: ```ts import { drizzle } from "drizzle-orm/node-postgres"; import { dinosaurs as dinosaurSchema, tasks as taskSchema } from "./schema.ts"; import { dinosaursRelations, tasksRelations } from "./relations.ts"; import pg from "pg"; import { eq } from "drizzle-orm/expressions"; // Use pg driver. const { Pool } = pg; // Instantiate Drizzle client with pg driver and schema. export const db = drizzle({ client: new Pool({ connectionString: Deno.env.get("DATABASE_URL"), }), schema: { dinosaurSchema, taskSchema, dinosaursRelations, tasksRelations }, }); // Insert dinosaur. export async function insertDinosaur( dinosaurObj: typeof dinosaurSchema.$inferInsert, ) { return await db.insert(dinosaurSchema).values(dinosaurObj); } // Insert task. export async function insertTask(taskObj: typeof taskSchema.$inferInsert) { return await db.insert(taskSchema).values(taskObj); } // Find dinosaur by id. export async function findDinosaurById(dinosaurId: number) { return await db.select().from(dinosaurSchema).where( eq(dinosaurSchema.id, dinosaurId), ); } // Find dinosaur by name. export async function findDinosaurByName(name: string) { return await db.select().from(dinosaurSchema).where( eq(dinosaurSchema.name, name), ); } // Find tasks based on dinosaur id. export async function findDinosaurTasksByDinosaurId(dinosaurId: number) { return await db.select().from(taskSchema).where( eq(taskSchema.dinosaurId, dinosaurId), ); } // Update dinosaur. export async function updateDinosaur( dinosaurObj: typeof dinosaurSchema.$inferSelect, ) { return await db.update(dinosaurSchema).set(dinosaurObj).where( eq(dinosaurSchema.id, dinosaurObj.id), ); } // Update task. export async function updateTask(taskObj: typeof taskSchema.$inferSelect) { return await db.update(taskSchema).set(taskObj).where( eq(taskSchema.id, taskObj.id), ); } // Delete dinosaur by id. export async function deleteDinosaurById(id: number) { return await db.delete(dinosaurSchema).where( eq(dinosaurSchema.id, id), ); } // Delete task by id. export async function deleteTask(id: number) { return await db.delete(taskSchema).where(eq(taskSchema.id, id)); } ``` Now we can import some of these helper functions to a script where we can perform some simple CRUD operations on our database. Let's create a new file `./src/script.ts`: ```ts import { deleteDinosaurById, findDinosaurByName, insertDinosaur, insertTask, updateDinosaur, } from "./db/db.ts"; // Create a new dinosaur. await insertDinosaur({ name: "Denosaur", description: "Dinosaurs should be simple.", }); // Find that dinosaur by name.
const res = await findDinosaurByName("Denosaur"); // Create a task with that dinosaur by its id. await insertTask({ dinosaurId: res[0].id, description: "Remove unnecessary config.", isComplete: false, }); // Update a dinosaur with a new description. const newDeno = { id: res[0].id, name: "Denosaur", description: "The simplest dinosaur.", }; await updateDinosaur(newDeno); // Delete the dinosaur (and any tasks it has). await deleteDinosaurById(res[0].id); ``` We can run it and it will perform all of the actions on the database: ```sh deno -A --env ./src/script.ts ``` ## What's next? Drizzle ORM is a popular data mapping tool to simplify managing and maintaining data models and working with your database. Hopefully, this tutorial gives you a start on how to use Drizzle in your Deno projects. Now that you have a basic understanding of how to use Drizzle ORM with Deno, you could: 1. Add more complex database relationships 2. [Implement a REST API](https://docs.deno.com/examples/) using [Hono](https://jsr.io/@hono/hono) to serve your dinosaur data 3. Add validation and error handling to your database operations 4. Write tests for your database interactions 5. [Deploy your application to the cloud](https://docs.deno.com/runtime/tutorials/#deploying-deno-projects) 🦕 Happy coding with Deno and Drizzle ORM! The type-safety and simplicity of this stack make it a great choice for building modern web applications. --- # How to use Express with Deno > Step-by-step guide to using Express.js with Deno. Learn how to set up an Express server, configure routes, handle middleware, and build REST APIs using Deno's Node.js compatibility features. URL: https://docs.deno.com/examples/tutorials/express [Express](https://expressjs.com/) is a popular web framework known for being simple and unopinionated with a large ecosystem of middleware. This How To guide will show you how to create a simple API using Express and Deno. [View source here.](https://github.com/denoland/tutorial-with-express) ## Initialize a new Deno project In your command line, run the command to create a new starter project, then navigate into the project directory: ```sh deno init my-express-project cd my-express-project ``` ## Install Express To install Express, we'll use the `npm:` module specifier. This specifier allows us to import modules from npm: ```sh deno add npm:express ``` This will add the latest `express` package to the `imports` field in your `deno.json` file. Now you can import `express` in your code with `import express from "express";`. ## Update `main.ts` In the `main.ts`, let's create a simple server: ```ts import express from "express"; const app = express(); app.get("/", (req, res) => { res.send("Welcome to the Dinosaur API!"); }); app.listen(8000); console.log(`Server is running on http://localhost:8000`); ``` You may notice that your editor is complaining about the `req` and `res` parameters. This is because Deno does not have types for the `express` module. To fix this, you can import the Express types file directly from npm. Add the following comment to the top of your `main.ts` file: ```ts // @ts-types="npm:@types/express@4.17.15" ``` This comment tells Deno to use the types from the `@types/express` package. ## Run the server When you initialized the project, Deno set up a task which will run the `main.ts` file; you can see it in the `deno.json` file. Update the `dev` task to include the [`--allow-net`](/runtime/fundamentals/security/#network-access) flag: ```jsonc { "tasks": { "dev": "deno run --allow-net main.ts" }, ...
}
```

This will allow the project to make network requests. You can [read more about permissions flags](/runtime/fundamentals/security/).

Now you can run the server with:

```sh
deno run dev
```

If you visit `localhost:8000` in your browser, you should see:

**Welcome to the Dinosaur API!**

## Add data and routes

The next step here is to add some data. We'll use this Dinosaur data that we found from [this article](https://www.thoughtco.com/dinosaurs-a-to-z-1093748). Feel free to [copy it from here](https://raw.githubusercontent.com/denoland/tutorial-with-express/refs/heads/main/data.json).

Create a `data.json` file in the root of your project, and paste in the dinosaur data.

Next, we'll import that data into `main.ts`:

```ts
import data from "./data.json" with { type: "json" };
```

We will create the routes to access that data. To keep it simple, let's just define `GET` handlers for `/api/` and `/api/:dinosaur`. Update `main.ts` so that everything after the `const app = express();` line looks like this:

```ts
app.get("/", (req, res) => {
  res.send("Welcome to the Dinosaur API!");
});

app.get("/api", (req, res) => {
  res.send(data);
});

app.get("/api/:dinosaur", (req, res) => {
  if (req?.params?.dinosaur) {
    const found = data.find((item) =>
      item.name.toLowerCase() === req.params.dinosaur.toLowerCase()
    );
    if (found) {
      res.send(found);
    } else {
      res.send("No dinosaurs found.");
    }
  }
});

app.listen(8000);
console.log(`Server is running on http://localhost:8000`);
```

Let's run the server with `deno run dev` and check out `localhost:8000/api` in your browser. You should see a list of dinosaurs!

```jsonc
[
  {
    "name": "Aardonyx",
    "description": "An early stage in the evolution of sauropods."
  },
  {
    "name": "Abelisaurus",
    "description": "\"Abel's lizard\" has been reconstructed from a single skull."
  },
  {
    "name": "Abrictosaurus",
    "description": "An early relative of Heterodontosaurus."
  },
  ...
```

You can also get the details of a specific dinosaur by visiting "/api/dinosaur name"; for example, `localhost:8000/api/aardonyx` will display:

```json
{
  "name": "Aardonyx",
  "description": "An early stage in the evolution of sauropods."
}
```

🦕 Now you're all set to use Express with Deno. You could consider expanding this example into a dinosaur web app. Or take a look at [Deno's built in HTTP server](https://docs.deno.com/runtime/fundamentals/http_server/).

---

# Fetch and stream data

> A tutorial on working with network requests in Deno. Learn how to use the fetch API for HTTP requests, handle responses, implement data streaming, and manage file uploads and downloads.

URL: https://docs.deno.com/examples/tutorials/fetch_data

Deno brings several familiar Web APIs to the server-side environment. If you've worked with browsers, you may recognize the [`fetch()`](/api/web/fetch) method and the [`streams`](/api/web/streams) API, which are used to make network requests and access streams of data over the network. Deno implements these APIs, allowing you to fetch and stream data from the web.

## Fetching data

When building a web application, developers will often need to retrieve resources from somewhere else on the web. We can do so with the `fetch` API. We'll look at how to fetch different shapes of data from a URL and how to handle an error if the request fails.
Create a new file called `fetch.js` and add the following code:

```ts title="fetch.js"
// Output: JSON Data
const jsonResponse = await fetch("https://api.github.com/users/denoland");
const jsonData = await jsonResponse.json();
console.log(jsonData, "\n");

// Output: HTML Data
const textResponse = await fetch("https://deno.land/");
const textData = await textResponse.text();
console.log(textData, "\n");

// Output: Error Message
try {
  await fetch("https://does.not.exist/");
} catch (error) {
  console.log(error);
}
```

You can run this code with the `deno run` command. Because it is fetching data across the network, you need to grant the `--allow-net` permission:

```sh
deno run --allow-net fetch.js
```

You should see the JSON data, HTML data as text, and an error message in the console.

## Streaming data

Sometimes you may want to send or receive large files over the network. When you don't know the size of a file in advance, streaming is a more efficient way to handle the data. The client can read from the stream until the stream reports that it is done.

Deno provides a way to stream data using the Streams API. We'll look at how to convert a file into a readable or writable stream, and how to send and receive files using streams.

Create a new file called `stream.js`.

We'll use the `fetch` API to retrieve a file. Then we'll use the [`Deno.open`](/api/deno/Deno.open) method to create and open a writable file, and the [`pipeTo`](/api/web/~/ReadableStream.pipeTo) method from the Streams API to send the byte stream to the created file.

Next, we'll use the `readable` property on a `POST` request to send the byte stream of the file to a server.

```ts title="stream.js"
// Receiving a file
const fileResponse = await fetch("https://deno.land/logo.svg");

if (fileResponse.body) {
  const file = await Deno.open("./logo.svg", { write: true, create: true });
  await fileResponse.body.pipeTo(file.writable);
}

// Sending a file
const file = await Deno.open("./logo.svg", { read: true });

await fetch("https://example.com/", {
  method: "POST",
  body: file.readable,
});
```

You can run this code with the `deno run` command. Because it is fetching data across the network and writing to a file, you need to grant the `--allow-net`, `--allow-write` and `--allow-read` permissions:

```sh
deno run --allow-read --allow-write --allow-net stream.js
```

You should see the file `logo.svg` created and populated in the current directory and, if you owned example.com, you would see the file being sent to the server.

🦕 Now you know how to fetch and stream data across a network and how to stream that data to and from files! Whether you're serving static files, processing uploads, generating dynamic content or streaming large datasets, Deno's file handling and streaming capabilities are great tools to have in your developer toolbox!

---

# File based routing

> Tutorial on implementing file-based routing in Deno. Learn how to create a dynamic routing system similar to Next.js, handle HTTP methods, manage nested routes, and build a flexible server architecture.

URL: https://docs.deno.com/examples/tutorials/file_based_routing

If you've used frameworks like [Next.js](https://nextjs.org/), you might be familiar with file based routing - you add a file in a specific directory and it automatically becomes a route. This tutorial demonstrates how to create a simple HTTP server that uses file based routing.

## Route requests

Create a new file called `server.ts`. This file will be used to route requests.
Set up an async function called `handler` that takes a request object as an argument:

```ts title="server.ts"
async function handler(req: Request): Promise<Response> {
  const url = new URL(req.url);
  const path = url.pathname;
  const method = req.method;

  let module;
  try {
    module = await import(`.${path}.ts`);
  } catch (_error) {
    return new Response("Not found", { status: 404 });
  }

  if (module[method]) {
    return module[method](req);
  }

  return new Response("Method not implemented", { status: 501 });
}

Deno.serve(handler);
```

The `handler` function sets up a path variable which contains the path, extracted from the request URL, and a method variable which contains the request method. It then tries to import a module based on the path. If the module is not found, it returns a 404 response. If the module is found, it checks if the module has a method handler for the request method. If the method handler is found, it calls the method handler with the request object. If the method handler is not found, it returns a 501 response. Finally, it serves the handler function using `Deno.serve`.

> The path could be any valid URL path such as `/users`, `/posts`, etc. For
> paths like `/users`, the file `./users.ts` will be imported. However, deeper
> paths like `/org/users` will require a file `./org/users.ts`. You can create
> nested routes by creating nested directories and files.

## Handle requests

Create a new file called `users.ts` in the same directory as `server.ts`. This file will be used to handle requests to the `/users` path. We'll use a `GET` request as an example. You could add more HTTP methods such as `POST`, `PUT`, `DELETE`, etc.

In `users.ts`, set up a function called `GET` that takes a request object as an argument:

```ts title="users.ts"
export function GET(_req: Request): Response {
  return new Response("Hello from users.ts", { status: 200 });
}
```

## Start the server

To start the server, run the following command:

```sh
deno run --allow-net --allow-read server.ts
```

This will start the server on `localhost:8000`. You can now make a `GET` request to `localhost:8000/users` and you should see the response `Hello from users.ts`.

This command requires the `--allow-net` and `--allow-read` [permissions flags](/runtime/fundamentals/security/) to allow access to the network to start the server and to read the `users.ts` file from the file system.

🦕 Now you can set up routing in your apps based on file structure. You can extend this example to add more routes and methods as needed.

Thanks to [@naishe](https://github.com/naishe) for contributing this tutorial.

---

# Write a file server

> Tutorial on building a file server with Deno. Learn how to handle HTTP requests, serve static files, implement streaming responses, and use the standard library's file server module for production deployments.

URL: https://docs.deno.com/examples/tutorials/file_server

A file server listens for incoming HTTP requests and serves files from the local file system. This tutorial demonstrates how to create a simple file server using Deno's built-in [file system APIs](/api/deno/file-system).

## Write a simple File Server

To start, create a new file called `file-server.ts`. We'll use Deno's built in [HTTP server](/api/deno/~/Deno.serve) to listen for incoming requests.
In your new `file-server.ts` file, add the following code:

```ts title="file-server.ts"
Deno.serve(
  { hostname: "localhost", port: 8080 },
  async (request) => {
    const url = new URL(request.url);
    const filepath = decodeURIComponent(url.pathname);
    // We'll open the file and return its contents in the next step.
  },
);
```

> If you're not familiar with the `URL` object, you can learn more about it in
> the [URL API](https://developer.mozilla.org/en-US/docs/Web/API/URL)
> documentation. The
> [decodeURIComponent function](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/decodeURIComponent)
> is used to decode the URL-encoded path, in the case that characters have been
> percent-encoded.

### Open a file and stream its contents

When a request is received, we'll attempt to open the file specified in the request URL with [`Deno.open`](/api/deno/~/Deno.open).

If the requested file exists, we'll convert it into a readable stream of data with the [ReadableStream API](https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream), and stream its contents to the response. We don't know how large the requested file might be, so streaming it will prevent memory issues when serving large files or multiple requests concurrently.

If the file does not exist, we'll return a "404 Not Found" response.

In the body of the request handler, below the two variables, add the following code:

```ts
try {
  const file = await Deno.open("." + filepath, { read: true });
  return new Response(file.readable);
} catch {
  return new Response("404 Not Found", { status: 404 });
}
```

### Run the file server

Run your new file server with the `deno run` command, allowing read access and network access:

```shell
deno run --allow-read=. --allow-net file-server.ts
```

## Using the file server provided by the Deno Standard Library

Writing a file server from scratch is a good exercise to understand how Deno's HTTP server works. However, writing a production-ready file server from scratch can be complex and error-prone. It's better to use a tested and reliable solution.

The Deno Standard Library provides you with a [file server](https://jsr.io/@std/http/doc/file-server/~) so that you don't have to write your own.

To use it, first install the remote script to your local file system:

```shell
# Deno 1.x
deno install --allow-net --allow-read jsr:@std/http/file-server

# Deno 2.x
deno install --global --allow-net --allow-read jsr:@std/http/file-server
```

> This will install the script to the Deno installation root, e.g.
> `/home/user/.deno/bin/file-server`.

You can now run the script with the simplified script name:

```shell
$ file-server .
Listening on:
- Local: http://0.0.0.0:8000
```

To see the complete list of options available with the file server, run `file-server --help`.

If you visit [http://0.0.0.0:8000/](http://0.0.0.0:8000/) in your web browser you will see the contents of your local directory.

### Using the @std/http file server in a Deno project

To use the file-server in a [Deno project](/runtime/getting_started/first_project), you can add it to your `deno.json` file with:

```sh
deno add jsr:@std/http
```

And then import it in your project:

```ts title="file-server.ts"
import { serveDir } from "@std/http/file-server";

Deno.serve((req) => {
  const pathname = new URL(req.url).pathname;

  if (pathname.startsWith("/static")) {
    return serveDir(req, {
      fsRoot: "path/to/static/files/dir",
    });
  }

  return new Response();
});
```

This code will set up an HTTP server with `Deno.serve`. When a request comes in, it checks if the requested path starts with `/static`.
If so, it serves files from the specified directory. Otherwise, it responds with an empty response. 🦕 Now you know how to write your own simple file server, and how to use the file-server utility provided by the Deno Standard Library. You're equipped to tackle a whole variety of tasks - whether it’s serving static files, handling uploads, transforming data, or managing access control - you're ready to serve files with Deno. --- # File system events > Tutorial on monitoring file system changes with Deno. Learn how to watch directories for file modifications, handle change events, and understand platform-specific behaviors across Linux, macOS, and Windows. URL: https://docs.deno.com/examples/tutorials/file_system_events ## Concepts - Use [Deno.watchFs](https://docs.deno.com/api/deno/~/Deno.watchFs) to watch for file system events. - Results may vary between operating systems. ## Example To poll for file system events in the current directory: ```ts title="watcher.ts" const watcher = Deno.watchFs("."); for await (const event of watcher) { console.log(">>>> event", event); // Example event: { kind: "create", paths: [ "/home/alice/deno/foo.txt" ] } } ``` Run with: ```shell deno run --allow-read watcher.ts ``` Now try adding, removing and modifying files in the same directory as `watcher.ts`. Note that the exact ordering of the events can vary between operating systems. This feature uses different syscalls depending on the platform: - Linux: [inotify](https://man7.org/linux/man-pages/man7/inotify.7.html) - macOS: [FSEvents](https://developer.apple.com/library/archive/documentation/Darwin/Conceptual/FSEvents_ProgGuide/Introduction/Introduction.html) - Windows: [ReadDirectoryChangesW](https://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-readdirectorychangesw) --- # Build a Fresh App > Complete guide to building Full-stack applications with Fresh and Deno. Learn how to set up a project, implement server-side rendering with islands architecture, add API routes, and deploy your TypeScript application. URL: https://docs.deno.com/examples/tutorials/fresh [Fresh](https://fresh.deno.dev/) is a full-stack web framework for Deno that emphasizes server-side rendering with islands of interactivity. It sends no JavaScript to the client by default, making it incredibly fast and efficient. Fresh uses a file-based routing system and leverages Deno's modern runtime capabilities. In this tutorial, we'll build a simple dinosaur catalog app that demonstrates Fresh's key features. The app will display a list of dinosaurs, allow you to view individual dinosaur details, and include interactive components using Fresh's islands architecture. You can see the [finished app repo on GitHub](https://github.com/denoland/tutorial-with-fresh) and a [demo of the app on Deno Deploy](https://tutorial-with-fresh.deno.deno.net/). :::info Deploy your own Want to skip the tutorial and deploy the finished app right now? Click the button below to instantly deploy your own copy of the complete Fresh dinosaur app to Deno Deploy. You'll get a live, working application that you can customize and modify as you learn! [![Deploy on Deno](https://deno.com/button)](https://console.deno.com/new?clone=https://github.com/denoland/tutorial-with-fresh) ::: ## Create a Fresh project Fresh provides a convenient scaffolding tool to create a new project. 
In your terminal, run the following command:

```sh
deno run -Ar jsr:@fresh/init
```

This command will:

- Download the latest Fresh scaffolding script
- Create a new directory called `my-fresh-app`
- Set up a basic Fresh project structure
- Install all necessary dependencies

Navigate into your new project directory:

```sh
cd my-fresh-app
```

Start the development server:

```sh
deno task dev
```

Open your browser to `http://localhost:5173` to see your new Fresh app running!

## Understanding the project structure

The project contains the following key directories and files:

```text
my-fresh-app/
├── assets/          # Static assets (images, CSS, etc.)
├── components/      # Reusable UI components
├── islands/         # Interactive components (islands)
├── routes/          # File-based routing
│   └── api/         # API routes
├── static/          # Static assets (images, CSS, etc.)
├── main.ts          # Entry point of the application
├── deno.json        # Deno configuration file
└── README.md        # Project documentation
```

## Adding dinosaur data

To add dinosaur data to our app, we'll create a simple data file which contains some information about dinosaurs in json. In a real application, this data might come from a database or an external API, but for simplicity, we'll use a static file.

In the `routes/api` directory, create a new file called `data.json` and copy the content from [here](https://github.com/denoland/tutorial-with-fresh/blob/main/routes/api/data.json).

## Displaying the dinosaur list

The homepage will display a list of dinosaurs that the user can click on to view more details. Let's update the `routes/index.tsx` file to fetch and display the dinosaur data.

First update the `<title>` in the head of the file to read "Dinosaur Encyclopedia". Then we'll add some basic HTML to introduce the app.

```tsx title="index.tsx"
<main>
  <h1>🦕 Welcome to the Dinosaur Encyclopedia</h1>
  <p>Click on a dinosaur below to learn more.</p>
  <div class="dinosaur-list">
    {/* Dinosaur list will go here */}
  </div>
</main>;
```

We'll make a new component which will be used to display each dinosaur in the list.

## Creating a component

Create a new file at `components/LinkButton.tsx` and add the following code:

```tsx title="LinkButton.tsx"
import type { ComponentChildren } from "preact";

export interface LinkButtonProps {
  href?: string;
  class?: string;
  children?: ComponentChildren;
}

export function LinkButton(props: LinkButtonProps) {
  return (
    <a {...props} class={"btn " + (props.class ?? "")} />
  );
}
```

This component renders a styled link that looks like a button. It accepts `href`, `class`, and `children` props.

Finally, update the `routes/index.tsx` file to import and use the new `LinkButton` component to display each dinosaur in the list:

```tsx title="index.tsx"
import { Head } from "fresh/runtime";
import { define } from "../utils.ts";
import data from "./api/data.json" with { type: "json" };
import { LinkButton } from "../components/LinkButton.tsx";

export default define.page(function Home() {
  return (
    <>
      <Head>
        <title>Dinosaur Encyclopedia</title>
      </Head>
      <main>
        <h1>🦕 Welcome to the Dinosaur Encyclopedia</h1>
        <p>Click on a dinosaur below to learn more.</p>
        <div class="dinosaur-list">
          {data.map((dinosaur: { name: string; description: string }) => (
            <LinkButton href={`/dinosaurs/${dinosaur.name.toLowerCase()}`}>
              {dinosaur.name}
            </LinkButton>
          ))}
        </div>
      </main>
    </>
  );
});
```
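The `define.page` helper used above comes from the `utils.ts` file that the init script generates. If you are curious what that file contains, a minimal sketch looks roughly like this (assuming Fresh 2's `createDefine` helper; the generated file in your project may differ):

```ts title="utils.ts"
import { createDefine } from "fresh";

// Shared state that routes and middleware can read and write.
// Add fields here as your app grows (this empty version is just a sketch).
export interface State {
}

// `define` provides type-safe helpers such as `define.page`.
export const define = createDefine<State>();
```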
## Creating dynamic routes

Fresh allows us to create dynamic routes using file-based routing. We'll create a new route to display individual dinosaur details.

Create a new file at `routes/dinosaurs/[dinosaur].tsx`. In this file, we'll fetch the dinosaur data based on the name parameter and display it.

```tsx title="[dinosaur].tsx"
import { PageProps } from "$fresh/server.ts";
import data from "../api/data.json" with { type: "json" };
import { LinkButton } from "../../components/LinkButton.tsx";

export default function DinosaurPage(props: PageProps) {
  const name = props.params.dinosaur;
  const dinosaur = data.find((d: { name: string }) =>
    d.name.toLowerCase() === name.toLowerCase()
  );

  if (!dinosaur) {
    return (
      <main>
        <h1>Dinosaur not found</h1>
      </main>
    );
  }

  return (
    <main>
      <h1>{dinosaur.name}</h1>
      <p>{dinosaur.description}</p>
      <LinkButton href="/">← Back to list</LinkButton>
    </main>
  );
}
```

## Adding interactivity with islands

Fresh's islands architecture allows us to add interactivity to specific components without sending unnecessary JavaScript to the client. Let's create a simple interactive component that allows users to "favorite" a dinosaur.

Create a new file at `islands/FavoriteButton.tsx` and add the following code:

```tsx title="FavoriteButton.tsx"
import { useState } from "preact/hooks";

export default function FavoriteButton() {
  const [favorited, setFavorited] = useState(false);

  return (
    <button onClick={() => setFavorited(!favorited)}>
      {favorited ? "Favorited!" : "Favorite"}
    </button>
  );
}
```

This is just a simple button that toggles its state when clicked. You could update it to store the favorite state in a database or local storage for a more complete feature.

Now we need to import and use this `FavoriteButton` island in our dinosaur detail page. Add the import at the top of `routes/dinosaurs/[dinosaur].tsx`:

```tsx title="[dinosaur].tsx"
import FavoriteButton from "../../islands/FavoriteButton.tsx";
```

and then include the `<FavoriteButton />` component in the JSX where you want it to appear, for example, before the back button:

```tsx title="[dinosaur].tsx"
<FavoriteButton />;
```

## Styling the app

We've created some basic styles to add to your app, but of course you can add your own css in the `assets/styles.css` file.

Add a link to our provided stylesheet in the `<head>` of `routes/_app.tsx`:

```tsx title="_app.tsx"
<link rel="stylesheet" href="/styles.css" />;
```

## Running the app

Make sure your development server is running with:

```sh
deno task dev
```

Open your browser to `http://localhost:5173` to see your dinosaur catalog app in action! You should be able to view the list of dinosaurs, click on one to see its details, and use the "Favorite" button to toggle its favorite status.

## Build and deploy

The default Fresh app comes with a `build` task that builds the app with Vite. You can run the following command to build the app for production mode:

```sh
deno run build
```

This will build out the optimized files to a directory called `_fresh`. To run the built app, you can use the `start` task, which will automatically pick up the optimized assets in the `_fresh` directory:

```sh
deno task start
```

Open your browser to `http://localhost:8000` to see the production version of your app.

You can deploy this app to your favorite cloud provider. We recommend using [Deno Deploy](https://deno.com/deploy) for a simple and easy deployment experience. You can deploy your app directly from GitHub, simply create a GitHub repository and push your code there, then connect it to Deno Deploy.

### Create a GitHub repository

[Create a new GitHub repository](https://github.com/new), then initialize and push your app to GitHub:

```sh
git init -b main
git remote add origin https://github.com/<your-username>/<your-repo>.git
git add .
git commit -am 'my fresh app'
git push -u origin main
```

### Deploy to Deno Deploy

Once your app is on GitHub, you can [deploy it to Deno Deploy](https://console.deno.com/).

For a walkthrough of deploying your app, check out the [Deno Deploy tutorial](/examples/deno_deploy_tutorial/).

🦕 Now you have a starter Fresh app! Here are some ideas to extend your dinosaur catalog:

- Add a database (try [Deno KV](https://docs.deno.com/runtime/fundamentals/kv/) or connect to [PostgreSQL](https://docs.deno.com/runtime/tutorials/connecting_to_databases/))
- Implement user authentication
- Add more interactive features like favorites or ratings
- Connect to external APIs for more dinosaur data

Fresh's architecture makes it easy to build fast, scalable web applications while maintaining a great developer experience.
The combination of server-side rendering by default with optional client-side interactivity gives you the best of both worlds. --- # How to deploy to Google Cloud Run > Step-by-step guide to deploying Deno applications on Google Cloud Run. Learn about Docker containerization, Artifact Registry configuration, GitHub Actions automation, and how to set up continuous deployment to Google Cloud. URL: https://docs.deno.com/examples/tutorials/google_cloud_run [Google Cloud Run](https://cloud.google.com/run) is a managed compute platform that lets you run containers on Google's scalable infrastructure. This How To guide will show you how to use Docker to deploy your Deno app to Google Cloud Run. First, we'll show you how to deploy manually, then we'll show you how to automate it with GitHub Actions. Pre-requisites: - [Google Cloud Platform account](https://cloud.google.com/gcp) - [`docker` CLI](https://docs.docker.com/engine/reference/commandline/cli/) installed - [`gcloud`](https://cloud.google.com/sdk/gcloud) installed ## Manual Deployment ### Create `Dockerfile` and `docker-compose.yml` To focus on the deployment, our app will simply be a `main.ts` file that returns a string as an HTTP response: ```ts title="main.ts" import { Application } from "jsr:@oak/oak"; const app = new Application(); app.use((ctx) => { ctx.response.body = "Hello from Deno and Google Cloud Run!"; }); await app.listen({ port: 8000 }); ``` Then, we'll create two files -- `Dockerfile` and `docker-compose.yml` -- to build the Docker image. In our `Dockerfile`, let's add: ```Dockerfile FROM denoland/deno EXPOSE 8000 WORKDIR /app ADD . /app RUN deno install --entrypoint main.ts CMD ["run", "--allow-net", "main.ts"] ``` Then, in our `docker-compose.yml`: ```yml version: "3" services: web: build: . container_name: deno-container image: deno-image ports: - "8000:8000" ``` Let's test this locally by running `docker compose -f docker-compose.yml build`, then `docker compose up`, and going to `localhost:8000`. ![Hello from localhost](./images/how-to/google-cloud-run/hello-world-from-localhost.png) It works! ### Set up Artifact Registry Artifact Registry is GCP's private registry of Docker images. Before we can use it, go to GCP's [Artifact Registry](https://console.cloud.google.com/artifacts) and click "Create repository". You'll be asked for a name (`deno-repository`) and a region (`us-central1`). Then click "Create". ![New repository in Google Artifact Repository](./images/how-to/google-cloud-run/new-repository-in-google-artifact-repository.png) ### Build, Tag, and Push to Artifact Registry Once we've created a repository, we can start pushing images to it. First, let's add the registry's address to `gcloud`: ```shell gcloud auth configure-docker us-central1-docker.pkg.dev ``` Then, let's build your Docker image. (Note that the image name is defined in our `docker-compose.yml` file.) ```shell docker compose -f docker-compose.yml build ``` Then, [tag](https://docs.docker.com/engine/reference/commandline/tag/) it with the new Google Artifact Registry address, repository, and name. The image name should follow this structure: `{{ location }}-docker.pkg.dev/{{ google_cloudrun_project_name }}/{{ repository }}/{{ image }}`. ```shell docker tag deno-image us-central1-docker.pkg.dev/deno-app-368305/deno-repository/deno-cloudrun-image ``` If you don't specify a tag, it'll use `:latest` by default. 
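If you do want a specific version, append an explicit tag to the same Artifact Registry name. As a quick illustration (an optional extra step, reusing this tutorial's project and repository names with a hypothetical `v1` tag):

```shell
# Tag the image with an explicit version instead of the implicit :latest
docker tag deno-image \
  us-central1-docker.pkg.dev/deno-app-368305/deno-repository/deno-cloudrun-image:v1
```

Pinning a version tag makes it easier to identify and roll back to a known image later.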
Next, push the image:

```shell
docker push us-central1-docker.pkg.dev/deno-app-368305/deno-repository/deno-cloudrun-image
```

_[More info on how to push and pull images to Google Artifact Registry](https://cloud.google.com/artifact-registry/docs/docker/pushing-and-pulling)._

Your image should now appear in your Google Artifact Registry!

![Image in Google Artifact Registry](./images/how-to/google-cloud-run/image-in-google-artifact-registry.png)

### Create a Google Cloud Run Service

We need a service that can run these images, so let's go to [Google Cloud Run](https://console.cloud.google.com/run) and click "Create Service".

Let's name it "hello-from-deno".

Select "Deploy one revision from an existing container image". Use the drop down to select the image from the `deno-repository` Artifact Registry.

Select "allow unauthenticated requests" and then click "Create service". Make sure the port is `8000`.

When it's done, your app should now be live:

![Hello from Google Cloud Run](./images/how-to/google-cloud-run/hello-from-google-cloud-run.png)

Awesome!

### Deploy with `gcloud`

Now that it's created, we'll be able to deploy to this service from the `gcloud` CLI. The command follows this structure: `gcloud run deploy {{ service_name }} --image={{ image }} --region={{ region }} --allow-unauthenticated`. Note that the `image` name follows the structure from above.

For this example, the command is:

```shell
gcloud run deploy hello-from-deno --image=us-central1-docker.pkg.dev/deno-app-368305/deno-repository/deno-cloudrun-image --region=us-central1 --allow-unauthenticated
```

![Hello from Google Cloud Run](./images/how-to/google-cloud-run/hello-from-google-cloud-run.png)

Success!

## Automate Deployment with GitHub Actions

In order for automation to work, we first need to make sure that both of these have been created:

- the Google Artifact Registry
- the Google Cloud Run service instance

(If you haven't done that, please see the previous section.)

Now that we have done that, we can automate it with a GitHub workflow.
Here's the yaml file:

```yml
name: Build and Deploy to Cloud Run

on:
  push:
    branches:
      - main

env:
  PROJECT_ID: {{ PROJECT_ID }}
  GAR_LOCATION: {{ GAR_LOCATION }}
  REPOSITORY: {{ GAR_REPOSITORY }}
  SERVICE: {{ SERVICE }}
  REGION: {{ REGION }}

jobs:
  deploy:
    name: Deploy
    permissions:
      contents: "read"
      id-token: "write"
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Google Auth
        id: auth
        uses: "google-github-actions/auth@v0"
        with:
          credentials_json: "${{ secrets.GCP_CREDENTIALS }}"

      - name: Login to GAR
        uses: docker/login-action@v2.1.0
        with:
          registry: ${{ env.GAR_LOCATION }}-docker.pkg.dev
          username: _json_key
          password: ${{ secrets.GCP_CREDENTIALS }}

      - name: Build and Push Container
        run: |-
          docker build -t "${{ env.GAR_LOCATION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.REPOSITORY }}/${{ env.SERVICE }}:${{ github.sha }}" ./
          docker push "${{ env.GAR_LOCATION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.REPOSITORY }}/${{ env.SERVICE }}:${{ github.sha }}"

      - name: Deploy to Cloud Run
        id: deploy
        uses: google-github-actions/deploy-cloudrun@v0
        with:
          service: ${{ env.SERVICE }}
          region: ${{ env.REGION }}
          image: ${{ env.GAR_LOCATION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.REPOSITORY }}/${{ env.SERVICE }}:${{ github.sha }}

      - name: Show Output
        run: echo ${{ steps.deploy.outputs.url }}
```

The environment variables that we need to set are (the examples in parentheses are the ones for this repository):

- `PROJECT_ID`: your project id (`deno-app-368305`)
- `GAR_LOCATION`: the location your Google Artifact Registry is set (`us-central1`)
- `GAR_REPOSITORY`: the name you gave your Google Artifact Registry (`deno-repository`)
- `SERVICE`: the name of the Google Cloud Run service (`hello-from-deno`)
- `REGION`: the region of your Google Cloud Run service (`us-central1`)

The secret variables that we need to set are:

- `GCP_CREDENTIALS`: this is the [service account](https://cloud.google.com/iam/docs/service-accounts) json key. When you create the service account, be sure to [include the roles and permissions necessary](https://cloud.google.com/iam/docs/granting-changing-revoking-access#granting_access_to_a_user_for_a_service_account) for Artifact Registry and Google Cloud Run.

[Check out more details and examples of deploying to Cloud Run from GitHub Actions.](https://github.com/google-github-actions/deploy-cloudrun)

For reference: https://github.com/google-github-actions/example-workflows/blob/main/workflows/deploy-cloudrun/cloudrun-docker.yml

---

# How to export telemetry data to Grafana

> Complete guide to exporting telemetry data with OpenTelemetry and Grafana. Learn how to configure collectors, visualize traces, and monitor application performance.

URL: https://docs.deno.com/examples/tutorials/grafana

[OpenTelemetry](https://opentelemetry.io/) (often abbreviated as OTel) is an open-source observability framework that provides a standardized way to collect and export telemetry data such as traces, metrics and logs. Deno has built-in support for OpenTelemetry, making it easy to instrument your applications without adding external dependencies. This integration works out of the box with observability platforms like [Grafana](https://grafana.com/).

Grafana is an open-source observability platform that lets DevOps teams visualize, query, and alert on metrics, logs, and traces from diverse data sources in real time. It's widely used for building dashboards to monitor infrastructure, applications, and systems health.
Grafana also offers a hosted version called [Grafana Cloud](https://grafana.com/products/cloud/). This tutorial will help you configure your project to export OTel data to Grafana Cloud.

In this tutorial, we'll build a simple application and export its telemetry data to Grafana Cloud. We'll cover:

- [Set up your chat app](#set-up-your-chat-app)
- [Set up a Docker collector](#set-up-a-docker-collector)
- [Generating telemetry data](#generating-telemetry-data)
- [Viewing telemetry data](#viewing-telemetry-data)

You can find the complete source code for this tutorial [on GitHub](https://github.com/denoland/examples/tree/main/with-grafana).

## Set up your chat app

For this tutorial, we'll use a simple chat application to demonstrate how to export telemetry data. You can find the [code for the app on GitHub](https://github.com/denoland/examples/tree/main/with-grafana). Either take a copy of that repository or create a [main.ts](https://github.com/denoland/examples/blob/main/with-grafana/main.ts) file and a [.env](https://github.com/denoland/examples/blob/main/with-grafana/.env.example) file.

In order to run the app you will need an OpenAI API key. You can get one by signing up for an account at [OpenAI](https://platform.openai.com/signup) and creating a new secret key. You can find your API key in the [API keys section](https://platform.openai.com/account/api-keys) of your OpenAI account. Once you have an API key, set up an `OPENAI_API_KEY` environment variable in your `.env` file:

```env title=".env"
OPENAI_API_KEY=your_openai_api_key
```

## Set up a Docker collector

Next, we'll set up a Docker container to run the OpenTelemetry collector. The collector is responsible for receiving telemetry data from your application and exporting it to Grafana Cloud.

In the same directory as your `main.ts` file, create a `Dockerfile` and an `otel-collector.yml` file. The `Dockerfile` will be used to build a Docker image:

```dockerfile title="Dockerfile"
FROM otel/opentelemetry-collector-contrib:latest

COPY otel-collector.yml /otel-config.yml

CMD ["--config", "/otel-config.yml"]
```

[`FROM otel/opentelemetry-collector-contrib:latest`](https://hub.docker.com/r/otel/opentelemetry-collector-contrib/) - This line specifies the base image for the container. It uses the official OpenTelemetry Collector Contrib image, which contains all receivers, exporters, processors, connectors, and other optional components, and pulls the latest version.

`COPY otel-collector.yml /otel-config.yml` - This instruction copies our configuration file named `otel-collector.yml` from the local build context into the container. The file is renamed to `/otel-config.yml` inside the container.

`CMD ["--config", "/otel-config.yml"]` - This sets the default command that will run when the container starts. It tells the OpenTelemetry Collector to use the configuration file we copied in the previous step.

Next, let's set up a Grafana Cloud account and grab some info. If you have not already, [create a free Grafana Cloud account](https://grafana.com/auth/sign-up/create-user). Once created, you will receive a Grafana Cloud stack. Click "Details".

![Click details on your Grafana Cloud stack](./images/how-to/grafana/grafana-1.png)

Next, find "OpenTelemetry" and click "Configure".

![Find and configure OpenTelemetry](./images/how-to/grafana/grafana-2.png)

This page will provide you with all the details you'll need to configure your OpenTelemetry collector.
Make note of your **OTLP Endpoint**, **Instance ID**, and **Password / API Token** (you will have to generate one).

![Configuring OTel in Grafana Cloud](./images/how-to/grafana/grafana-3.png)

Next, add the following to your `otel-collector.yml` file to define how telemetry data should be collected and exported to Grafana Cloud:

```yml title="otel-collector.yml"
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  otlphttp/grafana_cloud:
    endpoint: $_YOUR_GRAFANA_OTLP_ENDPOINT
    auth:
      authenticator: basicauth/grafana_cloud

extensions:
  basicauth/grafana_cloud:
    client_auth:
      username: $_YOUR_INSTANCE_ID
      password: $_YOUR_API_TOKEN

processors:
  batch:

service:
  extensions: [basicauth/grafana_cloud]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/grafana_cloud]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/grafana_cloud]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/grafana_cloud]
```

The `receivers` section configures how the collector receives data. It sets up an OTLP (OpenTelemetry Protocol) receiver that listens on two protocols, `gRPC` and `HTTP`; the `0.0.0.0` address means it will accept data from any source.

The `exporters` section defines where the collected data should be sent. Be sure to include **the OTLP endpoint** provided by your Grafana Cloud instance.

The `extensions` section defines the authentication for OTel to export data to Grafana Cloud. Be sure to include your Grafana Cloud **Instance ID**, as well as your generated **Password / API Token**.

The `processors` section defines how the data should be processed before export. Here it uses the batch processor with its default settings, which groups telemetry into batches before sending.

The `service` section ties everything together by defining three pipelines. Each pipeline is responsible for a different type of telemetry data. The logs pipeline collects application logs. The traces pipeline is for distributed tracing data. The metrics pipeline is for performance metrics.

Build and run the docker instance to start collecting your telemetry data with the following command:

```sh
docker build -t otel-collector . && docker run -p 4317:4317 -p 4318:4318 otel-collector
```

## Generating telemetry data

Now that we have the app and the docker container set up, we can start generating telemetry data. Run your application with these environment variables to send data to the collector:

```sh
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
OTEL_SERVICE_NAME=chat-app \
OTEL_DENO=true \
deno run --allow-net --allow-env --env-file --allow-read main.ts
```

This command:

- Points the OpenTelemetry exporter to your local collector (`localhost:4318`)
- Names your service "chat-app" in Grafana Cloud
- Enables Deno's OpenTelemetry integration
- Runs your application with the necessary permissions

To generate some telemetry data, make a few requests to your running application in your browser at [`http://localhost:8000`](http://localhost:8000).

Each request will:

1. Generate traces as it flows through your application
2. Send logs from your application's console output
3. Create metrics about the request performance
4. Forward all this data through the collector to Grafana Cloud

## Viewing telemetry data

After making some requests to your application, you'll see three types of data in your Grafana Cloud dashboard:

1. **Traces** - End-to-end request flows through your system
2. **Logs** - Console output and structured log data
3. **Metrics** - Performance and resource utilization data

![Viewing logs in Grafana](./images/how-to/grafana/grafana-logs.png)

You can drill down into individual spans to debug performance issues:

![Viewing traces in Grafana](./images/how-to/grafana/grafana-traces.png)

🦕 Now that you have telemetry export working, you could:

1. Add custom spans and attributes to better understand your application
2. Set up alerts based on latency or error conditions
3. Deploy your application and collector to production using platforms like:
   - [Fly.io](https://docs.deno.com/examples/deploying_deno_with_docker/)
   - [Digital Ocean](https://docs.deno.com/examples/digital_ocean_tutorial/)
   - [AWS Lightsail](https://docs.deno.com/examples/aws_lightsail_tutorial/)

For more details on OpenTelemetry configuration, check out the [Grafana Cloud documentation](https://grafana.com/docs/grafana-cloud/monitor-applications/application-observability/collector/).

---

# Executable scripts

> Guide to creating executable scripts with Deno. Learn about hashbangs, file permissions, cross-platform compatibility, and how to create command-line tools that can run directly from the terminal.

URL: https://docs.deno.com/examples/tutorials/hashbang

Making Deno scripts executable can come in handy when creating small tools or utilities for tasks like file manipulation, data processing or repetitive tasks that you might want to run from the command line. Executable scripts allow you to create ad-hoc solutions without setting up an entire project.

## Creating an example script

To make a script executable, start the script with a hashbang (sometimes called a shebang). This is a sequence of characters (`#!`) that tells your operating system how to execute a script. It is followed by the path to the interpreter that should be used to run the script.

:::note

To use a hashbang on Windows you will need to install the Windows Subsystem for Linux (WSL) or use a Unix-like shell like [Git Bash](https://git-scm.com/downloads).

:::

We'll make a simple script that prints the Deno installation path using the [Deno.env](/api/deno/~/Deno.env) API.

Create a file named `hashbang.ts` with the following content:

```ts title="hashbang.ts"
#!/usr/bin/env -S deno run --allow-env

const path = Deno.env.get("DENO_INSTALL");

console.log("Deno Install Path:", path);
```

This script tells the system to use the `deno` runtime to run the script. The `-S` flag tells `env` to split the rest of the line into separate arguments, so that `deno run --allow-env` is executed rather than being looked up as a single command name.

The script then retrieves the value associated with the environment variable named `DENO_INSTALL` with `Deno.env.get()` and assigns it to a variable called `path`. Finally, it prints the path to the console using `console.log()`.

### Execute the script

In order to execute the script, you may need to give the script execution permissions; you can do so using the `chmod` command with a `+x` flag (for execute):

```sh
chmod +x hashbang.ts
```

You can execute the script directly in the command line with:

```sh
./hashbang.ts
```

## Using hashbang in files with no extension

For brevity, you may wish to omit the extension for your script's filename. In this case, supply one using the `--ext` flag in the script itself, then you can run the script with just the file name:

```shell title="my_script"
$ cat my_script
#!/usr/bin/env -S deno run --allow-env --ext=js
console.log("Hello!");
$ ./my_script
Hello!
```

🦕 Now you can directly execute Deno scripts from the command line!
Remember to set the execute permission (`chmod +x`) for your script file, and you're all set to build anything from simple utilities to complex tools. Check out the [Deno examples](/examples/) for inspiration on what you can script.

---

# How to export telemetry data to Honeycomb

> Complete guide to exporting telemetry data with OpenTelemetry and Honeycomb.io. Learn how to configure collectors, visualize traces, and monitor application performance.

URL: https://docs.deno.com/examples/tutorials/honeycomb

[OpenTelemetry](https://opentelemetry.io/) (often abbreviated as OTel) is an open-source observability framework that provides a standardized way to collect and export telemetry data such as traces, metrics and logs. Deno has built-in support for OpenTelemetry, making it easy to instrument your applications without adding external dependencies. This integration works out of the box with observability platforms like [Honeycomb](https://honeycomb.io).

Honeycomb is an observability platform designed for debugging and understanding complex, modern distributed systems.

In this tutorial, we'll build a simple application and export its telemetry data to Honeycomb. We'll cover:

- [Set up your chat app](#set-up-your-chat-app)
- [Set up a Docker collector](#set-up-a-docker-collector)
- [Generating telemetry data](#generating-telemetry-data)
- [Viewing telemetry data](#viewing-telemetry-data)

You can find the complete source code for this tutorial [on GitHub](https://github.com/denoland/examples/tree/main/with-honeycomb).

## Set up your chat app

For this tutorial, we'll use a simple chat application to demonstrate how to export telemetry data. You can find the [code for the app on GitHub](https://github.com/denoland/examples/tree/main/with-honeycomb). Either take a copy of that repository or create a [main.ts](https://github.com/denoland/examples/blob/main/with-honeycomb/main.ts) file and a [.env](https://github.com/denoland/examples/blob/main/with-honeycomb/.env.example) file.

In order to run the app you will need an OpenAI API key. You can get one by signing up for an account at [OpenAI](https://platform.openai.com/signup) and creating a new secret key. You can find your API key in the [API keys section](https://platform.openai.com/account/api-keys) of your OpenAI account. Once you have an API key, set up an `OPENAI_API_KEY` environment variable in your `.env` file:

```env title=".env"
OPENAI_API_KEY=your_openai_api_key
```

## Set up a Docker collector

Next, we'll set up a Docker container to run the OpenTelemetry collector. The collector is responsible for receiving telemetry data from your application and exporting it to Honeycomb.

If you have not already, create a free Honeycomb account and set up an [ingest API key](https://docs.honeycomb.io/configure/environments/manage-api-keys/).

In the same directory as your `main.ts` file, create a `Dockerfile` and an `otel-collector.yml` file. The `Dockerfile` will be used to build a Docker image:

```dockerfile title="Dockerfile"
FROM otel/opentelemetry-collector:latest

COPY otel-collector.yml /otel-config.yml

CMD ["--config", "/otel-config.yml"]
```

`FROM otel/opentelemetry-collector:latest` - This line specifies the base image for the container. It uses the official OpenTelemetry Collector image and pulls the latest version.

`COPY otel-collector.yml /otel-config.yml` - This instruction copies our configuration file named `otel-collector.yml` from the local build context into the container. The file is renamed to `/otel-config.yml` inside the container.
`CMD ["--config", "/otel-config.yml"]` - This sets the default command that will run when the container starts. It tells the OpenTelemetry Collector to use the configuration file we copied in the previous step.

Next, add the following to your `otel-collector.yml` file to define how telemetry data should be collected and exported to Honeycomb:

```yml title="otel-collector.yml"
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      x-honeycomb-team: $_HONEYCOMB_API_KEY

processors:
  batch:
    timeout: 5s
    send_batch_size: 5000

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```

The `receivers` section configures how the collector receives data. It sets up an OTLP (OpenTelemetry Protocol) receiver that listens on two protocols, `gRPC` and `HTTP`; the `0.0.0.0` address means it will accept data from any source.

The `exporters` section defines where the collected data should be sent. It's configured to send data to Honeycomb's API endpoint at `api.honeycomb.io:443`. The configuration requires an API key for authentication; swap `$_HONEYCOMB_API_KEY` for your actual Honeycomb API key.

The `processors` section defines how the data should be processed before export. It uses batch processing with a timeout of 5 seconds and a maximum batch size of 5000 items.

The `service` section ties everything together by defining three pipelines. Each pipeline is responsible for a different type of telemetry data. The logs pipeline collects application logs. The traces pipeline is for distributed tracing data. The metrics pipeline is for performance metrics.

Build and run the docker instance to start collecting your telemetry data with the following command:

```sh
docker build -t otel-collector . && docker run -p 4317:4317 -p 4318:4318 otel-collector
```

## Generating telemetry data

Now that we have the app and the docker container set up, we can start generating telemetry data. Run your application with these environment variables to send data to the collector:

```sh
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
OTEL_SERVICE_NAME=chat-app \
OTEL_DENO=true \
deno run --allow-net --allow-env --env-file --allow-read main.ts
```

This command:

- Points the OpenTelemetry exporter to your local collector (`localhost:4318`)
- Names your service "chat-app" in Honeycomb
- Enables Deno's OpenTelemetry integration
- Runs your application with the necessary permissions

To generate some telemetry data, make a few requests to your running application in your browser at [`http://localhost:8000`](http://localhost:8000).

Each request will:

1. Generate traces as it flows through your application
2. Send logs from your application's console output
3. Create metrics about the request performance
4. Forward all this data through the collector to Honeycomb

## Viewing telemetry data

After making some requests to your application, you'll see three types of data in your Honeycomb.io dashboard:

1. **Traces** - End-to-end request flows through your system
2. **Logs** - Console output and structured log data
3. **Metrics** - Performance and resource utilization data

![Viewing traces in Honeycomb](./images/how-to/honeycomb/honeycomb-3.webp)

You can drill down into individual spans to debug performance issues:

![Viewing expanded traces in Honeycomb](./images/how-to/honeycomb/honeycomb-4.webp)

🦕 Now that you have telemetry export working, you could:

1. Add custom spans and attributes to better understand your application
2. Set up alerts based on latency or error conditions
3. Deploy your application and collector to production using platforms like:
   - [Fly.io](https://docs.deno.com/examples/deploying_deno_with_docker/)
   - [Digital Ocean](https://docs.deno.com/examples/digital_ocean_tutorial/)
   - [AWS Lightsail](https://docs.deno.com/examples/aws_lightsail_tutorial/)

For more details on OpenTelemetry configuration, check out the [Honeycomb documentation](https://docs.honeycomb.io/send-data/opentelemetry/collector/).

---

# How to export telemetry data to HyperDX

> Complete guide to exporting telemetry data with OpenTelemetry and HyperDX. Learn how to configure collectors, visualize traces, logs, metrics, and debug distributed applications effectively.

URL: https://docs.deno.com/examples/tutorials/hyperdx

[HyperDX](https://hyperdx.io) is an open source observability platform that unifies logs, traces, metrics, exceptions, and session replays into a single interface. It helps developers debug applications faster by providing a complete view of your system's behavior and performance.

[OpenTelemetry](https://opentelemetry.io/) (often abbreviated as OTel) provides a standardized way to collect and export telemetry data. Deno includes built-in OpenTelemetry support, allowing you to instrument your applications without additional dependencies. This integration works seamlessly with platforms like HyperDX to collect and visualize telemetry data.

In this tutorial, we'll build a simple application and export its telemetry data to HyperDX:

- [Set up your chat app](#set-up-your-chat-app)
- [Set up a Docker collector](#set-up-a-docker-collector)
- [Generating telemetry data](#generating-telemetry-data)
- [Viewing telemetry data](#viewing-telemetry-data)

You can find the complete source code for this tutorial [on GitHub](https://github.com/denoland/examples/tree/main/with-hyperdx).

## Set up your chat app

For this tutorial, we'll use a simple chat application to demonstrate how to export telemetry data. You can find the [code for the app on GitHub](https://github.com/denoland/examples/tree/main/with-hyperdx). Either take a copy of that repository or create a [main.ts](https://github.com/denoland/examples/blob/main/with-hyperdx/main.ts) file and a [.env](https://github.com/denoland/examples/blob/main/with-hyperdx/.env.example) file.

In order to run the app you will need an OpenAI API key. You can get one by signing up for an account at [OpenAI](https://platform.openai.com/signup) and creating a new secret key. You can find your API key in the [API keys section](https://platform.openai.com/account/api-keys) of your OpenAI account. Once you have an API key, set up an `OPENAI_API_KEY` environment variable in your `.env` file:

```env title=".env"
OPENAI_API_KEY=your_openai_api_key
```

## Set up a Docker collector

First, create a free HyperDX account to get your API key. Then, we'll set up two files to configure the OpenTelemetry collector:
1. Create a `Dockerfile`:

```dockerfile title="Dockerfile"
FROM otel/opentelemetry-collector:latest

COPY otel-collector.yml /otel-config.yml

CMD ["--config", "/otel-config.yml"]
```

This Dockerfile:

- Uses the official OpenTelemetry Collector as the base image
- Copies your configuration into the container
- Sets up the collector to use your config when it starts

2. Create a file called `otel-collector.yml`:

```yml title="otel-collector.yml"
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  otlphttp/hdx:
    endpoint: "https://in-otel.hyperdx.io"
    headers:
      authorization: $_HYPERDX_API_KEY
    compression: gzip

processors:
  batch:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/hdx]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/hdx]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/hdx]
```

This configuration file sets up the OpenTelemetry collector to receive telemetry data from your application and export it to HyperDX. It includes:

- The `receivers` section accepts data via gRPC (4317) and HTTP (4318)
- The `exporters` section sends data to HyperDX with compression and authentication
- The `processors` section batches telemetry data for efficient transmission
- The `pipelines` section defines separate flows for logs, traces, and metrics

Build and run the docker instance to start collecting your telemetry data with the following command:

```sh
docker build -t otel-collector . && docker run -p 4317:4317 -p 4318:4318 otel-collector
```

## Generating telemetry data

Now that we have the app and the docker container set up, we can start generating telemetry data. Run your application with these environment variables to send data to the collector:

```sh
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
OTEL_SERVICE_NAME=chat-app \
OTEL_DENO=true \
deno run --allow-net --allow-env --env-file --allow-read main.ts
```

This command:

- Points the OpenTelemetry exporter to your local collector (`localhost:4318`)
- Names your service "chat-app" in HyperDX
- Enables Deno's OpenTelemetry integration
- Runs your application with the necessary permissions

To generate some telemetry data, make a few requests to your running application in your browser at [`http://localhost:8000`](http://localhost:8000).

Each request will:

1. Generate traces as it flows through your application
2. Send logs from your application's console output
3. Create metrics about the request performance
4. Forward all this data through the collector to HyperDX

## Viewing telemetry data

In your HyperDX dashboard, you'll see different views of your telemetry data:

### Logs View

![Viewing logs in HyperDX](./images/how-to/hyperdx/hyperdx-1.webp)

Click any log to see details:

![Viewing a single log in HyperDX](./images/how-to/hyperdx/hyperdx-2.webp)

### Request Traces

See all logs within a single request:

![Viewing all logs in a request in HyperDX](./images/how-to/hyperdx/hyperdx-3.webp)

### Metrics Dashboard

Monitor system performance:

![Viewing metrics in HyperDX](./images/how-to/hyperdx/hyperdx-4.webp)

🦕 Now that you have telemetry export working, you could:

1. Add custom spans and attributes to better understand your application
2. Set up alerts based on latency or error conditions
Deploy your application and collector to production using platforms like: - [Fly.io](https://docs.deno.com/examples/deploying_deno_with_docker/) - [Digital Ocean](https://docs.deno.com/examples/digital_ocean_tutorial/) - [AWS Lightsail](https://docs.deno.com/examples/aws_lightsail_tutorial/) 🦕 For more details on OpenTelemetry configuration with HyperDX, see their [documentation](https://www.hyperdx.io/docs/install/opentelemetry). --- # Initialize a project > Guide to creating and structuring new Deno projects. Learn about starting a new project, task configuration, dependency management, and best practices for growing applications. URL: https://docs.deno.com/examples/tutorials/initialize_project While it is possible to run scripts directly with `deno run`, for larger projects it is recommended to create a sensible directory structure. This way you can organize your code, manage dependencies, script tasks and run tests more easily. Initialize a new project by running the following command: ```sh deno init my_project ``` Where `my_project` is the name of your project. You can [read more about the project structure](/runtime/getting_started/first_project/). ### Run your project Navigate to the project directory: ```sh cd my_project ``` Then you can run the project directly using the `deno task` command: ```sh deno task dev ``` Take a look in the `deno.json` file in your new project. You should see a `dev` task in the "tasks" field. ```json title="deno.json" "tasks": { "dev": "deno run --watch main.ts" }, ``` The `dev` task is a common task that runs the project in development mode. As you can see, it runs the `main.ts` file with the `--watch` flag, which will automatically reload the script when changes are made. You can see this in action if you open the `main.ts` file and make a change. ### Run the tests In the project directory run: ```sh deno test ``` This will execute all the tests in the project. You can read more about [testing in Deno](/runtime/fundamentals/testing/) and we'll cover tests in a little more depth in a later tutorial. At the moment you have one test file, `main_test.ts`, which tests the `add` function in `main.ts`. ### Adding to your project The `main.ts` file serves as the entry point for your application. It’s where you’ll write your main program logic. When developing your project you will start by removing the default addition program and replacing it with your own code. For example, if you’re building a web server, this is where you’d set up your routes and handle requests. Beyond the initial files, you’ll likely create additional modules (files) to organize your code. Consider grouping related functionality into separate files. Remember that Deno [supports ES modules](/runtime/fundamentals/modules/), so you can use import and export statements to structure your code (there is a short example of this at the end of this section). Example folder structure for a Deno project: ```sh my_project/ ├── deno.json ├── main.ts ├── main_test.ts ├── routes/ │ ├── home.ts │ ├── about.ts ├── services/ │ ├── user.ts │ ├── post.ts └── utils/ ├── logger.ts ├── logger_test.ts ├── validator_test.ts └── validator.ts ``` This kind of structure keeps your project clean and makes it easier to find and manage files. 🦕 Congratulations! Now you know how to create a brand new project with `deno init`. Remember that Deno encourages simplicity and avoids complex build tools. Keep your project modular, testable, and organized. As your project grows, adapt the structure to fit your needs. And most importantly, have fun exploring Deno’s capabilities!
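As a quick illustration of the layout above, here is what a hypothetical `utils/logger.ts` module from the example tree might contain (this file is an illustrative sketch, not something `deno init` generates):

```ts title="utils/logger.ts"
// A small, self-contained module that the rest of the project can import
export function log(message: string) {
  console.log(`[${new Date().toISOString()}] ${message}`);
}
```

Any other module, such as `main.ts`, could then use it with `import { log } from "./utils/logger.ts";`.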
--- # How to deploy Deno on Kinsta > Step-by-step guide to deploying Deno applications on Kinsta. Learn how to configure package.json, handle environment variables, set up Git deployments, and use Kinsta's application hosting platform. URL: https://docs.deno.com/examples/tutorials/kinsta [Kinsta Application Hosting](https://kinsta.com/application-hosting) is a service that lets you build and deploy your web apps directly from your Git repository. ## Preparing your application At **Kinsta**, we recommend using the [`deno-bin`](https://www.npmjs.com/package/deno-bin) package to run Deno applications. To do so, your `package.json` should look like this: ```json title="package.json" { "name": "deno-app", "scripts": { "start": "deno run --allow-net index.js --port=${PORT}" }, "devDependencies": { "deno-bin": "^1.28.2" } } ``` ## Example application ```js import { parseArgs } from "jsr:@std/cli"; const { port } = parseArgs(Deno.args); Deno.serve({ port: port ? Number(port) : 8000 }, (_req) => new Response("Hello, world")); ``` The application itself is self-explanatory. It's crucial not to hardcode the `PORT` but to use the environment variable **Kinsta** provides. There is also a [repository](https://github.com/kinsta/hello-world-deno) that should help you to get started. ## Deployment 1. Register on [Kinsta Application Hosting](https://kinsta.com/signup/?product_type=app-db) or log in directly to the [My Kinsta](https://my.kinsta.com/) admin panel. 2. Go to the Applications tab. 3. Connect your GitHub repository. 4. Press the **Add service > Application** button. 5. Follow the wizard steps. --- # Build a Real-time LLM Chat App with Deno > Learn how to integrate Large Language Models (LLM) with Deno to create an interactive roleplay chat application with AI characters using OpenAI or Anthropic APIs. URL: https://docs.deno.com/examples/tutorials/llm Large Language Models (LLMs) like OpenAI's GPT and Anthropic's Claude are powerful tools for creating intelligent, conversational applications. In this tutorial, we'll build a real-time chat application where AI characters powered by LLMs interact with users in a roleplay game setting. You can see the code for the [finished app on GitHub](https://github.com/denoland/tutorial-with-llm). :::info Deploy your own Want to skip the tutorial and deploy the finished app right now? Click the button below to instantly deploy your own copy of the complete LLM chat application to Deno Deploy. You'll get a live, working application that you can customize and modify as you learn! [![Deploy on Deno](https://deno.com/button)](https://console.deno.com/new?clone=https://github.com/denoland/tutorial-with-llm&mode=dynamic&entrypoint=main.ts&install=deno+install) Once you have deployed, add your `OPENAI_API_KEY` or `ANTHROPIC_API_KEY` in the project "Settings".
::: ## Initialize a new project First, create a new directory for your project and initialize it: ```bash mkdir deno-llm-chat cd deno-llm-chat deno init ``` ## Project structure We'll create a modular structure that separates concerns between LLM integration, game logic, and server management: ```sh ├── main.ts # Main server entry point ├── main_test.ts # Test file ├── deno.json # Deno configuration ├── .env # Environment variables (API keys) ├── src/ │ ├── config/ │ │ ├── characters.ts # Character configurations and presets │ │ └── scenarios.ts # Pre-defined scenario templates │ ├── game/ │ │ ├── GameManager.ts # Core game logic and state management │ │ └── Character.ts # AI character implementation │ ├── llm/ │ │ └── LLMProvider.ts # LLM integration layer (OpenAI/Anthropic) │ └── server/ │ └── WebSocketHandler.ts # Real-time communication └── static/ ├── index.html # Web interface ├── app.js # Frontend JavaScript └── style.css # Application styling ``` ## Set up dependencies Add the required dependencies to your `deno.json`: ```json title="deno.json" { "tasks": { "dev": "deno run -A --env-file --watch main.ts", "start": "deno run --allow-net --allow-env --allow-read main.ts", "test": "deno test --allow-net --allow-env" }, "imports": { "@std/assert": "jsr:@std/assert@1", "@std/http": "jsr:@std/http@1", "@std/uuid": "jsr:@std/uuid@1", "@std/json": "jsr:@std/json@1" }, "compilerOptions": { "lib": [ "dom", "dom.asynciterable", "deno.ns" ] } } ``` ## Configure environment variables Create a `.env` file for your API keys. The application supports both OpenAI and Anthropic. Comment out the config that you won't be using with a `#`. ```bash title=".env" # Choose one of the following LLM providers: # OpenAI Configuration OPENAI_API_KEY=your-openai-api-key-here # OR Anthropic Configuration # ANTHROPIC_API_KEY=your-anthropic-api-key-here # Server Configuration (optional) PORT=8000 ``` You can get API keys from: - [OpenAI Platform](https://platform.openai.com/api-keys) - [Anthropic Console](https://console.anthropic.com/) ## Build the LLM Provider The core of our application is the LLM provider that handles communication with AI services. Create `src/llm/LLMProvider.ts`: ```typescript title="src/llm/LLMProvider.ts" export interface LLMConfig { provider: "openai" | "anthropic" | "mock"; apiKey?: string; model?: string; maxTokens?: number; temperature?: number; } export class LLMProvider { private config: LLMConfig; private rateLimitedUntil: number = 0; private retryCount: number = 0; private maxRetries: number = 3; constructor(config?: Partial<LLMConfig>) { const apiKey = config?.apiKey || Deno.env.get("OPENAI_API_KEY") || Deno.env.get("ANTHROPIC_API_KEY"); // Auto-detect provider based on available API keys let provider = config?.provider; if (!provider && apiKey) { if (Deno.env.get("OPENAI_API_KEY")) { provider = "openai"; } else if (Deno.env.get("ANTHROPIC_API_KEY")) { provider = "anthropic"; } } this.config = { provider: provider || "mock", model: provider === "anthropic" ?
"claude-3-haiku-20240307" : "gpt-3.5-turbo", maxTokens: 150, temperature: 0.8, ...config, apiKey, }; console.log(`LLM Provider initialized: ${this.config.provider}`); } async generateResponse(prompt: string): Promise { // Check rate limiting if (this.rateLimitedUntil > Date.now()) { console.warn("Rate limited, using mock response"); return this.mockResponse(prompt); } try { switch (this.config.provider) { case "openai": return await this.callOpenAI(prompt); case "anthropic": return await this.callAnthropic(prompt); case "mock": default: return this.mockResponse(prompt); } } catch (error) { console.error("LLM API error:", error); if (this.shouldRetry(error)) { this.retryCount++; if (this.retryCount <= this.maxRetries) { console.log(`Retrying... (${this.retryCount}/${this.maxRetries})`); await this.delay(1000 * this.retryCount); return this.generateResponse(prompt); } } return this.mockResponse(prompt); } } private async callOpenAI(prompt: string): Promise { const response = await fetch("https://api.openai.com/v1/chat/completions", { method: "POST", headers: { "Authorization": `Bearer ${this.config.apiKey}`, "Content-Type": "application/json", }, body: JSON.stringify({ model: this.config.model, messages: [{ role: "user", content: prompt }], max_tokens: this.config.maxTokens, temperature: this.config.temperature, }), }); if (!response.ok) { throw new Error(`OpenAI API error: ${response.status}`); } const data = await response.json(); this.retryCount = 0; // Reset on success return data.choices[0].message.content.trim(); } private async callAnthropic(prompt: string): Promise { const response = await fetch("https://api.anthropic.com/v1/messages", { method: "POST", headers: { "x-api-key": this.config.apiKey!, "Content-Type": "application/json", "anthropic-version": "2023-06-01", }, body: JSON.stringify({ model: this.config.model, max_tokens: this.config.maxTokens, messages: [{ role: "user", content: prompt }], temperature: this.config.temperature, }), }); if (!response.ok) { throw new Error(`Anthropic API error: ${response.status}`); } const data = await response.json(); this.retryCount = 0; // Reset on success return data.content[0].text.trim(); } private mockResponse(prompt: string): string { const responses = [ "I understand! Let me think about this...", "That's an interesting approach to the situation.", "I see what you're getting at. Here's what I think...", "Fascinating! I would approach it this way...", "Good point! That gives me an idea...", ]; return responses[Math.floor(Math.random() * responses.length)]; } private shouldRetry(error: any): boolean { // Retry on rate limits and temporary server errors const errorMessage = error.message?.toLowerCase() || ""; return errorMessage.includes("rate limit") || errorMessage.includes("429") || errorMessage.includes("500") || errorMessage.includes("502") || errorMessage.includes("503"); } private delay(ms: number): Promise { return new Promise((resolve) => setTimeout(resolve, ms)); } } ``` In this file we set an LLM provider, this allows us to easily switch between different LLM APIs or mock responses for testing. We also add a retry mechanism for handling API errors. ## Create AI Characters Characters are the heart of our roleplay application. 
Create `src/game/Character.ts`: ```typescript title="src/game/Character.ts" import { LLMProvider } from "../llm/LLMProvider.ts"; export class Character { public name: string; public class: string; public personality: string; public conversationHistory: string[] = []; private llmProvider: LLMProvider; constructor( name: string, characterClass: string, personality: string, llmProvider: LLMProvider, ) { this.name = name; this.class = characterClass; this.personality = personality; this.llmProvider = llmProvider; } async generateResponse( context: string, userMessage: string, ): Promise<string> { // Build the character's prompt with personality and context const characterPrompt = ` You are ${this.name}, a ${this.class} with this personality: ${this.personality} Context: ${context} Recent conversation: ${this.conversationHistory.slice(-3).join("\n")} User message: ${userMessage} Respond as ${this.name} in character. Keep responses under 150 words and maintain your personality traits. Be engaging and helpful to advance the roleplay scenario. `.trim(); try { const response = await this.llmProvider.generateResponse(characterPrompt); // Add to conversation history this.conversationHistory.push(`User: ${userMessage}`); this.conversationHistory.push(`${this.name}: ${response}`); // Keep history manageable if (this.conversationHistory.length > 20) { this.conversationHistory = this.conversationHistory.slice(-10); } return response; } catch (error) { console.error(`Error generating response for ${this.name}:`, error); return `*${this.name} seems lost in thought and doesn't respond*`; } } getCharacterInfo() { return { name: this.name, class: this.class, personality: this.personality, }; } clearHistory() { this.conversationHistory = []; } } ``` Here we define the `Character` class, which represents each player character in the game. This class will handle generating responses based on the character's personality and the current game context. ## Set up character configurations Create predefined character templates in `src/config/characters.ts`: ```typescript title="src/config/characters.ts" export interface CharacterConfig { name: string; class: string; personality: string; emoji?: string; backstory?: string; } export const defaultCharacters: CharacterConfig[] = [ { name: "Tharin", emoji: "⚔️", class: "Fighter", personality: "Brave and loyal team leader, always ready to protect allies. Takes charge in dangerous situations but listens to party input.", backstory: "A former city guard seeking adventure and justice.", }, { name: "Lyra", emoji: "🔮", class: "Wizard", personality: "Curious and analytical strategist, loves solving puzzles. Uses magic creatively to support the party.", backstory: "A scholar of ancient magic seeking forgotten spells.", }, { name: "Finn", emoji: "🗡️", class: "Rogue", personality: "Witty and sneaky scout, prefers clever solutions. Acts quickly and adapts to what allies need.", backstory: "A former street thief now using skills for good.", }, ]; ``` These templates are what the `Character` class will use to instantiate each character with their unique traits. The LLM will use these traits to generate responses that are consistent with each character's personality and backstory. ## Build the Game Manager The Game Manager coordinates characters and maintains game state.
Create `src/game/GameManager.ts`: ```typescript title="src/game/GameManager.ts" import { Character } from "./Character.ts"; import { LLMProvider } from "../llm/LLMProvider.ts"; export interface GameState { id: string; gmPrompt: string; characters: Character[]; messages: GameMessage[]; currentTurn: number; isActive: boolean; createdAt: Date; } export interface GameMessage { id: string; speaker: string; message: string; timestamp: Date; type: "gm" | "character" | "system"; } export interface StartGameRequest { gmPrompt: string; characters: Array<{ name: string; class: string; personality: string; }>; } export class GameManager { private games: Map<string, GameState> = new Map(); private llmProvider: LLMProvider; constructor() { this.llmProvider = new LLMProvider(); } async startNewGame( gmPrompt: string, characterConfigs: StartGameRequest["characters"], ): Promise<string> { const gameId = crypto.randomUUID(); // Create characters with their LLM personalities const characters = characterConfigs.map((config) => new Character( config.name, config.class, config.personality, this.llmProvider, ) ); const gameState: GameState = { id: gameId, gmPrompt, characters, messages: [], currentTurn: 0, isActive: true, createdAt: new Date(), }; this.games.set(gameId, gameState); // Add initial system message this.addMessage(gameId, { speaker: "System", message: `Game started! Players: ${ characters.map((c) => c.name).join(", ") }`, type: "system", }); console.log(`New game started: ${gameId}`); return gameId; } async handlePlayerMessage( gameId: string, message: string, ): Promise<GameMessage[]> { const game = this.games.get(gameId); if (!game || !game.isActive) { throw new Error("Game not found or inactive"); } // Add player message this.addMessage(gameId, { speaker: "Player", message, type: "gm", }); // Generate responses from each character const responses: GameMessage[] = []; for (const character of game.characters) { try { const context = this.buildContext(game); const response = await character.generateResponse(context, message); const characterMessage = this.addMessage(gameId, { speaker: character.name, message: response, type: "character", }); responses.push(characterMessage); // Small delay between character responses for realism await new Promise((resolve) => setTimeout(resolve, 500)); } catch (error) { console.error(`Error getting response from ${character.name}:`, error); } } game.currentTurn++; return responses; } private buildContext(game: GameState): string { const recentMessages = game.messages.slice(-5); const context = [ `Scenario: ${game.gmPrompt}`, `Current turn: ${game.currentTurn}`, "Recent events:", ...recentMessages.map((m) => `${m.speaker}: ${m.message}`), ].join("\n"); return context; } private addMessage( gameId: string, messageData: Omit<GameMessage, "id" | "timestamp">, ): GameMessage { const game = this.games.get(gameId); if (!game) throw new Error("Game not found"); const message: GameMessage = { id: crypto.randomUUID(), timestamp: new Date(), ...messageData, }; game.messages.push(message); return message; } getGame(gameId: string): GameState | undefined { return this.games.get(gameId); } getActiveGames(): string[] { return Array.from(this.games.entries()) .filter(([_, game]) => game.isActive) .map(([id, _]) => id); } endGame(gameId: string): boolean { const game = this.games.get(gameId); if (game) { game.isActive = false; console.log(`Game ended: ${gameId}`); return true; } return false; } } ``` The game manager will handle all game-related logic, including starting new games, processing player messages, and managing game state.
When a player sends a message, the game manager will route it to the appropriate character for response generation. ## Add WebSocket Support Real-time communication makes the roleplay experience more engaging. Create `src/server/WebSocketHandler.ts`: ```typescript title="src/server/WebSocketHandler.ts" import { GameManager } from "../game/GameManager.ts"; export interface WebSocketMessage { type: "start_game" | "send_message" | "join_game" | "get_game_state"; gameId?: string; data?: any; } export class WebSocketHandler { private gameManager: GameManager; private connections: Map<string, WebSocket> = new Map(); constructor(gameManager: GameManager) { this.gameManager = gameManager; } handleConnection(request: Request): Response { const { socket, response } = Deno.upgradeWebSocket(request); const connectionId = crypto.randomUUID(); this.connections.set(connectionId, socket); socket.onopen = () => { console.log(`WebSocket connection opened: ${connectionId}`); this.sendMessage(socket, { type: "connection", data: { connectionId, message: "Connected to LLM Chat server" }, }); }; socket.onmessage = async (event) => { try { const message: WebSocketMessage = JSON.parse(event.data); await this.handleMessage(socket, message); } catch (error) { console.error("Error handling WebSocket message:", error); this.sendError(socket, "Invalid message format"); } }; socket.onclose = () => { console.log(`WebSocket connection closed: ${connectionId}`); this.connections.delete(connectionId); }; socket.onerror = (error) => { console.error(`WebSocket error for ${connectionId}:`, error); }; return response; } private async handleMessage(socket: WebSocket, message: WebSocketMessage) { switch (message.type) { case "start_game": await this.handleStartGame(socket, message.data); break; case "send_message": await this.handleSendMessage(socket, message); break; case "get_game_state": await this.handleGetGameState(socket, message.gameId!); break; default: this.sendError(socket, `Unknown message type: ${message.type}`); } } private async handleStartGame(socket: WebSocket, data: any) { try { const { gmPrompt, characters } = data; const gameId = await this.gameManager.startNewGame(gmPrompt, characters); this.sendMessage(socket, { type: "game_started", data: { gameId, message: "Game started successfully!
You can now send messages to interact with your characters.", }, }); } catch (error) { this.sendError(socket, `Failed to start game: ${error.message}`); } } private async handleSendMessage( socket: WebSocket, message: WebSocketMessage, ) { try { const { gameId, data } = message; if (!gameId) { this.sendError(socket, "Game ID required"); return; } const responses = await this.gameManager.handlePlayerMessage( gameId, data.message, ); this.sendMessage(socket, { type: "character_responses", data: { gameId, responses }, }); } catch (error) { this.sendError(socket, `Failed to process message: ${error.message}`); } } private async handleGetGameState(socket: WebSocket, gameId: string) { try { const game = this.gameManager.getGame(gameId); if (!game) { this.sendError(socket, "Game not found"); return; } this.sendMessage(socket, { type: "game_state", data: { gameId, characters: game.characters.map((c) => c.getCharacterInfo()), messages: game.messages.slice(-10), // Last 10 messages isActive: game.isActive, }, }); } catch (error) { this.sendError(socket, `Failed to get game state: ${error.message}`); } } private sendMessage(socket: WebSocket, message: any) { if (socket.readyState === WebSocket.OPEN) { socket.send(JSON.stringify(message)); } } private sendError(socket: WebSocket, error: string) { this.sendMessage(socket, { type: "error", data: { error }, }); } } ``` Here we set up the WebSocket server to handle connections and messages. WebSockets allow for real-time communication between the client and server, making them ideal for interactive applications like a chat app or game. We send messages back and forth between the client and server to keep the game state in sync. ## Create the main server Now let's tie everything together in `main.ts`: ```typescript title="main.ts" import { GameManager } from "./src/game/GameManager.ts"; import { WebSocketHandler } from "./src/server/WebSocketHandler.ts"; import { defaultCharacters } from "./src/config/characters.ts"; const gameManager = new GameManager(); const wsHandler = new WebSocketHandler(gameManager); async function handler(req: Request): Promise<Response> { const url = new URL(req.url); // Handle WebSocket connections if (req.headers.get("upgrade") === "websocket") { return wsHandler.handleConnection(req); } // Serve static files and API endpoints switch (url.pathname) { case "/": return new Response(await getIndexHTML(), { headers: { "content-type": "text/html" }, }); case "/api/characters": return new Response(JSON.stringify(defaultCharacters), { headers: { "content-type": "application/json" }, }); case "/api/game/start": if (req.method === "POST") { try { const body = await req.json(); const gameId = await gameManager.startNewGame( body.gmPrompt, body.characters, ); return new Response(JSON.stringify({ gameId }), { headers: { "content-type": "application/json" }, }); } catch (error) { return new Response( JSON.stringify({ error: error.message }), { status: 400, headers: { "content-type": "application/json" }, }, ); } } break; case "/api/game/message": if (req.method === "POST") { try { const body = await req.json(); const responses = await gameManager.handlePlayerMessage( body.gameId, body.message, ); return new Response(JSON.stringify({ responses }), { headers: { "content-type": "application/json" }, }); } catch (error) { return new Response( JSON.stringify({ error: error.message }), { status: 400, headers: { "content-type": "application/json" }, }, ); } } break; default: return new Response("Not Found", { status: 404 }); } return new Response("Method
Not Allowed", { status: 405 }); } async function getIndexHTML(): Promise { try { return await Deno.readTextFile("./static/index.html"); } catch { // Return a basic HTML template if file doesn't exist return ` LLM Roleplay Chat

Oops! Something went wrong.

`; } } const port = parseInt(Deno.env.get("PORT") || "8000"); console.log(`🎭 LLM Chat server starting on http://localhost:${port}`); Deno.serve({ port }, handler); ``` In the `main.ts` file we set up an HTTP server and a WebSocket server to handle real-time communication. We use the HTTP server to serve static files and provide API endpoints, while the WebSocket server manages real-time interactions between clients. ## Add a frontend The frontend of our app will live in the `static` directory. Create an `index.html`, `app.js` and a `style.css` file in the `static` directory. ### `index.html` We'll create a very basic layout with a textarea to collect the user's scenario input and a section to show the response messages with a text input to send messages. Copy the content from [this html file](https://github.com/denoland/tutorial-with-llm/blob/main/static/index.html) into your `index.html`. ### `app.js` In `app.js`, we'll add the JavaScript to handle user input and display responses. Copy the content from [this js file](https://github.com/denoland/tutorial-with-llm/blob/main/static/app.js) into your `app.js`. ### `style.css` We'll add some basic styles to make our app look nicer. Copy the content from [this css file](https://github.com/denoland/tutorial-with-llm/blob/main/static/style.css) into your `style.css`. ## Run your application Start your development server: ```bash deno task dev ``` Your LLM chat application will be available at `http://localhost:8000`. The application will: 1. **Auto-detect your LLM provider** based on available API keys 2. **Fall back to mock responses** if no API keys are configured 3. **Handle rate limiting** gracefully with retries and fallbacks 4. **Provide real-time interaction** through WebSockets ## Deploy your application to the cloud Now that you have your working LLM chat application, you can deploy it to the cloud with Deno Deploy. For the best experience, you can deploy your app directly from GitHub, which will set up automated deployments. Create a GitHub repository and push your app there. [Create a new GitHub repository](https://github.com/new), then initialize and push your app to GitHub: ```sh git init -b main git remote add origin https://github.com/<your-username>/<your-repo>.git git add . git commit -am 'initial commit' git push -u origin main ``` Once your app is on GitHub, you can [deploy it to Deno Deploy](https://console.deno.com/). Don't forget to add your `OPENAI_API_KEY` or `ANTHROPIC_API_KEY` environment variables in the project "Settings". For a walkthrough of deploying your app, check out the [Deno Deploy tutorial](/examples/deno_deploy_tutorial/). ## Testing We've provided tests to verify your setup. Copy the [`main.test.ts`](https://github.com/denoland/tutorial-with-llm/blob/main/tests/main.test.ts) file into your project directory and run it: ```bash deno task test ``` 🦕 You now have a working LLM chat application, with real-time interaction, rate limiting and error handling. Next you can customise it to your own play style! Consider giving the LLM instructions on how to behave in different scenarios, or how to respond to specific user inputs. You can add these into the character configuration files. You could also consider adding a database to store the conversation history for long-term character and story development.
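For example, here is a minimal sketch of what persisting messages with Deno KV could look like (the `saveMessage` and `loadHistory` helpers are hypothetical; `GameMessage` is the interface exported from `GameManager.ts` above):

```ts
import type { GameMessage } from "./src/game/GameManager.ts";

// Hypothetical persistence helpers built on Deno KV
const kv = await Deno.openKv();

export async function saveMessage(gameId: string, message: GameMessage) {
  // Key by game ID and timestamp so messages stay ordered per game
  await kv.set(
    ["games", gameId, "messages", message.timestamp.getTime()],
    message,
  );
}

export async function loadHistory(gameId: string): Promise<GameMessage[]> {
  const history: GameMessage[] = [];
  for await (
    const entry of kv.list<GameMessage>({
      prefix: ["games", gameId, "messages"],
    })
  ) {
    history.push(entry.value);
  }
  return history;
}
```

You could call something like `saveMessage` from `GameManager.addMessage`, and replay `loadHistory` when a game is resumed.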
--- # Migrating a custom domain from Deploy Classic to Deno Deploy > Learn how to migrate your custom domain from Deploy Classic to Deno Deploy URL: https://docs.deno.com/examples/tutorials/migrate_custom_domain If you have previously set up a custom domain on Deploy Classic and want to migrate it to Deno Deploy, follow these steps: ## Add your domain to Deno Deploy 1. Visit the [Deno Deploy dashboard](https://dash.deno.com) and navigate to the project you want to associate with your custom domain. 2. Click the **"Settings"** tab. 3. Under "Production Domains", click **"+ Add Domain"**. 4. Enter your custom domain (e.g., `test.mywebsite.com`), select whether you want just the base URL or the base and a wildcard, then click **"Save"**. This will kick off DNS record configuration, which may take a few minutes. You will be presented with DNS records that you need to add to your DNS provider. ## Provision a TLS certificate In your DNS provider's settings, update your domain's DNS records to include the provided `_acme-challenge` CNAME record. This is necessary for Deno Deploy to verify your domain and provision a TLS certificate. ![DNS Records modal](/deploy/images/dns_config.png) Once the DNS records are picked up, provision a new TLS certificate by clicking the **"Verify DNS and provision certificate"** button. ## Update DNS records In your DNS provider's settings, remove any existing CNAME/A/AAAA records for your domain and replace them with the CNAME or ANAME records provided by Deno Deploy. This may take some time due to DNS propagation delays. Allow up to 48 hours for the changes to take effect before removing the domain from Deploy Classic. --- # Testing in isolation with mocks > Master the art of mocking in your unit tests. Learn how spies, stubs, fake time, and other Deno tools let you improve your code and confidence URL: https://docs.deno.com/examples/tutorials/mocking This guide builds on the [basics of testing in Deno](/examples/testing_tutorial/) to focus specifically on mocking techniques that help you isolate your code during testing. For effective unit testing, you'll often need to "mock" the data that your code interacts with. Mocking is a technique used in testing where you replace real data with simulated versions that you can control. This is particularly useful when testing components that interact with external services, such as APIs or databases. Deno provides [helpful mocking utilities](https://jsr.io/@std/testing/doc/mock) through the Deno Standard Library, making your tests easier to write, more reliable and faster. ### Spying In Deno, you can [`spy`](https://jsr.io/@std/testing/doc/mock#spying) on a function to track how it's called during test execution. Spies don't change how a function behaves, but they record important details like how many times the function was called and what arguments were passed to it. By using spies, you can verify that your code interacts correctly with its dependencies without setting up complex infrastructure.
In the following example we will test a function called `saveUser()`, which takes a user object and a database object and calls the database's `save` method: ```ts import { assertEquals } from "jsr:@std/assert"; import { assertSpyCalls, spy } from "jsr:@std/testing/mock"; // Define types for better code quality interface User { name: string; } interface Database { save: (user: User) => Promise<User & { id: number }>; } // Function to test function saveUser( user: User, database: Database, ): Promise<User & { id: number }> { return database.save(user); } // Test with a mock Deno.test("saveUser calls database.save", async () => { // Create a mock database with a spy on the save method const mockDatabase = { save: spy((user: User) => Promise.resolve({ id: 1, ...user })), }; const user: User = { name: "Test User" }; const result = await saveUser(user, mockDatabase); // Verify the mock was called correctly assertSpyCalls(mockDatabase.save, 1); assertEquals(mockDatabase.save.calls[0].args[0], user); assertEquals(result, { id: 1, name: "Test User" }); }); ``` We import the necessary functions from the Deno Standard Library to assert equality and to create and verify spy functions. The mock database is a stand-in for a real database object, with a `save` method that is wrapped in a `spy`. The spy function tracks calls to the method, records arguments passed to it and executes the underlying implementation (in this case returning a promise with the `user` and an `id`). The test calls `saveUser()` with the mock data and we use assertions to verify that: 1. The save method was called exactly once 2. The first argument of the call was the `user` object we passed in 3. The result contains both the original user data and the added ID We were able to test the `saveUser` operation without setting up or tearing down any complex database state. ### Clearing spies When working with multiple tests that use spies, it's important to reset or clear spies between tests to avoid interference. The Deno testing library provides a simple way to restore a method spy to its original state using the `restore()` method (note that only spies wrapping an object's method can be restored; a spy around a bare function has no original to put back). Here's how to clear a spy after you're done with it: ```ts import { assertEquals } from "jsr:@std/assert"; import { assertSpyCalls, spy } from "jsr:@std/testing/mock"; Deno.test("spy cleanup example", () => { const calculator = { double: (x: number) => x * 2 }; // Create a spy on a method const doubleSpy = spy(calculator, "double"); try { // Test code using the spy const result = calculator.double(5); assertEquals(result, 10); assertSpyCalls(doubleSpy, 1); } finally { // Always clean up spies doubleSpy.restore(); } }); ``` Method spies are disposable: they can automatically restore themselves when declared with the `using` keyword. This approach means that you do not need to wrap your assertions in a try statement to ensure you restore the methods before the tests finish.
```ts import { assertEquals } from "jsr:@std/assert"; import { assertSpyCalls, spy } from "jsr:@std/testing/mock"; Deno.test("using disposable spies", () => { const calculator = { add: (a: number, b: number) => a + b, multiply: (a: number, b: number) => a * b, }; // The spy will automatically be restored when it goes out of scope using addSpy = spy(calculator, "add"); // Use the spy const sum = calculator.add(3, 4); assertEquals(sum, 7); assertSpyCalls(addSpy, 1); assertEquals(addSpy.calls[0].args, [3, 4]); // No need for try/finally blocks - the spy will be restored automatically }); Deno.test("using multiple disposable spies", () => { const calculator = { add: (a: number, b: number) => a + b, multiply: (a: number, b: number) => a * b, }; // Both spies will automatically be restored using addSpy = spy(calculator, "add"); using multiplySpy = spy(calculator, "multiply"); calculator.add(5, 3); calculator.multiply(4, 2); assertSpyCalls(addSpy, 1); assertSpyCalls(multiplySpy, 1); // No cleanup code needed }); ``` For cases where you would rather not use the `using` keyword, you can track method spies in an array and restore them all at once: ```ts Deno.test("multiple spies cleanup", () => { const spies: Array<{ restore: () => void }> = []; // Create spies on object methods (only method spies can be restored) const objectA = { increment: (x: number) => x + 1, }; const spyA = spy(objectA, "increment"); spies.push(spyA); const objectB = { method: (x: number) => x * 2, }; const spyB = spy(objectB, "method"); spies.push(spyB); try { // Test code using the spies // ... } finally { // Restore all spies spies.forEach((spyFn) => spyFn.restore()); } }); ``` By properly cleaning up spies, you ensure that each test starts with a clean state and avoid side effects between tests. ### Stubbing While spies track method calls without changing behavior, stubs replace the original implementation entirely. [Stubbing](https://jsr.io/@std/testing/doc/mock#stubbing) is a form of mocking where you temporarily replace a function or method with a controlled implementation. This allows you to simulate specific conditions or behaviors and return predetermined values. It can also be used when you need to override environment-dependent functionality.
In Deno, you can create stubs using the `stub` function from the standard testing library: ```ts import { assertEquals } from "jsr:@std/assert"; import { stub } from "jsr:@std/testing/mock"; // Define types for better code quality interface User { name: string; role: string; } // The original implementation lives on an object so that it can be stubbed const userService = { getCurrentUser(_userId: string): User { // Implementation that might involve database calls return { name: "Real User", role: "admin" }; }, }; // Function we want to test function hasAdminAccess(userId: string): boolean { const user = userService.getCurrentUser(userId); return user.role === "admin"; } Deno.test("hasAdminAccess with stubbed user", () => { // Create a stub that replaces userService.getCurrentUser const getUserStub = stub( userService, "getCurrentUser", // Return a test user with non-admin role () => ({ name: "Test User", role: "guest" }), ); try { // Test with the stubbed method const result = hasAdminAccess("user123"); assertEquals(result, false); } finally { getUserStub.restore(); // Remove first stub } // You can also change the stub's behavior during the test const adminStub = stub( userService, "getCurrentUser", () => ({ name: "Admin User", role: "admin" }), ); try { const adminResult = hasAdminAccess("admin456"); assertEquals(adminResult, true); } finally { // Always restore the original method adminStub.restore(); } }); ``` Here we import the necessary functions from the Deno Standard Library, then we set up the method we're going to stub. In a real application this might connect to a database, make an API call, or perform other operations that we may want to avoid during testing. We set up the function under test, in this case the `hasAdminAccess()` function. We want to test whether it: - Calls the `getCurrentUser()` method to get a user object - Checks if the user's role is "admin" - Returns a boolean indicating whether the user has admin access Next we create a test named `hasAdminAccess with a stubbed user` and set up a stub for the `getCurrentUser` method. This will replace the real implementation with one that returns a user with a `guest` role. We run the test with the stubbed method; it calls `hasAdminAccess` with a user ID. Even though the real method would return a user with `admin` role, our stub returns `guest`, so we can assert that `hasAdminAccess` returns `false` (since our stub returns a non-admin user). We can change the stub behavior to return `admin` instead and assert that the function now returns `true`. At the end we use `finally` blocks to ensure the original method is restored so that we don't accidentally affect other tests. ### Stubbing environment variables For deterministic testing, you often need to control environment variables.
Deno's Standard Library provides utilities to achieve this: ```ts import { assertEquals } from "jsr:@std/assert"; import { stub } from "jsr:@std/testing/mock"; // Function that depends on environment variables and time function generateReport() { const environment = Deno.env.get("ENVIRONMENT") || "development"; const timestamp = new Date().toISOString(); return { environment, generatedAt: timestamp, data: {/* report data */}, }; } Deno.test("report generation with controlled environment", () => { // Stub environment const originalEnv = Deno.env.get; const envStub = stub(Deno.env, "get", (key: string) => { if (key === "ENVIRONMENT") return "production"; return originalEnv.call(Deno.env, key); }); // Stub time const dateStub = stub( Date.prototype, "toISOString", () => "2023-06-15T12:00:00Z", ); try { const report = generateReport(); // Verify results with controlled values assertEquals(report.environment, "production"); assertEquals(report.generatedAt, "2023-06-15T12:00:00Z"); } finally { // Always restore stubs to prevent affecting other tests envStub.restore(); dateStub.restore(); } }); ``` ### Faking time Time-dependent code can be challenging to test because it may produce different results based on when the test runs. Deno provides a [`FakeTime`](https://jsr.io/@std/testing/doc/time) utility that allows you to simulate the passage of time and control date-related functions during tests. The example below demonstrates how to test time-dependent functions: `isWeekend()`, which returns true if the current day is Saturday or Sunday, and `delayedGreeting()` which calls a callback after a 1-second delay: ```ts import { assertEquals } from "jsr:@std/assert"; import { FakeTime } from "jsr:@std/testing/time"; // Function that depends on the current time function isWeekend(): boolean { const date = new Date(); const day = date.getDay(); return day === 0 || day === 6; // 0 is Sunday, 6 is Saturday } // Function that works with timeouts function delayedGreeting(callback: (message: string) => void): void { setTimeout(() => { callback("Hello after delay"); }, 1000); // 1 second delay } Deno.test("time-dependent tests", () => { // Create a fake time starting at a specific date (a Monday); `using` // restores the real time automatically when the test finishes using time = new FakeTime(new Date("2023-05-01T12:00:00Z")); // Test with the mocked Monday assertEquals(isWeekend(), false); // Move time forward to Saturday time.tick(5 * 24 * 60 * 60 * 1000); // Advance 5 days assertEquals(isWeekend(), true); // Test async operations with timers let greeting = ""; delayedGreeting((message) => { greeting = message; }); // Advance time to trigger the timeout immediately time.tick(1000); assertEquals(greeting, "Hello after delay"); }); ``` Here we set up a test which creates a controlled time environment with `FakeTime`, setting the starting date to May 1, 2023 (which was a Monday). The `FakeTime` instance is a controller object that lets us manipulate time, and the `using` declaration restores the real time automatically. We run tests with the mocked Monday and will see that the `isWeekend` function returns `false`. Then we can advance time to Saturday and run the test again to verify that `isWeekend` returns `true`. `FakeTime` replaces JavaScript's timing functions (`Date`, `setTimeout`, `setInterval`, etc.) with versions you can control. This allows you to test code with specific dates or times regardless of when the test runs.
This powerful technique means you will avoid flaky tests that depend on the system clock and can speed up tests by advancing time instantly instead of waiting for real timeouts. Fake time is particularly useful for testing: - Calendar or date-based features, such as scheduling, appointments or expiration dates - Code with timeouts or intervals, such as polling, delayed operations or debouncing - Animations or transitions such as testing the completion of timed visual effects Like with stubs, always restore the real time functions after your tests, either with the `restore()` method or by declaring the fake time with `using`, to avoid affecting other tests. ## Advanced mocking patterns ### Partial mocking Sometimes you only want to mock certain methods of an object while keeping others intact: ```ts import { assertEquals } from "jsr:@std/assert"; import { stub } from "jsr:@std/testing/mock"; class UserService { async getUser(id: string) { // Complex database query return { id, name: "Database User" }; } async formatUser(user: { id: string; name: string }) { return { ...user, displayName: user.name.toUpperCase(), }; } async getUserFormatted(id: string) { const user = await this.getUser(id); return this.formatUser(user); } } Deno.test("partial mocking with stubs", async () => { const service = new UserService(); // Only mock the getUser method const getUserMock = stub( service, "getUser", () => Promise.resolve({ id: "test-id", name: "Mocked User" }), ); try { // The formatUser method will still use the real implementation const result = await service.getUserFormatted("test-id"); assertEquals(result, { id: "test-id", name: "Mocked User", displayName: "MOCKED USER", }); // Verify getUser was called with the right arguments assertEquals(getUserMock.calls.length, 1); assertEquals(getUserMock.calls[0].args[0], "test-id"); } finally { getUserMock.restore(); } }); ``` ### Mocking fetch requests Testing code that makes HTTP requests often requires mocking the `fetch` API: ```ts import { assertEquals } from "jsr:@std/assert"; import { stub } from "jsr:@std/testing/mock"; // Function that uses fetch async function fetchUserData(userId: string) { const response = await fetch(`https://api.example.com/users/${userId}`); if (!response.ok) { throw new Error(`Failed to fetch user: ${response.status}`); } return await response.json(); } Deno.test("mocking fetch API", async () => { // Create a response that the mock fetch will return const mockResponse = new Response( JSON.stringify({ id: "123", name: "John Doe" }), { status: 200, headers: { "Content-Type": "application/json" } }, ); // Replace fetch with a stubbed version; stub() swaps the global in place const fetchStub = stub( globalThis, "fetch", (_input: string | URL | Request, _init?: RequestInit) => Promise.resolve(mockResponse), ); try { const result = await fetchUserData("123"); assertEquals(result, { id: "123", name: "John Doe" }); } finally { // Restore original fetch fetchStub.restore(); } }); ``` ## Real-world example Let's put everything together in a more comprehensive example. We'll test a user authentication service that: 1. Validates user credentials 2. Calls an API to authenticate 3. Stores tokens with expiration times In the example below, we'll create a full `AuthService` class that handles user login, token management, and authentication. We'll test it thoroughly using various mocking techniques covered earlier: stubbing fetch requests and the login method, and manipulating time to test token expiration - all within organized test steps.
Deno's testing API provides a useful `t.step()` function that allows you to organize your tests into logical steps or sub-tests. This makes complex tests more readable and helps pinpoint exactly which part of a test is failing. Each step can have its own assertions and will be reported separately in the test output. ```ts import { assertEquals, assertRejects, assertThrows } from "jsr:@std/assert"; import { stub } from "jsr:@std/testing/mock"; import { FakeTime } from "jsr:@std/testing/time"; // The service we want to test class AuthService { private token: string | null = null; private expiresAt: Date | null = null; async login(username: string, password: string): Promise<string> { // Validate inputs if (!username || !password) { throw new Error("Username and password are required"); } // Call authentication API const response = await fetch("https://api.example.com/login", { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify({ username, password }), }); if (!response.ok) { throw new Error(`Authentication failed: ${response.status}`); } const data = await response.json(); // Store token with expiration (1 hour) const token: string = data.token; this.token = token; this.expiresAt = new Date(Date.now() + 60 * 60 * 1000); return token; } getToken(): string { if (!this.token || !this.expiresAt) { throw new Error("Not authenticated"); } if (new Date() > this.expiresAt) { this.token = null; this.expiresAt = null; throw new Error("Token expired"); } return this.token; } logout(): void { this.token = null; this.expiresAt = null; } } Deno.test("AuthService comprehensive test", async (t) => { await t.step("login should validate credentials", async () => { const authService = new AuthService(); await assertRejects( () => authService.login("", "password"), Error, "Username and password are required", ); }); await t.step("login should handle API calls", async () => { const authService = new AuthService(); // Mock successful response const mockResponse = new Response( JSON.stringify({ token: "fake-jwt-token" }), { status: 200, headers: { "Content-Type": "application/json" } }, ); const fetchStub = stub( globalThis, "fetch", (_url: string | URL | Request, options?: RequestInit) => { // Verify correct data is being sent const body = options?.body as string; const parsedBody = JSON.parse(body); assertEquals(parsedBody.username, "testuser"); assertEquals(parsedBody.password, "password123"); return Promise.resolve(mockResponse); }, ); try { const token = await authService.login("testuser", "password123"); assertEquals(token, "fake-jwt-token"); } finally { fetchStub.restore(); } }); await t.step("token expiration should work correctly", async () => { // Start the fake clock at a fixed date; `using` restores real time automatically using time = new FakeTime(new Date("2023-01-01T12:00:00Z")); const authService = new AuthService(); // Stub the login process to set the token directly using _loginStub = stub( authService, "login", (_username: string, _password: string) => { (authService as any).token = "fake-token"; (authService as any).expiresAt = new Date( Date.now() + 60 * 60 * 1000, ); return Promise.resolve("fake-token"); }, ); // Login and verify the token await authService.login("user", "pass"); assertEquals(authService.getToken(), "fake-token"); // Advance time past expiration time.tick(61 * 60 * 1000); // getToken throws synchronously once the token has expired assertThrows( () => { authService.getToken(); }, Error, "Token expired", ); }); }); ``` This code defines the `AuthService` class with three main functionalities: - Login - Validates
credentials, calls an API, and stores a token with an expiration time - GetToken - Returns the token if valid and not expired - Logout - Clears the token and expiration The testing structure is organized as a single main test with three logical **steps**, each testing a different aspect of the service: credential validation, API call handling, and token expiration. 🦕 Effective mocking is essential for writing reliable, maintainable unit tests. Deno provides several powerful tools to help you isolate your code during testing. By mastering these mocking techniques, you'll be able to write more reliable tests that run faster and don't depend on external services. For more testing resources, check out: - [Deno Testing API Documentation](/api/deno/testing) - [Deno Standard Library Testing Modules](https://jsr.io/@std/testing) - [Basic Testing in Deno](/examples/testing_tutorial/) --- # Module metadata > A guide to working with module metadata in Deno. Learn about import.meta properties, main module detection, file paths, URL resolution, and how to access module context information in your applications. URL: https://docs.deno.com/examples/tutorials/module_metadata ## Concepts - [import.meta](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import.meta) can provide information on the context of the module. - The boolean [import.meta.main](https://docs.deno.com/api/web/~/ImportMeta#property_main) will let you know if the current module is the program entry point. - The string [import.meta.url](https://docs.deno.com/api/web/~/ImportMeta#property_url) will give you the URL of the current module. - The string [import.meta.filename](https://docs.deno.com/api/web/~/ImportMeta#property_filename) will give you the fully resolved path to the current module. _For local modules only_. - The string [import.meta.dirname](https://docs.deno.com/api/web/~/ImportMeta#property_dirname) will give you the fully resolved path to the directory containing the current module. _For local modules only_. - The [import.meta.resolve](https://docs.deno.com/api/web/~/ImportMeta#property_resolve) function allows you to resolve a specifier relative to the current module. This function takes into account an import map (if one was provided on startup). - The string [Deno.mainModule](https://docs.deno.com/api/deno/~/Deno.mainModule) will give you the URL of the main module entry point, i.e. the module invoked by the Deno runtime. ## Example The example below uses two modules to show the difference between `import.meta.url`, `import.meta.main` and `Deno.mainModule`.
In this example, `module_a.ts` is the main module entry point: ```ts title="module_b.ts" export function outputB() { console.log("Module B's import.meta.url", import.meta.url); console.log("Module B's mainModule url", Deno.mainModule); console.log( "Is module B the main module via import.meta.main?", import.meta.main, ); } ``` ```ts title="module_a.ts" import { outputB } from "./module_b.ts"; function outputA() { console.log("Module A's import.meta.url", import.meta.url); console.log("Module A's mainModule url", Deno.mainModule); console.log( "Is module A the main module via import.meta.main?", import.meta.main, ); console.log( "Resolved specifier for ./module_b.ts", import.meta.resolve("./module_b.ts"), ); } outputA(); console.log(""); outputB(); ``` If `module_a.ts` is located in `/home/alice/deno` then the output of `deno run --allow-read module_a.ts` is: ```console Module A's import.meta.url file:///home/alice/deno/module_a.ts Module A's mainModule url file:///home/alice/deno/module_a.ts Is module A the main module via import.meta.main? true Resolved specifier for ./module_b.ts file:///home/alice/deno/module_b.ts Module B's import.meta.url file:///home/alice/deno/module_b.ts Module B's mainModule url file:///home/alice/deno/module_a.ts Is module B the main module via import.meta.main? false ``` --- # How to use Mongoose with Deno > Step-by-step guide to using Mongoose with Deno. Learn how to set up MongoDB connectivity, create schemas, implement data models, and perform CRUD operations using Mongoose's schema-based modeling. URL: https://docs.deno.com/examples/tutorials/mongoose [Mongoose](https://mongoosejs.com/) is a popular, schema-based library that models data for [MongoDB](https://www.mongodb.com/). It simplifies writing MongoDB validation, casting, and other relevant business logic. This tutorial will show you how to set up Mongoose and MongoDB with your Deno project. [View source](https://github.com/denoland/examples/tree/main/with-mongoose) or [check out the video guide](https://youtu.be/dmZ9Ih0CR9g). ## Creating a Mongoose Model Let's create a simple app that connects to MongoDB, creates a `Dinosaur` model, and adds and updates a dinosaur in the database. First, we'll create the necessary files and directories: ```console touch main.ts && mkdir model && touch model/Dinosaur.ts ``` In `/model/Dinosaur.ts`, we'll import `npm:mongoose`, define the [schema](https://mongoosejs.com/docs/guide.html), and export it: ```ts title="model/Dinosaur.ts" import mongoose, { type HydratedDocument, type Model, model, models, Schema, } from "npm:mongoose@latest"; interface Dinosaur { name: string; description: string; createdAt?: Date; updatedAt?: Date; } interface DinosaurMethods { updateDescription( this: HydratedDocument<Dinosaur, DinosaurMethods>, description: string, ): Promise<HydratedDocument<Dinosaur, DinosaurMethods>>; } type DinosaurModel = Model<Dinosaur, {}, DinosaurMethods>; const dinosaurSchema = new Schema<Dinosaur, DinosaurModel, DinosaurMethods>( { name: { type: String, unique: true, required: true }, description: { type: String, required: true }, }, { timestamps: true }, ); dinosaurSchema.methods.updateDescription = async function ( this: HydratedDocument<Dinosaur, DinosaurMethods>, description: string, ) { this.description = description; return await this.save(); }; export default (models.Dinosaur as DinosaurModel) || model<Dinosaur, DinosaurModel>("Dinosaur", dinosaurSchema); ``` ## Connecting to MongoDB Now, in our `main.ts` file, we'll import mongoose and the `Dinosaur` schema, and connect to MongoDB: ```ts import mongoose from "npm:mongoose@latest"; import Dinosaur from "./model/Dinosaur.ts"; const MONGODB_URI = Deno.env.get("MONGODB_URI") ??
"mongodb://localhost:27017/deno_mongoose_tutorial"; await mongoose.connect(MONGODB_URI); console.log(mongoose.connection.readyState); ``` Because Deno supports top-level `await`, we're able to simply `await mongoose.connect()`. Running the code with this command: ```shell deno run --allow-env --allow-net main.ts ``` We expect a log of `1`. ## Manipulating Data Let's add a typed instance [method](https://mongoosejs.com/docs/guide.html#methods) to our `Dinosaur` schema in `/model/Dinosaur.ts`: ```ts title="model/Dinosaur.ts" dinosaurSchema.methods.updateDescription = async function ( this: HydratedDocument, description: string, ) { this.description = description; return await this.save(); }; // ... ``` This instance method, `updateDescription`, will allow you to update a record's description. Back in `main.ts`, let's start adding and manipulating data in MongoDB. ```ts title="main.ts" const deno = new Dinosaur({ name: "Deno", description: "The fastest dinosaur that ever lived.", }); await deno.save(); const denoFromMongoDb = await Dinosaur.findOne({ name: "Deno" }).exec(); if (!denoFromMongoDb) throw new Error("Deno not found"); console.log( `Finding Deno in MongoDB -- \n ${denoFromMongoDb.name}: ${denoFromMongoDb.description}`, ); await denoFromMongoDb.updateDescription( "The fastest and most secure dinosaur that ever lived.", ); const newDenoFromMongoDb = await Dinosaur.findOne({ name: "Deno" }).exec(); if (!newDenoFromMongoDb) throw new Error("Deno not found after update"); console.log( `Finding Deno (again) -- \n ${newDenoFromMongoDb.name}: ${newDenoFromMongoDb.description}`, ); ``` Running the code, we get: ```console Finding Deno in MongoDB -- Deno: The fastest dinosaur that ever lived. Finding Deno (again) -- Deno: The fastest and most secure dinosaur that ever lived. ``` 🦕 Now you have a fully functional Deno application using Mongoose to interact with MongoDB! For more info on using Mongoose, please refer to [their documentation](https://mongoosejs.com/docs/guide.html). --- # How to use MySQL2 with Deno > Step-by-step guide to using MySQL2 with Deno. Learn how to set up database connections, execute queries, handle transactions, and build data-driven applications using MySQL's Node.js driver. URL: https://docs.deno.com/examples/tutorials/mysql2 [MySQL](https://www.mysql.com/) is the most popular database in the [2022 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2022/#most-popular-technologies-database) and counts Facebook, Twitter, YouTube, and Netflix among its users. [View source here.](https://github.com/denoland/examples/tree/main/with-mysql2) You can manipulate and query a MySQL database with Deno using the `mysql2` node package and importing via `npm:mysql2`. This allows us to use its Promise wrapper and take advantage of top-level await. ```tsx import mysql from "npm:mysql2@^2.3.3/promise"; ``` ## Connecting to MySQL We can connect to our MySQL server using the `createConnection()` method. You need the host (`localhost` if you are testing, or more likely a cloud database endpoint in production) and the user and password: ```tsx const connection = await mysql.createConnection({ host: "localhost", user: "root", password: "password", }); ``` You can also optionally specify a database during the connection creation. Here we are going to use `mysql2` to create the database on the fly. 
## Creating and populating the database

Now that you have the connection running, you can use `connection.query()` with SQL commands to create databases and tables as well as insert the initial data.

First we want to generate and select the database to use:

```tsx
await connection.query("CREATE DATABASE denos");
await connection.query("use denos");
```

Then we want to create the table:

```tsx
await connection.query(
  "CREATE TABLE `dinosaurs` ( `id` int NOT NULL AUTO_INCREMENT PRIMARY KEY, `name` varchar(255) NOT NULL, `description` varchar(255) )",
);
```

After the table is created we can populate the data:

```tsx
await connection.query(
  "INSERT INTO `dinosaurs` (id, name, description) VALUES (1, 'Aardonyx', 'An early stage in the evolution of sauropods.'), (2, 'Abelisaurus', 'Abels lizard has been reconstructed from a single skull.'), (3, 'Deno', 'The fastest dinosaur that ever lived.')",
);
```

We now have all the data ready to start querying.

## Querying MySQL

We can use the same `connection.query()` method to write our queries. First we'll get all the data in our `dinosaurs` table:

```tsx
const results = await connection.query("SELECT * FROM `dinosaurs`");
console.log(results);
```

The result from this query is all the data in our database (the rows, followed by field metadata):

```tsx
[
  [
    { id: 1, name: "Aardonyx", description: "An early stage in the evolution of sauropods." },
    { id: 2, name: "Abelisaurus", description: "Abels lizard has been reconstructed from a single skull." },
    { id: 3, name: "Deno", description: "The fastest dinosaur that ever lived." }
  ],
  // ...field metadata omitted
]
```

If we want to get just a single element from the database, we can change our query:

```tsx
const [results, fields] = await connection.query(
  "SELECT * FROM `dinosaurs` WHERE `name` = 'Deno'",
);
console.log(results);
```

Which gives us a single row result:

```tsx
[{ id: 3, name: "Deno", description: "The fastest dinosaur that ever lived." }];
```

Finally, we can close the connection:

```tsx
await connection.end();
```

For more on `mysql2`, check out their documentation [here](https://github.com/sidorares/node-mysql2).

---

# Build a Next.js App

> Walkthrough guide to building a Next.js application with Deno. Learn how to set up a project, create API routes, implement server-side rendering, and build a full-stack TypeScript application.

URL: https://docs.deno.com/examples/tutorials/next

[Next.js](https://nextjs.org/) is a popular framework for building server-side-rendered applications. It is built on top of React and provides a lot of features out of the box.

In this tutorial, we'll build a [simple Next.js application](https://tutorial-with-next.deno.deno.net/) and run it with Deno. The app will display a list of dinosaurs. When you click on one, it'll take you to a dinosaur page with more details.

You can see the [complete app on GitHub](https://github.com/denoland/tutorial-with-next/tree/main).

:::info Deploy your own

Want to skip the tutorial and deploy the finished app right now? Click the button below to instantly deploy your own copy of the complete Next.js dinosaur app to Deno Deploy. You'll get a live, working application that you can customize and modify as you learn!

[![Deploy on Deno](https://deno.com/button)](https://console.deno.com/new?clone=https://github.com/denoland/tutorial-with-next)

:::

## Create a Next.js app with Deno

Next provides a CLI tool to quickly scaffold a new Next.js app.
In your terminal run the following command to create a new Next.js app with Deno:

```sh
deno run -A npm:create-next-app@latest
```

When prompted, select the default options to create a new Next.js app with TypeScript.

Next.js has some dependencies that still rely on `Object.prototype.__proto__`, and it requires CommonJS module support. To configure Deno for Next.js compatibility, update your `deno.json` file with the following configuration:

```json title="deno.json"
{
  "unstable": [
    "bare-node-builtins",
    "detect-cjs",
    "node-globals",
    "unsafe-proto",
    "sloppy-imports"
  ]
}
```

Now install the dependencies found in the package.json:

```sh
deno install --allow-scripts
```

Now you can serve your new Next.js app:

```sh
deno task dev
```

This will start the Next.js development server using Deno. The `deno task dev` command runs the `dev` task in the package.json, which starts the Next.js development server with the necessary flags for CommonJS compatibility. Visit [http://localhost:3000](http://localhost:3000) to see the app in the browser.

## Add a backend

The next step is to add a backend API. We'll create a very simple API that returns information about dinosaurs.

We'll use Next.js's [built in API route handlers](https://nextjs.org/docs/app/building-your-application/routing/route-handlers) to set up our dinosaur API. Next.js uses a file-system-based router, where the folder structure directly defines the routes.

We'll define three routes: the first route at `/api` will return the string `welcome to the dinosaur API`, then we'll set up `/api/dinosaurs` to return all the dinosaurs, and finally `/api/dinosaurs/[dinosaur]` to return a specific dinosaur based on the name in the URL.

### /api

In the `src/app` folder of your new project, create an `api` folder. In that folder, create a `route.ts` file, which will handle requests to `/api`.

Copy and paste the following code into the `api/route.ts` file:

```ts title="route.ts"
export function GET() {
  return Response.json("welcome to the dinosaur API");
}
```

This code defines a simple route handler that returns a JSON response with the string `welcome to the dinosaur API`.

### /api/data.json

In the `api` folder, create a `data.json` file, which will contain the hard-coded dinosaur data. Copy and paste [this json file](https://raw.githubusercontent.com/denoland/deno-vue-example/main/api/data.json) into the `data.json` file.

### /api/dinosaurs

In the `api` folder, create a folder called `dinosaurs`, and in that create a `route.ts` file which will handle requests to `/api/dinosaurs`. In this route we'll read the `data.json` file and return the dinosaurs as JSON:

```ts title="route.ts"
import data from "../data.json" with { type: "json" };

export function GET() {
  return Response.json(data);
}
```

### /api/dinosaurs/[dinosaur]

And for the final route, `/api/dinosaurs/[dinosaur]`, we'll create a folder called `[dinosaur]` in the `dinosaurs` directory. In there, create a `route.ts` file. In this file we'll read the `data.json` file, find the dinosaur with the name in the URL, and return it as JSON:

```ts title="route.ts"
import data from "../../data.json" with { type: "json" };

type RouteParams = { params: Promise<{ dinosaur: string }> };

export const GET = async (_request: Request, { params }: RouteParams) => {
  const { dinosaur } = await params;

  if (!dinosaur) {
    return Response.json("No dinosaur name provided.");
  }

  const dinosaurData = data.find((item) =>
    item.name.toLowerCase() === dinosaur.toLowerCase()
  );

  return Response.json(dinosaurData ? dinosaurData : "No dinosaur found.");
};
```

Now, if you run the app with `deno task dev` and visit `http://localhost:3000/api/dinosaurs/brachiosaurus` in your browser, you should see the details of the brachiosaurus dinosaur.

## Build the frontend

Now that we have our backend API set up, let's build the frontend to display the dinosaur data.

### Define the dinosaur type

First, we'll set up a new type to define the shape of the dinosaur data. In the `app` directory, create a `types.ts` file and add the following code:

```ts title="types.ts"
export type Dino = { name: string; description: string };
```

### Update the homepage

We'll update the `page.tsx` file in the `app` directory to fetch the dinosaur data from our API and display it as a list of links.

To execute client-side code in Next.js we need to use the `"use client"` directive at the top of the file. Then we'll import the modules that we'll need in this page and export the default function that will render the page:

```tsx title="page.tsx"
"use client";

import { useEffect, useState } from "react";
import { Dino } from "./types";
import Link from "next/link";

export default function Home() {
}
```

Inside the body of the `Home` function, we'll define a state variable to store the dinosaur data, and a `useEffect` hook to fetch the data from the API when the component mounts:

```tsx title="page.tsx"
const [dinosaurs, setDinosaurs] = useState<Dino[]>([]);

useEffect(() => {
  (async () => {
    const response = await fetch(`/api/dinosaurs`);
    const allDinosaurs = await response.json() as Dino[];
    setDinosaurs(allDinosaurs);
  })();
}, []);
```

Beneath this, still inside the body of the `Home` function, we'll return a list of links, each linking to the dinosaur's page:

```tsx title="page.tsx"
return (
  <main>
    <h1>Welcome to the Dinosaur app</h1>
    <p>Click on a dinosaur below to learn more.</p>
    <ul>
      {dinosaurs.map((dinosaur: Dino) => {
        return (
          <li key={dinosaur.name}>
            <Link href={`/${dinosaur.name.toLowerCase()}`}>
              {dinosaur.name}
            </Link>
          </li>
        );
      })}
    </ul>
  </main>
);
```

### Create the dinosaur page

Inside the `app` directory, create a new folder called `[dinosaur]`. Inside this folder create a `page.tsx` file. This file will fetch the details of a specific dinosaur from the API and render them on the page.

Much like the homepage, we'll need client-side code, and we'll import the modules we need and export a default function. We'll pass the incoming route params to the function and set up a type for this parameter:

```tsx title="[dinosaur]/page.tsx"
"use client";

import { useEffect, useState } from "react";
import { Dino } from "../types";
import Link from "next/link";

type RouteParams = { params: Promise<{ dinosaur: string }> };

export default function Dinosaur({ params }: RouteParams) {
}
```

Inside the body of the `Dinosaur` function we'll get the selected dinosaur from the request, set up a state variable to store the dinosaur data, and write a `useEffect` hook to fetch the data from the API when the component mounts:

```tsx title="[dinosaur]/page.tsx"
const selectedDinosaur = params.then((params) => params.dinosaur);
const [dinosaur, setDino] = useState({ name: "", description: "" });

useEffect(() => {
  (async () => {
    const resp = await fetch(`/api/dinosaurs/${await selectedDinosaur}`);
    const dino = await resp.json() as Dino;
    setDino(dino);
  })();
}, []);
```

Finally, still inside the `Dinosaur` function body, we'll return a paragraph element containing the dinosaur's name and description:

```tsx title="[dinosaur]/page.tsx"
return (
  <main>
    <h1>{dinosaur.name}</h1>
    <p>{dinosaur.description}</p>
    <Link href="/">🠠 Back to all dinosaurs</Link>
  </main>
);
```

### Add some styles

Let's add some basic styles to make the app look nicer. Update your `app/globals.css` file with the [styles from this file](https://raw.githubusercontent.com/denoland/tutorial-with-next/refs/heads/main/src/app/globals.css).

## Run the app

Now you can run the app with `deno task dev` and visit `http://localhost:3000` in your browser to see the list of dinosaurs. Click on a dinosaur to see more details!

## Deploy the app

Now that you have your working Next.js app, you can deploy it to the web with Deno Deploy.

For the best experience, you can deploy your app directly from GitHub, which will set up automated deployments. Create a GitHub repository and push your app there.

[Create a new GitHub repository](https://github.com/new), then initialize and push your app to GitHub:

```sh
git init -b main
git remote add origin https://github.com//.git
git add .
git commit -am 'my next app'
git push -u origin main
```

Once your app is on GitHub, you can [deploy it to Deno Deploy](https://console.deno.com/). For a walkthrough of deploying your app, check out the [Deno Deploy tutorial](/examples/deno_deploy_tutorial/).

🦕 Now you can build and run a Next.js app with Deno! To build on your app you could consider [adding a database](/runtime/tutorials/connecting_to_databases/) to replace your `data.json` file, or consider [writing some tests](/runtime/fundamentals/testing/) to make your app reliable and production ready.

---

# Build a Nuxt app with Deno

> Step-by-step guide to building Nuxt applications with Deno. Learn how to create a full-stack Vue.js app, implement server-side rendering, add Tailwind styling, and deploy your application.

URL: https://docs.deno.com/examples/tutorials/nuxt

[Nuxt](https://nuxt.com/) is a framework that provides an intuitive way to create full-stack applications based on [Vue](https://vuejs.org/). It offers file-based routing, a variety of rendering options, and automatic code splitting out of the box. With its modular architecture, Nuxt simplifies the development process by providing a structured approach to building Vue applications.

In this tutorial, we'll build a simple Nuxt application with Deno that will display a list of dinosaurs and allow you to learn more about each one when you click on the name. You can see the [finished app on GitHub](https://github.com/denoland/examples/tree/main/with-nuxt). You can see a [live version of the app on Deno Deploy](https://example-with-nuxt.deno.deno.net/).

:::info Deploy your own

Want to skip the tutorial and deploy the finished app right now? Click the button below to instantly deploy your own copy of the complete Nuxt dinosaur app to Deno Deploy. You'll get a live, working application that you can customize and modify as you learn!

[![Deploy on Deno](https://deno.com/button)](https://console.deno.com/new?clone=https://github.com/denoland/examples&path=with-nuxt)

:::

## Scaffold a Nuxt app with Deno

We can create a new Nuxt project using Deno like this:

```bash
deno -A npm:nuxi@latest init
```

Select the directory where you want to create the project, and choose `deno` to manage dependencies. You can also choose to initialize a git repository if you want, or we can do that later.

Next, change directory into the newly created project. You can check which tasks Nuxt has available by running `deno task`:

```bash
cd nuxt-app
deno task
```

This will show you the available tasks, such as `dev`, `build`, and `preview`. The `dev` task is used to start the development server.
## Start the development server

Now we can start the development server with:

```bash
deno task dev
```

This will start the Nuxt development server, and you can visit [http://localhost:3000](http://localhost:3000) in your browser to see the default Nuxt welcome page.

## Build out the app architecture

Now that we have a basic Nuxt app set up, we can start building out the application architecture. We'll create a few directories to organize our code and prepare for the features we want to implement.

Create the following directories in your project:

```bash
NUXT-APP/
├── pages/            # Vue pages
│   └── dinosaurs/    # Dinosaur pages
├── public/           # Static files
├── server/           # Server-side code
│   └── api/          # API routes
```

## Add dinosaur data

In the `api` directory, create a new file called `data.json`, which will contain the hard-coded dinosaur data. Copy and paste [this json file](https://raw.githubusercontent.com/denoland/tutorial-with-nuxt/refs/heads/main/src/data/data.json) into the `data.json` file. (If you were building a real app, you would probably fetch this data from a database or an external API.)

## Setup the API routes

This app will have two API routes. They will serve the following:

- the full list of dinosaurs for an index page
- individual dinosaur information for an individual dinosaur page

Both will be `*.get.ts` files, which Nuxt automatically converts to API endpoints to respond to `GET` requests. [The filename convention determines both the HTTP method and the route path](https://nuxt.com/docs/guide/directory-structure/server#matching-http-method).

The initial `dinosaurs.get.ts` is fairly simple and uses [`defineCachedEventHandler`](https://nitro.build/guide/cache) to create a cached endpoint for better performance. This handler simply returns our full dinosaur data array without any filtering:

```tsx title="server/api/dinosaurs.get.ts"
import data from "./data.json" with { type: "json" };

export default defineCachedEventHandler(() => {
  return data;
});
```

The `GET` route for the individual dinosaur has a little more logic. It extracts the name parameter from the event context, performs case-insensitive matching to find the requested dinosaur, and includes proper error handling for missing or invalid dinosaur names. We'll create a `dinosaurs` directory, then to pass the name parameter, we'll make a new file named `[name].get.ts`:

```tsx title="server/api/dinosaurs/[name].get.ts"
import data from "../data.json" with { type: "json" };

export default defineCachedEventHandler((event) => {
  const name = getRouterParam(event, "name");
  if (!name) {
    throw createError({
      statusCode: 400,
      message: "No dinosaur name provided",
    });
  }
  const dinosaur = data.find(
    (dino) => dino.name.toLowerCase() === name.toLowerCase(),
  );
  if (!dinosaur) {
    throw createError({
      statusCode: 404,
      message: "Dinosaur not found",
    });
  }
  return dinosaur;
});
```

Run the server with `deno task dev` and visit [http://localhost:3000/api/dinosaurs](http://localhost:3000/api/dinosaurs) in your browser, and you should see the raw JSON response showing all of the dinosaurs!

![Setting up API](./images/how-to/nuxt/nuxt-1.webp)

You can also retrieve data for a single dinosaur by visiting a particular dinosaur name, for example: [http://localhost:3000/api/dinosaurs/aardonyx](http://localhost:3000/api/dinosaurs/aardonyx).

![Setting up API](./images/how-to/nuxt/nuxt-2.webp)

Next, we'll set up the frontend with Vue to display the index page and each individual dinosaur page.
## Setup the Vue frontend

We want to set up two pages within the app:

- An index page which will list all of the dinosaurs
- An individual dinosaur page showing more information about the selected dinosaur.

First, create the index page. Nuxt uses [file-system routing](https://nuxt.com/docs/getting-started/routing), so we will create a `pages` directory in the root, and within that an index page called `index.vue`.

To get the data, we’ll use the `useFetch` composable to hit the API endpoint we created in the previous section. A minimal version of the component could look something like this (the finished app on GitHub includes more markup and styling):

```vue title="pages/index.vue"
<script setup lang="ts">
const { data: dinosaurs } = await useFetch("/api/dinosaurs");
</script>

<template>
  <main>
    <h1>Welcome to the Dinosaur app</h1>
    <p>Click on a dinosaur below to learn more.</p>
    <ul>
      <li v-for="dinosaur in dinosaurs || []" :key="dinosaur.name">
        <NuxtLink :to="`/${dinosaur.name.toLowerCase()}`">
          {{ dinosaur.name }}
        </NuxtLink>
      </li>
    </ul>
  </main>
</template>
```

For the page that shows information on each dinosaur, we'll create a new dynamic page called `[name].vue`. This page uses Nuxt's [dynamic route parameters](https://nuxt.com/docs/getting-started/routing#route-parameters), where the `[name]` in the filename can be accessed in JavaScript as `route.params.name`. We’ll use the `useRoute` composable to access the route parameters and `useFetch` to get the specific dinosaur's data based on the name parameter. Again, a minimal sketch:

```vue title="pages/[name].vue"
<script setup lang="ts">
const route = useRoute();
const { data: dinosaur } = await useFetch(
  `/api/dinosaurs/${route.params.name}`,
);
</script>

<template>
  <main>
    <h1>{{ dinosaur?.name }}</h1>
    <p>{{ dinosaur?.description }}</p>
    <NuxtLink to="/">Back to all dinosaurs</NuxtLink>
  </main>
</template>
```

Next, we’ll have to connect these Vue components together so that they render properly when we visit the root of the domain. Let’s update `app.vue` at the root of the directory to serve our application’s root component. We’ll use [`NuxtLayout`](https://nuxt.com/docs/api/components/nuxt-layout) for consistent page structure and [`NuxtPage`](https://nuxt.com/docs/api/components/nuxt-page) for dynamic page rendering:

```vue title="app.vue"
<template>
  <NuxtLayout>
    <NuxtPage />
  </NuxtLayout>
</template>
```

Run the server with `deno task dev` and see how it looks at [http://localhost:3000](http://localhost:3000). Looks great!

## Add Tailwind styling

Let's style the app with [Tailwind CSS](https://tailwindcss.com/). First, install the Tailwind dependencies:

```bash
deno install -D npm:tailwindcss npm:@tailwindcss/vite
```

Then, we're going to update the `nuxt.config.ts`. Import the Tailwind dependency and configure the Nuxt application for Deno compatibility. We'll enable development tools, and set up Tailwind CSS:

```tsx title="nuxt.config.ts"
import tailwindcss from "@tailwindcss/vite";

export default defineNuxtConfig({
  compatibilityDate: "2025-05-15",
  devtools: { enabled: true },
  nitro: {
    preset: "deno",
  },
  app: {
    head: {
      title: "Dinosaur Encyclopedia",
    },
  },
  css: ["~/assets/css/main.css"],
  vite: {
    plugins: [
      tailwindcss(),
    ],
  },
});
```

Next, create a new css file, `assets/css/main.css`, and add an `@import` that pulls in Tailwind, as well as the Tailwind directives:

```css title="assets/css/main.css"
@import "tailwindcss";
@tailwind base;
@tailwind components;
@tailwind utilities;
```

## Running the application

We can then run the application using:

```bash
deno task dev
```

This will start the app at localhost:3000.
And we’re done! 🦕 Next steps for a Nuxt app might be to add authentication using the [Nuxt Auth](https://auth.nuxtjs.org/) module, implement state management with [Pinia](https://pinia.vuejs.org/), add server-side data persistence with [Prisma](https://docs.deno.com/examples/prisma_tutorial/) or [MongoDB](https://docs.deno.com/examples/mongoose_tutorial/), and set up automated testing with Vitest. These features would make it production-ready for larger applications.

---

# Handle OS signals

> Tutorial on handling operating system signals in Deno. Learn how to capture SIGINT and SIGBREAK events, manage signal listeners, and implement graceful shutdown handlers in your applications.

URL: https://docs.deno.com/examples/tutorials/os_signals

> ⚠️ Windows only supports listening for SIGINT and SIGBREAK as of Deno v1.23.

## Concepts

- [Deno.addSignalListener()](https://docs.deno.com/api/deno/~/Deno.addSignalListener) can be used to capture and monitor OS signals.
- [Deno.removeSignalListener()](https://docs.deno.com/api/deno/~/Deno.removeSignalListener) can be used to stop watching the signal.

## Set up an OS signal listener

APIs for handling OS signals are modelled after the already familiar [`addEventListener`](https://developer.mozilla.org/en-US/docs/Web/API/EventTarget/addEventListener) and [`removeEventListener`](https://developer.mozilla.org/en-US/docs/Web/API/EventTarget/removeEventListener) APIs.

> ⚠️ Note that listening for OS signals doesn't prevent the event loop from
> finishing, i.e. if there are no more pending async operations, the process
> will exit.

You can use the `Deno.addSignalListener()` function for handling OS signals:

```ts title="add_signal_listener.ts"
console.log("Press Ctrl-C to trigger a SIGINT signal");

Deno.addSignalListener("SIGINT", () => {
  console.log("interrupted!");
  Deno.exit();
});

// Add a timeout to prevent the process from exiting immediately.
setTimeout(() => {}, 5000);
```

Run with:

```shell
deno run add_signal_listener.ts
```

You can use the `Deno.removeSignalListener()` function to unregister a previously added signal handler.

```ts title="signal_listeners.ts"
console.log("Press Ctrl-C to trigger a SIGINT signal");

const sigIntHandler = () => {
  console.log("interrupted!");
  Deno.exit();
};
Deno.addSignalListener("SIGINT", sigIntHandler);

// Add a timeout to prevent the process from exiting immediately.
setTimeout(() => {}, 5000);

// Stop listening for a signal after 1s.
setTimeout(() => {
  Deno.removeSignalListener("SIGINT", sigIntHandler);
}, 1000);
```

Run with:

```shell
deno run signal_listeners.ts
```

---

# Distributed Tracing with Context Propagation in Deno

> Implement end-to-end distributed tracing with automatic context propagation in Deno applications. This tutorial covers creating traced services, automatic propagation of trace context, and visualizing distributed traces.

URL: https://docs.deno.com/examples/tutorials/otel_span_propagation

Modern applications are often built as distributed systems with multiple services communicating with each other. When debugging issues or optimizing performance in these systems, it's crucial to be able to trace requests as they flow through different services. This is where distributed tracing comes in.

As of Deno 2.3, the runtime now automatically preserves trace context across service boundaries, making end-to-end tracing in distributed systems simpler and more powerful. This means that when one service makes a request to another, the trace context is automatically propagated, allowing you to see the entire request flow as a single trace.
## Setting up a distributed system

Our example system will consist of two parts:

1. A server that provides an API endpoint
2. A client that makes requests to the server

### The server

We'll set up a simple HTTP server that responds to GET requests with a JSON message:

```ts title="server.ts"
import { trace } from "npm:@opentelemetry/api@1";

const tracer = trace.getTracer("api-server", "1.0.0");

// Create a simple API server with Deno.serve
Deno.serve({ port: 8000 }, (req) => {
  return tracer.startActiveSpan("process-api-request", async (span) => {
    // Add attributes to the span for better context
    span.setAttribute("http.route", "/");
    span.updateName("GET /");

    // Add a span event to see in traces
    span.addEvent("processing_request", {
      request_id: crypto.randomUUID(),
      timestamp: Date.now(),
    });

    // Simulate processing time
    await new Promise((resolve) => setTimeout(resolve, 50));

    console.log("Server: Processing request in trace context");

    // End the span when we're done
    span.end();

    return new Response(JSON.stringify({ message: "Hello from server!" }), {
      headers: { "Content-Type": "application/json" },
    });
  });
});
```

### The client

Now, let's create a client that will make requests to our server:

```ts title="client.ts"
import { SpanStatusCode, trace } from "npm:@opentelemetry/api@1";

const tracer = trace.getTracer("api-client", "1.0.0");

// Create a parent span for the client operation
await tracer.startActiveSpan("call-api", async (parentSpan) => {
  try {
    console.log("Client: Starting API call");

    // The fetch call inside this span will automatically:
    // 1. Create a child span for the fetch operation
    // 2. Inject the trace context into the outgoing request headers
    const response = await fetch("http://localhost:8000/");
    const data = await response.json();

    console.log(`Client: Received response: ${JSON.stringify(data)}`);

    parentSpan.addEvent("received_response", {
      status: response.status,
      timestamp: Date.now(),
    });
  } catch (error) {
    console.error("Error calling API:", error);
    if (error instanceof Error) {
      parentSpan.recordException(error);
    }
    parentSpan.setStatus({
      code: SpanStatusCode.ERROR,
      message: error instanceof Error ? error.message : String(error),
    });
  } finally {
    parentSpan.end();
  }
});
```

## Tracing with OpenTelemetry

Both the client and server code already include basic OpenTelemetry instrumentation:

1. Create a tracer - both files create a tracer using `trace.getTracer()` with a name and version.
2. Create spans - we use `startActiveSpan()` to create spans that represent operations.
3. Add context - we add attributes and events to spans to provide more context.
4. End spans - we make sure to end spans when operations are complete.

## Automatic context propagation

The magic happens when the client makes a request to the server. In the client code there is a fetch call to the server:

```ts
const response = await fetch("http://localhost:8000/");
```

Since this fetch call happens inside an active span, Deno automatically creates a child span for the fetch operation and injects the trace context into the outgoing request headers. When the server receives this request, Deno extracts the trace context from the request headers and establishes the server span as a child of the client's span.
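The propagation uses the W3C Trace Context standard, which carries the trace ID and parent span ID in a `traceparent` request header. If you're curious what the injected context looks like, you can log the header from inside the server's request handler (a small illustrative addition, not part of the tutorial code above):

```ts
// Inside the server's request handler, inspect the injected header.
// traceparent format: <version>-<trace-id>-<parent-span-id>-<flags>
console.log("traceparent:", req.headers.get("traceparent"));
// e.g. "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
```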
## Running the example

To run this example, first start the server, giving your OTel service a name:

```sh
OTEL_DENO=true OTEL_SERVICE_NAME=server deno run --allow-net server.ts
```

Then, in another terminal, run the client, giving the client a different service name to make observing the propagation clearer:

```sh
OTEL_DENO=true OTEL_SERVICE_NAME=client deno run --allow-net client.ts
```

You should see:

1. The client logs "Client: Starting API call"
2. The server logs "Server: Processing request in trace context"
3. The client logs the response received from the server

## Viewing traces

To actually see the traces, you'll need an OpenTelemetry collector and a visualization tool, [for example Grafana Tempo](/runtime/fundamentals/open_telemetry/#quick-start).

When you visualize the traces, you'll see:

1. A parent span from the client
2. Connected to a child span for the HTTP request
3. Connected to a span from the server
4. All as part of a single trace!

For example, in Grafana, the trace visualization may look like this:

![Viewing expanded traces in Grafana](./images/how-to/grafana/propagation.png)

🦕 Now that you understand distributed tracing with Deno, you could extend this to more complex systems with multiple services and async operations. With Deno's automatic context propagation, implementing distributed tracing in your applications has never been easier!

---

# How to use Planetscale with Deno

> Step-by-step guide to using Planetscale with Deno. Learn how to set up serverless MySQL databases, manage connections, execute queries, and build scalable applications with Planetscale's developer-friendly platform.

URL: https://docs.deno.com/examples/tutorials/planetscale

Planetscale is a MySQL-compatible serverless database designed around a developer workflow where you can create, branch, and deploy databases from the command line.

[View source here.](https://github.com/denoland/examples/tree/main/with-planetscale)

We'll use the Planetscale serverless driver, `@planetscale/database`, to work with Deno. First we want to create `main.ts` and import the connect method from this package:

```tsx
import { connect } from "npm:@planetscale/database@^1.4";
```

## Configuring our connection

The connection requires three credentials: host, username, and password. These are database-specific, so we first need to create a database in Planetscale. You can do that by following the initial instructions [here](https://planetscale.com/docs/tutorials/planetscale-quick-start-guide). Don't worry about adding the schema—we can do that through `@planetscale/database`.

Once you have created the database, head to Overview, click "Connect", and choose "Connect with `@planetscale/database`" to get the host and username. Then click through to Passwords to create a new password for your database. Once you have all three you can plug them in directly, or better, store them as environment variables:

```bash
export HOST=<host>
export USERNAME=<username>
export PASSWORD=<password>
```

Then call them using `Deno.env`:

```tsx
const config = {
  host: Deno.env.get("HOST"),
  username: Deno.env.get("USERNAME"),
  password: Deno.env.get("PASSWORD"),
};

const conn = connect(config);
```

This will also work on Deno Deploy if you set the environment variables in the dashboard. Run with:

```shell
deno run --allow-net --allow-env main.ts
```

The `conn` object is now an open connection to our Planetscale database.
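To verify the connection before creating any tables, you can run a trivial query (this check is just illustrative):

```tsx
// Quick sanity check: should log something like [ { ok: 1 } ]
const check = await conn.execute("SELECT 1 AS ok");
console.log(check.rows);
```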
## Creating and populating our database table

Now that you have the connection running, you can use `conn.execute()` with SQL commands to create tables and insert the initial data:

```tsx
await conn.execute(
  "CREATE TABLE dinosaurs (id int NOT NULL AUTO_INCREMENT PRIMARY KEY, name varchar(255) NOT NULL, description varchar(255) NOT NULL);",
);
await conn.execute(
  "INSERT INTO `dinosaurs` (id, name, description) VALUES (1, 'Aardonyx', 'An early stage in the evolution of sauropods.'), (2, 'Abelisaurus', 'Abels lizard has been reconstructed from a single skull.'), (3, 'Deno', 'The fastest dinosaur that ever lived.')",
);
```

## Querying Planetscale

We can use the same `conn.execute()` method to write our queries. Let's get a list of all our dinosaurs:

```tsx
const results = await conn.execute("SELECT * FROM `dinosaurs`");
console.log(results.rows);
```

The result:

```tsx
[
  {
    id: 1,
    name: "Aardonyx",
    description: "An early stage in the evolution of sauropods.",
  },
  {
    id: 2,
    name: "Abelisaurus",
    description: "Abels lizard has been reconstructed from a single skull.",
  },
  { id: 3, name: "Deno", description: "The fastest dinosaur that ever lived." },
];
```

We can also get just a single row from the database by specifying a dinosaur name:

```tsx
const result = await conn.execute(
  "SELECT * FROM `dinosaurs` WHERE `name` = 'Deno'",
);
console.log(result.rows);
```

Which gives us a single row result:

```tsx
[{ id: 3, name: "Deno", description: "The fastest dinosaur that ever lived." }];
```

You can find out more about working with Planetscale in their [docs](https://planetscale.com/docs).

---

# How to create a RESTful API with Prisma and Oak

> Guide to building a RESTful API using Prisma and Oak with Deno. Learn how to set up database schemas, generate clients, implement CRUD operations, and deploy your API with proper type safety.

URL: https://docs.deno.com/examples/tutorials/prisma

[Prisma](https://prisma.io) has been one of our top requested modules to work with in Deno. The demand is understandable, given that Prisma's developer experience is top notch and plays well with so many persistent data storage technologies.

We're excited to show you how to use Prisma with Deno.

In this How To guide, we'll set up a simple RESTful API in Deno using Oak and Prisma.

Let's get started.

[View source](https://github.com/denoland/examples/tree/main/with-prisma) or [check out the video guide](https://youtu.be/P8VzA_XSF8w).

## Setup the application

Let's create the folder `rest-api-with-prisma-oak` and navigate there:

```shell
mkdir rest-api-with-prisma-oak
cd rest-api-with-prisma-oak
```

Then, let's run `prisma init` with Deno:

```shell
deno run -A npm:prisma@latest init --generator-provider prisma-client --output ./generated
```

Let's understand the key parameters:

- `--generator-provider prisma-client`: Defines the provider as "prisma-client" instead of the default "prisma-client-js". The "prisma-client" provider is optimized for Deno and generates TypeScript code compatible with the Deno runtime.
- `--output`: Defines the directory where Prisma will save the generated client files, including type definitions and database access utilities.

This will generate [`prisma/schema.prisma`](https://www.prisma.io/docs/orm/prisma-schema). Let's update it with the following:

:::tip

Don't forget to add `runtime = "deno"` to the generator block in your schema.prisma file. This is required for Prisma to work correctly with Deno.
:::

```prisma
generator client {
  provider = "prisma-client"
  output   = "./generated"
  runtime  = "deno"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model Dinosaur {
  id          Int    @id @default(autoincrement())
  name        String @unique
  description String
}
```

Prisma also generates a `.env` file with a `DATABASE_URL` environment variable. Let's assign `DATABASE_URL` to a PostgreSQL connection string. In this example, we'll use a free [PostgreSQL database from Supabase](https://supabase.com/database).

Next, let's create the database schema:

```shell
deno run -A npm:prisma@latest db push
```

After that's complete, we'll need to generate a Prisma Client:

```shell
deno run -A npm:prisma@latest generate
```

## Setup Accelerate in the Prisma Data Platform

> Note: This is an optional step. Prisma Accelerate is not required for the
> basic functionality.

To get started with the Prisma Data Platform:

1. Sign up for a free [Prisma Data Platform account](https://console.prisma.io).
2. Create a project.
3. Navigate to the project you created.
4. Enable Accelerate by providing your database's connection string.
5. Generate an Accelerate connection string and copy it to your clipboard.

Assign the Accelerate connection string, which begins with `prisma://`, to `DATABASE_URL` in your `.env` file, replacing your existing connection string.

Next, let's create a seed script to seed the database.

## Seed your Database

Create `./prisma/seed.ts`:

```shell
touch prisma/seed.ts
```

And in `./prisma/seed.ts`:

```ts
import { Prisma, PrismaClient } from "./generated/client.ts";

const prisma = new PrismaClient({
  datasourceUrl: process.env.DATABASE_URL,
});

const dinosaurData: Prisma.DinosaurCreateInput[] = [
  {
    name: "Aardonyx",
    description: "An early stage in the evolution of sauropods.",
  },
  {
    name: "Abelisaurus",
    description: "Abel's lizard has been reconstructed from a single skull.",
  },
  {
    name: "Acanthopholis",
    description: "No, it's not a city in Greece.",
  },
];

/**
 * Seed the database.
 */
for (const u of dinosaurData) {
  const dinosaur = await prisma.dinosaur.create({
    data: u,
  });
  console.log(`Created dinosaur with id: ${dinosaur.id}`);
}
console.log(`Seeding finished.`);

await prisma.$disconnect();
```

We can now run `seed.ts` with:

```shell
deno run -A --env prisma/seed.ts
```

:::tip

The `--env` flag is used to tell Deno to load environment variables from the `.env` file.

:::

After doing so, you should be able to see your data on Prisma Studio by running the following command:

```bash
deno run -A npm:prisma studio
```

You should see something similar to the following screenshot:

![New dinosaurs are in Prisma dashboard](./images/how-to/prisma/1-dinosaurs-in-prisma.png)

## Create your API routes

We'll use [`oak`](https://jsr.io/@oak/oak) to create the API routes. Let's keep them simple for now. Let's create a `main.ts` file:

```shell
touch main.ts
```

Then, in your `main.ts` file:

```ts
import { PrismaClient } from "./prisma/generated/client.ts";
import { Application, Router } from "jsr:@oak/oak";

/**
 * Initialize.
 */
const prisma = new PrismaClient({
  datasources: {
    db: {
      url: process.env.DATABASE_URL,
    },
  },
});
const app = new Application();
const router = new Router();

/**
 * Setup routes.
 */
router
  .get("/", (context) => {
    context.response.body = "Welcome to the Dinosaur API!";
  })
  .get("/dinosaur", async (context) => {
    // Get all dinosaurs.
    const dinosaurs = await prisma.dinosaur.findMany();
    context.response.body = dinosaurs;
  })
  .get("/dinosaur/:id", async (context) => {
    // Get one dinosaur by id.
    const { id } = context.params;
    const dinosaur = await prisma.dinosaur.findUnique({
      where: {
        id: Number(id),
      },
    });
    context.response.body = dinosaur;
  })
  .post("/dinosaur", async (context) => {
    // Create a new dinosaur.
    const { name, description } = await context.request.body.json();
    const result = await prisma.dinosaur.create({
      data: {
        name,
        description,
      },
    });
    context.response.body = result;
  })
  .delete("/dinosaur/:id", async (context) => {
    // Delete a dinosaur by id.
    const { id } = context.params;
    const dinosaur = await prisma.dinosaur.delete({
      where: {
        id: Number(id),
      },
    });
    context.response.body = dinosaur;
  });

/**
 * Setup middleware.
 */
app.use(router.routes());
app.use(router.allowedMethods());

/**
 * Start server.
 */
await app.listen({ port: 8000 });
```

Now, let's run it:

```shell
deno run -A --env main.ts
```

Let's visit `localhost:8000/dinosaur`:

![List of all dinosaurs from REST API](./images/how-to/prisma/2-dinosaurs-from-api.png)

Next, let's `POST` a new dinosaur with this `curl` command:

```shell
curl -X POST http://localhost:8000/dinosaur -H "Content-Type: application/json" -d '{"name": "Deno", "description":"The fastest, most secure, easiest to use Dinosaur ever to walk the Earth."}'
```

You should now see a new row on Prisma Studio:

![New dinosaur Deno in Prisma](./images/how-to/prisma/3-new-dinosaur-in-prisma.png)

Nice!

## What's next?

Building your next app will be more productive and fun with Deno and Prisma, since both technologies deliver an intuitive developer experience with data modeling, type-safety, and robust IDE support.

If you're interested in connecting Prisma to Deno Deploy, [check out this awesome guide](https://www.prisma.io/docs/guides/deployment/deployment-guides/deploying-to-deno-deploy).

---

# Build Qwik with Deno

> Step-by-step guide to building Qwik applications with Deno. Learn about resumability, server-side rendering, route handling, and how to create fast, modern web applications with zero client-side JavaScript by default.

URL: https://docs.deno.com/examples/tutorials/qwik

[Qwik](https://qwik.dev/) is a JavaScript framework that delivers instant-loading web applications by leveraging resumability instead of hydration. In this tutorial, we'll build a simple Qwik application and run it with Deno. The app will display a list of dinosaurs. When you click on one, it'll take you to a dinosaur page with more details.

We'll go over how to build a simple Qwik app using Deno:

- [Scaffold a Qwik app](#scaffold-a-qwik-app)
- [Setup data and type definitions](#setup-data-and-type-definitions)
- [Build the frontend](#build-the-frontend)
- [Next steps](#next-steps)

Feel free to skip directly to [the source code](https://github.com/denoland/examples/tree/main/with-qwik) or follow along below!

## Scaffold a Qwik app

We can create a new Qwik project using Deno like this:

```bash
deno init --npm qwik@latest
```

This will run you through the setup process for Qwik and Qwik City. Here, we chose the simplest “Empty App” deployment with npm dependencies.

When complete, you’ll have a project structure that looks like this:

```
.
├── node_modules/
├── public/
├── src/
│   ├── components/
│   │   └── router-head/
│   │       └── router-head.tsx
│   ├── routes/
│   │   ├── index.tsx
│   │   ├── layout.tsx
│   │   └── service-worker.ts
│   ├── entry.dev.tsx
│   ├── entry.preview.tsx
│   ├── entry.ssr.tsx
│   ├── global.css
│   └── root.tsx
├── .eslintignore
├── .eslintrc.cjs
├── .gitignore
├── .prettierignore
├── package-lock.json
├── package.json
├── qwik.env.d.ts
├── README.md
├── tsconfig.json
└── vite.config.ts
```

Most of this is boilerplate configuration that we won’t touch. A few of the important files to know for how Qwik works are:

- `src/components/router-head/router-head.tsx`: Manages the HTML head elements (like title, meta tags, etc.) across different routes in your Qwik application.
- `src/routes/index.tsx`: The main entry point and home page of your application that users see when they visit the root URL.
- `src/routes/layout.tsx`: Defines the common layout structure that wraps around pages, allowing you to maintain consistent UI elements like headers and footers.
- `src/routes/service-worker.ts`: Handles Progressive Web App (PWA) functionality, offline caching, and background tasks for your application.
- `src/entry.ssr.tsx`: Controls how your application is server-side rendered, managing the initial HTML generation and hydration process.
- `src/root.tsx`: The root component that serves as the application's shell, containing global providers and the main routing structure.

Now we can build out our own routes and files within the application.

## Setup data and type definitions

We’ll start by adding our [dinosaur data](https://github.com/denoland/examples/blob/main/with-qwik/src/data/dinosaurs.json) to a new `./src/data` directory as `dinosaurs.json`:

```jsonc
// ./src/data/dinosaurs.json
{
  "dinosaurs": [
    {
      "name": "Tyrannosaurus Rex",
      "description": "A massive carnivorous dinosaur with powerful jaws and tiny arms."
    },
    {
      "name": "Brachiosaurus",
      "description": "A huge herbivorous dinosaur with a very long neck."
    },
    {
      "name": "Velociraptor",
      "description": "A small but fierce predator that hunted in packs."
    }
    // ...
  ]
}
```

This is where our data will be pulled from. In a full application, this data would come from a database.

> ⚠️️ In this tutorial we hard code the data. But you can connect
> to [a variety of databases](https://docs.deno.com/runtime/tutorials/connecting_to_databases/) and [even use ORMs like Prisma](https://docs.deno.com/runtime/tutorials/how_to_with_npm/prisma/) with
> Deno.

Next, let's add type definitions for our dinosaur data. We'll put it in `types.ts` in `./src/`:

```tsx
// ./src/types.ts
export type Dino = {
  name: string;
  description: string;
};
```

Next, let's add API routes to serve this data.

## Add API routes

First, let's create the route to load all dinosaurs for the index page. This API endpoint uses Qwik City's [`RequestHandler`](https://qwik.dev/docs/advanced/request-handling/) to create a `GET` endpoint that loads and returns our dinosaur data using the `json` helper for proper response formatting. We'll add the below to a new file in `./src/routes/api/dinosaurs/index.ts`:

```tsx
// ./src/routes/api/dinosaurs/index.ts
import { RequestHandler } from "@builder.io/qwik-city";
import data from "~/data/dinosaurs.json" with { type: "json" };

export const onGet: RequestHandler = async ({ json }) => {
  const dinosaurs = data;
  json(200, dinosaurs);
};
```

Next, let's create the API route to get the information for a single dinosaur. This takes the parameter from the URL and uses it to search through our dinosaur data.
We'll add the below code to `./src/routes/api/dinosaurs/[name]/index.ts`. Note that the JSON file exports an object with a `dinosaurs` array, so we destructure that array out before calling `find` on it:

```tsx
// ./src/routes/api/dinosaurs/[name]/index.ts
import { RequestHandler } from "@builder.io/qwik-city";
import data from "~/data/dinosaurs.json" with { type: "json" };

export const onGet: RequestHandler = async ({ params, json }) => {
  const { name } = params;
  const { dinosaurs } = data;
  if (!name) {
    json(400, { error: "No dinosaur name provided." });
    return;
  }

  const dinosaur = dinosaurs.find(
    (dino) => dino.name.toLowerCase() === name.toLowerCase(),
  );
  if (!dinosaur) {
    json(404, { error: "No dinosaur found." });
    return;
  }

  json(200, dinosaur);
};
```

Now that the API routes are wired up and serving data, let's create the two frontend pages: the index page and the individual dinosaur detail pages.

## Build the frontend

We'll create our homepage by updating our `./src/routes/index.tsx` file using Qwik's [`routeLoader$`](https://qwik.dev/docs/route-loader/) for server-side data fetching. This `component$` loads and renders the dinosaur data during SSR via `useDinosaurs()`:

```tsx
// ./src/routes/index.tsx
import { component$ } from "@builder.io/qwik";
import { Link, routeLoader$ } from "@builder.io/qwik-city";
import type { Dino } from "~/types";
import data from "~/data/dinosaurs.json" with { type: "json" };

export const useDinosaurs = routeLoader$(() => {
  return data;
});

export default component$(() => {
  const dinosaursSignal = useDinosaurs();

  return (
    <main>
      <h1>Welcome to the Dinosaur app</h1>
      <p>Click on a dinosaur below to learn more.</p>
      <ul>
        {dinosaursSignal.value.dinosaurs.map((dinosaur: Dino) => (
          <li key={dinosaur.name}>
            <Link href={`/${dinosaur.name.toLowerCase()}`}>
              {dinosaur.name}
            </Link>
          </li>
        ))}
      </ul>
    </main>
); }); ``` Now that we have our main index page, let's add a page for the individual dinosaur information. We'll use Qwik's [dynamic routing](https://qwik.dev/docs/routing/), with `[name]` as the key for each dinosaur. This page leverages `routeLoader$` to fetch individual dinosaur details based on the URL parameter, with built-in error handling if the dinosaur isn't found. The component uses the same SSR pattern as our index page, but with parameter-based data loading and a simpler display layout for individual dinosaur details: ```tsx // ./src/routes/[name]/index.tsx import { component$ } from "@builder.io/qwik"; import { Link, routeLoader$ } from "@builder.io/qwik-city"; import type { Dino } from "~/types"; import data from "~/data/dinosaurs.json" with { type: "json" }; export const useDinosaurDetails = routeLoader$(({ params }): Dino => { const { dinosaurs } = data; const dinosaur = dinosaurs.find( (dino: Dino) => dino.name.toLowerCase() === params.name.toLowerCase(), ); if (!dinosaur) { throw new Error("Dinosaur not found"); } return dinosaur; }); export default component$(() => { const dinosaurSignal = useDinosaurDetails(); return (
    <main>
      <h1>{dinosaurSignal.value.name}</h1>
      <p>{dinosaurSignal.value.description}</p>
      <Link href="/">Back to all dinosaurs</Link>
    </main>
); }); ``` Now that we have built our routes and the frontend components, we can run our application: ```bash deno task dev ``` This will start the app at `localhost:5173`:
Tada!

## Next steps

🦕 Now you can build and run a Qwik app with Deno! Here are some ways you could enhance your dinosaur application:

- Add a persistent data store [using a database like Postgres or MongoDB](https://docs.deno.com/runtime/tutorials/connecting_to_databases/) and an ORM like [Drizzle](https://docs.deno.com/examples/drizzle_tutorial/) or [Prisma](https://docs.deno.com/runtime/tutorials/how_to_with_npm/prisma/)
- Use Qwik's lazy loading capabilities for dinosaur images and components
- Add client-side state management for more complex features
- Self-host your app to [AWS](https://docs.deno.com/runtime/tutorials/aws_lightsail/), [Digital Ocean](https://docs.deno.com/runtime/tutorials/digital_ocean/), or [Google Cloud Run](https://docs.deno.com/runtime/tutorials/google_cloud_run/)

---

# React app with Vite

> Complete guide to building React applications with Deno and Vite. Learn how to set up a project, implement routing, add API endpoints, and deploy your full-stack TypeScript application.

URL: https://docs.deno.com/examples/tutorials/react

[React](https://reactjs.org) is the most widely used JavaScript frontend library.

In this tutorial we'll build a simple React app with Deno. The app will display a list of dinosaurs. When you click on one, it'll take you to a dinosaur page with more details. You can see the [finished app repo on GitHub](https://github.com/denoland/tutorial-with-react) and a [demo of the app on Deno Deploy](https://tutorial-with-react.deno.deno.net/).

:::info Deploy your own

Want to skip the tutorial and deploy the finished app right now? Click the button below to instantly deploy your own copy of the complete React dinosaur app to Deno Deploy. You'll get a live, working application that you can customize and modify as you learn!

[![Deploy on Deno](https://deno.com/button)](https://console.deno.com/new?clone=https://github.com/denoland/tutorial-with-react&mode=dynamic&entrypoint=api/main.ts&build=deno+task+build&install=deno+install)

:::

## Create a basic React app with Vite

This tutorial will use [Vite](https://vitejs.dev/) to serve the app locally. Vite is a build tool and development server for modern web projects. It pairs well with React and Deno, leveraging ES modules and allowing you to import React components directly.

In your terminal run the following command to create a new React app with Vite using the TypeScript template:

```sh
deno init --npm vite my-react-app --template react-ts
```

## Run the dev server

Change directory to your new React app and install the dependencies:

```sh
cd my-react-app
deno install
```

Now you can serve your new React app by running:

```sh
deno run dev
```

This will start the Vite server. Click the output link to localhost to see your app in the browser.

## Configure the project

We're going to build a full-stack React app with a Deno backend. We'll need to configure both Vite and Deno to work together.

Install the Deno plugin for Vite, the React types, and the Vite React plugin:

```sh
deno add npm:@deno/vite-plugin@latest npm:@types/react@latest npm:@vitejs/plugin-react@latest
```

We'll also need to install the Oak web framework for Deno to handle our API requests, and CORS middleware to allow cross-origin requests from the React app:

```sh
deno add jsr:@oak/oak jsr:@tajpouria/cors
```

This will add these dependencies to a new `deno.json` file.
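The resulting imports map in `deno.json` will look something like this (the exact version specifiers below are placeholders and will vary depending on when you run the commands):

```json
{
  "imports": {
    "@deno/vite-plugin": "npm:@deno/vite-plugin@^1.0.0",
    "@oak/oak": "jsr:@oak/oak@^17.0.0",
    "@tajpouria/cors": "jsr:@tajpouria/cors@^1.2.0",
    "@types/react": "npm:@types/react@^19.0.0",
    "@vitejs/plugin-react": "npm:@vitejs/plugin-react@^4.0.0"
  }
}
```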
In that file, we'll also add some tasks to make it easier to run the app in development and production modes, and some configuration to set up Deno with React and Vite. Add the following to your `deno.json` file:

```json
"tasks": {
  "dev": "deno run -A npm:vite & deno run server:start",
  "build": "deno run -A npm:vite build",
  "server:start": "deno run -A --watch ./api/main.ts",
  "serve": "deno run build && deno run server:start"
},
"nodeModulesDir": "auto",
"compilerOptions": {
  "types": [
    "react",
    "react-dom",
    "@types/react"
  ],
  "lib": [
    "dom",
    "dom.iterable",
    "deno.ns"
  ],
  "jsx": "react-jsx",
  "jsxImportSource": "react"
}
```

We can use both `package.json` and `deno.json` for dependency management and configuration. If you'd rather, you can remove the `package.json` file and use only `deno.json` for your project configuration; just be sure to move the dependencies across from `package.json` to the `deno.json` imports first.

## Add a backend API

Our project will have a backend API that serves dinosaur data. This API will be built using Deno and Oak, and will provide endpoints to fetch a list of dinosaurs and details about a specific dinosaur from a JSON file. In a production app this data would likely come from a database, but for this tutorial we'll use a static JSON file.

In the root of your project, create a new directory called `api`. In this directory, create a file called `data.json` and copy across [the dinosaur data](https://github.com/denoland/tutorial-with-react/blob/main/api/data.json).

Next, make a file called `main.ts` in the `api` directory. This file will contain the Oak server code to handle API requests. In this file we will set up the Oak server, define API routes, and serve static files for the React app.

First set up the imports and create the Oak application and router:

```ts title="api/main.ts"
import { Application, Router } from "@oak/oak";
import { oakCors } from "@tajpouria/cors";
import routeStaticFilesFrom from "./util/routeStaticFilesFrom.ts";
import data from "./data.json" with { type: "json" };

export const app = new Application();
const router = new Router();
```

Then we'll define the two main API routes:

```ts title="api/main.ts"
router.get("/api/dinosaurs", (context) => {
  context.response.body = data;
});

router.get("/api/dinosaurs/:dinosaur", (context) => {
  if (!context?.params?.dinosaur) {
    context.response.body = "No dinosaur name provided.";
    return;
  }

  const dinosaur = data.find((item) =>
    item.name.toLowerCase() === context.params.dinosaur.toLowerCase()
  );

  context.response.body = dinosaur ?? "No dinosaur found.";
});
```

Finally, we'll configure the server with middleware and start it listening:

```ts title="api/main.ts"
app.use(oakCors());
app.use(router.routes());
app.use(router.allowedMethods());
app.use(routeStaticFilesFrom([
  `${Deno.cwd()}/dist`,
  `${Deno.cwd()}/public`,
]));

if (import.meta.main) {
  console.log("Server listening on port http://localhost:8000");
  await app.listen({ port: 8000 });
}
```

The server handles CORS, serves the API routes, and also serves static files from the `dist` (built app) and `public` directories.

## Serve static files

The Oak server will also serve the built React app. To do this, we need to configure it to serve static files from the `dist` directory where Vite outputs the built app. We can use the `routeStaticFilesFrom` utility function to set this up.
Create a new file called `util/routeStaticFilesFrom.ts` in the `api` directory with the following code:

```ts title="api/util/routeStaticFilesFrom.ts"
import { Context, Next } from "jsr:@oak/oak";

export default function routeStaticFilesFrom(staticPaths: string[]) {
  return async (context: Context, next: Next) => {
    for (const path of staticPaths) {
      try {
        await context.send({ root: path, index: "index.html" });
        return;
      } catch {
        continue;
      }
    }

    await next();
  };
}
```

This utility function attempts to serve static files from the provided paths, falling back to the next middleware if no file is found. It will serve the `index.html` file from the `dist` directory, which is the entry point for the React app.

You can test the API by running `deno run dev` and visiting `localhost:8000/api/dinosaurs` in your browser to see the JSON response with all dinosaurs.

## React app setup

### Entry point

The React app entry point is in `src/main.tsx`. We don't need to change anything here, but it's worth noting that this is where the React app is rendered into the DOM. The `createRoot` function from `react-dom/client` is used to render the `App` component into the `root` element in `index.html`.

Here's the code in `src/main.tsx`:

```tsx title="src/main.tsx"
import { StrictMode } from "react";
import { createRoot } from "react-dom/client";
import "./index.css";
import App from "./App.tsx";

createRoot(document.getElementById("root")!).render(
  <StrictMode>
    <App />
  </StrictMode>,
);
```

## Add a router

The app will have two routes: `/` and `/:selectedDinosaur`.

We'll set up the routing in `src/App.tsx`:

```tsx title="src/App.tsx"
import { BrowserRouter, Route, Routes } from "react-router-dom";
import Index from "./pages/index.tsx";
import Dinosaur from "./pages/Dinosaur.tsx";

function App() {
  return (
    <BrowserRouter>
      <Routes>
        <Route path="/" element={<Index />} />
        <Route path="/:selectedDinosaur" element={<Dinosaur />} />
      </Routes>
    </BrowserRouter>
  );
}

export default App;
```

## Proxy to forward the API requests

Vite serves the React application on port `3000` while the API runs on port `8000`. We'll need to set up proxy configuration in `vite.config.ts` to forward API requests:

```ts title="vite.config.ts"
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";
import deno from "@deno/vite-plugin";

export default defineConfig({
  server: {
    port: 3000,
    proxy: {
      "/api": {
        target: "http://localhost:8000",
        changeOrigin: true,
      },
    },
  },
  plugins: [react(), deno()],
  optimizeDeps: {
    include: ["react/jsx-runtime"],
  },
});
```

## Create the pages

Create a new directory called `pages`, and inside we'll make two new files: `src/pages/index.tsx` and `src/pages/Dinosaur.tsx`. The `Index` page lists all dinosaurs and the `Dinosaur` page shows details of a specific dinosaur.

### index.tsx

This page fetches the list of dinosaurs from the API and renders them as links:

```tsx title="src/pages/index.tsx"
import { useEffect, useState } from "react";
import { Link } from "react-router-dom";

export default function Index() {
  const [dinosaurs, setDinosaurs] = useState([]);

  useEffect(() => {
    (async () => {
      const response = await fetch(`/api/dinosaurs/`);
      const allDinosaurs = await response.json();
      setDinosaurs(allDinosaurs);
    })();
  }, []);

  return (
    <main>
      <h1>🦕 Dinosaur app</h1>
      <p>Click on a dinosaur below to learn more.</p>
      {dinosaurs.map((dinosaur: { name: string; description: string }) => {
        return (
          <Link
            to={`/${dinosaur.name.toLowerCase()}`}
            key={dinosaur.name}
          >
            {dinosaur.name}
          </Link>
        );
      })}
    </main>
); } ``` ### Dinosaur.tsx This page will fetch the details of a specific dinosaur from the API and render it in a paragraph: ```tsx title="src/pages/Dinosaur.tsx" import { useEffect, useState } from "react"; import { Link, useParams } from "react-router-dom"; export default function Dinosaur() { const { selectedDinosaur } = useParams(); const [dinosaur, setDino] = useState({ name: "", description: "" }); useEffect(() => { (async () => { const resp = await fetch(`/api/dinosaurs/${selectedDinosaur}`); const dino = await resp.json(); setDino(dino); })(); }, [selectedDinosaur]); return (

{dinosaur.name}

{dinosaur.description}

🠠 Back to all dinosaurs
### Styling your app

We've written [some basic styles for you](https://raw.githubusercontent.com/denoland/tutorial-with-react/refs/heads/main/src/index.css), which can be copied into `src/index.css`.

## Run the app

To run the app, use the dev task defined in `deno.json`:

```sh
deno run dev
```

This command will:

1. Start the Vite development server on port 3000
2. Start the API server on port 8000
3. Set up the proxy to forward `/api` requests from the frontend to the backend

Navigate to `localhost:3000` in your browser and you should see the dinosaur app with a list of dinosaurs that you can click through to learn about each one.

## Understanding the project structure

Let's walk through the key files and folders in this project:

```text
tutorial-with-react/
├── api/                          # Backend API
│   ├── data.json                 # Dinosaur data (700+ dinosaurs)
│   ├── main.ts                   # Oak server with API routes
│   └── util/
│       └── routeStaticFilesFrom.ts
├── src/                          # React frontend
│   ├── main.tsx                  # React app entry point
│   ├── App.tsx                   # Main app with routing
│   ├── index.css                 # Global styles
│   └── pages/
│       ├── index.tsx             # Homepage with dinosaur list
│       └── Dinosaur.tsx          # Individual dinosaur page
├── public/                       # Static assets
├── deno.json                     # Deno configuration and tasks
├── package.json                  # npm dependencies for Vite
├── vite.config.ts                # Vite configuration with proxy
└── index.html                    # HTML template
```

### Key concepts

1. **Hybrid dependency management**: The project uses both Deno and npm dependencies. Deno handles server-side dependencies like Oak, while npm handles frontend dependencies through Vite.
2. **Development vs Production**: In development, Vite serves the React app on port 3000 and proxies API requests to the Oak server on port 8000. In production, the Oak server serves both the built React app and the API from port 8000.
3. **Modern React patterns**: The app uses React 19, functional components, hooks, and React Router for navigation.
4. **Type safety**: While this example doesn't use a separate types file, in a larger app you'd typically create TypeScript interfaces for your data structures.

You can see a version of the [app running on Deno Deploy](https://tutorial-with-react.deno.deno.net/).

## Build and deploy

We set up the project with a `serve` task that builds the React app and serves it with the Oak backend server. Run the following commands to build and serve the app in production mode:

```sh
deno run build
deno run serve
```

This will:

1. Build the React app using Vite (output goes to `dist/`)
2. Start the Oak server which serves both the API and the built React app

Visit `localhost:8000` in your browser to see the production version of the app!

You can deploy this app to your favorite cloud provider. We recommend using [Deno Deploy](https://deno.com/deploy) for a simple and easy deployment experience. You can deploy your app directly from GitHub: simply create a GitHub repository and push your code there, then connect it to Deno Deploy.

### Create a GitHub repository

[Create a new GitHub repository](https://github.com/new), then initialize and push your app to GitHub:

```sh
git init -b main
git remote add origin https://github.com//.git
git add .
git commit -am 'my react app'
git push -u origin main
```

### Deploy to Deno Deploy

Once your app is on GitHub, you can [deploy it to Deno Deploy](https://console.deno.com/).

For a walkthrough of deploying your app, check out the [Deno Deploy tutorial](/examples/deno_deploy_tutorial/).

🦕 Now you can scaffold and develop a React app with Vite and Deno!
You’re ready to build blazing-fast web applications. We hope you enjoy exploring these cutting-edge tools, and we can't wait to see what you make!

---

# How to use Redis with Deno

> Step-by-step guide to using Redis with Deno. Learn how to set up caching, implement message brokers, handle data streaming, and optimize your applications with Redis's in-memory data store.

URL: https://docs.deno.com/examples/tutorials/redis

[Redis](https://redis.io/) is an in-memory data store you can use for caching, as a message broker, or for streaming data. [View source here.](https://github.com/denoland/examples/tree/main/with-redis)

Here we're going to set up Redis to cache data from an API call to speed up any subsequent requests for that data. We're going to:

- Set up a Redis client to save data from every API call in memory
- Set up a Deno server so we can easily request certain data
- Call the Github API within the server handler to get the data on first request
- Serve data from Redis on every subsequent request

We can do this within a single file, `main.ts`.

## Connecting to a Redis client

We only need one dependency: the Redis client. Deno's built-in `Deno.serve` provides the server, which we'll use to get the information from the user to query our API. We can grab the node package for Redis using the `npm:` specifier:

```tsx
import { createClient } from "npm:redis@^4.5";
```

We create a Redis client using `createClient` and connect to our local Redis server:

```tsx
// make a connection to the local instance of redis
const client = createClient({
  url: "redis://localhost:6379",
});

await client.connect();
```

You can also set host, user, password, and port individually in this [configuration](https://github.com/redis/node-redis/blob/master/docs/client-configuration.md) object.

## Setting up the server

Our server is going to act as a wrapper around the Github API. A client can call our server with a Github username in the URL pathname, such as `http://localhost:3000/{username}`.

Parsing out the pathname and calling the Github API will take place inside a handler function in our server. We strip the leading slash so we are left with a variable we can pass to the Github API as a username. We'll then pass the response back to the user.

```tsx
Deno.serve({ port: 3000 }, async (req) => {
  const { pathname } = new URL(req.url);
  // strip the leading slash
  const username = pathname.substring(1);
  const resp = await fetch(`https://api.github.com/users/${username}`);
  const user = await resp.json();
  return new Response(JSON.stringify(user), {
    headers: {
      "content-type": "application/json",
    },
  });
});
```

We'll run this with:

```sh
deno run --allow-net main.ts
```

If we then go to [http://localhost:3000/ry](http://localhost:3000/ry) in Postman, we'll get the Github response:

![uncached-redis-body.png](./images/how-to/redis/uncached-redis-body.png)

Let's cache this response using Redis.

## Checking the cache

Once we have our response from the Github API, we can cache this within Redis using `client.set`, with our username as the key and the user object as the value:

```tsx
await client.set(username, JSON.stringify(user));
```

Next time we request the same username, we can use `client.get` to get the cached user:

```tsx
const cached_user = await client.get(username);
```

This returns `null` if the key doesn't exist, so we can use it for flow control. When we get the username, we'll initially check whether we already have that user in the cache. If we do, we'll serve the cached result.
If not, we'll call the Github API to get the user, cache it, then serve the API result. In both cases, we'll add a custom header to show which version we're serving:

```tsx
Deno.serve({ port: 3000 }, async (req) => {
  const { pathname } = new URL(req.url);
  // strip the leading slash
  const username = pathname.substring(1);
  const cached_user = await client.get(username);
  if (cached_user) {
    return new Response(cached_user, {
      headers: {
        "content-type": "application/json",
        "is-cached": "true",
      },
    });
  } else {
    const resp = await fetch(`https://api.github.com/users/${username}`);
    const user = await resp.json();
    await client.set(username, JSON.stringify(user));
    return new Response(JSON.stringify(user), {
      headers: {
        "content-type": "application/json",
        "is-cached": "false",
      },
    });
  }
});
```

Running this the first time gives us the same response as above, and we'll see the `is-cached` header set to `false`:

![uncached-redis-header.png](./images/how-to/redis/uncached-redis-header.png)

But call with the same username again, and we get the cached result. The body is identical:

![cached-redis-body.png](./images/how-to/redis/cached-redis-body.png)

But the header shows we have the cache:

![cached-redis-header.png](./images/how-to/redis/cached-redis-header.png)

We can also see that the response was ~200ms quicker!

You can check out the Redis documentation [here](https://redis.io/docs/) and the Redis node package [here](https://github.com/redis/node-redis).

---

# Run a script

> A guide to creating and running basic scripts with Deno. Learn how to write and execute JavaScript and TypeScript code, understand runtime environments, and get started with fundamental Deno concepts.

URL: https://docs.deno.com/examples/tutorials/run_script

Deno is a secure runtime for JavaScript and TypeScript. A runtime is the environment where your code executes. It provides the necessary infrastructure for your programs to run, handling things like memory management, I/O operations, and interaction with external resources. The runtime is responsible for translating your high-level code (JavaScript or TypeScript) into machine instructions that the computer can understand.

When you run JavaScript in a web browser (like Chrome, Firefox, or Edge), you’re using a browser runtime. Browser runtimes are tightly coupled with the browser itself. They provide APIs for manipulating the Document Object Model (DOM), handling events, making network requests, and more. These runtimes are sandboxed; they operate within the browser’s security model. They can’t access resources outside the browser, such as the file system or environment variables.

When you run your code with Deno, you’re executing your JavaScript or TypeScript code directly on your machine, outside the browser context. Therefore, Deno programs can access resources on the host computer, such as the file system, environment variables, and network sockets.

Deno provides a seamless experience for running JavaScript and TypeScript code. Whether you prefer the dynamic nature of JavaScript or the type safety of TypeScript, Deno has you covered.

## Running a script

In this tutorial we'll create a simple "Hello World" example in both JavaScript and TypeScript using Deno.

We'll define a `capitalize` function that capitalizes the first letter of a word. Then, we define a `hello` function that returns a greeting message with the capitalized name. Finally, we call the `hello` function with different names and print the output to the console.
### JavaScript

First, create a `hello-world.js` file and add the following code:

```js title="hello-world.js"
function capitalize(word) {
  return word.charAt(0).toUpperCase() + word.slice(1);
}

function hello(name) {
  return "Hello " + capitalize(name);
}

console.log(hello("john"));
console.log(hello("Sarah"));
console.log(hello("kai"));
```

Run the script using the `deno run` command:

```sh
$ deno run hello-world.js
Hello John
Hello Sarah
Hello Kai
```

### TypeScript

This TypeScript example is exactly the same as the JavaScript example above; the code just has the additional type information which TypeScript supports.

Create a `hello-world.ts` file and add the following code:

```ts title="hello-world.ts"
function capitalize(word: string): string {
  return word.charAt(0).toUpperCase() + word.slice(1);
}

function hello(name: string): string {
  return "Hello " + capitalize(name);
}

console.log(hello("john"));
console.log(hello("Sarah"));
console.log(hello("kai"));
```

Run the TypeScript script using the `deno run` command:

```sh
$ deno run hello-world.ts
Hello John
Hello Sarah
Hello Kai
```

🦕 Congratulations! Now you know how to create a simple script in both JS and TS and how to run it in Deno with the `deno run` command. Keep exploring the tutorials and examples to learn more about Deno!

---

# Simple API server

URL: https://docs.deno.com/examples/tutorials/simple_api

Deno makes it easy to build lightweight, standards-based HTTP APIs using just the Web Platform primitives (Request, Response, fetch) with built‑in data storage in KV. In this tutorial you'll build and deploy a small link shortener backed by Deno KV, then push it to production on Deno Deploy.

We'll implement a simple link shortener service using [Deno KV](/deploy/kv/manual). The modern Deno runtime provides `Deno.serve()`, which starts an HTTP server with zero configuration.

## Create a local API server

Make a new directory for your project and run `deno init` to create a basic Deno project.
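That might look something like this (the directory name here is just an example):

```sh
mkdir link-shortener && cd link-shortener
deno init
```

`deno init` scaffolds a starter `main.ts`, a `main_test.ts`, and a `deno.json` configuration file for you.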
Update the `main.ts` file with the following code:

```ts title="main.ts"
const kv = await Deno.openKv();

interface CreateLinkBody {
  slug: string;
  url: string;
}

function json(body: unknown, init: ResponseInit = {}) {
  const headers = new Headers(init.headers);
  headers.set("content-type", "application/json; charset=utf-8");
  return new Response(JSON.stringify(body), { ...init, headers });
}

function isValidSlug(slug: string) {
  return /^[a-zA-Z0-9-_]{1,40}$/.test(slug);
}

export function handler(req: Request): Promise<Response> | Response {
  return (async () => {
    // Basic CORS support (optional – remove if not needed)
    if (req.method === "OPTIONS") {
      return new Response(null, {
        headers: {
          "access-control-allow-origin": "*",
          "access-control-allow-methods": "GET,POST,OPTIONS",
          "access-control-allow-headers": "content-type",
        },
      });
    }

    if (req.method === "POST") {
      let body: CreateLinkBody;
      try {
        body = await req.json();
      } catch {
        return json({ error: "Invalid JSON body" }, { status: 400 });
      }
      const { slug, url } = body;
      if (!slug || !url) {
        return json({ error: "'slug' and 'url' are required" }, {
          status: 400,
        });
      }
      if (!isValidSlug(slug)) {
        return json({ error: "Invalid slug format" }, { status: 422 });
      }
      try {
        new URL(url);
      } catch {
        return json({ error: "'url' must be an absolute URL" }, {
          status: 422,
        });
      }

      // Prevent overwriting an existing slug using an atomic check
      const key = ["links", slug];
      const txResult = await kv.atomic().check({ key, versionstamp: null }).set(
        key,
        url,
      ).commit();
      if (!txResult.ok) {
        return json({ error: "Slug already exists" }, { status: 409 });
      }

      return json({ slug, url }, { status: 201 });
    }

    // Redirect short links – extract slug from pathname
    const slug = new URL(req.url).pathname.slice(1); // remove leading '/'
    if (!slug) {
      return json({
        message: "Provide a slug in the path or POST to create one.",
      }, { status: 400 });
    }

    const result = await kv.get<string>(["links", slug]);
    const target = result.value;
    if (!target) {
      return json({ error: "Slug not found" }, { status: 404 });
    }
    return Response.redirect(target, 301);
  })();
}

export function startServer(port = 8000) {
  return Deno.serve({ port }, handler);
}

startServer();
```

## Run and test your server locally

Update the `dev` task in the `deno.json` file to allow network permissions and add the `--unstable-kv` flag to allow using Deno KV locally:

```json title="deno.json"
{
  "tasks": {
    "dev": "deno run --unstable-kv -N main.ts"
  }
}
```

Now you can run your server with:

```sh
deno task dev
```

> For rapid iteration you could grant all permissions (`-A`) instead of just
> network (`-N`), but we do not recommend this for production environments.

### Test your API server

This server will respond to HTTP `GET` and `POST` requests. The `POST` handler expects to receive a JSON document in the request body with `slug` and `url` properties. The `slug` is the short URL component, and the `url` is the full URL you want to redirect to.
Here's an example of creating a short link with cURL (expects a 201 Created response):

```shell
curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"url":"https://docs.deno.com/","slug":"denodocs"}' \
  http://localhost:8000/
```

In response, the server returns JSON describing the stored link:

```json
{ "slug": "denodocs", "url": "https://docs.deno.com/" }
```

If you try to run the curl to create the same slug again you'll get a 409 Conflict:

```json
{ "error": "Slug already exists" }
```

A `GET` request to our server will take a URL slug as a path parameter, and redirect to the provided URL. You can visit this URL in the browser, or make another cURL request to see this in action!

```shell
curl -v http://localhost:8000/denodocs
```

## Deploy your API server

:::info Deno Deploy account required

You will need an account on Deno Deploy to complete this section. If you haven't already, [sign up for a free Deno Deploy account](https://console.deno.com/).

:::

### Provision a KV database on Deno Deploy

First, we will 'provision' a KV database in Deno Deploy for our deployed app to use.

1. Visit [Deno Deploy](https://console.deno.com/) and click on the "Databases" tab.
2. Click the "+ Provision database" button.
3. Click the "Provision" button to create a free KV database.
4. Give your database an identifying slug, select a region and click "Provision Database".

### Deploy your server

Deploy your server with the following command:

```sh
deno deploy
```

This will briefly redirect you to a browser to authenticate with your Deno Deploy account. Once authenticated, return to your terminal.

1. Select an organization (if you belong to more than one).
2. Select 'Create a new application'.
3. Return to the browser to see your new project and give it a name.
4. Click "Create App".
5. Once created, click on the "Timelines" menu item on the left side.
6. Click "Manage" next to the Databases section.
7. Find the KV database you created earlier and click "Assign".
8. Select your newly created app.
9. Click on the app name in the "Assignments" column to return to your app.
10. Click on the recent deployment link (which will have failed because it has no KV assigned).
11. Click the "Retry Build" button to redeploy your app with the KV database assigned.

Once successfully built, you will see your Production URL in the "Overview" tab. You can now use this with your curl commands to test your deployed API.

## Test out your new link shortener

Without any additional configuration (Deno KV just works on Deploy), your app should run the same as it did on your local machine. You can add new links using the `POST` handler as you did before. Just replace the `localhost` URL with your live production URL on Deno Deploy:

```shell
curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"url":"https://docs.deno.com/runtime/","slug":"denodocs"}' \
  https://your-project.yourusername.deno.net/
```

Similarly, you can visit your shortened URLs in the browser, or view the redirect coming back with a cURL command:

```shell
curl -v https://your-project.yourusername.deno.net/denodocs
```

🦕 Now you know how to make a basic API with Deno and how to deploy it to Deno Deploy. Now that you have a working URL shortener, you could consider making a frontend for it to allow users to create and manage their short links. Take a look at our [web frameworks](/examples/#web-frameworks-and-libraries) page for some ideas on how to get started!
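If you do build a frontend, it will need a way to enumerate the stored links. Deno KV can list every entry under a key prefix; here's a minimal sketch (the `listLinks` helper is hypothetical, but the `["links", slug]` key layout matches the handler above):

```ts
const kv = await Deno.openKv();

// Collect every slug -> URL mapping stored under the ["links"] prefix
async function listLinks(): Promise<Record<string, string>> {
  const links: Record<string, string> = {};
  for await (const entry of kv.list<string>({ prefix: ["links"] })) {
    links[String(entry.key[1])] = entry.value;
  }
  return links;
}
```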
---

# Snapshot testing

> Learn how to use snapshot testing in Deno to compare outputs against recorded references, making it easier to detect unintended changes in your code

URL: https://docs.deno.com/examples/tutorials/snapshot

Snapshot testing is a testing technique that captures the output of your code and compares it against a stored reference version. Rather than manually writing assertions for each property, you let the test runner record the entire output structure, making it easier to detect any unexpected changes.

The [Deno Standard Library](/runtime/reference/std/) has a [snapshot module](https://jsr.io/@std/testing/doc/snapshot), which enables developers to write tests which assert a value against a reference snapshot. This reference snapshot is a serialized representation of the original value and is stored alongside the test file.

## Basic usage

The `assertSnapshot` function will create a snapshot of a value and compare it to a reference snapshot, which is stored alongside the test file in the `__snapshots__` directory. To create an initial snapshot (or to update an existing snapshot), use the `-- --update` flag with the `deno test` command.

### Basic snapshot example

The below example shows how to use the snapshot library with the `Deno.test` API. We can test a snapshot of a basic object, containing string and number properties. The `assertSnapshot(t, a)` function compares the object against a stored snapshot. The `t` parameter is the test context that Deno provides, which the snapshot function uses to determine the test name and location for storing snapshots.

```ts title="example_test.ts"
import { assertSnapshot } from "jsr:@std/testing/snapshot";

Deno.test("isSnapshotMatch", async (t) => {
  const a = {
    hello: "world!",
    example: 123,
  };
  await assertSnapshot(t, a);
});
```

You will need to grant read and write file permissions in order for Deno to write a snapshot file and then read it to test the assertion. If it is the first time you are running the test and do not already have a snapshot, add the `--update` flag:

```bash
deno test --allow-read --allow-write -- --update
```

If you already have a snapshot file, you can run the test with:

```bash
deno test --allow-read
```

The test will compare the current output of the object against the stored snapshot. If they match, the test passes; if they differ, the test fails. The snapshot file will look like this:

```ts title="__snapshots__/example_test.ts.snap"
export const snapshot = {};

snapshot[`isSnapshotMatch 1`] = `
{
  example: 123,
  hello: "world!",
}
`;
```

You can edit your test to change the `hello` string to `"everyone!"` and run the test again with `deno test --allow-read`. This time the `assertSnapshot` function will throw an `AssertionError`, causing the test to fail because the snapshot created during the test does not match the one in the snapshot file.

## Updating snapshots

When adding new snapshot assertions to your test suite, or when intentionally making changes which cause your snapshots to fail, you can update your snapshots by running the snapshot tests in update mode. Tests can be run in update mode by passing the `--update` or `-u` flag as an argument when running the test. When this flag is passed, then any snapshots which do not match will be updated.

```bash
deno test --allow-read --allow-write -- --update
```

:::note

New snapshots will only be created when the `--update` flag is present.
:::

## Permissions

When running snapshot tests, the `--allow-read` permission must be enabled, or else any calls to `assertSnapshot` will fail due to insufficient permissions. Additionally, when updating snapshots, the `--allow-write` permission must be enabled, as this is required in order to update snapshot files.

The `assertSnapshot` function will only attempt to read from and write to snapshot files. As such, the allow list for `--allow-read` and `--allow-write` can be limited to only include existing snapshot files, if desired.

## Version Control

Snapshot testing works best when changes to snapshot files are committed alongside other code changes. This allows for changes to reference snapshots to be reviewed alongside the code changes that caused them, and ensures that when others pull your changes, their tests will pass without needing to update snapshots locally.

## Options

The `assertSnapshot` function can be called with an `options` object which offers greater flexibility and enables some non-standard use cases:

```ts
import { assertSnapshot } from "jsr:@std/testing/snapshot";

Deno.test("isSnapshotMatch", async (t) => {
  const a = {
    hello: "world!",
    example: 123,
  };
  await assertSnapshot(t, a, {/*custom options go here*/});
});
```

### serializer

When you run a test with `assertSnapshot`, the data you're testing needs to be converted to a string format that can be written to the snapshot file (when creating or updating snapshots) and compared with the existing snapshot (when validating); this is called serialization.

The `serializer` option allows you to provide a custom serializer function. This custom function will be called by `assertSnapshot` and be passed the value being asserted. Your custom function must:

1. Return a `string`
2. Be deterministic (it will always produce the same output, given the same input)

The code below shows a practical example of creating and using a custom serializer function for snapshot testing. This serializer removes any ANSI color codes from a string using the [`stripColor`](https://jsr.io/@std/fmt/doc/colors) string formatter from the Deno Standard Library.

```ts title="example_test.ts"
import { assertSnapshot, serialize } from "jsr:@std/testing/snapshot";
import { stripColor } from "jsr:@std/fmt/colors";

/**
 * Serializes `actual` and removes ANSI escape codes.
 */
function customSerializer(actual: string) {
  return serialize(stripColor(actual));
}

Deno.test("Custom Serializer", async (t) => {
  const output = "\x1b[34mHello World!\x1b[39m";
  await assertSnapshot(t, output, {
    serializer: customSerializer,
  });
});
```

```ts title="__snapshots__/example_test.ts.snap"
export const snapshot = {};

snapshot[`Custom Serializer 1`] = `"Hello World!"`;
```

Custom serializers can be useful in a variety of scenarios:

- To remove irrelevant formatting (like ANSI codes shown above) and improve legibility
- To handle non-deterministic data.
  Timestamps, UUIDs, or random values can be replaced with placeholders
- To mask or remove sensitive data that shouldn't be saved in snapshots
- Custom formatting to present complex objects in a domain-specific format

### Serialization with `Deno.customInspect`

Because the default serializer uses `Deno.inspect` under the hood, you can set the property `Symbol.for("Deno.customInspect")` to a custom serialization function if desired:

```ts title="example_test.ts"
// example_test.ts
import { assertSnapshot } from "jsr:@std/testing/snapshot";

class HTMLTag {
  constructor(
    public name: string,
    public children: Array<HTMLTag | string> = [],
  ) {}

  public render(depth: number) {
    const indent = "  ".repeat(depth);
    let output = `${indent}<${this.name}>\n`;
    for (const child of this.children) {
      if (child instanceof HTMLTag) {
        output += `${child.render(depth + 1)}\n`;
      } else {
        output += `${indent}  ${child}\n`;
      }
    }
    output += `${indent}</${this.name}>`;
    return output;
  }

  public [Symbol.for("Deno.customInspect")]() {
    return this.render(0);
  }
}

Deno.test("Page HTML Tree", async (t) => {
  const page = new HTMLTag("html", [
    new HTMLTag("head", [
      new HTMLTag("title", [
        "Simple SSR Example",
      ]),
    ]),
    new HTMLTag("body", [
      new HTMLTag("h1", [
        "Simple SSR Example",
      ]),
      new HTMLTag("p", [
        "This is an example of how Deno.customInspect could be used to snapshot an intermediate SSR representation",
      ]),
    ]),
  ]);

  await assertSnapshot(t, page);
});
```

This test will produce the following snapshot.

```ts title="__snapshots__/example_test.ts.snap"
export const snapshot = {};

snapshot[`Page HTML Tree 1`] = `
<html>
  <head>
    <title>
      Simple SSR Example
    </title>
  </head>
  <body>
    <h1>
      Simple SSR Example
    </h1>
    <p>
      This is an example of how Deno.customInspect could be used to snapshot an intermediate SSR representation
    </p>
  </body>
</html>
`;
```

In contrast, when we remove the `Deno.customInspect` method, the test will produce the following snapshot:

```ts title="__snapshots__/example_test.ts.snap"
export const snapshot = {};

snapshot[`Page HTML Tree 1`] = `HTMLTag {
  children: [
    HTMLTag {
      children: [
        HTMLTag {
          children: [
            "Simple SSR Example",
          ],
          name: "title",
        },
      ],
      name: "head",
    },
    HTMLTag {
      children: [
        HTMLTag {
          children: [
            "Simple SSR Example",
          ],
          name: "h1",
        },
        HTMLTag {
          children: [
            "This is an example of how Deno.customInspect could be used to snapshot an intermediate SSR representation",
          ],
          name: "p",
        },
      ],
      name: "body",
    },
  ],
  name: "html",
}`;
```

You can see that this second snapshot is much less readable. This is because:

1. The keys are sorted alphabetically, so the name of the element is displayed after its children
2. It includes a lot of extra information, causing the snapshot to be more than twice as long
3. It is not an accurate serialization of the HTML which the data represents

Note that in this example it would be possible to achieve the same result by calling:

```ts
await assertSnapshot(t, page.render(0));
```

However, depending on the public API you choose to expose, this may not be practical. It is also worth considering that this could have an impact beyond your snapshot testing. For example, `Deno.customInspect` is also used to serialize objects when calling `console.log` (and in some other cases). This may or may not be desirable.

### `dir` and `path`

The `dir` and `path` options allow you to control where the snapshot file will be saved to and read from. These can be absolute paths or relative paths. If relative, they will be resolved relative to the test file. For example, if your test file is located at `/path/to/test.ts` and the `dir` option is set to `snapshots`, then the snapshot file would be written to `/path/to/snapshots/test.ts.snap`.

- `dir` allows you to specify the snapshot directory, while still using the default format for the snapshot file name.
- `path` allows you to specify the directory and file name of the snapshot file. If your test file is located at `/path/to/test.ts` and the `path` option is set to `snapshots/test.snapshot`, then the snapshot file would be written to `/path/to/snapshots/test.snapshot`.

:::note

If both `dir` and `path` are specified, the `dir` option will be ignored and the `path` option will be handled as normal.

:::

### `mode`

The `mode` option controls how `assertSnapshot` behaves regardless of command line flags and has two settings, `assert` or `update`:

- `assert`: Always performs comparison only, ignoring any `--update` or `-u` flags. If snapshots don't match, the test will fail with an `AssertionError`.
- `update`: Always updates snapshots. Any mismatched snapshots will be updated after tests complete.

This option is useful when you need different snapshot behaviors within the same test suite:

```ts
// Create a new snapshot or verify an existing one
await assertSnapshot(t, stableComponent);

// Always update this snapshot regardless of command line flags
await assertSnapshot(t, experimentalComponent, {
  mode: "update",
  name: "experimental feature",
});

// Always verify but never update this snapshot regardless of command line flags
await assertSnapshot(t, criticalComponent, {
  mode: "assert",
  name: "critical feature",
});
```

### `name`

The name of the snapshot. If unspecified, the name of the test step will be used instead.
```ts title="example_test.ts" import { assertSnapshot } from "jsr:@std/testing/snapshot"; Deno.test("isSnapshotMatch", async (t) => { const a = { hello: "world!", example: 123, }; await assertSnapshot(t, a, { name: "Test Name", }); }); ``` ```ts title="__snapshots__/example_test.ts.snap" export const snapshot = {}; snapshot[`Test Name 1`] = ` { example: 123, hello: "world!", } `; ``` When `assertSnapshot` is run multiple times with the same value for name, then the suffix will be incremented as normal. i.e. `Test Name 1`, `Test Name 2`, `Test Name 3`, etc. ### `msg` Used to set a custom error message. This will overwrite the default error message, which includes the diff for failed snapshots: ```ts Deno.test("custom error message example", async (t) => { const userData = { name: "John Doe", role: "admin", }; await assertSnapshot(t, userData, { msg: "User data structure has changed unexpectedly. Please verify your changes are intentional.", }); }); ``` When the snapshot fails, instead of seeing the default diff message, you'll see your custom error message. ## Testing Different Data Types Snapshot testing works with various data types and structures: ```ts Deno.test("snapshot various types", async (t) => { // Arrays await assertSnapshot(t, [1, 2, 3, "four", { five: true }]); // Complex objects await assertSnapshot(t, { user: { name: "Test", roles: ["admin", "user"] }, settings: new Map([["theme", "dark"], ["language", "en"]]), }); // Error objects await assertSnapshot(t, new Error("Test error message")); }); ``` ## Working with Asynchronous Code When testing asynchronous functions, ensure you await the results before passing them to the snapshot: ```ts Deno.test("async function test", async (t) => { const fetchData = async () => { // Simulate API call return { success: true, data: ["item1", "item2"] }; }; const result = await fetchData(); await assertSnapshot(t, result); }); ``` ## Best Practices ### Keep Snapshots Concise Avoid capturing large data structures that aren't necessary for your test. Focus on capturing only what's relevant. ### Descriptive Test Names Use descriptive test names that clearly indicate what's being tested: ```ts Deno.test( "renders user profile card with all required fields", async (t) => { // ... test code await assertSnapshot(t, component); }, ); ``` ### Review Snapshots During Code Reviews Always review snapshot changes during code reviews to ensure they represent intentional changes and not regressions. ### Snapshot Organization For larger projects, consider organizing snapshots by feature or component: ```ts await assertSnapshot(t, component, { path: `__snapshots__/components/${componentName}.snap`, }); ``` ## Snapshot Testing in CI/CD ### GitHub Actions Example When running snapshot tests in CI environments, you'll typically want to verify existing snapshots rather than updating them: ```yaml title=".github/workflows/test.yml" name: Test on: [push, pull_request] jobs: test: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - uses: denoland/setup-deno@v2 with: deno-version: v2.x - name: Run tests run: deno test --allow-read ``` For pull requests that intentionally update snapshots, reviewers should verify the changes are expected before merging. ## Practical Examples ### Testing HTML Output HTML output testing with snapshots is particularly useful for web applications where you want to ensure your components render the expected markup. 
This approach allows you to catch unintended changes in your HTML structure, attributes, or content that might affect the visual appearance or functionality of your UI components.

By capturing a snapshot of the HTML output, you can:

- Verify that UI components render correctly with different props/data
- Detect regressions when refactoring rendering logic
- Document the expected output format of components

```ts
Deno.test("HTML rendering", async (t) => {
  const renderComponent = () => {
    return `<div class="user-profile">
  <h2>User Profile</h2>
  <p>Username: testuser</p>
</div>`;
  };

  await assertSnapshot(t, renderComponent());
});
```

### Testing API Responses

When building applications that interact with APIs, snapshot testing helps ensure that the structure and format of API responses remain consistent. This is particularly valuable for:

- Maintaining backward compatibility when updating API integrations
- Verifying that your API response parsing logic works correctly
- Documenting the expected shape of API responses for team collaboration
- Detecting unexpected changes in API responses that could break your application

```ts
Deno.test("API response format", async (t) => {
  const mockApiResponse = {
    status: 200,
    data: {
      users: [
        { id: 1, name: "User 1" },
        { id: 2, name: "User 2" },
      ],
      pagination: { page: 1, total: 10 },
    },
  };

  await assertSnapshot(t, mockApiResponse);
});
```

🦕 Snapshot testing is a powerful technique that complements traditional unit tests by allowing you to capture and verify complex outputs without writing detailed assertions. By incorporating snapshot tests into your testing strategy, you can catch unintended changes, document expected behavior, and build more resilient applications.

---

# Build a SolidJS app with Deno

> Build a SolidJS application with Deno. Learn how to set up a project, implement reactive components, handle routing, create API endpoints with Hono, and build a full-stack TypeScript application.

URL: https://docs.deno.com/examples/tutorials/solidjs

[SolidJS](https://www.solidjs.com/) is a declarative JavaScript library for creating user interfaces that emphasizes fine-grained reactivity and minimal overhead. When combined with Deno's modern runtime environment, you get a powerful, performant stack for building web applications. In this tutorial, we'll build a simple dinosaur catalog app that demonstrates the key features of both technologies.

We'll go over how to build a simple SolidJS app using Deno:

- [Scaffold a SolidJS app](#scaffold-a-solidjs-app-with-vite)
- [Set up our Hono backend](#set-up-our-hono-backend)
- [Create our SolidJS frontend](#create-our-solidjs-frontend)
- [Next steps](#next-steps)

Feel free to skip directly to [the source code](https://github.com/denoland/examples/tree/main/with-solidjs) or follow along below!

## Scaffold a SolidJS app with Vite

Let's set up our SolidJS application using [Vite](https://vite.dev/), a modern build tool that provides an excellent development experience with features like hot module replacement and optimized builds.

```bash
deno init --npm vite@latest solid-deno --template solid-ts
```

Our backend will be powered by [Hono](https://hono.dev/), which we can install via [JSR](https://jsr.io). Let's also add `solidjs/router` for client-side routing and navigation between our dinosaur catalog pages.
```bash
deno add jsr:@hono/hono npm:@solidjs/router
```
Learn more about `deno add` and using Deno as a package manager.
We'll also have to update our `deno.json` to include a few tasks and `compilerOptions` to run our app:
```json
{
  "tasks": {
    "dev": "deno task dev:api & deno task dev:vite",
    "dev:api": "deno run --allow-env --allow-net --allow-read api/main.ts",
    "dev:vite": "deno run -A npm:vite",
    "build": "deno run -A npm:vite build",
    "serve": {
      "command": "deno task dev:api",
      "description": "Run the build, and then start the API server",
      "dependencies": ["build"]
    }
  },
  "imports": {
    "@hono/hono": "jsr:@hono/hono@^4.6.12",
    "@solidjs/router": "npm:@solidjs/router@^0.14.10"
  },
  "compilerOptions": {
    "jsx": "react-jsx",
    "jsxImportSource": "solid-js",
    "lib": ["DOM", "DOM.Iterable", "ESNext"]
  }
}
```
You can write your tasks as objects. Here our `serve` task includes a description and dependencies (see the example below).
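With the configuration above, Deno runs a task's `dependencies` before its own `command`, so:

```sh
# runs the `build` task first, then starts the API server
deno task serve
```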
Great! Next, let's set up our API backend.

## Set up our Hono backend

Within our main directory, we will set up an `api/` directory and create two files. First, our dinosaur data file, [`api/data.json`](https://github.com/denoland/examples/blob/main/with-solidjs/api/data.json):

```jsonc
// api/data.json
[
  {
    "name": "Aardonyx",
    "description": "An early stage in the evolution of sauropods."
  },
  {
    "name": "Abelisaurus",
    "description": "\"Abel's lizard\" has been reconstructed from a single skull."
  },
  {
    "name": "Abrictosaurus",
    "description": "An early relative of Heterodontosaurus."
  },
  ...
]
```

This is where our data will be pulled from. In a full application, this data would come from a database.

> ⚠️️ In this tutorial we hard code the data. But you can connect to [a variety of databases](https://docs.deno.com/runtime/tutorials/connecting_to_databases/) and [even use ORMs like Prisma](https://docs.deno.com/runtime/tutorials/how_to_with_npm/prisma/) with Deno.

Secondly, we need our Hono server, `api/main.ts`:

```tsx
// api/main.ts
import { Hono } from "@hono/hono";
import data from "./data.json" with { type: "json" };

const app = new Hono();

app.get("/", (c) => {
  return c.text("Welcome to the dinosaur API!");
});

app.get("/api/dinosaurs", (c) => {
  return c.json(data);
});

app.get("/api/dinosaurs/:dinosaur", (c) => {
  if (!c.req.param("dinosaur")) {
    return c.text("No dinosaur name provided.");
  }

  const dinosaur = data.find((item) =>
    item.name.toLowerCase() === c.req.param("dinosaur").toLowerCase()
  );
  console.log(dinosaur);
  if (dinosaur) {
    return c.json(dinosaur);
  } else {
    return c.notFound();
  }
});

Deno.serve(app.fetch);
```

This Hono server provides two API endpoints:

- `GET /api/dinosaurs` to fetch all dinosaurs, and
- `GET /api/dinosaurs/:dinosaur` to fetch a specific dinosaur by name

This server will be started on `localhost:8000` when we run `deno task dev`.

Finally, before we start building out the frontend, let's update our `vite.config.ts` file with the below, especially the `server.proxy`, which informs our SolidJS frontend where to locate the API endpoint.

```tsx
// vite.config.ts
import { defineConfig } from "vite";
import solid from "vite-plugin-solid";

export default defineConfig({
  plugins: [solid()],
  server: {
    proxy: {
      "/api": {
        target: "http://localhost:8000",
        changeOrigin: true,
      },
    },
  },
});
```

## Create our SolidJS frontend

Before we begin building out the frontend components, let's quickly define the `Dino` type in `src/types.ts`:

```tsx
// src/types.ts
export type Dino = {
  name: string;
  description: string;
};
```

The `Dino` type interface ensures type safety throughout our application, defining the shape of our dinosaur data and enabling TypeScript's static type checking.

Next, let's set up our frontend to receive that data. We're going to have two pages:

- `Index.tsx`
- `Dinosaur.tsx`

Here's the code for the `src/pages/Index.tsx` page:

```tsx
// src/pages/Index.tsx
import { createSignal, For, onMount } from "solid-js";
import { A } from "@solidjs/router";
import type { Dino } from "../types.ts";

export default function Index() {
  const [dinosaurs, setDinosaurs] = createSignal<Dino[]>([]);

  onMount(async () => {
    try {
      const response = await fetch("/api/dinosaurs");
      const allDinosaurs = (await response.json()) as Dino[];
      setDinosaurs(allDinosaurs);
      console.log("Fetched dinosaurs:", allDinosaurs);
    } catch (error) {
      console.error("Failed to fetch dinosaurs:", error);
    }
  });

  return (
    <main>
      <h1>Welcome to the Dinosaur app</h1>
      <p>Click on a dinosaur below to learn more.</p>
      <For each={dinosaurs()}>
        {(dinosaur) => (
          <A href={`/${dinosaur.name.toLowerCase()}`} class="dinosaur">
            {dinosaur.name}
          </A>
        )}
      </For>
    </main>
  );
}
```

When using SolidJS, there are a few key differences to React to be aware of:

1. We use SolidJS-specific primitives:
   - `createSignal` instead of `useState`
   - `createEffect` instead of `useEffect`
   - `For` component instead of `map`
   - `A` component instead of `Link`
2. SolidJS components use fine-grained reactivity, so we call signals as functions, e.g. `dinosaur()` instead of just `dinosaur`
3. The routing is handled by `@solidjs/router` instead of `react-router-dom`
4. Component imports use Solid's [`lazy`](https://docs.solidjs.com/reference/component-apis/lazy) for code splitting

The `Index` page uses SolidJS's `createSignal` to manage the list of dinosaurs and `onMount` to fetch the data when the component loads. We use the `For` component, which is SolidJS's efficient way of rendering lists, rather than using JavaScript's map function. The `A` component from `@solidjs/router` creates client-side navigation links to individual dinosaur pages, preventing full page reloads.

Now the individual dinosaur data page at `src/pages/Dinosaur.tsx`:

```tsx
// src/pages/Dinosaur.tsx
import { createSignal, onMount } from "solid-js";
import { A, useParams } from "@solidjs/router";
import type { Dino } from "../types.ts";

export default function Dinosaur() {
  const params = useParams();
  const [dinosaur, setDinosaur] = createSignal<Dino>({
    name: "",
    description: "",
  });

  onMount(async () => {
    const resp = await fetch(`/api/dinosaurs/${params.selectedDinosaur}`);
    const dino = (await resp.json()) as Dino;
    setDinosaur(dino);
    console.log("Dinosaur", dino);
  });

  return (
    <div>
      <h1>{dinosaur().name}</h1>
      <p>{dinosaur().description}</p>
      <A href="/">Back to all dinosaurs</A>
    </div>
  );
}
```
The `Dinosaur` page demonstrates SolidJS's approach to dynamic routing by using `useParams` to access the URL parameters. It follows a similar pattern to the `Index` page, using `createSignal` for state management and `onMount` for data fetching, but focuses on a single dinosaur's details.

This `Dinosaur` component also shows how to access signal values in the template by calling them as functions (e.g., `dinosaur().name`), which is a key difference from React's state management.

Finally, to tie it all together, we'll update the `App.tsx` file, which will serve both the `Index` and `Dinosaur` pages as components. The `App` component sets up our routing configuration using `@solidjs/router`, defining two main routes: the index route for our dinosaur list and a dynamic route for individual dinosaur pages. The `:selectedDinosaur` parameter in the route path creates a dynamic segment that matches any dinosaur name in the URL.

```tsx
// src/App.tsx
import { Route, Router } from "@solidjs/router";
import Index from "./pages/Index.tsx";
import Dinosaur from "./pages/Dinosaur.tsx";
import "./App.css";

const App = () => {
  return (
    <Router>
      <Route path="/" component={Index} />
      <Route path="/:selectedDinosaur" component={Dinosaur} />
    </Router>
  );
};

export default App;
```

Finally, this `App` component will be called from our main index:

```tsx
// src/index.tsx
import { render } from "solid-js/web";
import App from "./App.tsx";
import "./index.css";

const wrapper = document.getElementById("root");

if (!wrapper) {
  throw new Error("Wrapper div not found");
}

render(() => <App />, wrapper);
```

The entry point of our application mounts the App component to the DOM using SolidJS's `render` function. It includes a safety check to ensure the root element exists before attempting to render, providing better error handling during initialization.

Now, let's run `deno task dev` to start both the frontend and backend together:
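```sh
deno task dev
```

Vite will print a local URL for the frontend in your terminal; open it in your browser to see the app.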
## Next steps

🦕 Now you can build and run a SolidJS app with Deno! Here are some ways you could enhance your dinosaur application:

- Add a persistent data store [using a database like Postgres or MongoDB](https://docs.deno.com/runtime/tutorials/connecting_to_databases/) and an ORM like [Drizzle](https://deno.com/blog/build-database-app-drizzle) or [Prisma](https://docs.deno.com/runtime/tutorials/how_to_with_npm/prisma/)
- Implement global state using SolidJS's [`createContext`](https://docs.solidjs.com/reference/component-apis/create-context) for sharing data between components
- Add loading states using [`createResource`](https://docs.solidjs.com/reference/basic-reactivity/create-resource)'s loading property
- Implement route-based code splitting with [`lazy`](https://docs.solidjs.com/reference/component-apis/lazy) imports
- Use the `Index` component for more efficient list rendering
- Deploy your app to [AWS](https://docs.deno.com/runtime/tutorials/aws_lightsail/), [Digital Ocean](https://docs.deno.com/runtime/tutorials/digital_ocean/), or [Google Cloud Run](https://docs.deno.com/runtime/tutorials/google_cloud_run/)

The combination of SolidJS's unique reactive primitives, true DOM reconciliation, and Deno's modern runtime provides an incredibly efficient foundation for web development. With no Virtual DOM overhead and granular updates only where needed, your application can achieve optimal performance while maintaining clean, readable code.

---

# Stubbing in tests

> Learn how to use stubs in Deno to isolate code during testing by replacing function implementations with controlled behavior

URL: https://docs.deno.com/examples/tutorials/stubbing

Stubbing is a powerful technique for isolating the code you're testing by replacing functions with controlled implementations. While [spies](/examples/mocking_tutorial/#spying) monitor function calls without changing behavior, stubs go a step further by completely replacing the original implementation, allowing you to simulate specific conditions or behaviors during testing.

## What are stubs?

Stubs are fake implementations that replace real functions during testing. They let you:

- Control what values functions return
- Simulate errors or specific edge cases
- Prevent external services like databases or APIs from being called
- Test code paths that would be difficult to trigger with real implementations

Deno provides robust stubbing capabilities through the [Standard Library's testing tools](https://jsr.io/@std/testing/doc/mock#stubbing).

## Basic stub usage

Here's a simple example demonstrating how to stub a function. Note that `stub` replaces a method on an object, so we define `getUserName` as a method on a service object rather than as a free-standing function:

```ts
import { assertEquals } from "jsr:@std/assert";
import { stub } from "jsr:@std/testing/mock";

// Original implementation, grouped on an object so it can be stubbed
const userService = {
  getUserName(_id: number): string {
    // In a real app, this might call a database
    return "Original User";
  },
};

// Function under test
function greetUser(id: number): string {
  const name = userService.getUserName(id);
  return `Hello, ${name}!`;
}

Deno.test("greetUser with stubbed getUserName", () => {
  // Create a stub that returns a controlled value
  const getUserNameStub = stub(userService, "getUserName", () => "Test User");

  try {
    // Test with the stubbed function
    const greeting = greetUser(123);
    assertEquals(greeting, "Hello, Test User!");
  } finally {
    // Always restore the original method
    getUserNameStub.restore();
  }
});
```

In this example, we:

1. Import the necessary functions from Deno's standard library
2. Create a stub for the `getUserName` method that returns "Test User" instead of calling the real implementation
3. Call our function under test, which will use the stubbed implementation
4. Verify the result meets our expectations
5. Restore the original method to prevent affecting other tests

## Using stubs in a testing scenario

Let's look at a more practical example with a `UserRepository` class that interacts with a database:

```ts
import { assertSpyCalls, returnsNext, stub } from "jsr:@std/testing/mock";
import { assertThrows } from "jsr:@std/assert";

type User = {
  id: number;
  name: string;
};

// This represents our database access layer
const database = {
  getUserById(id: number): User | undefined {
    // In a real app, this would query a database
    return { id, name: "Ada Lovelace" };
  },
};

// The class we want to test
class UserRepository {
  static findOrThrow(id: number): User {
    const user = database.getUserById(id);
    if (!user) {
      throw new Error("User not found");
    }
    return user;
  }
}

Deno.test("findOrThrow method throws when the user was not found", () => {
  // Stub the database.getUserById function to return undefined
  using dbStub = stub(database, "getUserById", returnsNext([undefined]));

  // We expect this function call to throw an error
  assertThrows(() => UserRepository.findOrThrow(1), Error, "User not found");

  // Verify the stubbed function was called once
  assertSpyCalls(dbStub, 1);
});
```

In this example:

1. We're testing the `findOrThrow` method, which should throw an error when a user is not found
2. We stub `database.getUserById` to return `undefined`, simulating a missing user
3. We verify that `findOrThrow` throws the expected error
4. We also check that the database method was called exactly once

Note that we're using the `using` keyword with `stub`, which is a convenient way to ensure the stub is automatically restored when it goes out of scope.

## Advanced stub techniques

### Returning different values on subsequent calls

Sometimes you want a stub to return different values each time it's called:

```ts
import { returnsNext, stub } from "jsr:@std/testing/mock";
import { assertEquals } from "jsr:@std/assert";

// Group the function on an object so it can be stubbed
const api = {
  fetchData(): string {
    return "real data";
  },
};

Deno.test("stub with multiple return values", () => {
  const fetchDataStub = stub(
    api,
    "fetchData",
    // Return these values in sequence
    returnsNext(["first result", "second result", "third result"]),
  );

  try {
    assertEquals(api.fetchData(), "first result");
    assertEquals(api.fetchData(), "second result");
    assertEquals(api.fetchData(), "third result");
  } finally {
    fetchDataStub.restore();
  }
});
```

### Stubbing with implementation logic

You can also provide custom logic in your stub implementations:

```ts
import { stub } from "jsr:@std/testing/mock";
import { assertEquals } from "jsr:@std/assert";

const math = {
  calculate(a: number, b: number): number {
    return a + b;
  },
};

Deno.test("stub with custom implementation", () => {
  // Create a counter to track how many times the stub is called
  let callCount = 0;

  const calculateStub = stub(math, "calculate", (a: number, b: number) => {
    callCount++;
    return a + b * 2; // Custom implementation
  });

  try {
    const result = math.calculate(5, 10);
    assertEquals(result, 25); // 5 + (10 * 2)
    assertEquals(callCount, 1);
  } finally {
    calculateStub.restore();
  }
});
```

## Stubbing API calls and external services

One of the most common uses of stubs is to replace API calls during testing:
stubbed fetch", async () => { const mockResponse = new Response( JSON.stringify({ id: "123", name: "Jane Doe" }), { status: 200, headers: { "Content-Type": "application/json" } }, ); // Replace global fetch with a stubbed version const fetchStub = stub( globalThis, "fetch", () => Promise.resolve(mockResponse), ); try { const user = await fetchUserData("123"); assertEquals(user, { id: "123", name: "Jane Doe" }); } finally { fetchStub.restore(); } }); ``` ## Best practices 1. **Always restore stubs**: Use `try/finally` blocks or the `using` keyword to ensure stubs are restored, even if tests fail. 2. **Use stubs for external dependencies**: Stub out database calls, API requests, or file system operations to make tests faster and more reliable. 3. **Keep stubs simple**: Stubs should return predictable values that let you test specific scenarios. 4. **Combine with spies when needed**: Sometimes you need to both replace functionality (stub) and track calls (spy). 5. **Stub at the right level**: Stub at the interface boundary rather than deep within implementation details. 🦕 Stubs are a powerful tool for isolating your code during testing, allowing you to create deterministic test environments and easily test edge cases. By replacing real implementations with controlled behavior, you can write more focused, reliable tests that run quickly and consistently. For more testing resources, check out: - [Testing in isolation with mocks](/examples/mocking_tutorial/) - [Deno Standard Library Testing Modules](https://jsr.io/@std/testing) - [Basic Testing in Deno](/examples/testing_tutorial/) --- # Creating a subprocess > A guide to working with subprocesses in Deno. Learn how to spawn processes, handle input/output streams, manage process lifecycles, and implement inter-process communication patterns safely. URL: https://docs.deno.com/examples/tutorials/subprocess ## Concepts - Deno is capable of spawning a subprocess via [Deno.Command](https://docs.deno.com/api/deno/~/Deno.Command). - `--allow-run` permission is required to spawn a subprocess. - Spawned subprocesses do not run in a security sandbox. - Communicate with the subprocess via the [stdin](https://docs.deno.com/api/deno/~/Deno.stdin), [stdout](https://docs.deno.com/api/deno/~/Deno.stdout) and [stderr](https://docs.deno.com/api/deno/~/Deno.stderr) streams. ## Simple example This example is the equivalent of running `echo "Hello from Deno!"` from the command line. ```ts title="subprocess_simple.ts" // define command used to create the subprocess const command = new Deno.Command("echo", { args: [ "Hello from Deno!", ], }); // create subprocess and collect output const { code, stdout, stderr } = await command.output(); console.assert(code === 0); console.log(new TextDecoder().decode(stdout)); console.log(new TextDecoder().decode(stderr)); ``` Run it: ```shell $ deno run --allow-run=echo ./subprocess_simple.ts Hello from Deno! ``` ## Security The `--allow-run` permission is required for creation of a subprocess. Be aware that subprocesses are not run in a Deno sandbox and therefore have the same permissions as if you were to run the command from the command line yourself. ## Communicating with subprocesses By default when you use `Deno.Command()` the subprocess inherits `stdin`, `stdout` and `stderr` of the parent process. If you want to communicate with a started subprocess you must use the `"piped"` option. ## Piping to files This example is the equivalent of running `yes &> ./process_output` in bash. 
```ts title="subprocess_piping_to_files.ts" import { mergeReadableStreams, } from "jsr:@std/streams@1.0.0-rc.4/merge-readable-streams"; // create the file to attach the process to const file = await Deno.open("./process_output.txt", { read: true, write: true, create: true, }); // start the process const command = new Deno.Command("yes", { stdout: "piped", stderr: "piped", }); const process = command.spawn(); // example of combining stdout and stderr while sending to a file const joined = mergeReadableStreams( process.stdout, process.stderr, ); // returns a promise that resolves when the process is killed/closed joined.pipeTo(file.writable).then(() => console.log("pipe join done")); // manually stop process "yes" will never end on its own setTimeout(() => { process.kill(); }, 100); ``` Run it: ```shell $ deno run --allow-run=yes --allow-read=. --allow-write=. ./subprocess_piping_to_file.ts ``` ## Reading subprocess output with convenience methods When working with spawned subprocesses, you can use convenience methods on the `stdout` and `stderr` streams to easily collect and parse output. These methods are similar to those available on `Response` objects: ```ts title="subprocess_convenience_methods.ts" const command = new Deno.Command("deno", { args: [ "eval", "console.log(JSON.stringify({message: 'Hello from subprocess'}))", ], stdout: "piped", stderr: "piped", }); const process = command.spawn(); // Use convenience methods to collect output const stdoutText = await process.stdout.text(); const stderrText = await process.stderr.text(); console.log("stdout:", stdoutText); console.log("stderr:", stderrText); // Wait for the process to complete const status = await process.status; console.log("Exit code:", status.code); ``` Available convenience methods include: - `.text()` - Returns the output as a UTF-8 string - `.bytes()` - Returns the output as a `Uint8Array` - `.arrayBuffer()` - Returns the output as an `ArrayBuffer` - `.json()` - Parses the output as JSON and returns the parsed object ```ts title="subprocess_json_parsing.ts" const command = new Deno.Command("deno", { args: ["eval", "console.log(JSON.stringify({name: 'Deno', version: '2.0'}))"], stdout: "piped", }); const process = command.spawn(); // Parse JSON output directly const jsonOutput = await process.stdout.json(); console.log("Parsed JSON:", jsonOutput); // { name: "Deno", version: "2.0" } await process.status; ``` --- # Build a SvelteKit App > A tutorial on building SvelteKit applications with Deno. Learn how to set up a SvelteKit project, implement file-based routing, manage state with load functions, and create a full-stack TypeScript application. URL: https://docs.deno.com/examples/tutorials/svelte [SvelteKit](https://kit.svelte.dev/) is a web framework built on top of [Svelte](https://svelte.dev/), a modern front-end compiler that builds highly optimized vanilla JavaScript. SvelteKit provides features like file-based routing, server-side rendering, and full-stack capabilities. In this tutorial we'll build a simple SvelteKit app with Deno. The app will display a list of dinosaurs. When you click on one, it'll take you to a dinosaur page with more details. You can see the [finished app on GitHub](https://github.com/denoland/tutorial-with-svelte). You can see a live version of the app on [Deno Deploy](https://tutorial-with-svelte.deno.deno.net/). :::info Deploy your own Want to skip the tutorial and deploy the finished app right now? 
Click the button below to instantly deploy your own copy of the complete SvelteKit dinosaur app to Deno Deploy. You'll get a live, working application that you can customize and modify as you learn!

[![Deploy on Deno](https://deno.com/button)](https://console.deno.com/new?clone=https://github.com/denoland/tutorial-with-svelte)

:::

## Create a SvelteKit app with Deno

We'll use the `sv` CLI to scaffold a new [SvelteKit](https://kit.svelte.dev/) app. In your terminal, run the following command:

```shell
deno run -A npm:sv create my-app
```

When prompted, give your app a name and select the "Skeleton project" template. Choose "Yes, using TypeScript syntax" when asked about TypeScript.

Once created, `cd` into your new project and run the following command to install dependencies:

```shell
deno install
```

Then, run the following command to serve your new SvelteKit app:

```shell
deno task dev
```

Deno will run the `dev` task from the `package.json` file, which will start the Vite development server. Click the output link to localhost to see your app in the browser.

## Configure the formatter

`deno fmt` supports Svelte files with the [`--unstable-component`](https://docs.deno.com/runtime/reference/cli/fmt/#formatting-options-unstable-component) flag. To use it, run this command:

```sh
deno fmt --unstable-component
```

To configure `deno fmt` to always format your Svelte files, add this at the top level of your `deno.json` file:

```json
"unstable": ["fmt-component"]
```

## Add a backend API

We'll build API routes using SvelteKit's built-in API capabilities. SvelteKit allows you to create API endpoints by creating `+server.js` or `+server.ts` files in your routes directory.

In the `src/routes` directory, create an `api` folder. In that folder, create a `data.json` file, which will contain the hard-coded dinosaur data. Copy and paste [this json file](https://github.com/denoland/tutorial-with-svelte/blob/main/src/routes/api/data.json) into `src/routes/api/data.json`. (If you were building a real app, you would probably fetch this data from a database or an external API.)

We're going to build out some API routes that return dinosaur information. SvelteKit provides a simple way to create API endpoints using server files.

Create `src/routes/api/dinosaurs/+server.ts` to handle the `/api/dinosaurs` endpoint. This will return all dinosaurs:

```ts title="src/routes/api/dinosaurs/+server.ts"
import { json } from "@sveltejs/kit";
import data from "../data.json" with { type: "json" };

export function GET() {
  return json(data);
}
```

Then create `src/routes/api/dinosaurs/[id]/+server.ts` to handle individual dinosaur requests at `/api/dinosaurs/:id`:

```ts title="src/routes/api/dinosaurs/[id]/+server.ts"
import { json } from "@sveltejs/kit";
import type { RequestHandler } from "./$types";
import data from "../../data.json" with { type: "json" };

export const GET: RequestHandler = ({ params }) => {
  const dinosaur = data.find((item) => {
    return item.name.toLowerCase() === params.id.toLowerCase();
  });

  if (dinosaur) {
    return json(dinosaur);
  }

  return json({ error: "Not found" }, { status: 404 });
};
```

SvelteKit automatically handles routing based on the file structure. The `+server.ts` files define API endpoints, and the `[id]` folder creates a dynamic route parameter.

## Build the frontend

### File-based routing and data loading

SvelteKit uses file-based routing, where the structure of your `src/routes` directory determines your app's routes.
Unlike Vue Router, you don't need to configure routes manually; SvelteKit automatically creates routes based on your file structure. In SvelteKit, `+page.svelte` files define page components, and `+page.ts` files define data loading functions that run before the page loads. This provides built-in server-side rendering and data fetching capabilities.

### The pages and components

SvelteKit organizes the frontend into pages and components. Pages are defined by `+page.svelte` files in the routes directory, while components can be reusable pieces of code stored anywhere in your project. Each Svelte component file contains three optional sections: a `<script>` block, the template markup, and a `<style>` block.

#### The homepage

The homepage will display a clickable list of dinosaurs.
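First, create a load function that fetches the dinosaur list from the API route we just built. A minimal sketch of `src/routes/+page.ts` might look like this (mirroring the detail-page loader shown later; the finished app on GitHub may differ slightly in shape):

```ts title="src/routes/+page.ts"
import type { PageLoad } from "./$types";

// Fetch all dinosaurs from the API route before the page renders.
export const load: PageLoad = async ({ fetch }) => {
  const res = await fetch("/api/dinosaurs");
  const dinosaurs = await res.json() as { name: string; description: string }[];
  return { dinosaurs };
};
```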
Then create `src/routes/+page.svelte` to render the list:

```html title="src/routes/+page.svelte"
<script lang="ts">
  import type { PageData } from "./$types";

  export let data: PageData;
</script>

<main>
  <h1>🦕 Dinosaur app</h1>
  <p>Click on a dinosaur below to learn more.</p>
  {#each data.dinosaurs as dinosaur (dinosaur.name)}
    <a href="/{dinosaur.name}">{dinosaur.name}</a>
  {/each}
</main>
```

This code uses Svelte's [each block](https://svelte.dev/docs/logic-blocks#each) to iterate over the `dinosaurs` array and render each dinosaur as a link. The `{#each}` block is Svelte's way of rendering lists, and the `(dinosaur.name)` provides a unique key for each item.

#### The Dinosaur detail page

The dinosaur detail page will display information about a specific dinosaur. SvelteKit uses folder names in square brackets to create dynamic routes. The `[dinosaur]` folder creates a route that captures the dinosaur name from the URL.

First, create `src/routes/[dinosaur]/+page.ts` to load individual dinosaur data:

```ts title="src/routes/[dinosaur]/+page.ts"
import type { PageLoad } from "./$types";
import { error } from "@sveltejs/kit";

export const load: PageLoad = async ({ fetch, params }) => {
  const res = await fetch(`/api/dinosaurs/${params.dinosaur}`);
  if (res.status === 404) {
    error(404, "No dinosaur found");
  }
  const dinosaur = await res.json() as { name: string; description: string };
  return { dinosaur };
};
```

This load function uses the `params` object to access the `dinosaur` parameter from the URL. If the API returns a 404, we use SvelteKit's `error` function to throw a 404 error.

Next, create `src/routes/[dinosaur]/+page.svelte` to display the dinosaur details:

```html title="src/routes/[dinosaur]/+page.svelte"
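<!--
  `data` here is the object returned from the load function in +page.ts;
  SvelteKit passes it to the page component automatically.
-->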
<script lang="ts">
  import type { PageData } from "./$types";

  export let data: PageData;
</script>

<main>
  <h1>{data.dinosaur.name}</h1>
  <p>{data.dinosaur.description}</p>
  <a href="/">🠠 Back to all dinosaurs</a>
</main>
```

This page displays the dinosaur's name and description, along with a link back to the home page. The data comes from the load function and is automatically available in the component.

## Run the app

Now that we've set up the frontend and backend API routes, we can run the app. In your terminal, run the following command:

```shell
deno task dev
```

This will start the SvelteKit development server with Vite. SvelteKit automatically handles both the frontend pages and the API routes we created, so you don't need to run separate servers.

Visit `http://localhost:5173` in your browser to see the app. Click on a dinosaur to see more details!

You can see a live version of the app on [Deno Deploy](https://tutorial-with-svelte.deno.deno.net/).

## Build and deploy

SvelteKit comes with built-in build capabilities. We configured it to use the Deno adapter, which optimizes the build for deployment on Deno-compatible platforms. Run the following command to build the app in production mode:

```sh
deno task build
```

This will:

1. Build the SvelteKit app using Vite
2. Generate optimized production assets
3. Create server-side code compatible with Deno

The built app will be ready for deployment on platforms that support Deno, such as Deno Deploy.

You can deploy this app to your favorite cloud provider. We recommend using [Deno Deploy](https://deno.com/deploy) for a simple and easy deployment experience. You can deploy your app directly from GitHub: simply create a GitHub repository and push your code there, then connect it to Deno Deploy.

### Create a GitHub repository

[Create a new GitHub repository](https://github.com/new), then initialize and push your app to GitHub:

```sh
git init -b main
git remote add origin https://github.com/<your_github_username>/<your_repo_name>.git
git add .
git commit -am 'my svelte app'
git push -u origin main
```

### Deploy to Deno Deploy

Once your app is on GitHub, you can [deploy it to Deno Deploy](https://console.deno.com/). For a walkthrough of deploying your app, check out the [Deno Deploy tutorial](/examples/deno_deploy_tutorial/).

🦕 Now that you can run a SvelteKit app in Deno with the Deno adapter, you're ready to build real-world applications!

---

# Building a SvelteKit app with sv and Deno

URL: https://docs.deno.com/examples/tutorials/sveltekit

SvelteKit has been a popular, stable choice since its launch, and with Svelte 5 recently released (as of the time of writing), there's never been a better time to show it off running with Deno! In this tutorial we will walk through setting up a SvelteKit project, made easier by the `sv` CLI, and look at loading patterns.

You can see the finished app on [GitHub](https://github.com/s-petey/deno-sveltekit).

## Getting started

We can scaffold an application easily with `npx sv create`. This is [SvelteKit's CLI](https://github.com/sveltejs/cli), which has a lot of utility. If you have followed along, great! If not, here are the selections:

- Template: SvelteKit minimal
- Type Checking: TypeScript
- Additions to project: tailwindcss
- Tailwind plugins: typography, forms
- Package manager: Deno

For the remainder of this tutorial, you should have `deno task dev` running in the background so you can see your changes and the application running locally.

### Deno setup

The `sv` command will generate a `package.json` file, which we will need to update to a `deno.json` file. For this we will change `scripts` to `tasks` and update the `vite`-based commands to use the `npm:` prefix.
Example:

```json
"dev": "vite dev",
```

Becomes:

```json
"dev": "deno run -A npm:vite dev",
```

At this stage we also want to integrate with Deno's formatter and linter. We can add these tasks as well:

```json
{
  "tasks": {
    "dev": "deno run -A npm:vite dev",
    "format": "deno fmt",
    "lint": "deno lint",
    "lint:fix": "deno lint --fix"
  }
}
```

With these tasks set up, we also need to set up some rules to use for the `format` and `lint` commands. Here we are using an `unstable` flag for `fmt-component`, which enables [Svelte component](https://docs.deno.com/runtime/reference/cli/fmt/#supported-file-types) formatting. We also add some rules for linting based on the recommended settings.

```json
{
  "fmt": {},
  "lint": {
    "include": ["src/**/*.{ts,js,svelte}"],
    "exclude": ["node_modules", ".svelte-kit", ".vite", "dist", "build"],
    "rules": {
      "tags": ["recommended"]
    }
  },
  "unstable": ["fmt-component"]
}
```

We will need to add `nodeModulesDir` to the `deno.json` file so our commands can properly find the `node_modules` directory:

```json
{
  "nodeModulesDir": "auto"
}
```

For those running VSCode or other editors that support a similar `settings.json` file, here are some recommended settings for auto-formatting on save and linting:

```json
{
  "deno.enable": true,
  "deno.enablePaths": [
    "./deno.json"
  ],
  "editor.defaultFormatter": "denoland.vscode-deno"
}
```

### Walkthrough

There are a few different folders to be mindful of:

- `src` - the root of your application code, where most of your time and effort will go.
- `src/lib` - a SvelteKit-aliased directory for fast imports, where many of your helpers or library code will live.
- `src/routes` - the rendered pages for your application; with SvelteKit there is folder-based routing.

#### Good info

There are a few conventions which we will use in our SvelteKit application. (Although there are more available, I am only covering the ones used.)

- Files or folders with `server` in the name are meant to **only** be run on the server and may cause errors if you try to run them in the client.
- Within `src/routes`, files have naming conventions:
  - `+page.svelte` -- this is the rendered file in the browser
  - `+page.server.ts` -- this file runs on the server and sends (and can receive) data directly and type-safely to the `+page.svelte` it sits next to.
  - `+layout.svelte` -- a file which defines a layout with an outlet, applied to any `+page.svelte` in the same directory or any child directories.
  - `+error.svelte` -- a custom error page you can add to make error pages nicer to come across.

An additional note: later you will see we put the `dinosaurs.ts` file within a `lib/server` directory. As stated above, this means the file is meant to **only** be accessed by other server files.

### Setup our "database"

We will be using a TypeScript file with a `Map` to access and find our items, for simplicity. Create a file and folder:

```
src/lib/server/dinosaurs.ts
```

Within this file we will set up a type for our dinosaurs and store the array of data, exported as a map:

```ts
export type Dinosaur = { name: string; description: string };

const dinosaurs = new Map<string, Dinosaur>();

const allDinosaurs: Dinosaur[] = [
  // Paste all your dino information here
];

for (const dino of allDinosaurs) {
  dinosaurs.set(dino.name.toLowerCase(), dino);
}

export { dinosaurs };
```

With this setup we have our "database"! Next, to access it on a page.
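Before wiring it into a page, a quick server-side lookup shows how the map behaves (a minimal sketch; `Ingenia` is just one name from the dataset):

```ts
// Any file under src/lib/server can import the map directly.
import { dinosaurs } from "$lib/server/dinosaurs.js";

// Keys were lower-cased when the map was built, so normalize lookups too.
const dino = dinosaurs.get("Ingenia".toLowerCase());
console.log(dino?.description ?? "Not found");
```

The route loaders below rely on this same lower-cased key convention.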
### Loading data to be rendered

We now need to create a `+page.server.ts` file, which will be at the root of our routes directory. There should already be a `+page.svelte` there.

```
src/routes/+page.server.ts
```

With this file created, we need to initialize the function to load our dinosaurs!

```ts
/// src/routes/+page.server.ts
import { dinosaurs } from "$lib/server/dinosaurs.js";

export const load = async () => {
  return { dinosaurs: Array.from(dinosaurs.values()) };
};
```

All we are doing here is converting our map's values to an array so we can see them rendered on the `+page.svelte`. Within this page you can remove anything you'd like, or just add the following.

```html
<script lang="ts">
  let { data } = $props();
</script>

<ul>
  {#each data.dinosaurs as item}
    <li>
      <a href="/{item.name}">{item.name}</a>
    </li>
  {/each}
</ul>
```

Notice while you are working with `data` we have type safety: we know that `data.dinosaurs` exists and which types are available!

### Adding an individual Dino page

Now that we are rendering each dino and have links set up on each of them, we can add a route to handle rendering this data.

```
src/routes/[name]/+page.server.ts
src/routes/[name]/+page.svelte
```

There is something neat and unique about this route. I am sure you noticed the `[name]` inserted as a folder name. This allows us to have a named route parameter. We could have used anything as the parameter name; however, we want to be able to route to `localhost:5173/Ingenia` and see our dinosaur, and since we look dinosaurs up by name, I've used the parameter `name`.

With that explained, now we can access this within our server loader to get our dino and send it to the page!

```ts
/// src/routes/[name]/+page.server.ts
import { dinosaurs } from "$lib/server/dinosaurs.js";
import { error } from "@sveltejs/kit";

export const load = async ({ params: { name } }) => {
  const dino = dinosaurs.get(name.toLowerCase());

  if (!dino) {
    throw error(404, { message: "Dino not found" });
  }

  return { name: dino.name, description: dino.description };
};
```

Notice we are throwing an error here. We don't have an `+error.svelte` page set up yet, so any errors will currently bubble to the default SvelteKit error page. Let's add one at the root level to handle any errors.

```
src/routes/+error.svelte
```

This is a very simple page; feel free to spruce up the styles here or add your own flair!

```html
<script lang="ts">
  import { page } from "$app/state";
</script>

<h1>{page.status}: {page.error?.message}</h1>
```

We simply want to show that something went wrong, and with the `page` state exposed by SvelteKit we can show the status code thrown and any message attached to the error.

Now with that pesky error distraction handled, pun intended, we can get back to showing our individual dinosaur in `src/routes/[name]/+page.svelte`!

```html
<script lang="ts">
  let { data } = $props();
</script>

<h1>{data.name}</h1>
<p>{data.description}</p>
```

Starting to work on this page, you can see we still get type safety: we know a `name` and `description` will exist on our data, and we can render them! However, there is another problem: if you navigate to this page, either by clicking one of the links on the main page or by manually adding the dinosaur name to the URL, we have no way of getting back!

### Layouts

We want to have a standard layout so that our pages can share common information and links. This can be achieved through `+layout.svelte` pages. Let's go ahead and update the one at the root of the `routes` directory. Here are a few things we want to achieve:

1. Allow users to navigate to the home page
2. Show the awesome docs for Deno and SvelteKit
3. Show a cute Dino on the page!

```html
<script lang="ts">
  let { children } = $props();
</script>

<header>
  <a href="/">
    <h1>🦕 Deno Sveltekit</h1>
  </a>
  <a href="https://docs.deno.com">Vite with Deno</a>
</header>

{@render children()}
```

Here we see `{@render children()}` for the first time. If you are coming from the React world, this works like an "outlet": it renders whatever child page needs to be output. Going back to your application, you can see our heading `h1` has a link to go back to the home page.

### Advanced routing, search parameters, and styling

We don't want to render all of the dinosaurs at a single time, as that is too much to scroll through. We want our users to be able to search and click through pages of dinosaurs, which will also showcase another awesome Svelte 5 feature: snippets!

Let's open our main page and its server page to make a few changes. Previously we were returning an array version of our dinosaurs; let's allow users to search them and add some pagination logic.

```ts
import { dinosaurs } from "$lib/server/dinosaurs.js";

export const load = async ({ url }) => {
  // We can access the search params by using the `url` provided
  // by SvelteKit
  const queryParams = url.searchParams;

  // We will use `q` as our search string
  const q = queryParams.get("q");

  // We will use `page` to know which page we are on
  const pageParam = queryParams.get("page");
  let page = 1;

  // We should verify there is a page param, if there is verify it is a number
  // otherwise use our default of 1
  if (pageParam) {
    const parsedPage = parseInt(pageParam);
    if (!isNaN(parsedPage)) {
      page = parsedPage;
    }
  }

  const limitParam = queryParams.get("limit");
  let limit = 25;

  // We should verify there is a limit param, if there is verify it is a number
  // otherwise use our default of 25
  if (limitParam) {
    const parsedLimit = parseInt(limitParam);
    if (!isNaN(parsedLimit)) {
      limit = parsedLimit;
    }
  }

  // We want to allow searching and if there is no `q` to search against
  // allow all dinosaurs, otherwise compare the names in lowercase against one
  // another.
  const filteredDinosaurs = Array.from(dinosaurs.values()).filter((d) => {
    if (!q) {
      return true;
    }
    return d.name.toLowerCase().includes(q.toLowerCase());
  });

  // Here we calculate how we need to slice the array of filtered dinosaurs
  // to return to the user
  const offset = Math.abs((page - 1) * limit);
  const paginatedDinosaurs = filteredDinosaurs.slice(
    offset,
    offset + limit,
  );

  const totalDinosaurs = filteredDinosaurs.length;
  const totalPages = Math.ceil(totalDinosaurs / limit);

  // Last we are returning a lot more data so it is easier to render
  // our pagination and dinosaurs on the page.
  return {
    dinosaurs: paginatedDinosaurs,
    q,
    page,
    limit,
    totalPages,
    totalDinosaurs,
  };
};
```

Whew, that was a lot of work! With it out of the way, let's get some pagination and search inputs added to the UI!

```html
<script lang="ts">
  let { data } = $props();
</script>

<form method="GET">
  <input
    type="search"
    name="q"
    defaultValue={data.q ?? ''}
    placeholder="Search dinosaurs"
  />
  <button type="submit">Search</button>
</form>

<ul>
  {#each data.dinosaurs as item}
    <li>
      <a href="/{item.name}">{item.name}</a>
    </li>
  {/each}
</ul>

{#if data.dinosaurs.length === 0}
  <p>No dinosaurs found</p>
{/if}

{#if data.totalPages > 0}
  <nav>
    {@render pageButton(data.page - 1, data.page === 1, false, '←')}

    {#each { length: data.totalPages }, page}
      {#if page >= data.page - 2 && page <= data.page + 2}
        {@render pageButton(page + 1, data.page === page + 1, data.page === page + 1, page + 1)}
      {:else if (page === 0 || page === 1) && page !== data.page - 1}
        {@render pageButton(page + 1, data.page === page + 1, data.page === page + 1, page + 1)}
      {:else if page >= data.totalPages - 2 && page !== data.page - 1}
        {@render pageButton(page + 1, data.page === page + 1, data.page === page + 1, page + 1)}
      {/if}
    {/each}

    {@render pageButton(data.page + 1, data.page === data.totalPages, false, '→')}
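    <!--
      How the page numbers are windowed above: `page` is the zero-based index
      from `{#each { length: data.totalPages }, page}`, while `data.page` is
      one-based. The first branch renders a window of pages around the current
      one; the other two branches always keep the first two and last two page
      numbers visible, so jumping to the start or end stays possible.
    -->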
  </nav>
{/if}

{#snippet pageButton(page: number, disabled: boolean, active: boolean, child: string | number)}
  <a
    href="?page={page}{data.q ? `&q=${data.q}` : ''}"
    class:active
    aria-disabled={disabled}
  >
    {child}
  </a>
{/snippet}
```

Notice for the input we use `defaultValue={data.q ?? ''}` so that when it is rendered in the UI we don't get `undefined` or `null` showing.

With snippets you can create reusable parts of Svelte code for easier rendering. `{#snippet pageButton(...)}` allows us to define the section to be rendered. We can then use it and pass the required type-safe parameters using `{@render pageButton(...)}`. You can see this for all of the pagination buttons.

Another neat Svelte trick is whenever `