WASM at Edge
Running WebAssembly modules on CDN edge servers for near-instant compute without cold starts. Used by Fastly Compute and Cloudflare Workers to execute custom logic at hundreds of PoPs worldwide.
Full Explanation
WebAssembly (WASM) at the edge means compiling your application code to WASM and running it on CDN edge servers instead of centralized cloud regions. Fastly Compute is the poster child: you write code in Rust, Go, JavaScript, or other languages, compile to WASM, and deploy to Fastly's edge network. Your code runs within milliseconds of users worldwide.
The key advantage of WASM over containers or VMs for edge compute is startup time. A WASM module can be instantiated in microseconds. There's no cold start in the traditional sense because creating a new WASM sandbox is nearly free. Compare this to AWS Lambda cold starts (100ms to several seconds) or container startup (seconds to minutes). For CDN edge compute where requests must be handled in single-digit milliseconds, this instant startup is essential.
WASM provides strong isolation through its sandbox model. Each request runs in its own WASM instance with no shared memory, no filesystem access, and no ability to affect other requests. This security model is simpler and easier to reason about than container isolation or V8 isolate isolation: the attack surface is a small, well-defined set of host imports rather than a kernel syscall interface. A bug in one customer's code can't affect another customer's workload running on the same physical server.
Cloudflare Workers took a different approach, using V8 isolates (the JavaScript engine from Chrome) instead of WASM. Isolates are lightweight JavaScript execution environments that also start in microseconds. However, Cloudflare Workers now also supports WASM modules running inside isolates, giving you both options. The V8 approach is optimized for JavaScript, while the WASM approach lets you use any language that compiles to WASM.
Real-world edge WASM use cases include: image transformation (resize, format conversion at the edge), authentication and authorization (JWT validation, OAuth flows), A/B testing (routing logic without origin round trips), personalization (geolocation-based content), API composition (aggregating multiple backend responses at the edge), and request/response manipulation (header injection, body rewriting).
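The A/B-testing case, for instance, usually comes down to deterministic bucketing: hash a stable user identifier at the edge and pick a backend without an origin round trip. A minimal sketch in plain Rust follows; the `bucket` and `variant_backend` helpers, the experiment name, and the 50/50 split are illustrative assumptions, not any vendor's API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Deterministically assign a user to an experiment variant.
/// The same (user_id, experiment) pair always maps to the same bucket,
/// so routing is stable across requests and across PoPs.
/// Note: DefaultHasher's output is not guaranteed stable across Rust
/// releases; production bucketing would use a fixed hash (e.g. FNV).
fn bucket(user_id: &str, experiment: &str, variants: u64) -> u64 {
    let mut h = DefaultHasher::new();
    user_id.hash(&mut h);
    experiment.hash(&mut h);
    h.finish() % variants
}

fn variant_backend(user_id: &str) -> &'static str {
    // 50/50 split between control and treatment backends
    match bucket(user_id, "checkout-redesign", 2) {
        0 => "origin-control",
        _ => "origin-treatment",
    }
}

fn main() {
    // Stable: the same user always lands on the same variant
    assert_eq!(variant_backend("user-42"), variant_backend("user-42"));
    println!("user-42 -> {}", variant_backend("user-42"));
}
```

Because the assignment is a pure function of the request, every PoP makes the same decision with no shared state and no origin consultation.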
The limitations are real. WASM at the edge typically has memory limits (128MB to 256MB per instance), CPU time limits (50ms to 500ms per request), no persistent local storage, and restricted network access. You can't run a database or a long-running process. Think of it as a very fast, very ephemeral request handler, not a general-purpose compute platform.
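Within those memory caps, edge handlers typically stream bodies in fixed-size chunks rather than buffering them whole. A plain-Rust sketch of the pattern using only `std::io` (the `process_chunked` helper and chunk size are illustrative, not a vendor API):

```rust
use std::io::{Read, Result};

/// Process a body in fixed-size chunks so peak memory stays bounded
/// regardless of body size -- the usual pattern under per-instance
/// memory caps like 128 MB.
fn process_chunked<R: Read>(
    mut body: R,
    chunk: usize,
    mut sink: impl FnMut(&[u8]),
) -> Result<u64> {
    let mut buf = vec![0u8; chunk];
    let mut total = 0u64;
    loop {
        let n = body.read(&mut buf)?;
        if n == 0 {
            break; // end of body
        }
        sink(&buf[..n]);
        total += n as u64;
    }
    Ok(total)
}

fn main() {
    let data = vec![7u8; 1_000_000]; // stand-in for a 1 MB response body
    let mut seen = 0u64;
    let total = process_chunked(&data[..], 16 * 1024, |c| seen += c.len() as u64)
        .expect("read failed");
    assert_eq!(total, 1_000_000);
    assert_eq!(seen, total);
    println!("processed {} bytes in 16 KiB chunks", total);
}
```

Peak allocation here is one 16 KiB buffer, however large the body, which is why streaming transforms fit edge limits that full-buffer transforms do not.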
The ecosystem is maturing fast. WASI (WebAssembly System Interface) standardizes how WASM modules interact with the outside world (network, clock, random numbers). The Component Model enables composing WASM modules from different languages. Spin (by Fermyon) and wasmCloud provide frameworks for building edge WASM applications. The long-term vision is that WASM becomes the universal edge compute runtime, replacing both containers and language-specific runtimes.
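For a concrete taste of WASI, the plain Rust below compiles both natively and for a WASI target (e.g. `cargo build --target wasm32-wasip1`; the exact target name varies by toolchain version and is an assumption here). Under WASI, the standard-library calls for the clock and environment are backed by WASI host imports such as `clock_time_get` rather than OS syscalls.

```rust
use std::env;
use std::time::{SystemTime, UNIX_EPOCH};

/// Milliseconds since the Unix epoch. On a WASI target this std call is
/// serviced by the host's `clock_time_get` import; natively it is a syscall.
fn epoch_millis() -> u128 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before Unix epoch")
        .as_millis()
}

fn main() {
    // Environment variables also flow through WASI host imports;
    // EDGE_REGION is a hypothetical variable for illustration.
    let region = env::var("EDGE_REGION").unwrap_or_else(|_| "unknown".into());
    println!("t={}ms region={}", epoch_millis(), region);
}
```

The point is portability: the same source runs unmodified on a laptop, a server, or any WASI-conformant edge runtime, because the module imports capabilities instead of linking against an OS.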
Examples
# Fastly Compute: Rust edge handler
use fastly::{Error, Request, Response};

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // Geolocation at the edge
    let geo = req
        .get_client_ip_addr()
        .and_then(fastly::geo::geo_lookup);
    // Borrow the Geo value so the &str country code remains valid
    let country = geo.as_ref().map(|g| g.country_code()).unwrap_or("US");
    // Route to the nearest backend for the client's region
    let backend = match country {
        "JP" | "KR" | "SG" => "origin-apac",
        "DE" | "FR" | "GB" => "origin-eu",
        _ => "origin-us",
    };
    let mut beresp = req.send(backend)?;
    beresp.set_header("X-Edge-Country", country);
    Ok(beresp)
}
# Cloudflare Worker with WASM (Rust via wasm-bindgen)
use worker::*;

#[event(fetch)]
async fn main(req: Request, _env: Env, _ctx: Context) -> Result<Response> {
    let url = req.url()?;
    if url.path().starts_with("/api/") {
        // Transform the API response at the edge
        let mut resp = Fetch::Url(url).send().await?;
        let body: serde_json::Value = resp.json().await?;
        // filter_fields is a user-defined helper that strips
        // internal fields before the response leaves the edge
        let filtered = filter_fields(&body);
        Response::from_json(&filtered)
    } else {
        Response::error("Not found", 404)
    }
}
# Deploy WASM to Fastly
fastly compute init --language rust
fastly compute build
fastly compute deploy
# Deploy to Cloudflare
npx wrangler deploy
Related CDN concepts include:
- API Gateway — A server that acts as the single entry point for API requests, handling authentication, rate …