You can use Lightfast agents with any Node.js framework or HTTP server. The `fetchRequestHandler` provides a framework-agnostic interface that works with Web API `Request`/`Response` objects.
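Concretely, a framework-agnostic handler is just a function from a standard `Request` to a standard `Response`. The sketch below uses stand-in logic (not the real Lightfast API) to illustrate the shape that `fetchRequestHandler` slots into:

```typescript
// Illustrative stand-in for a Web API handler. In a real app, the body of
// this function is what fetchRequestHandler provides for you.
async function handleAgentRequest(req: Request): Promise<Response> {
  if (req.method !== 'POST') {
    return new Response('Method Not Allowed', { status: 405 });
  }
  const body = await req.json();
  const count = Array.isArray(body.messages) ? body.messages.length : 0;
  return new Response(JSON.stringify({ received: count }), {
    status: 200,
    headers: { 'Content-Type': 'application/json' },
  });
}
```

Because the function only touches Web-standard objects, any framework that can hand you a `Request` (or be adapted to produce one) can host it.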
## Supported Frameworks
Choose your preferred framework for detailed integration guides:
- Node.js HTTP Server - Native Node.js HTTP server with full control
- Express - Popular and flexible web framework with rich middleware ecosystem
- Hono - Ultrafast web framework with excellent Web API compatibility ⭐ Recommended
- Fastify - High-performance framework with schema validation
- NestJS - Progressive framework for building scalable applications
## Quick Comparison

| Framework | Web API Support | Setup Complexity | Performance | Best For |
|---|---|---|---|---|
| Hono | ✅ Native | Simple | Excellent | Modern apps, edge deployment |
| Node.js HTTP | ⚠️ Manual | Medium | Good | Full control, minimal dependencies |
| Fastify | ⚠️ Manual | Medium | Excellent | High-performance APIs |
| Express | ⚠️ Manual | Medium | Good | Traditional web apps, large ecosystem |
| NestJS | ⚠️ Manual | Complex | Good | Enterprise applications, large teams |
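The ⚠️ Manual entries mean the framework hands you Node-style request/response objects rather than Web API ones, so a small adapter is needed. A minimal sketch of that glue (it buffers the body and ignores streaming and multi-value headers, which production adapters handle properly):

```typescript
import { createServer, type IncomingMessage, type ServerResponse } from 'node:http';

type FetchHandler = (req: Request) => Promise<Response>;

// Buffer the Node request body, rebuild a standard Request, call the
// Web-style handler, then copy its Response back onto the Node response.
function toNodeServer(handler: FetchHandler) {
  return createServer(async (req: IncomingMessage, res: ServerResponse) => {
    const chunks: Buffer[] = [];
    for await (const chunk of req) chunks.push(chunk as Buffer);
    const request = new Request(
      `http://${req.headers.host ?? 'localhost'}${req.url}`,
      {
        method: req.method,
        headers: Object.entries(req.headers).map(
          ([k, v]) => [k, String(v)] as [string, string]
        ),
        body: chunks.length > 0 ? Buffer.concat(chunks) : undefined,
      }
    );
    const response = await handler(request);
    res.writeHead(response.status, Object.fromEntries(response.headers));
    res.end(Buffer.from(await response.arrayBuffer()));
  });
}
```

Hono skips this step entirely because its routes already receive Web API `Request` objects; that is why it is marked ✅ Native.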
## Common Setup
All frameworks use the same agent configuration:
```typescript
import { createAgent } from 'lightfast/agent';
import { createTool } from 'lightfast/tool';
import { RedisMemory } from 'lightfast/memory/adapters/redis';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Create a simple tool
const weatherTool = createTool({
  name: 'get_weather',
  description: 'Get weather for a location',
  parameters: z.object({
    location: z.string().describe('The location to get weather for')
  }),
  execute: async ({ location }) => {
    // Your weather API logic here
    return `Weather in ${location}: Sunny, 72°F`;
  }
});

// Create the agent
const agent = createAgent({
  name: 'weather-assistant',
  model: openai('gpt-4o'),
  system: 'You are a helpful weather assistant.',
  tools: { weather: weatherTool },
  createRuntimeContext: ({ sessionId, resourceId }) => ({
    // Add any runtime context here
    timestamp: Date.now()
  })
});

// Create memory adapter
const memory = new RedisMemory({
  url: process.env.REDIS_URL!,
  token: process.env.REDIS_TOKEN!
});
```
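The agent and memory above are then wired to an HTTP route via `fetchRequestHandler`. The import path and option names below are assumptions for illustration only; consult your framework's integration guide for the exact signature:

```typescript
// Assumed wiring -- the import path and options here are illustrative
// guesses, not the confirmed Lightfast API.
import { fetchRequestHandler } from 'lightfast/server/adapters/fetch';

const handler = (req: Request) =>
  fetchRequestHandler({
    agent,   // the agent created above
    memory,  // the memory adapter created above
    req,
  });
```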
## Getting Started

1. Choose your framework from the list above
2. Follow the specific integration guide for detailed setup instructions
3. Configure your agent and memory using the common setup above
4. Deploy and test your AI-powered API
Each framework guide includes:
- Installation instructions
- Basic and advanced integration examples
- Authentication and middleware setup
- Error handling and logging
- Testing strategies
- Production deployment tips
## Why Lightfast Works Everywhere
Lightfast's Web API-first design makes it naturally compatible with any framework that supports standard Request/Response objects. This means:
- ✅ Future-proof - Works with new frameworks as they emerge
- ✅ Edge compatible - Runs on Cloudflare Workers, Vercel Edge, etc.
- ✅ Consistent API - Same integration pattern across all frameworks
- ✅ No vendor lock-in - Easy to migrate between frameworks
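The edge-compatibility point follows directly from the handler shape: edge runtimes such as Cloudflare Workers invoke an exported `fetch(request)` method and expect a standard `Response` back. A self-contained sketch of that module shape (stand-in logic, not the Lightfast API):

```typescript
// A Web-API-first handler already matches the entry-point shape edge
// runtimes expect: an object with a fetch(request) method.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    return new Response(JSON.stringify({ path: url.pathname }), {
      headers: { 'Content-Type': 'application/json' },
    });
  },
};

// In a Cloudflare Workers project this would be the module's default export:
// export default worker;
```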
## Testing Your Integration
You can test any framework integration using curl:
```bash
# Test the agent endpoint
curl -X POST http://localhost:8080/agents/my-session \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "What'\''s the weather like in San Francisco?"
      }
    ]
  }'
```
## Next Steps
- Choose your framework and follow the detailed integration guide
- Explore Agent Development for advanced agent features
- Learn about Memory & State for persistent conversations
- Check out Tool Factories for dynamic tool creation