Get Structured Output from LangGraph / LangChain

May 5, 2025 by Rick Blalock

LangGraph is a library for building stateful, multi-actor applications with LLMs, extending the capabilities of LangChain. While LangGraph provides ways to produce structured output, deploying these agents often involves extra setup, and the deployment story isn't always clearly documented.

What if you could deploy your existing LangGraph agent, complete with structured output, with just one command? That's where Agentuity comes in.

The LangGraph Agent with Structured Output

Let's take a simple example of a LangGraph agent using the createReactAgent helper, designed to get weather information and return it in a specific JSON format. This example uses Zod for schema definition and OpenAI for the LLM, but the core concepts apply broadly.

// Example borrowed from:
// https://langchain-ai.github.io/langgraphjs/how-tos/react-return-structured-output/

import type { AgentRequest, AgentResponse, AgentContext } from '@agentuity/sdk';
import { ChatOpenAI } from '@langchain/openai';
import { createReactAgent } from '@langchain/langgraph/prebuilt';
import { tool } from '@langchain/core/tools';
import { z } from 'zod';

// Tools for the agent
const weatherTool = tool(
	async (input): Promise<string> => {
		if (input.city === 'nyc') {
			return 'It might be cloudy in nyc';
		} else if (input.city === 'sf') {
			return "It's always sunny in sf";
		} else {
			throw new Error('Unknown city');
		}
	},
	{
		name: 'get_weather',
		description: 'Use this to get weather information.',
		schema: z.object({
			city: z.enum(['nyc', 'sf']).describe('The city to get weather for'),
		}),
	}
);

const langGraphAgent = createReactAgent({
	llm: new ChatOpenAI({ model: 'gpt-4o', temperature: 0 }),
	tools: [weatherTool],
	responseFormat: z.object({
		conditions: z.string().describe('Weather conditions'),
	}),
});

export default async function AgentHandler(
	req: AgentRequest,
	resp: AgentResponse,
	ctx: AgentContext
) {
	const response = await langGraphAgent.invoke({
		messages: [
			{
				role: 'user',
				content:
					(await req.data.text()) ?? "What's the weather in NYC?",
			},
		],
	});

	return resp.json(response.structuredResponse);
}

Integrating with Agentuity

Notice how little code is needed to make this LangGraph agent work within Agentuity. The core logic remains the same. We just need to:

  1. Wrap the agent invocation within an AgentHandler function.
  2. Read the user's input from req.data.text().
  3. Return the structured output using resp.json(). Agentuity handles the JSON serialization automatically.
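Because the agent's responseFormat is a Zod schema, the JSON your endpoint returns always matches that shape: an object with a conditions string. Here's a minimal sketch of a client-side consumer; the parseWeatherReport helper is hypothetical (not part of any SDK) and just shows how a caller might validate the body it receives:

```typescript
// Shape matching the agent's responseFormat Zod schema.
type WeatherReport = { conditions: string };

// Hypothetical helper: parse the response body and check it has the
// structure the agent's schema guarantees.
function parseWeatherReport(body: string): WeatherReport {
	const data = JSON.parse(body);
	if (typeof data?.conditions !== 'string') {
		throw new Error('Unexpected response shape');
	}
	return { conditions: data.conditions };
}

// The agent's resp.json(response.structuredResponse) produces a body like:
const report = parseWeatherReport('{"conditions":"It might be cloudy in nyc"}');
console.log(report.conditions);
```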

Deploying to the Cloud

Once your agent is set up, deploying is trivial. Assuming you have the Agentuity CLI installed and configured, navigate to your project directory and run agentuity deploy.

That's it! Your LangGraph agent is now live and accessible via an API endpoint provided by Agentuity.
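Once deployed, you could exercise the agent with a plain HTTP request. The URL below is a placeholder; use the endpoint the Agentuity CLI prints after a successful deploy:

```shell
# Placeholder endpoint URL; substitute the one shown after `agentuity deploy`.
curl -X POST https://your-agent-endpoint.example \
  -H "Content-Type: text/plain" \
  -d "What's the weather in SF?"
```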

Conclusion

Agentuity makes it incredibly simple to take your existing LangChain and LangGraph projects, like this agent focused on structured output, and deploy them to the cloud without fuss. Focus on building your agent logic, and let Agentuity handle the deployment. And if you want to go further, you can build a Vercel AI SDK agent or a CrewAI agent alongside it and have them all talk to each other. Cool, eh?