Agentic Workflows · Lesson 1 of 3
AI Agents: ReAct, Tools & Reasoning
What Is an AI Agent?
A standard LLM call is one-shot: you send a prompt, it returns text. An AI agent is a system where the LLM can take actions — call tools, execute code, search the web — and loop until it completes a goal.
Standard LLM:
User → [Prompt] → LLM → [Text Response] → Done
AI Agent:
User → [Goal] → LLM → [Decide action] → [Execute tool]
                 ↑                            │
                 └────── [Observe result] ────┘
                   (loop until goal complete)
The three components of any agent:
- LLM — the reasoning engine
- Tools — functions the LLM can call (search, database, APIs, code execution)
- Loop — the orchestration logic that runs until the task is done
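These three components can be sketched in isolation before touching any real SDK. The example below is a toy: the "model" is a stubbed function that requests one tool call and then produces an answer, and every name (`DecideNext`, the tool table) is illustrative, not part of any API.

```csharp
using System;
using System.Collections.Generic;

// Toy agent = LLM (stubbed) + tools + loop
public static class ToyAgent
{
    // Tools: plain functions the loop can dispatch to
    private static readonly Dictionary<string, Func<string, string>> Tools = new()
    {
        ["get_weather"] = city => $"18°C in {city}",
    };

    // Stubbed "LLM": requests a tool on the first pass, then answers from the observation
    private static (string? Tool, string Arg, string? Answer) DecideNext(List<string> observations) =>
        observations.Count == 0
            ? ("get_weather", "London", null)
            : (null, "", $"Final answer: {observations[^1]}");

    public static string Run(string goal, int maxIterations = 5)
    {
        var observations = new List<string>();
        for (var i = 0; i < maxIterations; i++)
        {
            var (tool, arg, answer) = DecideNext(observations);
            if (answer is not null) return answer;  // goal complete → exit loop
            observations.Add(Tools[tool!](arg));    // execute tool, observe result
        }
        return "Max iterations reached.";
    }
}
```

A real agent replaces `DecideNext` with an LLM call, but the control flow — decide, execute, observe, repeat — is exactly this loop.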
Function/Tool Calling
OpenAI's function calling lets you define tools the model can choose to call. The model doesn't execute the functions — it requests that you execute them by returning a structured JSON payload.
1. You define available tools as JSON schemas
2. Model decides if it needs to call a tool
3. Model returns: { tool: "get_weather", args: { city: "London" } }
4. You execute the function with those args
5. You send the result back to the model
6. Model incorporates the result into its final answer
Building a Tool-Calling Agent in .NET
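The payload in step 3 is plain JSON, so steps 3–4 reduce to parse-and-dispatch. Here is a minimal self-contained sketch with `System.Text.Json`; the `GetWeather` handler is a stand-in, and the real dispatch appears inside the full agent that follows.

```csharp
using System;
using System.Text.Json;

public static class ToolDispatch
{
    // Step 3: the model returns a structured request like this
    public const string SamplePayload = """{ "tool": "get_weather", "args": { "city": "London" } }""";

    // Step 4: parse the payload and route it to the matching function
    public static string Dispatch(string payload)
    {
        using var doc = JsonDocument.Parse(payload);
        var tool = doc.RootElement.GetProperty("tool").GetString();
        var args = doc.RootElement.GetProperty("args");

        return tool switch
        {
            "get_weather" => GetWeather(args.GetProperty("city").GetString()!),
            _ => $"Unknown tool: {tool}",
        };
    }

    // Stand-in for the real tool; its result is what you send back in step 5
    private static string GetWeather(string city) =>
        $"{{ \"city\": \"{city}\", \"temp\": \"18°C\" }}";
}
```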
// Define tools as C# methods
public static class Tools
{
[Description("Get the current weather for a city")]
public static async Task<string> GetWeatherAsync(
[Description("The city name")] string city)
{
// In production: call a real weather API
await Task.Delay(100);
return $"{{ \"city\": \"{city}\", \"temp\": \"18°C\", \"condition\": \"Partly cloudy\" }}";
}
[Description("Search for information on the web")]
public static async Task<string> SearchWebAsync(
[Description("The search query")] string query)
{
// In production: call Bing/Google Search API
await Task.Delay(100);
return $"Search results for '{query}': [result1, result2, result3]";
}
[Description("Execute a SQL query against the reporting database")]
public static async Task<string> RunSqlQueryAsync(
[Description("A read-only SQL SELECT query")] string sql,
IDbConnection db)
{
// Basic guard: a prefix check alone won't catch multi-statement batches
// or CTEs ("WITH ..."), so harden this further before production use
if (!sql.TrimStart().StartsWith("SELECT", StringComparison.OrdinalIgnoreCase))
return "Error: Only SELECT queries are allowed.";
var results = await db.QueryAsync(sql);
return JsonSerializer.Serialize(results.Take(50));
}
}
// Define tool schemas for the OpenAI API
var tools = new List<ChatTool>
{
ChatTool.CreateFunctionTool(
"get_weather",
"Get the current weather for a city",
BinaryData.FromString("""
{
"type": "object",
"properties": {
"city": { "type": "string", "description": "The city name, e.g. London" }
},
"required": ["city"]
}
""")),
ChatTool.CreateFunctionTool(
"search_web",
"Search for up-to-date information on the web",
BinaryData.FromString("""
{
"type": "object",
"properties": {
"query": { "type": "string", "description": "The search query" }
},
"required": ["query"]
}
""")),
};
// The agentic loop
public class AgentService
{
private readonly OpenAIClient _client;
private const string Model = "gpt-4o-mini";
private const int MaxIterations = 10; // safety limit
public AgentService(OpenAIClient client) => _client = client;
public async Task<string> RunAsync(string goal, CancellationToken ct = default)
{
var chatClient = _client.GetChatClient(Model);
var messages = new List<ChatMessage>
{
ChatMessage.CreateSystemMessage("""
You are a helpful assistant with access to tools.
Use tools when you need current information or need to take action.
Reason step by step before calling tools.
Stop when you have a complete answer for the user.
"""),
ChatMessage.CreateUserMessage(goal),
};
var options = new ChatCompletionOptions();
foreach (var tool in tools)
options.Tools.Add(tool);
for (var iteration = 0; iteration < MaxIterations; iteration++)
{
var response = await chatClient.CompleteChatAsync(messages, options, ct);
var choice = response.Value;
// If the model is done reasoning, return the final answer
if (choice.FinishReason == ChatFinishReason.Stop)
return choice.Content[0].Text;
// Model wants to call tools
if (choice.FinishReason == ChatFinishReason.ToolCalls)
{
// Add the assistant message with tool call requests
messages.Add(ChatMessage.CreateAssistantMessage(choice));
// Execute each requested tool call
foreach (var toolCall in choice.ToolCalls)
{
var result = await ExecuteToolAsync(toolCall, ct);
messages.Add(ChatMessage.CreateToolMessage(
toolCall.Id,
result));
}
// Loop: send results back to model
continue;
}
// Unexpected finish reason
break;
}
return "Agent reached maximum iterations without completing the task.";
}
private async Task<string> ExecuteToolAsync(ChatToolCall toolCall, CancellationToken ct)
{
var args = JsonDocument.Parse(toolCall.FunctionArguments);
return toolCall.FunctionName switch
{
"get_weather" => await Tools.GetWeatherAsync(
args.RootElement.GetProperty("city").GetString()!),
"search_web" => await Tools.SearchWebAsync(
args.RootElement.GetProperty("query").GetString()!),
_ => $"Unknown tool: {toolCall.FunctionName}",
};
}
}
Parallel Tool Calls
The model can request multiple tools at once — execute them in parallel:
// Execute all tool calls in parallel
var toolTasks = choice.ToolCalls.Select(async toolCall =>
{
var result = await ExecuteToolAsync(toolCall, ct);
return (toolCall.Id, result);
});
var results = await Task.WhenAll(toolTasks);
// Add all tool results back to messages
foreach (var (id, result) in results)
messages.Add(ChatMessage.CreateToolMessage(id, result));
Using Semantic Kernel
Microsoft's Semantic Kernel is the production-grade SDK for building AI agents in .NET. It handles tool registration, the agentic loop, memory, and more:
dotnet add package Microsoft.SemanticKernel
dotnet add package Microsoft.SemanticKernel.Agents.Core
// Register Semantic Kernel
builder.Services.AddKernel()
.AddOpenAIChatCompletion("gpt-4o-mini", builder.Configuration["OpenAI:ApiKey"]!);
// Define a plugin (tool group) with the [KernelFunction] attribute
public class WeatherPlugin
{
[KernelFunction("get_weather")]
[Description("Get current weather for a city")]
public async Task<string> GetWeatherAsync(
[Description("The city name")] string city)
{
// In production: call a real weather API
await Task.Delay(100); // simulate latency (also avoids the async-without-await warning)
return $"{{ \"city\": \"{city}\", \"temp\": \"18°C\" }}";
}
}
public class DatabasePlugin
{
private readonly IDbConnection _db;
public DatabasePlugin(IDbConnection db) => _db = db;
[KernelFunction("query_orders")]
[Description("Get recent orders from the database. Returns up to 10 results.")]
public async Task<string> QueryOrdersAsync(
[Description("Filter by status: pending, confirmed, shipped, or all")] string status = "all")
{
var sql = status == "all"
? "SELECT TOP 10 Id, Reference, Status, TotalAmount FROM Orders ORDER BY CreatedAt DESC"
: "SELECT TOP 10 Id, Reference, Status, TotalAmount FROM Orders WHERE Status = @status ORDER BY CreatedAt DESC";
var orders = await _db.QueryAsync(sql, new { status });
return JsonSerializer.Serialize(orders);
}
}
// Build and run the agent
public class SemanticKernelAgentService
{
private readonly Kernel _kernel;
public SemanticKernelAgentService(Kernel kernel)
{
_kernel = kernel;
_kernel.Plugins.AddFromType<WeatherPlugin>();
_kernel.Plugins.AddFromType<DatabasePlugin>();
}
public async Task<string> RunAsync(string goal, CancellationToken ct = default)
{
var agent = new ChatCompletionAgent
{
Name = "LearnixoAssistant",
Instructions = """
You are a helpful assistant for Learnixo.
Use tools to answer questions accurately.
Keep answers concise and include relevant data from tools.
""",
Kernel = _kernel,
// Enable automatic invocation of registered plugin functions
Arguments = new KernelArguments(new OpenAIPromptExecutionSettings
{
FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
}),
};
var thread = new AgentGroupChat();
thread.AddChatMessage(new ChatMessageContent(AuthorRole.User, goal));
await foreach (var response in thread.InvokeAsync(agent, ct))
return response.Content ?? string.Empty;
return "No response.";
}
}
Multi-Agent Systems
For complex tasks, multiple specialised agents collaborate:
User Request
│
▼
Orchestrator Agent (gpt-4o)
├── Research Agent → searches web, reads documents
├── Code Agent → writes and reviews code
└── Writer Agent → formats the final output
// Orchestrator delegates to specialised agents
public class OrchestratorAgent
{
private readonly AgentService _researcher;
private readonly AgentService _coder;
private readonly AgentService _writer;
public async Task<string> HandleAsync(string userRequest, CancellationToken ct)
{
// Step 1: Research Agent gathers facts
var facts = await _researcher.RunAsync(
$"Research and summarise key facts for: {userRequest}", ct);
// Step 2: Code Agent produces any needed code
var code = await _coder.RunAsync(
$"Based on these facts: {facts}\nWrite the implementation for: {userRequest}", ct);
// Step 3: Writer Agent produces the final response
return await _writer.RunAsync(
$"Facts: {facts}\nCode: {code}\nFormat a clear, concise answer for: {userRequest}", ct);
}
}
Safety and Guardrails
Agents that can take actions need strong guardrails:
// ✅ Validate every tool call before executing it
private async Task<string> ExecuteToolAsync(ChatToolCall toolCall, CancellationToken ct)
{
// Log every tool call for auditing
_logger.LogInformation(
"Agent calling tool {Tool} with args: {Args}",
toolCall.FunctionName, toolCall.FunctionArguments);
// Block dangerous patterns
if (toolCall.FunctionName == "run_sql")
{
var sql = JsonDocument.Parse(toolCall.FunctionArguments)
.RootElement.GetProperty("sql").GetString()!;
// Never allow destructive SQL — match whole words only (needs System.Text.RegularExpressions)
// so identifiers like "LastUpdated" don't trip the filter as "UPDATE"
var forbidden = new[] { "DROP", "DELETE", "TRUNCATE", "UPDATE", "INSERT", "EXEC" };
if (forbidden.Any(f => Regex.IsMatch(sql, $@"\b{f}\b", RegexOptions.IgnoreCase)))
{
_logger.LogWarning("Agent attempted forbidden SQL: {Sql}", sql);
return "Error: Only SELECT queries are permitted.";
}
}
// Execute with timeout
using var timeoutCts = CancellationTokenSource.CreateLinkedTokenSource(ct);
timeoutCts.CancelAfter(TimeSpan.FromSeconds(10));
return await ExecuteToolCoreAsync(toolCall, timeoutCts.Token);
}
// ✅ Always set a maximum iteration limit
private const int MaxIterations = 10;
// ✅ Confirm before irreversible actions (in interactive agents)
if (toolCall.FunctionName == "send_email")
{
// Ask the user to confirm before sending
yield return $"\n\nI'm about to send an email to {recipient}. Type 'confirm' to proceed.";
// Wait for confirmation...
}
When to Use Agents vs Simple Prompts
Use a simple prompt when:
✅ The task is one-step (summarise, classify, rewrite)
✅ The input is self-contained — no external lookups needed
✅ Latency matters — every tool call adds 100ms–2s
Use an agent when:
✅ The task requires multiple steps or decisions
✅ You need real-time data (weather, stock price, database query)
✅ The path to completion isn't known upfront
✅ You need the model to use external tools or APIs
Agents add latency (multiple LLM calls + tool execution) and cost (more tokens). Only use them when the complexity genuinely warrants it.
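A rough back-of-envelope makes the trade-off concrete. All numbers below — per-call latency, tokens per call, price per million tokens — are illustrative assumptions, not benchmarks or current pricing:

```csharp
using System;

public static class AgentCostSketch
{
    // Estimate total latency and spend for an agentic run (all inputs assumed)
    public static (double Seconds, double Dollars) Estimate(
        int iterations,          // LLM round-trips in the loop
        double llmSeconds,       // assumed latency per LLM call
        double toolSeconds,      // assumed latency per tool call
        int tokensPerCall,       // assumed prompt + completion tokens per call
        double dollarsPerMTok)   // assumed blended price per million tokens
    {
        var seconds = iterations * (llmSeconds + toolSeconds);
        var dollars = iterations * tokensPerCall * dollarsPerMTok / 1_000_000;
        return (seconds, dollars);
    }
}
```

With, say, 3 iterations at 1.0s per LLM call, 0.3s per tool, and 2,000 tokens per call at a hypothetical $0.50/MTok, that is 3 × (1.0 + 0.3) = 3.9s and 3 × 2,000 × 0.50 / 1,000,000 = $0.003 — roughly triple the latency and cost of a single simple-prompt call for the same question.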
Key Takeaways
- An agent is an LLM + tools + a loop — nothing more
- Function/tool calling is how OpenAI models request tool execution — the model never runs code directly
- Always validate tool calls before executing — treat them like user input
- Parallel tool calls improve throughput — execute independent tools simultaneously
- Semantic Kernel is the production .NET SDK — handles the loop, memory, and plugins
- Set iteration limits — without a cap, a confused agent can loop forever and drain your budget
- Multi-agent systems work well for complex tasks — specialised agents are more reliable than one general one
- Agents are powerful but add latency and cost — start with a simple prompt, escalate to an agent only when needed
What triggers a tool/function call in the AI agents loop?