AI Agents — What They Are and How Semantic Kernel Implements Them
Understand what AI agents are, how they differ from chatbots, and how to build agents with Semantic Kernel: plugins, planners, memory, and multi-agent orchestration.
AI Agents
An AI agent is an LLM that can take actions: call APIs, search the web, write files, query databases. A chatbot answers questions. An agent does things.
Chatbot vs Agent
Chatbot:
User: "What are my pending orders?"
Bot: "I don't have access to your orders. Please check your account page."
Agent:
User: "What are my pending orders?"
Agent → calls get_orders(userId) tool
Agent → reads result: [Order #1234, Order #5678]
Agent: "You have 2 pending orders: #1234 (ships tomorrow) and #5678 (processing)."
The difference is tool use — the agent calls real systems and uses the results to answer.
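The loop behind that exchange can be sketched without any SDK: the model emits a structured tool call, your code executes it, and the raw result goes back to the model, which phrases the final answer. Everything here — the `tools` table, `get_orders`, the hard-coded call — is hypothetical scaffolding for illustration, not a real API:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical tool registry: in a real agent this would hit your order service.
var tools = new Dictionary<string, Func<string, string>>
{
    ["get_orders"] = userId => $"[Order #1234, Order #5678] for user {userId}"
};

// Pretend the model asked for: get_orders("U-001")
var (toolName, toolArg) = ("get_orders", "U-001");

// The host executes the call; the result is handed back to the model,
// which turns it into the natural-language reply the user sees.
var toolResult = tools[toolName](toolArg);
Console.WriteLine($"Tool result fed back to the model: {toolResult}");
```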
Semantic Kernel
Semantic Kernel is Microsoft's open-source SDK for building AI-powered apps in .NET (and Python/Java). It provides:
- Kernel — the central object; connects LLM + plugins + memory
- Plugins — collections of functions the AI can call
- Planners — the AI decides which functions to call and in what order
- Memory — vector search over your own data
dotnet add package Microsoft.SemanticKernel
Building a Simple Agent
// Setup
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(
        modelId: "gpt-4o",
        apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!)
    .Build();
// Define a plugin — functions the AI can call
public class OrderPlugin(IOrderRepository orders)
{
    [KernelFunction("get_pending_orders")]
    [Description("Gets a list of pending orders for a user")]
    public async Task<string> GetPendingOrdersAsync(
        [Description("The user ID")] string userId)
    {
        var pending = await orders.GetPendingAsync(userId);
        return JsonSerializer.Serialize(pending);
    }

    [KernelFunction("cancel_order")]
    [Description("Cancels an order by ID. Returns success or error message.")]
    public async Task<string> CancelOrderAsync(
        [Description("The order ID to cancel")] string orderId)
    {
        var result = await orders.CancelAsync(orderId);
        return result ? $"Order {orderId} cancelled successfully" : $"Failed to cancel {orderId}";
    }
}

// Register the plugin
kernel.Plugins.AddFromObject(new OrderPlugin(orderRepo), "Orders");
Running the Agent
// Chat with auto-invocation of tools
var chat = kernel.GetRequiredService<IChatCompletionService>();

var history = new ChatHistory();
history.AddSystemMessage(
    "You are a helpful order management assistant. " +
    "Use the available tools to answer questions about orders.");
history.AddUserMessage("Show me my pending orders for user U-001");

var settings = new OpenAIPromptExecutionSettings
{
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions // agent mode
};

var reply = await chat.GetChatMessageContentAsync(history, settings, kernel);
Console.WriteLine(reply.Content);
// Output: "You have 2 pending orders: ..."
With AutoInvokeKernelFunctions, the model decides when to call tools, calls them, gets the results, and incorporates them into the response — all automatically.
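Because auto-invocation appends the tool round-trip to the ChatHistory you passed in, you can inspect afterwards which functions the model actually called. A sketch, assuming the history object above has been through a live call (FunctionCallContent is SK's representation of a model-issued tool call):

```csharp
using System;
using System.Linq;
using Microsoft.SemanticKernel;

// Walk the accumulated history and print every function call the model issued.
foreach (var message in history)
{
    foreach (var call in message.Items.OfType<FunctionCallContent>())
        Console.WriteLine($"Model called: {call.PluginName}.{call.FunctionName}");
}
```

Handy for debugging: if the model answered without calling `get_pending_orders`, you'll see it here.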
Semantic (Memory) Plugins
// Add vector memory for searching your own data
var memory = new SemanticTextMemory(
    new AzureAISearchMemoryStore(endpoint, apiKey),
    new AzureOpenAITextEmbeddingGenerationService(deploymentName, endpoint, apiKey)
);

// Store documents
await memory.SaveInformationAsync(
    collection: "docs",
    id: "policy-001",
    text: "Returns are accepted within 30 days of purchase.",
    description: "Return policy"
);
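// You can also query the store directly — a sketch against
// ISemanticTextMemory.SearchAsync (the query text and thresholds here are
// illustrative; this needs the live services configured above):
await foreach (var hit in memory.SearchAsync(
    collection: "docs",
    query: "Can I return an item after three weeks?",
    limit: 3,
    minRelevanceScore: 0.6))
{
    Console.WriteLine($"{hit.Relevance:F2}: {hit.Metadata.Text}");
}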
// The model can query memory via a plugin
kernel.ImportPluginFromObject(
    new TextMemoryPlugin(memory), "memory");
Multi-Step Planning
// Handlebars planner — AI generates a plan as Handlebars template
var planner = new HandlebarsPlanner();
var goal = "Summarize the top 3 pending orders for user U-001 and email a report to manager@company.com";

var plan = await planner.CreatePlanAsync(kernel, goal);
Console.WriteLine(plan); // shows the generated plan

var result = await plan.InvokeAsync(kernel);
Console.WriteLine(result);
Multi-Agent Orchestration
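The orchestration below terminates via an ApprovalTerminationStrategy, which is not a built-in SK type — group-chat samples define their own. A minimal sketch, subclassing the Agents framework's TerminationStrategy so the chat stops once an agent's last message contains "approved":

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.Agents.Chat;

// Sketch: end the group chat when any agent signals approval.
sealed class ApprovalTerminationStrategy : TerminationStrategy
{
    protected override Task<bool> ShouldAgentTerminateAsync(
        Agent agent,
        IReadOnlyList<ChatMessageContent> history,
        CancellationToken cancellationToken) =>
        Task.FromResult(
            history[^1].Content?.Contains("approved", StringComparison.OrdinalIgnoreCase) ?? false);
}
```

The string match is deliberately crude; production strategies usually check which agent spoke and cap the number of turns as well.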
// Define specialized agents
var orderAgent = new ChatCompletionAgent
{
    Kernel = kernel,
    Name = "OrderAgent",
    Instructions = "You manage orders. Use the Orders plugin.",
};

var emailAgent = new ChatCompletionAgent
{
    Kernel = kernel,
    Name = "EmailAgent",
    Instructions = "You send emails. Use the Email plugin.",
};

// Orchestrate with AgentGroupChat
var groupChat = new AgentGroupChat(orderAgent, emailAgent)
{
    ExecutionSettings = new AgentGroupChatSettings
    {
        TerminationStrategy = new ApprovalTerminationStrategy()
    }
};

// Seed the conversation with the task, then let the agents take turns
groupChat.AddChatMessage(new ChatMessageContent(
    AuthorRole.User, "Email a report of pending orders for user U-001."));

await foreach (var message in groupChat.InvokeAsync())
    Console.WriteLine($"{message.AuthorName}: {message.Content}");
Agent vs RAG vs Fine-tuning
RAG (Retrieval-Augmented Generation):
• Query a vector store, inject results into the prompt
• Best for: Q&A over static documents, knowledge bases
Agent:
• LLM calls tools dynamically based on the task
• Best for: multi-step tasks, real-time data, taking actions
Fine-tuning:
• Train the model on your domain data
• Best for: specific tone/format, specialized knowledge, cheaper inference
• Expensive and rarely necessary
Key Takeaways
- An agent is an LLM + tools — it takes actions, not just answers questions
- Semantic Kernel plugins are plain C# methods decorated with [KernelFunction] — no framework magic
- AutoInvokeKernelFunctions turns on the tool-call loop — model calls tools, gets results, responds
- Vector memory lets agents search your own data semantically, not just by keyword
- Start with a single agent + 2-3 tools; add multi-agent orchestration only when complexity demands it