OpenAI + .NET Blazor — Building a Smarter Knowledge Base Search with AI


In today’s fast-paced workplaces, employees need instant access to relevant information, but traditional keyword search often misses the mark. Blazor, a modern .NET framework, empowers developers to create dynamic web apps using C# and .NET, reducing reliance on JavaScript while delivering a seamless experience. By integrating OpenAI’s embedding models, we can shift from basic keyword matching to AI-driven semantic search, enabling users to find documents based on meaning, not just phrasing. This combination of Blazor’s rich front-end and OpenAI’s natural language processing provides a faster, smarter, and scalable search experience that enhances productivity and knowledge discovery.

Why OpenAI is the Right Solution

OpenAI embeddings understand meaning, not just words. By converting search queries and documents into vector representations, AI can identify the most relevant content — even when keywords don’t match exactly.

  • Understands natural language (e.g., “reset my VPN” → finds “VPN Troubleshooting Guide”)
  • Ranks results by meaning, not just frequency
  • Scales effortlessly with thousands of documents, no manual tagging required

In this article, we’ll show you how to build an AI-powered knowledge base search system using OpenAI and Blazor. By combining OpenAI embeddings with SQL Server (using a code-first approach), we’ll create an intelligent search engine that finds content based on meaning, not just keywords. You’ll learn to integrate Blazor for the front-end and SQL Server for storage, crafting a scalable and efficient solution.

Implementing OpenAI + .NET Blazor for AI-Powered Search

A company has a Blazor-based web portal where employees search for IT help documents. We’ll enhance its search functionality by integrating OpenAI embeddings with .NET.

What We’ll Build:
✔️ A Blazor front-end for users to search documents.
✔️ A .NET backend to process searches using OpenAI embeddings.
✔️ A search API that matches queries with the most relevant documents.

1️⃣ Set Up OpenAI API Key

The search relies on OpenAI's embedding model (text-embedding-ada-002) to turn queries and documents into vectors. As such, the first step is to get an API key from OpenAI.

  • Sign up at OpenAI (https://openai.com) and get an API key.
  • Store the API key in appsettings.json:
{
  "OpenAI": {
    "ApiKey": "your-api-key-here"
  }
}
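
With the key stored in appsettings.json, you can read it at runtime through the standard configuration APIs rather than hard-coding it. Here is a minimal sketch for a console project (it assumes the Microsoft.Extensions.Configuration.Json package is referenced; in an ASP.NET Core or Blazor Server app, builder.Configuration already exposes the same value):

using Microsoft.Extensions.Configuration;

// Build configuration from appsettings.json and read the OpenAI API key
var config = new ConfigurationBuilder()
    .SetBasePath(AppContext.BaseDirectory)
    .AddJsonFile("appsettings.json", optional: false)
    .Build();

string apiKey = config["OpenAI:ApiKey"];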

2️⃣ Install Required NuGet Packages

Ensure you have the necessary NuGet packages in your project:

  • OpenAI API SDK (for interacting with OpenAI)
  • Entity Framework Core (for database interaction)
Install-Package OpenAI_API
Install-Package Microsoft.EntityFrameworkCore.SqlServer
Install-Package Microsoft.EntityFrameworkCore.Tools
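
If you prefer the .NET CLI over the Package Manager Console, the equivalent commands (using the same package IDs as above) are:

dotnet add package OpenAI_API
dotnet add package Microsoft.EntityFrameworkCore.SqlServer
dotnet add package Microsoft.EntityFrameworkCore.Tools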

3️⃣ Create the Database Model

In this step, we define a C# class named DocumentEmbedding, which represents a document and its associated embedding vector. The model has three properties: Id (the primary key), DocumentName (the document's title), and EmbeddingVector (the embedding itself).

public class DocumentEmbedding
{
    public int Id { get; set; }
    public string DocumentName { get; set; }
    public List<float> EmbeddingVector { get; set; } // Stores the embedding as a list of floats
}

4️⃣ Create the DbContext

Next, we define a KnowledgeBaseContext class that extends DbContext to interact with a SQL Server database using Entity Framework Core.

using Microsoft.EntityFrameworkCore;
using System.Linq;

public class KnowledgeBaseContext : DbContext
{
    public DbSet<DocumentEmbedding> DocumentEmbeddings { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // Replace with your actual connection string to SQL Server
        optionsBuilder.UseSqlServer("Your-Connection-String-Here");
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Configure the EmbeddingVector to be stored as a table column
        modelBuilder.Entity<DocumentEmbedding>()
            .Property(e => e.EmbeddingVector)
            .HasConversion(
                v => string.Join(",", v),                        // Convert List<float> to a string for storage
                v => v.Split(',').Select(float.Parse).ToList()); // Convert back to List<float>
    }
}
  • DbSet<DocumentEmbedding>: Represents the DocumentEmbedding table in the database, allowing you to query and manipulate records.
  • OnConfiguring: Configures the database connection string to SQL Server.
  • OnModelCreating: Customizes how the EmbeddingVector (a List<float>) is stored. It converts the list to a comma-separated string for storage and back to a list when retrieving from the database.
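
If you plan to rely on change tracking to update embeddings later, note that EF Core compares converted collection values by reference unless you also supply a ValueComparer. Here is a hedged sketch of the same mapping with a comparer added (it assumes EF Core 5 or later and a using for Microsoft.EntityFrameworkCore.ChangeTracking):

modelBuilder.Entity<DocumentEmbedding>()
    .Property(e => e.EmbeddingVector)
    .HasConversion(
        v => string.Join(",", v),
        v => v.Split(',', StringSplitOptions.RemoveEmptyEntries).Select(float.Parse).ToList(),
        new ValueComparer<List<float>>(
            (a, b) => a.SequenceEqual(b),                                 // Compare element by element
            v => v.Aggregate(0, (hash, f) => HashCode.Combine(hash, f)),  // Hash over the values
            v => v.ToList()));                                            // Snapshot the list when tracking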

5️⃣ Generate the Database Migration

Once the model and DbContext are set up, we use Entity Framework to generate a migration and update the database. This will create the necessary tables in SQL Server based on your DbContext.

Run the following commands from the Package Manager Console:

Add-Migration InitialCreate
Update-Database
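
If you work from a terminal rather than the Package Manager Console, the .NET CLI equivalents are (assuming the dotnet-ef tool is installed):

dotnet tool install --global dotnet-ef
dotnet ef migrations add InitialCreate
dotnet ef database update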

6️⃣ Generate the Embedding and Store It in SQL Server

Now that we have our database model and context, let’s integrate the code for calling OpenAI’s API and storing the embeddings in the SQL Server database.

using OpenAI_API;
using OpenAI_API.Embedding;
using Microsoft.EntityFrameworkCore;

class Program
{
    static async Task Main(string[] args)
    {
        var api = new OpenAIAPI("<Your-OpenAI-API-Key>");
        var embeddingRequest = new EmbeddingRequest("text-embedding-ada-002", new[] { "VPN Troubleshooting Guide" });

        // Step 1: Generate the embedding from OpenAI
        var response = await api.Embeddings.CreateEmbeddingAsync(embeddingRequest);
        var documentVector = response.Data[0].Embedding; // Get the embedding vector

        // Step 2: Save the embedding in SQL Server using Entity Framework
        using (var context = new KnowledgeBaseContext())
        {
            // Step 2a: Create a new DocumentEmbedding instance
            var embeddingRecord = new DocumentEmbedding
            {
                DocumentName = "VPN Troubleshooting Guide",
                EmbeddingVector = documentVector.ToList() // Store the vector as List<float>
            };

            // Step 2b: Add it to the database
            context.DocumentEmbeddings.Add(embeddingRecord);
            await context.SaveChangesAsync(); // Save changes to the database
        }

        Console.WriteLine("Embedding successfully saved to database!");
    }
}

✔️ Generates an embedding using OpenAI

  • Initializes the OpenAI API using the provided API key.
  • Sends a request to generate an embedding for the text “VPN Troubleshooting Guide” using the “text-embedding-ada-002” model.
  • Retrieves the embedding vector from the API response.

✔️ Saves the embedding to SQL Server using Entity Framework

  • Creates a new DocumentEmbedding object with the document name and embedding vector.
  • Adds the DocumentEmbedding instance to the database using the KnowledgeBaseContext (which is a DbContext).
  • Saves the changes to the database.
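
The same pattern extends to a whole document set. Below is a hedged sketch that embeds several documents in one pass, reusing the OpenAI_API client and the KnowledgeBaseContext shown above (the document names are illustrative placeholders):

var api = new OpenAIAPI("<Your-OpenAI-API-Key>");
var documentNames = new[] { "VPN Troubleshooting Guide", "Password Reset Policy", "Printer Setup Instructions" };

using (var context = new KnowledgeBaseContext())
{
    foreach (var name in documentNames)
    {
        // Generate an embedding for each document title (or its full text, if available)
        var request = new EmbeddingRequest("text-embedding-ada-002", new[] { name });
        var result = await api.Embeddings.CreateEmbeddingAsync(request);

        context.DocumentEmbeddings.Add(new DocumentEmbedding
        {
            DocumentName = name,
            EmbeddingVector = result.Data[0].Embedding.ToList()
        });
    }

    // One SaveChanges call persists all new rows together
    await context.SaveChangesAsync();
}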

7️⃣ Setting up the Blazor Project

Next, we start by creating a Blazor Server or Blazor WebAssembly application in Visual Studio.

  • Open Visual Studio and create a new Blazor WebAssembly or Blazor Server project.
  • Install Entity Framework Core and OpenAI API NuGet packages in the project if not already done.
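
Because the SearchController in step 9 receives KnowledgeBaseContext through constructor injection and the search component uses an injected HttpClient, both need to be registered at startup. Here is a minimal Program.cs sketch for a Blazor Server host (the base address below is a placeholder for your own application URL):

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddRazorPages();
builder.Services.AddServerSideBlazor();
builder.Services.AddControllers();                      // Hosts the api/search endpoint
builder.Services.AddDbContext<KnowledgeBaseContext>();  // Connection string comes from OnConfiguring
builder.Services.AddScoped(sp => new HttpClient
{
    BaseAddress = new Uri("https://localhost:5001")     // Placeholder: use your app's address
});

var app = builder.Build();

app.MapBlazorHub();
app.MapControllers();
app.MapFallbackToPage("/_Host");

app.Run();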

8️⃣ Create a Search Component in Blazor

Let’s create a Blazor component where users can input search terms and retrieve relevant documents based on the semantic similarity of their search query to the stored embeddings.

@page "/search"
@using YourProjectNamespace
@inject HttpClient Http
@inject NavigationManager Navigation

<h3>Search Knowledge Base</h3>

<div>
<label for="searchQuery">Enter Search Term:</label>
<input type="text" id="searchQuery" @bind="searchQuery" placeholder="Search..." />
<button @onclick="Search">Search</button>
</div>

@if (isLoading)
{
<p>Loading...</p>
}
else
{
@if (searchResults != null && searchResults.Any())
{
<ul>
@foreach (var result in searchResults)
{
<li>
<strong>@result.DocumentName</strong>
<p>@result.Snippet</p>
</li>
}
</ul>
}
else
{
<p>No results found.</p>
}
}

@code {
private string searchQuery = string.Empty;
private bool isLoading = false;
private List<DocumentSearchResult> searchResults;

private async Task Search()
{
if (string.IsNullOrEmpty(searchQuery)) return;

isLoading = true;

// Step 1: Get the embedding for the search query from OpenAI API
var queryEmbedding = await GetEmbeddingFromOpenAI(searchQuery);

// Step 2: Send the query embedding to the server for similarity search
searchResults = await Http.PostAsJsonAsync<List<DocumentSearchResult>>("api/search", queryEmbedding);

isLoading = false;
}

private async Task<List<float>> GetEmbeddingFromOpenAI(string query)
{
// Send a request to OpenAI API to get the embedding for the search query
var api = new OpenAIAPI("<Your-OpenAI-API-Key>");
var embeddingRequest = new EmbeddingRequest("text-embedding-ada-002", new[] { query });

var response = await api.Embeddings.CreateEmbeddingAsync(embeddingRequest);
return response.Data[0].Embedding.ToList();
}

public class DocumentSearchResult
{
public string DocumentName { get; set; }
public string Snippet { get; set; } // Excerpt from the document for display
}
}

✔️ Input for Search Query

  • The user inputs a search query in the text box (<input type="text" id="searchQuery" @bind="searchQuery" />).
  • When the search button is clicked (@onclick="Search"), it triggers the Search method.

✔️ Get Embedding for Search Query

  • The Search method first sends the search query to OpenAI to get its embedding vector (GetEmbeddingFromOpenAI method). 
  • The embedding represents the semantic meaning of the search query.

✔️ Search on the Backend

  • After obtaining the embedding, the search query embedding is sent to an API endpoint (api/search), where the back-end will compare the query’s embedding with the embeddings stored in the SQL Server database to find the most relevant documents.

✔️ Display Results

  • The search results are displayed as a list of documents with their names and snippets (short excerpts from the document). 
  • If no results are found, it shows a “No results found” message.

9️⃣ Implementing the Back-End Search API

Now we need to implement the back-end API that will handle the comparison of the search query embedding with the embeddings stored in SQL Server and return the most similar documents.

using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;
using YourProjectNamespace;
using System.Linq;

[Route("api/[controller]")]
[ApiController]
public class SearchController : ControllerBase
{
    private readonly KnowledgeBaseContext _context;

    public SearchController(KnowledgeBaseContext context)
    {
        _context = context;
    }

    [HttpPost]
    public async Task<ActionResult<List<DocumentSearchResult>>> Search([FromBody] List<float> queryEmbedding)
    {
        // Step 1: Retrieve all stored document embeddings from the database
        var documentEmbeddings = await _context.DocumentEmbeddings.ToListAsync();

        // Step 2: Find the most similar documents using cosine similarity
        var results = documentEmbeddings
            .Select(doc => new
            {
                doc.DocumentName,
                doc.EmbeddingVector,
                SimilarityScore = CosineSimilarity(doc.EmbeddingVector, queryEmbedding)
            })
            .OrderByDescending(x => x.SimilarityScore)
            .Take(5) // Return top 5 similar documents
            .Select(x => new DocumentSearchResult
            {
                DocumentName = x.DocumentName,
                Snippet = GetSnippet(x.DocumentName) // You could create more advanced snippet generation logic
            })
            .ToList();

        return Ok(results);
    }

    private float CosineSimilarity(List<float> vectorA, List<float> vectorB)
    {
        // Compute cosine similarity between two vectors (A and B)
        var dotProduct = vectorA.Zip(vectorB, (a, b) => a * b).Sum();
        var magnitudeA = (float)Math.Sqrt(vectorA.Sum(x => x * x));
        var magnitudeB = (float)Math.Sqrt(vectorB.Sum(x => x * x));

        return dotProduct / (magnitudeA * magnitudeB);
    }

    private string GetSnippet(string documentName)
    {
        // Return a snippet from the document for preview purposes
        // This could be an excerpt or summary logic from the stored document
        return $"Snippet from {documentName}...";
    }
}

public class DocumentSearchResult
{
    public string DocumentName { get; set; }
    public string Snippet { get; set; } // Excerpt from the document for display
}

✔️ Search API Endpoint

  • The Search method accepts the query embedding sent from the Blazor front-end ([FromBody] List<float> queryEmbedding). 
  • It retrieves all stored embeddings from the database.

✔️ Cosine Similarity Calculation

  • It calculates the cosine similarity between the stored document embeddings and the search query embedding. 
  • Cosine similarity is a common method for comparing the similarity between two vectors.
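  • As a quick worked example (with illustrative two-dimensional vectors rather than real embeddings): for A = (1, 0) and B = (0.7, 0.7), the dot product is 0.7, |A| = 1, and |B| ≈ 0.99, so the similarity is about 0.71; identical vectors score 1.0 and unrelated (orthogonal) vectors score 0.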

✔️ Return Most Relevant Results

  • After calculating the similarity score, the results are sorted by similarity and the top 5 documents are returned. 
  • We can return a “snippet” (an excerpt) of the document to provide context to the user.

🔟 Running the Application

  • When the user inputs a search query in the Blazor front-end and hits search, the front-end sends the query to the OpenAI API to generate its embedding.
  • The Blazor front-end then sends the generated embedding to the API endpoint (api/search).
  • The back-end compares the query embedding with the embeddings stored in SQL Server, computes the cosine similarity, and returns the most relevant documents.
  • Finally, the Blazor front-end displays the search results to the user.

By integrating OpenAI embeddings with a Blazor front-end and SQL Server back-end, we’ve built a powerful, semantic search system that understands and processes user queries based on meaning rather than simple keyword matching. This solution not only improves the user experience by providing more accurate search results but also showcases how AI and modern web technologies can work together to enhance enterprise applications. The ability to leverage semantic search in your knowledge base can unlock new efficiencies, improve content discovery, and enable users to find information more intuitively.