I’ve been exploring AI tools ever since Semantic Kernel was released in 2023. It’s a lightweight, open-source framework that helps developers build AI agents and connect them to their own C#, Python, or Java applications. While it was fun to play with demos and plugins, I wanted to build something more than just a proof of concept—something that could actually help people on my team.
That opportunity came during an Operations Team meeting at Epitec.
The Challenge: Bid Breakdown Process
We were reviewing our Bid Breakdown Process, which supports requests for rate increases for consultants. It’s an important step to ensure that a raise falls within the customer-approved margin.
But the process was painful.
It involved several manual steps: switching between systems, opening each consultant’s deal history one by one, and copying data into spreadsheets. Even determining whether a consultant was eligible for a raise required a lot of digging. The user interface wasn’t designed for this type of review, and the manual work made it easy to miss something important.

Connecting the Dots with AI
At the time, I was experimenting with function calling in Semantic Kernel. This allows you to build plugins that the AI can call based on what the user asks. So I created:
- A plugin that connects to our Applicant Tracking System (ATS) database
- A function to check placement and deal history
- A tool to generate bid breakdown documents
- And finally, a Copilot interface to tie it all together
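The core idea behind function calling is simple: you expose named, described functions, and the model decides which one to invoke based on the user’s question. Here’s a rough sketch of that idea in Python — not the actual Semantic Kernel API, and the function names and stubbed return values are hypothetical stand-ins for the real ATS-backed plugins:

```python
# Illustrative sketch of the function-calling idea (not the real
# Semantic Kernel API): the model sees each function's name and
# description, then picks one to call based on the user's question.

def check_raise_eligibility(consultant_id: str) -> str:
    """Check placement and deal history to see if a raise is possible."""
    # The real plugin queries the ATS database; here it's stubbed.
    return f"Consultant {consultant_id}: eligible (within approved margin)"

def generate_bid_breakdown(consultant_id: str) -> str:
    """Generate the bid breakdown document for a consultant."""
    return f"Bid breakdown generated for consultant {consultant_id}"

# The "plugin" is essentially a registry the model can choose from.
PLUGIN = {
    "check_raise_eligibility": check_raise_eligibility,
    "generate_bid_breakdown": generate_bid_breakdown,
}

def dispatch(function_name: str, **kwargs) -> str:
    """Invoke whichever function the model selected."""
    return PLUGIN[function_name](**kwargs)
```

In the real Copilot, Semantic Kernel handles the registry and dispatch; the developer’s job is mostly writing well-named, well-described functions.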
This allowed team members to simply ask questions like:
“Is this consultant eligible for a raise?”
“When was their last pay increase?”
“Generate the bid breakdown spreadsheet.”
Instead of jumping from app to app, the Copilot handles everything in one place.

What I Learned Along the Way
Building this Copilot wasn’t smooth sailing, but that’s where the learning happened:
- Model Mismatch: I started with GPT-4o, then switched to GPT-4o mini to reduce cost, only to find it didn’t handle function calling well. I switched back. Fortunately, Semantic Kernel makes swapping models easy.
- Prompt Parameter Bug: I tested a prompt in Azure OpenAI Studio, but forgot to include a key parameter when I pasted it back into my code. That small oversight caused a huge spike in token usage until I realized what was wrong.
- Observability Helps: One of the best things about Semantic Kernel is that you can monitor what’s happening: tokens used, which plugins were called, and more. That’s how I caught the issue above.
- Plugin Overload: The more plugins I added, the less consistent the results became, especially when plugin functions had similar names. Lesson learned: keep it lean and well-named.
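The observability point deserves a concrete illustration. Semantic Kernel has its own telemetry hooks; the toy call log below (hypothetical, with made-up token numbers) just shows the idea of recording each call’s token usage so a sudden spike stands out:

```python
# Illustrative only: a tiny call log, like the telemetry that surfaced
# the token spike. The numbers and class are made up for this sketch.
from dataclasses import dataclass, field

@dataclass
class CallLog:
    entries: list = field(default_factory=list)

    def record(self, function_name: str, prompt_tokens: int, completion_tokens: int):
        self.entries.append({
            "function": function_name,
            "total_tokens": prompt_tokens + completion_tokens,
        })

    def total_tokens(self) -> int:
        return sum(e["total_tokens"] for e in self.entries)

log = CallLog()
log.record("check_raise_eligibility", prompt_tokens=350, completion_tokens=40)
log.record("generate_bid_breakdown", prompt_tokens=4200, completion_tokens=80)

# One call using ten times the tokens of its neighbors points straight
# at the prompt with the missing parameter.
```

Without a log like this, a misbehaving prompt just looks like a slow, expensive Copilot.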
Why This Matters
If I hadn’t been experimenting earlier, I wouldn’t have been able to connect the dots when this use case came up.
Now that I’ve delivered my first Copilot that actually improves a business process, finding the next use case is already easier.
Tech Stack
If you’re curious about the technical side:
- Platform: .NET using the .NET AI Template in Visual Studio
- AI Framework: Semantic Kernel, which I added to the template to take advantage of its flexible plugin system
- Hosting: Azure
Quick Glossary (Our Team’s Lingo)
- Placement: A successful match between a consultant and a customer. It includes details like start and end dates and tracks assignments.
- Deal: A record linked to a placement that includes pay rate, bill rate, PTO, holidays, insurance, and more. Placements can have multiple deals over time when something changes.
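In code terms, the relationship between the two looks roughly like this. This is a hypothetical sketch, not our actual ATS schema; the field names and rates are invented for illustration:

```python
# Hypothetical sketch of the Placement/Deal relationship described above
# (not the real ATS schema): one placement can hold many deals over time,
# and the most recent deal carries the current rates.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Deal:
    effective_date: date
    pay_rate: float   # what the consultant is paid
    bill_rate: float  # what the customer is billed

@dataclass
class Placement:
    consultant: str
    deals: list[Deal] = field(default_factory=list)

    def current_deal(self) -> Deal:
        """The deal in effect is the one with the latest effective date."""
        return max(self.deals, key=lambda d: d.effective_date)

placement = Placement("Jane Doe", deals=[
    Deal(date(2023, 1, 9), pay_rate=55.0, bill_rate=80.0),
    Deal(date(2024, 3, 4), pay_rate=60.0, bill_rate=85.0),  # a later raise
])
```

Questions like “When was their last pay increase?” reduce to walking this deal history — exactly the digging the Copilot now does for us.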