Vibe Coding an AI-Powered Portfolio Assistant: How I Built an Interactive 'Ask About Me' Tool That Actually Works
TL;DR
I built an AI assistant for my portfolio that lets visitors ask specific questions like "Will Stephen be good for VP of Web Strategy?" instead of forcing them to dig through static content. Using "Vibe Coding" (collaborative development with AI tools like V0, Claude, and ChatGPT), I went from idea to deployed tool in a weekend. The result: more meaningful connections and conversations that actually address what people want to know. This post shows you exactly how I did it.

Introduction: The Future of Personal Branding is Interactive
Picture this: A hiring manager lands on your portfolio at 11 PM, three coffees deep into their candidate review session. Instead of scrolling through another static resume, they type: "Will Stephen be a good fit for the role of VP of Web Strategy?" and get an intelligent, contextual response that directly addresses their specific needs and concerns.
This isn't science fiction; it's what I built for my own portfolio using what I call "Vibe Coding." This approach leverages AI tools like Vercel's V0, Claude, and ChatGPT to rapidly prototype and iterate, turning a weekend idea into a professional tool that's already changing how potential clients and employers interact with my work.
In this post, I'll take you through both the strategic thinking and technical implementation behind building an AI assistant that knows you better than your LinkedIn profile ever could. Whether you're a developer, designer, or any professional looking to stand out, you'll learn how to create your own interactive portfolio experience and why this approach is becoming essential in our AI-driven world.
81% of developers agree that increased productivity is the biggest benefit of AI tools.
Source: Statista
The Problem with Traditional Portfolios
Let's be honest: traditional portfolios are basically digital brochures. Beautiful, perhaps, but fundamentally passive. They showcase your work, list your skills, and tell your story, but only in the way you've pre-scripted it.
The reality is that every recruiter, client, or potential collaborator comes to your portfolio with different contexts and questions:
- A startup CEO wants to know if you can handle the chaos of rapid scaling
- An enterprise hiring manager needs to understand your experience with compliance and process
- A creative agency is curious about how you balance innovation with client constraints
- A technical team wants to dig into your problem-solving approach
Static content can't answer these nuanced questions. Sure, they might find relevant project examples if they dig deep enough, but there's no guarantee your carefully curated case studies align with their specific concerns.
Even worse, traditional portfolios miss the opportunity for discovery. That VP role you never considered might actually be perfect for you, but the hiring manager will never know to ask about your leadership philosophy or change management experience if those aren't front and center on your homepage.
The result? Missed connections, misaligned expectations, and a lot of talented people getting overlooked because their static portfolio didn't tell the right story to the right person at the right time.
Enter the AI Portfolio Assistant
Now imagine a different scenario. That same hiring manager lands on your portfolio and can immediately start a conversation:
"Will Stephen be a good fit for VP of Web Strategy? Summarize it in two to three sentences."
Stephen Andekian has over 15 years of experience in digital strategy and marketing, particularly within the cybersecurity and SaaS industries. His leadership roles, including as Interim CMO at Trustwave, demonstrate his ability to drive significant growth, such as increasing marketing-generated opportunities from $410M to $605M. Additionally, his expertise in AI-driven strategies, web infrastructure modernization, and conversion optimization positions him as a strong candidate for the VP of Web Strategy role.
Overall, Stephen's proven track record in transforming digital ecosystems and enhancing user engagement makes him a compelling fit for this position.
👨‍🚀 Spaceman AI Assistant
"Briefly explain how Stephen approaches digital marketing leadership?"
Stephen Andekian approaches digital marketing leadership by:
Driving Digital Transformation: He emphasizes the integration of advanced technologies and analytics to enhance marketing effectiveness, as demonstrated during his tenure at Trustwave where he guided significant digital transformation initiatives.
Fostering Data-Driven Decision Making: Stephen prioritizes the use of marketing analytics for better tracking and decision-making, ensuring that strategies are informed by data to optimize performance and achieve growth.
👨‍🚀 Spaceman AI Assistant
The Vibe Coding Approach: AI as Your Development Partner
Before we dive into the technical details, let me introduce you to "Vibe Coding": a development approach where you give an LLM general, high-level direction and it produces the precise, working code. Rather than manually writing every line yourself, vibe coding means instructing AI agents through natural-language prompts, focusing your time and energy on the creative aspects of app development instead of getting stuck in technical details.
Traditional coding often follows a linear path: research, plan, code, test, iterate. Vibe Coding is more like a jazz improvisation session. You start with a rough idea, bounce it off AI tools to refine the concept, use AI to rapidly prototype interfaces, collaborate with AI to solve technical challenges, and iterate in real-time based on the feedback loop between your creative vision and AI capabilities.
For this project, my AI toolkit consisted of:
- Vercel's V0: For rapid UI prototyping and component generation
- Claude: For architectural planning, code review, and complex problem-solving
- ChatGPT: For brainstorming, content strategy, and prompt engineering
- OpenAI Assistant API: As the runtime engine powering the actual portfolio assistant
The magic happens in the collaboration. V0 helps me visualize ideas instantly, turning rough concepts into polished components in minutes. Claude serves as my technical advisor, helping me think through system architecture and catch potential issues before they become problems. ChatGPT excels at creative problem-solving and helping me refine the user experience.
This isn't about replacing human creativity; it's about amplifying it. I provide the vision, taste, and strategic thinking. AI provides the rapid iteration, technical knowledge, and ability to explore multiple solutions simultaneously.
The result? What would traditionally take weeks of development happened over a weekend. But more importantly, the final product is more polished and thoughtful than what I could have built alone, because I had access to AI feedback loops throughout the entire process.
Building Your Own: Step-by-Step Implementation
Now for the fun part: let's build this thing. I'll walk you through the entire implementation process, sharing the actual code, design decisions, and lessons learned along the way.
The beauty of the Vibe Coding approach is that you don't need to be an expert in every technology we're using. AI tools will help fill in the gaps, suggest improvements, and catch mistakes along the way. Your job is to provide direction, make design decisions, and ensure the final product serves your goals.
Setting Up Your OpenAI Assistant
Before we start coding, we need to create the AI brain that will power our portfolio assistant. This involves setting up an OpenAI Platform account, creating an assistant, and training it with your professional information.
Creating Your OpenAI Platform Account
First, head to OpenAI Platform and create an account if you don't already have one. This is separate from ChatGPT Plus; the Platform is specifically for developers building applications with OpenAI's APIs.
Once you're logged in, you'll need to add credits to your account. Navigate to the billing section and add at least $5-10 to get started. The good news is that chat interactions are relatively inexpensive—even with moderate usage, your assistant will cost pennies per conversation.
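To put "pennies" in perspective, here's a back-of-envelope estimate. The pricing figures are assumptions based on gpt-4o-mini's published rates at the time of writing; check OpenAI's pricing page for current numbers before relying on them:

```ts
// Rough cost per conversation turn (ASSUMED gpt-4o-mini rates:
// ~$0.15 per 1M input tokens, ~$0.60 per 1M output tokens)
const inputTokens = 1_500 // prompt + retrieved file context (illustrative guess)
const outputTokens = 600  // matches the max_output_tokens cap used later in this post

const cost =
  (inputTokens / 1_000_000) * 0.15 +
  (outputTokens / 1_000_000) * 0.60

console.log(`~$${cost.toFixed(4)} per turn`) // ~$0.0006 -- a fraction of a cent
```

Even at hundreds of conversations per month, that's well under a dollar.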
Creating Your AI Assistant
Navigate to OpenAI Assistants and click "Create Assistant." This is where the magic begins—you're about to create an AI that knows you better than most humans do.
Assistant Configuration:
Give your assistant a name that reflects its purpose. I named mine "Mission Control Agent for Spaceman Media" to align with my space-themed brand.
System Instructions - The Critical Foundation:
The system instructions are arguably the most important part of your assistant. They define how your AI will behave, what information it can access, and how it will respond to questions. Here's what I used:
```
You are the Mission Control Agent for Spaceman Media, representing the digital strategy expertise of Stephen Andekian.

CRITICAL INSTRUCTION: You MUST ONLY use information from the uploaded files and documents in your knowledge base. Do NOT use any information from your training data or general knowledge. If you cannot find the answer in the uploaded files, explicitly state that the information is not available in your knowledge base.

When answering:
1. ONLY reference information that can be found in the uploaded documents
2. Always cite which document or file you're referencing when possible
3. If asked about something not covered in your files, respond with: "I don't have information about that in my knowledge base. Please provide relevant documents or ask about topics covered in the uploaded files."
4. Do not make assumptions or fill in gaps with general knowledge
5. Be transparent about the limitations of your knowledge base

Communicate with authority, precision, and strategic vision based solely on the provided documents. Provide clear, concise, and actionable answers that reflect the expertise documented in your knowledge base.
```
Why These Instructions Matter:
The critical instruction to only use uploaded files is essential for several reasons:
- Accuracy: Prevents the AI from hallucinating or providing outdated information about you
- Control: Ensures all responses are based on information you've explicitly provided
- Transparency: Makes it clear when the AI doesn't have relevant information
- Professional Credibility: Avoids embarrassing situations where the AI makes incorrect claims about your background
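If you prefer code to clicking through the dashboard, the same assistant can be created with the openai Node SDK. This is a minimal sketch, not the exact script I used; the Assistants API lives in the SDK's beta namespace, and the file name here is hypothetical:

```ts
// create-assistant.ts (hypothetical helper script)
import OpenAI from "openai";
import fs from "node:fs";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  const assistant = await openai.beta.assistants.create({
    name: "Mission Control Agent for Spaceman Media",
    model: "gpt-4o-mini",
    tools: [{ type: "file_search" }], // enables retrieval from an attached vector store
    // Reuse the exact system instructions shown above
    instructions: fs.readFileSync("./system-instructions.txt", "utf8"),
  });
  console.log("Assistant created:", assistant.id);
}

main();
```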
Building Your Knowledge Base
This is where you transform your professional experience into AI-readable format. You'll need to upload files that comprehensively cover:
Essential Documents to Create:
- Professional Resume/CV: Your complete work history with detailed descriptions
- Project Portfolio: Detailed case studies of your best work
- Leadership Philosophy: How you approach management, teamwork, and decision-making
- Technical Expertise: Specific skills, technologies, and methodologies you use
- Career Narrative: The story of your professional journey and what drives you
- Values and Working Style: What makes you unique as a collaborator
Pro Tips for Knowledge Base Creation:
- Write in a conversational style that matches how you'd explain things to a colleague
- Include specific examples and metrics where possible
- Cover different aspects of your personality and approach, not just accomplishments
- Think about the questions people actually ask in interviews or client calls
- Include information about what you're looking for in opportunities
The goal is to create a comprehensive picture of who you are professionally, so the AI can handle everything from "What's Stephen's experience with remote teams?" to "How does he approach strategic planning?"
Once you've uploaded your files, test your assistant directly in the OpenAI interface to make sure it's responding appropriately and staying within the bounds of your knowledge base.
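You can also script the upload step. Here's a hedged sketch using the openai Node SDK: in recent SDK versions the vector store methods live at openai.vectorStores (older releases used openai.beta.vectorStores), and the document file names below are examples, not prescriptions:

```ts
// build-knowledge-base.ts (hypothetical helper script)
import OpenAI from "openai";
import fs from "node:fs";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function buildKnowledgeBase(assistantId: string) {
  // Create a vector store to hold the knowledge-base documents
  const vectorStore = await openai.vectorStores.create({
    name: "Spaceman Media Knowledge Base",
  });

  // Upload the documents and poll until indexing completes
  await openai.vectorStores.fileBatches.uploadAndPoll(vectorStore.id, {
    files: [
      fs.createReadStream("./docs/resume.md"),
      fs.createReadStream("./docs/leadership-philosophy.md"),
      fs.createReadStream("./docs/case-studies.md"),
    ],
  });

  // Point the assistant's file_search tool at the new store
  await openai.beta.assistants.update(assistantId, {
    tool_resources: { file_search: { vector_store_ids: [vectorStore.id] } },
  });

  // Save this ID -- the API route later reads it from
  // SPACEMAN_ASSISTANT_VECTOR_STORE_ID
  return vectorStore.id;
}
```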
Next Step: With your AI assistant configured and trained, we're ready to build the interface that will bring it to life on your portfolio.
Vibe Coding the UI First: Rapid Prototyping with V0
Before writing any code, I started with the interface design using V0. This is where Vibe Coding really shines: instead of wireframing and designing mockups, I could describe what I wanted and iterate in real time.
Here's how the conversation with V0 unfolded:
"Create a AI Chat interface to use on spaceman.media https://www.spaceman.media/ where users can enter questions about the developer and gain insights on how they would be an amazing person to work at their company"
I also shared screenshots of my existing portfolio site so V0 could understand the design context and brand aesthetic.
What impressed me immediately was that V0 didn't just create a generic chat bubble interface. Instead, it analyzed my request and proposed a complete technical architecture:
I'll create a chat application using the AI SDK and OpenAI, leveraging Next.js with the App Router, which is the recommended framework for building applications with the AI SDK.
You can view the full v0 conversation, along with the code broken down below, here:
v0 Spaceman Media Chat Interface
/app/components/ChatInterface.tsx
1"use client" // Marks this component as a Client Component in Next.js App Router23import React from "react"4import { useState, useEffect, useMemo } from "react"5import { useChat } from "@ai-sdk/react" // Vercel AI SDK React hook for chat state and actions6import { DefaultChatTransport } from "ai" // Transport to route chat requests to your API7import { Button } from "@/components/ui/button" // shadcn/ui button8import { Input } from "@/components/ui/input" // shadcn/ui input9import { Card, CardContent, CardFooter, CardHeader, CardTitle } from "@/components/ui/card"10import { X } from "lucide-react" // Close icon for modal1112// Using marked library for markdown to HTML conversion13// Install: npm install marked @types/marked14import { marked } from 'marked'1516// Configure marked for simple, predictable rendering suitable for chat display17marked.setOptions({18 breaks: true, // Treat single line breaks as <br>19 gfm: true, // Enable GitHub Flavored Markdown20})2122// Custom renderer to constrain output. Intent is to avoid rich blocks and external links.23// This helps keep content visually consistent and reduces risk when using innerHTML.24const renderer = new marked.Renderer()2526// Render fenced code blocks as inline boxes instead of <pre><code> blocks27renderer.code = ({ text, lang, escaped }: { text: string, lang?: string, escaped?: boolean }) => `<span class="inline-code">${text}</span>`2829// Render inline code with a custom span for styling30renderer.codespan = ({ text }: { text: string }) => `<span class="inline-code">${text}</span>`3132// Render links as plain text by returning only their text tokens33renderer.link = ({ href, title, tokens }: { href: string, title?: string | null, tokens: any[] }) => {34 // Remove the anchor element and produce only text content35 return tokens.map(token => token.raw || token.text || '').join('')36}3738// Render headings without IDs to avoid page-level anchor collisions inside chat bubbles39renderer.heading = ({ text, depth }: { text: string, depth: number }) => `<h${depth}>${text}</h${depth}>`4041// Apply the custom renderer to marked42marked.setOptions({ renderer })4344// Helper to convert markdown strings to HTML with the configured renderer45function markdownToHtml(markdown: string): string {46 return marked.parse(markdown) as string47}4849// Typing indicator component. 
Uses staggered CSS animations for three dots.50const DancingDots = () => {51 return (52 <span className="inline-flex">53 AI is thinking54 <span className="ml-1 inline-flex">55 <span56 className="animate-bounce"57 style={{58 animationDelay: '0ms',59 animationDuration: '1.4s'60 }}61 >62 .63 </span>64 <span65 className="animate-bounce"66 style={{67 animationDelay: '160ms',68 animationDuration: '1.4s'69 }}70 >71 .72 </span>73 <span74 className="animate-bounce"75 style={{76 animationDelay: '320ms',77 animationDuration: '1.4s'78 }}79 >80 .81 </span>82 </span>83 </span>84 )85}8687// Main chat UI component88export default function ChatInterface() {89 // Hook manages messages, sendMessage, and status90 // Transport points to your API route that proxies or handles the chat completion91 const { messages, sendMessage, status } = useChat({92 transport: new DefaultChatTransport({93 api: "/api/assistant/",94 }),95 })9697 // Text input state98 const [input, setInput] = useState("")99 // Modal visibility state100 const [isModalOpen, setIsModalOpen] = useState(false)101102 // Predefined example prompts for animated placeholder103 // useMemo ensures stable reference and avoids re-creating the array on each render104 const examplePrompts = useMemo(() => [105 "What are his key skills?",106 "What kind of projects has Stephen worked on?",107 "Is Stephen a good fit for a startup?",108 "Describe his experiences as a Marketing leader.",109 "Describe his experiences as a developer.",110 "How does he approach problem-solving?",111 ], [])112113 // Typing animation state for the rotating placeholder114 const [currentPlaceholder, setCurrentPlaceholder] = useState("")115 const [promptIndex, setPromptIndex] = useState(0)116 const [charIndex, setCharIndex] = useState(0)117 const [isDeleting, setIsDeleting] = useState(false)118119 // Example assistant message for initial demo typing120 const exampleAiResponse =121 "Hello! I can share detailed insights into Stephen Andekian's expertise, including his leadership in enterprise digital transformation, AI-driven marketing strategy, and modern web architecture for cybersecurity and SaaS organizations."122 const [typedExampleAiResponse, setTypedExampleAiResponse] = useState("")123 const [showExampleTyping, setShowExampleTyping] = useState(true)124125 // Effect to animate the rotating placeholder text126 // Pauses when user starts typing to avoid distraction and to keep placeholder empty127 useEffect(() => {128 const typeSpeed = isDeleting ? 
50 : 100129 const delayBetweenPrompts = 1500130131 const timer = setTimeout(() => {132 const full = examplePrompts[promptIndex]133 if (isDeleting) {134 setCurrentPlaceholder(full.substring(0, charIndex - 1))135 setCharIndex((ci) => ci - 1)136 if (charIndex === 0) {137 setIsDeleting(false)138 setPromptIndex((i) => (i + 1) % examplePrompts.length)139 }140 } else {141 setCurrentPlaceholder(full.substring(0, charIndex + 1))142 setCharIndex((ci) => ci + 1)143 if (charIndex === full.length) {144 setTimeout(() => setIsDeleting(true), delayBetweenPrompts)145 }146 }147 }, typeSpeed)148149 // If the user has started entering input, stop the placeholder animation150 if (input.length > 0) {151 clearTimeout(timer)152 setCurrentPlaceholder("")153 setShowExampleTyping(false)154 }155156 return () => clearTimeout(timer)157 }, [charIndex, isDeleting, promptIndex, input, examplePrompts])158159 // Effect to animate an example assistant message before any real chat occurs160 useEffect(() => {161 if (messages.length === 0 && showExampleTyping && typedExampleAiResponse.length < exampleAiResponse.length) {162 const t = setTimeout(() => {163 setTypedExampleAiResponse((prev) => exampleAiResponse.slice(0, prev.length + 1))164 }, 10)165 return () => clearTimeout(t)166 } else if (messages.length > 0) {167 // Once real messages exist, hide the demo typing168 setShowExampleTyping(false)169 }170 }, [typedExampleAiResponse, showExampleTyping, messages.length, exampleAiResponse])171172 // Submit handler for the message form173 // Sends text to the chat endpoint and clears the input on success174 const onSubmit = async (e: React.FormEvent) => {175 e.preventDefault()176 if (!input.trim()) return177 await sendMessage({ text: input })178 setInput("")179 }180181 // Focus handler to open the modal version of the chat182 const handleInputFocus = () => {183 setIsModalOpen(true)184 }185186 // Close handler for modal187 const handleCloseModal = () => {188 setIsModalOpen(false)189 }190191 // Effect to bind Escape for modal close and to lock body scroll when modal is open192 useEffect(() => {193 const handleEscape = (e: KeyboardEvent) => {194 if (e.key === "Escape") {195 setIsModalOpen(false)196 }197 }198199 if (isModalOpen) {200 document.addEventListener("keydown", handleEscape)201 // Prevent background scroll while modal is open202 document.body.style.overflow = "hidden"203 } else {204 document.body.style.overflow = "unset"205 }206207 // Cleanup removes listener and restores body scroll208 return () => {209 document.removeEventListener("keydown", handleEscape)210 document.body.style.overflow = "unset"211 }212 }, [isModalOpen])213214 // Derive assistant "is typing" indicator from hook status215 // "submitted" or "streaming" indicates pending or active response216 const isTyping = status === "submitted" || status === "streaming"217218 // Renders a chat message. Assistant messages are parsed from markdown to HTML.219 // User messages render as plain text. 
This keeps user content safe and simple.220 const renderMessageContent = (msg: any) => {221 const text = msg.parts222 .filter((p: any) => p.type === "text")223 .map((p: any) => p.text)224 .join("")225226 if (msg.role === "assistant") {227 const htmlContent = markdownToHtml(text)228 return (229 <span230 className="inline-block p-3 rounded-lg bg-gray-200 text-gray-800 font-mono max-w-[80%] markdown-content"231 // Using innerHTML on sanitized and constrained renderer output232 dangerouslySetInnerHTML={{ __html: htmlContent }}233 />234 )235 }236237 // User bubble styling238 return (239 <span className="inline-block p-3 rounded-lg max-w-[80%] bg-purple-600 text-white">240 {text}241 </span>242 )243 }244245 return (246 <>247 {/* Primary in-page Chat Interface */}248 <Card className="w-full h-full flex flex-col bg-transparent shadow-none border-none">249 <CardHeader>250 {/* Title text includes emoji. Left as-is per the request not to change code. */}251 <CardTitle className="text-center text-2xl font-bold text-gray-800">👨🚀 Spaceman AI Assistant</CardTitle>252 </CardHeader>253254 <CardContent className="flex-grow overflow-hidden bg-white rounded-lg p-4 h-[60vh] border border-gray-200">255 <div className="h-full border rounded-lg bg-gray-50 p-4 overflow-y-auto">256 {/* Demo typing bubble shown before any messages exist */}257 {messages.length === 0 && showExampleTyping && (258 <div className="mb-4 text-left">259 <span className="inline-block p-3 rounded-lg bg-gray-200 text-gray-800 font-mono max-w-[80%]">260 {typedExampleAiResponse}261 <span className="animate-cursor-blink">|</span>262 </span>263 </div>264 )}265266 {/* Real chat transcript */}267 {messages.map((msg) => (268 <div key={msg.id} className={`mb-4 ${msg.role === "user" ? "text-right" : "text-left"}`}>269 {renderMessageContent(msg)}270 </div>271 ))}272273 {/* Streaming indicator while assistant responds */}274 {isTyping && messages.length > 0 && (275 <div className="text-left mb-4">276 <span className="inline-block p-3 rounded-lg bg-gray-200 text-gray-800">277 <DancingDots />278 </span>279 </div>280 )}281 </div>282 </CardContent>283284 {/* Input row with submit button */}285 <CardFooter className="pt-4">286 <form onSubmit={onSubmit} className="flex w-full space-x-2">287 <Input288 value={input}289 onChange={(e) => setInput(e.target.value)}290 onFocus={handleInputFocus} // Opens modal-focused experience291 disabled={status !== "ready"}292 placeholder={input ? 
"" : currentPlaceholder || "Type your message..."}293 className="flex-grow border-gray-300 focus:border-purple-500 focus:ring-purple-500 placeholder:text-gray-400"294 />295 <Button296 type="submit"297 disabled={status !== "ready"}298 className="bg-purple-600 hover:bg-purple-700 text-white"299 >300 Launch 🚀301 </Button>302 </form>303 </CardFooter>304 </Card>305306 {/* Full-screen Modal Chat Interface */}307 {isModalOpen && (308 <div309 className="fixed inset-0 bg-black bg-opacity-50 flex items-center justify-center p-10"310 style={{311 zIndex: 2147483647, // Large z-index to ensure topmost stacking312 isolation: "isolate",313 position: "fixed",314 top: 0,315 left: 0,316 right: 0,317 bottom: 0,318 }}319 >320 <Card321 className="w-full max-w-4xl h-full max-h-[80vh] flex flex-col bg-white shadow-2xl"322 style={{ zIndex: 2147483647 }}323 >324 <CardHeader className="relative">325 <CardTitle className="text-center text-2xl font-bold text-gray-800 pr-8">👨🚀 Spaceman AI Assistant</CardTitle>326 {/* Modal close button */}327 <Button328 variant="ghost"329 size="icon"330 onClick={handleCloseModal}331 className="absolute top-2 right-2 hover:bg-gray-100"332 style={{ zIndex: 2147483647 }}333 >334 <X className="h-5 w-5" />335 </Button>336 </CardHeader>337338 <CardContent className="flex-grow overflow-hidden bg-white p-6">339 <div className="h-full border rounded-lg bg-gray-50 p-4 overflow-y-auto">340 {/* Demo typing bubble for first-run experience */}341 {messages.length === 0 && showExampleTyping && (342 <div className="mb-4 text-left">343 <span className="inline-block p-3 rounded-lg bg-gray-200 text-gray-800 font-mono max-w-[80%]">344 {typedExampleAiResponse}345 <span className="animate-cursor-blink">|</span>346 </span>347 </div>348 )}349350 {/* Real chat transcript in modal */}351 {messages.map((msg) => (352 <div key={msg.id} className={`mb-4 ${msg.role === "user" ? "text-right" : "text-left"}`}>353 {renderMessageContent(msg)}354 </div>355 ))}356357 {/* Streaming indicator while assistant responds */}358 {isTyping && messages.length > 0 && (359 <div className="text-left mb-4">360 <span className="inline-block p-3 rounded-lg bg-gray-200 text-gray-800">361 <DancingDots />362 </span>363 </div>364 )}365 </div>366 </CardContent>367368 <CardFooter className="p-6 pt-4">369 <form onSubmit={onSubmit} className="flex w-full space-x-2">370 <Input371 value={input}372 onChange={(e) => setInput(e.target.value)}373 disabled={status !== "ready"}374 placeholder={input ? "" : currentPlaceholder || "Type your message..."}375 className="flex-grow border-gray-300 focus:border-purple-500 focus:ring-purple-500 placeholder:text-gray-400"376 autoFocus // Focus the modal input for immediate typing377 />378 <Button379 type="submit"380 disabled={status !== "ready"}381 className="bg-purple-600 hover:bg-purple-700 text-white"382 >383 Launch 🚀384 </Button>385 </form>386 </CardFooter>387 </Card>388 </div>389 )}390 </>391 )392}
/app/api/assistant/route.ts
```ts
// app/api/assistant/route.ts
import OpenAI from "openai";

export const runtime = "nodejs"; // Ensure Node.js runtime for streaming SSE and server-side OpenAI SDK use

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY, // Read API key from environment
});

const USE_MOCK = process.env.USE_MOCK === "true"; // Toggle for local testing without hitting OpenAI
const VECTOR_STORE_ID = process.env.SPACEMAN_ASSISTANT_VECTOR_STORE_ID; // Pre-provisioned vector store for file_search tool

// Helper function to chunk text while preserving markdown formatting
// Splits on line boundaries to avoid breaking code blocks, lists, and headings
function chunkMarkdownText(text: string, maxChunkSize: number = 200): string[] {
  const chunks: string[] = [];
  const lines = text.split("\n");
  let currentChunk = "";

  for (const line of lines) {
    // If adding this line would exceed chunk size and we have content, start new chunk
    if (
      currentChunk.length + line.length + 1 > maxChunkSize &&
      currentChunk.length > 0
    ) {
      chunks.push(currentChunk);
      currentChunk = line + "\n";
    } else {
      currentChunk += (currentChunk ? "\n" : "") + line;
    }
  }

  // Add final chunk if it has content
  if (currentChunk.trim()) {
    chunks.push(currentChunk);
  }

  return chunks;
}

// Helper function to ensure markdown formatting
// Light-touch normalization to encourage markdown in plain responses
function ensureMarkdownFormat(text: string): string {
  // If the text already has markdown formatting, return as-is
  if (text.includes("##") || text.includes("**") || text.includes("- ")) {
    return text;
  }

  // Basic markdown conversion for plain text responses
  let formatted = text;

  // Convert bullet points to markdown
  formatted = formatted.replace(/^\s*•\s+/gm, "- ");
  formatted = formatted.replace(/^\s*-\s+([^-])/gm, "- $1");

  // Emphasize key names/titles
  formatted = formatted.replace(/Stephen Andekian/g, "**Stephen Andekian**");
  formatted = formatted.replace(/Spaceman Media/g, "**Spaceman Media**");

  // Add section headers if the content suggests them
  if (
    formatted.includes("key highlights") ||
    formatted.includes("background and roles")
  ) {
    formatted = formatted.replace(/(Some key highlights[^:]*:)/gi, "## $1");
  }

  return formatted;
}

export async function POST(req: Request) {
  try {
    const body = await req.json();
    console.log("Request body:", JSON.stringify(body, null, 2));

    // Extract user message from different possible formats
    let userMessage = "Hello"; // Default fallback

    console.log("Processing messages:", JSON.stringify(body.messages, null, 2));

    if (body.messages && body.messages.length > 0) {
      const lastMessage = body.messages[body.messages.length - 1];
      console.log("Last message:", JSON.stringify(lastMessage, null, 2));

      // Handle parts array format (Vercel AI SDK message shape)
      if (lastMessage.parts && lastMessage.parts.length > 0) {
        const textPart = lastMessage.parts.find(
          (part: any) => part.type === "text",
        );
        if (textPart && textPart.text) {
          userMessage = textPart.text;
          console.log("Extracted from parts:", userMessage);
        }
      }
      // Handle direct content format for compatibility
      else if (lastMessage.content) {
        userMessage = lastMessage.content;
        console.log("Extracted from content:", userMessage);
      }
    }
    // Fallback to body.text
    else if (body.text) {
      userMessage = body.text;
      console.log("Extracted from body.text:", userMessage);
    }

    console.log("Extracted user message:", JSON.stringify(userMessage));
    console.log("USE_MOCK:", USE_MOCK);

    const encoder = new TextEncoder();

    // Create a server-sent events stream compatible with Vercel AI SDK DefaultChatTransport
    const stream = new ReadableStream({
      start(controller) {
        // Helper to enqueue SSE-formatted data events
        const send = (data: any) => {
          const chunk = `data: ${JSON.stringify(data)}\n\n`;
          console.log("Sending chunk:", JSON.stringify(data));
          controller.enqueue(encoder.encode(chunk));
        };

        // Simple mock path for local testing without calling OpenAI
        const handleMockResponse = () => {
          try {
            const messageId = `msg_${Date.now()}`;

            // Begin a text message
            send({
              type: "text-start",
              id: messageId,
            });

            // Full response text
            const fullResponse = `I received your message: "${userMessage}". This is a mock response from the AI assistant working correctly with vector store integration!`;

            // Send entire response as one delta to keep the client pipeline consistent
            send({
              type: "text-delta",
              id: messageId,
              delta: fullResponse,
            });

            // End the text message
            send({
              type: "text-end",
              id: messageId,
            });

            // Finish event to tell client the turn is complete
            send({
              type: "finish",
            });

            controller.close();
          } catch (error: unknown) {
            console.error("Mock response error:", error);
            send({
              type: "error",
              errorText:
                error instanceof Error ? error.message : "Mock response error",
            });
            controller.close();
          }
        };

        // Real path that calls OpenAI Responses API with file_search tool against the vector store
        const handleResponsesAPIResponse = async () => {
          try {
            if (!process.env.OPENAI_API_KEY) {
              throw new Error(
                "OPENAI_API_KEY environment variable is required",
              );
            }

            console.log("Using Responses API");
            console.log("Vector Store ID:", VECTOR_STORE_ID);
            console.log("User message:", userMessage);

            // Hard rule to scope answers to vector store content only
            const restrictedInput = `CRITICAL INSTRUCTION: Answer ONLY using information from the uploaded files in the vector store. Do not use training data or general knowledge. If information is not available in the files, state clearly that it's not in your knowledge base.

USER QUESTION: ${userMessage}

FORMATTING REQUIREMENTS:
- Format your response in markdown
- Use proper headings (##, ###) for sections
- Use bullet points (-) for lists
- Use **bold** for emphasis on key points
- Include citations and be explicit about what information comes from the uploaded documents
- Structure the information clearly and professionally`;

            // Call OpenAI Responses API with tool for file_search bound to the vector store
            const response = (await openai.responses.create({
              model: "gpt-4o-mini", // Efficient model for RAG-style lookups
              max_output_tokens: 600,
              temperature: 0.2, // Low temperature for factual tone
              input: restrictedInput,
              tools: [
                {
                  type: "file_search",
                  vector_store_ids: [VECTOR_STORE_ID as string],
                },
              ],
            })) as any; // Loosen typing to handle evolving SDK shapes

            console.log(
              "Responses API response structure:",
              JSON.stringify(response, null, 2),
            );

            const messageId = `msg_${Date.now()}`;

            // Start the text message for the client
            send({
              type: "text-start",
              id: messageId,
            });

            // Extract assistant text from Responses API output shapes
            let responseContent = "";

            // Primary field on newer SDKs
            if (
              response.output_text &&
              typeof response.output_text === "string"
            ) {
              responseContent = response.output_text;
              console.log("Extracted from output_text:", responseContent);
            }
            // Fallback for array-based outputs
            else if (response.output && Array.isArray(response.output)) {
              const messageOutput = response.output.find(
                (item: any) => item.type === "message",
              );
              if (
                messageOutput &&
                messageOutput.content &&
                Array.isArray(messageOutput.content)
              ) {
                const textContent = messageOutput.content.find(
                  (item: any) => item.type === "output_text",
                );
                if (textContent && textContent.text) {
                  responseContent = textContent.text;
                  console.log("Extracted from output array:", responseContent);
                }
              }
            }

            if (!responseContent) {
              console.log("Could not extract response content from structure");
              responseContent =
                "I apologize, but I couldn't generate a response from the files in my knowledge base.";
            }

            console.log("Final response content:", responseContent);

            // Optionally normalize to markdown if needed before chunking
            // Note: raw model markdown currently streams through the chunker
            // const normalized = ensureMarkdownFormat(responseContent);

            // Send the response in markdown-aware chunks to preserve formatting and smooth streaming
            const chunks = chunkMarkdownText(responseContent);

            for (const chunk of chunks) {
              send({
                type: "text-delta",
                id: messageId,
                delta: chunk,
              });

              // Tiny delay to simulate natural streaming cadence
              await new Promise((resolve) => setTimeout(resolve, 30));
            }

            // Signal completion of the message and turn
            send({
              type: "text-end",
              id: messageId,
            });

            send({
              type: "finish",
            });

            controller.close();
          } catch (error: unknown) {
            console.error("Responses API error:", error);
            send({
              type: "error",
              errorText:
                error instanceof Error ? error.message : "Responses API error",
            });
            controller.close();
          }
        };

        // Choose between mock and real API based on environment variable
        if (USE_MOCK) {
          handleMockResponse();
        } else {
          handleResponsesAPIResponse();
        }
      },

      cancel() {
        console.log("Stream cancelled"); // Called if the client disconnects early
      },
    });

    // Return an SSE response that the client hook knows how to consume
    return new Response(stream, {
      headers: {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache, no-transform",
        Connection: "keep-alive",
        "Access-Control-Allow-Origin": "*", // CORS open for local testing
        "Access-Control-Allow-Methods": "POST",
        "Access-Control-Allow-Headers": "Content-Type",
      },
    });
  } catch (error: unknown) {
    console.error("API route error:", error);
    return new Response(JSON.stringify({ error: "Internal server error" }), {
      status: 500,
      headers: { "Content-Type": "application/json" },
    });
  }
}
```
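The route reads three environment variables. Locally these would live in an `.env.local` file; the values below are placeholders, but the variable names come straight from the route code:

```
OPENAI_API_KEY=sk-...                      # from the OpenAI Platform dashboard
SPACEMAN_ASSISTANT_VECTOR_STORE_ID=vs_...  # the vector store ID from the knowledge base setup
USE_MOCK=false                             # set to "true" to stream a canned reply without calling OpenAI
```

The USE_MOCK toggle is worth keeping: it lets you iterate on the chat UI and streaming pipeline without spending API credits on every refresh.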
Conclusion: The Future is Interactive, and It's Here Now
What started as a weekend experiment has fundamentally changed how I connect with potential clients, collaborators, and employers. Instead of hoping my static portfolio tells the right story to the right person, I now have an AI assistant that can intelligently respond to any question about my background, approach, and fit for specific roles.
But beyond the immediate practical benefits, this project represents something bigger: the evolution of how professionals will present themselves in an AI-driven world. Static resumes and portfolios are becoming as outdated as business cards. The future belongs to those who can create interactive, intelligent experiences that adapt to each viewer's specific needs and interests.
The Vibe Coding approach made this transformation possible. By collaborating with AI tools like V0, Claude, and ChatGPT, I was able to focus on the creative and strategic aspects of the project while letting AI handle the technical heavy lifting. The result is a more polished, thoughtful solution than I could have built alone, and it happened in a fraction of the time.
Your Turn to Build the Future
The tools and techniques I've shared aren't just for developers. Whether you're a designer, marketer, consultant, or any other professional, you can use this same approach to create your own interactive portfolio experience. The key is starting with your unique value proposition and letting AI help you bring it to life.
The portfolio of the future isn't just about showcasing what you've done; it's about demonstrating how you think, how you approach problems, and how you could specifically help solve the challenges your audience is facing right now.
Ready to transform your organization's digital presence? If you're looking for strategic guidance around digital marketing, AI implementation, or modern web development, I'd love to explore how we can work together. Book a meeting or reach out directly to discuss your specific challenges and how my approach to AI-driven digital strategy can help drive your next phase of growth.