Vibe Coding an AI-Powered Portfolio Assistant: How I Built an Interactive 'Ask About Me' Tool That Actually Works

TL;DR

I built an AI assistant for my portfolio that lets visitors ask specific questions like "Will Stephen be good for VP of Web Strategy?" instead of forcing them to dig through static content. Using "Vibe Coding" (collaborative development with AI tools like V0, Claude, and ChatGPT), I went from idea to deployed tool in a weekend. The result: more meaningful connections and conversations that actually address what people want to know. This post shows you exactly how I did it.


Introduction: The Future of Personal Branding is Interactive

Picture this: A hiring manager lands on your portfolio at 11 PM, three coffees deep into their candidate review session. Instead of scrolling through another static resume, they type: "Will Stephen be a good fit for the role of VP of Web Strategy?" and get an intelligent, contextual response that directly addresses their specific needs and concerns.

This isn't science fiction; it's what I built for my own portfolio using what I call "Vibe Coding." This approach leverages AI tools like Vercel's V0, Claude, and ChatGPT to rapidly prototype and iterate, turning a weekend idea into a professional tool that's already changing how potential clients and employers interact with my work.

In this post, I'll take you through both the strategic thinking and technical implementation behind building an AI assistant that knows you better than your LinkedIn profile ever could. Whether you're a developer, designer, or any professional looking to stand out, you'll learn how to create your own interactive portfolio experience and why this approach is becoming essential in our AI-driven world.

81%

of developers identify increased productivity as the biggest benefit of AI tools

Source: Statista

The Problem with Traditional Portfolios

Let's be honest: traditional portfolios are basically digital brochures. Beautiful, perhaps, but fundamentally passive. They showcase your work, list your skills, and tell your story, but only in the way you've pre-scripted it.

The reality is that every recruiter, client, or potential collaborator comes to your portfolio with different contexts and questions:

  • A startup CEO wants to know if you can handle the chaos of rapid scaling
  • An enterprise hiring manager needs to understand your experience with compliance and process
  • A creative agency is curious about how you balance innovation with client constraints
  • A technical team wants to dig into your problem-solving approach

Static content can't answer these nuanced questions. Sure, visitors might find relevant project examples if they dig deep enough, but there's no guarantee your carefully curated case studies align with their specific concerns.

Even worse, traditional portfolios miss the opportunity for discovery. That VP role you never considered might actually be perfect for you, but the hiring manager will never know to ask about your leadership philosophy or change management experience if those aren't front and center on your homepage.

The result? Missed connections, misaligned expectations, and a lot of talented people getting overlooked because their static portfolio didn't tell the right story to the right person at the right time.

Enter the AI Portfolio Assistant

Now imagine a different scenario. That same hiring manager lands on your portfolio and can immediately start a conversation:

"Will Stephen be a good fit for VP of Web Strategy? Summarize it in two to three sentences." 

Stephen Andekian has over 15 years of experience in digital strategy and marketing, particularly within the cybersecurity and SaaS industries. His leadership roles, including as Interim CMO at Trustwave, demonstrate his ability to drive significant growth, such as increasing marketing-generated opportunities from $410M to $605M. Additionally, his expertise in AI-driven strategies, web infrastructure modernization, and conversion optimization positions him as a strong candidate for the VP of Web Strategy role.

Overall, Stephen's proven track record in transforming digital ecosystems and enhancing user engagement makes him a compelling fit for this position.


👨‍🚀 Spaceman AI Assistant


"Briefly explain how Stephen approaches digital marketing leadership?"

Stephen Andekian approaches digital marketing leadership by:

Driving Digital Transformation: He emphasizes the integration of advanced technologies and analytics to enhance marketing effectiveness, as demonstrated during his tenure at Trustwave where he guided significant digital transformation initiatives.

Fostering Data-Driven Decision Making: Stephen prioritizes the use of marketing analytics for better tracking and decision-making, ensuring that strategies are informed by data to optimize performance and achieve growth.


👨‍🚀 Spaceman AI Assistant

The Vibe Coding Approach: AI as Your Development Partner

Before we dive into the technical details, let me introduce you to "Vibe Coding": a development approach where you give an LLM general, high-level direction and it produces precise, working code. Rather than manually writing every line of code, you instruct AI agents through natural-language prompts, focusing your time and energy on the creative aspects of app development rather than getting stuck in technical details.

Traditional coding often follows a linear path: research, plan, code, test, iterate. Vibe Coding is more like a jazz improvisation session. You start with a rough idea, bounce it off AI tools to refine the concept, use AI to rapidly prototype interfaces, collaborate with AI to solve technical challenges, and iterate in real-time based on the feedback loop between your creative vision and AI capabilities.

For this project, my AI toolkit consisted of:

  • Vercel's V0: For rapid UI prototyping and component generation
  • Claude: For architectural planning, code review, and complex problem-solving
  • ChatGPT: For brainstorming, content strategy, and prompt engineering
  • OpenAI Assistant API: As the runtime engine powering the actual portfolio assistant

The magic happens in the collaboration. V0 helps me visualize ideas instantly, turning rough concepts into polished components in minutes. Claude serves as my technical advisor, helping me think through system architecture and catch potential issues before they become problems. ChatGPT excels at creative problem-solving and helping me refine the user experience.

This isn't about replacing human creativity; it's about amplifying it. I provide the vision, taste, and strategic thinking. AI provides the rapid iteration, technical knowledge, and ability to explore multiple solutions simultaneously.

The result? What would traditionally take weeks of development happened over a weekend. But more importantly, the final product is more polished and thoughtful than what I could have built alone, because I had access to AI feedback loops throughout the entire process.

Building Your Own: Step-by-Step Implementation

Now for the fun part: let's build this thing. I'll walk you through the entire implementation process, sharing the actual code, design decisions, and lessons learned along the way.

The beauty of the Vibe Coding approach is that you don't need to be an expert in every technology we're using. AI tools will help fill in the gaps, suggest improvements, and catch mistakes along the way. Your job is to provide direction, make design decisions, and ensure the final product serves your goals.

Setting Up Your OpenAI Assistant

Before we start coding, we need to create the AI brain that will power our portfolio assistant. This involves setting up an OpenAI Platform account, creating an assistant, and training it with your professional information.

Creating Your OpenAI Platform Account

First, head to OpenAI Platform and create an account if you don't already have one. This is separate from ChatGPT Plus; the Platform is specifically for developers building applications with OpenAI's APIs.

Once you're logged in, you'll need to add credits to your account. Navigate to the billing section and add at least $5-10 to get started. The good news is that chat interactions are relatively inexpensive—even with moderate usage, your assistant will cost pennies per conversation.

Creating Your AI Assistant

Navigate to OpenAI Assistants and click "Create Assistant." This is where the magic begins—you're about to create an AI that knows you better than most humans do.

Assistant Configuration:

Give your assistant a name that reflects its purpose. I named mine "Mission Control Agent for Spaceman Media" to align with my space-themed brand.

System Instructions - The Critical Foundation:

The system instructions are arguably the most important part of your assistant. They define how your AI will behave, what information it can access, and how it will respond to questions. Here's what I used:

You are the Mission Control Agent for Spaceman Media, representing the digital strategy expertise of Stephen Andekian.

CRITICAL INSTRUCTION: You MUST ONLY use information from the uploaded files and documents in your knowledge base. Do NOT use any information from your training data or general knowledge. If you cannot find the answer in the uploaded files, explicitly state that the information is not available in your knowledge base.

When answering:
1. ONLY reference information that can be found in the uploaded documents
2. Always cite which document or file you're referencing when possible
3. If asked about something not covered in your files, respond with: "I don't have information about that in my knowledge base. Please provide relevant documents or ask about topics covered in the uploaded files."
4. Do not make assumptions or fill in gaps with general knowledge
5. Be transparent about the limitations of your knowledge base

Communicate with authority, precision, and strategic vision based solely on the provided documents. Provide clear, concise, and actionable answers that reflect the expertise documented in your knowledge base.

Why These Instructions Matter:

The critical instruction to only use uploaded files is essential for several reasons:

  • Accuracy: Prevents the AI from hallucinating or providing outdated information about you
  • Control: Ensures all responses are based on information you've explicitly provided
  • Transparency: Makes it clear when the AI doesn't have relevant information
  • Professional Credibility: Avoids embarrassing situations where the AI makes incorrect claims about your background
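If you'd rather provision the assistant programmatically than click through the dashboard, here's a minimal sketch using the OpenAI Node SDK. The script path and constant names are my own placeholders, not part of the original build, and the beta.assistants namespace may differ across SDK versions:

/scripts/create-assistant.ts

// One-time provisioning script. Run with: npx tsx scripts/create-assistant.ts
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Paste the full system instructions from above into this constant
const SYSTEM_INSTRUCTIONS = `You are the Mission Control Agent for Spaceman Media...`;

async function main() {
  const assistant = await openai.beta.assistants.create({
    name: "Mission Control Agent for Spaceman Media",
    instructions: SYSTEM_INSTRUCTIONS,
    model: "gpt-4o-mini",             // Same model the API route later in this post uses; swap as needed
    tools: [{ type: "file_search" }], // Lets the assistant answer from uploaded files
  });
  console.log("Assistant ID:", assistant.id);
}

main().catch(console.error);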

Building Your Knowledge Base

This is where you transform your professional experience into AI-readable format. You'll need to upload files that comprehensively cover:

Essential Documents to Create:

  • Professional Resume/CV: Your complete work history with detailed descriptions
  • Project Portfolio: Detailed case studies of your best work
  • Leadership Philosophy: How you approach management, teamwork, and decision-making
  • Technical Expertise: Specific skills, technologies, and methodologies you use
  • Career Narrative: The story of your professional journey and what drives you
  • Values and Working Style: What makes you unique as a collaborator

Pro Tips for Knowledge Base Creation:

  • Write in a conversational style that matches how you'd explain things to a colleague
  • Include specific examples and metrics where possible
  • Cover different aspects of your personality and approach, not just accomplishments
  • Think about the questions people actually ask in interviews or client calls
  • Include information about what you're looking for in opportunities

The goal is to create a comprehensive picture of who you are professionally, so the AI can handle everything from "What's Stephen's experience with remote teams?" to "How does he approach strategic planning?"

Once you've uploaded your files, test your assistant directly in the OpenAI interface to make sure it's responding appropriately and staying within the bounds of your knowledge base.
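You can script this step too. Here's a minimal sketch, assuming a recent OpenAI Node SDK (on older versions these methods live under openai.beta.vectorStores) and placeholder file names; it creates a standalone vector store like the one the API route later in this post queries:

/scripts/upload-knowledge-base.ts

// One-time upload script: creates a vector store and attaches your documents
import fs from "node:fs";
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  // Create the vector store that will back the file_search tool
  const store = await openai.vectorStores.create({ name: "spaceman-knowledge-base" });

  // Placeholder file names; swap in your resume, case studies, and so on
  for (const path of ["resume.md", "case-studies.md", "leadership-philosophy.md"]) {
    const file = await openai.files.create({
      file: fs.createReadStream(path),
      purpose: "assistants",
    });
    // Attach each uploaded file to the vector store so file_search can index it
    await openai.vectorStores.files.create(store.id, { file_id: file.id });
  }

  // Save this ID; the API route reads it from SPACEMAN_ASSISTANT_VECTOR_STORE_ID
  console.log("Vector store ID:", store.id);
}

main().catch(console.error);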

Next Step: With your AI assistant configured and trained, we're ready to build the interface that will bring it to life on your portfolio.

Vibe Coding the UI First: Rapid Prototyping with V0

Before writing any code, I started with the interface design using V0. This is where Vibe Coding really shines: instead of wireframing and designing mockups, I could describe what I wanted and iterate in real time.

Here's how the conversation with V0 unfolded:

"Create a AI Chat interface to use on spaceman.media https://www.spaceman.media/ where users can enter questions about the developer and gain insights on how they would be an amazing person to work at their company"

I also shared screenshots of my existing portfolio site so V0 could understand the design context and brand aesthetic.

What impressed me immediately was that V0 didn't just create a generic chat bubble interface. Instead, it analyzed my request and proposed a complete technical architecture:

I'll create a chat application using the AI SDK and OpenAI, leveraging Next.js with the App Router, which is the recommended framework for building applications with the AI SDK.

You can view the full v0 conversation, along with the code broken down below, here:

v0 Spaceman Media Chat Interface

/app/components/ChatInterface.tsx

1"use client" // Marks this component as a Client Component in Next.js App Router
2
3import React from "react"
4import { useState, useEffect, useMemo } from "react"
5import { useChat } from "@ai-sdk/react"                 // Vercel AI SDK React hook for chat state and actions
6import { DefaultChatTransport } from "ai"               // Transport to route chat requests to your API
7import { Button } from "@/components/ui/button"         // shadcn/ui button
8import { Input } from "@/components/ui/input"           // shadcn/ui input
9import { Card, CardContent, CardFooter, CardHeader, CardTitle } from "@/components/ui/card"
10import { X } from "lucide-react"                        // Close icon for modal
11
12// Using marked library for markdown to HTML conversion
13// Install: npm install marked @types/marked
14import { marked } from 'marked'
15
16// Configure marked for simple, predictable rendering suitable for chat display
17marked.setOptions({
18  breaks: true,    // Treat single line breaks as <br>
19  gfm: true,       // Enable GitHub Flavored Markdown
20})
21
22// Custom renderer to constrain output. Intent is to avoid rich blocks and external links.
23// This helps keep content visually consistent and reduces risk when using innerHTML.
24const renderer = new marked.Renderer()
25
26// Render fenced code blocks as inline boxes instead of <pre><code> blocks
27renderer.code = ({ text, lang, escaped }: { text: string, lang?: string, escaped?: boolean }) => `<span class="inline-code">${text}</span>`
28
29// Render inline code with a custom span for styling
30renderer.codespan = ({ text }: { text: string }) => `<span class="inline-code">${text}</span>`
31
32// Render links as plain text by returning only their text tokens
33renderer.link = ({ href, title, tokens }: { href: string, title?: string | null, tokens: any[] }) => {
34  // Remove the anchor element and produce only text content
35  return tokens.map(token => token.raw || token.text || '').join('')
36}
37
38// Render headings without IDs to avoid page-level anchor collisions inside chat bubbles
39renderer.heading = ({ text, depth }: { text: string, depth: number }) => `<h${depth}>${text}</h${depth}>`
40
41// Apply the custom renderer to marked
42marked.setOptions({ renderer })
43
44// Helper to convert markdown strings to HTML with the configured renderer
45function markdownToHtml(markdown: string): string {
46  return marked.parse(markdown) as string
47}
48
49// Typing indicator component. Uses staggered CSS animations for three dots.
50const DancingDots = () => {
51  return (
52    <span className="inline-flex">
53      AI is thinking
54      <span className="ml-1 inline-flex">
55        <span
56          className="animate-bounce"
57          style={{
58            animationDelay: '0ms',
59            animationDuration: '1.4s'
60          }}
61        >
62          .
63        </span>
64        <span
65          className="animate-bounce"
66          style={{
67            animationDelay: '160ms',
68            animationDuration: '1.4s'
69          }}
70        >
71          .
72        </span>
73        <span
74          className="animate-bounce"
75          style={{
76            animationDelay: '320ms',
77            animationDuration: '1.4s'
78          }}
79        >
80          .
81        </span>
82      </span>
83    </span>
84  )
85}
86
87// Main chat UI component
88export default function ChatInterface() {
89  // Hook manages messages, sendMessage, and status
90  // Transport points to your API route that proxies or handles the chat completion
91  const { messages, sendMessage, status } = useChat({
92    transport: new DefaultChatTransport({
93      api: "/api/assistant/",
94    }),
95  })
96
97  // Text input state
98  const [input, setInput] = useState("")
99  // Modal visibility state
100  const [isModalOpen, setIsModalOpen] = useState(false)
101
102  // Predefined example prompts for animated placeholder
103  // useMemo ensures stable reference and avoids re-creating the array on each render
104  const examplePrompts = useMemo(() => [
105    "What are his key skills?",
106    "What kind of projects has Stephen worked on?",
107    "Is Stephen a good fit for a startup?",
108    "Describe his experiences as a Marketing leader.",
109    "Describe his experiences as a developer.",
110    "How does he approach problem-solving?",
111  ], [])
112
113  // Typing animation state for the rotating placeholder
114  const [currentPlaceholder, setCurrentPlaceholder] = useState("")
115  const [promptIndex, setPromptIndex] = useState(0)
116  const [charIndex, setCharIndex] = useState(0)
117  const [isDeleting, setIsDeleting] = useState(false)
118
119  // Example assistant message for initial demo typing
120  const exampleAiResponse =
121    "Hello! I can share detailed insights into Stephen Andekian's expertise, including his leadership in enterprise digital transformation, AI-driven marketing strategy, and modern web architecture for cybersecurity and SaaS organizations."
122  const [typedExampleAiResponse, setTypedExampleAiResponse] = useState("")
123  const [showExampleTyping, setShowExampleTyping] = useState(true)
124
125  // Effect to animate the rotating placeholder text
126  // Pauses when user starts typing to avoid distraction and to keep placeholder empty
127  useEffect(() => {
128    const typeSpeed = isDeleting ? 50 : 100
129    const delayBetweenPrompts = 1500
130
131    const timer = setTimeout(() => {
132      const full = examplePrompts[promptIndex]
133      if (isDeleting) {
134        setCurrentPlaceholder(full.substring(0, charIndex - 1))
135        setCharIndex((ci) => ci - 1)
136        if (charIndex === 0) {
137          setIsDeleting(false)
138          setPromptIndex((i) => (i + 1) % examplePrompts.length)
139        }
140      } else {
141        setCurrentPlaceholder(full.substring(0, charIndex + 1))
142        setCharIndex((ci) => ci + 1)
143        if (charIndex === full.length) {
144          setTimeout(() => setIsDeleting(true), delayBetweenPrompts)
145        }
146      }
147    }, typeSpeed)
148
149    // If the user has started entering input, stop the placeholder animation
150    if (input.length > 0) {
151      clearTimeout(timer)
152      setCurrentPlaceholder("")
153      setShowExampleTyping(false)
154    }
155
156    return () => clearTimeout(timer)
157  }, [charIndex, isDeleting, promptIndex, input, examplePrompts])
158
159  // Effect to animate an example assistant message before any real chat occurs
160  useEffect(() => {
161    if (messages.length === 0 && showExampleTyping && typedExampleAiResponse.length < exampleAiResponse.length) {
162      const t = setTimeout(() => {
163        setTypedExampleAiResponse((prev) => exampleAiResponse.slice(0, prev.length + 1))
164      }, 10)
165      return () => clearTimeout(t)
166    } else if (messages.length > 0) {
167      // Once real messages exist, hide the demo typing
168      setShowExampleTyping(false)
169    }
170  }, [typedExampleAiResponse, showExampleTyping, messages.length, exampleAiResponse])
171
172  // Submit handler for the message form
173  // Sends text to the chat endpoint and clears the input on success
174  const onSubmit = async (e: React.FormEvent) => {
175    e.preventDefault()
176    if (!input.trim()) return
177    await sendMessage({ text: input })
178    setInput("")
179  }
180
181  // Focus handler to open the modal version of the chat
182  const handleInputFocus = () => {
183    setIsModalOpen(true)
184  }
185
186  // Close handler for modal
187  const handleCloseModal = () => {
188    setIsModalOpen(false)
189  }
190
191  // Effect to bind Escape for modal close and to lock body scroll when modal is open
192  useEffect(() => {
193    const handleEscape = (e: KeyboardEvent) => {
194      if (e.key === "Escape") {
195        setIsModalOpen(false)
196      }
197    }
198
199    if (isModalOpen) {
200      document.addEventListener("keydown", handleEscape)
201      // Prevent background scroll while modal is open
202      document.body.style.overflow = "hidden"
203    } else {
204      document.body.style.overflow = "unset"
205    }
206
207    // Cleanup removes listener and restores body scroll
208    return () => {
209      document.removeEventListener("keydown", handleEscape)
210      document.body.style.overflow = "unset"
211    }
212  }, [isModalOpen])
213
214  // Derive assistant "is typing" indicator from hook status
215  // "submitted" or "streaming" indicates pending or active response
216  const isTyping = status === "submitted" || status === "streaming"
217
218  // Renders a chat message. Assistant messages are parsed from markdown to HTML.
219  // User messages render as plain text. This keeps user content safe and simple.
220  const renderMessageContent = (msg: any) => {
221    const text = msg.parts
222      .filter((p: any) => p.type === "text")
223      .map((p: any) => p.text)
224      .join("")
225
226    if (msg.role === "assistant") {
227      const htmlContent = markdownToHtml(text)
228      return (
229        <span
230          className="inline-block p-3 rounded-lg bg-gray-200 text-gray-800 font-mono max-w-[80%] markdown-content"
231          // Using innerHTML on sanitized and constrained renderer output
232          dangerouslySetInnerHTML={{ __html: htmlContent }}
233        />
234      )
235    }
236
237    // User bubble styling
238    return (
239      <span className="inline-block p-3 rounded-lg max-w-[80%] bg-purple-600 text-white">
240        {text}
241      </span>
242    )
243  }
244
245  return (
246    <>
247      {/* Primary in-page Chat Interface */}
248      <Card className="w-full h-full flex flex-col bg-transparent shadow-none border-none">
249        <CardHeader>
250          {/* Title text includes emoji. Left as-is per the request not to change code. */}
251          <CardTitle className="text-center text-2xl font-bold text-gray-800">👨‍🚀 Spaceman AI Assistant</CardTitle>
252        </CardHeader>
253
254        <CardContent className="flex-grow overflow-hidden bg-white rounded-lg p-4 h-[60vh] border border-gray-200">
255          <div className="h-full border rounded-lg bg-gray-50 p-4 overflow-y-auto">
256            {/* Demo typing bubble shown before any messages exist */}
257            {messages.length === 0 && showExampleTyping && (
258              <div className="mb-4 text-left">
259                <span className="inline-block p-3 rounded-lg bg-gray-200 text-gray-800 font-mono max-w-[80%]">
260                  {typedExampleAiResponse}
261                  <span className="animate-cursor-blink">|</span>
262                </span>
263              </div>
264            )}
265
266            {/* Real chat transcript */}
267            {messages.map((msg) => (
268              <div key={msg.id} className={`mb-4 ${msg.role === "user" ? "text-right" : "text-left"}`}>
269                {renderMessageContent(msg)}
270              </div>
271            ))}
272
273            {/* Streaming indicator while assistant responds */}
274            {isTyping && messages.length > 0 && (
275              <div className="text-left mb-4">
276                <span className="inline-block p-3 rounded-lg bg-gray-200 text-gray-800">
277                  <DancingDots />
278                </span>
279              </div>
280            )}
281          </div>
282        </CardContent>
283
284        {/* Input row with submit button */}
285        <CardFooter className="pt-4">
286          <form onSubmit={onSubmit} className="flex w-full space-x-2">
287            <Input
288              value={input}
289              onChange={(e) => setInput(e.target.value)}
290              onFocus={handleInputFocus} // Opens modal-focused experience
291              disabled={status !== "ready"}
292              placeholder={input ? "" : currentPlaceholder || "Type your message..."}
293              className="flex-grow border-gray-300 focus:border-purple-500 focus:ring-purple-500 placeholder:text-gray-400"
294            />
295            <Button
296              type="submit"
297              disabled={status !== "ready"}
298              className="bg-purple-600 hover:bg-purple-700 text-white"
299            >
300              Launch 🚀
301            </Button>
302          </form>
303        </CardFooter>
304      </Card>
305
306      {/* Full-screen Modal Chat Interface */}
307      {isModalOpen && (
308        <div
309          className="fixed inset-0 bg-black bg-opacity-50 flex items-center justify-center p-10"
310          style={{
311            zIndex: 2147483647,  // Large z-index to ensure topmost stacking
312            isolation: "isolate",
313            position: "fixed",
314            top: 0,
315            left: 0,
316            right: 0,
317            bottom: 0,
318          }}
319        >
320          <Card
321            className="w-full max-w-4xl h-full max-h-[80vh] flex flex-col bg-white shadow-2xl"
322            style={{ zIndex: 2147483647 }}
323          >
324            <CardHeader className="relative">
325              <CardTitle className="text-center text-2xl font-bold text-gray-800 pr-8">👨‍🚀 Spaceman AI Assistant</CardTitle>
326              {/* Modal close button */}
327              <Button
328                variant="ghost"
329                size="icon"
330                onClick={handleCloseModal}
331                className="absolute top-2 right-2 hover:bg-gray-100"
332                style={{ zIndex: 2147483647 }}
333              >
334                <X className="h-5 w-5" />
335              </Button>
336            </CardHeader>
337
338            <CardContent className="flex-grow overflow-hidden bg-white p-6">
339              <div className="h-full border rounded-lg bg-gray-50 p-4 overflow-y-auto">
340                {/* Demo typing bubble for first-run experience */}
341                {messages.length === 0 && showExampleTyping && (
342                  <div className="mb-4 text-left">
343                    <span className="inline-block p-3 rounded-lg bg-gray-200 text-gray-800 font-mono max-w-[80%]">
344                      {typedExampleAiResponse}
345                      <span className="animate-cursor-blink">|</span>
346                    </span>
347                  </div>
348                )}
349
350                {/* Real chat transcript in modal */}
351                {messages.map((msg) => (
352                  <div key={msg.id} className={`mb-4 ${msg.role === "user" ? "text-right" : "text-left"}`}>
353                    {renderMessageContent(msg)}
354                  </div>
355                ))}
356
357                {/* Streaming indicator while assistant responds */}
358                {isTyping && messages.length > 0 && (
359                  <div className="text-left mb-4">
360                    <span className="inline-block p-3 rounded-lg bg-gray-200 text-gray-800">
361                      <DancingDots />
362                    </span>
363                  </div>
364                )}
365              </div>
366            </CardContent>
367
368            <CardFooter className="p-6 pt-4">
369              <form onSubmit={onSubmit} className="flex w-full space-x-2">
370                <Input
371                  value={input}
372                  onChange={(e) => setInput(e.target.value)}
373                  disabled={status !== "ready"}
374                  placeholder={input ? "" : currentPlaceholder || "Type your message..."}
375                  className="flex-grow border-gray-300 focus:border-purple-500 focus:ring-purple-500 placeholder:text-gray-400"
376                  autoFocus // Focus the modal input for immediate typing
377                />
378                <Button
379                  type="submit"
380                  disabled={status !== "ready"}
381                  className="bg-purple-600 hover:bg-purple-700 text-white"
382                >
383                  Launch 🚀
384                </Button>
385              </form>
386            </CardFooter>
387          </Card>
388        </div>
389      )}
390    </>
391  )
392}
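To put the component on your site, render it from any App Router page. A minimal usage sketch (the page path is an assumption; this also presumes the shadcn/ui components and marked are installed per the imports above):

/app/page.tsx

// Minimal page that mounts the chat component
import ChatInterface from "./components/ChatInterface"

export default function Home() {
  return (
    <main className="mx-auto max-w-3xl p-6">
      <ChatInterface />
    </main>
  )
}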

/app/api/assistant/route.ts

// app/api/assistant/route.ts
import OpenAI from "openai";

export const runtime = "nodejs"; // Ensure Node.js runtime for streaming SSE and server-side OpenAI SDK use

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY, // Read API key from environment
});

const USE_MOCK = process.env.USE_MOCK === "true"; // Toggle for local testing without hitting OpenAI
const VECTOR_STORE_ID = process.env.SPACEMAN_ASSISTANT_VECTOR_STORE_ID; // Pre-provisioned vector store for file_search tool

// Helper function to chunk text while preserving markdown formatting
// Splits on line boundaries to avoid breaking code blocks, lists, and headings
function chunkMarkdownText(text: string, maxChunkSize: number = 200): string[] {
  const chunks: string[] = [];
  const lines = text.split("\n");
  let currentChunk = "";

  for (const line of lines) {
    // If adding this line would exceed chunk size and we have content, start new chunk
    if (
      currentChunk.length + line.length + 1 > maxChunkSize &&
      currentChunk.length > 0
    ) {
      chunks.push(currentChunk);
      currentChunk = line + "\n";
    } else {
      currentChunk += (currentChunk ? "\n" : "") + line;
    }
  }

  // Add final chunk if it has content
  if (currentChunk.trim()) {
    chunks.push(currentChunk);
  }

  return chunks;
}

// Helper function to ensure markdown formatting
// Light-touch normalization to encourage markdown in plain responses
function ensureMarkdownFormat(text: string): string {
  // If the text already has markdown formatting, return as-is
  if (text.includes("##") || text.includes("**") || text.includes("- ")) {
    return text;
  }

  // Basic markdown conversion for plain text responses
  let formatted = text;

  // Convert bullet points to markdown
  formatted = formatted.replace(/^\s*•\s+/gm, "- ");
  formatted = formatted.replace(/^\s*-\s+([^-])/gm, "- $1");

  // Emphasize key names/titles
  formatted = formatted.replace(/Stephen Andekian/g, "**Stephen Andekian**");
  formatted = formatted.replace(/Spaceman Media/g, "**Spaceman Media**");

  // Add section headers if the content suggests them
  if (
    formatted.includes("key highlights") ||
    formatted.includes("background and roles")
  ) {
    formatted = formatted.replace(/(Some key highlights[^:]*:)/gi, "## $1");
  }

  return formatted;
}

export async function POST(req: Request) {
  try {
    const body = await req.json();
    console.log("Request body:", JSON.stringify(body, null, 2));

    // Extract user message from different possible formats
    let userMessage = "Hello"; // Default fallback

    console.log("Processing messages:", JSON.stringify(body.messages, null, 2));

    if (body.messages && body.messages.length > 0) {
      const lastMessage = body.messages[body.messages.length - 1];
      console.log("Last message:", JSON.stringify(lastMessage, null, 2));

      // Handle parts array format (Vercel AI SDK message shape)
      if (lastMessage.parts && lastMessage.parts.length > 0) {
        const textPart = lastMessage.parts.find(
          (part: any) => part.type === "text",
        );
        if (textPart && textPart.text) {
          userMessage = textPart.text;
          console.log("Extracted from parts:", userMessage);
        }
      }
      // Handle direct content format for compatibility
      else if (lastMessage.content) {
        userMessage = lastMessage.content;
        console.log("Extracted from content:", userMessage);
      }
    }
    // Fallback to body.text
    else if (body.text) {
      userMessage = body.text;
      console.log("Extracted from body.text:", userMessage);
    }

    console.log("Extracted user message:", JSON.stringify(userMessage));
    console.log("USE_MOCK:", USE_MOCK);

    const encoder = new TextEncoder();

    // Create a server-sent events stream compatible with Vercel AI SDK DefaultChatTransport
    const stream = new ReadableStream({
      start(controller) {
        // Helper to enqueue SSE-formatted data events
        const send = (data: any) => {
          const chunk = `data: ${JSON.stringify(data)}\n\n`;
          console.log("Sending chunk:", JSON.stringify(data));
          controller.enqueue(encoder.encode(chunk));
        };

        // Simple mock path for local testing without calling OpenAI
        const handleMockResponse = () => {
          try {
            const messageId = `msg_${Date.now()}`;

            // Begin a text message
            send({
              type: "text-start",
              id: messageId,
            });

            // Full response text
            const fullResponse = `I received your message: "${userMessage}". This is a mock response from the AI assistant working correctly with vector store integration!`;

            // Send entire response as one delta to keep the client pipeline consistent
            send({
              type: "text-delta",
              id: messageId,
              delta: fullResponse,
            });

            // End the text message
            send({
              type: "text-end",
              id: messageId,
            });

            // Finish event to tell client the turn is complete
            send({
              type: "finish",
            });

            controller.close();
          } catch (error: unknown) {
            console.error("Mock response error:", error);
            send({
              type: "error",
              errorText:
                error instanceof Error ? error.message : "Mock response error",
            });
            controller.close();
          }
        };

        // Real path that calls OpenAI Responses API with file_search tool against the vector store
        const handleResponsesAPIResponse = async () => {
          try {
            if (!process.env.OPENAI_API_KEY) {
              throw new Error(
                "OPENAI_API_KEY environment variable is required",
              );
            }

            console.log("Using Responses API");
            console.log("Vector Store ID:", VECTOR_STORE_ID);
            console.log("User message:", userMessage);

            // Hard rule to scope answers to vector store content only
            const restrictedInput = `CRITICAL INSTRUCTION: Answer ONLY using information from the uploaded files in the vector store. Do not use training data or general knowledge. If information is not available in the files, state clearly that it's not in your knowledge base.

USER QUESTION: ${userMessage}

FORMATTING REQUIREMENTS:
- Format your response in markdown
- Use proper headings (##, ###) for sections
- Use bullet points (-) for lists
- Use **bold** for emphasis on key points
- Include citations and be explicit about what information comes from the uploaded documents
- Structure the information clearly and professionally`;

            // Call OpenAI Responses API with tool for file_search bound to the vector store
            const response = (await openai.responses.create({
              model: "gpt-4o-mini", // Efficient model for RAG-style lookups
              max_output_tokens: 600,
              temperature: 0.2, // Low temperature for factual tone
              input: restrictedInput,
              tools: [
                {
                  type: "file_search",
                  vector_store_ids: [VECTOR_STORE_ID as string],
                },
              ],
            })) as any; // Loosen typing to handle evolving SDK shapes

            console.log(
              "Responses API response structure:",
              JSON.stringify(response, null, 2),
            );

            const messageId = `msg_${Date.now()}`;

            // Start the text message for the client
            send({
              type: "text-start",
              id: messageId,
            });

            // Extract assistant text from Responses API output shapes
            let responseContent = "";

            // Primary field on newer SDKs
            if (
              response.output_text &&
              typeof response.output_text === "string"
            ) {
              responseContent = response.output_text;
              console.log("Extracted from output_text:", responseContent);
            }
            // Fallback for array-based outputs
            else if (response.output && Array.isArray(response.output)) {
              const messageOutput = response.output.find(
                (item: any) => item.type === "message",
              );
              if (
                messageOutput &&
                messageOutput.content &&
                Array.isArray(messageOutput.content)
              ) {
                const textContent = messageOutput.content.find(
                  (item: any) => item.type === "output_text",
                );
                if (textContent && textContent.text) {
                  responseContent = textContent.text;
                  console.log("Extracted from output array:", responseContent);
                }
              }
            }

            if (!responseContent) {
              console.log("Could not extract response content from structure");
              responseContent =
                "I apologize, but I couldn't generate a response from the files in my knowledge base.";
            }

            console.log("Final response content:", responseContent);

            // Optionally normalize to markdown if needed before chunking
            // Note: you currently stream raw model markdown through chunker
            // const normalized = ensureMarkdownFormat(responseContent);

            // Send the response in markdown-aware chunks to preserve formatting and smooth streaming
            const chunks = chunkMarkdownText(responseContent);

            for (const chunk of chunks) {
              send({
                type: "text-delta",
                id: messageId,
                delta: chunk,
              });

              // Tiny delay to simulate natural streaming cadence
              await new Promise((resolve) => setTimeout(resolve, 30));
            }

            // Signal completion of the message and turn
            send({
              type: "text-end",
              id: messageId,
            });

            send({
              type: "finish",
            });

            controller.close();
          } catch (error: unknown) {
            console.error("Responses API error:", error);
            send({
              type: "error",
              errorText:
                error instanceof Error ? error.message : "Responses API error",
            });
            controller.close();
          }
        };

        // Choose between mock and real API based on environment variable
        if (USE_MOCK) {
          handleMockResponse();
        } else {
          handleResponsesAPIResponse();
        }
      },

      cancel() {
        console.log("Stream cancelled"); // Called if the client disconnects early
      },
    });

    // Return an SSE response that the client hook knows how to consume
    return new Response(stream, {
      headers: {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache, no-transform",
        Connection: "keep-alive",
        "Access-Control-Allow-Origin": "*", // CORS open for local testing
        "Access-Control-Allow-Methods": "POST",
        "Access-Control-Allow-Headers": "Content-Type",
      },
    });
  } catch (error: unknown) {
    console.error("API route error:", error);
    return new Response(JSON.stringify({ error: "Internal server error" }), {
      status: 500,
      headers: { "Content-Type": "application/json" },
    });
  }
}
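To run this locally, the route expects the environment variables referenced in the code above. A minimal .env.local sketch (values are placeholders):

# .env.local
OPENAI_API_KEY=sk-...                      # From the OpenAI Platform dashboard
SPACEMAN_ASSISTANT_VECTOR_STORE_ID=vs_...  # The vector store ID from your knowledge-base upload
USE_MOCK=true                              # "true" streams a mock reply so you can test the UI offline

Starting with USE_MOCK=true lets you verify the whole streaming pipeline end to end before spending any API credits; flip it to false once the vector store is wired up.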

Conclusion: The Future is Interactive, and It's Here Now

What started as a weekend experiment has fundamentally changed how I connect with potential clients, collaborators, and employers. Instead of hoping my static portfolio tells the right story to the right person, I now have an AI assistant that can intelligently respond to any question about my background, approach, and fit for specific roles.

But beyond the immediate practical benefits, this project represents something bigger: the evolution of how professionals will present themselves in an AI-driven world. Static resumes and portfolios are becoming as outdated as business cards. The future belongs to those who can create interactive, intelligent experiences that adapt to each viewer's specific needs and interests.

The Vibe Coding approach made this transformation possible. By collaborating with AI tools like V0, Claude, and ChatGPT, I was able to focus on the creative and strategic aspects of the project while letting AI handle the technical heavy lifting. The result is a more polished, thoughtful solution than I could have built alone, and it happened in a fraction of the time.

Your Turn to Build the Future

The tools and techniques I've shared aren't just for developers. Whether you're a designer, marketer, consultant, or any other professional, you can use this same approach to create your own interactive portfolio experience. The key is starting with your unique value proposition and letting AI help you bring it to life.

The portfolio of the future isn't just about showcasing what you've done; it's about demonstrating how you think, how you approach problems, and how you could specifically help solve the challenges your audience is facing right now.

Ready to transform your organization's digital presence? If you're looking for strategic guidance around digital marketing, AI implementation, or modern web development, I'd love to explore how we can work together. Book a meeting or reach out directly to discuss your specific challenges and how my approach to AI-driven digital strategy can help drive your next phase of growth.
