Vibe Coding an AI-Powered Portfolio Assistant: How I Built an Interactive 'Ask About Me' Tool That Actually Works

TL;DR

I built an AI assistant for my portfolio that lets visitors ask specific questions like "Will Stephen be good for VP of Web Strategy?" instead of forcing them to dig through static content. Using "Vibe Coding" (collaborative development with AI tools like V0, Claude, and ChatGPT), I went from idea to deployed tool in a weekend. The result: more meaningful connections and conversations that actually address what people want to know. This post shows you exactly how I did it.

31-minute read

Introduction: The Future of Personal Branding is Interactive

Picture this: A hiring manager lands on your portfolio at 11 PM, three coffees deep into their candidate review session. Instead of scrolling through another static resume, they type: "Will Stephen be a good fit for the role of VP of Web Strategy?" and get an intelligent, contextual response that directly addresses their specific needs and concerns.

This isn't science fiction; it's what I built for my own portfolio using what I call "Vibe Coding." This approach leverages AI tools like Vercel's V0, Claude, and ChatGPT to rapidly prototype and iterate, turning a weekend idea into a professional tool that's already changing how potential clients and employers interact with my work.

In this post, I'll take you through both the strategic thinking and technical implementation behind building an AI assistant that knows you better than your LinkedIn profile ever could. Whether you're a developer, designer, or any professional looking to stand out, you'll learn how to create your own interactive portfolio experience and why this approach is becoming essential in our AI-driven world.

81%

of developers agree that increased productivity is the biggest benefit of AI tools

Statista

The Problem with Traditional Portfolios

Let's be honest: traditional portfolios are basically digital brochures. Beautiful, perhaps, but fundamentally passive. They showcase your work, list your skills, and tell your story, but only in the way you've pre-scripted it.

The reality is that every recruiter, client, or potential collaborator comes to your portfolio with different contexts and questions:

  • A startup CEO wants to know if you can handle the chaos of rapid scaling
  • An enterprise hiring manager needs to understand your experience with compliance and process
  • A creative agency is curious about how you balance innovation with client constraints
  • A technical team wants to dig into your problem-solving approach

Static content can't answer these nuanced questions. Sure, they might find relevant project examples if they dig deep enough, but there's no guarantee your carefully curated case studies align with their specific concerns.

Even worse, traditional portfolios miss the opportunity for discovery. That VP role you never considered might actually be perfect for you, but the hiring manager will never know to ask about your leadership philosophy or change management experience if those aren't front and center on your homepage.

The result? Missed connections, misaligned expectations, and a lot of talented people getting overlooked because their static portfolio didn't tell the right story to the right person at the right time.

Enter the AI Portfolio Assistant

Now imagine a different scenario. That same hiring manager lands on your portfolio and can immediately start a conversation:

"Will Stephen be a good fit for VP of Web Strategy? Summarize it in two to three sentences." 

Stephen Andekian has over 15 years of experience in digital strategy and marketing, particularly within the cybersecurity and SaaS industries. His leadership roles, including as Interim CMO at Trustwave, demonstrate his ability to drive significant growth, such as increasing marketing-generated opportunities from $410M to $605M. Additionally, his expertise in AI-driven strategies, web infrastructure modernization, and conversion optimization positions him as a strong candidate for the VP of Web Strategy role.

Overall, Stephen's proven track record in transforming digital ecosystems and enhancing user engagement makes him a compelling fit for this position.


πŸ‘¨β€πŸš€ Spaceman AI Assistant


"Briefly explain how Stephen approaches digital marketing leadership?"

Stephen Andekian approaches digital marketing leadership by:

Driving Digital Transformation: He emphasizes the integration of advanced technologies and analytics to enhance marketing effectiveness, as demonstrated during his tenure at Trustwave where he guided significant digital transformation initiatives.

Fostering Data-Driven Decision Making: Stephen prioritizes the use of marketing analytics for better tracking and decision-making, ensuring that strategies are informed by data to optimize performance and achieve growth.


πŸ‘¨β€πŸš€ Spaceman AI Assistant

The Vibe Coding Approach: AI as Your Development Partner

Before we dive into the technical details, let me introduce you to "Vibe Coding": a development approach where you give an LLM general, high-level direction and it produces the precise instructions contained within working code. Rather than manually writing every line yourself, you instruct AI agents to write code from natural language prompts, focusing your time and energy on the creative aspects of app development rather than getting stuck in technical details.

Traditional coding often follows a linear path: research, plan, code, test, iterate. Vibe Coding is more like a jazz improvisation session. You start with a rough idea, bounce it off AI tools to refine the concept, use AI to rapidly prototype interfaces, collaborate with AI to solve technical challenges, and iterate in real-time based on the feedback loop between your creative vision and AI capabilities.

For this project, my AI toolkit consisted of:

  • Vercel's V0: For rapid UI prototyping and component generation
  • Claude: For architectural planning, code review, and complex problem-solving
  • ChatGPT: For brainstorming, content strategy, and prompt engineering
  • OpenAI Assistant API: As the runtime engine powering the actual portfolio assistant

The magic happens in the collaboration. V0 helps me visualize ideas instantly, turning rough concepts into polished components in minutes. Claude serves as my technical advisor, helping me think through system architecture and catch potential issues before they become problems. ChatGPT excels at creative problem-solving and helping me refine the user experience.

This isn't about replacing human creativity; it's about amplifying it. I provide the vision, taste, and strategic thinking. AI provides the rapid iteration, technical knowledge, and ability to explore multiple solutions simultaneously.

The result? What would traditionally take weeks of development happened over a weekend. But more importantly, the final product is more polished and thoughtful than what I could have built alone, because I had access to AI feedback loops throughout the entire process.

Building Your Own: Step-by-Step Implementation

Now for the fun part: let's build this thing. I'll walk you through the entire implementation process, sharing the actual code, design decisions, and lessons learned along the way.

The beauty of the Vibe Coding approach is that you don't need to be an expert in every technology we're using. AI tools will help fill in the gaps, suggest improvements, and catch mistakes along the way. Your job is to provide direction, make design decisions, and ensure the final product serves your goals.

Setting Up Your OpenAI Assistant

Before we start coding, we need to create the AI brain that will power our portfolio assistant. This involves setting up an OpenAI Platform account, creating an assistant, and training it with your professional information.

Creating Your OpenAI Platform Account

First, head to OpenAI Platform and create an account if you don't already have one. This is separate from ChatGPT Plus; the Platform is specifically for developers building applications with OpenAI's APIs.

Once you're logged in, you'll need to add credits to your account. Navigate to the billing section and add at least $5-10 to get started. The good news is that chat interactions are relatively inexpensive: even with moderate usage, your assistant will cost pennies per conversation.

Creating Your AI Assistant

Navigate to OpenAI Assistants and click "Create Assistant." This is where the magic begins: you're about to create an AI that knows you better than most humans do.

Assistant Configuration:

Give your assistant a name that reflects its purpose. I named mine "Mission Control Agent for Spaceman Media" to align with my space-themed brand.

System Instructions - The Critical Foundation:

The system instructions are arguably the most important part of your assistant. They define how your AI will behave, what information it can access, and how it will respond to questions. Here's what I used:

You are the Mission Control Agent for Spaceman Media, representing the digital strategy expertise of Stephen Andekian. 

CRITICAL INSTRUCTION: You MUST ONLY use information from the uploaded files and documents in your knowledge base. Do NOT use any information from your training data or general knowledge. If you cannot find the answer in the uploaded files, explicitly state that the information is not available in your knowledge base.

When answering:
1. ONLY reference information that can be found in the uploaded documents
2. Always cite which document or file you're referencing when possible
3. If asked about something not covered in your files, respond with: "I don't have information about that in my knowledge base. Please provide relevant documents or ask about topics covered in the uploaded files."
4. Do not make assumptions or fill in gaps with general knowledge
5. Be transparent about the limitations of your knowledge base

Communicate with authority, precision, and strategic vision based solely on the provided documents. Provide clear, concise, and actionable answers that reflect the expertise documented in your knowledge base.

Why These Instructions Matter:

The critical instruction to only use uploaded files is essential for several reasons:

  • Accuracy: Prevents the AI from hallucinating or providing outdated information about you
  • Control: Ensures all responses are based on information you've explicitly provided
  • Transparency: Makes it clear when the AI doesn't have relevant information
  • Professional Credibility: Avoids embarrassing situations where the AI makes incorrect claims about your background

Building Your Knowledge Base

This is where you transform your professional experience into an AI-readable format. You'll need to upload files that comprehensively cover:

Essential Documents to Create:

  • Professional Resume/CV: Your complete work history with detailed descriptions
  • Project Portfolio: Detailed case studies of your best work
  • Leadership Philosophy: How you approach management, teamwork, and decision-making
  • Technical Expertise: Specific skills, technologies, and methodologies you use
  • Career Narrative: The story of your professional journey and what drives you
  • Values and Working Style: What makes you unique as a collaborator

Pro Tips for Knowledge Base Creation:

  • Write in a conversational style that matches how you'd explain things to a colleague
  • Include specific examples and metrics where possible
  • Cover different aspects of your personality and approach, not just accomplishments
  • Think about the questions people actually ask in interviews or client calls
  • Include information about what you're looking for in opportunities

The goal is to create a comprehensive picture of who you are professionally, so the AI can handle everything from "What's Stephen's experience with remote teams?" to "How does he approach strategic planning?"

Once you've uploaded your files, test your assistant directly in the OpenAI interface to make sure it's responding appropriately and staying within the bounds of your knowledge base.
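
If you'd rather script this step than drag files into the dashboard, the same result can be achieved with the OpenAI Node SDK. Here's a minimal sketch: the file names are placeholders, and on older SDK versions the vector store methods live under openai.beta.vectorStores rather than openai.vectorStores.

upload-knowledge-base.ts

// Hypothetical one-off script for creating a vector store and attaching knowledge base files
import fs from "node:fs";
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  // Create a vector store to hold the portfolio documents
  const store = await openai.vectorStores.create({ name: "Spaceman Knowledge Base" });

  // Placeholder file names; swap in your own resume, case studies, and philosophy docs
  const documents = ["resume.md", "case-studies.md", "leadership-philosophy.md"];

  for (const path of documents) {
    // Upload each raw file, then attach it to the vector store so file_search can use it
    const file = await openai.files.create({
      file: fs.createReadStream(path),
      purpose: "assistants",
    });
    await openai.vectorStores.files.create(store.id, { file_id: file.id });
  }

  // Save this ID; the API route later in this post reads it as SPACEMAN_ASSISTANT_VECTOR_STORE_ID
  console.log("Vector store ID:", store.id);
}

main().catch(console.error);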

Next Step: With your AI assistant configured and trained, we're ready to build the interface that will bring it to life on your portfolio.

Vibe Coding the UI First: Rapid Prototyping with V0

Before writing any code, I started with the interface design using V0. This is where Vibe Coding really shines: instead of wireframing and designing mockups, I could describe what I wanted and iterate in real time.

Here's how the conversation with V0 unfolded:

"Create a AI Chat interface to use on spaceman.media https://www.spaceman.media/ where users can enter questions about the developer and gain insights on how they would be an amazing person to work at their company"

I also shared screenshots of my existing portfolio site so V0 could understand the design context and brand aesthetic.

What impressed me immediately was that V0 didn't just create a generic chat bubble interface. Instead, it analyzed my request and proposed a complete technical architecture:

I'll create a chat application using the AI SDK and OpenAI, leveraging Next.js with the App Router, which is the recommended framework for building applications with the AI SDK.

You can view the full v0 conversation, with the code broken down below, here:

v0 Spaceman Media Chat Interface

/app/components/ChatInterface.tsx

"use client" // Marks this component as a Client Component in Next.js App Router

import React from "react"
import { useState, useEffect, useMemo } from "react"
import { useChat } from "@ai-sdk/react"                 // Vercel AI SDK React hook for chat state and actions
import { DefaultChatTransport } from "ai"               // Transport to route chat requests to your API
import { Button } from "@/components/ui/button"         // shadcn/ui button
import { Input } from "@/components/ui/input"           // shadcn/ui input
import { Card, CardContent, CardFooter, CardHeader, CardTitle } from "@/components/ui/card"
import { X } from "lucide-react"                        // Close icon for modal

// Using marked library for markdown to HTML conversion
// Install: npm install marked @types/marked
import { marked } from 'marked'

// Configure marked for simple, predictable rendering suitable for chat display
marked.setOptions({
  breaks: true,    // Treat single line breaks as <br>
  gfm: true,       // Enable GitHub Flavored Markdown
})

// Custom renderer to constrain output. Intent is to avoid rich blocks and external links.
// This helps keep content visually consistent and reduces risk when using innerHTML.
const renderer = new marked.Renderer()

// Render fenced code blocks as inline boxes instead of <pre><code> blocks
renderer.code = ({ text, lang, escaped }: { text: string, lang?: string, escaped?: boolean }) => `<span class="inline-code">${text}</span>`

// Render inline code with a custom span for styling
renderer.codespan = ({ text }: { text: string }) => `<span class="inline-code">${text}</span>`

// Render links as plain text by returning only their text tokens
renderer.link = ({ href, title, tokens }: { href: string, title?: string | null, tokens: any[] }) => {
  // Remove the anchor element and produce only text content
  return tokens.map(token => token.raw || token.text || '').join('')
}

// Render headings without IDs to avoid page-level anchor collisions inside chat bubbles
renderer.heading = ({ text, depth }: { text: string, depth: number }) => `<h${depth}>${text}</h${depth}>`

// Apply the custom renderer to marked
marked.setOptions({ renderer })

// Helper to convert markdown strings to HTML with the configured renderer
function markdownToHtml(markdown: string): string {
  return marked.parse(markdown) as string
}

// Typing indicator component. Uses staggered CSS animations for three dots.
const DancingDots = () => {
  return (
    <span className="inline-flex">
      AI is thinking
      <span className="ml-1 inline-flex">
        <span 
          className="animate-bounce" 
          style={{ 
            animationDelay: '0ms',
            animationDuration: '1.4s'
          }}
        >
          .
        </span>
        <span 
          className="animate-bounce" 
          style={{ 
            animationDelay: '160ms',
            animationDuration: '1.4s'
          }}
        >
          .
        </span>
        <span 
          className="animate-bounce" 
          style={{ 
            animationDelay: '320ms',
            animationDuration: '1.4s'
          }}
        >
          .
        </span>
      </span>
    </span>
  )
}

// Main chat UI component
export default function ChatInterface() {
  // Hook manages messages, sendMessage, and status
  // Transport points to your API route that proxies or handles the chat completion
  const { messages, sendMessage, status } = useChat({
    transport: new DefaultChatTransport({
      api: "/api/assistant/",
    }),
  })

  // Text input state
  const [input, setInput] = useState("")
  // Modal visibility state
  const [isModalOpen, setIsModalOpen] = useState(false)

  // Predefined example prompts for animated placeholder
  // useMemo ensures stable reference and avoids re-creating the array on each render
  const examplePrompts = useMemo(() => [
    "What are his key skills?",
    "What kind of projects has Stephen worked on?",
    "Is Stephen a good fit for a startup?",
    "Describe his experiences as a Marketing leader.",
    "Describe his experiences as a developer.",
    "How does he approach problem-solving?",
  ], [])

  // Typing animation state for the rotating placeholder
  const [currentPlaceholder, setCurrentPlaceholder] = useState("")
  const [promptIndex, setPromptIndex] = useState(0)
  const [charIndex, setCharIndex] = useState(0)
  const [isDeleting, setIsDeleting] = useState(false)

  // Example assistant message for initial demo typing
  const exampleAiResponse =
    "Hello! I can share detailed insights into Stephen Andekian's expertise, including his leadership in enterprise digital transformation, AI-driven marketing strategy, and modern web architecture for cybersecurity and SaaS organizations."
  const [typedExampleAiResponse, setTypedExampleAiResponse] = useState("")
  const [showExampleTyping, setShowExampleTyping] = useState(true)

  // Effect to animate the rotating placeholder text
  // Pauses when user starts typing to avoid distraction and to keep placeholder empty
  useEffect(() => {
    const typeSpeed = isDeleting ? 50 : 100
    const delayBetweenPrompts = 1500

    const timer = setTimeout(() => {
      const full = examplePrompts[promptIndex]
      if (isDeleting) {
        setCurrentPlaceholder(full.substring(0, charIndex - 1))
        setCharIndex((ci) => ci - 1)
        if (charIndex === 0) {
          setIsDeleting(false)
          setPromptIndex((i) => (i + 1) % examplePrompts.length)
        }
      } else {
        setCurrentPlaceholder(full.substring(0, charIndex + 1))
        setCharIndex((ci) => ci + 1)
        if (charIndex === full.length) {
          setTimeout(() => setIsDeleting(true), delayBetweenPrompts)
        }
      }
    }, typeSpeed)

    // If the user has started entering input, stop the placeholder animation
    if (input.length > 0) {
      clearTimeout(timer)
      setCurrentPlaceholder("")
      setShowExampleTyping(false)
    }

    return () => clearTimeout(timer)
  }, [charIndex, isDeleting, promptIndex, input, examplePrompts])

  // Effect to animate an example assistant message before any real chat occurs
  useEffect(() => {
    if (messages.length === 0 && showExampleTyping && typedExampleAiResponse.length < exampleAiResponse.length) {
      const t = setTimeout(() => {
        setTypedExampleAiResponse((prev) => exampleAiResponse.slice(0, prev.length + 1))
      }, 10)
      return () => clearTimeout(t)
    } else if (messages.length > 0) {
      // Once real messages exist, hide the demo typing
      setShowExampleTyping(false)
    }
  }, [typedExampleAiResponse, showExampleTyping, messages.length, exampleAiResponse])

  // Submit handler for the message form
  // Sends text to the chat endpoint and clears the input on success
  const onSubmit = async (e: React.FormEvent) => {
    e.preventDefault()
    if (!input.trim()) return
    await sendMessage({ text: input })
    setInput("")
  }

  // Focus handler to open the modal version of the chat
  const handleInputFocus = () => {
    setIsModalOpen(true)
  }

  // Close handler for modal
  const handleCloseModal = () => {
    setIsModalOpen(false)
  }

  // Effect to bind Escape for modal close and to lock body scroll when modal is open
  useEffect(() => {
    const handleEscape = (e: KeyboardEvent) => {
      if (e.key === "Escape") {
        setIsModalOpen(false)
      }
    }

    if (isModalOpen) {
      document.addEventListener("keydown", handleEscape)
      // Prevent background scroll while modal is open
      document.body.style.overflow = "hidden"
    } else {
      document.body.style.overflow = "unset"
    }

    // Cleanup removes listener and restores body scroll
    return () => {
      document.removeEventListener("keydown", handleEscape)
      document.body.style.overflow = "unset"
    }
  }, [isModalOpen])

  // Derive assistant "is typing" indicator from hook status
  // "submitted" or "streaming" indicates pending or active response
  const isTyping = status === "submitted" || status === "streaming"

  // Renders a chat message. Assistant messages are parsed from markdown to HTML.
  // User messages render as plain text. This keeps user content safe and simple.
  const renderMessageContent = (msg: any) => {
    const text = msg.parts
      .filter((p: any) => p.type === "text")
      .map((p: any) => p.text)
      .join("")

    if (msg.role === "assistant") {
      const htmlContent = markdownToHtml(text)
      return (
        <span
          className="inline-block p-3 rounded-lg bg-gray-200 text-gray-800 font-mono max-w-[80%] markdown-content"
          // Using innerHTML on the constrained renderer output (assistant content only, not user input)
          dangerouslySetInnerHTML={{ __html: htmlContent }}
        />
      )
    }

    // User bubble styling
    return (
      <span className="inline-block p-3 rounded-lg max-w-[80%] bg-purple-600 text-white">
        {text}
      </span>
    )
  }

  return (
    <>
      {/* Primary in-page Chat Interface */}
      <Card className="w-full h-full flex flex-col bg-transparent shadow-none border-none">
        <CardHeader>
          {/* Chat title with space-themed branding */}
          <CardTitle className="text-center text-2xl font-bold text-gray-800">👨‍🚀 Spaceman AI Assistant</CardTitle>
        </CardHeader>

        <CardContent className="flex-grow overflow-hidden bg-white rounded-lg p-4 h-[60vh] border border-gray-200">
          <div className="h-full border rounded-lg bg-gray-50 p-4 overflow-y-auto">
            {/* Demo typing bubble shown before any messages exist */}
            {messages.length === 0 && showExampleTyping && (
              <div className="mb-4 text-left">
                <span className="inline-block p-3 rounded-lg bg-gray-200 text-gray-800 font-mono max-w-[80%]">
                  {typedExampleAiResponse}
                  <span className="animate-cursor-blink">|</span>
                </span>
              </div>
            )}

            {/* Real chat transcript */}
            {messages.map((msg) => (
              <div key={msg.id} className={`mb-4 ${msg.role === "user" ? "text-right" : "text-left"}`}>
                {renderMessageContent(msg)}
              </div>
            ))}

            {/* Streaming indicator while assistant responds */}
            {isTyping && messages.length > 0 && (
              <div className="text-left mb-4">
                <span className="inline-block p-3 rounded-lg bg-gray-200 text-gray-800">
                  <DancingDots />
                </span>
              </div>
            )}
          </div>
        </CardContent>

        {/* Input row with submit button */}
        <CardFooter className="pt-4">
          <form onSubmit={onSubmit} className="flex w-full space-x-2">
            <Input
              value={input}
              onChange={(e) => setInput(e.target.value)}
              onFocus={handleInputFocus} // Opens modal-focused experience
              disabled={status !== "ready"}
              placeholder={input ? "" : currentPlaceholder || "Type your message..."}
              className="flex-grow border-gray-300 focus:border-purple-500 focus:ring-purple-500 placeholder:text-gray-400"
            />
            <Button
              type="submit"
              disabled={status !== "ready"}
              className="bg-purple-600 hover:bg-purple-700 text-white"
            >
              Launch 🚀
            </Button>
          </form>
        </CardFooter>
      </Card>

      {/* Full-screen Modal Chat Interface */}
      {isModalOpen && (
        <div
          className="fixed inset-0 bg-black bg-opacity-50 flex items-center justify-center p-10"
          style={{
            zIndex: 2147483647,  // Large z-index to ensure topmost stacking
            isolation: "isolate",
            position: "fixed",
            top: 0,
            left: 0,
            right: 0,
            bottom: 0,
          }}
        >
          <Card
            className="w-full max-w-4xl h-full max-h-[80vh] flex flex-col bg-white shadow-2xl"
            style={{ zIndex: 2147483647 }}
          >
            <CardHeader className="relative">
              <CardTitle className="text-center text-2xl font-bold text-gray-800 pr-8">👨‍🚀 Spaceman AI Assistant</CardTitle>
              {/* Modal close button */}
              <Button
                variant="ghost"
                size="icon"
                onClick={handleCloseModal}
                className="absolute top-2 right-2 hover:bg-gray-100"
                style={{ zIndex: 2147483647 }}
              >
                <X className="h-5 w-5" />
              </Button>
            </CardHeader>

            <CardContent className="flex-grow overflow-hidden bg-white p-6">
              <div className="h-full border rounded-lg bg-gray-50 p-4 overflow-y-auto">
                {/* Demo typing bubble for first-run experience */}
                {messages.length === 0 && showExampleTyping && (
                  <div className="mb-4 text-left">
                    <span className="inline-block p-3 rounded-lg bg-gray-200 text-gray-800 font-mono max-w-[80%]">
                      {typedExampleAiResponse}
                      <span className="animate-cursor-blink">|</span>
                    </span>
                  </div>
                )}

                {/* Real chat transcript in modal */}
                {messages.map((msg) => (
                  <div key={msg.id} className={`mb-4 ${msg.role === "user" ? "text-right" : "text-left"}`}>
                    {renderMessageContent(msg)}
                  </div>
                ))}

                {/* Streaming indicator while assistant responds */}
                {isTyping && messages.length > 0 && (
                  <div className="text-left mb-4">
                    <span className="inline-block p-3 rounded-lg bg-gray-200 text-gray-800">
                      <DancingDots />
                    </span>
                  </div>
                )}
              </div>
            </CardContent>

            <CardFooter className="p-6 pt-4">
              <form onSubmit={onSubmit} className="flex w-full space-x-2">
                <Input
                  value={input}
                  onChange={(e) => setInput(e.target.value)}
                  disabled={status !== "ready"}
                  placeholder={input ? "" : currentPlaceholder || "Type your message..."}
                  className="flex-grow border-gray-300 focus:border-purple-500 focus:ring-purple-500 placeholder:text-gray-400"
                  autoFocus // Focus the modal input for immediate typing
                />
                <Button
                  type="submit"
                  disabled={status !== "ready"}
                  className="bg-purple-600 hover:bg-purple-700 text-white"
                >
                  Launch 🚀
                </Button>
              </form>
            </CardFooter>
          </Card>
        </div>
      )}
    </>
  )
}
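
Dropping the component into a page is then a one-liner. Here's a minimal sketch of a host page; the wrapper styles are placeholders, and the import path should match however your project's alias is configured:

/app/page.tsx

// Hypothetical host page; adjust the import to match your path alias and file location
import ChatInterface from "@/app/components/ChatInterface"

export default function Home() {
  return (
    <main className="mx-auto max-w-3xl px-4 py-12">
      {/* The assistant renders inline; focusing its input opens the full-screen modal version */}
      <ChatInterface />
    </main>
  )
}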

/app/api/assistant/route.ts

// app/api/assistant/route.ts
import OpenAI from "openai";

export const runtime = "nodejs"; // Ensure Node.js runtime for streaming SSE and server-side OpenAI SDK use

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY, // Read API key from environment
});

const USE_MOCK = process.env.USE_MOCK === "true"; // Toggle for local testing without hitting OpenAI
const VECTOR_STORE_ID = process.env.SPACEMAN_ASSISTANT_VECTOR_STORE_ID; // Pre-provisioned vector store for file_search tool

// Helper function to chunk text while preserving markdown formatting
// Splits on line boundaries to avoid breaking code blocks, lists, and headings
function chunkMarkdownText(text: string, maxChunkSize: number = 200): string[] {
  const chunks: string[] = [];
  const lines = text.split("\n");
  let currentChunk = "";

  for (const line of lines) {
    // If adding this line would exceed chunk size and we have content, start new chunk
    if (
      currentChunk.length + line.length + 1 > maxChunkSize &&
      currentChunk.length > 0
    ) {
      chunks.push(currentChunk + "\n"); // Keep the trailing newline so concatenated stream chunks don't fuse lines
      currentChunk = line;
    } else {
      currentChunk += (currentChunk ? "\n" : "") + line;
    }
  }

  // Add final chunk if it has content
  if (currentChunk.trim()) {
    chunks.push(currentChunk);
  }

  return chunks;
}

// Helper function to ensure markdown formatting
// Light-touch normalization to encourage markdown in plain responses
function ensureMarkdownFormat(text: string): string {
  // If the text already has markdown formatting, return as-is
  if (text.includes("##") || text.includes("**") || text.includes("- ")) {
    return text;
  }

  // Basic markdown conversion for plain text responses
  let formatted = text;

  // Convert bullet points to markdown
  formatted = formatted.replace(/^\s*β€’\s+/gm, "- ");
  formatted = formatted.replace(/^\s*-\s+([^-])/gm, "- $1");

  // Emphasize key names/titles
  formatted = formatted.replace(/Stephen Andekian/g, "**Stephen Andekian**");
  formatted = formatted.replace(/Spaceman Media/g, "**Spaceman Media**");

  // Add section headers if the content suggests them
  if (
    formatted.includes("key highlights") ||
    formatted.includes("background and roles")
  ) {
    formatted = formatted.replace(/(Some key highlights[^:]*:)/gi, "## $1");
  }

  return formatted;
}

export async function POST(req: Request) {
  try {
    const body = await req.json();
    console.log("Request body:", JSON.stringify(body, null, 2));

    // Extract user message from different possible formats
    let userMessage = "Hello"; // Default fallback

    console.log("Processing messages:", JSON.stringify(body.messages, null, 2));

    if (body.messages && body.messages.length > 0) {
      const lastMessage = body.messages[body.messages.length - 1];
      console.log("Last message:", JSON.stringify(lastMessage, null, 2));

      // Handle parts array format (Vercel AI SDK message shape)
      if (lastMessage.parts && lastMessage.parts.length > 0) {
        const textPart = lastMessage.parts.find(
          (part: any) => part.type === "text",
        );
        if (textPart && textPart.text) {
          userMessage = textPart.text;
          console.log("Extracted from parts:", userMessage);
        }
      }
      // Handle direct content format for compatibility
      else if (lastMessage.content) {
        userMessage = lastMessage.content;
        console.log("Extracted from content:", userMessage);
      }
    }
    // Fallback to body.text
    else if (body.text) {
      userMessage = body.text;
      console.log("Extracted from body.text:", userMessage);
    }

    console.log("Extracted user message:", JSON.stringify(userMessage));
    console.log("USE_MOCK:", USE_MOCK);

    const encoder = new TextEncoder();

    // Create a server-sent events stream compatible with Vercel AI SDK DefaultChatTransport
    const stream = new ReadableStream({
      start(controller) {
        // Helper to enqueue SSE-formatted data events
        const send = (data: any) => {
          const chunk = `data: ${JSON.stringify(data)}\n\n`;
          console.log("Sending chunk:", JSON.stringify(data));
          controller.enqueue(encoder.encode(chunk));
        };

        // Simple mock path for local testing without calling OpenAI
        const handleMockResponse = () => {
          try {
            const messageId = `msg_${Date.now()}`;

            // Begin a text message
            send({
              type: "text-start",
              id: messageId,
            });

            // Full response text
            const fullResponse = `I received your message: "${userMessage}". This is a mock response from the AI assistant working correctly with vector store integration!`;

            // Send entire response as one delta to keep the client pipeline consistent
            send({
              type: "text-delta",
              id: messageId,
              delta: fullResponse,
            });

            // End the text message
            send({
              type: "text-end",
              id: messageId,
            });

            // Finish event to tell client the turn is complete
            send({
              type: "finish",
            });

            controller.close();
          } catch (error: unknown) {
            console.error("Mock response error:", error);
            send({
              type: "error",
              errorText:
                error instanceof Error ? error.message : "Mock response error",
            });
            controller.close();
          }
        };

        // Real path that calls OpenAI Responses API with file_search tool against the vector store
        const handleResponsesAPIResponse = async () => {
          try {
            if (!process.env.OPENAI_API_KEY) {
              throw new Error(
                "OPENAI_API_KEY environment variable is required",
              );
            }

            console.log("Using Responses API");
            console.log("Vector Store ID:", VECTOR_STORE_ID);
            console.log("User message:", userMessage);

            // Hard rule to scope answers to vector store content only
            const restrictedInput = `CRITICAL INSTRUCTION: Answer ONLY using information from the uploaded files in the vector store. Do not use training data or general knowledge. If information is not available in the files, state clearly that it's not in your knowledge base.

USER QUESTION: ${userMessage}

FORMATTING REQUIREMENTS:
- Format your response in markdown
- Use proper headings (##, ###) for sections
- Use bullet points (-) for lists
- Use **bold** for emphasis on key points
- Include citations and be explicit about what information comes from the uploaded documents
- Structure the information clearly and professionally`;

            // Call OpenAI Responses API with tool for file_search bound to the vector store
            const response = (await openai.responses.create({
              model: "gpt-4o-mini", // Efficient model for RAG-style lookups
              max_output_tokens: 600,
              temperature: 0.2, // Low temperature for factual tone
              input: restrictedInput,
              tools: [
                {
                  type: "file_search",
                  vector_store_ids: [VECTOR_STORE_ID as string],
                },
              ],
            })) as any; // Loosen typing to handle evolving SDK shapes

            console.log(
              "Responses API response structure:",
              JSON.stringify(response, null, 2),
            );

            const messageId = `msg_${Date.now()}`;

            // Start the text message for the client
            send({
              type: "text-start",
              id: messageId,
            });

            // Extract assistant text from Responses API output shapes
            let responseContent = "";

            // Primary field on newer SDKs
            if (
              response.output_text &&
              typeof response.output_text === "string"
            ) {
              responseContent = response.output_text;
              console.log("Extracted from output_text:", responseContent);
            }
            // Fallback for array-based outputs
            else if (response.output && Array.isArray(response.output)) {
              const messageOutput = response.output.find(
                (item: any) => item.type === "message",
              );
              if (
                messageOutput &&
                messageOutput.content &&
                Array.isArray(messageOutput.content)
              ) {
                const textContent = messageOutput.content.find(
                  (item: any) => item.type === "output_text",
                );
                if (textContent && textContent.text) {
                  responseContent = textContent.text;
                  console.log("Extracted from output array:", responseContent);
                }
              }
            }

            if (!responseContent) {
              console.log("Could not extract response content from structure");
              responseContent =
                "I apologize, but I couldn't generate a response from the files in my knowledge base.";
            }

            console.log("Final response content:", responseContent);

            // Optionally normalize to markdown if needed before chunking
            // Note: you currently stream raw model markdown through chunker
            // const normalized = ensureMarkdownFormat(responseContent);

            // Send the response in markdown-aware chunks to preserve formatting and smooth streaming
            const chunks = chunkMarkdownText(responseContent);

            for (const chunk of chunks) {
              send({
                type: "text-delta",
                id: messageId,
                delta: chunk,
              });

              // Tiny delay to simulate natural streaming cadence
              await new Promise((resolve) => setTimeout(resolve, 30));
            }

            // Signal completion of the message and turn
            send({
              type: "text-end",
              id: messageId,
            });

            send({
              type: "finish",
            });

            controller.close();
          } catch (error: unknown) {
            console.error("Responses API error:", error);
            send({
              type: "error",
              errorText:
                error instanceof Error ? error.message : "Responses API error",
            });
            controller.close();
          }
        };

        // Choose between mock and real API based on environment variable
        if (USE_MOCK) {
          handleMockResponse();
        } else {
          handleResponsesAPIResponse();
        }
      },

      cancel() {
        console.log("Stream cancelled"); // Called if the client disconnects early
      },
    });

    // Return an SSE response that the client hook knows how to consume
    return new Response(stream, {
      headers: {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache, no-transform",
        Connection: "keep-alive",
        "Access-Control-Allow-Origin": "*", // CORS open for local testing
        "Access-Control-Allow-Methods": "POST",
        "Access-Control-Allow-Headers": "Content-Type",
      },
    });
  } catch (error: unknown) {
    console.error("API route error:", error);
    return new Response(JSON.stringify({ error: "Internal server error" }), {
      status: 500,
      headers: { "Content-Type": "application/json" },
    });
  }
}
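
The route depends on three environment variables, all referenced in the code above. Locally they would live in a .env.local file (the values below are placeholders); on Vercel, add the same keys under the project's environment settings:

/.env.local

# OpenAI Platform API key (server-side only; never expose it to the browser)
OPENAI_API_KEY=sk-...
# ID of the vector store holding your knowledge base files
SPACEMAN_ASSISTANT_VECTOR_STORE_ID=vs_...
# Set to "true" to exercise the chat UI without calling OpenAI
USE_MOCK=false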

Conclusion: The Future is Interactive, and It's Here Now

What started as a weekend experiment has fundamentally changed how I connect with potential clients, collaborators, and employers. Instead of hoping my static portfolio tells the right story to the right person, I now have an AI assistant that can intelligently respond to any question about my background, approach, and fit for specific roles.

But beyond the immediate practical benefits, this project represents something bigger: the evolution of how professionals will present themselves in an AI-driven world. Static resumes and portfolios are becoming as outdated as business cards. The future belongs to those who can create interactive, intelligent experiences that adapt to each viewer's specific needs and interests.

The Vibe Coding approach made this transformation possible. By collaborating with AI tools like V0, Claude, and ChatGPT, I was able to focus on the creative and strategic aspects of the project while letting AI handle the technical heavy lifting. The result is a more polished, thoughtful solution than I could have built alone, and it happened in a fraction of the time.

Your Turn to Build the Future

The tools and techniques I've shared aren't just for developers. Whether you're a designer, marketer, consultant, or any other professional, you can use this same approach to create your own interactive portfolio experience. The key is starting with your unique value proposition and letting AI help you bring it to life.

The portfolio of the future isn't just about showcasing what you've done; it's about demonstrating how you think, how you approach problems, and how you could specifically help solve the challenges your audience is facing right now.

Ready to transform your organization's digital presence? If you're looking for strategic guidance around digital marketing, AI implementation, or modern web development, I'd love to explore how we can work together. Book a meeting or reach out directly to discuss your specific challenges and how my approach to AI-driven digital strategy can help drive your next phase of growth.
