Creating a Chatbot from Scratch and Vibe Coding the UIπŸ’ƒ

4 min read β€’ 6/21/2025



Hey all,

I hope you remember me. (Yes?? LMK in the comment section.)

In this blog, I will discuss Radhika: Adaptive Reasoning & Intelligence Assistant. It provides specialized assistance across six distinct modes: General, Productivity, Wellness, Learning, Creative, and BFF.

Radhika

Radhika is a versatile AI chatbot designed to assist with a wide range of tasks, from answering questions to providing recommendations and engaging in casual conversation.

radhika-sharma.vercel.app

(try it out, give feedback and suggestions, request changes)

 

πŸ› οΈ Tech Stack

Frontend

  • Framework: Next.js 14 with App Router and React 18
  • Styling: Tailwind CSS with custom design system
  • Components: shadcn/ui component library
  • Icons: Lucide React icon library
  • 3D Graphics: Three.js for particle visualizations
  • Animations: CSS transitions and keyframe animations

AI & Backend

  • AI Integration: Vercel AI SDK for unified LLM access
  • Providers: Groq, Google Gemini, OpenAI, Claude
  • Speech: WebKit Speech Recognition and Synthesis APIs
  • Storage: Browser localStorage for chat persistence and settings
  • API: Next.js API routes for secure LLM communication

Development

  • Language: TypeScript for type safety
  • Build: Next.js build system with optimizations
  • Deployment: Vercel-ready with environment variable support
  • Performance: Optimized bundle splitting and lazy loading

 

πŸš€ Implementing Main Logic

This section breaks down how the app/api/chat/route.ts endpoint processes requests, selects models, applies system prompts, and streams responses using different AI providers.

1. Parse Request

The request handler begins by parsing the JSON body from the incoming POST request:

const body = await req.json();
const { messages, mode = "general", provider = "groq", apiKey } = body;
  • messages: The conversation history sent by the client.
  • mode: Determines which system prompt to use (e.g., bff, learning, etc.).
  • provider: Specifies the AI backend to use (groq, openai, claude, gemini).
  • apiKey: A user-supplied key, required when the provider is OpenAI or Claude.

The code also validates whether the messages array exists and is non-empty.
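A minimal sketch of that parse-and-validate step could look like the following. The type names (`ChatMessage`, `ChatRequest`) and the helper `validateChatRequest` are my own, not necessarily what the repo uses:

```typescript
// Hypothetical shape of the incoming request body.
interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

interface ChatRequest {
  messages: ChatMessage[];
  mode?: string;
  provider?: string;
  apiKey?: string;
}

// Validate the parsed body before touching any provider.
// Returns an error message, or null if the request is valid.
function validateChatRequest(body: Partial<ChatRequest>): string | null {
  if (!Array.isArray(body.messages) || body.messages.length === 0) {
    return "messages must be a non-empty array";
  }
  return null;
}
```

Returning an error string (instead of throwing) lets the route respond with a clean 400 before any provider work happens.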

2. Assign System Prompt

Based on the selected mode, a system prompt is selected to guide the assistant's personality and purpose:

const systemPrompt = SYSTEM_PROMPTS[mode] || SYSTEM_PROMPTS.general;

Examples of modes include:

  • productivity
  • bff
  • creative
  • wellness
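The lookup itself can be as simple as a keyed record with a fallback. The prompt texts below are placeholders I wrote for illustration, not the ones Radhika actually ships with:

```typescript
// Placeholder prompts; the real ones define each mode's full personality.
const SYSTEM_PROMPTS: Record<string, string> = {
  general: "You are a helpful, friendly assistant.",
  productivity: "You are a productivity coach focused on planning and time management.",
  wellness: "You are a supportive wellness guide. Be sensitive and encouraging.",
  learning: "You are a patient learning mentor who builds study plans.",
  creative: "You are a creative partner for brainstorming and writing.",
  bff: "You are the user's GenZ best friend. Keep it casual and supportive.",
};

// Unknown or missing modes fall back to the general prompt.
function getSystemPrompt(mode: string): string {
  return SYSTEM_PROMPTS[mode] ?? SYSTEM_PROMPTS.general;
}
```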

3. Route to the Correct Provider

The provider field determines which AI model backend to use:

  • Gemini (google): Uses Google's Gemini 2.0 model.
  • OpenAI: Uses GPT models (like gpt-4o, gpt-3.5-turbo).
  • Claude: Uses Anthropic models (like claude-3-sonnet).
  • Groq: Defaults to models like llama-3 and qwen.

Each provider has custom logic to instantiate the model, handle errors, and stream the response using:

await streamText({...})
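The per-provider branching can be sketched as a small resolver that validates the provider name and enforces the user-key requirement before any model is instantiated. The function name and error messages here are my own; in the real route, this is also where each provider's SDK model would be created:

```typescript
type Provider = "groq" | "gemini" | "openai" | "claude";

// OpenAI and Claude are assumed to need a user-supplied key here;
// Groq and Gemini would fall back to server-side environment variables.
function resolveProvider(provider: string, apiKey?: string): Provider {
  const known: Provider[] = ["groq", "gemini", "openai", "claude"];
  if (!known.includes(provider as Provider)) {
    throw new Error(`Unknown provider: ${provider}`);
  }
  if ((provider === "openai" || provider === "claude") && !apiKey) {
    throw new Error(`${provider} requires a user-supplied API key`);
  }
  return provider as Provider;
}
```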

4. Model Selection (Groq Only)

If the provider is groq, model selection is dynamic. It analyzes the last message to determine the type of task:

if (lastMessage.includes("analyze") || lastMessage.includes("plan")) {
  modelType = "reasoning";
} else if (lastMessage.includes("creative") || lastMessage.includes("design")) {
  modelType = "creative";
} else {
  modelType = "fast";
}

RADHIKA automatically selects the best model based on your query complexity:

// Determine which model to use based on conversation context
let modelType = "fast"; // llama-3.1-8b-instant for quick responses

// Use the reasoning model for complex analytical tasks
const reasoningKeywords = ["analyze", "compare", "plan", "strategy", "decision", "problem"];
if (reasoningKeywords.some((keyword) => query.includes(keyword))) {
  modelType = "reasoning"; // llama-3.3-70b-versatile
}

// Use the creative model for artistic and innovative tasks
const creativeKeywords = ["creative", "brainstorm", "idea", "write", "design", "story"];
if (creativeKeywords.some((keyword) => query.includes(keyword))) {
  modelType = "creative"; // qwen/qwen3-32b
}


Model Configuration

Customize model selection in the API route:

const MODELS = {
  groq: {
    fast: "llama-3.1-8b-instant",
    reasoning: "llama-3.3-70b-versatile", 
    creative: "qwen/qwen3-32b"
  },
  gemini: { default: "gemini-2.0-flash" },
  openai: { default: "gpt-4o" },
  claude: { default: "claude-3-5-sonnet-20241022" }
}
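With that configuration in place, a small helper can resolve the concrete model id. The `MODELS` object is reproduced from above; `pickModel` is my own name for the lookup:

```typescript
const MODELS = {
  groq: {
    fast: "llama-3.1-8b-instant",
    reasoning: "llama-3.3-70b-versatile",
    creative: "qwen/qwen3-32b",
  },
  gemini: { default: "gemini-2.0-flash" },
  openai: { default: "gpt-4o" },
  claude: { default: "claude-3-5-sonnet-20241022" },
} as const;

// Groq honors the dynamic modelType; other providers use their default model.
function pickModel(
  provider: keyof typeof MODELS,
  modelType: "fast" | "reasoning" | "creative" = "fast"
): string {
  if (provider === "groq") return MODELS.groq[modelType];
  return MODELS[provider].default;
}
```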

Then the appropriate model (reasoning, creative, or fast) is selected and used for the response.

 

πŸ“„ Multi-Provider Flow

(Diagram: multi-provider request flow)

This approach allows a single API route to serve multiple model providers and assistant personalities while maintaining clean, scalable logic.

 

If you're interested in the other logic, like voice recognition and speech synthesis, light/dark mode, etc., then please check out the GitHub repo:

RS-labhub / Radhika

Your day-2-day life assistant and bff 💞


RADHIKA - Adaptive Reasoning & Intelligence Assistant

A sophisticated AI-powered assistant built with Next.js and powered by multiple LLM providers including Groq, Gemini, OpenAI, and Claude. RADHIKA adapts to different modes of interaction, providing specialized assistance for productivity, wellness, learning, creative tasks, and even acts as your GenZ bestie!

🎬 Project Showcase

  • 🎬 YouTube Demo — Click the image to watch the full demo.
  • 📝 Blog Post — Read the blog for an in-depth explanation.

✨ Features

🎯 Multi-Mode Intelligence

  • General Assistant: All-purpose AI companion for everyday queries and conversations
  • Productivity Coach: Task management, planning, organization, and time optimization expert
  • Wellness Guide: Health, fitness, mental well-being, and self-care support with sensitive guidance
  • Learning Mentor: Personalized education, skill development, and study planning
  • Creative Partner: Brainstorming, ideation, creative projects, and artistic inspiration
  • BFF Mode: Your GenZ bestie who speaks your language, provides emotional support, and vibes with…

 

Vibe Coding the UI

Once you have successfully implemented the main logic of your application, use AI tools like v0, Lovable, or Bolt to create an interface that matches your "thoughts".

I used v0 and ChatGPT. Prompting... prompting... and never-ending prompting... Check out the video below for a simple, short walkthrough of this project and its features. And remember, you have live access to the app too!

If you like it, then please star the repo 🌠 and follow me on GH.

 

Key Highlights

  • πŸ€– Multi-Modal AI - Six specialized assistant personalities in one app
  • ⚑ Multi-Provider Support - Groq, Gemini, OpenAI, and Claude integration
  • 🎀 Advanced Voice - Speech-to-text input and text-to-speech output
  • 🎨 Dynamic 3D Visuals - Interactive particle system with mode-based colors
  • πŸ’Ύ Smart Persistence - Automatic chat history saving per mode
  • πŸš€ Quick Actions - One-click access to common tasks per mode
  • πŸ“Š Real-time Analytics - Live usage statistics and AI activity monitoring
  • πŸŒ™ Beautiful UI - Responsive design with dark/light themes

Modes

  • Productivity: Task planning, project management, time optimization
  • Wellness: Health guidance, fitness routines, mental well-being support
  • Learning: Educational assistance, study plans, skill development
  • Creative: Brainstorming, content creation, artistic inspiration
  • General: Problem-solving, decision-making, everyday conversations
  • BFF: Emotional support, casual chats, GenZ-friendly interactions

Perfect for users who need a versatile AI assistant that adapts to different contexts, maintains conversation history across specialized domains, and provides an engaging visual experience with advanced voice capabilities.

 

Conclusion

Radhika is a sophisticated AI-powered assistant built with Next.js and powered by multiple LLM providers including Groq, Gemini, OpenAI, and Claude. RADHIKA adapts to different modes of interaction, providing specialized assistance for productivity, wellness, learning, creative tasks, and even acts as your GenZ bestie!

I personally suggest you try the "BFF" mode. You will like it for sure.

Once again, here are the links you don't want to miss out on:

Thank you for reading. You're wonderful. And I mean it. Ba-bye, see you in the next blog. (and PLEASE SPAM THE COMMENT SECTION AS ALWAYS)