How do I build an AI chatbot?
Modern AI chatbots use LLMs instead of decision trees. The architecture: user sends a message → your backend sends the message plus context to an LLM API → the LLM responds → you display the response.

Minimum viable chatbot (1-2 days): call the OpenAI or Anthropic API directly. Send user messages with a system prompt that defines your bot's personality and knowledge boundaries, and include the conversation history for context. Deploy as a widget on your site. Cost: $0.001-0.01 per conversation.

Better chatbot with RAG (1-2 weeks): index your documentation, FAQs, and product information in a vector database. When a user asks a question, retrieve the relevant documents, include them in the LLM context, and generate an answer grounded in your actual content. This dramatically reduces hallucination.

Production chatbot (2-4 weeks): add streaming responses (shows text as it is generated), conversation persistence (users can return to previous chats), human escalation (detect when the bot cannot help and route to a human), analytics (track resolution rates, common questions, and failure cases), and feedback collection (thumbs up/down on responses).

Platforms: Voiceflow (no-code), Botpress (open-source), Intercom Fin (built into support), or custom build (most control). Custom is recommended when the chatbot is core to your product; off-the-shelf works for support widgets.
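The minimal loop can be sketched in a few lines. This assumes the OpenAI-style chat message schema ({"role": ..., "content": ...}); the actual network call is left as a pluggable function since it needs an SDK and an API key, and the bot's product ("Acme Widgets") is a made-up example.

```python
# Minimal chatbot loop: prepend a system prompt, replay the conversation
# history, append the new user message, and call the model.

SYSTEM_PROMPT = (
    "You are a support assistant for Acme Widgets. "  # hypothetical product
    "Only answer questions about Acme products; otherwise say you don't know."
)

def build_messages(history, user_message, system_prompt=SYSTEM_PROMPT):
    """Assemble the message list for one LLM call."""
    return (
        [{"role": "system", "content": system_prompt}]
        + history
        + [{"role": "user", "content": user_message}]
    )

def chat_turn(history, user_message, call_llm):
    """Run one turn: call the model, then record both sides in the history."""
    reply = call_llm(build_messages(history, user_message))
    history.append({"role": "user", "content": user_message})
    history.append({"role": "assistant", "content": reply})
    return reply

# In production, call_llm would wrap the OpenAI or Anthropic SDK call.
```

Because the history list is replayed on every call, the model "remembers" earlier turns without any server-side state beyond that list.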
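The RAG step is easiest to see with a toy retriever. This sketch scores documents by word overlap with the query and stuffs the top matches into the prompt; a real pipeline would embed the documents with an embedding model and query a vector database, and the document snippets here are invented for illustration.

```python
# Toy RAG retrieval: rank docs by shared words with the query, then build
# a prompt that grounds the model in the retrieved content.

DOCS = [
    "Refunds are available within 30 days of purchase.",
    "We ship worldwide and delivery takes 5 to 10 business days.",
    "Pro plans include priority email support.",
]

def retrieve(query, docs, k=2):
    """Return the k docs sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_rag_prompt(query, docs):
    """Instruct the model to answer only from the retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer using only the context below. "
        "If the answer is not in the context, say you don't know.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
```

The "answer only from the context" instruction is what makes the grounding work: the model is told to refuse rather than improvise when retrieval comes up empty.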
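Streaming can be sketched without an SDK. Here fake_stream stands in for a streaming API response (most SDKs expose one via a stream option), and render consumes it the way a chat widget would, pushing each chunk to the UI as it arrives.

```python
# Streaming sketch: consume the reply chunk by chunk instead of waiting
# for the full response, so the user sees text appear as it is generated.

def fake_stream(reply, chunk_size=8):
    """Yield the reply in small chunks, like a streaming API response."""
    for i in range(0, len(reply), chunk_size):
        yield reply[i:i + chunk_size]

def render(stream, on_chunk):
    """Push each chunk to the UI as it arrives; return the full reply."""
    parts = []
    for chunk in stream:
        parts.append(chunk)
        on_chunk(chunk)  # in a web app: flush over SSE or a websocket
    return "".join(parts)
```

The same render loop works unchanged whether the chunks come from this fake generator or a real API stream, which makes the streaming path easy to test.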
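Human escalation can start as a simple heuristic: route to a person when the bot's reply signals it cannot help. The trigger phrases below are assumptions for illustration; production systems often combine this with a classifier or a confidence score.

```python
# Escalation sketch: flag replies where the bot has effectively given up,
# so the conversation can be routed to a human agent.

GIVE_UP_PHRASES = ("i don't know", "i'm not sure", "contact support")

def needs_human(reply):
    """True if the reply contains a phrase suggesting the bot is stuck."""
    r = reply.lower()
    return any(phrase in r for phrase in GIVE_UP_PHRASES)
```

Logging every reply that trips this check also feeds the analytics step: the flagged conversations are exactly your failure cases.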