How to Build a Simple Web-Based AI Chat

Are you interested in integrating multiple AI language models into a single web chat interface? This tutorial will walk you through building a simple yet powerful web-based chat application using GROQ as your one-stop API service. With GROQ, you can seamlessly access various large language models (LLMs) from a single endpoint.

1. Introduction

GROQ provides a convenient developer console where you can manage API keys, monitor usage, and select from a variety of AI models, ranging from DeepSeek’s distilled Llama variants to Qwen, Gemma, Mixtral, and more. Because they all sit behind GROQ’s single endpoint, you can switch between these models on the fly.

In this tutorial, you’ll learn how to:

  • Obtain your API key from GROQ.

  • Embed a simple HTML, CSS, and JavaScript snippet on your website.

  • Build a user interface (UI) with a dropdown to select the AI model.

  • Send user messages to the selected model and display the responses in a chat window.

  • Enforce daily usage limits (locally in the browser) to prevent excessive calls.

2. Prerequisites

  1. Basic HTML/CSS/JavaScript Knowledge:

    You should be comfortable with creating a simple webpage and embedding JavaScript.

  2. GROQ Account:

    Sign up or log in at groq.com.

  3. GROQ API Key:

    • Go to Developers → Start Building to open the console.

    • In the left sidebar, choose API Keys and create a new key or copy an existing one.

    • Keep this key safe. For a small project or demo you can embed it directly in your code; for production, you’d hide it behind your own server.

3. Getting Your GROQ API Key

  1. Sign in at groq.com.

  2. Click on Developers → Start Building to open the console.

  3. In the left sidebar, find API Keys.

  4. Generate or copy your existing API key.

  5. Keep it handy; we’ll place it in the GROQ_API_KEY variable in our code snippet.

You can also explore the other sidebar options:

  • Models: Check out the different model families, their token limits, and daily usage caps.

  • Usage: Monitor how many tokens or requests you’ve consumed.

  • Docs: Read detailed API references and advanced configuration.

4. The Code Snippet

Below is an example of a minimal HTML, CSS, and JavaScript setup for a web chat interface. It allows you to:

  • Display a dropdown of various AI models (all accessible via the single GROQ endpoint).

  • Let users enter their questions in a text box.

  • Send queries to the selected model, then display responses in a chat-like format.

  • Impose a daily usage limit (in this example, 10 requests per day—stored locally in localStorage).
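Here is a minimal sketch of such a page. The model IDs, element IDs, and styling are illustrative assumptions (check the Models page in the console for the current model list), and YOUR_API_KEY_HERE must be replaced with your own key:

```html
<div id="chat-window" style="border:1px solid #ccc; height:300px; overflow-y:auto; padding:8px;"></div>
<select id="model-select">
  <option value="deepseek-r1-distill-llama-70b">DeepSeek R1 70b</option>
  <option value="qwen-2.5-32b">Qwen 2.5 32b</option>
</select>
<input id="user-input" type="text" placeholder="Type your message...">
<button id="send-btn">Send</button>

<script>
  // Demo only: for production, keep the key on a server (see Security Considerations).
  const GROQ_API_KEY = "YOUR_API_KEY_HERE";
  const DAILY_LIMIT = 10; // requests per day, tracked in localStorage
  const messages = [];

  // Builds a date-stamped key such as usage_overall_2025-01-31.
  function usageKey(prefix) {
    return prefix + "_" + new Date().toISOString().slice(0, 10);
  }
  function getUsage(key) {
    return parseInt(localStorage.getItem(key) || "0", 10);
  }
  function bumpUsage(key) {
    localStorage.setItem(key, String(getUsage(key) + 1));
  }

  function renderChat() {
    const win = document.getElementById("chat-window");
    win.innerHTML = messages
      .map(m => "<p><strong>" + m.role + ":</strong> " + m.content + "</p>")
      .join("");
    win.scrollTop = win.scrollHeight;
  }

  // Sends the whole conversation to the selected model via GROQ's
  // OpenAI-compatible chat completions endpoint.
  async function tryTextModelOnce(model) {
    const jsonData = { model: model, messages: messages };
    const response = await fetch("https://api.groq.com/openai/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + GROQ_API_KEY
      },
      body: JSON.stringify(jsonData)
    });
    if (response.status === 429) return "429_ERROR";
    if (!response.ok) return "ERROR! Please try again.";
    const data = await response.json();
    const responseText = data.choices && data.choices[0] && data.choices[0].message
      ? data.choices[0].message.content
      : null;
    return responseText;
  }

  document.getElementById("send-btn").addEventListener("click", async () => {
    const input = document.getElementById("user-input");
    const text = input.value.trim();
    if (!text) return;

    const model = document.getElementById("model-select").value;
    if (getUsage(usageKey("usage_overall")) >= DAILY_LIMIT) {
      messages.push({ role: "assistant", content: "Daily limit reached. Please try again tomorrow." });
      renderChat();
      return;
    }

    messages.push({ role: "user", content: text });
    input.value = "";
    renderChat();

    bumpUsage(usageKey("usage_overall"));
    bumpUsage(usageKey("usage_model_" + model));

    let assistantReply = await tryTextModelOnce(model);
    if (!assistantReply) assistantReply = "No suitable answer was found.";
    messages.push({ role: "assistant", content: assistantReply });
    renderChat();
  });
</script>
```

Drop this into any HTML page, replace the placeholder key, and restyle the elements however you like.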

5. How It Works

  1. Model Selection:
    A <select> dropdown allows the user to pick a model from a curated list (e.g., “DeepSeek R1 70b,” “Qwen 2.5 32b,” etc.). All these models are accessible via GROQ’s single endpoint.

  2. User Input & Display:

    • The user types a message in the input box and clicks Send.

    • The message is pushed into a messages array as { role: "user", content: ... }.

    • The chat window is re-rendered with the updated conversation.

  3. API Call to GROQ:

    • tryTextModelOnce constructs a jsonData object containing the selected model and the entire conversation.

    • It sends a POST request to https://api.groq.com/openai/v1/chat/completions with the GROQ_API_KEY as an authorization header.

    • If successful, the response is parsed for the generated text (responseText).

  4. Response Handling:

    • If the server returns a 429 status, the code sets assistantReply to "429_ERROR".

    • Any other HTTP error triggers "ERROR! Please try again."

    • If the returned text is empty or null, the chat displays "No suitable answer was found."

    • Otherwise, the final text is appended to the conversation as { role: "assistant", content: ... }.

  5. Local Usage Limit:

    • This snippet demonstrates a simple daily limit of 10 requests.

    • The usage is tracked in localStorage under keys like usage_overall_YYYY-MM-DD and usage_model_<modelId>_YYYY-MM-DD.

    • Once the limit is exceeded, the chat warns the user and disallows further requests.
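The limit bookkeeping described above can be sketched as plain functions over a key/value store. Here `store` stands in for the browser’s localStorage (which also stores strings), so the logic can be exercised outside a browser:

```javascript
// Builds a date-stamped key such as usage_overall_2025-01-31; the `date`
// parameter exists only so the sketch is easy to test outside a browser.
function usageKey(prefix, date = new Date()) {
  return prefix + "_" + date.toISOString().slice(0, 10);
}

// `store` is any plain key/value object; in the page it would be localStorage.
function underLimit(store, key, limit) {
  return parseInt(store[key] || "0", 10) < limit;
}

function recordRequest(store, key) {
  store[key] = String(parseInt(store[key] || "0", 10) + 1);
}

// Example: ten recorded requests exhaust a limit of 10.
const store = {};
const key = usageKey("usage_overall", new Date("2025-01-31T12:00:00Z"));
for (let i = 0; i < 10; i++) recordRequest(store, key);
// underLimit(store, key, 10) is now false
```

Because the keys include the date, the counters effectively reset each day without any cleanup code; note that a user can always clear localStorage, so this is a courtesy limit, not an enforcement mechanism.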

6. Security Considerations

  • API Key Exposure:
    Because the API key is placed in your front-end code, it can be seen by anyone opening the browser’s developer tools. This is okay for demos or hobby projects but not recommended for production. For secure usage, you’d typically proxy requests through your own server so the key remains hidden.

  • Usage & Billing:
    Monitor your usage in the GROQ console to avoid unexpected charges. Adjust daily limits as needed.

7. Conclusion

With this simple setup, you have a fully functional AI-powered chat that can switch between multiple large language models using a single GROQ API endpoint. You can easily extend it by:

  • Integrating server-side code to securely store the API key.

  • Adding more advanced usage analytics or authentication.

  • Customizing the UI/UX (styling, theming, or streaming responses).

Try it out, explore different models in GROQ’s console, and create unique AI-driven experiences on your own website!