A 2-day workshop to build a customized chatbot using the Groq API and Streamlit.
- Total Duration: 4 hours (2 hours per day)
- Format: Online
- Final Project: Customized chatbot with Groq API and Streamlit
Participants should have:
- Basic understanding of programming concepts
- Computer with internet connection
- Admin rights to install software on their machine
- Installing Miniconda
- Setting up VS Code
- Creating a virtual environment
- Installing required packages
- Variables, data types, and functions
- Working with dictionaries and lists
- Understanding API concepts
- What Groq and LLM APIs are
- Setting up Groq API account and getting API keys
- Making basic API calls
- Understanding response structure
- What is Streamlit
- Basic Streamlit components
- Creating simple Streamlit apps
- Designing the chat interface
- Managing chat history
- Connecting to Groq API
- Adding model selection
- Implementing temperature control
- Creating persona selection
- Adding system prompts
- Testing the chatbot
- Troubleshooting
- Sharing the app
The chatbot we'll build follows this architecture:
- User Interface: Streamlit web app
- Business Logic: Python backend
- AI Service: Groq API integration
- Customization: Model selection, personas, and parameters
View Full Architecture Diagram
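To make the layering concrete, here is a minimal sketch of how the three layers interact. The function names are illustrative and the model name is one of those used later in the workshop; the full `app.py` later in this guide replaces all of this.

```python
import os
import groq  # Groq SDK client; the API key is read from the environment here
from dotenv import load_dotenv

load_dotenv()

# AI service layer: a single call to the Groq API (model name is illustrative)
def ask_groq(messages, model="llama3-8b-8192"):
    client = groq.Client(api_key=os.getenv("GROQ_API_KEY"))
    response = client.chat.completions.create(messages=messages, model=model)
    return response.choices[0].message.content

# Business logic layer: build the message list from the customization settings
def build_messages(system_prompt, user_input):
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]

# UI layer (Streamlit) would collect user_input and display the reply, e.g.:
#   user_input = st.chat_input("Ask something...")
#   st.markdown(ask_groq(build_messages("You are a helpful assistant.", user_input)))
```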
- Download Miniconda:
  - Windows: Miniconda Windows Installer
  - macOS: Miniconda macOS Installer
  - Linux: Miniconda Linux Installer
- Install Miniconda:
  - Windows: Double-click the installer and follow the instructions
  - macOS: Run `bash Miniconda3-latest-MacOSX-x86_64.sh` in Terminal
  - Linux: Run `bash Miniconda3-latest-Linux-x86_64.sh` in Terminal
- Verify installation:
  ```
  conda --version
  ```
- Download VS Code:
  - Visit code.visualstudio.com
  - Download the appropriate version for your OS
- Install VS Code:
  - Run the installer and follow the instructions
- Install the Python extension:
  - Open VS Code
  - Go to Extensions (Ctrl+Shift+X or Cmd+Shift+X)
  - Search for "Python" and install the Microsoft Python extension
- Create a new project folder:
  ```
  mkdir groq-chatbot
  cd groq-chatbot
  ```
- Create and activate a new conda environment:
  ```
  conda create -n chatbot-env python=3.10
  conda activate chatbot-env
  ```
- Install the required packages (a quick import check follows this list):
  ```
  pip install streamlit groq python-dotenv
  ```
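To confirm the packages installed correctly, you can run a short check like the one below. This is a sketch; the filename `check_env.py` is just a suggestion, and it reads version numbers from package metadata rather than from the packages themselves.

```python
# check_env.py -- quick sanity check that the workshop dependencies are installed
from importlib.metadata import version, PackageNotFoundError

for package in ("streamlit", "groq", "python-dotenv"):
    try:
        print(f"{package}: {version(package)}")
    except PackageNotFoundError:
        print(f"{package}: NOT INSTALLED - re-run the pip install command above")
```

Run it with `python check_env.py` inside the activated `chatbot-env` environment.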
Create the following files in your project directory:
```
groq-chatbot/
│
├── .env              # Environment variables (API keys)
├── requirements.txt  # Project dependencies
├── app.py            # Main Streamlit application
└── README.md         # Project documentation
```
Add the dependencies to `requirements.txt`:

```
streamlit>=1.32.0
groq>=0.4.0
python-dotenv>=1.0.0
```

And add a placeholder for your API key to `.env`:

```
GROQ_API_KEY=your_groq_api_key_here
```
To make it easier to follow along, we've created project checkpoints at different stages:
- Stage 1: Basic Setup - Environment and dependencies
- Stage 2: Basic Streamlit UI - Simple UI without API calls
- Stage 3: Basic Chatbot - Chatbot with basic Groq integration
- Stage 4: Complete Chatbot - Fully customized chatbot
If you fall behind during the workshop, you can copy one of these checkpoints to catch up.
Create a file named `.env` in your project directory and add your Groq API key:

```
GROQ_API_KEY=your_groq_api_key_here
```
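Before wiring the key into the app, you can optionally confirm that python-dotenv finds it. This is a small sanity-check sketch using the same `load_dotenv`/`os.getenv` pattern the app uses; the filename `check_key.py` is just a suggestion.

```python
# check_key.py -- verify that the .env file is found and the key is loaded
import os
from dotenv import load_dotenv

load_dotenv()  # looks for a .env file in the current working directory

key = os.getenv("GROQ_API_KEY")
if key:
    print(f"GROQ_API_KEY loaded (starts with {key[:4]}...)")
else:
    print("GROQ_API_KEY not found - check the .env file's location and the variable name")
```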
Create a file named `app.py` with the following code, which includes enhanced customization options such as character personas and moods:
```python
import streamlit as st
import groq
import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Initialize Groq client
client = groq.Client(api_key=os.getenv("GROQ_API_KEY"))

# Set page configuration
st.set_page_config(
    page_title="Custom Chatbot",
    page_icon="🤖",
    layout="wide"
)

# Set app title
st.title("🤖 Custom Chatbot with Groq API")

# Sidebar for customization options
st.sidebar.title("Customize Your Chatbot")

# Model selection
model_options = {
    "Llama 3 8B": "llama3-8b-8192",
    "Llama 3 70B": "llama3-70b-8192",
    "Mixtral 8x7B": "mixtral-8x7b-32768",
    "Gemma 7B": "gemma-7b-it"
}
selected_model = st.sidebar.selectbox("Select Model", list(model_options.keys()))
model = model_options[selected_model]

# Temperature setting
temperature = st.sidebar.slider("Temperature", min_value=0.0, max_value=1.0, value=0.7, step=0.1)
st.sidebar.caption("Higher values make output more random, lower values more deterministic")

# Character persona selection
character_options = {
    "Default Assistant": "You are a helpful assistant.",
    "Mario": "You are Mario from Super Mario Bros. Respond with Mario's enthusiasm, use his catchphrases like 'It's-a me, Mario!' and 'Wahoo!' Make references to Princess Peach, Luigi, Bowser, and the Mushroom Kingdom. End messages with 'Let's-a go!'",
    "Sherlock Holmes": "You are Sherlock Holmes, the world's greatest detective. Be analytical, observant, and use complex vocabulary. Make deductions based on small details. Occasionally mention Watson, London, or your address at 221B Baker Street.",
    "Pirate": "You are a pirate from the golden age of piracy. Use pirate slang, say 'Arr', 'matey', and 'ye' frequently. Talk about treasure, the sea, your ship, and adventures. Refer to the user as 'landlubber' or 'me hearty'.",
    "Shakespeare": "You are William Shakespeare. Speak in an eloquent, poetic manner using Early Modern English. Use thee, thou, thy, and hath. Include metaphors, similes, and occasionally quote from your famous plays and sonnets.",
    "Robot": "You are a robot with artificial intelligence. Speak in a logical, precise manner with occasional computing terminology. Sometimes add *processing* or *analyzing* actions. Use phrases like 'Affirmative' instead of 'Yes'."
}
selected_character = st.sidebar.selectbox("Select Character", list(character_options.keys()))
character_prompt = character_options[selected_character]

# Mood selection
mood_options = {
    "Neutral": "",
    "Happy": "You are extremely happy, cheerful, and optimistic. Use upbeat language, exclamation marks, and express enthusiasm for everything.",
    "Sad": "You are feeling melancholic and somewhat pessimistic. Express things with a hint of sadness and occasionally sigh.",
    "Excited": "You are very excited and energetic! Use LOTS of exclamation points!!! Express wonder and amazement at everything!",
    "Grumpy": "You are grumpy and slightly annoyed. Complain about minor inconveniences and use sarcasm occasionally.",
    "Mysterious": "You are mysterious and enigmatic. Speak in riddles sometimes and hint at knowing more than you reveal."
}
selected_mood = st.sidebar.selectbox("Select Mood", list(mood_options.keys()))
mood_prompt = mood_options[selected_mood]

# Combine character and mood
system_prompt = character_prompt
if mood_prompt:
    system_prompt += " " + mood_prompt

# Custom system prompt option
use_custom_prompt = st.sidebar.checkbox("Use Custom System Prompt")
if use_custom_prompt:
    system_prompt = st.sidebar.text_area("Enter Custom System Prompt", value=system_prompt, height=100)

# Response style settings
st.sidebar.subheader("Response Settings")
max_tokens = st.sidebar.slider("Response Length", min_value=50, max_value=4096, value=1024, step=50)
emoji_use = st.sidebar.select_slider("Emoji Usage", options=["None", "Minimal", "Moderate", "Abundant"], value="Minimal")

# Add emoji instruction to prompt based on selection
if emoji_use == "None":
    system_prompt += " Do not use any emojis in your responses."
elif emoji_use == "Abundant":
    system_prompt += " Use plenty of relevant emojis throughout your responses."
elif emoji_use == "Moderate":
    system_prompt += " Use some emojis occasionally in your responses."
# No need to add anything for "Minimal" as it's the default

# Add link to cheat sheet
st.sidebar.markdown("---")
st.sidebar.markdown("[📋 Chatbot Customization Cheat Sheet](Cheat_Sheets/README.md)")

# Initialize session state for chat history
if "messages" not in st.session_state:
    st.session_state.messages = [{"role": "system", "content": system_prompt}]
elif st.session_state.messages[0]["role"] == "system":
    # Update system prompt if it changed
    st.session_state.messages[0]["content"] = system_prompt
else:
    # Add system prompt if it doesn't exist
    st.session_state.messages.insert(0, {"role": "system", "content": system_prompt})

# Display chat messages, excluding the system prompt
for message in st.session_state.messages:
    if message["role"] != "system":
        with st.chat_message(message["role"]):
            st.markdown(message["content"])

# Get user input
user_input = st.chat_input("Ask something...")

# Process user input
if user_input:
    # Add user message to chat history
    st.session_state.messages.append({"role": "user", "content": user_input})

    # Display user message
    with st.chat_message("user"):
        st.markdown(user_input)

    # Display assistant response
    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        try:
            # Call Groq API
            response = client.chat.completions.create(
                messages=st.session_state.messages,
                model=model,
                temperature=temperature,
                max_tokens=max_tokens
            )
            assistant_response = response.choices[0].message.content

            # Display the response
            message_placeholder.markdown(assistant_response)

            # Add assistant response to chat history
            st.session_state.messages.append({"role": "assistant", "content": assistant_response})
        except Exception as e:
            error_message = f"Error: {str(e)}"
            message_placeholder.error(error_message)

# Add a reset button
if st.sidebar.button("Reset Conversation"):
    # Keep the system prompt but clear the conversation
    system_prompt = st.session_state.messages[0]["content"]
    st.session_state.messages = [{"role": "system", "content": system_prompt}]
    st.rerun()

# Display API information
st.sidebar.divider()
st.sidebar.caption(f"Using model: {model}")
if not os.getenv("GROQ_API_KEY"):
    st.sidebar.warning("⚠️ Groq API Key not found. Please add it to your .env file.")
```
To run the application, activate your environment and use the `streamlit` command:

```
conda activate chatbot-env
streamlit run app.py
```
This will start the Streamlit server and open the application in your default web browser.
- Verify that the chatbot loads correctly in your browser
- Test basic conversation with the default settings
- Try different models and observe response differences
- Adjust temperature and observe changes in creativity/randomness
- Test different personas and see how responses change
- Create a custom system prompt for specific use cases
- Test error handling by temporarily providing an invalid API key
- API Key Not Working
  - Verify that your API key is correct
  - Check that the .env file is in the correct location
  - Ensure the dotenv package is loading correctly (the standalone test script after this list can help isolate the issue)
- Model Not Responding
  - Check your internet connection
  - Verify that you haven't exceeded Groq API limits
  - Try a different model
- Streamlit App Not Loading
  - Verify all dependencies are installed
  - Check for syntax errors in your code
  - Restart the Streamlit server
- Slow Responses
  - Larger models take longer to respond
  - Consider using a smaller model for testing
  - Check your internet connection
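For the API key and model issues above, it often helps to take Streamlit out of the picture and call the Groq API directly. Below is a minimal test sketch; the filename `test_groq.py` is just a suggestion, and the model name is one of those used in the workshop.

```python
# test_groq.py -- call the Groq API once, outside Streamlit, to isolate problems
import os
import groq
from dotenv import load_dotenv

load_dotenv()

client = groq.Client(api_key=os.getenv("GROQ_API_KEY"))
try:
    response = client.chat.completions.create(
        model="llama3-8b-8192",  # swap in any model from the app's model_options
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(response.choices[0].message.content)
except Exception as e:
    # Authentication errors usually point to the API key; timeouts point to the network or rate limits
    print(f"API call failed: {e}")
```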
After completing the workshop, consider these enhancements:
- Add file upload capabilities for document Q&A
- Implement chat history saving and loading (a minimal sketch follows this list)
- Add speech-to-text and text-to-speech features
- Implement memory management for longer conversations
- Add RAG (Retrieval-Augmented Generation) capabilities
- Deploy your chatbot to Streamlit Cloud or other hosting services
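For the chat history idea above, one possible starting point is serializing `st.session_state.messages` to JSON. The sketch below assumes it is added to `app.py` (so `st` is already imported); the file path and button labels are illustrative.

```python
# A possible addition to app.py: save/load the conversation as JSON (sketch only)
import json

HISTORY_FILE = "chat_history.json"  # illustrative path

if st.sidebar.button("Save Conversation"):
    with open(HISTORY_FILE, "w", encoding="utf-8") as f:
        json.dump(st.session_state.messages, f, ensure_ascii=False, indent=2)
    st.sidebar.success("Conversation saved")

if st.sidebar.button("Load Conversation"):
    try:
        with open(HISTORY_FILE, "r", encoding="utf-8") as f:
            st.session_state.messages = json.load(f)
        st.rerun()
    except FileNotFoundError:
        st.sidebar.error("No saved conversation found")
```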