AI Configuration Guide

This guide explains how to configure AI chat assistants for your courses, including provider selection, model configuration, and custom backend setup.

Overview

The LMS supports multiple AI providers that can be configured on a per-course basis:

  • OpenAI (GPT models) - Default provider for general-purpose AI
  • Anthropic (Claude models) - Advanced reasoning and complex tasks
  • Google AI (Gemini models) - Multimodal capabilities
  • Custom Backend - Self-hosted solutions (Ollama, LM Studio, etc.)
  • Mock Provider - Testing and development

Accessing AI Configuration

For Instructors

  1. Navigate to your course
  2. Click on Course Settings in the sidebar
  3. Select the AI Configuration tab
  4. Configure your preferred AI provider and settings

Prerequisites

  • Instructor or administrator role for the course
  • (Optional) API keys for commercial AI providers
  • (Optional) Self-hosted AI backend for custom deployments

Provider Configuration

OpenAI (GPT)

OpenAI's GPT models provide general-purpose AI capabilities with strong performance across various tasks.

Configuration Steps:

  1. Select OpenAI as the provider
  2. (Optional) Enter your OpenAI API key
    • If omitted, the system-wide default key is used
    • A per-course key keeps that course's costs tracked separately
  3. Select model:
    • gpt-4.1-mini (Default) - Fast, cost-effective
    • gpt-4 - More capable, higher cost
    • gpt-3.5-turbo - Fastest, lowest cost
  4. Click Save Configuration
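Before saving, you can sanity-check an OpenAI key outside the LMS with a direct call to OpenAI's API (this assumes the key is exported in the `OPENAI_API_KEY` environment variable):

```shell
# List the models visible to this key.
# A 401 response means the key is invalid or expired.
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```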

When to Use:

  • General course assistance
  • Quick responses
  • Code explanations
  • Document summarization

Anthropic (Claude)

Anthropic's Claude models excel at complex reasoning, detailed analysis, and maintaining context in long conversations.

Configuration Steps:

  1. Select Anthropic as the provider
  2. Enter your Anthropic API key
  3. Select model:
    • claude-sonnet-4-20250514 (Default) - Balanced performance
    • claude-3-opus - Highest capability
    • claude-3-haiku - Fastest responses
  4. Click Save Configuration
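Anthropic uses a different authentication scheme than OpenAI (an `x-api-key` header plus a required `anthropic-version` header). A quick key check outside the LMS, assuming the key is exported as `ANTHROPIC_API_KEY` (the dated model id shown is one of Anthropic's published ids for the haiku tier):

```shell
# Minimal Messages API request.
# An authentication_error response means the key is invalid.
curl -s https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model": "claude-3-haiku-20240307", "max_tokens": 32,
       "messages": [{"role": "user", "content": "ping"}]}'
```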

When to Use:

  • Complex problem-solving
  • Detailed explanations
  • Research assistance
  • Long-form content generation

Google AI (Gemini)

Google's Gemini models offer multimodal capabilities and strong performance across text, code, and reasoning tasks.

Configuration Steps:

  1. Select Google AI as the provider
  2. Enter your Google AI API key
  3. Select model:
    • gemini-1.5-pro (Default) - Best overall performance
    • gemini-1.5-flash - Faster, lower cost
  4. Click Save Configuration
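Google AI passes the key as a query parameter rather than a header. A quick check outside the LMS, assuming the key is exported as `GOOGLE_API_KEY`:

```shell
# Minimal generateContent request against the Gemini API.
# An API_KEY_INVALID error means the key is invalid.
curl -s "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"contents": [{"parts": [{"text": "ping"}]}]}'
```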

When to Use:

  • Multimodal tasks
  • Code generation and analysis
  • General course assistance
  • Complex reasoning

Custom Backend (Self-Hosted)

Configure self-hosted AI models using Ollama, LM Studio, or other OpenAI-compatible backends.

Configuration Steps:

  1. Select Custom Backend as the provider
  2. Configure settings:
    • Base URL: Your backend URL (e.g., http://localhost:11434)
    • API Format: Select compatibility mode
      • OpenAI (most common)
      • Anthropic
      • Google
    • Model Name: Model identifier from your backend
      • For Ollama: llama3, mistral, codellama, etc.
      • For LM Studio: Model name from UI
  3. (Optional) API Key: If your backend requires authentication
  4. Click Save Configuration

Supported Backends:

| Backend | Default Port | API Format | Example URL |
| --- | --- | --- | --- |
| Ollama | 11434 | OpenAI | http://localhost:11434 |
| LM Studio | 1234 | OpenAI | http://localhost:1234 |
| text-generation-webui | 5000 | OpenAI | http://localhost:5000 |
| vLLM | 8000 | OpenAI | http://localhost:8000 |

When to Use:

  • Privacy-sensitive courses
  • Offline or air-gapped environments
  • Cost control
  • Custom-trained models
  • Research and experimentation

Setting Up Ollama

```bash
# Install Ollama (official install script; -f fails cleanly on HTTP errors)
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3

# Start Ollama (usually starts automatically)
ollama serve
```

Configure in LMS:

  • Base URL: http://localhost:11434
  • API Format: openai
  • Model: llama3
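Before pointing the LMS at Ollama, you can verify that the server is reachable and the model has been pulled:

```shell
# List locally available models (Ollama's native API)
curl -s http://localhost:11434/api/tags

# Send a test message through the OpenAI-compatible endpoint the LMS will use
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "ping"}]}'
```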

Setting Up LM Studio

  1. Download and install LM Studio from https://lmstudio.ai
  2. Download your preferred model through LM Studio UI
  3. Start the local server from LM Studio (Server tab)
  4. Note the port (usually 1234)

Configure in LMS:

  • Base URL: http://localhost:1234
  • API Format: openai
  • Model: Name shown in LM Studio

AI Tool Permissions

Configure which AI tools students can use in their conversations. Tools allow the AI to interact with course data and perform actions.

Available Tool Categories

| Category | Tools | Purpose |
| --- | --- | --- |
| Calendar | View events, create events | Help students manage deadlines |
| Course Content | Browse modules, chapters | Navigate course materials |
| Quiz | View quiz info, review attempts | Study assistance |
| Discussion | Search discussions, view posts | Find relevant conversations |

Permission Scopes

Control how long tool permissions last:

  • Always: Tool can be used anytime (default for read-only tools)
  • Conversation: Permission lasts for the current conversation
  • Once: Single-use permission, requires re-approval

Configuring Tool Permissions

  1. Go to Course Settings → AI Configuration
  2. Scroll to Tool Permissions section
  3. For each tool category:
    • Enable/disable the category
    • Set default permission scope
    • Configure auto-approval rules
  4. Click Save Configuration

Best Practice

Enable read-only tools (browsing content, viewing events) with the "Always" scope, and restrict action tools (creating content, modifying data) to "Conversation" or "Once" for tighter control.

Managing AI Features

Enabling/Disabling AI

Toggle AI chat for your course:

  1. Navigate to Course Settings → AI Configuration
  2. Use the Enable AI Chat toggle
  3. When disabled, students won't see the AI chat interface

Testing Configuration

Before making AI available to students:

  1. Configure your preferred provider
  2. Use the Test Configuration button
  3. Send a test message to verify connectivity
  4. Check response quality and latency

Monitoring Usage

Track AI usage in your course:

  1. Navigate to Course Analytics
  2. View AI Chat Analytics section
  3. See metrics:
    • Total conversations
    • Messages per student
    • Token usage (cost estimation)
    • Most used tools
    • Common questions

Cost Management

API Key Strategies

System-Wide Keys (Set by Administrator):

  • Shared across all courses
  • Simplest setup
  • Central cost tracking

Per-Course Keys (Set by Instructor):

  • Individual course budgets
  • Isolated cost tracking
  • Useful for research projects

Per-User Keys (Advanced):

  • Students provide their own keys
  • Zero cost to institution
  • Requires student technical setup

Cost Optimization Tips

  1. Choose Appropriate Models

    • Use smaller models (gpt-4.1-mini, claude-haiku) for simple queries
    • Reserve larger models for complex tasks
  2. Set Usage Limits

    • Configure rate limits per student
    • Set daily/weekly message caps
    • Monitor token usage
  3. Consider Custom Backends

    • One-time hardware cost
    • No per-token fees
    • Unlimited usage
  4. Tool Selection

    • Enable only necessary tools
    • Reduce tool calls to minimize tokens

Troubleshooting

AI Chat Not Responding

Check Configuration:

  1. Verify AI is enabled for the course
  2. Confirm API key is valid
  3. Test connection with "Test Configuration" button

Common Issues:

  • Invalid or expired API key → Update in settings
  • Backend URL unreachable → Check network/firewall
  • Model not available → Verify model name spelling

Slow Response Times

Causes:

  • Large model selected → Try smaller model
  • High token count → Shorter context works faster
  • Backend overloaded → Check backend resources

Incorrect Responses

Improvements:

  • Adjust model selection
  • Review tool permissions
  • Check context window size
  • Consider different provider

Best Practices

For General Courses

  • Provider: OpenAI (gpt-4.1-mini)
  • Tools: Enable calendar, course content, discussions
  • Permissions: Read-only tools on "Always", write tools on "Conversation"

For Programming Courses

  • Provider: Anthropic (claude-sonnet) or OpenAI (gpt-4)
  • Tools: Enable course content, quiz review
  • Permissions: Mostly "Always" for reference materials

For Privacy-Sensitive Courses

  • Provider: Custom Backend (Ollama + open model)
  • Tools: Limited to essential only
  • Permissions: "Once" or "Conversation" for all tools

For Research Projects

  • Provider: Per-course API keys (any provider)
  • Tools: Full access
  • Permissions: Track usage, adjust based on needs

Getting Help

If you need assistance with AI configuration:

  1. Check the troubleshooting section above
  2. Review the AI Architecture documentation
  3. Contact your system administrator
  4. Consult your institution's IT support