Agent Playground

The Agent Playground is a powerful web-based interface that allows you to interact with multiple MCP servers simultaneously through natural language. This feature makes AI capabilities accessible to non-technical users and provides a testing ground for developers.

Getting Started with Agent Playground

  1. Navigate to the Chat section in the Cogni+ web interface
  2. Click Create New Chat to start a new session
  3. Select the MCP servers you want to include in your session
  4. Start interacting with the agent through the chat interface

Composing Multi-Tool Agents

The Agent Playground allows you to create custom agents by combining multiple tools:
  1. From the server selection screen, browse available MCP servers
  2. Add servers to your session by clicking the “+” button next to each server
  3. Configure any required credentials for the selected servers
  4. Click “Start Chat” to begin interacting with your custom agent
Your agent will now have access to all the tools provided by the selected servers, creating a powerful multi-capability assistant.
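
Under the hood, each selected server is simply another MCP endpoint the agent can call. The sketch below is a rough illustration of what the Playground assembles for you, assuming the official MCP Python SDK and placeholder server URLs: one client session per selected server, with every tool list merged into a single catalogue the agent can draw on. It is not the Playground’s actual implementation.

import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder URLs standing in for whichever servers you selected in the UI.
SERVER_URLS = [
    "https://example.com/mcp/web-search",
    "https://example.com/mcp/wikipedia",
]

async def list_all_tools() -> None:
    for url in SERVER_URLS:
        # One transport and one session per selected server.
        async with streamablehttp_client(url) as (read, write, _):
            async with ClientSession(read, write) as session:
                await session.initialize()
                listing = await session.list_tools()
                for tool in listing.tools:
                    # The merged catalogue is what the multi-tool agent chooses from.
                    print(f"{url} -> {tool.name}: {tool.description}")

asyncio.run(list_all_tools())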

LLM Selection and Configuration

Choose from several language models to power your agent:
  • Model Provider: Select from available LLM providers (OpenAI, Anthropic, etc.)
  • Model: Choose a specific model (GPT-4, Claude 3, etc.)
  • Parameters: Adjust temperature, max tokens, and other model-specific settings
  • API Key: Supply your own API key for the selected provider (if required)
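
The exact form fields vary by provider, but conceptually a session’s LLM configuration boils down to a handful of values. The dictionary below is a hypothetical illustration of that shape; the field names and values are examples, not the Playground’s actual configuration format.

# Hypothetical shape of a session's LLM configuration; the Playground
# exposes these as form fields, and the names below are illustrative.
llm_config = {
    "provider": "anthropic",     # or "openai", or another supported provider
    "model": "claude-3-opus",    # provider-specific model identifier
    "temperature": 0.2,          # lower values make tool use more predictable
    "max_tokens": 4096,          # cap on the length of each model response
    "api_key": "<your-key>",     # bring-your-own key, if the provider requires one
}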

Tool Selection and Usage

The Agent Playground makes it easy to see which tools are available and when they’re being used:
  • Available Tools: View a list of all tools provided by the selected servers
  • Tool Execution: Watch in real-time as the agent decides which tools to use
  • Tool Results: See the output from each tool execution
  • Reasoning Steps: Toggle visibility of the agent’s reasoning process
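
When you enable the detailed views, what you are watching is the MCP tools/call exchange. The example below shows an illustrative request and result following the shape defined by the MCP specification; the tool name, arguments, and output text are made up.

# An illustrative MCP tools/call exchange; tool name and arguments are invented.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "web_search",
        "arguments": {"query": "Model Context Protocol streamable HTTP"},
    },
}

# A typical successful result: a list of content blocks plus an error flag.
# This is roughly what a "Tool Results" entry renders.
result = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        "content": [{"type": "text", "text": "Top results: ..."}],
        "isError": False,
    },
}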

Advanced Features

Session Management

  • Save Sessions: Save your custom agent configurations for future use
  • Share Sessions: Generate shareable links to your agent setups
  • Session History: Access past conversations and continue where you left off

Tool Execution Control

  • Manual Tool Approval: Optionally approve each tool execution before it happens
  • Tool Usage Limits: Set maximum usage counts for specific tools
  • Execution Timeout: Configure maximum time allowed for tool operations
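
You can think of these controls as a guard wrapped around every tool call. The sketch below illustrates the same semantics with the MCP Python SDK’s call_tool(); the per-tool limit, the approval prompt, and the timeout value are hypothetical, and the Playground applies its own versions of these checks in its backend.

import asyncio
from collections import Counter

# Illustration only: a guard that mirrors manual approval, usage limits,
# and execution timeouts around an MCP client session's call_tool().
usage = Counter()
MAX_CALLS = {"code_execution": 3}   # hypothetical per-tool usage limit
TIMEOUT_SECONDS = 30                # maximum time allowed for a single tool call

async def guarded_call(session, name: str, arguments: dict):
    if usage[name] >= MAX_CALLS.get(name, float("inf")):
        raise RuntimeError(f"usage limit reached for {name}")
    # Manual tool approval: ask before every execution.
    if input(f"Allow tool '{name}'? [y/N] ").strip().lower() != "y":
        raise RuntimeError(f"tool call to {name} rejected")
    usage[name] += 1
    # Abort the call if the server takes longer than the configured timeout.
    return await asyncio.wait_for(session.call_tool(name, arguments), TIMEOUT_SECONDS)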

Visualization Options

  • Compact View: Focus on the conversation with minimal UI elements
  • Detailed View: See all the underlying tool calls and reasoning steps
  • Developer Mode: Access technical details about each interaction

Streaming

  • Streamable HTTP: responses are streamed over the Streamable HTTP transport defined in the MCP specification
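
From the client side, the transport is a single endpoint that accepts JSON-RPC over POST and may answer with a Server-Sent Events stream. The snippet below is a minimal sketch using httpx against a placeholder URL; for brevity it skips the MCP initialize handshake that a real client (or the SDK helper) performs first.

import httpx

MCP_URL = "https://example.com/mcp"   # placeholder endpoint

# A single JSON-RPC message posted to the endpoint; the server may reply with
# plain JSON or with an SSE stream, depending on the response it produces.
message = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
headers = {"Accept": "application/json, text/event-stream"}

with httpx.Client() as client:
    with client.stream("POST", MCP_URL, json=message, headers=headers) as response:
        for line in response.iter_lines():
            if line.startswith("data:"):              # SSE event payloads
                print(line.removeprefix("data:").strip())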

Example Use Cases

Research Assistant

Combine web search, Wikipedia, PDF parsing, and note-taking tools to create a comprehensive research assistant:
  1. Select the following tools:
    • Web Search Server
    • Wikipedia Server
    • Document Parser Server
    • Note-Taking Server
  2. Ask complex research questions
  3. The agent will search the web, consult Wikipedia, parse any documents you share, and maintain research notes
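
For reference, a saved session for this assistant might boil down to something like the following. The server identifiers, field names, and values are illustrative only, not the Playground’s actual export format.

# Hypothetical saved-session definition for the research assistant above.
research_session = {
    "llm": {"provider": "openai", "model": "gpt-4", "temperature": 0.3},
    "servers": [
        "web-search",
        "wikipedia",
        "document-parser",
        "note-taking",
    ],
    "options": {"manual_tool_approval": False, "show_reasoning": True},
}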

Code Assistant

Create a coding assistant with access to GitHub, code generation, and debugging tools:
  1. Select the following tools:
    • GitHub Access Server
    • Code Generation Server
    • Code Execution Server
    • Stack Overflow Search Server
  2. Ask coding questions or request features
  3. The agent will research solutions, generate code, test it, and help debug issues

Content Creator

Build a content creation assistant with writing, SEO, and image generation capabilities:
  1. Select the following tools:
    • Text Generation Server
    • SEO Analysis Server
    • Image Generation Server
    • Grammar Check Server
  2. Describe the content you need
  3. The agent will draft text, optimize for SEO, generate supporting images, and ensure proper grammar

Privacy and Security

When using the Agent Playground, keep these privacy considerations in mind:
  • Data Processing: Conversation content may be processed by the selected LLM provider
  • Tool Access: Tools may have access to data you share in the conversation
  • API Keys: Your provided API keys are used only for your sessions
For sensitive workflows, consider using the CLI’s advanced routing capabilities to keep certain operations local.