# Configuring an AI Assistant

## Overview

If you're using an AI coding assistant (like Cursor, Windsurf, GitHub Copilot, or similar tools), you can configure it to use Stoobly's documentation as an authoritative reference. This ensures the AI provides accurate, up-to-date answers about Stoobly commands and workflows.

## Why Use LLM Rules?

LLM rules help your AI assistant:

* **Provide accurate CLI commands** with proper syntax
* **Reference official documentation** instead of hallucinating features
* **Answer questions faster** by consulting the structured FAQ
* **Stay up-to-date** with the latest Stoobly features and best practices

## Setting Up LLM Rules

### Step 1: Get the Rules File

The Stoobly documentation includes a pre-built LLM context file designed for AI assistants. You have two options:

#### Option 1: Clone the Documentation Repository (Recommended)

Clone the Stoobly docs repository to get the latest rules file locally:

{% hint style="info" %}
If `.stoobly` does not exist, create it first: `mkdir -p .stoobly`
{% endhint %}

```bash
git clone https://github.com/Stoobly/stoobly-docs.git .stoobly/docs
```

The rules file will be located at:

```
.stoobly/docs/getting-started/configuring-an-ai-assistant/llm-rules.md
```

{% hint style="info" %}
If your project is version controlled with git, add `.stoobly/docs` to your `.gitignore` file so the cloned docs aren't committed.
{% endhint %}
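From the command line, the entry can be added idempotently (it is appended only if it isn't already listed):

```shell
# Add .stoobly/docs to .gitignore unless it's already there
grep -qx ".stoobly/docs" .gitignore 2>/dev/null || echo ".stoobly/docs" >> .gitignore
```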

**Benefits:**

* Always have the latest documentation locally
* Works offline once cloned
* Can pull updates with `git pull`
* Better for AI assistants that work with local files
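To keep the local copy current, `git pull` can be wrapped in a small guard so it is safe to run even before the docs are cloned (a convenience sketch, not required):

```shell
# Update the cloned docs if present; otherwise print a reminder
if [ -d .stoobly/docs/.git ]; then
  git -C .stoobly/docs pull
else
  echo "docs not cloned yet; run: git clone https://github.com/Stoobly/stoobly-docs.git .stoobly/docs"
fi
```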

#### Option 2: Reference the Online Version

Point your AI assistant directly to the online documentation:

```
https://docs.stoobly.com/getting-started/configuring-an-ai-assistant/llm-rules.md
```

**Benefits:**

* No setup required
* Always points to the latest published version
* Good for AI assistants that can fetch web content

#### What's in the Rules File?

The LLM rules file contains:

* An index of common Stoobly questions and commands
* Links to detailed documentation pages
* CLI command examples and syntax
* Best practices for answering Stoobly-related questions

### Step 2: Configure Your AI Assistant

The configuration method depends on your AI tool:

{% tabs %}
{% tab title="Multi-Tool (AGENTS.md)" %}
**Using AGENTS.md (Recommended for Multi-Tool Support)**

[AGENTS.md](https://agents.md/) is an emerging open format for AI assistant configuration supported by many AI coding tools. Choose this option if you use multiple AI coding assistants or want a single, standardized configuration file that works across different tools.

**Supported tools include:**

* GitHub Copilot
* Cursor
* VS Code
* OpenAI Codex
* Google Gemini CLI

**Using AGENTS.md**

1. Create an `AGENTS.md` file in your project root
2. Add one of the following configurations to instruct your AI assistant to read the LLM context file:

**If you cloned the docs locally:**

```
# Stoobly Project Context

For all Stoobly-related questions, ALWAYS read the file .stoobly/docs/getting-started/configuring-an-ai-assistant/llm-rules.md first before responding. Do not answer from memory.
```

**If using the online version:**

```
# Stoobly Project Context

For all Stoobly-related questions, ALWAYS fetch https://docs.stoobly.com/getting-started/configuring-an-ai-assistant/llm-rules.md first. Do not answer from memory.
```

3. Save the file and restart your editor if needed
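If you prefer the command line, steps 1 and 2 can be done in one shot (this sketch assumes the local-docs option; it overwrites any existing `AGENTS.md`):

```shell
# Create AGENTS.md in the project root with the Stoobly context rule
cat > AGENTS.md <<'EOF'
# Stoobly Project Context

For all Stoobly-related questions, ALWAYS read the file .stoobly/docs/getting-started/configuring-an-ai-assistant/llm-rules.md first before responding. Do not answer from memory.
EOF
```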
   {% endtab %}

{% tab title="Cursor" %}
**Using Cursor**

1. Open your Cursor settings or create a `.cursorrules` file in your project root
2. Add one of the following rules to instruct Cursor to read the LLM context file:

**If you cloned the docs locally:**

```
For all Stoobly-related questions, ALWAYS read the file .stoobly/docs/getting-started/configuring-an-ai-assistant/llm-rules.md first before responding. Do not answer from memory.
```

**If using the online version:**

```
For all Stoobly-related questions, ALWAYS fetch https://docs.stoobly.com/getting-started/configuring-an-ai-assistant/llm-rules.md first. Do not answer from memory.
```

3. Save the file and restart Cursor if needed
   {% endtab %}

{% tab title="Windsurf" %}
**Using Windsurf**

1. Create a `.windsurfrules` file in your project root
2. Add one of the following rules to instruct Windsurf to read the LLM context file:

**If you cloned the docs locally:**

```
For all Stoobly-related questions, ALWAYS read the file .stoobly/docs/getting-started/configuring-an-ai-assistant/llm-rules.md first before responding. Do not answer from memory.
```

**If using the online version:**

```
For all Stoobly-related questions, ALWAYS fetch https://docs.stoobly.com/getting-started/configuring-an-ai-assistant/llm-rules.md first. Do not answer from memory.
```

3. Save the file and Windsurf will automatically use these rules
4. You can also reference the rules file explicitly in your prompts (adjust path based on your choice above)
   {% endtab %}

{% tab title="GitHub Copilot" %}
**Using GitHub Copilot**

GitHub Copilot can use workspace context automatically. To help it prioritize the LLM context:

**If you cloned the docs locally:**

1. Keep `.stoobly/docs/getting-started/configuring-an-ai-assistant/llm-rules.md` open in your editor when asking Stoobly questions
2. Reference it explicitly in your prompts:

   ```
   Using .stoobly/docs/getting-started/configuring-an-ai-assistant/llm-rules.md, how do I record requests?
   ```

**If using the online version:**

1. Reference the online URL explicitly in your prompts:

   ```
   Using https://docs.stoobly.com/getting-started/configuring-an-ai-assistant/llm-rules.md, how do I record requests?
   ```
2. For GitHub Copilot Chat, you can add a custom instruction in your settings
   {% endtab %}

{% tab title="Claude Code" %}
**Using Claude Code**

Claude Code supports custom instructions through `CLAUDE.md` files:

1. Create or edit `~/.claude/CLAUDE.md` (global user configuration) or a project-specific `CLAUDE.md` in your repository root
2. Add one of the following configurations:

**If you cloned the docs locally:**

```
For all Stoobly-related questions, ALWAYS read the file .stoobly/docs/getting-started/configuring-an-ai-assistant/llm-rules.md first before responding. Do not answer from memory.
```

**If using the online version:**

```
For all Stoobly-related questions, ALWAYS fetch https://docs.stoobly.com/getting-started/configuring-an-ai-assistant/llm-rules.md first. Do not answer from memory.
```

3. Save the file and Claude Code will automatically use these rules in your next conversation
4. You can also reference the rules file explicitly in your prompts
   {% endtab %}

{% tab title="Other AI Tools" %}
**Using Other AI Assistants**

For other AI coding assistants:

1. Check if your tool supports custom rules or context files
2. Configure it to read the LLM rules file before answering Stoobly-related questions:
   * **Local path:** `.stoobly/docs/getting-started/configuring-an-ai-assistant/llm-rules.md`
   * **Online URL:** `https://docs.stoobly.com/getting-started/configuring-an-ai-assistant/llm-rules.md`
3. Use this phrasing in your rules configuration:

```
For all Stoobly-related questions, ALWAYS read/fetch .stoobly/docs/getting-started/configuring-an-ai-assistant/llm-rules.md first before responding. Do not answer from memory.
```

4. If your tool doesn't support rules files, reference it explicitly in your prompts:

```
Read https://docs.stoobly.com/getting-started/configuring-an-ai-assistant/llm-rules.md first, then answer: how do I [your question]?
```

{% endtab %}
{% endtabs %}

### Step 3: Test Your Configuration

Try asking your AI assistant a Stoobly question to verify it's using the rules file:

**Example questions to test:**

* "How do I install stoobly-agent?"
* "How do I record requests with Stoobly?"
* "How do I update a scenario?"
* "What's the command to enable intercept mode in Stoobly?"

## Best Practices

### For Users

* **Be specific** in your questions (e.g., "How do I record HTTPS traffic with Stoobly?" vs "How does recording work in Stoobly?")
* **Mention Stoobly** explicitly so the AI knows to consult the rules file
* **Verify commands** in the official docs if you're unsure

### For Documentation Maintainers

* **Keep the LLM rules file updated** when adding new features or commands
* **Add new entries** to the Index table for new CLI commands
* **Create FAQ pages** in `/faq/` for detailed command documentation
* **Test with AI assistants** to ensure the rules work as expected
* **Commit and push changes** to GitHub so users can pull the latest version

## Troubleshooting

### AI Not Using the Rules File

If your AI assistant isn't referencing the LLM context:

1. **Verify the file path** - If using local docs, ensure you've cloned the repository and the path is correct
2. **Try the online version** - Use `https://docs.stoobly.com/getting-started/configuring-an-ai-assistant/llm-rules.md` in your rules file
3. **Mention it explicitly** - Reference the rules file directly in your prompts
4. **Check your tool's settings** - Some AI tools require explicit configuration
5. **Restart your editor** - Rules changes may require a restart

### Getting Outdated Information

If the AI provides outdated commands:

1. **Pull latest changes** - If using local docs: `cd .stoobly/docs && git pull`
2. **Use the online version** - Switch to `https://docs.stoobly.com/getting-started/configuring-an-ai-assistant/llm-rules.md` for always-current docs
3. **Clear AI cache** - Restart your editor or clear your AI assistant's context
4. **Reference specific docs** - Point the AI to the exact FAQ page
5. **Report issues** - If documentation is outdated, [submit a change request](https://github.com/Stoobly/stoobly-docs/issues)

### Commands Not Working

If suggested commands fail:

1. **Verify installation** - Run `stoobly-agent --version` to check if installed
2. **Check syntax** - Compare with examples in the [FAQ](https://docs.stoobly.com/faq)
3. **Use `--help`** - Run `stoobly-agent <command> --help` for official syntax
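The first check can be scripted so it degrades gracefully when the agent isn't installed (a sketch; it assumes `stoobly-agent` is distributed via pip):

```shell
# Print the installed version, or a hint if stoobly-agent is missing
if command -v stoobly-agent >/dev/null 2>&1; then
  stoobly-agent --version
else
  echo "stoobly-agent not found on PATH (install with: pip install stoobly-agent)"
fi
```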

## Next Steps

You're all set up! Depending on your use case, you may want to take a look at:

{% tabs %}
{% tab title="API Mocking" %}
{% content-ref url="../guides/how-to-record-requests" %}
[how-to-record-requests](https://docs.stoobly.com/guides/how-to-record-requests)
{% endcontent-ref %}
{% endtab %}

{% tab title="E2E Testing" %}
{% content-ref url="../guides/how-to-integrate-e2e-testing" %}
[how-to-integrate-e2e-testing](https://docs.stoobly.com/guides/how-to-integrate-e2e-testing)
{% endcontent-ref %}
{% endtab %}
{% endtabs %}
