# Overview

The ilert AI ecosystem combines **assistive AI features** that enhance user workflows with **autonomous agents** that can reason, investigate, and act on incidents using connected data sources through **MCP servers**.

ilert AI is built around two complementary layers:

1. **AI Features:** Assistive capabilities embedded directly into existing ilert workflows (e.g. alert grouping, post-mortem generation, incident message generation).
2. **Agents:** Autonomous, reasoning-driven agents that analyze signals, identify causes, and trigger actions to resolve or communicate incidents.

Together, they enable **AI-first incident management**, from detection to resolution, while keeping engineers fully in control.

### Model endpoints & data residency

ilert AI operates on region-specific model endpoints to meet strict data protection and compliance requirements.

| Region                              | Model Hosting               | Provider                                                                                |
| ----------------------------------- | --------------------------- | --------------------------------------------------------------------------------------- |
| **EU customers**                    | Models hosted **in the EU** | OpenAI models via **Microsoft Azure**, Anthropic models via **AWS Bedrock (EU region)** |
| **Non-EU customers (including US)** | Models hosted **in the US** | OpenAI models via **OpenAI US data centers**                                            |

* You can view the **active model endpoint** used by your organization under **Account settings**.
* **AI features that rely on external LLMs can be globally disabled** in your account settings.
* When disabled, AI Features and Agents that depend on LLM reasoning will no longer process or transmit data externally.

> 🔒 ilert ensures full transparency and compliance by keeping model traffic within your selected region and by opting out of data training with all of our LLM providers. ilert doesn't use any of your data to train models.

<figure><img src="https://3394882078-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-M76ygPnS4HUcFSX8ulm%2Fuploads%2FD7XdoBRQC2oJEZlwUooi%2Fimage.png?alt=media&#x26;token=8de50326-824a-44c8-b0b3-79d0bf41c098" alt=""><figcaption></figcaption></figure>

### Security & privacy

ilert AI is designed for **secure, auditable, and controlled automation**:

* All AI and Agent actions are logged and traceable.
* You have full control to enable or disable AI features globally.
* MCP connections use HTTPS with strict authorization (API keys or OAuth).
* No data is shared outside your ilert environment or the configured LLM region.

### Learn more

{% content-ref url="ai-features" %}
[ai-features](https://docs.ilert.com/ai-and-agents/ai-features)
{% endcontent-ref %}

{% content-ref url="agents" %}
[agents](https://docs.ilert.com/ai-and-agents/agents)
{% endcontent-ref %}


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.ilert.com/ai-and-agents/introduction.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response contains a direct answer to the question along with relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
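As a minimal sketch, the request above can be built programmatically. Note that the question must be URL-encoded before it is appended as the `ask` parameter; the helper name `build_ask_url` and the example question are illustrative, not part of the documented API.

```python
from urllib.parse import quote

def build_ask_url(page_url: str, question: str) -> str:
    """Append a natural-language question as an URL-encoded `ask` parameter."""
    return f"{page_url}?ask={quote(question)}"

# Example: ask a specific, self-contained question against this page.
url = build_ask_url(
    "https://docs.ilert.com/ai-and-agents/introduction.md",
    "Which model endpoint serves EU customers?",
)
print(url)
# Perform the GET request with any HTTP client, e.g.:
#   urllib.request.urlopen(url).read()
```

The same request can be issued from the command line with `curl -G --data-urlencode "ask=<question>" <page-url>`, which handles the encoding automatically.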
