Resources

Making sense of Atlassian's AI architecture: A guide for IT leaders and builders

Written by Valiantys | Jul 31, 2025 2:30:12 PM

The real challenge with AI is fragmentation

Enterprise teams today are surrounded by AI, and the sprawl is often overwhelming for IT leaders.

Some non-technical teams use ChatGPT, Claude, or Gemini. Developers rely on Cursor or GitHub Copilot. Business tools like Salesforce and Microsoft 365 now embed their own AI features, competing with frontier models. Some employees go rogue, choosing their own free tools and potentially exposing confidential data. And of course, everyone wants to build agents and connect as many enterprise data sources as possible.

Each tool adds value, but together they create a fragmented experience. Context is often lost, governance becomes difficult, and users end up juggling workflows instead of rethinking them.

Atlassian’s approach aims to fix this. Rather than reinventing the wheel, they’ve built a modular AI stack that connects to what you already use, while adding deep contextual intelligence from your Atlassian data. Think of it as an AI spine: flexible, open, and grounded in how your teams actually work.

This article breaks down Atlassian’s AI architecture in clear, structured terms, helping IT and Transformation leaders understand how components like Rovo, Atlassian Intelligence, and the Teamwork Graph interact and integrate with your organization’s broader IT and AI strategy.

The semantic layer: what makes Atlassian’s AI different

Large language models (LLMs) work through statistical association. They’re great at predicting the next word — but not so great at verifying facts. That’s why they “hallucinate.”

Atlassian counters this by introducing a semantic layer: the Teamwork Graph. This graph provides a structured representation of your company’s people, teams, projects, issues, content, and relationships. For example: Bob works on issue ZEB-104 (part of epic ZEB-101), which is managed by Alice and references the "Payments Redesign Strategy" page in Confluence.

It’s your company’s structured memory. Instead of guessing, the AI is grounded in facts - delivering answers that are relevant, traceable, and aligned with your real work.
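The Bob-and-Alice example above can be sketched as a graph of nodes and typed edges. Here is a minimal, purely illustrative Python sketch; the entity names and relationship types are invented for this article and are not Atlassian’s actual schema:

```python
# Illustrative teamwork graph: nodes are people, issues, epics, and pages;
# edges are typed relationships. Mirrors the Bob/Alice example above.
edges = [
    ("Bob", "works_on", "ZEB-104"),
    ("ZEB-104", "part_of", "ZEB-101"),
    ("Alice", "manages", "ZEB-104"),
    ("ZEB-104", "references", "Payments Redesign Strategy"),
]

def neighbors(node, relation):
    """Return all targets connected to `node` via `relation`."""
    return [dst for src, rel, dst in edges if src == node and rel == relation]

# Grounding a question like "What is Bob working on?" becomes a lookup,
# not a statistical guess:
print(neighbors("Bob", "works_on"))  # -> ['ZEB-104']
```

The point is that an AI answering from such a structure can cite the exact issue, epic, and page it traversed, which is what makes its answers traceable.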

Atlassian Intelligence: built-in, invisible, grounded

Atlassian Intelligence is the AI layer embedded natively into Jira, Confluence, Bitbucket, and JSM. It summarizes, automates, queries, and drafts - powered by external models (Claude, GPT, Mistral, Gemini) and grounded by internal context from the Teamwork Graph.

It’s embedded where work happens: no extra interface, no chatbot to learn. That means lower adoption friction, fewer hallucinations, and better alignment with your teams’ actual workflows.

Rovo: your AI teammate, not just a chatbot

Rovo is your AI teammate. It helps you find, synthesize, and act on information. You can chat with it directly, use out-of-the-box agents, or build your own using low-code tools or Forge.

Rovo is aware of your company’s structure thanks to the Teamwork Graph. It can answer questions like “What changed last week in Project Phoenix?” or “Summarize the last three outages”, and trigger actions based on those insights.

It’s also multi-model: Atlassian chooses the best LLM for the job (Claude, GPT, Gemini, Llama), so you’re never locked into a single provider.

Rovo for developers: agentic automation meets your terminal

Rovo Dev Agents assist throughout the development lifecycle:

  • Code Planner: transforms Jira tickets into technical plans

  • Code Reviewer: checks PRs for alignment and quality

  • Pipeline Troubleshooter, Deployment Summarizer, Feature Flag Cleaner: reduce manual effort and cognitive load in software delivery workflows. Because they’re context-aware, these agents act only on what’s relevant to your environment.

Developers can also use the Rovo Dev CLI, bringing AI directly into the terminal. In 2025, this CLI ranked #1 on the SWE-bench benchmark for real-world issue resolution.

Cursor, Claude Code, and Rovo Dev serve different but complementary purposes. While Cursor and other IDE-style tools excel at assisting within the code editor, Rovo Dev complements them by offering code generation as part of a larger, context-aware development workflow - one that spans Jira planning, Confluence documentation, and CI/CD orchestration.

MCP: the secure connector for open ecosystems

The Model Context Protocol (MCP) was developed by Anthropic to enable secure, permission-aware connections between LLMs and external tools or data sources. Atlassian was among the first major vendors to adopt it.

Their Remote MCP Server now lets you connect your Atlassian Cloud instance to external AI tools like Claude or Cursor, while enforcing permissions and governance. In short, MCP addresses the security team’s nightmare: AI tools accessing data they shouldn’t.

This means you can ask Claude to summarize Jira tickets or update Confluence pages, and it will access only the data you’re authorized to see.
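Conceptually, that permission check works like the Python sketch below. Everything here is hypothetical and simplified for illustration; the real Remote MCP Server enforces your actual Atlassian product permissions:

```python
# Illustrative sketch of permission-aware tool access, MCP-style.
# Data and function names are invented; not Atlassian's real API.
ISSUES = {
    "ZEB-104": {"summary": "Redesign payments flow", "visible_to": {"alice", "bob"}},
    "SEC-7":   {"summary": "Rotate signing keys",    "visible_to": {"alice"}},
}

def get_issue(user: str, key: str) -> dict:
    """Return an issue only if the requesting user is allowed to see it."""
    issue = ISSUES.get(key)
    if issue is None or user not in issue["visible_to"]:
        raise PermissionError(f"{user} may not access {key}")
    return {"key": key, "summary": issue["summary"]}

# Bob can summarize his own issue, but the restricted one stays invisible:
print(get_issue("bob", "ZEB-104")["summary"])  # -> Redesign payments flow
```

The key design point: the permission check happens on the server side, before any data reaches the external model, so the LLM never sees content the user couldn’t open themselves.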

MCP reflects Atlassian’s strategy: interoperability first, not vendor lock-in.

 

Summary for non-tech audiences

| Component | What it does | Analogy |
|---|---|---|
| Teamwork Graph | Connects people, projects, and content | Your organization’s brain: knows who does what, with whom, and why |
| Atlassian Intelligence | AI features built into Jira/Confluence | Your smart assistant: summarizes, automates, and clarifies work |
| Rovo | AI teammate and agent builder | A power user that can search, act, and delegate work for you |
| Rovo Dev Agent / CLI | Developer-focused AI for planning, coding, and deploying | A junior developer who works in your terminal and understands your stack |
| MCP | Secure bridge to external tools like Claude or Cursor | A badge checkpoint: ensures only permitted access to internal data |

Three ways to use the Atlassian AI stack

Atlassian’s AI architecture is designed to be modular and adaptable. Whether you’re all-in on Atlassian’s toolchain or have already invested in other AI capabilities, there is a path forward. Here are three scenarios that show how different combinations of components come together.

1. All-in on Atlassian

A development team at a mid-sized software company uses Jira, Confluence, and Bitbucket. Developers operate primarily from the terminal, and the company wants AI to improve velocity, code quality, and operational consistency.

In practice, the team uses:

  • Rovo Dev CLI for direct command-line interaction

  • Rovo Dev Agent to handle planning, code generation, debugging, and deployment

Developers can plan features, review code, and troubleshoot pipelines, all from their local terminal. Rovo Dev Agent provides smart recommendations and executes tasks, all while being fully aware of Jira tickets, Confluence documentation and Bitbucket repositories.

Benefits:

  • Seamless, high-context automation for developers

  • No extra setup or external tool integration required

This scenario is ideal for organizations that are all-in on Atlassian and want a governed, robust AI developer assistant tightly integrated with their software development lifecycle.

2. Bring-your-own-AI (Claude, ChatGPT...)

A product team has been using Claude for months for brainstorming, writing, and summarizing content. They still use Jira and Confluence but don’t want to replace their existing AI stack: users are comfortable with it, and it already connects to their other systems.

In practice, the team uses:

  • Claude as the front-end

  • MCP for secure access to Jira/Confluence

In this scenario, a user opens Claude, asks to summarize a set of Jira issues or generate a project status update, and the AI returns contextual results based on Atlassian data. The interaction respects permissions and data governance policies.
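A typical setup connects Claude Desktop to Atlassian’s Remote MCP Server through a local proxy. The sketch below is an assumption-laden illustration: the endpoint URL, proxy package, and option names may differ, so check Atlassian’s and Anthropic’s current documentation before using it.

```json
{
  "mcpServers": {
    "atlassian": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.atlassian.com/v1/sse"]
    }
  }
}
```

Typically, the proxy handles an OAuth sign-in on first use, so the assistant acts with the authenticated user’s Atlassian permissions rather than a shared service account.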

Benefits:

  • Minimal friction for teams already using Claude

  • Enterprise governance is ensured by MCP

  • Unlocks Atlassian data without requiring deep toolchain changes

This scenario is ideal for organizations with their own AI investments who want to safely expand access to Atlassian data without adopting new Atlassian-native interfaces.

3. AI-augmented development in IDEs (Cursor, VS Code)

A software engineering team works primarily in modern IDEs like Cursor or Visual Studio Code, using tools such as GitHub Copilot for real-time code generation. They want to bring Jira and Confluence context into their workflows without leaving their preferred development environments.

In practice, the team uses:

  • Cursor or VS Code integrated via Remote MCP

  • Optional use of GitHub Copilot for in-editor code suggestions

Developers work inside Cursor or VS Code with an AI assistant that can reference Jira tickets or summarize Confluence pages directly within the IDE. Thanks to Remote MCP, these assistants have permission-aware access to Atlassian content, ensuring responses are relevant and governed.

Benefits:

  • Developers keep working in their IDE

  • Atlassian context is available without context switching

  • MCP ensures compliance with enterprise data governance

This scenario is ideal for enterprises that build software using modern AI-capable IDEs but still rely on Jira and Confluence as their source of truth, and want both worlds connected securely.

Conclusion: adaptability is the strategy

There is no one-size-fits-all setup when it comes to AI architecture. The Atlassian AI stack is modular by design, meaning customers can:

  • Use Rovo as a fully integrated assistant

  • Connect external LLMs through MCP for flexible interaction

  • Build hybrid workflows across tools and platforms

The key is to match the technology with your goals, whether that’s developer productivity, operational efficiency, or compliance. And with Atlassian’s governed, open architecture, you have the flexibility to do just that.

Contact us to learn how to find the right solutions to achieve your goals.