// Proposed industry standard
AICL
A Artificial
I Intelligence
C Content
L Level

When AI writes,
who is the author?

The AICL Scale is a standardized 10-level framework for transparent AI authorship attribution — applicable to articles, reports, code, research, and any content produced with AI tools.

10
Defined levels
AICL-0 (fully human) through AICL-9 (fully AI)
5
Assessment questions
4
Core dimensions
The problem

The transparency gap
no one is naming

Across journalism, consulting, law, academia, and corporate communications, AI-generated content is flowing into the world with no authorship signal attached. Readers, clients, regulators, and colleagues have no way to know whether what they are reading represents a domain expert's hard-won knowledge or a language model's pattern-matching output.

The question is not whether AI was used

AI is a genuinely powerful tool that accelerates expert work and helps people communicate more clearly. The question is: how much, at which stage, and whether a qualified human stood behind the result. That distinction matters enormously — and currently, there is no standard way to communicate it.

A common vocabulary for a new era

The AICL Scale gives any organization, publisher, or individual a shared, consistent language for describing the nature of human-AI collaboration in any piece of content — a simple 0–9 vocabulary that any industry can adopt and enforce.

Mid-range levels are professional

A critical design principle: AICL-4 (expert dictates, AI articulates) and AICL-5 (human-led co-creation) are not marks of laziness. They describe legitimate, high-quality professional workflows. The scale must be presented this way, or people will systematically under-report their AI use.

Self-declaration with AI scaffolding

Pure self-declaration is inconsistent. Pure AI auto-detection is impossible — no single tool sees the full process. The right model is AI-guided self-declaration: the AI tool helps the creator arrive at the correct level through a structured reflection, which the human then confirms.

Why it matters

Transparency is the
foundation of AI ethics

The AICL Scale is not just a labelling system. It is a response to one of the defining ethical challenges of the AI era — the erosion of trust in human expertise and the blurring of accountability when machines and people create together.

// The trust equation

Every piece of content carries an implicit promise. When you read a medical article, a legal opinion, or a financial analysis, you extend trust based on your assumption of who created it and how.

That trust relationship is not just about accuracy. It is about accountability. If the content is wrong, who answers for it? If it misleads, who is responsible? These questions have clear answers when a human expert authors content. They become dangerously ambiguous when AI is involved — and no disclosure exists.

The AICL Scale restores that accountability chain. It does not judge how much AI was used — it makes the nature of that use visible, so readers can calibrate their trust appropriately and creators can stand behind their work with precision.

// AI fluency

Knowing how you used AI is a professional skill

AI fluency is not just about knowing how to use AI tools — it is about understanding your own relationship with them. Did AI shape your thinking or just your prose? Did it generate the structure or just fill it in? These distinctions define the nature of your intellectual contribution, and professionals who cannot answer them clearly are not yet truly fluent.

The AICL self-assessment is designed to build that fluency. By asking the same four questions consistently — concept, development, words, review — it trains creators to be conscious of their own process in a way that makes them better collaborators with AI, not just more transparent ones.

// Ethical use

Ethics is not about avoiding AI — it is about owning your choices

The most important ethical question is not "did you use AI?" It is "did you take responsibility for what you published?" An AICL-9 document published without disclosure is an ethical failure. The same content published with an honest AICL-9 declaration is a legitimate choice — readers can decide what weight to give it.

Ethical AI use is transparent AI use. The AICL Scale operationalizes that principle into something concrete, consistent, and verifiable — turning a vague cultural expectation into a professional standard with real teeth.

01

Transparency as default

In a world where AI can produce fluent, credible text on any topic, the default assumption can no longer be human authorship. Transparency must become the baseline, not the exception.

02

Accountability without judgment

The AICL Scale does not prescribe how much AI is acceptable. It creates the conditions for informed judgment — by readers, employers, regulators, and clients — without imposing a single standard of correctness.

03

Human expertise preserved

By distinguishing original concept from AI development, the AICL Scale ensures that human intellectual contributions — especially expert knowledge — are never collapsed into the same category as AI-generated output.

The bigger picture

AI fluency is the
literacy of our time

Just as financial literacy became a civic expectation in the 20th century, AI fluency — the ability to understand, use, and account for artificial intelligence in your work — is becoming the defining professional competency of the 21st.

For individuals

Knowing your AICL level is an act of professional self-awareness. It signals that you understand your own creative process, that you take accountability seriously, and that you distinguish between your expertise and the tool you used to express it.

For organizations

Mandating AICL disclosure is an act of institutional integrity. It signals to clients, regulators, and the public that your organization takes the provenance of its knowledge seriously — and that human expertise is not interchangeable with AI output.

For society

Widespread AICL adoption creates an epistemic infrastructure for the AI age — a shared basis for evaluating the credibility of information, preserving the value of genuine expertise, and holding creators accountable for what they publish.

// About this framework

The AICL Scale is a proposed open standard for AI authorship transparency. It is published under a Creative Commons Attribution 4.0 license — free to use, adapt, and implement by any individual or organization, with attribution required. The goal is not ownership but adoption: the more widely AICL is used, the more trust it creates.

Published by
aiclscale.org
CC BY 4.0

Four dimensions — not one

The key innovation: splitting "ideas" into concept and development

C

Concept

The original idea, framing, or insight. The spark that initiated the content. This is almost always human-originated and should be tracked separately from how the concept was developed.

// The "what" and "why"
D

Development

How the concept was built out — structure, methodology, argumentation, analytical approach. AI can do heavy lifting here even when the concept was 100% human, and this must be captured separately.

// The "how" and "structure"
W

Words

Who produced the actual language and prose. This is the most visible dimension but often the least important — text that AI scribed from a domain expert's knowledge is worth far more than prose a non-expert wrote independently.

// The "expression"
R

Review

Whether a credentialed domain expert validated the content for substantive accuracy. This dimension can rehabilitate a high-AI-involvement score when genuine expertise was applied in review.

// The "validation"
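The four dimensions lend themselves to a simple structured record. As an illustration only — the field names and origin codes below are this sketch's own, not part of the AICL standard — a declaration could be captured like this:

```python
from dataclasses import dataclass

# Origin codes per dimension. Illustrative: the AICL standard defines
# levels 0-9, not per-dimension codes.
HUMAN, MIXED, AI = "human", "mixed", "ai"

@dataclass
class AICLDeclaration:
    concept: str          # who originated the idea
    development: str      # who built out structure and argumentation
    words: str            # who produced the actual prose
    expert_review: bool   # did a credentialed domain expert validate it?
    level: int            # the confirmed AICL level, 0-9

    def disclosure(self) -> str:
        """Render a one-line disclosure statement."""
        review = "expert-reviewed" if self.expert_review else "not expert-reviewed"
        return (f"AICL-{self.level}: concept {self.concept}, "
                f"development {self.development}, words {self.words}, {review}.")

# An expert who dictated their findings and had AI articulate them:
d = AICLDeclaration(concept=HUMAN, development=HUMAN, words=AI,
                    expert_review=True, level=4)
print(d.disclosure())
# AICL-4: concept human, development human, words ai, expert-reviewed.
```

Tracking the four dimensions separately, rather than a single number, is what lets an AICL-4 declaration show that the concept and development were fully human even though the words were not.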
The framework

AICL Scale — 10 levels

Each level is defined by four dimensions: Concept (original idea origin), Development (implementation thinking), Words (linguistic origin), and Review (expert validation).

[Interactive level table: spans Fully human to Fully AI, with columns for Concept, Development, Words, and Review]
// AICL Scale — 10 levels defined
Self-assessment

What is your AICL level?

Answer five questions about how you created your content. We will identify the correct AICL level and generate a disclosure statement ready to paste into your document.
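The questionnaire can be made concrete as a minimal walkthrough. This is purely a sketch — the question wording is paraphrased from the four core dimensions, and the final level is chosen and confirmed by the human, not computed by the tool:

```python
# A minimal walkthrough of the AICL self-assessment. The level is
# declared by the human; the tool only structures the reflection.

QUESTIONS = [
    ("concept", "Did the original idea or framing come from you or from AI?"),
    ("development", "Who built out the structure, methodology, and argumentation?"),
    ("words", "Who produced the actual prose?"),
    ("review", "Did a credentialed domain expert validate the substance?"),
]

def run_assessment(answer_fn) -> dict:
    """Collect one answer per dimension via answer_fn(key, question)."""
    answers = {key: answer_fn(key, question) for key, question in QUESTIONS}
    # The human, not the tool, declares the final level (0-9).
    answers["level"] = int(answer_fn("level", "Confirm your AICL level (0-9):"))
    return answers

# Scripted answers standing in for interactive input:
scripted = {"concept": "human", "development": "human",
            "words": "ai", "review": "yes", "level": "4"}
result = run_assessment(lambda key, _q: scripted[key])
print(result["level"])  # 4
```

In a real tool, `answer_fn` would be an interactive prompt or form field; the structure stays the same.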

// Assessment steps
// AICL Scale — self-assessment tool
For organizations

Implementing AICL
across your organization

Adoption works best as a phased rollout — starting with culture, moving to process, then embedding into tooling. The technical implementation is secondary to whether people want to be honest.

// Phase 01  ·  Months 1–3
01

Educate and normalize

Build shared vocabulary before mandating compliance. The goal in phase one is honest self-reflection, not surveillance.

  • Host AICL literacy sessions for all content creators
  • Frame AICL-4 and AICL-5 as professional and respectable
  • Add AICL field to all document templates and cover pages
  • Have leadership model honest disclosure publicly
  • Publish calibration examples showing correctly rated content
// Phase 02  ·  Months 4–6
02

Embed in workflow

Make declaration a natural step in existing publishing and document governance flows — not an added burden.

  • Add AICL as required metadata in CMS and document management
  • Gate publishing on AICL completion, same as category tagging
  • Track AICL distribution across departments for calibration
  • Integrate self-assessment tool into internal tooling
  • Establish review process for high-stakes AICL-6+ content
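Gating publishing on AICL completion can mirror how required category tags are already enforced. A sketch, assuming a CMS that exposes document metadata as a plain mapping — the field names here are illustrative, not a real CMS schema:

```python
def check_aicl_gate(metadata: dict) -> list[str]:
    """Return validation errors that should block publishing.
    Field names ("aicl_level", "aicl_reviewed_by") are this
    sketch's own, not a standard schema."""
    errors = []
    level = metadata.get("aicl_level")
    if level is None:
        errors.append("AICL level missing: declaration is required to publish.")
    elif not (isinstance(level, int) and 0 <= level <= 9):
        errors.append(f"AICL level {level!r} is not an integer in 0-9.")
    elif level >= 6 and not metadata.get("aicl_reviewed_by"):
        # High-stakes AICL-6+ content needs a named reviewer on record.
        errors.append("AICL-6+ content requires a named expert reviewer.")
    return errors

print(check_aicl_gate({"aicl_level": 7}))
# ['AICL-6+ content requires a named expert reviewer.']
```

The same check can double as the department-level calibration feed: every blocked or passed declaration is a data point.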
// Phase 03  ·  Month 7+
03

AI-assisted scoring

Use your internal AI deployment to surface a suggested AICL score at session end. Human confirms.

  • Deploy end-of-session AICL prompt via internal API wrapper
  • AI reviews session history and suggests a starting level with rationale
  • Human adjusts and confirms — creating an auditable co-declaration
  • Log confirmed scores for periodic calibration audits
  • Build department-level AICL dashboards for management review
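The end-of-session prompt can be as simple as appending one more turn to the conversation history. In the sketch below, `call_internal_llm` is a placeholder for whatever chat-completion call your internal API wrapper actually exposes:

```python
# Sketch of an end-of-session AICL prompt. `call_internal_llm` stands in
# for your internal deployment's chat-completion call.

AICL_PROMPT = (
    "The session is ending. Review our conversation across the four AICL "
    "dimensions (concept, development, words, review) and suggest a "
    "starting AICL level (0-9) with a brief rationale. The user will "
    "adjust and confirm; your suggestion is only a starting point."
)

def suggest_aicl(session_messages: list[dict], call_internal_llm) -> str:
    """Append the AICL prompt to the session history and return the
    model's suggested level and rationale as free text."""
    messages = session_messages + [{"role": "user", "content": AICL_PROMPT}]
    return call_internal_llm(messages)

# A stub model call, for illustration only:
suggestion = suggest_aicl(
    [{"role": "user", "content": "Draft my findings into a paper."}],
    lambda msgs: "Suggested AICL-4: you dictated the findings; I articulated them.",
)
print(suggestion)
```

Because the suggestion is free text with a rationale, the human can see why the level was proposed before adjusting it.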

The AI-guided scoring conversation

The most practical near-term implementation is a prompted self-declaration workflow. Rather than assigning a score automatically, the AI tool asks the right questions at session end and helps the person arrive at the correct AICL level — consistently and honestly. This solves the inconsistency of pure self-declaration without requiring any single AI tool to know the full creative process.

A key implementation insight: the AI tool only sees its own contribution. If a creator used ChatGPT for brainstorming, Grammarly for editing, and Claude for drafting, no single tool has the full picture. The guided questionnaire captures the holistic process rather than any one tool's logs. This is why human confirmation remains essential even in automated workflows.

01

Session close trigger

When a user finishes a content creation session, the AI tool surfaces the AICL prompt automatically before the session closes — similar to a "save before exit" prompt.

02

Context-aware suggestion

The AI reviews the conversation history across all four dimensions and suggests a starting AICL level with brief reasoning: "Based on our session, this looks like AICL-5 — you provided the concept and strategic direction; I drafted most of the text; you then revised substantially."

03

Human adjustment and confirmation

The creator reviews the suggestion and adjusts if needed — for example, if they used other tools outside this session, or if the concept originated before any AI involvement. Human confirms the final level.

04

Disclosure generated and logged

The tool outputs a ready-to-paste disclosure statement with the confirmed AICL level. The declaration is also logged to the organization's document management system for audit and quality assurance purposes.
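The log entry in step four need only capture who confirmed what, and when. A sketch, with field names and the document ID of this example's own choosing:

```python
import datetime
import json

def log_declaration(doc_id: str, level: int, confirmed_by: str,
                    suggested_level: int) -> str:
    """Serialize a confirmed AICL declaration for the document
    management system. Keeping both the AI-suggested and the
    human-confirmed level is what makes calibration audits possible."""
    entry = {
        "doc_id": doc_id,
        "aicl_level": level,
        "suggested_level": suggested_level,
        "confirmed_by": confirmed_by,
        "confirmed_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "disclosure": f"This document is declared AICL-{level}.",
    }
    return json.dumps(entry)

record = json.loads(log_declaration("DOC-123", 5, "j.smith", suggested_level=5))
print(record["disclosure"])  # This document is declared AICL-5.
```

Logging the suggested level alongside the confirmed one surfaces systematic under-reporting: a department whose humans always adjust downward is a calibration signal, not an accusation.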


The cultural dimension is the hardest part

If the company frames AICL as surveillance or a performance metric — "managers will judge you for using too much AI" — people will systematically declare AICL-0 regardless of the system. The framework only works if mid-range levels are genuinely respected. An AICL-4 from a domain expert who used AI as a scribe represents outstanding work. That framing must come from leadership first.

// AICL Scale — implementation guide
Examples

AICL across industries

These scenarios show how the same AICL level can appear in very different professional contexts — and why the need for a common vocabulary is universal.

Academic publishing

Where epistemic credibility is everything

AICL-2
A professor used AI extensively to map the existing literature before writing an entirely independent analysis. The insights, methodology, and all writing are wholly theirs — AI only accelerated the learning phase.
AICL-4
A clinical researcher dictated their original findings and methodology. AI structured and articulated the paper. The researcher validated every claim — the concept, development logic, and accountability are fully human.
AICL-7
A graduate student had AI draft a literature review section and then edited for tone. The concept was theirs but AI developed the structure and wrote the text. No domain expert review beyond self-correction.

Journalism and media

Where trust is the core product

AICL-1
An investigative journalist used AI to verify public timelines while conducting original research and interviews. Concept, development, and all writing are independently theirs.
AICL-5
A reporter co-developed an explainer article — providing the story concept and key reported facts, with AI drafting sections the reporter rewrote and enriched substantially with original judgment.
AICL-8
An automated news brief about earnings data was AI-generated from a structured data feed. An editor approved it for publication with a quick accuracy check but no substantive editorial judgment.

Consulting and advisory

Where clients pay for expert judgment

AICL-3
A senior partner used AI to stress-test their strategic hypotheses before a client presentation. Concept, development logic, and all writing are the partner's own — AI played devil's advocate only.
AICL-4
A specialist consultant used AI to articulate and structure their proprietary methodology into a client deliverable, which they then validated end-to-end against client data and signed off on.
AICL-6
AI produced a market sizing analysis from a detailed prompt. A principal developed the concept and approach; AI developed the analysis structure and wrote the output. A sector analyst reviewed before delivery.

Corporate communications

Where volume and consistency matter

AICL-5
A communications director conceived a thought leadership article and provided the strategic angle and key data points. AI drafted; the director revised substantially and enriched with original insights.
AICL-7
A marketing team briefed AI with a topic direction. AI developed structure and content; a copywriter edited each piece for tone and brand voice. No domain expert reviewed the substance.
AICL-9
Product descriptions were auto-generated via an agentic pipeline and published directly. No human review was performed before publication.
// AICL Scale — real-world examples
Common questions

Frequently asked questions

// AICL Scale — aiclscale.org