The AICL Scale is a standardized 10-level framework for transparent AI authorship attribution — applicable to articles, reports, code, research, and any content produced with AI tools.
Across journalism, consulting, law, academia, and corporate communications, AI-generated content is flowing into the world with no authorship signal attached. Readers, clients, regulators, and colleagues have no way to know whether what they are reading represents a domain expert's hard-won knowledge or a language model's pattern-matching output.
AI is a genuinely powerful tool that accelerates expert work and helps people communicate more clearly. The question is not whether AI was used, but how much, at which stage, and whether a qualified human stood behind the result. That distinction matters enormously, and there is currently no standard way to communicate it.
The AICL Scale gives any organization, publisher, or individual a shared, consistent language for describing the nature of human-AI collaboration in any piece of content — a simple 0–9 vocabulary that any industry can adopt and enforce.
A critical design principle: AICL-4 (expert dictates, AI articulates) and AICL-5 (human-led co-creation) are not marks of laziness. They describe legitimate, high-quality professional workflows. Unless the framework is presented this way, people will systematically under-report their AI use.
Pure self-declaration is inconsistent. Pure AI auto-detection is impossible — no single tool sees the full process. The right model is AI-guided self-declaration: the AI tool helps the creator arrive at the correct level through a structured reflection, which the human then confirms.
The AICL Scale is not just a labelling system. It is a response to one of the defining ethical challenges of the AI era — the erosion of trust in human expertise and the blurring of accountability when machines and people create together.
Every piece of content carries an implicit promise. When you read a medical article, a legal opinion, or a financial analysis, you extend trust based on your assumption of who created it and how.
That trust relationship is not just about accuracy. It is about accountability. If the content is wrong, who answers for it? If it misleads, who is responsible? These questions have clear answers when a human expert authors content. They become dangerously ambiguous when AI is involved — and no disclosure exists.
The AICL Scale restores that accountability chain. It does not judge how much AI was used — it makes the nature of that use visible, so readers can calibrate their trust appropriately and creators can stand behind their work with precision.
AI fluency is not just about knowing how to use AI tools — it is about understanding your own relationship with them. Did AI shape your thinking or just your prose? Did it generate the structure or just fill it in? These distinctions define the nature of your intellectual contribution, and professionals who cannot answer them clearly are not yet truly fluent.
The AICL self-assessment is designed to build that fluency. By asking the same four questions consistently — concept, development, words, review — it trains creators to be conscious of their own process in a way that makes them better collaborators with AI, not just more transparent ones.
The most important ethical question is not "did you use AI?" It is "did you take responsibility for what you published?" An AICL-9 document published without disclosure is an ethical failure. The same content published with an honest AICL-9 declaration is a legitimate choice — readers can decide what weight to give it.
Ethical AI use is transparent AI use. The AICL Scale operationalizes that principle into something concrete, consistent, and verifiable — turning a vague cultural expectation into a professional standard with real teeth.
In a world where AI can produce fluent, credible text on any topic, the default assumption can no longer be human authorship. Transparency must become the baseline, not the exception.
The AICL Scale does not prescribe how much AI is acceptable. It creates the conditions for informed judgment — by readers, employers, regulators, and clients — without imposing a single standard of correctness.
By distinguishing original concept from AI development, the AICL Scale ensures that human intellectual contributions — especially expert knowledge — are never collapsed into the same category as AI-generated output.
Just as financial literacy became a civic expectation in the 20th century, AI fluency — the ability to understand, use, and account for artificial intelligence in your work — is becoming the defining professional competency of the 21st.
Knowing your AICL level is an act of professional self-awareness. It signals that you understand your own creative process, that you take accountability seriously, and that you distinguish between your expertise and the tool you used to express it.
Mandating AICL disclosure is an act of institutional integrity. It signals to clients, regulators, and the public that your organization takes the provenance of its knowledge seriously — and that human expertise is not interchangeable with AI output.
Widespread AICL adoption creates an epistemic infrastructure for the AI age — a shared basis for evaluating the credibility of information, preserving the value of genuine expertise, and holding creators accountable for what they publish.
The AICL Scale is a proposed open standard for AI authorship transparency. It is published under a Creative Commons Attribution 4.0 license — free to use, adapt, and implement by any individual or organization, with attribution required. The goal is not ownership but adoption: the more widely AICL is used, the more trust it creates.
The key innovation: splitting "ideas" into concept and development
Concept: The original idea, framing, or insight. The spark that initiated the content. This is almost always human-originated and should be tracked separately from how the concept was developed.
Development: How the concept was built out, including structure, methodology, argumentation, and analytical approach. AI can do heavy lifting here even when the concept was 100% human, and this must be captured separately.
Words: Who produced the actual language and prose. This is the most visible dimension but often the least important; a domain expert's AI-scribed text is worth far more than independently written non-expert prose.
Review: Whether a credentialed domain expert validated the content for substantive accuracy. This dimension can rehabilitate a high-AI-involvement score when genuine expertise was applied in review.
Each level is defined by four dimensions: Concept (original idea origin), Development (implementation thinking), Words (linguistic origin), and Review (expert validation).
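To make the four-dimension structure concrete, here is a minimal sketch of what a machine-readable AICL declaration might look like. It assumes a simple three-value encoding per dimension; the field names, enum values, and rendered wording are illustrative, not part of the standard.

```python
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    # Illustrative encoding; the standard does not prescribe one.
    HUMAN = "human"
    MIXED = "mixed"
    AI = "ai"

@dataclass
class AiclDeclaration:
    """Hypothetical machine-readable record of one AICL self-declaration."""
    concept: Origin        # who originated the idea or framing
    development: Origin    # who built out structure, method, and argument
    words: Origin          # who produced the actual prose
    expert_reviewed: bool  # did a credentialed expert validate substance?
    level: int             # the confirmed AICL level, 0-9

    def disclosure(self) -> str:
        """Render a paste-ready disclosure line."""
        review = "expert-reviewed" if self.expert_reviewed else "not expert-reviewed"
        return (f"AICL-{self.level}: concept {self.concept.value}, "
                f"development {self.development.value}, "
                f"words {self.words.value}, {review}.")
```

For example, a hypothetical AICL-4 record (human concept, mixed development, AI words, expert-reviewed) renders as "AICL-4: concept human, development mixed, words ai, expert-reviewed."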
Answer four questions about how you created your content, one for each dimension: concept, development, words, and review. We will identify the correct AICL level and generate a disclosure statement ready to paste into your document.
Adoption works best as a phased rollout — starting with culture, moving to process, then embedding into tooling. The technical implementation is secondary to whether people want to be honest.
Phase 1 (culture): Build shared vocabulary before mandating compliance. The goal in this phase is honest self-reflection, not surveillance.
Phase 2 (process): Make declaration a natural step in existing publishing and document governance flows, not an added burden.
Phase 3 (tooling): Use your internal AI deployment to surface a suggested AICL score at session end; the human confirms.
The most practical near-term implementation is a prompted self-declaration workflow. Rather than assigning a score automatically, the AI tool asks the right questions at session end and helps the person arrive at the correct AICL level — consistently and honestly. This solves the inconsistency of pure self-declaration without requiring any single AI tool to know the full creative process.
A key implementation insight: the AI tool only sees its own contribution. If a creator used ChatGPT for brainstorming, Grammarly for editing, and Claude for drafting, no single tool has the full picture. The guided questionnaire captures the holistic process rather than any one tool's logs. This is why human confirmation remains essential even in automated workflows.
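As a sketch of how such a guided reflection might work mechanically, the following maps the four dimension answers onto a suggested starting level. The question wording and the scoring heuristic are illustrative assumptions; they stand in for the actual AICL level definitions, which are not reproduced here.

```python
# Illustrative question set for the guided reflection; the real
# questionnaire would follow the standard's own level definitions.
QUESTIONS = {
    "concept": "Who originated the core idea or framing?",
    "development": "Who built out the structure, method, and argument?",
    "words": "Who produced most of the actual prose?",
    "review": "Did a credentialed domain expert validate the substance?",
}

def suggest_level(concept: str, development: str, words: str,
                  expert_reviewed: bool) -> int:
    """Suggest a starting AICL level from the four dimension answers.

    Each of concept/development/words is "human", "mixed", or "ai".
    The weighting below is an illustrative heuristic, not the standard:
    concept and development count double because they carry the
    intellectual contribution, and expert review pulls the score back.
    """
    weight = {"human": 0, "mixed": 1, "ai": 2}
    raw = 2 * weight[concept] + 2 * weight[development] + weight[words]
    if expert_reviewed and raw > 0:
        raw -= 1  # review rehabilitates a high-AI-involvement score
    return min(raw, 9)

def ask_all() -> int:
    """Console-driven version of the reflection (illustrative stub)."""
    a = {k: input(f"{q} ").strip().lower() for k, q in QUESTIONS.items()}
    return suggest_level(a["concept"], a["development"], a["words"],
                         a["review"] in ("y", "yes"))
```

The specific weights matter far less than the shape of the flow: structured questions, a deterministic suggestion, and a human who can override it.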
When a user finishes a content creation session, the AI tool surfaces the AICL prompt automatically before the session closes — similar to a "save before exit" prompt.
The AI reviews the conversation history across all four dimensions and suggests a starting AICL level with brief reasoning: "Based on our session, this looks like AICL-5 — you provided the concept and strategic direction; I drafted most of the text; you then revised substantially."
The creator reviews the suggestion and adjusts if needed — for example, if they used other tools outside this session, or if the concept originated before any AI involvement. Human confirms the final level.
The tool outputs a ready-to-paste disclosure statement with the confirmed AICL level. The declaration is also logged to the organization's document management system for audit and quality assurance purposes.
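Putting those steps together, a minimal standalone sketch of the session-end hook might look like this. The suggestion and confirmation are stubbed with console I/O, since in practice they would come from the host tool's session analysis and UI; the function name and return shape are assumptions.

```python
def end_of_session_prompt(suggested_level: int, reasoning: str) -> dict:
    """Hypothetical 'save before exit' hook: suggest, confirm, emit.

    A real integration would replace the console stubs with the tool's
    own prompt surface and also write the returned record to the
    document management system for audit.
    """
    # Surface the AI's suggested level with brief reasoning.
    print(f"Suggested: AICL-{suggested_level}. {reasoning}")

    # The human adjusts for anything this tool cannot see (other tools
    # used, pre-existing concepts) and confirms the final level.
    answer = input("Confirm level (Enter to accept, or type 0-9): ").strip()
    confirmed = (int(answer) if answer.isdigit() and int(answer) <= 9
                 else suggested_level)

    # Emit a paste-ready disclosure statement alongside the logged record.
    statement = f"This content is declared AICL-{confirmed}."
    return {"level": confirmed, "statement": statement}
```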
If the company frames AICL as surveillance or a performance metric — "managers will judge you for using too much AI" — people will systematically declare AICL-0 regardless of the system. The framework only works if mid-range levels are genuinely respected. An AICL-4 from a domain expert who used AI as a scribe represents outstanding work. That framing must come from leadership first.
These scenarios illustrate how the same AICL levels appear across very different professional contexts; the need for a common vocabulary is universal.
Where epistemic credibility is everything
Where trust is the core product
Where clients pay for expert judgment
Where volume and consistency matter