Microsoft chose the word "extending" for a reason. When AI in SharePoint shipped Skills in public preview on 21 April 2026, the feature team did not call it customising or configuring. They called it extending. That one verb tells you how the team thinks about Skills, and it should change how you think about them too.
Extending SharePoint AI with Skills means turning a multi-step workflow into a reusable file that lives on a site. You write the instructions once, tested and tuned. After that, anyone on the site with View permission can run the same tested workflow. The unit of work becomes the workflow, not the prompt.
This guide is the picture I wish I'd had on day one of opt-in: where Skills sit in the stack, the prompt I actually use to write one that works, where Skills break under real load, and what they still cannot do in April 2026.
If you are starting cold, read what SharePoint AI Skills are and how they work first. This piece assumes you know what a Skill is and want the next layer down.
Why Microsoft called it "extending" AI in SharePoint
"Extending" is not marketing filler. It tells you three things about the design.
First, Skills do not add new capabilities. They compose existing ones. A Skill can summarise documents, organise files, update a list, or chain those actions together, but only because AI in SharePoint could already do each of them individually. The extension happens at the level of the workflow, not the capability. Microsoft's line in the official docs is explicit: a Skill "can only perform actions the user already has permission to do".
Second, Skills are not a plugin model. There is no external code, no webhook, no callout to another system. Everything happens inside SharePoint, in the permission context of whoever runs the Skill. If your mental model for extending a product is "write some JavaScript and ship a package", you will misread Skills on the first attempt. Microsoft documents Skills as distinct from the broader Copilot extensibility surface, which includes connectors, declarative agents, and custom engine agents. Skills are deliberately narrower.
Third, the extension is a library, not a product. You build Skills for the way your organisation works. A contract review Skill at a law firm looks nothing like a contract review Skill at a construction firm. The directory I am building on this site is a starting point, not a drop-in solution. Every Skill I publish is something you edit before using.
The word "extend" is doing real work. Read the docs and the public preview announcement with that framing and the feature lands differently.
Where Skills sit in the Microsoft AI stack
I get this question every week. Most teams already have Power Automate, some have Copilot Studio agents, many have a SharePoint agent or two, and now Skills are in the mix. The four tools overlap enough to be confusing and differ enough to matter.
Here is how I draw the line for clients.
| Tool | Lives in | Calls external systems | Dev skills needed | Best fit |
|---|---|---|---|---|
| AI in SharePoint Skill | SharePoint site | No | None | Reusable multi-step work on one site's content |
| SharePoint agent | SharePoint site | No | None | Scoped Q&A and summarisation over a site or library |
| Copilot Studio agent | M365 tenant | Yes, via connectors | Low-code | Cross-system workflows, external data sources |
| Power Automate flow | M365 tenant | Yes | Low-code | Triggered or scheduled business processes |
The rule I apply: if the work stays inside SharePoint and can be described as a repeatable set of steps on the site's own content, it is a Skill. If it reaches outside the site or the tenant, or it has to trigger automatically on an event, it is not a Skill, regardless of how the docs phrase it.
SharePoint agents and Skills are the most commonly confused pair. A SharePoint agent answers questions, scoped to documents you select. A Skill does work, scoped to the site. Agents respond, Skills act. They complement each other, and in several tenants I run both, doing different jobs.
I am writing a deeper comparison as a separate piece. When it ships, it will link from here.
The Agent Assets library in practice
A Skill lives at /Agent Assets/Skills/<skill-name>/SKILL.md in the site. The Agent Assets library is created by the product during preview enablement and cannot be deleted.
What this path actually means for you:
- Every Skill is a file. You can check it out, version it, label it, audit it, and retain it like any other SharePoint file.
- The folder name matters. AI in SharePoint uses the folder name, not just the filename, when it decides which Skill to load. Name deliberately.
- Editing SKILL.md directly is supported. If you know Markdown, you can skip the chat experience and author in the browser or in Word on the web.
- The library is site-scoped. There is no tenant-wide Skill library in April 2026. Three sites that need the same Skill need three copies.
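To make the path concrete, here is how the library might look on a site carrying two Skills. The Skill folder names are invented for illustration; the only part taken from the docs is the /Agent Assets/Skills/&lt;skill-name&gt;/SKILL.md pattern.

```
Agent Assets/
└── Skills/
    ├── review-contract-against-template/    (folder name: used when the agent picks a Skill)
    │   └── SKILL.md
    └── summarise-contract-key-terms/
        └── SKILL.md
```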
For the full mechanics of creation and running, see what SharePoint AI Skills are. The rest of this guide goes past that foundation.
The prompt I actually use to create a reliable Skill
The example Microsoft gives in the docs is fine for a demo and bad for a production Skill. It tells the agent to create a Skill that reviews legal contracts for a lawyer ID. It does not specify inputs, does not specify outputs, does not specify failure modes, and does not specify the exact trigger language. You can build a Skill from that prompt, but the first version will be ambiguous and you will re-tune it three times.
This is the template I use instead.
Create a Skill named "<short, unique name>" for this site.
Purpose:
<One-sentence statement of what the Skill does and when.>
Trigger:
<The user prompt phrases that should run this Skill.
Give two or three variations.>
Inputs:
<What the user provides. Files selected? A library path?
A list name? A free-text parameter?>
Steps:
1. <First action. Be specific. Include the rule, not just the action.>
2. <Second action.>
3. <Third action.>
...
Output:
<Exactly what the Skill returns. A message in chat?
A file updated? A list item created? A summary format?>
Rules:
- <Any constraint. Do not rename files.
Never overwrite metadata. Flag and ask on ambiguity.>
- <Any format rule. Date in ISO.
Lists use controlled vocabulary.>
That structure forces you to resolve the ambiguity before the agent drafts the Skill. Every field you skip becomes a tuning round later. The Rules section is the one most teams leave out, and then they spend a week rewriting.
Once the agent drafts the Skill, I run two passes. First pass: read the Markdown with the lens "does this tell a team member who has never seen my original prompt what to run, and what good looks like?" Second pass: run it on three deliberately chosen files. One typical, one edge case, one broken input. If all three produce sensible output, it ships.
The Rules block is the highest-value field in a Skill. Most failure modes I see trace back to a missing rule, not a missing step. Before you save a Skill, ask "what would a careless person get wrong here?" and put the answer in Rules.
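For reference, here is the template filled in for one of the contract Skills used as an example throughout this guide. Every name, path, and phrase is a placeholder to swap for your own content; the level of specificity is the point, not the particular wording.

```
Create a Skill named "Review contract against template" for this site.

Purpose:
Compare a selected contract against the current master template and report the differences.

Trigger:
"review this contract against the template",
"check this agreement against our standard terms",
"compare this contract to the master template".

Inputs:
One or more contract files selected by the user. Optionally the name of a
template file; if none is given, see Rules.

Steps:
1. Load the latest approved version of the master template from the Templates library on this site.
2. Compare the selected contract against the template clause by clause.
3. List every clause that is missing, added, or materially changed, with the clause heading for each.

Output:
A message in chat with three sections: Missing clauses, Added clauses, Changed clauses.
Do not edit or rename the contract file.

Rules:
- If the user has not named a template, use the latest version in the Templates library.
- Never overwrite metadata on the selected files.
- If a selected file is not a contract, say so and stop. Do not guess.
- Dates in the output use ISO format.
```

Run the two-pass check above on a draft like this exactly as you would on one the agent wrote from scratch.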
How AI in SharePoint decides which Skill to load
This part is thinly documented and causes more trouble than anything else.
When you prompt AI in SharePoint, two things happen. First, the agent checks for an explicit Skill name in your prompt. If you say "run contract review", it loads that Skill by name. Second, if there is no explicit name, the agent scans the Skill triggers on the current site and picks the one that best matches your prompt.
The failure mode is predictable. Two Skills with overlapping trigger language, and the agent picks the wrong one, silently. In one client tenant I had a "review contract" Skill and a "summarise contract" Skill, both triggered by prompts like "can you check this contract for me?". The agent loaded the wrong one about a third of the time. No error, no warning, just the wrong output.
Two fixes.
Write triggers that are actually distinct. "Review contract against template" and "summarise contract key terms" do not share vocabulary the way "review contract" and "summarise contract" do. Be specific in the verb and the object.
Watch the Skill indicator card in the chat UI. It confirms which Skill loaded. If it is the wrong one, name the right one explicitly in the next prompt ("run summarise contract key terms on this file"). The indicator card is the single most useful piece of UI for debugging Skills in preview. Check it every time until you trust the trigger set.
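Here is what distinct triggers look like side by side, using the two contract Skills as an example. The phrases are placeholders; what matters is that the verb and the object differ in every line.

```
Skill: Review contract against template
Trigger:
  "review this contract against the template"
  "check this agreement against our standard terms"

Skill: Summarise contract key terms
Trigger:
  "summarise the key terms in this contract"
  "give me the payment and termination terms from this agreement"
```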
Where Skills break: patterns to watch for
Three patterns show up once Skills touch real content. Each has a cheap fix if you know to look for it.
Skills that read the wrong document field. If a site has columns with overlapping names (a Yes/No "Approved" left from an old template and a Person "Approved" from the current one), a Skill told to "check the Approved column" can read the wrong one. Rename columns to something unambiguous before the Skill references them. This surfaces data-quality debt that was always there. Skills do not cause it, they just expose it.
Skills that edit files beyond the intended library. A Skill whose step says "organise files in this library" can scope wider than intended when run from a parent site, because AI in SharePoint resolves "library" against the user's current context. The permission boundary holds, but the scope does not. Name the exact library path in the Steps block. Avoid relative words like "this library".
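The before-and-after for that Steps line looks like this. The site and library names are hypothetical; the change that matters is the absolute path replacing the relative phrase.

```
Before:
1. Move any file older than 12 months in this library to the Archive folder.

After:
1. Move any file older than 12 months in the Invoices library on the Finance
   site (/sites/Finance/Invoices) to the Archive folder in that same library.
```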
Skills that ask too many clarifying questions. A Skill drafted with generic language resolves ambiguity at run time by asking the user. Users stop using it after the third run. Add a Defaults block: "If the user has not provided a review template, use the latest version in the Templates library." Once defaults land, the Skill runs cleanly on a one-line prompt.
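A Defaults block is just another section in the Skill, written the same way as Rules. A minimal sketch: the first default comes straight from the example above, the second is invented for illustration.

```
Defaults:
- If the user has not provided a review template, use the latest version in the Templates library.
- If the user has not named an output format, reply in chat and do not create a file.
```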
None of these are in the preview docs. They show up the first time you run a Skill against real content.
What Skills still cannot do in April 2026
Be honest about the boundary before you invest.
Skills cannot:
- Call any external system, including Microsoft Graph APIs outside what AI in SharePoint already exposes.
- Run custom code, including JavaScript, TypeScript, Power Fx, or OfficeScript.
- Operate across multiple sites in one run. Each Skill is scoped to the site it lives on.
- Trigger on an event. Skills are user-invoked, not scheduled or webhook-fired.
- Share a common library across a tenant. Three sites, three copies, at least during public preview.
If the work crosses any of those lines, reach for a different tool. Copilot Studio for cross-system agents. Power Automate for event-driven flows. SPFx or Graph for anything that needs code. Skills earn their keep on the inside-one-site layer, which is larger than it sounds.
The Microsoft 365 roadmap entry for custom Skills shows public preview rolling out from mid-April through early May 2026 and general availability from late May to early July 2026. Some of today's limits will move by the time GA ships. Build for today, watch the roadmap.
Governance once you have more than one Skill
The picture changes at around three to five Skills per site. Below that, Skills feel like clever prompts. Above it, you need a small amount of intentional governance.
Three things to get ahead of.
Naming and triggers. If two Skills share trigger language, the agent will pick wrong. Review your trigger phrases as a set, not in isolation.
Edit permissions on the Agent Assets library. Any site editor can write a Skill. If your site has a broad Edit group, that is a lot of Skill authors. Consider breaking permission inheritance on Agent Assets and restricting authoring to a smaller group.
Retention and sensitivity. SKILL.md files can contain organisation-specific rules, sometimes mirroring policies. Apply the same retention label you would apply to a policy document in that site. The Agent Assets library supports sensitivity labels, versioning, and audit the same way any other library does.
I am writing a separate piece on governance at scale. It will link from here once it ships.
Where this leaves you
Skills are the first feature in the AI in SharePoint family that changes how work actually gets done on a site, rather than how it gets prompted. The teams that invest a few hours this quarter in building three or four tested Skills will be operating at a different speed by the end of Q3 2026. The teams still typing one-off prompts will still be typing one-off prompts.
Pick the first one. Use the prompt template above. Test on three files. Ship it. The second Skill is easier than the first, and the tenth is easier than the fifth.