Most investment teams have already tried AI.
They have ChatGPT. They have Copilot. They may have a document search product with “AI-powered” somewhere in the deck. Someone on the team uses one of these tools to summarize OMs, draft emails, or pull facts out of PDFs.
And still, the firm does not feel meaningfully more leveraged.
That is not because the models are useless. It is because most firms are using general-purpose tools for workflow-specific problems.
A chat tool can answer a question.
A skill completes a defined workflow.
That distinction matters.
A skill, in plain English
A skill is a repeatable workflow with expected inputs, firm-specific instructions, validation rules, a defined output format, and a human review step.
It is narrow by design. It is not trying to be generally helpful. It is trying to do one job the way your firm wants that job done.
Take a hotel first-pass screening skill.
The inputs might be an offering memorandum, a T-12, an STR report, and a PIP summary. The skill's job is not to “summarize the documents.” Its job is to extract the facts that matter, reconcile the broker's story against the financials, flag missing diligence, apply the firm's screening logic, and produce the first-pass package in the format the team already uses.
That is a very different product from asking a chatbot, “Is this a good deal?”
One is an open-ended conversation. The other is an operating process.
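For readers who think in code, here is a rough sketch of that anatomy as a data structure. Every name below is invented for illustration; none of it refers to a specific product:

```python
# Illustrative sketch only; the field names are invented, not a real product's API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Skill:
    name: str                    # e.g. "hotel_first_pass_screening"
    expected_inputs: list[str]   # e.g. ["offering_memorandum", "t12", "str_report", "pip_summary"]
    instructions: str            # firm-specific rules, not generic prompts
    validators: list[Callable[[dict], list[str]]] = field(default_factory=list)  # checks that flag problems
    output_template: str = ""    # the house format the deliverable must land in
    human_review: bool = True    # a skill ends at review, never at generation
```

The code is not the point. The point is that every element is explicit: inputs, rules, checks, format, review.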
What a skill is not
A skill is not a saved prompt.
Saved prompts help one person ask a better question. Skills standardize a workflow for the firm.
A skill is not a generic agent.
Generic agents are useful for exploration, research, and one-off work. Firms do not run on one-off work. They run on repeated processes: deal intake, underwriting support, diligence tracking, IC materials, asset management reporting, LP updates.
A skill is not just document extraction.
Extraction matters, but it is not enough. If the system stops at extracted fields, the team still has to turn those fields into the actual deliverable. In real firms, the deliverable is usually a file: an Excel workbook, a memo, a PowerPoint, a report, or a tracker.
A skill is only valuable when it reaches that last mile.
The four layers that determine whether a skill works
The market still talks about AI implementation as if the model is the product. It is not. The model is one component. The implementation is where the value either appears or disappears.
A useful skill has four layers.
1. Workflow instructions
The skill needs rules that reflect how the firm actually works.
For a hotel screening workflow, the instruction is not “analyze the OM.” It is more specific:
- treat occupancy, ADR, and RevPAR as linked metrics;
- reconcile broker NOI against T-12 actuals;
- separate franchise fees, management fees, PIP obligations, ground leases, and other recurring charges;
- distinguish required screening fields from optional diligence fields;
- surface conflicts between sources instead of silently choosing one number.
Those instructions come from the business. They encode judgment that usually lives in analysts, principals, and old templates.
This is where generic implementations fail. A technical team can make a model read documents. That does not mean it understands which conflicts matter before investment committee.
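To show what encoded judgment looks like in practice, here is a minimal sketch of the NOI reconciliation rule from the list above. The five percent tolerance is an invented placeholder; a real firm would set its own:

```python
# Minimal sketch; the 5% tolerance is a placeholder, not a recommendation.
def reconcile_noi(broker_noi: float, t12_noi: float, tolerance: float = 0.05) -> list[str]:
    """Flag conflicts for human review instead of silently choosing one number."""
    if t12_noi <= 0:
        return ["T-12 NOI is missing or non-positive; cannot reconcile against broker NOI."]
    variance = (broker_noi - t12_noi) / t12_noi
    if abs(variance) > tolerance:
        return [f"Broker NOI is {variance:+.1%} vs. T-12 actuals; review adjustments before screening."]
    return []

# A pro forma 18% above trailing actuals gets flagged, not averaged away:
print(reconcile_noi(broker_noi=2_360_000, t12_noi=2_000_000))
```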
2. Firm context
A skill needs your firm's context, not generic market context.
That means thresholds by asset class. Template conventions. Naming standards. Market and brand preferences. How conservative the firm is on revenue recovery. Which operators, lenders, or counterparties trigger extra scrutiny. What counts as “screening-ready” versus “not enough information.”
None of that lives inside the model by default.
Without firm context, the output may be polished and still not be yours.
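One way to picture firm context: it is data that travels with the skill, separate from the model. Every number and name below is invented for illustration:

```python
# All values invented for illustration; real thresholds come from the firm.
FIRM_CONTEXT = {
    "hotel_thresholds": {"min_keys": 100, "max_pip_per_key": 25_000, "min_revpar_index": 95},
    "extra_scrutiny": {"operators": [], "lenders": []},  # counterparties that trigger deeper review
    "screening_ready_fields": ["purchase_price", "t12_noi", "occupancy", "adr", "revpar"],
}

def screening_ready(extracted: dict) -> bool:
    """'Screening-ready' is the firm's bar, not the model's guess."""
    return all(extracted.get(f) is not None for f in FIRM_CONTEXT["screening_ready_fields"])
```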
3. Output design
This is the part most tools skip.
A skill is not done when it produces the right analysis. It is done when the analysis lands in the right place, in the right structure, so the team can review it instead of rebuilding it.
If your screening template has fifteen fields in a specific order, the skill should produce those fifteen fields in that order. If your underwriting model expects assumptions in specific tabs, the output has to respect the workbook. If your IC memo uses fixed risk categories, the skill should draft into those categories.
Template fidelity is not cosmetic. It is operational.
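Mechanically, template fidelity looks something like this sketch, with an invented fifteen-field house template standing in for yours:

```python
# The field list is invented; the point is fixed fields, fixed order, gaps surfaced.
TEMPLATE_FIELDS = [
    "property_name", "market", "brand", "keys", "purchase_price",
    "price_per_key", "t12_noi", "cap_rate", "occupancy", "adr",
    "revpar", "pip_estimate", "debt_assumptions", "hold_period", "recommendation",
]  # fifteen fields, in the order the team already reads them

def to_house_format(extracted: dict) -> list[tuple[str, str]]:
    """Emit every template field in order; mark gaps rather than dropping rows."""
    return [(f, str(extracted[f])) if f in extracted else (f, "MISSING, needs review")
            for f in TEMPLATE_FIELDS]
```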
4. Testing against real files
Demo files are clean. Real deal files are not.
Real OMs bury important facts in footnotes. T-12s have missing months. STR reports arrive as PDFs, Excel files, or both. Broker adjustments do not tie cleanly. PIP items are described differently across brands.
A skill that works on a curated demo may fail on the first live package.
Implementation means running the skill on messy files, reviewing the output against firm standards, finding failure points, tightening the logic, and repeating the cycle. That is the unglamorous work that makes the system reliable.
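One way to make that cycle concrete is a small regression harness: run the skill over past deal packages a human has already approved, and diff the results. The file layout and the `run_skill` callable here are assumptions, not a real tool:

```python
# Assumed layout: one folder per past deal, each containing an approved_output.json
# that a human has already signed off on. `run_skill` is a stand-in, not a real API.
import json
from pathlib import Path

def regression_run(deals_dir: Path, run_skill) -> list[str]:
    failures = []
    for deal in sorted(p for p in deals_dir.iterdir() if p.is_dir()):
        approved = deal / "approved_output.json"
        if not approved.exists():
            continue                  # only test deals with a reviewed baseline
        expected = json.loads(approved.read_text())
        actual = run_skill(deal)      # the skill under test, supplied by the caller
        for fld, want in expected.items():
            if actual.get(fld) != want:
                failures.append(f"{deal.name}: {fld} = {actual.get(fld)!r}, expected {want!r}")
    return failures                   # each failure becomes a logic fix; then rerun
```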
Model choice is rarely the bottleneck
Firms often assume the most important AI decision is which model to use.
That is understandable. The market talks about models constantly. Vendors lead with model capability because it is easier to explain than implementation discipline.
But for most professional-services workflows, model choice is no longer the main bottleneck.
A well-implemented skill on a capable model will beat a poorly configured workflow on the best model available. Consistently.
The hard question is rarely “can the model reason?” The hard question is whether the model has the right job, the right context, the right source material, the right constraints, and the right output target.
If those pieces are wrong, a better model just produces a better-written version of the wrong deliverable.
Individual productivity is not firm-level leverage
This is the adoption gap I see most often.
Most AI usage inside investment firms improves individual productivity. One analyst summarizes a report faster. One associate drafts a memo section. One principal asks a chat tool for market research before a call.
That is useful. It is not the same as operating leverage.
Firm-level leverage looks different:
- every team member can run the same workflow;
- outputs follow the firm's standard regardless of who ran the skill;
- the workflow takes twenty minutes instead of two hours, every time;
- the firm's way of thinking is encoded into a repeatable process instead of transmitted informally from person to person.
If the output depends on who wrote the prompt, the firm has not built leverage. It has given a few employees better tools.
A real skill moves the process itself.
What this looks like in a lean investment firm
Imagine a firm that reviews 150 deals a year with two analysts and one acquisitions associate.
Deal intake repeats constantly. OM arrives. Financials arrive. The team extracts key facts, checks the basics against the firm's thresholds, writes a brief, and decides whether to advance.
That is a skill candidate.
A first-pass screening skill should:
- identify the required files;
- block or ask for confirmation if the wrong files are selected;
- extract property facts, financial metrics, and market benchmarks;
- reconcile conflicts across sources;
- apply the firm's first-pass thresholds;
- populate the screening memo or model in the house format;
- surface data gaps for human review.
The analyst is not removed. The analyst is repositioned.
Instead of spending two hours extracting, typing, formatting, and reconciling, the analyst spends fifteen to thirty minutes reviewing structured output, checking flagged issues, and applying judgment.
That is the correct boundary. AI handles the friction work. Humans own the judgment.
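Put together, the orchestration for that skill has a simple shape. The step functions below are injected stand-ins, not real APIs; what matters is the order: validate inputs, extract, reconcile, apply firm logic, land in the house format, stop for review.

```python
# The step functions are stand-ins supplied by the implementation;
# nothing here is a real API. The shape is the point.
REQUIRED_INPUTS = {"offering_memorandum", "t12", "str_report"}

def first_pass_screening(files: dict, extract, reconcile, apply_thresholds, fill_template) -> dict:
    missing = REQUIRED_INPUTS - files.keys()
    if missing:
        return {"status": "blocked", "ask_for": sorted(missing)}  # block, do not guess
    facts = extract(files)                # property facts, financials, market benchmarks
    conflicts = reconcile(facts)          # cross-source disagreements, surfaced not hidden
    verdict = apply_thresholds(facts)     # the firm's first-pass logic
    memo = fill_template(facts, verdict)  # the deliverable, in the house format
    gaps = [k for k, v in facts.items() if v is None]
    return {"status": "ready_for_review", "memo": memo, "conflicts": conflicts, "data_gaps": gaps}
```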
The question to ask before buying another AI tool
When your team uses AI today, what comes out the other side?
If the answer is “a summary,” “a chat response,” or “a table we still have to move into our template,” the firm is still upstream of the real workflow.
The useful question is not whether AI can read your documents. It can.
The useful question is whether it can produce a firm-standard output your team can review and route without rebuilding.
If not, the bottleneck is not intelligence. It is implementation.
Your firm already has workflows that are recurring, definable, and painful: deal screening, underwriting support, asset management reporting, LP updates, diligence tracking. Those are skill candidates.
The first step is not buying more model access. It is identifying which workflows deserve to become skills, what inputs they require, what output format they must produce, and who signs off.
That is where useful AI implementation starts.
Suzerand helps firms turn recurring professional-services workflows into repeatable AI skills: expected inputs, firm-specific instructions, source validation, file outputs, and human review. If your team is still using AI one person at a time, request a workflow review at suzerand.com.