You've done the research. You understand what Agentforce implementation involves, you've decided it's the right platform over Einstein AI, and you know roughly what it costs. Now you have a list of partners — and the proposals all look superficially similar. Every one of them claims deep Agentforce expertise. Every one of them promises 90-day delivery. Every one of them has a logo-covered slide in their deck.

The problem: most of them are lying. Not necessarily about their intentions — most partners genuinely intend to deliver what they promise. But a firm that has done 3 classic Einstein Bot projects and one Agentforce sandbox demo is not an Agentforce implementation partner. They're a Salesforce partner with AI aspirations and your $85K on the line.

This guide gives you a systematic way to tell the difference. By the end, you'll know which certifications actually prove production readiness, the red flags that appear before you sign, the cost-quality tradeoffs across different partner types, and the 12 questions that will separate serious Agentforce practitioners from vendors rebranding old work.

  • 67% — of failed implementations traced to poor partner selection
  • 3–5× — typical cost overrun with unqualified partners
  • 90 days — what a qualified partner delivers, vs. 12–18 months for Big 4

Certifications That Actually Mean Something

Certifications are not sufficient proof of partner quality — but they are a necessary filter. A partner without the right credentials has definitely not invested in Agentforce-specific knowledge. A partner with them has cleared the minimum bar. Here's what the cert stack looks like and what each level actually proves:

Individual certifications to require

Ask specifically which credentials the consultants who will be doing your work hold. Not the company overview — the people assigned to your project.

  • Salesforce Certified AI Associate — The baseline. Proves foundational knowledge of Salesforce AI concepts. Necessary but not sufficient. If this is the most advanced AI cert on your team, find a different partner.
  • Salesforce Certified AI Specialist — Covers Agentforce configuration, prompt engineering, and agent testing. This is the minimum cert for anyone leading Agentforce build work.
  • Salesforce Certified Agentforce Specialist — Agentforce-specific: agent architecture, Topics configuration, Actions development, guardrail implementation. Every senior consultant on your project should hold this. Ask for their Trailhead profiles.
  • Salesforce Certified Platform Developer I/II — Relevant when your implementation requires Apex-based custom Actions or complex Flow logic. Most serious Agentforce builds require this level of Salesforce development skill alongside the AI credentials.

Company-level credentials to check

Individual certs prove knowledge. Company-level credentials prove that Salesforce itself has verified the firm completed real customer projects.

  • Salesforce Navigator Expert (Service Cloud or Sales Cloud) — Navigator Expert status requires demonstrated success on 3+ customer projects, customer satisfaction evidence, and product-specific expertise. This is the most meaningful company-level signal for Agentforce work, since Agentforce agents typically run within Service Cloud or Sales Cloud.
  • Salesforce Agentforce Partner designation — A newer credential specifically tied to Agentforce implementation. Ask if the firm is an Agentforce Partner and can show their partner network listing. This designation requires completing Agentforce-specific training and demonstrating customer success.

Verify on AppExchange

Every legitimate Salesforce partner has an AppExchange listing you can check independently. Search the firm's name on Salesforce AppExchange and verify their certifications, customer reviews, and Navigator status directly. Don't rely on what they tell you — look it up.

Certified Partner vs. Big 4 vs. Freelance: The Real Tradeoffs

Three paths to Agentforce implementation. They are not interchangeable, and the right choice depends entirely on your company's size, risk tolerance, and how much of your budget you're willing to spend on overhead versus actual implementation work.

| Partner Type | Typical Cost | Timeline | Best For | Watch Out For |
| --- | --- | --- | --- | --- |
| Freelance Agentforce Consultant | $150–$250/hr | Unpredictable | Budget-constrained POC builds; companies with a strong in-house Salesforce team to supervise | Single point of failure; no bench for complex integrations; disappears if the project goes sideways |
| Certified Agentforce Specialist Partner (recommended) | $75K–$105K fixed | 75–90 days | Mid-market companies (100–2,000 employees) that want production-grade delivery at predictable cost | Verify Agentforce-specific projects, not just general Salesforce history |
| Regional Salesforce Partner (general) | $65K–$120K | 90–150 days | Companies with established partner relationships; works well if the partner has current Agentforce projects | Many rebrand Einstein Bot or Service Cloud work as "Agentforce experience" — ask for dates on reference projects |
| Big 4 Consultancy (Deloitte, Accenture, PwC, KPMG) | $200K–$500K+ | 12–18 months | Enterprise (2,500+ employees); complex multi-system integrations; global rollouts | Your account is a rounding error to them; senior partners sell, junior consultants deliver; timeline bloat is built into the model |
| System Integrator (Cognizant, Infosys, Wipro) | $150K–$400K | 9–15 months | Companies with large existing SI relationships; complex compliance requirements | Agentforce expertise is thin at most SIs; teams are often staffed with developers who hold Salesforce certs but have limited Agentforce production experience |

For mid-market companies — which Agentforce was explicitly designed for — the certified specialist partner route delivers the best quality-per-dollar outcome. The Big 4 premium buys you brand recognition, more account managers, and a longer timeline. It rarely buys you a better implementation. The freelance route works if you have strong internal Salesforce governance to manage them, but most mid-market companies don't.

The subcontracting trap

A common pattern at larger partners: a well-credentialed senior consultant sells the engagement, then hands off delivery to a subcontracted offshore team that has never done Agentforce work. Ask directly: "Will any subcontractors be involved in delivery?" and "Can you name the specific consultants who will work on our implementation?" If they can't answer the second question before signing, you don't have a delivery team — you have a promise.

Red Flags That Appear Before You Sign

The warning signs in a partner proposal are usually visible in the first meeting if you know what you're looking for. These are the patterns that consistently predict implementation problems:

🚩 Walk away from partners who show these signs

No Agentforce-specific production references

General Salesforce references are not Agentforce references. Ask for production deployments from the last 12 months with a reference contact you can call. If they have fewer than two, they're learning on your project.

Vague delivery timeline with no phase breakdown

Legitimate Agentforce partners have a defined delivery methodology: discovery, build, testing, deployment. If the proposal says "90 days" without specifying what happens in each phase, that number is fiction. You should be able to see what week each milestone falls on.

Time-and-materials pricing with no cap

Agentforce projects on T&M billing frequently run 2–3× their opening estimate. Fixed-scope, fixed-price is the correct model for a defined implementation. Partners who won't commit to a cap either don't know how to scope Agentforce work or are deliberately leaving room to run up hours.

Proposal focused on features, not outcomes

A proposal that lists "configure 5 Topics, build 8 Actions, set up 2 agent personas" without specifying what business outcomes those deliver is a feature proposal. You're not buying features — you're buying automation that resolves cases, qualifies leads, or handles internal requests. Demand outcome metrics in the SOW.

No guardrails methodology

Guardrails — what the agent is explicitly prohibited from doing — are not optional in a production deployment. A partner who doesn't proactively discuss guardrails in the proposal either hasn't thought about production safety or is hiding a scope gap that will surface in week 8.

Post-launch support is an upsell, not included

Agentforce agents need tuning after go-live. Real-world conversations will surface edge cases the sandbox never did. A partner who considers the engagement done at go-live and charges separately for everything after has no incentive to build a resilient system. Get at least 30 days of hypercare included in the base contract.

12 Questions That Separate Real Agentforce Partners from Rebranders

Ask all of these. The answers will tell you more about a partner's actual capabilities than anything in their proposal deck. Pay particular attention to how they handle the questions about scope changes and post-launch — that's where partnership quality shows.

Question 01

Can you show me 2–3 Agentforce production deployments from the last 12 months, with reference contacts I can call?

Not Einstein Bot. Not Service Cloud chatbot. Agentforce — the platform that launched in 2024. You want the reference contact name and phone number, not just a logo on a slide. Any serious partner will have these. If they offer case studies instead of live references, push harder.

Question 02

Who specifically will be doing the work? Can I meet the project lead and senior consultant before we sign?

Partners who subcontract delivery or staff from a bench often can't answer this before close. If they can't name the team pre-signature, the team doesn't exist yet. You want the right to approve the project team before the contract is executed.

Question 03

Walk me through your testing methodology. What does UAT look like, and how long does it typically take?

A credible answer is specific: the number of test scenarios, how edge cases are documented, how guardrail violations are tested, what "done" looks like before they sign off. A vague answer — "we do thorough testing" — means testing is not actually part of their methodology.

Question 04

What happens when we discover scope items mid-project that weren't in the original SOW?

This will happen. Every Agentforce implementation surfaces requirements during discovery that weren't visible upfront. How the partner handles this — change order process, built-in buffer, how disputes are resolved — tells you how the relationship will work when it gets hard.

Question 05

What post-launch support is included, and what does it cost after the initial engagement?

Get this in writing before signing. "Hypercare period" should be at least 30 days. Ask specifically what's included: bug fixes? Performance tuning? New edge case handling? And what's the retainer model or hourly rate for ongoing optimization after the hypercare period ends?

Question 06

How do you handle data quality issues that we discover during discovery?

Data cleanup is consistently the most underestimated cost in Agentforce implementations. Partners who've done this before have a defined process. Partners who haven't will tell you "we'll work with whatever you have" — which is how you end up with an agent making decisions on garbage data.
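To make this concrete, the field-level audit a seasoned partner runs during discovery can be sketched in a few lines. This is an illustrative sketch only: the record shape, field names, and 90% completeness threshold are assumptions, not a Salesforce process.

```python
# Minimal sketch of a pre-build data-completeness audit. The record shape,
# field list, and 90% threshold are illustrative assumptions.
def audit_completeness(records, required_fields, threshold=0.90):
    """Return the fields whose fill rate falls below the threshold."""
    gaps = {}
    for field in required_fields:
        filled = sum(1 for r in records if r.get(field) not in (None, ""))
        rate = filled / len(records) if records else 0.0
        if rate < threshold:
            gaps[field] = round(rate, 2)
    return gaps  # e.g. {"Email": 0.62} means only 62% of records have an email
```

A report like this, produced in week 1, is what turns "we'll work with whatever you have" into a scoped remediation plan with a cost attached.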

Question 07

What's your deployment approach — do you go straight to production or stage through shadow and supervised modes?

Good partners stage deployments: shadow mode (agent observes but doesn't act), supervised mode (agent acts but human reviews), then autonomous. Partners who go straight to production autonomous mode have either built an unusually simple agent or are cutting corners on safety.
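The shadow-to-autonomous progression described above is, at heart, a gating policy: the agent only advances a stage when quality gates pass, and any guardrail violation demotes it. The sketch below is illustrative, not Salesforce configuration; the mode names mirror the article, and the accuracy thresholds are assumptions.

```python
# Illustrative sketch of staged-deployment gating, not Salesforce config.
# Promotion thresholds (95% / 98%) are assumptions for demonstration.
from enum import Enum

class Mode(Enum):
    SHADOW = 1      # agent observes and logs, takes no action
    SUPERVISED = 2  # agent acts, but a human reviews each action
    AUTONOMOUS = 3  # agent acts without per-action review

def next_mode(mode: Mode, accuracy: float, guardrail_violations: int) -> Mode:
    """Promote the agent one stage only when quality gates pass."""
    if guardrail_violations > 0:
        return Mode.SHADOW  # any violation sends the agent back to observation
    if mode is Mode.SHADOW and accuracy >= 0.95:
        return Mode.SUPERVISED
    if mode is Mode.SUPERVISED and accuracy >= 0.98:
        return Mode.AUTONOMOUS
    return mode  # otherwise hold the current stage
```

A partner with a real staging methodology can articulate exactly these gates: what metric is measured, what threshold promotes, and what event rolls back.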

Question 08

How do you measure success on an Agentforce implementation? What metrics do you commit to?

Legitimate partners commit to output metrics: deflection rate targets, resolution rate benchmarks, escalation rate ceilings. If the only metrics in the proposal are activity metrics (Actions built, Topics configured), you have a vendor, not a partner. Push for measurable business outcomes in the contract.
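As a worked example, the outcome metrics named above reduce to simple ratios over case counts. The function below is an illustrative sketch; the argument names are assumptions, and in practice the counts would come from Salesforce case reports rather than function arguments.

```python
# Illustrative calculation of the outcome metrics a partner should commit to.
# Input counts are assumptions; in production they'd come from case reports.
def outcome_metrics(total_cases: int, agent_resolved: int, escalated: int) -> dict:
    handled = agent_resolved + escalated  # cases the agent touched at all
    return {
        # share of all inbound cases resolved with no human involvement
        "deflection_rate": agent_resolved / total_cases,
        # of cases the agent handled, the share it resolved itself
        "resolution_rate": agent_resolved / handled if handled else 0.0,
        # of cases the agent handled, the share handed to a human
        "escalation_rate": escalated / handled if handled else 0.0,
    }

m = outcome_metrics(total_cases=1000, agent_resolved=550, escalated=150)
# deflection_rate = 0.55, resolution_rate ≈ 0.786, escalation_rate ≈ 0.214
```

The point is that each number in the SOW should be computable from data you can pull yourself, so "deflection rate of 40%" becomes a verifiable commitment rather than a slide claim.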

Question 09

What integrations have you built for Agentforce agents outside of core Salesforce objects?

Most Agentforce implementations need at least one external system integration — a ticketing system, ERP, or knowledge base outside Salesforce. A partner without this experience will struggle when your implementation inevitably needs it.

Question 10

What's the most complex Agentforce implementation you've completed, and what made it complex?

Listen for specificity. Real complexity in Agentforce projects involves things like multi-agent orchestration, custom Apex-based Actions, complex guardrail configurations, or high-volume performance tuning. If "complex" means "the client had a lot of custom objects," find a partner with more experience.

Question 11

How do you handle an implementation that isn't working as expected post-launch?

This is the character question. A good partner has a defined incident response process, documented escalation paths, and a commitment to resolution timelines. A bad partner will start pointing fingers at Salesforce platform limitations or your data quality — even when the problem is in their build.

Question 12

Can you show me the actual Trailhead profiles and certification records of the consultants assigned to my project?

Salesforce certifications are publicly verifiable on Trailhead. Any partner who refuses to share consultant Trailhead profiles before signing has something to hide. This takes 5 minutes and is completely standard — if they push back, that's your answer.

Timeline: What Good vs. Bad Looks Like

One of the clearest signals in a partner evaluation is how their proposed timeline compares to what a production Agentforce implementation actually requires. Good partners give you a specific milestone-level plan with a realistic testing window. Bad partners give you a number.

For the full implementation methodology breakdown, see our complete Agentforce implementation guide.

Good partner timeline (75–90 days)
  • Week 1–3: Discovery, data audit, requirements sign-off
  • Week 4–7: Topics and Actions architecture, org configuration
  • Week 8–11: Agent build, Flows and Apex, integration setup
  • Week 12–13: UAT with documented test scenarios
  • Week 13–14: Shadow mode, guardrail validation
  • Week 15: Supervised mode, performance baseline
  • Week 15–16+: Autonomous operation + 30-day hypercare
Bad partner timeline (same "90 days")
  • "Discovery": One kickoff call and a requirements template you fill out
  • "Build": 6 weeks, no interim checkpoints
  • "Testing": 1 week of demo review, no documented test cases
  • "Go-live": Straight to production, no staged deployment
  • Post-launch: "Reach out if you have questions"
  • No milestone schedule in the contract
  • Timeline measured from kickoff, not contract — loses 2 weeks

Evaluating Proposals: What a $45K Quote vs. a $130K Quote Should Include

Both numbers can be legitimate. Both can be disasters. The price alone tells you almost nothing — what matters is what's included and what's explicitly out of scope. Here's the breakdown of what you should see at each tier:

| Scope Item | $45K–$65K Package | $75K–$105K Package | $105K–$130K Package |
| --- | --- | --- | --- |
| Agent types | 1 (Sales, Service, or Ops) | 1–2 | Sales + Service + Ops |
| Custom Actions | Up to 5 | Up to 12 | 20+ |
| External integrations | None or 1 basic | 1–2 with data mapping | Multiple, deep integrations |
| UAT and testing | Basic UAT, 1 round | Documented test scenarios, 2 rounds | Full regression suite, edge case library |
| Data preparation | Not included — you provide clean data | Light cleanup and mapping included | Full data audit and remediation |
| Guardrails configuration | Standard library, no custom rules | Custom guardrails for your use case | Full guardrail architecture + testing |
| Staged deployment | Shadow mode only | Shadow + supervised + autonomous | Full staged rollout with rollback plan |
| Post-launch support | 2 weeks hypercare | 30 days hypercare included | 60 days + quarterly optimization |

A $45K quote missing data preparation and limiting post-launch support to 2 weeks isn't a bad deal — it's a specific scope. If you have clean data and a capable internal Salesforce admin, that scope might be right. A $130K quote without documented test scenarios and a staged deployment methodology is overpriced regardless of what else is in it.

Evaluate the SOW, not the proposal deck

The proposal deck is marketing. The Statement of Work is the contract. Before evaluating price, ask for the draft SOW and check: are deliverables defined with acceptance criteria? Is the testing plan documented? Are exclusions listed explicitly? A partner who resists sharing a draft SOW before signing has something to hide in the fine print.

How to Actually Check References

Most buyers skip reference checks or make them perfunctory. That's a mistake. A 15-minute reference call with the right questions will tell you more about a partner than 3 hours of demos.

Ask for references from projects similar to yours

Same agent type (Service vs. Sales vs. Ops), similar company size, similar Salesforce org complexity. A reference from a 10,000-employee enterprise tells you nothing about a 300-person mid-market implementation.

Ask about what went wrong, not just what went right

Every implementation has problems. References who say it was perfect either aren't telling the truth or don't know what good looks like. "What was the hardest part, and how did the partner handle it?" reveals character more than any success story.

Ask specifically about timeline adherence and change orders

"Did the project deliver on the original timeline? Were there change orders, and how were they handled?" If there were 5 change orders and timeline slipped 3 months, that's a pattern, not an exception.

Ask about the agent's performance 6 months post-launch

How is the agent performing now — not at go-live, but today? Is it still hitting deflection rate targets? Has it needed re-tuning? The 6-month view separates partners who build launch demos from partners who build production systems.

Ask if they would hire the partner again

The final question. "Knowing what you know now, would you hire them again for your next Agentforce project?" The answer and how it's delivered — enthusiastic yes, hesitant yes, "probably not" — is the most useful data point in the call.

What a Solid Implementation Methodology Looks Like

Beyond the evaluation questions, here's the overall shape of a well-run Agentforce implementation. Use this as a template to evaluate whether a partner's proposed approach is credible.

The full implementation guide covers each phase in depth. At a summary level: discovery and data audit (3 weeks), agent architecture and Actions development (5 weeks), testing and guardrail configuration (2 weeks), staged deployment from shadow to autonomous (2 weeks), hypercare (4 weeks). That's 90 days of actual work. Any partner compressing this significantly is cutting something.

Understanding what your budget actually buys at each price point is equally important — a partner's price tier should match the scope of what they're delivering, and any gap between the two is a risk.

The companies that get the most out of Agentforce aren't necessarily the ones with the largest budgets. They're the ones who did partner selection rigorously, asked the hard questions before signing, and chose a team that had actually shipped production systems before theirs. That diligence front-loads the work you'd otherwise do after a failed implementation — and it's substantially cheaper than the alternative.

See what a qualified partner looks like

Fixed-price Agentforce packages, a team with verifiable production deployments, and a methodology you can review before signing anything.

View our packages

90-day delivery. Post-launch support included. References available before you sign.