Let’s say your client has a need, and it cannot be fulfilled by Salesforce. So you provide them with a shortlist of your preferred third-party applications. (Note: you have a shortlist, right?)
How do you help your client decide which of the apps is the best one for their needs?
Having witnessed and participated in the procurement process many times over the years, I've settled on a favorite approach. It involves multiple steps.
Step 1 – Decide the functional criteria
Start with four to nine categories, then break each one down into specific requirements.
Let’s use webforms as our example. Here are some possible categories:
- Overall functionality
- End-user usability
- Salesforce integration
- Scalability
- Technical & security
- Roadmap & future enhancements
- Support
Breaking down the first two categories, the requirements could look like this.
Overall functionality:
- Able to create custom templates
- Able to add multimedia content
- Able to pre-populate forms (with non-Salesforce data)
- eSignature possible
End-user usability:
- Multi-page forms and conditional logic
- Wide range of field data types
- Theme editor to customize the look and feel
- Responsive forms
Be sure these requirements cover all of your main technical and business needs.
Step 2 – Define each category's weight
Each category receives a number, and the weights of all categories sum to 100.
Weighting lets one category take priority over another. For our example, let's propose the following weights:
- Overall functionality: 20
- End-user usability: 20
- Salesforce integration: 15
- Scalability: 15
- Technical & security: 15
- Roadmap & future enhancements: 5
- Support: 10
This means “support” is more important than “roadmap & future enhancements”, and “overall functionality” and “end-user usability” have equal priority.
Step 3 – Define the scoring values
This is where things become interesting: rather than using a typical 1-5 or 1-10 scale, imagine the following scoring:
- 10 = exceptional
- 7 = good
- 4 = workable
- 0 = poor
The gaps between scores force more decisive responses. In other words, each step up or down carries real weight, so the people scoring have to choose carefully.
Note that the exact wording of the scoring may be different for each category. For example, when reviewing “support”, the scores may represent the following:
- 10 = World-class support with SLAs
- 7 = Standard email/ticket/phone support
- 4 = Only one support channel available
- 0 = Support is very limited
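To see how the weights and scores combine, here is a minimal sketch in Python. It uses the example weights from Step 2 and the 0/4/7/10 scale from Step 3; the vendor's scores are hypothetical, purely for illustration.

```python
# Weighted scoring sketch for comparing third-party apps.
# Weights are the article's example values (they sum to 100);
# the vendor scores below are made up for illustration.

WEIGHTS = {
    "Overall functionality": 20,
    "End-user usability": 20,
    "Salesforce integration": 15,
    "Scalability": 15,
    "Technical & security": 15,
    "Roadmap & future enhancements": 5,
    "Support": 10,
}

def weighted_total(scores: dict[str, int]) -> int:
    """Multiply each category score (0/4/7/10) by its weight and sum.

    With weights summing to 100, the maximum possible total is 1000.
    """
    assert sum(WEIGHTS.values()) == 100, "category weights must sum to 100"
    return sum(WEIGHTS[cat] * scores[cat] for cat in WEIGHTS)

# Hypothetical scores for one vendor, using the 0/4/7/10 scale:
vendor_a = {
    "Overall functionality": 7,
    "End-user usability": 10,
    "Salesforce integration": 7,
    "Scalability": 4,
    "Technical & security": 7,
    "Roadmap & future enhancements": 4,
    "Support": 10,
}

print(weighted_total(vendor_a))  # 730 out of a possible 1000
```

Running the same function over each shortlisted app gives you directly comparable totals, and the per-category products show where one vendor pulls ahead of another.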
To be continued tomorrow…