Measuring Accountability

Measure the “fuzzy” with content analysis

We’ll hear about the unreliability of Yelp reviews, yet we still use them to decide whether to work with a specific company or go to a certain restaurant. Before we buy a car, we may look at Consumer Reports reviews to get an idea of what a car is like. Before we decide to purchase an IT system for our companies, we may read Gartner, Forrester, and IDC reviews to see how they rate the product compared to others or how other people who used the product liked it for their organization. Why? Because we value the experience that others have with a company, a product, or a service, and we leverage their experience to determine if a company met expectations or delivered on the claims it made. Essentially, we read reviews to determine if a company is accountable. Companies know that customers value reviews for this reason, and that’s why they value them too. But companies also understand that not all reviews on all sites are equal. And we are going to talk about that in this video today.

We like to believe that you can’t measure the fuzzier, personal side of business, but that’s not really true. There are always ways to measure what seems to be unmeasurable, like business relationships. To measure anything:

  • First, determine what success means to your organization.
  • Then, discover and identify what’s tangible, factual, and measurable in that fuzzy zone.
  • Next, determine how those aspects relate to your definition of success.
  • Ultimately, identify which information from this tracking and analysis can be used to show that you achieved your goal.

I often work on optimizing transaction and lead gen flows, but that work doesn’t always address the larger, more strategic work that makes a sale in the first place: creating great customer relationships. It is the customer relationships, the relationships between people or between people and a company, that ultimately build trust. And trust builds over time and improves customer conversations, which result in increased sales or revenue. By measuring specific qualities of a customer relationship, we can use those insights to create better customer experiences and improve transactional and communication strategies that will increase trust and, ultimately, increase revenue over time.

So how can you measure relationship success in your company? As mentioned in other videos, you can use a few different categories of measurements: engagement, loyalty, brand, and accountability. Within each category, there are multiple indicators and measurements of success. For this video we are going to focus on accountability and the conversations that support it—or reviews. To me, business happens during conversations, so conversations build business relationships which eventually lead to revenue. If we consider that conversations, especially digital conversations, are nothing more than content and we know that content can be tracked, categorized and measured—we can measure the quality of a relationship using content analysis. So how do we do this?

  • First, evaluate and score the review content by mapping it to the company’s claims and values.
  • Second, measure the integrity of the author, whether an employee, customer, or other stakeholder, and use that value to weight the score.
  • Third, measure the integrity of the publishing organization and also use that value to weight the score.

This approach can be leveraged not just to measure accountability through reviews, but also through awards, certifications, and other content; with different attributes, it extends to other categories like branding, engagement, and loyalty. More on those later.

 

Step 1: Get Started Measuring Reviews

To score whether a customer believes a company is delivering on its agreements:

  • First, list all company agreements. There will be explicit and implicit agreements made at the company and product level that include company values, product values, and product benefits.
  • Once those are listed, evaluate each customer review to determine if it includes items that correspond to the list.
  • If the customer raises these items on their own in the review, then your company clearly delivered on them in the customer’s eyes.
  • If not, there may be a disconnect that you may want to investigate.

Maybe the customer’s value system is different from yours. Maybe the customer purchased your company’s product for a different reason and doesn’t view the solution the same way your company does. This process will help highlight that.

The sentiment of the review matters. If a feature is listed positively it gets a positive value; if it is mentioned in a negative light, it gets a negative value. Needless to say, the more detailed the review, the more points you can earn with a positive review or lose with a negative review.
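To make this concrete, here’s a minimal sketch in Python. The claims and labeled mentions are hypothetical; in practice, you (or an analyst) would read each review and label the sentiment of every mention.

```python
# A minimal sketch of scoring one review against a claims list.
# The claims and labeled mentions are hypothetical examples.

claims = ["fast setup", "responsive support", "transparent pricing"]

# Each entry is (claim mentioned in the review, sentiment of the mention):
# +1 for a positive mention, -1 for a negative one.
labeled_mentions = [
    ("fast setup", +1),          # e.g., "we were live in a day"
    ("responsive support", -1),  # e.g., "tickets sat for a week"
]

raw_score = sum(sentiment for claim, sentiment in labeled_mentions
                if claim in claims)
print(raw_score)  # +1 - 1 = 0 for this review
```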

You may want to record what is not included on your list to see if there are trends among customers who have a different view of your solution. It may provide insights about your company and brand.

This approach can also help you see trends regarding which of your values and benefits reviewers care about enough to mention. Did they focus on the one feature that you speak about most? Or something else? That knowledge can be helpful for your business because it indicates not just what customers remember, but what they found most useful, which is usually the most memorable. You can also correlate this with revenue to determine which value or benefit influenced the sale the most. What do people remember most about your product? That memorable item will keep them coming back.

Why am I mentioning memorable so often? This gets to the peak-end rule, which claims that people remember the most extreme moments and the ending of an experience. Most aspects of an experience aren’t particularly memorable; it’s rare to experience something extreme. Typically, extreme experiences center around problems and challenges; we often don’t associate extreme experiences with something positive, unless it is extreme winnings or a prize of some sort. So, customers remember very little about experiences except the extreme aspects and the end. If a customer remembers a feature of your product or service, there is something about it that stood out enough to be mentioned. This is why memorable features are noted and scored. They stood out the most.

You can also apply this approach to third-party reviews. You can determine if the reviewers believe that you delivered your benefits. What do analyst firms and publications find memorable about your product? What matters to them? That’s great information to know.

After you determine how people describe your company and product, you need to determine the integrity of the author, the organization the author is affiliated with, and the organization publishing the review. These elements will weight the score of the review. So, let’s take a moment to discuss integrity and why it is important for determining accountability.

Why Integrity?

Integrity is formally defined in the Oxford English Dictionary as the condition of having no part or element taken away or wanting; undivided or unbroken state; material wholeness, completeness, entirety. It also means the condition of not being marred or violated; unimpaired or uncorrupted condition; original perfect state; soundness. It’s an interesting concept. In society, we like to equate integrity with purity, an idealized perfect form, being admirable and respectable. We value it so much that we have created integrity tests for employees. Integrity tests seem to work because they measure and predict desirable behavior in the workplace and support values like loyalty, views of ownership, and reliability. Then again, even experts say it is unclear what exactly an integrity test measures. Ironically, researchers don’t believe that it is really integrity. It is something else, but they aren’t sure what exactly it is.

Further, after some investigation into integrity tests, I started to realize that not only is honesty not connected to integrity, there really is no such thing as honesty for a person or company. Truth is relative, and your perspective of a situation may include facts, but could represent one of many possible truths. From the research paper "What Constructs Underlie Measures of Honesty or Integrity?" by Kevin Murphy, there is a great quote about integrity tests:

First, it is reasonable to infer that these tests do not measure honesty. People who receive favorable scores on integrity tests may be less prone to engage in dishonest behavior, or may be willing to engage in this behavior in a more narrow range of situations than people with less favorable scores, but there is little to be gained by thinking of these as "honesty" tests. These tests cover a wider range of behaviors and attitudes than would be expected in a measure of honesty (which would logically focus on truthfulness, but not necessarily on impulsiveness, thrill-seeking, likelihood of goldbricking or some other form of counterproductive behavior, etc.).

So, honesty is a tricky subject, difficult to track and measure, and something you may want to stay away from when it comes to business. Further in the same paper: 

Although the terms honesty, integrity and dependability are often used interchangeably in this literature, a useful distinction should be drawn between them. Honesty refers to a particular respect for truthfulness, whereas integrity and dependability imply slightly broader conceptions, including a willingness to comply with rules, internalized values, norms and expectations.

And it’s this second definition—rules, values, and expectations—that we want to measure. I’m less concerned about a review’s or publication’s truthfulness or honesty than about other values or characteristics that I’ll list shortly, especially if there is no real way to measure truthfulness. A review could be a lie, but if the publication has little integrity, does that matter? Would any reader find it credible anyway? Or if a review points out that a company lied about something but, in the end, the company was accountable and delivered on its promises, does the lie matter? That may be a philosophical question, but honestly, I’m not sure the lie matters. Because truth is based on perception and perception is debatable, acting accountably and demonstrating traits that support integrity is a more concrete approach for measurement. It is based on action rather than perception of words and meaning. If a company doesn’t deliver on its promise from a customer’s perspective, does it really matter if the company lied about it or there was a misunderstanding about expectations? The bottom line is that the company didn’t deliver. I suspect that to a customer, rectifying the problem is a better measurement of a company’s integrity long-term and a better indicator of its intention to build trustworthy customer relationships.

So why not use reputation metrics here? Because here we are less concerned about the reputation of a company or a person and its perception in the world. Instead, we are tracking attributes and actions that support the values of a publication and its review authors. There may be one or two reputation-related metrics, such as those covering the types of decisions a publication is used for, but most are not.

Qualities of Integrity 

With that said, the characteristics and attributes that I think are important to support integrity are:

First, transparency, or as defined in the Oxford English Dictionary, the quality or condition of being transparent; frank, open, candid, ingenuous.

  • A publishing organization may include reviews written by anonymous members. If you don’t know who is writing the review, you have to wonder if the author was really a customer.
  • Are reviews written by users who have accounts? An account adds credibility because, although a user may be anonymous, someone needs to own the account. But you could argue that the account could be a bot.
  • This leads to the next question: can you validate that the author is a customer? That’s key to know when evaluating a review. If the reviewer is actually a customer, then the review is valid; if not, it’s unclear whether it is. In that case, the reviewer would need to list his or her employer to prove it for a B2B sale, or include some other validating piece of data for a consumer sale, either a photo or something else. And that may not be possible for various reasons.
  • Is the publication open about its relationships with vendors? There may be conflicts of interest, and a publication may use a pay-to-play model for elevating content. That may bias reviews, or their placement on the page, for or against a company.
  • Other factors to consider for transparency could include how long the publication has been in business, if it is managed by an individual or team, and its distribution and readership.

 

Second, reliability, defined as: that may be relied on; able to be trusted; in which reliance or confidence may be placed; trustworthy, safe, sure. The U.S. definition: of a product, service, etc.: consistently good in quality or performance; dependable.

Determining reliability could be based on:

  • Understanding the type of organization that is publishing the materials—if it is a publication, a not-for-profit, or a promotional site
  • Understanding if there are credible authors contributing reviews
  • Knowing that there is a full-time staff managing it
  • Seeing high traffic being driven to it through ads and other methods, and
  • Seeing regular postings.

And these are all signals that management takes the publication seriously.

Note that there are exceptions. A reliable publication like a blog may have no staff, but it may have regular postings and respected content written by a respected author who is a customer. Conversely, an unreliable publication may have high traffic and a staff, but post content that is frequently in error, written by anonymous authors, and not used to make key business decisions.

Reliability to build trust comes from a few factors:

  • Knowledgeable writers
  • A publication schedule
  • Transparency about the publication’s origin.

Further, how content is used helps determine reliability. And yes, reliability is related to reputation, so it is not entirely an accountability-type metric. Over time, you may want to find ways to shift measurement toward actions rather than perceptions.

Third, popular or useful is an indicator that needs to be used cautiously. Popular is defined as: of a belief, attitude, etc.: prevalent or current among the general public; generally accepted, commonly known. Useful: capable of being put to good use; suitable for use; advantageous, profitable, beneficial.

A publication’s popularity can be measured through site traffic relative to its length of time in existence. Popularity tells us about the organization’s brand awareness and suggests some level of usefulness or purpose in people’s lives, because they keep going to the site. If someone reads the content, it must be useful in some way. Even if the content is not accurate, authentic, or reliable, the reader gets some type of benefit from reading it.

So how do you use these attributes when scoring a review for integrity? Different traits and attributes of these values weight the integrity of the review. The higher the integrity of the author or the publishing organization, the closer the review score stays to its full value (or even exceeds it). A lower-integrity author or publication reduces the review’s score because the review has less integrity.

Measuring Integrity

Let’s review how you score everything.

First, start with review content on its own. As mentioned before:

  • First, list all values and benefits at the company and product level.
  • Give each value one point and add them up to get the total possible value a review can have.
  • Now score each piece of content against that list, checking whether the review mentions each value in some way, using positive or negative values depending on the sentiment. That will provide the raw review score.
  • Divide the raw score by the total possible value to get a percentage showing the relationship between the total possible score and the raw score (see the sketch below).
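Here is a minimal sketch of those steps, again with hypothetical claims and sentiments:

```python
# Raw score and percentage for one review (hypothetical values).
claims = ["fast setup", "responsive support", "transparent pricing",
          "easy integration"]
total_possible = len(claims)  # one point per listed value/benefit

# Signed mentions found in the review: +1 positive, -1 negative.
mentions = {"fast setup": +1, "responsive support": -1, "easy integration": +1}

raw_score = sum(s for claim, s in mentions.items() if claim in claims)
percentage = raw_score / total_possible

print(raw_score, f"{percentage:.0%}")  # 1 and 25% for this review
```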

Now weight the scores based on integrity, starting with authors. There are some suggested measures for authors below, but at a high level, determine your score based on:

  • If the author is anonymous or public
  • If the author is a customer
  • If they have a relationship with the publishing organization, such as being a member of some type.

Each attribute contributes to a weighting value, creating one adjustment to the total score for integrity.
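As a sketch of what that weighting could look like, here is one way to sum the sample author percentages from the tables at the end of this page; the function and its inputs are illustrative, not a fixed formula:

```python
# Hypothetical author-integrity adjustment, summing sample weights
# like those in the author table at the end of this page.
def author_adjustment(identity_known: bool, role: str,
                      validated_customer: bool) -> float:
    """Return a net percentage adjustment for one review's author."""
    role_weights = {
        "stakeholder": 0.10, "maintainer": 0.10,
        "user": 0.05, "influencer": 0.05,
        "experiencer": 0.05, "benefactor": 0.05,
        "buyer": 0.00, "unknown": 0.00, "employee": -0.10,
    }
    adjustment = 0.0 if identity_known else -0.10
    adjustment += role_weights.get(role, 0.0)
    adjustment += 0.0 if validated_customer else -0.10
    return adjustment

print(author_adjustment(True, "user", True))       # 0.05
print(author_adjustment(False, "unknown", False))  # -0.2
```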

Then you determine the adjustment for the organization publishing the review.

  • Are authors anonymous or known?
  • Are review authors customers and experts?
  • How do readers use the publication?
  • Is there staff to manage it?
  • Is the publication open about its affiliations or partnerships?
  • Is it independent or tied to a product?

Each attribute contributes to a weighting value, creating another integrity adjustment.
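A matching sketch for the publication side, with illustrative attributes and weights drawn from the sample publication table at the end of this page:

```python
# Hypothetical publication-integrity adjustment: sum the weights of
# the attributes this publication satisfies (illustrative values).
publication_attrs = {
    "15_plus_years_in_existence": 0.10,
    "maintained_by_team": 0.05,
    "covers_all_industry_products": 0.10,
    "vendors_are_advertisers": 0.00,
    "posts_daily": 0.07,
    "authors_visible": 0.10,
}
publication_adjustment = sum(publication_attrs.values())
print(round(publication_adjustment, 2))  # 0.42 for this publication
```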

Now use the publication and author adjustments to weight the scores to get a final score. This approach provides a bunch of data to correlate with revenue. So you get:

  • A total possible score for each review,
  • The actual (raw) score for each review,
  • A percentage score for each review,
  • A cumulative review score for the publication, both percentage and actual, and
  • A weighted score, based on author and/or publication integrity, for each review and for the entire publication.
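Putting the pieces together, here is one plausible way to combine the numbers, assuming each adjustment is applied on top of the review’s full (100%) value; all figures are hypothetical:

```python
# Combine the raw score with the author and publication adjustments.
raw_score = 4
total_possible = 6
author_adj = -0.10       # e.g., an anonymous author
publication_adj = 0.42   # e.g., a long-lived, transparent publication

weighted_score = raw_score * (1 + author_adj) * (1 + publication_adj)
percentage = raw_score / total_possible

print(round(weighted_score, 2), f"{percentage:.0%}")  # 5.11 and 67%
```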

So, a publication may have great reviews for your company, but little integrity or unknown authors. You can take that as you’d like. Or a publication could publish reviews that show low accountability for your company and product while having high integrity for authors and the publication, which means you have a lot of work to do to improve customer perception. Or you may find that a high-integrity publication has low-integrity authors, or vice versa, and your reviews are just okay.

Awards & Certification

Now, you could apply this methodology to awards. Awards are a content type that validates how a company delivers on its promises at the corporate and product level.

To get a score for this:

  • Map the qualities and traits of the award to the company and product values and benefits list to get a raw score (see the sketch after this list).
  • Then you can use the same approach for determining the integrity of the company offering the award. You may use slightly different attributes to measure, like transparency about who the judges are and who they represent, their qualifications, and how open the competition is, just to name a few.
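To make the mapping step concrete, here is a minimal sketch; the company values and award criteria are hypothetical:

```python
# Hypothetical mapping of an award's criteria to a company's values list.
company_values = {"fast setup", "responsive support", "transparent pricing"}
award_criteria = {"transparent pricing", "fast setup", "customer advocacy"}

raw_score = len(company_values & award_criteria)  # criteria you support
unmatched = award_criteria - company_values       # worth investigating

print(raw_score)  # 2
print(unmatched)  # {'customer advocacy'}
```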

You could also use this approach in a predictive way to determine which awards are best for your organization based on the values and benefits of past winners.

So, what can you discover from this exercise?

  • You can identify current best practices based on award values and better understand what’s being measured and valued in the industry.
  • You can determine criteria for winners.
  • You can provide your company’s teams insight into why you are or are not winning.
  • You can gain insight into what the industry, third parties and customers value, which may be consistent with what the award values.
  • You can clarify industry values beyond revenue.

Awards for a company or product should contribute to increased revenue and improve the pipeline. Within one to two months of winning, you can start to correlate the impact of the award on the pipeline and revenue. The company should see an improved reputation, and products should see an increase in sales. For team and individual awards, there is possibly no impact except in HR metrics, unless the award relates directly to the product and its revenue.

Certification supports a company’s accountability and provides an opportunity for a company to validate the values it shares with the certification organization. The certification organization can help hold a company accountable as third-party validation. Within one to two months, correlate the impact of the certification on revenue. There should be a bump. For team and individual certifications, there is possibly no impact except in HR metrics, unless the certification relates directly to the product and its revenue.

Conclusion

This general model of scoring content against company values or benefits and then weighting that score based on integrity or other values can help you understand how your company is delivering to customers, communicating its brand, establishing loyalty, or promoting engagement. Business is socializing with purpose. Conversations and content are tools that build business relationships that can become a sale. But revenue is only one measure of success. There are others. This is why the business relationship is key for the company. Without that relationship, revenue is hard to get. And measuring customer-generated content like reviews and comparing it to your company’s perception of itself can help you see if you are both aligned. If a customer believes your company is delivering on its promises, then it’s easier to have a stronger customer relationship because trust is being built through actions. Actions are stronger than words, easier to measure, and the clearest form of communication possible. Be accountable to your customers through your actions. It really is the best way to build trust and solid customer relationships.

 

1. Measuring Reviews

  • List all company and product values and product benefits.
  • Give each item one point and add the total. That's the total possible score for the review.
  • For each review, provide +1 or -1 if the content mentions a value or benefit. Positive values for positive sentiments; negative values for negative sentiments.
  • Add all items to get the total value for the individual review. That is your raw review score. It will be weighted with author and publication integrity to get other scores.
  • Create a percentage metric of the raw score divided by the total possible score for a review. That is an indicator of how closely your customers remember and value your claims.

Category | Scoring Logic | Notes

  • Supports company and product values
  • Supports product benefits

1 point for each value/claim supported

-1 point for value/claim that has a negative sentiment

Don't forget to note claims that appear in a review but are not on your list, for reference later. Do not include them in the scoring. You will use this information later to potentially update your messaging to be more aligned with how customers see your company and product.

2. Measuring Author Integrity

  • List the attributes of the author you want to track and measure. A sample list is below.
  • Score each review author according to this list.
  • You will get a percentage score to adjust your review. Then you can multiply that percentage by your raw review score and get a weighted score that accommodates author integrity.

Category | Scoring Logic | Notes

Author identity known

  • Yes = 0%
  • No/Anonymous = -10%

Transparency. Reliability.

If there is no author identity, then the review is not transparent, possibly inauthentic, and could present a fraudulent perspective.

Author’s role (type of user)

  • Stakeholder/approver = 10%
  • Maintainer/supporter = 10%
  • User = 5%
  • Influencer/champion = 5%
  • Experiencer = 5%
  • Benefactor = 5%
  • Buyer/approver = 0%
  • Unknown = 0%
  • Employee = -10%

Transparency. Reliability.

If the review or publication indicates the type of user the author represents, then it is a higher integrity review (more transparent and reliable for the customer's experience).

A different type of user/customer will have a different experience with the product. The more specific a review is about who used the product or service, the higher its integrity.

For the types of roles/users here, see the chart at the bottom of the page.

Validation for author's role (customer status) available

B2B Customer:

  • Yes = 0%
  • No/Unknown = -10%

B2C Customer:

  • Yes = 0%
  • No/Unknown = -10%

Transparency. Reliability.

If the author is transparent about the company he or she works for, or provides proof of actually owning the product, the review has more integrity and is more trustworthy.

If there is proof that the author interacted with the product or service, the review can be trusted.

Popularity of review

B2B Customer:

  • 1+ indicator of agreement, or comment with a positive sentiment = 10%
  • Unknown or no response = 0%
  • 1+ indicator of disagreement or negative sentiment to the review = -10%

Popular/Useful.

If there is a metric like usefulness or shares, or a way for readers to comment on the review in some way, track that.

A more popular review does indicate integrity, because a like indicates agreement with someone else's experience; a less popular review may indicate that the experience described isn't relevant or accurate.

3. Measuring Publication Integrity

  • Similar to measuring author integrity, we will be measuring the integrity of a publication.
  • Score each review publication according to this list.
  • You will get a percentage score to adjust your review. Then you can multiply that percentage by your raw review score and get a weighted score that accommodates publication integrity.

Category | Scoring Logic | Notes

Publication/organization time in existence

  • 15+ yrs = 10%
  • 10 - 15 yrs = 6%
  • 5-10 yrs = 4%
  • 3-5 yrs = 2%
  • 0-2 yrs = 0%

Transparency. Reliability.

The longer a site has been in operation, the more reliable it is as a source. It wasn't something new created for an unknown yet possibly specific purpose; it has been publishing review content for a while.

Traffic

Based on the size of the market you are reviewing and comparing, this could vary. Consumer markets may have broader/larger traffic ranges, while enterprise products could serve small niches (and draw smaller traffic numbers).

Popular/Useful.

If a site is popular, you could assume it is useful to its visitors in some way, satisfying reader needs.

Who maintains the site?

  • Company/professionals/team = 5%
  • Individual = 2%
  • Unknown = -5%

Transparency. Reliability. Popular/Useful.

If a company or team is maintaining the site, it is a signal that the site is taken very seriously as a business and there is an investment in it. It has commercial purposes and most likely some editorial standards.

If an individual manages the site, it may be a blog and therefore, a hobby or a job. But a single manager doesn't always have the resources to allow a site to grow into a larger publication.

If it is unknown how the site is managed, that could imply that its purpose is unclear or hidden.

Site content covers multiple products

  • All products in an industry = 10%
  • No product = 0%
  • 1-3 products (a select few industry products) = -5%
  • Single product = -10%

Transparency. Reliability. Popular/Useful.

If a publication focuses its attention on specific products, that communicates a type of alliance that may include payments for content creation, and possibly editorial bias. There may be a communication agenda behind sites that have limited product coverage.

Broad product coverage (and advertisement opportunities) implies greater objectivity and opportunity.

Relationship to vendors

  • Vendors are advertisers = 0%
  • Unknown = 0%
  • Vendors pay for content services = -10%
  • Publication clearly sponsored by vendor = -20%

Transparency. Reliability.

If a publication is open about its relationship with vendors (if they are advertisers or they sponsor content or there are other partnerships), then they are being transparent and building trust with readers.

There needs to be a symbiotic relationship between product vendors and publications. Vendors can provide revenue and content for readers. Publications provide a platform for their voice. However, there needs to be a balance to maintain some level of objectivity. Being transparent provides that balance and builds credibility.

Timing of posts

  • Multiple times each day = 10%
  • Daily = 7%
  • 2-4 Days/week = 5%
  • Weekly = 3%
  • Monthly = 1%
  • No schedule/sporadic = 0%

Transparency. Reliability. Popular/Useful.

Timing of posts communicates the publication's commitment to providing content to its readers. It also communicates that the owner/manager wants to engage with the audience by providing this information regularly. The more consistently a publication publishes content, the more trust it builds and the more its traffic increases.

Author visible?

  • Visible = +10%
  • Definite user = +5%
  • Anonymous = 0%

Transparency. Reliability.

The author's name being visible provides accountability for the person who created the content and supports evidence that this person is indeed a customer.

The full name doesn't need to be available. However, there needs to be some way to show that the review author is indeed a person.

Regarding articles, if no author is listed, or the author is the editor or someone else, the content may be suspect for accuracy and credibility. It could be created by a corporation, persuading you toward a perspective that leans to its point of view.

Multiple authors/voices

  • Anonymous voice = -5%
  • Single voice (blog) = 0%
  • 2-5 authors = 2%
  • 5+ authors = 5%

Transparency. Reliability. Popular/Useful.

A single voice presents a single perspective. Multiple voices present multiple perspectives. More perspectives share more experiences and allow the reader to understand if the product/service is consistent in delivering on its commitments.

APPLIES ONLY TO ARTICLES

Articles have sources/links

  • Yes – many = 5%
  • Yes – 1-3 = 2%
  • No = 0%

Transparency. Reliability.

If articles have links, that provides credibility for their work. Research is necessary to support arguments. Without it, you have opinion pieces without backup knowledge for how someone came to a certain conclusion.

Authors are qualified/experts

  • Author credentials are available
  • Unknown = 0%
  • Authors are not credible to make this analysis = -5%

Transparency. Reliability.

TBD. At first, this will be based on the scorer's perspective. Over time, there will be criteria established to measure the credibility of the author for an article or review to determine if he or she is experienced to present such information. Credentials can include education, jobs/positions, association affiliations, and being published in certain publications.

How is content used by readers?

  • Helps readers make purchase decisions = 10%
  • Provides latest news/updates = 5%
  • Provides industry insights/thought leadership (opinions based on research) = 0%
  • Provides opinions (not based on research) = -5%

Transparency. Reliability.

How content is used provides insights into credibility and reliability of the publication and editorial team. At first, this will be based on the experience of the people scoring the publication. Over time, this will be tracked and measured through a publication's actions.

Type of site

  • Analyst firm = 5%
  • Journalists/Press = 5%
  • Review/content site = 2%
  • Personal blog = 0%
  • Company site or blog = -5%

Transparency. Reliability.

Knowing the type of site that is providing the content adds credibility through ethical standards around content validation and authenticity.

The more that a site resembles and functions as a publication, the more credibility it has. The more closely it resembles a forum driven by opinions, the less credible it is (and less managed).

Regional reach of publication/organization

  • International = 5%
  • National = 3%
  • State = 2%
  • Local = 0%

Popular/Useful.

This metric addresses the audience reach of the publication and the review.

Global reach indicates greater impact than local reach. More people are able to access the content, making it more widely available and possibly more useful. So a review carries greater weight in a global publication.

Ads are on the site

  • Professional ads = 5%
  • None = 0%
  • Google AdWords = -2%
  • Authors are advertisers = -10%

Reliability.

Ads demonstrate who sponsors a publication. This provides insights into their alliances.

If authors are advertisers, that may imply a type of pay-to-play model (not always true).

If the ads are not professional, then the publication may not be professional or authentic. It may be created only for revenue generation or as a hobby (and not be credible).

Ads to access the site

  • Professional ads = 5%
  • No advertising = 0%

Reliability. Popular/Useful.

The presence of ads driving traffic to a content site communicates the organization's commitment to building readership. The team takes pride in sharing it with new readers.

Relevancy

I didn't mention this in the video, but the age of a review matters. A review that is a month old reflects a customer's current experience. A review that is 7 years old may not even represent the experience of the current product. Reviews lose relevancy over time. Here's a sample approach for scoring reviews as they age.

Category | Scoring Logic | Notes

Relevancy

  • Year 1=100% value
  • Year 2=80% value
  • Year 3=60% value
  • Year 4=40% value
  • Year 5=20% value
  • Year 6=0% value

The value of the review content drops 20% each year until after 6 years it is irrelevant (zero value).
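As a sketch, that decay is a simple linear ramp that you could fold into the weighting as one more multiplier:

```python
# Relevancy decay from the table above: 100% of value in year 1,
# dropping 20 percentage points per year until year 6 (zero value).
def relevancy_weight(review_age_years: int) -> float:
    """Year 1 -> 1.0, year 2 -> 0.8, ..., year 6 and later -> 0.0."""
    return max(0.0, 1.0 - 0.20 * (review_age_years - 1))

for year in range(1, 8):
    print(year, round(relevancy_weight(year), 2))
```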

Types of Relationships Companies Can Have with Customers

From Revenue or Relationships, Win Both, p. 352-354:

"We often believe within companies that we need to increase the number of customers we contact, but maybe we need to increase the types of relationships a company has instead. 

The following chart identifies a few types of relationships a company could have with different people in the purchase process and their roles. Some roles could be shared by the same individual, but that’s not necessarily the case. Each of these individuals needs to experience the journey and product in some way, but that doesn’t always mean that they need to complete the purchase process. 

When you are determining how your company could develop a relationship with a prospective customer, remember what you want the outcome of the interaction to be and the type of individual who will be included in the process. This may be a factor in how you determine which type of activities to include. For example, an influencer or champion may need to be more engaged with your company than someone who simply approves a purchase. Most likely, a stakeholder in the decision or an approver will accept the advice of an internal influencer rather than the company salesperson because they will trust them more and not feel like the influencer is trying to “sell” them. A product user may want to know how this product will solve their problem, which makes a demo or in-person store visit a key element to the experience. A benefactor will care more about how the end result of the product will improve their life, which makes the experience of using the product less important, but customer stories vital for them to understand. 

By considering who you are communicating with and what you want to achieve through that communication, you can design experiences that build memories with your customers and help them feel closer to your company. Essentially, you are designing the right interactions to be experienced at the right time."

Buy Product

  • Stakeholder/approver: Cares that the general problem is being addressed; takes advice from others who will use the solution (if they don’t use it themselves).
  • Buyer/approver: May not use the product, but is a key purchase decision-maker.
  • Influencer/champion: Understands the solution the company offers and the problem it solves. May be users, experiencers, or benefactors.
  • Unidentified targets: Those who will be using the product and have no voice in making the decision. These individuals and their needs will be discovered during the solution selection process.

Use Product

  • User: Actively engages with the product; wants to be sure it solves their problem.
  • Experiencer: Experiences the product (possibly a light user) but does not actively use it.
  • Benefactor: Enjoys the results of the product.
  • Unidentified targets: Those who won’t be using the product, but will be experiencing benefits from it. These individuals and their needs will be discovered during the solution selection process.

Support Product

  • Maintainer/supporter: Does the maintenance work.
  • Benefactor: Experiences results but doesn’t do the maintenance work for the product or solution.
  • Unidentified targets: Those who will need support but will not be identified at the start of the process. These individuals and their needs will be discovered during the solution selection process.