Selling multiple design solutions

Hypotheses formed while conceptualizing a solution to a problem are guided by its constraints, by the interpretation of research conducted against assigned metrics, and ultimately by embracing uncertainty. However confident we may be that the solution we envision is valid, our process still begins by evaluating multiple design solutions in the early stages of a project.

Then, at one point or another, a decision is made: a particular approach is selected and implemented, leaving the less fortunate ideas behind in the discovery stage, never to be seen again. How sad. I believe they deserve better.

I will examine the moment when one variation is judged more likely to succeed than another, identify why and how that decision is made, and argue that sometimes the most viable option is not to select any single concept, but to proceed with two or more candidates.

Once we establish that having multiple design solutions is a good thing, and only then, I will examine how we, the solution providers, may capitalize on it, and how the users, and therefore the stakeholders, will benefit.

Is there a need for having multiple design solutions?

As I am sure you will remember from maths, a problem or an equation may have multiple solutions:

x={0, 1}

where the values in the brackets represent the solution set: all values which balance the equation. If we said that x equals 1, we would be correct, but our solution would be incomplete.

Alas, maths is maths and design is design. Or at least that’s what I was told. So let’s examine a perhaps more relevant example, keeping the same logic in perspective.

Say you are tasked with designing a web page for a site which has an established base of returning visitors. It doesn't matter what the page is. All we are concerned with is that it will be viewed by returning as well as new visitors. We are also able to distinguish between the two audiences, either by setting a cookie or by passing a customer=true parameter in the URL (e.g. from an email we sent to them).

This gives us an opportunity to target the different needs each audience may have by conditioning the solution to behave accordingly. On our page we might swap the sign-up form shown to new visitors for a login form for existing users, or emphasize one over the other, making the solution more effective.
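As a concrete sketch, the audience split described above might look like this in Python. The cookie name, the parameter handling, and the function names are illustrative assumptions, not an existing API; only the customer=true URL parameter comes from the text.

```python
from urllib.parse import urlparse, parse_qs

def is_returning_visitor(cookies: dict, url: str) -> bool:
    """Detect a returning visitor via a cookie or a customer=true URL parameter.
    The cookie name is a hypothetical example."""
    if cookies.get("returning_visitor") == "true":
        return True
    params = parse_qs(urlparse(url).query)
    return params.get("customer", ["false"])[0] == "true"

def primary_form(cookies: dict, url: str) -> str:
    """Emphasize the login form for returning visitors, sign-up for new ones."""
    return "login" if is_returning_visitor(cookies, url) else "signup"

print(primary_form({}, "https://example.com/page?customer=true"))  # login
print(primary_form({}, "https://example.com/page"))                # signup
```

The same conditional structure extends naturally to the navigation and content decisions discussed next: detect the audience once, then branch the presentation.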

The same idea may be applied to other architecture, design and experience decisions, such as the navigational model (if we know the user's gender, entry path, or search keywords, we can prioritize the women's or men's options accordingly), availability of content (paid memberships, regional and other restrictions) and so on.

So the first point I am hoping to make, without going too deep into it, is that digital and interactive solutions are inherently dynamic: the varying interactions users have with them demand it.

Targeting different applications by conditioning the solution allows us to maximize its effectiveness, whereas opting for a static, one-size-fits-all approach is a missed opportunity.

Candidate success metrics

When a need or a problem is identified, a solution may be generalized as an idea, but it is not yet thoroughly examined and validated. We need to support it with evidence that it will be a valid solution, and we do so by assigning performance and success metrics to the potential candidates.

Solution providers (architects, designers, developers etc.) sit down with stakeholders to develop and agree upon the values used in appraising the ideas generated, resulting in a decision matrix, a simplified version of which may look like this:

[Figure: multiple design solutions decision matrix]


Each metric is assigned a weight (a priority level, if you will).

e.g. Performance is weighted at 5 points and Visual Appeal at 3 (… cost to build, potential return etc. would follow).

Candidate A relies on heavy imagery for visual appeal (scoring a 9) but consequently decreases performance (6), whereas candidate B uses little to no imagery, yielding high performance (9) but a slightly diminished visual experience (7). After evaluating each solution we end up with the totals:

A = (5*6)+(3*9) = 57
B = (5*9)+(3*7) = 66

and there you have it, a clear winner: candidate B. (Any further metrics would simply extend each sum.)

However, while appraising the variations we implicitly assume that the weights we assigned to each metric are static. That is almost never the case.

Variable interpretation

Different applications of the solution, as well as the different interactions users will have with it, each require their own set of values assigned to each metric, whether the users are aware of them or not. Aligning the solution with these perceived values is crucial to delivering successful, which is to say relevant and engaging, experiences.

So then how do we make a decision… or do we even need to?

Making the decision

Depending on the particular structure of your RACI (or similar) delineation of project responsibilities and decision-making processes, the following may vary.

Before I realized how great an opportunity, and how important a step in the creation process, conceptualization is, whenever I was assigned multiple concepts I would have a clear personal favorite and devote most of my effort to it, hoping the stakeholders would agree, which was not always the case. In other words, I was creating multiple design solutions just because I was asked to; I was producing multiple solutions even though I saw a need for only one.

By providing multiple design solutions we are de facto saying, to a certain extent, that we do not know what will work. And there is nothing wrong with that. Hopefully we believe that all the concepts will work, but we are not sure which will perform strongest or be received best in a particular context.
Nor will we ever find out if we select only one and proceed with it, which, as you know, is what usually happens.

Stakeholder role in “evaluating” (selecting) concepts

An almost certain blunder is to expect stakeholders to choose one concept over another; an unforgivable one is to ask them to.

Even though the results of the decisions are not known until the solution is deployed, logistics usually dictate that only one concept may be actualized, and when the decisions made prove to be wrong, there is neither time nor budget to correct the solution. So before making such important decisions, let's pause and ask ourselves why we are required to come up with and present variations of a solution, be it in design, architecture or individual interactions.
Is it to demonstrate to the stakeholders that we have in fact considered multiple ideas? If so, that's a complete waste of time. Instead, just say: "after considering the potential impacts of… this is our recommendation." Stakeholders must trust your expertise and decision-making skills; that's your job.

Or do you perhaps wish to involve the stakeholders in the design process by getting their subject-matter input? Rapid prototyping and iterative development provide a way of doing so without creating wasteful artifacts.

The only justification I can come up with is that we (the solution providers) have tried, and tried again, to choose between the options for a creative treatment, interface element, interaction pattern or whatever the case may be, and have found that for different reasons (which we must be able to justify at a moment's notice) they all weigh approximately the same. There is no way to figure out which one will prevail in which circumstance until we validate our causal assumptions.

Embracing the uncertainties

You’ve done your research, and you can use it to support the soundness of the proposed solution. But as we’ve identified, there may be different use cases or applications of the solution, each carrying considerations which impact performance and experience quality in relation to the variable factors.

The recommended format for presenting the variations always emphasizes the final outcome rather than the immediate impact, through “and also” statements:

Based on X, we are proposing that the solution will achieve the desired outcomes by doing A; and also, based on Y, those same outcomes may be significantly improved in a different context (application, engagement etc.) by doing B instead of A.

…thereby validating each variation. Or instead:

We have determined that users will perceive the solution as X if we position it as A, and also as Y if we instead position it as B.

…securing the need to find out which assumption will prevail.

The key is to communicate certainty in the validity (or the need to establish the validity) of the findings. Never, and I can’t stress this enough, never separate the variations through contradictory statements such as “if A then X, if B then Y” or “X is for A, however, B is for Y”, as that proposes a false choice. Stakeholders may say: well, ok, then let’s do A for X; yet the context for B will still occur, and Y will be nowhere to be found.



Each disagreement is also an opportunity

All arguments to the contrary aside, the stakeholders may still not be convinced that the solutions you are proposing are valid, or may feel there is a way to unify them; what’s more, they have ideas of their own which they perceive as more viable and efficient. This is where you make your stand.

Just as stakeholders challenge your ideas, you challenge theirs and thoroughly critique them, because, after all, the decisions being made will impact the performance and the experience, which are your responsibility. After you’ve pointed out that you have (hopefully) already explored that idea, listed all the reasons why it’s not as great as initially perceived, and the awkward silence subsides, one of two things may happen.

Stakeholders will understand your points, yield, and appreciate the effort made in being genuinely concerned with the outcomes. Or they will propose counter-arguments identifying still more considerations.

The latter is preferred, as it will allow for debate and prime expectations for continual solution validation and optimization efforts, which is just one of the ways to gain their support for proceeding with multiple candidates.

Selling through justification

One suitable approach is to explore and validate the solution variations with target audiences independently of the stakeholders, through a combination of research and rapid prototyping, and then report the findings to the stakeholders for appraisal as subject-matter experts. However, that is very unlikely to happen if you haven’t previously allocated the effort.

Granted, once you understand how valuable the findings and the feedback are, how much headache they save down the road, and how many sleepless nights they turn into restful slumbers, you will regret not doing it sooner. Pinky swear.

Having foresight can’t hurt either. Stakeholders shielding you from solution data, business performance indicators and the like is a sad thing; provided, that is, you did ask for everything that would help you make the most informed decisions. Otherwise the stakeholders might simply not realize it’s of any use to you.

Knowing the value of the solution becomes unquestionably useful, one may even argue essential, in formulating and appraising multiple design solutions, as it enables a cost-benefit as well as a risk analysis.

Which is to say, you must always include considerations for how much each effort will cost, how it will be maintained, and what the potential yield will be.

You might be thinking to yourself: if I go to the stakeholders and say I need to do extra work to explore different approaches, that it will take additional time to build them out, and that on top of it all I might be wrong… it doesn’t sound that good.

Here’s how you make it sound good. Because it really is.

Don’t just assume, validate

I find the word testing (even though I use it often, because it has become the adopted terminology) somewhat ambiguous and lacking in optimism. As in, to test someone or something. You test to ensure satisfactory levels rather than to improve. Testing also carries, at least for me, a “let’s see what happens” connotation.

Experiment is slightly better, because it assumes some degree of rigor and process and, most importantly, longevity: if the experiment negates the hypothesis, there is an expectation that an adjusted approach will emerge.

My favorite, however, is validation. Very much to the point. It stipulates that something has been proposed and is now seeking to have its credence validated. Perfect for what we are trying to communicate.

The proposed variations of the solution are justified in that we source our reasoning, as well as the potential outcomes of each, from research. It is like formulating a thesis: it may turn out to be negative, in which case you have disproven it. Granted, being able to disprove something does not necessarily mean it was worth proposing in the first place.

Thus it is my recommendation to form the individual solution variation propositions as:

In order to maximize the set outcome targets, we must condition the solution to respond to the variable needs it has to meet, by first validating and then further building on our assumptions.

Lessons learned and profiling

Enterprise efforts will significantly benefit from a running best-practices list identifying which variation of the solution, or which offered interaction, works best in a particular context or for a specific audience segment, profile etc. This streamlines the learning curve of future iterations, as well as the development of branched or entirely new solutions.

Developing a library of passive and active responses to the variations will also help in gaining an advantage over competing solutions, in that it will support future decisions and make the development of new solutions more purposeful.

These and many other arguments may be made to justify the cost of finding and validating the solution sets to proceed with.

Only one is better than one out of many
It is worth noting that not all problems have more than one solution; some are very straightforward and their outcomes may be universally satisfied.

Oddly enough, even when that is the case (hah), stakeholders get presented with multiple design solutions to choose from. To reiterate: never, ever do that.
It is perfectly fine to conceptualize different ideas, even test them, but if you commit to building out concepts knowing that, say, two out of three will end up in the trash can, you are just wasting time, both the stakeholders’ and your own.

Please figure out, as we so delicately put it, what may work before you design or build it, by conducting research, asking for early and frequent feedback, and iterating accordingly.

Everybody wins!

Having and deploying multiple solutions, either in parallel or through iteration, gives way to statistically significant and audience-accepted, which is to say proven, experiences.
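To make “statistically significant” concrete: once two variants are deployed in parallel, their conversion rates can be compared with a standard two-proportion z-test. The counts below are hypothetical, and only Python’s standard library is used; this is a sketch of the idea, not a substitute for a proper experimentation platform.

```python
from math import sqrt, erf

def conversion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: did variant B convert significantly
    differently from variant A? Returns (z, two_sided_p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF, Phi(x) = 0.5*(1 + erf(x/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 120/2400 conversions for A vs 156/2400 for B.
z, p = conversion_z_test(120, 2400, 156, 2400)
print(round(z, 2), round(p, 3))  # roughly z = 2.23, p = 0.026
```

With a p-value under the conventional 0.05 threshold, variant B’s lift would be treated as real rather than noise, which is the evidence this section argues we should be gathering before retiring a candidate.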

All credit is due to sacrificing a bit of the arrogance of always knowing what will work, and to committing to sustainable outcomes rather than immediate results. The generated impact compounds, and it instigates a champion/challenger culture for all future solution iterations and related efforts.

You, as a solution provider, by relinquishing control and justifying the need to validate your assumptions, benefit directly from the additional work, and also by ensuring that the end product stays aligned with continually changing demands, securing future iteration work.

Users will enjoy the relevancy which conditioning the solution provides, and will also have the pleasure of building a relationship with a solution that continually exceeds their effectiveness expectations, resulting in a satisfied and growing base of returning users.