What Organizations Don't Understand About Theories
Nate: Hey Trey.
Trey: Hey Nate.
Nate: How's it going?
Trey: Going well, man, how about yourself?
Nate: Pretty good, thanks for asking. We want to welcome everyone out to the Craftnotes podcast, and just to give you a little bit of context since this is our first episode: the Craftnotes podcast is brought to you by Motif Research, and Motif Research is a sophisticated research service that leverages a suite of products to help you facilitate and expedite your qualitative research. And then we pack all that into a powerful system of record. So we want to talk a little bit about what the Craftnotes podcast is going to be and what it's going to be about. We have seen a huge need for content around the world of qualitative research and how to integrate the activities and skill sets that are needed to do really effective qualitative research inside of an organization. So the Craftnotes podcast will dive into some of the field guides that we create within Craftnotes, which you can find on the Craftnotes blog, and we'll just try to talk in a more elaborate way and in a discussion format that we're not able to do in a field guide. To start out, we want to talk a little bit about something Trey and I have talked about quite a bit over the last couple of years, which is trying to understand the value of research in the product development process, which we don't think actually has a significant amount of buy-in yet across organizations and with key stakeholders. So, to really understand why you would actually do some sort of research when you're building a product: I think most of the time when that's misunderstood, or that's not being done, it's usually because there's a fundamental misunderstanding of how value is created inside of organizations. Do you think, Trey, that most organizations think about, can articulate, and are pretty sophisticated about their own value chain? About how value gets delivered to an end customer?
Trey: I would say that value chains in larger organizations are definitely difficult, just because of how complex they really get. And I think when people take a step back and try to simplify a business model, or what a business is actually doing, then it's easier to look at what the value chain is. Right? So even if you look at simple software products, or even a food delivery service, you can say, "Hey, here's the raw resources. This is how we acquire them. This is how we process them. This is the design that goes in place and how it gets manufactured. And this is how it gets sold and delivered." You can see that linear trajectory...
Trey: ...for that value, from where it begins to where it ends up with a customer. But then when you really blow it up, like we're talking about here, there's research in there as well. There's design in there as well. There are things a company does to provide value to their employees, so their employees can continue to provide value in the value chain to the customer. And sometimes value chains can become cyclical, when you look at the customers also providing value back to the company, whether in terms of positive word of mouth or reviews or referrals or things like that. And so I really haven't seen a lot of companies, at least at a grand level, be able to articulate that well. I think that's a big struggle, because a lot of companies don't see all the things that are going on in there, especially where the bottlenecks or the gaps are. That's a huge thing that happens. They don't always see how everything is coming down the chain, how it's all fitting together, and where those losses are. And I think research is one of those things that is definitely a gap, where we just kind of jump over this part of the value chain and assume things keep going, when there's definitely something that needs to be happening.
Nate: Absolutely. I think most companies that are able to articulate what their value chain is are usually physical product companies. So if you're an Apple...
Nate: You can usually articulate what your value chain is, usually because there are costs associated with it, and other vendors or companies, like Foxconn for example, are a massive part of Apple's value chain. So of course you're going to be able to understand where that fits in, how it fits in, and how integral it is. But within a software company, especially with digital products, I really don't think there's a lot of thought put into how value is being created for the end customer and, to your point, what the cyclical aspects are that are taking place internally. I think a lot of times marketing organizations within digital product organizations are a little more sophisticated about the value chain because it's essential to the go-to-market. So trying to think about how you start to distribute and attract customers, I think that's a little better understood and thought about. But really the internal value chain, thinking about the activities and processes that your solution or offering goes through before it actually ends up in the hands of a customer, I don't think most organizations think about it often, if at all. So I think that's why it would be good for us to chat a little bit about what that looks like and how that actually starts to help organizations understand the value of research in that process. I think the number one thing here is that when you look at a really simple value proposition, it's usually pretty simple to understand the value chain. So let's take the example that Trey and I were using earlier today, which was a kitchen table. If you take a kitchen table, right, you can pretty well articulate the steps it's going to go through from the time that it's a tree to the time that you're putting it on a show floor or something, right? It's pretty simple. I don't think it takes a lot of time and energy to understand what goes into that value chain.
That being said, when you try to alter or change or add some sort of innovative nuance to a value chain, there's some inherent complexity that comes in that makes it so that the practitioner, the person creating the product, has to articulate what the value proposition should now be. So, for instance, add some complexity into that same kitchen table example: what happens when you need to furnish your entire apartment in a day on a budget? Which is really Ikea's value proposition.
Nate: What context does a business need to understand about that goal so that, whatever they provide, whatever solution they build, whatever services they include, they can actually articulate what the value proposition should be? So they can actually understand how the value proposition should be created, the steps they should go through, the activities that need to take place, etc. I would love some of your thoughts there on what's missed and what some of that complexity looks like with the needed context.
Trey: I like the example you use with Ikea, because when you say that the value proposition is "being able to furnish your whole home within an entire day," that's not something that can easily be MVP'd. The first version of Ikea that would be able to accomplish that, to actually deliver on that value proposition, wouldn't be a small thing. And I think this goes back to "Where does this value chain start?" It really does start, especially in these new, innovative areas where things haven't been done before, with the idea. With the hypothesis of what needs to take place to serve the market, to serve the customer. Then that missing portion is, "How do you validate that?" Because even if someone has a really good idea to start something like Ikea, maybe, if they've got some good showmanship skills and sales skills, they can go raise some money on that and get started that way. But the critical part we want to talk about here is that need for research in the value chain. We have this hypothesis that, look, it's difficult to furnish a home. One, furniture is expensive. Two, I have to go to a bunch of different stores. People probably just experience these pains. How can we take this assumption, this hypothesis of why this is such a problem and whether people even want this problem solved, and turn it into something more concrete that we feel confident about? "All right, we think there's something here. Let's take millions of dollars, build a store, build a product line, and, like you said, take into consideration what should be included in the home." We're not talking just tables anymore. We're talking rugs. We're talking nightstands and lamps and dishes and silverware and all that stuff. That's not just going to happen overnight.
You'd want something more concrete than just a guess or a hunch to go on, and that's the part that makes this predictable and, I think, much more leverageable for product practitioners.
Nate: Yeah, I think your point about predictability is a really great one, and one I want to dive into later as well. I think the interesting part here is, when these ideas originate, try to pinpoint where in the value chain that's happening within your organization. How did you actually observe that somebody has the pain point of needing to furnish their entire apartment in a day on a budget? When did that pain point actually become observed? Was it canonized? How did it actually get accepted and get buy-in across the organization?
Trey: Exactly. I think what's even more complex too is whose job was that?
Trey: Who in the organization was actually thinking about this and, to your point, even had the position and the influence to be able to make something like that happen?
Nate: A huge, integral part of that is the roles and accountability that come with it. And the interesting part here is that this is a beautiful segue into the next point that we've talked about before, which is: how is this usually done inside of, especially, the digital product industry today? It's usually one of three ways, and we would like to add a fourth, which we feel, and are opinionated about the fact, is a better way. But of the three ways: one could be that you have a really important key stakeholder somewhere in the process of that value chain who decides that the qualifications for deciding what should be built are salary, experience, authority, and position. So essentially, based on the fact that they are the CEO or an executive, they decide that they're the ones who are sophisticated enough to decide and articulate what the value proposition should be. So now you have this person with extremely good intentions who has taken a complete and random guess and is taking the backing of their authority and putting all the resources of the organization behind it. So that's one. I think Marty Cagan has dubbed that the pretty pig, or putting lipstick on the pig: essentially you have this really terrible idea that comes down the line, and all the rest of the organization is expected to put lipstick on it.
Nate: There's that. The second one that you and I have talked about is really a bit of a misunderstanding and a degradation of The Lean Startup principles, which people have just distilled down into build, measure, learn. And I think the way you and I would describe that is that you're just leading with a hypothesis. Would you agree? Is that how you would articulate it?
Trey: Exactly. I would say you are seeing something happening, or you are making a guess, an educated guess, because you're probably a professional, you've seen some things in the industry or in the market, and you may even be doing quick little things that grab some data to feel it out.
Nate: Lightly gathered data.
Trey: Exactly. We're just finding stuff here or there on the internet, and then we run with it. "Well, I think we'll just build XYZ to fix this problem. Let's do it." We build it, we throw it out there, and then there's this question of, "Oh, well, did it fix it or not?" And you can get into this cycle, like hypothesis burn, where it's, "Oh, well, that was wrong. Let's do something else." As we'll talk about more later, it's just very costly and very expensive...
Nate: Oh yeah.
Trey: ...to keep putting out features and products that just don't win. It's very expensive.
Nate: Yeah. Without taking the time to properly understand what it should be. And now we're both getting eager to talk about how we think it should be done.
Nate: The third, which is really probably the industry standard, and one where I think there are still principles I'm fond of, is agile and the principles of agile. There are going to be some agile missionaries listening to this who will burn me at the stake. The principle behind agile is: we don't know up front what the requirements should be, we don't need to, and we reject the premise of defining requirements up front. So what we're going to do is start building, but in the process of building we're going to keep really short feedback loops with the customer at all times. There'll be some product owner or product manager enabling and opening up lines of communication with an end user or a reference customer, and those lines of communication are essentially feeding into what's constantly being built, and the team does their best to stay nimble enough to respond to any changes that might come up. If it's the right thing for the customer, then we transition and we move. A lot of great principles in there. However, it really falls under a lot of the same paradigms we just talked about with the build, measure, learn issues, where you're often leading up front with very little understanding of what something should be. We're not talking about waterfall in comparison, agile missionaries. We're just talking about getting context, about understanding human behavior, about being able to empathize. Then you keep iterating and you keep responding, and when you actually talk to a statistically significant number of people and find out that something needs to change entirely, you're responsibly throwing out a lot of work you probably could have foreseen in a still agile and nimble way.
The agile methodology is not without its issues, and we haven't even spoken yet about the fact that it leads to an insane amount of meeting bloat, and the operational cost of a company trying to build in predictability around when something will be released, instead of predictability around when something will be in the marketplace and how successful it could be. So I think those are the three models that you and I have talked about a lot.
Trey: Yeah, and I think you're right. I think they're the most prevalent for how organizations are running.
Nate: Yeah. The scariest definitely being the parachute-in visionary executive.
Trey: "I'm Einstien. I'm gonna come down for my tall castle and tell you folks here in the dirt how it's going to look."
Nate: Yeah, exactly. So I think the alternative way of thinking about this that we would like to propose is not something new. It's not something we invented. It's actually something that's been done for thousands of years: it's really leveraging the scientific method. This principle of the scientific method has been adapted and changed and leveraged for quite a long time. It's very well tested. You could say pretty much any major scientific advancement ever has come about because of the scientific method. What we're proposing is to start thinking about what you should do in the future, and how you should do it, in a more inductive and scientific way, and we're going to talk about what we mean by that. So when we talk about this process, we commonly dub it the Theory Building Process. If anybody's familiar with mixed-method research and grounded theory and things like that, we're talking about inductive theory building here. Instead of perpetually leading with a hypothesis, you leverage feedback instead of feed-forward activities, and you actually start thinking about things in a scientific way. We're going to talk today about what we actually mean by that, because we know there's some loaded context we want to dive into a little bit. But, Trey, anything to add on what we mean by actually building a theory?
Trey: I think the big thing, and I'll touch on this later as well, is that, especially for people who are doing agile, this really is a step before the build process. It's really going in and understanding, and it's really the process of transitioning those hypotheses into a theory. And if you think about any popular theories in science, it's those things that give you that predictability, where you can say: based on what we've seen here, based on the body of work that's been conducted, and what we've seen through our research, we can say with a high level of confidence that this is what will happen. From a product standpoint, what that looks like is being able to say these kinds of solutions will fit the needs of these customers in this market. And having that up front when you're going into the process of delivering those products is extremely powerful. It helps you avoid the meeting bloat, but hopefully also the biggest thing: building the wrong products at the wrong time.
Nate: Absolutely. I think something you and I have talked about quite a bit before is the principle that, in the process of building, you can only be right once about the fact that it's cheaper to build a guess.
Nate: If you build a conjecture assuming that it's cheaper, operationally, in time to market, whatever it might be, that assumption is only true once. The minute that you're wrong, you automatically incur the cost of building whatever you're doing entirely over again.
Trey: Absolutely. And there's also the risk that being right once validates that method of thinking, so you build that man-in-the-castle mentality, or the pretty pig mentality, where it's, "Oh, we got it right. We're just gonna keep doing this. We're gonna keep being the visionaries. We were right once; who's to say we're not going to be right again?" Which can be almost more costly than being wrong the first time.
Nate: Absolutely. Before we dive into what we're talking about here, I think there are a couple of assumptions we're making about anyone who might be interested in trying to integrate this type of process into their organization. The first is that you have buy-in to human-centered design, or goal-directed design, or domain-driven design, whatever you want to call it. That you have buy-in to those methodologies; that you understand that the practices of understanding behavior, designing before implementation, and empathy building are extremely important and inherently add value to any product you build. We'll build on that as we chat a little further, but I think it's super important to recognize that in this day and age we cannot move to implementation before some stage of understanding. It's not viable for any organization to automatically start building something without any premise of design or understanding. We're too far into this. It's not acceptable anymore.
Trey: Yeah, and it's one of those things where, if you say to a room of people, "Hey, the goal here is to truly understand your customer and to truly understand your market," most people would be like, "Yeah, that sounds like a great thing we should do, right?" But because of the nuances, and because of these things that have been done previously, it just gets lost, and we end up going with these other, more expensive, more taxing routes to try to understand the customer, when we could just start off trying to understand them and then be able to deliver to them and meet their needs.
Nate: Absolutely. So what do we mean by going out and building a theory? Well, I think the first thing we have to talk about is: what is a theory, and how does it actually compare to a hypothesis? I think the word hypothesis is really well understood by most of the industry, especially anyone who has tried to do any sort of Lean Startup methodology or, for that matter, anything regarding agile; it's commonly used in that context as well. So I would say the first thing we have to understand is the nuance there, the nuance being that a theory has some pretty specific parameters attached to it, things that qualify something as a theory. The simplest is that it's the most downstream aspect of what a hypothesis might become. A hypothesis means we've observed some sort of behavior or phenomenon, we now have a proposed explanation for why and how that's happening, and we're going to go test either a solution or an explanation for that phenomenon. A theory is, "Hey, not only have we built this hypothesis, but we might have tried several hypotheses, and we saw it proven correct over and over and over again. It was repeatedly correct." So that's one parameter. A second parameter would be that it builds an explanatory framework. Just by understanding what the theory is, it actually helps you understand, and it explains, the phenomenon it's attached to. It uses constructs and principles to actually explain the phenomenon it's attached to. And we'll give some examples of what this means; Trey and I were talking about some really good examples earlier today. The last one here is that it's falsifiable, meaning that it's not enough that it has been proven true repeatedly and that it has this explanatory framework attached to it.
For the third parameter, you know and understand what characteristics of measurement will help you disprove your own theory. So not only what will help you know that you're successful, but how will you know that this theory is false? Your solution, your proposed explanation of why a phenomenon is happening: how will you know that it's false? If that doesn't exist, it's not a theory. So, anything to add there, Trey?
Trey: Yeah, I just want to share a quick example of what this kind of looks like. Recently I worked on some research with a team where we were looking at a group of professionals who were customer-facing and who, over the course of years, had had several tools and processes built for them to help them do their job. We were tasked with trying to understand the phenomenon that some tools and processes were really well adopted and some were just completely not adopted at all. So there's this mixed adoption going on; that's the phenomenon we were observing. And we looked at several hypotheses, but the strongest one at the time, the one I was thinking about, was the assumption that the tools and processes that were being used were easy to use. They were well built. They were elegant. They were easy to get to in a few clicks. All the kinds of things you would look to and say, "Okay, of course they're going to do this, because it's an easy thing to do that's part of the job and helps them." And the things that were not being done were not as easy. More clicks. Harder to get to. A little more difficult to understand. Not as good documentation. All those kinds of things just adding more friction to the process. That was the hypothesis we went into the research with. As we started talking and doing ethnographic research sessions with the different users, what started coming up early on, and then persisted and did hit that level of breadth in terms of data and feedback, was that users would do even the hard and difficult things, not just the easy ones, if they believed that, one, their manager cared about it, or two, they saw other colleagues doing it as well. And so we got done with this whole research process.
Then we had this theory in place: "Okay, when it comes to building tools and processes for these professionals, if there isn't manager buy-in, in a way that managers are tied into it, and if there isn't a way for them to see other colleagues doing it as well, there's going to be low adoption." And so now, with this theory in place, we could look forward, in terms of building new tools and processes, to see how it would change our judgment and change the things we do. And given the data we had, the way to falsify it was this: if, when we build new things, managers are tied in and we make it so people can see others doing it, and adoption still doesn't pick up, the theory is false. Fortunately, as it has played out, it has worked out; that theory has held up as consistent with our research. But to your point, as far as it being falsifiable, those two components are very company-specific. They were team-specific in terms of culture, and they could change six months or a year from now. Those things could change, that theory becomes false, and then you have to go back to the drawing board and run that research again to develop a new theory moving forward.
Nate: Yep, absolutely. I think that's a perfect example of the inductive research we're talking about here. We've used the word inductive a couple of times, and I think it would be good to clarify what we mean by that as well. In a lot of more rigid, almost laboratory-like environments, the scientific community will use more of a deductive approach, meaning they actually start out with a proposed causation. So not correlation, but causation. They start out with proposed causation and then go try to find a phenomenon and data to support their proposed construct. On the inductive side, however, you start where the phenomenon is observed. We see a phenomenon, we try to understand all the things surrounding it and the context that builds up around it, and we build constructs around it to the point where we can actually make an inference-based, induction-based proposal for what the causation is at the other end. So that's the nuance there, and that's the type of flow and expectation you can expect from these types of activities. To explain a little more what we mean by these different stages and what a theory looks like, alongside the experiences you were chatting about, I'm thinking back to an experience I had at a previous software company that was building software for tax practitioners. We were doing an on-site contextual inquiry, which is just an on-site visit, and as we were doing that, we were just observing people working. One of the things we saw was one of the practitioners receive an email from a client. They took the email and dropped it into a folder called 'Correspondence.' We had never seen that before. And we were like, "Hey, tell us a little bit more about why you're doing that. What's going on here?
Tell us why you're actually going through this process of exporting the email from your email client, getting that document, and then placing it into that client folder under Correspondence. Do you do that with all your clients?" "Oh, yes, of course. We do it with all of them. It's more of a just-in-case thing: if there's ever any type of litigation, we want an archive of our communications with all our engagements."
Trey: So he's like manually building a backup.
Nate: Yeah, yes. This got super interesting. So then we started talking with subsequent customers and we said, "Hey, do you have any sort of correspondence or client communications folder?" "Yes, and we always do that, every single time. When we end an engagement we download all the emails," or, "As they come in, I download the email and put it in the folder," whatever it might be. So that was a phenomenon. Many organizations would then just go build something from that, but it's important to stop, canonize, and understand that what you just saw was purely a phenomenon. You haven't seen anything that deserves or merits a solution yet. You have noticed a phenomenon that you now need to build more of an understanding around. I think that's the important distinction at this stage: actually articulating, observing, and canonizing a phenomenon.
Trey: Yeah, so you're definitely seeing the phenomenon there, and I imagine the next step would be to start formulating some hypotheses on why that would be, but most importantly, to continue on the path of research to really deeply understand: why do all these customers seem to have this common trend, this common phenomenon of always copying these emails out? Are they worried about what kind of litigation might happen? If litigation does happen, what are they looking for? How are they going to access it? There are just so many questions that need to be answered before you even go forward and start thinking about building a product. Because I think a lot of us who are feature-focused immediately start thinking, "Oh man, we'll build a backup that automatically logs these emails, puts them into these different files, with a double fail-safe," all these kinds of fancy things that would probably sound good to the customer. But without going to that next level of deeper understanding, we could totally miss the mark in terms of truly delivering value and really meeting the need that's arising from this phenomenon.
Nate: Yeah, totally. And I think, to your point, the principle you're talking about here, trying to further understand the phenomenon, is what Clayton Christensen describes as 'dumpster diving,' and I've always really loved that analogy. You're just trying to get familiar with the problem space. You're doing everything you can to understand the surrounding context and explanations and nuances around the phenomenon, because most likely there are going to be some nuances in how people are experiencing it. On top of that, when you speak with more people, you may even find that your initial thoughts and impressions of that phenomenon degrade over time, and that there's some other phenomenon, one that abstracts from it or is more granular than it, that is more important to you and your organization, more in your wheelhouse of the problems you should be solving. Whatever that might be, you absolutely have to take the time, before you dive into the hypothesis stage, to understand the context that surrounds the phenomenon.
Trey: Yeah. So I think we've covered quite a bit on how this process of theory building can be helpful to an organization, especially when it comes to delivering products. But let's talk a little bit about why organizations don't invest in these methods. We've been at several different SaaS companies between the two of us, and in the few places I've actually seen this done, it's always done for a short amount of time. I'll see a product team band together and spend four to six weeks doing this bit of research, and then they're done for six months to a year. So understanding that organizations obviously do see the benefit of this and try to do it: what are some things you've seen as the biggest reasons why this isn't something that's continually happening, especially in product organizations?
Nate: Yeah, there are usually two assumptions, and we mention them in this last week's field guide as well, that are made around this type of research that lead to a lack of buy-in and resentment toward any sort of scientific method being involved here. The first is that it feels too taxing from an operational cost standpoint to integrate these types of activities. They're afraid that doing this type of research will be so heavy and so academic that it's just too much of an ask for an organization to do at scale. And that's false, and we can talk more about that. The second is time to market. I hear that all the time: we can't take the time to understand because someone else is going to beat us to market, or we need to move quickly and ship this out so we can actually get it to the market. There's so much packed into those two objections. A lot of times they're really smokescreens for much larger concerns or issues within a business. But those are the two most common objections that I hear.
Trey: To your point about those sometimes being smokescreens, there are a few nuanced things I've seen at different companies that cause these issues. One, I think, is when the product org is highly praised and rewarded when they ship product.
Nate: Yes. The output.
Trey: We go to companies... I think that's where you you see this rise a lot of companies doing things like we ship new code know, every six weeks, every two weeks, we're doing continuous deployment And so when it comes time to recognize what's happening in the product org in organizations it's always what's been shipped. "Oh. We built this finish this feature. We built and finish this product. It's in beta. It's being tested and used. Being purchased. And having that focus can really derail you from looking at the impact. And sometimes impact even become an afterthought almost in some cases. Or when impact is just purely measured in terms of dollars, because as we all know just because a customer actually decides to give you the money doesn't necessarily mean they've actually capture the value yet from the product that's been built. That could actually easily be explained with just great market in an organization. And another thing that derails it from that and I think makes a little more toxic as if product team start getting bonus or specifically compensated on shipping certain product, righ yeah. I mean, I think people can easily see how that kind of derails...
Trey: The incentives and totally messes up the incentives for trying to deliver something that's worthwhile to a customer.
Nate: Absolutely. The funny thing, too, is that a lot of times when I've done consulting in the past, when you look at organizational leaders that have a lot of distrust in product organizations, or more broadly in umbrella technology departments, so that could be all of product and engineering and user experience or whatever it might be, what causes the lack of trust the most is this: when you look at a marketing and sales organization, there's a lot of sophistication around attribution and outcomes. It's not about the number of phone calls that are made. It's about the fact that this many marketing qualified leads came from this many phone calls. The funnel is built entirely to understand true outcomes, which are unfortunately easier to attribute there. So you have an organizational leader from that background of marketing or sales, who interfaces with that world quite often, and then you go to the other side of the building and they're talking about how many lines of code they shipped, and you are just at a loss for how to justify and understand whether the investment into salaries and time there is well spent. That's totally understandable, and we should empathize with that thought, because we're kind of writing our own demise in that situation. We shouldn't be in that situation. We shouldn't be incentivizing outputs over outcomes. We should be just as concerned with the outcomes that come with any value proposition, and that doesn't necessarily need to be revenue. It certainly can be, and it can be a great outcome many times, but more often than not it's something else. So I think organizations, to your point, have to start being focused on the outcome of the theory they were actually pursuing. There are all these types of things happening within the org that are often smokescreens for that.
I think the taxing operational cost objection is usually also a misunderstanding of the amount of time it takes to do this. Many times, as I've drilled into the objections, most people are assuming that this is almost a waterfall-like methodology. That we're saying you have to define all the requirements up front, that it moves through seven or eight different stages before you ever get the requirements to a team, and so forth. That's not what we're talking about. We're talking about a cross-functional team of engineers, product management, and design taking their efforts and resources to look at a phenomenon within days, if not a week in my experience, to understand what's going on with that phenomenon and the context around it, hypothesize and validate within a week or two, and be writing code by the end of the month. It's the same pace that I've seen with agile by the time you're done with all the damn meetings. This is not a prohibitive amount of time we're talking about here. This is a quick, nimble, and highly experimental way of thinking about a problem space.
Trey: Yeah. The point you make about having a cross-functional team is so important, because when you don't have everyone getting a taste of the research through that theory building process, you innately get biased. I think it even speaks to why theory building should be something that's understood throughout the entire organization as part of how you work. If you haven't been there, if you haven't actually felt and seen what was observed with customers or other individuals, it just doesn't have the same impact on you. So there's something about having the cross-functional team there, everyone seeing a piece of it. And to your point as well, even the research I mentioned earlier took about two weeks, but it was just a part-time thing; we were all still doing our full-time jobs. If you have a team that's focused on this, and this is just part of how you understand the customer to deliver better products, then it can be days, or a week or two maybe.
Nate: Yeah, absolutely, and that's a great point. On the other objection, time to market, and what really goes into it, I think that's super related to what we're talking about here. They're worried about the chance that they might miss a huge opportunity. But connected with that is whether they understand what we talked about in the field guide: the marginal cost fallacy that Clayton Christensen talks about. You go to Kellogg business school and you're reasonably taught that in financial situations, when opportunities or decisions are being weighed, investments made, or resources allocated, you should make a decision based upon the marginal cost delta between the options you're looking at. So somebody looks at these two different situations and says, "Well Nate, I can either take the time to build a guess, which I could probably do in a much shorter amount of time than what you're talking about, or I could take the full amount of time to really understand the causality of customer problems, build the right solution for them, and iterate from there." The marginal cost analysis is pretty clear: you should go with more of this build-measure-learn process. But the problem is that you always end up paying the full cost. So while you made a decision off of that marginal cost comparison, you always end up paying the full cost. His example of Blockbuster versus Netflix is a perfect illustration of that. So I think it's important for organizational stakeholders to understand that if you do not understand the problem space and the context that surrounds it, you will never build an offering that will be as compelling, as valuable, as revenue-generating, and as retaining as it could be.
Trey: I completely agree. And a lot of times, even when they're doing that marginal cost analysis, there are a lot of costs that just don't get taken into account. Most times when this decision comes up, they only think about how much it costs to build this. They don't take into account that, especially if you're doing software, someone's got to maintain it. Chances are this isn't just a siloed piece of code; it's touching other things, so there are going to be other complexities introduced to the system that other engineers building other things will have to deal with. Same thing when you're adding things into design: there are other design considerations to be taken. And once you've shipped it out there, there's also the cost to morale. Usually when a feature's been decided on, or something's been commissioned to be built, there's a champion, someone who thinks this is the right answer. And there's almost this cost of pride, where if you're ever going to get rid of that feature because it doesn't work, or the product didn't work out, someone has to eat that and say, "Hey, I was wrong." In most organizations where people are trying to build careers and move forward, the last thing you want to do is come back and say, "Oh yeah, we spent all this time on this and we were wrong." So these natural tendencies lead us to hold on to these things. We build it out, and if it's not picking up, even if we have put things in place to see whether it's successful, we'll keep justifying: "Oh, just give it a little longer. It's just going to take a little while for it to pick up." Keep on it. Keep on it. And it's probably not until people have left the company and other people come in that someone says, "Hey, why do we even have this?"
Then it gets cut, and at that point you probably have a small handful of users who are actually using that random feature, and now they're going to be upset because you're getting rid of this one thing they happened to discover and use. Those are all little costs that I know are super hard to quantify, but they're part of the full cost. So to Clayton Christensen's point, you bear the full cost if it's not the right thing: everything from the build cost to the maintenance cost to what it does to your team, to, like we were talking about, even some bad conjectures that can come out as well. Let's run through three quick examples. Say you ship something and it doesn't pick up; you can do one of a few things. One, you can look at it and say, "Oh, we just need to build more." You get in the cycle of throwing another feature at it to try to make it better, and try to make it better, and that can lead you down a rabbit hole. Two, you say, "Oh, it's not really worth it. Let's ditch this effort," when there may be something there. It could be one of those things where you understood a phenomenon, made a hypothesis, and ran with it, but because of the lack of understanding you didn't quite nail it, which explains why it didn't pick up, and now you're abandoning something that could be a great opportunity. And the third thing, which I think is just as dangerous, is if it picks up. What if you were right? Then it's justified, like we talked about; you almost start creating this persona of...
Nate: You get this false positive...
Trey: I'm the Visionary. Yeah, exactly. "I got it right, my hypothesis and my intuition are correct, and I'm just going to run with it now." If people start thinking that way in the organization, it gets a little dangerous, where even if there is research done, it creates this bias, right? It's like, "Well, I know you've done research, but I got it right in the past, so we're going to run with what I'm thinking." I think this is almost just as dangerous, and once again, no matter what the outcome is, you bear the full cost of that, and it's usually never good for the company or for the customer.
Nate: Absolutely, I agree. Well, we could talk about this for hours, but we'll have plenty of opportunities in subsequent episodes to discuss more about theory building and the activities that come along with it. Most likely, next time we'll be diving into what it takes, and some of the corresponding activities, to canonize, collect, and aggregate phenomena within an organization: how you can do that, and how it can benefit the downstream work of building new products. So we'll probably talk about that next. Check out our field guides: if you go to www.motifresearch.com/craftnotes , you'll see all the field guides we'll be publishing there. We would love any feedback you have. Keep coming back, and sign up for the newsletter so you'll hear from us as soon as a new field guide comes out.
Trey: Like Nate said, go ahead and check out the field guide that goes with this podcast. It's called "Intro to Theory Building," and please comment or ask us questions there at the bottom, or you can shoot us an email at email@example.com. If you check out that field guide, Nate's put together a great checklist of the theory building activities. We'd love to get your feedback on what you're doing; there are some check boxes you can click to let us know, so we can see where the audience is at and cater our content going forward.
Nate: Yeah, that's great. Also, just a little plug for Anchor: we're creating this podcast via Anchor. It has a really amazing feature where you can actually respond to the podcast creators, so we'd love for you to download the Anchor podcast app and send us a reply. We'd love to hear from you. But until next time, we'll see you later.
Nate: Thanks Trey. See you later man.
Trey: C'ya Nate.