Every data leader wants to be a strategic partner to the business. What does it take to do this effectively? Today we dive into design thinking with my guest Brian T. O'Neill, founder and principal at designingforanalytics.com, and host of the Experiencing Data podcast.
We talk about data products, how and why design thinking and UX research can boost their adoption and impact, the process of looking inside a superficial want to get to the true underlying need, the role of curiosity and empathy in asking the right questions, effective design choices including null choices, why the deployment of data products should be an integral part of the design process, and many other topics. You can follow Brian on LinkedIn and Twitter.
📣 Twice a year Brian runs a LIVE instructor-led seminar titled “Designing Human-Centered Data Products”. The next one starts on March 28! Learn more here.
Your ideas help us create useful and relevant content. Send a private message or rate the show on Apple Podcasts or Spotify!
Loris Marini: I'm here with Brian T. O'Neill. Brian, thank you for joining me.
Brian O’Neill: Thanks for having me, Loris. It's good to be here.
Loris Marini: Cool. So there are a bunch of reasons why I wanted to record this conversation with you. First of all, thank you for connecting so late in your day. It's 6:16 AM here, and I suppose it's late in the afternoon over there.
Brian O’Neill: Oh no, that's a good time. This is nap time for my son, not for me, so it's also a quiet time for recording. It's a good time for us to talk. I will not sleep through the recording. Don't worry.
Loris Marini: I essentially wanted to explore this topic of design thinking as applied to data. I know you've been talking a lot about that on your podcast, and most of what I know about it, to be honest, I learned from you. Let's start with the basics. Why is design important in data in the first place? What does it mean to think about data as a product?
Brian O’Neill: Okay, good question. So, design can mean a lot of different things. I think for a lot of people, we tend to think of the visuals of something. If we're talking about an object, the way it looks, its form, and all this kind of stuff. I think the next tier of that is we think about experience with something. There might be a thing that has a form, but then we think about the experience of using the thing.
If we think about data as something that usually has to be put into some kind of form before it can be used, whether it's a dashboard or an aspect of a software application or a PDF report or a machine learning model, it usually gets manifested into something before it can get used by a human who is going to be in the loop. Almost anything, even with automation, has some human factor at some point. It's not just machines running machines, as if the machines decided to build their own model for another machine. We're not quite there yet. Maybe we'll get there.
My point is there's still a human in the loop at some point with most of these data-driven solutions. We're still talking about software. I was just thinking about how abstract some of this gets. We're still in the realm of software. All this data science stuff and analytics and data strategy, most of it still manifests itself in some form of software. Now, maybe you're not building a custom app. You're adding onto something. You're helping sort or classify something that will be spit out into a tool like Tableau or a CRM. You're not really building a new app. You're sorting email and routing messages based on NLP or whatever, but that's still an experience that a human being is going to have.
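(To make the email-routing example Brian mentions concrete, here is a minimal sketch of NLP-based message routing, assuming scikit-learn; the messages, queue labels, and model choice are all hypothetical illustrations, not anything specified in the conversation.)

```python
# Minimal sketch of NLP-based message routing (hypothetical data and labels).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical messages with the queue a human eventually routed them to.
messages = [
    "invoice overdue, please advise",
    "refund has not arrived yet",
    "password reset link not working",
    "app crashes when I open settings",
]
queues = ["billing", "billing", "support", "support"]

router = make_pipeline(TfidfVectorizer(), LogisticRegression())
router.fit(messages, queues)

# A new inbound message gets a suggested queue; a human stays in the loop.
print(router.predict(["cannot log in after the update"])[0])
```

The point of the sketch is Brian's: even this small classifier is an experience somebody downstream will have, because its predictions decide whose inbox a message lands in.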
The thing that I'm trying to do with my work is to drive a design approach and a product-thinking approach into this, so that the stuff built by the really talented people who know the technical part of data science, analytics, and technical enterprise software products, which is what I specialize in, can actually be used. It actually produces value. It actually has utility that makes somebody's life better.
What do I mean by that? I'm talking mostly to the enterprise crowd here, where we're working on internal solutions for internal stakeholders and we're not necessarily building data-driven solutions for external buying customers. I think the bar is a lot lower there. You don't even need someone to pay you to use your stuff. You're building free stuff. It's a lot harder when someone has to give you money for it. Part of my answer to your question about product is this: the product mindset, if you really want to boil it down, is asking how good your solution, the thing you're making -- the dashboard, the model, whatever the thing is -- would have to be for someone to pay for it.
Would you put that out? Is it good enough right now? If someone had to pay for it, how much better would the experience have to be for someone to say, “I can't live without this, and I'm willing to spend $2,000 a month to have access to this dashboard,” or whatever the thing is? Think about it as a product, not a project that has an endpoint: the product is the manifestation of something that's about solving a recurring problem.
The problem might exist all the time, such as: which sales leads should we call next? I have a sales force of 5,000 people, who should they be calling? That problem doesn't just go away all of a sudden. It's a latent problem. You can get better at it. You can improve the solution, but a product mindset would be thinking about owning that problem space, which maybe has multiple touchpoints. Maybe you built an app to help them with something. Maybe you help sort a call list using a machine learning model, a propensity model, that predicts who is most likely to buy something. You have all these different outputs: technical projects, widgets, models, applications, reports. Someone owns the problem space, and they might think of that the way a modern software team would, as a data product.
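(A minimal sketch of the kind of propensity model Brian alludes to, scoring a call list so the highest-propensity leads get called first. The features, data, and model choice are hypothetical, assuming scikit-learn; a real version would be trained on historical sales outcomes.)

```python
# Minimal propensity-to-buy sketch for prioritizing sales calls (illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical features per lead: [emails_opened, demos_attended, log_company_size]
X_history = np.array([[5, 1, 6.2], [0, 0, 4.1], [8, 2, 7.0], [1, 0, 5.5]])
y_history = np.array([1, 0, 1, 0])  # 1 = this lead bought in the past

model = GradientBoostingClassifier().fit(X_history, y_history)

# Score today's call list and call the highest-propensity leads first.
call_list = np.array([[3, 1, 5.9], [7, 0, 6.5], [0, 0, 3.8]])
propensity = model.predict_proba(call_list)[:, 1]
print("Call order (lead indices):", np.argsort(-propensity))
```

The model itself is only one touchpoint; owning the problem space means also owning how that sorted list reaches the 5,000 salespeople and whether it actually changes who they call.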
That's kind of what I mean by that. The design part is really about acknowledging that there are still humans in the loop and the human interaction piece is where projects can go to die because they're too hard to use. The value is invisible. They solve the wrong problem.
Most of the time, the problem is not well understood, in my experience. A lot of the time, the reason solutions aren't good is that the problem space is not well informed, because the person making the solution doesn't really understand what it's like to be the person who's using it. They haven't done any research. No research, no ethnography. They haven't spent two hours seeing what it's like to be the accountant who runs the TPS report every quarter, to know what that experience is like, what it feels like, what's frustrating about it. Not to mention level two, which is: maybe I could also be looking for opportunities for my data team to make that experience better for this person, who doesn't know to ask me to build a classifier. They don't have the vocabulary, and they shouldn't need to care about that at all.
This is why UX research is such a big part of modern software companies. We're out problem hunting. We're out there finding opportunities instead of being a reactive data science team or a reactive analytics team that takes the orders at McDonald's and serves them out the window. No. We're going out to figure out what people actually need. Yes, we're still solving inbound inquiries, but now we know what it's like to be this accountant or this salesperson or whoever it is that we're there to help.
Now we can start to see things like: why are you doing it that way? We could do this. We could give you a thing that will do that. We can classify all this. You're typing doctor's notes into this iPad. We could probably tell you what the themes of all these notes are for all these patients, so that you don't need to copy and paste them between this machine and then fax them to this place. Classic U.S. healthcare.
We can start to see opportunities. We're being a strategic partner now, instead of, I hate to say it, drive-thru order fulfillment. You send us a ticket; we'll build you a whatever. I hear this in my seminar all the time. They have no idea why someone is asking for this thing. A bunch of “gimme all the data on this thing, tables of numbers and fields.” They have no idea what these fields even mean.
Loris Marini: And machine learning will figure it out, right?
Brian O’Neill: Yeah, sure. But then you can't be a partner. A lot of the leaders I talk to want to be a “strategic partner to the business.” You need to spend more time understanding the problem space and stop spending all the time building outputs. It's outcomes that people want, not outputs. Most of the time is spent building things that don't get used. That's my mission: to show that design, I think, can help with this.
It's not yet a normal language or way of doing things in the data field, and I think: why can't it exist here? My mission, selfishly, is really to drive more design in the world, but specifically with the data audience, because ultimately, especially now with machine learning and AI being more and more prevalent, these people are also the ones that are going to impact the world and the culture and the society that we live in. The stuff is not a fad. It's not going away. If I can have a small part in influencing that, even if it's for someone working at an insurance company or whatever company, they may not be there the rest of their life. I want to get those people thinking about the human factors involved, and not just looking at their thing as, “I built this model and I published a paper about it, and I have no idea how it's affecting the users or even the downstream people who are affected by the decisions made by the actual users.”
Think about criminal justice and all these kinds of things, the ethical side of it. If I can get data practitioners and leaders to be thinking at a much higher level about the impacts they're having on people's lives, that can be good for business. It's also good for ethical reasons. It's good for society. I will feel like I've fulfilled part of my mission, which is just to bring better things into the world through teaching and talking and those kinds of things.
Loris Marini: Absolutely inspiring, and it resonates a lot with me. I started what is now the Discovering Data podcast with the tagline “data through a human lens,” because I was frustrated for exactly the same reasons you just explained. I remember, I think it was episode 64 or 65 of Experiencing Data, where you talked about empathy and the importance of really knowing how it feels to be on the receiving end of a data solution. That, to me, really nailed the essence of the problem: we don't talk to each other. Especially data practitioners, particularly in enterprises, where there are so many layers and those that develop are so far removed from those that have a problem and those that want the solution.
There's a difference between the need and the want. There's a difference between the business requirements and the technological requirements, or the architecture that you need to move data around. At some point, someone will complain and say, “Hey, we can't do modeling because we don't have data.” Okay, so the problem is tech. We've got to figure out how to bring a new tech stack together. All of a sudden the conversation is about technology, and we seem to completely forget the fact that there's going to be a user, a human or a bot, but particularly a human, consuming that data.
I think the stuff you talked about applies to bots as well, because bots are not designed to talk only with other bots. Most of the time they're designed to serve some kind of information to a human. At the end of the loop, there are always humans. If software developers are thinking, “Oh man, bloody humans, they seem to be the hard problem,” well, yes, but if you remove the human from the loop, what's left? A bunch of silicon chips. So, you've got to stick with these humans.
Brian O’Neill: Yeah. The other thing I'll say to pick on enterprises for a second, the way I hear this talked about now in the data science community is we talk about operationalizing the model, which gives me a sour taste in my mouth. I want to take a gun and shoot that thing in the head.
I hate operationalizing the model. I understand where that language comes from. The designer's approach is that you design the operationalization into the thing from the start, because by building the model in isolation, apart from how it will be experienced by the person it's for, you have removed part of the solution. Part of the solution is the change management. The operationalization is designed into the solution, along with the model and the architecture and the pipelines and the Spark and the streaming data and all the blah-blah-blahs that are out there. There's always going to be a long list of that stuff. It's all of that. That's also a product kind of approach. It's balancing the business need, what's technically possible, the models, the human beings, the users, all this stuff; it's designed in there from the start.
This is why you see much better onboarding now. For example, whenever you download a new software application, there's much more time spent on onboarding. You don't get a long fricking manual most of the time anymore. You get a guided experience. You get some value soon, and then maybe you have to tweak some things, change some settings, customize some stuff. You don't normally start with a blank slate anymore. Why? Because they've made that part of the product; part of the experience is the adoption part.
It's the transition from the old to the new thing. It's built into the design. I don't hear that very much. I hear “we built one wheel of the car over here and it's the best wheel ever. It's someone else's job to get it on the car and then to convince someone to drive the car with this new wheel on it”, it's like, “That's someone else's job.” It might be that there is a team and maybe that's sort of true, but I would rather have the data scientists and the engineers and the product designers and the business stakeholders and the users all working on this together so that good data science and machine learning and analytics work doesn't die at the stage when the wheel goes under the car.
That's the worst thing, because the wheel could be kick-ass. It could be a really, really good solution, but it dies once it gets attached to the car. It did no good. If the outcome was “get to grandma's house faster,” our business is trying to get to grandma's house faster. It's snowy, it's slushy. It's not safe to drive on these roads. We need a better wheel, and data science says, “We can build you a better wheel.” That doesn't do any good by itself. The metric of success is: how many trips to grandma's were faster than the other trips? That is the thing that matters, not “did we build a better wheel?”
If you are a data person, you will often be asked, “Can you build me a better wheel?” The answer's probably yes, you can. It's also our job to figure out what the pain and the need is. Is that really what's needed, or is the real need behind that? We're just seeing the surface. “Can you build me a dashboard that does whatever?” As a designer, I'd say you've implied a solution in that case.
You've implied that you need a dashboard, which may be true. I don't know if it is or not, but a dashboard is a proxy for, “I'm trying to make a better decision. I think historical or predictive analytics will help me make a better decision.” My immediate thing is like, “What kind of decisions are you trying to make in the future?”
Because you probably don't really care just what happened yesterday. I know it feels good the first time. You see what happened yesterday and historically, and it's great, but it's like crack, that first hit. For the first time, we get to see historical data over time, and it's like, “Wow, look at our sales,” and all this stuff happens. But right around the corner, the analytics question is, “What do I do now?” The question quickly becomes, “What do I do in the future?” The questions about the past are mostly behind us, because the past doesn't matter if it doesn't inform a decision in the future.
I think about this as a continuum: if A to Z is the user experience and you're asked to work on a dashboard, it's like, "Well, that's from letter F to letter L.” There is an E and there is an M. Even if the dashboard isn't going to fully solve for that, you need to know what step E was before they land on F, which is the dashboard. When they exit, there's some decision they may make in the future, you need to know that entire workflow. Journey mapping or service blueprints are the design tools that we use for this.
These start to give us that much bigger A to Z picture of what it's like to be this person. Yes, the dashboard project may end up just being that one slice. If you know what they need to exit the dashboard to make a better decision, where are you going? What is the next stop on the trip? That's how you build a better dashboard. You don’t stop at the data.
Loris Marini: There's an idea that is central to Discovering Data, and I think it's curiosity. There are two elements, at least. One is the psychological stance of the people working to solve these problems, no matter where they are. From ingestion and storage of the data all the way to the final product that gets used, it's a long way.
If you look at most teams, the way that they're designed is that we have specialists in each area. There are the data engineers who worry about where to put data, how to stream it, how to save it, how to massage it, just to create a repository of reusable information for everyone downstream who actually wants to do stuff with it. All of these principles you just described apply directly to the first and most invisible part of data, which is data management and all the integration, all the architecture. Because when you make those decisions, you can't build a system that works in every scenario.
You can strive to build a system that's flexible, but there are always pros and cons and money is limited. I believe that if we approached data management as a design problem and have that curiosity hat on to ask those questions and try to map what we want to happen ideally, you’ve got to always start from the vision. There's always going to be limits and everybody will have to adjust plans all the time, but if at least we had that vision clear and an idea of what's going to happen in between, it's going to be much, much easier to build assets that can be reused.
That was sort of the idea that I had. I saw your post on LinkedIn a couple of weeks ago, and I got really curious. We know this from the economists; Doug Laney has been on your podcast, James Price on mine. We talked about this idea of intangible assets and how much they drive the current economy. It's estimated that more than 90% of the value of the S&P 500 is attributed to intangibles. Data and information are a big chunk of that. What I'm thinking is, we know information is non-depletable. We can reuse it. It's not like a bottle of wine that's gone once we drink it.
We can keep using it. There's a non-rival aspect to these assets because multiple people can use them at the same time, but then one might argue, “Well, YouTube videos are the same.” You can't kick intangibles with your toe. You can play that video again and again. Multiple people can stream the same video at the same time, but I don't see that as a magic asset. What is the difference between information and a video on YouTube?
Well, apart from the fact that a video on YouTube is a form of information, the sort of information we mean when we talk about getting to business outcomes, using data, using digital representations of information, is slightly different. The difference is, I believe, in the fact that information is contextual. To reuse a data set, you kind of need to know the context around it. To what extent can we build data products that can be reused? Is there an important aspect to it? Should we worry about that?
Brian O’Neill: This is getting a little abstract. I guess when we talk about “can they be reused,” I would say, “Okay, well, look at any SaaS platform or market intelligence tool that has a monthly subscription model.” Can it be reused? Someone's paying every month for this thing, which implies they're getting value out of it. I guess that means it can be reused, and they're probably running certain use cases on a regular basis. I have my 9:00 AM coffee. I look at this dashboard, I look for meaningful change. I look for insights that I didn't have yesterday.
Is it reusable? The tasks and activities are definitely reusable. Has the data slightly changed? Obviously, yes. A lot of these situations we're probably talking about dynamic information, so the insights might be different. When you say, can it be reused? It's like, well, has the data set changed or not? I guess I'm not quite sure when you say, can it be reused? What do you mean by that?
Are you talking about a static set of data and could it have multiple uses or what?
Loris Marini: Yeah, I suppose I'm talking about what comes upstream of a dashboard. The data comes from somewhere. It's been transformed along the way, and then it feeds the dashboard. To your point, the data sets a dashboard needs don't magically appear. Data engineering teams are responsible for providing those data sets in a way that whoever's responsible for building the dashboard can work effectively and not reinvent the wheel, spin up Spark clusters, or do a whole bunch of transformations.
These folks should communicate, obviously: the engineer and the analyst. There's a movement that's been going on for at least four or five years called analytics engineering, a philosophy that is trying to build a bridge between the two, kind of like DevOps did for software developers and those that operationalize the software, to use your term. They should talk to each other; actually, they should almost be the same person. If we remove or reduce technical barriers in the process, through the pipeline, through the data transformation journey, then we can have one person own the whole process from ingestion to cleaning up the data and serving it into a dashboard.
That process costs money and time, and it's often done just as you described at the beginning. We don't talk to the folks who are building dashboards. We just do our transformations and build something that might be reusable, maybe. Sometimes it isn't, or there are assumptions baked into those transformations that will impact how people then interpret and use the data set. You can have all your inks and all your fonts and all your sparkly, dynamic widgets, but if the data is not representing what you want to represent, then you've got a problem, right?
Brian O’Neill: True. I understand what you're saying, and I'm not saying that what I would broadly call the plumbing for this is not important. I'm saying that ultimately, I think teams are keeping track of technology efforts. Did we plumb the system into Snowflake yet? The game that everyone is playing is, “Have we migrated to Snowflake?” That doesn't mean anything to someone who's supposed to receive some value from the data that is in Snowflake. That's a different game, and it doesn't mean that getting to Snowflake is a step in the right direction.
I'm not picking on Snowflake, fill in the blank with whatever tool, platform, whatever you want to talk about. I think sometimes with these enterprise projects, part of the issue is we don't really know what we want, but it sure sounds good to be on that platform. We can all imagine that these amazing things will open up to us if we were on this platform. Okay, now we're there. We're at the top of the hill, we're on this platform. Now all this new value is supposed to occur, but yet it still requires additional engineering and plumbing and other work and design work to figure out what's needed.
The designer's approach would be, “Let's start at the end of the process with the humans in the loop to understand what are the classes of problems we're trying to solve for.” You might come up with a prototype or a vision prototype, and then get the technical people involved early in that design process so they can see where we're going.
Wouldn't it be easier then to know how to store the data and whether to normalize it? Should it be streaming? Does it need to be sampled weekly or monthly? Well, we built 10 dashboards. We built these applications. We've created an experience vision for the people, for the accounting team, the whatever team. We've prototyped a vision for what this looks like. Having worked with many engineers over 25 years in this field, I don't know any software architects who don't find it really helpful to see a stab at the endpoint, the classes of problems. Even one dashboard, it's like, “Wow, okay. I had no idea you were going to need to see this, reshuffle it weekly, and compare it to this other thing.”
“We don't get that data. We have to buy it. We have to store it. It's not streaming. It's not real-time. This chart will never work unless we build all this other plumbing out.” No one knew to say anything about that, because we had never really engaged with the humans in the loop at the end of the process. We just took the theoretical things the platform could do and assumed that we would get this new value out of it.
My point is that understanding the endpoint first is really the beginning of the process: working backwards from the end as much as we can. It is about de-risking it, but that makes it sound almost like it's doomed and we're taking some of the doom out. It's more like, aspirationally, we're trying to make someone's life better, to create new value. And we might be wrong. Maybe we still won't get there, but we will have a much better chance.
I understand that some of this stuff feels squishy, it's about people's feelings and all this kind of stuff, but do it because it makes your work better. What do I mean? Aren't you tired of doing work that doesn't get used, where you spent all this time on it, and then someone's like, “Oh, that's not what I meant. Could you do it this way?” It's just like, “I hate working here.”
Do this design stuff because your work will begin to matter more. You'll build stuff that matters and has an impact. It's much more fun to code up and build things that get used than things that don't. The way to do that is to figure out what the end-user needs and wants, what would make their life better. That's the way to build stuff that makes your work count.
Loris Marini: What do we need to do in the organization to help these people that embrace this mindset actually succeed at their job?
Brian O’Neill: That's a really general question. I'd love to say hire more designers and creatives, particularly in areas where we don't think they're necessary. What I think it's about is outcomes over outputs, which is not my framing, but I love it. It's telling the team, “Success for us is delivering outcomes, not things. It's not the model. We don't get points for the model. The game we're playing is meeting outcomes.” Therefore, outcomes need to be defined, and we're probably going to need to define them together with the stakeholders, because a lot of people don't know how to express that.
They know how to say we're migrating to Snowflake. How's it going? In their head, they're thinking, “So that I will have real-time whatever about my customer ad campaigns,” but that part's not said. They're just asking you about Snowflake. Together we focus on the outcome piece, and the product-oriented thinker thinks holistically about this problem.
For good product managers in software companies these days, it's about giving the team ownership of a problem, not a feature, not an integration, not a pipeline to own. Instead: your job is to increase the speed by which all these things load by 20X. I don't care how you do it. I don't know what needs to change, but we know that a 20X speed improvement will yield this other business result that we care about. How do you do that? I don't know, but that's your problem. I'm not going to tell you how to do it, but you're smart and you have a cross-disciplinary team. You own the problem.
That's what good product teams do. They're not building widgets and features. They're owning a class of problems, and this has actually been studied now. There's real financial gain to this. I just had a friend, François Candelon from Boston Consulting Group, on my show to talk about how they'd been working with MIT Sloan to do studies of large-scale AI use in the enterprise. Last year's study looked at the human-AI interaction piece. What are the “more successful companies” doing? What were their traits? The companies that were seeing outsize business value from AI were the ones that had figured out the human-machine interaction piece.
One of the quotes in there, from someone who was part of the study, I think it was a mining company in South Africa or West Africa, I forget, was, “We don't have machine learning teams. We have problem-owning teams. We send a team out to own a problem.” Yeah, it's probably roughly in the scope of some kind of data-related problem, but that framing of “we've sent them out to own this problem, to talk to customers, to understand” is powerful.
When I say customers, that could be internal too. I don't remember that case, but the point was this leader, I think he was a CDO, had set it up as a problem-owning team. That's a very good, modern products kind of lens on the problem.
It might mean you need some other people besides someone who has the word data in their title. You might need some other kinds of people: researchers, product people, this analytics translator thing, which is very much what I would call a data product manager. These are roles that look at the world differently, people who know how to connect that last-mile piece. I don't think that's always the people who are used to working on the plumbing and the backend.
You need the people who are choosing the faucets. Ultimately water comes out of this faucet, and yes, we need plumbing, but the plumbing down in the basement, nobody really cares about that. They expect it to work like oxygen. They don't understand the ramifications of those choices. We need to stop talking about those choices with the customers, because they probably don't understand what we're talking about down there. I've sat through so many of these decisions, abstract conversations about which architecture we're going to use and all this kind of stuff.
Those are important things, but most of the time I realize the argument is happening because we don't really know what someone's going to do with it. I think if we had a better picture of what good looks like for the end-user, the stakeholder, we would have fewer arguments. We would know the problem space better, and it's no longer just opinions and all this kind of stuff.
Loris Marini: There are at least two ways to solve this problem. I can think of one, which is by educating everyone on everything. Imagine a scenario, which obviously is impossible, where you had a magic wand and could give all the combined knowledge to everyone. Everyone would know exactly what happens outside of their bubble. We wouldn't have translational problems. We wouldn't have to worry about what you care about and what I care about. But that's not the reality of things. Our attention is very limited. It's very short, and we can't know everything.
Effectively, it becomes a matter of filtering down the space of knowledge to the thing that you really need to worry about for your particular problem. How do I help you? I think it's a principle that applies to internal customers as well as external customers. With the internal ones it’s a little bit trickier because when you build a data set or a data mart or anything that is at the beginning of the pipeline, the idea is for many people to reuse that asset, instead of creating copies over and over of the same thing. Copies become a nightmare when you want to govern your data for compliance and all that.
I almost feel like the more we walk backwards on this line from the end-user all the way to the data stores, the more critical it becomes to be curious and talk to people and understand what they want to do with it. How is that going to impact their work?
I guess the original question I asked is: organizationally, what types of incentives need to be there for a team that is enlightened, that gets it and says, “We do want to own the problem, but we can't do it all”? Enterprises are huge. There are so many data sets and so many people. We tend to build teams that are agile, a lean cluster of four or five enlightened people. What do they have to do to be able to move freely and effectively within the enterprise? To actually demonstrate to those who are skeptical that there is indeed value in doing this, not just theoretically but demonstrably: we solved this problem with the customer; we reduced churn because of it.
Brian O’Neill: There are a million things that could be in the way of that happening: the company's culture, leadership that doesn't know what it wants, all kinds of historical baggage that's come along, power structures, incentives. It's a very broad thing. I don't think there's a single answer. I do know that there are basic activities you can do, even if you can't quantify their immediate impact. For example, Elon Musk would say, “Let's just get down to brass tacks here. Your product sucks. Spend more time with your customers.” What does that mean? Go out and actually talk to the people that are buying or using this thing and understand them.
It sounds really simple and it is. It doesn't happen because people think that they can speak for someone else. The manager, the department head thinks they can speak for all the CSRs that are on the phone all day because they used to do that job eight years ago. Even though they haven't actually done it in eight years. They think that the way they do it is the way all the CSRs do it. In reality, the job has changed.
They think they can speak on behalf of this data product that you're building for the customer service representatives. They actually aren't really aware of how that's going to fit into that workflow. They stand in the way of the makers and I'm going to call them designers. Slight tangent here.
Why do I call you a designer if you're doing this stuff? Because in the world of design, there's no null choice. Every choice is a choice, including doing nothing. Effectively, if you're building a data product or a dashboard, you are a designer. Someone decided that that chart went there. Someone decided there's a button and it goes to this other thing. You might not be super informed about it, but you are a designer.
My thing is about making people better at it. How do I help people improve and become aware that design is the rendering of intent? Did that chart come out at the end because “I don't know, I just saw a chart like that before, so I used it,” or because “I intentionally chose this. I thought about it because of the end user. I understood the problem space”? That's my little tangent on everybody being a designer. When I talk about makers, there are too many roles to list: architects, data engineers, software engineers, analysts, whatever. You are a designer, in a way, if you're making something that a human is going to use.
Routing ourselves back to your original question: lots of things can get in the way. How many hours did the technical people spend shadowing one of these customer service representatives? Do you know what it's like to take calls? You'll hear this all the time in the software world; this is not really new anymore. A lot of teams require their engineers to spend a certain amount of time shadowing with the headset on, listening to customer complaints come in, to understand what it's like. Those stories can be very powerful at changing behavior.
It's easy to look at a chart and say, “Well, 82% of people reported a problem with whatever. 12 people filed a bug report,” but it's different from listening to three really interesting phone calls. Those things are what can really have impact, not just helping the makers tie your Spark pipeline, or whatever thing you made, back to this far-downstream person. It really brings some new insight, not to mention possibly some new ideas or perspectives that had not been heard before, because this gap has been there.
It comes down to simply spending more time. If you want a metric to track as a manager: how many hours a week did my team spend with stakeholders? Eat your vegetables. I can't tell you what the immediate value of that is. I just know that long-term, that is what is going to build empathy and make you build better stuff. Eating your vegetables means spending time with customers. You can start by just sitting on the phone and listening. Over time, you can develop the skill of contextual inquiry and learn how to run a research session and ask good questions, to get to something that actually starts to inform your day-to-day work.
Loris Marini: You are in the learning business.
Brian O’Neill: Yeah, essentially.
Loris Marini: You're in the learning business. You've got to be curious and probe the audience and the customer, whether internal or external. A customer is someone that has a problem. You have an assumption of what the problem is; it might not be true, so you ask those questions. Curiosity, I think, is a huge element of that. I know a bunch of friends who work in large enterprises in various data roles, from science to engineering. There tends to be this isolation, this siloing, where you are given a ticket, as you mentioned at the beginning. That's your job, and your performance is measured against that.
Brian O’Neill: Well, just there, you said performance is measured against that. If the organization has a robust skill set in these things I'm talking about, and that group can properly deliver and translate that information to the data teams, you may not need everyone to spend as much time on it. But I can almost guarantee your company is going to develop better products and services.
Especially when we're talking about digital here, the more the technical people have some type of personal connection to real human beings, the better. It's no longer this abstract idea of a customer. No, it's John in marketing, Jane in HR. It's a real person. We saw what they did. We saw their December period, the crunch, accounting tax season. We know what that's like. We saw it. We know how important it is that systems are running at 100% accuracy right now, because any flaw right there could completely screw things up.
That guy's worried sick about this. It's very different when you just hear “we need 100% uptime” versus knowing this person's job is on the line. They are literally sweating bullets about the reliability of this information. We've learned through research that this person is scared to death, because they're the one who writes the final number in the thing that goes to the CEO. If that number is not well-informed and it gets out to the public, then stocks are traded on it and all this kind of stuff.
A UX researcher would be looking for this kind of stuff. My business coach calls it emotionally charged. Designers are looking for this when we're doing digital stuff, to see where people are worried, where they're joyful. Any time the emotions are being triggered, we look to see: is there a way we can remove that friction? Maybe “we need a preview screen,” or we need a test run where you take this number, rerun a Monte Carlo simulation, and double-check the thing before it goes out the door. Yes, some of that doesn't feel like a requirement, and nobody asked us for it. But that is the thing that actually makes these people say, “Loris, your team kicked ass. Can you also help us with this other thing? This is great.”
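(A minimal sketch of the Monte Carlo double-check Brian imagines, assuming the final number is a sum of line items with estimated uncertainties; the figures, distributions, and threshold are placeholders, not a real accounting workflow.)

```python
# Monte Carlo sanity check on a reported total (placeholder data and distributions).
import numpy as np

rng = np.random.default_rng(42)
reported_total = 1_050_000.0

# Hypothetical line items: (estimated value, standard deviation of the estimate).
items = [(400_000, 20_000), (350_000, 15_000), (300_000, 25_000)]

# Simulate 100,000 plausible totals and see where the reported number falls.
draws = sum(rng.normal(mu, sd, 100_000) for mu, sd in items)
low, high = np.percentile(draws, [2.5, 97.5])
needs_review = not (low <= reported_total <= high)
print(f"95% interval: [{low:,.0f}, {high:,.0f}]  flag for review: {needs_review}")
```

A check like this is exactly the kind of unrequested feature Brian describes: nobody writes it into a requirements document, but it directly reduces the fear the research uncovered.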
It got them to be thinking of you as an ally and a strategic partner instead of, “You gave me exactly what I wanted. Thank you. It's not quite what I needed though. I'm so sorry. But like I thought it was going to have customer data and I know, I never said it needed to have customer data and I'm really sorry.” You get that kind of relationship. I'm sure you've been there and you're just like, “Yeah, but that's your problem because you didn't ask for it.”
Loris Marini: Yeah, exactly. Tell me more about the wants and the needs. How do we probe for the need? Number one, it's important to satisfy the want, right? Otherwise people will say, “That's not what I want.” But what you need might be different from what you want. How do you advise people to do that?
Brian O’Neill: Well, we use a technique called laddering, or “why” questioning, which you've probably heard of before. It's very good, but it has to be used carefully. I will admit that as a consultant, it's much easier for me to ask those questions, because I don't have all the politics and stuff that may come in when you're asking a leader “why.”
They're saying, “We're moving to Snowflake.” This decision may have been discussed a lot. Maybe that decision is now downstream, and you're asking why, and you realize they don't really know why, except that everyone else was doing it. I just want to acknowledge that it can be a sensitive thing to ask why. However, it can be done with a true sense of empathy, and the way I would couch that with someone is to say, “Look, maybe our team hasn't hit home runs all the time here. We don't want to spend time building the wrong thing for you, and there's a good chance we could do that. I'm not asking you why to challenge you. I'm asking to make sure that I surface all the assumptions, all the unspoken needs, the kind of stuff that usually isn't written in a JIRA ticket or a requirements document.”
This is called discovery. We work through it together and it's not that you're withholding it. It's that it needs to be surfaced in a language I can understand and then take back to my team. That's how we build better stuff for you. My team has better success and we become partners here instead of like, “I gave you what you asked for, but it still didn't taste very good. You don't want to come eat at my restaurant anymore. You said you wanted double the flour in the cake. It's pretty fricking dry, isn't it? But that's what you said.”
I want teams to be challenging that and say, “Why? How will it help you to have double the flour in the cake?” That implies that there's not enough flour. How do you experience not enough flour? Is it dry? Is it wet? Yeah. What's behind that? I don't know. You find out this other thing, and it's just like, you saw some report or trendy TV show about flourless cakes. Everyone else is doing it. Well, why are you doing that? Well, we need to do something different. I don't feel like we're very innovative. Okay, you want to do something different? Why? Well, because I'm up for review and I need to show some value in our work. Okay, so really you're concerned about your review coming up, and you don't feel we've been particularly innovative. Yes, that's it. Okay. Well, we're working on some small machine learning experiments. They're small, but we know that the business strategy is supposed to include AI. You know what, if we could show some little wins here with some of this prototyping we've been doing, instead of doubling the flour in the chocolate cake, how would that sound? That would be awesome. I'd love to be able to say that we did something with machine learning.
I'm totally improvising and riffing here. This is my jazz musician in me. The point there was, I simulated that conversation there and we ended up finding out that the real reason the stakeholder wanted something was because their review is coming up and they felt like we weren't being innovative. We weren't having an impact. We weren't showing anything. We connected the dots to a business strategy, which was, we need to start using our data assets and leveraging AI, very vague and high level. Now you're like, “Okay, I know this person needs to feel fulfilled. There is a genuine customer need here. There's this machine learning prototype. This thing we've been working on, I can connect these dots now, and now everyone can win the business wins. This person feels like they're up for a raise or whatever. Our team built something that matters for a real customer, there would be a real customer need.”
I'm not saying just go build a model so this guy can get a raise. That's not what I'm saying. I'm saying we're digging for the unarticulated needs by asking really good questions in a very empathetic way. I will tell you there's a caveat about “why,” at least from the sales world. I learned this in Chris Voss's masterclass. If you don't know him, he's a consultant in sales, but a former FBI hostage negotiator.
He says to be careful with the question “why,” because it can actually trigger a bit of a challenge. Imagine the little kid being told, “We're not going to Disneyland,” who keeps asking why; it can come across that way. I often recommend that people in my seminar mix up that vocabulary a little. Instead of asking why, you might say, “Well, could you tell me more about the reason for doing this?” It's the same as asking why, but it changes the lingo a little, and you stay curious instead of literally asking why over and over.
Loris Marini: Because people will fire you at some point. “This guy asks why all the time.” “Because I said so.”
Brian O’Neill: Yeah. Because I said so. Right. That's often a conversation stopper there and that's the sign of a risk-averse organization that wants to keep repeating yesterday.
Loris Marini: You've got to change it up. Unless you want to keep repeating yesterday, maybe.
Brian O’Neill: Some people are happy with that. I want to do work that matters. I want to impact people's lives. That's just my personal thing. Some people are probably okay with, "Look, I see this whole software, this architecture thing is going to go off the cliff, but it's my job and it's not my problem.”
If that's your thing, fine. That's not the audience that I'm trying to help. I'm trying to help people who actually want this work to matter, who want to have a real impact, and to use this skill set called design to help them do that. You don't have to be a designer to do it. You can leverage some of these techniques.
Some designers will argue with me; they do not like this idea that you can do it without being trained. There's definitely a protectionist thing. I'm not in that camp. I'm in the camp that's democratizing this activity. There's a difference: you may not be a professional at it. A designer is an accelerant for your organization. If you don't want to go through the whole learning process, if you want to cut corners and go faster, designers can help you do that. They can also relieve a data engineer from having to spend as much time doing this non-engineering type of work that is important for somebody to do. That's why you hire people or get help with it: to accelerate.
Loris Marini: Design is not about the aesthetics, right?
Brian O’Neill: Well, it is at some point, but it's not only about that. To me, it's really about the details, of which aesthetics is one part. In some contexts, aesthetics doesn't matter a ton; in other contexts, the rendering of a dashboard may have a lot to do with the way the data is interpreted. Signal versus noise, irrelevant information, charts that lie, typography: these kinds of details matter. Those are things where, yes, you can get better at it on your own, or you can hire someone to do it.
I can probably learn Python. I was an okay coder. I've built a lot of my own little applications in Cold Fusion and Ruby on Rails. People don't hire me to write Python, but if I needed to go learn it, I probably could. Can I do it fast enough? Can I write secure code that compiles properly and that tests right?
The question is, “Do you want to start changing the ship?” That might mean your designers have to write a little code because you're resource-constrained, and we're all contributing. That's one way. Or you can hire experts, pop them into your organization, and get the acceleration. That's really just a resource issue, but I want to invite people to try these processes, sometimes called design thinking.
By the way, design thinking was basically a marketing term for design. When they put the thinking part in it, it took design out of the visual aesthetics world into, “Oh, it's an approach to problem-solving. Okay, I can do that too.” It was actually really smart marketing. I think it was IDEO's positioning to do that, and I'm fine with it, but I don't use that term. I like the idea of “design doing,” because sometimes with design thinking it's like, “I did the thinking,” and the doing never happens. Let's have more design doing. Let's just call it design. That's really what it is.
Loris Marini: It's so true. I feel that every business owner, every CEO of a startup, any founder knows what it feels like to think too much and not act on that thinking, at least not enough. Brian, I know that to fix this gap, you've got a seminar that you run twice a year. Can you tell me a little bit more about that? Who's it for? What's the format?
Brian O’Neill: The seminar tends to skew towards enterprise clients: enterprise data science teams, analytics teams, BI teams, technical product management. Sometimes we have some software industry people there too. The seminar is run for private teams, but I also run it twice a year with public cohorts.
Individuals can just sign up instead of being part of a team. It's an eight-to-ten-week seminar. I'm always listening to my customers, getting feedback, and changing the format. Almost every time I run it, something has changed, because I look at it the way designers look at their work: it's never done. It's a continuum of improvement, so I'm always changing it. Right now, it's 10 weeks. It's probably going to move to eight weeks, and I'm going to compress it a little bit.
We have eight different modules that we work through together, delivered by videos and text supplements, which give very detailed instructions. How do I do a usability test? How do I design a prototype? What is sketching? What are design jams? How do I write a design brief and run an ideation session? It gives step-by-step instructions for data professionals on how to do all that, to get to the point where we're designing for outcomes instead of outputs.
That's the goal of it. I'm very big on trying to attract people who want to try to use this in their work, because if you want to learn how to ride a bike, you don't get a book, you get on the bike and you start. The people that get the most out of it are the ones that have live projects, and they're willing to try to use the modules in their real work.
We meet one or two times a week, through live video calls and we work through this stuff together. Sometimes we get into the organizational issues and some of that. We talk about some other strategies and things that are a little outside of the design thing, but whatever I can do to help people get more bridges built so they're on the right path. That's what our live calls are about.
Recently I've started developing a practicum to go with this as well. Sometimes teams struggle to figure out, “How do I insert this into my current project? We're nine months in on this giant enterprise thing; we can't really redo this part of it.” So we have some exercises that we do as well. Right now, they're themed around COVID-related dashboarding. I wanted it to be something realistic that we could all relate to, so I'm using COVID as a domain for us to work on, as a way to practice some of the materials in there.
It's very much geared to people in leadership positions in data science, analytics, digital transformation, these kinds of areas. Sometimes we have designers and UX people in there as well, but it really skews toward data science people.
Loris Marini: How long does it go for?
Brian O’Neill: It's 10 weeks. It's probably going to move to eight weeks; I'm going to compress a little bit more into those eight weeks. The next one starts in March 2022. I think March 28th is the first week or something like that.
Loris Marini: Oh, fantastic. I'll add a link to the show notes for those who are interested in checking it out.
Brian O’Neill: Yeah. It's designingforanalytics.com/theseminar. All one word. That's how you get to it.
Loris Marini: The seminar, there you go. Well, Brian, there are a gazillion other topics I'd like to talk to you about, but I'm afraid we're running out of time for today, and I've already taken 10 minutes more of your time. It's been great to have you here and chat with you. I really want to thank you for sharing these insights and, in general, for being this voice in my head. I've been listening to your podcast for two or three years now.
It's been really, really useful to hear a different perspective, a different mindset, a different way of thinking compared to what seems to be the norm around here. I was one of those who associated design with the realm of artists, particularly focused on aesthetics, and through your podcast I learned that it's so much more and way more interesting than I thought.
Brian O’Neill: Excellent. Well, like I said, it reaps rewards; even if you're not the one doing it, you can benefit from it. And as someone contributing to a big project, I think it's fun to do. It will probably change your whole perspective on how you do the thing that you're good at. If you're a data engineer or whatever, you will see value in having that lens on your work, I think.
Loris Marini: One last thing. Where can people follow you? Is it Twitter, mainly LinkedIn?
Brian O’Neill: I'm most active on LinkedIn. It's just Brian T. O'Neill; you'll find me. My company is called Designing for Analytics. I am on Twitter, not a super ton, but it's @rhythmspice, R-H-Y-T-H-M spice. That's because I'm a drummer and I like Mexican food from Arizona. Rhythm, spice. That's where that comes from.
Loris Marini: Cool. We'll leave it at that. Thanks a lot, Brian. Thanks again. Enjoy the rest of your afternoon there.
Brian O’Neill: Cool. Thank you so much.