Kate O’Neill: “Tech Humanism: The Future is Meaningful” | Talks at Google

[MUSIC PLAYING] TIM: Thanks again for
coming and please join me as we give a warm
welcome to Kate O’Neill. [APPLAUSE] KATE O’NEILL: Thanks very much. Thank you, Tim. Thank you, everyone. Hello, fellow humans. Wait, actually, I guess I
should check and make sure, are there any robots here? Any robots? Raise your hand if you’re
a robot in the audience. I don’t see any, so I think
we’re safe to proceed. But I ask this
question every once in a while, because I figure,
one of these days, that it’s going to be kind of a little
spindly, mechanical arm that comes up when I
ask that question. I’m not really sure what I’m
supposed to do at that moment, if I’m supposed to invite the
robot to come up here and take my job, because that’s kind
of how we talk about robots and automation and AI
and everything these days, with this fear,
this dread about what it’s going to mean for human
jobs, for humanity, for existential
reality as we know it. So my premise has been
to think about how we can make technology
better for humanity, better for our future, and
of course, better for serving business purposes. But in doing so, I think
we have to go back to square one, back to the foundations, and that is to think about
what it is that makes us human. What is it that
makes humans human? So I’ll ask you to
indulge me and just think for a moment of a
word or a characteristic that you feel really captures
the human experience. What one characteristic is it? And I won’t have you
call it out or anything but just hold it in
your head for a moment. What do you feel like it is
that really makes humans human? And so let me ask, how many
of you, by a show of hands, thought of something like
creativity or problem solving or innovation
or something like that? Anyone? No one? A couple people in the room. Good. So that’s a pretty
common answer. This is a more common
answer, I think, empathy or love or compassion. Anyone? A few more hands. Those are both great, I
think– great characteristics and admirable
qualities of humans. I didn’t necessarily
specify that these needed to be uniquely
human attributes. But I do think, if
we think about those, they don’t feel like
they are uniquely human. We’ve seen creativity
and problem solving in non-human animals, like otters banging mollusks on rocks to open them, and ravens using tools. And we’ve seen
compassion and love from elephants and
dogs and other species. So we know that those are
exhibited by other animals. And I don’t think
it’s too far fetched to imagine that in the
not too distant future, we might see at least
superficial indications of machines exhibiting
those kinds of qualities in their behavior and interactions
with humans and maybe even, eventually, other machines,
which will be very interesting at a surface level. But how many of
you, when you think about what that most
human of characteristics is, thought of checking a box? Anyone, by a show of hands? Of course, you didn’t,
because it’s absurd. But this is the
premise, the problem that we encounter in
technology a lot of the time is that we don’t
necessarily think through this kind of
foundational experience. And we are presenting
absurdities as if they are
foundational truths. And besides which,
if we were to try to claim that this is a
uniquely human characteristic, we’d get beat out anyway
by machines that can also do this characteristic. So I don’t know how
many of you have seen this little [INAUDIBLE]
guy, but he’s kind of fun. [LAUGHTER] So I have come to be known
as, as Tim mentioned, the tech humanist. And I take this moniker
pretty seriously, because I feel like there
is this area around which technology does
have the capacity to solve human problems. It also has the
capacity to scale, like we are experiencing
automation and AI and all kinds of other emerging
technologies, bringing scale to the types of solutions
we create like never before. And so I think it behooves
us to really think about what the human
experience around that scale is going to be and what
the human experience around technology is going to be
and how we can make technology better at solving business problems and solving human problems at the same time. So as I talk about
being a tech humanist and as I think about
solving those challenges, I’m excited that you
all are in this room and are on the livestream and
watching on the video later, I hope. And then I want to
offer that, perhaps, that is also you, that you
maybe also are a tech humanist. And I’d like to offer
that term to you, so that when you see my book– as Tim mentioned,
it just came out September 24, “Tech Humanist”– that you will see
that title and think, I’m describing you as well,
because that is the truth. I’d like to see us all
join hands in this movement to create more human technology
and more wide-scale human experiences that are
more meaningful and more integrated and more dimensional
with the technology we create. So the premise
there is, how can we both make technology
better for business, to solve business challenges,
and make it better for humans? And I think that that
“both-and” framing is the key to the whole thing. We need to understand how to accept that these things do need to be integrated. And I would propose that the
way to accomplish this at scale is to focus on creating more
meaningful human experiences at scale. So how do we focus on
getting that meaning into the human experiences? So the way that that looks
in the model that I propose is this. On one hand, how can we
think about scaling business meaningfully through data,
through strategic alignment and automation? How can we think about using
the tools at our disposal to make business more
effective, while also creating more meaningful
human experiences and scaling those through
data and automation? I’ve had the opportunity
to test this idea with a lot of
different companies that I’ve consulted with,
spoken with, advised, worked with on different
projects over the years. And I’m excited to say that it
works in almost every industry I’ve encountered. It provides great results
no matter who you are or what you’re
trying to accomplish. Every company is trying
to achieve profit. Every company is trying to
achieve revenue-based metrics in what they’re going about. Even if you’re a
non-profit organization, you still have to be accountable
to some sort of profit and loss scenario. There’s some sort of
breakdown of the financials that you need to
be accountable for. And I’m happy to tell you
that the work of creating meaningful experiences actually
does lead to increased employee retention, decreased customer
acquisition cost, increased loyalty, and all kinds of
other directional metrics that lead to more profit. Of course, it’s also
the right thing to do. It also creates a better
experience for all of us. And I want everybody
to be motivated by this aesthetic of wanting
the world to be better and creating more meaningful
experiences being its own end. But if we have to be motivated
by profit, we can be, and that’s all a
good thing, too. So let’s unpack this
just a little bit, what I mean when I
talk about creating meaningful human
experiences at scale. What does that entail? So first, let’s think about
what meaningful really is. So the example of the click the
box to confirm your humanity, I mentioned, I think, that
that’s an absurd example. And I have this running
hobby of appreciating the tension between meaning
and absurdity in the world. But I feel like anywhere
there is a lack of meaning, it opens up this void into
which absurdity can flow. So where we don’t create
enough meaning, where we don’t describe
enough meaning, we allow absurdity to flourish. So there is enough opportunity
for that in technology as it is, and in business, really. I think you all probably
have this experience, I’m guessing, that
there are areas where– let’s say, you talk
about work things in ways that you wouldn’t talk
about with your friends outside of work. You use language or
terminology that your friends who don’t work with you
would not understand. Or there are things that you
do at work that just don’t– that are kind of like, that’s
the way we’ve always done it. But any time you
think to yourself, this doesn’t make sense,
that’s a really big clue, because making things make
sense is what meaning does. So we have an opportunity to
step back and assess absurdity and recognize that we can infuse
meaning into those structures and create more opportunity to avoid absurdity, to keep absurdity away. So the reason that
that works, I believe, is because humans
crave meaning more than any other characteristic. So if you were to ask me what
I think makes humans human, this is what it is– is that we seek meaning. In all areas of life, we
are compelled by meaning. If you offer us a meaningful
answer or solution, we are compelled by it. How many of you are
Douglas Adams fans? Anyone? A few. So you already know where
I’m going to go with this. In the “Hitchhiker’s Guide
to the Galaxy” series, the answer to the
great question of life, the universe, and
everything was– AUDIENCE: 42. KATE O’NEILL: 42, of course. So Douglas Adams wrote
or said in interviews that he chose 42 because it was
not too high and not too low of a number and because
it was just funny. It is. But I don’t know
if you know this, but on Reddit,
you can find this. And a few other places
collected around the web, there are collections of
alternate explanations for why 42 actually
kind of makes sense as the explanation of
meaning in the world. So for example, there are
42 characters in the phrase, it’s the answer to life, the
universe, and everything. So you’re convinced now, right? Also, there’s 42 dots
on a pair of dice, so that answers everything,
which I thought was just a throw away explanation. But my husband
said, well, life is kind of like a roll of the
dice, so I thought, well, all right, fair enough. My favorite of them is this. 42 is apparently the
Unicode character– Unicode value for the
asterisk character, which as you may know, is
a wild card symbol, often, in computing, which means
it can mean anything. It’s obviously
just a coincidence, because Douglas Adams
didn’t mean it that way. AUDIENCE: That makes
it sound better. [INAUDIBLE] KATE O’NEILL: But that’s
the important point, is that even though Douglas
Adams didn’t mean it that way and it is
just a coincidence, it is an absurd and poetic
and beautiful coincidence. But we always make
meaning the way we have always done
and always will, which is by ascribing
different significance to different events, based on
how much we value them, or in other words, by
making it up as we go along. And I think that’s the
encouraging thing about this, is that even though
we talk about robots and automation and AI
in the broad mainstream in a scary way,
what this suggests is that there is this open
interpretation to the future. We get to make meaning for
the future as we go along. We get to decide the
future as we go along. You get to decide the
future as you go along. And that’s really incredible,
because right now, the possibilities,
the power of what’s happening within technology and
within the scale of emerging technologies, means that
we have the capacity to create the best futures
for the most people. There’s really this
potential and, I think, even an ethical
responsibility, to think about how solutions
can scale to that sort of level. So let’s go back to
unpack and create meaningful human
experiences at scale and what is it that human
experiences really describes. We talk a lot in business
about customer experience, user experience, or depending
on your industry, it may be patient
or guest or visitor experience, student experience. We don’t often, in
many industries, talk about human experience
in this integrated way, in this way that brings
all of those roles together and appreciates
the fact that there is this kind of holistic
human experience that transcends any of
those roles, that you are– we are all of those roles
at any given point in time. And so even though
you may be performing as a customer in a
customer experience, you are still a human coming
into that customer experience. The important thing about that
is the transcendent empathy that can come from
understanding the baggage, the context that someone
brings to that interaction. So there’s an
opportunity to create these more dimensional
interactions, these more integrated interactions. And so to create more
meaningful human interactions, it turns out, we
need to design more integrated human experiences. We need to think about how
to blend all of those roles and understandings together and
bring an understanding of where someone has been, where you’re
meeting them in the world, and how you can create
these kinds of senses of dimension and holism. So I promised a Venn
diagram to a friend earlier, and I have it, the best Venn
diagram in the entire world. I’m sure you all have seen this. If you haven’t, you’ll want to
rush out and get this t-shirt right away. Tenso Graphics makes
this Venn diagram. But the illustration
here, really, I think, gets at the point. When you think about
what is possible on one side of an equation,
such as the best technology or the technology
to make business better, and what’s possible on the
other side of the equation, like what’s possible to make
technology better for humans, it’s only by really thinking
about the intersection of those things that you really
come at the best solutions, like platypus keytar. You don’t get platypus
keytar until you’re doing some serious both-anding,
so that’s the opportunity. And really, what
we’re talking about is augmenting human experience
with data and context. So the broader opportunity,
in a technology sense, is to really think
about, where are you meeting someone in the world? What data do you have to
understand and appreciate where they come from, what
their preferences are, what their tastes are? And how can you create context
that addresses the objectives that you have as a
business and the objectives they have as a
human and the role that you’re meeting them
in and the alignment of those objectives? How can we come at that in
a way that provides that? And so I actually
kind of think of this in a way as being meaning
as a service, in a sense. It’s an opportunity to
think about offering up a meaningful
construct that aligns your objective and
their objective and providing the
hooks, in a sense, to be able to expand upon that. And I really mean any
meaning of meaning. So meaning, as we talk about
it, could be at any level. We talk about meaning as it
relates to communication. So I’m a linguist
by education, so I think about the semantic
layer, how we communicate with one another, what we
convey across our communications with one another. You probably spend
a lot of your time, if you do a lot of
development or engineering, in patterns and significance. That’s probably a layer that
you spend a lot of time in. But it could be all the way out
to the existential and cosmic layer. What is it all about, Alfie? That sort of thing. I think, in a sense,
it’s almost like API thinking for everything. You can really think about
how one idea integrates with another and how what
is meaningful on one side, like the business side, can be
meaningful on the human side of the equation. How do you provide
hooks and intelligence across those different
parts of the experience and make sure that
that meaning is being transferred through that layer?
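As a loose sketch of that hook idea– this example is mine, and every name in it is hypothetical rather than anything the talk prescribes– you can picture the business side emitting events while a thin meaning layer translates them into something human-facing:

```python
# A hypothetical sketch of "meaning as a service": business events pass
# through a meaning layer on their way to the human. Names are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class BusinessEvent:
    kind: str       # e.g. "order_shipped"
    context: dict   # raw metadata the business tracks

# A "hook" maps a business event to a human-meaningful message.
Hook = Callable[[BusinessEvent], str]

HOOKS: dict[str, Hook] = {
    "order_shipped": lambda e: (
        f"Your {e.context['item']} is on its way - "
        f"it should arrive by {e.context['eta']}."
    ),
}

def meaning_layer(event: BusinessEvent) -> str:
    hook = HOOKS.get(event.kind)
    # Where no hook is defined, no meaning has been designed in;
    # flag the gap rather than letting absurdity fill the void.
    return hook(event) if hook else f"[no meaning defined for {event.kind}]"

print(meaning_layer(BusinessEvent("order_shipped",
                                  {"item": "book", "eta": "Friday"})))
```

The integration that most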
brought me to this realization is thinking about how the
design of experiences online now regularly intersects with the
design of experiences offline, that more and more as we think
about physical experiences, they come with some sort
of digital component or some sort of trackability or traceability with that physical experience. Or when we think about
digital experiences, we have to think about
the physical context somebody might be in as they
encounter those interactions. So I wrote about this in my
last book, “Pixels and Place.” So thinking about things
like the internet of things and wearables and
beacons and sensors and all kinds of
connected smart devices, and how those bring
that connective layer between those two worlds. But the important
point about that is that just about
everywhere interesting that the physical
world and digital world connect, that connection
layer happens through us, through humans, through
our human experience. It’s our movements. It’s our behavior. It’s our patterns. It’s what we want, what we do,
what we indicate that really creates that connection. So again, it comes
back to thinking about that integrated
human experience. So I proposed this model in
“Pixels and Place,” which is integrated human experience
design, thinking about how to blend those online
and offline contexts, thinking about how to come
across all the different levels and roles of humanity
that you might encounter, thinking about how to
think about experience in an integrated way,
interactions and transactions across all the different touch
points that you might have. And note the way I’m defining
the word design, which is the adaptive execution
of strategic intent. So you know you
have an intention. You have a purpose
to what you’re trying to do with any
given design initiative. And you know that you’re
going to probably not get it exactly where you want
it to be on the first go, so we need to build an adaptive
iterative process to this. And the more we do this
around a framework of creating that meaningful interaction and
that dimensional relationship between the business entity and the human that’s consuming that
experience, the more we stand a chance of conveying
some sort of meaningful truth. So the elements of
integrated human experience design, as described
in “Pixels and Place”– I’ll go through
it really quickly, because what I want to get to
is that with “Tech Humanist,” I’ve actually
built out upon this to a more automated
understanding of experience. But within integrated
human experience design in “Pixels and Place,” we look
at integration, of course. That comes along for the ride. So we’re already talking
about all these layers that are being integrated, the
online and offline contexts. We’re talking about
dimensionality. So how does something
come to life across different touch
points or ways in which you interact with people? How do the metaphors and
cognitive associations come to life? What sorts of intentional
things are you communicating through all of
the choices that you’re making, about the language that you
use, the iconography you use, and the cognitive associations you’re bringing along with that? Intentionality and
purpose, so how have you defined what it is
you’re trying to accomplish? And that comes into play at
a more holistic, macro level as well, which we’ll
get to in just a moment. And a value and emotional
load, where are you meeting someone in the world? How challenging is that context? If you’re designing for an
encounter in a hospital, for example, it’s going to
be a very different type of a value or emotional load
than if you encounter somebody at a children’s museum
where they’re having fun– hopefully, having fun. Alignment is, of course,
that foundational principle of understanding what
the business objective is and understanding
the human objective and making sure that they are
as tightly aligned as possible. And then adaptation
and iteration being, of course, that
process of making sure that we are building
upon what we’ve learned, we’re using experimentation and
that mental model of building our learnings as we go. There’s also this premise
that experience has, in a sense, two layers to it. If you think about human
nature as this ongoing truism, like we all have,
throughout time, needed to drink
water, for example. But then there is this
shape of that experience and how it gets packaged
up and dimensionalized. And so you can see
this bottle of water is an example of
saying, well, if I were to put that water into
a heavy glass bottle and label it with some sort
of minimalistic typeface brand and create that whole aesthetic,
and it has this hipster vibe to it, maybe
I’d feel like I’m being a more aspirational
version of myself, because I’m drinking maybe even the same
water out of this cool bottle. And I feel like a
better version of who I am than if I just drank it out of a glass from the tap or whatever. So there’s this ongoing
way in which shapes evolve. And it’s important, I think,
to recognize that as we create these integrated experiences,
that human experiences do evolve, but the shapes will
always change more readily than the nature. And it helps us get into contact
and create this continuity across time with the
human nature that persists throughout
the experiences that we’re designing
for, and yet, be ready to adapt to the
changing shape of experiences. So with that, that leads
us into this opportunity to think about how machine-led
experiences can actually be more meaningful. The more we’re thinking
about automated experiences and artificially
intelligent experiences, how can we think about making
sure that the humans that interact with those are having as much of a sense of meaning and significance and dimension as possible? So what I proposed
in “Tech Humanist” is that we don’t just
automate the menial. We automate the meaningful. I’ll go through each one of
these in detail, of course. That we automate empathy, that
we use human data respectfully, and that we reinvest
the gains in efficiency that we get in business
from automation back into humanity and
human experiences, at least at some level. And so we’ll talk about
each one of these. I’ll start with this. I think a lot of times when
we talk about automation, our base understanding is
that we should automate menial, meaningless
things, so that humans can do higher order tasks,
which is a nice enough premise, until you start thinking
about that at scale and start imagining a world in
which all kinds of functions have been automated. And most of our
world is automated and most of our interactions
are with machines, and they’ve all been
automated to be meaningless. So I think it’s a “yes-and,” a “both-and” scenario. We do need to think
about automating the menial, meaningless
functions to free ourselves up to think about
higher order things. But we also need to think
about, what’s working? What are human
interactions that convey some level of empathy
and nuance, that create some sort of
significance and dimension? And how can we work to
automate those as well? How can we capture some
of that significance in those automations? So in this way, we’re talking
about using data and technology to scale, not just for
efficiency but for meaning, to think about ways
that we can actually create a sense of dimension
in the world around us. One way that that works is– I like to think about this
model of this relationship between metaphor and metadata. And I think the
easiest way to explain this is a slide I
stole from Brian Chesky, the CEO of Airbnb,
when he was demoing a couple of years
ago, the new campaign that they were launching
at the time, which was the “Don’t Go There,
Live There” campaign. Anybody familiar with that? Did you guys run
across that at all? So the idea was, even if
it’s only for one night, go to every place you
visit as if you’re a local, treat that city
like you’re a local. And this slide was
an illustration of how you could experience
a different type of approach on TripAdvisor versus
with the Airbnb approach of trusting the local experts. So this is obviously Paris. And note that
everything on each list is different, except
for one, which is the Luxembourg Garden, which
is my favorite place in Paris, so yay, me. Each of the other things
on the TripAdvisor list is really just a brute
popularity contest. It’s all just, what are the most
bucket list items that someone would associate with Paris? And on the Airbnb
side, it’s who has the most significant
understanding of the city of
Paris, what do they recommend as being the places
that you must visit and must experience in Paris? And what I think
is interesting, is when you think about the
metaphor that’s really underlying this, it’s clear
that the TripAdvisor metaphor is much more about
this casual tourist experience of the world, this
conventional understanding. Whereas the Airbnb
thing is that, don’t go there, live there,
this knowledge of the expertise. And then the metadata clearly
is– it’s like the same city. These are all the
same landmarks. They exist in either case. But one is being
rated for popularity, and one is being ranked for
this expertise or authority.
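To make that contrast concrete with a toy example of my own– the scores are invented, and only the sorting idea matters– the metadata, the landmarks, stays the same, and the metaphor is just the key you rank by:

```python
# Same metadata (the landmarks), two different "metaphors" (ranking keys).
# The numbers below are made up purely to illustrate the idea.
landmarks = [
    {"name": "Eiffel Tower",      "popularity": 98, "local_score": 61},
    {"name": "Luxembourg Garden", "popularity": 80, "local_score": 95},
    {"name": "Louvre",            "popularity": 96, "local_score": 72},
]

by_popularity = sorted(landmarks, key=lambda p: p["popularity"], reverse=True)
by_local_love = sorted(landmarks, key=lambda p: p["local_score"], reverse=True)

print([p["name"] for p in by_popularity])   # the bucket-list ordering
print([p["name"] for p in by_local_love])   # the "live there" ordering
```

So the way that these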
two dimensions interact with one another creates this
more meaningful understanding of what the company
is trying to achieve and how it brings it
to dimensional life for the person that’s
interacting with it. Because that meaning
informs the purpose that the company is bringing
to life in their experiences, and the purpose of the
company that they’re trying to bring to life
fosters the meaning that the person is going to
experience when they interact with the touch points
that the company creates, if they’ve done it well. And the nice thing about this,
when you think about how this really comes to life in an
automated, machine-led way, is that humans really– I think, when you think about
what the research shows, what we most thrive on
is a sense of meaning and common goals and a sense
of fulfilling something bigger than ourselves. Whereas machines thrive on this
sense of clear instruction. And what leads to both of
those things is purpose. Now, I’m not talking
about purpose like in this touchy, feely,
spiritual sense, necessarily. I’m talking about purpose as
a set of clear instructions or a sense of clarity about what
it is you’re trying to achieve. And what that does is
leads to this ability to bring all your resources to
bear in a very efficient way and to align all
those resources, to set priorities
very effectively and make sure that everybody is
rowing in the same direction. My favorite example
of this, of companies setting a strategic
purpose and really using it to operationalize
around, is Disney theme parks. And from a digital
transformation perspective, the MyMagic+ program and its MagicBand– how many of you have been
to Disney World or one of the Disney theme parks
since they’ve introduced this? It is pretty magical, right? So they’ve articulated
their purpose statement as, create magical experiences. It’s really just
those three words, create magical experiences. And so you think about
across the organization, just about anyone
in any function can understand how
they can solve problems relative to their
scope of their work as it relates to creating
more magical experiences. A problem that’s
brought to them, they can just go, I
know how to solve this, as long as the company actually
gets in line behind that and allows them the autonomy
to solve the problem the way they need to. But think about
that as it relates to digital transformation
and deploying a billion dollar program, which
this was on that investment scale for the company. And you can do that with
complete confidence knowing that this Magic Band is
going to allow people to be able to go around the
park and use it as payment, use it as access,
use it as preferences on all kinds of information,
tracking that certainly gives a lot of useful
information to the company as they merchandise more
effectively and so on. But that ability to
translate the purpose into a deployment at
a billion dollar scale is very clear from that program. So we can design
experiences that are aligned with
strategic purpose, so that we can actually see that
understanding of purpose scale to massive levels. And purpose is the shape
meaning takes in business. So that’s how we get
that meaning to be felt and understood at a human level. By the way, I keep
talking about scale, so I want to unpack
that a little bit, too. So when we think about
creating meaningful experiences at scale– normally when we talk
about scale in a startup or a corporate business, a
corporate growth scenario, we are talking about
removing hard limits, so that growth
opportunities can flourish. And usually, we’re talking about
that in terms of multiples, let’s say, like 3x
or 4x or 5x or 10x, if you’re very, very lucky. But what happens
when a notion meets nearly unlimited expansion
possibility, when data can model it and
software can accelerate it and automation can amplify
it and culture can adapt it? And that’s what,
really, we’re talking about with machine-led
experiences. And that’s why it’s so important
that we think about creating these in a more meaningful way. Because if we don’t create
the meaning into the system, what are we doing? We’re allowing
absurdity to encroach, and we don’t want
absurdity to scale. So my favorite example
of absurdity at scale is one that– I don’t mean to knock the
program or the product, because I think
it’s incredible– the Amazon Go store. How many of you have
experienced it in person? It’s pretty cool, right? The idea that you can actually
just walk into a grocery store, you scan your
app as you go in and then just pick up whatever
you need and walk right out. And there’s no
checking out process. It just knows. Through cameras and
sensors and so on, it knows what you’ve
picked up and what to associate with your
account, and you’re good to go. So obviously, we have to
talk about cashier jobs and what that means for
the future of human work as that goes to scale. But let’s set that
aside for just another moment, because right now, what I’m
focused on is something else. What happens when you open
the app for the first time and you get this onboarding
that explains that, as you pick things
up off the shelf, the sensors know what
you’ve picked up. And as you put it into
your basket or in your bag, you’ll be charged for it. So it says, don’t pick up
anything for anyone else, which is fine, except that you
start thinking about, I don’t know about
you, but I get asked all the time to
help people in stories. You seem pretty tall. You probably get asked for that. You get asked. And now, it’s like,
well, I can’t really help that person to get
the thing off the shelf, because it might
charge me for it. And there’s a way to
get it charged back, and Amazon might fix this
before it goes to scale. But really, 3,000
Amazon Go stores have been announced to open by 2021. So if this doesn’t get fixed and
if it is something that we all start adjusting our
behavior and not helping other people
in the Amazon Go store, well, that’s the future of
retail we’re talking about. 3,000 Amazon Go
stores by 2021 is going to mean that retail
environments are going to be this cashierless
environment before too long, and so we won’t help
each other in any stores. And how long is it before we
don’t help each other at all? I know it sounds like
hyperbole at some level. But what I mean to suggest is
that the idea that experience at scale does change culture. And I think that’s important
to recognize, because really, experience at scale is culture. What we all collectively
agree to do with each other and how we agree to interact
with each other is culture. And all of the work that we
do creating human experiences sets that context and
creates that modality, so that understanding is
super, super important. So I do have a slide here. And if anybody
needs these slides, I’m happy to share them. But a slide that asks questions
and it’s in the book as well. If we were to try to deprogram
the absurdity of not helping each other, we could
ask some questions to step back from
that and think, how do we not create
experience at scale that’s going to be absurd? How do we make sure
that the brand isn’t going to be impacted
if we create products or solutions
that might scale in ways that are unexpected? How do we pivot
to deal with that? What does that look like? So there’s some questions we
can ask to anticipate that. But primarily, I
think the challenge is or the opportunity is
to think about meaning and to think about keeping
absurdity from scaling. This is a comic that
was drawn for me by my friend Rob
Cottingham to illustrate the opposite of tech humanism. I don’t know if you can read it. It says, “It’s getting
harder and harder to hold on to my humanity. But wow is it easy to track
my Amazon deliveries.” So of course, that’s absurd. But it’s the idea that
we aren’t thinking about, what do we really want
meaningful experiences to look like? What do we really want our
future humanity to look like? How do we create
technology solutions that amplify our
humanity and don’t get in the way of humanity? I think, that’s really
what we’re talking about. The second premise of
machine-led meaningful human experiences is to automate
empathy, which again, may sound like it’s a contradiction. But I think there’s
an opportunity to think about the ways that
any kind of experience that we design creates some
kind of connection and to create as meaningful
a connection as possible. So how many of you remember
the “Seinfeld” episode where Kramer got a
new phone number, and it was one digit
off from Moviefone? Anybody remember this? So anybody remember Moviefone? I know the app just
ended as of a week ago. But in the ’80s and
’90s or whatever, we all had to pick up the
phone and actually call a service to tell us
what movies were playing. And in this episode, Kramer
had gotten a new phone number. It was one digit
off from Moviefone. And it’s obviously a touch-tone service, so he couldn’t understand
the touch tones. He decided that he was going
to impersonate Moviefone. But he couldn’t understand
the touch tones, so he ends up just saying,
why don’t you just tell me what movie you want to see. And I find this to be
such a prescient example of how we think about
machine-driven interactions and how we think about what
human-based interactions look like, the relationship
between those two. So I think of the
Moviefone-Kramer model as agile deployment of
emerging technology. That you can think about
the robotic interaction and the human interaction
as being somehow interchangeable
with one another, so that you can actually
use human interaction to gather patterns that
you will encode as chatbots or other types of automation. And not to suggest
that you would lie, that you would present
a human and have it be posing as Moviefone or
whatever your equivalent is, but rather, that you
would have some kind of agile human-based
interaction that gives you the insights
to be able to create scripts and create patterns
that help you develop frameworks for automation. And of course, you’re starting
with if, then statements. But you’re quickly trying
to work beyond the if, then to get to the nuance. So if, then is
easy to anticipate, in the kind of frequently asked
questions model of automation. If you’re automating, let’s
say, a chatbot for a bank, you know that a lot
of your interactions are going to be about how
to change your password, for example, or how to
set up a new account. So if someone wants to
create a new account, then here’s the answer and
here’s the flow diagram that you can walk them through. But the nuance beyond that is,
I need to change my password, because my ex is stalking me,
and it’s a dangerous situation. And there needs to be some
human interaction there. There needs to be some human
nuance to that experience.
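As a small sketch of how that distinction can look in code– nothing here is from the talk, and the canned answers and escalation cues are all hypothetical– the if, then layer handles the frequently asked questions, and anything carrying emotional or safety weight gets handed to a person:

```python
import re

# A minimal, hypothetical sketch of FAQ-style "if, then" chatbot routing
# for a bank. Canned answers and escalation cues are illustrative only.
CANNED_ANSWERS = {
    "password": "To change your password, go to Settings > Security.",
    "account": "To open a new account, start here and have your ID ready.",
}

# Words suggesting the request carries safety or emotional nuance
# that a canned answer should not handle on its own.
ESCALATION_CUES = {"stalking", "stalker", "danger", "dangerous", "scared"}

def route(message: str) -> str:
    words = set(re.findall(r"[a-z']+", message.lower()))
    if words & ESCALATION_CUES:
        # The "automate empathy" move: recognize nuance and hand off.
        return "I'm connecting you with a person who can help right away."
    for keyword, answer in CANNED_ANSWERS.items():
        if keyword in words:
            return answer
    return "Sorry, I didn't catch that. Could you rephrase?"

print(route("How do I change my password?"))
print(route("I need to change my password because my ex is stalking me"))
```

So that’s more where the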
empathy gets automated into the process, is finding
those types of interactions and finding the
opportunity to build out the relationship between
the automated and the human. And also, when we’re
looking for patterns, that we’re not just looking
for arbitrary patterns and encoding those. Arbitrariness also
sits in opposition to meaningfulness in much
the same way that absurdity sits in opposition
to meaningfulness. So we want to make
sure that we’re finding meaningful patterns
and automating those. Because in all of the
work that we do with this, we cannot leave
meaning up to machines. Machines won’t do meaning. That’s just not something that
machines are really equipped for. So it has to be humans that
determine what is meaningful. And I love this example. I know many of you probably work
around image recognition or AI, and so you know this
dilemma very, very well. I know a lot of algorithms
have advanced since this day. The puppy versus muffin
problem is one of my faves, and it always gets a chuckle. I see some smiles
in the audience. But it’s true,
that subtle nuances aren’t really where AI shines,
in many cases at this point, at least not at a meaningful
recognition level, not being able to say
that the muffins all have this certain meaningful
characteristic and the puppies all have
this certain meaningful characteristic. Whereas, I believe many
of you are probably able to determine
which one’s a muffin and which one’s a
puppy pretty well. Here’s some more, by the way. You know which one’s a barn
owl and which one’s an apple. You’re not having any
trouble with that, I bet– which one’s a croissant. But I think this
introduces an idea that there may be
opportunities for humans to work alongside
machines in ways that add nuance and
empathy and understanding to the machine-led processes,
because humans generally do nuance pretty well. That’s something that
we are encoded for. We get meaning. That’s what we’re
about, so we’re able to add that into
the value proposition. So when we think about the
relationship between machines and humans as we move
into the future of work and the future of
that economy, I think we’re going to add the
most value by being human and understanding
meaning and nuance and understanding value and
understanding each other and adding that layer
to those interactions. So the third tenet of the
machine-led meaningful experience is to use
human data respectfully. It comes from this
idea that when we talk about digital
transformation, I kind of feel like that’s
a little bit of a misnomer at some level,
because we already made a digital transformation
the moment we started spending all of our time in
front of screens, transacting in bits and
bytes with each other. So that’s kind of a done deal. And what’s really
more meaningful than that is the data
transformation, the fact that all of this is happening
with a data layer behind it, that business has all
kinds of data visibility and transparency through
the supply chain, through logistics,
through operations. And everything has this
clarity and transparency about what kind of trackability
is going on, what’s measurable and all that. So it’s a really interesting
layer to work with. But when we talk about digital
transformation, including automation and
digitization, all of that, all of the many nuances
of that, fundamentally what we’re talking about
is agility with data. As companies become more
digitally ready and digitally transformed, they’re
becoming more agile with data-based decisions. And that data that we’re talking
about is really our data. It’s human data. For the most part, business
data is largely about people. It’s our purchases. It’s our movements
through space. It’s our preferences. It’s all of our tastes and
indications that we’ve made. And really, what I’m saying
is analytics are people. At some level,
for the most part, when we are looking at
graphs and reports and so on, we’re generally looking
at the needs and interests and motivations of real
people that are there buying from our companies
and interacting with them and driving all of
these decisions for us. And I think the flip side of
appreciating that and treating that data with respect
is understanding that what we encode
into machines is really about us, that
we are putting ourselves and our biases
into the encoding, into the algorithms and
everything that we create. So the opportunity,
I think, as we look at this tech
humanist future, is to encode the
best of ourselves, is to think about
how we can create our most egalitarian
viewpoints and our most evolved understandings
into the data we model, into the algorithms we
build, and into the automated experiences that we
design and create. So we can use our human data to
make more meaning in the world. And we can recognize that
the more we create relevance in the alignment
between business objectives and human
objectives, the more we are creating a form of respect. But the caveat to that
is that discretion is a form of respect, too,
that we’re also allowing people to, say, be forgotten
by us and allowing them to take their data with them,
and that we do not make people
creeping them out by knowing so much about them. And that we protect
human data excessively, that we make sure we’re
being very, very, very careful with the data
that we collect and use in business decisions,
because we recognize that it is human data. The last point– and
it’s a quick one, because this may or may not be
within scope for many of you. But that as we think
about the gains that we make in our
businesses through automation and machine-led experiences,
that we think about reinvesting some of those gains into how
to create more meaningful human experiences at scale. And I don’t think it’s
really a mystery why that’s so important. There was a study done,
a couple of versions of a study done on what
jobs are potentially considered automatable. And this is one visualization
of the data from that study that shows the different
cities in the United States and how likely the
jobs that are there are to be automated
over the coming years. I zoomed in on New York,
which is where I’m from, and you see 55% of
jobs are considered potentially automatable. And you have to think about the
socioeconomic impact of that. You have to think about the
psychological impact of that, that humans have had a very
deeply connected experience to work, that we’ve derived
a lot of our sense of meaning and identity from work. We say who we are in
terms of what we do, and we have surnames like Butcher, Baker, Tanner, Carpenter, and so on, that derive from ancestral jobs and have been carried down through
generations of our family. And that’s true across cultures. So it’s a really important
thing to understand, that jobs are going to change. There’s going to be job
displacement, augmentation, and replacement by automation. And we don’t yet know what
that means for human meaning, and we don’t yet know what
that means economically. We don’t yet know what that
means sociopolitically. And so there’s a huge
opportunity for us to take the gains that
we make in automation and have this ethical
contribution back into society, back to humanity
and say, what can we do to foster a sense of meaning
and a sense of community and a sense of connectedness
and a sense of more humanity with those gains? So I think we can also think
about repurposing human skills and qualities into
higher value roles. One of the executives that was
in a strategic workshop I led ran a utilities company
in South America. And he found, through
our work, an opportunity to automate a customer
service function that was their most heavily accessed customer support function. And once he saw
that opportunity, he saw that there was a
way to take the humans that were working in that job and
create oversight positions for them, so that
they could continue training the algorithms
that were going to create that automation. So obviously, a very
straightforward kind of replacement, it may
not be a one-for-one. We may see job loss anyway. But some of that
reinvestment is going to offer up higher
value human roles. So here are those
four tenets again. And I think the summary of
this really comes back to, as you think about the
work you’re trying to do and how to create these
more meaningful experiences through automation and through
artificial intelligence and so on, it really comes
down to this question of, what is it that you are
trying to do at scale? So that’s the purpose statement. How can you
articulate what it is your company is trying to do,
your team is trying to do, you are trying to do at scale? And for me, the answer
to that question is, create more meaningful
human experiences. It’s just as simple as that. But the way that
I can do that is by speaking with
groups like yourself, working with executives,
working with leaders and being able to help them
home in on that purpose and really get clearer on
how to create those more meaningful experiences
that do align the business objectives with the human
objectives, that do bring the business results and
that create a better future for humanity. So business will have to scale through digitization and automation– it won’t be successful long term without that; it’s table stakes. But humanity won’t be
successful without meaning. So for that, I thank all of
you for the work that you do. Thank you very much. [APPLAUSE] TIM: Thanks, Kate. We have a Dory. There are no questions
on it right now, so we can ask a few
local questions. And if anything shows up, then
we’ll get those included, too. But I see a hand right here
and can we get a mic over– SPEAKER: I have the
mic, test, test, test. Sounds good. AUDIENCE: Thank you for
the talk, first off. So for the rhetorical
question you asked at the very
beginning of the talk, I know a lot of people,
if not most people I know, would answer, a soul. What makes humans
human is a soul. So what advice would you give
about how to automate religion? KATE O’NEILL: It’s a very
interesting question. I actually have talked with
a few people about the work that they’re doing
around automation and creating experiences
for people at scale around religion. And I don’t feel like I’m
in a really good position to be an expert on that. It’s not the work that I do. But I do think that religion is
fundamentally offering meaning, so really, we’re talking
about the same principles. We’re talking about being
able to offer people a lens into what is meaningful
and then helping to scale that. So if there is a solution that
someone is trying to build, that is some sort of
technology product for creating a religious experience or a
religious outlet for people or community, then I
think it really comes down to the same principles. It’s just like religion is
the industry, in that sense, and we’re trying
to offer meaning through those experiences. That would be, probably,
my best take on that. But I think it’s a more
interesting question than that at some fundamental
layer, and it sounds like a discussion over
beers or something like that. So do we have another question? Yes, to your left. AUDIENCE: Hi,
thanks for the talk. I was really struck
by your point about shared experiences
forming culture. And obviously, we in the technology world have a lot of increasing power to shape shared experience. So in the Amazon Go point about
people not helping each other, that’s something
we can all probably agree is a net negative. And even Amazon, I’m
sure, would agree. But there’s a lot of cases
where we have the potential to shape culture, where the
answer really isn’t clear, what is the right thing to
do, filter bubbles being one controversial idea,
whether they’re a good thing or a bad thing. So I’m wondering, what’s
your take on how we should approach these problems? What principle should we use in
deciding how to shape culture or what processes or
institutions maybe we need to make these decisions? KATE O’NEILL: Thank you, it’s
a really good and big question. I’d like to cop out and say
that the entire book, “Tech Humanist,” addresses it. But at some level,
what it comes down to is trying to understand
that strategic purpose, that alignment between business
objective and human objective. And I think, if you’re
looking at a filter bubble type of example as one
example of something where a social platform or an
online community or a media company is fostering through
algorithmic content filtering and so on, this
sense of disparity between people’s collective
understanding of what is truth, I think you can probably
come to some understanding at some level of view of that. The business objective, which
may be advertising or something along those lines, and the human
objective aren’t aligned there. So I do think that there is
still a useful framework there, but I do offer some additional
ones in “Tech Humanist” as well. It is a really good
and important question. And it’s an important
point for us all, I think, to
consider in the work that we’re doing,
because there are so many net positives and
net goods that come out of, let’s say, with social
media and the connectedness we have with each
other and the way we’re able to maintain relationships
with such ease versus 20 years ago. But of course, it does come with
these associated difficulties and the challenges
of making sure that we’re all speaking the same
language, which at the moment, I believe we’re not. We were having that
discussion beforehand. So I’m going to
leave it at that. I think there’s a lot in
the book, which I’ll just keep pointing back to
that, that does unpack that a little further. But I genuinely think that that
framework of understanding what it is the business is
trying to accomplish and what it is that’s good for
humanity, how those things can be in line. And it doesn’t have to come
down to a humanitarian purpose. It just has to
mean that we’re not accelerating
something that is not ultimately good for humanity. That, I think, is where the
alignment comes back to. TIM: There’s a
question on the Dory and it’s similar to a
question that I had. So I’m going to try to
merge them together. The question on the
Dory starts like this. Scale tends to force humans
to reduce their variety to adapt to machines instead
of the other way around. So further, what about
ways to reduce scale that are compatible with business? You mention distributism,
decentralization, or something else. And I’ll add that
I think that a lot of this technological change
is really coercive, meaning, either you get
with it or you get left out, particularly
around the job changes that you talked about. And how much is
our responsibility to bring people along,
like to offer the lifeboat, and how much do people really
need to get in the boat? KATE O’NEILL: I think that’s
a really difficult thing to be able to break down
in one side or the other. I think that the change
is coming no matter what. What we find,
though, is the change is going to be
disproportionately felt. So jobs that are most
likely to be automated are jobs like truck driver,
cashier, these types of things. And what we know is
that statistically, those jobs are
disproportionately held by people of
color, so that an unfair, unequal distribution
is happening. So I think we do
have an obligation. If we’re trying to create
the best futures for the most people, which is
what I would say is one of the underpinning
ideas or underlying ideas of “Tech Humanist,” that we
have to be thinking about how to create a more equitable
distribution of opportunity and how to make sure that
the impact of automation is not going to destroy one
set of humans’ potential while it increases the potential
for enrichment of another. So that inequity is going
to become even more extreme than we’ve already experienced. So I think it’s in our
best interest as humans to think about how to shift
that or how to level that out. Not that people
can’t become wealthy, but that we don’t create this
even more extreme distribution than we already have. So I think to some extent,
it’s imperative for anyone who’s creating experiences,
which is pretty much everyone that works in technology,
that works around most fields that I’ve worked around, health
care, entertainment, and so on, to think about the
change that’s coming and how to make
sure that it is– that we are creating as much
opportunity there as possible. But yes, I think there’s also
this kind of new emerging space around opportunities
to retrain people and repurpose, get people
to understand the new skills that they might have. I shared some really great
stats in “Tech Humanist” about programs that
were taking, let’s say, prisoners who had come out of prison and been able to retrain them back into their communities, into jobs that they could keep. And there was 0.1% recidivism
within this program, so I’d urge you to
look into that example. There’s just so many ways that
I think an ecosystem of answers is really what’s going on here. We have to own
the responsibility as content creators and
experience creators. And we also have to
recognize that this is going to be a
broadly distributed, broadly felt thing that is
going to have inequity to it. TIM: So to bring back a word
that Paul put in his question, decentralization, how does
maybe decentralization help with this by spreading power
around or control around? Maybe just talk about
decentralization for a bit. KATE O’NEILL: Well, I think the
idea of spreading power around and spreading control
around is interesting. Certainly, we have seen, through
user-generated content, user communities, and platforms
like Meetup, for example, we were talking about
earlier, as one way that there are tools that we can put in
the hands of people that allow people to create communities
amongst themselves, create more human connection. Those are going to be, I
think, increasingly important. And the technologies are
there to foster that and allow discovery within
those communities, allow people to find each
other and connect more deeply. But I think we just have
to be thinking mindfully about the challenge of not
amplifying those net negatives, as your question was alluding to
earlier with the filter bubble and so on. So I’d love to hear more
specifically what about decentralization might be– what’s nagging on the person’s
mind that’s asking the question or on your mind. So whoever is asking
that, feel free to ask a secondary question. TIM: If they call. KATE O’NEILL: And if
there’s any other– TIM: Or maybe for
the sake of time, we look for one more question
in the room before we wrap up. Anything else? Yes. KATE O’NEILL: Great. AUDIENCE: So I have
a thought about some of this stuff in terms of,
do you ever take your work and look at it as
a lens of looking at humanity through the
lens of what technology is revealing about people? KATE O’NEILL: Yeah. I have looked at that. But I’m very curious
as to what is occurring to you as you think about that. AUDIENCE: Well, I mean, I
used to do a lot of community management, and so I came
out of being on the BBSs back in the late ’80s, early ’90s. And so it’s this
thing where I realized that a lot of what
happened online was just what happened offline
but at a different scale and at different localities. And that was an
aha moment for me when I was a part of these
little communities back in the BBS days. So it’s just kind of like as
technology has become more and more prevalent in
our lives, it’s something that I kind of look
at, almost flip the conversation a little
bit in my own head of like, what does it mean– what does it say about
people given how we’re using it? KATE O’NEILL: A
little bit of that points back to the
decentralization discussion, as well. But I like the aspect
you brought up. One of the aspects of
this that I have looked at is that it seems to me
that our digital selves, that aggregate set of
characteristics that gets collected through our movements,
through our connections, through our interactions
in social spheres, that digital self is really
our aspirational self mostly, and that we are saying
who we most want to be. And it seems ironic to
me that that digital self is the self, the
version of ourselves, that is most
commodified by business and most capitalized upon
and manipulated by business. I think in our physical
manifestations, we are much less prone to
that kind of manipulation and over-capitalization. Yet, this digital self, which
is our aspirational self, is prone to that. So I think this is
the opportunity for us to merge that understanding
and say, well, it is a human that we’re looking
at in that digital, collected, aggregate data points,
and so we need to be respectful about that too. So I think that flipped
version is to say, there’s this way that
we’re interacting with each other in a
way that represents who we most want to be and
who we most feel we are. So it’s all the
more reason why we need to be respectful
with the data that we collect and
monetize and use within business to inform
our intelligent decisions and our systems. So I’d say that
that’s exciting to me. TIM: Thank you, Kate. And thanks, Google, for being
a great audience for Kate. And I guess we owe a
solid round of applause. So thanks. [APPLAUSE] And Kate, especially.

6 Comments

  • invisibleAzN says:

    From my observations, the future looks more like suppression of speech and ideas you don't like. (edit: "you" being big tech companies)

  • Aha marques says:

    https://www.youtube.com/watch?time_continue=2&v=RU7Rc0pH91k

  • Ernest Of Gaia says:

    Permaculture Design

  • Ernest Of Gaia says:

    In permaculture we call this social Permaculture

  • friendlybus1 says:

    The talk could be improved by defining meaning? Referencing Peterson's work with McGilchrist on the neuroscience of the brain, and the orienting reflex would have been more illuminating. A deeper definition gives more scope to explore the metaphors vs science problem that is being discussed.

    It is somewhat helpful to seek "any" definition of the word meaning, but without a metaphorical bedrock for the idea beyond what makes sense to us, you can go too broad and too shallow.

  • Gary Ogg says:

    What do I do
