
Decoding Artificial Thought: What Our Interactions with AI Reveal

  • Writer: horizonshiftlab
  • Jul 17
  • 20 min read

Updated: Aug 1

A person wearing VR goggles relaxes on a couch, while another watches nearby in a minimalist room with white shelves and soft lighting.
Source: August de Richelieu via pexels.com

How is AI subtly, yet profoundly, reshaping our decisions, cognition, and even our sense of self? In this incisive episode of Signal Shift, Raakhee welcomes Elina Halonen, a behavioral strategist and founder of Prismatic Strategy. Elina, whose work explores the intricate intersection of behavioral science and AI, clarifies the diverse landscape of this field, from AI as a tool for experiments to behavioral science as a lens for critiquing AI systems. She shares her personal insights on interacting with generative AI as a "co-intelligence" and dissects the concept of "generative friction," where the challenge shifts from production to curation. The conversation tackles critical issues like bias in AI development (the "WEIRD" problem), the surprising parallels between dog training and interacting with LLMs, and the profound impact of AI on human skills and social relationships. Elina passionately argues why understanding human behavior is paramount for designing ethical and effective AI, emphasizing that this powerful technology cannot be left to "techies" alone.



Where to Find Elina Halonen:

Artificial Thought (Substack): artificialthought.substack.com


Selected Links:





Episode Transcript:

Raakhee: (00:00)

Hello and welcome to Signal Shift with me, Raakhee, and our guest today, Elina Halonen. Elina, great to have you here today. Elina is a behavioral strategist. She is the founder of Prismatic Strategy and, before that, SquarePeg Insight, and she also co-founded one of the very first behavioral science consultancies informing market research.


She integrates behavioral science insights and strategic thinking to help clients in diverse industries, from healthcare to tech. The current focus of her work is exploring where behavioral science meets AI. She's also the creator of the podcast Artificial Thought, narrated by AI hosts. The podcast serves as an audio companion to the newsletter of the same name. Artificial Thought provides a behavioral science look at how AI changes the way we decide and make sense of the world.


And these are Elina's words from the newsletter, which I thought really explained this work beautifully. "The more I thought about it, though, the clearer it became. Every product, every system, every innovation ultimately intersects with human behavior. Someway, someone's behavior determines success or failure, whether it's a user, a customer, a policymaker, or a stakeholder. And understanding human behavior is what I know. Over time, I saw where behavioral science could make a meaningful contribution to building, evaluating, and humanizing these systems from the ground up".


So this past month, we have been exploring topics in the behavioral, psychological, and mind spaces, from happiness to health. And today, we want to explore behavioral science in AI; in other words, how AI is shaping, molding, and changing our behaviors. Elina, a very warm welcome. I'm really excited to explore this topic with you today.


Elina: (01:48)

Thank you. I'm excited too. So thank you for inviting me.


There are so many ways that you can conceptualize behavioral science plus AI. And actually, maybe that's the place to start: how to map this landscape of behavioral science plus AI, or maybe it's a Venn diagram more than a plus.


Often what people mean, at least in my space, is using AI to run experiments or analyze data, studying how algorithms influence decisions and behavior, or designing AI systems with behavioral insights. They're not the same thing.


The way that I've conceptualized it is that there are three layers and, broadly speaking, two types of AI systems that we might be talking about: generative AI, and then everything else that is not generative.


The first layer is purpose: are you using AI as a tool for behavioral science, or are you using behavioral science as a lens to understand and critique AI? Those are different things. Then there's scale: are you focused on individual cognition and behavior, or are you looking at the systemic consequences, the micro versus the macro? And then there's the interaction: what kind of interaction defines the space that you are interested in, exploring, or talking about?


Is it transactional, with preset outputs, or just one-way? A facial recognition algorithm, say, would be more transactional. Or is it relational, with mutual influence and co-creation, which is what we're talking about when we interact with generative AI tools? Where are you in that landscape, and where do you want to be in it?


Where do I sit in that? For me, it's mostly the lens: using behavioral science as a lens to look at AI, not the other way around, because I don't usually design a lot of experiments and things like that. That also explains what Artificial Thought, the platform, is. And I focus on the micro scale, the human, the individual. Obviously, I have to stay aware of the systemic consequences and the context, but mostly my focus is the individual.


And I'm interested in that relational aspect, those relational systems. So I don't particularly focus on, let's say, algorithmic justice or things like that. What I write about and what I think about is the interaction between human and computer.


Maybe that helps situate our conversation as well: how I see behavioral science in AI.


Raakhee: (04:27)

I think that's really important framing, because you're right, there are many ways to interpret what behavioral science meets AI even entails. From the perspective of the lens you're looking through, that interaction between humans and AI, what are your discoveries? What have you been finding?


Elina: (04:47)

What is really starting to interest me more and more is how our thinking, our behavior, and our cognition are changing when we interact with these generative AI systems, probably because I've observed some of those things in myself. For about a year and a half now, I have been using GenAI tools quite a lot myself.


And I would say that, for me, it really does feel like a co-intelligence. That's not my term; it's Ethan Mollick's book, and it's quite a good introduction to what that could look like and what that concept means. Obviously, that might be different for everyone. But I use these tools as part of my workflow every day, all day long. And quite often it feels like a teammate.


So that's where I started to gravitate: thinking about how our behavior changes, how we think differently, when we are interacting with an AI colleague. Essentially, I work by myself, so for me it is often a colleague, the colleague that I don't have. But also, what happens with what I've called generative friction?


Previously, the friction was in actually generating and producing things, these intellectual or maybe digital artifacts. If you wanted to shoot an ad campaign, there was a huge amount of logistics around that. You had to be very sure what you wanted to produce, because the actual production process was where the friction was; it wasn't in the ideas. The same goes for writing or editing: actually producing the text would take a lot of work. Now the opposite is the case: the cost of producing these things is declining.


And therefore the friction shifts somewhere else. Where I think at least part of it is shifting is, first of all, selection and curation: how do we choose from this fire hose of generated artifacts? There is always, for example, another iteration to explore. When do you decide to stop? How much is good enough? How do you approach the work itself, now that generating text is so easy?


You can generate an entire book in a day, more or less, depending on how you want to do it and how advanced your automation system is. But that doesn't remove the work of editing and really thinking about it. We can generate a lot of things, but that brings other kinds of cognitive work that we don't really understand yet.


That's part of my curiosity map for Artificial Thought: to think about those things that don't really have answers.


We're all just exploring: what does this all mean? It's moving so fast that we're not really keeping up. I think asking the questions is a necessary step. So that's what Artificial Thought is.


Some of the interesting parts for me are using behavioral science to look at AI and critique those AI systems, so that we can identify biases and blind spots, but also to understand how those AI systems shape our cognition, our agency, and our trust in them. Trust is a huge part of this if we're rolling out AI features and AI systems at the speed that we are. And also using behavioral science to contribute to better design of these systems by aligning them with human values and human decision making.


There's a lot of talk about the values, but not so much about the decision making, and that's the interesting part. How does infinite personalization affect your decision making? How do you experience fatigue or friction in those AI-supported environments? And how do people relate to feedback or automation? There's so much to explore and think about in that individual cognitive experience and moment-to-moment behavior when we're interacting with these systems.


I find that endlessly fascinating: to seek answers and to seek better questions, because we don't have answers yet.


Raakhee: (09:30)

I totally appreciate the exploratory nature of the work, right? It's similar to most of the conversations we have here, because we're looking at the future: these are ideas, these are possibilities. And I think it's a similar thing with AI. It could go this way, it could go that way. There are possibilities around it. But I think...


Yeah, I think what you mentioned around decision making is so important. Think of social media. For a lot of people, sitting where we are with it now, the question is: why was this not rolled out with the anticipation and understanding of how it would impact human behavior 15 years down the line? And if we can do that with AI, well, we have to do that.


Elina: (10:19)

Yeah, I could definitely say that I already had that thought before I read the Careless People book by the Facebook whistleblower. It's kind of why I first got into this area as well. About a year and a half ago, I went to one of the consumer insights industry conferences here in Amsterdam.


And AI was all anyone talked about. Probably more than half the talks had the term AI in them, so quite the buzzword. And when you went into the space where the vendors were, everyone was selling their new systems; everything was AI. The interesting and very concerning observation I had was that almost all of the people talking about AI were men. And I have nothing against them.


But it does seem a little bit unbalanced that only one small slice of humanity is in charge of all of this. And it was very particular: I noticed the same pattern I'd noticed in psychology many years earlier. Not me personally; I didn't discover the term WEIRD. Almost 20 years ago now, I guess, there was a very pivotal paper about how most of psychology is WEIRD, meaning that most of what we know about human behavior is based on Western, Educated samples from Industrialized, Rich, Democratic nations. That's the WEIRD, with capitals. So I was already sensitized to noticing it; we cannot repeat the same thing.


Most of what we know in psychology is based on white American college students. We cannot have the same situation with this thing that's supposedly going to revolutionize our lives, our working lives, and everything, where most of the people in charge of it are white middle-class men. That's just fundamentally not good.


And it took me more than a year to explore this space and work out where it is that I fit in. Where can behavioral science, from my perspective, contribute?


That's effectively the journey through that matrix I talked about: which angle of this fits me, because some of the others are just not where I am professionally. Once I figured that out, that was really the moment I realized, OK, I should start writing about this, because things don't exist unless you create that conversation.


Behavioral science, like behavioral economics, did not really exist as a thing before people started talking about it. So we're at the same kind of point: if I start talking about it, someone else might think, well, maybe I should talk about that too. And it doesn't really matter; it's not about me. Someone just needs to make it visible, and then someone else might get inspired by it, and it starts from there.


We didn't realize 20 years ago where we were going to be. My husband works in tech, and he's been following all this stuff for a long time. He's often said, "Tech should not be left to programmers, because we don't really understand human behavior".


Raakhee: (14:06)

Yeah, not having the technical knowledge around AI, I actually think that's such a plus, because you're coming at this with the human behavior bias, not the tech bias. We have enough of the tech bias, right? There's always a bias, and I think in this case it's good to have the one that looks from the perspective of the human, asks what the impact on the person is, and focuses solely on that.


Elina: (14:30)

I agree.


What I realized was that maybe how I interact with LLMs is shaped by my experience of working with dogs.


I do dog training several times a week, and I do multiple different dog sports, so dogs are basically part of my life. I think about dog behavior a lot, almost as much as I think about human behavior. Somehow, working with animals has influenced how I interact with LLMs. I discovered that I maybe had more patience and more empathy, in the sense that when it got things wrong, my reaction was: that's okay, it's probably me. I probably did not give it the right cues, because when you are working with dogs, your first question is: did I give the instruction correctly?


You always assume that the dog isn't wrong; what they do reflects you. Your first question is: did I mess up? Did I not do this well enough? That's how I tended to approach it, whereas a lot of people complain, it's not doing what I want. I also didn't feel comfortable with just barking orders. A year and a half ago, the advice was all about: you do this, or else you will get punished. I thought, oh, that feels a bit wrong.


And I didn't say anything about it at the time, because I thought, well, maybe I'm wrong. Maybe it's me. Maybe I'm a bit weird in how I approach this. It took about six months or so, maybe more, I don't remember anymore, and then people started saying: no, you should just talk to your LLM like a human. I was like, wait a minute, that's how I've been operating from the beginning, naturally and intuitively. I just thought I couldn't be bothered to learn all the prompt frameworks and stuff.


Raakhee: (16:26)

That parallel makes complete sense. Taking us back to your experiences and your perspective on this, there are two sorts of things we're seeing. On the one hand, there's this fear of de-skilling that's happening with AI: gosh, we are losing our cognition, and maybe there are even impacts on our decision making, which you would know about. So, is this impacting our cognition and our decision making?


And then the other one, and maybe it's a separate question, is around emotional support. I know that's become a big thing now, with people having virtual chatbot friends, and there's the question of how that impacts the individual's other human relationships, how we communicate and engage with each other. I mean, again, there's so much even in these questions, but...


Yeah, anything around this?


Elina: (17:24)

I mean, the deskilling is definitely a very important risk. I'm not sure if I'm the best person to talk about that, because for me, I have to say, LLMs have really unlocked a professional transformation. My brain typically moves so fast that I can't type fast enough, especially because I am speaking and writing in a second language.


I'm Finnish, so that's my native language. And even though I've been speaking English daily for more than half of my life, there is still a small amount of friction in producing text. Quite often my thoughts will evolve so quickly that I can't type as fast. So LLMs have unlocked a totally new level of professional development and growth. I have to caveat that I have various systems set up that are like second brains: different literatures, different contexts, project files, and things like that.


So it's not just your regular ChatGPT. But to be able to talk about my ideas and explain them in a half-baked way: let's say I have 20 ideas; I will probably mention every third or fourth one in my conversation with the LLM, and it fills in the gaps, because I've jumped from one idea to another. The LLM fills in the rest, which would otherwise have taken me ages, bridging the gap for a reader, not for me, but for someone else to follow my thinking. So it has really allowed me to start becoming the professional that I wanted to be. And that's brought me so much joy, and a real creative outlet, intellectual creativity.


But your other question was about therapy and using these as companions.


First of all, for the therapeutic uses, we have to be very careful that we are not imposing an elitist mindset, because accessing mental health support is very expensive and in many cases inaccessible to people. From my perspective, it's not OK to judge people for that, because they're just trying to do their best. If they could access therapists, great.


But it is an expensive luxury that a lot of people cannot afford. And probably, if you are using AI for therapy, if you have a real need for therapy, then you're probably not doing amazingly well in life. So that's how I feel about that. Whether it's a good idea or not depends: generative AI is a mirror of what you put into it.


My experience of ChatGPT is totally different from the generic experience, totally different from your experience or anyone else's, because I have certain custom instructions and I talk about certain things with it. When I say it knows me, it knows me because I've told it things about me. So the way it interacts with me is kind of the imaginary friend that I've created for myself. If you interact with your copy of an LLM in a way that is not very positive, well, there have always been ways of doing that; there is no shortage of things you can read that are unhelpful. I guess it's more at scale now, but I think it's far too easy to judge. To be honest, I quite enjoy my little AI colleague. I have really fun work conversations with my AI colleague.


And certainly, it has made working by myself a lot less lonely. So I am not the best person to be critical about it, because for me, it's been a great joy.


Raakhee: (21:42)

Yeah, I love that you shared that, because it's a different perspective, and hearing it from somebody who's using these tools this way is really useful. And I think what I got from what you were saying as well is: look, there are always going to be information and tools that can be misused to some extent, right? It was TV, it was gaming, it was social media, it's now AI. So we're always going to have those risks as people.


We have to, as a society, figure out the guardrails and how we keep it safe for certain groups of people while still maximizing the benefits. And I think having the possibility of such a great relationship with AI, as you've described, is really cool. And as someone who uses it as well, yeah, I certainly...


You know, exactly like you said, it all depends on your relationship with it. I've had no negative outcomes, but it's a different relationship, right? You're using it for very specific things in a very different way.


Elina: (22:42)

One of the interesting things I find, that we're not talking about, is that our conversation and discourse around generative AI is very Western. And actually, there is a really fantastic book that I would recommend everyone read: The New Breed by Kate Darling. She is a roboticist, but what she says about robots is relevant to our conversation about AI companions and co-intelligence. She talks about how our benchmark for robots, and by extension, I would say, for AI, is as if it must be human; we're comparing to humans and using human benchmarks. But it's not unusual, not new at all, for humans to use other species, other intelligences, as an extension of our own. We've been using animals to extend our own abilities for thousands of years.


Our conversation around this is very much reflective of cultural narratives that come from sci-fi. Actually, there are very different attitudes towards robots in many Asian countries, for example; you might have robots that are cute, and people are quite used to interacting with them.


If you have a different kind of religious or spiritual outlook, where God can be in many things, you may also have a different relationship with nature and with artifacts.


I think this is something we should be aware of: AI and generative AI is very, very WEIRD.


That whole area of comparative intelligence, appreciating kinds of intelligence other than just the human, is very interesting, and I think there could be a rich seam of insights there for people to get into, if only we can get over our human sense of superiority, that human exceptionalism. Maybe I've hung around dogs too much, but I'm certainly always fascinated by the new research I read.


And I think that has opened my mind to realizing that humans are actually not that special in some ways. For context, if you are not reading about all this dog stuff all the time: the area of canine science has expanded enormously in 10 years. What we knew about dogs 10 years ago is totally different to what we know now. The technology has advanced, and the field is funded a lot more.


Seeing all of that, the things you might suspect animals can do, we can now prove and research. And that really opens your mind: we're not so special; maybe we can learn a thing or two. Especially if you consider that we need huge amounts of technology to detect cancer, while a trained dog only needs to take one sniff and they're like, yep, you have cancer. And it's like, how? It really humbles you.


I am a very inferior creature compared to this; that much is easy to see. So maybe that's changed how I approach this.


So one reason why I think behavioral science should be part of the conversation is that whether or not LLMs think, whether they reason, doesn't really matter, because we evaluate LLMs through social lenses, projecting intent, coherence, even morality onto systems that don't really have them. That makes the core challenge in human-LLM interaction not just a technical one, but also a cognitive and social one. So there's a little mismatch there, because...


They are not humans, they're not similar conversation partners, yet we are imposing some of those norms, those social lenses. And there are a few consequences from that. One is trust and transparency: people tend to distrust systems that they don't understand, but also overestimate their own understanding, and LLMs tend to build trust by offering explanations, even if they're not entirely accurate; that's often part of the system prompt, to help build trust with the user. Then there's flexibility and predictability: people value something that's adaptable and not rigid, but they also want consistency. Walking that tightrope between flexibility and predictability is really difficult, because...


If an LLM is really flexible or creative, via the temperature setting, for example, that can make it seem unreliable. But too rigid is not good either. Somewhere in the middle, consistency in tone, reasoning style, and values helps usability. There are a couple more things: people want control, but they don't necessarily want full autonomy, because it can be a bit much, so the agency within these systems should be introduced gradually and have constraints. And then the last thing, which we've actually talked about in other ways as well, is that expectations around emotion, individuality, and politeness differ a lot culturally, depending on where you are and what context you grew up in.


An LLM that seems overly helpful in one setting, for one person, may feel really inappropriate in another. These models, which tend to be trained on aggregate data, can easily violate local norms. These are the kinds of situations where engineering goals clash with behavioral goals, and that's where bringing in behavioral science experts or specialists who understand this stuff


can be really helpful at the system level, not necessarily for every single interaction, but for thinking about the broader picture. As you were saying, when we created all these social networks and big tech, there were UX people involved, but there wasn't that broader picture of the second-order and third-order effects, of what could happen here. UX is mostly about the immediate moment of interaction, optimizing that, making it great; it doesn't think about what happens outside of it, because that's not its role. That's another layer of where behavioral science can fit into this. And that's really part of why this tech should not be left to the techies alone.


Raakhee: (29:51)

Elina, thank you so, so much. You've offered such unique perspectives on a lot of this, which I really love and which I think a lot of people are going to appreciate. Thank you for being here. I guess as a final call, can you tell people where they can find you, and a little bit about your newsletters, et cetera?


Elina: (30:09)

Yes, I can. I write about artificial intelligence and human behavior at Artificial Thought, which is a Substack publication: artificialthought.substack.com. You'll find it there.


If anyone has any questions, I'm very happy to hear them. I may not have an answer, but it may be a really good question to explore in the newsletter. So I always welcome questions and challenges, because questions are good.


Raakhee: (30:34)

Perfect. Thank you so much, Elina. And thank you, everyone, for watching, liking, and subscribing. Drop your questions; as Elina said, we love to hear them as well. Until next time, thank you so much for being here, and bye for now.


