LEAP Listens

Building Trust in AI-Powered Storytelling with Amit Parmar

Sara MacGregor and Roger Cayless Season 6 Episode 81

In this episode of LEAP Listens, Sara MacGregor and Roger Cayless are joined by Amit Parmar, CEO of Cliquify, to explore how AI is reshaping employer branding and what it means for authenticity, ethics and trust. From deepfakes and data to organic content vs "processed" comms, Amit shares where technology helps, where it hinders, and how the future of storytelling is changing with AI. 

LEAP Listens is brought to you by LEAP Create, an award-winning people communications agency. Find out more at leapcreate.co.uk

Speaker 1:

Welcome to LEAP Listens, the bite-size employer branding podcast. I'm Sarah.

Speaker 2:

And I'm Roger.

Speaker 1:

And we lead LEAP Create. LEAP Create is a creative communications agency and we specialize in employer branding and internal communications. We work with in-house professionals and, if you're new here, we chat to a variety of industry specialists about workplace culture and how to communicate with candidates and employees. If you want to know more, head over to our website or Spotify for over 70 episodes to listen to. Welcome, Rog.

Speaker 2:

Welcome Sarah. How are you doing?

Speaker 1:

Yeah, I'm good. Yeah, how are you?

Speaker 2:

I'm very well, thanks yeah.

Speaker 1:

Yeah, I'm really enjoying this new season of our podcast. I feel like we've kind of got a new sort of energetic vibe going on.

Speaker 2:

Yeah, I think the conversational nature of it makes it feel better.

Speaker 1:

Yeah. So today we're going to be talking to Amit. He's the CEO of Cliquify, an AI-powered, data-driven storytelling platform that helps companies attract and retain top talent. So we could just dive straight in and speak to Amit, and he can explain exactly what that is.

Speaker 2:

Let's do that.

Speaker 1:

So welcome to the podcast Amit.

Speaker 3:

Yeah, glad to be here, Sarah and Roger.

Speaker 1:

Excellent. So, Amit, tell us a little bit about yourself and Cliquify.

Speaker 3:

Yeah, so a little bit about me: I'm based right here in New York City. I've been in the talent space for over 20 years, in in-house talent leadership roles with the likes of IBM and Deloitte. About five years ago I had this epiphany, I guess, to create a platform to enable authentic, human-oriented storytelling across enterprises, to help companies really bring their personality out and help candidates as well as employees connect with the value proposition of organizations. So I launched Cliquify about five years ago and I'm just super excited to see its growth journey.

Speaker 1:

Wonderful. So today we're going to be talking about building trust in AI and storytelling. So, Amit, give us a little bit about this topic and what it means for you.

Speaker 3:

Yeah, you know, it comes back to trust, as we all know from the Edelman Trust Barometer. For your listeners who follow Edelman, they do a deep-dive study every year, the Trust Barometer, covering institutions, governments and enterprises. If you look at the level of trust between institutions and people at large, it's been on a downward spiral for the last 30 years, and storytelling sits at the center of really rebuilding that trust.

Speaker 3:

We're at a very interesting juncture with AI-generated content. Now, Sarah, I know you and I were talking about that last week, and I think we as HR and talent leaders have to think really hard about the balance between the efficiency that AI can bring to storytelling and, at the same time, keeping the human element of storytelling, so that you can actually build trust, with transparency, around the information that candidates and employees want. AI can get ahead of us, if you will, in terms of really understanding what's happening in enterprises. And how do you make that connection without sounding fake, or without sounding like everybody else? So that's a really interesting juncture that we're at as a society at the moment.

Speaker 2:

Yeah, so where do you see or where is AI currently being used in your tool?

Speaker 3:

Yeah, great question. So we have always been advocates for using AI where it makes sense, with a human in the loop. We've built our AI models to, first, help organizations understand what their employer brand sentiment is: we look at millions of data points relating to workplace sentiment on Google, on review sites, Reddit, LinkedIn, across multiple channels. So we use AI from that standpoint to put really big data analytics at the fingertips of employer branding and talent leaders, saving branding leaders at least 30 to 40 hours a month in understanding what's happening on a continuous basis. And, as you know, Sarah and Roger, there are amazing agencies out there who do some really great work to understand that sentiment and what to do with the data, and that's another avenue for Cliquify: agencies can leverage the same software to help with their analysis and storytelling for their own clients.

Speaker 1:

So, in terms of the storytelling, that's so important, isn't it, when it comes to employer brand and talent attraction, because we talk about authenticity quite a lot. When it comes to AI, there is a real risk of people using it too much, and then it just sounds very inhuman, like we're talking to a robot. Have you got any really bad examples that you've seen out there of people using AI? Some horror stories, good examples of bad examples?

Speaker 3:

Oh yeah. Through our platform, as we're speaking, there are probably hundreds of stories being told today, and we enable a very simple, very controlled use of AI within the platform. We don't use AI for any image creation or any text content creation. The way we use AI is that we actually ingest the value proposition of that company to generate some content suggestions, but it's very unique to that enterprise, and that sets us apart from an AI-use standpoint.

Speaker 3:

What worries me, and what I've seen out there, Sarah and Roger, is where you just hit one click and generate all kinds of assets, and the question really is, number one, how real is it in terms of alignment with your values as a company? Number two, are you really differentiating yourself? Because chances are somebody else is using the same imagery to attract, retain and engage talent. So there's a fine balance there. There was one example, a CEO who put out an AI-generated video of themselves, but you could tell it was fake, it was automated; even the emotions weren't there. Now, that's not to say that AI won't get there, maybe in the future, but we're a long way from taking the human out of the process.

Speaker 2:

Yeah, I mean, some of these things are still a little bit uncanny valley, as they call it. But I was thinking about this the other day, and reading about it, where you've got a whole generation of people who are going to grow up with AI and machine learning as a given, in schools and hospitals; it will just exist, as if it has always existed. And you'll have to bear with me while I go on this train of thought, but I suppose what you said is true. When you think about it, simply using a stock shot on some materials is actually not quite truthful, is it? But people accept that, because, oh well, it's a stock shot. We didn't do a photo shoot, so we just found a shot of some people in a meeting enjoying themselves and pointing at a chart. But I suppose we're not that far away.

Speaker 2:

Are we, from you, or someone, saying, okay, here's the message we want to get across, can you generate a whole load of AI videos of people telling apparently authentic stories about this organization, and blast them all out there? And those people aren't actual, real people. They're sort of personas, archetypes of who works here, but it's no different to a stock shot, is it? And you can kind of see that happening as a thing. Is that something you think we may arrive at, or already have?

Speaker 3:

Yeah, great point, Roger. Much as we might not like it, I think we're there right now, where you do see stock generation of content with very specific messaging for that enterprise or for the content creator, if you will. I do think there's a play for efficiency there, in terms of driving content at mass. However, and we have the data for this at Cliquify, with thousands of stories that have been told, some AI generated and some actual people, right now, with the current tech, we're seeing about six to eight times more engagement with a real person talking versus AI-generated content. And it could be early in the process, where AI is just learning the behaviors of humans. I do see a world in the future where you could use stock photos and imagery, even real people that AI can emulate.

Speaker 3:

What I get nervous about is misinformation, and this goes back to trust. We all know the saying: garbage in, garbage out. So the bigger issue for talent leaders is this: I'm less worried about the content creation and the efficiencies that AI can provide; I'm more worried about how trustworthy the input is that creates the output, and how ethical it is as a society. And this goes hand in hand with the trust conversation we started. That CEO I was referencing earlier: what if somebody just took that footage, and this happens, it happened during the elections here in the US not too long ago, and essentially used AI to dub the person telling a really false narrative about their financial outcomes? That's something we just have to be mindful of as leaders when we're implementing technologies like AI in content creation.

Speaker 1:

It's really interesting, that point about the CEO, because I do wonder, when they were making the decision, you know, let's do this, what their thought process was. Obviously it's more efficient, you can script exactly what they're saying, but then what are the negative impacts of that? That's a real discussion point for us now. And it comes down to trust, doesn't it?

Speaker 2:

I suppose this could take a really dark, Black Mirror-style turn, couldn't it, where you have all sorts of misinformation, fake testimonials which are negative about companies, and you could do real reputational damage to a business with a whole load of fake campaigns. I'm not saying that's what companies want to do, but if you have large government organizations, you could certainly use this to ruin the reputation of a business by creating a whole load of negative content, as well as false positive content. But anyway, that takes us down rather a dark turn.

Speaker 1:

Yeah, it does. But in terms of the employees, going back to that CEO video, when employees are watching that and listening to that, how are they feeling?

Speaker 2:

So was it done as a sort of demonstration of, like, oh look, aren't we right at the cutting edge, I've done my report using AI? What was the context for it?

Speaker 3:

Well, it was an annual review of how the company did in 2024, so essentially an annual review of the company's financial status, for employees as well as for shareholders. And look, to the company's credit, they were very transparent. The CEO said, hey, I've used AI to create this. So that's a great start, in terms of just being transparent from the beginning that this is AI assisted or AI generated.

Speaker 3:

What I love about tech is it can also help you in terms of self-governing, and there is going to be a huge market, it's already starting to happen now, for AI-detector technologies. So I'm very hopeful that some of that self-governing tech will also play a role in instilling trust. Intel, for example, just launched, as part of their chips, a deepfake detector built in, and it's available in the market for our audience to review, and that's a great start. If you start to instill AI governance models at the chip level, that's very comforting to see as an HR leader.

Speaker 2:

Yeah, and I've certainly heard that there are technologies that haven't been released because of their implications, certainly the voice recognition and voice simulation stuff. I was listening to a podcast recently where, halfway through, the person switched to a synthesized AI version of their voice and then said, for the last two minutes it hasn't actually been me, it's been that, and it was completely seamless, which does make you wonder about voice security protocols. But look, I'm probably getting a bit far away from where you are. So I guess, and it's a big question as we approach the end: what is it to be human?

Speaker 3:

Yeah, well, you know, look at this conversation we're having right now. I would not want to replace this conversation with any kind of AI, and I think we're at a very interesting juncture, or fork, in our society. I kind of view what's happening with AI and storytelling as similar to processed food versus organic food, and it's the same kind of philosophy: if people want to ingest organic food, they'll want to ingest organic content in the future as well.

Speaker 2:

That's a brilliant analogy.

Speaker 3:

You can steal that one.

Speaker 2:

Yeah, I know, Sarah's got the look that says Roger's going to be regurgitating that at the next possible opportunity. As I always say, I see it very much like processed food. Practicing now. I'm going to be using that. People will always want to ingest organic, won't they?

Speaker 1:

Yeah, fantastic.

Speaker 2:

Well, look, can you believe that we've reached our time? But we can't let you go without first asking the question we ask all guests, which is: what interesting reads or listens have you currently got on your plate, on your organic plate?

Speaker 3:

Well, not to self-promote, but I just released a Forbes article. It's essentially a blueprint for chief people officers on how to operate and build their people strategy in the age of AI, with a five-step approach, fairly simple to read. I just published that last week. But one of my all-time faves is Moments that Matter by Chip Heath and company, and if you think about the moments that matter, it's super related to everything we just talked about. Will AI ever understand the moments that matter for people? And will it ever be smart enough to have that emotional intelligence and connection with people? This may sound odd coming from the CEO of a tech company, but I truly believe AI will never replace humans.

Speaker 2:

What you need to do now is start sort of fizzing and buzzing and have your eye pop out on a spring.

Speaker 3:

Right. But no, I really enjoyed this conversation, Sarah and Roger. I hope to have many more of these conversations in the future, with the audience as well.

Speaker 2:

Well, thank you, we've enjoyed it a lot.

Speaker 1:

Thank you so much Amit.

Speaker 2:

Likewise.

Speaker 1:

Okay, Rog, that is the sound of the Leap Lift, and I feel like I'm slowly coming around to this idea.

Speaker 2:

I knew you would come around. And what Leap Create service are we pitching today?

Speaker 1:

Okay, so today we're going to be pitching candidate journey mapping. This is actually one of my favourite exercises to do with clients. It's where we uncover key touch points to enhance brand awareness and attract top talent by creating really compelling and inspiring content and looking at the candidate experience.

Speaker 2:

I have been on a few of these. It's done in a workshop setting, and we map out a typical marketing funnel all the way from awareness, interest and consideration right through to decision, and that allows us to identify and optimise each stage of the candidate experience.

Speaker 1:

Yeah, and it's a great way to uncover untapped recruitment marketing channels and develop a really targeted strategy that resonates with potential candidates, particularly those that are hard to hire and we can also understand what messaging and content is needed at each stage.

Speaker 2:

There we go.

Speaker 1:

Well, that's it, we've arrived. On a serious note, if you did want to be a podcast guest, or you have a project for us, reach out to either Roger or me on LinkedIn, or head over to our website, leapcreate.co.uk, and contact us through there. We also have a number of different initiatives for employer branding and internal comms specialists: events, webinars, and our No Bullshit Guide to Employer Brand newsletter you can sign up to.

Speaker 2:

Oh, yes, hit subscribe.

Speaker 1:

To our podcast. Yeah, so you'll be the first to hear.

Speaker 2:

There's lots of ways to engage with us and our community.

Speaker 1:

Thanks for listening.

Speaker 2:

Thank you.