People who grow up around robots may develop feelings for them, lose ability to socialize: WEF ‘Summer Davos’

Will we merge ourselves so intimately with technology that it becomes so much a part of us that we lose ourselves and forget what it even means to be human?: perspective

Starting at an early age, the more interactions people have with robots, the more likely they are to develop feelings for them and lose the ability to socialize with other humans, according to a discussion at the World Economic Forum (WEF) Annual Meeting of the New Champions (AMNC), aka “Summer Davos,” in Tianjin, China.

In his presentation during the “Social Robots and I” session, Future Explorer Society founder Chang Dong-Seon explained that children who grow up around robots may not be able to distinguish machines from other humans because they would lack the social cues to do so.

“If you look at studies of young people, then they don’t distinguish non-human robots with humans […] They might think they really have feelings, show sympathy, and think of the other non-human beings like they’re human”

Chang Dong-Seon, WEF Annual Meeting of the New Champions, June 2025

In this scenario of constant interaction with machines, an entire generation may grow up to have feelings for robots because they will be able to hug them, converse with them, and see themselves reflected in them.

“We can see it in children, or already in the younger generation, if you look at studies of young people, then they don’t distinguish non-human robots with humans so much like us [adults] because we didn’t grow up with them,” said Dong-Seon.

“But the younger generation, they might think they really have feelings, show sympathy, and think of the other non-human beings like they’re human.”

But it’s not just younger people who will be affected by being around robots.

In his second scenario, Dong-Seon says that the more we interact with robots throughout our lives, the more we may lose our ability to socialize normally with other humans because our brains would be rewired to treat other people as if they were machines.

“If you are isolated and if you don’t interact with other human beings, over time your brain loses the ability to read social cues from other beings […] if you interact with robots, a lot of people see that you might actually become very aggressive, abusive […] and your non-human actions might convey also to humans when you interact”

Chang Dong-Seon, WEF Annual Meeting of the New Champions, June 2025

“If we deal with AI in our little chambers, in our little rooms, and just talk to AI and not interact with other humans, what happens in our brain?” Dong-Seon pondered.

“There’s an interesting study in Trends in Cognitive Sciences recently. It said the region of the PSTS — it’s a social network area in our brain — will shrink, and there’ll be less connections, which means if you are isolated and if you don’t interact with other human beings, over time your brain loses the ability to read social cues from other beings — meaning the next time you meet a human being, you might misinterpret the eye size, the looks, the words, the social cues, and you might think, ‘Oh! I don’t know how to interact with humans; I prefer robots; I prefer AI. It’s so much better; it completely reflects me,’ and you will have a different way of interaction, which is more limited, but which will become over time, potentially, a new way of interaction, and people prefer it more.

“But here is also a danger because if you interact with robots, a lot of people see that you might actually become very aggressive, abusive, and when you know the other being is not human, the way you interact changes.

“But the problem is if the brain sometimes needs to interact with humans and with non-human beings, you might lose the sense of distinguishing between both — meaning that your non-human actions might convey also to humans when you interact.”

“Relational AIs compete as friends, coworkers, and mentors, and romance-based AIs compete for love. Competition over love has the potential ethical consequence of disrupting our closest human relationships”

Trends in Cognitive Sciences, “Artificial intimacy: ethical issues of AI romance,” June 2025

There is a paper in the June 2025 issue of Trends in Cognitive Sciences called “Artificial intimacy: ethical issues of AI romance” that coincides with what Dong-Seon said. It states:

“As AIs grow in their capacity to be perceived as having minds and as ideal relationship partners, some people prefer AI relationships over human ones.”

The study also warned that “AIs can also be used by people to manipulate other people. Relational AIs can be harnessed by private companies, rogue states, or cyberhackers, first offering companionship and luring people into intimate relationships to then push misinformation, encourage wrongdoing, or otherwise exploit the individual.”

Dong-Seon’s presentation followed that of University of Twente professor Vanessa Evers.

You can check out The Sociable’s coverage of Evers’s presentation here.

But for a quick summary, Evers explained that in order to achieve true robot intelligence, there would need to be a Large Behavioral Model or a digital twin of the entire world.

After the robot is trained on these models and twins, in theory, it would then be able to interact seemingly intelligently in the real world.

“For true robot intelligence, you need to build a model of the world, like a digital twin of the entire world”

Vanessa Evers, WEF Annual Meeting of the New Champions, June 2025

“We could add superhuman capabilities, listen to the heartbeat or watch the breath of a person to know stress, to know pain. We can detect dominance, aggression, creative flow — there’s all things you could detect in an automated way”

Vanessa Evers, WEF Annual Meeting of the New Champions, June 2025

Once robots are equipped with a model of the world, they would become seemingly “intelligent” and “social,” and that raises a whole host of philosophical and ethical questions about how we will interact with robots, and how that might change the dynamic of how we interact with our fellow humans.

For example, Evers talked about how her lab created a smart pillow that could detect when a baby was in pain through sounds he or she makes.

In the future that type of technology could be applied to other devices and robots for different purposes, like decoding and detecting a person’s mental or emotional states.

When Chang Dong-Seon heard about Evers’s pain-detecting pillow, his mind went straight to building a lie detector using smart glasses and sensors — to which Evers said that would be an abuse of her technology, and Dong-Seon disagreed.

“We don’t know how people are going to use it. It is public knowledge. Of course, everybody can use it. We publish it freely. Then we have no power over how it is used”

Vanessa Evers, WEF Annual Meeting of the New Champions, June 2025

“I imagine in the future that we are all wearing smart glasses, smart sensors, and our body signals can convey information directly to a robot, to others,” said Dong-Seon.

“Maybe that might change also the human reaction, saying, ‘Hmm. I don’t see a pain signal in you, so maybe you are faking.’

“Although we might, in a very human situation, react like, ‘Oh! You must be in pain,’ maybe the standard in how robots learn — or with that how humans learn — to perceive pain might change.”

Evers immediately piped in and said, “I feel you would be abusing my technology for doing something bad, right? I didn’t make it for that. I made it for good.”

But herein lies what’s at stake for the future.

There are people creating technologies that will change all of society, and it doesn’t matter if they build them with the best intentions because there will always be somebody looking to weaponize them.

As Evers explained, “We don’t know how people are going to use it. It is public knowledge. Of course, everybody can use it. We publish it freely. Then we have no power over how it is used.”

And this gets to the very heart of what The Sociable strives to tell the world about — how governments, corporations, and unelected globalists could wield technologies in nefarious ways, even if that’s not what the original creators had intended.

So, yes, the researchers, the developers, the scientists, they all know that they are building something dangerous, and many may have the best of intentions, but hubris has a funny way of coming back to bite you.

But in the case of powerful technologies, what’s at stake is the future of humanity.

Will we lose our ability to socialize with other people and prefer the company of social robots?

Will we allow technology to govern our lives under a complete surveillance state?

Will we all start using wearables to track everything that’s going on inside of our bodies while all that data is being sent to governments and corporations in real-time?

Will we merge ourselves with technology so intimately that it becomes so much a part of us that we lose our humanity along the way and forget what it even means to be human in the first place?

The technology is here, now. It’s time to decide.


Image Source: Screenshot of Chang Dong-Seon from the WEF Annual Meeting of the New Champions session “Social Robots and I,” June 24, 2025.

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
