When Robots Help Us Understand How Humans Learn
On the Sarcelles campus of CY Cergy Paris Université, Sofiane Boucenna spends his days teaching computer science and control engineering. But in the laboratory, his work moves beyond code and equations. Boucenna designs robots that learn, machines that can develop abilities through their interactions with the world and with people.
At the ETIS Laboratory, he is part of the neurocybernetics team, a group exploring the intersection of robotics, artificial intelligence, and human cognition.
Interestingly, robotics was not always part of Boucenna’s plan. His path into the field came almost by accident. After studying mathematics and computer science, he met his future PhD supervisor, Philippe Gaussier, a professor at CY Cergy Paris Université and head of the neurocybernetics team at ETIS. The encounter led him to pursue a doctoral thesis exploring emotional interactions between humans and robots, a topic that would shape the rest of his research.
Why emotions matter
Boucenna’s doctoral work began with a deceptively simple question: could a robot learn to recognise human facial expressions on its own?
The answer, it turns out, was yes.
Through a process based on imitation, the robot gradually learned to associate facial expressions with specific situations. More importantly, it learned to adjust its behaviour accordingly. If a human partner showed fear, the robot could learn to avoid the object that caused it. If the person expressed pleasure, the robot might approach.
A key part of this learning process is something psychologists call joint attention: the ability to follow someone else’s gaze toward an object. Human infants typically develop this skill around twelve months of age. Boucenna and his colleagues showed that similar mechanisms could be reproduced using artificial neural networks.
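The expression-to-behaviour association described above can be sketched with a toy model. The snippet below is purely illustrative, not the ETIS architecture: it assumes a single-layer delta rule linking an expression feature vector to a valence signal, and all names (`ExpressionAssociator`, the one-hot features) are hypothetical.

```python
import numpy as np

class ExpressionAssociator:
    """Toy associative learner: links a facial-expression feature
    vector to a valence signal via a simple delta rule."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.lr = lr

    def predict(self, expression):
        return float(self.w @ expression)

    def train(self, expression, valence):
        # Delta rule: nudge the weights toward the observed valence.
        error = valence - self.predict(expression)
        self.w += self.lr * error * expression

    def action(self, expression):
        # Positive predicted valence -> approach; negative -> avoid.
        return "approach" if self.predict(expression) > 0 else "avoid"

# One-hot "expression" features: [fear, pleasure]
fear = np.array([1.0, 0.0])
pleasure = np.array([0.0, 1.0])

robot = ExpressionAssociator(n_features=2)
for _ in range(50):
    robot.train(fear, valence=-1.0)      # partner shows fear near an object
    robot.train(pleasure, valence=+1.0)  # partner expresses pleasure

print(robot.action(fear))      # avoid
print(robot.action(pleasure))  # approach
```

The point of the sketch is only the principle: repeated pairings of an expression with an outcome shift the robot’s response, so that a fearful face eventually triggers avoidance without any hand-coded rule for that object.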
For Boucenna, however, the real goal is not simply building smarter machines. Robots, he explains, are powerful experimental tools. They allow researchers to model how human cognitive abilities might emerge, test those models, and observe what happens when they interact with the real world.
Using robots to study language and development
Today, Boucenna’s research continues along several lines. Since 2023, his main research project has been funded through the Horizon programme of CY Initiative. One of its objectives is to better understand how language begins.
In the lab, robots are used to simulate the earliest stages of communication: the transition from the babbling of infancy to structured language.
The project has also brought new researchers into the team, including research engineer Alexandre Alves and doctoral student Raphaël D’Urso. D’Urso’s work has already gained recognition: his thesis topic, on the use of artificial intelligence to support learning in autistic children, was one of only ten selected for the 2025 edition of the comic book “Sciences en bulles”, published for the French Annual Science Festival and distributed throughout France.
Some of the project’s findings have been surprising. In one experiment, a robot was asked to imitate simple postures performed by children and adults. When interacting with autistic children, the robot appeared to require significantly more processing effort.
The likely reason lies in extremely subtle movements. When repeating the postures, autistic children often produced micro-movements that made their gestures harder for the robot to interpret. From this observation emerged a new hypothesis: some of these children may experience difficulties with proprioception, the sense that allows us to perceive the position of our own body.
This insight came from robotic modelling; the phenomenon had not previously been identified by psychologists or therapists.
When children teach the robots
Another branch of the project looks at how robots might support children with autism or language disorders.
The researchers designed vocabulary exercises based on a simple but unusual idea: learning by teaching.
Instead of the robot teaching the child, the roles are reversed. The child teaches words to the robot. The act of explaining reinforces the child’s own understanding and memory.
The same approach has also been adapted into a tablet application inspired by the concept of a serious game, making it easier to use at home or in clinical environments.
Building expressive machines
At ETIS, research does not stop at algorithms. The team also designs and builds its own robotic systems.
One of them is a simplified expressive robotic head used in Boucenna’s experiments. Powered by about fifteen motors, it can display clear emotional expressions. The design deliberately avoids the complexity of a fully realistic human face, which can be difficult for neurodivergent children to interpret.
The lab also hosts Berenson, a mobile robot that developed aesthetic preferences through interactions with visitors at the Musée du Quai Branly – Jacques Chirac. Other projects include articulated robotic arms, hydraulically driven robots designed for motor tasks, and autonomous navigation systems inspired by living organisms.
A cautious vision of the future
Despite working at the frontier of robotics, Boucenna remains careful about the future he imagines.
In the most ambitious version of his research, he says, it might one day be possible to build a robot capable of developing like a child from birth to three years old, learning language, attention, and emotions through experience.
But he is quick to point out the limits of such ideas. Boucenna does not subscribe to visions of robots living autonomously alongside humans or replacing professionals. In his view, robots should remain tools. They can assist teachers, support therapists, and help researchers understand how humans learn. But they cannot replace the richness of human relationships.
For Boucenna, the objective is clear: use robotics to produce knowledge and develop practical tools for fields like education, health, and neuroscience, guided by one principle: humans remain irreplaceable.