Halim Djerroud

Associate Professor in Computer Science
LISV / UVSQ / Paris-Saclay

Scientific Approach

An Assumed Research Stance

My research pursues an approach to embodied, cognitive, and frugal robotics, centered on decision-making as a fundamental scientific problem.

I consider that robotic autonomy cannot be reduced to either local algorithmic optimization or an accumulation of perceptual or motor capabilities. It rests above all on a system's capacity to produce the information necessary for deciding, to anticipate the consequences of its actions, and to self-regulate over time under physical, computational, and social constraints.

This stance leads to considering the robot not as a simple executor of behaviors, but as an embodied agent, engaged in continuous interaction with a real, dynamic environment often shared with humans. In this framework, the decision is never abstract: it is always situated, constrained by the robot's body, its sensors, its action capabilities, and the context in which it operates.

A Foundational Question

All of my work revolves around a central question:

How can we design robotic systems capable of making relevant, adaptive, and explainable decisions in real, dynamic environments that are intrinsically shared with humans?

This question deliberately goes beyond classical issues of performance or optimization. It engages a deeper reflection on the nature of embodied artificial intelligence: acting autonomously in the physical world does not consist solely of reacting to stimuli or maximizing an abstract reward function. It requires a structured understanding of the environment, the ability to anticipate its possible evolutions, and the capacity to inscribe the robot's action within a framework that is comprehensible, predictable, and acceptable to humans.

What I Do Differently: Beyond Dominant Paradigms

This perspective has gradually led me to move away from two dominant paradigms of contemporary robotics, whose limitations become clearly apparent when seeking to go beyond controlled or artificially simplified scenarios.

On the one hand, purely reactive architectures, stemming from behavior-based robotics and subsumption architectures, offer undeniable robustness and execution speed. However, this efficiency relies on an almost total absence of prospective capacity. A reactive robot can avoid an immediate obstacle, but is fundamentally incapable of reasoning about the delayed consequences of its actions, for example when an obstacle is mobile, manipulable, or socially mediated by a human.

On the other hand, end-to-end deep learning approaches have demonstrated spectacular performance on perception or navigation tasks. Nevertheless, they rely on distributed and opaque representations, making the produced decisions difficult, if not impossible, to explain, justify, or audit. In contexts such as assistive robotics, human-robot collaboration, or shared environments, this decisional opacity constitutes a major obstacle to acceptability, trust, and safety.

Faced with these limitations, I advocate for a third way, which I call frugal and explainable cognitive robotics.

A Third Way: Frugal and Explainable Cognitive Robotics

This approach rests on the idea that robotic autonomy can be robust and socially integrable only if it relies on:

  • explicit representations,
  • anticipation mechanisms through simulation,
  • and decision-making processes that are traceable by construction.

This position draws on cognitive science and computational neuroscience, not in a naive biomimetic attempt to reproduce the human brain, but as an effort to understand the computational principles that allow living beings to reason, anticipate, and act effectively in complex environments with limited resources.

First Pillar: Multi-Level Representations for Action

A first fundamental pillar of this vision lies in the use of multi-level representations, ranging from raw geometry to semantics and situated action. A robot cannot make do with a perception of the world reduced to pixels or unstructured point clouds.

I notably distinguish several complementary levels:

  • Geometric level: explicit description of the world in terms of surfaces, volumes, obstacles, and navigable zones, anchored in physical metrics.
  • Topological level: organization of these primitives into structures capturing the connectivity of space, independently of exact distances.
  • Semantic level: categorization of entities and modeling of their functional and social relations.
  • Behavioral level: encoding of affordances and interaction constraints, i.e., the possibilities for action offered by the environment according to context.

This stratification is not a simple stacking of independent layers, but the expression of a central cognitive hypothesis: abstraction is a condition for generalization. A robot capable of reasoning about the abstract notion of controlled passage can transfer its skills between different types of doors without exhaustive relearning.
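As a concrete illustration, the stratification above can be sketched as a set of data structures, one per level. This is a minimal sketch under my own assumptions: all class and field names are hypothetical, not an actual API.

```python
from dataclasses import dataclass, field

@dataclass
class GeometricEntity:
    """Geometric level: explicit metric description (surfaces, volumes)."""
    entity_id: str
    bounding_box: tuple  # (x, y, z, dx, dy, dz), in metres

@dataclass
class TopologicalNode:
    """Topological level: connectivity of space, exact distances abstracted away."""
    node_id: str
    neighbours: list = field(default_factory=list)

@dataclass
class SemanticEntity:
    """Semantic level: categories and functional/social relations."""
    entity_id: str
    category: str                      # e.g. the abstract "controlled_passage"
    relations: dict = field(default_factory=dict)

@dataclass
class Affordance:
    """Behavioral level: a context-dependent possibility for action."""
    entity_id: str
    action: str
    preconditions: list = field(default_factory=list)

# Two physically different doors share one abstract category, so an
# affordance learned for "controlled_passage" transfers between them.
swing_door = SemanticEntity("door_A", "controlled_passage")
slide_door = SemanticEntity("door_B", "controlled_passage")
opening = Affordance("door_A", "open", preconditions=["handle_reachable"])
```

The design point the sketch tries to capture is that the behavioral level attaches to categories rather than to specific geometry, which is what makes transfer without exhaustive relearning possible.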

Second Pillar: Simulation as a Cognitive Decision-Making Mechanism

The second pillar of my approach concerns the central role of simulation in the decision-making process. Here, simulation is not an a posteriori validation tool but an a priori cognitive mechanism, directly involved in decision-making.

Inspired by work on mental simulation and internal models, this approach is based on the idea that the agent explores several plausible futures before acting. Concretely, this translates into the use of multi-agent systems as an internal simulation substrate: entities perceived in the real environment are modeled as agents endowed with physical properties, action capabilities, and specific constraints, while the robot itself is integrated into the simulated world as a cognitive agent.

Before committing to any action, several scenarios are simulated, evaluated, and compared according to explicitly modeled criteria (energy cost, risk, duration, social acceptability). Execution is then supervised by continuously comparing the simulation's predictions against actual observations.
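The simulate-evaluate-compare loop can be sketched as follows. The criteria weights, the toy internal model, and all names are illustrative assumptions, not the actual system: a real rollout would advance the multi-agent world model, whereas here predicted costs are derived directly from plan parameters.

```python
# Weights over the explicitly modeled criteria (illustrative values).
CRITERIA_WEIGHTS = {"energy": 0.3, "risk": 0.4, "duration": 0.2, "social": 0.1}

def simulate(plan, world_state):
    """Toy internal model: predicted cost per explicit criterion.

    world_state is unused in this toy model; a real system would roll
    the simulated multi-agent world forward from it.
    """
    return {
        "energy": plan["distance"] * 0.5,            # proportional to path length
        "risk": 5.0 if plan["near_human"] else 0.1,  # penalize human proximity
        "duration": plan["distance"] / plan["speed"],
        "social": 0.8 if plan["near_human"] else 0.1,
    }

def score(predicted):
    """Aggregate the predicted criteria into one comparable cost."""
    return sum(CRITERIA_WEIGHTS[c] * predicted[c] for c in CRITERIA_WEIGHTS)

def choose(plans, world_state):
    """Simulate every plausible future, keep the lowest-cost plan."""
    scored = [(score(simulate(p, world_state)), p) for p in plans]
    return min(scored, key=lambda sp: sp[0])[1]

plans = [
    {"name": "direct", "distance": 4.0, "speed": 1.5, "near_human": True},
    {"name": "detour", "distance": 6.0, "speed": 1.0, "near_human": False},
]
best = choose(plans, world_state={})  # the safer detour wins under these weights
```

In the same spirit, the chosen plan's predicted criteria would be retained so that execution can be supervised by comparing them against what is actually observed.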

Third Pillar: Cognitive Frugality and Meta-Decision

The third pillar, today the most central to my research, concerns cognitive frugality and meta-decision. An intelligent system does not merely decide what to do; it must also decide how to decide.

Faced with a given situation, several decision-making strategies are possible: complete planning, reuse of past experiences, simple reactive behaviors, or explicit delegation of the decision to a human. The choice between these strategies must be guided by a rational evaluation of criteria such as urgency, criticality, available computational resources, or the human context.

This approach belongs to the tradition of bounded rationality, while distinguishing itself through its concrete anchoring in embedded robotics. It also carries an ethical and ecological dimension: integrating computational parsimony as a design principle contributes to more sustainable and socially responsible robotics.
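As a toy illustration of "deciding how to decide", the choice among the four strategies can be sketched as a meta-level rule over explicit situation features. The thresholds and feature names are my own illustrative assumptions, not the actual evaluation model.

```python
def select_strategy(urgency, criticality, cpu_budget, human_present):
    """Meta-decision: pick which decision-making machinery to engage.

    urgency, criticality: in [0, 1]; cpu_budget: fraction of compute
    currently available; human_present: whether a human can be consulted.
    """
    if criticality > 0.8 and human_present:
        return "delegate_to_human"   # high stakes: hand the decision over
    if urgency > 0.7 or cpu_budget < 0.2:
        return "reactive"            # no time or compute for deliberation
    if cpu_budget > 0.6:
        return "full_planning"       # resources allow prospective simulation
    return "case_reuse"              # otherwise, fall back on past experience
```

Because the rule reads only explicit features and returns a named strategy, every meta-decision is traceable: one can always state which condition fired and why.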

Methodological Choices

My methodological choices stem directly from this vision:

  • Symbolic formalisms: for explainable and auditable decisions,
  • Multi-agent systems: for internal simulation,
  • RGB-D perception: as a compromise between informational richness and frugality,
  • Transferable learning: adapted to the realistic constraints of real environments.

Vision and Ambitions

Through all of these works, I defend the idea that the future of autonomous robotics does not lie in a race for computing power, but in the capacity to mobilize the right decision-making strategy at the right time.

Robotics that is explainable by construction, anticipative through simulation, frugal by principle, and collaborative through understanding constitutes, in my view, a credible path to sustainably integrate robots into our daily lives.
