
02.05.2025
AI: the coevolution hypothesis formulated by Luca Pappalardo
Luca Pappalardo is a senior researcher based in Pisa, Italy, at the KDD Lab of ISTI-CNR (Institute of Information Science and Technologies). He is a visiting researcher at Sciences Po from January to May 2025. As digital technology plays an increasingly prominent role in our societies, he presents his research perspectives at the intersection of the social and computational sciences.
How would you characterize your academic and research field(s)? What are its main features or boundaries?
My research lies at the intersection of computational social science, human mobility analysis, and human-AI coevolution. The field is inherently interdisciplinary, combining methods from computer science, complex systems, data science, and the social sciences to study how individuals and societies interact with AI systems. One defining feature is the feedback loop between humans and AI: not only does AI shape individual and collective behavior, but human behavior also steers the evolution of AI systems. This coevolutionary dynamic is becoming increasingly central in understanding the societal impact of algorithms.
What types of materials or data do you work with in your research?
I primarily work with large-scale behavioral data, such as human mobility traces from mobile phones and GPS devices, social media interactions, and usage logs from AI-powered platforms like recommender systems. I also use synthetic data generated through simulation models to study human-AI interactions in controlled scenarios. These datasets allow for the empirical and computational modeling of individual behavior and the feedback effects introduced by algorithmic mediation.
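For illustration, here is a minimal sketch of the kind of simulation model mentioned above: synthetic users repeatedly interact with a toy popularity-based recommender, and their choices feed back into the recommender's statistics. All names and parameters (the epsilon exploration rate, the popularity heuristic) are illustrative assumptions, not the actual models used in this research.

```python
import random

N_USERS, N_ITEMS, N_STEPS = 100, 20, 50
EPSILON = 0.2  # probability a user ignores the recommendation and explores

popularity = [1] * N_ITEMS  # the recommender's view of item popularity
log = []                    # synthetic interaction log

for step in range(N_STEPS):
    for user in range(N_USERS):
        recommended = max(range(N_ITEMS), key=lambda i: popularity[i])
        if random.random() < EPSILON:
            choice = random.randrange(N_ITEMS)  # independent exploration
        else:
            choice = recommended                # follows the recommendation
        popularity[choice] += 1                 # feedback into the algorithm
        log.append((step, user, choice))

# Concentration of choices grows over time: a simple signature of the loop.
top_share = max(popularity) / sum(popularity)
print(f"Share of interactions on the most popular item: {top_share:.2f}")
```

Even this toy loop reproduces the qualitative effect the interview describes: because recommendations shape choices and choices update the recommender, behavior progressively concentrates on a few items.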
What are the main scientific questions you are currently investigating?
My current research explores how AI systems, particularly recommender algorithms, influence and are influenced by human behavior over time. Key questions include:
- How can we design AI systems that balance individual utility with societal values?
- How does the coevolution between humans and algorithms affect diversity, fairness, and collective decision-making?
- What are the measurable signals of algorithmic influence in human trajectories, and how can we design complexity-aware algorithms that mitigate unintended consequences?
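To make the last question more concrete, one candidate signal of algorithmic influence is the diversity of an individual's choices over time. The sketch below computes the normalized Shannon entropy of a choice sequence; the comparison data is invented purely for illustration, and entropy is only one of many possible measures.

```python
import math
from collections import Counter

def choice_entropy(choices):
    """Normalized Shannon entropy of a sequence of discrete choices.
    Values near 1 indicate diverse behavior; values near 0 indicate
    concentration, one possible signal of strong algorithmic influence."""
    counts = Counter(choices)
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    max_h = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return h / max_h

# Hypothetical usage: compare periods with and without algorithmic
# mediation (made-up data, for illustration only).
organic  = ["a", "b", "c", "a", "d", "b", "e", "c"]
mediated = ["a", "a", "a", "b", "a", "a", "a", "a"]
print(choice_entropy(organic), choice_entropy(mediated))
```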
From your perspective, what are the main societal risks or key points of concern regarding artificial intelligence?
First, there is a loss of agency: individuals' choices are increasingly shaped by opaque recommendation mechanisms, which can lead to the homogenization of behavior and the reinforcement of existing biases. Second, AI automates some human cognitive tasks, and in the long run we risk losing the ability to perform them ourselves, making us more and more dependent on AI for everyday activities (from navigating a city to writing text). Third, algorithmic systems may erode democratic values if not properly governed, especially when deployed at scale without transparency or accountability. A critical concern is ensuring that AI systems are aligned not only with individual preferences but also with collective, long-term human goals.
What was the value or benefit for you of spending time at Sciences Po, particularly under the TIERED project?
My visiting period at Sciences Po under the TIERED program has been a great experience. Sciences Po provided a unique interdisciplinary environment to critically examine the societal implications of AI. Interacting with scholars from political science, sociology, and ethics enriched my understanding of how algorithmic systems intersect with institutions, governance, and public policy. It was intellectually stimulating to confront my computational approach with normative perspectives, which helped refine the ethical framing of my research on human-AI coevolution.
Did you participate in any teaching activities during your visit? If so, could you describe them?
I taught an innovative course on Human-AI Coevolution, which was both exciting and intellectually demanding. As the first of its kind, the course was tailored for an audience primarily composed of social science students—quite different from my usual academic setting. I carefully designed the sessions to actively engage students in critical discussions about the societal implications of algorithmic decision-making. Drawing on case studies from my research on mobility data and recommender systems, we explored how AI systems can be designed to align with social values and promote responsible outcomes. The experience was immensely rewarding, and the enthusiastic feedback from students suggested that the course resonated deeply with them.
What collaborations or connections did you establish during your time at Sciences Po?
I established several valuable collaborations during my time at Sciences Po. Together with Prof. Emanuele Ferragina, I co-authored and submitted a paper on the political economy implications of Human-AI Coevolution. I also initiated a new collaboration with Prof. Ettore Recchi, focusing on the analysis of customer purchasing behaviors in Italian supermarkets. In addition, I engaged with early-career scholars working on the ethical and social dimensions of data science, laying the foundation for future interdisciplinary publications and joint workshops. I’m hopeful that these collaborations will lead to a productive stream of research outputs in the years to come!
What advice would you give to future visiting researchers in your field?
Immerse yourself in interdisciplinary dialogue. Sciences Po offers a vibrant intellectual ecosystem where computational researchers can meaningfully engage with normative frameworks. Be open to critical feedback from social scientists; it will challenge your assumptions and strengthen the societal relevance of your work.
How do you see the future of AI impacting society?
AI will increasingly mediate our relationships with information, institutions, and one another, and its influence will only grow in the coming years. This trajectory should inspire not fear but responsibility. We possess the technical and political tools to govern AI in ways that promote social good. As humans and AI systems continue to coevolve, it is crucial that we actively shape this process to serve collective well-being. Steering this coevolution toward positive societal outcomes is not only possible; it is a choice we must consciously make.