Eiko Fried has been appointed professor of Mental Health & Data Science. This combined chair neatly fits the view that understanding complex mental health issues requires the integration of statistical methods. 'The idea that mental health problems are monocausal entities with simple etiologies is no longer plausible.'
Your chair is a response to the growing need for a stronger connection between the field of clinical psychology and statistics. Why have these two areas not yet been sufficiently integrated?
'In 2017, I wrote a tongue-in-cheek blog post pointing out how many competencies we expect psychologists to have. They should be good experimentalists and empiricists. They need a solid foundation in measurement, including modalities like EEG, MRI, wearable data, or ecological momentary assessments. They require a solid theoretical background for the topics they are studying. And they should be good at statistics, machine learning, and programming. In 2025, we now add topics like participatory research, qualitative data, open science knowledge, AI competency, and a background in causal inference … and perhaps, just perhaps, we're asking a little too much of psychologists.
'Science is not only better as a team sport, but also a lot more fun'
The solution to this problem is obvious: teamwork! We put a preprint online just a few days ago, written by a highly interdisciplinary team of colleagues from clinical psychology, neuroscience, computer science, engineering, ecology, and other fields, arguing that scientific progress will critically hinge on our ability to engage in collaborations and to integrate work from across domains of science. This is why the new chair is devoted to Mental Health & Data Science: to bridge gaps, get people to work together, and have them share their expertise. Science is not only better as a team sport, but also a lot more fun.'
What is the most important question that clinical psychology still needs to answer? (Roy de Kleijn)
'I personally think we have not really addressed the core question of what mental health problems actually are. We have conflated mental health problems (complex within-person processes) with the diagnoses by which they are classified (clinically useful idealizations that facilitate treatment selection and prognosis). And then we have devoted most of our resources to studying these pragmatic labels, such as 'major depression' or 'schizophrenia', in case-control studies, rather than studying people's varied experiences of mental health problems. I just submitted a new paper on the topic, and it is something I'd love to work on more in the future. Come to think of it, I'd love to teach a course on the topic! Let's see what 2026 brings.'
What do you see as the biggest challenge in bringing these worlds together? In what sense do they speak different languages that you can help translate?
'I've spent a lot of time in both worlds over the last 15 years, and there is plenty of good news. The two worlds overlap substantially, as you can see in our research master programs. And people do want to work together: model developers commonly ask me for application examples or data, and applied psychologists often ask for help with design, measurement, or analyses. But practices and language use can differ. Transdisciplinary work is fantastic, but it requires dedicated efforts from all sides to understand each other, and there are common traps we need to learn to navigate (e.g., psychologists often think of 'prediction' as temporal, whereas methodologists often use the term to denote explained variance). This process also includes understanding how much (often invisible) effort each side puts into collaborations, and acknowledging this work (e.g., via authorships).
The one recommendation I have is to work together earlier: when you think about extending your model or applying it to a particular dataset to showcase its utility, get feedback from applied folks early in the process. And when you plan a study, get methodologists involved before data collection starts, not after.'
At what stage are you currently in developing an algorithm for personalised prediction of depression? (Bunga Pratiwi)
'In our ERC-funded WARN-D study, we have been very busy developing a personalised early warning system for depression. Obtaining the funding and carrying out the work has only been possible thanks to a highly transdisciplinary team bridging the gap between mental health and data science.
Current efforts for the warning system include nomothetic (group-level) and idiographic (personalised) prediction of depression severity, as well as an open machine learning competition for predicting depression onset. Everybody is invited to participate; the competition should be online in early January 2026!'
You will be working in two different units within Psychology. This set-up is quite new. How will you organise this in practice?
'Great question: part of this will definitely be figuring things out in the next year. A few things come to mind. First, I will be involved in a number of themes in both units that align with my background, such as "Depression & Suicide Prevention", "Responsible Research Methods", and "Applied Psycho- and Sociometrics". Second, I will teach and supervise students in both units. Third, I will continue talking to educators and researchers in both units to figure out what obstacles and wishes there are regarding transdisciplinary work, and how we can better facilitate team science efforts.'
'Mental health problems emerge from dynamic, complex, biopsychosocial systems'
Why is better integration of statistical methods essential for formulating answers to complex issues surrounding mental health?
'Mental health problems emerge from dynamic, complex, biopsychosocial systems; the idea that mental health problems are monocausal entities with simple etiologies is no longer plausible. This implies that we need theories, measures, and statistical models that do justice to these complex issues. Luckily, systems and network sciences have made a lot of progress in recent decades, so we don't have to start from scratch. And there is much promising work showing that collaborations are possible and provide fruitful avenues to better describe, predict, treat, and explain mental health problems.'
Network models can help understand the transdiagnostic factors that underlie many mental health problems. What factors are already known?
'Using network models that aim to estimate the biopsychosocial systems giving rise to mental health problems is a great example of bridging the gap between clinical psychology and statistics. The field has identified many transdiagnostic factors, and entire research frameworks such as NIH's Research Domain Criteria are built around them. Factors like sleep problems, avoidance behaviors, and emotion dysregulation are important targets. But in my view, not enough attention has been paid to social determinants of mental health, such as housing instability, job insecurity, food insecurity, neighborhood disadvantage, social isolation, and stigma. These transdiagnostic factors call for system-level interventions, structural solutions such as strengthening welfare and social safety nets, instead of our current focus on individual-level interventions such as psychotherapy.'
Which major findings in your field do you think will survive the replication crisis? (Anne Krause)
'Explananda are robust phenomena, recurrent features of the world that require explaining. The fact that avoidance tends to be a bad thing in the long run and makes anxiety problems worse is a phenomenon. Sleep problems are very common in people with mental health problems. Women are more often diagnosed with some mental health disorders than men. And adverse life experiences predispose people to a wide variety of mental health problems. These and many similarly robust phenomena will survive the replication crisis.
Explanantia, on the other hand, are the things that explain phenomena, usually theories.
This makes it obvious that we need robust phenomena before we start explaining them. The replication crisis has shown that we should do more of the former and less of the latter: we've wasted too many resources trying to come up with theories that explain things that were never things in the first place.'
You mention that 'clinical theories are often narrative and imprecise', and instead propose 'theories that are spelled out in mathematical equations'. Could you name an example of a theory that was once widely accepted, but has since 'faded away'?
'One of the most discussed problems in the literature on psychological theories is that they are often narrative and strategically ambiguous: they don't make precise predictions, they can be patched post hoc ('hidden moderators'), and they can absorb most empirical results. In short, narrative theories often cannot be refuted by data. So they just fade away as the field loses interest, as famously bemoaned by Paul Meehl in 1978. In my research area, some of the theories that have faded away include theories on subtypes of depression (such as exogenous vs. endogenous depression) as well as biological theories such as the monoamine depletion theory of depression.'
Is there more to therapy than the placebo effect and social contact? (Roderick Gerritsen)
'The short answer is "yes". First, there is a sufficiently large evidence base to conclude that some treatments do work better than placebos. Second, there is solid evidence that some digital interventions (in the absence of any social contact), for instance based on teaching certain types of skills, work better than placebos.'
You note that some questionnaires, such as the Hamilton Scale used to measure depression, are more than sixty years old. Why are these outdated tools still in use?
'Hamilton published his scale in 1960, based on what we knew about depression in the 1950s. I believe he would be genuinely shocked to learn that his scale, unaltered after 65 years, is still the most commonly used scale in clinical trials for depression today. And don't get me started on the fact that Hamilton expressly instructs researchers to use this scale only with already diagnosed inpatients, and to avoid using its "total crude score", instructions that are nearly completely ignored today.
Generally, there are several reasons outdated scales remain in use. One is reliance on tradition: "we have always done it like this". Another is that psychologists often show what Jessica Flake and I have termed a Measurement Schmeasurement attitude. And finally, all psychologists should read Hasok Chang's 2004 book Inventing Temperature, in which he shows that progress was only possible due to epistemic iteration: an iterative cycle in which small improvements in theories lead to small improvements in measures, which in turn enable further improvements in theories. This is very rarely done in our field: most scale developers publish scales based on initial validation efforts, but never iterate on them.'