Dimitris Dimitriadis, futurist: ‘In 10 years, we can have a computer with the capacity of humanity for 1,000 euros’.
- Babis Chatzakos
- September 24, 2024
- Foresight
- AI, Elpais, foresight, futurist, innovation, TheFutureCats
The Greek researcher who advises governments and companies on technological advances talks about what he thinks the next decade will be like.
Dimitris Dimitriadis, a Greek futurist who researches the consequences of technological advances and the author of ‘2049’.
Dimitris Dimitriadis defines himself as a futurist and works at the Institute for Futures and Foresight Research (IFFR DAO). He specifies that he does not predict the future but carries out research for institutions, such as the Special Secretariat for Strategic Foresight of the Greek Prime Minister’s Office, and for companies so that they can anticipate new developments and their consequences. Born in Thessaloniki 42 years ago, he collaborated with the cybersecurity company Kaspersky at its latest meeting in Athens and is the author of 2049, published in Greek last year by Key Books, translated into English and with a Spanish version expected by the end of the year. Its subtitle sums up his vision: A hopeful perspective on the future of humanity. It is the message he brings to the multinationals, EU institutions and educational institutions he collaborates with in order, he says, to help leaders take advantage of new technologies.
Question: What is a futurist?
Answer: We cannot predict the future; we foresee it. As futurists, we don’t say there is only one future; we say there are futures. But we cannot predict them, no matter how much data we have. The important thing is to be prepared, to learn to make better decisions today and to extrapolate our thinking, to scan the horizon for the convergence of technologies, social norms and other trends. We have all these new things from a technology perspective, but we also have trends in society, in social norms and in the economy, and we need to understand all these forces and scan the horizon to be better prepared. We build scenarios and do horizon scanning and planning with governments and with large organizations. We try to facilitate the process of anticipating the future, not predicting it.
Q. Are you saying that in 10 years we will have a computer for 1,000 euros with the capacity of the human mind?
A. I base this on the law of accelerating returns [attributed to US engineer Raymond Kurzweil, who argues that any evolutionary system, including technology, tends to increase exponentially and accelerates the rate of change]. Right now we have computers that do computations at about the same capacity as the human brain. If we continue with this acceleration, within 10 years, we can have one with the capacity of the whole of humanity for 1,000 euros.
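To unpack the arithmetic behind that claim, here is a minimal sketch in Python. The figures are illustrative assumptions, not numbers from the interview: a Kurzweil-style estimate of roughly 10^16 operations per second for one brain and a world population of about 8 billion. Under those assumptions, it computes how fast price-performance would have to keep doubling for a 1,000-euro computer to go from one-brain capacity to all-of-humanity capacity within a decade.

```python
import math

# Illustrative assumptions (not figures from the interview):
# - one human brain performs roughly 1e16 operations per second
# - "the capacity of humanity" is approximated as ~8 billion brains
BRAIN_OPS_PER_SEC = 1e16
WORLD_POPULATION = 8e9

HUMANITY_OPS_PER_SEC = BRAIN_OPS_PER_SEC * WORLD_POPULATION  # ~8e25 ops/s

# If a 1,000-euro computer matches one brain today, how quickly would its
# price-performance have to double to match all of humanity in 10 years?
growth_factor = HUMANITY_OPS_PER_SEC / BRAIN_OPS_PER_SEC  # ~8e9
doublings_needed = math.log2(growth_factor)               # ~33 doublings
years = 10
doubling_time_months = years * 12 / doublings_needed

print(f"Required growth factor: {growth_factor:.1e}x")
print(f"Doublings needed: {doublings_needed:.1f}")
print(f"Implied doubling time: {doubling_time_months:.1f} months")
```

Under these assumed numbers, the claim implies price-performance doubling roughly every three to four months for a decade, which is the kind of accelerating pace the law of accelerating returns describes.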
Q. Is there any reason to panic?
A. In some ways it is frightening, but it is also hopeful, because this computational capacity can solve many real human problems, like finding new proteins or curing diseases. If you look at it from the perspective of humanity and how we can use it to move forward, I think it’s really hopeful, not the other way around. Of course, malicious actors will always have access to these technologies, but the good actors, let’s say, the positive ones and the scientists working on the other side, are doing it only with the human being in mind.
Q. In the future you anticipate, will there be driverless cars?
A. That’s a great example. As millennials, we grew up with the idea of flying cars and we don’t have them. But autonomous road and sea freight vehicles, for example, are inevitable. Then there will be driverless cars in cities. Autonomous driving is a big thing because so many lives are lost on the roads, so they have to be developed. They are expensive now because of the systems and sensors they need, but think about how far the capacity of some technological tools has come in 10 or 20 years. At the same time, every piece of this technology has to take humans and all the policy guidelines into account. For example, it’s really hard to have policy guidelines for drones right now. But we are close, and we have to anticipate.
Q. What about medical care, will it be artificial intelligence (AI)?
A. AI is really good at recognising patterns, so with mammograms or X-rays it is great because it can learn from a billion images and understand what it’s looking at. But when you have to break the news of a disease, you don’t need an AI or a message on your phone; you need someone with empathy, someone you can relate to, someone you can trust. The human needs empathy and we will never replace this part.
Q. What about virtual teachers?
A. Virtual teaching is also a great thing. A student can virtually walk with Aristotle in the agora and, through this immersive AI avatar, learn more, because it’s not something you read or something you’re shown; it’s an experience, and we learn through experiences. We can build small language models that act as subject-specific teachers on a 360-euro phone with eight gigabytes of memory and that can teach and work through the whole first-, second- and third-grade syllabus.
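As a rough illustration of that idea, here is a minimal sketch of a small, quantized language model acting as a primary-school tutor. It assumes the llama-cpp-python bindings; the model file path, prompts and parameters are placeholders chosen for illustration, not a reference to any specific product from the interview. A quantized model of a few billion parameters typically fits well within an 8-gigabyte memory budget.

```python
# Minimal sketch: a small quantized language model as a subject-specific tutor.
# Assumes the llama-cpp-python bindings; the model path and prompts below are
# placeholders chosen for illustration.
from llama_cpp import Llama

# A quantized model of a few billion parameters typically needs only a few
# gigabytes of RAM, within the budget of a mid-range 8 GB phone.
llm = Llama(model_path="models/small-tutor-q4.gguf", n_ctx=2048)

messages = [
    {
        "role": "system",
        "content": "You are a patient primary-school maths tutor. "
                   "Explain each step simply and end with one short question.",
    },
    {
        "role": "user",
        "content": "Why does 3 x 4 give the same answer as 4 x 3?",
    },
]

out = llm.create_chat_completion(messages=messages, max_tokens=256)
print(out["choices"][0]["message"]["content"])
```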
Q. Is there a reason to be technophobic?
A. We are technophobic because of the narrative around technology. Science fiction films and fiction in general always need a villain, but in our real lives we must begin to trust our technology because, as trust takes root, we will have more education and information integrity. This is a very slow process: if you want to change an education system, you need 20 years, so if we start with technology now, we can change things in a generation. That’s why I say we need an international, multi-scale approach to how we perceive truth, values, social cohesion, our fellow human beings and how we see our parents. It’s not just about technology.
Q. And how do you ensure universal access to technological advances?
A. Access to technology can be democratized and decentralized through policies. We need to build the policy guidelines. For example, we have the AI Act in Europe and it is, let’s say, a very difficult and fear-driven piece of legislation, because it takes a risk-based approach, and everything with that approach sits on the side of fear. But, on the other hand, it has fundamental elements to make AI egalitarian and more diverse.
Q. Is it dangerous for people to replace their human relationships with AI?
A. There are two sides. The good side is that AI can make a person happier, prevent them from committing suicide or improve their social skills through conversation with an AI so that they are more confident in real life. The bad side is that AI completely replaces personal relationships. But, when we have these two concepts of dystopia and utopia, humans are always in the middle. You can have an imaginary friend or a virtual pet that never dies, but you can learn things from that. I’m always on the positive side.