Four Myths About AI: Researchers Reveal What You Shouldn't Expect From It
With artificial intelligence, you’d better be careful, cautious, and clever, but not cynical. We asked researchers what one should expect from AI, and what one shouldn’t.
Ville Blåfield, 12.01.2026 | Long Forms
Artificial intelligence entered the workplace before we had time to agree on the rules. For researchers, though, it’s nothing new—not even news of this decade.
We asked four AI researchers what one should expect from artificial intelligence and where the expectations often go wrong.
AI won’t need to become humanlike

We shouldn’t expect AI to think like a human, and we may not even want it to, says Professor Antti Oulasvirta.
Antti Oulasvirta has what you might call a positive problem. A sabbatical year in Berkeley, California, opened doors to Silicon Valley giants like Meta and Google. He built valuable new networks in San Francisco and Los Angeles, but maintaining those connections after returning to Aalto University in Finland has required stretching the limits of his day.
Now, Oulasvirta finds himself in remote meetings with the U.S. West Coast early in the morning or late at night, while fulfilling his duties on Aalto University’s campus in Espoo during the day.
Nevertheless, new connections are pure gold for an AI researcher.
“It’s always good to see how things are done at the top,” Oulasvirta says.
In Finland, he leads research at Aalto University’s Computational Behavior Lab. He studies human-computer interaction and develops computational models to understand and predict human behavior.
Oulasvirta breaks AI’s capabilities into three categories: imitation, interpretation, and prediction.
“Interpretation goes beyond imitation: to interpret another person, you need to reason about what the causes behind their actions might be. But prediction, when it comes to human behavior, is the hardest of all.”
According to Oulasvirta, AI has already made huge strides in imitation. He says language models can already carry on humanlike conversations to the point where we only realize we’re talking to a machine after about ten minutes. “But then it starts to break down.”
Interpretation and prediction are much harder for AI because they require a deeper understanding of a person’s motivations. To predict human behavior, AI would need to understand what the person is aiming to achieve.
Research is aiming to develop AI in these areas, but not with the goal of making AI humanlike, says Oulasvirta.
“Most AI researchers aren’t trying to create something humanlike. The goal is to develop AI that can perform increasingly complex tasks. Whether it does them in a humanlike way or not doesn’t really matter.”
In fact, trying to replicate the human mind would waste much of AI’s potential. It’s a fallacy to use the limits of human intelligence as the yardstick for all intelligence. Oulasvirta compares it to searching for life in space that resembles our own civilization—we’re limited by our own imagination.
“If you think about what AI is optimized for, there’s no inherent reason it should think like a human. It might arrive at humanlike solutions to tasks like planning or chess, but then again, it might not,” Oulasvirta says.
“You can’t expect AI to think like a human. And there’s no reason it should.”
Understanding human motivations would still be useful for AI. If we could teach AI to grasp what we’re trying to achieve through our communication and behavior, it could assist us more effectively.
“One challenge is figuring out how to train AI to understand human behavior and emotions.”
Emotions and deep motivations probably can’t be learned just by crunching internet data. Oulasvirta says researchers need to find a new way to collect large, meaningful datasets for AI to learn these skills.
In his own lab, Oulasvirta combines behavioral sciences with machine learning. While that might sound like an unusual pairing, the approach is actually not that new. Oulasvirta points out that researchers have been searching for patterns in human behavior for over a century.
“It’s all about conditioning—something that goes all the way back to Pavlov’s theory.”
Who?
Antti Oulasvirta, 45, is a professor at the Department of Information and Communications Engineering at Aalto University. He leads Aalto’s Computational Behavior Lab (cbl.aalto.fi), the User Interfaces research group, and the Interactive AI research project at the Finnish Center for Artificial Intelligence (FCAI).
When did you first hear about AI?
“It was in 1999 when I started studying cognitive science. It was a very different time, but of course, AI research was already happening.”
When has AI helped you?
“Here’s an example from when I was working on my dissertation. In 2005 or 2006, we conducted a major study with Nokia, collecting data on where people focused their attention while using smartphones. We filmed people with their phones all over Espoo and Helsinki. When it was time to analyze the data, we turned to machine learning experts. It was eye-opening to see how far those models already were back then.”
When has AI failed you?
“We had a promising master’s student in a project, but it turned out he had used GPT to code everything and didn’t understand how the code actually worked. We had to redo the whole thing. If you don’t understand what you’re doing with AI, you can’t trust the results. Outsourcing your thinking to AI is very dangerous.”
Regulation is good for the development of AI

A responsibly developed AI based on Nordic values could become a global competitive advantage, believes Nitin Sawhney.
Nitin Sawhney can’t be accused of being overly enamored with AI; quite the opposite. He is an AI researcher, but a cautious, even skeptical one.
"Any technological development can always be irresponsible and exclude the most vulnerable groups. The pace at which AI is being developed greatly increases this risk," Sawhney says.
In his view, AI hype leads to careless, lazy discourse.
“We have to break through the hype and bring more nuance to the conversation around AI.”
That said, Sawhney, currently a visiting researcher at the Research Institute of the University of the Arts Helsinki, also sees great potential in AI, not just in the business world but also in the public sector. He has developed AI research projects in collaboration with Finnish institutions such as THL (National Institute for Health and Welfare), Kela (Social Insurance Institution), and the cities of Espoo and Helsinki.
"The public sector’s mandate comes from serving the common good," Sawhney emphasizes.
"If I were the CEO of a Nordic company, I’d see this as a huge opportunity. Responsible AI developed according to Nordic principles could be a global competitive advantage."
And with that mandate comes responsibility.
Services from social insurance institutions or immigration officials, for instance, should not be handed over to AI unless one can be certain that the technology can provide fair, transparent, secure, and equitable service. And even then, it would be better to have strong human oversight.
Lack of transparency and accountability is what makes AI models problematic, Sawhney claims.
"We must be extremely cautious when introducing these systems into public services. It’s totally fine to use AI systems in, say, creative expression or game design, but when they begin affecting people’s lives, we must be far more demanding when it comes to auditing, monitoring, and transparency," Sawhney says.
In business, it’s often assumed there is a trade-off between ethical and innovative AI. Sawhney argues that, at least in public-sector systems, such thinking should never be accepted without critical deliberation with all affected stakeholders.
Before entering academia, Sawhney worked at a fast-growing AI tech startup in New York. Even then, his focus was on designing human-centered collaborative AI systems, but he saw firsthand how ethical concerns were not always prioritized in the American tech industry. He believes the Nordic countries and the EU could turn this contrast into a competitive edge.
"The Nordic countries have traditionally been high-trust societies. The EU has also developed solid regulations for AI. The EU AI Act gets a lot of criticism, but I think it’s a good start. If I were the CEO of a Nordic company, I’d see this as a huge opportunity. Responsible AI developed according to Nordic principles could be a global competitive advantage."
"Use AI only to support your own original thinking."
There’s market space for this — especially since public sector actors and industry providers worldwide will soon need to develop AI solutions they and their citizens can trust.
Sawhney also differs from many AI researchers in that he says he uses generative AI applications very little himself due to the underlying ethical concerns.
"For me, it’s also an energy-consumption issue: if we all start using AI applications for every trivial task, our energy consumption will skyrocket. But more broadly, I just don’t trust AI’s analysis in important decisions," he says.
"At the university, we should protect our writing and thinking skills. I tell this to students too: use AI only to support your own original thinking."
Who?
Nitin Sawhney is currently a visiting researcher at the University of the Arts Helsinki.
In his research, he develops inclusive, responsible, and human-centered AI and has collaborated with a wide range of societal actors in his projects.
When did you first hear about AI?
"I studied at MIT Media Lab in the late 1990s and learned about AI from my professors, including Marvin Minsky, Patties Maes, and Sandy Pentland. My recollection was how it tied to the cognitive aspect of how the brain works, but also can be applied to designing human-machine interaction in creative ways.”
When has AI helped you?
"AI has helped me in the context of doing research on tying human economic development indicators (HDI) of literacy from a UN study to patterns of mobile phone usage by poor rural women in India in a project I conducted at MIT using support vector machines (SVM), a statistical machine learning method. It was a highly surprising result, which I only later realized was true based on the probabilistic data that underlies such findings.”
When has AI failed you?
"AI fails me every day with biased and erroneous results, everything from chatbots to voice recognition that makes the wrong assumptions, as it has no embodied learning, social or environmental context, unlike other humans or animals we interact with every day. Thankfully, I appreciate that human and nonhuman interactions will always be far superior and enjoyable than AI will ever be. At least for most of us who prefer to live on a planet with empathy, curiosity, and imagination."
AI won’t do the thinking for you

As leaders learn to use AI more and more, their own thinking may become lazier, says Eeva Vilkkumaa.
"Language models don’t care about truth."
Eeva Vilkkumaa says this with a gentle smile, as a reminder. "AI, in itself, doesn’t understand anything."
That’s why you shouldn’t outsource your thinking to a machine. Vilkkumaa, an Assistant Professor at Aalto University’s School of Business, develops mathematical models to support, for example, strategy processes. She sees how business leaders have become enthusiastic about using AI as a tool in their daily work.
"What worries me is that, with language models making writing so easy, outsourcing writing to AI may eventually affect how leaders think. Good writing requires good thinking, and if a person doesn’t do the mental work behind the text, their thinking doesn’t develop either," she says.
And it’s not just thinking skills that erode — so does commitment to ideas. If a leader hasn’t personally wrestled with the different options, it’s harder to commit to a decision and lead its implementation. If they haven’t considered the risks or values behind a decision, it’s harder to credibly communicate it to others.
It’s essential to understand what kind of thinking or decision-making AI can help with — and where it falls short.
"If you want to automate a production process, then AI can definitely help. But if you need to make long-term strategic decisions that require a deep understanding of different options, that takes human thought," Vilkkumaa believes.
That doesn’t mean strategy processes can’t use math models or AI at all. Vilkkumaa simply calls for clarity about when in the process AI should be used.
"Strategic management professionals have always been skeptical of these kinds of tools. The concern has been that models can’t properly account for various uncertainties. I understand the worry — no one wants to outsource complex decision-making to mathematical models."
A good rule of thumb? Don’t let AI define your strategic objectives. Humans should remain responsible for setting them.
"Decision-making always requires objectives — and objectives are value judgments," Vilkkumaa reminds us. And values are inherently subjective. "You can’t outsource them to AI."
When objectives have been personally thought through and committed to, it’s easier to communicate and sell the chosen direction — to your own organization, and to other stakeholders.
"Any time a leader makes decisions, they also need to motivate others to implement them. That’s a lot easier if the leader has internalized the chosen objectives. Then you can use models or AI to discuss how to best achieve them."
"Why would anyone bother to read something that no one bothered to write?"
Vilkkumaa says that here, AI can outperform humans — particularly when it comes to analyzing multiple strategic paths in parallel. People tend to rush decisions, because the human mind can only imagine a few alternative futures at once.
"There are models that help keep multiple paths alive all the way through the decision-making process," Vilkkumaa explains.
So: AI as a thought partner — not the final author. There’s one additional, undeniable downside to AI-generated texts: they’re also boring to read.
"Why would anyone bother to read something that no one bothered to write?"
Who?
Eeva Vilkkumaa, 44, is an Assistant Professor at Aalto University’s School of Business. She teaches courses on business analytics and behavioral decision theory.
In her research, she develops mathematical models to support decision-making and strategic processes in both private companies and public-sector organizations.
When did you first hear about AI?
"As a freshman in 1999, I visited the speech recognition lab in Otaniemi. Wow — that’s a quarter-century ago!"
When has AI helped you?
"In creating grading rubrics for my courses."
When has AI failed you?
"I recently asked AI to find a reference for a specific term. When I got the answer, I thought, 'Hmm, this sounds suspicious.' AI insisted the reference was accurate — but it wasn’t. Fortunately, I know the literature well myself."
AI only succeeds when people are excited about it

Companies whose leadership is skeptical about AI often also fail to utilize it successfully, says Natalia Vuori.
According to a recent study, 74 percent of companies that adopt AI systems either experience significant challenges or outright fail in making the new technology work for them. Together with her research team, Associate Professor Natalia Vuori from Aalto University wanted to understand why.
One potential reason is that many leaders fear losing control over strategic decision-making and worry that relying on AI would undermine their expertise as strategists.
In one of her studies, Vuori embedded herself in seven management boards across three companies. She became a fly on the wall, observing whether and how the boards used AI when making strategic decisions or developing roadmaps.
“I traveled to different countries and cities to follow the management board's meetings and workshops. I also interviewed executives before and after their workshops, and had many informal conversations during coffee meetings, and while waiting for the plane.”
Some management boards started using generative AI to support strategy formulation, implementation, and strategic decision-making. Vuori observed how incorporating AI into strategy work can speed up decision-making and improve its quality.
Other teams stuck to traditional strategy tools, such as SWOT analysis.
“In those teams, there was still a strong belief that strategic work is a human game. In these teams, executives took pride in their role as strategists. The idea of delegating any part of this process to AI challenges long-standing professional identities. As a result, they not only avoided using AI themselves, but also fostered a culture of skepticism and stigma around its adoption among their peers.”
"Leaders who approached AI with curiosity turned it from a buzzword into a new, powerful tool in their leadership arsenal."
But Vuori’s most important insight came from observing the other kind of management board teams.
“Leaders who approached AI with curiosity turned it from a buzzword into a new, powerful tool in their leadership arsenal. Their excitement and hands-on experimentation during strategy work sparked a culture of AI enthusiasm across their teams and the company.”
“In companies that succeed with AI, internal AI and IT specialists are invited to participate in strategy formulation and execution. In contrast, in failing companies, the attitude problem toward new technologies is visible even in day-to-day operations,” Vuori says.
“Technology-skeptical leaders view AI and IT merely as a support function for the company.”
Another study by Vuori and her research team revealed that companies might struggle to adopt or benefit from AI because of the absence of a new kind of leadership: AI leadership. Leaders must not only understand the potential of AI but also build emotional trust in it across the organization, Vuori states.
“The real challenge is that employees don’t all see AI the same way,” Vuori says. “You can’t lead everyone the same way because some employees need support specifically to overcome skepticism toward AI reliability and performance, while others need stronger support to overcome their anxiety and fear of relying on AI.”
She sees a significant gap in AI competencies.
“Even though 75 percent of business leaders name AI as one of their top three strategic priorities, on average, only 25 percent of the staff have been trained to utilize it. That alone is a major reason for the high failure rate among companies.”
Who?
Natalia Vuori, 44, is an Associate Professor of Entrepreneurial Leadership and Strategy in the Department of Industrial Engineering and Management at Aalto University.
With her research team, she has studied what leads to successful or failed AI adoption in companies.
Her research resulted in the academic articles Why some companies use generative AI to foster their strategic practices and performance, while others never do and It’s amazing – but terrifying! Unveiling the combined effect of emotional and cognitive trust on organizational members’ behaviours, AI performance, and adoption.
When did you first hear about AI?
“I first heard about AI in 1992 by watching the movies The Lawnmower Man and Terminator. But in a way, I’ve always followed the development of AI. From the start, computers involved mathematical models and their development. As early as 1950, in his paper Computing Machinery and Intelligence, Alan Turing posed the famous question: Can machines think? So, when the AI hype began, I was surprised it was being talked about as something new. Maybe because I have an engineering and technological background, it never felt like a new phenomenon to me. But then generative AI came. This was something groundbreaking."
When has AI helped you?
"It helps me constantly, for example, in my teaching. With AI, I can easily create really cool simulation exercises for students. It really enriches my teaching. It’s incredibly helpful whenever I work with companies to build their strategy implementation roadmap.”
When has AI failed you?
“It has never really failed me, because I understand I can’t expect it to do the work for me. The results always need to be double-checked, and you can never fully trust AI. It’s just an assistant.”