
In the age of AI, siloed thinking is a national liability

We live in a time of explosive knowledge growth, accelerated by AI in every domain from art to quantum physics.

By: Pooja Arora
Last Updated: December 7, 2025 02:01:55 IST

In an era long before modern academia, with its silos and its jargon-bound expertise, polymath scholars like Hemachandra (1088–1173 CE) integrated knowledge across fields. Hemachandra, a Jain monk and adviser to kings, was renowned as ‘Kalikālasarvajña’, the omniscient of his age, for contributions to poetry, mathematics, grammar, philosophy, and even political theory. In ancient India, centers of learning such as Nalanda encouraged students to study ‘every branch of knowledge’, and the pursuit of wisdom was seldom divided into rigid disciplines. Over the centuries, however, education worldwide narrowed into specialization. The Renaissance polymath gave way to the modern expert ensconced in a silo.

We live in a time of explosive knowledge growth, accelerated by AI in every domain from art to quantum physics. Hyper-specialization is ill-suited to problems like climate change, pandemic response, or AI governance, which span multiple fields by nature. Fortunately, AI itself is helping shatter these silos. Modern AI, especially large language models, acts as a force multiplier for human intellect, breaking down barriers between disciplines. In effect, AI democratizes expertise: vast repositories of art, science, and literature are now a few keystrokes away for anyone. Advanced language models and tutoring systems make information and skills accessible to all, from quantum mechanics to classical music, enabling individuals to transcend traditional boundaries and become ‘modern polymaths’.

This shift is leading to reforms in education. Where 20th-century curricula enforced strict subject silos, new approaches emphasize breadth alongside depth. Even in India, the National Education Policy 2020 explicitly aims to ‘break disciplinary silos’ and return to a holistic model, ‘rebooting the legacy’ of ancient multidisciplinary learning. The NEP recognizes that today’s job market prizes ‘multiple capacities rather than specialization in one exclusive field’. In the digital era, a software engineer may need an understanding of psychology for user-centric design; a doctor may draw on data science for diagnostics; policymakers grapple with technical nuances of AI and climate science. Knowledge is interlinked, and breakthroughs often happen at the intersections.

The erosion of silos is not only a technical or educational phenomenon but also a social and political one. A world of freely flowing knowledge demands new ways of organizing people and power. When information spreads openly, it can produce seismic shifts: financial bubbles burst, or revolutions ignite when everyone knows that everyone knows a truth that was formerly suppressed. In our hyper-networked age, the transition from private understanding to global common knowledge can be lightning-fast, toppling old hierarchies overnight. This has profound political implications. Political theory traditionally assumed relatively slow flows of information and stable institutions of authority. Now, leaders and citizens alike must reckon with a landscape where ideas go viral and legitimacy can crumble in a social media storm.

One crucial factor here is trust. High-trust environments encourage the open exchange of ideas and risk-taking, which are vital for innovation and adaptation. By contrast, high-context, low-trust societies, where communication is opaque, social relations are organized around clan and kinship ties, and institutions are weak, risk being left behind if they cling to old ways. In such settings, talent is often recognized through informal networks or nepotism rather than merit, and people hesitate to share bold ideas for fear of censure. Studies suggest that low-trust cultures stifle creativity and progress, as individuals become hesitant, silent, and unwilling to challenge the status quo. On a broader scale, economists have found that low social trust raises transaction costs, impedes development, and discourages investment in the future. A clear example can be seen in workplaces: companies with secretive, distrustful cultures rarely innovate, whereas those that foster psychological safety see employees collaborate and experiment freely.

The same applies to nations. Societies that remain mired in rigid procedures and patronage, distrustful of outsiders and new thinking, will struggle to harness the full potential of their people in the AI age. A high-trust, transparent society, on the other hand, can mobilize collective intelligence quickly, a critical advantage when adapting to rapid technological change. Knowledge integration and social trust go hand in hand: a silo-busting education fosters informed citizens; informed citizens demand transparency and merit; and a high-trust, open society in turn accelerates learning and innovation. This virtuous cycle is the hallmark of the most dynamic regions of the world today.

We have only seen the tip of the iceberg of AI’s capabilities, a reality both exhilarating and sobering. A recent example was reported by the AI company Anthropic in late 2025: a state-sponsored hacker group used an AI agent to orchestrate a large-scale cyber espionage campaign largely without human intervention. By cleverly ‘jailbreaking’ a coding assistant, the attackers had the AI autonomously perform about 80–90% of the hacking operations, from reconnaissance to writing exploits, at superhuman speed. Anthropic described it as the ‘first documented case’ of an AI-driven cyberattack at this scale, a true watershed moment. The AI’s ‘agentic’ capabilities were used to execute complex sequences of tasks, essentially acting with a degree of agency and strategy once reserved for human operators. Yet even this alarming feat may pale in comparison to what is coming. AI systems are iterating and improving at a pace that challenges our imagination: in warfare, yes, but also in science, art, and every creative endeavor. AI’s power lies not only in destructive acts or speed; it is also a creative and problem-solving force multiplier. Already, AI has cracked scientific problems like protein folding that stumped experts for decades, designed new molecular compounds, and assisted in drafting legislation.

Adding further dimensions to this upheaval are two frontiers once confined to science fiction: outer space and neurotechnology. Humanity is on the cusp of becoming a multi-planetary species, with missions planned for the Moon, Mars, and beyond. As we expand into space, we will carry our technologies and social frameworks with us, and will likely have to reinvent them. The isolated, siloed thinking of the past will not survive the challenges of off-world colonies. Consider the complexity of a Mars settlement: it entails astrophysics, ecology (for life support), psychology (for crew cohesion), robotics, and governance. Who will make decisions on Mars, and under what laws? Political theory must evolve to accommodate scenarios where humans live outside traditional nation-state boundaries. We may need new compacts or constitutions for space habitats, drawing on principles from multiple disciplines: aerospace engineering to design habitable systems, sociology to manage community, and jurisprudence to define rights and responsibilities in an environment utterly unlike Earth. Even the notion of citizenship could expand: an individual could be simultaneously an Earth citizen and a Martian settler. This isn’t merely speculative; space agencies and private companies are already grappling with frameworks for resource use and conflict resolution beyond Earth.

Meanwhile, on Earth, the boundary between humans and machines is blurring within our very brains. Breakthroughs in neurotechnology are enabling direct interfaces between the human nervous system and computers. Recently, patients with paralysis have been fitted with brain implants that allow them to control digital devices purely by thought: moving cursors, typing messages, even playing video games via a neural link. This astonishing fusion of mind and machine points toward a future where cognitive enhancement or repair is routine. Startups are racing to develop implants to boost memory or attention, while neuroscientists use AI to decode speech or images directly from brain signals. Such innovations will profoundly challenge our definition of what it means to be human. When a person’s memory can be backed up, or two minds can be linked brain-to-brain via the cloud, classical notions of individual identity and autonomy may need rethinking. These developments require us to revisit age-old philosophical questions about self and consciousness, now armed with new empirical insights. In effect, theory must redefine what it means to be human. We can no longer be content to regurgitate what has been learned before and apply 20th-century assumptions to 21st-century realities.

All these threads (the return of polymathic learning, the integration of knowledge, AI’s transformative potential, social trust, space, and neurotech) converge on a simple truth: our inherited categories are straining under the weight of change. We must cultivate wisdom alongside knowledge: a holistic understanding of humanity’s direction. Being human in the 21st century is not what it was in the 18th century: our reach is planetary (even interplanetary) and our tools are quasi-intelligent. Theory must catch up to practice.

  • Pooja Arora is Assistant Professor, Jindal School of International Affairs
