Beyond AI Enabled: Why AI Native Organizations Matter and How to Become One

Endava

In the mid-1990s, shortly after the internet went mainstream, companies like Amazon and eBay built their businesses on their own internet infrastructure and became “internet native.” Both names redefined retail and logistics and gained a huge advantage. History repeats itself, and the same principle holds today - only this time the main driving force is artificial intelligence. More and more AI native companies embed AI into the very core of their processes, just as the internet pioneers once built business models around the web. The impact on human resources follows the same pattern: in modern companies it is taken for granted that every employee knows how to use networks for data exchange; in the same way, emerging AI native companies assume that tools like ChatGPT are available, embedded in all workflows, and that employees know how to handle them.

💡
We talked about the practical application of AI with representatives of a company that has AI integrated at the core of its business - one of only seven exclusive OpenAI Beta Services partners in the world.

Borivoje Kovač, Regional AI Evangelist, Kaja Kandare, AI Engineer, and Gabriel Preda, Principal Data Scientist, all from Endava, will try to demystify how an AI native environment improves processes within the organisation and within teams, and what benefits it brings for delivering solutions to clients.

How does a company become AI native?

The AI enabled and AI native approaches to business are often confused, especially because many organisations are beginning to adopt AI but still treat it as an add-on. Becoming AI native is not about replacing people with an AI agent - quite the opposite. AI native organisations treat AI as a first principle: models and data are embedded in every workflow and decision, with real-time learning and continuous optimisation.

“From legal teams building out their custom assistants to reduce work cycle-time and get our contracts out faster, to project managers using meeting transcripts to infer project risks, improve governance and reduce delivery uncertainty, all the way to developers using a specification-first approach to agentic coding to improve the throughput of their teams,” explains Borivoje Kovač, highlighting how AI is already reshaping workflows across the entire organisation.

Borivoje Kovač

As he points out, hands-on work is crucial here because people need to build intuition about the tools before they can unlock their true power. Endava started by expanding its existing partnerships with Microsoft, Google, and AWS to include OpenAI, Windsurf, and the like, and providing basic learning support and tooling to everyone.

This has also resulted in better collaboration between teams, says Kaja Kandare. In addition to this, there are two other major changes that the implementation of artificial intelligence brings - faster decision-making and a shift toward experimentation: 

“AI tools help teams validate ideas quickly - whether it’s through automated data analysis, code generation, or rapid prototyping. This reduces the time spent in planning and allows us to move to execution faster. In addition, AI often introduces new roles and skills into projects (data scientists, ML engineers, domain experts), so teams now collaborate more closely across disciplines, which improves knowledge sharing and innovation. With AI, teams are more open to iterative development. Instead of rigid plans, we adopt a ‘test and learn’ mindset, using AI to simulate outcomes or detect issues early.” 

In such environments, AI is perceived as a support system - helping people learn faster, solve bugs and incompatibilities more efficiently, and boost productivity without creating fear of automation.

How a custom GPT assistant improved the work of the internal AI community - or, how AI works in practice

One concrete example of what Kaja describes comes from solving a practical problem for the local AI community meetups that Borivoje organises for colleagues in Serbia:

“Our community brings together the local AI crew, and at our regular weekly meetings we exchange ideas, share knowledge, ask questions and learn from each other about everything related to artificial intelligence - from concrete projects and tools to the latest trends and examples from practice. Every meeting is broadcast and recorded for people who couldn’t make it to the office or had overlapping meetings. After almost a year of doing this, a ton of very interesting stories accumulated. The problem was finding ways to expose this useful knowledge to the wider audience.

Unfortunately, Teams is horrible at transcribing Serbian, so although we have all the videos, it’s impossible to share the knowledge without people physically reviewing all the recordings. The solution was building a console app that pulls the video recording, extracts the audio from it, sends that to the OpenAI API we all have access to for transcription with a better speech-to-text model, then passes the transcription to a large language model to fix transcription errors, and finally makes another large-language-model call to generate detailed meeting minutes. With agentic coding tools, this was a matter of two person-days to produce!

Finally, all of this is made available to a custom GPT assistant inside ChatGPT, which the whole enterprise has access to. Now we have an assistant you can ask questions like “what was discussed in the community last month?”, “has anyone talked about CUDA and when?”, or “who can help me with HuggingFace?” - enabling querying in any language and accessing information across our global enterprise and across time, multiplying the value of the AI community many times over!”
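As a rough illustration, the pipeline described above (extract audio, transcribe, clean up, summarise) could be sketched in Python along these lines. This is a hypothetical sketch, not Endava's actual implementation: the model names, prompts, and the chunking helper are our own illustrative assumptions, with the API calls following the current OpenAI Python SDK.

```python
# Hypothetical sketch of the meeting-minutes pipeline described above.
import subprocess


def extract_audio(video_path: str, audio_path: str) -> None:
    """Strip the video stream with ffmpeg; mono 16 kHz audio is enough for speech-to-text."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-vn", "-ac", "1", "-ar", "16000", audio_path],
        check=True,
    )


def chunk_text(text: str, max_chars: int = 8000) -> list[str]:
    """Split a transcript at paragraph boundaries so each model call fits in context."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks


def transcribe_and_summarise(audio_path: str) -> str:
    """Audio -> raw transcript -> cleaned transcript -> detailed meeting minutes."""
    from openai import OpenAI  # imported lazily so the pure helpers above need no SDK

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    with open(audio_path, "rb") as f:
        raw = client.audio.transcriptions.create(model="whisper-1", file=f).text

    # First LLM pass: repair speech-to-text mistakes, chunk by chunk.
    cleaned = []
    for chunk in chunk_text(raw):
        fix = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "Fix speech-to-text errors in this Serbian transcript. Change nothing else."},
                {"role": "user", "content": chunk},
            ],
        )
        cleaned.append(fix.choices[0].message.content)

    # Second LLM pass: turn the cleaned transcript into meeting minutes.
    minutes = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Write detailed meeting minutes from this transcript."},
            {"role": "user", "content": "\n\n".join(cleaned)},
        ],
    )
    return minutes.choices[0].message.content
```

Minutes produced this way could then be fed into the knowledge base behind a custom GPT assistant of the kind the team describes.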

Dr. Kaja Kandare

According to our interviewees, a growing share of new projects includes generative AI, and the expectation is that within the next five years all of them will include AI in some shape or form. Existing roles like developer or tester, but also upstream and downstream roles like business analysts and DevOps engineers, map quite nicely to the new stack, but require upskilling to adopt this new mindset and understand the new toolset used to engineer these next-generation solutions.

AI native in solution delivery

AI native firms are already seeing advantages today, such as delivery times and project durations cut by up to half. It is precisely this ability to make AI part of the infrastructure rather than an add-on that makes the difference between slow adoption and complete business transformation.

“We are just launching Dava.Flow – our new delivery framework, which focuses on maximising the use of generative AI and assistants across the software development life cycle to ultimately reduce delivery cycles and increase value for our clients. The key shift that is starting to emerge with the rise of generative AI and AI agents is that people are gradually moving from executing to planning and governing delivery. Actual gains depend on many factors, but we’ve seen gains up to 40% shorter time to market in initial pilots already,” says Kovač.

Endava’s role as a leading innovator in artificial intelligence was confirmed in April 2025, when the company joined the exclusive OpenAI Beta Services Partnership program as one of only seven companies with that status globally. Endava’s proprietary solutions, Morpheus - an agentic AI accelerator for regulated industries - and Compass - an agentic copilot for accelerating software development in large and complex code bases - were highlighted in the announcement as leading AI-based solutions that help clients adopt generative AI more quickly and effectively, reducing operational costs as well as the timelines and costs of software development. Gabriel Preda, Principal Data Scientist at Endava, points out that building such accelerators has a dual positive effect:

“Developing them improves our engineers’ skills in the new, AI centric technologies, and on the other hand, we shorten the path to building production-ready, enterprise-level AI-powered applications for our clients.”

What will be the next step in the evolution of AI native companies?

The next step is moving from “AI enabled” to AI orchestrated organizations, where AI systems autonomously optimize processes, ensure compliance, and continuously improve, while humans focus on strategy and creativity. On that path, Gabriel sees at least three different dimensions:

  1. the delivery acceleration side, which is about empowering our multifunctional teams to fully adopt AI-powered development tools and practices - from AI-assisted requirements elicitation to AI-assisted design, build, testing, and CI/CD support, including fully automating the building of some features;
  2. the AI consulting side, which is about leveraging our skills, knowledge, and expertise in Data & AI to promote building more AI-centric enterprise applications for our clients, and going beyond building to assist them with AI transformation; the capacity of our Data & AI engineering team to foresee trends and follow them before they become mainstream is very important as well;
  3. the change management evolution of enterprises, which requires shifts in management practice, simplified decision paths, more dynamic adaptation to fast market and technology changes, and more integrative engineering functions.

Gabriel Preda

With regard to the impact of AI on general business, it will be different for every industry and is hard to assess at this point, because it depends not only on technology but also on regulation and the feedback of society at large, our speakers say.

For instance, we’re already seeing the EU AI Act regulating AI in a way that riskier scenarios get more scrutiny and regulation, while scenarios whose risks are classified as unacceptable are outright banned. We also see backlash against certain types of AI applications, particularly in content and art-related industries.

What we’re already starting to learn in the field is that more autonomy given to AI assistants amplifies both good and bad outcomes, and as such requires a careful, gradual building of trust through engineering, but also societal adaptation. So, stay tuned, stay alert, get involved, and enjoy the ride.
