
The integration of generative AI into the company is becoming vital

At Smile, innovation is at the heart of our DNA. As a technology company, we are constantly seeking solutions that not only increase our competitiveness but also improve the efficiency and satisfaction of our teams. It is in this spirit that we recently explored the integration of large language models (LLMs) for our developers. This article is our experience report (REX) on this bold initiative: the methods we employed, the results we obtained, and the lessons we learned.

Why did we explore the use of LLM for developers?

The integration of artificial intelligence (AI) technologies into software development is no longer just a trend but a necessity to stay competitive. At Smile, we undertook this experiment with several objectives in mind.

First, we wanted to prove that AI can truly transform the way developers work: saving them time on repetitive tasks, providing code suggestions on the fly, improving code documentation, and, ideally, making it easier for them to write unit tests.

Second, by adopting advanced technologies, we aim not only to match but to surpass our competitors in the market. The entire industry is clearly following this trend closely, yet few companies communicate about it: everyone is waiting to see who will be first to announce a business strategy built around these new tools. In the meantime, we need to do the groundwork so that we are ready when the market sends a strong signal of acceptance of this new way of working.

Finally, it was essential for us to assure our employees that we are committed to providing them with modern tools that not only facilitate their work but also prepare them for the future of software development. Equally important, we need to manage the shadow IT that could arise if we do not provide such tools ourselves, and to support our developers, and the rest of our teams, as their jobs are transformed by the explosion of LLMs.

How did we conduct this experiment?

To ensure the success of this initiative, we established a structured and multidisciplinary organization. A task force was formed, involving representatives from management, IT, and legal, to address all aspects of LLM integration, including the impact on jobs and legal compliance.

We started by identifying relevant use cases where AI could add value: code completion, code documentation, and a chatbot assistant for more complex tasks such as refactoring or, more generally, for reasoning with code as context. Next, we tested several AI solutions to find the one that best met our needs in terms of return on investment (ROI), security, and compliance.

This initial phase allowed us to experiment and gather valuable data on how our teams actually use LLMs, both qualitative (how developers feel the tool affects their work, how much they appreciate it, their aha moments and frustrations) and quantitative (how much time code completion saves relative to a developer's usual typing speed, broken down by task type and technology).
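To make the quantitative side concrete, here is a minimal sketch of how such an estimate can be computed. This is an illustration only, not our actual measurement pipeline: the field names, the assumed typing speed, and the sample values are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical record of one accepted code-completion event.
@dataclass
class AcceptedCompletion:
    technology: str   # e.g. "PHP", "Java", "TypeScript"
    task_type: str    # e.g. "feature", "test", "refactoring"
    characters: int   # length of the accepted suggestion

# Assumed average typing speed, in characters per minute.
TYPING_SPEED_CPM = 200

def estimated_minutes_saved(events: list[AcceptedCompletion]) -> dict[tuple[str, str], float]:
    """Roughly estimate time saved per (technology, task type) pair,
    assuming each accepted character would otherwise have been typed."""
    savings: dict[tuple[str, str], float] = {}
    for e in events:
        key = (e.technology, e.task_type)
        savings[key] = savings.get(key, 0.0) + e.characters / TYPING_SPEED_CPM
    return savings

if __name__ == "__main__":
    sample = [
        AcceptedCompletion("PHP", "feature", 420),
        AcceptedCompletion("PHP", "test", 900),
        AcceptedCompletion("Java", "refactoring", 300),
    ]
    for (tech, task), minutes in estimated_minutes_saved(sample).items():
        print(f"{tech}/{task}: ~{minutes:.1f} min saved")
```

An estimate of this kind tends to be optimistic, since it ignores the time spent reviewing and correcting suggestions, which is one reason we paired the quantitative data with the qualitative feedback described above.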

The results obtained

The results of our experiment were very promising.

To cite just a few figures: about 80% of developers reported significant perceived time savings thanks to LLM-based tools, and 60% of them said they would continue using such tools even if it meant resorting to shadow IT. We also observed an overall 15% improvement in coding time on the projects concerned, averaged across all studied technologies and task types. Of course, we produced a more detailed breakdown for our internal needs.

One of the main perceived advantages of LLMs is their ability to automate repetitive tasks, allowing developers to focus on more complex and creative work. We did observe considerable time savings, but not so much on the repetitive tasks we had imagined; rather, it was an accumulation of small savings across all kinds of tasks that, added together, amounted to a considerable gain.

Additionally, the use of LLMs encouraged better documentation practices: either the tool systematically generated documentation as part of the proposed code, or the developer had to write some documentation up front to provide context and guide the tool toward the desired code. Both had a positive impact on the overall quality of the code. These results show that LLMs do not replace developers but act as facilitators, increasing both their efficiency and the quality of their work.

What lessons did we learn?

Our adventure with LLMs has been rich in lessons. We learned that, although AI is powerful, it cannot do everything alone. It is crucial to keep humans in the loop to make informed decisions and to review the suggestions the AI makes. In practice, these tools behave more like assistants for developers, and this is an important point to communicate to them: it reassures them about the future of their profession and dispels the reluctance fueled by media claims that they will all be replaced by AI.

The adoption of these technologies requires continuous adaptation and particular attention to user feedback. Bi-weekly follow-up sessions proved important at the start to help teams take ownership of these new kinds of tools. We also confirmed our hypothesis that training and support are essential, especially for junior developers, who need more time and guidance to master these new tools than their more experienced colleagues.

Quite counterintuitively, although junior developers have the most room for improvement, it was the more experienced developers who most quickly understood how to take advantage of this new kind of tool, formulated their requests more clearly, and thus obtained better results.

Across the programming languages and frameworks we use, we observed uneven suggestion quality: for certain technologies the tool is noticeably less relevant and its suggestions are of lower quality. We are currently exploring whether fine-tuning strategies could correct this disparity.

Finally, we found that to maximize the benefits of LLMs, it is essential to continuously monitor the quality of the generated outputs and be ready to adjust models and processes based on the changing needs of the team.
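As an illustration of what such monitoring could look like, the sketch below tracks how often suggestions are accepted per technology and flags technologies that fall below a chosen threshold. The event format, field names, and threshold value are hypothetical, not a description of our actual tooling.

```python
from collections import defaultdict

# Arbitrary example threshold: below this acceptance rate, a technology
# is flagged for attention (better prompting guidance, fine-tuning, etc.).
ACCEPTANCE_THRESHOLD = 0.25

def acceptance_rates(events: list[dict]) -> dict[str, float]:
    """events: e.g. [{"technology": "PHP", "accepted": True}, ...]"""
    shown = defaultdict(int)
    accepted = defaultdict(int)
    for e in events:
        shown[e["technology"]] += 1
        accepted[e["technology"]] += int(e["accepted"])
    return {tech: accepted[tech] / shown[tech] for tech in shown}

def flag_underperforming(rates: dict[str, float]) -> list[str]:
    """Return the technologies whose suggestion acceptance rate is too low."""
    return [tech for tech, rate in rates.items() if rate < ACCEPTANCE_THRESHOLD]
```

A simple metric like this is not a measure of code quality in itself, but tracking it over time gives an early signal of where the tool is helping and where the models or processes need adjusting.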

Conclusion

The integration of LLMs at Smile has been a decisive step towards innovation and improving our operational efficiency. The results we have obtained demonstrate the immense potential of these technologies to transform software development. We are determined to continue exploring and adopting AI tools that not only increase our productivity but also support our employees in their professional development.

In the future, we plan to extend the use of LLMs to other areas of the company to fully leverage this revolutionary technology. We are already rolling out, to a small community of testers, a secure sandbox where employees can use generative AI models without fear that the client or personal data they enter will be used to train a public model or leak sensitive information. We have named it SmileGPT, and we will tell you more about it very soon.

At Smile, we believe that continuous innovation and investment in our teams are essential to offer cutting-edge solutions to our clients and stay at the forefront of our industry.

If you are interested in our AI integration approach or wish to discuss how these technologies can transform your business, do not hesitate to contact me. I am Thibault Milan, Director of Innovation at Smile, and I would be delighted to discuss these exciting topics with you.
