The social media company’s engineers wanted the technology to improve experiences and engagement. But the final product required more tweaking than anticipated.
Around seven months ago, LinkedIn engineers set out to improve user experience and engagement by embedding generative AI capabilities into the company’s platform.
The effort resulted in a new AI-powered premium subscription offering, though bringing the system in line with internal standards and best practices took considerable time and energy.
“You can build something that looks and feels very useful, that maybe once every five times completely messes up… and that’s fine for a lot of use cases, [but] that was not fine for us,” Juan Bottaro, principal staff software engineer at LinkedIn, told CIO Dive.
Users can turn to the platform to get assistance with effective writing, information gathering and skills assessments. The interface offers job seekers tailored profile suggestions and users can access key takeaways from posts.
Like other enterprises, LinkedIn wanted its AI-generated responses to be factual, yet empathetic.
If a user with no biology experience asks whether a biology job posting is a good fit for their professional profile, the social media company wanted its AI assistant to do more than deliver a blunt answer: in addition to saying the role wasn’t a fit, it should suggest relevant LinkedIn Learning courses.
Enhancing the user experience is a common goal for using generative AI. But adding the technology for the sake of novelty can have consequences, and when a solution interacts directly with customers, the stakes are even higher.
Despite running into a few unanticipated roadblocks, LinkedIn engineers continued to iterate on the product, mitigating risks along the way.
“Don’t expect that you’re going to hit a home run at the first try,” Bottaro said. “But you do get to build that muscle very quickly, and, fortunately, it’s a technology that gives you a very quick feedback loop.”
Crafting quality experiences can be time-consuming
LinkedIn engineers spent an unexpected amount of time tweaking the experience. Bottaro said the majority of the team’s efforts were focused on fine-tuning, rather than on the actual development stages.
“Technology and product development requires a lot of work,” said Bottaro, who has spent more than a decade at the social media company for professionals, owned by Microsoft. “The evaluation criteria and guidelines grew and grew because it’s very hard to codify.”
The team achieved around 80% of its experience target, then spent four additional months refining, tweaking and improving the system.
“The initial pace created a false sense of ‘almost there,’ which became discouraging as the rate of improvement slowed significantly for each subsequent 1% gain,” Bottaro explained in a report co-authored with LinkedIn Distinguished Engineer Karthik Ramgopal.
Evaluation frameworks are critical
In one of the company’s first prototypes, the chatbot would tell users they were a bad fit for a job without any sort of helpful information.
“That is not a good response, even if it’s correct,” Bottaro said. “That’s why when you’re developing the criteria and guidelines, it’s hand in hand with product development.”
Curating the evaluation criteria is specific to each business. Bottaro compared the process to different teachers grading a paper, rather than scoring a multiple-choice exam.
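The grading analogy maps naturally onto rubric-style evaluation, where human (or automated) graders score each chatbot response against qualitative criteria instead of checking for a single right answer. Here is a minimal sketch of that idea; the criteria names and weights are illustrative assumptions, not LinkedIn’s actual rubric:

```python
# Rubric-style scoring of a chatbot response: each grader assigns a
# 1-5 score per criterion; per-criterion scores are averaged across
# graders, then combined with weights. Criteria and weights below are
# hypothetical examples, not LinkedIn's real evaluation framework.
from statistics import mean

RUBRIC = {
    "factual_accuracy": 0.4,   # is the answer correct?
    "empathy": 0.3,            # is the tone supportive, not blunt?
    "actionability": 0.3,      # does it suggest a concrete next step?
}

def score_response(grades: list[dict[str, int]]) -> float:
    """Combine multiple graders' 1-5 rubric scores into one 0-1 score."""
    per_criterion = {
        criterion: mean(g[criterion] for g in grades)
        for criterion in RUBRIC
    }
    weighted = sum(RUBRIC[c] * s for c, s in per_criterion.items())
    return weighted / 5  # normalize the 1-5 scale to 0-1

# Two graders reviewing the same correct-but-blunt "you're not a fit"
# response: accurate, but low on empathy and actionability.
grades = [
    {"factual_accuracy": 5, "empathy": 2, "actionability": 1},
    {"factual_accuracy": 5, "empathy": 3, "actionability": 1},
]
print(round(score_response(grades), 3))  # → 0.61
```

A blunt response can score a perfect 5 on accuracy and still land well below the quality bar once empathy and actionability are weighed in, which is the distinction Bottaro is drawing.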
“We have a very, very high bar,” Bottaro said. “These topics of quality and evaluation [have] become so much more prominent than in other instances.”