I spoke with Venky Veeraraghavan, Chief Product Officer at DataRobot, about key strategies for deploying AI in enterprise settings, including how to choose a large language model and how to address common concerns about getting started with AI.
DataRobot is in the thick of fierce competition in the AI sector. Industry observers are wondering: which type of vendor will dominate this lucrative market in the years ahead?
On one hand, some observers predict that the cloud hyperscalers – AWS, Microsoft Azure, Google Cloud – will be the dominant vendors. First, these deep-pocketed cloud providers all sell a full array of AI tools and services. Additionally, many companies already use these cloud providers for a long menu of cloud-based tools. So it’s an easy step for companies to also select them for AI support.
On the other hand, some experts say that the independent AI vendors will have an edge – and here’s where DataRobot comes in. Founded in 2012, DataRobot is part of a small but well-established group of top AI companies that offer a complete AI platform to customers.
Companies like DataRobot can offer personalized service and custom deployments. Most important, these independent AI vendors are cloud agnostic. They work with the full range of leading cloud vendors – which is important for multicloud customers.
In my view, the competition between the big cloud players and the independent AI vendors won’t resolve anytime soon. Both types of vendors will thrive – the investment surging into the AI sector is so large that there’s room for both types of vendors.
In any case, DataRobot’s Venky Veeraraghavan is exceptionally knowledgeable about all things AI. Read on for key quotes from the interview.
Concerns Around AI Adoption
It’s something I’ve heard often among enterprise IT managers: our company wants to get involved with artificial intelligence – we need to invest more – but the path is unclear. AI is expensive, and making the wrong move can result in a major loss.
Veeraraghavan has encountered these concerns. “I think broadly that everyone is doing artificial intelligence, I think they have to do AI,” he says. “The big issue is: do they have confidence in the solution? Does it work? Does it provide value?”
The worries, he concedes, can be notable. “Companies are concerned that there’s randomness in the output…in the case of generative AI, what will the bot say, will it be toxic output?
“So people are nervous about that, and being able to build your business process – and your entire company around taking advantage of the power of AI – means you need to understand how to control the confidence issue. That is really the thing that is holding people back.”
To learn more about the generative AI software sector, read our article: Top Generative AI Apps and Tools
Tips on Getting Started with AI
To those enterprise executives who are busy overthinking AI, Veeraraghavan has some advice.
“To say it very simply, I paraphrase Nike: just do it, which means just get in. I see customers and prospects – I talk to a lot of them – doing a lot of learning, they’re understanding it, they’re looking at the risks and everything else.
“So I would say that is all good, but the best way to do it is to start getting your hands dirty. You can literally just build your own bot that says, ‘I’m going to chat with my manuals, right?’ Or ‘I’m going to look at my customer support tickets.’ The idea is: get it started and then by doing you’ll learn a lot more.”
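The “chat with my manuals” starter project Veeraraghavan describes can be sketched in a few lines. Below is a minimal, hypothetical illustration of the retrieval step: the manual snippets, the word-overlap scoring, and the function names are all stand-ins (a real build would use embeddings and a call to an LLM API, which are deliberately left out here).

```python
import re

# Hypothetical manual excerpts standing in for a company's real documentation.
MANUAL_SNIPPETS = [
    "To reset the router, hold the power button for ten seconds.",
    "The warranty covers hardware defects for two years.",
    "Firmware updates are installed from the admin dashboard.",
]

def _words(text: str) -> set:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question: str, snippets: list, top_k: int = 1) -> list:
    """Rank snippets by shared words with the question -- a crude
    stand-in for the embedding similarity a real system would use."""
    q = _words(question)
    scored = sorted(snippets, key=lambda s: len(q & _words(s)), reverse=True)
    return scored[:top_k]

def build_prompt(question: str, snippets: list) -> str:
    """Assemble the context-plus-question prompt that would be sent to an LLM."""
    context = "\n".join(retrieve(question, snippets))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How do I reset the router?", MANUAL_SNIPPETS))
```

Even a toy like this surfaces the practical questions quickly, which is exactly the "learn by doing" effect he describes.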
The reality of generative AI is that it truly is different from past technologies: it’s more complex, and its potential for change is dramatically greater.
“I think one thing we have realized is that this is such a disruptive technology,” Veeraraghavan says. “I suggest every company get in, use a tool, build something out and then say, ‘Hey look, it doesn’t quite work the way I want, or it’s not really as exciting as I thought it would be. Why not?’”
But don’t let the challenges of the initial AI build out stop you, he advises.
“You can start exploring the different aspects. You could say, Look, should I put more data in it? Should I put in different data? Should I change my prompting strategy? Should I pre-train my AI model or should I fine-tune my model? These are all great questions, but you get those questions by actually trying out some concrete use cases. So come up with a use case, get started and then learn and iteratively develop the idea.”
Understanding Large Language Models
One supporting element of an AI deployment can be particularly difficult to understand: large language models. LLMs are the trained models at the heart of generative AI applications, so the model a company selects largely determines how its AI software actually behaves.
The difficulty is that there are so many different types of LLMs. So companies wonder: what’s the best one to select for our purposes?
Veeraraghavan’s advice: “I would say the best way to think about large language models is sort of a new level of platform, just like how we went from on-prem to cloud. Obviously, just as there’s no one single cloud approach, there’s a similar factor with AI.
“With these large language models, there’s clearly the big providers, like Microsoft, Google, Amazon, and then there’s open source, and then there’s the proprietary models.
“I would say generally: start with the larger models, because they’re general purpose. They’re quite forgiving and they have a lot more knowledge. Discover where the use case is and what value they can provide. And once you understand a use case, where the generative AI can actually provide value, then you can start optimizing.”
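One way to follow that advice without locking yourself in is to keep the model choice behind a thin, swappable interface: start with a large general-purpose model, then switch to a smaller or fine-tuned one once the use case is proven. The sketch below is hypothetical; the model names and `generate` bodies are stand-ins for real vendor SDK calls.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelConfig:
    name: str
    generate: Callable[[str], str]  # prompt -> completion

def large_general_model(prompt: str) -> str:
    # Stand-in for a call to a large general-purpose hosted model.
    return f"[large-model answer to: {prompt}]"

def small_tuned_model(prompt: str) -> str:
    # Stand-in for a smaller, fine-tuned model adopted after optimization.
    return f"[small-model answer to: {prompt}]"

def ask(model: ModelConfig, question: str) -> str:
    # The application talks to ModelConfig, not a vendor SDK,
    # so swapping providers later is a one-line change.
    return model.generate(question)

large = ModelConfig("general-purpose-large", large_general_model)
small = ModelConfig("small-tuned", small_tuned_model)
print(ask(large, "What is the warranty period?"))
```

The design choice here is the indirection itself: because callers depend only on the `ModelConfig` interface, the "start large, then optimize" path doesn't require rewriting the application.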
For a better understanding of the AI software market, read our coverage: Best Artificial Intelligence Software
Listen to the podcast:
Also available on Apple Podcasts
Watch the video: