Now that OpenAI’s ChatGPT is publicly available, the big question is what impact it will have on society as a whole, and in the enterprise.
When it comes to the latter, London, Ont.-based Info-Tech Research Group describes the release as a “watershed moment in the history of generative AI, as it can deliver human-like conversations on diverse topics, including writing poetry, debugging code and even assisting with troubleshooting software and hardware issues.”
Earlier this month, the research firm organized a webinar exploring the potential uses of a chatbot that, it says, is far more adept at engaging in dialogue with its users than other chatbots or intelligent software assistants, and can even respond to feedback, request clarification, and iterate on its answers based on a user’s response.
The information session, which was moderated by Jeremy Roberts, research director at Info-Tech, and Jack Hakimian, the organization’s senior vice president of workshops and advisory research, examined three specific areas in which generative AI could be used in the enterprise:
- Enterprise Support: ChatGPT or another conversational AI tool could serve as the back end of an information concierge that automates enterprise support. According to the firm’s research, “chatbots already exist, but ChatGPT could be a game changer.”
- Customer Interaction: The automated workflow of current chatbots or website search functions can frustrate users when they return a list of semi-related results. With that in mind, research revealed that “generative AI can answer queries more cheaply, intelligently direct users to appropriate products and services, and improve the customer journey so substantially as to be a differentiator.”
- Product Development: Key business applications for generative AI include generating marketing copy, summarizing long documents, and authoring communications, says Info-Tech. “Anyone who creates content can see their workflow supplemented with an intelligent solution like ChatGPT.”
Roberts suggests there are a “few steps that IT departments should take to refine their use case for generative AI. First, (they) should review their capability map for high-value processes, then conduct a basic cost-benefit analysis for the technology, and finally, explore the vendor landscape to find the solution that meets their requirements.”
Three key recommendations for IT managers came out of the online seminar:
- First, ChatGPT and other generative AI solutions are tools – nothing more – and as such, there are “things that this technology is especially good at and others that it is not especially useful for. The key is to understand your business processes and highlight opportunities to reduce friction, increase the quality of the service experience and drive efficiency.”
- Secondly, though it may be appealing to implement an aggressive AI strategy, IT teams should start with augmentation. Generative AI, says Info-Tech, “is an incredible technology, but it’s still not self-sufficient. It still needs guidance and feedback from human curators.”
- Third, talk to a lawyer and seek legal advice before implementing the technology. Chatbots that manage workflows are not complicated, the firm says, “but a bot that will interact with users and customers and produce content could expose you to legal risk.”
In an interview with IT World Canada, Roberts said that once ChatGPT is commercialized and can be purchased, the standard security practices that already exist within an IT organization will need to be followed.
As for next steps, he said that OpenAI is still in the research phase with the language tool, and while it is currently free to use and has attracted well over one million users, he questioned whether “they can continue to subsidize all of our fun indefinitely.”
According to a recent report from Reuters, the San Francisco-based company, co-founded by Elon Musk (who is no longer involved) and investor Sam Altman and backed by US$1 billion in funding from Microsoft Corp., is expecting its business to surge.
The story goes on to say that three sources briefed on OpenAI’s recent pitch to investors said the organization expects US$200 million in revenue next year and US$1 billion by 2024.
Microsoft, for one, said Roberts, is “probably going to expect some sort of a return on their investment at some point. They are nice people, but they are not a charity. I suspect that we will see (ChatGPT functionality) gradually built into Microsoft products, but it will be fascinating what that might look like. Is it going to be another feature that they don’t charge extra for that OpenAI licenses, or is it going to be a whole new product – Clippy coming back, but actually useful this time?”
In addition, given the compute costs of the ChatGPT launch, which Altman has described as “eye watering,” OpenAI will more than likely “slap some sort of paywall on it,” he said.
The launch of the tool represents a major leap forward in the bot world, he said, one that is far more advanced than the bot “writing you poetry in the style of John Milton about the burrito that you left in the microwave too long.
“Now they are getting the Napoleon Dynamite fan fiction of Twilight. The thing about technology and societal change as a whole is it tends to happen very slowly, and then all at once. OpenAI was like that for technology in a way that not a lot of things have been recently.”
Also of interest is the impact ChatGPT’s arrival will have on Google, and whether it will affect that organization’s overall revenue model.
“The challenge isn’t technical for them,” says Roberts, adding that they could likely introduce a similar AI tool on Google.com “and create a sensation. I just don’t think they have figured out what the end game is.”