The struggle over who controls the course of key technology, and how much, is as old as the tech sector itself. So it's no surprise that this scuffle has crept into artificial intelligence, where free, open-source systems are lining up against proprietary generative AI products such as ChatGPT.
The battleground involves large language models, or LLMs — complex algorithms that lie at the heart of artificial intelligence. From Wall Street to Silicon Valley, industry players are taking sides as the AI boom pits Big Tech firms defending their proprietary technology against equally large challengers drawn to unfettered programs.
Backers of open-source AI believe it will democratize access to artificial intelligence tools. Further, they see open source making it cheaper and easier for researchers to develop new LLMs and for entrepreneurs to launch commercial products.
But Wall Street analysts who cover AI stocks like Microsoft (MSFT) and Google-parent Alphabet (GOOGL) fret that open-source AI will turn proprietary models such as ChatGPT, made by startup OpenAI, into commodities.
Microsoft is the biggest investor in OpenAI — which, despite the name, runs proprietary systems. Many regard its ChatGPT and other systems as more advanced than most, if not all, rivals. Meanwhile, Google is readying a next-generation, proprietary LLM, called Gemini.
Meta, Amazon Advocate Open Source
For all the strength of Microsoft and Google, open-source AI models from other notable tech stocks are quickly improving in capability. A big reason is that Facebook-parent Meta Platforms (META) is making its open-source large language models available to researchers and commercial developers.
Analysts see Meta getting a lift as software developers build AI-powered apps for content creators, small businesses and advertisers.
Count e-commerce and cloud-computing giant Amazon.com (AMZN) in the open-source camp as well. It’s working with several open-source LLM developers. Analysts say there’s plenty of upside for Amazon — which lags OpenAI’s ChatGPT and Microsoft — if in fact LLMs are commoditized.
Depending on their development goals, large companies may be able to use open-source AI models to build apps rather than license technology from OpenAI, Microsoft or Google.
Anyone can reuse open-source LLMs and build on top of them. As a result, more efficient open-source models are emerging that reduce the computing power needed to “train” LLMs, or essentially feed them data.
What Is A Large Language Model?
Large language models understand the way that humans write and speak. They allow users to interact with AI systems without the need to understand or write algorithms, analysts say.
The models process “prompts,” such as internet search queries, that describe what a user wants to get. LLMs require training data for specific tasks. They’re made of neural networks — or mathematical models that imitate the human brain — that generate outputs from the training data.
What sets OpenAI's ChatGPT, Google's Gemini and other large language models apart is scale — measured in parameters, the internal weights a model learns during training. The more parameters a model has, and the more data it is trained on, the more powerful its capabilities can become.
OpenAI's GPT-3.5 LLM has 175 billion parameters. Its new GPT-4 LLM reportedly has about 1.75 trillion. Google hasn't disclosed how many Gemini will use.
Meanwhile, Meta uses an open-source model called Llama. Its second version of Llama tops out at 70 billion parameters.
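Where do figures like 175 billion come from? A parameter count follows mechanically from a model's architecture. The Python sketch below estimates a GPT-style transformer's size from its layer count and hidden dimension; the configuration numbers come from OpenAI's published GPT-3 architecture, and the per-layer formula is a standard rough approximation, not an exact accounting.

```python
def estimate_params(n_layers: int, d_model: int, vocab_size: int) -> int:
    """Back-of-the-envelope parameter count for a GPT-style transformer.

    Each layer contributes roughly 12 * d_model**2 weights
    (about 4 * d_model**2 in the attention block and
    8 * d_model**2 in the feed-forward block), plus a
    vocab_size * d_model token-embedding table.
    """
    return n_layers * 12 * d_model ** 2 + vocab_size * d_model

# GPT-3's published configuration: 96 layers, hidden size 12,288,
# and a vocabulary of 50,257 tokens.
print(f"{estimate_params(96, 12288, 50257):,}")  # roughly 175 billion
```

Running the same arithmetic on Llama 2's largest published configuration lands near its advertised 70 billion parameters, which is why analysts can compare model "sizes" directly.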
Building Competitive Moats Around Generative AI
Wall Street prefers competitive moats, or barriers to entry that prevent rivals from stealing customers, RBC Capital analyst Rishi Jaluria told Investor's Business Daily. So analysts favor large language models whose capabilities set them apart from rivals.
“OpenAI still has the best large language models. Google and others are playing catch-up,” he said. “Commoditization would be a bad thing because that means there’s no moat and, as investors, we’re attracted to moats.”
Jaluria went on to say: “OpenAI’s advantage comes from the amount of data, the amount of money invested and the time they’ve put into it. I don’t think they’ll be commoditized anytime soon.”
Microsoft's ability to get a return on its $10 billion investment in OpenAI would suffer if ChatGPT's performance edge goes away.
Microsoft said that as of mid-July its Azure-OpenAI cloud computing service had more than 9,500 customers, up from 2,500-plus in April. Microsoft is among AI stocks to watch. MSFT stock recently hit a record high on AI developments.
The company declined to comment for this story.
Linux Sets Precedent For Open-Source AI?
History provides plenty of examples, though, of technologies that became commodities. The IBM (IBM) personal computer in the 1980s is probably the most famous example.
In software, investors also may be familiar with Linux. Unlike proprietary operating systems such as Microsoft Windows, Linux's code is free, and it became widely distributed under open-source licenses.
Open-source developers make software available free of charge. They also enable programmers to modify and share the underlying source code and create their own apps.
Open-source AI will coexist with proprietary models, says Oren Etzioni, former chief executive of the Allen Institute for AI and an advisor and board member for the institute’s research arm, AI2. Microsoft co-founder Paul Allen started the institute.
“I think that the operating systems (OS) analogy is informative,” he told IBD in an email. “Microsoft, Apple (AAPL), and a few others invested billions in developing operating systems over decades. Despite that, Linux has become a first-class OS. I expect the same will happen here.”
Models At Heart Of Generative AI
The rub is that while free, open-source licenses drove innovation in the software industry, they yielded few winners on Wall Street. It’s hard to generate revenue and profits from open-source business models.
Jaluria notes that Red Hat, acquired by IBM for $34 billion in 2019, was an exception.
Still, the competitive landscape in the emerging LLM market will have many twists and turns. The use of generative AI technologies could roil a host of industries by creating text, images, video and computer programming code on their own.
Further, generative AI is quickly finding applications in marketing, drug development, video gaming and customer support.
With the rise of generative AI, one question for investors is whether giant tech stocks will dominate in artificial intelligence or if startups will flourish and eventually go public.
Amazon Spreads Bets On Open Source
While Microsoft and OpenAI's ChatGPT remain closely tied, Amazon's cloud-computing unit — Amazon Web Services, or AWS — works with several LLM startups. They include Hugging Face, Cohere, Anthropic, Stability.ai and AI21 Labs. Both Hugging Face and Stability.ai are open-source companies.
In addition, AWS has teamed with the United Arab Emirates’ Technology Innovation Institute to make a leading open-source LLM available for researchers and commercial use.
The institute's open-source Falcon 40B LLM has 40 billion parameters. It's among the leading LLMs hosted by Hugging Face, which operates an AI code repository for application developers.
“We believe that there isn’t one model to rule them all,” Matt Wood, AWS vice president for products, told IBD.
While Microsoft’s cloud unit has 9,500 customers for the OpenAI service, Wood declined to say how many AWS customers are working on generative AI.
“It’s very early days,” he said. “We’re three steps into a 20-mile hike.”
Implications For Amazon
Wood went on to say: “Right now, if you triangulate forward, I think both proprietary and open-source models will have a role to play. Some customers are going to prefer open-source models because they can be optimized for use cases and some will take a proprietary approach. Those models are much larger, they have significant capability and they may be better for super-use cases that have a different cost profile.”
He added: “Different models serve different purposes, which is why model selection really matters. Customers want to use foundation models along with their own data, which is often already stored in AWS, to create a unique net-new asset for their organization.”
Some analysts expect AWS to get a boost from commoditization of large language models.
“While AWS is coming from behind on LLMs, our checks suggest that foundational models could be commoditized, which would benefit AWS,” UBS analyst Lloyd Walmsley said in a recent note to clients.
Well-funded startup Databricks is pushing an open-source model called Dolly. It’s available on Amazon’s cloud platform.
At Morgan Stanley, analyst Brian Nowak said in a note: “Partnerships and open-source models play a key role in AWS’s library of foundational models. Therefore, the extent to which open-source models continue to close the gap in performance between proprietary models likely helps drive further adoption of AWS’s (AI building) Bedrock service.”
OpenAI Is Not Open Source
OpenAI was conceived as an open-source company, but it isn’t one now. Chief Executive Sam Altman and tech industry maverick Elon Musk formed the company in 2015 as a nonprofit, research-oriented entity.
OpenAI created a for-profit arm in 2019, a year after Musk left. OpenAI now charges for access to more powerful versions of ChatGPT. Also, it licenses LLMs to businesses.
Customers can access OpenAI’s generative AI technology in the cloud using application programming interfaces, or APIs. API plug-ins act as an intermediary in building AI products, like brand-specific chatbots or virtual assistants.
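As a rough sketch of what such an API call looks like, the Python below posts a prompt to OpenAI's chat-completions endpoint using only the standard library. The endpoint URL and payload shape match OpenAI's public API as of 2023; the model name is the default paid tier, and the key is a placeholder.

```python
import json
import urllib.request

def build_chat_payload(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    # The chat API takes a list of role-tagged messages, not a bare string.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt: str, api_key: str) -> str:
    """Send one prompt and return the model's reply text."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

A brand-specific chatbot is, at bottom, a wrapper like this with its own instructions and data layered on top — which is why the barrier to building such products is low once the API exists.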
In a Feb. 23 tweet, Musk said: “OpenAI was created as an open source (which is why I named it “Open” AI), nonprofit company to serve as a counterweight to Google, but now it has become a closed-source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.”
Microsoft reportedly owns a 49% stake in OpenAI. It has access to the entire OpenAI and ChatGPT code base, according to Musk's tweets.
OpenAI did release a free version of ChatGPT in November. But researchers have no access to the data and code used to build the ChatGPT family of LLMs.
Google Memo: No Generative AI Moats
Investor concern over possible commoditization of LLMs heated up after the leak of a Google employee memo in April. The memo, called “We have no moat and neither does OpenAI,” focused on the looming threat of open-source AI.
It noted that Stanford researchers built a new LLM called Alpaca using Meta’s Llama and other data sets.
Meta released the first version of Llama to researchers in February. For Google, the problem was that the Alpaca model, when tested vs. OpenAI’s ChatGPT, performed reasonably well using much less computing power.
"Who would pay for a Google product with usage restrictions, if there is a free, high-quality alternative without them?" asked the researcher's memo.
Google declined an interview for this story. But its executives have criticized Meta's release of open-source large language models, arguing that bad actors could gain access to the models, leading to misuse and the spread of disinformation.
In July, Microsoft, Google, OpenAI and Anthropic formed the Frontier Model Forum. The companies said the forum will focus on responsible deployment and control of AI systems.
Meta’s Mark Zuckerberg Fires Back
Meta Chief Executive Mark Zuckerberg defended his company’s open-source AI strategy in a recent podcast interview with Lex Fridman, a computer scientist at the Massachusetts Institute of Technology.
Zuckerberg said Meta’s open-source strategy will drive innovation on its social media platform. Further, he said the capabilities of current, state-of-the-art LLMs do not pose a threat to society.
“For the stage we’re at in the development of AI — I don’t think anyone believes this is super intelligence,” he said. “The models we’re talking about, the Llama models, are an order of magnitude smaller than what OpenAI or Google are doing.”
Zuckerberg continued, “At this stage, the equity in my view is balanced strongly toward doing this more openly. If you got to something that was close to super intelligent, then you might have to think through that. We haven’t made a decision yet.”
Analysts expect the social media giant to unveil more of its generative AI strategy Sept. 27 at the Meta Connect user conference.
Because of Meta, the performance gap between OpenAI’s ChatGPT and open-source LLMs has narrowed since February, Gartner analyst Arun Chandrasekaran told IBD.
Custom Artificial Intelligence
“From an enterprise CIO or CTO view, companies are clearly going to consider using open-source large language models,” he said.
Chandrasekaran said open-source LLMs will give companies a lower-cost option for fine-tuning bigger models for industry-specific apps using their own data sets.
“For companies that want to customize a model, there is going to be a much better starting point,” he added. “And the more models that come to market, clearly there will be more price competition.”
Wall Street analysts say software companies like Salesforce (CRM) and Adobe (ADBE) also have access to data troves. That will enable them to develop industry-specific models and applications for generative AI technology. Adobe has been adding AI tools to its platform.
Many experts see software companies merging company-owned data with LLMs to create new business insights. Generative AI is expected to boost productivity across the U.S. economy, though there will be job losses.
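One common pattern for merging company-owned data with an LLM is retrieval augmentation: fetch the internal documents most relevant to a question and prepend them to the prompt, so a general-purpose model can answer from proprietary data it was never trained on. The sketch below uses a deliberately naive word-overlap ranking; a production system would use vector embeddings, and the documents here are made-up illustrations.

```python
def augment_prompt(question: str, documents: list[str], k: int = 2) -> str:
    """Prepend the k documents that best match the question, ranked by
    simple word overlap, so the model can ground its answer in them."""
    q_words = set(question.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    context = "\n".join(ranked[:k])
    return f"Context:\n{context}\n\nQuestion: {question}"

docs = [
    "Q2 revenue rose 8% on cloud demand.",
    "The cafeteria menu changes on Mondays.",
    "Cloud revenue growth was driven by new AI workloads.",
]
print(augment_prompt("What drove cloud revenue growth?", docs))
```

The augmented prompt is then sent to any LLM, proprietary or open source — which is why analysts see model choice and a company's own data as separable decisions.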
Enterprise Market For Generative AI
“Most enterprises will take a hybrid approach,” a recent TD Cowen report said. “They (will) leverage the foundational algorithms from core LLMs and then layer on smaller language models that can be fine-tuned and trained to incorporate an organization’s proprietary data and targeted use-cases.”
The report added: “This will give rise to many emerging open-source/commercial LLM providers.”
If open-source AI does eventually commoditize the LLM market, what happens then? Companies like Microsoft will need to integrate LLMs into “value-added applications and platforms,” Oppenheimer analyst Tim Horan said in a recent note.
Microsoft already is moving in that direction with its “Copilot” AI software tools, priced at $30 monthly per user.
Goldman Sachs analyst Kash Rangan shares a similar view with Horan.
“The AI space won’t be the land of just the giants,” Rangan said in a note. “The application layer will be wide-open for innovation. Use cases will be invented for AI technology that nobody has thought of yet.”
4 AI Stocks At A Glance
| Company | Ticker | AI direction | 2023 stock gain* | IBD Industry Group |
|---|---|---|---|---|

*As of 8/21/23
Follow Reinhardt Krause on Twitter @reinhardtk_tech for updates on 5G wireless, artificial intelligence, cybersecurity and cloud computing.