OpenAI reportedly wants to let employees cash out their shares at a far higher valuation.

OpenAI is reportedly in discussions with investors about a possible stock sale, with the artificial intelligence startup seeking to sell shares at a valuation of $80 billion to $90 billion, nearly triple the valuation it commanded earlier this year.

The deal is expected to let employees sell their existing holdings rather than raise additional capital through newly issued shares, the WSJ said. OpenAI has already begun pitching investors, people familiar with the matter said, adding that the company's revenue is expected to reach $1 billion this year and several billion dollars in 2024.

The $1 billion figure is consistent with numbers that surfaced in the media in August. OpenAI reportedly generates $80 million in revenue per month, up from $28 million for all of 2022. ChatGPT Plus, the $20-per-month paid tier of ChatGPT that launched in February, has fueled OpenAI's revenue growth.
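As a rough consistency check, annualizing the reported monthly run rate lands close to the $1 billion figure (a back-of-the-envelope sketch that simply assumes the $80 million per month holds for a full year):

```python
# Back-of-the-envelope check: annualize the reported monthly revenue.
# Assumes the $80M/month run rate simply holds for twelve months.
monthly_revenue_usd = 80_000_000
annualized_usd = monthly_revenue_usd * 12
print(f"Annualized revenue: ${annualized_usd / 1e9:.2f} billion")  # ~$0.96 billion, in line with the ~$1 billion figure
```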

On the other hand, a stock sale would let employees realize the value of their equity without waiting for the company to go public, could help OpenAI attract top talent and provide liquidity, and would establish a new valuation for the company. An $80 billion or higher valuation would make OpenAI one of the most highly valued startups in the world, behind only ByteDance and Elon Musk's SpaceX.

More money raised, but more money spent
OpenAI aims to sell hundreds of millions of dollars' worth of its existing stock to Silicon Valley investors, the report said. In April, OpenAI raised more than $300 million from investors including Sequoia Capital, Andreessen Horowitz, Thrive and K2 Global, valuing the company at $29 billion. That round was separate from the large investment Microsoft announced earlier this year, which closed in January and totaled about $10 billion. By some counts, OpenAI had already taken in $4 billion in investment over its seven-plus years of existence before Microsoft's $10 billion, bringing its cumulative financing to $14.3 billion (about 104.462 billion yuan at the current exchange rate).

While it was reported back in May that OpenAI was looking to raise more money, the share sale, if it goes ahead, will not provide OpenAI with additional working capital; it will only let employees divest some of their shares.

In May, The Information reported, citing three people with knowledge of OpenAI's finances, that the company's losses roughly doubled to around $540 million last year as it developed ChatGPT and hired key staff away from Google. Dylan Patel, principal analyst at the consulting firm SemiAnalysis, estimates that running ChatGPT costs $700,000 per day.

According to figures disclosed by Fortune, the company's roughly $544.5 million in total expenses for 2022 consisted of $416.45 million in compute and data expenses, $89.31 million in employee expenses, and $38.75 million in other operating expenses not broken down further. Those costs were incurred before Microsoft's $10 billion investment earlier this year.
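For what it's worth, the disclosed line items do add up to the reported total (a trivial sanity check on the figures above, nothing more):

```python
# Sum the disclosed 2022 expense line items (all figures in millions of USD).
compute_and_data = 416.45
employee = 89.31
other_operating = 38.75
total = compute_and_data + employee + other_operating
print(f"Total 2022 expenses: ${total:.2f} million")  # 544.51, matching the roughly $544.5 million reported
```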

OpenAI has kept upgrading and updating its products, and the heavy spending clearly continues:

In the early morning hours of September 21st, OpenAI announced that its text-to-image tool, DALL-E, will soon be upgraded to DALL-E 3 and natively integrated into ChatGPT. Compared with last year's DALL-E 2, DALL-E 3 is significantly better at understanding text and produces better images from the same prompts. The long-criticized inability to render legible text inside generated images has also been addressed in this update.

On September 26th, OpenAI announced multimodal ChatGPT: in addition to the usual text-box interaction, voice and image input are coming to Plus and Enterprise users, allowing them to hold a spoken conversation or show ChatGPT what they are talking about. Voice will be available on iOS and Android (opt-in via settings) and images will be available on all platforms.

Earlier, on August 16th, OpenAI announced that it had acquired the team at Global Illumination, its first acquisition since its founding in 2015, but did not disclose the terms of the deal.

While the company's revenue has grown since OpenAI launched a paid version of its chatbot in February, its costs are likely to keep rising as more customers use its AI technology and the company trains future versions of its software. So far, Microsoft and other recent investors have covered most of these costs, but they may stop investing if OpenAI doesn't turn a profit soon.

Meanwhile, on August 24th, media reported that OpenAI CEO Sam Altman plans to travel to Abu Dhabi, the capital of the United Arab Emirates, and elsewhere in the second half of the year to seek financing on a huge scale, said to be no less than $100 billion.

The story Altman is reportedly telling VCs is not limited to artificial general intelligence (AGI): he says OpenAI's goal is to achieve superintelligence, a system that could, for example, take on cancer within a month. But OpenAI is still very far from that goal, and the amount of money it would need to get there is almost unimaginable.

For now, Altman has no plans to raise money through an IPO. In June, he said he wasn't interested in going public because he didn't want to be sued by public market investors, Wall Street, and so on; he explained that in developing AI, he may make decisions that look very strange to public market investors.

Even its biggest backer is looking for cheaper options
Microsoft took a 49 percent stake in OpenAI after investing $10 billion in the company earlier this year. As part of the deal, OpenAI promised to work closely with Microsoft and integrate its AI software into Microsoft's products, in exchange for the computing resources to train and run its models. Microsoft has indeed raced to build AI capabilities into most of its software, including the GPT-4-based Windows Copilot, but today the cost of large models is putting pressure on Microsoft as well.

The company is concerned that, with Windows counting more than a billion users worldwide, the cost of running these AI features could grow rapidly. Microsoft doesn't want to give up the financial upside of its new AI offerings either, so it is looking for lower-cost alternatives.

In recent weeks, Peter Lee, who oversees Microsoft's 1,500 researchers, has instructed many of them to develop conversational AI models that may not perform as well as GPT-4 but are smaller and much cheaper to run, according to people familiar with the matter.

Microsoft's research team reportedly has no illusions about building large AI models like GPT-4. The team has neither OpenAI's computational resources nor its large pool of human labelers who provide feedback on an LLM's answers so that engineers can improve it.

Microsoft’s efforts to move to more efficient AI models are still in the early stages, though the company has reportedly begun testing internally developed models in services like Bing Chat.

Mikhail Parakhin, the head of Microsoft's search division, has previously said that Bing Chat relies 100 percent on GPT-4 in its creative and precise modes. In its balanced mode, however, it uses a newer model called Prometheus and Turing language models. The latter are not as powerful as GPT-4: they can recognize and answer simple questions, but pass tougher ones on to GPT-4.
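Conceptually, this is a model cascade: answer cheaply when you can, escalate when you must. The sketch below illustrates only that general pattern, not Microsoft's actual implementation; the function names and the confidence heuristic are hypothetical.

```python
# Illustrative sketch of a model cascade: try a small, cheap model first and
# fall back to a large model when the small one isn't confident enough.
# `small_model`, `large_model`, and the threshold are hypothetical placeholders.
from typing import Callable, Tuple


def answer(
    question: str,
    small_model: Callable[[str], Tuple[str, float]],  # returns (answer, self-estimated confidence)
    large_model: Callable[[str], str],
    threshold: float = 0.8,
) -> str:
    draft, confidence = small_model(question)
    if confidence >= threshold:   # simple question: the cheap answer is good enough
        return draft
    return large_model(question)  # tough question: escalate to the expensive model
```

The appeal of this design is cost: if most queries are simple, most of them never touch the expensive model.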

On the coding front, Microsoft recently unveiled its 1.3-billion-parameter Phi-1 model, which is said to be trained on "textbook-quality" data and to generate code far more efficiently, though it doesn't quite reach GPT-4's level.
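For readers who want to try a model in this size class themselves, a minimal sketch using Hugging Face transformers might look like the following. It assumes the checkpoint is published as "microsoft/phi-1" and that the transformers and torch packages are installed; nothing here comes from Microsoft's own tooling.

```python
# Minimal sketch: load a small code model and complete a function stub.
# Assumes the public "microsoft/phi-1" checkpoint; older transformers releases
# may additionally require trust_remote_code=True.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-1"  # assumed Hugging Face model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```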

The company is also working on other AI models, such as Orca, which is based on Meta's open-source Llama 2 model. According to the report, Orca's performance comes close to that of OpenAI's models even though it is smaller and consumes fewer resources.

Microsoft's AI research division has about 2,000 NVIDIA graphics cards at its disposal, and Lee has now ordered that most of them be used to train more efficient models focused on specific tasks, rather than general-purpose models like GPT-4, according to the report.

There’s no denying that OpenAI and other developers, including Google and Anthropic, are ahead of Microsoft in developing advanced LLMs, but Microsoft may be able to compete in the race to build models of similar quality to OpenAI’s software at a fraction of the cost.
