Biden administration plans to block Nvidia, others from exporting high-performance AI chips to China

The Biden administration updated its export control regulations for artificial intelligence (AI) chips on Oct. 17, with plans to prevent companies like NVIDIA from exporting advanced AI chips to China. NVIDIA shares plunged nearly 5%, AMD shares dropped more than 2%, and Intel shares fell 1.7% after the news was announced.
Under the latest rules, NVIDIA’s exports to China of chips including the A800 and H800 will be affected. The new rules will take effect after a 30-day public comment period.
NVIDIA responded to CBN, “We comply with all applicable regulations while striving to provide products that support thousands of applications across a variety of industries. Given the global demand for our products, we do not expect (the new regulations) to have a material impact on our financial results in the near term.”

The restrictions will also affect chip sales to China by companies such as AMD and Intel, and chip equipment makers including Applied Materials, Lam Research and KLA are also implicated. This is because the new measures extend the licensing requirements for exporting advanced chips to more than 40 countries beyond China, introduce licensing requirements for chip manufacturing tools for 21 countries beyond China, and expand the list of equipment barred from entering those countries.


In addition, the new measures aim to prevent companies from bypassing the chip restrictions through chiplet-based chip stacking technology.
U.S. Commerce Secretary Gina Raimondo said the new measures are intended to “close loopholes” and indicated that they will likely be updated at least annually. “The new restrictions will affect only a small portion of chip exports to China,” she said, adding that chips used in consumer products such as game consoles or smartphones will not be subject to export controls.
Last October, the U.S. imposed bandwidth restrictions on AI chips exported to China, covering NVIDIA’s A100 and H100 chips. Since then, NVIDIA has offered Chinese companies the A800 and H800 as alternative versions. Some Chinese computer makers have publicly disclosed that H800 servers are identical in every way to the H100 chips sold elsewhere in the world, except that the interconnect transfer rate is reduced to 400 GB per second.
Although U.S. export restrictions have fueled concerns about NVIDIA’s access to the Chinese market, the company’s earnings are still surging, driven by demand for large AI model development so far this year.
Earnings for the quarter ended July 30 showed that NVIDIA’s net profit for the quarter reached a record $6.7 billion, a year-on-year surge of 422%; revenues also surged 171% year-on-year to $13.5 billion; the company’s gross profit margin increased by more than 25 percentage points over the same period last year, reaching 71.2%.
NVIDIA will announce its fiscal third-quarter earnings next month, and the company expects that third-quarter revenue will be about $16 billion, well above analysts’ expectations.

World’s first! Tsinghua University announces a major breakthrough in Chinese chips

Tsinghua University announced that the team of Professor Huaqiang Wu and Associate Professor Bin Gao from the School of Integrated Circuits has developed the world’s first fully system-integrated memristor compute-in-memory chip that supports highly efficient on-chip learning (machine learning performed directly on the hardware), based on the compute-in-memory paradigm. The results were published in the latest issue of Science.

According to the article, the team of Qian He and Huaqiang Wu has worked on the problem for 11 years, progressing from memristor devices to prototype chips to system integration, in order to break the bottleneck in AI computing power and master key “chokepoint” core technologies. The results cover the compute-in-memory chip, the integrated system, and an accelerator for the ADAM algorithm, and are expected to advance fields such as artificial intelligence, autonomous driving, and wearable devices.

It is understood that in 2012, Qian He and Huaqiang Wu’s team began studying memristor-based storage and computing. The memristor is the fourth basic circuit element after the resistor, capacitor and inductor; it “remembers” the charge that has passed through it even after power is cut, so it can serve as a new type of nanoscale electronic synaptic device. In 2020, the team built a complete compute-in-memory system based on multiple memristor arrays with all hardware components; it efficiently ran a convolutional neural network and successfully verified image recognition, with energy efficiency two orders of magnitude higher than a graphics processor chip, dramatically increasing computing power and allowing complex computation to be completed with lower power consumption and lower hardware cost.
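To make the compute-in-memory idea concrete, here is a rough numerical sketch in Python of how a memristor crossbar performs a matrix-vector multiply in place. It illustrates only the underlying physics (Ohm’s and Kirchhoff’s laws), not the Tsinghua team’s design; the array size and conductance values are arbitrary assumptions.

    import numpy as np

    # Conceptual sketch only, not the Tsinghua chip: each memristor's conductance
    # G[i, j] stores a network weight, input voltages V[i] drive the crossbar rows,
    # and by Ohm's and Kirchhoff's laws the current collected on column j is
    # I[j] = sum_i V[i] * G[i, j] -- a matrix-vector multiply computed inside the
    # memory array, with no data shuttled to a separate compute unit.
    rng = np.random.default_rng(0)
    G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # conductances in siemens (stored weights)
    V = np.array([0.1, 0.2, 0.0, 0.3])        # input voltages in volts
    I = V @ G                                  # column currents in amperes = analog MVM result
    print(I)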

On-chip learning is important for edge smart devices to adapt to different application scenarios. Current techniques for training neural networks require moving large amounts of data between computing and storage units, which hinders the realization of learning on edge devices.

In this research, Qian He and Huaqiang Wu led the team in designing STELLAR, a new general-purpose algorithm and architecture for efficient on-chip learning suited to memristor compute-in-memory. It spans the learning algorithm, the hardware implementation, and a parallel conductance-tuning scheme, and is a general approach to enabling on-chip learning with memristor crossbar arrays.

The chip has been shown to handle tasks such as motion control, image classification and speech recognition. For the same task, it consumes only 1/35 of the power of an ASIC, while promising a 75-fold improvement in energy efficiency.

“The integrated on-chip learning can effectively protect user privacy and data while achieving lower latency and lower energy consumption,” said Yao Peng, a postdoctoral fellow and one of the paper’s first authors. Drawing on brain-inspired processing, the chip can rapidly perform “on-chip training” and “on-chip recognition” for different tasks, effectively handling incremental learning in edge computing scenarios, adapting to new scenarios and learning new knowledge at extremely low power consumption to meet users’ personalized needs.

OpenAI reportedly in talks to let employees sell shares at a higher valuation

OpenAI is reportedly in discussions with investors about a possible stock sale, with the artificial intelligence startup seeking to sell shares at a valuation of $80 billion to $90 billion (roughly 584.2 billion to 657.3 billion yuan at current exchange rates), nearly triple its valuation from earlier this year.

The deal is expected to allow employees to sell their existing holdings rather than raise additional capital through newly issued shares, the WSJ said. OpenAI has already begun pitching investors, people familiar with the matter said, adding that OpenAI’s revenue is expected to reach $1 billion this year and several billion dollars in 2024.

The $1 billion figure is consistent with the numbers reported in the media in August: OpenAI reportedly generates $80 million in monthly revenue, up from $28 million for all of 2022. ChatGPT Plus, the $20-per-month paid version of ChatGPT launched in February, has fueled OpenAI’s revenue growth.

On the other hand, a stock sale would allow employees to realize the value of their equity without having to wait for the company to go public, could help the company attract top talent and generate liquidity, and would give OpenAI a new valuation. An $80 billion or higher valuation would make OpenAI one of the most highly valued startups in the world, behind only ByteDance and Musk’s SpaceX.

More money raised, but more money spent
OpenAI aims to sell hundreds of millions of dollars’ worth of existing stock to Silicon Valley investors, the report said. In April, OpenAI raised more than $300 million from investors including Sequoia Capital, Andreessen Horowitz, Thrive and K2 Global, valuing the company at $29 billion. That round was separate from the large investment Microsoft announced earlier this year, which was completed in January and amounted to about $10 billion. By some counts, OpenAI had already received $4 billion in investment over its seven-plus years of existence before the $10 billion from Microsoft, bringing its cumulative financing to about $14.3 billion (roughly 104.5 billion yuan at the current exchange rate).

While it was reported back in May that OpenAI was looking to raise more money, the share sale, if it goes ahead, will not provide OpenAI with additional working capital; it will only allow its employees to cash out some of their shares.

In May, The Information reported that three people with knowledge of OpenAI’s finances had revealed that OpenAI’s losses had roughly doubled to around $540 million as a result of the development of ChatGPT and the hiring of key staff from Google last year. Dylan Patel, principal analyst at consulting firm SemiAnalysis, estimates that it costs $700,000 per day to run ChatGPT.

According to Fortune, the company’s $544.5 million in total expenses for 2022 consisted of $416.45 million in compute and data expenses, $89.31 million in employee expenses, and $38.75 million in unspecified other operating expenses. Those costs were racked up before Microsoft’s $10 billion investment earlier this year.

OpenAI has not stopped upgrading and updating its products, and the heavy spending clearly continues:

In the early morning hours of September 21, OpenAI announced that its text-to-image tool, DALL-E, would soon be upgraded to DALL-E 3 and natively integrated into ChatGPT. Compared with last year’s DALL-E 2, DALL-E 3 is significantly better at understanding text and generates better images from the same prompts. The often-criticized inability to render text within generated images has also been fixed in this update.

On September 26, OpenAI launched multimodal ChatGPT: in addition to the usual text-box interactions, voice and images are being rolled out to Plus and Enterprise users, allowing them to have a voice conversation or show ChatGPT what they are talking about. Voice is available on iOS and Android (opt in via settings), and images are available on all platforms.

Earlier, on August 16, OpenAI announced that it had acquired the team behind Global Illumination, its first acquisition since its founding in 2015, though it did not disclose the deal’s value.

While the company’s revenue has grown since OpenAI launched a paid version of its chatbot in February, those costs are likely to keep rising as more customers use its AI technology and the company trains future versions of the software. So far, Microsoft and other recent investors have covered most of these costs, but they may stop investing if OpenAI doesn’t turn a profit soon.

Meanwhile, on August 24, media reported that OpenAI CEO Sam Altman would travel to Abu Dhabi, the capital of the United Arab Emirates, and elsewhere in the second half of the year to seek financing on a huge scale of no less than $100 billion.

Reportedly, the story Altman told VCs is not limited to AGI (artificial general intelligence); he said OpenAI’s goal is to achieve superintelligence, which could, for example, tackle cancer within a month. But OpenAI is still very far from that goal, and the amount of money it needs to get there is unimaginable.

For now, Altman has no plans to raise money through an IPO. In June, Altman said he wasn’t interested in going public because he didn’t want to be sued by the public markets, Wall Street, and so on. He explained that when developing AI, he may make some decisions that seem very strange to public market investors.

Even the biggest investor can’t be counted on forever
Microsoft took a 49 percent stake in OpenAI after investing $10 billion in the company earlier this year. As part of the deal, OpenAI promised to work with Microsoft and integrate its AI software into Microsoft’s products in exchange for the computing resources needed to train and run its models. Microsoft has indeed raced to build AI capabilities into most of its software products, including the GPT-4-based Windows Copilot, but today, large models are putting pressure on Microsoft as well.

The company is concerned that with Windows having more than a billion users worldwide, the cost of running these AI features could grow rapidly. And Microsoft doesn’t want to give up the financial benefits of its new AI offerings, so it’s looking for low-cost alternatives.

In recent weeks, Peter Lee, who oversees Microsoft’s 1,500 researchers, instructed many of them to develop conversational AI, which may not perform as well as GPT-4 but is smaller and much cheaper to run, according to people familiar with the matter.

Microsoft’s research team reportedly has no illusions about building large AI models like GPT-4: it has neither the computing resources OpenAI has nor the large pool of human reviewers providing feedback on LLM answers for engineers to improve the models.

Microsoft’s efforts to move to more efficient AI models are still in the early stages, though the company has reportedly begun testing internally developed models in services like Bing Chat.

Mikhail Parakhin, head of Microsoft’s search division, has previously said that Bing Chat’s Creative and Precise modes rely 100 percent on GPT-4. Its Balanced mode, however, uses a new model called Prometheus together with Turing language models. The latter are not as powerful as GPT-4: they can recognize and answer simple questions, but pass tougher ones on to GPT-4.
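As a rough illustration of that routing pattern, here is a hypothetical Python sketch; it is not Microsoft’s actual implementation, and the difficulty heuristic and model callables are invented for the example.

    # Hypothetical sketch of "small model first, fall back to GPT-4" routing.
    # The difficulty heuristic and model callables are illustrative assumptions,
    # not Microsoft's actual Prometheus/Turing implementation.
    from typing import Callable

    def looks_hard(question: str) -> bool:
        """Crude stand-in for a real difficulty/intent classifier."""
        markers = ("why", "prove", "compare", "step by step", "analyze")
        return len(question) > 200 or any(m in question.lower() for m in markers)

    def answer(question: str,
               small_model: Callable[[str], str],
               large_model: Callable[[str], str]) -> str:
        """Route simple questions to the cheaper model; pass tough ones to the big one."""
        if looks_hard(question):
            return large_model(question)   # expensive but more capable
        return small_model(question)       # cheap, handles simple queries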

On the coding front, Microsoft recently unveiled its 1.3 billion-parameter Phi-1 model, which is said to be trained on “textbook-quality” data and generates code in a much more efficient way, but it’s not quite up to GPT-4 standards.

The company is also working on other AI models, such as Orca, which is based on Meta’s open-source Llama 2 model. According to the report, Orca’s performance is close to that of OpenAI’s models, even though it is smaller and consumes fewer resources.

Microsoft’s AI research division has about 2,000 graphics cards from NVIDIA available for use, and Lee has now ordered that most of them be used to train more efficient models focused on performing specific tasks, rather than the more general-purpose GPT-4, according to the report.

There’s no denying that OpenAI and other developers, including Google and Anthropic, are ahead of Microsoft in developing advanced LLMs, but Microsoft may be able to compete in the race to build models of similar quality to OpenAI’s software at a fraction of the cost.


A boy had a “strange disease” for three years and 17 doctors couldn’t find the cause, but ChatGPT finally diagnosed it

A mother named Courtney shared that her 4-year-old son, Alex, had seen 17 doctors over 3 years for chronic pain, and none of them could explain its exact cause, until Courtney signed up for ChatGPT, entered Alex’s medical information, and got the right diagnosis.

Three years ago, Courtney bought an inflatable trampoline because she was worried about her kids being bored at home during the pandemic, but it wasn’t long before her son, Alex, started having body aches and had to take Motrin (a brand of ibuprofen) every day to keep from having meltdowns.

When Alex started chewing on things, Courtney wondered if the pain was coming from his molars or from cavities, and took him to the dentist. The dentist ruled out those causes, suspected Alex might just be grinding his teeth, and referred Courtney to an orthodontist who specializes in treating airway obstructions, which, according to the dentist, can interfere with sleep and leave a child tired and moody.
After examining Alex, the orthodontist said Alex’s upper jaw was too small for his mouth and teeth, which could make it difficult for him to breathe at night, so the doctor recommended an expander be fitted to Alex’s upper jaw.

“Things seemed to improve, and everything was a little bit better.” Courtney thought that would be the end of it, but then she noticed that Alex hadn’t grown in a long time, and that his left and right sides seemed out of balance: he always walked with his right foot in front and dragged his left.

Courtney took Alex back to the pediatrician, who speculated that Alex’s developmental problems might be due to the pandemic. Although Courtney didn’t agree with this theory, she followed the pediatrician’s recommendation that Alex receive physical therapy to correct the imbalance between his left and right sides.

At the same time, Alex was also suffering severe headaches, so Courtney took him to a neurologist, who said he had migraines, and to an ear, nose and throat (ENT) doctor to check whether his constant fatigue was due to sinus or airway problems causing a sleep disorder.

However, no dentist, pediatrician, neurologist or ENT was able to find the true cause of Alex’s pain and fatigue. Courtney felt powerless: “No matter how many doctors we saw, they would only focus on their area of expertise, and no one was willing to address the bigger issues or even tell us what the diagnosis really was.”

By chance, a physical therapist told Courtney that Alex might have a condition called Chiari malformation – a congenital disorder that causes an abnormality in the brain where the skull meets the spine. Courtney began researching the condition and took Alex to more doctors: a new pediatrician, a pediatric internist, an adult internist and a musculoskeletal surgeon, but nothing worked.

Courtney counted 17 different doctors they’d taken Alex to over the past three years, but never found a cause that explained all of his symptoms – and, exhausted, Courtney signed up for ChatGPT a few months ago, hoping the AI would provide some useful information.
After the medical information was entered, ChatGPT found the cause!
After successfully signing up for ChatGPT, Courtney entered everything she knew about Alex’s symptoms, along with all the information from his MRI records, into ChatGPT: “I went line by line through everything in Alex’s [MRI records] and then entered it all into ChatGPT.”
As it turned out, this seemingly unreliable approach was the one that finally produced an answer: based on the medical information Courtney entered, ChatGPT suggested a diagnosis of tethered cord syndrome.
“That diagnosis made a lot of sense.” After learning of this possibility, Courtney quickly searched Facebook and joined a group for families of children with the condition, whose stories sounded almost exactly like Alex’s.

So this time, Courtney made an appointment with a new neurosurgeon and told him straight up that Alex might have tethered cord syndrome. After looking at Alex’s MRI images, the doctor confirmed it: Alex does have tethered cord syndrome.
According to the American Association of Neurological Surgeons, tethered cord syndrome occurs when spinal cord tissue forms attachments that restrict the movement of the spinal cord, causing it to stretch abnormally. Symptoms include leg dragging, body aches, scoliosis, deformities of the feet or legs, and delays in developmental milestones such as sitting up and walking. The condition is closely associated with spina bifida, a birth defect in which part of the spine fails to develop properly, leaving part of the spinal cord and nerves exposed.

In simple terms, tethered cord syndrome means that “the spinal cord is attached to something.” That something could be a tumor in the spinal canal, a bony protrusion, or simply too much fat at the end of the spinal cord. As one pediatric neurosurgeon explains, “Pulling occurs once the abnormal area can’t be stretched...”

Often, doctors notice these conditions soon after birth, but certain markers that may indicate hidden spina bifida, such as a dimple, a red spot or a tuft of hair, can easily be overlooked. Alex has the “hidden” (occulta) form of spina bifida, and because he is still a toddler, it was hard to pinpoint exactly what was wrong, which made diagnosis difficult.

Thanks to ChatGPT’s prompt, the source of the pain that had plagued Alex for three years was finally identified. After the tethered cord syndrome diagnosis, Alex underwent surgery soon afterward and is well on his way to recovery, Courtney says, adding that she can finally see real relief and joy on Alex’s face.

Still, at this point, ChatGPT is not a substitute for a clinician.
There’s no doubt that for Courtney and Alex, ChatGPT was a big help. Andrew Beam, PhD, an assistant professor of epidemiology at Harvard University who researches machine learning models and medicine, commented after learning about the case: “This [ChatGPT] is a super-powerful medical search engine.”
According to Andrew Beam, ChatGPT learns from the vast amount of textual data available on the Internet, reads the entire Internet, and therefore may “not have the same blind spots as a human doctor”. Especially for patients with complex conditions that make it difficult to get a diagnosis, ChatGPT may be better as a diagnostic tool than a typical symptom checker or Google.
However, Andrew Beam also emphasized that ChatGPT cannot replace the expertise of clinicians anytime soon, as it may “make up information” when it can’t find an answer. For example, if someone asks ChatGPT about a study on influenza, it will give them a few titles that sound plausible, and even list the authors, when in fact the papers may not exist. “This ‘illusion’ phenomenon is a big problem when we start talking about ChatGPT in medicine.”

Dr. Jesse M. Ehrenfeld, president of the American Medical Association (AMA), also noted that while AI products show great promise in helping to ease the burden on physicians, ChatGPT and other generative AI products currently have limitations and safety issues that pose potential risks for physicians and patients, and they should be used with caution.

3 amazing open source AI projects born on GitHub!

GitHub, the well-known developer community, has produced a steady stream of practical AI tools recently.

These tools share the same traits: they are easy to use, efficient and refreshingly novel, and along the way they free up your personal productivity.

Today we recommend a few popular AI tools on GitHub to help you further improve development efficiency in your daily work.

Microsoft’s new open source work helps you quickly get started with large model development
At its Build 2023 developer conference on May 23 this year, Microsoft released a number of important updates, including several new AI tools.

One of these tools, featured repeatedly in the official introductory video, is Prompt flow, which is designed to simplify the development cycle of large language model (LLM) apps.

The project is now open-sourced on GitHub, and in just one week it has surpassed 2,000 stars.

GitHub: https://github.com/microsoft/promptflow

With this project, you can quickly move through the whole process from ideation, prototyping, testing and evaluation to production deployment and monitoring, allowing developers to build high-quality large language model applications more easily.

This includes, but is not limited to, the following features:

Create and iterate development flows

Evaluate flow quality and performance

Streamline production development cycles

The project comes with detailed technical documentation and guides, such as a Prompt flow getting-started tutorial and a “chat with PDF” walkthrough, which, together with the VS Code extension, help you quickly get started with large model development.
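For a feel of the workflow, here is a minimal sketch of testing a flow locally from Python. It assumes `pip install promptflow promptflow-tools`, a flow directory `./my_chat_flow` created from one of the official templates, and any required connections already configured; the SDK surface may differ between versions.

    # Minimal sketch, not official documentation: run a local flow once with the
    # promptflow Python SDK. Assumes a flow directory ./my_chat_flow created from
    # an official template and any required connections already set up.
    from promptflow import PFClient

    pf = PFClient()

    # Test the flow with a sample input and print whatever outputs it defines.
    result = pf.test(
        flow="./my_chat_flow",
        inputs={"question": "What does Prompt flow do?"},
    )
    print(result)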

AI Open Source Tools for Computer Vision
With Meta AI’s repeated open source releases this year, a variety of practical image segmentation and processing models and frameworks have been introduced, further lowering the technical barrier to computer vision.

The rise of smart cities, automated factory management, autonomous driving and many other industries has also brought computer vision technology into wide use.

The technology ecosystem is now mature enough that it is far easier for ordinary developers to start learning and applying it.

Here we recommend a fairly popular computer vision AI toolkit: supervision. It is easy to install and offers reusable components for developers, greatly improving efficiency.

GitHub: https://github.com/roboflow/supervision

The project provides practical tools for the main application scenarios of computer vision, such as object detection, dataset processing, model evaluation, and data analysis and computation.

The project can be installed with pip, simply and quickly, saving you a lot of tool installation and management overhead.
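For example, here is a minimal sketch that draws detection boxes on an image using supervision together with an Ultralytics YOLOv8 model. It assumes `pip install supervision ultralytics opencv-python` and a local `image.jpg`; the exact API may vary between library versions.

    # Minimal sketch (APIs may differ between versions): run YOLOv8, convert the
    # results into supervision's Detections, and draw bounding boxes on the image.
    import cv2
    import supervision as sv
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")                       # pretrained detector
    image = cv2.imread("image.jpg")                  # any local test image
    results = model(image)[0]                        # run inference

    detections = sv.Detections.from_ultralytics(results)
    annotated = sv.BoxAnnotator().annotate(scene=image.copy(), detections=detections)
    cv2.imwrite("annotated.jpg", annotated)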

Out of the box: play with large models on the command line
ChatGPT’s previously released Code Interpreter plugin is considered by many developers to be the most powerful plugin recommended by OpenAI.

With its integrated sandboxed Python environment, it can automatically read and analyze the Excel or CSV data you upload, produce professional analysis reports, and generate data charts in a variety of styles.

It can even run Python scripts online to batch process video clips, generate website demos, and perform other operations.

The list goes on and on. I don’t think I need to tell you how powerful it is.

Unfortunately, it is currently only available on ChatGPT.

Today we recommend an open source implementation: Open Interpreter, which allows you to call Python, JavaScript, Shell and other languages directly on the command line terminal to quickly demonstrate a variety of large model capabilities.

Here’s a demo video for you to get a feel for it:

GitHub: https://github.com/KillianLucas/open-interpreter/

From the video, we can see that with Open Interpreter you can easily perform the following AI functions on your local computer:

Creating and editing photos, videos, PDF files;

Controlling Chrome to perform searches;

Plotting, cleaning and analyzing large data sets;

Intelligently changing system configurations;

Automatically generating and running demo source code;

AI one-on-one chat and Q&A.
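To try it out, here is a minimal sketch based on the project’s README at the time. It assumes `pip install open-interpreter` and an OpenAI API key in the environment; the interface may have changed in newer releases.

    # Minimal sketch based on the project's README at the time; the API may have changed.
    # Assumes: pip install open-interpreter, and OPENAI_API_KEY set in the environment.
    import interpreter

    interpreter.auto_run = False   # keep the confirmation prompt before executing code

    # The model writes code for the task and, after you confirm, runs it locally.
    interpreter.chat("Summarize ./sales.csv and plot the monthly totals")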

The project began attracting attention at the start of the month, has repeatedly appeared on GitHub’s trending list since then, and in just one month has passed 24,000 stars, an astonishing growth rate that also reflects how strong the demand for it is.