Facebook’s AI chief is working on a new semiconductor that works differently from other designs

Facebook Builds Own Semiconductor Chips

Facebook’s chief artificial intelligence researcher said the company is working on a new class of semiconductors that would work very differently from most existing designs.

Yann LeCun said that future chips used to train deep learning algorithms, which underpin most modern advances in artificial intelligence, will need to be able to manipulate data without having to divide it into multiple batches. To handle the volume of data these machine learning systems need in order to learn, most existing chips break the data into batches and process each batch sequentially.
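The batching approach described above can be sketched in a few lines of NumPy. This is purely an illustration of the idea, not anything from the article: the dataset is cut into fixed-size slices and a (hypothetical) training step runs on each slice in turn.

```python
import numpy as np

def train_in_batches(data, batch_size, step):
    """Apply a training step to each fixed-size batch, one after another."""
    results = []
    for start in range(0, len(data), batch_size):
        batch = data[start:start + batch_size]  # one slice of the dataset
        results.append(step(batch))             # processed sequentially
    return results

# Example: 10 samples split into batches of 4 -> 3 sequential steps.
data = np.arange(10)
losses = train_in_batches(data, batch_size=4, step=lambda b: float(b.mean()))
```

The sequential loop is the bottleneck LeCun alludes to: each batch must wait for the previous one, so the hardware dictates how the data is chopped up.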

“We don’t want to leave any stone unturned, particularly if no one else is turning them over,” he said in an interview ahead of the release of a paper on the history and future of computer hardware designed for artificial intelligence.

Intel Corp and Facebook have previously said they are working together on a new class of chips designed specifically for artificial intelligence applications. In January, Intel said it planned to deliver the new chip in the second half of this year.

Facebook is part of an increasingly heated race to create semiconductors better suited to promising forms of machine learning. Alphabet Inc.’s Google has built a chip called the Tensor Processing Unit that helps power AI applications in its own cloud computing data centers. In 2016, Intel bought San Diego-based Nervana Systems, which was working on an AI-specific chip.

In April, Bloomberg reported that Facebook was hiring a hardware team to build its own chips for a variety of applications, including artificial intelligence, as well as to handle the diverse workloads of its massive data centers.

At present, the most common chips for training neural networks, a type of software loosely modeled on the way the human brain works, are graphics processing units (GPUs) from companies like Nvidia, which were originally designed to handle the heavy workload of rendering images for video games.

LeCun said that for now GPUs will remain important for deep learning research, but the chips are poorly suited to running artificial intelligence algorithms once they are trained, whether in data centers or on devices such as mobile phones and digital assistants.

Instead, LeCun said, future AI chip designs will have to handle information more efficiently. In a system that learns the way the human brain does, most neurons do not need to be activated at any given moment. But current chips process the data for every neuron in the network at every step of computation, even the ones not in use, which makes the process inefficient.
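The waste LeCun describes can be seen in a toy NumPy sketch (an illustration of the general idea of exploiting activation sparsity, not of any specific chip design): when only a small fraction of neurons are active, a dense matrix multiply still touches every weight, while restricting the computation to the active neurons gives the identical result with a fraction of the arithmetic.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((1000, 1000))

# Only 5% of the input neurons "fire"; the rest stay at zero.
activations = np.zeros(1000)
active = rng.choice(1000, size=50, replace=False)
activations[active] = rng.standard_normal(50)

# Dense path: multiply the full vector, zeros included (what current chips do).
dense_out = weights @ activations

# Sparse path: use only the columns for active neurons -> same answer, ~20x less work.
sparse_out = weights[:, active] @ activations[active]

assert np.allclose(dense_out, sparse_out)
```

Hardware that can skip the zero entries natively, rather than emulating the trick in software as above, is the kind of design the article says future AI chips will need.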

Several startups have tried to build chips that handle sparse information more efficiently. Daniel Goldin, NASA’s former administrator, founded a company called KnuEdge that was working on one such chip, but the company struggled to gain traction and announced in May that it was laying off most of its workforce.

LeCun, who is also a professor of computer science at New York University, is one of the pioneers of the class of machine learning techniques known as deep learning, which relies on very large neural networks. He is particularly known for applying deep learning to computer vision tasks, such as recognizing handwritten letters and digits or identifying people and objects in images.
