Meta Researchers Build An AI That Learns Equally Well From Visual, Written Or Spoken Materials
https://techcrunch.com

Meta (AKA Facebook) researchers are working on an AI, data2vec, that can learn capably on its own from spoken, written, or visual material. The code for data2vec is open source; it and some pretrained models are available here.
The idea for data2vec was to build an AI framework that would learn in a more abstract way, meaning that, starting from scratch, you could give it books to read, images to scan, or speech to sound out, and after a bit of training it would learn any of those things. It’s a bit like starting with a single seed that, depending on what plant food you give it, grows into a daffodil, a pansy, or a tulip.