Meta updates its Ego-Exo project to advance AI research and improve first-person perspective

MADRID, Dec. 1 (Portaltic/EP) –

Meta has announced new projects in the field of Artificial Intelligence (AI) and an update to its Ego-Exo initiative, aimed at addressing the challenges posed by technology focused on offering a first-person perspective.

The company is currently celebrating the tenth anniversary of its Fundamental AI Research (FAIR) team, which “has been at the forefront of numerous scientific advances,” as it explained in a statement.

In the statement, it also stressed that this group of experts is “a fundamental piece of Meta’s success” and that, thanks to its work, the company has been able to contribute “to building the future of social connection.”

One of its latest advances in this field is Voicebox, an AI model presented in June of this year that can perform speech generation tasks and produce high-quality audio clips.

Now, the firm has presented its successor, Audiobox, which accepts voice prompts or text descriptions of the sounds or types of speech users want to generate in a personalized way.

Another novelty coinciding with FAIR’s tenth anniversary is Seamless Communication, a set of AI translation models that Meta has built on SeamlessM4T and adapted to different languages.

Specifically, this tool supports English, Spanish, German, French, Italian and Chinese and “preserves the emotion and style of the speaker,” as well as the speed and rhythm of speech.

SeamlessStreaming, meanwhile, unlocks real-time conversations between people who speak different languages. Unlike conventional systems, which translate only once the speaker has finished, it translates while the speaker is still talking, allowing the listener to access the translation almost instantly.

Finally, Meta explained how it is advancing research into the so-called first-person perspective, also known as egocentric perception, which seeks to teach AI to interact with the world as naturally as humans do.

This project, called Ego-Exo, which the company has been working on since 2021, has been updated to Ego-Exo4D, which simultaneously captures first-person views from a wearable camera and external, or exocentric, views from cameras surrounding the user.

This combination gives AI models “a window into what people see and hear, combined with more context about the environment,” as Meta explained in the statement, in which it also gave examples of how it hopes to put these advances into practice.

Thanks to this, a person wearing smart glasses will be able to acquire new skills with a virtual AI trainer, which will guide them through an instructional video to, for example, repair a bicycle tire or juggle a soccer ball.
