Adaptability of Foundation Models

In this presentation, defence scientist Mathieu Pagé Fortin will analyze how foundation models, thanks to their training on massive and diverse data, can be adapted to efficiently solve specialized tasks.

Date

27 September 2024

Time

12:00 to 13:00

Location

Hybrid event
On Zoom
At Université Laval, room 2765, Pavillon Adrien-Pouliot

Remote participation

The event will be presented via the Zoom platform.

On-site participation

Meet in room 2765 of Pavillon Adrien-Pouliot, Université Laval. A pizza lunch will be served to on-site attendees.

About the conference

In this presentation, we will analyze how foundation models, thanks to their training on massive and diverse data, can be adapted to efficiently solve specialized tasks. This adaptability makes it possible to leverage their understanding of data to meet specific needs without exhaustive retraining.

We will also examine the mechanisms underlying this adaptability, such as transfer learning, few-shot learning, and prompt engineering.
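To give a flavour of one such mechanism, here is a minimal, illustrative sketch of transfer learning (not code from the talk): a "foundation model" backbone is kept frozen and only a small linear head is trained on a handful of labelled examples. The backbone is simulated here by a fixed random projection; in practice it would be a large pretrained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone(x):
    """Stand-in for a frozen pretrained feature extractor (fixed weights)."""
    W = np.random.default_rng(42).normal(size=(x.shape[1], 16))
    return np.tanh(x @ W)

# Few-shot data: 10 labelled examples per class for a toy 2-class task.
x0 = rng.normal(loc=-1.0, size=(10, 8))
x1 = rng.normal(loc=+1.0, size=(10, 8))
X = np.vstack([x0, x1])
y = np.array([0] * 10 + [1] * 10)

# Only the linear head is trained (logistic regression by gradient descent);
# the backbone stays frozen, which is what makes adaptation cheap.
feats = backbone(X)
w = np.zeros(feats.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # predicted P(class 1)
    w -= 0.5 * feats.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

preds = (1.0 / (1.0 + np.exp(-(feats @ w + b))) > 0.5).astype(int)
accuracy = (preds == y).mean()
```

Because gradients flow only through the small head, this kind of adaptation needs far fewer examples and far less compute than retraining the backbone itself.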

About the speaker

Mathieu Pagé Fortin, Defence Scientist, Defence Research and Development Canada

Mathieu Pagé Fortin recently completed his PhD in computer science at Université Laval, where he specialized in the study of deep neural networks.

His research focused on the adaptability and scalability of these networks, i.e. their ability to learn new tasks efficiently and to continually acquire new knowledge without forgetting what has already been learned. His work spans few-shot learning, continual learning, and multimodal learning.
