From language to machines - how statistical learning shapes our worlds
The ability to learn structural information from our rich environments is crucial for understanding our world and predicting our future. Examples range from infants and natural language processing algorithms learning the transition probabilities of human languages, to learning to categorize objects based on visual regularities. Surprisingly, once language is learned it can in turn constrain perception: our ability to distinguish different colour shades is related to the number of words a language uses to describe them.
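The transition probabilities mentioned above are simply conditional probabilities: how likely one element of a sequence (for example, a syllable) is to follow another. As a rough, purely illustrative Python sketch, using a made-up toy syllable stream rather than material from any of the speakers' work, they can be estimated by counting adjacent pairs:

```python
# Illustrative sketch only: estimating syllable-to-syllable transition probabilities,
# the statistic that statistical-learning accounts propose infants track when
# segmenting continuous speech. The "stream" below is a made-up toy example.
from collections import Counter, defaultdict

stream = "tu pi ro go la bu bi da ku tu pi ro pa do ti go la bu".split()

pair_counts = Counter(zip(stream, stream[1:]))   # counts of adjacent syllable pairs
first_counts = Counter(stream[:-1])              # counts of each syllable as the first of a pair

# P(next | current) = count(current, next) / count(current)
transition_prob = defaultdict(dict)
for (a, b), n in pair_counts.items():
    transition_prob[a][b] = n / first_counts[a]

print(transition_prob["tu"])   # {'pi': 1.0}: in this toy stream, "tu" is always followed by "pi"
print(transition_prob["ku"])   # {'tu': 1.0}
```

In statistical-learning accounts of speech segmentation, high transition probabilities tend to fall within words and dips between them, giving infants (and simple counting models like this one) a cue to likely word boundaries.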
Provisional Schedule*
| Time | Session |
| --- | --- |
| 9:50 | Intro by the SWC/Gatsby Student Symposium Team |
| 10:00 | In-person Speaker 1: Judit Gervain – "Neural oscillations in response to speech at birth" |
| 10:50 | Virtual Speaker 1: Chris Emezue (Masakhane/TUM) – "Masakhane – pioneering participatory approaches to African languages processing" |
| 11:20 | In-person Speaker 2: Alison Barker – "Lessons from Underground: Linking social dynamics and vocal communication in the naked mole-rat" |
| 12:10 | Virtual Speaker 2: Ahana Fernandez – "From babbling to singing: Vocal communication of the greater sac-winged bat" |
| 12:40 | Lunch break (60 min) |
| 13:40 | In-person Speaker 3: Felix Hill – "How language helps us build better models of general intelligence" |
| 14:30 | Virtual Speaker 3: David Quiroga, UC Berkeley |
| 15:00 | In-person Speaker 4: Lera Boroditsky |
| 15:50 | Coffee break (10 min) |
| 16:00 | Student engagement session |
| 18:00 | Presentations to panel of faculty judges (food and drinks will be provided) |
*all timings in BST (UTC+1)
Speakers
Talk titles and abstracts will be published below in due course. For now, acquaint yourself with our brilliant speakers by reading the short profiles we have prepared below.
IN-PERSON SPEAKERS:
Judit Gervain
Judit Gervain is a full professor of developmental psychology at the University of Padua and a senior research scientist at the Centre National de la Recherche Scientifique (CNRS). Her work focuses on language acquisition and early speech perception. She uses behavioural and brain imaging techniques to understand cognitive development and native language learning in infants.
Alison Barker
Alison Barker is a group leader at the Max Planck Institute for Brain Research. Barker's research applies a wide array of experimental and computational tools to investigate the neural mechanisms of socially meaningful communication. This work focuses on vocal communication in the naked mole-rat, where she has previously shown that the vocalizations of this species form distinctive dialects within and across colonies. To investigate this, Alison and her lab combine machine learning and neuroethology to understand how information is transferred between individuals to influence group behaviour, and to study how social experience shapes neural circuits. Read more about Alison's work in The New York Times.
Felix Hill
A research scientist at DeepMind with a background in cognitive science and computational linguistics, Felix Hill has pushed for grounded, contextual representations in language models and embodied agents. Situated language learning, in which agents learn concepts that explain stimuli and actions, allows them to generalize their understanding to new scenarios. Felix is also interested in compositionality (and the lack thereof) in language, and how this affects meaning.
Lera Boroditsky
Lera Boroditsky is a professor at UCSD, well known for her research on how linguistic background shapes the way humans structure their thoughts. Specifically, she focuses on how our representations of abstract concepts such as space and time depend on the linguistic structures we use to describe them.
VIRTUAL SPEAKERS:
Our virtual speaker sessions highlight early-career researchers.
Ahana Fernandez
Ahana Fernandez is a behavioural biologist and postdoctoral researcher at the Museum of Natural History Berlin, Leibniz Institute for Evolution and Biodiversity Research. She studies vocal communication and vocal learning in non-human animals, particularly bats. For example, she showed that babbling in bat pups resembles, in some ways, babbling in human infants, offering a further point of comparison for studying babbling across mammalian species.
David Quiroga
Dr Quiroga studies how the brain processes sequences of sounds, in particular music. He uses EEG, MEG and now iEEG to understand auditory memory and predictive processing. He has worked at the Center for Music in the Brain in Aarhus, Denmark, where he also did his PhD. He joined Dr. Robert Knight's lab at UC Berkeley as a postdoc in January 2022 to investigate the neural basis of working memory and imagery for musical sound sequences using invasive techniques in humans.
Chris Emezue
Masakhane is a grassroots organisation focused on natural language processing (NLP) for African languages, which are rich and diverse yet historically under-resourced in NLP. Chris Emezue is a Masakhane researcher, a graduate student at the Technical University of Munich and a research intern at MILA. Outside of NLP, he is interested in causality and structural learning (out-of-distribution modelling). He is also the founder of Lanfrica, a catalogue of African language resources.
Those of you who cannot join us in person but would still like to attend, please contact us at mindyourlanguage.swc@gmail.com.
Questionnaires:
Please take part in the following questionnaires to help us collect data for our student engagement session following the speaker series.
- Perception and Language
- Dialect and Accents
- Development in Language
- Computational Linguistics and NLP
SWC/GCNU Student Symposium Team
The Student Symposium is organised jointly by PhD students of the Sainsbury Wellcome Centre for Neural Circuits and Behaviour and the Gatsby Computational Neuroscience Unit of University College London. This is the fifth instalment of an annual discussion-based event that aims to bring together neuroscience researchers from the UK and abroad to engage with current and future problems in neuroscience.