The Multimodality Revolution: What It Means to Digitize the Molecular World

June 6, 2024 | Alex Wiltschko

Since my last post about data scarcity, OpenAI has inked deals with Reddit, Vox Media, The Atlantic and News Corp, and can now, for amounts yet undisclosed, train ChatGPT on their data reservoirs. This is good news. It gives our chatbots more verified information to work with, and our news organizations a financial basis to collect more of it. It is also a coming-of-age moment for Large Language Models, a sign that they are reaching what is sometimes called the mature segment of their S-curve of logistic growth.

This is one reason there’s been a recent spike of interest, from futurists and investors alike, in new AI modalities that are earlier in their growth cycle. If we want AI systems to understand and navigate the world as well as we do, and eventually better, they will need all our senses. We are fundamentally multimodal, and our computers will need to be too. They can already see and hear and read and write, and they will get better at those things; now they must learn to move, touch and smell. That’s the only way to give computers a comprehensive sense of the world, and of what it means to be human.

Smell is Osmo’s wheelhouse, of course, but I sometimes think that word doesn’t fully capture the extent of our mission. A more overarching way of describing it is that we’re working to digitize the molecular landscape, opening up our computers to the chemical spectrum. Just as the radio revolution gave us access to the airwaves, we are teaching computers to record and analyze molecular signals, particularly ones relevant to our safety, and to reproduce ones that benefit our wellbeing, such as beautiful, clean scents. And this isn’t a flashy engineering feat for its own sake: it will enhance our lives in measurable and immeasurable ways.

The chemical signals around us are filled with valuable information about our health and wellbeing. Dogs can pick up on many of these signals, and so can humans who train their noses to the highest level of attunement. Think of the Scottish nurse who proved in a study that she can smell Parkinson’s; or the sommelier who can ascertain with a sniff whether a particular batch of grapes got sufficient sun 50 years ago; or the dogs that catch fentanyl at our borders. This rare expertise is a testament to the untapped potential of our olfactory sense.

There’s good reason to believe that our computers could outperform the most sensitive mammalian noses: they can process far more data than any of our brains. But first that data must be amassed, and that’s what we at Osmo are doing. We have already collected the largest olfactory dataset in history. We’re improving on it every day, and it’s taking us closer to a world where every scent, every flavor, every chemical signal is readable as data and recreatable on demand.

Watch this space to follow our progress in the coming months.