Emile Servan-Schreiber

So in spite of the fact that this is our first episode, I think it went really well.

Emile is an OG in the space, the rare brilliant thinker who can clearly explain complex topics.

I especially loved the real-world and very timely example he gave.

And we drew some very interesting links between prediction markets and both social media and AI.

I hope you enjoy the conversation.

Joe Waltman: Alright, today we have Emile Servan-Schreiber.

Emile, thank you so much for joining

Emile: Thank you for having me.

Joe Waltman: you.

You obviously have a very impressive cv.

For the listener who isn't familiar, can you give us a 60-second walkthrough of your background?

Emile: Sure.

So I did all my education in the US in the eighties, culminating

in a PhD in cognitive psychology
at Carnegie Mellon University.

And at the time it was pretty much, you know, studying the brain and studying AI in the same curriculum.

And then I got a first job as an AI engineer, researching how to implement handwriting recognition.

And then there was something called the multimedia CD-ROM revolution, which those of us old enough may remember.

It was pre-internet, right?

I mean pre-web.

And I spent four years publishing popular-science CD-ROMs with many scientists,

Talking about particle physics
and the brain and stuff like that,

and that was pretty successful
until the internet came up.

And sort of blew the CD-ROM industry away.

And then I moved into prediction markets.

So that was in 2000, as a way to leverage collective human intelligence from the web into something actually useful.

And that was like two, three
years before the famous book,

the Wisdom of Crowds came out.

Right.

And some of our prediction market work was actually featured in the first chapter of that book on the wisdom of crowds.

It was one of the few examples of what you could do with collective intelligence at the time on the internet.

So, you know, it's now been 25 years that we've been working in prediction markets in various ways.

Joe Waltman: Thank you.

Thank you.

That's a good segue.

So you've got an interesting entrepreneurial journey, probably quite normal, where NewsFutures became Lumenogic, which became Hypermind, which now seems to have evolved, at least as a spinoff, into the Forecasting Machine.

Can you kinda walk us through the
vision behind each transition?

Emile: So NewsFutures was the original vision of a public prediction market, something like what Polymarket might be today in the minds of people.

Except it was completely illegal to do real-money markets in the US at the time. It's still completely illegal in most of Europe, except perhaps in England, and it's still illegal in most ways in the US, right?

Even though there's pirate action that allows some people to pretend to be operating outside the US while actually operating in the US.

But that's always been part of the state of the industry, right? The pirates have been pushing the industry and the regulators towards what we have now with Polymarket and the like.

So at the time, play money was completely legal, so we did play-money markets, and we did it with some media partners like USAToday.com.

They were our first media partner in 2001, right? Running these prediction markets on their website. And very quickly, companies, pharmaceutical companies at the beginning, started calling us and saying, hey, you know, you have these people making predictions about all kinds of things.

Why not have them make predictions about our industry, our company's results, or our business environment?

So we started this business of providing forecasting, or what you might today call superforecasting, services to companies. You know, providing a panel of prediction traders that would think and make predictions about whatever the companies were interested in.

And eventually, as the business model for the play-money market was really not very exciting after the advertising of the early internet fell away, the B2B activity became more serious, with more and different companies.

We were really at the forefront of bringing these prediction markets inside companies. And that became Lumenogic.

So it became more of a consulting B2B business for us, you know, providing the technology, these platforms you could put inside companies, the services of a panel, and then the consulting that goes around that, right? So that became Lumenogic.

And then from there the US government started being really interested in crowd forecasting, and they started these really deep, long, multi-year research programs through IARPA, right? Which is the Intelligence Advanced Research Projects Activity, within the ODNI.

And so they had this wide-scale research program, combining universities and private enterprises like us, to study how useful these crowd forecasting platforms could be for geopolitical analysis.

And so this was sponsored by the US intelligence agencies to try to figure out whether the wisdom of crowds could be useful. And it turns out one of these projects was super successful.

It's called the Good Judgment Project, led by Philip Tetlock, who wrote the Superforecasting book, at the University of Pennsylvania. And we were part of that research project with Tetlock and his team, providing the prediction market technology to the program.

The main discovery there, beyond the fact that crowds of amateurs reading the newspapers could match the performance of professional analysts within the agencies, was that some of them, the top 2%, are what we now call superforecasters.

People who have an uncanny ability to decompose forecasting problems into the proper parts and to think about them in the proper way, so that they can regularly outperform pretty much anyone else, right? That top 2%: this is the superforecasting discovery.

And from there we created Hypermind as a way to market these superforecasters to companies and governments who would find those predictions useful, because they don't have easy access to them within their own organizations.

So that's now

Joe Waltman: And these 2%, these two-percenters, do they outperform across different domains, or is there one 2% for certain domains and a different 2% for other domains?

Emile: So it's very different.

They are experts in forecasting. They're not experts in a particular domain.

Right.

So when we compare, you know, we did long-term experiments with Johns Hopkins University.

For example, just before COVID, we started gathering the epidemiological predictions of hundreds of professional public health and medical experts, comparing them with the predictions of superforecasters, right, on the COVID-19 virus: 61 questions over 15 months.

Right?

And we found that the superforecasters can match the predictions of the domain experts, and that it's even better when you combine the superforecasters, that is, the forecasting experts, and the domain experts together in the same prediction panel. Then you can outperform even the panel of experts.

Joe Waltman: Very cool.

Very cool.

Emile: So basically, you know, just like some people have an ability that makes them really good at math for some reason,

Some people have a
forecasting ability, right?

The good thing is that you can study it, you can understand the way they approach these problems, and you can then teach other people, who may not become super champions, to be much better than they currently are.

Joe Waltman: Yeah.

Yeah, very interesting.

So Hypermind has implemented prediction markets across business and government, as you alluded to. Can you share two particularly compelling use cases?

Emile: Well, the most compelling
use case I know of is actually

what we're doing right now with the Swedish Defence Research Agency.

So it's part of the Swedish government's Ministry of Defense, and this is part of the Swedish aid package to Ukraine, the military package to Ukraine.

They have asked us to set up a crowd forecasting platform that is populated by thousands, actually tens of thousands, of Swedish, and now also some French, citizens interested in what's going on in Ukraine.

And they're forecasting questions
that are asked by the analysts

from the Ukrainian government.

Right.

So every question on the platform comes from Ukraine, and these are the uncertainties that the analysts in Ukraine have about what's going on, in terms of the military, in terms of economics around the war, and in terms of politics around the war as well, right? So it can cover everything.

Joe Waltman: So, let me make sure I understand that. You're saying that the questions are coming out of Ukraine, for example, you know, where there might be some military buildup, and then they're being answered by your panel or your experts, what have you?

Emile: It could be, like, how many missiles is Russia going to throw at Ukraine this month? It could be, is Russia going to be conquering some more territory in the next region this month?

Or it could be, is the US government going to renew its aid package, is it going to impose some new sanctions on Russia or on some others, like China or India?

Right?

Some secondary sanctions. Or it could be how many leaders are going to attend some ceremony in Moscow

To show solidarity with
Moscow rather than Ukraine.

So everything that has to do with politics, economics, and the military around the conflict is being subjected to the predictions of anybody, in Europe or outside of Europe but mostly in Europe, who is interested in this conflict, is following the news, and has some insight, based on where they are and which country they're in, about what might be going on.

Joe Waltman: The next logical question is, have there been any, you know, policy actions? Have people acted upon these forecasts that are coming out of your...

Emile: Well, that's kind of confidential. But yes, they are happy with the results. The results are pretty good in terms of the accuracy of the forecasting, especially around the military.

The political questions are more complicated; the political environment, whether it's in the US or in Europe, is just super complicated these days. But in terms of economic and military predictions, it's very, very good.

And especially if you take the example of, you know, the questions we've been asking since January around whether there might be a ceasefire that Russia would agree to: the prognosis has been extremely pessimistic that anything of significance in terms of a ceasefire would happen this year.
Right.

And until now they've been pretty much spot on, compared to some of the media coverage around, you know, Trump's initiatives and stuff like that.

Joe Waltman: Yeah.

Fascinating.

Fascinating.

So, changing gears slightly: your research compares the efficacy of play-money versus real-money models. What were the more surprising insights, especially around market design and participant behavior?

Emile: Right.

So, prediction markets, I assume everybody on the podcast is going to know how they work, right? They're basically betting platforms, betting exchanges where people bet against each other. A prediction poll is different. Those were developed by the Good Judgment Project, basically initially for research purposes.

But the idea is that instead of having people bet directly against each other, you have everybody provide their own forecast, their own probabilities, independently, and the game is to give the best, most accurate probability for the event, rather than to buy low and sell high, right? So it's not about speculation, it's really about forecasting.

And then, once everybody has independently provided their probabilities, you have some algorithms that aggregate all these probability forecasts together into a collective forecast, right?

So the simplest algorithm is just to
take the average of everyone, right?

That's pretty good, but you can do much, much better.

And so the standard algorithm these days is, first of all, to retain and aggregate only the most recent forecasts. So maybe like 25%, you know, the most recent quarter of everybody's forecasts. The forecasts from this morning are obviously better informed than the forecasts from last week, and not everybody comes back and updates their forecast every day. So, only the most recent forecasts.

And then you can also look, within that set, at who is making those forecasts and what we know about these people.

So maybe we know that you have been making better forecasts than me over the last, you know, 10 or 20 questions, in which case your forecast would be weighted higher than mine in the average. So we do a weighted average where we take that into account.

Another thing that we take into account for the weights is how often you are updating your forecast on this question and how often I am updating mine. So if I only update every week, obviously I'm not as interested or informed about this question as you are if you are updating every day, right? So that also goes into the weighting.

So then we do a weighted average
based on those considerations.

And then finally, we do some kind of extremization of the probability. So if the probability is, say, 75%, we might say, okay, we know that tends to be a little more conservative than it should be because of the averaging of the collective, so we're going to push it towards one hundred percent. So instead of 75%, we'll push it artificially to maybe 80 or 85%. Similarly, if it's 25%, we might push it to 20% or 15%, right? So we push it towards the extremes, and that tends to give you more reliable, well-calibrated probabilities.

Okay?

So that's the prediction poll, right?
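To make those aggregation steps concrete, here is a minimal sketch in Python of the pipeline just described: recency filtering, a weighted average based on track record and update frequency, and extremization. The data structure, the weights, and the extremization exponent are illustrative assumptions, not Hypermind's actual implementation.

```python
# Minimal sketch (illustrative only, not Hypermind's code) of the
# prediction-poll aggregation described above.
from dataclasses import dataclass

@dataclass
class Forecast:
    prob: float       # forecaster's probability for the event, between 0 and 1
    timestamp: float  # when the forecast was submitted
    skill: float      # weight from past accuracy (assumed precomputed)
    activity: float   # weight from update frequency (assumed precomputed)

def aggregate(forecasts, recent_frac=0.25, exponent=2.0):
    # 1. Recency: keep only the most recent fraction of the forecasts.
    ordered = sorted(forecasts, key=lambda f: f.timestamp)
    keep = max(1, int(len(ordered) * recent_frac))
    recent = ordered[-keep:]

    # 2. Weighted average: better and more active forecasters count more.
    weights = [f.skill * f.activity for f in recent]
    p = sum(w * f.prob for w, f in zip(weights, recent)) / sum(weights)

    # 3. Extremization: push the average away from 50% to correct the
    #    conservatism introduced by averaging (the exponent is an assumption).
    return p ** exponent / (p ** exponent + (1 - p) ** exponent)

# Example: with three forecasts, the recency filter keeps only the latest
# one (0.75), and extremization pushes it out to 0.9.
panel = [Forecast(0.70, 1.0, 1.0, 1.0),
         Forecast(0.80, 2.0, 2.0, 1.5),
         Forecast(0.75, 3.0, 1.0, 1.0)]
print(aggregate(panel))  # 0.9
```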

So compared to prediction markets, you can see there are lots of algorithms going on, whereas in the prediction market there's no algorithm.

It's just you and me negotiating
a price and that price becomes

the market price and that's it.

Right?

There's no more algorithm than that.

Right.

So what's interesting when you compare the two, which give you the same accuracy, is that the prediction poll, by laying out the algorithm, tells you what the prediction market is doing implicitly, right? The prediction market, for example, in terms of retaining only the most recent forecasts, is actually doing that, because the last probability comes from the last trade, the last buyer and seller who agreed on a price, right? So the recency is built in, right?

And then the weighting is built in as well, because if you have more money than I do, if you've had more successful bets in the past, then you're going to have a bigger weight than I do on the market price, a bigger ability to move the market than I do, right?

So all these things, the algorithms from the prediction poll, help you understand what the market is doing implicitly.

There are also some advantages to the prediction poll in terms of B2B applications, because it tends to be more versatile in the types of questions you can ask, right?

For example, if you want to ask a very long-term question on a prediction market like Polymarket, you're going to have a tough time attracting speculators, attracting traders, because the payoff is too far into the future. People like short-term stuff on prediction markets.

In prediction polls, you
can do more long term stuff.

You can do conditional stuff, you know: what will inflation be if Biden wins the election, or if Trump wins the election?

Right?

You can't do that in the market, because only one of those worlds, one of those scenarios, is going to have a real payoff, right? So, you know, what do you do with the rest of it? It's tough to do in a market, but very easy to do in a prediction poll.
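As a concrete illustration of why conditional questions are easy in a poll, here is a minimal sketch, with hypothetical names and numbers, of one way to do it: score only the branch whose condition actually occurred (here with a Brier score) and void the other branches. This is an assumption about one possible design, not a description of any particular platform's scoring rules.

```python
# Minimal sketch (hypothetical, not any platform's actual scoring) of how a
# prediction poll can handle conditional questions: each branch is forecast
# separately, and only the branch whose condition occurred is scored.

def brier(prob: float, outcome: bool) -> float:
    """Squared error between the forecast probability and the 0/1 outcome."""
    return (prob - (1.0 if outcome else 0.0)) ** 2

# A forecaster's probabilities that inflation exceeds 4%, conditional on the winner.
conditional_forecasts = {
    "candidate_A_wins": 0.25,
    "candidate_B_wins": 0.55,
}

def score_conditionals(forecasts, realized_condition, outcome):
    # Only the realized branch gets a Brier score; the others are voided (None).
    return {condition: (brier(p, outcome) if condition == realized_condition else None)
            for condition, p in forecasts.items()}

# Candidate A wins and inflation does not exceed 4%:
print(score_conditionals(conditional_forecasts, "candidate_A_wins", outcome=False))
# {'candidate_A_wins': 0.0625, 'candidate_B_wins': None}
```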

Some other advantages of prediction polls: they're easier to use, so they're easier to implement within an organization that is not populated with trading fanatics.

Joe Waltman: Yeah.

Yeah.

Interesting.

Thanks for that distinction.

So your book, Supercollectif, I hope I'm pronouncing that correctly, brings collective intelligence concepts to a wider audience. What key messages are you hoping readers take away from it?

Emile: Right.

So thanks for mentioning that.

I should mention that the book is available in English as well, in a free PDF, right? Which I

Joe Waltman: Oh, lovely.

Emile: can give you,

Joe Waltman: I'll put that in the show notes.

Emile: and you can

Joe Waltman: Yeah.

Emile: Take a look at it.

So in the book I talk about several things, right? And one of them is that collective intelligence is something that needs to be organized, right?

So there are two principles to it. One is that there is strength in numbers. So, you know, the more the better. And that also links to the importance of diversity of thought, right?

Having people think differently.

The more people you have participating, for example, in the market, the more likely it is that different kinds of opinions and more information are brought to bear on the market. And therefore you have more information available, more of a world model you can build from.
So that's one thing.

Strengths in numbers.

The other thing that's very
important is that of that diverse

diversity is going to be key, right?

So the way that you query people, the way that you design collective intelligence, is going to be what gives you intelligence or stupidity, right?

So for example, if you look at social networks, they are not designed to extract collective intelligence from the huge number of people participating. They are designed, on the contrary, to sort of put people inside their own bubbles, to reduce the diversity of thought, and therefore to create, you know, separation rather than aggregation.

Markets are built in a way that aggregation is going to be an essential component, right? Actually, the aggregation is built on the fact that people disagree. So if you have only people who agree with each other, you won't have a market, right?

So built into the market is this idea of exploring and leveraging the diversity of thought, because the market price is where people agree to disagree.

Right, and so you are obliged to think independently for yourself in order to participate in the market.

If you agree with the market
price, you don't participate.

You only participate if you think that the price is too low or too high; therefore it forces you not to conform with the idea of the crowd, to go against the crowd, wherever the crowd is.

Right?

So it's very important: the way that you organize your collective is going to decide whether you're going to extract intelligence or not, by promoting independence of thought and proper aggregation.

So there's a great sentence by the French intellectual Pierre Teilhard de Chardin, who a hundred years ago said: nothing in the universe shall resist a well-organized multitude of minds.

So multitude plus organization gives you unstoppable intelligence.

Right?

And so what that tells you is
that collective intelligence

is really artificial right?

It's something that's not natural.

It doesn't emerge
naturally from multitudes.

It needs to be properly constructed, properly designed, you know, in order for intelligence to emerge, right?

And you can see that, you know, in a way it's vice versa. In the book, I argue that any kind of intelligence is built on the collective, whether it's your own brain with its, you know, 80 billion neurons that have to collaborate, whether it's a market, or whether it's ChatGPT.

ChatGPT is basically built on the collective intelligence of all of us who have contributed something to the internet at some point, right?

So all of it, any kind of intelligence is
really built on collective intelligence.

There's not like collective intelligence
on one side and intelligence

on the other, and artificial
intelligence somewhere else, right?

It's all the same idea, you know: diversity plus independence plus aggregation builds intelligence, right? That's a key message in the book.

And also I like to highlight
that collective intelligence is

not only about the multitude of people, but it's also about you: how do you become a smarter person?

There is no way today that you can become smarter by, you know, swallowing a pill, right? There's no pill today that is going to make you smarter.

There's not yet any kind of prosthesis that you can plug into

your brain to make you smarter.

Maybe that will come, but today you can't.

Right?

And you are not going to get smarter either by doing crossword puzzles in the New York Times; that's not going to make you smarter.

The only way that you can be a smarter person today is by properly using the multitude of brains around you, your community, your social network, right? To be smart is to borrow all the brains that are around you, right?

And today we are lucky to live in a century where it's never been easier to have access to other people's brains.

So for any kind of decision that you have to make, whether it's through your social network or whether it's even through ChatGPT, which is basically other people's brains in a bottle that you can use, right?

So, there's this French philosopher, Descartes, who said, I think, therefore I am.

So in the book, I finish on this idea that if he were writing today, he would probably say, you know: I think with many, therefore I am smarter.

That's the key message.

Joe Waltman: I love that linking of social media and AI to this topic. That was wonderful.

So, wrapping things up, and you may have partially answered this question already, but looking forward, what new frontiers do you envision for collective intelligence and prediction markets?

Emile: Right.

So we are at a very fraught time.

Well, you know, artificial intelligence is developing super rapidly with very, very little regulation.

And it's kind of crazy and
everybody's scared to death of

what might happen with that.

I'm not talking about, like, superintelligence and, you know, the singularity and everybody going to die; I'm afraid that everybody might become stupid.

I think that's the, that's the
real issue we have to solve, right?

If we have AI think instead of us,
then it's going to be a big problem.

It's going to be a big problem
for us, for our kids especially.

But it's going to be a big
problem for AI as well, right?

Because the reason that AI is so powerful today is not so much that the algorithms, the engineering itself, have made huge progress since, say, the eighties, right? Many of the algorithms in use today in GPTs and such have been explored since the eighties.

The big difference is that
at the time there was no data

you could feed to the machine.

So the intelligence of AI comes from
the collective intelligence of humans.

And if humans stop thinking, or start thinking like the AI, then the AI is going to experience a profound collapse of performance.

And that's been proven in many different experiments already, right? If the AI sort of starts feeding on AI data instead of human data, then there's a collapse of performance that is catastrophic.

So if we don't manage this properly, if we don't use AI to make humans smarter instead of making them dumber, then not only will we become dumber, but the AIs are going to become dumber as well. And then that will be totally catastrophic, the end of civilization. So that's the big...

Joe Waltman: Do you have an idea of how we do that? Do you know how to solve that, or how to prevent that?

Emile: Yeah, I mean, I think governments should invest massively in intelligent tutoring with AI. So rather than having AI, you know, spew out the answers for you on your exam, like you can do now in a browser, where you can click on, you know, give me the answer to the exam that's in my browser.

Right.

That's absurd.

They should have intelligent tutoring systems where the AI refuses to give you the answer, but helps you to think through the problem like a tutor would. And that is the only way that we can use AI properly, in order to have both higher productivity and not lose everybody along the way, leaving them on the side of the road.

But

Joe Waltman: Yeah.

That's great.

Emile: It has to be a government program.

It's not going to be Google or OpenAI that's going to do that for you.

No way.

Joe Waltman: Yeah, I believe there are at least some schools coming out with that kind of a model. Emile, thank you so, so much for your time.

This has been very, very
fun and educational for me.

I really appreciate it.

Emile: And you know, one last thing,
you mentioned it, but obviously the

next frontier in terms of prediction
markets is AI predictions, right?

And so

Joe Waltman: Hmm.

Emile: we launched this week what we call the Forecasting Machine, which is using AI to match the ability of humans to predict, and to beat it if possible.

And probably advance towards some kind
of hybrid prediction capability, because

Joe Waltman: Yeah, it's a very cool tool. I played with it this morning, the Forecasting Machine you're talking about. I will definitely include a link to it in the show notes, and I would encourage listeners to dabble with it as well.

Emile: Cool.

Thank you very much, Joe.

Joe Waltman: Thank you.

Thank you.
