## Stephen Wolfram on Wolfram Alpha

### Stephen Wolfram

Posted 30 September 2009 by Volker E.

David Weinberger interviews Stephen Wolfram on his highly praised “computational knowledge engine” *Wolfram Alpha* for Radio Berkman, shortly before it was launched publicly.

“[…] asking if we look at the world, the universe as it is, and you know, what are the kind of underlying primitives, what are the computational, the simple programs that can potentially drive all of this stuff, and Wolfram Alpha it’s sort of the realization that all this knowledge that is out there in this world […]”

- Date of recording: Wed, 2009-05-27
- Language(s) spoken: English

00:00 Hi there. You are listening to a special release of Radio Berkman today. For the first time we’re releasing full audio from our most recent interview. David Weinberger’s interview with Stephen Wolfram, the founder of the computational knowledge engine Wolfram Alpha, will still appear as its own episode next week. But for now you can listen to the full 55-minute interview. Enjoy!

David Weinberger: 00:23 What is Wolfram Alpha?

Stephen Wolfram: So, our kind of best three-word description is “it’s a computational knowledge engine”, so its purpose is to take the knowledge that has been accumulated over the course of years, and those parts of it that can be computed with, to be able to compute with them and to be able to generate answers to specific questions using a sort of corpus of existing knowledge - the data, the methods, the models and so on, that have been accumulated - put them in computational form, and be able to answer sort of any specific question where the answer can be computed.

DW: 01:06 What sort of typical question do you have in mind for this?

SW: So, there can be sort of pure formal questions that are like mathematical questions, like “What’s the integral of this or that?”. There could be questions about the world, like “What’s the population of such and such a country?”, “What was the weather like on a particular day in a particular place?”. There can be things that sort of combine some pure data with computation, like for example, “Where will a particular… What will the tide be on a particular day at a particular place?”, where you have to know some geographical data, some historical information about tides, and then you have to compute the physics of the tides to work out what will happen. And there are things which are both… where the answers are both numbers and quantitative kinds of things, and where the answers are more kind of textual or symbolic.
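
The tide example above combines stored data with a physics computation. As a rough illustration of the kind of calculation involved - not Wolfram Alpha's actual method, and with made-up harmonic constants rather than real data for any location - tide height is commonly approximated as a sum of harmonic constituents:

```python
import math

# (name, amplitude in metres, speed in degrees/hour, phase in degrees)
# These values are invented for illustration, not real tidal data.
CONSTITUENTS = [
    ("M2", 1.20, 28.984, 40.0),   # principal lunar semidiurnal
    ("S2", 0.40, 30.000, 75.0),   # principal solar semidiurnal
    ("K1", 0.15, 15.041, 120.0),  # lunisolar diurnal
]

def tide_height(hours_since_epoch, mean_level=2.0):
    """Predicted water level: mean level plus the harmonic sum."""
    h = mean_level
    for _name, amp, speed, phase in CONSTITUENTS:
        h += amp * math.cos(math.radians(speed * hours_since_epoch - phase))
    return h

# Predicted levels over one day at 6-hour steps
levels = [round(tide_height(t), 2) for t in range(0, 25, 6)]
```

A real predictor would use station-specific constituent tables and astronomical arguments; the point here is just that the answer comes from computing with data rather than looking it up.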

DW: 01:59 The answers that the site gives are not simple one-word answers or numbers - it gives those too - but you are including a range of types of information that somebody might want…

SW: 02:10 Right, so the idea is: “What would an expert do if you asked them this question - what would they actually tell you, would they just tell you one answer, would they give you kind of some background, would they give you some related answers?”. Kind of the idea is to generate a report that sort of gives you all versions of what you might immediately want to know about a particular thing. So that would be typically some graphical presentations, some tables of data, some… maybe some synthesized text, these kinds of things. It’s sort of a big challenge to do this kind of automated presentation: in cases where there might be a huge amount of stuff that you could compute, the problem is to sort of pick out those things that are most kind of immediately cognitively useful to the person who asked that question.

DW: 03:03 In the demonstration that I’ve seen, not a lot of the answer text - of the answers that are given - are links out, or links to more information, in fact. Is that on purpose or is that to come?

SW: Well, so, it’s hard enough to get kind of within one kind of coherent system, to sort of build up the things that we think are worth presenting to people so to speak, without having to worry about whether some external link goes to something that’s meaningful or not. Generally what we are trying to do is to try and have this be kind of, this is giving you kind of the report that you want, there is some drill-down that you can have, which might be an infinite amount of drill-down that’s possible from… So, for example, the typical thing would be, there’s some table of results, and then there’s a little button that says “more”, you press “more”, you press it once you get more stuff, you press it again you get even more stuff, you can keep pressing it sometimes forever, and that’s kind of the main way to kind of get more information, is something sort of within this controlled environment that we have set up.

Now, we also have a few links out, but we also have a sidebar where we intend to put sort of related links to things that people might find useful given the particular questions that they are asking. And often those links will be to things that are complementary to the kind of direct factual material that we are giving. Things like, for example, linking out to, I don’t know, Wikipedia or something, where there is some narrative description of some particular topic that kind of complements the just-the-facts kind of results that we’re giving on Wolfram Alpha.

DW: 04:45 There aren’t also a lot of links internally to more of your own results. And so (again, just seeing the demonstration - I haven’t played with it, and besides, it’s not live yet), the visual sense seems to be: you’ve got your answer, it’s a rich answer, there’s lots there, but this is the definitive endpoint of your inquiry. As opposed to having internal links that would say, “here is the answer, and please explore more, click and get more results”.

SW: Right, so the “more” thing - there really is a pretty elaborate world underneath all these “more” buttons and so on. I mean, there is really a lot to explore there. If it’s a question of “go off and ask another related question, or a question about some entity that happened to come up here”, then yes: there is a pop-up that comes up, and there are links that pop up that you can go and make queries about the entities that appear. I would say there’s probably more that could be developed about that - about the way that those pop-ups work and the way that one kind of goes to explore more things. Our big emphasis has been: once we’re in an area where we actually can compute things, we can often compute a lot of stuff, and the big issue is: what’s actually useful to show somebody, sort of at the first level? And for the people who have been developing all these wonderful computations it’s sometimes a little disappointing when I say “Look, we can only put ten results here on the first thing people see, and the rest has to be based on drill-down”, because, you know, it’s not useful to give 50 things on one page - nobody is going to be able to pick out the thing that’s actually useful to them. But so there’s a pretty rich world of sort of additional stuff that you can get to just by pressing buttons and pulling down pull-downs for alternate views of things and so on.

DW: 06:40 So how does it work? It’s pretty impressive.

SW: There are two key kind of foundations that make Wolfram Alpha possible, and then there are a variety of different sort of pieces of technology that have to be assembled to create the experience that we are setting up. So in terms of the sort of foundations, one of them is Mathematica and the other is my NKS – New Kind of Science – direction. Mathematica is sort of the underlying infrastructure, the language, the platform on which the site is built. I’ve been working on developing Mathematica for 23 years, so it’s a big thing that has had lots of developments, and has had a nice sort of accelerating pace of development in recent years, particularly as all the pieces that we have assembled over the years, that really fit together according to the principles that we have laid down, seem to be interacting in a nice way that lets the system grow very rapidly. So kind of the one important answer to “How does it work?” is “It’s a big Mathematica program.” It’s quite big - it’s five or six million lines of Mathematica code - and by most measurements one can make, Mathematica is perhaps ten, maybe twenty times more succinct as a programming language than a typical kind of, you know, C, Java type programming language. So that’s a lot of code, but if one was doing it without the Mathematica platform, I think it is not a manageable kind of project to be doing at this point.

DW: 08:23 So Mathematica enables you to do mathematical computation but also symbolic and logical computation as well?

SW: Perhaps the biggest… Mathematica was such a great name for Mathematica when Mathematica was young, and the thing that one needed to know about what it did was: it does mathematics on a computer. In a sense it’s been… I had sort of hoped that the field of mathematics would have grown up to the point where the kinds of things that Mathematica does are actually considered mathematics. In a sense it’s a misnomer, and has been for a very long time, because Mathematica has grown far beyond the doing of mathematics; it’s really a general environment for doing sort of anything that is formal and technical and so on. It has been kind of a place where we can fit together sort of all the algorithms that exist for practically anything. And we have sort of maintained these kind of unifying principles for the system that have been just wonderful in terms of developing new capabilities, because it means, you know, we are adding something that is supposed to be some kind of numerical process for working out such and such a thing, but, because we have this kind of very unified system, we are able, in the innards of that numerical thing, to be calling on all sorts of, you know, computational geometry, or some other, you know, kind of…, I don’t know, algebraic algorithm, or anything like this. So it’s really, it’s… Mathematica itself kind of outgrew the bounds of its name a long time ago.

10:03 What’s happened with Wolfram Alpha, one of the things that… Well, people used to say, “This Mathematica thing is obviously really powerful…” - and lots of people around the world, lots of sort of leaders in R&D in lots of places, lots of people being educated, people doing educating, all use Mathematica a lot. But people said, “This is really a powerful thing - why are you even selling it to people? I mean, why don’t you just keep it for yourself and build something great with it?”. Wolfram Alpha is probably the largest project that’s been done with Mathematica (I don’t necessarily know everything that everybody has done all around the world with Mathematica, but) 10:45 what Mathematica provides for us is kind of a uniform way of representing knowledge in symbolic form, and we already have in Mathematica a zillion algorithms for operating on that kind of symbolic representation - whether it is in a purely mathematical way, whether it’s in sort of proving theorems in some logical way, whether it is representing things graphically in some way, these kinds of things. So, that’s kind of the technical platform that makes Wolfram Alpha, I think, conceivable. From a conceptual point of view, I think, it’s sort of the NKS - New Kind of Science - thing that makes Wolfram Alpha kind of conceivable.
I mean, I’ve kind of wondered, you know, “Wolfram Alpha” is kind of the third big project I’ve tried to do in my life - Mathematica and “NKS” are the other two - and I kind of have this meta-theory that most people have at most one big idea in their lives. 11:46 So if I’ve got three big projects, they are really all the same idea, and they are really all about kind of taking a sort of rich set of things that one is interested in dealing with, and understanding what are the sort of underlying primitives from which one can build up all those things: whether it is, in the computer language of Mathematica, having sort of computational primitives with which to build up programs; whether it is, in the case of NKS, asking if we look at the world, the universe as it is, and you know, what are the kind of underlying primitives, what are the computational, the simple programs that can potentially drive all of this stuff; and Wolfram Alpha - it’s sort of the realization that, for all this knowledge that is out there in this world, there is a feasibly small sort of set of frameworks and algorithms and computations from which one can sort of move out to deal with all these different kinds of things.

But for me, kind of the paradigm of NKS - which is sort of the idea that, for the world at large, these kinds of small programs really are capable of being rich enough in their behavior to sort of capture a lot of what goes on in the world - the idea that there are these small programs is the thing that, from a paradigmatic point of view, makes it conceivable to me that one can do something with Wolfram Alpha, and that it isn’t just too big, and that one doesn’t just have to say (as sort of the traditional kinds of sciences might say) “It’s just too big, the only way to do this is to have a billion people work on it, and there just can’t be sort of fairly small underlying programmatic principles that will build this thing up”. So that for me, at least in my current… I have noticed that when I do these projects it takes me about a decade to actually realize what the real point of what I did was. Sometimes other people have pointed it out to me in much less time than that, but, you know, as I say, my meta-theory is: people have at most one big idea. So, all these things are connected, and exactly how kind of the paradigm of NKS flows into Wolfram Alpha - I know some part of it, but there’s probably more to that flow than I yet know.

DW: In fact I am very interested in the relation of NKS, of the New Kind of Science, and Wolfram Alpha. 14:03 Can you give an example, maybe, of where NKS thinking shows up in the development of…?

SW: Right, so I mean, for example, a very straightforward thing is linguistic analysis of inputs. So, we have… It might seem that it’s just too messy, too complicated, there are too many possibilities. The fact is that what we end up with is all these fairly small, simple rules, that we often deduce from looking at some giant corpus of information from the web and giant collections of ways that people say things; we try and abstract these fairly simple rules, and then we have to ask ourselves, you know, “So what consequences do these rules have?”. And you know, if you look at the folks who are developing the linguistic analysis parts of Wolfram Alpha, for some strange reason almost all these senior people working on linguistic analysis for Wolfram Alpha have worked on NKS. This is a strange coincidence. Or maybe it isn’t such a coincidence, because when you actually look at the work that’s being done - at these images, these graphics on the screen that look awfully like kind of the evolution of 15:10[?] or some such other thing - there are all these blobs of color and so on, that represent in this case various pieces of potential linguistic analysis for phrases or whatever else it is. But it is very much sort of an NKS approach: you take these fairly small underlying programs and you say “What does this build up into, and does it successfully manage to correspond to something that is useful for sort of doing linguistic analysis?”. I mean, in general in developing Wolfram Alpha, and actually also a lot in developing Mathematica these days, the algorithms that we have were not engineered by people.
There are things that were essentially found in the computational universe of possible algorithms, I mean it’s kind of like, I view it as being kind of like an algorithmic version of mining, I mean, when we deal with sort of standard engineering technology, you know, we’re used to going out into the world and going and finding the iron ore and, you know, sort of putting it together and making something, you know, making a steel thing out of it or something like that. Now, what NKS sort of suggests is that out in the computational universe of possible programs, there’s lots of interesting quite small programs that exist, and the question is: “Can we go as humans and find programs that are useful to mine from that collection of possible programs, that are useful to mine for the particular technological purposes that we have?”. And that’s something we’ve ended up doing a lot of, I mean a lot of the pieces of code that exist in Mathematica and Wolfram Alpha are things that were sort of just found in the computational universe and turned out to be really useful, in the same way that people found that, you know, magnetite was really useful, or liquid crystals were really useful in the material world.

DW: It’s not clear to me how you are using NKS in the natural language processing 17:00 - the NKS in the NLP. It looked like, just on the surface of the sorts of queries that you were entering when you were giving your examples, that what was happening was the normal thing for the sort of query-handling: that there are stop-words that you take out because they are not interesting words - you referred to them in your discussion as sort of fluff-words that you take out because they aren’t necessary - and what you end up with is a set of core words from which you can induce or deduce a relationship. And one of your examples was a query, “MSFT” - that’s Microsoft’s stock symbol - “SUN”, in which case we got back interesting data about both companies. And so somewhere Wolfram Alpha decided, with a stock symbol like MSFT, which it presumably noticed in some database of stock symbols, that it is therefore a pretty good assumption that SUN doesn’t refer to the solar entity but to another company. That’s what I sort of thought was going on, based on nothing. Is there an NKS piece of that, that is doing the parsing, or is…

SW: Well I think in that particular case… let’s see what’s going on in that particular case…

DW: Well take a… If that’s not a case where the NKS applies, then take one word…

SW: I’ve got to do more work on this, because… it’s funny: when I talk to the folks who are working on linguistic analysis for Wolfram Alpha, if you ask them, they’ll say “Yes, we know lots about NKS, we worked on NKS for a bunch of years right before working on Wolfram Alpha”, and I’ll say “How are you using NKS on this?”, and they’ll kind of scratch their heads and say “Well, I’m not completely sure.” And then I’ll ask them “Okay, how does this or that work?” and they’ll start explaining “Well, this is a simple algorithm, we run it, it…, you know, produces all this stuff”, and then, you know, you look at the pictures that are sitting on their screens and so on, and say “This is awfully like kind of the NKS paradigm: a small program, trying to figure out what it does, trying to see in this particular case whether what it does is useful for a particular purpose.”

I think we have to come up with much more slam-dunk examples of, you know, “this is the case where the way that this particular piece of parsing is done is sort of a pure NKS play”. But what’s kind of going on, I mean in general, you know, with parsing of inputs: there are certain rules - sometimes they are as simple as templates, sometimes they are more grammatical kinds of things that are more like the structure of mathematical expressions, or some, you know, human-language expressions - and sort of the point is, there are these sort of small fragments of program that are being applied over and over again, producing kind of the richness of interpretation that we end up actually seeing when interpreting language. But we need some better slam-dunk examples.
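
The template idea just described - small rules applied over and over to map utterances onto symbolic forms one knows how to compute with - can be sketched in miniature. This is an invented illustration, not Wolfram Alpha's actual parsing machinery; both the patterns and the symbolic tuples are assumptions for the example:

```python
import re

# Toy templates mapping free-form utterances to symbolic forms.
# The patterns and the ("Head", ...) tuples are invented for illustration.
TEMPLATES = [
    (re.compile(r"population of (?P<entity>.+)", re.I),
     lambda m: ("Property", "Population", m.group("entity").strip("? "))),
    (re.compile(r"weather in (?P<place>.+) on (?P<date>.+)", re.I),
     lambda m: ("Weather", m.group("place").strip(), m.group("date").strip("? "))),
]

def parse(utterance):
    """Return the first symbolic form whose template matches, else None."""
    for pattern, build in TEMPLATES:
        m = pattern.search(utterance)
        if m:
            return build(m)
    return None

parse("What is the population of France?")  # → ("Property", "Population", "France")
```

In a real system there would be thousands of such fragments, plus ranking among competing matches; the sketch only shows the shape of "small program fragments applied repeatedly".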

DW: 20:17 So we’ve got Mathematica, we’ve got NKS, and there are other components to Wolfram Alpha. There is data for example…

SW: You know, the way I see the system from a sort of technology point of view, there are really kind of four pillars of the technology that we’ve built. The first of them is kind of curated data: so, you know, we have got a lot of data that we have organized, correlated, cleaned up, made computable - not just static data, but also data that’s kind of flowing in from all sorts of feeds and all this kind of thing. We’ve kind of built this sort of industrial pipeline for curating data, where we put a lot of effort into identifying the right sources; we get the data in; we have a lot of automation for kind of trying to find correlations, anomalies and so on in the data; then we feed it to actual sort of human data curators; we sort of expose it to domain experts - because one thing I’ve discovered is that if there isn’t a domain expert somewhere in the pipeline, you are going to get the wrong answer. And then, you know, the goal is to take sort of raw data from the outside and get it to the point where it is really consistent and computable. So that’s kind of the first component, this curated data component.

21:34 Another component is kind of the computational algorithms that go inside. There’s the question of taking the fruits of science and engineering and all sorts of forms of analysis, and encoding them in a computational form so that we can actually use them when people need to use them, apply them to data that we have, and so on. That has been a big effort of kind of just seeing what the sciences and so on have achieved and how we make it computational, and there’s just the big Mathematica program - five or six million lines of Mathematica code now - that encodes sort of the things that one can compute about the world based on the sciences and so on. That’s kind of the second component of Wolfram Alpha.

22:18 The third component is sort of a linguistic analysis component: how do we take these funny utterances that humans will feed to Wolfram Alpha and try to understand what we can compute from that. And in a sense, every part of this project seemed to me like it might be impossible; this is one of the ones where I had sort of an argument from history that this part would be impossible, which is that people have been trying to do natural language understanding for a really long time, and its successes are not that widespread. But what I realized at some key moment was: what we are trying to do is kind of the opposite of what people have mostly been trying to do. What people have mostly been trying to do is: they will throw a thousand, a million pages of sort of perfectly formed linguistic text at their computers, and they ask their computers, “Go and understand all these narratives, go read these books and tell me what they are about”, kind of thing. Computers have found it really hard to do that. Our problem is kind of the opposite. There is a certain sort of thing that we know how to compute with, and that we represent in sort of symbolic form; and if we can get something that is in that symbolic form, then we are off and running. 23:34 So really the question is “Can we take these sort of human utterances and map them to these symbolic forms that we happen to know about?” - and then we can go off and actually do our thing and compute things with them. So it’s sort of the opposite of the traditional natural language understanding problem. And it’s something where, I mean, I was not sure whether we would sort of be squashed by the ambiguities and vicissitudes of language. It has turned out, surprisingly, that all these different notations and pieces of compact representation that people use as specialists in particular areas don’t overlap as much as I thought they did.
And kind of a good test now: sometimes we’ll see these things which we can’t deal with, and I’ll show them to a bunch of people, and no person will be able to figure out what on Earth this was about. And one of the things we realized is: there may be some perfect, kind of almost mathematical theory of how language should be specified, but it sort of can’t work, because, as we put in all these different kinds of specialist notations, if every one was arbitrarily extended in a perfect way, they would inevitably collide with each other. And so we only get to… you know, if we can just do the part that humans can understand - that’s really what we need to do. Because it’s kind of the humans who are going to be entering the… you know, at least in this interface to Wolfram Alpha, it’s humans who are doing the entering of things. So anyway, this sort of third component is this linguistic analysis component, being able to deal with kind of freeform linguistic input.

25:00 The fourth component is really how you present the results. Because sometimes there are just a zillion things you can compute about something, and there’s the question of “Which actual results do we present so that it is kind of cognitively useful to the human who is reading it?”. And there’s sort of a collection of expert knowledge, heuristics, algorithms, this sort of thing that we call computational aesthetics, which is kind of the question of “When you’re going to present some piece of data in graphical form, what’s the kind of way to do that that is most likely to be accessible to a person?”. So this kind of fourth component is this automated presentation component, and that is important. And one of the things I realized about the system as we built it is: we couldn’t really shirk on any one of these four things. We need a lot of actual curated data, otherwise there’s no kind of raw material to do anything with. We need to be able to compute with it - it’s no good to be able to just say “Look up the data”, because the chance that somebody wants to just look up the particular thing that you put in is not going to be very great; we need to actually be able to compute from that. And we need to have a way for people to interact with the system - you know, we’re not going to expect them to read the Mathematica documentation; even just for Mathematica, it’s 10,000 pages. So for Wolfram Alpha we’re not going to expect people to read the whole encyclopedia in order to be able to interact with us, so the only choice is to have a zero-documentation approach, where we’re just dealing with sort of pure language as people will produce it. And then finally this presentation of information: we can generate all sorts of wonderful stuff, but if we can’t present it in a form that people can absorb, that’s also not useful. So we kind of need all these four components to actually build a system that can be useful.

DW: 26:57 You can actually take one of the components - the data component - and, do you want to split it in two? There’s a part of it that could conceivably be counted as important, which is the metadata: the decision about which information about the information you’re going to capture. This metadata, at least much of which is apparently in the form of ontologies, of connected concepts, may have its own value…

SW: -Right, right, well I mean I think…

DW: -And its own limitations, by the way!

SW: -Right, we didn’t think of the assembly of data into Wolfram Alpha as being an exercise in ontology; we really thought of it as just “Let’s figure out how we compute things in all these different domains”. Obviously, we’re educated folks, so we know about kind of the idea of ontologies and the various efforts people have made to sort of do semantic web kinds of things with ontologies, these kinds of things, right? 27:54 When you’re going to describe a kind of thing, there’s a question of “What are the pieces of that description?”, “What are the…”, you know, if you’re going to describe, I don’t know, you have some entity…

DW: -Say, a country.

SW: -Right. So there’s a notion of a country; there’s a notion that a country has certain properties and certain attributes, like what date you were talking about the country at; there are questions about, you know, are there groups of countries that are kind of, you know, classes - like countries in NATO or countries in South America - and these are all part of the ontological structure of the knowledge about countries. And for us, we have a pretty definite representation of that kind of knowledge about countries; we need that, because otherwise we can’t have it sort of operate with other kinds of information. This is part of making Wolfram Alpha a sort of manageable thing: that there are global frameworks that operate across a bunch of different domains. And for that we need a sort of organized ontology of things. Now there are some tricks of…

29:10 One of the big challenges with ontologies is: you have an ontology about countries - how does it relate to the ontology about cities, how does it relate to the ontology about this, that and the other, right? We were approaching this from a very kind of practical engineering point of view, just building ontologies across hundreds of domains, but we realized that there were important ways to have these ontologies communicate with each other. One of them was actually a very sort of science-oriented way, which is units of measure, or the physical quantities that things represent.

29:49 So, for instance, if you have two things that both have units of volume, there’s a reasonable chance that these things are somehow related to each other. Not always! There can be a thing where… (I’m not sure I can generate it for length cubed in particular, but) it’s quite commonly the case that for something which has sort of the same physical dimensions, there may be three or four different kinds of concepts that it relates to. And so, for example, a length could be a depth, it could be an altitude, it could be a height; these are related concepts, and often they can be dealt with in a sort of interoperable kind of way: you can divide two of these things and it’s meaningful; you can perhaps add them and it’s meaningful. But then there are some other kinds of purposes for which you can’t combine them, and for which you might want to give some different related information, and so on. So it gets quite subtle, but we have kind of a reasonable, pragmatic way, at least, to deal with sort of things like properties that are related and that can be combined in particular ways, these types of things. 30:51 I mean, I think that… (I would love it to be the case, because I’m kind of a theoretical scientist by life pursuit, so to speak) it would be great if there was just a grand theory of all this stuff, if we could just start from scratch as some of the sort of philosophers have - whether it was, you know, from Roget to Mortimer Adler to whoever else wanted to organize knowledge in a very sort of structured way - and just sort of say “Top down, let’s just describe the ontology of the world, and let’s then build a system based on the ontology of the world that we’ve thereby described”.
I don’t expect… that isn’t what we did, and despite my great predilection for grand theories of things, that seems really hard and in some sense not useful, because 31:47 what we’re dealing with is sort of human-accessible knowledge (and here I might get a little philosophical on things, but): there’s sort of the question of “What knowledge is possible?” and “What knowledge is human-related?”. So for instance, from NKS we’ve learnt that in the computational universe of possible programs there’s lots of stuff out there. We as humans have only explored some tiny corners of this. For example, in our mathematics, it is the case that you could see the particular mathematics that we’ve pursued as growing from some particular axiom systems and so on - but we could just say “Let’s throw it open, let’s look at the space of all possible axiom systems”: gazillions of possible axiom systems, of which the mathematics that we’ve pursued is only ten of them, for example, but there are gazillions of others, and they all have their properties and they all have particular features. The stuff that has humanly been accessed is only this tiny corner - these ten axiom systems that we kind of grandfathered through from the Babylonians on.
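
The units-and-dimensions idea from a moment ago - divide any two quantities freely, but only add them when their physical dimensions agree - can be sketched as a toy. This is an illustrative sketch under assumed conventions, not Wolfram Alpha's actual ontology machinery:

```python
from collections import Counter

class Quantity:
    """A value tagged with physical dimensions, e.g. {"m": 1} for a length."""

    def __init__(self, value, dims):
        self.value = value
        self.dims = Counter(dims)

    def __add__(self, other):
        # Addition is only meaningful when the dimensions match exactly.
        if self.dims != other.dims:
            raise TypeError("cannot add quantities with different dimensions")
        return Quantity(self.value + other.value, self.dims)

    def __truediv__(self, other):
        # Division is always dimensionally meaningful: subtract exponents.
        dims = self.dims.copy()
        dims.subtract(other.dims)
        return Quantity(self.value / other.value,
                        {k: v for k, v in dims.items() if v})

depth = Quantity(10.0, {"m": 1})      # a depth is a length
altitude = Quantity(250.0, {"m": 1})  # so is an altitude: adding them is allowed
volume = Quantity(2.0, {"m": 3})
area = volume / depth                 # m^3 / m^1 = m^2: always meaningful
```

A fuller treatment would also distinguish concepts that share dimensions (depth vs. altitude vs. height) for the cases, mentioned above, where combining them is not sensible.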

32:56 Now, what happens when we look at all the other ones? What happens with all this knowledge that hasn’t been humanly accessed? Well, one of the things that I learnt from NKS is that a lot of it is knowledge you just can’t get at in any kind of quick way. That is, you run right into computational irreducibility and undecidability and so on; there’s lots that you can’t compute about the world. There’s a certain amount that you can compute - that is the part that has formed the science that we’ve developed traditionally as humans, the mathematics we’ve developed, things like that. There’s also a lot out there that is not accessible in that way. So one problem is you can’t compute it. If you fall off the parts that are, in the language of NKS, “computationally reducible” - the parts for which theoretical science works well - and there’s a lot away from those parts - you just can’t compute that much; you just have to run simulations to see what will happen, you can’t quickly work out what will happen. 34:00 There’s a second problem, which is: you can’t linguistically describe what you’re talking about. We as humans have developed a certain language; the concepts that we’ve put into our language are concepts that we have done a lot with, that we have sort of become familiar with - those are the things that we tend to give words to. And so one of the other limitations, in a sense a sort of philosophical level of limitation, is that Wolfram Alpha’s interface is linguistic, and so the things that one can easily describe to it are things for which there is human language. And there’s a lot else out there that is both very hard to compute with and for which there isn’t a human linguistic representation that’s convenient. I mean, when you see a picture of a [? 34:51] evolution and somebody says “Quick, describe it!” - we don’t have words to describe these things; you’d basically eventually just say “Well, just look at the picture, you know, see what it looks like: that!”. 35:08 I think that in some sense one of my sort of flip, almost flip, rather philosophical comments about Wolfram Alpha is this: the idea of computational irreducibility - the idea that there’s lots of sophisticated computation that goes on in the world and in nature, which makes for sort of the complexity of nature - it’s the flip side of that fact that makes Wolfram Alpha possible. By which I mean that there’s, in a sense, comparatively little about which we can talk linguistically or about which we can compute. In the space of all possible things there’s only this small corner, and that means that when we get to linguistically talk about things, because there’s only this small corner we can talk about, it is feasible to actually understand this broad range of language, because it can only talk about this small corner of things. If we were able to talk about everything, it would be much harder to understand, with a short description, what we could possibly be talking about. I’m sorry, this is getting to a very philosophical end of things. 36:11
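The dimensional bookkeeping Wolfram describes earlier in this answer - quantities with the same physical dimensions (depth, altitude, height) can be divided or added, while mismatched dimensions cannot be combined - can be sketched in Python. The `Quantity` class and the `(length, mass, time)` exponent tuples here are illustrative inventions, not Wolfram Alpha’s actual internals:

```python
# Sketch: represent a physical quantity by a value plus a tuple of
# dimension exponents (length, mass, time), and decide which
# combinations of quantities are dimensionally meaningful.
from dataclasses import dataclass


@dataclass(frozen=True)
class Quantity:
    value: float
    dims: tuple  # exponents of (length, mass, time)

    def __truediv__(self, other):
        # Dividing two quantities is always meaningful: the result's
        # dimensions are the difference of the exponents.
        return Quantity(self.value / other.value,
                        tuple(a - b for a, b in zip(self.dims, other.dims)))

    def __add__(self, other):
        # Adding is only meaningful when the dimensions agree
        # (a depth plus an altitude is fine; a depth plus a ratio is not).
        if self.dims != other.dims:
            raise ValueError("cannot add quantities with different dimensions")
        return Quantity(self.value + other.value, self.dims)


depth = Quantity(10.0, (1, 0, 0))    # both are lengths, so they share
altitude = Quantity(2.0, (1, 0, 0))  # the dimension exponents (1, 0, 0)
ratio = depth / altitude             # dimensionless: dims (0, 0, 0)
total = depth + altitude             # still a length
print(ratio.value, ratio.dims, total.value)
```

Division always succeeds, addition is gated on matching dimensions, and "some other kinds of purposes for which you can’t combine them" surface as a raised error.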

DW: So I want to ask you, within that small corner of things that we can talk about, about a possible limitation back on the metadata side. As many have pointed out, metadata is politics: the decision about what counts, what’s interesting to us and ought to be tracked about, say, a country or any other entity, is itself a reflection of cultural and possibly political interests. So suppose I were to ask Wolfram Alpha a question that I think it’s not designed for - such as how many people in the U.S. have been tortured - or ask it for a list of world religions, which looks like a very straightforward sort of question. The first question is, honestly, politically loaded; the second one seems very straightforward.

SW: Will scientology show up on it?

DW: It’s a good question. (…) What’s the decision process, what do you do with that?

SW: Right.

DW: Inevitable decision…

SW: 37:06 So what we’re trying to do in this phase of the project - what we’ve done - is to take generally reliable public sources of data, so to speak, and try to make them computable. We’re not going out and individually surveying people and saying “What is your religion?” or something like that. So on this question - in the case of religions, for example - we’ve actually been agonizing about this particular one, and there was a big push on the part of some people on the project that we just shouldn’t put this in, because it’s too much of a mess, right? But then other people pointed out: “Look, it’s more useful to have something than to have nothing”. I mean, it is useful to know roughly how many Christians there are in the world, or in South America or something like that - that’s useful information, even if it’s slightly wrong because there might be different definitions. “How many animists are there in America?” or something, right? There might be different definitions of that. You know, I sometimes think that after my work on NKS I should count myself as an animist… but that’s a separate discussion.

DW: There’s a headline for you: “Wolfram becomes animist”

SW: (Laughs) Right. 38:33 Our decision procedure is probably similar to the decision procedure of folks who’ve tried to put together encyclopedias or any other kind of authoritative reference work. We try to do the best we can, we talk to experts. In our case we have a couple of constraints: eventually we’re going to have a definite number for something. Even though we may annotate it with a footnote, at some level we’re going to have to make a decision - there is a definite number for this thing. So in the case of religions, you can look at the CIA World Factbook, which will have some data on that, and there are particular organizations that collect data on that kind of thing. I don’t know the particulars of that example; I know it’s something that we’re actually actively trying to unscramble right now. With luck, those different sources will at least roughly agree - to the first or second significant figure they’ll agree.

DW: Let me give you a different kind of example, because… 39:37 for example, if I ask for a list of broadband penetration by nation - what constitutes broadband is highly contested. The FCC has had one way of counting broadband that is extremely disingenuous, but people will use these numbers from your site. To the extent that your site becomes the place where you go to get the answers that the experts would have given you - to the extent that it succeeds - it is going to be used in these debates. Does it concern you that some of the answers, at least, are in fact based upon good decisions, but decisions nonetheless, about what constitutes good data and how that data is categorized?

SW: 40:19 We have three escape valves for this issue. The first is: you type your input, and we’ll typically give what we call the input interpretation - the first part of the output will be the input interpretation. So if you say something like, you know, “How much do school teachers get paid?” - it’s a complicated question, right? The input interpretation will probably say “school teachers… including this and that and the other, excluding this…, median wage”. So we’re saying what question we’re actually answering, which we hope is close to any useful variant of the question you asked. But it might not be - you might have had something different in mind; you might have wanted the minimum wage or the maximum wage or something else. Now, typically in the sort of report that we generate, we will give what we know about the range of wages or some such thing. But when we have to compute from it - if you say “How much do teachers get paid divided by doctors?” - by the time you’re doing that, we’ll give a definite answer for the ratio of median wages, and then we may give a bunch of additional information about the data that went into it. So the first escape valve - the first place where we get to put things through a more precise filter - is this input interpretation that says “This is what we’re actually dealing with”.
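The “input interpretation” idea - commit to one precise reading of an ambiguous query, then report that reading back alongside the answer - can be sketched as follows. The table, query strings, and wage figures are made-up placeholders for illustration, not real statistics or Wolfram Alpha code:

```python
# Toy sketch of the input-interpretation escape valve: the engine maps a
# loose query to one precise interpretation, and the first line of output
# states which question is actually being answered.

# Placeholder interpretations and values, purely illustrative.
INTERPRETATIONS = {
    "teacher salary": ("median annual wage, school teachers", 61000),
    "doctor salary": ("median annual wage, physicians", 208000),
}


def answer(query: str) -> dict:
    interpretation, value = INTERPRETATIONS[query]
    # The interpretation is returned first, so the user can see whether
    # "median wage" (rather than minimum or maximum) is what was computed.
    return {"input interpretation": interpretation, "result": value}


print(answer("teacher salary"))
```

The point of the design is that even when the engine guessed wrong about what the user meant, the guess itself is visible rather than silently baked into the number.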

41:51 Another thing is that when we give answers, we try to give pithy footnotes that give an indication of what it is we’re actually assuming. A typical example is some health kind of thing, where there’ll be a classification - for being overweight, for instance. Now, to many people that’s a value judgement; to some other people it’s a precise medical distinction. What we’re dealing with is more the precise medical distinction than the value judgement, and so we’ll try to make a pithy footnote, because we know that if we put a giant essay there, nobody’s going to read it. It sort of says - I don’t remember that particular one, but it’ll say - “based on CDC or WHO guidelines” or some such thing. So that’s how we try to deal with these things where a distinction has been made: we try to go with whatever the standards body has said, and then say what the standards body is.

42:54 And then the final thing that we get to do is, we sometimes get to put up a thing saying “assuming you mean blah”, then we give a bunch of results, with the option to use something else instead. And so that’s another place where we get an out if people have that sort of controversial thing. One that came up recently, a slightly different issue: there are questions like “Do you assume this country is part of that country?” and things like that - well, we can assume both cases. And there are these funny cases - we just had one recently that was sort of horrifying - the pets-versus-food problem. The issue is: if you type in “rabbit”, what do you get? And for reasons of bad linguistic prioritization, one was getting nutrition content. What one realizes is that in the U.S. this is very upsetting to people, and in other countries it’s a different story. So that’s a case where we get to use GeoIP information, and we’ll make the decision that in the U.S. a rabbit is an animal with these characteristics. I don’t know in the case of rabbits in particular, but let’s assume they’re a popular food in some other country and are not usually kept as pets there - then the GeoIP of that country will cause us to flip around that assumption. 44:33
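The locale-dependent default Wolfram describes - the same query getting a different default sense depending on the country inferred from the requester’s IP, with an “assuming you mean X; use Y instead” escape hatch - could look roughly like this. The country table and sense names are invented for illustration and do not reflect Wolfram Alpha’s actual rules:

```python
# Sketch of GeoIP-flipped default interpretation for the
# pets-versus-food problem described above.

# Made-up per-country defaults: in the US, "rabbit" defaults to the
# animal sense; elsewhere (for this toy), it defaults to the food sense.
DEFAULT_SENSE = {
    "US": "animal",
}


def interpret(query: str, country: str) -> dict:
    if query == "rabbit":
        sense = DEFAULT_SENSE.get(country, "food")
        alternative = "food" if sense == "animal" else "animal"
        # Always surface the flip: "assuming you mean X; use Y instead",
        # so the default never silently hides the other reading.
        return {"assuming": sense, "use_instead": alternative}
    # Unambiguous queries pass through unchanged.
    return {"assuming": query, "use_instead": None}


print(interpret("rabbit", "US"))
print(interpret("rabbit", "FR"))
```

The design choice is that GeoIP only picks the *default*; the alternative interpretation stays one click away rather than being discarded.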

DW: How open is this site? It’s open to everybody to use, it’s a free site, but the data and the metadata and the algorithms are seemingly quite valuable; other people could make use of them. What are the plans for opening the site up?

SW: Well, so the first thing is that there’s going to be an API for people to have different levels, different sorts, of access to the site.

DW: So another computer program can call yours and get results back?

SW: Right. Or it could be another website that is including some pods of output from Wolfram Alpha on that site, providing some specialized input field that can go and use the Wolfram Alpha engine and get results back; or it could be some, you know, desktop application program that is making calls to Wolfram Alpha. The thing one realizes is, it’s a living, breathing thing; it’s not very easy to take pieces of it and just say to people “Here’s a big piece, go do something useful with it”. For example, take the data: the data changes all the time. Some data changes on timescales of seconds, some data changes on timescales of months - so we can’t just sort of “dump the data out”, because in many, many cases the data is really just the seed for an algorithm.
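What “another program calling the engine” might look like can be sketched as a thin client that builds a query URL. The endpoint URL, parameter names, and app-id value below are assumptions for illustration only - they are not the real Wolfram Alpha API - and no network request is actually sent:

```python
# Hypothetical sketch of a client for a computational-knowledge API.
# Only the URL construction is shown; sending the request and parsing
# the returned pods would sit on top of this.
from urllib.parse import urlencode


def build_query_url(question: str, app_id: str,
                    base: str = "https://api.example.com/query") -> str:
    # 'input', 'appid', and 'format' are assumed parameter names.
    params = {"input": question, "appid": app_id, "format": "plaintext"}
    return base + "?" + urlencode(params)


url = build_query_url("population of France", "DEMO-KEY")
print(url)
```

A desktop application or another website would call something like this, send the request, and embed the returned result pods in its own interface.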

DW: 46:02 It’s also frequently data that is proprietary and licensed, another reason why you can’t simply dump it out.

SW: Right. The idea is that we will provide a resource for anybody who wants to use what we have: we will provide a good way for them, as humans, to use it, or for their programs to use it, and the idea is to provide an efficient, well-kept-up, industrial-strength kind of thing that people can go to to get what they need. We were certainly aware of the fact that there are computations that people will want to do that will run longer than the site could possibly run them for, and the good news is that we have a great sort of vehicle for running those sorts of computations, and that’s Mathematica, which has for twenty years happily run on millions of actual computers around the world, on people’s desktops. So one of the things that I think we’ll see happening is that there’ll be the sort of Wolfram Alpha central store of data, sort of computational elements and so on, and then the sort of fat client for it, which is Mathematica, that’s able to take pieces of what’s coming from Wolfram Alpha and run that stuff locally. So this is sort of a way of again opening out and enabling more people to get access to the capabilities that we’re providing.

DW: 47:52 The ontologies or schemas or whatever you want to call them - those themselves, and the fact that you have a way of representing, say, a nation, and multiple other domains - that’s something that people working on the semantic web are working quite hard on. Are you gonna make that available as well?

SW: Probably. So one thing we hope to do is to have a nice, organized, almost formalized process for people to contribute data to our data repository. And in order to make that possible, we kind of have to say how this data is going to be structured. And so I think we’ll probably have quite an effort, for particular domains, in exposing some of the structures that we’ve already built, and possibly people will propose other pieces of structure that need to get added on to that. So I think that that will get exposed, and if there’s good traffic in data being provided for access by people through Wolfram Alpha - if there’s good traffic to that - I suspect that the ontology that we have will sort of become kind of a useful standard for other people who are doing things where they try to fit semantic data together.

DW: 49:02 A useful free standard, a useful open standard? Will either of those do?

SW: I’m not sure I’ve thought through all of these; my guess is that we will want it to be an open standard, because from our point of view, the more data that exists in the world that is structured in a way that is easy to deal with, the better off everybody is - and the better off we are too. I mean, in a sense our objective is to make this the sort of good, definitive place to go to get computational knowledge, and there are many components to that; the data is just one component. And if more of the world would help in assembling the data, that would be great. So far, I have to say, we hire lots of data curators and they do a great job, but we hope that there will be quite a community of volunteer data curators, and that we will provide tools for volunteer data curators. But I’m a little bit less… I’ll be very interested to see what happens. You know, Wikipedia has been a fantastic success story of things assembled by crowds, so to speak, but I have this feeling that writing the great essay that shows up on Wikipedia is somehow a generally more exciting activity than… I mean, I really like lining up things and data, but I have a suspicion that the nerdiness quotient or something has to be higher - I’m proudly in that box, so to speak - and it’s a more professional, judgement-intensive kind of activity, I think. Because, you see, here’s the point: when you see an essay on Wikipedia, the very texture of that essay sort of tells you what’s going on - it tells you whether the person knows what they’re talking about; it just gives you some sort of feeling for what’s going on. Data is, in a sense, very cold. You know, it’s just “7.2”; “The number is 7.2”. It doesn’t come attached to an essay; it’s just “7.2”.
And for the kind of reliability that you need, and the kind of people who are going to make that number 7.2 as opposed to 7.8 - at least my current view is that it’s important to have a somewhat more controlled environment, because you don’t get to know all of the footnotes, so to speak, immediately, as you do in kind of the essay-type setting. 51:59 So right now my best mechanism is: you hire good data curators - we have all these tests for data curators - we hire good people, they have to have quite a bit of domain knowledge, otherwise they get things wrong, and it’s a somewhat more organized and professionalized thing. Now, I’m certainly hoping that we will be able to provide the tools and the framework to have a very good community of volunteer data curators and so on out there who can potentially help with what we’re doing, but we fully expect that what they do will have to fit into this whole pipeline we’ve built, and will have to go through our sort of data-auditing processes and so on. Right now it just doesn’t feel like the kind of thing for which it’s going to work to just say “OK, world, everybody just put in the pieces you feel like”.

DW: 52:05 In the media we’ve already seen Wolfram Alpha referred to as a Google killer. Who are you trying to kill with this - anybody?

SW: That’s not what I’m doing… I build something…

DW: Trying to kill Wikipedia, trying to kill reference librarians?

SW: No, no, I’m… The thing I like to do is build stuff. This is a thing that I wanted to build. I build stuff as a practical person; it has to fit into the ecosystem of the world, it has to support itself, things like that. If we look at Google and Wikipedia, these are both sort of great achievements, and they’re both very complementary to what we’re doing. We’re dealing with the facts, just the facts; Wikipedia is dealing with the narrative, and you might have noticed that the sidebar of Wolfram Alpha often pulls up Wikipedia links, with rollovers that show sort of some little piece of narrative from Wikipedia. That, I think, is a terrific kind of synergy, and maybe we’ll get other kinds of encyclopedia-style narrative things that can complement Wolfram Alpha. As far as search is concerned, traditional web search again is a very complementary kind of thing. I mean, I use Wolfram Alpha lots; I also use search engines and Google and so on lots. I use them for different kinds of things: I’ll use Wolfram Alpha when I have some specific question where I’m trying to compute something, whereas I’ll use a search engine when I’m looking for some fragmentary piece of information that isn’t systematic at all - where there’s just something about… David Weinberger, that just happens to be out there, that I can kind of pick up and read the narrative about.

55:01 Perhaps I’m a business idealist, OK? I’ve been lucky enough to have a successful business for twenty-something years, in which I’ve been able to pursue, with Mathematica and so on, something that I consider to have a sort of high intellectual integrity and that can also be a very pleasantly successful business, and I certainly hope to do the same kind of thing with Wolfram Alpha. I’m hoping for the best possible synergy with folks who have complementary kinds of things. I mean, I hope we won’t be killing anybody, and I hope nobody will be trying to kill us, so to speak. I think that we’re doing something that I believe can lead to a lot of positive progress in the world, and I’m just keen to see that happen.

David Weinberger: Thank you very much.

Stephen Wolfram: My pleasure.
