“Ideally, we want News Feed to show all the posts people want to see in the order they want to read them.” -Facebook

What Are Algorithmic Values?

From our perspective, the purpose of a recommender algorithm is simply to give us the content or products we really want to see. There is a problem right off the bat: we don’t know what content or products we really want to see. If we did, we wouldn’t need a recommender engine! In steps a type of algorithm called “collaborative filtering.” If you’ve viewed all the Judd Apatow (director of Knocked Up) movies, the algorithm could observe that other Apatow fans have also ordered Anchorman. Out pops the recommendation. What? You’ve already seen Anchorman five times on a separate site, or at a friend’s house? The recommendation is just noise.
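Collaborative filtering of this kind can be sketched in a few lines. This is an illustrative toy, not any real site’s engine; the users and viewing data are made up:

```python
# Minimal item-based collaborative filtering sketch. Users and titles
# are hypothetical; scoring is raw co-occurrence, with no normalization.
from collections import defaultdict

# Who watched what: user -> set of titles.
views = {
    "alice": {"Knocked Up", "Superbad", "Anchorman"},
    "bob":   {"Knocked Up", "Superbad", "Anchorman"},
    "carol": {"Knocked Up", "Titanic"},
    "you":   {"Knocked Up", "Superbad"},
}

def recommend(target, views):
    """Score each unseen title by how much its viewers overlap with the target."""
    seen = views[target]
    scores = defaultdict(int)
    for user, titles in views.items():
        if user == target:
            continue
        overlap = len(seen & titles)      # shared taste with this user
        for title in titles - seen:       # titles the target hasn't viewed here
            scores[title] += overlap
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("you", views))  # → ['Anchorman', 'Titanic']
```

Note the failure mode described above: the algorithm happily recommends Anchorman because it has no way of knowing you already watched it somewhere else.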

Here is the thing, though: it costs the site very little, on the margin, to deliver a recommendation with no value. Any increase in incremental clicks from populating your recommendations is gravy for them. This creates a fundamental imbalance: your time is more valuable to you than it is to the algorithm. Any improvement in clicks or time spent on the site benefits its designers’ bottom line, even if it doesn’t benefit you quite as much. Netflix gingerly steps around this issue: “We are not recommending [a movie] because it suits our business needs, but because it matches the information we have from you.” Hmmm… It might be more accurate to say that using the information they have from you serves their business needs.

What’s In It For Us?

Well, watching the TV news, searching through IMDb for a movie, or asking a friend also takes time. Either way, we have to wade through useless information. Perhaps the Facebook News Feed algorithm is a big improvement. Or maybe it isn’t, but we are on Facebook a lot anyway, so why not? We learn to adjust to recommendation noise over time, mentally filtering out irrelevant stories or obtrusive ads as we, say, read through our News Feed. This last dynamic is important. “Digital inertia” keeps us walking down the path that the sites we depend on have laid out for us. Once we swipe through online dating profiles at a mind-numbing pace, well, we just get used to it. This is “just the way things are.” In a sense, we are trapped by this new worldview. Whatever values we had before, now we have a set of new ones that benefit the algorithm provider. As our accompanying interview with Spritzr dating app CEO Manshu Argawal describes, this shift in values may not be to our individual benefit, or, especially, our collective one. After all, when we enter the portal that we expect will connect us to a whole world of possibilities, what we’re really hoping is that it’ll be the scaled-up equivalent of taking a walk down a friendly street we’ve never been down.

You might not think that algorithms are all that invasive. After all, the Internet is huge and full of noise (and sometimes rife with dumb). It’s a self-organizing map, a web of connections whose pathways are forged by whatever pilgrims made them first. Just as ants find scent trails by detecting the pheromones of the hungry ants who traveled before them, or the way neurons that fire together wire together, we leave a trail when we go from one site to the next, and that trail is recorded by an algorithm that assumes we liked our route. And so it recommends future itineraries based on what we’ve already seen. That’s great – better than if it hadn’t paid attention at all, right? But what happens when the recommending algorithm knows you too well? Perhaps you roll over in bed one morning, open your news feed, and find it anticipates your interests so accurately that, to your dismay, the app that once made you laugh into your morning coffee or forget all about your boring train ride no longer has anything interesting to say.

There’s almost nowhere we go that we don’t take our mobile device with us. There are no more closed doors. It’s seen our embarrassing searches and medical questions, it knows all the dumb vines we liked. We can’t go back to first dates and first impressions. It thinks it knows who we are. Will we fall out of love?

Mystery, discovery, surprise. These are on the mind of Jarno Koponen, a network science enthusiast and developer of Random, the App. It’s guys like this you might expect to design something like the frighteningly capable and caring AI companion in the movie Her. Koponen seems to understand that the Internet, as a complex network, is in a sense a wild frontier that fluctuates between signal and noise, order and chaos. Too much chaos and links are weak, and you’re on your own in the search for relevant information. Too much order and you could get stuck on Main Street, your preferences over-defined by algorithms that attempt to guide you by making assumptions about your activity and comparing you to others. Learning algorithms are humanity’s early attempt to curate culture and relevance just like we have done on every other frontier. But these algorithms now need to learn boundaries, need to learn when we need some space to take a walk alone and be surprised.

And so dawns the age of the discovery engine. There are lots of ways to invent one, but no one’s yet done it comprehensively.

Koponen proposes the creation of personal algorithms, an “algorithmic angel” if you will, that would give us better visibility into the kinds of things that affect what information is curated for us. Today that information is mostly kept safe and proprietary by the designers of the interfaces we use. For instance when you like or comment on a post in Facebook, you don’t know exactly how that will affect your feed. These personal algorithmic systems would be ours–an ambient layer on our explorations that would be truly personal, evolving with us as individuals, taking our values into account and adapting to us as well as providing a means for discovery. They would interface with recommending algorithms, keeping them in check and making sure we have priority agency in the content environments we explore.

“For many people personal data is abstract,” Koponen says. “Generally we don’t have a lot of awareness about how our data is being used and how it affects what we see. How could this data be powering experiences that are more in tune with who we are as individuals?”

An Experiment in Discovery

Koponen’s app, Random, aims to make your subjective reality a starting point when recommending new topics and content. The New York Times described it as minimalist and almost toylike, probably because it’s simple and yet it inspires curiosity.


Random presents you with a screen tiled with topics, and when you tap one, it gives you a handful of articles related to that topic to choose from. That helps the algorithm learn quickly, and each time you open the app, your spread of topics is a little different. There are familiar subjects and some that make you go “hmm…”

“It doesn’t have a lot of utility yet,” Koponen says modestly, “but as a paradigm it could be made more comprehensive and approachable, to evolve into the kind of experience that gives you even more agency.” Like the algorithmic angel that hasn’t quite been invented yet. As the AI researchers in our podcast feature pointed out, discovery is an important part of the human experience, and so it should be an important part of what our technology enables. Currently, Random learns and adapts to your preferences but also uses the network map of this data to enable surprise and discovery–to create a balance between relevance and serendipity.

Let’s say you’re into design, sushi, Apple, and travel. In Random, these are not categories per se, but points in a huge network graph that create your personal profile in the universe of the app. Nothing is truly random, of course. Surprise comes from:

1. Your personal choices

2. Expected correlations with other similar people

3. Trending topics

Where trends are concerned, a particular connection may not be found in your profile, but the topic is so popular at a given time that you’re likely to be interested anyway. Say there was a bombing in Paris. Paris is something you’ve shown a lot of interest in, but you don’t always want to hear about bombings. Random takes that longer arc into account.
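The three sources of surprise above could be blended into a single ranking score. This is a hypothetical sketch of the idea, not Random’s actual code; the weights, profile format, and numbers are all assumptions:

```python
def surprise_score(topic, profile, similar_profiles, trending, w=(0.5, 0.3, 0.2)):
    """Blend personal affinity, peer correlation, and trendiness (each in [0, 1])."""
    personal = profile.get(topic, 0.0)
    peer = sum(p.get(topic, 0.0) for p in similar_profiles) / max(len(similar_profiles), 1)
    trend = trending.get(topic, 0.0)
    w_personal, w_peer, w_trend = w
    return w_personal * personal + w_peer * peer + w_trend * trend

# Hypothetical data: your interests, two similar users, and current trends.
profile = {"design": 0.9, "sushi": 0.7, "apple": 0.8, "travel": 0.6}
peers = [{"design": 0.8, "bioengineering": 0.6}, {"travel": 0.9, "paris": 0.7}]
trending = {"paris": 1.0}

# "paris" surfaces via peers and trends even though it's absent from your profile.
print(surprise_score("paris", profile, peers, trending))   # → 0.305
print(surprise_score("design", profile, peers, trending))  # → 0.57
```

The point of the blend is exactly what the text describes: a topic with zero personal history can still earn a slot on the screen when peers and trends vouch for it.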

To take you beyond your current personal interests into new territory that won’t feel obtrusive, Random does an interesting pirouette, leapfrogging behind the scenes using subtle links within the content you consume. It looks for stepping stones. You might ask why you suddenly see an article on algae.

“Because of the interface and its underlying dynamics, it’s possible every now and then to bring in a wild card,” Koponen says. How is that different from anything Facebook or Twitter or Pinterest does? Because it’s just one of many choices presented to you, not an ad you have to look at.

You might like design, so somewhere back in the articles you read or someone like you read there was a design article that was related to bioengineering and had to do with algae, and it somehow involved the design process. So now there’s just one suggestion for algae, and you don’t have to click on it unless you’re curious. There are many other choices.  (Personally, I’d be curious enough just to know what the connection was).
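That kind of stepping-stone chain is naturally modeled as a path search over a topic graph. A minimal sketch, with a hypothetical graph of links mined from articles (nothing here reflects Random’s real data):

```python
from collections import deque

# Hypothetical topic graph: edges are subtle links found inside articles.
links = {
    "design": ["architecture", "bioengineering"],
    "bioengineering": ["algae", "prosthetics"],
    "architecture": ["cities"],
}

def stepping_stones(start, goal, links):
    """Breadth-first search for the shortest chain of topics from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain connects the two topics

print(stepping_stones("design", "algae", links))
# → ['design', 'bioengineering', 'algae']
```

The returned path is the answer to “why am I suddenly seeing an article on algae”: design links to bioengineering, and bioengineering links to algae.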

What Does the Future Look Like?

Koponen is a humanist, so he’s always asking how technology takes our personal values into account when it uses our personal data. What we consume feeds what we create, and this whole adaptive content universe will affect how human culture is curated–in other words, what our future looks like.

We want what we want, and even that’s hard enough to figure out, much less explain to a computer algorithm. That’s because most of those preferences bubble up from the subconscious, a far more complex network than anything we’ve ever built. We don’t want to look at the same things we’ve always seen before, but we don’t want to be insulted by stuff that’s too far out–jarring experiences that break our technology’s rapport with us.

We are creating our world even as we experience it through our unique perceptual filters. It shouldn’t come as a surprise. Machine learning–recommendation, discovery–is only reflecting that process and making it more obvious.

We created different media to ensure that we have access to information we consider valuable, meaningful. Something worth keeping. The key here, Koponen says, is that technology will increasingly mediate that information, curating things for us. Culture is a repository of our connections, and it connects us to one another. But it also thrives on diversity. When machines are curating culture, we want them to understand that reality is subjective–but when it becomes too subjective, it isolates us.

Personally, I want to understand how my culture, my network, is evolving–especially when machines are creating and making choices about the world that I see. Send me an angel already.


This article was originally published in design4emergence, a network science community sponsored and nurtured by Dialog and Panarchy, the world’s first network design firm.

“If your objective was to invent a microwave oven, you would not be working on radars.”

These days, amidst a great collective effort to reverse engineer innovation, everybody’s looking to model the success stories. Tales of disruption pepper our social media feeds, and we want the magic formula—the algorithm—for innovation.

While magic is tricky, success is even more deceptive. That’s because our measure of success, the objective, is “blind to the true stepping stones that must be crossed.” These are the words of Joel Lehman and Kenneth Stanley, the inventors of a breakthrough evolutionary algorithm for robotic neural nets, called novelty search.

What do robot brains and algorithms have to do with our current paradigm of innovation?

At the Evolutionary Complexity Research Group (EPlex) at the University of Central Florida, Lehman and Stanley programmed their AI to abandon their objectives and search for novelty, much like nature’s evolutionary “algorithm.” “Do something you’ve never done before,” they told the robots. They put them in a maze. Guess what? The robots with the novelty search algorithm got out of the maze faster than the ones armed with a plan and a list of best practices. In other words, objectives actually hindered their search. Freed from them, they stopped banging into walls and learned to walk. Are we so different?
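The core of novelty search is to score each candidate by how far its behavior lies from behaviors already seen–typically the average distance to its nearest neighbors in an archive–and to keep whatever scores high, with no objective in sight. A minimal sketch, not EPlex’s code: the maze domain is reduced to (x, y) endpoints, and the mutation operator and threshold are illustrative.

```python
import math
import random

def novelty(behavior, archive, k=3):
    """Average distance to the k nearest previously seen behaviors."""
    if not archive:
        return float("inf")
    dists = sorted(math.dist(behavior, b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(mutate, initial, generations=200, threshold=0.5):
    """Archive whatever is sufficiently novel, regardless of any objective."""
    archive = [initial]
    current = initial
    for _ in range(generations):
        child = mutate(current)
        if novelty(child, archive) > threshold:
            archive.append(child)  # reward being new, not being "better"
            current = child
    return archive

# Toy domain: a "behavior" is the (x, y) endpoint a robot reaches.
random.seed(0)
step = lambda b: (b[0] + random.uniform(-1, 1), b[1] + random.uniform(-1, 1))
archive = novelty_search(step, (0.0, 0.0))
print(len(archive), "distinct behaviors archived")
```

Nothing in the loop mentions a goal, yet the archive steadily spreads across the behavior space–which is exactly why, in the maze, novelty-driven robots stopped banging into the same walls.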

Disruption and adaptation ensure the survival of a species, a business, or any agent in a complex system. A network takes in diversity and puts out emergence (the real hero of anyone’s innovation story).

Case in point: two artificial intelligence researchers who use evolution to program artificial neural networks that “learn” end up writing a book about Why Greatness Cannot Be Planned. Are we approaching innovation all wrong by holding it to standards that are too rigid?

So if you want to design for emergence, the scientists in our interview say, the name of the game is to be a treasure hunter. The path isn’t always clear until it’s behind you. Go where curiosity leads you in search of novelty, whatever seems interesting, and you’ll begin to collect the right “stepping stones” for that next big thing…

d4e: Ken Stanley and Joel Lehman, two AI scientists, you wrote a book about Why Greatness Cannot Be Planned. How did that happen? (I’m guessing that wasn’t the plan.)

Ken: There are a ton of self-help books about how to pursue greatness and achieve your potential. A lot of it is speculative and philosophical. What’s unique about our perspective is that we’re offering hardcore scientific empirical research and experimentation that supports the approach that we’re advancing in the book. So people reading this book looking at these ideas can feel a certain level of confidence that they don’t normally feel about where these ideas come from: We weren’t trying to become self-help gurus; we were doing experiments in artificial intelligence. We unexpectedly stumbled on the principles we describe in this book about why greatness cannot be planned.

d4e: The Chinese finger trap is a metaphor for innovation. Why?

Joel: In the Chinese finger trap, the steps that you need to take to solve the problem are exactly the ones you wouldn’t expect would lead to the solution. It’s a model of deception in innovation, in that making a breakthrough discovery often involves taking steps that are seemingly unrelated to the objective.

Ken: It’s the simplest example of this type of innovation process which we’re claiming is very common, where what you need to do looks like it’s exactly the opposite of what you want. It turns out you need to do exactly the opposite of what you think you should. The Chinese finger trap is designed to be deceptive in that way.

You have to push yourself more into the trap to get out of it. The problems of life are far more complex than that, though, so they’re going to be even worse than a Chinese finger trap in terms of being deceptive. If they weren’t, we would just solve all of them. In order to escape the Chinese finger traps of the world, we have to sometimes be willing to step into the unknown rather than go in the direction that’s obvious or “correct.”

d4e: Great invention is defined by the realization that its prerequisites are in place. Apple spends much less than its competitors on R&D. Do you think that those two ideas are related?

We could speculate that people put a lot of effort into pursuing an objective, and that can be very expensive, because maybe the right stepping stones just haven’t been laid. So you’re going to be grinding for a long time to create all the prerequisites you need to get this thing to work. Whereas if you take an unusual approach (and I would be willing to bet that Steve Jobs wasn’t very objective-driven) where you don’t follow an objective path, you can sometimes arrive somewhere interesting and valuable with a lot less effort than someone who is following an objective. People like Steve Jobs seem to have a knack for following those types of trails and taking the kinds of risks that are necessary, and saying, “Let’s just see where this leads.”

d4e: How did an algorithm change your life? Was it a eureka moment, or a slower evolution?

Ken: This question gets to the origins of the idea behind novelty search. There was a particular eureka moment, before the algorithm existed, that led to novelty search, but there was also, later, a gradual dawning for both Joel and me that the algorithm is really a way of thinking about life.

Before novelty search, there was an algorithm called Picbreeder, which is a website that we put up in our research group for people to come from the internet to breed pictures, and then publish them on the site. That sounds a little strange, but basically it means that you could come in and pick your favorite picture from a set, and it would have offspring. And the picture’s “children” would be slightly different from their parents — just like if you had children, they wouldn’t be exactly the same as you, but not completely different either.
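The breeding loop Ken describes can be sketched simply. Picbreeder actually encodes images as small networks (CPPNs), but the mechanics of interactive evolution show through even with a plain list of numbers as the genome; every name and parameter here is illustrative, not Picbreeder’s real API.

```python
import random

def offspring(parent, n=4, sigma=0.1):
    """Children are slightly mutated copies of the parent, never identical clones."""
    return [[gene + random.gauss(0, sigma) for gene in parent] for _ in range(n)]

def breed(parent, pick_favorite, generations=5):
    """Interactive evolution: show children, let the user pick one, repeat."""
    for _ in range(generations):
        parent = pick_favorite(offspring(parent))
    return parent

# A stand-in "user" who always prefers the child with the largest first gene.
random.seed(1)
result = breed([0.0, 0.0, 0.0], pick_favorite=lambda kids: max(kids, key=lambda k: k[0]))
print(result)
```

The human’s taste is the only selection pressure in the loop, which is why an alien face can drift, choice by choice, into a car.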

These experiments exposed a flaw in the paradigm of “innovation through continual improvement.”

I had an experience playing with Picbreeder, where I started with an image that looked like an alien face. I was playing with the image, and it eventually bred into a car. The moment the alien face turned into a car was the epiphany, when I was struck with the realization that I had achieved something interesting without trying to achieve it. While it may sound trivial — after all, Picbreeder is just a toy — everything I’d been taught for years in computer science said that the way you make computers do things — in fact, the way we as humans generally do things — is to set your goals and somehow push the computer in the direction of achieving them. But this experience was so different from that.

I was breeding these pictures myself, but we have evolutionary algorithms that breed automatically as well, without human assistance. So I realized that this experience of achieving something without trying to achieve it probably has implications far beyond a picture breeding service. This led to the proposition that there could be an algorithm that doesn’t have a clear objective.

This is what I began to speak to Joel about before the novelty search algorithm was created.

d4e: So the idea of discovery without objectives led you and Joel to create the novelty search algorithm. You say that novelty search is paradoxical. How so?

Ken: The novelty search algorithm reflects the philosophy that sometimes you can discover things if you’re not looking for them. It gives the computer the ability to have serendipitous discovery but not necessarily be pigeonholed in the direction of trying to search for one thing and one thing only, or create one type of solution to a problem. Instead of a robot that has one type of walking gait, for example, maybe you have many.

We were playing with this for years, and it would constantly surprise us by doing things that people wouldn’t expect. You don’t tell the computer what to do, but it ends up solving your problem better than if you did. We saw this paradox over and over again. After a few years we realized that what we were seeing was about more than a computer search algorithm.

The more I spoke about the algorithm at computer conferences, the more people would ask about things unrelated to computers, such as: What does it mean for my life if sometimes the best way to find something is to be not looking for it? Does this have any broader implications for how we run innovative cultural institutions? Or how we run science?

Or how about the way we support innovation in society?

It became apparent then that it is extremely important that we have this discussion as a society. If objectives are not always the way to guide innovation and scientific progress, then why is it that almost everything we do is objective-driven? That’s when we decided to write a book, because this kind of message is hard to get out in a computer science journal article aimed only at artificial intelligence. This is a much broader issue, in terms of how we foster innovation and treat objectives in our culture.

d4e: In your book, you ask us to imagine a cavernous warehouse of all possible discoveries. You say that “the structure of the search space is just plain weird.” Can you tell us what you mean by that?

Joel: The structure of the innovation space is weird in that it’s hard to predict where certain things will be. The linkages between different kinds of innovations are surprising. That relates to the broader area of serendipity in science or artistic realms, where you might inadvertently create the next big thing. A typical example is the vacuum tube, which was created as part of fundamental research into electricity. The person who was exploring that didn’t have the idea of a computer in mind. It just turned out that from this one point in space, from discovering a vacuum tube, you actually could reach computation.

Ken: Vacuum tubes facilitate computers, and that’s a connection that exists in this big “room” of possible things. But who would ever know that? Somebody later picked up on it and said, “Now that this exists, we can create this other thing.” There’s a lot of opportunity there for serendipity, in the sense that you wouldn’t even be working on vacuum tubes if your main interest was computation. Vacuum tubes don’t look like they have anything to do with computation. So in some way, getting all this stuff to exist requires that people sometimes are not working intentionally on the ultimate achievement that stems from this chain of events.

d4e: Order is important in search. How so?

Ken: When you first hear about novelty search, that we should search for things that are recognized for their novelty and ignore everything else, our intuition might say, “This is just random. How can that kind of search be beneficial?” I think people assume there’s some kind of coherent order that search induces. In other words, we assume that things get better as you continue to improve. That’s an order that we’ve come to expect from an objective — like if you’re trying to get better at school, your test scores will go up. We expect to start out low and get higher, and that’s the kind of order we’re comfortable with.

Whereas with novelty, it’s harder for us to think about what the order of occurrence is going to be, because we’re no longer talking about an objective metric. What we try to argue is that there is an order that’s inherent in a search for novelty — it’s just a different kind of order, one of increasing complexity.

Instead of increasing quality along some objective metric, novelty search basically creates a situation where if you continually try to do something new, you will quickly exhaust all the simple things there are to do. There are only so many simple ways to do things. By necessity, if you succeed in continually seeking novelty, things will have to become more complex over time.
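That pigeonhole argument can be made concrete with bitstrings: if every new “behavior” must be unseen, the short ones run out and length is forced upward. A toy sketch of the reasoning, not the authors’ experiment:

```python
from itertools import product

def next_novel(seen):
    """Return the simplest unseen bitstring (shortest first, then in order)."""
    length = 1
    while True:
        for bits in product("01", repeat=length):
            s = "".join(bits)
            if s not in seen:
                return s
        length += 1  # all strings of this length exhausted; complexity must grow

seen = set()
lengths = []
for _ in range(10):
    s = next_novel(seen)
    seen.add(s)
    lengths.append(len(s))

print(lengths)  # → [1, 1, 2, 2, 2, 2, 3, 3, 3, 3]
```

Even a seeker that always prefers the simplest option is driven to longer, more complex strings, purely by the demand for novelty.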

When it comes to innovation, maybe we should loosen the reins just a bit and integrate some of the knowledge that we’re gaining in our scientific understanding of natural evolution.

At some point, somebody invented a wheel. Thousands of years later, someone was on the moon. Things don’t go in the other order. You don’t figure out how to go to the moon and then later come up with the wheel. So there is an order in innovative processes that are driven by invention rather than by trying to achieve a specific objective metric. And that order tends to be one of increasing complexity. The reason I bring this up is that there’s good reason to be confident that the search for novelty does have some kind of coherent principle, and it is anything but random. It’s just that it’s not following the order we’re used to (of “worse to better”).

We wanted to suggest to our readers that going worse to better is actually not that principled, even if it makes you feel comfortable, because of the fact that it’s a mystery how to do it. We don’t necessarily know what the stepping stones are. So it’s really just a security blanket to say, “I’m going to keep on improving” if you don’t necessarily know how that’s going to happen.

d4e: The age of best practices is over. Would you agree with that?

Ken: There is room, despite everything we’ve said, for trying to improve. But we have to be clear about where that process is appropriate. If your aims are relatively modest, it can be entirely appropriate to just try to improve. If you just want to try to improve your lap time, that’s reasonable. But when it comes to fostering innovation on a larger scale, I’d be ok with endorsing the idea that the age is over, because we should have a revelation that simply trying to continually improve in an objective sense just doesn’t work.

There’s a great opportunity for a paradigm shift here. The amount of information we have now from artificial intelligence is starting to expose problems with the traditional view of achievement and innovation. Our book exists because we had the ability to do experiments that would have been impossible in the past. These experiments exposed a flaw in the paradigm of “innovation through continual improvement.”

Joel: And yet it seems that at the same time, the cultural crest is pushing more toward the paradigm of objectives and continual improvement. We have evidence that this isn’t how the world really works, especially in areas of innovation, discovery and creativity. It’s troubling that so many innovation endeavors are still ruled by objective-based approaches. When it comes to innovation, maybe we should loosen the reins just a bit and integrate some of the knowledge that we’re gaining in our scientific understanding of natural evolution, and how creativity works — and some of these insights come from artificial intelligence.

Ken: There should be a paradigm shift, but we wrote the book because there hasn’t been. This is a current argument about how we should approach innovation. When Joel says we run a lot of things in this very objective-driven way, that’s literally true. Look at what we’re doing in schools. The standardized testing craze is all about objective measurement, and it’s used for all kinds of things, not just for students. We basically say the school has to objectively improve on some metric, or the school gets penalized. It’s all based on objectives, and there’s a lot of discussion about whether that’s a good idea or not, but we’re not part of that debate explicitly.

Our work offers a different angle, which says that if you kept demanding higher scores, eventually everyone would get a 100. That looks like a pretty naive approach. There should be room for people to try new things — and that could lead to scores going down from time to time. If you always penalize for scores going down, then none of those things become possible.

In the world of science funding, one of the things you almost have to do to get money for research is to state your objectives. We’re running our entire federally funded scientific enterprise — really, billions of dollars — based almost entirely on objectives. You can hardly get your word in if you don’t state in the beginning what you’re trying to achieve. It’s not common sense; it’s a problem.

d4e: There’s a book called Why A Students Work for C Students. How does that relate to this philosophy?

Ken: I haven’t read that book, and I think it’s obvious that that’s not always the case — there are plenty of A students who are the bosses of C students. But that’s an interesting question. You could imagine there’s a connection there in that somebody might assume that if you get A’s that’s the correct goal for getting to the top of the heap in some organization. In reality, often it’s the case that the route to success is more circuitous. It may be that the C student was more willing to take risks that the A student just didn’t take because the A student was so single-mindedly focused on doing what everyone says you’re supposed to do in order to be successful.

d4e: Objectively speaking, unstructured play can be bad for us as individual adults, but good for us as a society. True or false?

Ken: I would say false, because I think it can be a good thing for both individuals and society. Unstructured play can be risky, though. It may lead to no particular advance for the individual; on the other hand, it may lead to something great. You just can’t be sure. You may have a hobby, and pursuing that interest may just be “play” for you, but it could end up being the stepping stone to your next great achievement.

And of course I’m totally in agreement with the idea that it’s also beneficial to society, because we need people to pursue their passions and try the things that other people wouldn’t necessarily try, so that they can build the stepping stones for others to follow.

Everybody can benefit, but we have to just accept that anything unstructured has risk. That’s why we tend to be against this kind of approach to life as a policy matter: we like to control things with standards and objectives and metrics, because we’re afraid of risk, ultimately. At the same time, you have to take risks in order to have great achievements in the end.

d4e: Let’s say I run a venture capitalist firm. How should I go about building a portfolio of startup investments?

Ken: I think venture capitalists actually put the ideas in our book into practice better than a lot of other areas in society, because they understand the value of a portfolio: not all of your bets need to pay off, just some of them. VCs are willing to go in some very exploratory, risky directions. If you have one big hit, it can make up for all the ones that didn’t pan out. This is, I think, a pretty good lesson for society in general. In a lot of our institutions we guard against failure as if it’s some kind of pathology to make a mistake. Venture capitalists have good instincts and are willing to have failures, and that allows them to search in a less objective way. I think we would find that the most successful venture capitalists are less objective about their portfolios.

d4e: You don’t seem to dwell much on the concept of probability. Don’t you like it?

Ken: The book isn’t really about probability, but I think we would endorse probability as an important concept. We see its importance in our field of machine learning and artificial intelligence. The point being made in the book is largely independent of an in-depth discussion of probability, although it factors into risk.

Any individual discovery could be regarded as highly improbable. In innovative processes, the likelihood of making a particular discovery is unpredictable. And yet, overall, you can increase your ability to make discoveries and the probability that you’ll make some interesting discovery.

d4e: You say that novelty is information-rich. What did you mean by that?

Joel: One way to look at novelty is that it’s information based on not where you’re trying to go, but where you’ve been in the past. In some sense, it can be seen as more information-rich than taking an objective-driven approach, in that you completely know where you’ve been in the past, and so that’s more certain. When you say “this is novel,” you can have confidence that it actually is new. Whereas if you’re trying to take a step along the way to your potential objective, you have to be willing to be uncertain, because you really don’t know if that’s going to be a stepping stone toward your goal.

More than that, the idea of being genuinely different often requires some sort of conceptual advance. You can imagine, for example, being on a skateboard. Who’s going to be more likely to create a novel skateboard move? Will it be me, who’s likely to fall on my butt, or will it be Tony Hawk, who has all this knowledge and experience to create something genuinely new? There is some ability, knowledge, or talent that’s required to create something that’s genuinely new. In that way it’s also a source of information.

d4e: Is it possible that there’s a historical trend toward us wanting more certainty? And if so, is the value of novelty rising or falling?

Ken: I think that novelty has always been valuable. What’s happening is that because of things like the internet, there’s now a significantly greater potential for the creation and dissemination of novelty. We’re exposed to much more novelty in a short time than we used to be, because the network has created this capacity to expose people to new ideas almost instantaneously and from enormous numbers of different people. That means that it’s going to accelerate the production of novelty, and we’re all going to be exposed to more, and that’s a feedback cycle. Now that there’s more novelty around, there are more stepping stones, and so more people will create novelty.

d4e: What about machine learning and the curation of information? What about phenomena like the popularity of the Kardashians? Aren’t we suppressing novelty?

Ken: Because computers are making decisions for us about what we look at, and those decisions might cause us to not be exposed to interesting things?

d4e: Right, like the rich get richer effect. The more that machines learn our preferences, the more they are fed back to us.

Ken: I think there is that risk. We have to guard against always being given just more of what we want, what we are already comfortable with. I’m pretty optimistic about human nature and its ability to get around the tendency toward convergence. Certainly I think the algorithms will play a role in that too. Algorithms like novelty search can give us a bit of a clue about how to create computer algorithms that are not so convergent that they just always push you in some predetermined direction.
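Novelty search, the algorithm Ken mentions, can be made concrete with a short sketch. The following is a minimal, illustrative Python implementation of my own (the function names, the one-dimensional "behavior" space, and the parameter values are simplifying assumptions, not details from the book): candidates are scored by their distance to an archive of behaviors seen so far, rather than by progress toward any objective, and sufficiently novel candidates are archived as new stepping stones.

```python
import random

def novelty(behavior, archive, k=3):
    """Novelty = mean distance to the k nearest behaviors seen so far."""
    if not archive:
        return float("inf")  # everything is novel before we've seen anything
    dists = sorted(abs(behavior - b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(generations=50, pop_size=20, threshold=0.5):
    """Evolve candidates by rewarding novelty, not progress toward a goal."""
    population = [random.uniform(0, 10) for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        # Rank the population by how different each behavior is from the past.
        scored = sorted(population, key=lambda b: novelty(b, archive),
                        reverse=True)
        # Archive sufficiently novel behaviors as new stepping stones.
        archive.extend(b for b in scored if novelty(b, archive) > threshold)
        # Mutate the most novel half to form the next generation.
        parents = scored[: pop_size // 2]
        population = [p + random.gauss(0, 0.5) for p in parents for _ in (0, 1)]
    return archive
```

The key design choice is that no fitness function or target ever appears: selection pressure comes entirely from being different from the archive, which is what keeps the search from converging on a single predetermined direction.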

In general, we like to be exposed to stuff that’s unexpected. And we see that there’s been some attempt to do that in services like YouTube, for example. On the homepage they try to expose you to things you weren’t searching for. Of course they may base it on things you’ve searched for in the past, so there’s a bit of a paradox there.

It’s in the interest of anyone running a business to hook people into new things. People are trying to do that, with algorithms, but at the same time, the danger you’re identifying is real, and we should be cautious about it — because there’s a tendency to trap people in the things that they’re comfortable with, and as long as that’s making money, everybody’s happy. But that doesn’t produce the stepping stones we need for innovation.

Joel: One potential danger with some of these algorithms is that they can get very good at providing us with trivial novelties — novelties that are just some modulation of some formula. “The top 10 X, Y or Z.” It fulfills a very basic human desire for novelty, at a very trivial, unfulfilling level. Maybe over time people will become more aware that they’re being exploited by these algorithms. Like Ken, I’m optimistic about humanity’s ability to adapt to technologies. But it is worrisome that this very human desire for novelty can be undermined by clickbait.

d4e: Will there be enough competition in artificial intelligence for robots to evolve, given that some firms may dominate development?

Ken: These kinds of endeavors can become rather objective when a dominant firm has set the standard for success. It does potentially dampen the ability to try new things. Something really novel might not look as good. Someone might say, “Our way of doing things is objectively superior; these other approaches are objectively inferior, and you shouldn’t invest in them.” I think that’s a problem, and we are suffering from it right now. There is a belief that there’s a canonical approach that works really well, and that therefore other things should be relegated to obscurity. Shedding some daylight on some of these less conventional approaches would help foster diversification. Of course, people still need to be experts. We’re not saying that any idea off the street is worth millions of dollars; but if an expert has an unconventional idea that looks interesting, let’s give it a try.

d4e: Making distant associations and unlikely connections within the network is, to me, crucial to innovation. For us, these processes are often subconscious. Will AI have a subconscious?

Ken: I think that’s on the minds of people in the field. Generally, people in machine learning are concerned with what you’re describing as a subconscious process — the ability to make deep, subtle connections. That’s probably a little bit ahead of where the field is at the moment in terms of making those connections through algorithms on computers, although there’s certainly work being done in that direction. Anything that’s interesting about the human intellect is fair game for AI.

This article was originally published in design4emergence, a network science community sponsored and nurtured by Dialog and Panarchy, the world’s first network design firm.