
The holidays are here so it’s time for us to jump on the bandwagon with a friendly PSA: If you go and buy that giant inflatable snowman to keep up with the Joneses, you’re playing into positive feedback loops, and before you do that you should know that nature abhors stagnation. I don’t mean to be a shit, but positive feedback loops are destructive like sneaky, malicious hurricanes and that’s bad news for you unless you’ve got something you want destroyed.


Now, there’s a time and place for creative destruction and weirdly enough network science can help you find it. Hang on a minute and I’ll illustrate.

When I was fifteen my English lit teacher gave me this really old copy of Main Street. It wasn’t the best book I ever read but it really got down on the physiognomy of boredom. Through his tale of Carol Milford, a cosmopolitan young woman who moves to a small town with her new husband, Sinclair Lewis meticulously unpacked the pathology of sameness as a slow, painful killer of culture and community. The story went boldly into the beige, laying bare the domestic misery that it seemed to my teenage self everyone was ignoring. I was repulsed and impressed. I remember climbing onto the roof of my suburban house gasping for fresh air, wondering, “How small can people really make their worlds?”

Pretty small, it turns out.

Fourteen years later (a Saturn cycle! the astrologer says, as if everyone should purse their lips and nod their heads), and I’m sitting in couples therapy with my (now ex) husband. What’s that hanging over the counselor’s desk? Why, a familiar illustration from a limited edition first printing of Main Street.

So now I’m paying attention.

On the other side of divorce I became fascinated by the psychology of boredom. Whose fault is the unhappiness that results? Should you blame circumstances or yourself? The small town, or your own resistance to conformity (a cultural positive feedback loop)? Is there a link between boredom and intelligence? Curiosity, the flip side of that coin, does have one. Studies show that a tendency to report frequent feelings of boredom, a trait scarily prevalent among people with narcissistic personality disorder, may be a function of the quality of one’s self-awareness. Boredom tendencies run higher in individuals with lower absorption (a measure of attention span–no surprise there) and in individuals whose self-awareness skews negative, toward evaluation and judgment. No wonder narcissists, who constantly seek external means of self-validation, are notoriously whiny about their listlessness. Boredom is something we all experience at one time or another, and it may have an important evolutionary function: inciting experiences of pattern interruption. But before it does that it can make you stupid and dull. Temporarily.

In his article “The Surprising Power of an Uncomfortable Brain,” Garth Sundem, author of Beyond IQ and Your Daily Brain, illustrates with friendly snark that “a brain shocked from its easy complacency functions better than a brain kicking along on autopilot,” whereas the repetition of familiar situations can lull your brain “zombie-like into the halls of mindless consumption.” In his article, Sundem sources several cognitive research experiments showing that people whose brains encountered situations where expectations and reality were mismatched performed better on cognitive tests because their brains switched from associative to rule-based systematic processing. The English: encountering the unexpected wakes up your brain. Anything that enforces “cultural dysfluency” should do the trick. Including culture shock. So while the culture shock of moving to a new town may be temporarily invigorating, the newness eventually wears off and the sameness can be stifling, prompting a person to seek new forms of pattern interruption.

So let’s assume you’re Carol Milford of Main Street and you want to look at boredom as a function of your network:

Depending on your own biases you may think people are chaotic or predictable, but they’re not either of these things all the time. What they are is complex, meaning they’re affected by all the feedback loops that run between them and their environment. Boredom is the product of a feedback loop between your brain, your environment, and your perceptual narrative.

You should know that complex networks (personalities, relationships, markets, and even Main Street) are characterized by feedback and have three tell-tale behaviors. If Carol Milford had understood network behavior, she might’ve taken more responsibility for her own happiness from the get-go, moved somewhere more interesting and saved herself the effort of trying to transform the culture of the town. If you know these tendencies you can save yourself a lot of trouble, and if you bear with me I’ll tell you how.

Attractors – these are places where the network is moving toward some kind of equilibrium. The beginnings of order. (In our Main Street scenario, something happens and people are drawn to a certain type of behavior).

Self-reinforcement – where order begets more order. If the nodes in a network are the interconnected lives of Main Street, this is where they all keep doing the same thing because “that’s the way it’s done” and there must be some reward for doing things that way. The positive feedback loop continually validates & perpetuates itself in ways that are pretty much invisible unless you’re on the outside looking for them.

Cascades – these are shifts in direction caused by an outside intervention or an internal breakdown, as when a positive feedback loop has become so homogeneous as to be unsustainable and fragile to outside disruption. A cascade rips through it once and the network is never the same again.
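For the systems-minded, all three behaviors fit in a few lines of toy Python. Everything here is illustrative: the “network” is reduced to a single number (the fraction of people following the majority behavior), and the parameters are invented.

```python
def step(share, pull=0.3):
    # Positive feedback: whichever behavior holds the majority (the
    # attractor) recruits a fraction of the holdouts each tick.
    target = 1.0 if share >= 0.5 else 0.0
    return share + pull * (target - share)

def simulate(share=0.55, steps=30, shock_at=15):
    # Self-reinforcement drives the share toward homogeneity; at
    # `shock_at` an outside perturbation flips everyone at once, and
    # the same loop re-converges on the opposite attractor: a cascade.
    history = [share]
    for t in range(1, steps):
        share = step(share)
        if t == shock_at:
            share = 1.0 - share  # the cascade
        history.append(share)
    return history

h = simulate()  # creeps toward 1.0, snaps to near 0.0 at the shock, settles there
```

A bare majority (0.55) gets amplified until the network is nearly uniform, and the very homogeneity that the feedback loop produced is what makes one perturbation enough to flip the whole thing.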

Good? Bad? Neither inherently, because we aren’t talking about an abstraction–we’re talking about the fundamental structure and behavior of complex systems, and positive feedback loops always undo themselves. They either accept diversity and pivot toward greater resilience or they cascade and become something else.

Take heart though. Boredom and disruption go hand in hand, like everything else with its opposite (ever had a week of artistic frustration only to have a colossal breakthrough on the other side?). Periods of boredom and listlessness in human beings often spur discovery. In business, innovation clusters explode when a company breaks the lack of competition (a positive feedback loop) by doing something different that the network was ready for: disruption. Eventually you have the Big Idea, or someone else has it for you, because The Next Big Idea is always riding the cresting wave of the network.

The way the world works is fundamentally about linkages. Taiji master Ben Lo said that whenever you embody yin you also embody yang. A system always embodies the whole circle, and here is where the power is, because it allows for movement into one state to create disequilibrium, which incites a system to move and change in order to regain equilibrium. Nothing can be yin without yang. So if you think about it in terms of network dynamics, boredom is your signal to seek a new stimulus (internal or external) or it will seek you. One way or another, everything in a complex system shifts.

Again: A local network either invites diversity and changes, or it stays the same so long that it becomes fragile, unprepared to adapt to perturbations from its external environment. Then a germ comes along from across the pond and destroys an indigenous population, or an incumbent tech company doesn’t see the little guy rising up in time… or a marriage runs into trouble and doesn’t make it. Either way change comes and you get to choose a new direction.

If you aren’t designing for emergence you might get comfortable and mistake positive feedback loops for equilibrium–when what they really are is pent-up order. Emergence will happen anyway. Novelty always prevails over habit, else networks crumble and end up on the forest floor, where as cultural detritus they give new life to emergent forms. This is the way of life.

Acrobats know that you have to move constantly to find balance and stillness. Sometimes those movements are imperceptible, but they are what allow you to keep your footing.

No doubt Sinclair Lewis quelled the demons of his own small town boredom by creating a world where he could shine a light on its secret interiors. For Main Street’s Carol Milford, emergence did not produce a cultural renaissance in Gopher Prairie, as she’d hoped. A lot of people (myself included) got pissed off about that. But she was a network of one, without the force with which time and entropy eventually overcome all homogeneous networks and the small towns that personify them. Instead, emergence produced in the small networked world of her mind a new way of seeing, a new frame of mind–one that told her she’d be ok no matter what happened. This peculiar marriage of aloofness and intent is the sweet spot where a human being can find agency in a network.

Memes matter, but not so much as mutability. Designing for emergence, or as Alfred North Whitehead might have put it, seeking ordered forms of novelty and novel forms of order, produces the lucky buds of change that networks nurture into memes, which, once they spread, flower into disruption. What happened when readers of Main Street integrated what they saw there into their own worlds certainly changed some minds, an emergent process that continues in immeasurable ways to this day. (Otherwise people wouldn’t hang it over their desks as a symbol of personal transformation.)

Main Street isn’t real. It exists in your imagination and you can leave at any time. The self-organizing nature of the universe always pulls novelty from the battle between order and entropy. Boredom leads to discovery. So before you go and copy someone else’s strategy, sit with your boredom for a while and allow the network to enable emergence.

Network dynamics dictate that everything changes, and you get to choose whether to accept that or the inevitable cascade that comes to wash away the sameness. Either way, we promise it won’t be boring.

To learn how Dialog can help your business, contact us at 512.697.9425 or LetsChat@DialogGroup.com.

This article was originally published in design4emergence, a network science community sponsored and nurtured by Dialog and Panarchy, the world’s first network design firm.

“If your objective was to invent a microwave oven, you would not be working on radars.”

These days, amidst a great collective effort to reverse engineer innovation, everybody’s looking to model the success stories. Tales of disruption pepper our social media feeds, and we want the magic formula—the algorithm—for innovation.

While magic is tricky, success is even more deceptive. That’s because our measure of success, the objective, is “blind to the true stepping stones that must be crossed.” These are the words of Joel Lehman and Kenneth Stanley, the inventors of a breakthrough evolutionary algorithm for robotic neural nets, called novelty search.

What do robot brains and algorithms have to do with our current paradigm of innovation?

At the Evolutionary Complexity Research Group (EPlex) at the University of Central Florida, Lehman and Stanley programmed their AI to abandon their objectives and search for novelty, much like nature’s evolutionary “algorithm.” “Do something you’ve never done before,” they told the robots. They put them in a maze. Guess what? The robots with the novelty search algorithm got out of the maze faster than the ones armed with a plan and a list of best practices. In other words, objectives actually hindered their search. Freed from them, they stopped banging into walls and learned to walk. Are we so different?
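Under the hood, the novelty metric Lehman and Stanley describe is simple: score each candidate by its mean distance to the k nearest behaviors already seen, then favor the high scorers. A minimal sketch, with made-up maze coordinates:

```python
import math

def novelty(behavior, archive, k=3):
    # Novelty score: mean distance from a behavior to its k nearest
    # neighbors among behaviors already seen. High = unexplored territory.
    if not archive:
        return float("inf")
    dists = sorted(math.dist(behavior, b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

# A behavior might be a robot's final (x, y) position in the maze.
archive = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
candidates = [(0.5, 0.5), (5.0, 5.0)]
best = max(candidates, key=lambda b: novelty(b, archive))  # the far-off point wins
```

Notice that nothing in the score mentions the maze exit: the search is rewarded purely for ending up somewhere it hasn’t been, which is exactly why it doesn’t get stuck banging into the same wall.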

Disruption and adaptation ensure the survival of a species, a business, or any agent in a complex system. A network takes in diversity and puts out emergence (the real hero of anyone’s innovation story).

Case in point: two artificial intelligence researchers who use evolution to program artificial neural networks that “learn,” and end up writing a book about Why Greatness Cannot Be Planned. Are we approaching innovation all wrong by holding it to standards that are too rigid?

So if you want to design for emergence, the scientists in our interview say, the name of the game is to be a treasure hunter. The path isn’t always clear until it’s behind you. Go where curiosity leads you in search of novelty, whatever seems interesting, and you’ll begin to collect the right “stepping stones” for that next big thing…

d4e: Ken Stanley and Joel Lehman, two AI scientists, you wrote a book about Why Greatness Cannot Be Planned. How did that happen? (I’m guessing that wasn’t the plan.)

Ken: There are a ton of self-help books about how to pursue greatness and achieve your potential. A lot of it is speculative and philosophical. What’s unique about our perspective is that we’re offering hardcore scientific empirical research and experimentation that supports the approach that we’re advancing in the book. So people reading this book looking at these ideas can feel a certain level of confidence that they don’t normally feel about where these ideas come from: We weren’t trying to become self-help gurus; we were doing experiments in artificial intelligence. We unexpectedly stumbled on the principles we describe in this book about why greatness cannot be planned.

d4e: The Chinese finger trap is a metaphor for innovation. Why?

Joel: In the Chinese finger trap, the steps that you need to take to solve the problem are exactly the ones you wouldn’t expect would lead to the solution. It’s a model of deception in innovation, in that making a breakthrough discovery often involves taking steps that are seemingly unrelated to the objective.

Ken: It’s the simplest example of this type of innovation process which we’re claiming is very common, where what you need to do looks like it’s exactly the opposite of what you want. It turns out you need to do exactly the opposite of what you think you should. The Chinese finger trap is designed to be deceptive in that way.

You have to push yourself more into the trap to get out of it. The problems of life are far more complex than that, though, so they’re going to be even worse than a Chinese finger trap in terms of being deceptive. If they weren’t, we would just solve all of them. In order to escape the Chinese finger traps of the world, we have to sometimes be willing to step into the unknown rather than go in the direction that’s obvious or “correct.”

d4e: Great invention is defined by the realization that its prerequisites are in place. Apple spends much less than its competitors on R&D. Do you think that those two ideas are related?

We could speculate that people put a lot of effort into pursuing an objective, and that can be very expensive, because maybe the right stepping stones just haven’t been laid. So you’re going to be grinding for a long time to create all the prerequisites you need to get this thing to work. Whereas if you take an unusual approach (and I would be willing to bet that Steve Jobs wasn’t very objective-driven) where you don’t follow an objective path, you can sometimes arrive somewhere interesting and valuable with a lot less effort than someone who is following an objective. People like Steve Jobs seem to have a knack for following those types of trails and taking the kinds of risks that are necessary, and saying, “Let’s just see where this leads.”

d4e: How did an algorithm change your life? Was it a eureka moment, or a slower evolution?

Ken: This question gets to the origins of the idea behind novelty search. There was actually a particular eureka moment before this algorithm that led to the novelty search algorithm, but also later there was the gradual dawning, for both Joel and me, that the algorithm is really a way of thinking about life.

Before novelty search, there was a project called Picbreeder, a website that we put up in our research group for people to come from the internet to breed pictures, and then publish them on the site. That sounds a little strange, but basically it means that you could come in and pick your favorite picture from a set, and it would have offspring. And the picture’s “children” would be slightly different from their parents — just like if you had children, they wouldn’t be exactly the same as you, but not completely different either.
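The breed-and-mutate loop Ken describes can be sketched in a few lines. (Real Picbreeder genomes are small picture-generating networks called CPPNs; the plain list of numbers here is a stand-in for illustration.)

```python
import random

def offspring(parent, n=6, sigma=0.1, rng=None):
    # Breed-and-mutate: each child is a copy of the chosen parent with
    # small Gaussian noise added to every gene, so children resemble
    # their parent without duplicating it.
    rng = rng or random.Random(0)
    return [[g + rng.gauss(0, sigma) for g in parent] for _ in range(n)]

genome = [0.2, -0.5, 1.0]  # stand-in for a real picture-encoding network
children = offspring(genome)
```

The user picking a favorite child and breeding again is the selection step; chain enough of those choices together and an alien face can drift, one small mutation at a time, into a car.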


I had an experience playing with Picbreeder, where I started with an image that looked like an alien face. I was playing with the image, and it eventually bred into a car. This moment when the alien face turned into a car was the epiphany moment when I was struck with the realization that I had achieved something interesting without trying to achieve it. While it may sound trivial — after all, Picbreeder is just a toy — everything I’ve been taught for years in computer science said that the way you make computers do things — in fact the way we as humans generally do things — is to set your goals and somehow help the algorithm push the computer into the direction of achieving that goal. But this experience was so different than that.

I was breeding these pictures myself, but we have evolutionary algorithms that breed automatically as well, without human assistance. So I realized that this experience of achieving something without trying to achieve it probably has implications far beyond a picture breeding service. This led to the proposition that there could be an algorithm that doesn’t have a clear objective.

This is what I began to speak to Joel about before the novelty search algorithm was created.

d4e: So the idea of discovery without objectives led you and Joel to create the novelty search algorithm. You say that novelty search is paradoxical. How so?

Ken: The novelty search algorithm reflects the philosophy that sometimes you can discover things if you’re not looking for them. It gives the computer the ability to have serendipitous discovery but not necessarily be pigeonholed in the direction of trying to search for one thing and one thing only, or create one type of solution to a problem. Instead of a robot that has one type of walking gait, for example, maybe you have many.

We were playing with this for years, and it would constantly surprise us by doing things that people wouldn’t expect. You don’t tell the computer what to do, but it ends up solving your problem better than if you did. We saw this paradox over and over again. After a few years we realized that what we were seeing was about more than a computer search algorithm.

The more I spoke about the algorithm at computer conferences, the more people would ask about things unrelated to computers, such as: What does it mean for my life if sometimes the best way to find something is to be not looking for it? Does this have any broader implications for how we run innovative cultural institutions? Or how we run science?

Or how about the way we support innovation in society?

It became apparent then that it is extremely important that we have this discussion as a society. If objectives are not always the way to guide innovation and scientific progress, then why is it that almost everything we do is objective-driven? That’s when we decided to write a book, because this kind of message is hard to get out in a computer science journal article aimed only at artificial intelligence. This is a much broader issue, in terms of how we foster innovation and treat objectives in our culture.

d4e: In your book, you ask us to imagine a cavernous warehouse of all possible discoveries. You say that “the structure of the search space is just plain weird.” Can you tell us what you mean by that?

Joel: The structure of the innovation space is weird in that it’s hard to predict where certain things will be. The linkages between different kinds of innovations are surprising. That relates to the broader area of serendipity in science or artistic realms, where you might inadvertently create the next big thing. A typical example is the vacuum tube, which was created as part of fundamental research into electricity. The person who was exploring that didn’t have the idea of a computer in mind. It just turned out that from this one point in space, from discovering a vacuum tube, you actually could reach computation.

Ken: Vacuum tubes facilitate computers, and that’s a connection that exists in this big “room” of possible things. But who would ever know that? Somebody later picked up on it and said, “Now that this exists, now we can create this other thing.” There’s a lot of opportunity there for serendipity, in the sense that you wouldn’t even be working on vacuum tubes if your main interest was computation. Vacuum tubes don’t look like they have anything to do with computation. So in some way, getting all this stuff to exist requires that people sometimes are not working intentionally on the ultimate achievement that stems from this chain of events.

d4e: Order is important in search. How so?

Ken: When you first hear about novelty search, that we should search for things that are recognized for their novelty and ignore everything else, our intuition might say, “This is just random. How can that kind of search be beneficial?” I think people assume there’s some kind of coherent order that search induces. In other words, we assume that things get better as you continue to improve. That’s an order that we’ve come to expect from an objective — like if you’re trying to get better at school, your test scores will go up. We expect to start out low and get higher, and that’s the kind of order we’re comfortable with.

Whereas with novelty, it’s harder for us to think about what the order of occurrence is going to be, because we’re no longer talking about an objective metric. What we try to argue is that there is an order that’s inherent in a search for novelty — it’s just a different kind of order, one of increasing complexity.

Instead of increasing quality along some objective metric, novelty search basically creates a situation where if you continually try to do something new, you will quickly exhaust all the simple things there are to do. There are only so many simple ways to do things. By necessity, if you succeed in continually seeking novelty, things will have to become more complex over time.
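That exhaustion argument can be demonstrated with a toy enumeration: if every step must produce a binary string never produced before, and the search always reaches for the simplest options first, the lengths are forced upward as each tier of simple strings runs out. (Strings are just a stand-in for behaviors.)

```python
from itertools import product

def novelty_run(steps=20):
    # Demand strict novelty while enumerating binary strings shortest-
    # first; once a length tier is used up, the only new options left
    # are longer (more complex) strings.
    seen = set()
    lengths = []
    for n in range(1, 10):
        for bits in product("01", repeat=n):
            if len(lengths) == steps:
                return lengths
            s = "".join(bits)
            if s not in seen:  # the novelty requirement
                seen.add(s)
                lengths.append(len(s))
    return lengths

lengths = novelty_run()  # never decreases: 1, 1, 2, 2, 2, 2, 3, ... , 4
```

There are only 2 one-bit strings and 4 two-bit strings, so by step seven the run has no choice but to produce three-bit strings: complexity rises not because anyone asked for it, but because novelty left nowhere simpler to go.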


At some point, somebody invented a wheel. Thousands of years later, someone was on the moon. Things don’t go in the other order. You don’t figure out how to go to the moon and then later come up with the wheel. So there is an order in innovative processes that are driven by invention rather than by trying to achieve a specific objective metric. And that order tends to be increasing complexity. The reason I bring this up is that there’s good reason here to be confident that the search for novelty does have some kind of coherent principle, and it is anything but random. It’s just that it’s not following the order that we’re used to (of “worse to better”).

We wanted to suggest to our readers that going worse to better is actually not that principled, even if it makes you feel comfortable, because of the fact that it’s a mystery how to do it. We don’t necessarily know what the stepping stones are. So it’s really just a security blanket to say, “I’m going to keep on improving” if you don’t necessarily know how that’s going to happen.

d4e: The age of best practices is over. Would you agree with that?

Ken: There is room, despite everything we’ve said, for trying to improve. But we have to be clear about where that process is appropriate. If your aims are relatively modest, it can be entirely appropriate to just try to improve. If you just want to try to improve your lap time, that’s reasonable. But when it comes to fostering innovation on a larger scale, I’d be ok with endorsing the idea that the age is over, because we should have a revelation that simply trying to continually improve in an objective sense just doesn’t work.

There’s a great opportunity for a paradigm shift here. The amount of information we have now from artificial intelligence is starting to expose problems with the traditional view of achievement and innovation. Our book exists because we had the ability to do experiments that would have been impossible in the past. These experiments exposed a flaw in the paradigm of “innovation through continual improvement.”

Joel: And yet it seems that at the same time, the cultural crest is pushing more toward the paradigm of objectives and continual improvement. We have evidence that this isn’t how the world really works, especially in areas of innovation, discovery and creativity. It’s troubling that so many innovation endeavors are still ruled by objective-based approaches. When it comes to innovation, maybe we should loosen the reins just a bit and integrate some of the knowledge that we’re gaining in our scientific understanding of natural evolution, and how creativity works — and some of these insights come from artificial intelligence.

Ken: There should be a paradigm shift, but we wrote the book because there hasn’t been. This is a current argument about how we should approach innovation. When Joel says we run a lot of things in this very objective-driven way, that’s literally true. Look at what we’re doing in schools. The standardized testing craze is all about objective measurement, and it’s used for all kinds of things, not just for students. We basically say the school has to objectively improve on some metric, or the school gets penalized. It’s all based on objectives, and there’s a lot of discussion about whether that’s a good idea or not, but we’re not part of that debate explicitly.

Our work offers a different angle, which says that if you kept demanding higher scores, eventually everyone would get a 100. That looks like a pretty naive approach. There should be room for people to try new things — and that could lead to scores going down from time to time. If you always penalize for scores going down, then none of those things become possible.

In the world of science funding, one of the things you almost have to do to get money for research is to state your objectives. We’re running our entire federally funded scientific enterprise — really, billions of dollars — based almost entirely on objectives. You can hardly get your word in if you don’t state in the beginning what you’re trying to achieve. It’s not common sense; it’s a problem.

d4e: There’s a book called Why A Students Work for C Students. How does that relate to this philosophy?

Ken: I haven’t read that book, and I think it’s obvious that that’s not always the case — there are plenty of A students who are the bosses of C students. But that’s an interesting question. You could imagine there’s a connection there in that somebody might assume that if you get A’s that’s the correct goal for getting to the top of the heap in some organization. In reality, often it’s the case that the route to success is more circuitous. It may be that the C student was more willing to take risks that the A student just didn’t take because the A student was so single-mindedly focused on doing what everyone says you’re supposed to do in order to be successful.

d4e: Objectively speaking, unstructured play can be bad for us as individual adults, but good for us as a society. True or false?

Ken: I would say false, because I think it can be a good thing for individuals and society. Unstructured play can be risky, though. It may lead to no particular advance to the individual; on the other hand, it may lead to something great. You just can’t be sure. You may have a hobby, and pursuing that interest may just be “play” for you, but it could end up being the stepping stone to your next great achievement.

And of course I’m totally in agreement with the idea that it’s also beneficial to society, because we need people to pursue their passions and try the things that other people wouldn’t necessarily try, so that they can build the stepping stones for others to follow.

Everybody can benefit, but we have to just accept that anything unstructured has risk. That’s why we tend to be against this kind of approach to life as a policy matter: we like to control things with standards and objectives and metrics, because we’re afraid of risk, ultimately. At the same time, you have to take risks in order to have great achievements in the end.

d4e: Let’s say I run a venture capital firm. How should I go about building a portfolio of startup investments?

Ken: I think venture capitalists actually put the ideas in our book into practice in a better way than a lot of other areas in society because they understand the value of a portfolio: Not all of your bets need to pay off. Just some of them need to pay off. VCs are willing to go in some very exploratory, risky directions. If you have one big hit, it can make up for all the ones that didn’t pan out. This is, I think, a pretty good lesson for society in general. In a lot of our institutions we guard against failure as if it’s some kind of pathology to make a mistake. Venture capitalists have good instincts and are willing to have failures, and that allows them to search in a less objective way. I think we would find that the most successful venture capitalists are less objective about their portfolios.

d4e: You don’t seem to dwell much on the concept of probability. Don’t you like it?

Ken: The book isn’t really about probability, but I think we would endorse probability as an important concept. We see its importance in our field of machine learning and artificial intelligence. The point that’s being made in the book is largely independent of an in-depth discussion of probability, although it factors into risk.

Any individual discovery could be regarded as highly improbable. In innovative processes, the likelihood of making a particular discovery is unpredictable. And yet, overall, you can increase your ability to make discoveries and the probability that you’ll make some interesting discovery.

d4e: You say that novelty is information-rich. What did you mean by that?

Joel: One way to look at novelty is that it’s information based on not where you’re trying to go, but where you’ve been in the past. In some sense, it can be seen as more information-rich than taking an objective-driven approach, in that you completely know where you’ve been in the past, and so that’s more certain. When you say “this is novel,” you can have confidence that it actually is new. Whereas if you’re trying to take a step along the way to your potential objective, you have to be willing to be uncertain, because you really don’t know if that’s going to be a stepping stone toward your goal.

More than that, the idea of being genuinely different often requires some sort of conceptual advance. You can imagine, for example, being on a skateboard. Who’s going to be more likely to create a novel skateboard move? Will it be me, who’s likely to fall on my butt, or will it be Tony Hawk, who has all this knowledge and experience to create something genuinely new? There is some ability, knowledge, or talent that’s required to create something that’s genuinely new. In that way it’s also a source of information.

d4e: Is it possible that there’s a historical trend toward us wanting more certainty? And if so, is the value of novelty rising or falling?

Ken: I think that novelty has always been valuable. What’s happening is that because of things like the internet, there’s now a significantly greater potential for the creation and dissemination of novelty. We’re exposed to much more novelty in a short time than we used to be, because the network has created this capacity to expose people to new ideas almost instantaneously and from enormous numbers of different people. That means that it’s going to accelerate the production of novelty, and we’re all going to be exposed to more, and that’s a feedback cycle. Now that there’s more novelty around, there are more stepping stones, and so more people will create novelty.


d4e: What about machine learning and the curation of information? What about phenomena like the popularity of the Kardashians? Aren’t we suppressing novelty?

Ken: Because computers are making decisions for us about what we look at, and those decisions might cause us to not be exposed to interesting things?

d4e: Right, like the rich-get-richer effect: the more machines learn our preferences, the more those preferences are fed back to us.

Ken: I think there is that risk. We have to guard against always being given just more of what we want, what we are already comfortable with. I’m pretty optimistic about human nature and its ability to get around the tendency toward convergence. Certainly I think the algorithms will play a role in that too. Algorithms like novelty search can give us a bit of a clue about how to create computer algorithms that are not so convergent that they just always push you in some predetermined direction.
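One toy way to picture the less convergent algorithms Ken is gesturing at is a recommender that occasionally injects novelty instead of serving more of the same. This sketch is purely hypothetical and not from the interview; the catalog, tags, and the novelty rate `epsilon` are all invented for illustration.

```python
import random

# Hypothetical toy catalog: items tagged by topic.
TAGS = {
    "cat video": {"pets"},
    "dog video": {"pets"},
    "physics lecture": {"science"},
}

def similarity(a, b):
    """Jaccard overlap between two items' tag sets."""
    return len(TAGS[a] & TAGS[b]) / len(TAGS[a] | TAGS[b])

def recommend(history, epsilon=0.2, rng=random):
    """Usually recommend the item most similar to past views (the
    convergent, rich-get-richer step), but with probability epsilon
    deliberately pick the item *least* similar to anything seen,
    injecting novelty much as novelty search does in evolution."""
    candidates = [item for item in TAGS if item not in history]
    def closeness(item):
        return max(similarity(item, past) for past in history)
    if rng.random() < epsilon:
        return min(candidates, key=closeness)  # novelty step
    return max(candidates, key=closeness)      # more of the same

print(recommend(["cat video"], epsilon=0.0))  # "dog video": more of the same
print(recommend(["cat video"], epsilon=1.0))  # "physics lecture": something new
```

With `epsilon = 0` this is the pure feedback loop d4e describes; any `epsilon > 0` occasionally hands the user a stepping stone they would never have been shown otherwise.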


In general, we like to be exposed to stuff that’s unexpected. And we see that there’s been some attempt to do that in services like YouTube, for example. On the homepage they try to expose you to things you weren’t searching for. Of course they may base it on things you’ve searched for in the past, so there’s a bit of a paradox there.

It’s in the interest of anyone running a business to hook people into new things. People are trying to do that, with algorithms, but at the same time, the danger you’re identifying is real, and we should be cautious about it — because there’s a tendency to trap people in the things that they’re comfortable with, and as long as that’s making money, everybody’s happy. But that doesn’t produce the stepping stones we need for innovation.

Joel: One potential danger with some of these algorithms is that they can get very good at providing us with trivial novelties — novelties that are just some modulation of some formula. “The top 10 X, Y or Z.” It fulfills a very basic human desire for novelty, at a very trivial, unfulfilling level. Maybe over time people will become more aware that they’re being exploited by these algorithms. Like Ken, I’m optimistic about humanity’s ability to adapt to technologies. But it is worrisome that this very human desire for novelty can be undermined by clickbait.

d4e: Will there be enough competition in artificial intelligence for robots to evolve, given that some firms may dominate development?

Ken: These kinds of endeavors can become rather objective-driven when a dominant firm has set the standard for success. That potentially dampens the ability to try new things. Something really novel might not look as good. Someone might say, “Our way of doing things is objectively superior; these other approaches are objectively inferior, and you shouldn’t invest in them.” I think that’s a problem, and we are suffering from it right now. There’s a belief that there’s a canonical approach that works really well, and that therefore other things should be relegated to obscurity. Shedding some daylight on these less conventional approaches would help foster diversification. Of course, the people still need to be experts. We’re not saying that any idea off the street is worth millions of dollars; but if an expert has an unconventional idea that looks interesting, let’s give it a try.

d4e: Making distant associations and unlikely connections within the network is, to me, crucial to innovation. For us, these processes are often subconscious. Will AI have a subconscious?

Ken: I think that’s on the minds of people in the field. Generally, people in machine learning are concerned with what you’re describing as a subconscious process — the ability to make deep, subtle connections. That’s probably a little bit ahead of where the field is at the moment in terms of making those connections through algorithms on computers, although there’s certainly work being done in that direction. Anything that’s interesting about the human intellect is fair game for AI.

To learn how Dialog can help your business, contact us at 512.697.9425 or LetsChat@DialogGroup.com.

This article was originally published in design4emergence, a network science community sponsored and nurtured by Dialog and Panarchy, the world’s first network design firm.